This document should be read in conjunction with the DDPA’s multi-year supervisory agenda for 2020-2023, in which AI was identified as one of the DDPA’s focus areas. We note that this vision document also reflects the contents of the European Data Protection Board’s letter on unfair algorithms (adopted on 7 February 2020), which was sent to Mrs. S. In ‘t Veld (Member of the European Parliament) in response to her letter of 31 July 2019. In that letter, she posed several questions regarding the appropriateness of the GDPR as a legal framework to protect citizens from unfair algorithms and requested guidance from the data protection authorities.
The DDPA identifies three main risks that must be taken into consideration when personal data is processed by AI or algorithms. The first is the risk of discrimination and unfair treatment, in particular due to limitations of, and bias in, the datasets used. The second is the predisposition of parties that use AI to collect as much data as possible for AI-training purposes rather than observing the principle of data minimization. The third is the black-box phenomenon: parties using AI may no longer be able to explain and understand the internal logic of the AI in relation to the processing of personal data.
Like any other processing activity that falls within the scope of the GDPR, the processing of personal data through AI or algorithms must comply with the data protection principles, such as fairness, transparency and data minimization. In particular, the accountability principle must be observed. In relation to AI and algorithmic processing of personal data, the DDPA considers the requirement of a legal basis, the proactive provision of information about automated decision-making and, in some cases, the provision of meaningful information about the logic involved, to be important elements of the accountable use of AI and algorithms. Parties must also take into account their obligations to maintain a register of processing activities, to adhere to the privacy by design principle and to perform adequate data protection impact assessments, and possibly prior consultations with the DDPA, before the implementation and/or commencement of AI or algorithmic processing of personal data. In short: nothing new here.
To ensure compliance on this topic in practice, the DDPA will proactively provide training and information to data subjects whose personal data is processed by AI and algorithms. In addition, information will be made available to parties that use, or envisage using, such techniques. Further, the DDPA will focus in particular on providing education for data protection officers, since it also states that companies using AI and algorithms will probably be required to appoint a DPO. This is noteworthy, because the DDPA does not further substantiate this statement, even though, in our view, such a requirement does not follow directly from the GDPR or from the Guidelines on DPOs as adopted by the European Data Protection Board.
The vision document stresses that supervision and enforcement by the DDPA may relate to the situation prior to, during and after the processing of personal data. Inspections to monitor compliance may be initiated by the DDPA itself, but may also be initiated pursuant to a complaint or tip. In all investigations, however, the DDPA will take into account the fact that the workings and consequences of AI are difficult for citizens to understand and assess. Finally, the DDPA will continue to cooperate with other data protection authorities within the EU, as well as other national supervisory authorities where relevant, to ensure consistent and coherent supervision and monitoring of the use of AI and algorithms.
In essence, in its vision document the DDPA confirms that the GDPR applies to the processing of personal data by AI and algorithms, and that such processing should not be treated differently from other processing activities. It remains to be seen what this will mean in practice; we will therefore continue to follow these developments. For more information on privacy in relation to AI or algorithms, or on data protection in general, please contact Nina Orlić or Kim Lucassen from Loyens & Loeff’s Data Protection & Privacy Team.