AI Risks: The Legal Landscape in Italy and the Netherlands
Talking about artificial intelligence means talking about algorithms. An “algorithm” is an “explicit computational procedure describable by a finite number of rules that leads to the result after a finite number of operations”.
The European Union is now close to adopting a specific regulation on the use of artificial intelligence, known as the “AI Act”. Although this regulation overlaps in several respects with the General Data Protection Regulation (Reg. EU/2016/679), the AI Act is meant to define a more specific and systematic legal framework, focusing in particular on the management and control of the risks deriving from the use of artificial intelligence.
One of the main risks produced by artificial intelligence and algorithms is the perpetuation of discrimination based on social biases “learnt” during training. Moreover, “generative” AI may infringe copyright or spread misinformation.
Wisely, the Netherlands’ Data Protection Authority (Autoriteit Persoonsgegevens, AP), in its “AI & Algorithmic Risk Report Netherlands” (winter 2023-2024), directs the attention of its Government to the adoption of a national master plan on AI, in order to strengthen the future enforcement of the AI Act and the prevention of artificial intelligence risks.
The Report suggests five main goals.
First of all, human control should be intensified to guarantee the safe use of algorithms.
To this end, the education of young, working-age and elderly people should be strengthened, and AI systems should provide an adequate level of transparency. In parallel, access to complaints offices should be guaranteed.
Interestingly, the Authority identifies society-wide “trust in algorithms and AI” as an indicator of the extent to which this goal is achieved.
As a second goal, the master plan should ensure that the applications and systems available on the market are secure, especially those that impact human rights and public values. Indicators of security are, on the one hand, an increase in the number of registered applications and systems and, on the other, a decrease in reported incidents. To encourage enterprises to guarantee high-quality products, the plan should provide public or public-private support and invest in sufficient supervisory capacity.
Another goal directly concerns producers and providers. Organizations should be fully in control, at all stages, of the use of algorithms and the consequences of their application. Thus, particular attention should be paid to the quality and quantity of the impact assessments and evaluations performed, and recurring audits should show organizational improvement. To guide and support private companies in complying, soft-law instruments could offer the right compromise between the need for a clear framework of reference and technical flexibility.
Artificial intelligence and algorithms should also be promoted as an opportunity to advance public interests such as welfare, well-being and stability.
Lastly, the Dutch Government should never treat AI and algorithmic risk as a purely national topic. The interconnections arising from the AI production and development chain may involve different legal systems, not to mention their wide impact on the global economy.
Thus, to address the global interconnectedness of AI systems, the adoption of global standard-setting and supervision should be promoted.
As the Dutch report is mainly based on general principles, it could serve as a reference for other States in elaborating a strategy to prevent and manage algorithmic risks.
In Italy, a similar program has been proposed in the “Strategic Program of Artificial Intelligence” for the years 2022-2024, which includes policies covering three areas: education and training, research, and applications, both for private enterprises and the public administration.
However, a specific set of rules concerning “algorithmic legality” is still absent from Italian legislation. Pending the adoption of the AI Act and the related guidelines, the GDPR and the opinions of the Data Protection Authority remain the principal sources of specific regulation.
In fact, the GDPR has introduced an essential set of minimum guarantees applicable to algorithmic processing, insofar as it uses personal data.
For example, art. 22 states that “[t]he data subject shall have the right not to be subject to a decision based solely on automated processing”. Data subjects also have the right to obtain information about “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” (art. 13, co. 2, lett. f).
Based on this legal framework, the Consiglio di Stato, with decision no. 2270/2019, was able to resolve a dispute concerning the fair use of an algorithm in an administrative procedure.
The winners of a public competition alleged that the algorithm by which the Public Administration had decided their placement across the national territory was invalid, as it was completely unclear why the results departed so far from the general criteria governing the public employment sector (in the case at hand, employees were assigned very far from home or to offices they had never chosen).
The Consiglio di Stato elaborated two fundamental principles for the fair use of algorithms in administrative procedures. First, since an algorithm can be considered a way of applying a norm, the public power must respect the principle of transparency, in its specific meaning of “knowability”, which “recognizes the citizen’s right to be fully aware of the existence of any automated decision-making and to know the information and instructions relating to the operation of the algorithm, the modules and criteria applied, as well as to access the source code of programming itself”.
Second, an algorithm-based decision is allowed only when the law leaves no room for discretionary power, that is, when the procedure is essentially “serial and standardized”.
In this case, “human control” is still guaranteed, because the “algorithmic rule” is predetermined by a human being. In addition, the public office remains responsible for its correct functioning and oversight.
This decision can be considered a milestone in the development of legal knowledge about the use of algorithms in AI systems and the management of related risks, as it expresses principles that can also be applied to the private sector.
However, while principles may be useful for interpreters such as judges and lawyers, all operators urgently need a clear framework of technical rules to adequately guide them in their activities. Only in this practical way can algorithmic risks be effectively controlled.