
EU AI Act: Risks and Regulation of GPAI Systems

News | February 17, 2024

As artificial intelligence systems have become an integrated and essential part of many human activities, doubts are arising about the fair use of these innovative technologies. In fact, the automaticity and the significant lack of human contribution in the generation of content may increase the risk of unfair use of personal data, copyright violations or discrimination based on cultural bias automatically learnt by the AI.

In such a scenario, the European Union has confirmed its typical governance method: regulation and enforcement.

This way of governing new phenomena – generally known as the “Brussels effect” – now extends to artificial intelligence through the “AI Act”, with which the EU aims to become the first and most advanced regulator on the transnational horizon.

The AI Act – which, at the time of writing, is still a proposal for a regulation, although close to final adoption – follows a “risk-based approach”: different types of risk are identified and then governed by different, and therefore more adequate, rules, so as to take into account the specific characteristics of each kind of system and provide more efficient solutions.

The AI Act applies this method by identifying three main levels of risk: unacceptable, high and minimal risk.

The regulation bans AI systems which produce an “unacceptable risk”, as they have a great capability of violating fundamental human rights, either because they operate outside people’s awareness (subliminal techniques) or because they can damage the psycho-physical sphere of vulnerable groups of people (e.g. social scoring, which could exclude large groups of people based on the repetition of cultural bias reproduced by the AI tool).

Particularly relevant for the GPAI topic is the category of risk known as “high risk”.

In this case, although these systems carry an unavoidable level of danger for human rights, the sacrifice that would derive from their prohibition is considered unacceptable in comparison with their actual benefits. These systems are therefore allowed, but they must comply with the strict rules set out in the AI Act.

However, as progress is rapidly reshaping the artificial intelligence landscape – to the point that the legal definition of “AI system” itself risks losing its capability to clearly describe a specific technological phenomenon – some tools may end up not fitting correctly under the formal legal classifications, creating a possible “loophole” in the regulatory framework.

General-purpose AI models and “systemic risk”

This is what is happening with one of the most common types of artificial intelligence tool: general-purpose AI models (GPAI models). These are defined as an “AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications” (art. 3, par. 1, point 44b of the AI Act).

Consequently, a “general-purpose AI system” is an AI system based on a general-purpose AI model which has the capability to serve a variety of purposes, both for direct use and for integration in other AI systems (art. 3, par. 1, point 44e).

In other words, GPAI models and systems are characterized by the absence of a specific purpose of use: they can not only be used for different purposes, but also be integrated into other, more specific systems as pretrained components. Examples of GPAI systems are GPT-4 by OpenAI or Gemini by Google.

Within the AI Act’s risk-analysis paradigm, GPAI models stand in an ambiguous position, as they do not fit neatly into the threefold classification of the AI Act. In fact, depending on their applications and, especially, their interaction with other AI systems, they may spread their negative effects through the whole chain, enlarging the scale of risk at each level of the chain itself.

This type of risk is known as “systemic risk” and refers to GPAI models with high-impact capabilities, i.e. GPAI models that can have a “significant impact” on the internal market, as they may generate negative effects on public health, safety, public security and fundamental rights, affecting the economy and society as a whole.

In order to guarantee legal certainty both in interpretation and in enforcement, the trilogue between the Commission, the European Parliament and the Council, which has taken place over the last two years, has pointed out the necessity of defining a specific legal framework also for GPAI models with systemic risk.

In light of this, under the AI Act “high-impact capabilities” means “capabilities that match or exceed the capabilities recorded in the most advanced general purpose AI models”.

More specifically, high-impact capability is presumed to be reached when the model has been trained with a cumulative amount of computation of at least 10^25 floating-point operations (FLOPs).

This computational threshold – which shall be regularly updated in line with technological developments – creates a legal presumption: a model that is much more powerful than the others is assumed to be much more dangerous as well.
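To give a rough, non-authoritative sense of the orders of magnitude involved, the sketch below applies the common “6 × parameters × training tokens” rule of thumb for estimating training compute and compares the result with the 10^25 FLOP threshold. The heuristic, the function names and the example figures are illustrative assumptions and are not part of the AI Act or of any official guidance.

```python
# Illustrative sketch (not prescribed by the AI Act): estimating whether a
# model's training compute crosses the 10^25 FLOP threshold that triggers the
# presumption of "high-impact capabilities". The 6 * parameters * tokens
# rule of thumb is a common community heuristic, assumed here for simplicity.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the compromise text


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical model sizes and dataset sizes, for illustration only.
    examples = {
        "7B-parameter model, 2T tokens": (7e9, 2e12),    # ~8.4e22 FLOPs -> below threshold
        "1T-parameter model, 10T tokens": (1e12, 10e12), # ~6.0e25 FLOPs -> above threshold
    }
    for name, (params, tokens) in examples.items():
        flops = estimated_training_flops(params, tokens)
        print(f"{name}: ~{flops:.1e} FLOPs, "
              f"systemic-risk presumption: {presumed_systemic_risk(params, tokens)}")
```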

In short, the systemic risk related to high-impact-capability GPAI differs from the other classes of risk defined by the AI Act in that it is more dynamic: since GPAI models and systems can be integrated into other AI systems, they are able to affect the whole application chain in an unpredictable way, especially considering how fast these technologies evolve.

In light of these discrepancies between “systemic risk” and, in particular, “high risk”, the European Parliament and the Council pointed out the necessity of clarifying how the AI Act will apply to GPAI systems, as well as the opportunity to establish specific rules and common codes of practice for these technologies, which were not included in the original draft of the AI Act.

The proposed regulation of GPAI in the AI Act: obligations for GPAI providers

Recently, the trilogue has led to another provisional agreement, which now specifically includes a set of norms on the GPAI phenomenon (see the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Analysis of the final compromise text with a view to agreement”, 26 January 2024).

a) Information and cooperation

The latest proposal dedicates Title VIIIA specifically to “General purpose AI models”, distinguishing between “GPAI models” and “GPAI models with systemic risk”.

In the first case, art. 52c (“Obligations for providers of general purpose AI models”) requires providers to “draw up and keep up-to-date the technical documentation of the model, including its training and testing process and the results of its evaluation”, which may have to be provided to the AI Office or to national authorities during their controls (par. 1, lett. a).

In addition, in order to guarantee the quality of the automatically generated content, providers also have to publish a “sufficiently detailed summary about the content used for training” the models (lett. d).

One of the main challenges of artificial intelligence is the interconnection between the final product and the AI tools underlying it (for example, because a pretrained system is used to elaborate more complex results). Consequently, multiple actors are involved in the production chain.

The necessity of specifying obligations for GPAI providers responds to the general principle of liability and to the legal criterion of “vicinitas” (proximity): in other words, from both a normative and an enforcement perspective, it appears neither fair nor efficient to place on the sole provider of a final AI product that integrates a GPAI system the whole burden of guaranteeing the correct functioning of the GPAI so integrated.

With specific regard to the case of a GPAI integrated into another, more specific AI system, it is quite clear that the party most capable of guaranteeing the quality and security of a product is its direct producer: in this case, the upstream GPAI provider.

Since all providers along the chain must guarantee the quality and security of their products, they should know how the AI component they use works and the risks connected to it.

For this reason, the latest AI Act proposal requires a higher level of collaboration between providers. Where a GPAI model is integrated into an AI system, art. 52c, par. 1, lett. b) of the proposal states that GPAI providers have to share information on the model’s functioning so as to “enable providers of AI systems to have a good understanding of the capabilities and limitations of the general purpose AI model and to comply with their obligations pursuant to this Regulation”.

It is important to underline that, while art. 52c, par. 1, lett. b) gives AI system providers the right of access to information on the functioning of a GPAI model, at the same time they also have the duty to protect the GPAI provider’s intellectual property rights and confidential business information or trade secrets, in accordance with EU and national IP law.

b) Codes of practice

The highly technical nature of GPAI applications and the speed at which the relevant knowledge evolves require a more specific set of rules, mainly oriented to the operational and practical aspects of compliance.

Codes of practice therefore become necessary: under art. 52c, par. 3, compliance with a European harmonised standard grants providers the presumption of conformity.

The same paragraph states that, “until a harmonised standard is published”, GPAI models can be considered compliant with the obligations of par. 1 if they conform to codes of practice as defined by art. 52c. However, such codes of practice have not been adopted yet, increasing the uncertainty not only around the effectiveness of the GPAI rules, but also around the predictability of compliance liability.

c) Obligations for GPAI models with systemic risk

Additional obligations apply to GPAI models with systemic risk. Art. 52d, par. 1 requires providers to:

  • perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identify and mitigate systemic risk;
  • assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of general purpose AI models with systemic risk;
  • keep track of, document and report without undue delay to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them;
  • ensure an adequate level of cybersecurity protection for the general purpose AI model with systemic risk and the physical infrastructure of the model.

Moreover, art. 28 (“Responsibilities along the AI value chain”) extends the high-risk system provider’s obligations under art. 16 to any distributor, importer, deployer or other third party when “they modify the intended purpose of an AI system, including a general purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such manner that the AI system becomes a high risk AI system in accordance with Article 6”.

In short, extending liability to each professional involved in the production cycle aims to guarantee that the interconnection between different artificial intelligence models used as tools within a certain final product does not negatively affect the effective level of risk of the product itself, with a potential evasion of EU legal protection.

This could happen especially when GPAI systems are integrated into more “specific-purpose” AI systems, as the output of the GPAI may not always be predictable and may thus violate fundamental rights.

As with GPAI models in general, GPAI models with systemic risk must also comply with codes of practice. However, providers can choose not to adhere to an approved code of practice, on condition that they demonstrate an alternative adequate means of compliance and submit it to the Commission for approval (par. 2).

Conclusion

The more a technology shows its dangerous potential, the more clear regulation is needed.

However, the weakness of such a method is its difficulty in keeping up with rapid scientific development, which constantly challenges the ability of legal definitions and rules to describe and govern a phenomenon in continuous evolution.

Artificial intelligence is a brave new field, still unknown in its true potential.

It can certainly be said that providing a clear legal framework for this new technology and its different forms is a strategic choice, as it responds to people’s need for more predictability in the application of the law, which in turn promotes investment and economic development.

On the other hand, the legislative procedure may take too long to give appropriate answers to citizens, ultimately proving ineffective.

All this considered, greater attention should be paid to codes of practice and implementing guidelines, aimed at providing a common technical framework that ensures an equal application of liability and compliance rules while evolving smoothly with the progress of this emerging technology.
