POLICY BRIEF  

15 May 2024


Stefano Tramacere

Transparency in the AI Act

BACKGROUND AND FIELD OF APPLICATION

Transparency is a key principle and an overarching obligation across EU legislation, in particular within the Digital Strategy; it is also an important legal requirement under national laws in various sectors, where it serves to reduce information asymmetries between parties.

This problem is evident, for example, in the healthcare sector, where transparency refers to access to information in a comprehensible form, empowering patients and healthcare professionals and enabling them to give informed consent.

The new AI Act provides specific measures to ensure transparency in the design, development and deployment phases for (1) high-risk AI systems, regulated by Chapter III; (2) low- and minimal-risk systems, governed by Chapter IV; and (3) General-Purpose AI models, regulated by Chapter V.

HIGHLIGHTS

Transparency of AI is an overarching legal principle that can be implemented through measures such as information provision, record keeping and documentation, auditability, traceability, explainability, and interpretability.

The AI Act sets out three different categories of transparency for different types of AI systems:

Interpretability for high-risk AI systems (Chapter III, Section 2, in particular Articles 13 and 14, together with the right to explanation in Article 86);

Information and communication requirements when interacting with low- and minimal-risk systems (Chapter IV, Article 50); and

Technical documentation of the model (including its training and testing process and the results of its evaluation) and information to enable providers of AI systems to have a good understanding of the capabilities and limitations of General-Purpose AI models (Chapter V, Section 2, Article 53).

Regarding the first category, the first paragraph of Article 13 states that “High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately”. Here, the concept of transparency and the related concept of interpretability of the final output align with the techniques of explainable AI (XAI): if the models used are opaque, such as deep learning (DL) models, XAI methods are necessary to explain the output and enable the deployer (e.g., the doctor) to interpret and use it properly.

In this regard, paragraph 3 states that the instructions for use shall contain “(iiia) the technical capabilities and characteristics of the AI system to provide information that is relevant to explain its output”. In addition, to ensure human oversight of the system as provided for in Article 14, it is necessary (among other things) to “put in place technical measures to facilitate the interpretation of the outputs of the AI system by the deployers” (Article 13(3)(d)).

The second part of the first paragraph of Article 13 states that “An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations”. This implies that transparency is instrumental in ensuring an effective compliance system, and indeed the same Section provides for specific measures that contribute to respecting the legal principle of transparency: proper data governance in the training, validation and testing phases (Article 10); an adequate technical documentation system (Article 11, which refers to Annex IV); and a record-keeping system to ensure the traceability of the system for the purposes for which it was developed (Article 12).

In addition to the ex ante protective measures listed above, intended to ensure transparency and an overall accountability system that minimizes the risks to health and safety and safeguards fundamental rights, the AI Act provides an ex post remedy, heavily inspired by Article 22 of the General Data Protection Regulation (Regulation (EU) 2016/679, GDPR). Indeed, the regulation states that the affected person shall have the right to obtain from the deployer clear and meaningful explanations (1) of the role of the AI system in the decision-making procedure and (2) of the main elements of the decision taken (Recital 171, Article 86).

In this respect, too, XAI techniques can play an important role where opaque models are used.
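By way of illustration only (the AI Act does not prescribe any particular XAI technique, and the following is a hypothetical sketch rather than part of the regulation), a model-agnostic method such as permutation feature importance can give a deployer a first, global view of which inputs drive an opaque model’s output. A minimal Python sketch using scikit-learn on synthetic, hypothetical data:

    # Minimal illustrative sketch: permutation feature importance as one
    # possible XAI technique. Data, model, and feature names are hypothetical.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for tabular clinical data with five input features.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble model standing in for the deployed system.
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

    # Estimate each feature's contribution to predictive performance,
    # giving the deployer a first, global view of the model's behaviour.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature_{i}: mean importance {result.importances_mean[i]:.3f}")

Such global scores are only a first step; local, per-decision explanation methods could complement them where an individual explanation under Article 86 is required.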

Regarding the second category, which includes low- and minimal-risk systems (Chapter IV, Article 50), the AI Act states that providers shall ensure that, where an AI system interacts with a natural person, that person is informed that they are interacting with an AI system, unless this is obvious by virtue of the specific context of use.

Regarding the third category, which concerns General-Purpose AI models, the AI Act establishes in Article 53(1)(a) and (1)(b)(i) that providers shall draw up and keep up to date the technical documentation of the model, including its training and testing process and the results of its evaluation, and shall make information and documentation available to providers who intend to integrate the General-Purpose AI model into their AI systems, so as to facilitate a good understanding of the capabilities and limitations of the General-Purpose AI model.

CASE LAW

One of the most recent and important cases concerning algorithmic transparency, which creates a direct connection between the AI Act and the data protection Regulation, is the SCHUFA case of the CJEU (C-634/21) regarding Article 22 GDPR. In paragraphs 44-45 of the judgment, the Court states that “the concept of ‘decision’ (…) refers not only to acts which produce legal effects concerning the person at issue but also to acts which similarly significantly affect him or her”, thus giving a broad interpretation of the scope of “decision”, in line with Recital 71 GDPR.

In this regard, the judgment is closely connected with the AI Act’s ex post, individual-empowering remedy: the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and of the main elements of the decision taken.

Indeed, under the GDPR, Article 13(2)(f) states that data controllers have to inform data subjects of the “existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. Therefore, if the concept of a “decision” is interpreted as broadly as the CJEU did in SCHUFA, Article 86 will have a wider scope and will complement the GDPR’s obligations on automated decision-making.

By virtue of this, transparency and human oversight obligations will have a more concrete scope, ensuring that deployers are better able to understand and oversee the proper functioning of the decision-making process, given the legal effects, or similarly significant effects, that the decision will have on the person.

IMPACT ON PROJECT

The AI Act will have a major impact on all projects, especially those using high-risk AI systems. For example, an AI system used for medical research purposes within the BRIEF infrastructure that may be commercialized or put into service on the European market at a later stage will have to comply with all the requirements of Chapter III, Section 2, including the transparency rules. In addition, General-Purpose AI models used in a context that is high-risk for health, safety and fundamental rights will likely have to comply with the same rules as high-risk systems, since the technical documentation of the model laid down in Article 53 alone may not be sufficient to guarantee adequate protection. Therefore, to ensure AI transparency under the new regulatory framework, an AI decision-support system used for diagnostic purposes in the healthcare sector, for instance, would have to meet the requirements for high-risk AI systems before being placed on the market or put into service.