Filippo Bagni
Abstract
This contribution clarifies the rights, obligations and responsibilities of lawyers in the context of the AI Act and Italian Law No. 132/2025. Shifting the focus from AI as a technology to its professional applications, it explores the required standard of diligence and the evolving nature of intellectual professional activity in the age of artificial intelligence.
The recent entry into force of Italian Law No. 132/2025 on artificial intelligence (AI) provides a timely opportunity to outline a set of reference points for understanding the role and responsibilities of lawyers in this domain.
Italian law operates within a broader European framework, the cornerstone of which is the Artificial Intelligence Act (“AI Act”, hereinafter also referred to as “the Regulation”). It is important to note that the AI Act does not regulate AI as a technology per se. Rather, it constitutes a form of product regulation. Much like the introduction of CE marking for other categories of goods, the Regulation governs AI systems and models placed on the European market. It requires compliance with harmonised requirements calibrated according to their level of risk — the so-called “risk-based approach” — and subjects them to conformity assessment procedures, as is the case for products such as toys or medicinal products.
The AI Act covers AI systems in three distinct areas: placement on the market, putting into service and use. The latter is particularly significant: once an AI system is placed on the market, its use also falls within the scope of the Regulation.
Several key principles follow from these premises. As a general rule, a lawyer who uses AI systems in the course of their professional activity is not among the most heavily regulated actors under the AI Act, as they do not qualify as ‘providers’ of such systems. Similarly, the AI tools typically employed in legal practice do not, in themselves, fall within the category of ‘high-risk’ systems listed in Annexes I and III of the Regulation.
Nevertheless, a lawyer may qualify as a ‘deployer’ — a term originally referred to in the legislative proposal as ‘professional user’ — and this entails rights and obligations within the framework of the AI Act.
Article 4 of the Regulation requires deployers to ensure an adequate level of AI literacy for themselves and those under their supervision. While this obligation may be revisited in the context of the forthcoming ‘Digital Omnibus on AI’, it already reflects the straightforward principle that a diligent professional must properly understand the tools they use.
At the same time, deployers are entitled to receive all the necessary information from providers to ensure the proper use of AI systems (Article 13 of the Regulation). The analogy with the patient information leaflet accompanying medicinal products is instructive: understanding what an AI system is designed to do, how it operates and its limitations is essential for its safe and informed use. However, if a deployer ‘substantially modifies’ a system or model, their legal status changes: they assume the role of provider and become subject to the full set of corresponding obligations.
In terms of the national legal framework, the Italian law most directly relevant to lawyers is Article 13 of Law No. 132/2025, entitled ‘Provisions concerning intellectual professions’, which introduces two specific obligations for those practising regulated professions.
Paragraph 1 stipulates that AI systems may be used only in an instrumental and supportive capacity, provided that the predominance of intellectual activity is preserved. This provision appears to reflect a degree of mistrust towards AI, or perhaps towards the professional who uses it. Substantively, however, it reiterates a principle already embedded in the Italian legal system, as reflected in Article 2222 of the Civil Code.
Paragraph 2 introduces a more specific requirement: professionals must provide clients with ‘information relating to the AI systems used’, expressed in clear, simple and comprehensive terms. The stated objective is to safeguard the relationship of trust between professional and client.
This requirement raises questions about its necessity and proportionality. It is difficult to argue that a client’s trust would be undermined solely by a lawyer using a technological tool of the trade. Clients have rarely inquired in the past whether a legal opinion or document was produced entirely by the professional or with the assistance of technological tools such as legal databases or the internet. So why introduce such a specific obligation now? The answer appears to lie, again, in a combination of mistrust of the technology itself and of those who use it.
However, what is clear is that while tools evolve, the foundational approach to legal practice should not. The current context is characterised by increased vigilance, particularly among professionals such as lawyers whose work closely concerns the protection of fundamental rights. Certain AI systems, especially generative ones, may be problematic because they can produce exactly what is requested, even if it has no basis in reality. Where identifying the ‘perfect’ judicial precedent was once a complex task, it can now be artificially generated on demand.
Yet the underlying professional error of citing a decision without verifying its relevance to the specific case remains unchanged. The tools may change, but the nature of professional diligence and responsibility does not. Well-documented cases of false judicial decisions being included in legal filings illustrate this clearly: a bad lawyer is likely to remain so, whether they use AI or not, whereas a good lawyer may enhance their performance by using it.
Therefore, the crucial question is not whether AI is used predominantly, but how it is used. In this context, Italian law — and Article 13 in particular — should be interpreted as shifting the focus from the mere use of the tool to the modalities of its deployment within professional practice.
This perspective raises a deeper conceptual question implicitly posed by Article 13: how is “intellect” understood today within intellectual professions that make use of AI? If professionals rely on AI systems designed to replicate human reasoning, does the traditional notion of intellectual performance remain intact, or does it require reconsideration?
In this regard, the selection of the AI system and the formulation of the prompt — the set of instructions that guide the system’s processing and shape its output — may constitute integral components of intellectual professional activity. These components reflect the professional’s knowledge, expertise, and accountability. Prompt design is not merely a technical or mechanical task. It requires contextualising the specific case, selecting relevant parameters, and making deliberate choices regarding the objectives pursued. A well-crafted prompt can therefore be seen as an expression of professional expertise, just like the selection of case law or the construction of a legal argument. In this sense, it is reasonable to argue that intellectual activity is embedded in the use of AI systems in 2026.
Furthermore, in international contexts and when dealing with sophisticated clients who are already familiar with these technologies, the focus is unlikely to be on formal transparency regarding AI use, as envisaged by Article 13(2). Instead, real negotiations are more likely to concern substantive issues such as which AI tools are used, who selects them, who bears ultimate responsibility, which prompts are employed and what level of human oversight is ensured.
All of these considerations are closely linked to the broader issue of AI literacy. This is not just a concern for lawyers and professional associations, but for clients too. Without genuine literacy on both sides, there is a risk of liability disclaimers becoming purely formal. These issues should be addressed upstream instead, forming part of a conscious and informed dialogue between professionals and their clients.
Filippo Bagni currently works at the European Commission’s DG Connect as a Legal Officer. The opinions expressed are solely those of the author and do not reflect or represent any official policy or position of the European Commission.
This text is an English translation of an article originally published in Italian in Il Quotidiano Giuridico, edited by Wolters Kluwer Italia and Altalex. The original version is available at the following link: https://www.altalex.com/documents/2026/03/06/intelligenza-artificiale-professioni-intellettuali-ruolo-avvocato



