

Friday, April 24, 2026

From Enthusiasm to Responsibility: What every Lithuanian organization needs to know about AI regulation

 

Atea advertisement


“Imagine: a hospital entrusts its heating system to artificial intelligence, but one day the system makes a mistake, and every patient feels it. Who would be responsible for that mistake, how can such situations be avoided, and what legal consequences would follow if damage were caused? As artificial intelligence solutions rapidly penetrate ever more areas of life and affect ever larger groups of people, these questions are becoming increasingly relevant. That is why the European Union has adopted the EU Artificial Intelligence Act. Some of its provisions have already entered into force; others will follow in the coming years. Violations of the act will carry fines running into the millions. So what does every organization need to know about this regulation, and how should it prepare?

 

Aurelija Rutkauskaitė, technology lawyer, attorney and partner at the law firm TRINITI JUREX, says the EU AI Act is unusual in that, for the first time, it attempts to regulate the technology itself, not just the consequences of its use. “Until now, the principle has been that legal acts must be technology-neutral. In the case of the AI Act, however, the EU is trying to control this important technology, because its impact on humanity is comparable to the emergence of the steam engine or the internet,” the lawyer emphasizes.

 

According to the expert, the essence of the legal act is to protect human rights and at the same time not to stop technological progress.

Prohibited practices – assessment of employees’ emotions

 

The EU AI Act enters into force in stages. The two parts of the act that have already taken effect are the most relevant for society and business. “One of them concerns employees’ AI literacy. Organizations are obliged to ensure that employees have a sufficient level of AI literacy, i.e. that they understand the technology’s possibilities and legal risks,” explains A. Rutkauskaitė.

 

Provisions on prohibited practices are also already in force: for example, AI solutions may not be used for social scoring or for assessing employees’ emotions. “Such practices are not as far-fetched as they may seem at first glance. For example, a call center uses AI solutions to assess the quality of employees’ conversations with customers. From there it is only a small step to assessing emotions from the timbre of the voice,” the lawyer says.

 

Other provisions of the AI Act will come into force later: some in August 2026, some in August 2027. Organizations will have to assess which risk level their AI activities fall under. “Risks are divided into four levels. In addition to the prohibited practices already discussed, high-, limited- and minimal-risk practices are distinguished. Obligations scale with these risk levels, from the lowest to the highest,” the technology lawyer points out.

 

A. Rutkauskaitė also points out that the AI Act applies across the entire life cycle of an AI product or solution: to developers, importers and users alike, with the exception of so-called household users, who use AI for purely personal purposes. “However, if I come to work with ChatGPT in my pocket, I cannot simply use it for work purposes. The use of AI at work must be defined and regulated,” the lawyer warns.

Importance for the public sector

 

Although the AI Act applies to all organizations, A. Rutkauskaitė emphasizes that its impact may be even greater in the public sector. “Decisions of public sector organizations affect large groups of people or more socially vulnerable individuals: from healthcare and education to the administration of public services and critical infrastructure,” she says. It is therefore the public sector that may face stricter requirements for documentation and control, as well as demands for greater human oversight, more safeguards and more clarity.

 

It should also be remembered that public sector organizations will not only have to comply with the requirements of the AI Act themselves; some of them will also be responsible for supervising its implementation. Responsibilities are shared among several institutions: the Innovation Agency helps businesses and the public sector prepare for the AI Act’s entry into force, while the Communications Regulatory Authority will supervise deployed AI solutions and their compliance.

 

“But it does not end there: the State Consumer Rights Protection Service, data protection institutions, and medical or children’s rights supervisory institutions will also be involved within their areas of competence. AI issues will touch many fields,” the lawyer explains.

 

Old responsibilities have not disappeared

 

A. Rutkauskaitė reminds us that the AI Act is not the only legal act organizations must comply with when it comes to AI solutions. “Problems may arise for organizations not so much because of the AI Act itself, but because of excessively careless use of technology when viewed through other areas of legal regulation. Data protection, intellectual property and public procurement requirements have not “disappeared” anywhere. For example, if an AI solution violates the GDPR or consumer rights, there will certainly be legal liability under the legislation in those areas,” she says.

 

Intellectual property rights may also require special attention, including the question of ownership. “For example, if a university makes an invention using AI, copyright protection does not necessarily arise automatically, because current regulation recognizes only a human being as an author,” the lawyer says of the various legal pitfalls.

 

Problems can also arise when the evaluation of public procurement bids is entrusted to an algorithm, which can likewise make mistakes. Any resulting dispute would then be heard under public procurement law, not the AI Act.

 

The lawyer also gives a real, high-profile example from Lithuanian practice, in which a lawyer prepared a cassation appeal using an AI program, and it cited non-existent court cases. “This and other examples show how important it is to keep a person in the decision-making chain; there is even an English term for it, ‘human in the loop’. AI cannot be used without critical assessment,” she warns.

 

Ginta Kirkutė, Head of Process Automation at Atea, agrees. “Only when organizations start working with large language models do they see their limits. As long as organizations only read the headlines, AI seems infallible. In practice, however, it turns out that it can lie with complete confidence. This creates a more realistic expectation: AI quickly generates drafts, but fact-checking remains a human responsibility,” the Atea expert notes.

Has anyone read the Copilot terms of use?

 

A. Rutkauskaitė says that one of the most common mistakes organizations make when implementing AI solutions is excessive trust in the technology and too little attention to its terms of use. “A simple example: employees start using Copilot or another AI tool, but does anyone read the terms of use? Does anyone even ask which checkbox needs to be ticked so that the system does not learn from your information?” the lawyer asks.

 

According to her, employees often think they are talking to a “black hole” and do not realize that they are actually transferring information to a specific technology company. “If we upload confidential documents or personal data to an AI system, we lose control over them. And the responsibility lies not with the developer of the technology, but with the organization that used it,” she says.

 

Another common mistake is insufficient attention to data quality. If an AI system’s training data is gathered indiscriminately, is “dirty” or was obtained illegally, the technology can produce incorrect decisions or cause legal problems. Documentation is no less important. “If an organization develops or implements an AI solution, it must be clear how it was developed and what data it was trained on,” emphasizes A. Rutkauskaitė.

 

The Atea expert notes that organizations’ expectations of AI are often much higher than the technology’s actual capabilities. “We are currently seeing a kind of market maturity stage. The initial euphoria and hype are giving way to a practical understanding of what AI can actually do and where it still gets stuck,” says G. Kirkutė.

 

According to her, many organizations initially expected AI to become a kind of “magic button” that would solve business problems on its own; in reality, AI is the co-pilot, not the pilot. The quality of the results depends largely on the precision of the query, and human quality control remains necessary.

Sharing responsibility between the AI supplier and the user

 

A. Rutkauskaitė also draws attention to how responsibility is shared between supplier and user. “A supplier who develops or distributes an AI solution must provide very clear usage guidelines: how the technology may be used and in which cases it is appropriate. If the organization uses the solution as intended, responsibility remains with the supplier. But if the technology is applied for a completely different, perhaps even malicious, purpose, the user is responsible for the consequences,” she explains.

 

Therefore, AI projects will inevitably mean more documents, contracts and issues of responsibility sharing. “Organizations will have to not only implement the technology, but also assess its supplier, security, and compatibility with data protection requirements. Often, technology is purchased without even checking whether the supplier is reliable, whether the system is secure, or whether it can be used for a specific purpose at all,” warns A. Rutkauskaitė.

 

Tips on how to prepare for the AI Act

 

When asked where to start in preparing for the AI Act, A. Rutkauskaitė first emphasizes education. “I would start with AI literacy training. It is very important that employees understand how this technology works, what risks it poses and what responsibilities arise when using it,” she says.

 

The next step, according to the expert, is to clearly define the organization’s role. “You need to answer the question: will we only use AI solutions, or will we also develop them ourselves? This determines duties and responsibilities. Then you need to set internal guidelines: how we use the technology, what data may be uploaded, and who supervises the processes,” she explains. It is also very important to follow the recommendations of the responsible state institutions and to consult them: they promise to actively help organizations prepare.

 

The internal responsibility structure also matters. “With AI, a dedicated officer will not necessarily have to be appointed, as with the GDPR, but someone in the organization will have to take ownership of the topic: an IT manager, a security specialist or a compliance team. Organizations often have curious people interested in technology; it may be worth involving them and giving them responsibility,” notes A. Rutkauskaitė.

 

The expert emphasizes that there will be no absolute security in the field of AI, but organizations can become much safer. This requires continuous learning, internal rules and a responsible approach to data and suppliers.

 

“Let’s not have the illusion that the AI Act will be a one-time task list where, once you have done your homework, you only have to tick things off. AI regulation will work in cycles: you will need to constantly assess risks, follow the institutions’ recommendations, and check whether your solutions still meet the requirements,” says A. Rutkauskaitė.

 

The most important thing is to understand that technology does not replace human decisions. “The human ability to think is like a map, and artificial intelligence is just a car traveling according to that map,” summarizes G. Kirkutė.

 

For even more expert insights: Atea is organizing the “Atea Public IT” conference for public sector leaders for the sixth time. This year, the event will focus on the strategic role of technology in organizations: from effective use to measurable value and impact on results.”

 

More information can be found here: https://www.atea.lt/renginiai/2026/atea-public-it-kaip-technologijos-tampa-efektyvumo-varikliu/

 

