The EU's AI Act will have a significant impact on both individuals and industries using AI. Internal guidelines covering both the ethical and legal aspects of AI are needed, says Wenche Karlstad.
What kind of requirements should be placed on AI-generated content? Will tighter restrictions slow down the rapid pace of innovation? Where is the line between legislation, guidance and ethical judgement?
The EU regulation on artificial intelligence (the AI Act) was approved by the EU Parliament in mid-June. This means that within 1-2 years, the EU will have a law in place that will not only apply in EU countries, but will also set the standard for much of the world.
- I believe that the AI law will spread much faster than the EU's General Data Protection Regulation (GDPR). But they will coexist closely. Data protection will become even more central in the future.
This is according to Wenche Karlstad, head of digital sovereignty initiatives at Tietoevry. She has followed the legislative process closely over time.
Although there is still a round of negotiations with the European Commission and the Council of Ministers before the law is finalized, the draft says a lot about the future framework for the use of AI systems.
- This is the world's first AI regulation, and anyone developing or introducing AI applications in Europe will have to comply with it, Karlstad points out.
Professor Marija Slavkovik, head of the Department of Information and Media Science at the University of Bergen, also believes the AI Act will have a major impact.
- Even though many people have objections, we need to regulate the use of artificial intelligence. What we are seeing now is the beginning of a new era of automation.
Karlstad believes that we are only beginning to see the outlines of what AI can do for us. The list of areas where AI will impact everyday life and work is endless.
- Ultimately, AI will be able to add value to society as a whole, says Karlstad.
The forthcoming EU regulation classifies AI applications according to their level of risk. The scale has four levels: unregulated, limited risk, high risk and unacceptable risk.
- Real-time social monitoring, for example, will be completely banned. The high-risk category entails a number of requirements, including registration, risk assessment and labelling.
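The tier system described above amounts to a simple lookup from risk level to obligations. A minimal, illustrative Python sketch follows; the tier names and duty lists are assumptions chosen for illustration, not quotations from the regulation:

```python
# Hypothetical sketch: the four risk tiers described in the article,
# mapped to example obligations. Duty lists are illustrative only,
# not taken from the legal text of the AI Act.
RISK_TIERS = {
    "unregulated": [],
    "limited": ["transparency labelling"],
    "high": ["registration", "risk assessment", "labelling"],
    "unacceptable": ["prohibited"],
}

def obligations(tier: str) -> list[str]:
    """Return the example obligations for a given risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown tier: {tier}")
    return RISK_TIERS[tier]
```

For instance, `obligations("high")` returns the registration, risk-assessment and labelling duties the article mentions, while `obligations("unacceptable")` signals an outright ban.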
- Anyone who is affected should prepare now for the upcoming legislation and what it means. Those who fail to do so may face sanctions, says Karlstad.
She emphasizes that the EU regulation is not only aimed at restricting AI. It also aims to encourage innovation.
- The EU wants to strengthen Europe's position in an area where the US and China dominate. In this respect, the AI law is a deliberate move by European Commission President Ursula von der Leyen.
Wenche Karlstad believes that there is a need for internal guidelines that address both the ethical and legal aspects of AI.
Establishing clear guidelines will create a more predictable environment.
For the EU, the desire to strengthen Europe's digital sovereignty is at the heart of the matter. In a broader perspective, it is about technological and economic growth as well as about safeguarding citizens' fundamental rights and values.
Credibility is a key word when it comes to using AI. You need to know where the data is and how it is secured. But as a consumer you should also, as NTNU researcher and author of "Machines that think" Inga Strümke points out, gain insight into how the technology works.
What is "real" and what is AI-generated? What paths do data take in an increasingly intertwined value chain? When are our fundamental rights at risk?
- It's equally exciting when we move beyond the realm of regulation to ethical principles. Risk assessments and technical documentation are most important in the high-risk category, but everyone who uses AI has a responsibility, says Karlstad.
This responsibility may involve ethical dilemmas.
- For example, how do you balance the desire to play with open cards against the need to protect business-critical information? Other dilemmas include unwanted discrimination in hiring processes and privacy concerns when handling sensitive information.
Those who can explain their judgements and clarify accountability in the development and use of AI will have an advantage.
Wenche is passionate about creating value for our customers and enabling growth with attractive service offerings. She has nearly twenty years of experience in the IT business in various management and advisory roles, bringing new services to market.
In her current role as Head of Strategic Differentiation Programs at Tietoevry Tech Services, she is leading a global team of experts and managers.