
AI is stumbling over its own regulation in Europe

Download the report

09 July 2021

The EU is in the process of creating an extensive regulatory framework to enable a unified European AI market across all member states.

The Commission has proposed a risk-based approach, with the new regulation applying to AI applications that are considered high-risk. Based on the EU's draft document, the plan is to impose tough but unclear new rules on anything deemed high-risk AI. The vagueness of the high-risk definition is a serious problem: when the regulation comes into force in a few years, it is likely to have an impact on companies as massive as the GDPR's some years ago. AI is still at an early stage, but it will soon be integrated widely into applications that affect our daily lives.

The AI applications that would be banned include algorithms considered to manipulate human behaviour in ways that lead to decisions potentially harmful to an individual. High-risk applications would include, for example, biometric identification, systems that prioritise access to emergency services or educational institutions, recruitment algorithms, and systems that assess an individual's risk or creditworthiness. It has even been suggested that the regulation could be expanded to cover all public-sector AI systems, regardless of their assigned risk level.

TietoEVRY has been commenting on this matter at the EU level for some time. Vague regulation and an arbitrary distinction between low and high risk will significantly slow down investment and further weaken European AI development relative to global competition. This approach can lead to increased dependence on American and Asian AI companies as importers of AI, with the EU losing control of the AI development process.

The new regulation will be challenging for society, industry and government. Small and medium-sized companies and innovative government initiatives, which create much of the innovation, will face high barriers to entry. Requiring all government authorities to meet the highest degree of safety effectively means that in an already risk-averse environment, AI technology will face a very high burden of proof and see little practical use. The EU must weigh the risks of any application against the opportunities it presents for society. Otherwise, we could miss out on AI's potential to solve many societal problems.

We believe the responsible use of AI is a better approach than vague regulation. AI is an area that requires deep expertise and human oversight, and any decision on AI, whether related to its development or to meaningful regulation, needs expert input. Companies that apply AI must ensure they follow ethical principles such as transparency, explainability and accountability. Both executive leaders (CEOs and board members) and operational developers need deep education in AI and machine learning, at PhD level, combining strong mathematical skills with interdisciplinary knowledge, for example in ethical business leadership and implementation. This is essential for making sound decisions. Without that skillset, IT and consultancy companies will develop and sell nonsense AI.

Having had many discussions with key decision-makers in the EU institutions over the years, we are pleased that the dialogue has proven fruitful and that many of our recommendations have been adopted, such as establishing regulatory sandboxes. However, once an innovation matures, it would still need to comply with the risk-based regulation, and the risk assessment can prove highly problematic, not to mention the EU's plan to consider fines as high as 4% of global turnover for prohibited use cases. We hear these concerns all the time in discussions with our customers and industry peers as well.

If the EU Commission's approach is to play it too safe, it can jeopardize innovation altogether. We cannot be strong in AI if law steers technology development in this way. One of the most important takeaways is that Europe needs to be able to build its own AI companies, technology and products; otherwise, we will continue to weaken our ecosystem, our employment and our continent's future.

Download our position paper here. Let us know your thoughts on this topic under the hashtag #TietoEVRYAI.


Christian Guttmann, Global Head of AI and Data, TietoEVRY.

