
EU AI Act: Did Europe Just Kill Its AI Development?

The EU AI Act is a legislative milestone for the future of artificial intelligence. Credit: DALLE for the Greek Reporter

The European Parliament recently adopted the world’s first comprehensive regulation on artificial intelligence, known as the AI Act.

According to EU officials, this landmark piece of legislation aims to ensure that AI systems developed, sold, or used in the EU are safe, transparent, and in accordance with fundamental human rights.

While the AI Act is a significant step towards responsible AI governance, critics worry that its strict requirements could stifle innovation and put Europe at a disadvantage in the global AI race, allowing geopolitical rivals such as China or even the US to take a significant lead in the field.

EU’s AI Act takes risk-based approach to AI regulation

The European AI Act takes a fundamentally risk-based approach. It categorizes AI systems into four different levels: unacceptable, high, limited, and minimal risk.

It outright bans certain “unacceptable” AI practices, such as social scoring and real-time biometric identification in public spaces, across the European Union.

For “high-risk” AI used in areas critical to European societies, such as healthcare, infrastructure, education, and law enforcement, the act imposes strict requirements on development, testing, and ongoing monitoring.

Even AI systems that interact with people in their everyday lives, such as chatbots and emotion recognition tools, will face strict transparency obligations if they are to continue operating in Europe under the new rules.

Nonetheless, according to the new legislation, “limited” and “minimal” risk AI will be subject to much lighter requirements, focusing mainly on transparency and user information.

Strict requirements for high-risk AI in the EU

Providers of “high-risk” AI systems will have their work cut out for them in the EU under the AI Act. Before these systems can be placed on the market, they must undergo thorough conformity assessments to get the green light. Providers will also need to implement robust risk management systems, meet high data quality standards, and maintain detailed technical documentation that must be available at all times.

High-risk AI must also enable human oversight and be continuously monitored throughout its lifecycle to avoid dangerous and unwanted complications. If, despite these strict rules, incidents occur, providers will be legally obliged to report them to the relevant European authorities, and citizens will have the right to file official complaints.

While these requirements clearly aim to protect the public from the unknown consequences of uncontrolled AI development, some fear they could prove overly burdensome, especially for smaller AI companies and startups that have neither the funds nor the manpower to comply with all of these regulations.

EU’s AI Act introduces rules for general purpose AI

The EU’s AI Act also includes dedicated rules for foundation models, such as those behind OpenAI’s ChatGPT and xAI’s Grok.

These general-purpose AI systems, which can be adapted for a wide range of applications, will need to comply with transparency requirements regarding their training data, energy consumption, and potential copyright issues.

High-impact general-purpose AI will face additional obligations, including conducting risk assessments to help providers identify and mitigate potential harms.

However, there is still ongoing debate among MEPs and AI experts about whether these requirements are sufficient to mitigate the broader risks that increasingly powerful AI systems pose to our societies as a whole.

EU aims to balance innovation and enforcement in AI regulation

It’s not all doom and gloom, however. To help startups and small and medium-sized businesses navigate and survive the new rules, the AI Act introduces crucial “regulatory sandboxes.”

These sandboxes will serve as controlled environments where companies can develop and test their AI systems with guidance and support from suitably trained European authorities. Nonetheless, violations of the act after this initial stage can result in hefty fines of up to €35 million or seven percent of a company’s global annual turnover, whichever is higher.

Enforcement of these regulations will primarily fall to national authorities, as the EU does not have an EU-wide policing mechanism. Nevertheless, a new EU-level AI Office overseeing compliance and advising on AI policy will be established to support individual member states with their new duties. Most of the rules will apply from 2026, after a two-year transition period, though some provisions, such as the bans on prohibited practices, will come into force sooner.

Will the EU’s AI Act hinder or help AI development?

The EU’s AI Act is a landmark piece of legislation that will undoubtedly shape the future of AI development in Europe and the world. As the first effort of its kind, it seeks, according to European authorities, to strike a balance between protecting fundamental rights and public safety while still promoting innovation and competitiveness in the AI sector.

However, much remains to be done to implement the rules and governance structures outlined in the act. Its effectiveness will largely depend on how well EU member states enforce the requirements and make use of the resources the EU provides to national authorities to support compliance with the new law.

As governments worldwide, including the US and China, struggle to keep up with the challenges of regulating rapidly advancing AI technologies, the EU’s approach sets an important precedent, whether one supports it or not.

While some worry that the AI Act could hinder Europe’s AI ambitions and pose hurdles to wider AI development, others argue that responsible AI governance is crucial for long-term success.

Only time will tell if Europe has struck the right balance with this groundbreaking legislation.
