The EU AI Act: Can Regulation Keep Up with Innovation?

The EU AI Act (Regulation (EU) 2024/1689), which entered into force on August 1, 2024, is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system that categorizes AI applications from "minimal risk" to "unacceptable risk," with corresponding obligations for developers and deployers. Its passage marked a watershed moment in technology regulation and sparked intense debate about whether it helps or hinders the AI revolution (European Union, 2024).

The Act prohibits certain AI practices outright, among them social scoring by public authorities, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI that exploits the vulnerabilities of specific groups such as children and disabled persons, and subliminal manipulation techniques. High-risk AI, including systems used in healthcare, education, employment, law enforcement, and critical infrastructure, must undergo conformity assessments, maintain human oversight, and ensure transparency and non-discrimination (European Union, 2024).

Supporters argue the Act is essential for protecting fundamental rights. Without regulation, AI systems that affect people's health, employment, and legal standing would operate in a legal vacuum. The Act's requirement for transparency — including disclosure when people interact with AI — addresses the growing problem of AI systems making consequential decisions without accountability. The EU positions itself as a global standard-setter, much as GDPR became the template for data protection worldwide (European Parliament, 2025).

Critics, however, raise serious concerns about competitive disadvantage. European AI companies already lag behind American and Chinese competitors, and the compliance burden could widen the gap. Startups face particular challenges: the cost of conformity assessments, documentation requirements, and legal compliance may be manageable for Google and Microsoft but prohibitive for small European AI companies. Some fear a "Brussels effect" that exports regulatory burden without exporting the innovation ecosystem needed to sustain it (Arms Control, 2025).

Other nations are following the EU's lead, with variations. South Korea has enacted an AI Framework Act that takes effect in January 2026. The United States issued 59 AI-related regulations in 2024 but has no comprehensive federal AI law, relying instead on sector-specific rules and executive orders. China has implemented targeted AI regulations focused on algorithmic recommendations, deepfakes, and generative AI. The global regulatory landscape is fragmenting, creating compliance challenges for companies operating internationally and raising questions about whether any regulation can truly keep pace with the speed of AI development (CSA, 2025; Frontiers, 2025).

Key Sources

  • European Union (2024). Regulation (EU) 2024/1689 — The AI Act.
  • European Parliament (2025). Defence and artificial intelligence.
  • Frontiers in AI (2025). Algorithmic fairness: challenges to building an effective regulatory regime.
  • CSA (2025). AI and Privacy: Shifting from 2024 to 2025.