EU AI Act

Intermediate

European regulation classifying AI systems by risk.


Why It Matters

The EU AI Act is crucial for establishing a framework that balances innovation with ethical considerations in AI development. By regulating AI systems based on their risk levels, it aims to protect citizens' rights and promote trust in AI technologies, influencing global standards and practices in AI governance.

The EU AI Act is a regulatory framework, proposed by the European Commission in 2021 and formally adopted in 2024, that establishes a comprehensive legal structure for artificial intelligence within the European Union. It categorizes AI systems into four risk tiers (minimal, limited, high, and unacceptable), each with corresponding regulatory requirements. High-risk AI systems, such as those used in critical infrastructure, education, and law enforcement, are subject to stringent compliance measures, including risk assessments, data governance, and human oversight. Systems posing an unacceptable risk are prohibited outright, limited-risk systems face transparency obligations, and minimal-risk systems carry no specific requirements. The Act's risk-based approach mandates that developers and deployers of AI systems ensure transparency, accountability, and safety. The legal framework is grounded in human rights and ethical principles, reflecting the EU's commitment to fostering trustworthy AI while promoting innovation, and it is expected to significantly influence global AI governance standards.
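The tiered, risk-based structure described above can be sketched as a simple lookup table. This is an illustrative simplification only: the tier names follow the Act, but the example use cases and obligation summaries below are hypothetical paraphrases, not legal text.

```python
# Simplified obligation summary per risk tier (illustrative, not legal text).
RISK_TIERS = {
    "unacceptable": "prohibited",
    "high": "risk assessment, data governance, human oversight",
    "limited": "transparency disclosures",
    "minimal": "no specific obligations",
}

# Hypothetical mapping of example use cases to tiers, loosely based on the
# high-risk domains the Act names (critical infrastructure, education,
# law enforcement).
EXAMPLE_USE_CASES = {
    "critical_infrastructure_safety": "high",
    "exam_scoring": "high",
    "law_enforcement_risk_assessment": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the simplified obligation summary for a known example use case."""
    tier = EXAMPLE_USE_CASES[use_case]
    return RISK_TIERS[tier]
```

The point of the sketch is that compliance burden is determined by the tier a system falls into, not by the technology it uses: for instance, `obligations_for("spam_filter")` yields `"no specific obligations"`, while any of the high-risk examples trigger the full compliance list.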

