The European Union's AI Act, which takes effect today, introduces comprehensive rules governing the development, deployment, and use of artificial intelligence across member states. Aimed at fostering human-centric and trustworthy AI, the Act protects health, safety, and fundamental rights, as well as democracy and environmental sustainability. It seeks to encourage innovation while guarding against the potential harms of AI systems.
High-risk AI applications identified by the Act include medical devices, biometric identification systems, and the automated processing of personal data. AI systems that determine access to essential services such as healthcare are subject to specific requirements, reflecting the special considerations those applications demand. The Act defines biometric identification as the automated recognition of physical and behavioural features, with an exception for systems used purely for authentication. The use of AI to recognise emotions or infer intentions is restricted, though identifying physical states such as fatigue remains permitted in safety-critical roles.
Transparency, traceability, and bias mitigation are key requirements for certain AI applications, ensuring explainability and fairness. High-risk AI systems must be tested under real-world conditions with the informed consent of participants, and incidents that occur during testing must be reported to the relevant authorities. While the Act aims to protect EU citizens, it also seeks to support innovation: AI systems developed solely for scientific research are explicitly excluded from its scope so that research and development activities are not stifled.