The European Union will implement a new Artificial Intelligence (AI) law in the coming months, affecting companies that design or use AI systems within its territory. The legislation aims to ensure these systems are safe and respect individuals’ fundamental rights, while promoting investment and innovation in AI.
This legislation is similar to Europe’s General Data Protection Regulation (GDPR) but focuses specifically on AI regulation. It sets requirements for companies designing or using AI systems in the European Union, backed by severe penalties.
Organizations that have already implemented responsible AI programs and ethical risk management have an edge: they are positioned to address a broader range of risks than the law itself covers, which can lessen the effort of meeting its legal requirements.
Key Requirements of the AI Law
The law classifies AI applications according to the risk they pose. Practices such as social scoring or biometric identification in public spaces will be prohibited. High-risk applications, such as those controlling critical infrastructure or determining access to education, must undergo a conformity assessment and be registered with the relevant authorities.
There are particularly sensitive areas where some applications may be categorized as high risk, including:
- Educational or vocational training, which can determine access to education and someone’s professional life course (e.g., exam grading)
- Product safety components (e.g., AI application in robot-assisted surgery)
- Employment, worker management, and access to self-employment (e.g., resume sorting software for hiring procedures)
- Essential public and private services (e.g., banking or credit scoring that enables or denies citizens the opportunity to obtain a loan)
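The tiered approach above can be sketched in code. The mapping below is illustrative only: the use-case names are hypothetical labels paraphrasing this article's examples, not an authoritative reading of the regulation's annexes.

```python
# Illustrative sketch of the AI Act's risk tiers, based on the examples
# in this article. Use-case names and tier assignments are hypothetical
# simplifications, not legal classifications.

PROHIBITED = {"social_scoring", "public_biometric_identification"}
HIGH_RISK = {
    "exam_grading",            # education and vocational training
    "surgical_robot_control",  # safety component of a product
    "resume_screening",        # employment and worker management
    "credit_scoring",          # access to essential services
}

def risk_tier(use_case: str) -> str:
    """Return a simplified risk tier for a given use case."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: conformity assessment and registration required"
    return "limited/minimal risk: transparency obligations may apply"

print(risk_tier("credit_scoring"))
```

In a real compliance program this lookup would be replaced by a documented legal assessment per system, but the structure, prohibited set, high-risk set, residual tier, mirrors how the law is organized.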
On the other hand, general-purpose or mass-market applications (such as ChatGPT and similar systems) must ensure transparency so users know when they are interacting with a machine.
Supervision and Compliance
The law provides for the creation of a European AI Office, which will oversee general-purpose AI models and collaborate with independent experts in the field.
To ensure companies meet these requirements, the law establishes the need for appropriate governance and monitoring measures to ensure AI systems comply with defined ethical and legal standards.
The new regulation requires AI vendors to ensure high-quality data for the training, validation, and testing of their products; however, it does not clearly specify what constitutes “high-quality data.”
Companies must reconsider how they use their own data and how they access others’. A federated data infrastructure, for example, in which data stays where it is, lets organizations work across organizational and geographic boundaries while meeting regulatory requirements. This enables them to leverage AI’s potential while mitigating the risk of breaches or penalties under new and emerging regulations.
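The federated idea can be illustrated with a minimal sketch: each organization computes an aggregate on its own data, and only the aggregates (never the raw records) cross organizational boundaries. The organization names and values below are hypothetical.

```python
# Minimal sketch of federated computation: raw records stay local, and
# only summary statistics leave each site. All names/data are hypothetical.

local_datasets = {
    "org_eu":   [0.72, 0.81, 0.65],   # raw records never leave this site
    "org_apac": [0.58, 0.69],
}

def local_summary(records):
    """Each site shares only (sum, count), not the records themselves."""
    return sum(records), len(records)

def federated_mean(sites):
    """Combine per-site aggregates into a global mean without moving data."""
    total, count = 0.0, 0
    for records in sites.values():
        s, n = local_summary(records)  # only aggregates cross the boundary
        total += s
        count += n
    return total / count

print(round(federated_mean(local_datasets), 3))
```

The same pattern generalizes to federated model training, where sites exchange model updates instead of summary statistics, but the principle is identical: the data does not move.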
Companies Facing the AI Law
The responsibility to comply with this law spans the whole company, from top management down. Companies are expected to establish a compliance program for the Artificial Intelligence Law covering design, implementation, and maintenance. This involves conducting gap analyses, risk assessments, and maturity assessments to determine the necessary resources.
It will be necessary to assess whether an AI model falls into the categories of prohibited or high-risk AI according to the AI Law.
Much of the compliance responsibility will also fall on engineers and data scientists. Failing to continuously assess the data AI systems use to learn and improve can trigger a range of ethical and regulatory issues, including the generation of biased outcomes.
Specific training and development for each position will be crucial in this regard.
Timeline for the AI Law Application
Once the law is formally approved, expected between May and July 2024, and published in the Official Journal of the EU, a transition period will be granted for companies to prepare for full implementation. Technical discussions to finalize the regulation are ongoing. Specific timelines have been set for different provisions of the law, with some obligations taking effect earlier and full application planned within 24 months of entry into force, that is, around mid-2026.
The new EU AI law represents a significant step toward regulating AI for the safety and rights of citizens. Companies must be prepared to meet the set requirements and adapt to the changes this legislation will bring.
Designing and following a compliance program for developing AI products, one that ensures ethical risk assessment and responsible AI use, will give organizations greater transparency and credibility with their customers, and a positive medium- and long-term impact from the use of these technologies.



