
11th Sep, 2024

Jade Beddoe
Technology Strategy Lead

The European Union's Artificial Intelligence Act (AI Act), which came into force on August 1, 2024, marks a significant milestone in the regulation of artificial intelligence within the EU.

It applies across all EU member states, creating a regulatory framework for AI systems designed to foster trustworthy, safe, and ethical AI development and deployment. Its jurisdiction extends to any AI system used within the EU, regardless of where that system was developed or is operated. This means any company using AI systems outside the EU for decisions or interactions affecting the EU workforce must also comply with the act.

A risk-based approach

The act provides an AI system categorisation based on risk levels: 

  1. Unacceptable risk: AI systems that pose a threat to safety, livelihoods, and rights, such as social scoring AI, biometric categorisation, and predictive policing based on profiling, are banned. 

  2. High risk: AI systems that can negatively impact safety or fundamental rights, such as those used in healthcare, policing, transport, and employment, are subject to strict requirements including conformity assessments, data governance, and human oversight. 

  3. Limited risk: These systems pose less threat; however, transparency is key here. End-users must be made aware that AI is being used and should give informed consent before interacting with it. Examples include AI-altered content such as images, deepfakes, and chatbots. 

  4. Minimal risk: These systems pose minimal or no risk and can be developed freely. Most AI use falls into this category. Examples include AI-enabled video games and spam filters. 

The obligations within the act are clear and build on the risk-based categorisation with additional requirements, including:  

Transparency and accountability

The act emphasises the need for transparency in AI operations, particularly for high-risk systems. This includes clear documentation, event logs, and technical details to ensure accountability. 

Human oversight

High-risk AI systems must be monitored by humans to ensure fairness and accuracy, preventing biases and errors. 

Data governance

The act complements the General Data Protection Regulation (GDPR), emphasising robust data governance. Employers must use anonymised data where possible and establish clear data processing agreements with third-party AI providers. 

What is the impact on recruitment?

The act has significant implications for recruitment processes, as most AI used in recruitment would be categorised as high-risk. This classification brings several specific requirements, which include:  

  1. Risk management: Employers must implement measures to minimise risks associated with AI in recruitment, ensuring systems are unbiased and fair. 

  2. High-quality data sets: Data used in AI systems must be relevant, representative, and free of errors to prevent biases. 

  3. Transparency and documentation: Employers must inform candidates about the use of AI in recruitment and maintain detailed records of AI operations. 

  4. Human oversight: AI-driven recruitment processes must be monitored by humans to ensure decisions are fair and accurate. 

While compliance will fall heavily on the developers of AI systems, an employer that has procured and is deploying AI in its hiring process must also meet the specific obligations placed on deployers of those systems. This applies especially to the information provided to candidates and to ensuring full transparency about how they are interacting with AI systems throughout the hiring process. 

With non-compliance or breach of the legislation subject to fines of up to €35 million or seven per cent of total worldwide annual turnover, whichever is higher, businesses need to ensure they are ready for this change.  

Preparing for readiness 

The phased implementation of the act gives employers time to adapt and prepare for full compliance. Employers could consider these proactive steps to navigate the transition smoothly and continue to harness the benefits of AI in a compliant and ethical manner.  

  • Conduct an internal audit: Review all current and planned AI systems, assess each system's risk category, and prepare for a compliance review, either directly or with the third-party provider. 

  • Engage legal and regulatory experts to understand the act and implement necessary changes and practices.  

  • Consider internal training and policy creation to educate employees about the AI Act and its ethical, financial and reputational implications. 

  • Establish new, or enhance existing, data governance frameworks that align with the EU AI Act, GDPR and any other jurisdiction's AI regulatory guidance relevant to your business. 

Whatever workforce challenges you're faced with, our experts can help match the right talent solutions to the unique needs of your organisation. Get in touch with one of our experts today.


