Dentons US LLP


AI tools in hiring: ICO's recommendations for recruiters

November 20, 2024

Background

AI is starting to feature in recruitment in a variety of ways, from identifying potential candidates to summarising CVs and scoring candidates. It is not difficult to see how bias in the AI tools used, or improper use of personal data, could lead to harm to jobseekers.

Given the risks AI poses to data privacy and individuals' rights, it is a focus area for the Information Commissioner's Office (ICO). Over the last 18 months, the ICO has been carrying out audits (with consent) of developers and providers of AI-powered sourcing, screening and selection tools.

The audits led to the ICO making nearly 300 recommendations to the developers and providers, all of which they partially or fully accepted. The ICO has now published a report that summarises the key findings, along with questions for employers to consider when utilising AI recruitment tools.

Risks with AI in recruitment processes

Whilst recognising the benefits of using AI tools within recruitment, such as efficient processing of high numbers of applications, the ICO report identifies key issues and risks. These include several data protection and privacy management shortcomings, such as:

  • tools allowing recruiters to filter candidates by protected characteristics, risking discrimination;
  • inferring candidates' characteristics such as gender or ethnicity from their names, instead of directly asking;
  • excessive collection of personal information, often with some pulled from social media, to build recruitment databases without candidates' awareness; and
  • AI providers misclassifying themselves as data processors instead of data controllers, failing to adhere to data protection principles and shifting compliance responsibility to recruiters through unclear contracts.

The ICO's seven recommendations

The ICO has distilled the almost 300 recommendations made to the developers and providers involved in the audits into seven key recommendations for anyone designing or using AI recruitment tools.

  • Fairness: AI providers and recruiters must ensure fair processing of personal information by monitoring and addressing fairness, accuracy and bias. Special category data used for bias monitoring must be adequate, accurate and compliant with data protection laws.
  • Transparency and explainability: Recruiters must inform candidates how they will process their information, by providing detailed privacy information. AI providers should supply technical details about AI logic to recruiters to aid with this, and contracts between recruiters and providers must specify which party is responsible for delivering privacy information to candidates.
  • Data minimisation and purpose limitation: AI providers must evaluate the minimum data necessary for AI development, its processing purpose and duration of use. Recruiters should ensure they collect the minimum amount of personal information possible to achieve the tool's purpose, process the data solely for that purpose and not store, share or re-use it for other purposes.
  • Data protection impact assessments (DPIAs): AI providers and recruiters must conduct DPIAs early in AI development if they anticipate high-risk processing. They should update the DPIA as the tool evolves, assessing privacy risks, implementing mitigating controls and analysing trade-offs between privacy and other competing interests.
  • Data controller and processor roles: AI providers and recruiters need to set out whether the AI provider acts as a controller, joint controller or processor for each instance of personal data processing. The parties should document this designation in contracts and privacy notices. An AI provider is considered a controller if it has overall control over the processing methods and purposes.
  • Explicit processing instructions: Recruiters must provide detailed written instructions to AI providers for processing personal data, specifying data fields, processing methods and purposes, desired outputs and safeguards. They should regularly verify compliance. AI providers, acting as processors, must adhere strictly to these instructions and not use the data for their own purposes.
  • Lawful basis and additional condition: Before processing, AI providers and recruiters must determine the lawful basis for processing personal data and an additional condition for any special category data. They should document these bases and additional conditions in privacy information and contracts. If using legitimate interests, they must complete a legitimate interests assessment. When relying on consent, it must be specific, clear, logged and easy to withdraw.

Questions for employers

Based on these principles, the ICO has also devised six questions for organisations to consider when using AI within their employee recruitment processes, to aid with compliance:

  • Have you completed a DPIA?
  • What is your lawful basis for processing personal information?
  • Have you documented responsibilities, such as who is a controller and who is a processor of information, and set clear processing instructions?
  • Have you checked the AI provider has mitigated bias?
  • Is the AI tool being used transparently?
  • How will you limit unnecessary processing?

Key takeaways

The ICO's recommendations emphasise that both employers and AI providers must actively comply with data protection laws when using AI in recruitment. While these recommendations are not legally binding, non-compliance could lead to breaches of the Data Protection Act 2018 and UK GDPR, and expose organisations using AI tools in recruitment to discrimination claims.

The government has also issued guidance on the responsible use of AI in recruitment. To minimise risk in this developing area, employers should ensure they follow the government guidance and use the ICO's questions as a framework when considering the procurement of AI recruitment tools.