Practical guide to EU AI Act hiring compliance for CHROs and HR leaders, covering high-risk AI systems in recruitment, inventories, documentation, contracts, human oversight and enforcement readiness.

From black box to inventory: mapping your high-risk hiring systems

EU AI Act hiring compliance now turns recruitment technology into regulated infrastructure. Regulation (EU) 2024/1689 of the European Parliament and of the Council (the AI Act) presumes most AI systems used for employment decisions to be high-risk under Article 6(2) in conjunction with Annex III, point 4. Over the next hundred days, HR Operations leaders must treat every hiring application and every connected artificial intelligence workflow as a potential high-risk asset, not just another SaaS subscription. That means a disciplined inventory of systems intended for sourcing, screening, ranking, interviewing, task allocation and performance monitoring across all member states where you employ natural persons.

Start with your core stack and list each system with its concrete purpose and data flows. For Workday, SAP SuccessFactors, Greenhouse, Lever, iCIMS or SmartRecruiters, document which modules use artificial intelligence or general purpose AI (GPAI) models for matching, scoring or recommendations, and whether those GPAI models are embedded, optional add-ons or external APIs. GPAI refers to models that can serve many purposes, including hiring, without being designed solely for one task. Tag each use case against Annex III, point 4 (employment, worker management and access to self-employment) and related categories for high-risk systems, including recruitment, promotion, termination and biometric identification or verification used in access control or time tracking.

Next, map providers and deployers for every high-risk system in scope. Under Article 3, your organisation is a deployer when you use, configure or fine-tune purpose-built models or GPAI models in your own hiring practice, while vendors are providers when they place systems on the EU market under their own name. A concrete example: if you use a third-party chatbot powered by a GPAI model to pre-screen candidates for roles in Germany, the chatbot vendor is the provider and your HR function is the deployer of a high-risk system. For each provider and deployer combination, record the legal entity, the relevant article obligations (for example Articles 8–15 for high-risk systems and Article 26 for deployers), the location of data processing, and whether any public authorities or law enforcement agencies interact with the same infrastructure, which can raise systemic risks and trigger stricter market surveillance expectations.

To make this operational, build a simple inventory template that your HRIS and compliance teams can maintain. At minimum, include columns for: system name, provider, modules using AI or GPAI, Annex III category and point, deployer entity, data location, affected roles or jurisdictions, risk score, and owner. A practical, copy-pasteable structure could be:

  • System name | Provider | Deployer entity | AI / GPAI modules | Annex III category & point | Use case (e.g. sourcing, screening, promotion) | Data location | Affected roles / jurisdictions | Risk score | Business owner | Technical owner | Last review date

Use this register as the single source of truth for EU AI Act hiring compliance, and update it whenever you introduce new automation, change vendors or expand into additional member states. For internal enablement, export the register as a CSV and share a read-only version with Talent Acquisition, Legal and Risk so they can filter by country, provider or use case during audits and regulatory queries.
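To make the register queryable in practice, here is a minimal Python sketch that filters the CSV export by country, provider or use case. It assumes the column headers from the template above and a hypothetical file name (ai_hiring_register.csv); it illustrates the filtering workflow, not a prescribed tool.

  import csv

  # Minimal sketch: query the hiring-system register exported as CSV.
  # Column headers mirror the template above; the file name is an assumption.
  REGISTER_PATH = "ai_hiring_register.csv"

  def load_register(path: str) -> list[dict]:
      """Read the register into a list of rows keyed by column header."""
      with open(path, newline="", encoding="utf-8") as f:
          return list(csv.DictReader(f))

  def filter_register(rows: list[dict], criteria: dict[str, str]) -> list[dict]:
      """Keep rows whose columns contain each criterion value (case-insensitive)."""
      return [
          row for row in rows
          if all(value.lower() in row.get(column, "").lower()
                 for column, value in criteria.items())
      ]

  if __name__ == "__main__":
      register = load_register(REGISTER_PATH)
      # Example query Legal might run ahead of a regulatory request:
      hits = filter_register(register, {
          "Affected roles / jurisdictions": "Germany",
          "Use case (e.g. sourcing, screening, promotion)": "screening",
      })
      for row in hits:
          print(row["System name"], "|", row["Provider"], "|", row["Annex III category & point"])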

Documentation, contracts and human-in-the-loop oversight that actually works

Once the inventory is stable, EU AI Act hiring compliance becomes a documentation and contracting exercise. High-risk hiring systems require technical documentation under Article 11 that explains the model architecture, training data sources, performance metrics, foreseeable risk scenarios and built-in safeguards for natural persons affected by decisions. CHROs should already be asking Workday, Greenhouse, Lever, iCIMS and SmartRecruiters for system-level technical documentation and for model cards that cover both GPAI components and narrower, purpose-built components, including the information needed to meet Article 13 transparency and Article 14 human oversight requirements. Where official guidance from the European Commission or the European AI Office on the transition period and secondary legislation is still evolving, note any assumptions you make about interpretation, cite the relevant recitals or articles, and keep a log of updates with dates and sources.

Vendor responses will shape your obligations as deployers and as joint controllers under data protection law. Demand explicit clauses on provider obligations and on shared responsibilities for monitoring high-risk systems, logging, incident reporting and cooperation with market surveillance authorities in all member states where you operate. Renegotiate data processing agreements to cover AI-specific risk, including prohibited practices such as emotion recognition in interviews, opaque biometric categorisation, or any practice that nudges candidates in ways that undermine free choice. A practical clause could state that the provider will: (a) identify which modules qualify as high-risk under Annex III, (b) supply and update Article 11 technical documentation and model cards, (c) cooperate with requests from market surveillance authorities, and (d) notify the deployer without undue delay of serious incidents, performance degradation or changes that affect compliance.

Human in the loop cannot be a recruiter clicking approve on an automated shortlist without understanding the underlying high-risk system. To count as meaningful oversight under Article 14, humans must be able to contest model outputs, override rankings, and access explanations that are grounded in the system’s technical documentation and in the behaviour of any general purpose model involved. Build review workflows where recruiters and HR Business Partners log when they diverge from AI recommendations, and where public authorities or the European Commission could later see a clear audit trail that demonstrates genuine oversight rather than compliance theatre. Align these workflows with your bias monitoring and adverse impact analysis so that every override, escalation or complaint feeds back into model evaluation and risk mitigation.
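As an illustration of what such a review workflow could capture, the following Python sketch appends each reviewer decision to an append-only audit trail; the field names, file name and example values are assumptions, not requirements drawn from the Act.

  import json
  from dataclasses import dataclass, asdict
  from datetime import datetime, timezone

  # Illustrative sketch of an override log for Article 14-style oversight.
  AUDIT_LOG_PATH = "hiring_oversight_log.jsonl"

  @dataclass
  class OversightEvent:
      system_name: str        # e.g. the ATS ranking module from the register
      requisition_id: str     # internal job requisition reference
      candidate_ref: str      # pseudonymised candidate reference, not raw personal data
      ai_recommendation: str  # what the system proposed
      human_decision: str     # what the reviewer actually decided
      diverged: bool          # True when the reviewer overrode the recommendation
      reason: str             # justification the reviewer must record
      reviewer: str           # role or pseudonymised reviewer identifier
      timestamp: str          # ISO 8601, UTC

  def record_oversight_event(event: OversightEvent, path: str = AUDIT_LOG_PATH) -> None:
      """Append one review decision to an append-only JSONL audit trail."""
      with open(path, "a", encoding="utf-8") as f:
          f.write(json.dumps(asdict(event)) + "\n")

  # Example: a recruiter overrides an automated shortlist ranking.
  record_oversight_event(OversightEvent(
      system_name="ATS ranking module",
      requisition_id="REQ-2025-0142",
      candidate_ref="cand-7f3a",
      ai_recommendation="rank 2 of 40, advance to interview",
      human_decision="not advanced; skills assessment scheduled instead",
      diverged=True,
      reason="Ranking appears to under-weight a non-linear career path.",
      reviewer="HRBP-DE-04",
      timestamp=datetime.now(timezone.utc).isoformat(),
  ))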

Global scope, enforcement readiness and the CHRO’s 100-day checklist

EU AI Act hiring compliance does not stop at the EU border, and US-based talent acquisition teams often miss this. If your organisation uses AI-enabled hiring systems to recruit natural persons into roles located in any EU member state, your US recruiters and HRIS managers are deployers of high-risk systems under EU law. That includes GPAI models embedded in chatbots, screening tools or internal mobility platforms, even when the providers are headquartered outside the Union. The main obligations for high-risk systems will apply after the transition period set out in Regulation (EU) 2024/1689, which is expected to run for roughly two years from entry into force, giving organisations a limited window to inventory systems, update contracts and implement human oversight before enforcement begins. Always verify the latest implementation timeline in official EU communications, such as Commission notices or guidance from the European AI Office, as dates, delegated acts and secondary legislation can evolve.

Regulators and market surveillance authorities will expect CHROs to produce a coherent dossier when enforcement actions begin. At minimum, you should be able to show a live register of all high-risk systems, the associated providers and deployers, the relevant Annex III category and point, the applicable article obligations, and the documented risk assessments for each high-risk system. Keep evidence of staff training, bias monitoring, adverse impact analysis, and any remediation steps taken when systemic risks or prohibited practices were identified in day-to-day practice. Structure this dossier so that it aligns with Articles 9–15 on risk management, data governance, technical documentation, record-keeping, transparency and human oversight.
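One way to keep that dossier organised is a simple mapping from each Article 9–15 theme to the evidence you expect to hold; the Python sketch below is illustrative, and the artefact labels are assumptions about typical evidence rather than a prescribed list.

  # Illustrative mapping from Articles 9-15 themes to dossier evidence.
  DOSSIER_STRUCTURE: dict[str, list[str]] = {
      "Article 9 - risk management": ["risk assessment per system", "mitigation log"],
      "Article 10 - data governance": ["provider data descriptions", "bias monitoring reports"],
      "Article 11 - technical documentation": ["vendor technical documentation", "model cards"],
      "Article 12 - record-keeping": ["system logs", "retention policy"],
      "Article 13 - transparency": ["candidate-facing notices", "instructions for use"],
      "Article 14 - human oversight": ["override audit trail", "reviewer training records"],
      "Article 15 - accuracy and robustness": ["performance metrics", "incident reports"],
  }

  def missing_evidence(collected: dict[str, list[str]]) -> dict[str, list[str]]:
      """Per article, list the expected artefacts not yet present in the dossier."""
      gaps = {}
      for article, expected in DOSSIER_STRUCTURE.items():
          outstanding = [item for item in expected if item not in collected.get(article, [])]
          if outstanding:
              gaps[article] = outstanding
      return gaps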

The European Commission and national authorities have signalled that penalties under the EU AI Act can reach up to EUR 35 million or 7% of global annual turnover for prohibited practices, with fines of up to EUR 15 million or 3% for breaches of high-risk obligations, which puts HR technology decisions firmly on the board agenda. Over the next hundred days, your checklist should include finalising the inventory, securing technical documentation from all key providers, updating contracts, and validating human oversight workflows in real hiring scenarios. A simple 100-day plan could assign owners for each stream: HRIS for the inventory and data flows, Legal for contract updates and interpretation of Articles 6, 9–15 and 26, Talent Acquisition for human oversight design and training, and Risk or Internal Audit for independent review. The metric that matters in the end is not the RFP score but month twelve of adoption, when your systems, processes and people still align with both the letter and the spirit of the law and you can demonstrate that alignment to any supervisory or market surveillance authority on request.
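A lightweight way to track such a plan is a small data structure that Risk or Internal Audit can review alongside the register; the streams, owners, day ranges and deliverables in this Python sketch are assumptions to adapt to your organisation.

  # Illustrative 100-day plan; streams, owners, day ranges and deliverables are assumptions.
  HUNDRED_DAY_PLAN = [
      {"stream": "Inventory and data flows", "owner": "HRIS", "days": (1, 30),
       "deliverable": "Complete register of high-risk hiring systems"},
      {"stream": "Contracts and interpretation", "owner": "Legal", "days": (15, 60),
       "deliverable": "Updated DPAs and AI clauses with key providers"},
      {"stream": "Human oversight design and training", "owner": "Talent Acquisition", "days": (30, 80),
       "deliverable": "Override workflows validated in real hiring scenarios"},
      {"stream": "Independent review", "owner": "Risk / Internal Audit", "days": (70, 100),
       "deliverable": "Readiness assessment against Articles 6, 9-15 and 26"},
  ]

  def streams_due_by(day: int) -> list[str]:
      """List the streams whose planned end date falls on or before the given day."""
      return [item["stream"] for item in HUNDRED_DAY_PLAN if item["days"][1] <= day]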

Key statistics on EU AI Act hiring compliance

  • Penalties for the most serious infringements, such as prohibited practices, can reach up to EUR 35 million or 7% of global annual turnover, with breaches of high-risk obligations subject to fines of up to EUR 15 million or 3%, making HR technology governance a material financial risk under the EU AI Act sanctions regime.
  • Recruitment, worker management, task allocation and performance monitoring AI systems are explicitly classified as high-risk under Annex III, point 4 of the EU AI Act, covering employment, worker management and access to self-employment.
  • High-risk AI system obligations apply after a transition period of around two years following entry into force, giving organisations a limited window to inventory systems, update contracts and implement human oversight before full enforcement, subject to confirmation in official EU guidance.
  • Market surveillance authorities in each EU member state will supervise enforcement, supported by coordination from the European Commission and the European AI Office for cross-border issues.

Questions people also ask about EU AI Act hiring compliance

Which hiring technologies are considered high-risk under the EU AI Act?

Hiring technologies that perform or support recruitment, candidate screening, ranking, selection, task allocation or performance monitoring of workers are generally treated as high-risk systems when they rely on artificial intelligence. This includes AI-enabled modules in ATS platforms, algorithmic scheduling tools, productivity monitoring software and biometric access or time tracking systems used in an employment context. Systems that only provide spell checking, basic workflow automation or simple rule-based filters without learning components usually fall outside the high-risk category defined in Article 6 and Annex III.

How should HR teams work with vendors to meet EU AI Act obligations?

HR teams should require vendors to identify which parts of their products qualify as high-risk systems and which rely on general purpose AI (GPAI) models. Contracts must allocate obligations between providers and deployers, including documentation, logging, incident reporting and cooperation with market surveillance authorities. HR Operations should also test vendor claims in practice by running bias audits, validating human-in-the-loop workflows and ensuring that explanations provided to natural persons are understandable and actionable. Where vendors use GPAI, ask them to confirm how they comply with the specific GPAI obligations and how they expect deployers to use the models safely in hiring.

Are non-EU companies hiring into the EU covered by the EU AI Act?

Yes, non-EU companies that use AI-enabled hiring systems to recruit or manage workers in EU-based roles are in scope as deployers. The location of the provider or the data centre does not remove these obligations when the impact is on natural persons in EU member states. US-based talent acquisition teams using AI screening tools for European vacancies therefore need to align their practice with EU AI Act hiring compliance requirements, including risk management, transparency and human oversight for high-risk systems.

What documentation should a CHRO have ready for regulators ?

A CHRO should be able to produce an inventory of all high-risk hiring systems, the associated providers and deployers, and the Annex III categories that apply. They should also maintain technical documentation from vendors, internal risk assessments, records of human oversight procedures, training logs and evidence of corrective actions taken when risks or prohibited practices were identified. This dossier should be structured so that market surveillance authorities and other public authorities can quickly understand the organisation’s governance of AI in hiring and see how Articles 9–15 obligations are implemented in practice.

How does human oversight need to work in AI-supported hiring decisions?

Human oversight must give recruiters and managers real authority to question, override or ignore AI recommendations, not just rubber stamp them. Oversight requires access to meaningful explanations of how the system reached its outputs, awareness of known risks and limitations, and clear escalation paths when something looks wrong. Organisations should log these interventions so they can show regulators that natural persons remain central to decision making and that systemic risks are actively managed, consistent with the human oversight requirements in Article 14 of the EU AI Act.
