CyberHeed builds your AI Management System end to end - AI policy, impact assessments, lifecycle controls, evidence and surveillance. The first AI standard, handled like the management system it is.
ISO/IEC 42001:2023 is the first international management system standard for Artificial Intelligence. Published in December 2023 and developed by ISO/IEC JTC 1/SC 42, it specifies the requirements for establishing, implementing, maintaining and continually improving an AI Management System (AIMS). It applies to any organisation that provides or uses products or services that utilise AI systems, regardless of size or industry.
It addresses what classical IT management does not: automated decision-making that can be non-transparent, data-driven systems whose behaviour comes from training rather than human-coded logic, and continuous learning systems that change in production.
ISO 42001 applies to any organisation that develops, deploys, procures or uses AI systems. It is particularly relevant for:
Companies building AI products, machine learning platforms, foundation models, or AI-powered features. ISO 42001 demonstrates responsible AI practice to customers, partners and regulators - and is widely discussed as a supporting standard for conformity assessment of high-risk AI systems under the EU AI Act.
Organisations embedding third-party AI into their products and services, or using AI to make decisions about customers, employees or operations. ISO 42001 establishes accountability across the AI supply chain and meets growing buyer expectations on responsible AI.
Health, finance, transport, employment, energy and defence. Annex D of the standard explicitly names these as sectors where AI Management Systems are particularly applicable. Sector regulators are increasingly looking to ISO 42001 as the reference for AI governance.
AI procurement is rapidly being formalised in government and public sector tenders. Organisations seeking to supply AI capability to government will increasingly need to demonstrate a credible AI management system.
Like other ISO management system standards, ISO 42001 certification follows a structured cycle delivered by accredited certification bodies:
Identify where current AI practices fall short of ISO 42001 requirements. Map AI systems, organisational roles, and existing governance against the clauses and Annex A controls.
Build the AI Management System: AI policy, risk assessment, AI system impact assessment, Statement of Applicability, lifecycle controls, data governance, human oversight, and training.
Conduct internal audits to verify the AIMS meets all requirements before external assessment. Close nonconformities and confirm impact assessments are complete and current.
The certification body reviews documentation: AI policy, Statement of Applicability, AI risk assessment, AI system impact assessments, and management system records. They confirm readiness for Stage 2.
The auditor verifies the AIMS is implemented and operating effectively across the AI system lifecycle. They interview staff, sample controls, and test how AI risks and impacts are actually being managed.
Annual surveillance audits verify continued compliance. AI moves fast - models change, data shifts, new systems are deployed. Evidence must be current, and the AIMS must keep pace.
Full recertification audit. The entire AIMS is reassessed. The cycle begins again for another three years.
ISO 42001 uses the Harmonized Structure shared with ISO 27001, ISO 9001 and other ISO management system standards. The clauses establish how AI is governed, risk-assessed, resourced, operated and improved - so that controls are part of a system, not a one-off project.
Understand internal and external context, the needs of interested parties, and define the scope of the AIMS. Critically, the organisation must determine its role(s) with respect to AI - provider, producer, customer, partner, subject, or relevant authority - because this determines which requirements and controls apply.
Top management must demonstrate commitment and establish an AI policy aligned with the organisation's strategic direction. The note to clause 5.1 explicitly highlights modelling a responsible AI culture as a demonstration of leadership.
Identify AI risks and opportunities. Conduct AI risk assessment, AI risk treatment, and - distinctly - AI system impact assessment covering individuals, groups and societies. Set measurable AI objectives. Produce a Statement of Applicability covering Annex A controls.
Provide resources, ensure competence (AI demands interdisciplinary skills), build awareness, manage communication and documented information. People, capability and infrastructure for the AIMS to operate.
Execute the planned AI risk assessments, risk treatments and impact assessments. Operate the AI system lifecycle controls (Annex A.6). Retain documented evidence that each was carried out as planned.
Monitor, measure, analyse and evaluate the AIMS and AI system performance. Conduct internal audits. Perform management reviews. AI systems drift - this clause ensures the AIMS sees it.
Address nonconformities with corrective actions. Drive continual improvement of the AIMS. AI moves faster than most management systems - the AIMS has to be built to keep up.
Annex A of ISO 42001 contains 38 reference controls organised into nine topic areas (A.2 to A.10). As with ISO 27001, every control must be considered in the Statement of Applicability, with justification for inclusion or exclusion based on the AI risk assessment.
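To make that concrete, here is a minimal sketch of how a Statement of Applicability entry could be recorded. The data structure, field names and example justifications are illustrative assumptions, not something the standard prescribes; the control references follow the Annex A numbering.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One Annex A control in a Statement of Applicability (illustrative schema)."""
    control_id: str      # Annex A reference, e.g. "A.2.2" (assumed field name)
    title: str
    applicable: bool
    justification: str   # rationale tied to the AI risk assessment

def unjustified(entries):
    """Every control must carry a justification, whether included or excluded."""
    return [e.control_id for e in entries if not e.justification.strip()]

# Hypothetical entries for a product company building its own AI features
soa = [
    SoAEntry("A.2.2", "AI policy", True,
             "We develop and operate AI systems; a documented AI policy is required."),
    SoAEntry("A.10.3", "Suppliers", False,
             "No third-party AI components are procured in the current scope."),
]

print(unjustified(soa))  # an empty list: every entry carries a justification
```

The point the sketch captures is that exclusion is as deliberate as inclusion: a control left out without a documented rationale is a gap an auditor will find.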
AI policy, alignment with other organisational policies, and review of the AI policy. The governance anchor for everything that follows - documented direction on how AI is developed and used.
AI roles and responsibilities, and a process for reporting concerns about AI systems with confidentiality, anonymity and protection against reprisal. Accountability and a safe channel for raising AI-specific concerns.
Documentation of the resources needed across the AI system lifecycle: data, tooling, system and computing resources, and human competence. The foundation for understanding what an AI system is actually made of.
A formal AI System Impact Assessment process - distinct from risk assessment - covering impacts on individuals, groups and societies. Fairness, accountability, transparency, safety, privacy, accessibility, human rights, and societal effects must all be considered.
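As an illustration only (the standard prescribes the process, not a data format), an impact assessment record might track affected parties and coverage across the named dimensions like this. All field and function names here are assumptions for the sketch:

```python
from dataclasses import dataclass, field

# Impact dimensions named in the text above (illustrative tuple, assumed naming)
IMPACT_DIMENSIONS = (
    "fairness", "accountability", "transparency", "safety",
    "privacy", "accessibility", "human_rights", "societal",
)

@dataclass
class ImpactAssessment:
    """One AI System Impact Assessment record (illustrative schema)."""
    system_name: str
    affected_parties: list                         # individuals, groups, societies
    findings: dict = field(default_factory=dict)   # dimension -> assessed narrative

    def uncovered_dimensions(self):
        """Dimensions the assessment has not yet addressed."""
        return [d for d in IMPACT_DIMENSIONS if d not in self.findings]

# Hypothetical, partially completed assessment
aia = ImpactAssessment(
    system_name="credit-scoring-model",
    affected_parties=["loan applicants", "credit officers"],
    findings={"fairness": "Disparate impact tested across protected groups."},
)

print(aia.uncovered_dimensions())  # seven dimensions still to be assessed
```

A structure like this makes the "distinct from risk assessment" point operational: risk assessment asks what could harm the organisation, while the impact assessment tracks, dimension by dimension, what the system could do to people.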
The largest topic area. Management guidance for responsible development, plus lifecycle controls covering requirements, design and development documentation, verification and validation, deployment, operation and monitoring, technical documentation and event logging. AI-specific threats - data poisoning, model stealing, model inversion - are explicitly in scope.
Data management for development and enhancement, acquisition, quality, provenance and preparation. Data governance is treated as a first-class concern because in AI, data is the system.
System documentation and user information, external reporting channels, incident communication, and obligations to inform regulators, customers and authorities. Transparency made operational.
Processes for responsible use, objectives for responsible use (fairness, accountability, transparency, explainability, reliability, safety, privacy, security, accessibility), and ensuring AI systems are used as intended and within their documented scope. Where and how human oversight is required is determined here.
Allocating responsibilities across the AI supply chain, supplier processes, and consideration of customer expectations. AI accountability follows the system - it does not stop at the contract boundary.
ISO 42001 is not a paper exercise. The AI System Impact Assessment, lifecycle controls and continuous monitoring requirements mean the AIMS has to keep pace with how AI actually evolves - retrained models, new datasets, drifting behaviour, and shifting regulation.
Every phase of ISO 42001 compliance - from initial preparation through ongoing surveillance - managed in one platform. The same agentic GRC approach as our ISO 27001 implementation, adapted for AI-specific requirements.
An adaptive, AI-guided journey covering every ISO 42001 topic area. AI policy, AI risk and impact assessment, lifecycle and data governance, transparency and human oversight, supplier and customer responsibilities - each session adapts as the conversation unfolds, follows up on gaps, and captures how your organisation actually develops or uses AI.
At the end, your complete documentation suite is generated from the knowledge gathered - including the AI System Impact Assessments that are unique to this standard.
Upload evidence for any of the 38 Annex A controls. AI reads each document, assesses whether it satisfies the requirement, and tells you specifically what's strong and what an auditor would flag. Scored 0-5 with actionable feedback.
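By way of illustration, a 0-5 evidence score with auditor-style feedback could be represented as follows. The schema, field names and pass threshold are assumptions for the sketch, not CyberHeed's actual data model:

```python
from dataclasses import dataclass

@dataclass
class EvidenceReview:
    """AI assessment of one uploaded evidence document (illustrative schema)."""
    control_id: str      # Annex A control the document was uploaded against
    score: int           # 0 (no evidence) .. 5 (fully satisfies the control)
    strengths: list      # what is specifically strong
    audit_flags: list    # what an auditor would likely raise

    def __post_init__(self):
        if not 0 <= self.score <= 5:
            raise ValueError("score must be between 0 and 5")

    def passes(self, threshold: int = 4) -> bool:
        """Illustrative pass mark; real thresholds depend on audit context."""
        return self.score >= threshold and not self.audit_flags

# Hypothetical review of a verification-and-validation document
review = EvidenceReview(
    control_id="A.6.2.4",
    score=3,
    strengths=["Verification results are documented per release."],
    audit_flags=["No evidence of validation against production data."],
)

print(review.passes())  # False
```

The actionable part is the pairing: a score alone says how far along you are, while the flags say exactly what to fix before the auditor arrives.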
AutoMatch reads your existing documentation and maps it to the right controls across ISO 42001, ISO 27001 and any other framework you maintain. Hours of cross-referencing, handled automatically.
Surveillance audits become routine. Impact assessments are kept current as systems change. Model performance and data drift are tracked. Recurring tasks have owners and deadlines. Gaps are flagged before auditors find them.
When Year 2 and Year 3 come around, your posture is current. When recertification arrives, you're already ahead.
From AI policy to certification. One platform. AI that does the heavy lifting.
Book a Demo