PRODUCT UPDATE

AI governance gets real. CyberHeed now supports ISO 42001 and NIST AI RMF.

May 16, 2026  ·  CyberHeed Team

CyberHeed now officially supports ISO/IEC 42001:2023 and the NIST AI Risk Management Framework (AI RMF 1.0). Both are live in the platform today.

The pace of AI adoption has outrun governance.

Models are being deployed into customer-facing products. Automated decisions are being made about employees, credit applicants, and healthcare workflows. Foundation models are being integrated into services that touch millions of people.

The risks are real and specific. AI systems can produce outputs that are biased, opaque, or unreliable in ways that classical information security controls were never designed to address. A data breach has a defined perimeter. A biased model, a hallucinating system, or an AI that behaves differently in production than it did in testing does not.

Most organisations know this. What they lack is a structured way to respond: assigning ownership, building evidence, and demonstrating to customers, regulators, and partners that AI is being governed responsibly.

The pressure is already concrete: ungoverned AI is showing up in regulatory investigations, procurement requirements, and customer due diligence questionnaires.

How ISO 42001 and NIST AI RMF relate to standards you already know.

If your organisation has worked through ISO 27001 or NIST CSF, the conceptual ground is familiar. Both AI frameworks build on risk management principles that should already be part of how you think about security. But they address a different problem.

Existing: ISO 27001 / NIST CSF
Information security risk: confidentiality, integrity, availability. The question is whether your systems and data are protected from compromise.

New: ISO 42001 / NIST AI RMF
AI-specific risk: fairness, transparency, safety, human oversight, societal impact. The question is whether your AI systems can be trusted to behave as intended.

ISO/IEC 42001:2023

ISO 42001, published in December 2023, is the first certifiable management system standard for AI. It uses the same Harmonized Structure as ISO 27001, so if your organisation is already certified against ISO 27001, the management system scaffolding is in place. ISO 42001 adds what is specific to AI: an AI System Impact Assessment, lifecycle controls, data governance covering quality and provenance, and controls for human oversight and transparency. Annex A defines 38 controls across nine topic areas, and the standard is certifiable by accredited certification bodies on a three-year cycle.

NIST AI RMF 1.0

The NIST AI Risk Management Framework (NIST AI 100-1, January 2023) is a voluntary, outcome-based framework. It provides a rigorous common language for AI risk across four functions: GOVERN, MAP, MEASURE, and MANAGE. Seventy-two subcategories describe the outcomes an organisation should achieve. Profiles let organisations describe where they are now and where they need to be. ISO 42001 explicitly references NIST AI RMF as a related framework. Many organisations use both: AI RMF for the risk and trustworthiness thinking, ISO 42001 for the certifiable management system shell.
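The profile mechanics described above can be sketched in a few lines. This is an illustration, not CyberHeed's data model: the subcategory IDs follow the AI RMF's own naming (e.g. GOVERN 1.1), but the status values and the gap function are assumptions made for the example.

```python
# Illustrative sketch: an AI RMF profile as a mapping from subcategory ID
# to an achievement status. The gap between a Current and a Target Profile
# then falls out of a simple comparison. Status values are invented here.

current_profile = {
    "GOVERN 1.1": "achieved",       # legal and regulatory requirements understood
    "MAP 1.1": "partial",           # intended purpose documented for some systems
    "MEASURE 2.11": "not_started",  # fairness and bias not yet evaluated
}

target_profile = {
    "GOVERN 1.1": "achieved",
    "MAP 1.1": "achieved",
    "MEASURE 2.11": "achieved",
}

def profile_gap(current: dict, target: dict) -> list:
    """Subcategories whose current status falls short of the target."""
    return sorted(
        sub for sub, want in target.items()
        if current.get(sub, "not_started") != want
    )

print(profile_gap(current_profile, target_profile))
# → ['MAP 1.1', 'MEASURE 2.11']
```

The point of the profile construct is exactly this: once current and target state are explicit per subcategory, the remediation backlog is a derived artefact rather than a fresh judgment call.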

When should your organisation look at these frameworks?

Not every organisation needs ISO 42001 certification immediately. But several situations make this conversation timely now: developing AI products or AI-powered services; deploying AI into regulated workflows in healthcare, financial services, or employment; integrating third-party AI into your products; or building on an existing ISO 27001 programme where the incremental effort of adding ISO 42001 is significantly lower than starting from scratch.

How CyberHeed makes the journey practical.

For most teams, the problem is not understanding what these frameworks require. The problem is doing the work: building the documentation, validating the evidence, and maintaining the posture as AI systems evolve.

SmartPrep builds your AI governance foundation

Adaptive, AI-guided discovery sessions capture how your organisation actually develops and uses AI. The output is a compliance brain that powers every subsequent assessment and drives the full document suite: AI policy, Statement of Applicability, risk and impact assessments, lifecycle procedures, and the rest. For NIST AI RMF, SmartPrep builds your Current Profile across all 72 subcategories from your actual programme.

The Evidence Validator gives immediate feedback

Upload evidence against any control or subcategory. The Validator reads each document, tells you what holds up and what would be challenged under scrutiny, and scores it with specific, actionable feedback. AI governance evidence is inherently more complex than evidence for most other frameworks. The Validator handles all of it.

AutoMatch and cross-framework coverage

If your ISO 27001 programme is already in place, AutoMatch reads your existing documentation and maps it across ISO 42001 and NIST AI RMF automatically. Work done for one framework counts toward every other framework it legitimately satisfies. What remains after AutoMatch is the genuine gap, not the full list.
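The idea behind cross-framework coverage can be sketched as a crosswalk table: evidence accepted for one framework counts toward every requirement it maps to, and whatever is left uncovered is the genuine gap. This is a hypothetical illustration, not the AutoMatch algorithm; the specific control-to-requirement mappings below are examples, not an authoritative crosswalk.

```python
# Illustrative sketch of cross-framework coverage. The crosswalk pairs an
# existing artefact with requirements it can also satisfy; the mappings and
# evidence inventory are invented for this example.
crosswalk = {
    "ISO27001:A.5.1 policies":        ["ISO42001:A.2.2", "AI-RMF:GOVERN 1.2"],
    "ISO27001:A.5.9 asset inventory": ["ISO42001:A.4.3"],
}

evidence_on_file = {"ISO27001:A.5.1 policies"}

all_requirements = {
    "ISO42001:A.2.2", "ISO42001:A.4.3",
    "AI-RMF:GOVERN 1.2", "AI-RMF:MAP 1.1",
}

# Every requirement reachable from evidence already on file.
covered = {
    req
    for artefact in evidence_on_file
    for req in crosswalk.get(artefact, [])
}

remaining_gap = sorted(all_requirements - covered)
print(remaining_gap)
# → ['AI-RMF:MAP 1.1', 'ISO42001:A.4.3']
```

The design choice this models is the one the paragraph above describes: the work already done is subtracted first, so the backlog that remains is only what no existing evidence can legitimately satisfy.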

See ISO 42001 and NIST AI RMF in the platform.

30 minutes. Your AI governance requirements. We will show you exactly where you stand.

Book a Demo

GRC, but smart. Now for AI too.

Compliance and capability should grow together. Adding ISO 42001 and NIST AI RMF to the platform is the natural extension of that idea into the part of the technology landscape that most organisations are still figuring out. The frameworks exist. The expectation is building. The work is real. CyberHeed makes that work practical.

Start your AI governance journey.

ISO 42001 and NIST AI RMF are live in the platform today.