NIST AI RMF 1.0

NIST AI RMF 1.0. The risk language for trustworthy AI.

CyberHeed runs your AI RMF adoption end to end - GOVERN, MAP, MEASURE, MANAGE. Profiles built from how you actually develop and use AI, trustworthiness evaluated against the framework.

4 Core Functions
19 Categories
72 Subcategories
7 Trustworthy AI Traits
NIST AI 100-1
Issued January 2023
Voluntary, non-sector-specific
Pairs with ISO 42001
What the AI RMF Is

A risk management framework for AI. Not a checklist of AI principles.

The NIST AI Risk Management Framework (AI RMF 1.0) was issued by the U.S. National Institute of Standards and Technology in January 2023 as publication NIST AI 100-1. It is a voluntary, non-sector-specific framework that helps organisations designing, developing, deploying or using AI systems manage AI-specific risks and build toward trustworthy and responsible AI.

AI risk under the framework is defined as the composite measure of an event's probability and the magnitude of the harm if it occurs. The AI RMF does not prescribe risk tolerance - that is contextual to the organisation and use case. What it does provide is a structured way to identify, prioritise, measure and manage AI risks across the lifecycle.
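The composite definition can be sketched as a simple scoring function. This is illustrative only: the framework deliberately leaves scales and the tolerance threshold to the organisation, so the 1-5 scales and threshold below are assumptions.

```python
# Illustrative sketch of the AI RMF risk definition: risk as a composite
# of an event's probability and the magnitude of harm if it occurs.
# The 1-5 scales and the tolerance value are assumptions, not prescribed
# by the framework.

def risk_score(probability: float, magnitude: float) -> float:
    """Composite risk measure: probability of occurrence x harm magnitude."""
    return probability * magnitude

def exceeds_tolerance(score: float, tolerance: float) -> bool:
    """Risk tolerance is contextual; the AI RMF does not prescribe it."""
    return score > tolerance

# Example: a 4/5 likelihood of a 3/5-magnitude harm, against a tolerance of 10.
score = risk_score(4, 3)
print(score, exceeds_tolerance(score, tolerance=10))  # 12 True
```

The point of the sketch is the structure, not the arithmetic: two inputs, one composite measure, and an organisation-specific threshold applied afterwards.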

Who Uses the AI RMF?

The framework addresses AI actors - those who play an active role in the AI system lifecycle. In practice, that means:

AI Developers and Providers

Organisations building AI products, machine learning platforms, foundation models, or AI-powered capabilities. AI RMF gives developers a structured language for risk and trustworthiness that customers, partners and regulators increasingly expect.

AI Deployers and Integrators

Organisations embedding third-party AI into their products and using AI to make decisions affecting customers, employees or operations. AI RMF defines responsibilities across the supply chain and provides the language for procurement and third-party risk management.

U.S. Federal Agencies and Suppliers

U.S. federal agencies and their suppliers are increasingly directed to adopt AI RMF practices for AI used in government contexts. Organisations selling AI to the U.S. government - directly or via integrators - benefit from demonstrating alignment.


Organisations Pursuing ISO 42001

ISO/IEC 42001 explicitly references NIST AI RMF as describing AI system roles and lifecycle. Many organisations adopt AI RMF as the outcome-based foundation and then implement ISO 42001 as the certifiable management system on top.

The Adoption Pathway

Unlike ISO 42001, the AI RMF is voluntary and not certifiable. Adoption is structured around Profiles and continuous improvement rather than audit cycles:

1. Establish Context and Risk Tolerance

Determine the organisation's AI risk tolerance, applicable legal and regulatory requirements, and which AI systems are in scope. This is the foundation for everything that follows under the GOVERN function.

2. Build a Current Profile

Assess how the organisation currently performs against the Core's 72 subcategories. The Current Profile describes outcomes being achieved today, identified honestly rather than aspirationally.

3. Define a Target Profile

Determine the outcomes the organisation needs to achieve based on risk tolerance, mission, business value and stakeholder expectations. Cross-sectoral profiles can be reused where they exist.

4. Map the Gap and Plan

Compare Current and Target Profiles to surface the gaps to be addressed. Prioritise based on risk - not all gaps carry the same weight. Build an action plan with owners and timelines.
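Steps 2 through 4 amount to a structured diff between two outcome maps. A minimal sketch, assuming a hypothetical 0-5 maturity scale (the AI RMF itself does not prescribe a scoring scale, and the subcategory IDs below are abbreviated for illustration):

```python
# Sketch of the Current-vs-Target Profile gap analysis (steps 2-4).
# The 0-5 maturity scores and abbreviated subcategory IDs are illustrative
# assumptions; the AI RMF does not prescribe a scoring scale.

current = {"GV-1.1": 3, "MP-2.3": 1, "MS-2.5": 2}   # outcomes achieved today
target  = {"GV-1.1": 3, "MP-2.3": 4, "MS-2.5": 4}   # outcomes needed

# Surface only the subcategories falling short of target, with the gap size.
gaps = {sub: target[sub] - current.get(sub, 0)
        for sub in target if target[sub] > current.get(sub, 0)}

# Largest shortfalls first; in practice, prioritisation would weigh risk,
# not just gap size.
for sub, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{sub}: gap of {gap}")
```

Here the diff surfaces MP-2.3 and MS-2.5 and drops GV-1.1, which already meets its target. The real work is in the scoring honesty, not the comparison.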

5. Execute Across MAP, MEASURE, MANAGE

Implement the chosen outcomes across the three operational functions. MAP establishes context for each AI system. MEASURE evaluates trustworthiness characteristics. MANAGE responds to and tracks AI risks.

6. Monitor and Improve

AI moves fast - models change, data drifts, regulations evolve. The AI RMF expects organisations to periodically evaluate whether the framework is actually improving their ability to manage AI risk, and to update profiles and practices accordingly.

The Core

Four functions. GOVERN is cross-cutting; the other three are AI-system specific.

The AI RMF Core is the taxonomy of AI risk management outcomes. GOVERN cultivates the culture and policy that infuse every other function. MAP, MEASURE and MANAGE are applied per AI system and per lifecycle stage. Every subcategory describes an outcome, not a prescriptive control.

GOVERN (19 subcategories, 6 categories)

A culture of AI risk management is cultivated and present throughout the organisation. Policies, processes and accountability structures. The categories: Risk Management Policies (GV1), Accountability Structures (GV2), Workforce Diversity & Inclusion (GV3), Risk Culture (GV4), AI Actor Engagement (GV5), and Third-Party & Supply Chain (GV6). Cross-cutting - every other function depends on it.

MAP (18 subcategories, 5 categories)

Context is recognised and risks related to context are identified. Establishes the framing for each AI system: Context Establishment (MP1), AI System Categorisation (MP2), Capabilities & Benchmarks (MP3), Risk & Benefit Mapping (MP4), and Impact Characterisation (MP5). This is where risks get named before they get measured.

MEASURE (22 subcategories, 4 categories)

Identified risks are assessed, analysed, or tracked. Uses quantitative, qualitative and mixed methods to evaluate AI systems: Methods & Metrics (MR1), Trustworthy Characteristics Evaluation (MR2 - the largest category), Risk Tracking (MR3), and Measurement Efficacy (MR4). TEVV - Test, Evaluation, Verification and Validation - sits here.

MANAGE (13 subcategories, 4 categories)

Risks are prioritised and acted upon based on projected impact. The operational response function: Risk Prioritisation & Response (MG1), Impact Minimisation Strategies (MG2), Third-Party Risk Management (MG3), and Risk Treatment & Communication (MG4). Includes mechanisms to deactivate AI systems that behave inconsistently with intended use.
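The Core's structure above can be held as plain data, and the counts reconcile to the totals quoted at the top of the page (4 functions, 19 categories, 72 subcategories). A sketch using the figures from the four sections above:

```python
# The AI RMF Core as data: per-function category and subcategory counts,
# taken from the section descriptions above.
core = {
    "GOVERN":  {"categories": 6, "subcategories": 19},
    "MAP":     {"categories": 5, "subcategories": 18},
    "MEASURE": {"categories": 4, "subcategories": 22},
    "MANAGE":  {"categories": 4, "subcategories": 13},
}

# The totals reconcile: 4 functions, 19 categories, 72 subcategories.
assert len(core) == 4
assert sum(f["categories"] for f in core.values()) == 19
assert sum(f["subcategories"] for f in core.values()) == 72
```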

Trustworthy AI

Seven characteristics that have to be balanced - not chosen between.

The AI RMF defines trustworthy AI through seven socio-technical characteristics. They are not a checklist - they have to be balanced in context, with trade-offs made explicit. Validity and reliability form the foundational condition; the others build on it.

Valid and Reliable

The system performs as intended and generalises appropriately. The foundational condition for every other characteristic.

Safe

The system does not, under defined conditions, lead to a state in which human life, health, property or the environment is endangered.

Secure and Resilient

The system maintains confidentiality, integrity and availability and can withstand unexpected adverse events.

Accountable and Transparent

Information about the system and its outputs is available to individuals interacting with it. Accountability presupposes transparency.

Explainable and Interpretable

Explainability addresses the mechanisms underlying AI operation; interpretability addresses the meaning of outputs in context.

Privacy-Enhanced

The system safeguards human autonomy, identity and dignity through privacy norms and practices.

Fair - with Harmful Bias Managed

The system addresses harmful bias and discrimination. NIST identifies three bias categories that must be considered: systemic, computational/statistical, and human-cognitive.
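A MEASURE-style evaluation of the seven characteristics might be recorded per AI system as below. This is a sketch under stated assumptions: the 0-5 scores and the review threshold are hypothetical, and the framework calls for contextual balancing, not a pass/fail checklist.

```python
# Sketch: recording an evaluation of the seven trustworthy AI
# characteristics for one AI system. Scores (0-5) and the threshold of 3
# are hypothetical assumptions, not part of the framework.
CHARACTERISTICS = [
    "valid_and_reliable",   # foundational: the others build on it
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
]

def review(scores):
    """Flag characteristics scoring below threshold, validity first."""
    flags = [c for c in CHARACTERISTICS if scores.get(c, 0) < 3]
    if "valid_and_reliable" in flags:
        # Validity and reliability gate the rest: remediate first.
        flags.insert(0, "validity gates the others: remediate first")
    return flags

scores = {c: 4 for c in CHARACTERISTICS} | {"privacy_enhanced": 2}
print(review(scores))  # ['privacy_enhanced']
```

Ordering `valid_and_reliable` first mirrors the framework's framing: if the system does not perform as intended, scores on the other six characteristics are moot.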

The AI RMF is not a certification scheme. There is no auditor and no stamp. The discipline comes from doing the work - building honest profiles, measuring trustworthiness against your context, and tracking how AI risk actually evolves in production.

Adopt - Profile - Improve

How CyberHeed handles each phase of your AI RMF adoption

Every phase of AI RMF adoption - from establishing context through continuous improvement - managed in one platform. Outcome-based by design, evidence-backed throughout.

1. Adopt with SmartPrep

An adaptive, AI-guided journey covering every Core function. GOVERN policies and accountability, MAP context and impact, MEASURE trustworthiness characteristics, MANAGE prioritisation and response - each session adapts as the conversation unfolds, follows up on gaps, and captures how your organisation actually develops, deploys or uses AI.

The output is your Current Profile across all 72 subcategories - built from what your team actually does, not what your policies aspire to.

2. Profile with Evidence Validation

Upload evidence against any subcategory. AI reads each document, assesses whether it satisfies the outcome, and tells you specifically what's strong and what's thin. Scored 0-5 with actionable feedback.

Define your Target Profile based on risk tolerance, mission and stakeholder expectations. CyberHeed surfaces the gap between Current and Target Profiles, prioritised by impact and likelihood. AutoMatch maps your existing documentation across AI RMF, ISO 42001 and any other framework you maintain.

3. Improve with Continuous Oversight

AI RMF is built for continuous improvement, not point-in-time assessment. Models drift, data shifts, contexts change. CyberHeed tracks trustworthiness evaluations over time, surfaces emergent risks, and keeps the gap between Current and Target Profiles visible to leadership.

When your AI RMF programme needs to evolve - new use cases, new third-party models, new regulations - your profile evolves with it.

From AI RMF to ISO 42001

AI RMF is the outcome language. ISO 42001 is the certifiable shell.

Many organisations adopt the AI RMF as their outcome-based foundation for AI risk, then layer ISO/IEC 42001 on top when they need a certifiable management system. ISO 42001 itself references NIST AI RMF as the framework that describes AI system roles and lifecycle. The two are complementary, not alternatives.

AI RMF Strengths

Voluntary, non-sector-specific, outcome-based. Strong on the conceptual framing of AI risk, trustworthiness, and impact characterisation. Built around Profiles that organisations construct for their own context. No prescriptive controls, no audit cycle, no compliance burden - just a rigorous way to think about and manage AI risk.

ISO 42001 Strengths

Certifiable by accredited certification bodies. Uses the Harmonized Structure shared with ISO 27001 and ISO 9001 - integrates with existing management systems. Prescribes specific Annex A controls and mandatory documented information that auditors will look for. Provides the external assurance that customers, regulators and procurement teams increasingly expect.

How They Map in CyberHeed

CyberHeed treats AI RMF and ISO 42001 as one unified AI governance programme. Work done for GOVERN under AI RMF maps directly to ISO 42001 Clause 5 and Annex A.2/A.3. MAP feeds the AI risk assessment in Clause 6.1.2 and impact assessment in 6.1.4. MEASURE supports Clause 9 performance evaluation. MANAGE supports Clause 8 operation and Clause 10 improvement. Evidence captured once is used across both - that's the entire point of an agentic, unified GRC platform.
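The crosswalk described above can be held as a simple lookup table. This is an illustrative sketch of the clause references named in this section, not an official NIST or ISO mapping document:

```python
# Illustrative crosswalk from AI RMF functions to the ISO/IEC 42001 clause
# references named above. A sketch for evidence reuse, not an official
# NIST/ISO mapping.
crosswalk = {
    "GOVERN":  ["Clause 5", "Annex A.2", "Annex A.3"],
    "MAP":     ["Clause 6.1.2", "Clause 6.1.4"],
    "MEASURE": ["Clause 9"],
    "MANAGE":  ["Clause 8", "Clause 10"],
}

def iso_targets(function):
    """Where evidence captured under an AI RMF function is reused in ISO 42001."""
    return crosswalk.get(function.upper(), [])

print(iso_targets("map"))  # ['Clause 6.1.2', 'Clause 6.1.4']
```

A table like this is what lets evidence captured once serve both frameworks: each artefact is tagged with an AI RMF subcategory, then resolved to its ISO 42001 destination through the function it belongs to.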

Start your AI RMF journey.

From Current Profile to Target Profile. One platform. AI that does the heavy lifting.

Book a Demo