The National Institute of Standards and Technology (NIST) has developed an AI Risk Management Framework (AI RMF) that gives organizations a structure for identifying, assessing, and mitigating the risks associated with the use of Artificial Intelligence (AI). The framework offers guidance on stakeholder roles and responsibilities, the risk management process, and security and privacy considerations. NIST issued a second draft of the framework for public comment, with the comment period closing on September 29, 2022, and anticipates releasing a final version in early 2023.
The AI Risk Management Framework (AI RMF) consists of two parts. Part 1 provides context on the motivations behind the framework and explains why managing AI risk is so urgent today. It also defines what constitutes trustworthy and responsible AI: systems that are reliable, unbiased, secure, transparent, and explainable.
Part 2 of the framework, referred to as the AI RMF Core, focuses on helping organizations manage the development and rollout of an AI system in practice. It breaks down into four functions:
- The Govern function cultivates and implements a culture of risk management around the development of AI systems. It is meant to provide a structure that aligns an organization's AI ambitions with its policies and strategic priorities.
- The Map function establishes the context to frame risks related to an AI system and enables risk prevention, recognition of system limitations, and assessment of impacts to inform a decision about whether the organization should design and develop an AI system at all.
- The Measure function uses knowledge from the Map function and applies quantitative, qualitative, and mixed-method tools, techniques, and methodologies to analyze and monitor AI risk and related impacts. It recommends independent review to improve the effectiveness of testing and mitigate internal biases.
- The Manage function entails allocating risk management resources to mapped and measured risks on a regular basis to decrease the likelihood of system failures and negative impacts.
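To make the relationship between the four functions concrete, the loop they describe can be sketched as a toy risk register. This is purely illustrative: the class names, fields, and 1–5 scoring scales below are hypothetical and are not part of the NIST framework, which prescribes outcomes rather than any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str      # Map: what the risk is
    context: str          # Map: where and how the system is used
    likelihood: int = 0   # Measure: 1-5 (hypothetical scale)
    impact: int = 0       # Measure: 1-5 (hypothetical scale)
    mitigation: str = ""  # Manage: planned response, if any

    def score(self) -> int:
        # Simple quantitative rollup of the measured values
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    # Govern: an organization-level policy, here reduced to a risk tolerance
    tolerance: int
    risks: list[Risk] = field(default_factory=list)

    def map_risk(self, description: str, context: str) -> Risk:
        # Map: record a risk in its context before any scoring
        risk = Risk(description, context)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, likelihood: int, impact: int) -> None:
        # Measure: attach quantitative estimates to a mapped risk
        risk.likelihood, risk.impact = likelihood, impact

    def manage(self) -> list[Risk]:
        # Manage: flag risks exceeding the governed tolerance so
        # mitigation resources can be allocated to them
        return [r for r in self.risks if r.score() > self.tolerance]
```

For example, an organization with a tolerance of 6 that maps a "biased training data" risk and measures it at likelihood 3 and impact 4 (score 12) would see that risk surfaced by `manage()` for mitigation.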