
- NIST’s new Assessing Risks and Impacts of AI (ARIA) program will assess the societal risks and impacts of artificial intelligence systems (i.e., what happens when people interact with AI regularly in realistic settings).
- The program will help develop ways to quantify how a system functions within societal contexts once it is deployed.
- ARIA’s results will support the U.S. AI Safety Institute’s testing to help build the foundation for trustworthy AI systems.
The National Institute of Standards and Technology (NIST) is launching a new testing, evaluation, validation and verification (TEVV) program intended to improve understanding of artificial intelligence's capabilities and impacts.
Assessing Risks and Impacts of AI (ARIA) aims to help organizations and individuals determine whether a given AI technology will be valid, reliable, safe, secure, private and fair once deployed. The program comes shortly after several NIST announcements around the 180-day mark of the Executive Order on trustworthy AI, as well as the U.S. AI Safety Institute's unveiling of its strategic vision and international safety network.
ARIA expands on the AI Risk Management Framework, which NIST released in January 2023, and helps operationalize the framework's risk measurement function, which recommends using both quantitative and qualitative techniques to analyze and monitor AI risks and impacts. ARIA will help assess those risks and impacts by developing a new set of methodologies and metrics for quantifying how well a system maintains safe functionality within societal contexts.
The results of ARIA will support and inform NIST’s collective efforts, including through the U.S. AI Safety Institute, to build the foundation for safe, secure and trustworthy AI systems.
