Workshop: NIST Artificial Intelligence Risk Management Framework

The National Institute of Standards and Technology (NIST) is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).

The Framework will be developed through a consensus-driven, open, transparent, and collaborative process that will include workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI Framework.

The ongoing NIST effort aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias as well as of harmful uses. The Framework should consider and encompass principles such as transparency, fairness, and accountability during the design, deployment, use, and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies, systems, products, and services.

An initial virtual workshop to enable expert participation from industry, academia, and government will be held on October 19-21.