National A.I. Research Task Force Releases Final Report

The National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report [pdf], a roadmap for standing up a national research infrastructure that would broaden access to the resources essential to artificial intelligence (AI) research and development.

While AI research and development (R&D) in the United States is advancing rapidly, opportunities to pursue cutting-edge AI research and new AI applications are often inaccessible to researchers beyond those at well-resourced companies, organizations, and academic institutions. A NAIRR would change that by providing AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support—fueling greater innovation and advancing AI that serves the public good.

Established by the National AI Initiative Act of 2020, the NAIRR Task Force is a federal advisory committee. Co-chaired by the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF), the Task Force has equal representation from government, academia, and private organizations. Following its launch in June 2021, the Task Force embarked on a rigorous, open process that culminated in this final report. This process included 11 public meetings and two formal requests for information to gather public input.

Workshop: NIST Artificial Intelligence Risk Management Framework

The National Institute of Standards and Technology (NIST) is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).

The Framework will be developed through a consensus-driven, open, transparent, and collaborative process that will include workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI Framework.

The ongoing NIST effort aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended and/or harmful bias, as well as mitigation of harmful uses. The Framework should consider and encompass principles such as transparency, fairness, and accountability during the design, deployment, use, and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies and systems, products, and services.

An initial virtual workshop to enable expert participation from industry, academia, and government will be held October 19–21.