The National Institute of Standards and Technology (NIST) is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).
The Framework will be developed through a consensus-driven, open, transparent, and collaborative process that will include workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.
The ongoing NIST effort aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of harmful bias and harmful uses. The Framework should consider and encompass principles such as transparency, fairness, and accountability during the design, deployment, use, and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies, systems, products, and services.
An initial virtual workshop to enable expert participation from industry, academia, and government will be held on October 19-21.
The Commission recommends urgent, comprehensive, whole-of-nation action.
The National Security Commission on Artificial Intelligence (NSCAI) issued its final report on Monday, March 1, 2021, framed by the great-power competition between the United States and its allies on one side and China on the other. Commissioners called on the United States to drastically reorient government functions, including its national security and technology apparatus, to meet the coming national security challenges and opportunities of AI. The report is broken into two parts: Part I, “Defending America in the AI Era,” and Part II, “Winning the Technology Competition.” The two parts are interlinked, and the commissioners emphasized that the United States stands to lose its technical advantage over geopolitical rivals within the next 10 years.
The 900-page report is a hybrid of national security policy and technology competitiveness recommendations. Part I outlines what the United States must do to defend against the spectrum of AI-related threats from state and non-state actors, and it recommends how the U.S. government can responsibly use AI technologies to protect the American people and U.S. interests. Part II outlines AI’s role in a broader technology competition, addresses critical elements of that competition, and recommends actions the government must take to promote AI innovation, improve national competitiveness, and protect critical U.S. advantages.
Part I recommendations:
Defend against emerging AI-enabled threats to America’s free and open society.
Prepare for future warfare.
Manage risks associated with AI-enabled and autonomous weapons.
Transform national intelligence.
Scale up digital talent in government.
Establish justified confidence in AI systems.
Present a democratic model of AI use for national security.
Part II recommendations:
Organize with a White House–led strategy for technology competition.
Win the global talent competition.
Accelerate AI innovation at home.
Build a resilient domestic base for designing and fabricating microelectronics.
Several media outlets are reporting that the Trump Administration will issue an executive order on artificial intelligence strategy as soon as today, Monday, February 11. According to the New York Times, the order does not set aside funds for AI research and development, and there are few details on how any new policies will be put into effect. More information on the new order will be posted as it becomes available.
The DARPA AI Next campaign is a multi-year, upwards of $2 billion investment in new and existing programs to create the third wave of AI technologies. To raise awareness of this effort, DARPA is hosting an Artificial Intelligence Colloquium (AIC) on March 6-7, 2019, in Alexandria, Virginia. This event seeks to bring together the DoD research community and defense stakeholders to learn more about DARPA’s current and emerging AI programs, as well as to discover how the myriad technologies in development could apply to their diverse missions.
During the two-day conference, attendees will hear from current DARPA researchers and program managers as they discuss work advancing the fundamentals of AI, as well as programs exploring the technology’s application to defense-relevant challenges, from cyber defense and software engineering to aviation and spectrum management.