National A.I. Research Task Force Releases Final Report

The National Artificial Intelligence Research Resource (NAIRR) Task Force released its final report [pdf], a roadmap for standing up a national research infrastructure that would broaden access to the resources essential to artificial intelligence (AI) research and development.

While AI research and development (R&D) in the United States is advancing rapidly, opportunities to pursue cutting-edge AI research and new AI applications are often inaccessible to researchers beyond those at well-resourced companies, organizations, and academic institutions. A NAIRR would change that by providing AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support—fueling greater innovation and advancing AI that serves the public good.

Established by the National AI Initiative Act of 2020, the NAIRR Task Force is a federal advisory committee. Co-chaired by the White House Office of Science and Technology Policy (OSTP) and the National Science Foundation (NSF), the Task Force has equal representation from government, academia, and private organizations. Following its launch in June 2021, the Task Force embarked on a rigorous, open process that culminated in this final report. This process included 11 public meetings and two formal requests for information to gather public input.

Workshop: NIST Artificial Intelligence Risk Management Framework

The National Institute of Standards and Technology (NIST) is developing a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI).

The Framework will be developed through a consensus-driven, open, transparent, and collaborative process that will include workshops and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI Framework.

The ongoing NIST effort aims to foster the development of innovative approaches to address characteristics of trustworthiness, including accuracy, explainability and interpretability, reliability, privacy, robustness, safety, security (resilience), and mitigation of unintended or harmful bias and harmful uses. The Framework should consider and encompass principles such as transparency, fairness, and accountability during the design, deployment, use, and evaluation of AI technologies and systems. These characteristics and principles are generally considered to contribute to the trustworthiness of AI technologies, systems, products, and services.

An initial virtual workshop to enable expert participation from industry, academia, and government will be held on October 19-21.

U.S. Not Ready to Defend or Compete in the A.I. Era, Commission Concludes

The Commission recommends urgent, comprehensive, whole-of-nation action.


The National Security Commission on Artificial Intelligence (NSCAI) issued its final report on Monday, March 1st, 2021, framed by the great power competition between the United States and its allies on one side and China on the other. Commissioners called on the United States to drastically reorient government functions, including its national security and technology apparatus, to meet the coming national security challenges and opportunities of A.I. The report is broken into two parts: Part I, “Defending America in the AI Era,” and Part II, “Winning the Technology Competition.” The two parts are interlinked, and the commissioners emphasized that the United States stands to lose its technical advantage over geopolitical rivals within the next 10 years.

The 900-page report is a hybrid of national security policy and technology competitiveness recommendations. Part I outlines what the United States must do to defend against the spectrum of AI-related threats from state and non-state actors, and it recommends how the U.S. government can responsibly use AI technologies to protect the American people and our interests. Part II outlines AI’s role in a broader technology competition, addresses critical elements of that competition, and recommends actions the government must take to promote AI innovation, improve national competitiveness, and protect critical U.S. advantages.

Part I recommendations:

  • Defend against emerging AI-enabled threats to America’s free and open society.
  • Prepare for future warfare.
  • Manage risks associated with AI-enabled and autonomous weapons.
  • Transform national intelligence.
  • Scale up digital talent in government.
  • Establish justified confidence in AI systems.
  • Present a democratic model of AI use for national security.

Part II recommendations:

  • Organize with a White House–led strategy for technology competition.
  • Win the global talent competition.
  • Accelerate AI innovation at home.
  • Build a resilient domestic base for designing and fabricating microelectronics.
  • Protect America’s technology advantages.
  • Build a favorable international technology order.
  • Win the associated technologies competitions.

Executive Order Outlining U.S. A.I. Strategy to be Unveiled

Several media outlets are reporting that the Trump Administration will issue an executive order on artificial intelligence strategy as soon as today, Monday, February 11th. According to the New York Times, the order does not set aside funds for A.I. research and development, and there are few details on how any new policies will be put into effect. More information on the new order will be posted as it becomes available.

DARPA to Host AI Colloquium

The DARPA AI Next campaign is a multi-year, upwards-of-$2-billion investment in new and existing programs to create the third wave of AI technologies. To raise awareness of this effort, DARPA is hosting an Artificial Intelligence Colloquium (AIC) from March 6-7, 2019 in Alexandria, Virginia. This event seeks to bring together the DoD research community and defense stakeholders to learn more about DARPA’s current and emerging AI programs, as well as discover how the myriad technologies in development could apply to their diverse missions.

During the two-day conference, attendees will hear from current DARPA researchers and program managers as they discuss work that is advancing the fundamentals of AI, as well as those programs that are exploring the technology’s application to defense-relevant challenges – from cyber defense and software engineering to aviation and spectrum management.