Dana Deasy, the Defense Department’s CIO, speaks at AFCEA’s 2018 Defensive Cyber Operations Symposium in Baltimore. Credit: Michael Carpenter

DOD Formally Adopts Five Ethical Principles for AI

February 24, 2020
Posted by Julianne Simpson

The U.S. Department of Defense has officially adopted a series of ethical principles for the use of artificial intelligence (AI). The adoption follows recommendations the Defense Innovation Board provided to Secretary of Defense Mark T. Esper last October.

These principles will apply to both combat and non-combat functions and assist the U.S. military in upholding legal, ethical and policy commitments in the field of AI, according to the Pentagon.

Department of Defense (DoD) Chief Information Officer Dana Deasy, along with DoD Joint Artificial Intelligence Center (JAIC) Director Lt. Gen. John N.T. Shanahan, USAF, announced the formal adoption of the Pentagon’s AI ethics principles during a live event on Monday.

“Ethics remain at the forefront of everything the department does with AI technology, and our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DOD,” said Deasy.

The department’s AI ethical principles encompass five major areas:

  1. Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment and use of AI capabilities.
  2. Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  5. Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

