Making Sure AI Is Ethical
The idea of responsible artificial intelligence (AI) is spreading far and wide across the U.S. Department of Defense and its surrounding ecosystem.
Recent milestones include the new data strategy, the responsible AI memo and the newly approved JADC2 strategy, which has a massive data component. “The DoD is very much accelerating its path,” said Thomas Kenney, chief data officer and director of SOF AI for U.S. Special Operations Command, during day two of the virtual AFCEA/GMU Critical Issues in C4I Symposium. “Our chief data officer at the DoD, David Spirk, is doing herculean work to help the entire DoD move forward,” he added.
“That new data strategy, as we think about data sharing, is absolutely essential because it creates the conditions for success where we can open doors to data we maybe didn’t have access to before or maybe data we didn’t even know existed,” Kenney said.
Responsible AI rests on the ability to explain it, Kenney said. “There is not one single source of responsibility for the deployment of responsible AI. There’s the responsibility of the folks who are doing the coding. There’s the responsibility of the folks who are doing the requirements definition. There’s the responsibility of the folks who are doing the testing and evaluation—making sure that we can explain it, making sure it’s operating right.”
“Just like we would say there’s a chain of custody for ensuring that our software is developed, stable and that we can deliver stable software and it actually works in the field, I think we have to have the same type of mentality for AI. Everywhere along this chain of custody for information and for the code inside of it is the responsibility of all,” Kenney added.
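To make that chain-of-custody mentality concrete, consider a minimal sketch, in Python, of what a custody record for a model might look like. This is purely illustrative: the stage names, team names and record structure are assumptions for the example, not any actual DoD or SOCOM system.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class CustodyEvent:
        # One link in the chain: who was responsible at this stage and what they did.
        role: str      # e.g. "requirements", "coding", "test-and-evaluation"
        actor: str     # hypothetical person or team name
        action: str    # what was done at this stage
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    @dataclass
    class ModelCustodyRecord:
        # Provenance log for one model, so accountability is traceable end to end.
        model_name: str
        events: list = field(default_factory=list)

        def log(self, role, actor, action):
            self.events.append(CustodyEvent(role, actor, action))

    # Each stage Kenney names appends its own accountable entry.
    record = ModelCustodyRecord("example-classifier-v1")
    record.log("requirements", "requirements-team", "defined mission requirements")
    record.log("coding", "dev-team", "trained model on an approved dataset")
    record.log("test-and-evaluation", "t-and-e-team", "verified accuracy and explainability")
    for e in record.events:
        print(f"{e.timestamp} [{e.role}] {e.actor}: {e.action}")

The design point is simply that every stage appends its own accountable entry, so responsibility is traceable from requirements definition through test and evaluation.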
As a data scientist, Melvin Greer, chief data scientist of the Americas for Intel, is extremely encouraged by the increased conversation and debate around transparent, explainable, responsible AI.
“Because as a practitioner it actually becomes one of the most significant things we have to be responsible for in the creation of our data science capabilities. At Intel, we have an AI for good and ethics framework, which we run internally as well as externally,” said Greer.
Greer personally belongs to a number of forums, which government agencies also engage with, that include sociologists, anthropologists, ethicists, and religious and community leaders because, as the saying goes, “If it’s not diverse, it’s not ethical,” Greer said.
Greer pointed out that the very first paragraph of the DoD data strategy says the DoD is going to treat data as a weapons system. “And if that’s the case, then it actually makes it quite clear who is responsible when it comes to the fielding and deployment of AI systems as it relates to their explainability and ethics responsibility. It is the person who owns the weapons system,” he said.
“We do not expect the person who does not own the weapons system to ultimately be the arbiter of whether or not it acts appropriately in the field or it meets a mission capability or if it’s capable of being supported and maintained. That is on the person who owns the weapons system,” added Greer. That’s a pretty important designation.
Other data strategies are coming out from the individual services, including the Army, Air Force and Navy, which have created their own ethics guidelines and policies that roll up to support the overall DoD ethics capability, Greer stated.
“That’s because as a field commander you don’t want to have a lack of trust in an AI system for targeting, tracking, logistics or readiness. If we want these commanders to have trust in the system and to put their ability to meet the mission [on the line] in an asymmetrical battlespace, which includes cyber as well as traditional battle, then we are going to provide them—not just government and Congress but the field, in-theater commanders—a way to trust these systems so that they can use them without any kind of hesitation,” he explained.
“The real impetus of this discussion is focused on what we are doing from a development and deployment [standpoint] and building the systems so that the ultimate persons using them can have trust and be able to understand how they actually work,” said Greer.
That trust aspect also ties into being agile, added Kenney. “You’re not going to give a heavy weapon to an Army private and give them all the ammunition and tell them to go out into the field and start shooting things. There’s an iterative process to form the building blocks of trust for that private to be able to employ that particular weapon,” he said.
That is clearly the approach needed from an AI perspective, Kenney stated. “We can’t just build a massive AI system and then throw it out to the field and expect the commanders to trust and use it. We need to build that trust in over time, demonstrating cumulative capabilities that lead us to the point that we can leverage AI in weapons systems both offensively and defensively,” he said.