It's Time to Take Seriously the Machine Ethics of Autonomous and AI Cyber Systems
The concern of machine ethics and laws spills into the everyday workings of society, not just the domain of defense. Many concepts revolve around the law of armed conflict, societal law, ethical dilemmas, psychological concepts and artificially intelligent cyber systems, as well as the relationships among them. In addition to the delineation of machine ethics guidelines, an ethical life cycle is necessary to account for changes over time in national circumstances and personal beliefs. Just recently, the Defense Innovation Board, which serves as an advisory board to the Pentagon, met and published ethical guidelines for designing and implementing artificially intelligent weapons. The board approved five ethical principles: artificial intelligence (AI) systems in the Defense Department must satisfy the conditions of responsibility, equitability, traceability, reliability and governability.
The first principle is Responsibility. Human beings should exercise appropriate levels of judgment and remain answerable for the development, deployment, use and outcomes of Defense Department AI systems.
The second principle is Equitability. The Defense Department should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that inadvertently cause harm to persons.
The third is Traceability. The Defense Department’s AI engineering discipline should be sufficiently advanced so that technical experts possess an appropriate understanding of the technology, development processes and operational methods of its AI systems—including transparent and auditable methodologies, data sources and design procedure and documentation.
The fourth principle is Reliability. Defense Department AI systems should have an explicit, well-defined domain of use, and the safety, security and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
The fifth is Governability. Defense Department AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and to allow human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
The guidelines serve as an excellent ethical framework for the development and use of AI within the department, but there is more to this issue. What about the ethical guidelines and rules that the autonomous systems themselves will follow?
Ethical beliefs are a critical subject across societies. In the United States, for example, ethics are a pillar of the justice system, and moral reasoning and notions of acceptable action throughout society influence lawmakers. The laws of our society will, in turn, influence the development and intended behavior of artificially intelligent cyber agents. Delving deeper, machine ethics is concerned with autonomous systems operating within the constructs of societal norms, laws and ethical boundaries. It cannot be overstated that these two forms of ethics are related and shaped by each other as well as by several other factors.
It is important to note that ethics are neither a universal nor a strictly defined concept, because they may be influenced by cultures at various levels. Edmond Byrne from University College Cork, Ireland, posits that ethics are influenced by personal morals, law, the ethos of professional peers, organizational culture, societal culture and other professional considerations. From the standpoint of culture alone, it has been demonstrated that individual culture, organizational culture, national culture and international culture are dynamically related and influence each other. The ethical beliefs and laws upheld within a society and exercised throughout everyday life may influence the ethics of autonomous systems within that same country. Of concern is that no two countries will operate under the constructs of the same culture, let alone two organizations within the same country. These differences alone may influence how autonomous systems are programmed and ultimately behave, and still more pieces of the puzzle remain.
In terms of machine ethics for AI cyber agents, several areas of concern require further attention. The first priority is achieving an agreed-upon definition of machine ethics acceptable to policy makers and citizens. The next concerns are how these ethical beliefs are programmed into AI cyber agents, how human biases can be removed from system programming to avoid inappropriate behavior, and who or what is responsible when a system acts inappropriately. Answering these questions requires considerable, in-depth discussion, but from the perspective of defense, it is important to evaluate military teachings.
Ethics are influenced by personal beliefs, professional viewpoints and prior teachings, all of which will influence machine ethics. Within the domain of defense, ethics may be shaped by several bodies of law and teaching. For example, the law of armed conflict (LOAC) is not a definitive list of specific laws but rather a broad set of rules. The four foundational principles of the LOAC are necessity, humanity, proportionality and distinction. Necessity refers to assessing and determining that a target is valid because attacking it satisfies a military need. Humanity refers to avoiding suffering, injury or destruction that is unnecessary to accomplish the objectives of war. Proportionality ensures that only the level of force necessary to achieve an objective is used and no more. Last, distinction requires separating combatants from noncombatants and military objects from protected persons and property.
Considering the LOAC alongside commonly taught military tactics and strategies, a belief system and a set of acceptable norms start to form. Depending on the country in question, teachings and doctrine from Clausewitz, Sun Tzu, Corbett and Jomini, to name a few, are imparted. The lessons are passed down to practitioners in various disciplines, such as military professionals, historians, political scientists and cyber professionals. From these teachings, individuals apply strategies and tactics, such as limited and total war, according to their interpretation of the readings. Ultimately, there may be instances in which interpretations of international law, doctrine, military history and teachings make their way into the programming of autonomous cyber agents. It is important to acknowledge these differences, be aware of their effects and decide upon an agreed framework.
A key area of concern is how artificially intelligent cyber systems should operate while still being considered ethical. Artificially intelligent cyber defense systems are being developed commercially and within academia. Examples include Cylance, Darktrace and the Massachusetts Institute of Technology (MIT) system named AI2. These systems make use of various machine learning algorithms. On the other end of the spectrum, researchers from Endgame and the University of Virginia have developed AI cyber-attack systems that evade detection by defense systems through various means. Early research indicates that the attack algorithms evaded detection roughly 16 percent of the time.
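To make this arms-race dynamic concrete, the sketch below pairs a toy machine learning detector with a naive evasion loop. It is an illustrative assumption only: the features, detector and evasion step are invented for this example and do not represent the actual algorithms used by Cylance, Darktrace, AI2 or the Endgame and University of Virginia research.

# Hypothetical sketch: a toy learned detector and a crude evasion loop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.3, size=(500, 4))      # synthetic "benign" feature vectors
malicious = rng.normal(1.0, 0.3, size=(500, 4))   # synthetic "malicious" feature vectors
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

detector = LogisticRegression().fit(X, y)          # stand-in defense system

def evade(sample, model, step=0.05, max_steps=200):
    """Shift a malicious sample against the model's weight vector until the
    detector labels it benign -- a crude stand-in for real evasion techniques."""
    x = sample.copy()
    direction = -model.coef_[0] / np.linalg.norm(model.coef_[0])
    for _ in range(max_steps):
        if model.predict(x.reshape(1, -1))[0] == 0:
            return True
        x += step * direction
    return False

evaded = sum(evade(m, detector) for m in malicious[:100])
print(f"{evaded} of 100 malicious samples evaded the toy detector")

In practice both sides iterate: samples that evade detection can be fed back into the defender's training data, and the evasion strategy adapts in turn, which is the dynamic that raises the ethical questions discussed next.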
These systems demonstrate the potential rise of an AI cyber arms race. From the standpoint of defense, such systems may combat each other, and their ethical implementation may have significant effects on the outcome, effects that may be realized when systems from differing countries compete with each other. If artificially intelligent cyber systems are conceptualized as robots, then Isaac Asimov’s Laws of Robotics may be examined. These laws are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; and (3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. Asimov later added another law to which the original three became subordinate: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws would be difficult to apply to some military situations. Laws developed for AI cyber agents must be examined now, and the importance of this subject cannot be overlooked. A difficult scenario arises when military requirements, AI cyber systems and machine ethics are considered together, especially when victory must be achieved within the bounds of acceptable ethics and the laws of war. An autonomous cyber defense agent may have guidelines similar to Asimov’s Laws, such as: (1) An autonomous cyber defense agent must take all actions to ensure that no injury occurs to a human being and must not, through inaction, allow a human being to come to harm; (2) An autonomous cyber defense agent must obey orders given to it by human beings except where such orders would conflict with the First Law; (3) An autonomous cyber defense agent must protect all critical services, as long as such protection does not conflict with the First or Second Law; (4) An autonomous cyber defense agent must protect its own existence, as long as such protection does not conflict with the First, Second or Third Law; and (5) An autonomous cyber defense agent may not harm humanity, or, by inaction, allow humanity to come to harm. A simple sketch of how such a prioritized hierarchy might be encoded follows.
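As an illustration only, the sketch below encodes these five hypothetical laws as ordered constraints that a defense agent would check before acting. The action attributes, the priority ordering and the example dilemma are assumptions made for this sketch, not a fielded design.

# Hypothetical encoding of the five proposed laws as a priority-ordered checklist;
# a real system would need far richer models of harm, intent and consequence.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    endangers_human: bool = False            # would violate Law 1
    violates_human_order: bool = False       # would violate Law 2
    degrades_critical_service: bool = False  # would violate Law 3
    destroys_agent: bool = False             # would violate Law 4
    harms_humanity: bool = False             # would violate Law 5

# Laws in priority order; each test returns True when the action is permissible.
LAWS = [
    ("Law 1: protect human life", lambda a: not a.endangers_human),
    ("Law 2: obey human orders", lambda a: not a.violates_human_order),
    ("Law 3: protect critical services", lambda a: not a.degrades_critical_service),
    ("Law 4: protect own existence", lambda a: not a.destroys_agent),
    ("Law 5: protect humanity", lambda a: not a.harms_humanity),
]

def first_violation(action):
    """Return the highest-priority law the action violates, or None if permitted."""
    for label, permitted in LAWS:
        if not permitted(action):
            return label
    return None

# Invented example: isolating an infected network segment keeps a critical
# service running but may endanger one person on a connected device.
dilemma = ProposedAction("isolate infected segment", endangers_human=True)
print(first_violation(dilemma))  # -> "Law 1: protect human life"

A strict priority ordering like this resolves conflicts mechanically, without weighing the severity or duration of the harm on either side, which is precisely the kind of gap the dilemmas below expose.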
These five laws are not intended to be fully comprehensive, but they highlight interesting dilemmas. What should an autonomous cyber defense system do when it must weigh the importance of a single life against maintaining a critical service? What if an attack would leave a critical service disabled for a prolonged period? These questions are important to examine because the inability to perform specific tasks that satisfy industry requirements may result in significant economic damage, which in turn may affect national security. Once again, the purpose of presenting these laws is to demonstrate the need to further examine autonomous cyber agents. Additionally, offensive and defensive agents are likely to require different guidelines, which must also be examined.
Michael Hanna is an information warfare officer in the U.S. Navy. The views expressed here are solely those of the author and do not necessarily reflect those of the Department of the Navy, Defense Department or the U.S. government.