Incoming: There Are Two Sides to the AI Coin
Much discussion is underway on artificial intelligence (AI) and what it means for society. Debates rage over the ethics of decisions being made without a human in the process.
Arguments continue about the legality of machine-made choices and the consequences in a world where data is delivered, debated and decided at machine speed. There is talk about slowing down the technology and even some conversations about legislation to limit the development and application of AI.
Personally, I don’t like the name artificial intelligence. The definition of the word “artificial”—humanly contrived, lacking in natural or spontaneous quality, imitation, sham—doesn’t conjure up any favorable images and has rather negative overtones. You can just see machines taking over and a dark world being the outcome.
On the other hand, the term augmented intelligence doesn’t sound so bad. The definition of augmented—made greater, larger or more complete, enhanced—implies beneficial outcomes, an assist to the human master. It is a completely different mindset. I don’t mind being augmented. It isn’t something new. Augmented intelligence today is just like augmented reality: it adds an overlay of information that enriches an experience and allows for better decisions.
Augmented intelligence will help me review and draw value from the vast amounts of data about to be unleashed by the 5G revolution. Augmented intelligence is a tool that will handle tasks, such as searching through huge amounts of imagery, at speeds the human mind cannot. This good AI will allow me to focus only on the most valuable imagery to help find a lost Alzheimer’s patient, to better direct medical treatment and to improve military targeting to spare lives. Good AI will simply present me with options, enhance my ability to make good judgments and leave me the final decisions. I won’t lose control.
Bad AI will leave the human behind and deliver a cold, calculated answer. Bad AI will allow corrupt governments to find the resistance, let doctors produce genetically superior humans, prompt more lethal outcomes and ensure higher kill ratios through improved targeting. Bad AI will eliminate human accountability and allow the natural forces of evil to prevail. Bad AI has a mind of its own, and it’s more focused on Mr. Hyde than Dr. Jekyll.
Obviously, there is a simple solution: We will change the name, ban artificial intelligence and allow only the benevolent augmented intelligence. If only it were that easy. We need to have a debate about AI, but it needs to recognize that AI is coming and will be used for good and bad. We are not going to be able to stop the technology, and we are not going to be able to legislate where AI can be used.
As with every technology before it and, I suspect, after it, AI will be used to gain individual, group and national advantage. AI is, however, the only way we can obtain full value from the magnitude of data that is about to be available as 5G fuels the Internet of Things explosion. At the moment, it may be hard to imagine the sheer quantity of information that will be generated, exposed, collected and analyzed thanks to the coming 5G ecosystem. Sensors will be everywhere, affecting every part of life. Hand-held devices will have more linked capability, and we will need AI to help protect privacy and control data rights. Without AI, the future data-driven economy and society probably will not function as well as today’s does.
AI will be an essential part of the new toolkit that will create wealth and jobs, enable deeper exploration of this world and others, advance medical technology and practice, and greatly improve the quality of life. But like technology before it, AI will be used for nefarious purposes. In the end, it is we the humans who will determine whether its overall effect is more positive than negative.
Some of my advisers and friends have told me this is stating the obvious and doesn’t really advance the AI debate. I think they are probably right. I also think that sometimes you need to state the obvious because it may not be obvious to everyone.
As always, I welcome feedback and counterbattery at signalnews@afcea.org.
Terry Halvorsen, chief information officer (CIO) and an executive vice president with Samsung Electronics, is the former U.S. Defense Department CIO. He also has served as the Department of the Navy CIO.