Artificial Intelligence Is a Technology That Divides
Experts cannot yet agree on whether it is a helper or a hazard, says guest blogger David Meadows.
In December 2014, Stephen Hawking, the renowned theoretical physicist, warned the world that true artificial intelligence (AI) could mean the death of mankind. Well, that got my attention. His comments stirred up a maelstrom of support. Small wonder: the AI debate has been going on at least since the late Isaac Asimov wrote his Robot stories.
Hawking’s statement complemented a blast from Elon Musk, Tesla CEO and a strong advocate of driverless cars, who two months earlier, at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium, responded to a discussion about AI by saying, “With artificial intelligence, we are summoning the demon.”
The Washington Post reported on January 29 that Bill Gates, questioned about AI and Hawking’s position during a Reddit “Ask Me Anything” session, said he shared the concerns being expressed by Elon Musk and others and could not understand why more people weren’t worried.
Yet not everyone agrees with the “sky is falling” perspective voiced by some of today's smartest businessmen and scientists. Eric Horvitz, the director of Microsoft Research Lab, disagreed with Gates during the same Reddit session, pointing out that Microsoft had “over a quarter of all attention and resources” focused on AI.
Some say that if AI had been installed in the Germanwings airliner believed to have been intentionally crashed on March 24, it could have seized control from the rogue pilot and saved the plane. I think they’re probably right.
Jeff Hawkins, inventor of the PalmPilot, is known for his research on technology that mimics the human brain, one roadmap toward AI. He tends to shrug off the alarm, foreseeing no threat to mankind.
But companies and governments are pushing ahead with AI, and sometimes that makes me appreciate Asimov's foresight in his Three Laws of Robotics, which could be tweaked into Four Laws of Artificial Intelligence:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
4. A robot may not harm humanity or, by inaction, allow humanity to come to harm.
Asimov added the fourth law, which he called the Zeroth Law, after the first three, and he considered it the most important of the four.
Google has been buying companies that have had great success in robotics, such as Boston Dynamics, a leader in building robots, which it purchased in 2013. According to CNN Money, Boston Dynamics had done more than $140 million in contracts with the Defense Department by the time Google acquired it.
You have to watch the Boston Dynamics YouTube video titled “Introducing Spot.” I have watched it several times, and each time I wonder what Spot could do if guided by AI. It is a fascinating glimpse into the world of robotics, which will figure prominently in the age of AI.
In my opinion, the Defense Advanced Research Projects Agency (DARPA) is easily one of the top laboratories for technological achievements that have transformed our world. DARPA was reported to be working on AI as early as 2013, developing a machine that could mimic the human brain while concurrently conducting research to build “robots of war.” Later that same year, DARPA hired two renowned scientists who are working on the AI challenge today.
So AI isn't something that started in 2015. Government and business have been working on it since at least 2013, and its traces reach back much further.
Am I concerned? I don’t know. I do suspect that when you add the universal connectivity of the Internet of Things, with the host of information technology complexities and the unique, multiplying vulnerabilities that will accompany it, you give a true AI the capability to reach globally. And once that horse is out of the starting gate, there is no stopping it.
Regardless of whether you are concerned about AI, you can rest peacefully knowing that the Skynet of the “Terminator” series will not be the technological vehicle for AI. China has already claimed the Skynet name for its campaign against corruption.
After writing this, I think I will expand my garden this year.
David E. Meadows, MBA, MS, is the author of "The Sixth Fleet."