AI or Not AI?

May 6, 2015
By Lewis Shepherd

Caution tempers opportunity as experts ponder artificial intelligence.

Artificial intelligence, or AI, has been on my mind recently—and yes, that’s something of a sideways pun. But it’s worth exploring the phrase from another double-entendre standpoint by asking whether the nation's intelligence professionals are paying enough attention to AI.

In the past week I have seen two brand-new movies with AI at their center: the big-budget sequel Avengers: Age of Ultron (I give it one star, for CGI alone), and the more artistically minded Ex Machina (three stars, for its lyrical dialogue expressed in a long-running Turing Test of sorts).

With Hollywood’s help, the uptick in public attention to AI tracks the increasing capabilities of real-world AI systems. And the dystopian plot elements of both Ultron and Ex Machina mirror a heightened sense of impending danger or doom among many of the world’s most advanced thinkers.

Last year Stephen Hawking wrote of the risks of AI, warning it might become the “worst mistake in history. … The development of full artificial intelligence could spell the end of the human race.”

SpaceX and Tesla CEO Elon Musk told a research-oriented group at MIT last fall, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that … With artificial intelligence, we are summoning the demon.” More recently, Musk donated $10 million to help establish a non-profit organization to examine the risks and benefits of AI.

Even Microsoft co-founder Bill Gates, not known for being squeamish in the face of technological advance, echoed their reservations about AI during an online question-and-answer session, writing, “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

At the same time, scholars with longer histories of leading or conducting AI research do not share the Faustian concern. My friend and erstwhile Microsoft colleague Eric Horvitz is one of the world’s leading computer scientists and AI specialists. Horvitz’s Ph.D. in computer science and his M.D. (both from Stanford University) propelled him to explore AI in extraordinary depth during a fascinating career, leading to his recent term as president of the internationally renowned Association for the Advancement of Artificial Intelligence (AAAI).

When Eric recently was awarded the prestigious AAAI Feigenbaum Prize for his contribution to AI research, he thoughtfully disputed the dire warnings of machines taking over: “I fundamentally don’t think that’s going to happen,” he wrote. “I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

At a time when the mission systems of intelligence agencies—and military organizations worldwide—are beginning to incorporate robust AI algorithms and decision-loops into analytical and even operational architectures, it is an important debate to explore.

Fortunately, at next month’s AFCEA Spring Intelligence Symposium, we’ll have Musk on hand to discuss AI, along with the intelligence community’s leading research and development professionals. They will probe the issue in depth in the broader context of intelligence community innovation, before an audience of hundreds of intelligence professionals and technologists. There are very few seats left, but you can still register.

The May 20-21 symposium, held at the headquarters of the National Geospatial-Intelligence Agency, will feature the unveiling of the new classified Science and Technology 2015-2019 Roadmap by Dr. David Honey, director of science and technology, Office of the Director of National Intelligence. Also on the agenda are Dr. Peter Highnam, director, Intelligence Advanced Research Projects Activity, and Glenn Gaffney, CIA deputy director for science and technology, among others.

But I’m particularly looking forward to the day-two AI discussion I’ll be having onstage with Musk, in which I’ll explore in depth some of the issues being debated by Hawking, Horvitz and other leading scientists and futurists. I’ll also be getting his thoughts on other future technologies, but I have a hunch he won’t shy away from a spirited AI discussion.

If you have an AI question you’d like to suggest for my AFCEA session with Musk, slip it into a comment box below. I’m eager to take the topic into much more depth than you’ll see in any summer blockbuster movie!


With a background in government and Silicon Valley, Lewis Shepherd is a leading advisor on innovation, technology and national security based in Washington, D.C.


Share Your Thoughts:

Who is more afraid of AI, the rich or the poor? It seems like there is a division, and that is what I would like to see explored. Whom will AI free, and whom will it pwn?

In Kevin Kelly's classic "Out Of Control", he argues that we must relinquish control to make progress with technology. In these talks on AI, shouldn't a certain degree of chaos be allowed for experimentation's sake?