Disruptive by Design: Siri, Tell Me a Joke. No, Not That One.
Could machine learning help the voice-activated assistant find its comedic chops?
Ask Siri to tell you a joke and Apple’s virtual assistant usually bombs. The voice-controlled system’s material is limited and profoundly mediocre. It’s not Siri’s fault. That is what the technology knows.
According to a knowledgeable friend, machines operate in specific ways. They receive inputs. They process those inputs. They deliver outputs. Of course, I argued the point. Not because I believed he was wrong, but because I held loftier notions about the limits of machines and what artificial intelligence (AI) could become.
My friend was not wrong. That is what machines do. For that matter, that is what all living beings do. We take in external data and stimuli, process them and react as we see fit, based on previous experience. The processing of inputs is what expands intelligence. Machines, on the other hand, process within specified parameters determined by humans. For a machine, output is limited by programming and processing power.
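My friend's input-process-output model fits in a few lines of Python. This is a toy sketch, not any real assistant's code; the responses and function name are invented for illustration:

```python
# A machine's behavior reduced to its simplest form: inputs are
# mapped to outputs through rules fixed in advance by a programmer.
RESPONSES = {
    "hello": "Hello! How can I help?",
    "weather": "I cannot check the weather yet.",
}

def respond(user_input: str) -> str:
    """Process an input strictly within pre-set parameters."""
    # The machine never improvises; anything outside its programmed
    # parameters falls through to a canned default.
    return RESPONSES.get(user_input.lower(), "I don't understand.")
```

However clever the lookup, the machine's entire universe of outputs was written down before the first input ever arrived.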
What is the upper limit of what a machine can learn? We do not yet know, but we do know that today, it takes repetition in the hundreds of thousands for artificial neural networks to learn to recognize something for themselves.
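That need for repetition shows up even in the tiniest learner. The sketch below trains a single artificial neuron, a perceptron, to recognize the logical AND function; the weights, learning rate and epoch count are arbitrary choices for illustration:

```python
# A single artificial neuron learning the logical AND function.
# Even this toy problem takes many repeated passes over the data;
# real networks need repetitions in the hundreds of thousands.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0
rate = 0.1

for _ in range(1000):  # repetition is the engine of learning
    for (x1, x2), target in data:
        out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        err = target - out
        # Nudge the weights a little toward the right answer.
        w1 += rate * err * x1
        w2 += rate * err * x2
        bias += rate * err

def predict(x1: int, x2: int) -> int:
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
```

One pass through the four examples teaches it almost nothing; only the accumulation of small corrections over many repetitions produces recognition.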
One day, machines may exceed the limits of human intelligence and achieve "superintelligence," far surpassing any human in virtually all fields, from the sciences to philosophy. But what really will matter is the issue of sentience. It is important to distinguish between superintelligence and sentience. Sentience is feeling and implies conscious experience.
Artificial neural networks cannot produce human feelings; they lack sentience. I can ask Siri to tell me a joke thousands of times, and it simply will cycle through the same material over and over. Now, consider superintelligence or an advanced form of AI. Does the potential exist for a machine to really learn how to tell a joke?
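That joke-cycling behavior is trivial to imitate. A minimal sketch, assuming nothing about Apple's actual implementation, with made-up jokes standing in for Siri's repertoire:

```python
from itertools import cycle

# Canned humor, Siri-style: a fixed list, looped forever. No matter
# how many times you ask, nothing new is ever produced, because the
# output is bounded by what was programmed in.
JOKES = cycle([
    "Why did the robot go on vacation? It needed to recharge.",
    "I would tell you a UDP joke, but you might not get it.",
    "There are 10 kinds of people: those who know binary and those who don't.",
])

def tell_joke() -> str:
    return next(JOKES)
```

Ask a fourth time and you are back at joke number one. Original humor would require stepping outside that list, which is exactly what today's systems cannot do.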
The answer depends on whether we think these machines will ever reach a stage where they will do more than they are told—whether they will operate outside of and against their programmed parameters. Many scientists and philosophers hold pessimistic views on AI’s progression, perhaps driven by a growing fear that advanced AI poses an existential threat to humanity. The concept that AI could improve itself more quickly than humans, and therefore threaten the human race, has existed since the days of famed English mathematician Alan Turing in the 1930s.
There are many more unanswered questions. Can a machine think? A superintelligence would be designed to align with human needs. However, even if that alignment is part of every advanced AI’s core code, would it be able to revise its own programming? Is a code of ethics needed for a superintelligence?
Questions such as these won’t be pertinent for many years to come. What is relevant is how we use AI now and how quickly it has become a part of everyday life. Siri is a primitive example, but AI is all around you. In your hand, you have Siri, Google Now or Cortana. According to Microsoft, Cortana “continually learns about its user” and eventually will anticipate a user’s every need. Video games have long used AI, and products such as Amazon’s personal assistant Alexa and Nest Labs’ family of programmable, self-learning, sensor-driven, Wi-Fi-enabled thermostats and smoke detectors are common household additions. AI programs now write simple news articles for a number of media agencies, and soon we’ll be chauffeured in self-driving cars that will learn from experience, the same way humans do. IBM has Watson, Google has sweeping AI initiatives and the federal government wants AI expertise in development contracts.
Autonomy and automation are today’s buzzwords. There is a push to take humans “out of the loop” wherever possible and practical. The Defense Department uses autonomous unmanned vehicles for surveillance. Its progressive ideas for future wars are reminiscent of science fiction. And this development again raises the question: Is a code of ethics needed?
These early examples also pose a fundamental question about the upper limits of machine learning. Is the artificial intelligence ceiling a sentient machine? Can a machine tell an original joke, or is it limited to repeating what it knows? Consider Lt. Cmdr. Data from Star Trek, arguably one of the more advanced forms of benevolent AI represented in science fiction. Occasionally, he recognizes that someone is telling a joke, usually from context clues and reactions, but fails to understand why it is funny.
Just maybe, that is when we will know we are dealing with sentient AI—when machines are genuinely and organically funny. The last bastion of human supremacy just might be humor.
Alisha F. Kelly is director of business development at Trace Systems, a mission-focused technology company serving the Defense Department. She is president of the Young AFCEANs for the Northern Virginia Chapter and received a Distinguished Young AFCEAN Award for 2016. The views expressed are hers alone.