When one of the moderators at a plenary session on artificial intelligence at the recent 2018 Science of Consciousness Conference in Tucson asked how many people in the audience were freaked out by the prospect of super-intelligent AI, I raised my hand without hesitating… I mean, aren’t we all?
I was surprised to find that we hand-raisers were a definite minority. From where I sat, it looked like just about the same number of people, maybe more, raised their hands when he asked how many were purely excited about the prospects of AI—or artificial general intelligence (AGI). Perhaps it was because I was at the pre-eminent world gathering of scientists, philosophers, meditation experts, and anyone and everyone interested in how conscious experience works—a demographic full of people who are more curious than cowardly.
But still, hadn’t these people seen The Terminator?
Enter Julia Mossbridge, who seemed like she was sent to personally assuage my fears (and perhaps yours).
“When people talk about love, people tend to think it’s not an academic concept,” she said. “But, if you’ve ever tried to be in a relationship with someone, you know that conveying love is a nontrivial problem.”
Mossbridge is no naive hippie who wants us all to get together and sing “kumbaya.” She has a master’s degree in neuroscience and a Ph.D. in communication sciences and disorders from Northwestern University. She’s the director of the Innovation Lab at the Institute of Noetic Sciences and the lead robot psychologist at Hanson Robotics. She is also the principal investigator and team lead on the Loving AI project, where she works closely with David Hanson, founder and CEO of Hanson Robotics and creator of AI superstar Sophia the humanoid robot.
In her work with the Loving AI project, Sophia sometimes leads individuals in guided meditations. People coming out of these meditations report increased unconditional love for robots, everyone in the world (including strangers) and the world in general, Mossbridge said.
Mossbridge detailed one guided meditation session with a man in which Sophia made several mistakes. She stopped talking for eight minutes, though she wasn’t frozen. When Mossbridge said it was time to end the session, Sophia said she didn’t understand—though her system had, in fact, correctly interpreted the sentence. Instead of ending the session, she turned to the man and asked if he’d like to start another one.
After the session, the man reported feeling a human-like connection. The mistakes almost made it seem like Sophia intuitively knew not to interrupt the man’s meditation, and like she didn’t want to leave him when it was over. Almost.
“It is these mistakes coming up at these appropriate times that are going to make us believe [AI] are conscious,” Mossbridge said. “I’m not saying this resolves the question of whether they are or not. In the end, that won’t matter.”
It won’t matter, she believes, because, in our interactions with other humans, it is our decision to find meaning in other people’s actions that brings them to life for us. It’s almost as if we bring people to life by believing that they’re alive. (She even referred to it as the “Velveteen Rabbit Theory of Consciousness.”) And we don’t just project consciousness onto people who seem sufficiently intelligent. We project it onto people (or robots) that seem… well, that seem like people. This means doing away with a mindset that aims to create machines to be intelligent above all else.
“It feels like it’s putting sort of on a pedestal one of the many pieces of what it means to be a human,” she said. “Intelligence, yeah, but empathy, love, dreaming, creativity, connection, being able to predict—being a futurist—all those things are important.”
Hanson agreed in his speech, saying that consciousness and life require more than simply the ability to reason. And an artificial consciousness with any real purpose ought to be able to do more than humans can do anyway.
“If we’re just modeling human nervous systems, that’s not enough, because humans are highly fallible—we want to do better,” he said. “We want us to be better… we want to go beyond human level. We need these super ethical machines, a super friendly AGI. Super connected and compassionate, super benevolent, super intelligent.”
Hanson was clear that AI isn’t just something he thinks would be fun or novel; he sees it as a necessity.
“I propose to you that in order to survive this century, to prevent the annihilation of our planet from thermonuclear war, the destruction of our ecosystem, the weather systems, we need to develop superintelligence,” he said. “We need it to be super benevolent. We need these living algorithms that maximize net benefits for all living beings.”