The capabilities of Google’s next-generation Assistant are technically stunning, at least in the stage demo shown to the public Tuesday. The people behind this are brilliant and they’ve apparently solved some very difficult software problems. The result is software that can make a call for you and pretend to be human, as well as write your emails based on what it thinks you were going to say.
But as technically impressive as the demonstrations were, I’m left very uncomfortable with the vision that Google wants to sell to us. In a world that already seems colder and more sterile every day, the last thing I want is software that suggests what I ought to write and misleads people into believing it’s a human calling.
What Google is offering is creepy and disturbing. It’s technically impressive, but I absolutely, positively do not want this technology.
Google is a very smart company, but over the last decade I've been increasingly disturbed by the disconnect between their people's technical smarts and their lack of understanding of how real people need to interact with technology. Their executives have been disdainful of privacy, and no wonder: their very existence and profitability are based on selling information about us.
These very smart geeks are selling a vision of a future where I don’t want to live.
Take the new feature called Google Duplex. You tell your Google Assistant that you want an appointment or a reservation, and the software makes the call, pretending to be a human assistant doing the chore for you. Listen to the demo, which I've embedded below. You can tell the engineers have gone to great lengths to insert "human-sounding" pauses and "hmmms" to fool the person on the other end into believing it's a real human.
It sounds real, and the demos work well. (I question whether the software will be able to successfully navigate many real-world calls, once humans start throwing unpredictable things at it.) But even if it works perfectly, do you really think it's ethical to fool people into thinking they're talking to humans when it's really software?
When a recorded voice calls us today, it’s obvious that the message is canned. Even if it’s a real recorded human voice, it becomes obvious fairly quickly that you’re not talking with a real human. Have you ever gotten one of those calls and first thought it was a real person? I have. How did it make you feel if you were fooled — even just for a moment — into thinking it was a live person?
When it’s happened to me, I’ve felt annoyed and outraged. I’ve felt tricked. I’ve felt as though someone didn’t value my time.
With the new Google Duplex feature, you are being recorded as soon as you answer the phone. There’s no warning. Your voice is simply recorded and the software decides what to say in response. To me, this presents serious ethical and legal issues.
Then there’s the new feature in Gmail that offers to write your email to someone. (Here’s a brief video demo.) The software looks at the subject of your new email and things you’ve written in the past — and things that have been written by millions of other people — and then chooses what seems like the most obvious thing to say.
I don’t know about you, but I don’t want to get software-written emails. The subtext and writing style of an email tell me as much about the sender as the actual words do. This software rips the soul and personality out of communicating.
I want to know that I’m dealing with a human being — and I want to be able to judge his or her personality, intelligence and more by the tone and style of what I get. This new feature makes that impossible. Instead, it creates the most bland and impersonal text mush possible.
This gets at the heart of what the geeks at Google don’t understand. Communication is about more than just the text or informational content of an email or phone call. Human contact is about far more. It’s about emotions and unconscious judgments we make of each other.
The brilliant geeks who write this software haven’t stopped to ask themselves whether this is something that real people want or need.
Google’s CEO says this is all about helping you get more done, but if he believes that, he’s fooling himself. This is actually all about Google inserting itself more and more deeply into our lives. It’s about Google knowing more about us, so it can sell more and better-targeted advertising. It’s about very smart people who think new technology is cool, but who don’t understand the needs of the humans who will be using it.
I used to think really highly of Google, but the company’s attitudes about privacy and its visions of a future I don’t want have led me to cut it out of my life as much as possible. (I even use DuckDuckGo as my search engine, because DuckDuckGo doesn’t track me or sell my information.)
A lot of the work Google is doing in artificial intelligence is brilliant. I’m very impressed with what their people have been able to achieve.
But I don’t want the sterile, machine-driven communication that the geeks at Google are selling. Their world is one where I don’t want to live.