On May 8th, Google announced the development of a revolutionary new AI product called Google Duplex. This super-powered extension of the Google Assistant will be able to make phone calls on your behalf and speak to real people in a believable voice to make appointments or find out information. The new technology is undoubtedly impressive and groundbreaking, but what new privacy and security concerns arise when we start letting AIs make phone calls and real-life appointments for us?
Google calls it “an AI system for accomplishing real-life tasks over the phone,” and indeed, the applications for such a technology seem almost limitless. For now, however, Google says that Duplex will be limited to a narrow set of tasks it has been trained to perform. “It cannot carry out general conversations,” they write.
In the examples showcased by Google, Duplex calls a hair salon to schedule an appointment and a restaurant to reserve seats. If you listen to the conversations, you’ll hear what is perhaps Duplex’s greatest achievement – an AI that could potentially pass for a human being. With “ums”, “ahs” and realistic intonations, it’s hard to say whether you’d recognize that voice as an AI if it called you one day asking for a reservation.
Understanding and engaging in natural English conversation is an impressive feat of engineering, and Google says the tool will enter its testing phase this summer. However, alongside the technology's potential lie concerns about privacy and security – and let's not forget that this technology is being developed by a data giant that already collects a frightening wealth of data about us.
With the Google Assistant’s voice feature, we’ve already seen how far Google is willing to go to stockpile our data for advertising and developmental purposes. Unless you change your settings, your Android will record every voice request you make of Google, using it to serve up targeted ads and to improve the accuracy of the voice recognition feature.
What sort of data might Google Duplex collect? As it currently stands, the team developing Duplex has already been using user data to teach their highly advanced neural network. “To obtain its high precision, we trained Duplex’s RNN [recurrent neural network] on a corpus of anonymized phone conversation data.” They haven’t said whether the system will continue to store and use your data, but if past Google products are any indication, it probably will.
Google makes money by collecting user data, so protecting our privacy is not a priority. This has been clear with every product they’ve offered. Gmail users’ emails are scanned to serve them ads, and Google Assistant voice users have their every voice query recorded. Images stored on Google Photos are scanned by image recognition software to determine the contents and profile the user.
Every Google product is a bet that we will be willing to relinquish a degree of privacy in return for the benefits of their cool new product. What will we have to relinquish in order to use Duplex? Will it be worth it?
What makes Duplex so powerful is that it is a real AI – a machine that learns by doing. However, in complex situations, it may need human guidance – provided by Google employees. According to their post, “the system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.”
It’s too early to say if this has the potential to be a significant privacy risk, as the system is still in its developmental stages. However, in its current state, the system’s success depends on the surveillance of its users: “By monitoring the system as it makes phone calls in a new domain, [the operators] can affect the behavior of the system in real time as needed.”
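Google hasn't published how Duplex's self-monitoring works, but the behavior it describes – recognizing tasks the system can't complete autonomously and signaling a human operator – amounts to a confidence-based hand-off. As a rough illustration only (every name and threshold below is hypothetical, not Google's), such a policy might look like:

```python
# Hypothetical sketch of a self-monitoring escalation policy.
# None of these names or thresholds come from Google; they only
# illustrate the idea of an AI handing off low-confidence tasks.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for autonomous handling


def handle_task(task, model_confidence):
    """Route a task to the AI or to a human operator."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return f"AI handles: {task}"
    # Below threshold: signal a human operator, as Google describes.
    return f"Escalated to human operator: {task}"


print(handle_task("book a haircut", 0.95))
print(handle_task("schedule an unusually complex appointment", 0.4))
```

The privacy question lives in that second branch: every escalated call is, by design, a call a human employee hears.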
It would seem that human operators are brought in when a request is too complex for the still-learning system to handle on its own. Will users be told when Duplex decides to contact a human operator? Will they be given a choice? If you're using Duplex to schedule something sensitive, like a doctor's appointment for an embarrassing medical condition, is that a call you want a Google employee listening in on, or even participating in?
The uncanny valley refers to how we react to imitations of human beings as they become more and more lifelike. We can relate to a crude mimicry of a human face, like an emoticon or a Disney character, but we are unsettled, even repulsed, by attempts that come very close yet aren't quite right. On the far side of the valley are imitations convincing enough that we empathize with them: entities we can forget aren't human.
Has Duplex crossed that valley? Some say that it has, and that it may even be capable of passing the Turing test. Are we prepared to use a tool that can completely fool humans into believing it is human?
It's not too difficult to imagine emotional and practical barriers to having an AI speak for us. Duplex could make it harder to build social relationships with local businesses and the people who run them: what happens when the manager on the phone asks Duplex how it's doing? Google says Duplex is limited to the conversation patterns it has been trained for, though it can learn new ones. What if the person on the other end of the line has a heart attack during the call? Would we feel guilty for sending an AI in our place? What if that person sounds like they could use cheering up or a few soothing words?
How about the rights of the person speaking to Google Duplex? Will they know that they’re speaking to an AI? Should they? Would they change their behavior if they knew? Fortunately, Google seems to be aware of this issue and is looking for a solution: “We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”
It is too soon to tell how Google Duplex will address these issues or whether it will address them at all. Though the technology excites me, I remain concerned about how Google will use it – and how it might change the way we speak to one another.