Google puts Duplex AI on public trial – with some changes

When Duplex was demonstrated at Google’s developer conference earlier this year, the initial consensus was ‘wow’: here was a giant leap forward in the interaction between humans and virtual assistants.

Then came the questions, such as: How many attempts did it take to record seamless conversations between businesses and Google’s AI? And, as impressive as a machine using conversational fillers is, does it take things too far? And, is it right that people won’t realise that it’s software, and not a human at the other end of the line?

This week, Google’s robot caller has been performing in public trials, and the company has attempted to answer some of those questions with modifications to the system.

The most notable addition to the Duplex system is the introduction it uses. At its debut, the assistant started conversations as a human would, with a standard greeting.

Now, the introduction has been adapted to provide more information to the human on the other end of the line. “Hi, I’m calling to make a reservation,” the voice says. “I’m Google’s automated booking service, so I’ll record the call…”

As well as ticking the ethical box for those concerned about the technology being deceptive, it also asks for consent to record the call – and advertises its maker in the process.

One in five calls still requires a human operator

Should a business not wish the conversation to be recorded, Duplex will hang up and the appointment will be made by one of Google’s human operators instead. A business can also opt out of receiving calls from Duplex in future.

So for the time being, at least, human operators will remain central to the Duplex system. On top of making calls to businesses that don’t want to be recorded, the system’s human handlers will also have to step in when conversations hit a roadblock and the AI gets confused. Currently, Google estimates that this happens in one out of every five calls.

It goes without saying that 20 percent is an unsustainable level of human intervention for an automated system, unless Google plans to go into the call centre outsourcing business on a massive scale. Were such a system to be rolled out globally while needing this amount of supervision, even Google’s resources would be pushed to the limit.

More convenience equals more data

To date, demos of Duplex and the underlying technology have been exciting and divisive in equal measure.

On the one hand, it promises to save people time and effort by handling boring tasks. Beyond that, it could also provide a vital service to those with accessibility problems or communication challenges, or help users arrange their time when travelling in a country where they don’t speak the language.

However, the technology also opens the data door, giving Google yet another automated way to collect information about its users and their preferences. Amazon does the same with its Alexa-powered devices, a range that has grown to include smart TV boxes and cameras and, according to persistent rumours, may soon extend to domestic robots.

But whether that data-harvesting culture can continue indefinitely is in doubt: California is debating the introduction of GDPR-style laws, which may push technology companies to apply the same rules throughout the US, and even globally, for ease of management.

Aside from the data issues, Duplex offers a glimpse into a future in which human interactions are rarer than they perhaps ought to be. As a result, the market will decide how necessary the technology is – including those customers who are already tired of being plagued by automated calls, a problem that may be significantly worsened by any commercial release of Duplex.

Internet of Business says

The sense that a crunch time may be approaching in human beings’ relationship with machines is growing.

Debates are raging about what forms AI systems and robots should take: should they mimic human beings, or announce their machine nature in terms of their design and modus operandi? Should organisations still be allowed to harvest data on a massive scale, and set AI to work on it? And are machines really augmenting our abilities, or simply replacing more and more of the skills that we use to earn a living?

Such questions arise as a number of reports suggest that people are rejecting robots on a wave of negative media publicity, and are losing confidence in autonomous vehicle technologies.

Meanwhile, Google may have rolled out a new ethical strategy, but it had to be reminded of its core values by angry employees first, as our recent report revealed.

IBM has also been demonstrating a new AI recently: its Debater system, which goes far beyond automating boring, repetitive tasks. Debater trawls the internet for research in order to summarise ‘for and against’ arguments for its human users, which arguably replaces independent thought and investigation.

And this is the underlying challenge, perhaps: the definition of ‘routine and boring’ is constantly changing, moving closer to the definition of what makes us human, and what constitutes a ‘worthwhile task’ for a person to carry out.

Everything is routine to someone – a heart operation to a surgeon, for example, or primary research to a physicist. As Dr Anders Sandberg of Oxford University put it at an AI and robotics seminar in 2016 (attended by Internet of Business editor Chris Middleton), “If you can describe what you do for a living, then your job can – and will – be automated.”

Additional reporting and analysis: Chris Middleton.