A new IBM artificial intelligence (AI) system, Project Debater, has taken on human beings in a live debating competition in San Francisco.
First the AI made a “surprisingly cogent argument” that space exploration should be subsidised, in the words of one commentator. When one of the humans disagreed, Debater – taking the form of a black obelisk with a female voice – challenged that argument with an improvised rebuttal of its own. In a second discussion, Debater argued for the increased use of remote medical treatments – telemedicine – while the human participant spoke against them.
The event served as an important demonstration of how the kinds of cognitive technologies that IBM is now in the vanguard of could begin to challenge human beings’ ability to marshal data into a line of reasoning and argument – areas that are highly subjective, rather than a matter of binary choices.
Here is a video of an earlier demonstration of the technology:
No idea what it’s talking about
However, one of the many intriguing aspects of the live event was that the system itself, quite literally, had no idea what it was talking about.
Project Debater constructs answers by searching for and combining data from large online sources, including Wikipedia. Its arguments are assembled in real time from what IBM calls “hundreds of millions” of newspaper articles and other resources. By contrast, the human debaters at the event were experts in their field who had rehearsed their arguments beforehand.
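Project Debater’s real pipeline is proprietary and far more sophisticated, but the general shape of the approach described above – mining a corpus for topic-relevant claims, sorting them into ‘for’ and ‘against’ piles, and assembling them into an argument without the system “understanding” a word of it – can be sketched in miniature. Everything here (the corpus, the stance cues, the `mine_arguments` helper) is an illustrative assumption, not IBM’s method:

```python
# Toy sketch of extractive argument mining: group text snippets by stance.
# A real system would use trained relevance and stance classifiers, not
# the crude keyword cues assumed here.

CORPUS = [
    "Space exploration drives innovation in materials and medicine.",
    "Space exploration is too expensive to justify public subsidy.",
    "Satellite technology, a product of space programmes, underpins telecoms.",
    "Funds for space exploration would be better spent on Earth.",
]

# Hypothetical stance cues for this toy corpus.
PRO_CUES = {"drives", "underpins", "benefits"}
CON_CUES = {"expensive", "better spent", "wasteful"}

def mine_arguments(topic: str, corpus: list[str]) -> dict[str, list[str]]:
    """Group topic-relevant snippets into 'for' and 'against' piles."""
    stances: dict[str, list[str]] = {"for": [], "against": []}
    for snippet in corpus:
        lower = snippet.lower()
        # Crude relevance filter: keep snippets mentioning the topic.
        if topic.lower() not in lower and "space" not in lower:
            continue
        if any(cue in lower for cue in CON_CUES):
            stances["against"].append(snippet)
        elif any(cue in lower for cue in PRO_CUES):
            stances["for"].append(snippet)
    return stances

arguments = mine_arguments("space exploration", CORPUS)
print(len(arguments["for"]), "for;", len(arguments["against"]), "against")
```

Note that nothing in this sketch models meaning: the “argument” is purely a function of which strings co-occur, which is exactly why such a system can be both impressively fluent and entirely dependent on the quality of its sources.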
This is a similar approach to other technology research programmes that have allowed robots to model human speech without having any concept of the meaning of the language they are using.
Both debates were judged by a human audience. The good news is that they sided with their fellow humans. However, in both cases the margin of victory was narrow, according to reports from the event, and Debater impressed witnesses with its ability to gather data from a much larger body of information than its human counterparts could muster.
So what’s behind the concept? The project’s aim is to help people make critical decisions by providing a range of ‘for’ and ‘against’ arguments, said IBM.
“We’re interested in enterprises and governments; our goal is to help humans in decision-making,” said IBM’s director of research, Arvind Krishna. “Should we drill for oil in west Africa? Should we let our food supply have antibiotics in it? There are no right or wrong answers, but we want there to be an informed debate.”
However, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, WA, told the MIT Technology Review that it’s difficult to judge the capabilities of the IBM system based on a single contest.
“It’s easier to put together a canned demo than an open one where they let you interact with it in a natural way,” he said, following admissions that some elements of the presentation – such as jokes – were programmed by IBM’s researchers.
His criticism followed Google’s recent demonstration of its AI mimicking human conversation on the phone, which some commentators alleged was faked, staged, or heavily edited – allegations that Google has not responded to.
- Read more: Google announces new ethical AI strategy
Internet of Business says
OPINION That IBM intends to commercialise the technology in the years ahead should not be in doubt, following its successes with Watson – which found its initial fame in a similar public demonstration, when it beat human champions on the game show Jeopardy!.
But is Project Debater a good thing?
One advantage that Watson has in customer-facing deployments is its ability to access vast amounts of data rapidly. For example, a robot concierge in a hotel can, via Watson in the cloud, access industry-specific data sets of a size and depth that no human being could rival, however expert and knowledgeable they may be.
A system such as Debater would have that same theoretical advantage: real-time access to big data on demand. But in itself, that is not necessarily a good thing when it comes to critical reasoning, debates, and decisions, for several reasons.
First, the system risks turning all secondary research into a passive activity, reinforcing a problem that we all face today: that we are skating on the surface of a world of noise and advertising, Liking and Sharing things without reading anything in depth first hand.
IBM might argue that Debater is here to change that by analysing millions of data resources and summarising them into an argument for us. In itself, that may be a valuable service. But it also reinforces the message that 21st-century research is a passive, not active, concern, and relies on online resources that themselves may be inaccurate, lacking in peer review, untested, based on thousands of misguided shares, or pushing a paid-for point of view.
After all, we live in a world in which more and more people believe that the Earth is flat – and have the apparent evidence to prove it, even as the communications satellites they use to push this view fail to plummet from the sky.
Second, if the past couple of years have taught us anything, it is that vast amounts of data can be faked, search algorithms gamed, and social resources manipulated to push people towards viewpoints and stories that pander to their own preconceptions, biases, or prejudices – or to the interests of investors, media proprietors, or advertisers.
At heart, the Cambridge Analytica story was not about a social platform’s mass data breach; it was a demonstration of how any viewpoint, when combined with large-scale private funding and specially designed algorithms, can be persuasive, and can actively influence people’s choices.
Third, the media in some countries – the UK is just one of many examples – leans more towards one political view than another. Anyone is free to express those or any other views, subject to the law, but the views expressed in those titles, on either side of the debate, merely reflect those of their proprietors.
In the UK, the number of right-of-centre broadsheets and tabloids is significantly higher than the number adopting a broadly left-of-centre view, for example. Whatever your own political views might be, any trawl of local media would inevitably produce a view that, in general terms, more strongly supports one viewpoint over the other. But that’s a very different thing from the weight of evidenced, peer-reviewed research coming to the same conclusion.
Fourth, in a world of search engine optimisation (SEO), a lot of information itself is being distorted and bent into whatever shapes suit websites’ content and ranking strategies. The gaming of SEO algorithms with stock phrases and clickbait terms in order to push pages higher up the rankings is rife in every part of the publishing industry, meaning that a lot of data is misleading, poorly tagged, extensively manipulated – or paid for by marketers.
In this strict sense, many primary data resources are becoming unreliable – even some newspapers are now accepting advertisers’ money to write positive news stories about those companies. Those alliances and biases would be invisible to a computer system.
Fifth, the internet is also full of widely shared memes and, from time to time, inaccurate claims and faked stories, many of which are Liked and shared by tens of thousands of people. Anyone who has seen an image of wolves walking through a snowy wasteland on LinkedIn or Facebook, captioned with deep lessons for human leadership, will recognise the type. It holds no such lessons: it’s just some wolves walking in the snow, and the claims about the pack’s leadership structure are nonsense, as even five minutes looking into the matter first hand will reveal.
And that’s the point: the internet says it’s a demonstration of leadership, even though there is no scientific evidence for that assertion. To a computer, the weight of evidence points strongly towards the non-evidenced view – the meme, or the expression of belief.
Apply this concept to more serious issues – IBM used the examples of drilling for oil in west Africa or putting antibiotics in food – and there is a risk that whatever viewpoint is the most highly funded, pushed by newspapers, or manipulated at source may win the day. That’s not to say that it would be the wrong view, merely that the debate may have been gamed or weighted at source.
After all, matters of opinion are more easily disseminated at scale if you own a media platform.
What’s really needed in such a world are technologies that encourage us to find things out for ourselves, and which present depth, hard evidence, and peer-reviewed research over surface noise. The problem is that much technology is now pushing us further and further towards passively accepting surface noise, rather than actively investigating depth.
Doubtless, IBM believes that it’s here to help with Project Debater – to present us with depth, not surface – and yet the deployment of this technology can only be passive, even if it encourages deeper debate. After all, it sets the conditions for the discussion, and presents whatever data it has found. That said, perhaps it may convince people that the Earth is round – based purely on the weight of evidence.
Either way, the more we pursue technology that allows us to consume information passively, from a point of trust and laziness, the more we may all end up like Mark Zuckerberg, staring at Congress like a frightened rabbit, trying to explain how we got here.
But of course, that’s merely an opinion. No doubt Debater could construct a persuasive counter-argument.