CogX 2018: How AI will teach itself to transform economy
Schmidhuber with iCub robot.

As AI and robotics develop side by side, the medium-term future will see infant-like robots that can learn for themselves and crow-like machine intelligences that can teach themselves to use tools, claims a leading AI researcher. But progress will not stop there. Chris Middleton reports from CogX in London.

In 2018, most people are worried that AI and robotics might automate their jobs and leave them scrabbling in the gig economy for a regular wage. But one AI expert believes that the technologies’ ambitions are much bigger than that: in the centuries ahead, AI and robots will “emigrate” from Earth and communicate with each other across the universe, he says.

But long before then, general artificial intelligences will emerge that can be taught like human children, and which can teach themselves about how the world works through “power play” and “artificial curiosity”, he said.

Jürgen Schmidhuber is a computer scientist who, in 1997, was one of the prime movers of Long Short-Term Memory (LSTM), a recurrent neural network (RNN) architecture that lets computers retain information across long sequences, so that they can learn patterns in data that unfold over time.

In 2014, he co-founded AI research company NNAISENSE, whose aim is to develop a general AI that is capable of learning for itself.

Speaking at the CogX artificial intelligence festival in London, Schmidhuber explained that LSTM blocks are building units for the layers of a recurrent neural network. LSTM technology currently resides on three billion smartphones, he said. Since 2015, it has been carrying out the speech recognition on Android phones and, since 2016, two learning LSTM systems have sat at the heart of Google Translate.
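The mechanism that makes those LSTM blocks work is a set of learned "gates" that decide what a cell remembers, forgets, and outputs at each step. The sketch below is purely illustrative, showing a single step of a standard LSTM cell in NumPy with random weights; the variable names and shapes are this article's assumptions, not code from Google or NNAISENSE.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.

    x: input vector; h_prev, c_prev: previous hidden and cell state.
    W, U, b: stacked weights for the input, forget, and output gates
    plus the candidate cell update (4 * hidden_size rows).
    """
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input/forget/output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g   # gated memory: the "long short-term" part
    h = o * np.tanh(c)       # new hidden state, exposed to the next layer
    return h, c

# Tiny usage example: run a 5-step sequence through one cell.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # → (4,)
```

Because the forget gate `f` multiplies the old cell state rather than overwriting it, gradients can flow across many timesteps, which is what lets the trained networks behind speech recognition and translation handle long sentences.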

“LSTM is now consuming a large part of the computational resources of the world,” he said. “For example, in all these Google data centres, as of 2017, almost 30 percent of the computational power for inference was used for LSTM.

“Facebook is using it, as of 2017, to do about 4.5 billion translations per day, and it learned to translate just by looking at lots of examples of translated text from the European Parliament,” he added.

“When we started all of that stuff in the 90s, nobody was interested. I remember giving a talk on recurrent networks back then, and there was just a single person in the audience – the next speaker. Since then we have greatly profited from the fact that every five years, computers are getting ten times faster.”

Europe is the cradle of computer science and AI, he said – despite deep learning emerging in the Soviet Union. However, much of the commercial exploitation of the technology has taken place around the Pacific Rim: on the US West Coast and in East Asia. Europe needs to learn from that process about how better to commercialise AI in the years ahead, said Schmidhuber.

Robots that learn

LSTM technologies can also be used to control robots, he said. Although most applications are currently supervised learning – where robots are “taught to slavishly imitate the human” – we will also have “systems that invent their own goals, that set themselves their own goals like little babies do”.

These AIs and the robots they control can invent their own experiments that “figure out how the world works, how to play with toys that tell them something about physics, about gravity, and so on, to the extent that they can learn through their experiments, which they invent by themselves to learn how the world works. To that extent, they also become more and more general problem solvers.”

“Artificial curiosity” and “power play” are the buzzwords that describe a new world in which machines increasingly learn in the same ways that humans do – just much faster and more efficiently, said Schmidhuber.
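The core idea behind artificial curiosity is that the agent rewards itself not for any external goal, but for how much each self-invented experiment improves its own model of the world. The toy sketch below illustrates that "learning progress" signal; the environment, numbers, and update rules are this article's illustrative assumptions, not NNAISENSE code.

```python
import random

# Toy world: "toys" 0-3 behave predictably; toy 4 is pure noise,
# like a TV showing static - impossible to learn anything from.
random.seed(0)

def outcome(toy):
    return toy * 2.0 if toy < 4 else random.uniform(0, 10)

model = {t: 0.0 for t in range(5)}     # predicted outcome per toy
avg_err = {t: 5.0 for t in range(5)}   # running average prediction error
progress = {t: 1.0 for t in range(5)}  # optimistic start: everything looks interesting

for step in range(3000):
    # Mostly pick the toy whose experiments have recently taught us the
    # most, with a little random exploration.
    if random.random() < 0.1:
        toy = random.randrange(5)
    else:
        toy = max(progress, key=progress.get)

    y = outcome(toy)
    err = abs(y - model[toy])
    model[toy] += 0.2 * (y - model[toy])          # update the world model
    new_avg = 0.95 * avg_err[toy] + 0.05 * err
    # Intrinsic reward = drop in average prediction error (learning progress).
    progress[toy] = 0.9 * progress[toy] + 0.1 * (avg_err[toy] - new_avg)
    avg_err[toy] = new_avg

print({t: round(model[t], 2) for t in range(5)})
```

Rewarding the *improvement* of the model, rather than raw prediction error, is what makes the agent lose interest in the unlearnable noisy toy as well as in the toys it has already mastered: curiosity flows to wherever something can still be learned.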

NNAISENSE man

NNAISENSE (the name plays on “neural network-based artificial intelligence”) is trying to build a general-purpose AI that makes this vision a reality. Its progress to date has been impressive. A year ago, the team designed a fleet of model cars that learned how to park themselves through trial and error. They then created an AI that was able to learn to run, from scratch.

So what’s next? “In the not so distant future, there will be robots that you will be able to teach like kids, by speaking to them and showing things to them,” continued Schmidhuber.

“For example, you will say, ‘OK let’s try to assemble that smartphone’, and you take the slab of plastic and the screwdriver, and you just show the robot what you’re doing […] and it will figure out, just by camera input and speech input, what you mean. And quickly and economically, so that it can then speed up by itself and do the same thing faster and with less energy.

“This doesn’t work well yet, but we can see already how we can get there. It’s going to change many, many professions – everything in production, machines that make machines, and so on. At least 10-15 percent of the economy [will be affected], probably much more.

“In the not so distant future, for the first time, we will have animal-like AI – at the level of a little crow, which can learn to use tools to achieve goals. Again, we don’t have that yet, but we see it on the horizon. Once we have something like that, it will take just a few more decades until we have true human-level AI.”

However, this AI will be a “super democratic thing – everyone will be able to use it”, he explained. “It’s not going to be just one company owning everything.”

The distant view

Schmidhuber then turned his own intelligence to the distant future, and explained how technological progress keeps accelerating.

“History started 13.8 billion years ago with the Big Bang. If we take one thousandth of that time, 13 million years ago, something really important happened: the first hominids emerged – your ancestors and mine. And, again, in one thousandth of that time, 13,000 years ago, something really important happened, and changed everything in the process: the first civilisation emerged. Agriculture was invented, the domestication of animals…

“And now we can see that all of civilisation is just a flash in world history. In just a flash, the guy who had the first agriculture was almost the same guy who had the first spacecraft in 1957. And soon we are going to have the first AIs that really deserve the name, the first true AIs.

“It doesn’t matter if it’s going to happen in ten years, or 100 years, or even 1,000 years; it’s all small compared to the age of civilisation, which is tiny compared to the age of the universe.”

So what are these super-smart AIs going to do, at whatever point in human history they arrive – assuming human civilisation survives the current period of increased insularity and nationalism?

Schmidhuber said, “Of course, they will realise what we realised a long time ago: that most resources are not on this planet; they are far away. They are out there in the solar system where there is a billion times more sunlight and where there is lots of material – for making self-replicating robot factories, for example.

“So most AI is going to emigrate, because most resources are far away. And it’s not going to stop with the solar system, it’s going to cross the whole galaxy within a few hundred thousand years – totally within the limits of physics. But that’s very different from the human-centric science fiction of the last century. It’s going to be AIs that do that; it’s not going to be humans.

“The galaxy will be covered in senders and receivers that will allow AIs to travel in the same way they do in my lab, which is by radio, by wireless, from sender to receiver. And it’s not going to stop there, because most resources are not in this galaxy.”

Internet of Business says

In 2018, this is not quite the debate about emigration and job losses that most sentient beings are used to – and, as a digital technology, could any piece of code be said to “emigrate” rather than, say, merely replicate itself? But while Schmidhuber’s view might seem far-fetched, that’s not to say that he’s wrong. So it’s oddly comforting to know that, in a world of fake news and flat Earthers, intelligence might turn out to be the planet’s biggest export after all – but perhaps less so to know that it won’t be human.

Chris Middleton
Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.