Is DeepMind’s Gato AI really a human-level intelligence breakthrough?
DeepMind has released what it calls a “generalist” AI called Gato, which can play Atari games, accurately caption images, chat naturally with a human and stack coloured blocks with a robot arm, among some 600 other tasks. But is Gato truly intelligent, possessing artificial general intelligence, or is it just an AI model with a few extra tricks up its sleeve?
What is artificial general intelligence (AGI)?
Outside science fiction, AI is limited to niche tasks. It has seen plenty of success recently in solving a huge range of problems, from writing software to predicting how proteins fold and even creating beer recipes, but individual AI models have narrow, specific abilities. A model trained for one task is of little use for another.
AGI is a term used for a model that can learn any intellectual task that a human being can. Gary Marcus at US software firm Robust.AI says the term is shorthand. “It’s not a single magical thing,” he says. “But roughly, we mean systems that can flexibly, resourcefully solve problems that they haven’t seen before, and do so in a reliable way.”
How will we know if AGI has been achieved?
Various tests have been proposed that would grant an AI the status of AGI, although there is no universally accepted definition. Alan Turing famously suggested that an AI should have to pass as human in a text conversation, while Steve Wozniak, co-founder of Apple, has said he will consider AGI to be real if an AI can enter a random house and work out how to make a cup of coffee. Other proposed tests include sending an AI to university to see whether it can pass a degree, or asking it to carry out real-world jobs successfully.
Does AGI exist yet?
Yann LeCun, chief AI scientist at Facebook’s owner Meta, says there is no such thing because even humans are specialised. In a recent blog post, he said that a “human level AI” may be a useful goal to aim for, where AI can learn jobs as needed like a human would, but that we aren’t there yet. “We still don’t have a learning paradigm that allows machines to learn how the world works, like human and many non-human babies do,” he wrote. “The solution is not just around the corner. We have a number of obstacles to clear, and we don’t know how.”
One of the driving forces behind the current success of AI research is scale: more and more computing power is being used to train ever-larger models on increasingly large data sets. The discovery that simply scaling up provides such power is surprising, and we have yet to see signs that more power, more data and larger models won’t keep producing more capable AI. But many researchers are sceptical that scaling alone will lead to a conscious, or even general, AI.
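That scaling trend is often summarised as a power law relating a model’s size to its test error. Here is a minimal sketch in Python, using constants from one published estimate for language models (Kaplan and colleagues, 2020); treat the exact numbers as illustrative rather than a property of Gato or any particular system:

```python
# A minimal sketch of the power-law "scaling law" behind ever-larger models.
# Form: loss(N) = (N_c / N) ** alpha, where N is the parameter count.
# The constants follow one published estimate for language models and are
# illustrative only; real systems deviate from this idealised curve.

N_C = 8.8e13   # estimated "critical" parameter count from that study
ALPHA = 0.076  # estimated scaling exponent

def predicted_loss(n_params: float) -> float:
    """Predicted test loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.3f}")
```

The predicted loss keeps falling as models grow, which is what fuels the optimism about scale; a falling loss curve, though, says nothing by itself about consciousness or generality.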
Is Gato an AGI?
Nando de Freitas at DeepMind tweeted that “the game is over” when Gato was released, suggesting that achieving AGI is now simply a matter of making AI models bigger and more efficient and feeding in more training data. But others aren’t so sure.
Marcus says Gato was trained to do each and every one of the tasks it can do, and that faced with a new challenge it wouldn’t be able to logically analyse and solve that problem. “These are like parlour tricks,” he says. “They’re cute, they’re magician’s tricks. They’re able to fool unsophisticated humans who aren’t trained to understand these things. But that doesn’t mean that they’re actually anywhere near [AGI].”
Oliver Lemon at Heriot-Watt University in Edinburgh, UK, says the claim that the “game is over” isn’t accurate, and that Gato is not AGI. “These models do really impressive things,” he says. “However, a lot of the really cool examples you see are cherry-picked; they get exactly the right input to lead to impressive output.”
So what has Gato achieved?
Even DeepMind’s own scientists are sceptical of the claims being made by some about Gato. David Pfau, a staff research scientist at DeepMind, tweeted: “I genuinely don’t understand why people seem so excited by the Gato paper. They took a bunch of independently trained agents, and then amortized all of their policies into a single network? That doesn’t seem in any way surprising.”
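Pfau’s phrase “amortized all of their policies into a single network” describes a form of distillation: separate expert agents are trained first, then one shared network learns to imitate all of them. Below is a minimal, hypothetical sketch of that idea in Python with PyTorch; the networks, dimensions and training loop are invented for illustration and are not Gato’s actual architecture:

```python
# A hypothetical sketch of "amortising" several single-task policies into
# one network via behaviour cloning. Not DeepMind's code; illustration only.
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS, N_TASKS = 16, 4, 3

# Stand-ins for independently trained "expert" policies, one per task.
experts = [nn.Linear(OBS_DIM, N_ACTIONS) for _ in range(N_TASKS)]

# One shared "student" network, conditioned on a task ID, imitates them all.
student = nn.Sequential(
    nn.Linear(OBS_DIM + N_TASKS, 64),
    nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
optimiser = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    task = step % N_TASKS
    obs = torch.randn(32, OBS_DIM)                  # fake observations
    with torch.no_grad():
        target = experts[task](obs).argmax(dim=-1)  # expert's chosen action
    task_onehot = torch.zeros(32, N_TASKS)
    task_onehot[:, task] = 1.0
    logits = student(torch.cat([obs, task_onehot], dim=-1))
    loss = loss_fn(logits, target)                  # match the expert
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

The student ends up reproducing each expert’s behaviour without ever solving a task on its own, which is why Pfau finds the combined result unsurprising rather than a step towards generality.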
But Lemon says the new model, and others like it, are creating surprisingly good results, and that training an AI to accomplish varied tasks may eventually create a solid foundation of general knowledge on which a more adaptable model could be based. “I’m sure deep learning is not the end of the story,” he says. “There’ll be other innovations coming along that fill in some of the gaps that we currently have in creativity and interactive learning.”
DeepMind wasn’t available for comment.