The man behind a startup acquired by Google for a reported $650 million plans to build a revolutionary new artificial intelligence.
By Tom Simonite on December 2, 2014
WHY IT MATTERS
Software could be vastly more useful if it successfully mimicked the human brain.
Demis Hassabis started playing chess at age four and soon blossomed into a child prodigy. At age eight, success on the chessboard led him to ponder two questions that have obsessed him ever since: first, how does the brain learn to master complex tasks; and second, could computers ever do the same?
Now 38, Hassabis puzzles over those questions for Google, having sold his little-known London-based startup, DeepMind, to the search company earlier this year for a reported 400 million pounds ($650 million at the time).
Google snapped up DeepMind shortly after it demonstrated software capable of teaching itself to play classic video games to a super-human level (see “Is Google Cornering the Market on Deep Learning?”). At the TED conference in Vancouver this year, Google CEO Larry Page gushed about Hassabis and called his company’s technology “one of the most exciting things I’ve seen in a long time.”
Researchers are already looking for ways that DeepMind technology could improve some of Google’s existing products, such as search. But if the technology progresses as Hassabis hopes, it could change the role that computers play in many fields.
DeepMind seeks to build artificial intelligence software that can learn when faced with almost any problem. This could help address some of the world’s most intractable problems, says Hassabis. “AI has huge potential to be amazing for humanity,” he says. “It will really accelerate progress in solving disease and all these things we’re making relatively slow progress on at the moment.”
Renaissance Man
Hassabis’s quest to understand and create intelligence has led him through three careers: game developer, neuroscientist, and now, artificial-intelligence entrepreneur. After completing high school two years early, he got a job with the famed British games designer Peter Molyneux. At 17, Hassabis led development of the classic simulation game Theme Park, released in 1994. He went on to complete a degree in computer science at the University of Cambridge and founded his own successful games company in 1998.
But the demands of building successful computer games limited how much Hassabis could work on his true calling. “I thought it was time to do something that focused on intelligence as a primary thing,” he says.
So in 2005, Hassabis began a PhD in neuroscience at University College London, with the idea that studying real brains might turn up clues that could help with artificial intelligence. He chose to study the hippocampus, a part of the brain that underpins memory and spatial navigation, and which is still relatively poorly understood. “I picked areas and functions of the brain that we didn’t have very good algorithms for,” he says.
As a computer scientist and games entrepreneur who hadn’t taken high school biology, Hassabis stood out from the medical doctors and psychologists in his department. “I used to joke that the only thing I knew about the brain was that it was in the skull,” he says.
But Hassabis soon made a mark. In a 2007 study recognized by the journal Science as a “Breakthrough of the Year,” he showed that five patients suffering amnesia due to damage to the hippocampus struggled to imagine future events. It suggested that a part of the brain thought to be concerned only with the past is also crucial to planning for the future.
That memory and forward planning are intertwined was one idea Hassabis took with him into his next venture. In 2011, he quit life as a postdoctoral researcher to found DeepMind Technologies, a company whose stated goal was to “solve intelligence.”
High Score
Hassabis founded DeepMind with fellow AI specialist Shane Legg and serial entrepreneur Mustafa Suleyman. The company hired leading researchers in machine learning and attracted noteworthy investors, including Peter Thiel’s firm Founders Fund and Tesla and SpaceX founder Elon Musk. But DeepMind kept a low profile until December 2013, when it staged a kind of debutante moment at a leading research conference on machine learning.
At Harrah’s Casino on the shores of Lake Tahoe, DeepMind researchers showed off software that had learned to play three classic Atari games (Pong, Breakout, and Enduro) better than an expert human. The software wasn’t programmed with any information on how to play; it was equipped only with access to the controls and the display, knowledge of the score, and an instinct to make that score as high as possible. The program became an expert gamer through trial and error.
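A minimal sketch of the kind of interface that description implies, assuming a stand-in environment: the agent sees only raw screen pixels, presses only the available controls, and gets the change in score as its sole feedback. The AtariLikeEnvironment class and its methods below are hypothetical placeholders for illustration, not DeepMind’s code.

```python
import random

class AtariLikeEnvironment:
    """Hypothetical stand-in for a game emulator: exposes only pixels, controls, and the score."""

    ACTIONS = ["noop", "left", "right", "fire"]

    def observe(self):
        # A real emulator would return the raw screen; here, random pixel values.
        return [random.random() for _ in range(84 * 84)]

    def act(self, action):
        # Pressing a control occasionally earns a point in this toy stand-in.
        return random.choice([0, 0, 0, 1])

env = AtariLikeEnvironment()
score = 0
for frame in range(1000):
    pixels = env.observe()               # everything the agent "sees"
    action = random.choice(env.ACTIONS)  # pure trial and error: no strategy yet
    reward = env.act(action)             # the only feedback: the change in score
    score += reward
    # A learning agent would use (pixels, action, reward) to improve its future choices.
print("final score:", score)
```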
No one had ever demonstrated software that could learn to master such a complex task from scratch. DeepMind had made use of a newly fashionable machine learning technique called deep learning, which involves processing data through networks of crudely simulated neurons (see “10 Breakthrough Technologies 2013: Deep Learning”). But it had combined deep learning with other tricks to make something with an unexpected level of intelligence.
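For readers unfamiliar with the term, a “crudely simulated neuron” is little more than a weighted sum passed through a nonlinearity; deep learning stacks many layers of such units and learns their weights from data. The sketch below uses made-up inputs and weights purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """A crudely simulated neuron: a weighted sum of inputs squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Made-up weights; a deep network stacks many layers of such units and learns the weights from data.
print(neuron([0.5, 0.2, 0.9], weights=[1.2, -0.7, 0.3], bias=-0.1))
```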
“People were a bit shocked because they didn’t expect that we would be able to do that at this stage of the technology,” says Stuart Russell, a professor and artificial intelligence specialist at the University of California, Berkeley. “I think it gave a lot of people pause.”
DeepMind had combined deep learning with a technique called reinforcement learning, which is inspired by the work of animal psychologists such as B.F. Skinner. This led to software that learns by taking actions and receiving feedback on their effects, as humans or animals often do.
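As a rough illustration of that loop of actions and feedback, here is a minimal tabular Q-learning agent, one classic form of reinforcement learning, solving a toy walk-to-the-goal task. The environment, states, and reward values are invented for the example and are far simpler than anything DeepMind trained on.

```python
import random

# Toy task: walk along positions 0..4; reaching position 4 ends the episode with reward 1.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount factor, exploration rate

def best_action(state):
    # Pick the highest-valued action, breaking ties at random.
    top = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == top])

for episode in range(500):
    state = 0
    while state != GOAL:
        # Occasionally explore; otherwise exploit the current value estimates.
        action = random.choice(ACTIONS) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        future = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * future - q[(state, action)])
        state = next_state

print({s: best_action(s) for s in range(GOAL)})  # learned policy: step right everywhere
```

After a few hundred episodes of feedback, the table of values steers the agent toward the rewarding end of the walk, which is the essence of learning from action and consequence rather than from explicit instructions.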
Artificial intelligence researchers have been tinkering with reinforcement learning for decades. But until DeepMind’s Atari demo, no one had built a system capable of learning anything nearly as complex as how to play a computer game, says Hassabis. One reason it was possible was a trick borrowed from his favorite area of the brain. Part of the Atari-playing software’s learning process involved replaying its past experiences over and over to try to extract the most accurate hints on what it should do in the future. “That’s something that we know the brain does,” says Hassabis. “When you go to sleep your hippocampus replays the memory of the day back to your cortex.”
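The replay idea itself is easy to sketch: store each experience as it happens, then repeatedly sample random batches of old experiences to learn from, rather than learning only from the latest frame. The buffer below is a generic illustration of that mechanism, not DeepMind’s implementation.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past (observation, action, reward, next_observation) experiences for reuse."""

    def __init__(self, capacity=100000):
        self.memory = deque(maxlen=capacity)   # the oldest experiences fall off the end

    def add(self, observation, action, reward, next_observation):
        self.memory.append((observation, action, reward, next_observation))

    def sample(self, batch_size=32):
        # Sampling at random breaks up correlations between consecutive frames
        # and lets the agent learn from each experience many times over.
        return random.sample(list(self.memory), min(batch_size, len(self.memory)))

buffer = ReplayBuffer()
buffer.add("frame_0", "right", 0.0, "frame_1")
buffer.add("frame_1", "fire", 1.0, "frame_2")
replayed = buffer.sample(batch_size=2)   # revisited during training, long after it happened
```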
A year later, Russell and other researchers are still puzzling over exactly how that trick, and others used by DeepMind, led to such remarkable results, and what else they might be used for. Google didn’t take long to recognize the importance of the effort, announcing a month after the Tahoe demonstration that it had acquired DeepMind.
Company Man
Today, Hassabis leads what is now called Google DeepMind. It is still headquartered in London and still has “solve intelligence” as its mission statement. The group was roughly 75 people strong when it joined Google, and Hassabis has said he aims to hire around 50 more. Around 75 percent of the group works on fundamental research. The rest form an “applied research team” that looks for opportunities to apply DeepMind’s techniques to existing Google products.
DeepMind’s technology could be used to refine YouTube’s recommendations or improve the company’s mobile voice search, says Hassabis. “You’ll see some of our technology embedded into those kinds of things in the next few years,” he says. Google isn’t the only one convinced this approach could be a money-spinner. Last month, Hassabis received the Mullard Award from the U.K.’s Royal Society for work likely to benefit the country’s economy.
But Hassabis sounds more excited when he talks about going beyond just tweaking the algorithms behind today’s products. He dreams of creating “AI scientists” that could do things like generate and test new hypotheses about disease in the lab. When prodded, he says that DeepMind’s software could also be useful in robotics, an area in which Google has recently invested heavily (see “The Robots Running This Way”). “One reason we don’t have more robots doing more helpful things is that they’re usually preprogrammed,” he says. “They’re very bad at dealing with the unexpected or learning new things.”
Hassabis’s reluctance to talk about applications might be coyness, or it could be that his researchers are still in the early stages of understanding how to advance the company’s AI software. One strong indicator that Hassabis expects swift progress toward a powerful new form of AI is that he is setting up an ethics board inside Google to consider the possible downsides of advanced artificial intelligence. “It’s something that we or other people at Google need to be cognizant of. We’re still playing Atari games currently,” he says, laughing. “But we are on the first rungs of the ladder.”
This story was updated on December 3 to reflect that DeepMind’s Atari-playing software did not learn to beat a human expert at Space Invaders.