What the history of AI tells us about its future

But what computers had traditionally been bad at was strategy – the ability to think about the shape of the game many, many moves into the future. That's where humans still had the upper hand.

Or so Kasparov thought, until a move by Deep Blue in the second game unsettled him. It seemed so sophisticated that Kasparov began to worry: maybe the machine was far better than he'd thought. Convinced he had no chance of winning, he resigned the second game.

But he shouldn't have. Deep Blue, it turned out, wasn't actually that good. Kasparov had failed to spot a move that would have let the game end in a draw. He was psyching himself out: worried that the machine might be far more powerful than it really was, he had begun to see human-like reasoning where none existed.

Knocked off his rhythm, Kasparov played worse and worse. He psyched himself out again and again. Early in the sixth, winner-takes-all game, he made a move so poor that chess observers cried out in shock. "I was not in the mood to play at all," he said later at a press conference.

IBM benefited handsomely from its moonshot. In the press frenzy that followed Deep Blue's success, the company's market capitalization grew by $11.4 billion in a single week. Even more significant, though, was that IBM's triumph felt like a thaw in the long AI winter. If chess could be conquered, what was next? The public's mind reeled.

“That,” Campbell tells me, “is what made people pay attention.”


In truth, it wasn't surprising that a computer beat Kasparov. Most people who had been paying attention to AI – and to chess – expected it to happen eventually.

Chess may seem like the pinnacle of human thought, but it isn't. Indeed, it's a mental task quite amenable to brute-force computation: the rules are clear, there's no hidden information, and a computer doesn't even need to keep track of what happened on previous moves. It just assesses the position of the pieces right now.

“There are very few problems where, as in chess, you have all the information you might need to make the right decision.”

Everyone knew that once computers got fast enough, they would overwhelm a human. It was only a question of when. By the mid-1990s, "the writing was already on the wall, in a sense," says Demis Hassabis, head of the AI company DeepMind, part of Alphabet.

Deep Blue's victory was the moment that showed just how limited hand-coded systems could be. IBM had spent years and millions of dollars developing a computer to play chess. But it couldn't do anything else.

"It didn't lead to the breakthroughs that allowed the [Deep Blue] AI to have a huge impact on the world," Campbell says. They didn't really discover any principles of intelligence, because the real world doesn't resemble chess. "There are very few problems where, as in chess, you have all the information you might need to make the right decision," Campbell adds. "Most of the time there are unknowns. There's randomness."

But even as Deep Blue was wiping the floor with Kasparov, a handful of scrappy upstarts were tinkering with a radically more promising form of AI: the neural network.

With neural networks, the idea was not, as with expert systems, to patiently write rules for every decision an AI will make. Instead, training and reinforcement strengthen internal connections, in rough emulation (so the theory goes) of how the human brain learns.

1997: After Garry Kasparov beat Deep Blue in 1996, IBM asked the world chess champion for a rematch in New York against an upgraded machine.


The idea had been around since the 1950s. But training a usefully large neural network required lightning-fast computers, tons of memory, and lots of data. None of that was available then. Even into the 1990s, neural networks were considered a waste of time.

"Back then, most people in AI thought neural networks were just rubbish," says Geoff Hinton, an emeritus professor of computer science at the University of Toronto and a pioneer of the field. "I was called a 'true believer'" – not a compliment.

But by the 2000s, the computer industry was evolving in ways that made neural networks viable. Video game players' thirst for ever-better graphics created a huge industry in ultrafast graphics processing units, which turned out to be perfectly suited to neural-network math. Meanwhile, the internet was exploding, producing a torrent of images and text that could be used to train the systems.

By the early 2010s, these technical leaps allowed Hinton and his crew of true believers to take neural networks to new heights. They could now create networks with many layers of neurons (which is what the "deep" in "deep learning" means). In 2012, his team handily won the annual ImageNet competition, where AIs compete at recognizing elements in images. It stunned the world of computer science: self-learning machines were finally viable.

A decade into the deep-learning revolution, neural networks and their pattern-recognizing abilities have colonized every corner of everyday life. They help Gmail autocomplete your sentences, help banks detect fraud, let photo apps automatically recognize faces, and – in the case of OpenAI's GPT-3 and DeepMind's Gopher – write long, human-sounding essays and summarize texts. They are even changing how science is done; in 2020, DeepMind debuted AlphaFold2, an AI that can predict how proteins will fold – a superhuman skill that can help researchers develop new drugs and treatments.

Meanwhile, Deep Blue vanished, leaving behind no useful inventions. Chess playing, it turned out, wasn't a computer skill that was needed in everyday life. "What Deep Blue in the end showed was the shortcomings of trying to handcraft everything," says DeepMind founder Hassabis.

IBM tried to remedy the situation with Watson, another specialized system, this one designed to tackle a more practical problem: getting a machine to answer questions. It used statistical analysis of vast volumes of text to achieve a comprehension of language that was advanced for its time. It was more than a simple if-then system. But Watson faced unlucky timing: only a few years later it was eclipsed by the revolution in deep learning, which brought in a generation of language-crunching models far more nuanced than Watson's statistical methods.

Deep learning has run roughshod over old-school AI precisely because "pattern recognition is incredibly powerful," says Daphne Koller, a former Stanford professor who founded and runs Insitro, which uses neural networks and other forms of machine learning to investigate novel drug treatments. The flexibility of neural networks – the wide variety of ways pattern recognition can be used – is the reason there hasn't yet been another AI winter. "Machine learning has actually delivered value," she says, something the "previous waves of exuberance" in AI never did.

The inverted fortunes of Deep Blue and neural networks show how bad we were, for so long, at judging what is hard – and what is valuable – in AI.

For decades, people assumed that mastering chess would be important because, well, chess is hard for humans to play at a high level. But chess turned out to be fairly easy for computers to master, because it's so logical.

What was far harder for computers to learn was the casual, unconscious mental work that humans do – like carrying on a lively conversation, driving through traffic, or reading a friend's emotional state. We do these things so effortlessly that we rarely realize how tricky they are, and how much fuzzy, grayscale judgment they require. Deep learning's great utility comes from its ability to capture small pieces of this subtle, unheralded human intelligence.


Still, there is no final victory in artificial intelligence. Deep learning may be riding high now, but it is also amassing sharp critiques.

"For a very long time, there was this techno-chauvinist enthusiasm that, okay, AI is going to solve every problem!" says Meredith Broussard, a programmer turned journalism professor at New York University and author of Artificial Unintelligence. But as she and other critics have pointed out, deep-learning systems are often trained on biased data – and absorb those biases. The computer scientists Joy Buolamwini and Timnit Gebru found that three commercially available visual AI systems were terrible at analyzing the faces of darker-skinned women. Amazon trained an AI to vet résumés, only to find it downranked women.

Although computer scientists and many AI engineers are now aware of these bias problems, they are not always sure how to deal with them. On top of that, neural networks are also "massive black boxes," says Daniela Rus, a veteran of AI who currently runs MIT's Computer Science and Artificial Intelligence Laboratory. Once a neural network is trained, its mechanics are not easily understood even by its creator. It is not clear how it comes to its conclusions – or how it will fail.

For a very long time, there was this techno-chauvinist enthusiasm that, "Okay, AI is going to solve every problem!"

Relying on a black box for a task that is not "safety critical" may not be a problem, Rus figures. But what about a higher-stakes job, like autonomous driving? "It's actually quite remarkable that we could put so much trust and faith in them," she says.

This is where Deep Blue had an advantage. The old-school style of handcrafted rules may have been brittle, but it was comprehensible. The machine was complex, but it wasn't a mystery.


Ironically, that old style of programming could stage something of a comeback as engineers and computer scientists grapple with the limits of pattern matching.

Language generators such as OpenAI's GPT-3 or DeepMind's Gopher can take a few sentences you have written and keep going, producing pages and pages of plausible-sounding prose. But despite some impressive mimicry, Gopher "still doesn't really understand what it's saying," Hassabis says. "Not in a true sense."

Similarly, visual AI can make terrible mistakes when it encounters an edge case. Self-driving cars have slammed into fire trucks parked on highways because, in all the millions of hours of video they were trained on, they had never encountered that situation. Neural networks have their own version of the "brittleness" problem.
