The surprise and threat of artificial intelligence

Algorithms powered by artificial intelligence are solving problems with unexpected tricks, surprising their developers. At the same time, this has raised concerns about how to keep AI under control.
A group of Google employees stared blankly at their computer screens. For months, they had been perfecting an algorithm designed to steer an unmanned balloon from Puerto Rico all the way to Peru. But something was still off: under the control of machine intelligence, the balloon kept veering off course.
Project Loon, one of Google’s now-discontinued projects, was designed to bring Internet access to remote areas via balloons. Salvatore Candido, the project’s director, could not explain the balloon’s trajectory. Eventually, his colleagues manually took control of the system and got the balloon back on track.
Only later did they realize what had happened. In an unexpected twist, the artificial intelligence aboard the balloon had learned to recreate an ancient navigation technique humans invented hundreds, if not thousands, of years ago: tacking, which involves steering a vessel into the wind and then angling back out, zigzagging in the general direction of the destination.
The autonomous balloons had learned to tack all by themselves in adverse wind conditions. They did it spontaneously, to the surprise of everyone, especially the researchers involved in the project.
“When the first balloon allowed to fully execute this technique set a flight-time record from Puerto Rico to Peru, we immediately knew we had been beaten,” Candido wrote in a blog post about the project. “I’ve never felt so smart and so stupid at the same time.”
Creative artificial intelligence
That’s what can happen when AI is left to its own devices. Unlike traditional computer programs, AI systems are designed to explore and develop new ways of completing tasks, approaches their human engineers never explicitly spelled out.
However, as well as learning how to do these tasks, AI sometimes comes up with creative approaches that surprise even the people who work with such systems every day. That can be a good thing, but it can also make everything AI controls unpredictable and potentially dangerous. Robots and self-driving cars, for example, could end up making decisions that put humans at risk.
How could an AI system “outsmart” its human master? Can we somehow constrain machine intelligence to avoid some unforeseen catastrophe?
In the AI research community, one example of machine creativity is cited more often than any other. The moment that really got people excited about AI’s capabilities was when DeepMind’s machine learning system AlphaGo mastered the ancient game of Go and then beat one of the best human players in the world. DeepMind, an artificial intelligence company founded in 2010, was acquired by Google in 2014.
It turned out that AlphaGo could use strategies and techniques against human players that had never been used before, or at least that few people knew about.
Even a board game, however, can evoke mixed feelings. On the one hand, DeepMind proudly described the “innovations” of its AlphaGo system, which revealed new ways of playing a game humans have studied for thousands of years. On the other hand, some question whether such creative AI will one day pose a serious threat to humanity.
After AlphaGo’s historic victory, Jonathan Tapson, a machine learning, electronics and neuroscience researcher at Western Sydney University in Australia, wrote: “It’s laughable to think that we can predict or manage the worst-case behavior of AI when we can’t actually imagine how it might behave.”
The important thing to remember is that AI systems don’t really think like humans. Their neural networks are indeed inspired by animal brains, but they are better described as “exploration devices.” They come with few, if any, preconceptions about the wider world when trying to solve a task or problem. They just try, sometimes millions of times, to find a solution.
We humans carry a lot of mental baggage; we think in terms of rules. AI systems have no concept of the rules at all, so they can fiddle with things at will, as the sketch below illustrates.
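As a rough illustration of that kind of blind trial and error, here is a minimal sketch in Python (a hypothetical toy example, not anything from Google’s systems or the research described here): a searcher with no prior knowledge of the problem simply proposes random candidates, millions of times, and keeps whichever scores best.

```python
import random

# Toy objective: how close a candidate is to a hidden target value.
# The target and search range are made up purely for illustration.
def score(candidate):
    return -abs(candidate - 42.1234)

best, best_score = None, float("-inf")
for _ in range(1_000_000):                      # just try, millions of times
    candidate = random.uniform(-1000, 1000)     # no rules, no prior knowledge
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s

print(f"best guess after a million blind tries: {best:.4f}")
```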
In this context, AI can be described as the silicon equivalent of savant syndrome, a condition in which a person with significant mental disabilities nevertheless shows extraordinary artistic or intellectual abilities, often related to memory.
Keep surprising us
One of the ways artificial intelligence surprises us is its ability to turn the same basic system to fundamentally different problems. Recently, a machine learning tool built for generating text was asked to do something quite different: play chess.
The system, called GPT-2, was developed by OpenAI, an artificial intelligence research organization. GPT-2 was trained on millions of online news articles and web pages to predict the next word in a sentence from the words that come before it. Developer Shawn Presser reasoned that, since chess moves can be represented as combinations of letters and numbers, an algorithm trained on records of chess matches could learn to play by predicting promising sequences of moves.
Presser trained the GPT-2 system on 2.4 million chess games. “It was really cool to see the chess engine come to life,” he said. “I wasn’t even sure it would work.” But it did. Although it is not as good as a purpose-built chess computer, it can hold its own in tough games.
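To make the idea concrete, here is a minimal sketch of the approach described above: chess games are treated as plain text, one space-separated move list per line, and a pretrained GPT-2 model is fine-tuned to predict the next move token. This is not Presser’s actual code; it assumes the Hugging Face transformers library and a hypothetical games.txt training file.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Hypothetical training data: one game per line, e.g. "e4 e5 Nf3 Nc6 Bb5 a6 ..."
with open("games.txt") as f:
    games = [line.strip() for line in f if line.strip()]

model.train()
for game in games:
    batch = tokenizer(game, return_tensors="pt", truncation=True, max_length=512)
    # Language-modelling loss: predict each move token from the ones before it.
    out = model(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# To "play", feed the moves so far and sample a continuation.
prompt = tokenizer("e4 e5 Nf3", return_tensors="pt")
moves = model.generate(**prompt, max_new_tokens=6, do_sample=True, top_k=50)
print(tokenizer.decode(moves[0]))
```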
The experiment suggested that GPT-2 had many unexplored capabilities. A later version of the software stunned web developers when one of them trained it simply to write the code for displaying items, such as text and buttons, on a web page. Given descriptions as simple as “red text for ‘I love you’ and a button with an ‘OK’ on it,” the AI generated the appropriate code. It had clearly picked up the basics of web design, despite surprisingly little training.
For a long time, artificial intelligence has been at its most impressive in video games. In the AI research community, there are countless examples of just how remarkable algorithms can be in virtual environments. Researchers often test and hone algorithms in spaces such as video games to see how powerful they really are.
In 2019, OpenAI made headlines with a video in which characters controlled by machine learning play hide-and-seek. To the researchers’ surprise, the “seekers” eventually discovered that they could “surf” by jumping on top of objects and riding them over the walls into the enclosures where the “hiders” were sheltering. In other words, the seekers learned to bend the rules of the game to their own advantage.

Trial-and-error strategies lead to all sorts of interesting behaviors, but they don’t always lead to success. Two years ago, DeepMind researcher Victoria Krakovna asked readers of her blog to share stories of AI solving problems in ways that were unexpected or undesirable.
She compiled a long list of fascinating examples. One game-playing algorithm learned to kill itself at the end of level one in order to avoid dying in level two; it achieved the goal of not dying in level two, just not in the way anyone intended. Another algorithm found that it could jump off a cliff in a game and take an opponent down with it; the points this earned gave the AI enough extra lives to repeat the suicidal strategy over and over in an infinite loop.
Julian Togelius, a video game artificial intelligence researcher at New York University’s Tandon School of Engineering, tried to explain what was happening. These are classic examples of misallocated rewards, he says. When an AI is asked to accomplish something, it may find strange and unexpected ways of doing so that nonetheless technically satisfy the goal. Humans rarely adopt such strategies, because the methods and unwritten rules that guide how we play matter to us.
Researchers have found that this kind of goal fixation is exposed when AI systems are tested under unusual conditions. In one recent experiment, AI-controlled game characters that were required to invest money in a bank took to running into a corner near the virtual bank lobby and waiting for their returns. The algorithm had learned to associate running into the corner with getting money, even though the movement had no real connection to the size of the return.
It’s a bit like an AI developing a superstition: after receiving a certain reward or punishment, it latches onto the wrong explanation for why it got it.
This is one of the pitfalls of reinforcement learning, in which an AI learns by trial and error from the rewards and penalties it receives from its environment, and can end up devising misguided strategies along the way. The AI doesn’t know why it succeeded; it can only act on the associations it has learned. It’s a bit like the early days of human culture, when people linked their prayers to changes in the weather.
Another interesting example involves pigeons doing much the same thing. In 1948, the American psychologist B. F. Skinner published a paper describing an unusual experiment in which he placed pigeons in cages and rewarded them with food at set intervals, regardless of what they were doing. The pigeons began to associate the food with whatever they happened to be doing at the time, sometimes flapping their wings, sometimes performing dance-like movements. They then repeated those behaviors, apparently expecting a reward to follow.
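The same mistaken credit assignment can be reproduced in a few lines of code. The following is a minimal sketch (a hypothetical toy example, not taken from any of the systems described here): reward arrives on a fixed timer, completely independent of the agent’s behavior, yet a simple greedy value-learner ends up fixating on whichever arbitrary action happened to precede the early rewards.

```python
import random

random.seed(0)
ACTIONS = ["flap_wings", "turn_in_circle", "stand_still"]
values = {a: 0.0 for a in ACTIONS}   # estimated value of each action
ALPHA, EPSILON = 0.1, 0.05           # learning rate and small exploration rate

chosen = []
for step in range(1, 2001):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)        # occasional random exploration
    else:
        action = max(values, key=values.get)   # otherwise repeat the "best" action
    reward = 1.0 if step % 15 == 0 else 0.0    # food every 15 steps, no matter what
    # Credit the reward to whatever action preceded it (the mistaken association).
    values[action] += ALPHA * (reward - values[action])
    chosen.append(action)

print(values)
print("most repeated action:", max(set(chosen), key=chosen.count))
```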
New ways to solve old problems
There are huge differences between the game-playing AIs in these tests and the live animals psychologists study, but the same basic mechanism seems to be at work: rewards become erroneously associated with specific behaviors.
AI researchers may be surprised by the paths machine learning systems take, but that doesn’t mean they are in awe of them. “I’ve never felt that these AIs had a mind of their own,” says Raia Hadsell, a deep learning research scientist at DeepMind.
Hadsell has experimented with many AI systems and found that they can come up with interesting, novel solutions that she and her colleagues hadn’t anticipated. In her view, that is precisely why researchers should keep improving AI: so that it can do things humans can’t do on their own.
Products that use artificial intelligence, such as self-driving cars, can be rigorously tested to ensure that any unpredictability stays within an acceptable range. Only time will tell whether every company selling AI products is that cautious. Meanwhile, it is worth noting that the unexpected behavior AI exhibits is no longer confined to research environments; it has entered the realm of commercial products.
In 2020, at a factory in Berlin, Germany, a robotic arm developed by Covariant, an American reinforcement learning robotics company, showed unexpected ways of sorting items as they passed along a conveyor belt. Although it had no specific program for the task, the artificial intelligence controlling the arm learned to aim for the center of each transparently wrapped object to make sure it grabbed it successfully every time. Because the items are transparent and can be confused with one another when they overlap, imprecise targeting could leave the robot unable to pick them up. The AI learned to avoid the spots where objects overlapped and to aim instead for the surfaces that were easiest to pick up, which was genuinely surprising.
Similarly, the team recently tested a robotic arm on a shape-sorting task, picking up different objects and dropping them into holes of the matching shape. At first the arm was clumsy, but under the control of artificial intelligence it learned by repeatedly picking up and dropping the objects. Eventually it could grab an item in just the right spot and slide it smoothly into the appropriate hole, rather than fumbling with it in its gripper.
All of this supports the view of OpenAI research manager Jeff Clune that AI’s exploratory nature is fundamental to its future success. In recent years, Clune has been working with colleagues around the world to collect examples of AI developing solutions to problems in unexpected ways.
“As we continue to scale up these AI systems, we can see them doing things that are creative and impressive, rather than merely being academic curiosities,” Clune said.
If AI systems can find better ways to diagnose diseases or deliver emergency supplies to those in need, they could save lives. Clune added that AI has the ability to find new ways of solving old problems. But he also believes that those developing such systems need to be open and honest about their unpredictable nature, to help the public understand how AI works.
After all, it is a double-edged sword; the promise and the threat of AI have always gone hand in hand. What will these systems think of next? That remains an intriguing question.