In December it was announced that a chess player with just four hours of training had beaten the world-champion chess-playing program. The player was AlphaZero, an artificial intelligence (AI) program developed by DeepMind, a Google subsidiary.
It was told the rules of the game but was given no chess-playing strategies or solutions. It knew very little at the outset, played itself over and over again (rather randomly at first), and improved very quickly over the course of 68 million games. It also had the benefit of massive computing power, something unavailable to its opponent, which ran on an ordinary PC.
It was one of a trio of victories for the program. “AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case,” say the authors of the paper “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.”
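AlphaZero’s real algorithm pairs deep neural networks with Monte Carlo tree search, but the core idea of a program learning a game purely from self-play can be sketched on a much smaller scale. The toy program below is a hypothetical illustration, not anything from the paper: it uses tabular Q-learning to teach itself the pile game Nim (take 1–3 stones per turn; whoever takes the last stone wins). It plays randomly at first, plays itself thousands of times, and gradually discovers the winning strategy of leaving its opponent a multiple of four stones.

```python
import random
from collections import defaultdict

Q = defaultdict(float)          # value of taking `a` stones from a pile of `p`
ACTIONS = [1, 2, 3]

def legal(pile):
    return [a for a in ACTIONS if a <= pile]

def best(pile):
    # The move the program currently believes is strongest.
    return max(legal(pile), key=lambda a: Q[(pile, a)])

def play_episode(start=10, eps=0.2, alpha=0.5):
    """One game of self-play: both 'players' share and update the same table."""
    pile = start
    while pile > 0:
        acts = legal(pile)
        # Mostly play the best known move, sometimes explore randomly.
        a = random.choice(acts) if random.random() < eps else best(pile)
        nxt = pile - a
        if nxt == 0:
            target = 1.0        # taking the last stone wins
        else:
            # The opponent moves next, so our value is the negation of
            # the best value available to them (negamax-style update).
            target = -max(Q[(nxt, b)] for b in legal(nxt))
        Q[(pile, a)] += alpha * (target - Q[(pile, a)])
        pile = nxt

random.seed(0)
for _ in range(20000):          # "rather randomly at first," then better
    play_episode()
```

After training, `best(5)`, `best(6)`, and `best(7)` return 1, 2, and 3 respectively, each leaving the opponent a losing pile of four stones, even though the program was never told that strategy.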
Artificial intelligence, or machine intelligence, stands in contrast to human or animal intelligence. When a device seems to be learning or solving problems, some say that machine has artificial intelligence. When it does those things better than we do, it is called “superhuman.”
Because our world is increasingly filled with data, AI can help us make sense of it all. AI has applications in healthcare, autonomous vehicles, finance, speech recognition, and more. Any device or company that provides recommendations to you, for example, is probably using some flavor of AI to tell you what you might like. And if you’ve played a computer game, you’ve won or lost to an AI.
Current goals for AI research include adding the ability to reason, plan, and perceive.
Neural networks can learn patterns of behavior and are used to power AI. The idea is to have a system, a “neural network” of interconnected computer nodes, that can experiment rapidly and learn from its mistakes. A neural network is “trained” through repetition until it “learns”; then it can respond to new input.
An algorithm, a sequence of actions to be performed, is what carries out that training.
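As an illustration of “training through repetition,” here is about the smallest neural network possible: a single artificial neuron whose weights are nudged a little after every mistake until it reliably computes the logical OR of its two inputs. The specific numbers and learning rate are arbitrary choices for this sketch.

```python
import random

# Toy "neural network": one neuron with two inputs and a bias,
# trained by repetition to learn the logical OR function.
random.seed(1)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

lr = 0.1                        # how big a nudge each correction gives
for _ in range(100):            # "training" = repeated error correction
    for x, target in data:
        err = target - predict(x)   # 0 if right, +1/-1 if wrong
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err
```

After the loop, `predict` answers all four OR cases correctly. Real networks have millions of these neurons, but the train-by-correction loop is the same idea.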
AI can be used in robots, but doesn’t require them. AI can exist simply as software on a machine connected to the internet.
As with most technologies, whether AI is useful or harmful depends on how we use it.
In the useful category, for example, a team of scientists created a new algorithm that can look at an image and then fix blurriness, grainy noise, missing pixels, or color errors. What they find exciting is that their neural network can address a wide variety of problems at the same time, rather than fixing each problem individually.
In order for it to work, they had to give the system a large database of good images, so it could “learn” what makes for a good image. After that, it can recognize and fix deviations it finds in new images. Sort of. It can have trouble with finer details, such as hair, so the team is working to improve the algorithm by teaching it context, such as “hair is often at the top of a face.” Regardless, the days of bad photos may soon be over thanks to AI.
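The researchers’ actual network is far more sophisticated, but the learn-then-fix-deviations idea can be sketched very simply: learn statistics from known-good examples, then replace anything in a new input that deviates too far from them. Everything below, the toy five-pixel “images” and the three-standard-deviation threshold, is an invented illustration, not the team’s method.

```python
import statistics

# "Training" data: tiny known-good "images" (lists of brightness values).
good_images = [
    [100, 102, 98, 101, 99],
    [101, 99, 100, 102, 100],
    [99, 100, 101, 98, 102],
]

# Learn what a typical good pixel looks like.
pixels = [p for img in good_images for p in img]
mean = statistics.mean(pixels)
stdev = statistics.stdev(pixels)

def repair(img, k=3):
    # Replace any pixel more than k standard deviations
    # from the learned mean with the mean itself.
    return [p if abs(p - mean) <= k * stdev else round(mean) for p in img]

noisy = [100, 255, 99, 0, 101]   # two corrupted pixels
print(repair(noisy))             # → [100, 100, 99, 100, 101]
```

The corrupted pixels (255 and 0) are pulled back toward what the system learned is normal, while the healthy pixels pass through untouched. It is also a fair miniature of the limitation mentioned above: a crude statistical notion of “normal” has no idea about context like hair or faces.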
In the not-entirely-creepy category, Cornell is working on new “neuromorphic” computer chips. Instead of processing in binary 1s and 0s like our current computers, these chips process spikes of electrical current in complex combinations. The researchers hope to mimic neural activity and pack some powerful computation into a very small space.
As an experiment, they are building a RoboBee that can instantly respond to changes in wind. As you might expect, it is being developed using a neural network so it can learn from previous experiences.
Obviously, AI has useful applications.
Consider this: businesses are using AI to generate and analyze hypotheses on its own, then share the results with humans to help them make decisions. At the same time, AI can generate artificial yet convincing Yelp reviews. Could this lead to AI assistants interpreting data created by other AI assistants?
AI will write stories and books. The AP is already experimenting with algorithmically written stories, and bots offer up comments on major social media platforms daily.
AI can be used to recreate anyone’s voice from a small sample. An artificial voice could help someone with damaged vocal cords, or it could be used for a cruel prank (a “sibling” calling with unfortunate news). The ability to discern what is real and what is not is now a greater concern.
That leads us to the possibly-dangerous category. A group of scientists recently released a modification to the computer game Civilization V that allows players to use AI, and they offer a warning: “In the modified game, artificial intelligence initially provides benefits, and eventually can turn into superintelligence that brings mastery of science to its discoverer. However, if there is too little investment in AI safety research, rogue superintelligence can destroy humanity and bring an instant loss of the game.”
One of the researchers, Shahar Avin, shared an insight: “Something that struck me as surprising was if the geopolitical situation is very messy,” he said. “Let’s say you’re stuck between two aggressive civilizations. It becomes very difficult to manage AI risk because your resources are devoted to fighting wars.”
There are groups concerned about the future of AI, and one of their concerns is misalignment: the worry that AI will become competent and powerful (not necessarily evil) but not aligned with our goals. AI is very good at reaching whatever goals it is given.
A related worry is that if we cede our position as the smartest beings on the planet, we may also cede control. What happens if AI develops a weapons system we humans can’t understand or out-manipulates human leaders?
AI is already an almost invisible layer in our lives. As it becomes more powerful and is combined with other technologies, we will need to take steps to keep AI applications under control.
We might task an AI assistant to do it for us.