Beyond AlphaGo: Deep Learning and the Neural Network

When you bring up the topic of artificial intelligence to the general public, one of the first things that comes to mind is either Terminator’s Skynet or the robots of The Matrix. But stories of man’s creations rising up against him are not new. In fact, the word robot was first used in 1920 by Karel Čapek in his play, R.U.R. Back then, the word robot didn’t refer to the mechanized automatons we’re used to today, but simply to scientifically engineered humanoids.

In addition to being the originator of the word robot, R.U.R. is also the first story to explore the concept of robots rising up and turning on their creators. And even older than that is Mary Shelley’s classic Frankenstein.

Smile if you love your robotic overlords!

But instead of being scared off by these stories, we’ve embraced them. In fact, AI experts today are even researching a process called deep learning, which aims to bring artificial intelligence out of science fiction and into real life.

Deep learning is a form of machine learning that’s geared towards advancing computer intelligence. In traditional machine learning, a computer is gradually exposed to new data over a period of time and taught to make predictions based on that data. Developers then go back into the software and tweak its parameters in order to improve the quality of those predictions.
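
To make that loop a little more concrete, here’s a toy sketch in Python (the data and the “model” are invented purely for illustration): the model is nothing more than a single threshold, and improving it means a developer looking at the results and adjusting that one parameter by hand.

```python
# A toy sketch of classic machine learning: the model makes predictions
# from data, and a developer tunes a parameter by hand to improve them.
# The data below is invented purely for illustration.

# (hours of daylight, did the ice cream stand sell out?)
examples = [(8, False), (10, False), (12, True), (14, True), (15, True)]

def predict(hours, threshold):
    """Predict a sell-out whenever daylight hours exceed the threshold."""
    return hours > threshold

def accuracy(threshold):
    correct = sum(predict(hours, threshold) == sold_out for hours, sold_out in examples)
    return correct / len(examples)

# The "developer tweak" step: try a setting, look at the results, adjust it.
for threshold in (9, 11, 13):
    print(f"threshold={threshold}: accuracy={accuracy(threshold):.0%}")
```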

Deep learning, however, uses repeated exposure to multiple data sets, typically images or sound clips, in order to identify key classifying features. The computer then presents a prediction based on those features, and developers provide feedback, either confirming the prediction or providing a correction. It’s a process very similar to human learning, and in fact deep learning actively attempts to mimic the human mind by using systems referred to as artificial neural networks.
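
Here’s a minimal sketch of that feedback loop, with every file name and label invented for illustration: the system makes a guess, a human confirms or corrects it, and the correction is kept as a labeled example for the next round of training.

```python
# A minimal sketch of the feedback loop: the system guesses, a human confirms
# or corrects the guess, and the correction becomes a labeled training example.
# Every file name and label here is invented for illustration.

def classify(clip):
    """Stand-in for the model's guess about a sound clip."""
    return "dog bark" if "bark" in clip else "unknown"

def human_feedback(clip, guess):
    """Stand-in for a developer confirming or correcting the guess."""
    truth = {"bark_01.wav": "dog bark", "meow_01.wav": "cat meow"}
    return truth.get(clip, guess)

training_set = []
for clip in ["bark_01.wav", "meow_01.wav"]:
    guess = classify(clip)
    label = human_feedback(clip, guess)   # confirmed, or corrected
    training_set.append((clip, label))    # corrections feed the next round of training
    print(f"{clip}: guessed {guess!r}, labeled {label!r}")
```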

A simplified model of a neural network. Many networks will have more hidden layers to pass through before hitting the output layer. Image courtesy of neuralnetworksanddeeplearning.com

A neural network is a mathematical model made up of multiple layers. When data is sent through a neural network, it travels through a series of intermediate layers, known as hidden layers. At each layer the data is checked against certain parameters, until it reaches the output layer, where a prediction is made about the content of the data. How a neural network checks those parameters is defined by something called a learning rule.
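
As a rough illustration of that journey, here’s a bare-bones forward pass in Python, with all the weights and inputs made up: three input values pass through one small hidden layer and then an output layer, which produces a single number that serves as the prediction.

```python
import math

def sigmoid(x):
    # Squashes any number into the 0..1 range; a common "check" applied at each layer.
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron in the layer weighs every input, adds a bias, and squashes the result.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Made-up numbers: 3 input values -> 2 hidden neurons -> 1 output neuron.
inputs = [0.5, 0.1, 0.9]
hidden = layer(inputs, weights=[[0.4, -0.2, 0.7], [0.1, 0.9, -0.5]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.8]], biases=[-0.3])

print("prediction:", output[0])   # close to 1 means "yes", close to 0 means "no"
```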

While there are many different kinds of learning rules, such as the Perceptron learning rule, the Widrow-Hoff learning rule, and the Adaptive Ho-Kashyap (AHK) learning rule, the Delta learning rule is one of the most common, and it’s the one used by back-propagation neural networks. Now, back-propagation sounds like a really fancy word, but all it really means is that the network learns from its mistakes: the error in its output is fed backwards through the layers to correct the connections, much like a person learning by trial and error.
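
For the curious, here’s a back-of-the-envelope sketch of the Delta rule for a single artificial neuron, with every number invented for illustration: each weight gets nudged in proportion to the error between the target answer and the neuron’s output, so repeated exposure to the same example steadily shrinks that error.

```python
# A sketch of the Delta learning rule for one neuron: nudge each weight in
# proportion to the error between the target and the output.
# All values below are invented for illustration.

learning_rate = 0.1
weights = [0.2, -0.4, 0.6]

def train_step(inputs, target):
    output = sum(w * x for w, x in zip(weights, inputs))  # the neuron's guess
    error = target - output                               # how wrong it was
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * error * x           # the Delta rule update
    return error

# Repeated exposure to the same example steadily shrinks the error.
for step in range(5):
    print("error:", round(train_step([1.0, 0.5, -1.0], target=1.0), 4))
```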

If it sounds like this process would take a long time to complete, you’d be right, at least if only one computer were going through it. But many of the organizations investing in deep learning, like Google, use large numbers of computers at once.

In June of 2012, Google compiled one of the largest neural networks to date, made up of 16,000 computer processors running over a billion connections, and showed it 10 million randomly selected YouTube videos. Without any human input, it was able to positively identify 16% of the content, which doesn’t seem impressive until you realize that this was a 70% increase over previous models. When they recalibrated the system and made the sorting categories more general (down to 1,000 categories from 22,000), the accuracy rate jumped to 50%.

So what, you may ask, are we doing with this technology, besides showing it a lot of cat videos off the internet? Well, so far, deep learning has increased the accuracy of voice recognition software, and there have been great strides in translation technology as well. It’s also been used in robotics to increase the dexterity and precision of robot movement. Finally, one of the most impressive things we’ve done is teach a computer how to play the Chinese board game Go.

Not only does the computer play this abstract strategy game well, but it has defeated master players time and time again. Of course, this is because the computer, AlphaGo, prioritizes the win condition over gaining points, which is something few humans would consciously choose to do. After all, crushing an opponent by 50 points is far more satisfying than winning by 2 points, even if the possibility of losing is slightly higher.
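
Here’s a toy illustration of that trade-off (this is not AlphaGo’s actual code; the moves and numbers are invented): a margin-chasing player picks whichever move promises the biggest score, while a win-maximizing player picks whichever move is most likely to win at all.

```python
# A toy illustration (not AlphaGo's actual code) of two different objectives:
# chase the biggest point margin, or simply maximize the chance of winning.

candidate_moves = [
    # (name, estimated point margin, estimated probability of winning)
    ("aggressive invasion", +50, 0.78),
    ("quiet endgame move",   +2, 0.95),
]

human_pick = max(candidate_moves, key=lambda m: m[1])    # biggest margin
engine_pick = max(candidate_moves, key=lambda m: m[2])   # highest win probability

print("margin-chaser plays:", human_pick[0])
print("win-maximizer plays:", engine_pick[0])
```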

AlphaGo’s objectivity is part of the reason why it is able to keep racking up wins against the best of human players, but what does this mean for the future?

Will our Go playing robot overlords decide that our human emotions are the cause of our suffering?

Will they lock us in tubes to serve as batteries for their highly complex and logical society?

Or will they merely declare us obsolete and send mechanical assassins back in time to erase humanity’s resistance leaders from existence in order to crush our inborn need to survive, despite their best efforts to crush us beneath their mighty metallic heels?

One can only hope that the artificial intelligence of tomorrow is far more benevolent than science fiction likes to depict. It’s hard to deny, though, that the nerds in us are half excited to see what we can make out of our machines, and half preparing to save John Connor.

For now, though, we’ll stick with losing at board games.


About the Author:

Andrew is a technical writer for Deep Core Data. He has been writing creatively for 10 years, and has a strong background in graphic design. He enjoys reading blogs about the quirks and foibles of technology, gadgetry, and writing tips.
