The Nightmare Machine: How AI is Taking Fear to the Next Level

Disclaimer: More Robopocalypse talk coming. Rest assured that despite the constant Matrix-like scenarios, we’re actually big fans of AI technology. But we’re also sci-fi geeks, so we have to get it out somewhere.

Earlier this year, I wrote about the basics of how machine learning works, and how we’ve been using it to train computer programs to beat us at the Chinese strategy game, Go. You’d think that teaching a computer how to think strategically and crush its opponents beneath its cybernetic heel would be enough for researchers, but they’ve decided to raise the bar again.

Now, they want to teach computers just what it is that humans fear.

It started in August, when researchers at IBM showed Watson hundreds of horror movie trailers, then had it perform a series of audio, visual, and compositional analyses to get a feel for just what makes a horror movie trailer good. It was then tasked with creating a movie trailer of its own – or at the very least, picking out the perfect scenes to include in a movie trailer. While a human still had to piece the scenes together, using Watson reduced the time it takes to create a trailer from 10 to 30 days down to about 24 hours, and the result is suitably eerie.
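IBM hasn’t published the details of Watson’s trailer pipeline, but the core idea – score each scene on a set of audio, visual, and compositional features, then keep the highest-scoring ones – can be sketched in a few lines. Everything here (the feature names, the weights, the scenes) is invented for illustration:

```python
# Toy sketch of feature-based scene ranking for a trailer.
# The features and weights are made up; Watson's actual audio, visual,
# and compositional analysis is far more involved than a weighted sum.

def score_scene(scene, weights):
    """Weighted sum of per-scene feature scores (each in [0, 1])."""
    return sum(weights[f] * scene["features"][f] for f in weights)

def pick_trailer_scenes(scenes, weights, k=3):
    """Return the k highest-scoring scenes, best first."""
    return sorted(scenes, key=lambda s: score_scene(s, weights), reverse=True)[:k]

weights = {"eerie_audio": 0.4, "low_lighting": 0.3, "sudden_motion": 0.3}
scenes = [
    {"name": "hallway", "features": {"eerie_audio": 0.9, "low_lighting": 0.8, "sudden_motion": 0.2}},
    {"name": "picnic",  "features": {"eerie_audio": 0.1, "low_lighting": 0.1, "sudden_motion": 0.1}},
    {"name": "lab",     "features": {"eerie_audio": 0.7, "low_lighting": 0.6, "sudden_motion": 0.9}},
]
best = pick_trailer_scenes(scenes, weights, k=2)
print([s["name"] for s in best])  # the two spookiest scenes win
```

A human editor still has to stitch the winning scenes together – which is exactly the division of labor the IBM team described.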

The real scary part? The movie it created the trailer for is Morgan, a sci-fi thriller about an AI that goes rogue and turns on its creators. Coming to theaters tomorrow, December 2nd.

Thanks, IBM. Giving one of the world’s most well-known supercomputers ideas is one of the last things we’d like you to be doing.

However, IBM is not alone in the effort to teach computers some of humanity’s greatest weaknesses. Just before Halloween (appropriately enough), MIT joined up with Australia’s Commonwealth Scientific and Industrial Research Organisation (CSIRO) to create the “Nightmare Machine,” a program designed to turn even the most idyllic of landscapes into scenes straight out of a horror movie.

The “nightmarifying” process starts by feeding the program scary images so it can identify distinctive traits and learn just what distinguishes them from non-spooky images. Once the program is comfortable with the set of traits that make up a style, it applies those qualities to images like a filter. The team is not currently taking feedback on their “haunted” locations, but they are collecting data on the computer-generated “haunted faces.” In fact, the CSIRO blog describes the deep learning algorithm as “[growing] hungrier and hungrier for more user data, until it was able to think and feel on its own.”
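The actual Nightmare Machine uses deep style-transfer networks, but a much cruder stand-in for “learn a style’s traits, then apply them like a filter” is classic color-statistics transfer: measure the color statistics of the spooky reference images, then shift a target image’s statistics toward them. A minimal sketch with NumPy, using synthetic data for the “spooky” references:

```python
import numpy as np

# Crude stand-in for "learn a style, apply it like a filter":
# match a target image's per-channel color mean/std to statistics
# learned from a stack of "spooky" reference images.

def learn_style_stats(images):
    """Per-channel mean and std across a stack of HxWx3 images."""
    pixels = np.concatenate([img.reshape(-1, 3) for img in images])
    return pixels.mean(axis=0), pixels.std(axis=0)

def apply_style(image, style_mean, style_std):
    """Shift the image's per-channel statistics toward the style's."""
    mean, std = image.mean(axis=(0, 1)), image.std(axis=(0, 1)) + 1e-8
    out = (image - mean) / std * style_std + style_mean
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
# Synthetic "spooky" references: dark, desaturated, slightly green.
spooky = [rng.normal([0.10, 0.15, 0.10], 0.05, (8, 8, 3)) for _ in range(4)]
style_mean, style_std = learn_style_stats(spooky)

sunny = rng.normal([0.8, 0.7, 0.4], 0.1, (8, 8, 3))  # a bright landscape
haunted = apply_style(sunny, style_mean, style_std)
print(haunted.mean() < sunny.mean())  # the filtered image comes out darker
```

This gets you the Instagram-filter look the locations have; capturing actual textures (cobwebs, decay) is what the deep network adds on top.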

Why? “To find what unites us in our phobia and terrifies us on a universal scale.” Your mileage may vary on whether or not they’ve succeeded; I personally find the faces only vaguely grotesque and not particularly unsettling, and the locations mostly look like they’ve been run through some aesthetic Instagram filters.

The Nightmare Team is also trying to map the boundaries of human–machine cooperation – in other words, the difference between what makes people and computers tick. Essentially, they’re trying to see just how far they can push the limits of artificial intelligence.

But the Nightmare Machine isn’t the first program designed to generate images. Last year, Google released DeepDream, a program that creates images out of white noise. As with all learning machines, it started by being shown thousands and thousands of images until it learned to recognize patterns, edges, and shapes. The theory is that once you’ve seen an object enough times, you should be able to draw it yourself, right?

Well, when DeepDream was asked to draw a dumbbell, it didn’t quite succeed.

Some dumbbells have arms, apparently.
Because so many pictures of dumbbells include arms, the computer thought that sometimes dumbbells have arms. Makes perfect sense.

This inspired researchers to dig deeper into pattern recognition. They directed the program to focus on color and form, and to keep accentuating anything it recognized. So if a rock looked like a building or a landmark, the program was to keep applying the idea of the landmark over and over again. Once it was able to convert normal pictures into psychedelic images straight from an LSD trip, they moved on to white noise.
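That feedback loop – whatever a detector responds to, nudge the input so the detector responds even more strongly, then repeat – is gradient ascent on the network’s activations. Real DeepDream does this through a deep convolutional network; as a toy one-dimensional analogue (a hand-picked kernel standing in for a learned feature detector):

```python
import numpy as np

# Toy 1-D analogue of DeepDream's loop: repeatedly adjust the input to
# make a simple "detector" fire more strongly. Real DeepDream runs
# gradient ascent on a deep network's activations instead.

kernel = np.array([1.0, -2.0, 1.0])           # tiny edge-like detector

def response(x):
    return np.convolve(x, kernel, mode="same")

def energy(x):
    return 0.5 * np.sum(response(x) ** 2)     # how strongly the detector fires

rng = np.random.default_rng(1)
x = rng.normal(0, 0.1, 64)                    # start from pure noise
before = energy(x)

for _ in range(50):
    # Gradient of the energy w.r.t. x (the kernel is symmetric here).
    grad = np.convolve(response(x), kernel[::-1], mode="same")
    x += 0.01 * grad                          # accentuate what was "recognized"
    x /= max(np.abs(x).max(), 1e-8)           # keep values bounded

print(energy(x) > before)  # the hallucinated pattern has grown stronger
```

Starting from noise and ending with a structured pattern the detector loves is, in miniature, how DeepDream’s fractal dogs and pagodas emerge.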

These are the dreams the name DeepDream refers to. The images generated out of white noise, which can be anything from curling fractal patterns to something almost recognizable, are made purely out of the computer’s own processes, and Google is hoping to study how DeepDream “thinks” in order to uncover the root of the creative process.

To be fair, staring at white noise until patterns emerge is pretty much the extent of my creative process too. Unfortunately, DeepDream can pick out those patterns a lot quicker than I can, but at least I’m smart enough to know that a dumbbell doesn’t have arms.

So now that computers know how to think strategically and creatively, and they know just what it is that humans fear, it’s time for the robot apocalypse, right?

Not so fast, T-800! Sarah Connor isn’t ready for you yet.

One day, Arnold Schwarzenegger will find us and track us down.
Until Arnie comes for me, I’m still making these jokes.

Our Googly overlords have once again stepped in to save us from ourselves and have created an AI safety group to monitor DeepMind and protect us from AIs that may go rogue. Their concern is, in part, spurred by predictions from people like Stephen Hawking and Elon Musk. As DeepMind’s stated goal is to “solve intelligence” and “make the world a better place,” I can sort of understand where they’re coming from.

But for now, Google has made sure that many of its AI “agents” are interruptible, meaning that a human handler can prevent the program from carrying out further actions if it looks like it’s going down a dangerous path. They’re also taking precautions to make sure that agents are unable to prevent the interruptions, and for now, it seems like a pretty good way to keep AIs in line.
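In loose code terms, the idea is that a handler can halt any step, and halted steps never make it into the agent’s experience, so the agent has no incentive to learn to dodge the kill switch. This is only an illustration of the concept, not DeepMind’s actual implementation:

```python
# Toy sketch of an "interruptible" agent: the handler can halt any step,
# and halted steps are excluded from the agent's reward and history, so
# it never learns to avoid (or seek out) being interrupted. A loose
# illustration of the concept, not DeepMind's real machinery.

class InterruptibleAgent:
    def __init__(self):
        self.total_reward = 0.0
        self.history = []            # (action, reward) pairs used for learning

    def step(self, action, reward, interrupted=False):
        if interrupted:
            return False             # halted: no action taken, nothing recorded
        self.total_reward += reward
        self.history.append((action, reward))
        return True

agent = InterruptibleAgent()
agent.step("explore", 1.0)
agent.step("explore", 1.0)
# The handler pulls the plug on a step that looks dangerous:
acted = agent.step("disable_oversight", 100.0, interrupted=True)

print(acted)               # False: the action never ran
print(agent.total_reward)  # 2.0: the tempting reward was never observed
```

The key detail is that the big reward attached to the dangerous action never enters the agent’s records, so there’s nothing for it to optimize toward.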

At least, it is until AIs achieve human equivalence and start working towards superintelligence, something researchers predict they will reach in about 100 years. But hey, at least we don’t have to worry about it right now.

Oh, one more thing. Sweet dreams.

December 1st, 2016 | Artificial Intelligence, Big Data, Current Technology

About the Author:

Andrew is a technical writer for Deep Core Data. He has been writing creatively for 10 years, and has a strong background in graphic design. He enjoys reading blogs about the quirks and foibles of technology, gadgetry, and writing tips.
