
A HISTORY OF ARTIFICIAL INTELLIGENCE IN 10 LANDMARKS

WHY IT MATTERS TO YOU

AI is an extraordinarily important and complex field. We’ve done our best to narrow its history down to the 10 milestones you should know.

Compressing all of artificial intelligence (AI) into 10 “moments to remember” isn’t easy. With hundreds of research labs and thousands of computer scientists, compiling a list of every landmark achievement would be, well, a job for a smart algorithm to handle.

With that caveat out of the way, however, we’ve scoured the history books to bring you what we think are the 10 most significant milestones in the history of AI. Check them out below.

THE BIRTH OF NEURAL NETWORKS

You’ve probably heard of neural networks, the brain-inspired AI tools behind most of today’s cutting-edge artificial intelligence. While concepts like deep learning are relatively new, they’re based on a mathematical theory that dates back to 1943.

Warren McCulloch and Walter Pitts’ “A Logical Calculus of the Ideas Immanent in Nervous Activity” might sound like a mouthful, but it’s as important to computer science as (if not more important than!) “The PageRank Citation Ranking,” a.k.a. the research paper that spawned Google. In “A Logical Calculus,” McCulloch and Pitts describe how networks of artificial neurons can be made to perform logical functions. The dream of AI is born!
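
The paper’s central idea is easy to demonstrate today. Here’s a minimal Python sketch, entirely our own rather than anything from the 1943 paper, of a McCulloch-Pitts-style threshold neuron computing the kind of logical functions the authors described:

```python
# A McCulloch-Pitts-style threshold neuron: binary inputs, fixed weights,
# and a hard threshold. Illustrative only; not code from the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND: fires only when both inputs are active.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# OR: fires when at least one input is active.
assert mp_neuron([0, 1], [1, 1], threshold=1) == 1

# NOT: an inhibitory (negative) weight flips the input.
assert mp_neuron([1], [-1], threshold=0) == 0
assert mp_neuron([0], [-1], threshold=0) == 1
```

Chain enough of these units together and you can, in principle, compute any logical expression, which is exactly the point McCulloch and Pitts were making.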

ARTIFICIAL INTELLIGENCE GETS ITS NAME

If you were to pinpoint an official beginning for artificial intelligence, it may well be August 31, 1955. That’s when a proposal is made for a “2 month, 10 man study of artificial intelligence,” submitted by researchers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon.

The conference takes place the following year at the 269-acre estate of Dartmouth College. Unfortunately, their timeline turns out to be a bit too optimistic. “We think a significant advance can be made … if a carefully selected group of scientists work on it for a summer,” they write. Things take a bit longer than that.

THE ARRIVAL OF ‘BACKPROP’

Sometimes abbreviated to “backprop,” backpropagation is the single most important algorithm in the history of machine learning. The idea behind it was first proposed in 1969, although it only became a mainstream part of machine learning in the mid-1980s.

What backpropagation does is allow a neural network to adjust its hidden layers when the output it comes up with doesn’t match the one its creator is hoping for. In short, it means creators can train their networks to perform better by correcting them when they make mistakes. To do this, backprop modifies the strength of each connection in the neural network so that it is more likely to get the answer right the next time it faces the same problem.
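
A toy example makes this concrete. The sketch below, with an architecture, dataset, and learning rate we’ve chosen purely for illustration, trains a tiny network on the XOR problem and shows backprop’s two halves: a forward pass to get an answer, and a backward pass that pushes the error back through the connections:

```python
# Minimal backpropagation sketch: a tiny sigmoid network learns XOR.
# All sizes and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # output layer

for _ in range(10000):
    # Forward pass: compute the network's current answer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: send the output error back through the layers
    # (the chain rule, applied layer by layer).
    delta_out = (out - y) * out * (1 - out)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)

    # Adjust each connection in proportion to its share of the blame.
    W2 -= 0.5 * h.T @ delta_out
    b2 -= 0.5 * delta_out.sum(axis=0)
    W1 -= 0.5 * X.T @ delta_hid
    b1 -= 0.5 * delta_hid.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```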

CONVERSING WITH COMPUTERS

Ever wondered what the grandparent of Amazon’s Alexa, Google Assistant, and Apple’s Siri is? Back in the mid-1960s, a professor at the MIT Artificial Intelligence Laboratory, Joseph Weizenbaum, developed a computer psychotherapist called ELIZA, which could carry out seemingly intelligent conversations via text with users.

Weizenbaum noted at the time how surprised he was that users were so willing to converse with a machine in this way.
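
Under the hood, ELIZA understood nothing: it matched keywords in the user’s input and reflected their own words back as questions. A few lines of Python capture the mechanism; the rules below are our own toy script, vastly smaller than the original:

```python
# A toy ELIZA-style responder: regex keyword rules plus pronoun reflection.
import re

# Swap first-person words for second-person ones ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Each rule pairs a trigger pattern with a response template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # generic fallback when nothing matches

print(respond("I feel nervous about my job."))
# -> Why do you feel nervous about your job?
```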

THE SINGULARITY

Don’t worry, you haven’t missed a major headline or anything: the Singularity, a.k.a. the point at which machines become smarter than humans, hasn’t happened yet. But in 1993, author and computer scientist Vernor Vinge published an article which popularized the idea.

In the article, titled “The Coming Technological Singularity,” Vinge predicted that, within the next 30 years, humankind would have the ability to create superhuman intelligence. “Shortly after, the human era will be ended,” he wrote. It’s a warning that others like Elon Musk have reiterated in the years since.

HERE COME THE SELF-DRIVING CARS

Think that Google developed the world’s first self-driving car? Think again. Back in 1986, a Mercedes-Benz van kitted out with cameras and smart sensors by researchers at Germany’s Bundeswehr University was able to successfully drive on empty streets.

Nearly a decade later, a Carnegie Mellon researcher named Dean Pomerleau built an autonomous Pontiac Transport minivan and used it to drive 2,797 miles coast to coast from Pittsburgh, PA to San Diego, CA. The tech was primitive by today’s standards, but it demonstrated that autonomous driving could be done.

“THE BRAIN’S LAST STAND”

1997 was a banner year for AI, as IBM’s Deep Blue supercomputer took on world chess champion Garry Kasparov in a match pitting human brain against machine. While there was no doubt that Deep Blue could process information more quickly than Kasparov, the real question was whether it could think more strategically. It turns out that it could!

While the results may not have shown AI to be capable of anything more than excelling at problems with clearly defined rules, the match was still a massive leap forward for artificial intelligence as a field.
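
Chess suits machines precisely because its rules are fully defined: an engine can search the tree of possible moves and pick the line with the best guaranteed outcome. Deep Blue layered custom hardware and chess-specific evaluation on top, but the core idea, minimax search, fits in a few lines. Here it is sketched on a deliberately trivial game of our own choosing:

```python
# Minimax search on a toy game: players alternately take 1 or 2 stones,
# and whoever takes the last stone wins. The same exhaustive game-tree
# reasoning, scaled up enormously, is what powered Deep Blue.

def best_outcome(stones, maximizing=True):
    """Return +1 if the player we root for can force a win, else -1."""
    if stones == 0:
        # No stones left: the previous player took the last one and won.
        return -1 if maximizing else 1
    results = [best_outcome(stones - take, not maximizing)
               for take in (1, 2) if take <= stones]
    # Our player picks the best line; the opponent picks the worst for us.
    return max(results) if maximizing else min(results)

print(best_outcome(4))  # +1: from 4 stones the side to move can force a win
print(best_outcome(6))  # -1: multiples of 3 are losing positions here
```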

AI TRIUMPHS AT JEOPARDY!

Much like Deep Blue’s standoff with Garry Kasparov, IBM faced another big challenge in 2011, when its Watson AI took on former Jeopardy! winners Brad Rutter and Ken Jennings at their game show of choice, and won the $1 million first prize. After the bout, a crushed Jennings quipped, “I, for one, welcome our new robot overlords.”

AI LOVES… CATS?

In June 2012, Jeff Dean of Google and Stanford’s Andrew Ng trained a giant neural network, spread across 16,000 computer processors, by feeding it 10 million unlabeled images taken from YouTube videos. Despite being given no identifying information about them, the AI learned to detect pictures of felines using its deep learning algorithms.

It turns out that, just like us, even impressively smart AI enjoys cat videos.
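
The trick at work, unsupervised learning, means the training signal comes from the data itself rather than from human labels. The network Google used was far larger and more elaborate, but the principle can be sketched with a tiny autoencoder, a network that learns compact features simply by trying to reconstruct its unlabeled input:

```python
# Toy autoencoder: with no labels at all, the network learns a compact
# representation by reconstructing its input. Sizes, data, and learning
# rate are illustrative assumptions, not Google's 2012 setup.
import numpy as np

rng = np.random.default_rng(1)

# Fake unlabeled "images": 200 samples of 16 pixels with hidden structure.
X = np.tanh(rng.normal(size=(200, 2)) @ rng.normal(size=(2, 16)))

W_enc = rng.normal(scale=0.1, size=(16, 4))  # 16 pixels -> 4 features
W_dec = rng.normal(scale=0.1, size=(4, 16))  # 4 features -> 16 pixels

for _ in range(2000):
    H = np.tanh(X @ W_enc)  # compressed representation
    X_hat = H @ W_dec       # attempted reconstruction
    err = X_hat - X         # the input itself is the teacher
    grad_dec = H.T @ err
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H ** 2))
    W_dec -= 1e-3 * grad_dec
    W_enc -= 1e-3 * grad_enc

print(float(np.mean(err ** 2)))  # reconstruction error shrinks over training
```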

AI BEATS THE GO WORLD CHAMPION

In March 2016, Google DeepMind’s AlphaGo AI defeated the Go world champion Lee Sedol four games to one, in a match watched by 60 million people around the world. What made this such a landmark was the sheer number of allowable board positions in the game, which adds up to more than the total number of atoms in the universe. It’s AI’s most astonishing feat to date.
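
That comparison checks out with simple arithmetic. Each of Go’s 361 board points can be empty, black, or white, which caps the number of configurations at 3^361; only a fraction of those are legal positions, but the count still dwarfs the roughly 10^80 atoms in the observable universe:

```python
# Back-of-the-envelope check on Go's "more positions than atoms" claim.
upper_bound = 3 ** 361  # each of 361 points: empty, black, or white
atoms = 10 ** 80        # common order-of-magnitude estimate

print(len(str(upper_bound)))          # 173 digits, i.e. about 1.7 * 10**172
print(upper_bound // atoms > 10**90)  # True: ~92 orders of magnitude bigger
```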
