Robot Wars: Charting the History of Artificial Intelligence

As Microsoft's Tay project implodes in spectacular fashion, we ask whether an AI that reflects ourselves is really a good idea, and take a look at some of the milestones in AI technology.

Artificial 'Intelligence'

A little over a week ago developers at Microsoft launched ‘Tay’ - an AI Twitter user, designed to speak and interact like an average teenage girl and rather wonderfully marketed as having ‘Zero-chill’ by the team at Microsoft.

What a great idea. Until, like many teenagers around the planet invariably do, Tay had a few major meltdowns. But unlike normal teenage meltdowns, which usually involve slammed doors, puppy-dog eyes and crocodile tears, Tay went rather further. In the space of just 24 hours she became a deranged, sex-obsessed, Hitler-loving racist of epic proportions.

Tay's Twitter Account (Twitter)

The problem with Tay (and something that, unfathomable as it may seem, Microsoft's boffins failed to predict could backfire) was that her intelligence was designed to be constructed live from the 'intelligence' of the humans she interacted with.

Actually, let’s use the word scraped. Tay’s intelligence was a result of her scraping the bottom of Twitter’s cesspit and wearing whatever cloak she found there. The blueprint she began with was a rudimentary collection of teenage slang and a love of Taylor Swift, but past that point anything was fair game. Which invariably meant lots of people propositioning her sexually, lots of racism and homophobia, and within a few hours Tay was seriously mentally damaged; because, you know, Twitter.

So if Tay and her potty mouth are where we find ourselves in 2016, then AI is damn scary, right? But what led us here, and what does the future look like?

The 1950s – Beginnings

In 1950 British mathematician Alan Turing published a seminal paper, 'Computing Machinery and Intelligence', that began: "I propose to consider the question, 'Can machines think?'" The "Turing Test" proposed in that paper had at its core a simple premise: if a machine could be developed that, through programming, was able to hold a conversation with a human that was indistinguishable from human conversation, then the machine could be considered to be 'thinking'. The test was not successfully passed until over 60 years later; more on that below.

In 1956, during a conference at Dartmouth College, the term 'Artificial Intelligence' was coined and the global race to the front of the AI queue began.

Marvin Minsky (MIT)

Influential MIT professor Marvin Minsky began his work in AI in the 1950s, preferring the 'top-down' approach to the development of artificial intelligence. In other words, what Minsky and his compatriots sought to do was programme a computer with the fundamental rules that govern human behaviour. Everything the computer then develops has its roots in these rules.

Minsky’s involvement in the 1968 film '2001: A Space Odyssey' is considered legendary. The world was introduced to HAL 9000, an intelligent computer imagined in the labs of MIT and brought to ‘life’ on the big screen. The problem, of course, was that a lot of the rhetoric was just that. HAL 9000 had successfully scared audiences with a glimpse of what a robot-led future could look like, but nothing in the real world was capable enough to feel exciting.

Shakey

Shakey was developed at SRI (the Stanford Research Institute), and was lauded as the first robot that could reason about, and therefore react to, its surroundings. Here is Shakey in action (there's a lot of hiss on this video, so you may want to turn your speakers down); prepare to be amazed:

Shakey (SRI / YouTube)

So Shakey could learn, sort of. And adapt to its surroundings, sort of. But it all seemed very clunky: Shakey’s intelligence was basically memory and recollection, a million miles away from being capable of independent, intelligent thought. It remembered things and performed incredibly small tasks at incredibly slow speed, and due to the commercial nothingness of projects like Shakey, the 1970s and early 80s saw a sharp decline in any traction for the AI trailblazers – the so-called ‘AI winter’. A decline that was only reversed in the 80s, when developers began to awaken to the potential commercial and business benefits of AI and to design with real problems – in need of tangible solutions – in mind.

The past 30 years and the future

Firstly, let's talk about Boston Dynamics: a company at the cutting edge of the AI/robot evolution, and one financed largely from the US defence budget. Boston Dynamics make the famous ‘BigDog’, an autonomous packhorse robot capable of independent journeys over a variety of inhospitable terrains.

BigDog Evolution (Boston Dynamics / YouTube)

The possibilities with BigDog, and indeed most of Boston Dynamics' projects, are 10% exciting and 90% scary – it all seems a bit Terminator 2, doesn’t it? Packs of BigDogs roaming around woods and mountains; all they need is for Trump to get the vote in November, strap AK-47s to their backs and let them loose on the Mexican border, and everything changes. And if you think this stuff is pie in the sky, think again. Boston Dynamics have already been responsible for AI units deployed on bomb-disposal missions in Iraq and Afghanistan.

Then of course, you have Google, who seem intent on buying up every established company, infant technology and bedroom AI project within their grasp. It owns Boston Dynamics; its recent $1bn investment in driverless cars is taking new strides every day; and the Google DeepMind project produced an intelligent computer, AlphaGo, which on March 9, 2016 managed to defeat Korean grandmaster Lee Se-dol at the ancient board game Go.

A success that required AlphaGo not only to remember sequential moves and tactical gameplay, but also to evolve mid-game and improvise. If I were DeepMind CEO Demis Hassabis, I’d have played it cool and pretended I expected the victory all along. Not Hassabis though, who was visibly shaken as he declared the achievement “mind-blowing” and enough to render him speechless.

Which takes us back to the Turing Test

In 2014, Russian researchers turned up at the Royal Society in London armed with their intelligent computer program, dubbed "Eugene Goostman". Eugene posed as a 13-year-old boy for whom English was a second language and held conversations with a panel of experts. At the end of the conversations, 33% of the experts were convinced they had indeed been talking with a 13-year-old boy. Therefore, on June 7, 2014, Eugene became the first computer to successfully pass the iconic test.

For many – and just as Hassabis would feel two years later with the DeepMind project – this result was stunning and broke new ground in the field of AI development: the programming had to negotiate a phenomenal number of possibilities. The conversation had no pre-set script, the panel were free to guide questioning in any direction they wished, and over a third of them were fooled by a mere machine.

AI holds so much potential, and that is what keeps companies like Google desperate to stay ahead in the race. But it also involves too many unanswered questions, too many rabbit warrens and a distinct potential for catastrophe. Too much is expected of it, and you have to wonder how and when it will ever deliver.