

A key concept in the world of artificial intelligence is the Turing Test – the idea that a computer exhibits intelligence when a natural language text discussion with it becomes indistinguishable from a text discussion with a human. Are we close to developing AI that would finally pass the Turing test? Some suggest it might happen around 2030; others say not before 2040. Most AI scientists agree that we need to know more about the human brain before we can replicate something we still don’t fully understand.

In 1966, Joseph Weizenbaum, a computer scientist and MIT professor, created ELIZA, a program that looked for specific keywords in typed comments and transformed them into sentences. Its script pretended to be a Rogerian psychotherapist giving “non-directional” responses. If ELIZA couldn’t find a keyword in the user’s text, it would fall back on a “non-directional” response built around a keyword from earlier in the conversation. That’s why ELIZA could fool some humans and is sometimes claimed to be one of the programs to have passed the Turing test. However, ELIZA was an easy target for anyone who deliberately asked questions likely to make a computer slip up.
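
The mechanism described above (keyword spotting, a simple sentence transformation, and a non-directional fallback) is easy to sketch. The snippet below is a rough illustration of that idea, not Weizenbaum’s actual script: the keywords, response templates, and fallback phrasing are invented for the example.

```python
import random

# A minimal ELIZA-style sketch (illustrative only, not Weizenbaum's original script).
# Each keyword maps to response templates; "{0}" is filled with whatever follows the keyword.
RULES = {
    "mother": ["Tell me more about your mother.", "How do you feel about your family?"],
    "i feel": ["Why do you feel{0}?", "How long have you felt{0}?"],
    "i am": ["Why do you say you are{0}?", "How does being{0} make you feel?"],
}

memory = []  # keywords spotted earlier in the conversation

def respond(text: str) -> str:
    lowered = text.lower()
    for keyword, templates in RULES.items():
        if keyword in lowered:
            memory.append(keyword)
            # Transform the comment into a sentence: echo back what follows the keyword.
            rest = lowered.split(keyword, 1)[1].rstrip(".!?")
            return random.choice(templates).format(rest)
    # No keyword found: give a "non-directional" reply, reusing an earlier keyword if any.
    if memory:
        return f'Earlier you mentioned "{memory[-1]}". Please tell me more about that.'
    return "Please go on."

print(respond("I feel anxious about exams"))   # e.g. "Why do you feel anxious about exams?"
print(respond("My mother disagrees"))          # e.g. "Tell me more about your mother."
print(respond("The weather is awful today"))   # no keyword -> fallback that reuses "mother"
```

Even this toy version shows why ELIZA was so easy to trip up: anything outside the keyword list immediately collapses into canned fallback replies.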

In 1972, PARRY, a chatbot modeling the behavior of a paranoid schizophrenic, used a similar approach to that of ELIZA. During its Turing test, two groups of psychiatrists analyzed conversation transcripts of both actual patients and computers running PARRY. The psychiatrists were fooled 48 percent of the time – impressive!

Fast forward to 2014 – Eugene Goostman, a computer program that simulated a 13-year-old boy from Ukraine, made headlines claiming to have passed the Turing test. The bot convinced 33% of the human judges that it was a human (read some of the conversation transcripts here). However, there were only three judges, meaning that only one was fooled – not exactly a significant result. Another problem was that, because the chatbot was portrayed as a 13-year-old child from Odessa, judges would let nonsense sentences and obvious mistakes slip, attributing them to limited English skills and young age.

In 2018, Google’s Duplex voice AI called a hairdresser and successfully made an appointment in front of an audience. The hairdresser did not recognize she was speaking to an AI. Though considered a groundbreaking achievement in AI voice technology, Google Duplex is still far from passing the Turing test. Duplex is a deep learning system representing the ‘Second Wave of AI’ – trained for hundreds of hours to perform very narrow tasks. As soon as the human led the conversation in a different direction, Duplex would fail. Real-time learning, deep understanding, and reasoning require true cognitive abilities that none of the Second Wave AI programs have.
