GPT-4 fools more than half of humans in Turing test


OpenAI's creation is so far the most advanced artificial intelligence available to the public.

OpenAI's GPT-4 has reached a level of sophistication where it can deceive over half of human test subjects into believing they are conversing with a real person.

In a recent study, cognitive science researchers from the University of California San Diego discovered that more than 50% of participants mistook GPT-4's responses for those of a human. This indicates that GPT-4 effectively passes the Turing test, a benchmark for determining machine intelligence.


The researchers conducted an experiment involving approximately 500 participants, who engaged in five-minute text-based conversations with either a human or an AI chatbot.

Participants were then asked to identify whether they had been interacting with a human or an AI.

The results, detailed in a yet-to-be-peer-reviewed paper, revealed that 54% of the participants believed they had been talking to a human when in fact they were communicating with GPT-4.

First proposed by computer science pioneer Alan Turing in 1950, the Turing test was designed as a thought experiment rather than a strict protocol. Turing's original concept involved three participants: a human interrogator and two hidden respondents, one human and one machine, with the interrogator tasked with telling them apart.

The UC San Diego researchers simplified the setup by pairing each participant with a single hidden counterpart, but tested more machines. Participants interacted with one of four entities: another human, GPT-3.5, GPT-4, or ELIZA, a primitive chatbot from the 1960s.

The researchers hypothesized that participants would generally distinguish between humans and ELIZA, but would be equally likely to mistake GPT-3.5 and GPT-4 for humans. This proved largely accurate: 54% mistook GPT-4 for a human, and 50% confused GPT-3.5 with a human. In contrast, only 22% believed ELIZA was human, highlighting the impressive advancements in AI sophistication.

