More than half of ChatGPT’s answers to programming questions are incorrect


Research suggests that AI chatbots are not as competent at coding as widely believed.

In recent years, programmers have increasingly turned to chatbots like OpenAI's ChatGPT for coding assistance, while many companies have laid off programmers in the expectation that artificial intelligence can take over the work.

However, a recent study by Purdue University researchers, presented at the CHI Conference on Human Factors in Computing Systems in Honolulu on 11-16 May, indicates that 52% of ChatGPT's answers to programming questions are incorrect.


This high proportion of inaccuracies underscores a broader issue already familiar to other users, including writers and teachers: AI platforms like ChatGPT often produce entirely fabricated answers.

The researchers analyzed 517 questions from Stack Overflow and reviewed ChatGPT's responses, revealing a troubling trend.

"We found that 52% of ChatGPT answers contain misinformation, 77% of the answers are more verbose than human answers, and 78% of the answers suffer from different degrees of inconsistency with human answers," the researchers noted.

The study also included a linguistic analysis of 2,000 randomly selected ChatGPT answers, finding them "more formal and analytical" with "less negative sentiment" — a typically bland and overly cheerful tone produced by AI.

What's particularly concerning is that many human programmers seem to prefer ChatGPT's answers. In a user study with 12 programmers, the researchers found that participants preferred ChatGPT's responses 35% of the time and overlooked the mistakes in AI-generated answers 39% of the time.

Why is this happening? One reason might be ChatGPT's polite and well-articulated language.

"The follow-up semi-structured interviews revealed that the polite language, articulated and textbook-style answers, and comprehensiveness are some of the main reasons that made ChatGPT answers look more convincing, causing participants to lower their guard and overlook some misinformation in ChatGPT answers," the researchers wrote.

This study highlights significant flaws in ChatGPT's capabilities and the continued need for human review to catch and correct errors in AI-generated code.



