
ChatGPT performs poorly on US urologists' exam


New York | June 7, 2023
OpenAI's much-acclaimed chatbot ChatGPT has failed a urologists' exam in the US, according to a study.

This comes at a time of growing interest in the potential role of artificial intelligence (AI) technology in medicine and healthcare.

The study, reported in the journal Urology Practice, showed that ChatGPT answered less than 30 per cent of questions correctly on the American Urological Association's (AUA) widely used Self-Assessment Study Program for Urology (SASP).

"ChatGPT not only has a low rate of correct answers regarding clinical questions in urologic practice, but also makes certain types of errors that pose a risk of spreading medical misinformation," said Christopher M. Deibert, from University of Nebraska Medical Center.

The SASP is a 150-question practice examination addressing the core curriculum of medical knowledge in urology.

The study excluded 15 questions containing visual information such as pictures or graphs.

Overall, ChatGPT gave correct answers to less than 30 per cent of SASP questions: 28.2 per cent of multiple-choice questions and 26.7 per cent of open-ended questions.

The chatbot provided "indeterminate" responses to several questions. On these questions, accuracy was decreased when the LLM model was asked to regenerate its answers.

For most open-ended questions, ChatGPT provided an explanation for the selected answer.

The explanations provided by ChatGPT were longer than those provided by SASP, but "frequently redundant and cyclical in nature", according to the authors.

"Overall, ChatGPT often gave vague justifications with broad statements and rarely commented on specifics," Dr. Deibert said.

Even when given feedback, "ChatGPT continuously reiterated the original explanation despite it being inaccurate".

The researchers suggest that while ChatGPT may do well on tests requiring recall of facts, it seems to fall short on questions pertaining to clinical medicine, which require "simultaneous weighing of multiple overlapping facts, situations and outcomes".

"Given that LLMs are limited by their human training, further research is needed to understand their limitations and capabilities across multiple disciplines before it is made available for general use," Dr. Deibert said.

"As is, utilisation of ChatGPT in urology has a high likelihood of facilitating medical misinformation for the untrained user."

--IANS rvt/vd

( 368 Words)

2023-06-07-17:18:02 (IANS)
