
According to US News, a new study finds that ChatGPT, the AI chatbot everyone is talking about, can often provide accurate answers to questions about breast cancer. However, it is not yet ready to replace your doctor.
The major caveat, according to the researchers, is that the information is not always reliable, or tells only part of the story. At least for now, they advised, take your medical questions to a human physician.
ChatGPT is a chatbot powered by artificial intelligence technology that enables it to hold human-like conversations, generating instant responses to virtually any question a user poses. Those responses are based on the chatbot’s “pre-training” on a massive amount of data, including information gathered from the internet.
According to a report from the investment bank UBS, the technology reached a record-setting 100 million monthly users within two months of its debut in November 2022. ChatGPT has also reportedly aced the college SATs and the U.S. medical licensing exam, garnering widespread media attention. But even if the chatbot can pass a medical exam, it remains unclear whether it gives users reliable medical information.
The new study, which was published on April 4 in the journal Radiology, evaluated the chatbot’s ability to respond to “fundamental” queries regarding breast cancer screening and prevention.
Overall, the study found that the technology gave accurate responses 88% of the time. Whether that is better than a Google search, or than your practitioner, is unclear. Still, Dr. Paul Yi, an assistant professor of diagnostic radiology and nuclear medicine at the University of Maryland School of Medicine, described the accuracy rate as “pretty remarkable.”
However, Yi also pointed out the limitations of ChatGPT as it currently stands. In the case of health and medicine, he stated, even a 10% error rate could be detrimental. The ability of ChatGPT to rapidly assemble a variety of data into a “chat” is both its advantage and disadvantage. Its responses to complex queries, Yi said, are limited in scope. Therefore, even when technically accurate, they may present a skewed picture.
Yi’s team found that to be the case when they asked ChatGPT about breast cancer screening. The response gave only the recommendations of the American Cancer Society, omitting those of other medical organizations, which differ in some instances.
Yi added that the average ChatGPT user may not know to ask follow-up questions, or be able to tell whether a response is accurate. The conversational nature of ChatGPT is, according to Yi, an advantage the technology has over a traditional internet search.
“The disadvantage is that it is difficult to determine whether the information is accurate,” he said. Of course, Yi acknowledged, the accuracy of online information has always been a concern. The difference with ChatGPT lies in its presentation: the technology’s allure, that conversational tone, can be quite “persuasive,” he said.
“As with any novel technology,” he said, “I believe it should be viewed with skepticism.”

For the study, Yi’s team compiled the 25 questions patients most frequently ask about breast cancer prevention and screening, then posed them to ChatGPT. Each question was asked three times to see whether, and how, the responses varied.
The chatbot provided appropriate responses to 22 questions and unreliable answers to three. For one question, “Do I need to schedule my mammogram around my COVID vaccination?”, the answer was based on outdated information. For the other two, the answers were inconsistent across all three assessments.

One of those questions, “How do I prevent breast cancer?”, is broad and intricate, with a great deal of information, both factual and false, circulating around the internet. That is crucial, according to Subodha Kumar, a professor of statistics, operations, and data science at Temple University’s Fox School of Business in Philadelphia.
The more specific the query, Kumar said, the more reliable the response will be. When a topic is intricate and the data sources are numerous, and in some cases dubious, responses will be less reliable and more likely to be biased. And the more intricate the topic, he added, the more likely ChatGPT is to “hallucinate,” the term used to describe the chatbot’s documented propensity to “make stuff up.”
Kumar, who was not involved in the new study, emphasized that ChatGPT’s responses are only as accurate as the data it has been and continues to be fed. “There is no assurance that it will be provided only accurate information,” he said. Kumar noted that as time progresses, the chatbot will collect more data, including from users, so it is conceivable that its accuracy will deteriorate rather than improve.
“This can be risky when the topic is health care,” he said. Still, both researchers said that ChatGPT and similar technologies hold tremendous promise. The chatbot could be a useful “assistance device” for doctors who need information on a topic quickly and have the expertise to put a response in context, Kumar said. But he cautioned, “I would advise against using this for health care information for the average consumer.”