LAWRENCE: A recent study found that parents are increasingly turning to ChatGPT for medical advice instead of consulting actual doctors and nurses.
Researchers from the University of Kansas discovered that many parents view AI-generated content as credible, trustworthy, and ethical.
Lead author and doctoral student Calissa Leslie-Miller noted, “When we started this research right after ChatGPT was launched, we were concerned about how parents might use this convenient tool to seek health information for their children. Since parents often look to the internet for advice, we wanted to understand the implications of using ChatGPT.”
The study involved 116 parents aged 18 to 65 and was published in the Journal of Pediatric Psychology. Participants reviewed health-related texts without knowing whether they were generated by healthcare professionals or by ChatGPT. They rated the texts based on perceived morality, trustworthiness, expertise, accuracy, and their likelihood of relying on the information.
Interestingly, many parents struggled to differentiate between the AI-generated content and that produced by experts. In instances where there were notable differences, ChatGPT was rated as more trustworthy, accurate, and reliable.
Leslie-Miller remarked, “This finding was unexpected, especially considering the study occurred early in ChatGPT’s release. We’re noticing AI is being integrated into our lives in ways that may not be immediately apparent, and users may not even recognize when they’re engaging with AI-generated text versus expert content.”
Since its launch in November 2022, ChatGPT has gained over 250 million active monthly users, and OpenAI recently added a search engine feature to its chatbot. While ChatGPT can be useful in various situations, it’s essential to remember that the AI can generate incorrect information and is not a substitute for expert advice.
Leslie-Miller emphasized, “During the study, some early versions of the AI outputs contained inaccuracies, which is concerning. AI tools like ChatGPT can experience ‘hallucinations’—errors that occur when the system lacks sufficient context. In the realm of child health, where the stakes can be high, we must address this issue. We’re worried that people might increasingly rely on AI for health guidance without proper expert oversight.”