Faculty of Medicine and University Hospital Reports Findings in Artificial Intelligence (Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – New research on Artificial Intelligence is the subject of a report. According to news reporting out of Cologne, Germany, by NewsRx editors, research stated, "Laboratory medical reports are often not intuitively comprehensible to non-medical professionals. Given their recent advancements, easier accessibility and remarkable performance on medical licensing exams, patients are therefore likely to turn to artificial intelligence-based chatbots to understand their laboratory results."

Our news journalists obtained a quote from the research from the Faculty of Medicine and University Hospital: "However, empirical studies assessing the efficacy of these chatbots in responding to real-life patient queries regarding laboratory medicine are scarce. Thus, this investigation included 100 patient inquiries from an online health forum, specifically addressing Complete Blood Count interpretation. The aim was to evaluate the proficiency of three artificial intelligence-based chatbots (ChatGPT, Gemini and Le Chat) against the online responses from certified physicians. The findings revealed that the chatbots' interpretations of laboratory results were inferior to those from online medical professionals. While the chatbots exhibited a higher degree of empathetic communication, they frequently produced erroneous or overly generalized responses to complex patient questions. The appropriateness of chatbot responses ranged from 51 to 64%, with 22 to 33% of responses overestimating patient conditions.

A notable positive aspect was the chatbots' consistent inclusion of disclaimers regarding their non-medical nature and recommendations to seek professional medical advice. The chatbots' interpretations of laboratory results from real patient queries highlight a dangerous dichotomy: a perceived trustworthiness potentially obscuring factual inaccuracies."
Keywords: Cologne, Germany, Europe, Artificial Intelligence, Emerging Technologies, Machine Learning