Selcuk University Reports Findings in Artificial Intelligence (Comparative Analysis of Artificial Intelligence Chatbot Recommendations for Urolithiasis Management: A Study of EAU Guideline Compliance)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News: New research on Artificial Intelligence is the subject of a report. According to news reporting originating in Konya, Turkey, by NewsRx journalists, research stated, "Artificial intelligence (AI) applications are increasingly being utilized by both patients and physicians for accessing medical information. This study focused on the urolithiasis section (pertaining to kidney and ureteral stones) of the European Association of Urology (EAU) guideline, a key reference for urologists."

The news reporters obtained a quote from the research from Selcuk University: "We directed inquiries to four distinct AI chatbots to assess their responses in relation to guideline adherence. A total of 115 recommendations were transformed into questions, and responses were evaluated by two urologists with a minimum of 5 years of experience using a 5-point Likert scale (1-False, 2-Inadequate, 3-Sufficient, 4-Correct, and 5-Very Correct). The mean scores for Perplexity and ChatGPT 4.0 were 4.68 (SD: 0.80) and 4.80 (SD: 0.47), respectively, both significantly differing from the scores of Bing and Bard (Bing vs. Perplexity, p<.001; Bard vs. Perplexity, p<.001; Bing vs. ChatGPT, p<.001; Bard vs. ChatGPT, p<.001). Bing had a mean score of 4.21 (SD: 0.96), while Bard scored 3.56 (SD: 1.14), with a significant difference (Bing vs. Bard, p<.001). Bard exhibited the lowest score among all chatbots. Analysis of references revealed that Perplexity and Bing cited the guideline most frequently (47.3% and 30%, respectively). Our findings demonstrate that ChatGPT 4.0 and, notably, Perplexity align well with EAU guideline recommendations."
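The scoring procedure described in the study (per-chatbot mean and standard deviation over 115 Likert-scored responses) can be sketched as follows. The raw data are not public, so the scores below are purely illustrative placeholders, not the study's actual ratings; only the summary-statistic calculation mirrors the reported methodology.

```python
import random
import statistics

# Hypothetical 1-5 Likert scores for 115 questions per chatbot.
# These are randomly generated stand-ins, NOT the study's data.
random.seed(0)
scores = {
    "ChatGPT 4.0": [random.choice([4, 5, 5, 5]) for _ in range(115)],
    "Perplexity":  [random.choice([4, 5, 5]) for _ in range(115)],
    "Bing":        [random.choice([3, 4, 5]) for _ in range(115)],
    "Bard":        [random.choice([2, 3, 4, 5]) for _ in range(115)],
}

# Report mean and sample standard deviation per chatbot,
# the summary statistics quoted in the study.
for name, s in scores.items():
    print(f"{name}: mean={statistics.mean(s):.2f}, SD={statistics.stdev(s):.2f}")
```

A full replication would also apply a pairwise nonparametric significance test (the study reports p<.001 comparisons but does not name the test in this excerpt).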