Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Data detailed on Artificial Intelligence have been presented. According to news originating from Santiago de Compostela, Spain, by NewsRx correspondents, research stated, “Artificial Intelligence (AI) chatbots are able to explain complex concepts using plain language. The aim of this study was to assess the accuracy of three AI chatbots answering common questions related to contact lens (CL) wear.”

Funders for this research include Xunta de Galicia, the Maria Zambrano contract at USC - European Union-NextGenerationEU, MCIN/AEI, and ESF Investing in Your Future.

Our news journalists obtained a quote from the research from the University of Santiago de Compostela: “Three open-access AI chatbots were compared: Perplexity, Open Assistant and ChatGPT 3.5. Ten general CL questions were asked of all AI chatbots on the same day in two different countries, with the questions asked in Spanish from Spain and in English from the U.K. Two independent optometrists with experience working in each country assessed the accuracy of the answers provided. The AI chatbots’ responses were also assessed for whether their outputs showed any bias towards (or against) any eye care professional (ECP). The answers obtained from the same AI chatbots were different in Spain and the U.K. Statistically significant differences in accuracy were also found between the AI chatbots. In the U.K., ChatGPT 3.5 was the most accurate and Open Assistant the least (p < 0.01). In Spain, Perplexity and ChatGPT were statistically more accurate than Open Assistant (p < 0.01). All the AI chatbots presented bias, except ChatGPT 3.5 in Spain. AI chatbots do not always consider local CL legislation, and their accuracy seems to depend on the language used to interact with them.”