Applying large language models to generate feedback for reporters of spontaneous adverse drug reactions
Objective: To apply large language models (LLMs) to generate feedback for spontaneous adverse drug reaction (ADR) reporters, thereby promoting safe medication use, enhancing reporter engagement, and improving the spontaneous ADR reporting platform.
Methods: Using prompt strategies and information from drug package inserts, three LLMs (Qwen-2.5, Kimi, and Zhipu Qingyan) were employed to generate feedback for 10 spontaneous ADR reports, covering causality assessment, information to be supplemented, and safe medication use suggestions. The causality assessments generated by the LLMs were compared with the judgments of three clinical pharmacists. The overall quality and comprehensibility of the LLM-generated feedback were evaluated with the DISCERN instrument and the C-PEMAT-P.
Results: The human and machine causality assessments were consistent for 9 of the 10 adverse reactions, with only 1 case showing an inconsistent result. On DISCERN, the feedback generated by Qwen-2.5, Kimi, and Zhipu Qingyan each had a median overall quality rating of 5. The C-PEMAT-P scores showed mean comprehensibility of 74.9%, 71.5%, and 72.6% for Qwen-2.5, Kimi, and Zhipu Qingyan, respectively, indicating good overall quality and comprehensibility.
Conclusion: LLMs demonstrated high accuracy in ADR causality assessment and generated feedback that was of good quality and easy to understand, offering a new approach to guiding safe medication use, enhancing the reporter experience, and improving the spontaneous ADR reporting system.
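The Methods describe prompting an LLM with a drug package insert and an ADR report to obtain three feedback sections. The paper does not publish its prompts or API configuration, so the sketch below is an illustrative assumption: the endpoint URL, model name, and prompt wording are placeholders, not the authors' actual setup. All three models named in the study expose OpenAI-compatible chat APIs, which is the only reason that client style is used here.

```python
"""Minimal sketch of the feedback-generation step, assuming an
OpenAI-compatible chat endpoint; not the authors' actual pipeline."""
from openai import OpenAI

# Hypothetical endpoint and key; the real base_url and model names
# for Qwen-2.5, Kimi, and Zhipu Qingyan differ by provider.
client = OpenAI(base_url="https://example-llm-endpoint/v1", api_key="YOUR_KEY")

PROMPT_TEMPLATE = """You are a clinical pharmacist reviewing a spontaneous
adverse drug reaction (ADR) report. Using the drug package insert provided,
return three sections:
1. Causality assessment (e.g., probable/possible/unlikely) with reasoning.
2. Information the reporter should supplement.
3. Safe medication use suggestions for the patient.

Package insert:
{insert_text}

ADR report:
{report_text}
"""

def generate_feedback(report_text: str, insert_text: str, model: str) -> str:
    """Ask one LLM for structured feedback on a single ADR report."""
    response = client.chat.completions.create(
        model=model,  # e.g. a Qwen-2.5 instruct model; name is an assumption
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(insert_text=insert_text,
                                              report_text=report_text),
        }],
        temperature=0.2,  # low temperature for more reproducible assessments
    )
    return response.choices[0].message.content
```

In a study like this, the same prompt would be sent once per model per report, and the three returned texts collected for the pharmacist comparison and quality scoring described in Results.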
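The Results rest on three simple quantitative summaries: a raw human-machine agreement rate (9/10), a median DISCERN rating, and PEMAT-style comprehensibility percentages. The sketch below shows how such numbers are conventionally computed; the rating values are invented placeholders (only the 9-of-10 agreement mirrors the reported result), and the actual C-PEMAT-P items are not reproduced here.

```python
"""Sketch of the evaluation arithmetic under stated assumptions;
all rating data below are illustrative, not the study's data."""
from statistics import median

def pemat_score(ratings: list[str]) -> float:
    """PEMAT percentage: agree / (agree + disagree) * 100, 'na' items excluded."""
    applicable = [r for r in ratings if r != "na"]
    return 100 * sum(r == "agree" for r in applicable) / len(applicable)

def agreement_rate(llm_labels: list[str], pharmacist_labels: list[str]) -> float:
    """Fraction of reports where LLM and pharmacist causality labels match."""
    matches = sum(a == b for a, b in zip(llm_labels, pharmacist_labels))
    return matches / len(llm_labels)

# Illustrative labels: 9 of 10 assessments agree, as in the Results.
llm = ["probable"] * 9 + ["possible"]
pharmacist = ["probable"] * 9 + ["unlikely"]
print(f"agreement: {agreement_rate(llm, pharmacist):.0%}")  # -> 90%

# Hypothetical per-report DISCERN overall ratings (1-5 scale).
discern = [5, 4, 5, 5, 3, 5, 4, 5, 5, 5]
print("median DISCERN overall rating:", median(discern))

# Hypothetical C-PEMAT-P item ratings for one piece of feedback.
print(f"comprehensibility: {pemat_score(['agree'] * 12 + ['disagree'] * 4 + ['na']):.1f}%")
```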
Keywords: large language models; adverse drug reactions; spontaneous reporting; causality assessment; safe medication use