Reflections on Using ChatGPT as a Medical Education Tool: A Case Study of the Board Certification Examination for Cardiovascular Medicine
To evaluate the performance of the artificial intelligence language model ChatGPT in medical examinations, a cross-sectional study was conducted comparing ChatGPT's answers on a set of simulated tests of the Board Certification Examination for Cardiovascular Medicine with the standard answers. After two image-based questions were excluded, ChatGPT correctly answered 189 of 333 questions, an accuracy rate of 56.8%. The correct response rates across the four sections were 67.0%, 61.0%, 58.2%, and 11.4%, respectively. Logical errors were the most common cause of incorrect answers. Factors associated with incorrect responses included questions from the fourth section and case-analysis questions. The results show that ChatGPT failed to pass the simulated Board Certification Examination for Cardiovascular Medicine, and its use in the routine education of cardiovascular physicians is not recommended.