Instruction Tuning of LLMs for Unified Information Extraction in the Mental Health Domain
Information extraction aims to extract essential information from text. A large language model (LLM)'s information extraction ability in the mental health domain reflects its understanding of mental-health-related information. Improving this ability, however, is currently hindered by the severe shortage of Chinese instruction datasets. Under the guidance of psychologists, this paper uses ChatGPT to generate sample instances and, through the designed instruction generation and data augmentation procedures, constructs a unified instruction dataset of 5,641 instances for information extraction in the mental health field. The dataset covers three basic extraction tasks: named entity recognition, relation extraction, and event extraction, with the aim of filling the gap in Chinese instruction datasets for mental health information extraction. After parameter-efficient tuning on this instruction dataset, the LLM is shown to be capable of performing unified information extraction tasks in the mental health field, as demonstrated by comparisons against baseline models and by human evaluations.
information extraction; mental health; large language model; instruction tuning
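To make the idea of a unified instruction format concrete, the sketch below shows how named entity recognition, relation extraction, and event extraction examples could be cast into one shared instruction/input/output schema and serialized into prompt-response pairs for parameter-efficient tuning. The field names, wording, and English examples are illustrative assumptions, not the paper's actual schema or data.

```python
# Minimal sketch (assumed schema, not the paper's actual format) of a unified
# instruction format covering NER, relation extraction, and event extraction.

unified_examples = [
    {
        "task": "named entity recognition",
        "instruction": "Extract all mental-health-related entities (symptom, disorder, treatment) from the text.",
        "input": "She was diagnosed with depression and started cognitive behavioral therapy.",
        "output": "disorder: depression; treatment: cognitive behavioral therapy",
    },
    {
        "task": "relation extraction",
        "instruction": "Extract (head entity, relation, tail entity) triples from the text.",
        "input": "Prolonged insomnia aggravated his anxiety.",
        "output": "(insomnia, aggravates, anxiety)",
    },
    {
        "task": "event extraction",
        "instruction": "Extract events with their trigger words and arguments from the text.",
        "input": "After losing her job last month, she began experiencing panic attacks.",
        "output": "event: onset of panic attacks; trigger: experiencing; cause: losing her job",
    },
]

# Serialized as prompt-response pairs, such records can feed standard
# parameter-efficient instruction tuning (e.g., LoRA adapters on a base LLM).
for ex in unified_examples:
    prompt = f"{ex['instruction']}\nText: {ex['input']}"
    print(prompt, "->", ex["output"])
```

Keeping all three tasks in one schema is what allows a single tuned model to handle them jointly, rather than training a separate extractor per task.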