Large language model-driven open-source intelligence cognition
With the extensive application of open-source intelligence in the military field, the demand for the cognition and analysis of relevant intelligence is growing. However, the large language models currently used by researchers are prone to severe hallucination, rendering the generated information unreliable and unsuitable for direct use in the cognition of open-source military intelligence. To address this problem, the present study collected approximately 100,000 dialogue records online and constructed an open-source military intelligence dataset. Subsequently, a new model, ChatBIT, specifically optimized for dialogue and question answering tasks in the military field, was obtained by fine-tuning the LLaMA-13B base question answering model. This study further compared the military knowledge question answering capability of ChatBIT with that of the Vicuna-13B model. ChatBIT was found to outperform Vicuna-13B on a series of standardized evaluation metrics, including the BLEU score, ROUGE-1, ROUGE-2, and ROUGE-L. Specifically, ChatBIT's BLEU score was 2.3909 points higher than that of Vicuna-13B, and its ROUGE-1, ROUGE-2, and ROUGE-L scores were higher by 3.2079, 2.2562, and 1.5939 points, respectively. These results indicate that ChatBIT provides more accurate and reliable information when handling military dialogue and question answering tasks.
large language model; ChatBIT; open-source intelligence; artificial intelligence
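The abstract reports BLEU and ROUGE-1/2/L differences between ChatBIT and Vicuna-13B. The snippet below is a minimal sketch of how such per-answer scores could be computed; the library choices (nltk, rouge-score) and the sample reference and model answers are illustrative assumptions and are not taken from the paper's evaluation pipeline.

```python
# Sketch: compare two models' answers against a reference using BLEU and ROUGE.
# Libraries and sample texts are assumptions for illustration only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Hypothetical reference answer and model outputs for one QA item.
reference = "Open-source intelligence is gathered from publicly available sources."
candidates = {
    "ChatBIT": "Open-source intelligence is collected from publicly available sources.",
    "Vicuna-13B": "It is intelligence from public data.",
}

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
smooth = SmoothingFunction().method1  # avoids zero scores on short sentences

for name, answer in candidates.items():
    # BLEU compares candidate n-grams against the tokenized reference.
    bleu = sentence_bleu([reference.split()], answer.split(), smoothing_function=smooth)
    # ROUGE-1/2/L measure unigram, bigram, and longest-common-subsequence overlap.
    rouge = scorer.score(reference, answer)
    print(f"{name}: BLEU={bleu:.4f}, "
          f"ROUGE-1={rouge['rouge1'].fmeasure:.4f}, "
          f"ROUGE-2={rouge['rouge2'].fmeasure:.4f}, "
          f"ROUGE-L={rouge['rougeL'].fmeasure:.4f}")
```

In a full evaluation, such per-item scores would be averaged over the test set before comparing the two models.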