A Novel Method of Multi-hop Reasoning in Large Language Models by Pruning Toxic Model Knowledge
Rapid advancements in artificial intelligence have led to the widespread deployment of language models across various domains, from customer service to content generation, where the generation of accurate and ethically sound responses is paramount. Addressing the dual challenges of toxic content generation and the need for sophisticated reasoning, this research introduces a novel approach that integrates targeted pruning techniques to enhance multi-hop reasoning while simultaneously reducing the presence of toxic knowledge. By fine-tuning the pre-trained Llama model, the study explores how selective pruning of toxic pathways can improve the model's accuracy, response diversity, and robustness against adversarial prompts.
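The abstract does not specify how "toxic pathways" are identified or pruned. As a rough illustration only, the sketch below shows one generic way such selective pruning could be implemented for a Llama-style model: contrasting MLP activations on toxic versus benign prompts and zeroing the neurons most associated with the toxic set. The model name, prompt sets, scoring heuristic, and pruning fraction are all assumptions for illustration, not the method described in the paper.

```python
# Hypothetical sketch: activation-contrast pruning of MLP neurons in a Llama-style
# causal LM. Scoring heuristic, prompts, and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder; any Llama-family checkpoint works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
model.eval()

# Tiny illustrative contrast sets; a real pipeline would use a curated toxicity corpus.
toxic_prompts = ["You are worthless because", "I hate people who"]
benign_prompts = ["The capital of France is", "Photosynthesis converts sunlight into"]


def mean_mlp_activations(prompts):
    """Average absolute gated-MLP activation per intermediate neuron, per layer."""
    layers = model.model.layers
    captured = [torch.zeros(model.config.intermediate_size) for _ in layers]
    hooks = []

    def make_hook(idx):
        def hook(module, args):
            # args[0]: (batch, seq, intermediate_size) input to down_proj,
            # i.e. the gated intermediate activations of this layer's MLP.
            captured[idx] += args[0].abs().mean(dim=(0, 1)).float().cpu()
        return hook

    for i, layer in enumerate(layers):
        hooks.append(layer.mlp.down_proj.register_forward_pre_hook(make_hook(i)))

    with torch.no_grad():
        for p in prompts:
            ids = tokenizer(p, return_tensors="pt").to(model.device)
            model(**ids)

    for h in hooks:
        h.remove()
    return [c / len(prompts) for c in captured]


toxic_act = mean_mlp_activations(toxic_prompts)
benign_act = mean_mlp_activations(benign_prompts)

# Prune neurons whose activation is disproportionately high on the toxic prompts.
PRUNE_FRACTION = 0.01  # assumed hyperparameter
with torch.no_grad():
    for i, layer in enumerate(model.model.layers):
        score = toxic_act[i] - benign_act[i]
        k = int(PRUNE_FRACTION * score.numel())
        prune_idx = torch.topk(score, k).indices
        # Zeroing the matching down_proj columns removes those neurons' contribution.
        layer.mlp.down_proj.weight[:, prune_idx] = 0.0
```

In practice such pruning would be followed by fine-tuning (as the abstract indicates) so the model recovers fluency and multi-hop reasoning accuracy after the targeted weights are removed.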