Survey on large language model alignment research
With the rapid development of artificial intelligence technology, large language models have been widely applied in numerous fields. However, the potential of large language models to generate inaccurate, misleading, or even harmful content has raised concerns about their reliability. Adopting alignment techniques to ensure that the behavior of large language models is consistent with human values has become an urgent issue. Recent research progress on alignment techniques for large language models was surveyed. Common methods for collecting instruction data and human preference datasets were introduced, research on supervised tuning and alignment adjustment was summarized, commonly used datasets and methods for model evaluation were discussed, and future research directions were outlined.
large language model; alignment technique; tuning; reinforcement learning