The Model of Familiarity Reduction in Human Language Acquisition: From the Viewpoint of ChatGPT's Model of Verbal Reduction
Despite the significant progress made by ChatGPT, a large language model in artificial intelligence, the philosophical debate between Turing and Searle continues. Nevertheless, ChatGPT is able to generate new sentences that conform to grammar, and it has effectively reduced language to units (tokens) and rules, thereby addressing the long-standing problem of natural language understanding in artificial intelligence. This is an important turning point. ChatGPT's learning model relies on the strong computing power and massive storage capacity of computers, which may be collectively referred to as strong storage-and-computing power. In contrast, the human brain has only weak storage-and-computing power. Precisely because of this limitation, human language learning cannot fully follow ChatGPT's learning model. Instead, the human brain reduces a limited set of units and rules through experience-based activities of familiarity, and thereby generates new sentences. ChatGPT currently adopts a text-based learning model rather than an experience-based learning model of familiarity. Future, larger language models may expand to include a learning model of familiarity, truly simulating the model of familiarity reduction in human language acquisition. Only then may it be said that machines truly understand natural language, and the philosophical dispute between Turing and Searle may thereby be resolved.
Keywords: artificial intelligence; Turing test; Chinese room; natural language understanding; thinking