Beyond symbol processing: the embodied limits of LLMs and the gap between AI and human cognition
NSTL
Springer Nature
Luciano Floridi has argued that AI, particularly Large Language Models (LLMs) such as ChatGPT and Bard, exhibits "agency without intelligence." He points out that LLMs process texts statistically, without understanding their content. As an example, Floridi recounts prompting an earlier version of ChatGPT with the question "What is the name of Laura's mother's only daughter?" The question, phrased with the Saxon genitive, elicited an "idiotic" response from the model, demonstrating its limited understanding. He notes, "LLMs keep learning most 'errors,' which are like zero-day exploits," showing how even apparent shortcomings can be temporary (Floridi 2023, p. 15).
Rasmus Gahrn-Andersen
Department of Culture and Language, University of Southern Denmark, Campusvej 55, 5230 Odense M, Denmark