Research from Dong-A University Provides New Study Findings on Robotics (A Survey of Robot Intelligence with Large Language Models)
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Researchers detail new data in robotics. According to news reporting from Busan, South Korea, by NewsRx journalists, research stated, "Since the emergence of ChatGPT, research on large language models (LLMs) has actively progressed across various fields. LLMs, pre-trained on vast text datasets, have exhibited exceptional abilities in understanding natural language and planning tasks."

Financial supporters for this research include the Ministry of Trade, Industry, and Energy and the Ministry of Education.

Our news reporters obtained a quote from the research from Dong-A University: "These abilities of LLMs are promising in robotics. In general, traditional supervised learning-based robot intelligence systems have a significant lack of adaptability to dynamically changing environments. However, LLMs help a robot intelligence system to improve its generalization ability in dynamic and complex real-world environments. Indeed, findings from ongoing robotics studies indicate that LLMs can significantly improve robots' behavior planning and execution capabilities. Additionally, vision-language models (VLMs), trained on extensive visual and linguistic data for the vision question answering (VQA) problem, excel at integrating computer vision with natural language processing. VLMs can comprehend visual contexts and execute actions through natural language. They also provide descriptions of scenes in natural language. Several studies have explored the enhancement of robot intelligence using multimodal data, including object recognition and description by VLMs, along with the execution of language-driven commands integrated with visual information."
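To illustrate the general idea described in the quote, the sketch below shows, in purely hypothetical form, how a natural-language command and a scene description might be combined into a prompt and turned into a step-by-step robot plan by a language model. The prompt wording, the `plan_with_llm` interface, and the stubbed `call_llm` function are assumptions made for illustration only; they are not drawn from the Dong-A University survey.

```python
"""Minimal, hypothetical sketch of LLM-driven robot task planning.

Assumptions (not from the survey): the prompt format, the plan_with_llm
interface, and the stubbed call_llm function standing in for a real
language-model API call.
"""

from typing import List


def build_prompt(command: str, visible_objects: List[str]) -> str:
    """Combine a user command with a scene description into a planning prompt."""
    scene = ", ".join(visible_objects)
    return (
        "You are a robot task planner.\n"
        f"Objects visible to the robot: {scene}\n"
        f"User command: {command}\n"
        "Respond with one low-level action per line."
    )


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a request to a hosted model).

    Returns a canned plan so the sketch runs without external services.
    """
    return "move_to(cup)\ngrasp(cup)\nmove_to(table)\nrelease(cup)"


def plan_with_llm(command: str, visible_objects: List[str]) -> List[str]:
    """Turn a natural-language command into a list of robot actions."""
    prompt = build_prompt(command, visible_objects)
    response = call_llm(prompt)
    # Each non-empty line of the model's reply is treated as one action.
    return [line.strip() for line in response.splitlines() if line.strip()]


if __name__ == "__main__":
    actions = plan_with_llm("put the cup on the table", ["cup", "table", "chair"])
    for step, action in enumerate(actions, start=1):
        print(f"{step}. {action}")
```

In practice, the scene description in such a pipeline would typically come from a VLM that recognizes and describes visible objects, and the placeholder model call would be replaced by a real LLM service; the structure above is only a schematic of that flow.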