
Patent Application Titled 'Blockwise Controlled Decoding Of Natural Language (NL) Based Output Generated Using A Large Language Model (LLM) To Reduce Latency In Rendering Thereof' Published Online (USPTO 20240330334)

Reporters obtained the following quote from the background information supplied by the inventors: “Large language models (LLMs) are particular types of machine learning models that can perform various natural language processing (NLP) tasks, such as language generation, machine translation, and question-answering. These LLMs are typically trained on enormous amounts of diverse data including data from, but not limited to, webpages, electronic books, software code, electronic news articles, and machine translation data. Accordingly, these LLMs leverage the underlying data on which they were trained in performing these various NLP tasks. For instance, in performing a language generation task, these LLMs can process a natural language (NL) based input that is received from a client device, and generate a NL based output that is responsive to the NL based input and that is to be rendered at the client device. However, in generating the NL based output utilizing these LLMs, additional latency is introduced that may not be present absent utilizing these LLMs. This additional latency can prolong user interactions with these LLMs and detract from a user experience with these LLMs. Accordingly, there is a need in the art for reducing latency in utilizing these LLMs.”
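The brief does not disclose implementation details, but the general idea referenced in the patent title, decoding and rendering the LLM's output block by block rather than waiting for the complete response, can be sketched as follows. This is a minimal, hypothetical illustration only: the fake_llm_tokens stand-in, the block_size parameter, and the rendering loop are assumptions made for the example and do not describe the claimed method.

```python
# Hypothetical sketch of blockwise streaming of LLM output to reduce
# rendering latency. Not the patented method; the model call, block size,
# and rendering callback are illustrative assumptions.
import time
from typing import Iterator, List


def fake_llm_tokens(prompt: str) -> Iterator[str]:
    """Stand-in for an LLM decoder that emits one token at a time."""
    for token in ("Sure, ", "here ", "is ", "a ", "response ", "to ", f"'{prompt}'."):
        time.sleep(0.05)  # simulate per-token decoding latency
        yield token


def decode_in_blocks(prompt: str, block_size: int = 3) -> Iterator[List[str]]:
    """Group decoded tokens into blocks and yield each block as soon as it
    is full, so the client can begin rendering before decoding finishes."""
    block: List[str] = []
    for token in fake_llm_tokens(prompt):
        block.append(token)
        if len(block) == block_size:
            yield block
            block = []
    if block:  # flush any trailing partial block
        yield block


if __name__ == "__main__":
    start = time.time()
    for block in decode_in_blocks("What is blockwise decoding?"):
        # Rendering each block as it arrives reduces time-to-first-output,
        # compared with waiting for the complete response.
        print(f"[{time.time() - start:.2f}s] render block: {''.join(block)}")
```

In this sketch the first block is rendered after only a few tokens have been decoded, which illustrates why emitting output in blocks can shorten the delay a user perceives before any response appears.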

Emerging Technologies, Machine Learning, Machine Translation, Patent Application

2024

Robotics & Machine Learning Daily News

ISSN:
Year, Volume (Issue): 2024 (Oct. 23)