Knowledge-based visual question answering (KB-VQA) requires not only image and question information but also relevant knowledge from external sources to answer questions accurately. Existing methods typically use a retriever to fetch external knowledge from a knowledge base, or rely on the implicit knowledge of large models. However, image and text information alone often proves insufficient for acquiring the necessary knowledge. To address this issue, an enhanced retrieval strategy was proposed for both the query stage and the external-knowledge stage. On the query side, implicit knowledge from large models was utilized to enrich the existing image and question information, helping the retriever locate more accurate external knowledge in the knowledge base. On the external-knowledge side, a pre-simulation interaction module was introduced to enhance the external knowledge: it generates a new lightweight vector for each knowledge vector, allowing the retriever to pre-simulate the interaction between the query and the knowledge passage and thus better capture their semantic relationship. Experimental results demonstrated that the improved model achieves an accuracy of 61.3% on the OK-VQA dataset while retrieving only a small amount of knowledge.
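To make the pre-simulation idea concrete, the following is a minimal sketch, not the paper's actual implementation: every name (`lightweight_vector`, `score`, the mean-pooling choice, and the `alpha` blending weight) is an illustrative assumption. The premise it illustrates is that each knowledge passage is stored with an extra compact vector summarizing its token-level representations, so the retriever can cheaply approximate query-passage interaction at scoring time instead of comparing against a single passage embedding alone.

```python
# Hypothetical sketch of pre-simulated query-passage interaction.
# All function names and the pooling/blending choices are assumptions
# for illustration; the paper's module may differ substantially.
from math import sqrt


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def norm(v):
    # Guard against zero vectors so cosine scoring never divides by zero.
    return sqrt(dot(v, v)) or 1.0


def lightweight_vector(token_vecs):
    """Mean-pool a passage's token vectors into one compact summary vector.

    This stands in for the "new lightweight vector" generated per
    knowledge vector; it is precomputed once, offline, per passage.
    """
    dim = len(token_vecs[0])
    n = len(token_vecs)
    return [sum(t[d] for t in token_vecs) / n for d in range(dim)]


def score(query_vec, passage_vec, light_vec, alpha=0.5):
    """Blend the standard dense score with a pre-simulated interaction score.

    alpha weights the plain query-passage cosine similarity against the
    query's similarity to the lightweight summary vector.
    """
    dense = dot(query_vec, passage_vec) / (norm(query_vec) * norm(passage_vec))
    interact = dot(query_vec, light_vec) / (norm(query_vec) * norm(light_vec))
    return alpha * dense + (1 - alpha) * interact
```

Because the lightweight vector is computed offline, scoring at retrieval time stays a pair of dot products per passage, which is what lets the retriever answer with only a small amount of retrieved knowledge.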