Artificial intelligence (AI) governance has become a frontier issue and an important area of national and social governance. However, there is an urgent need to strengthen capacity building across multiple dimensions, including AI technological innovation, risk prevention and control, corporate self-regulation, government regulation, social supervision, and international collaboration. Enhancing the capacity for safe and trustworthy AI development should therefore be prioritized as the foremost task in AI governance, and the concept and mechanisms of "enabling AI governance" should be established. To achieve this goal, we should adhere to the core concepts of enabling AI governance, namely human-centeredness and development orientation, as well as the derived basic concepts of AI for good, inclusive and prudent governance, agile governance, and sustainable development. An enabling AI governance framework centered on the rule of law should be constructed, along with specific mechanisms under the rule of law: perfecting mechanisms that integrate legal governance with technological governance; establishing co-governance mechanisms that promote communication and collaboration among diverse stakeholders; constructing "safe harbor" mechanisms suited to AI development; building agile, interactive, and dynamic regulatory mechanisms that incentivize the development of AI for good; and constructing social security mechanisms such as AI safety insurance.