Algorithm-generated personalized reading recommendations have made reading more convenient, but they have also brought about problems such as cognitive narrowing. It is therefore increasingly important to accurately disclose algorithmic risks and ensure algorithm transparency to the public. At the practical level, this article takes 23 mainstream digital reading apps as research samples and designs transparency detection points along three dimensions: technical standards, user perception, and normative compliance. It then evaluates gaps in algorithm transparency performance through analysis of the apps' privacy policy texts and algorithm functions. Finally, countermeasures are proposed: improving the algorithm transparency standard system, adding "algorithm labels", and incorporating transparency into design considerations during algorithm development. 1 fig. 4 tabs. 29 refs.