Abstract
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – Investigators publish new report on Machine Learning. According to news reporting originating in Shanghai, People’s Republic of China, by NewsRx journalists, research stated, “Recently, in the field of fair machine learning, a large number of studies have considered how to remove discriminatory information from the data and achieve fairness in downstream tasks. Fair representation learning considers removing sensitive information (e.g., race, gender) in the latent space, and the learned representations can prevent machine learning systems from being biased by discriminatory information.” Funders for this research include the National Natural Science Foundation of China (NSFC), the Shanghai Municipal Project, the Shanghai Knowledge Service Platform Project, the Science & Technology Commission of Shanghai Municipality (STCSM), the Open Research Fund of KLATASDS-MOE, and the Fundamental Research Funds for the Central Universities.
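The report does not detail the authors' method, but the idea of "removing sensitive information in the latent space" can be illustrated with a deliberately simple, hypothetical sketch: project learned representations onto the subspace orthogonal to the direction that separates sensitive groups (here, the difference of group means). All names and the setup below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical illustration (not the published method): remove the
# direction in latent space that best separates the sensitive groups.

rng = np.random.default_rng(0)
n, d = 1000, 5
s = rng.integers(0, 2, size=n)      # sensitive attribute (binary, for the sketch)
z = rng.normal(size=(n, d))         # toy learned representations
z[:, 0] += 2.0 * s                  # latent dimension 0 leaks the attribute

def remove_sensitive_direction(z, s):
    """Project out the group-mean-difference direction from representations."""
    v = z[s == 1].mean(axis=0) - z[s == 0].mean(axis=0)
    v = v / np.linalg.norm(v)
    return z - np.outer(z @ v, v)   # orthogonal projection onto v's complement

z_fair = remove_sensitive_direction(z, s)

# After projection, the group means coincide along the removed direction,
# so a linear probe can no longer separate the groups along it.
gap_before = abs(z[s == 1, 0].mean() - z[s == 0, 0].mean())
gap_after = abs(z_fair[s == 1, 0].mean() - z_fair[s == 0, 0].mean())
print(gap_before, gap_after)
```

Real fair-representation methods are richer (e.g., adversarial training or variational objectives rather than a single linear projection), but the sketch captures the shared goal: representations from which the sensitive attribute is hard to recover while task-relevant information is retained.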