Showing AI users diversity in training data boosts perceived fairness and trust
By a News Reporter-Staff News Editor at Robotics & Machine Learning Daily News – UNIVERSITY PARK, Pa. - While artificial intelligence (AI) systems, such as home assistants, search engines or large language models like ChatGPT, may seem nearly omniscient, their outputs are only as good as the data on which they are trained. However, ease of use often leads users to adopt AI systems without understanding what training data was used or who prepared the data, including potential biases in the data or held by trainers. A new study by Penn State researchers suggests that making this information available could shape appropriate expectations of AI systems and further help users make more informed decisions about whether and how to use these systems.
Keywords: Artificial Intelligence, Emerging Technologies, Machine Learning, Penn State