
Fairness in Algorithmic Profiling: The AMAS Case

We study a controversial application of algorithmic profiling in the public sector, the Austrian AMAS system. AMAS was supposed to help caseworkers at the Public Employment Service (PES) Austria to allocate support measures to job seekers based on their predicted chance of (re-)integration into the labor market. Shortly after its release, AMAS was criticized for its apparent unequal treatment of job seekers based on gender and citizenship. We systematically investigate the AMAS model using a novel real-world dataset of young job seekers from Vienna, which allows us to provide the first empirical evaluation of the AMAS model with a focus on fairness measures. We further apply bias mitigation strategies to study their effectiveness in our real-world setting. Our findings indicate that the prediction performance of the AMAS model is insufficient for use in practice, as more than 30% of job seekers would be misclassified in our use case. Further, our results confirm that the original model is biased with respect to gender as it tends to (incorrectly) assign women to the group with high chances of re-employment, which is not prioritized in the PES’ allocation of support measures. However, most bias mitigation strategies were able to improve fairness without compromising performance and thus may form an important building block in revising profiling schemes in the present context.

Keywords: Algorithmic profiling; Statistical discrimination; Public employment services; Artificial intelligence; Bias mitigation

Eva Achterhold, Monika Muehlboeck, Nadia Steiber, Christoph Kern


LMU Munich, Munich, Germany

University of Vienna, Vienna, Austria

LMU Munich, Munich, Germany; Munich Center for Machine Learning (MCML), Munich, Germany

2025

Minds and Machines

ISSN:0924-6495
Year, Volume (Issue): 2025, 35(1)