Mitigating the Judicial Human-AI Fairness Gap
Speaker(s): Pascal Langenbach (Max Planck Institute for Research on Collective Goods, Germany)
Field: Behavioral Economics, Operations Analytics, Strategy and Entrepreneurship and Innovation
Location: University of Amsterdam, Roeterseilandcampus, A3.01 and online, Amsterdam
Date and time: September 16, 2025, 13:00 - 14:15
Abstract
Are robot judges perceived as less fair than human judges? If so, how can this perceived judicial human-AI fairness gap be mitigated? We conduct a large online experiment with more than 4,800 observations to explore whether delegating judicial tasks to algorithms affects perceived procedural fairness, and how human-in-the-loop interventions might offset any fairness gap. Participants are randomly assigned to assess one of the following conditions: a pure robot court (no human oversight), a pure human court (no algorithmic decision aids), or a hybrid court (a human judge assisted by algorithmic decision aids). Within the human and hybrid conditions, we further vary whether the human judge conducts a thorough (high involvement) or brief (low involvement) review. Confirming prior research, we find robust evidence of a judicial human-AI fairness gap: participants perceive robot courts as less fair than human courts. Crucially, even low human involvement fully closes this gap. Although the fairness gap persists among Black participants, it is significantly smaller than the gap observed among White participants, highlighting an ethnic disparity. Overall, our results suggest that delegating judicial tasks to algorithmic systems may not undermine perceived procedural fairness so long as those systems remain subject to human review. However, our findings also raise the concern that nominal human oversight could be used to legitimize substantively unfair algorithmic procedures in the eyes of ordinary citizens.