Seminar

Mitigating the Judicial Human-AI Fairness Gap


  • Speaker(s)
    Pascal Langenbach (Max Planck Institute for Research on Collective Goods, Germany)
  • Field
    Behavioral Economics, Operations Analytics, Strategy and Entrepreneurship and Innovation
  • Location
    University of Amsterdam, Roeterseilandcampus, A3.01 and online
    Amsterdam
  • Date and time
    September 16, 2025, 13:00 - 14:15

Abstract

Are robot judges perceived as less fair than human judges? If so, how can this perceived judicial human-AI fairness gap be mitigated? We conduct a large online experiment with more than 4,800 observations to explore whether delegating judicial tasks to algorithms affects perceived procedural fairness and how human-in-the-loop interventions might offset any fairness gap. Participants are randomly assigned to assess one of three conditions: a pure robot court (no human oversight), a pure human court (no algorithmic decision aids), or a hybrid court (a human judge assisted by algorithmic decision aids). Within the human and the hybrid conditions, we further vary whether the human judge conducts a thorough (high involvement) or a brief (low involvement) review. Confirming prior research, we find robust evidence of a judicial human-AI fairness gap: participants perceive robot courts as less fair than human courts. Crucially, even low human involvement fully closes this gap on average. Among Black participants, however, the fairness gap persists, although it is significantly smaller than the gap seen among White participants, highlighting an ethnic disparity. Overall, our results suggest that delegating judicial tasks to algorithmic systems may not undermine perceived procedural fairness as long as those systems remain subject to human review. However, our findings also raise the concern that nominal human oversight could be used to legitimize substantively unfair algorithmic procedures in the eyes of ordinary citizens.
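To make the between-subjects design concrete, here is a minimal Python sketch. It is not the authors' code: the condition names, the uniform assignment probabilities, and treating the 4,800 observations as independently assigned participants are all assumptions read off the abstract, which describes five cells (robot, plus human and hybrid courts each crossed with high vs. low human involvement).

```python
# Illustrative sketch (assumptions noted above), not the study's actual code:
# simulate random assignment of participants to the five experimental cells.
import random

# Five cells implied by the abstract: a pure robot court, plus human and
# hybrid courts crossed with thorough (high) vs. brief (low) review.
CONDITIONS = [
    "robot",        # no human oversight
    "human_high",   # human judge, thorough review
    "human_low",    # human judge, brief review
    "hybrid_high",  # human judge with algorithmic aids, thorough review
    "hybrid_low",   # human judge with algorithmic aids, brief review
]

def assign(n_participants: int, seed: int = 0) -> dict[str, int]:
    """Uniformly randomize participants over conditions; return cell counts."""
    rng = random.Random(seed)
    counts = {c: 0 for c in CONDITIONS}
    for _ in range(n_participants):
        counts[rng.choice(CONDITIONS)] += 1
    return counts

if __name__ == "__main__":
    # With 4,800 observations, each cell receives roughly 960 in expectation.
    print(assign(4800))
```

Under uniform assignment, each cell would hold about 960 observations in expectation; the actual study's allocation and sample structure may differ.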