
Klein Teeselink, B., van Dolder, D., van den Assem, M. J. and Dana, J. D. (2025). High-Stakes Failures of Backward Induction. Games and Economic Behavior.


  • Journal: Games and Economic Behavior

We examine high-stakes strategic choice using more than 40 years of data from the American TV game show The Price Is Right. In every episode, contestants play the Showcase Showdown, a sequential game of perfect information for which the optimal strategy can be found through backward induction. We find that contestants systematically deviate from the subgame perfect Nash equilibrium. These departures from optimality are well explained by a modified agent quantal response model that allows for limited foresight. The results suggest that many contestants simplify the decision problem by adopting a myopic representation, optimizing only their chances of beating the next contestant. Consistent with learning, contestants' choices improve over the course of our sample period.
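To illustrate the backward-induction logic the abstract refers to, the sketch below solves a deliberately simplified, hypothetical version of a Showcase-Showdown-style stopping game — not the authors' model or the actual game-show rules. Assumptions made for the sketch: a wheel with values 1–20 (a stand-in for the real 5–100 wheel), only two contestants instead of three, at most two spins each, a total above 20 busts to zero, and ties go to the earlier player. The second (last) mover's strategy is then forced, and the first mover's optimal spin/stop rule falls out of backward induction over the last mover's win probability.

```python
from fractions import Fraction

N = 20                        # simplified wheel: equally likely values 1..N
WHEEL = range(1, N + 1)

def p2_win(t1):
    """Probability the second (last) contestant beats a standing total t1.
    The last mover's play is forced by backward induction: stop as soon as
    the current total beats t1, otherwise spin again (ties assumed to go
    to the earlier player, so the second mover must strictly exceed t1)."""
    win = Fraction(0)
    for x in WHEEL:
        if x > t1:
            win += Fraction(1, N)                 # first spin already wins
        else:
            # must spin again: wins iff t1 < x + y <= N for the second spin y
            good = max(0, (N - x) - max(t1 - x, 0))
            win += Fraction(1, N) * Fraction(good, N)
    return win

def first_mover_decision(s):
    """Optimal spin/stop choice for the first mover holding total s."""
    stop_value = 1 - p2_win(s)                    # win prob if s stands
    spin_value = sum(                             # expected win prob if spinning
        Fraction(1, N) * (1 - p2_win(s + y))
        for y in WHEEL if s + y <= N              # totals above N bust (value 0)
    )
    return "spin" if spin_value > stop_value else "stop"

# Largest total at which spinning again is still optimal for the first mover.
threshold = max(s for s in WHEEL if first_mover_decision(s) == "spin")
```

In this toy parameterization the first mover spins again on totals of 10 or less and stands on 11 or more; the myopic heuristic described in the abstract would amount to replacing the full continuation value with the probability of beating only the immediately following contestant, which in the two-player sketch happens to coincide with the optimal rule.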