News | November 24, 2025

Monograph on gradient-based learning and optimization by Bernd Heidergott

The monograph Optimization and Learning via Stochastic Gradient Descent by research fellow Bernd Heidergott (Vrije Universiteit Amsterdam) and Felisa Vázquez-Abad (Hunter College, City University of New York), published by Princeton University Press, offers a comprehensive treatment of recursive optimization and learning algorithms for stochastic problems.


The book explores gradient-based stochastic optimization through the methodologies of stochastic approximation and gradient estimation. While the presentation is primarily theoretical, the authors emphasize the development and implementation of practical algorithms. The guiding philosophy is that effective problem-solving arises from the interplay between mathematical theory, modeling, and numerical methods—none of which should dominate the others.
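To give a flavor of the kind of recursion studied in this setting, the sketch below shows a minimal stochastic-approximation (Robbins–Monro style) update in Python. It is an illustration only, not an excerpt from the book: the quadratic objective, the Gaussian noise model, and the 1/n step-size schedule are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(theta):
    """Unbiased estimate of the gradient of f(theta) = 0.5 * ||theta - target||^2.
    The Gaussian noise stands in for the randomness of a stochastic problem."""
    target = np.array([1.0, -2.0])
    return (theta - target) + rng.normal(scale=0.5, size=theta.shape)

def sgd(theta0, n_steps=5000):
    """Robbins-Monro recursion theta_{n+1} = theta_n - a_n * g_n with
    diminishing steps a_n = 1/n (so sum a_n = inf, sum a_n^2 < inf)."""
    theta = np.asarray(theta0, dtype=float)
    for n in range(1, n_steps + 1):
        a_n = 1.0 / n
        theta = theta - a_n * noisy_gradient(theta)
    return theta

print(sgd([0.0, 0.0]))  # approaches the minimizer (1.0, -2.0) despite noisy gradients
```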

The first part of the text presents a rigorous treatment of stochastic approximation theory, including advanced models and state-of-the-art analytical techniques, focusing first on applications that do not require gradient estimation. The second part introduces modern approaches to gradient estimation, integrating cutting-edge numerical algorithms. The final part presents a series of rich case studies that synthesize the earlier concepts into complete, worked examples. Applications in statistics and machine learning are discussed in depth, and selected gradient estimation methods receive detailed theoretical analysis.
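An elementary example of a gradient estimator of the kind treated in the second part is the score-function (likelihood-ratio) method. The sketch below, which is not drawn from the book, estimates the derivative of E_theta[X^2] for X ~ Normal(theta, 1); the choice of distribution and cost function is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def score_function_gradient(theta, n_samples=100_000):
    """Score-function estimate of d/dtheta E_theta[X^2] for X ~ Normal(theta, 1).
    The score of the normal density is (X - theta), so the estimator averages
    X^2 * (X - theta); its expectation equals the true derivative 2 * theta."""
    x = rng.normal(loc=theta, scale=1.0, size=n_samples)
    return np.mean(x**2 * (x - theta))

theta = 1.5
print(score_function_gradient(theta))  # approximately 2 * theta = 3.0
```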

Numerous examples throughout the book illustrate how these methods can be applied in practice. End-of-chapter exercises help readers consolidate their understanding, while sections on “Practical Considerations” highlight common implementation trade-offs. Offering the first unified treatment of this field, the book addresses a broad audience of researchers and graduate students in applied mathematics, engineering, computer science, physics, and economics.