InDepth | May 07, 2015 | Benoît Crutzen

Markets with Asymmetric Information

An interview with Jonathan Levin and Liran Einav.



Professors Levin and Einav, according to the Tinbergen Institute course flyer, the series of lectures you gave at TI was on “market failures and inefficiencies due to asymmetric information”. So an obvious first question would be: what got you interested in this topic?

JL: I’ve been interested in information economics, in one way or another, my whole career. I’m not sure I remember a particular moment where I got interested. I do remember reading Mike Spence’s book on market signaling when I was a PhD student, and thinking that was pretty much the ideal for a dissertation: take a problem that was sort of intuitive but also complex and come up with a theory that was beautiful and concise and immediately let you see how lots of economic situations were related by a single idea, in this case the goal of credibly communicating private information.
 

LE: In my case, I actually got into this area somewhat randomly. In my first year at Stanford, a former classmate from graduate school approached me and asked if I would work with her on a project based on new data she had received from an Israeli auto insurance company. The dataset was quite amazing in size and detail (at least relative to the type of datasets that were used 15 years ago), so I started working with her, and in the process became fascinated by the topic and by the close interplay between theory and empirical work that it allowed.
 

Why focus more on the empirics as opposed to the theory? Is it just that theory has been relatively well developed already, whereas empirical research in the area still faces larger challenges?

JL: The theory of asymmetric information is one of the major contributions of 20th century economics. Market failures due to asymmetric information are commonly used to motivate the regulation of major industries, from banking and finance to insurance to healthcare, and many product and professional service markets as well. It’s also an area where economists spent decades developing the theory and its implications with less concerted effort on empirical evidence and measurement. Nowadays a great deal of data is available to look at markets where we think there might be big imperfections due to asymmetric information— so there’s an opportunity to try to come at all the old questions in a fresh way.


LE: Actually, now that really terrific data are available and a fair amount of empirical work has been carried out in this context, there are aspects of the theory that appear limited, which means that some empirical results could spur new extensions and modifications to the theory. In a sense, this is really a cycle of interplay between theory and empirics; it just happens that the empirical side has seen the more exciting developments over the last two decades. One could easily imagine the answer flipping if we were asked the same question twenty years from now.

What are the fundamental challenges that are specific to your area of research interest?

JL: They’re probably the same as in any field: trying to identify issues that are important and then finding a way to formulate questions or models or gather data that lets you make some progress and say something interesting.

Are globalization and innovations in information and communication technology helping to reduce the (negative) effects of asymmetric information in markets or, on the contrary, is the ICT revolution making these issues more pressing than ever?

JL: Information technology has had an enormous effect on almost all of the industries that we think of as having serious informational asymmetries. To take just one example, the traditional concern in insurance and credit markets is that consumers know much more than the firms offering insurance or loans, but that’s not the case anymore. Nowadays, insurers and lenders often have incredibly rich data that arguably flip the problem around and make them better forecasters than individuals are.


LE: This being said, the extent to which the problem “flips” and asymmetric information becomes less of an issue depends a lot on the market setting and the market institutions. In some markets (financial markets, or auto and home insurance) pricing is quite sophisticated, and residual private information on the consumer side is presumably not as important as it used to be. However, many other markets, due either to regulation or to “sticky” tradition, do not use much of the newly available information. So while the information problems could be solved in principle, they do not really get solved in practice, which means that these markets still face many of the same “old” problems.
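
To make this “flipping” concrete, here is a minimal, purely illustrative simulation; it is not from the interview, and the population, the risk-premium parameter, and the noisy “risk score” are all our own assumptions. An insurer forced to charge one uniform price loses its low-risk customers and the pool partially unravels, while an insurer that prices on a noisy individual risk signal serves far more of the market.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical consumers: each privately knows her accident probability p
# (a loss of 1 if the accident occurs) and values full insurance at the
# expected loss plus a small risk premium.
p = rng.uniform(0.01, 0.20, n)   # private accident risk
willingness = p + 0.02           # assumed willingness to pay for coverage

def pool_at(price):
    """Who buys at a uniform price, and what those buyers cost on average."""
    buyers = willingness >= price
    avg_cost = p[buyers].mean() if buyers.any() else 0.0
    return buyers, avg_cost

# Case 1: uniform pricing. Iterating price -> buyer pool -> break-even
# repricing leaves only the riskiest consumers insured (adverse selection).
price = p.mean()
for _ in range(200):
    buyers, price = pool_at(price)
print(f"uniform price: {price:.3f}, share insured: {buyers.mean():.1%}")

# Case 2: the insurer observes a noisy individual risk score ("big data")
# and prices each consumer at her score, flipping the information problem.
score = np.clip(p + rng.normal(0.0, 0.02, n), 0.0, None)
buyers = willingness >= score
print(f"signal-based pricing, share insured: {buyers.mean():.1%}")
```

The exact numbers are meaningless; the point is only that the same population can end up mostly uninsured or mostly insured depending on how much of the available information the pricing institution is allowed, or willing, to use.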

It seems all too obvious that we should ask what your opinion is regarding the importance of the topics close to your heart in explaining the crisis that started in 2008. After all, wasn’t it to a large extent an insurance and credit crisis due to asymmetric information (and badly designed incentives)?

JL: The most common view among economists is that the crisis was brought on by excessive lending and household leverage. Why did this happen? Most textbook theories of credit markets will actually try to explain why the market will under-provide credit to borrowers, not over-provide it. You can come up with theories where lending is excessive for various informational or incentive reasons (and no doubt there were plenty of incentive problems in credit markets in 2007), but our own view is that it’s hard to understand something like the housing and mortgage boom in the US in the 2000s without moving somewhat outside of the standard optimizing rational expectations models we use in economics.

LE: You would need to add several other forces that would go along with the more standard theories. Either way, the crisis was quite a complicated chain of events … It is probably hard for a single researcher to fully understand all its pieces, and we doubt that any single theory would suffice to explain it.
 

Turning now to your coverage of the empirical analysis of big data, could you perhaps first explain to the non-specialists: 1) what is meant by the term “big data”; 2) what is special about big data; and 3) what specific challenges do researchers face in this area?

JL: Well, it’s sort of a buzzword, so I’m not sure there is a specific definition, but I think people generally mean very large or high-frequency datasets, often with less obvious structure than the datasets we’ve traditionally used in economics, the kind you might open up in Excel and look at. One challenge involves just that: figuring out how to “look at” the data.
LE: For the interested reader, here are two links with in-depth reviews on the above matters. Section II of the first paper focuses specifically on points 1 and 2. Both papers also address point 3.

Big data are not specific to economics. Yet economists have arguably developed, and are still developing, research tools that could give them an advantage over other professions when it comes to big data analysis. What is your take on this possibility? Are there any red lines we should be wary of crossing? Is collaborative research with scholars from other relevant fields a strategy worth defending and advocating?

JL: Collaborations across fields can be great, and sometimes they lead to really interesting research, but if we had a recipe for research breakthroughs, we’d have more of them.
LE: I’m not a great fan of advocating general principles on how people should do research… Having said that, the last part of the Science article (in the second link I gave you) speaks a little bit about related topics.
 

If we turn to outright violations of the usual codes of integrity: in the last few years, in the Netherlands as well, there have been a few prominent cases of such wrongdoing (plagiarism and fraud, for example). Regarding the latter, it seems that empirical research is more susceptible to fraud (data massaging, non-random data selection, non- or misreporting of results, etc.) than theory is. Do you share this view? What can we do about these issues?

JL: Well, outright fabrication of data is really unfortunate and hopefully doesn’t happen very often. At the same time, anyone who has done empirical research in economics understands that it involves dozens or hundreds of judgment calls about how to construct samples, deal with data quality issues, specify regression models, choose what to report, and so forth. That’s why, at the end of the day, when you read an empirical paper you inevitably have to assess whether you think the authors are being transparent and making good decisions, and whether the results square with other evidence. You’ve also got to rely to some extent on the authors’ reputations as scholars.
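
As a toy illustration of how far such judgment calls can move results, the sketch below is entirely hypothetical (synthetic data and invented sample-construction rules, not anything discussed in the interview): the same univariate regression is run under three defensible cleaning choices, and the estimated slope shifts each time.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Synthetic data: y depends on x with true slope 2, plus noise and a few
# gross outliers of the kind real-world datasets tend to contain.
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 3.0, n)
is_outlier = rng.random(n) < 0.01
y[is_outlier] += rng.normal(0.0, 40.0, is_outlier.sum())

def ols_slope(x, y):
    """Slope of a univariate OLS regression of y on x."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Three defensible sample-construction rules, three different answers.
rules = {
    "full sample":        np.ones(n, dtype=bool),
    "drop |y| > 10":      np.abs(y) <= 10,
    "trim 1% tails of y": (y > np.quantile(y, 0.01)) & (y < np.quantile(y, 0.99)),
}
for name, keep in rules.items():
    print(f"{name:20s} slope = {ols_slope(x[keep], y[keep]):.3f}")
```

None of the three rules is wrong, which is exactly the problem: a reader of the resulting table has to trust that the authors chose their rules honestly and reported them transparently.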

LE: That being said, if an empirical result is influential enough, it will go through enough scrutiny and replication, which will reveal any fabrication. If fabrication isn’t revealed, it is most likely because the result didn’t get enough scrutiny, which must mean that it wasn’t very influential, and in that case the downside is limited. Of course, we would ideally want to live in an honest world with no crime; but even absent the ideal, given what we just said, we are not overly worried about data fabrication in economics.

Big data are likely to gain in importance in the future. Does this constitute a curse or a blessing when it comes to minimizing the risk of misbehavior by researchers?

JL: We don’t know. Certainly the growing use of proprietary data poses a problem for doing replication or follow-on studies. That being said, we’re not sure the recent history of replication studies in economics is a great model for science, either, because assuming there is no obvious malfeasance or coding error, once you get into the details there are usually so many judgment calls and decisions to argue about that it can become a real mess to replicate someone else’s findings.
LE: Yet, we would reiterate our belief that the really influential discoveries are not going to be affected by this type of misbehavior. Let me try to draw an analogy… It’s like asking someone who moves from a small house to a big house whether he is more worried about burglary because the new house has more windows. Perhaps he should worry, but the benefits of living in the big house seem (to us) so much more important than the increased risk of burglary that we do not see why we need to focus on it.

The usual closing question: do you have any words of advice for young students who wish to follow in your footsteps?

LE: They shouldn’t worry about following in anyone’s footsteps; they should just be confident in what they do and enjoy it.
JL: Wow, that question makes me feel old! I think I need to wait a few years to answer it.
 

Jonathan Levin and Liran Einav, both professors at Stanford University, presented the Tinbergen Institute Economics Lectures 2014. Levin is an applied economist interested in industrial organization, market design and the economics of technology. Einav carries out research in industrial organization and applied microeconomics, investigating insurance markets and using empirical analyses to explore the implications of adverse selection and moral hazard.