Friday, September 25, 2009
As if war wasn't destructive enough, a new working paper by Mevlude Akbulut-Yuksel at Dalhousie University finds that childhood exposure to conflict-induced destruction has a wide variety of consequences in adulthood:
During World War II, more than one-half million tons of bombs were dropped in aerial raids on German cities, destroying about one-third of the total housing stock nationwide. This paper provides causal evidence on long-term consequences of large-scale physical destruction on the educational attainment, health status and labor market outcomes of German children. I combine a unique dataset on city-level destruction in Germany caused by Allied Air Forces bombing during WWII with individual survey data from the German Socio-Economic Panel (GSOEP). My identification strategy exploits the plausibly exogenous city-by-cohort variation in the intensity of WWII destruction as a unique quasi-experiment. My findings suggest significant, long-lasting detrimental effects on the human capital formation, health and labor market outcomes of Germans who were at school-age during WWII. First, these children had 0.4 fewer years of schooling on average in adulthood, with those in the most hard-hit cities completing 1.2 fewer years. Second, these children were about half an inch (one centimeter) shorter and had lower self-reported health satisfaction in adulthood. Third, their future labor market earnings decreased by 6% on average due to exposure to wartime physical destruction. These results survive using alternative samples and specifications, including controlling for migration. Moreover, a control experiment using older cohorts who were not school-aged during WWII reveals no significant city-specific cohort trends. An important channel for the effect of destruction on educational attainment appears to be the destruction of schools and the absence of teachers, whereas malnutrition and destruction of health facilities during WWII seem to be important for the estimated impact on health.
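For readers who want to see the empirical design more concretely, a stylized version of the city-by-cohort comparison would look something like the following (my notation for a generic difference-in-differences setup, not necessarily the paper's exact specification):

y_{ic} = \beta \,(\text{Destruction}_c \times \text{SchoolAge}_i) + \gamma_c + \delta_i + X_{ic}'\theta + \varepsilon_{ic}

Here y_{ic} is an adult outcome (years of schooling, height, log earnings) for individual i born in city c, \gamma_c and \delta_i are city and birth-cohort fixed effects, \text{Destruction}_c is the intensity of wartime destruction in city c, and \text{SchoolAge}_i indicates being of school age during WWII. The coefficient of interest, \beta, comes from the interaction; the control experiment with older cohorts amounts to checking that the analogous interaction is insignificant for people who had already finished school before the bombing.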
Monday, September 21, 2009
Psychiatric Pharmacotherapy and Crime
I'm now on my psychiatry rotation, and it would be an understatement to say it has been interesting. Washington University is a big believer in biological models of psychiatric disease - a view that has gained ground more generally over the last 20-25 years - which means I need a good command of neurobiology and pharmacology to really understand what is going on with the ward patients.
One of the things we are taught about the epidemiology of psychiatric disease is its link with crime. As such, from a health policy standpoint, if we want to understand the net social impact of pharmacotherapy for psychiatric disease, we need to consider its effects on crime in addition to disease burden. A new working paper by Dave Marcotte and Sara Markowitz takes up this issue:
In this paper we consider possible links between the advent and diffusion of a number of new psychiatric pharmaceutical therapies and crime rates. We describe recent trends in crime and review the evidence showing mental illness as a clear risk factor both for criminal behavior and victimization. We then briefly summarize the development of a number of new pharmaceutical therapies for the treatment of mental illness which diffused during the “great American crime decline.” We examine limited international data, as well as more detailed American data to assess the relationship between crime rates and rates of prescriptions of the main categories of psychotropic drugs, while controlling for other factors which may explain trends in crime rates. We find that increases in prescriptions for psychiatric drugs in general are associated with decreases in violent crime, with the largest impacts associated with new generation antidepressants and stimulants used to treat ADHD. Our estimates imply that about 12 percent of the recent crime drop was due to expanded mental health treatment.
As you can imagine, the authors have to work pretty hard to deal with all the unobserved heterogeneity and confounding that could produce spurious estimates of the drug-crime relationship. I think they do a decent job (though any analysis of this sort has limitations), and the 12% figure, while seemingly high, is still smaller than the impacts postulated for other explanations of the crime decline (legalized abortion, reduced lead exposure) - small enough, in fact, that it sounds plausible.
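To see what the authors are up against, here is a bare-bones sketch of the kind of panel regression one might run (illustrative notation only, not their actual specification):

\text{Crime}_{jt} = \beta\, \text{Prescriptions}_{jt} + X_{jt}'\theta + \mu_j + \tau_t + \varepsilon_{jt}

where j indexes a geographic unit, t a year, and \mu_j and \tau_t are unit and year fixed effects. The worry is that anything time-varying that drives both mental health treatment and crime - policing intensity, economic conditions, drug markets - ends up in \varepsilon_{jt} and biases \beta, which is why the fixed effects and the control set X_{jt} carry so much weight.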
Saturday, September 19, 2009
Variance in Physician Behaviors
I just finished my internal medicine clerkship a week ago. One of the more frustrating yet interesting things I noticed was how one attending physician would praise me for a suggestion while another would gently chastise me for making the same point. That physician behavior varies from one area to the next is well known; the kind of variation I experienced - within-area variation - is less well studied, yet it is just as interesting and important for policy. Do physicians in the same area vary in practice styles because of their baseline experiences and subsequent Bayesian updating? Will physicians converge to the same set of practices or beliefs by learning from their peers?
Andrew Epstein and Sean Nicholson attempt to quantify and explain within-area variation in an interesting paper forthcoming in the Journal of Health Economics. From their abstract:
Small-area-variation studies have shown that physician treatment styles differ substantially both between and within markets, controlling for patient characteristics. Using a data set containing the universe of deliveries in Florida over a 12-year period with consistent physician identifiers and a rich set of patient characteristics, we examine why treatment styles differ across obstetricians at a point in time, and why styles change over time. We find that the variation in c-section rates across physicians within a market is two to three times greater than the variation between markets. Surprisingly, residency programs explain less than four percent of the variation between physicians in their risk-adjusted c-section rates, even among newly-trained physicians. Although we find evidence that physicians, especially relatively inexperienced ones, learn from their peers, they do not substantially revise their prior beliefs regarding how patients should be treated due to the local exchange of information. Our results indicate that physicians are not likely to converge over time to a community standard; thus, within-market variation in treatment styles is likely to persist.
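For reference, the "two to three times greater" comparison is essentially a variance decomposition of risk-adjusted c-section rates (my notation, not the authors'):

\operatorname{Var}(r_{pm}) = \underbrace{\operatorname{Var}(\bar{r}_m)}_{\text{between markets}} + \underbrace{\mathbb{E}\!\left[\operatorname{Var}(r_{pm} \mid m)\right]}_{\text{within markets}}

where r_{pm} is the risk-adjusted c-section rate of physician p in market m and \bar{r}_m is the market average. Their finding is that the within-market term is two to three times the size of the between-market term.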
What is fascinating is (a) that early-career variation in treatment styles is barely explained by where and how physicians trained, and (b) that although there is considerable cross-talk among attending physicians, doctors are reluctant to revise their prior beliefs. This makes things difficult from a policy standpoint: how do you get physicians on board with new guidelines, or encourage local diffusion of best practices, when the rate of change is so slow and the variation apparently idiosyncratic?
Wednesday, September 16, 2009
How the Steelers Can Win An Additional Game this Season
Fall means football, and the NFL season started off in full force last Saturday. Like any well-invested football fan, I've been following every piece of gossip and analysis in a futile attempt to predict outcomes and will the Pittsburgh Steelers to victory.
The most intriguing analysis of the year so far, though, has come from a pair of economists, one of whom is Steve Levitt of Freakonomics fame. They write:
Game theory makes strong predictions about how individuals should behave in two player, zero sum games. When players follow a mixed strategy, equilibrium payoffs should be equalized across actions, and choices should be serially uncorrelated. Laboratory experiments have generated large and systematic deviations from the minimax predictions. Data gleaned from real-world settings have been more consistent with minimax, but these latter studies have often been based on small samples with low power to reject. In this paper, we explore minimax play in two high stakes, real world settings that are data rich: choice of pitch type in Major League Baseball and whether to run or pass in the National Football League. We observe more than three million pitches in baseball and 125,000 play choices for football. We find systematic deviations from minimax play in both data sets. Pitchers appear to throw too many fastballs; football teams pass less than they should. In both sports, there is negative serial correlation in play calling. Back of the envelope calculations suggest that correcting these decision making errors could be worth as many as two additional victories a year to a Major League Baseball franchise, and more than a half win per season for a professional football team.
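For anyone rusty on the game theory, the "payoffs equalized across actions" prediction is easy to see in a toy two-by-two version of the run/pass game (payoffs a, b, c, d are made up purely for illustration). If the defense plays pass coverage with probability q, the offense's expected payoffs are

E[\text{pass}] = q\,a + (1-q)\,b \qquad \text{and} \qquad E[\text{run}] = q\,c + (1-q)\,d

and in a mixed-strategy equilibrium the defense's q sets these equal - otherwise the offense would simply call the better play every time. The testable implications are exactly the ones in the abstract: the average value of a pass and a run should be the same in the data, and play calls should be serially uncorrelated. Finding that passes are systematically worth more than runs is what "teams pass less than they should" means.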
Interesting stuff. I wonder if this article will garner as much controversy as a piece written a few years back by economist David Romer, whose analysis suggested that NFL teams play a risk-averse, sub-optimal strategy on fourth down, punting or kicking far more often than an expected-points calculation would recommend.