Sunday, November 27, 2011

About two years ago, I posted something on my now non-existent Facebook account about how medical tests and treatments, especially elective ones, are more likely to be offered when doctors are reimbursed well for them. My point was that there is a strong financial incentive to test and treat, even in cases where doing so confers, at best, little benefit to the patient's health. A bunch of people (mainly physicians) responded on my wall pointing out how misguided I was. It was actually a bit more vociferous than that, but I digress.
Anyway, it turns out that I was right (in this case, NOT shocking). I just came across a great study by Joshua Gottlieb, an economics job market candidate from Harvard. His study uses a natural experiment in physician incentives to examine whether payment drives the care that gets offered. Specifically, he takes advantage of a large-scale policy change by Medicare in 1997. Previously, Medicare set a different fee schedule for each of roughly 300 small geographic areas, since production costs and other realities of providing a given service obviously vary across space. In 1997, it consolidated these regions into 80 larger areas. Some smaller areas had been receiving large payouts for certain services, and those payouts fell after 1997 because the average payout for their new, larger group was lower. For others, it went the other way. In any case, comparing pre- and post-1997 provision gives you a nice experiment in what happens to health services provision when payouts change for reasons other than local health outcomes or demand for care.
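To make the research design a bit more concrete, here is a minimal sketch of the kind of two-way fixed effects regression one could run on a setting like this. This is my own illustration on simulated data, not Gottlieb's actual specification or variable names; the "payment_shock" variable stands in for the change in reimbursement an area experienced purely because of the 1997 re-grouping.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
areas = np.arange(300)
years = np.arange(1993, 2002)
df = pd.DataFrame([(a, y) for a in areas for y in years], columns=["area", "year"])
df["post"] = (df["year"] >= 1997).astype(int)

# Hypothetical payment shock: the log change in reimbursement an area experienced
# purely because of the 1997 consolidation (positive if it was folded into a
# higher-paying group, negative otherwise).
shock = pd.Series(rng.normal(0.0, 0.02, size=len(areas)), index=areas)
df["payment_shock"] = df["area"].map(shock)

# Simulated outcome with a supply response of about 2.5 built in, for illustration.
df["log_care"] = 2.5 * df["payment_shock"] * df["post"] + rng.normal(0.0, 0.05, size=len(df))

# Two-way fixed effects: area and year dummies absorb level differences; the
# interaction coefficient captures how care responds to the consolidation-driven
# payment change after 1997.
model = smf.ols("log_care ~ payment_shock:post + C(area) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["area"]}
)
print(model.params["payment_shock:post"])  # should land near the simulated 2.5
```

The key point of the design is that the interaction coefficient is identified off payment changes that have nothing to do with local health outcomes or local demand for care.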
Whether you hold my priors or share those of my misguided Facebook friends, the results are astounding. Across all health services, Gottlieb finds that "on average, a 2 percent increase in payment rates leads to a 5 percent increase in care provision per patient." Predictably, the price response is large for services with an elective component (such as cataract surgery, colonoscopy and cardiac procedures - don't huff and puff, I said elective COMPONENT!) but much smaller for things like dialysis or cancer care, where it is easy to identify who needs treatment and you need to provide it no matter what. Furthermore, in addition to disproportionately adjusting the provision of relatively intensive and elective treatments as reimbursements rise, physicians also invest in new technology to do so; this is beautifully illustrated by the examination of reimbursement rates and MRI purchases.
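As a quick back-of-the-envelope reading of that headline number (my own arithmetic, not a figure reported in the paper), the implied supply response is an elasticity of roughly 2.5:

\[
\varepsilon \;\approx\; \frac{\%\Delta\,\text{care provided}}{\%\Delta\,\text{payment rate}} \;=\; \frac{5\%}{2\%} \;=\; 2.5
\]

In other words, the quantity of care supplied moves more than twice as much, in percentage terms, as the price physicians are paid.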
So what's the upshot of all this? Is this a good thing? Probably not. Despite the scaling up of technology, Gottlieb is unable to find any impact on health outcomes or mortality among cardiac patients (the group for which he explored the relationship between payouts and treatment most deeply). Furthermore, he asserts that "changes in physician profit margins can explain up to one third of the growth in health spending over recent decades."
Ultimately, there are some good lessons here. First, if we are interested in bringing down costs and increasing health care efficiency, we need to pay for things that actually help maintain and improve health. Second, we can't rely on physicians to be the gatekeepers of rising costs, as it is clear that, given the incentives, they may not always behave in a way that actually improves health outcomes (thankfully, for cases like fractures, cancer or end-stage renal disease treatment, docs aren't sensitive to prices and do the right thing clinically). Finally, we need to stop universally and blindly lauding the US health care system as a bastion of health care technology if that technology does little to improve outcomes.
Saturday, November 5, 2011
Infections and IQ
A well known fact about our world is that there are great disparities in average IQ scores across countries. In the past, some have tried to argue that this pattern can be explained by innate differences in cognition across populations - some people are just innately smarter than others. Others have tried to attribute these differences to cultural factors. However, genetics and culture are likely not driving these differences in any meaningful sense. After all, another stylized fact is that average IQ scores have gone up markedly, within one or two generations, in any given country. These changes, known as the Flynn Effect after the researcher who painstakingly documented them, speak against the genes story because they occurred far more quickly than one would expect from population-wide changes in the distribution of cognition-determining genes. They have also occurred too quickly to be explained by paradigm-shifting social changes.
So what gives? Enter Chris Eppig, a researcher at the University of New Mexico. In a recent piece in Scientific American, he proposes that cross-country differences in IQ, as well as changes in IQ within a country over time, can be explained by exposure to infectious diseases early in life. The story goes something like this: infections early in life require energy to fight off. Energy at this age is primarily used for brain development (in infancy, it is thought that over 80% of calories are allocated to neurologic development). So if energy is diverted to fend off infections, it can't be used to develop cognitive endowments, and afflicted infants and children end up becoming adults who do poorly on IQ tests.
In the piece, Eppig cites some of his work linking infectious disease death rates across countries to average IQ scores. His models control for country income and a few other important macroeconomic variables. His evidence, while not proof of a causal relationship, is certainly provocative. So provocative, in fact, that I ended up trying to build a stronger causal story between early childhood infections and later life cognitive outcomes. In a recent paper (cited in the above Scientific American article), I examine the impact of early life exposure to malaria on later life performance on a visual IQ test. I use a large-scale malaria eradication program in Mexico (1957) as a quasi-experiment to identify the causal effect. Basically, I find that individuals born in states with high rates of malaria prior to eradication - the areas that gained most from eradication - experienced large gains in IQ test scores after eradication relative to those born in states with low pre-intervention malaria rates, areas that did not benefit as much from eradication (see this Marginal Revolution piece for a slightly differently worded explanation).
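For readers who want the comparison spelled out, here is a toy version of the difference-in-differences at the heart of that design, with entirely made-up numbers rather than the paper's actual data:

```python
import pandas as pd

# Entirely made-up cohort cell means, just to illustrate the comparison
# (not the paper's data or its exact specification).
df = pd.DataFrame({
    "high_malaria": [1, 1, 0, 0],        # born in a state with high pre-1957 malaria?
    "post":         [0, 1, 0, 1],        # born after the eradication campaign?
    "mean_iq":      [88.0, 95.0, 97.0, 98.0],
})

cells = df.set_index(["high_malaria", "post"])["mean_iq"]

# Difference-in-differences: the gain in high-malaria states, net of whatever
# trend the low-malaria states experienced over the same cohorts.
did = (cells.loc[(1, 1)] - cells.loc[(1, 0)]) - (cells.loc[(0, 1)] - cells.loc[(0, 0)])
print(did)  # 6.0 IQ points in this toy example
```

The logic is simply to compare the before-after change in test scores in high-malaria states to the before-after change in low-malaria states, so that any nationwide cohort trend is netted out.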
My paper also looks at the mechanisms linking infections and cognition. One possibility is the biological model described above - infections divert nutritional energy away from brain development. However, I also find evidence of a second possibility: parents respond to initial differences in cognition or health due to early life infections and invest in their children accordingly. In the Mexican data, children who were less afflicted by malaria thanks to the eradication program started school earlier than those who were more afflicted. Because a child's time is the domain of parental choice, this suggests that parents reinforce initial differences across their children (perhaps they feel that smarter children will become smarter adults, and so investments in their schooling will yield a higher rate of return) and that this parental response can modulate the relationship between early life experiences and adulthood outcomes.