According to this recent paper by Ben Sommers et al. in The New England Journal of Medicine, the answer is yes. Sommers and coauthors examine mortality before and after Medicaid expansions in certain states, using the same before-after comparison in neighboring states as a control to account for general mortality trends that would have existed without the Medicaid changes. They find expansions "were associated with a significant reduction in adjusted all-cause mortality (by 19.6 deaths per 100,000 adults, for a relative reduction of 6.1%; P=0.001)...Mortality reductions were greatest among older adults, nonwhites, and residents of poorer counties." They also find evidence of better self-reported health and fewer delays in getting care with Medicaid expansions.
This is a well-done, timely piece. The recent Supreme Court ruling declared the Affordable Care Act (ACA) provisions mandating Medicaid expansions to be unconstitutional, which has led some state governors to announce that they will not pursue such expansions. Some of this, of course, is dumb election-year posturing: the ACA increases the federal matching rate for Medicaid by a great deal, so governors will most likely buy in to the expansions given these nice financial incentives. That said, this piece adds another angle to the whole debate: failure to expand Medicaid may mean not just money left on the table, but people's lives as well.
Welcome! This is a blog that generally covers issues related to health and development economics. Feel free to visit and comment as often as you'd like.
Thursday, July 26, 2012
Wednesday, July 18, 2012
Value Based Purchasing and Safety Net Hospitals
One of the mainstays of our current health care reform dialogue is promoting efficiency. The US health care system gets pretty good outcomes for its clients, but at great cost and with a lack of equity. Regarding cost, there has been a push to pay providers (doctors, clinics, hospitals, etc.) for the quality of the services they render, not just for having rendered the service. Along these lines, with the new health care reform bill, Medicare will start paying hospitals based on performance as measured by the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS). This is what is referred to as Value Based Purchasing (VBP).
At face value, it sounds like a great idea. But are there potential downsides to VBP? An elegant new study by Paula Chatterjee and colleagues suggests that the answer might be yes. Chatterjee et al. look at how safety net hospitals, a group of providers that disproportionately take care of poor, vulnerable patients, have performed on HCAHPS surveys, showing that these hospitals typically have fared worse on the patient satisfaction aspect of this index. Strikingly, safety net hospitals were 60% less likely to be at or above the median on a variety of patient experience measures when compared to other hospitals. Because the median is the key metric upon which the Medicare VBP algorithms are based, safety net hospitals stand to lose key Medicare dollars. Financially constrained as they already are, this could lead to a negative feedback loop, where safety net hospitals lack the funds to make the improvements necessary to perform better on VBP, thereby losing more funds, and so on.
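To see how a median cutoff creates winners and losers, here is a toy sketch (hospital names and scores entirely made up; this is not Medicare's actual VBP algorithm):

```python
import statistics

# Hypothetical patient-experience scores (0-100); all values invented for illustration.
scores = {
    "Safety Net A": 62, "Safety Net B": 58, "Safety Net C": 65,
    "Hospital D": 74, "Hospital E": 71, "Hospital F": 68,
}

# The median score is the stylized payment threshold.
cutoff = statistics.median(scores.values())

for name, score in scores.items():
    status = "at/above median" if score >= cutoff else "below median: loses VBP dollars"
    print(f"{name}: {score} ({status})")
```

Because the threshold is relative, some hospitals must lose by construction; if the same hospitals sit below the median year after year, the losses compound, which is the feedback loop described above.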
This example highlights a few key points relevant for health care reform anywhere. First, certain kinds of incentives will create, or exacerbate existing, tradeoffs between making the system more equitable versus more efficient. In this case, the push for efficiency may reduce available care options for the poor, worsening equity. Note that this could even worsen efficiency, if the lack of immediate care for the poor leads to increased downstream costs (e.g., more ED visits, or longer hospitalizations when an earlier, less costly hospitalization would have done). Second, better designed incentives may actually help move to an equilibrium where this tradeoff is minimized. For example, a VBP system that allows safety net hospitals lead time in figuring out best practices (perhaps via learning from each other, or via some explicit "curriculum" or forum created by Medicare) may help them improve quality, which will certainly be equitable given their population, and stay solvent.
Ultimately, I think the key for payment reform, which should be the explicit goal for Medicare and other health programs, is to perform the reachable alchemy of improving both equity and efficiency. I say "reachable" for a reason: everyone is aware that other OECD countries spend less and gain the same or more when it comes to health. At least on a less fundamental level (i.e., payment reform within our mishmash private system), we can aim to do the same.
Sunday, July 15, 2012
PEPFAR on the Mind
The July 31st issue of Health Affairs devotes itself to the President's Emergency Plan for AIDS Relief (PEPFAR), exploring the aid program from a number of interesting viewpoints. The highlights for me include a piece by Harsha Thirumurthy and coauthors on the economic benefits of treating HIV and a wonderful piece of research by Karen Grepin examining the effects of PEPFAR aid on other domains of health care (she finds positive spillovers for certain maternal health services, but crowd-out for child vaccines). The article by John Donnelly discusses the origin of PEPFAR within the (W) Bush administration, which I am sure will make most readers send some positive vibes to this otherwise beleaguered president.
The articles on the future of PEPFAR (how the funding streams can be better targeted, how much funding is needed and for how much longer) are quite interesting, as well. Certainly, if you have a particular interest in PEPFAR, the whole issue is worth reading.
Thursday, May 17, 2012
Global Health Focus in This Week's JAMA
Some interesting new op-ed and research pieces on global health appear in the latest issue of JAMA. Perhaps the most interesting is this piece by Eran Bendavid and co-authors, who examine PEPFAR (The United States President's Emergency Plan for AIDS Relief) and its impact on all-cause mortality in Africa. This is important because some have argued that intense spending on HIV in this manner has crowded out spending on other important health problems. The authors use a differences-in-differences strategy (comparing mortality rates within countries before and after PEPFAR, with some areas getting more funding than others, holding fixed contextual factors across countries) and find that increased PEPFAR funding was associated with lower all-cause mortality rates. (HT: Paula Chatterjee)
Monday, April 16, 2012
Poorly Conceived Incentives in Health Care: The AM Discharge
There's no shortage of examples of well-meaning, but ultimately failing, incentives in health care, but here is a recent one that I've come across as an intern. In the community hospital we all rotate through, interns are paid $5 if they manage to discharge 2 patients by 10 AM. Early discharges are valued by hospitals because they help with turnover and flow: overcrowding in the emergency room is kept to a minimum if hospital beds are available in the morning. But discharges take time: as interns, we have to round on all of our inpatients in the morning, see any new patients we inherit from the overnight team, and write a bunch of orders and notes. Discharges require thought, often irritating paperwork and emails, and all of that crowds out time for these other tasks. Furthermore, discharging people only means that you'll get more new patients later. For most of us, that's usually exciting, but it's easy to see how this may seem like a penalty for keeping the hospital's best interests in mind.
In most places I've worked, there are no incentives for residents to get people out by noon, other than to avoid having to be gently reminded (over and over) by case managers and nurses to get the work done. Obviously, since the (opportunity) costs far outweigh the benefits, I'm sure most interns and residents are in no real rush to discharge people when they could be doing other things for their patients (or themselves).
Enter the hospital where I am now. They've ostensibly addressed this incentive compatibility problem by offering a small financial incentive for a certain number of pre-10 AM discharges. However, I think their $5 scheme is doomed to fail for two reasons. First, $5 is not much money and most of us value our time far more than that. Second, and likely worse, such small incentives may actually make us less likely to move people out the door. Indeed, a number of experimental studies in behavioral economics and psychology have shown that small monetary rewards may reduce effort, either by reminding us that we get paid less than our peers in other fields or by demeaning us (or the task) with the size of the offer. Several of the interns I've spoken to have interpreted the incentive scheme in the latter light.
So what is the right kind of incentive? There are several, likely more effective, options. One is to increase the cash value of the incentive: perhaps the intern or resident at the end of the month (or year, to reduce sampling error arising from variation in patients' conditions and severity of illness) gets $100 or more for getting the most people out by noon. Perhaps even better is to take the monetary aspect out of it altogether. As physicians, many of us value the altruism that goes with the field. Why not center the hospital brass-resident relationship around that aspect and reward the intern or resident who discharges most effectively with some kind of public, visible and worthy commendation that he or she is a good doctor (at least one valued by the hospital for attention to patient care and the realities of health care delivery on a larger scale)?
Saturday, April 14, 2012
Can Tax Breaks Help Decrease Organ Shortages?
Despite advances in organ procurement, immune suppression (to prevent the body from rejecting a transplant), and surgical techniques, organ shortages and wait list numbers continue to grow in the United States. Calls for instituting organ markets (including ones from prominent economists) notwithstanding, ethical concerns have made outright payments for organs a non-starter policy in the American context. However, policymakers have recognized that those who wish to donate organs, particularly living donors, may face enormous costs in the form of lost wages, health care costs, and travel and lodging fees. As such, efforts have been made to reduce these financial barriers to donation: in the last decade, 16 states have passed laws providing tax deductions to living organ donors, with many others considering similar legislation.
Have these policies been effective in increasing donation rates? In a new paper in the American Journal of Transplantation, Erika Martin, Anitha Vijayan, Jason Wellen and I explore this question using data on living organ donations and transplants for each state between 2000 and 2010. Our research design compares the change in donation rates before and after the enactment of a tax policy in states that passed these laws against the change over the same period in states that did not (a differences-in-differences design). We found no evidence that these laws appreciably closed the organ shortage. At most, they may have led to a 5% increase in donation rates (the upper bound of our confidence intervals) - not too bad, but no panacea.
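The differences-in-differences arithmetic behind this design is simple enough to sketch (donation rates below are invented for illustration; they are not our data or estimates):

```python
# Donation rates (per million residents), invented for illustration.
# "Treated" states passed a tax-deduction law; "control" states did not.
treated_before, treated_after = 20.0, 21.0
control_before, control_after = 18.0, 19.2

# Each group's before-after change reflects the policy (if any) plus shared trends...
change_treated = treated_after - treated_before
change_control = control_after - control_before

# ...so differencing the two changes nets out the shared trends.
did_estimate = change_treated - change_control
print(f"DiD estimate: {did_estimate:+.2f} donations per million")
```

In these made-up numbers the treated states improve, but by slightly less than the controls, so the estimated policy effect is about zero: the control trend is what keeps a rising raw trend from being mistaken for a policy effect.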
Why is this the case? First off, tax deductions don't put that much money back in your pocket. For an upper-middle-class family of four, the value of a deduction amounts to $500-$900. Nothing to sneeze at, but still below the estimated (opportunity) cost of making a donation. We argue that tax credits (which have a higher cash value) or grants may work better. Second, in doing the research we ended up talking to organ donation activists in many states, many of whom had no idea these policies were in place! It's possible that a lack of awareness contributed to our null finding, as well. Finally, while we don't elaborate on this in the paper, our Figure 1 shows that states that passed these laws were already experiencing rising donation rates relative to the other states prior to passage. So it's possible that states progressive enough to pass tax deductions were already doing other things to bump up donation rates.
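As a back-of-the-envelope check on that range: a deduction reduces taxable income, not taxes owed, so its cash value is roughly the amount deducted times the marginal tax rate. Assuming a $10,000 deduction and a handful of illustrative marginal rates (both assumptions of mine here, not figures from the paper):

```python
deduction = 10_000  # assumed deduction amount, for illustration only

for marginal_rate in (0.05, 0.07, 0.09):  # illustrative marginal tax rates
    # A deduction's cash value = amount deducted x marginal rate.
    value = deduction * marginal_rate
    print(f"marginal rate {marginal_rate:.0%}: deduction worth ${value:,.0f}")
```

A credit of the same nominal size would instead be worth its full face value, which is why we argue credits or grants may work better.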
Lastly, while tax deductions may not create new donors, they may help people who would donate anyway. For example, a living donor who is compensated for his or her private costs because of the tax policy, and who perhaps experiences less financial distress as a result, certainly benefits from the policy. We don't really have the data to examine this "intensive margin." On this point, I wouldn't write tax policies off just yet.
Monday, March 26, 2012
Dear Rush Limbaugh: Oral Contraceptives Have Positive Labor Market Consequences!
Rush Limbaugh doesn't like oral contraception. At least, he doesn't think health insurers should have to pay for it ('Dar He Blogs will keep this debate academic and avoid any "pot-kettle-black" type comments). I'm guessing he wasn't aware of a new study by Martha Bailey and co-authors illustrating the large, positive labor market effects "The Pill" has provided for women:
Decades of research on the U.S. gender gap in wages describes its correlates, but little is known about why women changed their career paths in the 1960s and 1970s. This paper explores the role of “the Pill” in altering women’s human capital investments and its ultimate implications for life-cycle wages. Using state-by-birth-cohort variation in legal access to contraception, we show that younger access to the Pill conferred an 8-percent hourly wage premium by age fifty. Our estimates imply that the Pill can account for 10 percent of the convergence of the gender gap in the 1980s and 30 percent in the 1990s.
Why would birth control pills enable women to earn more? By allowing more control over the reproductive cycle, the Pill reduces uncertainty in the timing of life events such as pursuing college, taking job training opportunities, and entering the workforce. One can easily imagine how providing women with relative certainty could lead to more investment in their "human capital," because such investment becomes easier to make and its returns more predictable.
Another way oral contraceptives can help in the labor market has to do with absenteeism due to menstruation. In a really neat study, Andrea Ichino and Enrico Moretti show that work absences for young women follow a 28-day cycle, whereas those for women over the age of 45, and men of any age, do not. They suggest that this pattern is due to menstruation and go on to calculate that lost work days on account of the cycle may explain 14% of the gender differential in earnings seen in their Italian dataset.
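The kind of 28-day signature they look for can be illustrated with a quick autocorrelation check on simulated absence data (the numbers are entirely made up, and their actual methods are far more involved):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Simulate 20 cycles of daily absence counts: a bump every 28 days plus noise.
days = 28 * 20
absences = [5 + (3 if t % 28 == 0 else 0) + random.random() for t in range(days)]

def autocorr(series, lag):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return cov / var

# The cyclical component shows up as a spike in autocorrelation at lag 28,
# but not at neighboring lags.
for lag in (27, 28, 29):
    print(f"lag {lag}: autocorrelation {autocorr(absences, lag):+.2f}")
```

A series with a genuine 28-day cycle correlates strongly with itself 28 days later but not 27 or 29 days later, which is the telltale pattern the paper documents for young women only.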
Thursday, March 22, 2012
Shortening Medical Training? Yes, Please!
Those who want to become a doctor in the United States stare down a very long road: 4 years of college, 4 years of medical school, 3-7 years of residency and, potentially, fellowship to finish it all off. (Some crazies tack on another 1-5 years pursuing a joint degree, as well.) But does it really require all this time to create a doctor well-equipped to fight disease in our current times?
None other than Ezekiel Emanuel and Victor Fuchs (one of the founding fathers of health economics) think that the answer is "no." In a recent piece in the Journal of the American Medical Association, they state that the current educational system for physicians is, in fact, wasteful. They point out that a physician today cannot credibly be the superhero, one-man-band of master clinician, penetrating researcher, community pillar, and public intellectual that she or he was expected to be up through the 1980s. They state that the complexity of medicine today requires a team-based approach where doctors are relative specialists. Emanuel and Fuchs illustrate how physicians today are more likely to take on the one (or two) of these archetypal roles that fit their comparative strengths while letting other members of the larger team fulfill the rest. In contrast to this new reality of medicine, our current system of medical education is wasteful because it still aims to produce one-man-band types.
So how can medical education be shortened? Emanuel and Fuchs suggest the following:
-Loosen the requirement that an undergraduate degree be mandatory (there are plenty of six- or seven-year combined undergrad-MD programs that produce equally good doctors)
-Cut the pre-clinical years of medical school from 2 to 1.5 years (UPenn and Duke have shortened versions of the classroom years)
-Cut the clinical years from 24 to 15 months (Harvard is currently doing this, quite successfully I might add)
-Eliminate research requirements in residency and fellowship for those who do not want to do them (this, for example, would shave off 2 of the 7 years to become a general surgeon)
-Eliminate "leadership years" (for example, in internal medicine, the third year is spent leading teams and has little value added in the education production function)
I am 100% behind this. Much of our current system persists because of historical considerations from nearly a century ago. In the same way that some of our antiquated mechanisms to finance health care (for example, employer sponsored health insurance) don't make much sense anymore, neither does making people spend a great deal of time in training where up to a third of it has little marginal benefit in turning people into good doctors.
If we are concerned with cost and efficiency in the health care system, we ought to pay as much attention to being efficient in how we train doctors as we do in figuring out how to pay them.
Sunday, March 18, 2012
Thank You, Affordable Care Act!
I've come across quite a few patients now in their early 20s presenting with the first symptoms of what may be a serious chronic illness. Many of these individuals happen to be without health insurance for one reason or another. Thankfully, in Massachusetts, the combination of MassHealth (Medicaid), Commonwealth Care and coverage options for young adults under 26 allows many of these individuals to get much-needed care. On the other hand, my patients from neighboring states do not necessarily have access to these luxuries, which is where the Patient Protection and Affordable Care Act (PPACA) comes in.
The PPACA has quite a few moving parts, some of which are in place and others not (see here and here). One piece that has gone into effect mandates that health plans covering an enrollee's children cover those children up to the age of 26. For several of my patients, this has allowed them to get access to health care as they bridge to their late 20s and eventually find their own care options. As their doctor, this has been huge: it prevents my patients from deferring care for a serious condition that would most certainly result in large short-term and long-term economic and health consequences.
Interestingly, I'm not sure if too many people know about this aspect of the health care law. I told one self-identified Republican, a young man who would go on to benefit from the extension of parental insurance, about it and he seemed shocked: "For real? You mean, this is President Obama's idea? Wow, he's looking out for us."
I wonder now if much of the resistance to the PPACA has to do with similar ignorance. If that is the case, the Republicans should be credited for obfuscating the national debate around the law in their favor, and the Democrats chastised for allowing this to happen.
I'm not saying I'm a PPACA homer or anything. The act certainly has some issues. That said, as a new primary care doctor, I just can't imagine practicing in a time where such options were not available. Really, it's incredible that that time was literally a year or so ago. Imagine holding off treatment for newly diagnosed active and fulminant Crohn's disease because of lack of access: would that happen in a just, advanced society? Thankfully, not anymore.
Saturday, March 3, 2012
Preventing HIV and STDs with Cash Transfers - Both Abroad and Here?
So on the heels of my post a few days ago comes a brand new study in The Lancet where women in Malawi randomized to receive unconditional cash transfers were less likely to contract HIV or other STDs than their unpaid counterparts. Here is a very nice summary piece on the study.
The mechanism linking cash transfers to reduced HIV may have something to do with the fact that women with access to such resources need not depend on men for the same. That is, women who are cash strapped or who lack opportunities in the labor market may need to depend on relationships where the partner can support them financially. Financial support, in turn, may reduce their ability to negotiate safe sex practices (this is the so-called "transactional sex").
Cash transfer programs of this nature may not just be useful overseas. A forthcoming article in the Journal of Adolescent Health shows that young African American women in Atlanta who have boyfriends who give them gifts are less likely to use condoms than those without such boyfriends or those with boyfriends who go on to find another source of spending money. The authors conclude that "receiving spending money from a boyfriend is common among adolescent women in populations targeted by pregnancy and sexually transmitted infection prevention interventions, and may undermine interventions' effectiveness." (HT: Paula C for the Atlanta paper).
Monday, February 27, 2012
Preventing HIV By Paying People to Maintain Safe Sex Practices
Finding an HIV prevention strategy that works is one of the Holy Grails of public health policy. Indeed, poor evidence of the epidemic-quelling effects of most prevention programs (outside of biomedical interventions such as circumcision) has driven many to accept that treatment may actually be the best preventive device we have.
That may be true, but I do not think that all modes of prevention have been exhausted. One margin where there is some growing evidence is in providing cash transfers for people who maintain a given set of behaviors over a period of time. For example, one could pay people for every month they do not test positive for HIV. Such methods have been tried in the substance abuse world, apparently to good effect. Could it work with HIV/AIDS?
In a great new NBER working paper, Damien de Walque, William H. Dow, Carol Medlin, and Rose Nathan argue that the answer is a resounding yes (ungated version here). From the abstract:
[W]e discuss the use of sexual-behavior incentives in the Tanzanian RESPECT trial. There, participants who tested negative for sexually transmitted infections are eligible for outcome-based cash rewards. The trial was well-received in the communities, with high enrollment rates and over 90% of participants viewing the incentives favorably. After one year, 57% of enrollees in the “low-value” reward arm stated that the cash rewards “very much” motivated sexual behavioral change, rising to 79% in the “high-value” reward arm. Despite its controversial nature, we argue for further testing of such incentive-based approaches to encouraging reductions in risky sexual behavior.
The abstract undersells some of the evidence they cite in the paper, so I would go ahead and read the entire thing. While people may somehow find it reprehensible to pay people to do the right thing, there is already a great precedent in education and yearly doctor visits (i.e., conditional cash transfers for those things are all the rage in the Americas) as well as (as mentioned above) with getting people to stop using illicit/recreational drugs.
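As a toy illustration of how such an outcome-based payment rule works (the reward amounts and testing schedule here are invented for illustration, not taken from the RESPECT trial's actual design): a participant is tested each round and is paid only for rounds with a negative result.

```python
# Toy model of outcome-based rewards: pay a fixed amount per testing
# round in which the participant tests negative. All amounts and the
# number of rounds are assumed, not the trial's actual parameters.

LOW_REWARD = 10    # hypothetical per-round payout in a "low-value" arm
HIGH_REWARD = 20   # hypothetical per-round payout in a "high-value" arm

def total_payout(negative_rounds, reward_per_round):
    """negative_rounds: list of booleans, True = tested negative that round."""
    return reward_per_round * sum(negative_rounds)

rounds = [True, True, False, True]  # negative in 3 of 4 rounds
print(total_payout(rounds, LOW_REWARD))   # pays only for the 3 negative rounds
print(total_payout(rounds, HIGH_REWARD))
```

The key design feature is that payment is conditioned on the outcome (a negative test), not on attendance or self-reported behavior, which is what distinguishes this from a simple participation stipend.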
Saturday, February 25, 2012
200 Images Every Physician Should Know
As part of its 200th anniversary celebrations, the New England Journal of Medicine has collected 200 of its most striking (and sometimes clinically relevant) images from its "Images in Clinical Medicine" series. These are pretty awesome - especially if you are a busy intern trying to sneak in time to read, but don't have the time or the will to work through dense review articles. Think of these as flash cards for residents (or for all doctors in general!). Good stuff (HT: Paula C).
Wednesday, February 8, 2012
Internal M&M
Like the last two Mornings, The Intern parked his Laptop on Wheels at the Nursing Station and went into a nearby room to see Mr. B. And like the last two Mornings, Mr. B was there, lying in his hospital bed. The Intern looked him over. Some increased bruising on his right arm, maybe some increased bleeding in his gumline. The rest of him looked the same, save an IV sticking out from a long bone in his lower left leg. The Intern lifted the diaphragm of his stethoscope onto Mr. B's chest and heard nothing. This was a New Morning, and this was a New Finding. The Intern had expected it. A silence - really, a vacuum - had replaced once robust heart sounds.
Earlier That Morning, The Intern woke up, weary and maybe even a little defeated, after a Call Day that marched into the night. Like every Morning he reflexively checked his e-mail, and saw the Night Resident's message that Mr. B's heart stopped some time before daybreak. They couldn't resuscitate him. There was the jolt of shock and sadness, of course, but, for that one tiny slice of The Intern's cortex, a perverse affirmation, as well.
The Previous Morning, Mr. B greeted The Intern as always. "Hey doc, what's going on? How are you?" The Intern dutifully listened to his heart, his lungs, examined his eyes, mouth, fingernails and felt his abdomen. He felt for enlarged lymph nodes. Everything was fine like always, despite The Numbers telling him otherwise. Very low platelets, very high this, very low that. More Data forthcoming. Biopsy results pending. The difference between Mr. B in person, on exam, and Mr. B on The Computer was always jarring and troubling to The Intern. Something viscerally troubled him more on this day, however. He came to check in on Mr. B every two hours or so.
And so for the first part of Yesterday, Mr. B was just like he was on admission - well-appearing, kind, deferential, interested in talking about his family, and speaking seriously about an incipient scandal in college football. But later in the day, he seemed uncomfortable. "I don't know what's going on doc...I just don't feel right," he told The Intern. He pointed to his abdomen, just below his rib cage - "it hurts here. I’m sorry to keep bothering you." The Intern examined him, drew some Labs, looked over a Scan and then reassured him. "First of all, don’t apologize. I’m here to Take Care of you. And what’s going on, it's probably your pancreas. I'm going to give you some fluids, some pain medications. We’ll get an ultrasound. Don't eat anything, OK? Well, maybe you can have that milkshake over there. I'll turn a blind eye to that." A lame joke, a smile.
But The Intern was disturbed. Mr. B could not articulate what was wrong with him and The Intern knew him to be an articulate man. The Intern pushed what he recognized as an ominous feeling to the recesses of his brain, so that it occupied only a tiny slice of cortex, and pushed his Laptop on Wheels forward to see another patient who was very ill. He did, however, return to see Mr. B a few more times, and was reassured when he seemed a bit better. At eleven that night, The Intern saw Mr. B one final time before going home. "I'm great doc, thanks for everything! Go get some sleep!"
As he looked down at Mr. B on This Morning, The Intern's mind skipped through a slide set of images and thoughts, as if they were set to a random shuffle. He remembered, with some shame, how three days ago he was annoyed to be paged about a new admission late in the day, only to feel some form of joy when he connected with Mr. B during that first conversation. He thought about how Mr. B told him about the first time he met his wife, when she thought he was "some married jerk flirting with her," only to realize later that he was single and legit - "we've been together ever since." He wondered whether Mr. B's pancreatitis, chest pain and low platelets were all tied together by one of two catastrophic diseases, based on a discussion he had with The Consultants and The Attending the other day. He remembered what Mr. B said about his children, how they weren't wealthy but were doing jobs that they really liked and that that's what he’d always wanted for them. The Intern knew he'd tell his children the same thing someday.
He remembered how Mr. B kept talking about his family and his good fortune to have them in his life. He talked more about them Yesterday and The Intern wondered then if Mr. B somehow knew Yesterday itself that he would not see them ever again. He wondered if articulate people being unable to find words for their unease and distress was a poor prognostic sign.
And then he asked himself whether he could have done something differently. Whether he could have thought more about the medical mystery that was Mr. B instead of spending time running around writing for Eucerin cream and Imodium or rushing through tasks in order to go home at a reasonable hour. He wondered if Large Academic Hospital's Cadillac of a Differential Diagnosis distracted him from the few more obvious possibilities, the ones that may have mattered to fixate on. He wondered if he could have simply done better by knowing more and thinking faster. He wished that tiny slice of cortex that told him to be afraid did so more loudly and forcefully.
It was then that The Intern noticed The Tag on Mr. B's left big toe. He didn't bother to read what was written on it; he knew it served to identify, perhaps to differentiate Mr. B from others in The Morgue. The Intern looked elsewhere, with his gaze stopping at Mr. B's right hand, which lay there palm up, fingers open. The Intern thought he must have died like this, keeping his hand that way as if to ask to touch another person one last time before his exit. The Intern reached down and closed his fingers, rotated the hand inward. Mr. B's fingers were cold in a way that The Intern had never experienced. The Floor Nurse stared at The Intern, worried and puzzled. The Intern looked up at her, adjusted his Mask of Professionalism, and then left, rolling his Laptop on Wheels, Progress Notes in hand, ready to see his next patient and scribble down The Plan before Attending Rounds started.
A Senior Resident told him that every Resident has a Patient that dies like this and that it is part of the Doctor Experience. “You’ll be fine.” But The Intern knew that the questions, regrets, self-doubts, memories, and sorrows would all come back later. In a wave that would sweep him into a darker place. As if reflexively, he suddenly thought of the people he felt lucky to have in his own life, and reached into his pocket for his phone.
Saturday, February 4, 2012
Public Health and Health Care Reform
As the fate of the Affordable Care Act (ACA) hangs in the balance, the Obama administration has had to make some compromises in order to keep the bill afloat. One of these is cutting $3.5 billion from the $15 billion Prevention and Public Health Fund. The fund exists given increasing recognition that much of the chronic disease we see now is likely secondary to what are ultimately public health issues (changing diets, limited opportunities to exercise, pollution, etc).
The usual justification for public health is that, in the long run, it prevents disease and thereby lowers health care costs. I think this is fundamentally correct. However, it's a tired argument, especially in the context of four-year political cycles (why wait on cost savings that will take a long time to materialize if they don't confer any immediate electoral benefit?) and a natural fixation on observable events (it's hard to appreciate things that don't happen, but seeing someone catheterize an occluding blood vessel to the heart is real and amazing; doing something about that raises political visibility).
Enter Nicholas Stine and Dave Chokshi, both physicians at the Brigham and Women's Hospital. They have a nice perspective piece in this week's New England Journal of Medicine that makes a fresh case for public health expenditures. In particular, they cleverly frame the value of public health in terms of immediate cost savings and current cost-control objectives in health care. Some of the more interesting points:
-We are now interested in paying for good quality health services rather than health services in general. There is also a push to think more about population management, which means we need to be able to measure things we've never measured before, on a larger scale. Public health organizations have the know-how and capacity to do population surveillance, which can be helpful in this regard. Why not outsource the data gathering and analysis for quality, capacity, and population-based outcome measures to the pros? In this way, spending on public health will have spillovers to medical care.
-More generally, we can use public health departments to help organize IT. It is now very difficult for smaller practices or hospitals to afford good IT. Outsourcing this to a larger organization that could manage IT for many different providers could be really useful in cutting costs for individual firms in both the short and long run.
-In the same way we are paying for good health care, why not introduce financial incentives for good public health, so as to glean gains from it in the shorter run?
This is a great piece because it takes the public health versus medicine issue and illustrates how the two are complementary. The new angle is that the complementarities can lead to cost-savings sooner than we'd expect, and can help augment and empower current initiatives on the medical care side to improve quality and cut costs. Even their title is beautiful: I'm sure that "Opportunity in Austerity" is what every politician wants to hear now!
Sunday, January 29, 2012
Random Links: Read "Short White Coat," the Value of a Good Kindergarten Experience, and Why Drop Out?
My co-intern in the primary care program, Ishani Ganguli, is an incredibly thoughtful individual who writes really well. She has a blog (a real blog, one with a consistently large readership that is fronted by the Boston Globe) that discusses various aspects of health care and medical education through the eyes of a resident. Worth checking out, especially her latest post on teamwork in health care.
-There's a great book, now a classic, by Robert Fulghum called "All I Really Need to Know I Learned in Kindergarten." Now some new research from some big shot economists on the value of a good kindergarten experience:
In Project STAR, 11,571 students in Tennessee and their teachers were randomly assigned to classrooms within their schools from kindergarten to third grade. This paper evaluates the long-term impacts of STAR by linking the experimental data to administrative records. We first demonstrate that kindergarten test scores are highly correlated with outcomes such as earnings at age 27, college attendance, home ownership, and retirement savings. We then document four sets of experimental impacts. First, students in small classes are significantly more likely to attend college and exhibit improvements on other outcomes. Class size does not have a significant effect on earnings at age 27, but this effect is imprecisely estimated. Second, students who had a more experienced teacher in kindergarten have higher earnings. Third, an analysis of variance reveals significant classroom effects on earnings. Students who were randomly assigned to higher quality classrooms in grades K-3 – as measured by classmates' end-of-class test scores – have higher earnings, college attendance rates, and other outcomes. Finally, the effects of class quality fade out on test scores in later grades but gains in non-cognitive measures persist.
-In related news, there's a great recent NYT Op-Ed on the cost of high school dropouts to the US economy (thanks James Hudspeth!). The authors (both well known economists) mention the following:
Studies show that the typical high school graduate will obtain higher employment and earnings — an astonishing 50 percent to 100 percent increase in lifetime income — and will be less likely to draw on public money for health care and welfare and less likely to be involved in the criminal justice system.
So a good economics question: if high school completion is so valuable, why would anyone drop out? Is it because of ability (low ability kids are forced out), time preference (the value of now exceeds the returns of income later?), or a lack of information about the returns to high school? There are other explanations. But if we assume that people make choices based on marginal returns, it seems bizarre that so many would drop out and leave that kind of money on the table, right? Thoughts?
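To make the time-preference story concrete, here is a minimal sketch (all dollar figures and horizons are assumed for illustration, not taken from the op-ed) comparing the present value of a dropout's earnings stream, which starts immediately, with a graduate's higher but delayed stream:

```python
# Hypothetical numbers throughout: a dropout earns $25k/yr starting now
# for 45 years; a graduate forgoes 2 years of work, then earns $40k/yr
# for 43 years (a 60% premium, within the op-ed's 50-100% range).

def present_value(annual_income, years, start_year, rate):
    """Value today of a constant income stream that begins `start_year`
    years from now, lasts `years` years, discounted at `rate`."""
    return sum(annual_income / (1 + rate) ** t
               for t in range(start_year, start_year + years))

for rate in (0.03, 0.10, 0.30):
    pv_dropout = present_value(25_000, 45, 0, rate)
    pv_grad = present_value(40_000, 43, 2, rate)
    print(f"rate={rate:.2f}: dropout={pv_dropout:,.0f}, graduate={pv_grad:,.0f}")
```

With these assumed numbers, graduation dominates at conventional discount rates, but the ranking flips once the personal discount rate climbs toward 30 percent: someone who values income now that heavily is, on his own terms, not leaving money on the table by dropping out.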
Wednesday, December 14, 2011
Health Expenditures in the US: Are We Not Spending Enough?
According to Elizabeth Bradley, a Professor of Health Policy and Administration at Yale, the answer is no. As she and Lauren Taylor point out in a recent New York Times editorial:
We studied 10 years’ worth of data and found that if you counted the combined investment in health care and social services, the United States no longer spent the most money — far from it. In 2005, for example, the United States devoted only 29 percent of gross domestic product to health and social services combined, while countries like Sweden, France, the Netherlands, Belgium and Denmark dedicated 33 percent to 38 percent of their G.D.P. to the combination. We came in 10th.
Bradley and Taylor put forth the argument that the things that make people healthy go beyond what we typically think of as health care. That is, access to employment, good housing, food security, and educational institutions all contribute to population health. I don't think this is a revolutionary thought.
But what is revolutionary is that the authors imply that the answer to our central question for US health care - "Do we get what we pay for?" - might not be the "no" we've always assumed, but a "yes." We just aren't spending enough, at least not on the proximal things that really matter. I don't think it is that simple - it's hard to know what portion of social service spending actually improves health. But the discourse does need to move in this direction.
Furthermore, another neat aspect of this piece is that Bradley and Taylor's contention doesn't just apply to the macro-level health policy sphere. Imagine a primary care system that takes into account the socioeconomic realities of patients and creates interventions that use these insights to better provide care. A developing country example: subsidizing the transportation fees for HIV/AIDS patients who would otherwise find this a barrier and be unable to seek much needed health care. This sort of intervention may be just as important as any medications or lab tests in advancing the health of these patients. I'll talk more about this sort of "economic hotspotting" in a later post.
Tuesday, December 6, 2011
Great Harvard Med Class Show Parody
As an intern, I get to work side by side with Harvard medical students. I have to say that they have all been very, very good in terms of their clinical knowledge and ability to efficiently get things done. No wonder I got rejected when I applied.
It turns out that Harvard med students are pretty funny, too. Check out this great parody of medical students' experiences while on their third year clinical rotations by members of the Class of 2014. I'm sure you'll recognize the Saturday Night Live short this is based on. (HT: the awesome and hilarious Camila Fabersunne).
Sunday, December 4, 2011
Male Circumcision, HIV/AIDS and the "Real World"
This past week, PLoS Medicine put forth a multi-piece exposé (start with this lead/summary article) on medical male circumcision, its cost-effectiveness in combating HIV/AIDS, and the methods of and challenges to scaling up this practice in Sub-Saharan Africa, where the epidemic is at its worst. The upshot of this series of papers was covered in a recent Scientific American piece (which quotes yours truly). To summarize, the argument is that medical male circumcision works (as demonstrated in three large randomized clinical trials, all conducted in Africa) and is cost-effective. Indeed, it may even be cost-saving, with high upfront costs that are easily recovered over a 10-year period. Challenges to scale-up include finding health care workers to carry out circumcisions (in a way that doesn't crowd out the provision of other important health care services); getting people to adopt the practice in a respectful, non-coercive yet effective way, especially in areas where there are strong traditional norms around circumcision; and dealing with any risk-compensating behavior (if circumcised individuals think circumcision is protective, they may be more likely to engage in riskier sexual behaviors than they otherwise would - more on this in a later post).
Circumcision is one of those topics that seems to always bring with it a vociferous debate. Those opposed to the practice make their stance known quite vehemently. In my opinion, much of what is being spouted against medical male circumcision as a tool for HIV prevention is based on an incomplete understanding of the available evidence and already strong negative priors against the practice that are almost impossible to shift (for example, see this clip or refer to any of the comments to the aforementioned Scientific American article).
However, I think there is one oft-cited argument against medical male circumcision that is worth discussing further. In particular, opponents point to evidence from a 2009 UNAIDS study that uses recent survey data from 18 African countries and concludes that "there appears no clear pattern of association between male circumcision and HIV prevalence—in 8 of 18 countries with data, HIV prevalence is lower among circumcised men, while in the remaining 10 countries it is higher." This is in contrast to the large randomized clinical trials mentioned above, which show that circumcision reduces HIV rates by more than 50%. The fact that the clinical trial results are not borne out in the survey data, opponents argue, means that circumcision does not work in "real world settings."
In a recent study, Brendan Maughan-Brown, Nicoli Nattrass, Jeremy Seekings, Alan Whiteside and I offer a different explanation for this divergent set of findings. It has to do with the fact that the UNAIDS study looks at populations that were circumcised in a multitude of settings (clinics, traditional healers), whereas the clinical trials focused on medical circumcision only. In practice, there is a great deal of heterogeneity in traditionally circumcising populations: some people do not have all of their foreskin removed, and others are circumcised several years after their peers. In our study population of blacks living in the Cape Town metro area, when we don't account for this heterogeneity, we find only a weak negative effect of circumcision on HIV positivity. However, once we "unpack" circumcision, we find that the practice actually has a strong negative association with the probability of testing HIV positive, provided it is done early and there is complete removal of the foreskin.
These results suggest that the UNAIDS findings may simply reflect measurement error. In a traditional setting, a circumcision is not a circumcision is not a circumcision. Treating every circumcised person the same introduces measurement error, and it is well known statistically that this attenuates estimates of the practice's impact toward zero. So the divergence between the UNAIDS findings and the randomized clinical trial findings does not mean that circumcision doesn't work in the real world. Rather, it means that we need to better understand the heterogeneity in male circumcision and what can be done to ensure better outcomes for everyone involved.
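The attenuation argument is easy to see in a toy simulation. The numbers below are hypothetical (a 20% baseline HIV risk and a 10-percentage-point protective effect are assumptions for illustration, not our estimates): if only half of the men coded as "circumcised" actually received the full, early procedure, lumping everyone together roughly halves the measured effect.

```python
# Hypothetical numbers, for illustration only.
import random

random.seed(0)
n = 100_000
base_risk = 0.20     # assumed baseline probability of testing HIV positive
true_effect = -0.10  # assumed effect of full, early circumcision

def mean(xs):
    return sum(xs) / len(xs)

# Half of "circumcised" men had the full, early procedure; the other half
# (partial removal, late timing) are assumed to get no protection.
effective     = [random.random() < base_risk + true_effect for _ in range(n // 2)]
ineffective   = [random.random() < base_risk for _ in range(n // 2)]
uncircumcised = [random.random() < base_risk for _ in range(n)]

naive = mean(effective + ineffective) - mean(uncircumcised)  # pooled comparison
unpacked = mean(effective) - mean(uncircumcised)             # full/early only

print(round(naive, 3))     # roughly -0.05: attenuated toward zero
print(round(unpacked, 3))  # roughly -0.10: the assumed true effect
```

The pooled ("naive") estimate recovers only about half of the assumed protective effect, which is exactly the pattern one would expect if the UNAIDS survey data mix heterogeneous circumcision experiences.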
Sunday, November 27, 2011
Do Financial Incentives Induce Physicians to Provide More (Unnecessary) Care?
About two years ago, I posted something on my now non-existent Facebook account about how medical tests and treatments, especially those that are elective, are more likely to be offered if doctors are reimbursed well for them. My point was that there was a strong financial incentive to test and treat, even in cases where doing so would confer only little benefit to the patient's health, at best. A bunch of people (mainly physicians) responded on my wall pointing out how misguided I was. It was actually a bit more vociferous than this, but I digress.
Anyway, it turns out that I was right (in this case, NOT shocking). I just came across a great study by Joshua Gottlieb, an economics job market candidate from Harvard. His study uses a natural experiment in physician incentives to examine whether payment drives the care offered. Specifically, he takes advantage of a large-scale policy change by Medicare in 1997. Previously, Medicare set different fee schedules for each of around 300 small geographic areas, since production costs and other realities of providing a given service obviously varied across space. In 1997, Medicare consolidated these regions into 80 larger areas. In some smaller areas, payouts for certain services had been large and fell after 1997 because the average payout for their new, larger group was lower; in others, it went the other way. In any case, comparing pre- and post-1997 provision gives you a nice experiment showing what happens to health services provision when payouts change for reasons other than local health outcomes or demand for care.
Whether you hold my priors or shared those of my misguided Facebook friends, the results remain astounding. Across all health services, Gottlieb finds that "on average, a 2 percent increase in payment rates leads to a 5 percent increase in care provision per patient." Predictably, the price response is large for services with an elective component (such as cataract surgery, colonoscopy and cardiac procedures - don't huff and puff, I said elective COMPONENT!) but not so much for things like dialysis or cancer care, where it is easy to identify who needs treatment and it must be provided no matter what. Furthermore, in addition to disproportionately adjusting the provision of relatively intensive and elective treatments as reimbursements rise, physicians also invest in new technology to do so; this is beautifully illustrated by his examination of reimbursement rates and MRI purchases.
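The identification idea behind that headline elasticity can be sketched with fabricated data (none of the numbers below are from the paper): fee consolidation shifts each area's payment rate for reasons unrelated to local demand, so regressing the change in log care provision on the change in log payment recovers the supply response.

```python
# Stylized sketch with made-up data; the elasticity of 2.5 is an assumption
# chosen to mirror the "2 percent -> 5 percent" result in the quote.
import random

random.seed(1)
areas = 300
elasticity = 2.5

# Exogenous fee shocks from consolidating small areas into larger groups.
dlog_price = [random.gauss(0.0, 0.05) for _ in range(areas)]
# Physicians' response in care provided, plus idiosyncratic noise.
dlog_qty = [elasticity * p + random.gauss(0.0, 0.02) for p in dlog_price]

# OLS slope of the change in log quantity on the change in log price.
mp = sum(dlog_price) / areas
mq = sum(dlog_qty) / areas
slope = (sum((p - mp) * (q - mq) for p, q in zip(dlog_price, dlog_qty))
         / sum((p - mp) ** 2 for p in dlog_price))

print(round(slope, 2))  # close to the assumed elasticity of 2.5
```

The point of the design is that because the price changes come from an administrative boundary redrawing rather than from local demand, the estimated slope can be read as physicians' supply response rather than as demand-driven confounding.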
So what's the upshot of all this? Is this a good thing? Probably not. Despite this scaling up of technology, Gottlieb is unable to find any impact on health outcomes or mortality among cardiac patients (for whom he explored the relationship between payouts and treatment more deeply). Furthermore, he asserts that "changes in physician profit margins can explain up to one third of the growth in health spending over recent decades."
Ultimately, there are some good lessons here. First, if we are interested in bringing down costs and increasing health care efficiency, we need to pay for things that actually help maintain and improve health. Second, we can't rely on physicians to be the gatekeepers of rising costs, as it is clear that, given incentives, they may not always behave in a way that actually improves health outcomes (thankfully, for cases like fractures, cancer or end-stage renal disease treatment, docs aren't sensitive to prices and do the right thing clinically). Finally, we need to stop universally and blindly lauding the US health care system as a bastion of health care technology if that technology does little to improve outcomes.
Saturday, November 5, 2011
Infections and IQ
A well known fact about our world is that there are great disparities in average IQ scores across countries. In the past, some have tried to argue that this pattern can be explained by innate differences in cognition across populations - some people are just innately smarter than others. Others have tried to attribute these differences to cultural factors. However, genetics and culture are likely not driving these differences in any meaningful sense. After all, another stylized fact is that average IQ scores have been going up markedly, within one or two generations, within any given country. These changes, also known as the Flynn Effect after the researcher who painstakingly documented them, speak against the genes story because they occurred far more quickly than one would expect from population-wide changes in the distribution of cognition-determining genes. They have also occurred too quickly to be explained by paradigm-shifting social changes.
So what gives? Enter Chris Eppig, a researcher at the University of New Mexico. In a recent piece in Scientific American, he proposes that cross-country differences in IQ, as well as changes in IQ within a country over time, can be explained by exposure to infectious diseases early in life. The story goes something like this: infections early in life require energy to fight off. Energy at this age is primarily used for brain development (in infancy, it is thought that over 80% of calories are allocated to neurologic development). So if energy is diverted to fend off infections, it can't be used to develop cognitive endowments, and afflicted infants and children end up becoming adults who do poorly on IQ tests.
In the piece, Eppig cites some of his work linking infectious disease death rates in countries to average IQ scores. His models control for country income and a few other important macroeconomic variables. His evidence, while not proof of a causal relationship, is certainly provocative. So provocative, in fact, that I ended up trying to build a stronger causal story between early childhood infections and later life cognitive outcomes. In a recent paper (cited in the above Scientific American article), I examine the impact of early life exposure to malaria on later life performance on a visual IQ test. I use a large-scale malaria eradication program in Mexico (1957) as a quasi-experiment to establish causality. Basically, I find that individuals born in states with high rates of malaria prior to eradication - the areas that gained most from eradication - experienced large gains in IQ test scores after eradication relative to those born in states with low pre-intervention malaria rates, areas that did not benefit as much from eradication (see this Marginal Revolution piece for a slightly differently worded explanation).
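The comparison described above is, at its core, a difference-in-differences. A toy version with fabricated score averages (these are not the paper's numbers) shows how the design nets out nationwide trends:

```python
# Fabricated cohort-average scores, for illustration of the design only.
mean_iq = {
    # (state type, birth cohort): average test score
    ("high_malaria", "pre"):  88.0,   # born before eradication, high-malaria state
    ("high_malaria", "post"): 96.0,   # born after eradication, high-malaria state
    ("low_malaria",  "pre"):  97.0,   # born before eradication, low-malaria state
    ("low_malaria",  "post"): 99.0,   # born after eradication, low-malaria state
}

# Gain in high-malaria states minus gain in low-malaria states: the
# low-malaria change absorbs any nationwide trend across cohorts.
did = ((mean_iq[("high_malaria", "post")] - mean_iq[("high_malaria", "pre")])
       - (mean_iq[("low_malaria", "post")] - mean_iq[("low_malaria", "pre")]))

print(did)  # 6.0 points attributable to eradication, under the design's assumptions
```

The key identifying assumption, as in any difference-in-differences, is that scores in high- and low-malaria states would have trended in parallel absent eradication.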
My paper also looks at the mechanisms linking infections and cognition. One possibility is the biological model described above - infections divert nutritional energy away from brain development. However, I also find evidence of a second possibility: parents respond to initial differences in cognition or health due to early life infections and invest in their children accordingly. In the Mexican data, children who were less afflicted by malaria thanks to the eradication program started school earlier than those who were more afflicted. Because a child's time is the domain of parental choice, this suggests that parents reinforce differences in the way their children are (perhaps they feel that smarter children will be smarter adults, and so investments in their schooling will yield a higher rate of return) and that this can modulate the relationship between early life experiences and adulthood outcomes.