A typical phone call with my mom involves learning about some of our family friends back home. Invariably, a given set of family friends will have someone my age whom I either went to high school with or happened to socialize with at community events. I enjoy these updates because it's great to see everyone carving out interesting lives for themselves.
In any case, I recently heard from her about a friend of mine who is now doing an internal medicine residency in Cleveland, OH. This chap was a year ahead of me in high school and someone I really looked up to as I tried to temper my pre-college pre-med craziness and channel it toward good. Like everyone else, I ended up googling him afterward and came across a really interesting New York Times article about one of his medical school experiences.
As a physician in training, I am working on developing two skill sets. The first is building enough experiential knowledge to recognize the important aspects of a clinical situation quickly and thereby reach a diagnostic and treatment pathway earlier in the clinical encounter. The second is to "think outside the box" and develop a "broad differential diagnosis": one thinks of all of the possible reasons why someone might be coming into the hospital in the first place and then uses the available data to parse out what is more likely to be happening and what is not. Experience is important here, too. As you see more patients, you'll see some weird things, and those get added to your possibility space as you move forward.
However, I've always wondered whether there are situations where these two skills aren't complementary. The first skill brings forth a form of pattern recognition: "this looks like acute heart failure because I've seen hundreds of these and that's what it has to be." This is what experienced clinicians bring to the table. On the other hand, "thinking outside the box" involves convincing yourself that there is a non-zero probability that a rare diagnosis may be driving the picture. This is what medical students are good at, in part because they aren't really taught much about population prevalence and probabilities, which evens the playing field between the common and the rare. But I wonder whether, at least some of the time, part of this is also because medical students are not "encumbered" by experiential knowledge. They can "go outside the box" because they don't have tunnel vision; they come into a clinical situation with fresh eyes, in some sense like an outside consultant.
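To make the prevalence point concrete, here is a minimal sketch in Python of how base rates shape the probability of a diagnosis given some finding. The prevalences, sensitivities, and false positive rates below are made up purely for illustration.

```python
# Toy illustration of why base rates matter in the differential diagnosis.
# All numbers are hypothetical, chosen only to show the mechanics of Bayes' rule.

def posterior(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive finding) via Bayes' rule."""
    p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_positive

common = posterior(prevalence=0.05, sensitivity=0.8, false_positive_rate=0.1)
rare = posterior(prevalence=0.0005, sensitivity=0.9, false_positive_rate=0.1)

print(f"P(common disease | finding) ~ {common:.2f}")  # roughly 0.30
print(f"P(rare disease | finding)   ~ {rare:.4f}")    # roughly 0.0045
```

Ignore the prevalence term and the two diagnoses look comparably plausible; weight it fully and the rare one nearly vanishes. In that sense, the experienced clinician's pattern recognition is a learned prior, and the student's "fresh eyes" amount to a flatter one.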
The NYT piece about my family friend discusses a situation where more experienced clinicians zeroed in on one particular diagnosis but were then at a loss when they couldn't find the evidence to support it. My friend, a fresh, wide-eyed third-year medical student, was able to think outside the box about a rare illness that could unify all of the laboratory results and physical exam findings. He turned out to be right. The piece goes on to discuss the value of a fresh, unadulterated pair of eyes.
I'm barely two months into my residency, but I've been wondering how to maximize the returns to my experience while keeping at least one eye outside the box. For almost all patients, I think the returns to experience are positive; that is, having more clinical experiences under your belt is usually better. For the other, rarer cases, maybe my best bet is to rely on the fresh supply of medical students and doctors-in-training. Or maybe it actually does make sense for a hospital to have some weirdo like House, MD to outsource bizarre, intractable cases to. We'll each have our comparative advantage and can capitalize on "returns to specialization." Thoughts?
Welcome! This is a blog that generally covers issues related to health and development economics. Feel free to visit and comment as often as you'd like.
Saturday, August 20, 2011
Friday, August 19, 2011
Absolutely Lovely Paper About Esther Duflo
Esther Duflo, hands down, is one of my favorite economists. Her work spans a rich swath of development economics, touching subjects as diverse (and, ultimately, as related!) as microfinance, education, health care, corruption, women's agency, and, central to a good deal of her work, randomized field experiments. She is also a co-founder of the Jameel Poverty Action Lab and co-wrote the incredible Poor Economics.
Recently, she won the American Economic Association's John Bates Clark Medal, awarded to the top American economist under the age of forty. Chris Udry, one of my favorite professors at Yale, ruminates on Duflo's work in a beautiful essay in the Journal of Economic Perspectives. Reading it, one is amazed at the impact one person can have on an entire field - disruptive technological change at its finest! My favorite part of the piece is how Duflo has remained an incredible academic while also serving as a public intellectual and activist. Excellent stuff.
Wednesday, August 10, 2011
Great Paper on the Impact of Cancer Screening
A good number of health care professionals, though perhaps not enough, are obsessed with screening to catch various diseases early - "when we can do more about it." Cancer screening, in particular, is an area of great interest, one that the public has really latched on to over the last 20 years or so. As the row over breast cancer screening illustrates, there is a huge debate over when to screen: should we start yearly mammograms at, say, age 40, or wait until age 50?
It turns out that the evidence to motivate one set of screening guidelines over another isn't all that great. That is, while we know that certain kinds of screening (breast and colon, for example) work, randomized clinical trials of screening tests have too few people in each age group to definitively assess the best age cutoff for these modalities.
Enter this neat paper by Srikanth Kadiyala and Erin Strumpf. They use our existing screening guidelines as a "natural experiment" to study the effectiveness of screening at the population level. Specifically, while a 39-year-old and a 41-year-old may be similar in terms of their cancer risk, our national guidelines lead to one person being screened and the other not. Is the 41-year-old better off as a result?
U.S. cancer screening guidelines recommend that cancer screening begin for breast cancer at age 40 and for colorectal cancer and prostate cancers at age 50. What are the marginal returns to physician and individual compliance with these cancer screening guidelines? We estimate the marginal benefits by comparing cancer test and cancer detection rates on either side of recommended initiation ages (age 40 for breast cancer, age 50 for colorectal and prostate cancers). Using a regression discontinuity design and self-reported test data from national health surveys, we find test rates for breast, colorectal, and prostate cancer increase at the guideline age thresholds by 78%, 65% and 4%, respectively. Data from cancer registries in twelve U.S. states indicate that cancer detection rates increase at the same thresholds by 25%, 27% and 17%, respectively. We estimate statistically significant effects of screening on breast cancer detection (1.3 cases/1000 screened) at age 40 and colorectal cancer detection (1.8 cases/1000 individuals screened) at age 50. We do not find a statistically significant effect of prostate cancer screening on prostate cancer detection. Fifty and 65 percent of the increases in breast and colorectal case detection, respectively, occur among middle-stage cancers (localized and regional) with the remainder among early-stage (in-situ). Our analysis suggests that the cost of detecting an asymptomatic case of breast cancer at age 40 is approximately $100,000-125,000 and that the cost of detecting an asymptomatic case of colorectal cancer at age 50 is approximately $306,000-313,000. We also find suggestive evidence of mortality benefits due to the increase in U.S. breast cancer screening at age 40 and colorectal cancer screening at age 50.
This is a neat, well-crafted study. The methodology the authors use, called regression discontinuity, exploits sharp cutoffs in decision rules, policies, and the like: right around the cutoff, small changes in the variable that determines treatment (here, age) are otherwise inconsequential, so crossing the threshold works almost like random assignment. It is a useful way to get quasi-experimental evidence where there is no other way to get it. Indeed, regression discontinuity is now considered second only to the gold-standard randomized clinical trial in the pantheon of statistical approaches.
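For the curious, here is a minimal sketch of the idea in Python on simulated data. None of this is the authors' code or data; the jump of 1.3 detected cases per 1,000 at age 40 is borrowed from the abstract only to give the toy example a realistic scale.

```python
# Sketch of a sharp regression discontinuity at a screening-guideline age cutoff.
# All data are simulated; effect sizes and trends are assumptions for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, cutoff = 1_000_000, 40
age = rng.uniform(35, 45, n)
above = (age >= cutoff).astype(float)

# Detection probability rises smoothly with age, plus a discrete jump at the cutoff
# (because that is where guideline-driven screening kicks in).
p = (2.0 + 0.15 * (age - cutoff) + 1.3 * above) / 1000
detected = rng.binomial(1, p)

# Local linear regression with separate slopes on each side, within a 2-year bandwidth.
centered = age - cutoff
X = sm.add_constant(np.column_stack([above, centered, centered * above]))
keep = np.abs(centered) <= 2
fit = sm.OLS(detected[keep] * 1000.0, X[keep]).fit()  # outcome scaled to cases per 1,000
print(f"Estimated discontinuity at age {cutoff}: {fit.params[1]:.2f} cases per 1,000")
```

The identifying assumption is that people just under and just over 40 are otherwise comparable, so any jump in detection right at the threshold can be attributed to the guideline-induced jump in screening rather than to aging itself.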
Of course, the one problem with this study is that, while we know our existing cutoffs are useful, we don't know whether some other cutoff would be better (one of the motivating questions of the paper). That is the weakness of the research design: unless there is a policy change to a different age cutoff, regression discontinuity only allows us to evaluate our current guidelines. Either we'll have to turn to evidence from other countries or from different eras - both of which have problems insofar as the epidemiology and treatment of cancer likely vary across time and space - or turn to larger randomized controlled trials where we can be sure to have large numbers of individuals around the cutoff ages we want to test. Alternatively, we could perhaps exploit the current confusion over breast cancer screening guidelines, taking advantage of the fact that some providers may choose age 40 and others age 50 to commence yearly mammograms, to assess whether one or the other is better.
Sunday, August 7, 2011
Work Hour Limitations for Residents - What Are They Good For?
I have the (dubious?) distinction of being part of the first cohort of interns who can only work 16 hours at a stretch in the context of an 80-hour work week. These new restrictions were put in place by the Accreditation Council for Graduate Medical Education (ACGME) for the purported reasons of reducing medical errors and improving education for first-year residents.
These changes come on the heels of a previous set of work hour restrictions put into place in 2003. Prior to that point, residents were pulling (even) longer work weeks. Some high-profile cases and research into medical errors pointed to the sleepy resident who'd been up for 30 hours or so as the culpable party. Hence, residency programs were mandated to have residents work no more than 80 hours per week, averaged over a period of about four weeks.
Interestingly, the data on whether these work hour restrictions produced better outcomes for patients is mixed. Some studies show positive effects on mortality and complications and some show no effect (here's a systematic review from this year covering a whole bunch of studies in the US and UK).
All of this raises two questions: Why did the original work hour restrictions not produce unambiguously positive effects on patient outcomes? And why the further restriction in 2011, limiting shift length to 16 hours?
On the first point, one theory is that compliance with the 80-hour work week was low in residency programs: there was no effect because work hours didn't really decrease. Another explanation is that work hour reductions led to increases in patient "hand-offs." Resident A can't work anymore, so resident B has to take over. If resident A does a poor job telling resident B about a complicated patient, mistakes and bad outcomes become more likely once resident B takes over the caregiving role.
Growing concerns over poor handoffs - and there is now an evidence base (see here for an example) suggesting that handoffs are indeed done poorly - have prompted many to criticize the ACGME's latest work hour restrictions. If the original work hour restrictions didn't affect patient outcomes, why make any further changes, especially if the number of handoffs increases?
An interesting piece in the NYT a few days ago really gets at the heart of the matter, I think. The author, Darshak Sanghavi, begins deconstructing our obsession with work hour restrictions by examining the case of Libby Zion. In 1984, Zion, a college student, died in a New York City hospital while being taken care of by two residents, one of whom was an intern running on little sleep. Essentially, a medication error led to a drug interaction that cost Zion her life. The incident became the test case for resident work hour reform, with New York State instituting such reforms first and the ACGME following suit nationally.
Sanghavi examines this case in detail and notes that there were a multitude of different factors - fragmented outpatient care, poor electronic medical records and decision support, among others - that also could have credibly contributed to Zion's tragic demise. This deconstruction makes a powerful point - sleepy residents may indeed make more errors, but we should be wary of tunnel vision. Other factors deserve consideration. It's a great article and I encourage you to read it (hat tip to OKS for sending it to me).
That said, the latest work hour restrictions may yet have some positive effects on patient outcomes. It could be that the real margin of improvement is reducing a given work stretch from 30 to 16 hours, not reducing total weekly hours from 100+ to 80. Shorter shifts may also reduce burnout and actually allow interns to go home and read something medical. Certainly, there will be a good deal of empirical work on these latest restrictions, and I'll be interested to see which way the axe falls.
(PS: Not sure how much time I'll have to read. One disadvantage of the new restrictions is that it looks like I'll be in the hospital more often during inpatient rotations. In the end, they get you somehow!)
Saturday, August 6, 2011
Can Research on Measurement Provide Insights into the Poverty Experience?
Great paper, forthcoming in the Journal of Development Economics, on how the length of the recall period in surveys leads to different measurements of health, wellness, and health care seeking behavior. Also interesting is how the recall period effect differs by income status. The authors use their findings to suggest that, disturbingly, the experience of illness has become the norm among the poor vis-a-vis the rich:
Between 2000 and 2002, we followed 1621 individuals in Delhi, India using a combination of weekly and monthly-recall health questionnaires. In 2008, we augmented these data with another 8 weeks of surveys during which households were experimentally allocated to surveys with different recall periods in the second half of the survey. We show that the length of the recall period had a large impact on reported morbidity, doctor visits, time spent sick, whether at least one day of work/school was lost due to sickness, and the reported use of self-medication. The effects are more pronounced among the poor than the rich. In one example, differential recall effects across income groups reverse the sign of the gradient between doctor visits and per-capita expenditures such that the poor use health care providers more than the rich in the weekly recall surveys but less in monthly recall surveys. We hypothesize that illnesses--especially among the poor--are no longer perceived as "extraordinary events" but have become part of "normal" life. We discuss the implications of these results for health survey methodology, and the economic interpretation of sickness in poor populations.
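To see how a longer recall window can mechanically flatten, or even reverse, a gradient, here is a toy simulation in Python. The visit rates and recall probabilities are invented for illustration and are not taken from the paper.

```python
# Toy simulation of recall-period effects on reported doctor visits.
# Assumed inputs: the poor have more true visits, but "ordinary" episodes
# fade from memory faster over a one-month recall window.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def reported_visits(true_weekly_rate, weekly_recall, monthly_recall):
    """Simulate a month of visits, then apply different recall probabilities."""
    visits = rng.poisson(true_weekly_rate * 4, n)      # true visits over four weeks
    weekly = rng.binomial(visits, weekly_recall)       # captured by four weekly-recall surveys
    monthly = rng.binomial(visits, monthly_recall)     # captured by one monthly-recall survey
    return weekly.mean(), monthly.mean()

poor_weekly, poor_monthly = reported_visits(0.30, weekly_recall=0.95, monthly_recall=0.50)
rich_weekly, rich_monthly = reported_visits(0.20, weekly_recall=0.95, monthly_recall=0.85)

print(f"Weekly recall : poor={poor_weekly:.2f}, rich={rich_weekly:.2f}")    # poor > rich
print(f"Monthly recall: poor={poor_monthly:.2f}, rich={rich_monthly:.2f}")  # poor < rich
```

Under these assumptions the poor report more doctor visits than the rich when asked weekly, and fewer when asked about the whole month: the same kind of sign reversal in the income gradient that the authors document.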
Wednesday, August 3, 2011
Inaccurate Public Health Messages from Politicians = Very Bad
Those of you who follow public health or know South Africa have certainly heard about the "AIDS-denialist" bent of former President Mbeki and his health minister, Manto Tshabalala-Msimang. If you haven't: basically, the two of them (mainly the latter, with support from the former) put forth the view that HIV does not cause AIDS and that anti-retrovirals, on balance, do more harm than good (see this earlier post). Clearly, this flies in the face of science and common sense. But what are the effects of these espousals on risky behaviors? Do people actually listen to this stuff? Did these beliefs lead to changes in behavior and, ominously, to more HIV infections in the general public?
A recent paper by Eduard Grebe and Nicoli Nattrass at the University of Cape Town strongly suggests that denialist claims played a role in reducing condom use among a sample of young adults in South Africa. Here's the abstract:
This paper uses multivariate logistic regressions to explore: (1) potential socio-economic, cultural, psychological and political determinants of AIDS conspiracy beliefs among young adults in Cape Town; and (2) whether these beliefs matter for unsafe sex. Membership of a religious organisation reduced the odds of believing AIDS origin conspiracy theories by more than a third, whereas serious psychological distress more than doubled it and belief in witchcraft tripled the odds among Africans. Political factors mattered, but in ways that differed by gender. Tertiary education and relatively high household income reduced the odds of believing AIDS conspiracies for African women (but not men) and trust in President Mbeki's health minister (relative to her successor) increased the odds sevenfold for African men (but not women). Never having heard of the Treatment Action Campaign (TAC), the pro-science activist group that opposed Mbeki on AIDS, tripled the odds of believing AIDS conspiracies for African women (but not men). Controlling for demographic, attitudinal and relationship variables, the odds of using a condom were halved amongst female African AIDS conspiracy believers, whereas for African men, never having heard of TAC and holding AIDS denialist beliefs were the key determinants of unsafe sex.
The study makes a few good points:
1) Bad information can lead to bad public health outcomes. (The ridiculous measles vaccine-autism scare did something very similar; more on that later.)
2) These negative effects can depend on the level of education. (Here the effect is decreasing in education. For the measles vaccine-autism scare, more educated people were more likely to decline the vaccine for their kids. Again, more on that later.)
3) Social organizations, NGOs and activists can play a major role in reducing the effects of noisy or bad information.
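For readers less used to the odds-ratio language in the abstract, here is a minimal sketch in Python of the kind of multivariate logistic regression the authors describe. The variable names and simulated data are hypothetical stand-ins, not the study's; the assumed effect (conspiracy belief roughly halving the odds of condom use) simply mimics the direction of the reported finding.

```python
# Sketch of estimating odds ratios for condom use with a logistic regression.
# All data are simulated; variable names and effect sizes are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2_000
conspiracy_belief = rng.binomial(1, 0.3, n)   # 1 = endorses AIDS conspiracy beliefs
heard_of_tac = rng.binomial(1, 0.7, n)        # 1 = has heard of the Treatment Action Campaign

# Simulate condom use so that conspiracy belief lowers the log-odds by ~0.7
# (an odds ratio of about 0.5, i.e., halved odds).
logit_p = 0.4 - 0.7 * conspiracy_belief + 0.3 * heard_of_tac
p = 1 / (1 + np.exp(-logit_p))
used_condom = rng.binomial(1, p)

df = pd.DataFrame({"used_condom": used_condom,
                   "conspiracy_belief": conspiracy_belief,
                   "heard_of_tac": heard_of_tac})

fit = smf.logit("used_condom ~ conspiracy_belief + heard_of_tac", data=df).fit(disp=False)
print(np.exp(fit.params))  # exponentiated coefficients are odds ratios; ~0.5 for conspiracy_belief
```

In the real study, of course, the regressions also control for demographic, attitudinal, and relationship variables, which is what makes the reported odds ratios more credible than a raw cross-tabulation.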