Sunday, September 30, 2007

The Balance Sheet on Managed Care? (And Other Interesting Links)

As US health expenditures skyrocketed in the late 70s and 80s, managed care started to proliferate as a potential cost-saving measure. The basic idea was (and is) simple: combine providers and insurers into a single organization and incentivize providers to use the fewest resources necessary for a given patient. Along with that, throw in an emphasis on preventive care so as to cut costs further down the line.

This system was revolutionary compared to what was going on before, where doctors were mostly paid by third party insurers on a fee-for-service basis. Many argue that this system encourages doctors and patients to use more resources than necessary. Today, payments to physicians take on a variety of different flavors, and fee-for-service and managed care incentives both coexist.

However, there may be a dark side to managed care as well. The 'managed care backlash' was basically driven by uneasiness over whether necessary care would be disincentivized by typical managed care instruments such as utilization review, capitation payments, pay for performance, etc. Thus, managed care could be good or bad for patients, and the whole thing comes down to an empirical question.

So where does the axe ultimately fall? Anna Aizer, Janet Currie and Enrico Moretti's recent paper in the Review of Economics and Statistics provides some interesting insight into this issue:

Poor and uneducated patients may not know what health care is desirable and, if fully insured, have little incentive to minimize the costs of their care. Partly in response to these concerns, most states have moved a substantial portion of their Medicaid caseloads out of traditional competitive fee-for-service (FFS) care, and into mandatory managed care (MMC) plans that severely restrict the choice of provider.

We use a unique longitudinal data base of California births in order to examine the impact of this policy on pregnant women and infants. California phased in MMC creating variation in the timing of MMC. We identify the effects of MMC using changes in the regime faced by individual mothers between births. Some counties adopted single-carrier plans, while others adopted regimes with at least two carriers. Hence, we also ask whether competition between at least two carriers improved MMC outcomes. We find that MMC reduced the quality of prenatal care and increased low birth weight, prematurity, and neonatal death. Our results suggest that the competitive FFS system provided better care than the new MMC system, and that requiring the participation of at least two plans did not improve matters.

What I like about the paper is the use of quirks in public policy (like phase-ins), great data, and sound econometrics in coming up with an answer to a policy-relevant question. I think the most convincing evidence in this paper comes from comparing siblings whose mother was exposed to different "incentive regimes." This allows the authors to control for fixed factors that induce selection bias at the level of a given mother. Good stuff.

Other Links:

1) John Stossel doesn't think the US health care system is as bad as everyone says it is. Check out this link as well as the articles listed in the sidebar on the right. I'll have more to say about this in a future post.

2) On that note, check out this recent working paper looking at differences in the US and Canadian health care systems. Here is a nice sentence from that piece:

We also find that Canada has no more abolished the tendency for health status to improve with income than have other countries. Indeed, the health-income gradient is slightly steeper in Canada than it is in the U.S.


3) I miss Audioslave. Check out their live cover of "Seven Nation Army." Even Omar Siddiqi enjoyed this.

Friday, September 28, 2007

Incentives for Environmentalism - II

The custodial staff recently outfitted each of our offices with a new set of recycle bins and trash cans. In contrast to the previous receptacle regime, our new trash bin is about 1/3 to 1/4 the size of the recycle bin. The total volume of the former is equivalent to the space occupied by 6 soda cans: it's super small.

I'm pretty sure that giving us a much smaller trash bin is part of a larger effort to get us to recycle more. I think the incentive works in two ways. First, space constraints in the smaller bin encourage the user to recycle things that he/she normally would have thrown away. This is even more powerful in a setting where there are real opportunity costs to getting up and finding a larger trash bin (after all, researchers are loath to leave their desks when in the middle of an engrossing problem!). Second, placing a much larger recycle bin next to the trash bin may send some implicit (subconscious?) signals about the relative importance of recycling over convenience. (Tsk, tsk, how can you throw that away when there is so much space left in that big recycling bin!)

I've noticed a change in my behavior along both of these lines: I now try to recycle everything I possibly can because space in the trash bin has become much more valuable. (I could get around the constraints by a) walking 20 feet to the larger trash receptacle or b) using the trash bin in the neighboring professor's office. I'm too lazy to do (a) and going through with (b) would be weird.)

Perhaps more interesting, I also feel guilty putting things in the trash bin when there is a large, unfilled recycling bin right next to it.

What are your thoughts on this? Could we scale this intervention up to a larger level? Has that already been tried?

Tuesday, September 25, 2007

Does Television Cause Autism?

One of my favorite activities in grad school is attending the weekly departmental seminar. During the fall and spring semesters we invite scholars from various universities to give talks on health services and health economics projects they happen to be working on. You can check out current and past seminar schedules here.

This week's presenter discussed some interesting research looking at whether exposure to television at an early age serves as a trigger for autism. If that sounds provocative to you, you're onto something: this paper was probably one of the most talked about studies in 2007. Go ahead and google "television" and "autism" and you'll see.

The basic idea of the paper is to use county-level measures of autism rates and early life (i.e., prior to the age of 3) television watching over a period of several years and to explore the correlation. The authors note this correlation need not imply causation: autistic children (or those with a predisposition to the condition) may be attracted to TV or TV and autism may be correlated with a third, unobserved factor. To get around this, the authors use random variation in precipitation (it rains, kids go inside and watch TV) and the differential diffusion rates of cable TV (cable makes TV more attractive since there are more channels for kids) also at the county-level as instruments to exploit "random variation" in TV watching. A good summary of the study can be found in this Slate article. I highly recommend it.

You might be skeptical and, if so, you are not alone. A lot of people don't buy the results. I personally think it's an interesting study. The authors are careful not to push the results too far and only suggest that their estimates should motivate future research in the area. Their empirical methodology is as rigorous as it can be given the data. I think most people who have actually read the paper will agree with me, unconditional on whether they ultimately buy the link or not.

Unfortunately, this paper has caused quite a firestorm among autism advocacy groups and medical professionals. Indeed, the results and suggested causal mechanism made many people very angry. I'm assuming the parents in advocacy groups are upset because the finding somehow implicates them as "bad parents." Some public comments accuse the authors of not understanding autism at all. Somebody should tell them that the first author's son is autistic.

More troublesome, though, is the way some (not all!) medical professionals have reacted to the study. This is interesting to me given that nobody knows what environmental triggers may push a predisposed kid towards autism (obviously, I am making an assumption about the underlying biological model here) and that medical researchers are seriously pursuing equally kooky theories like the mercury-in-vaccines/autism link. Given the current state of thought, doesn't it make sense to take the findings of a carefully done study seriously, at the very least to generate further research?

It all makes me wonder whether the vociferous naysayers have actually read the paper and made an honest attempt to judge it on its scientific merits.

(Aside: Perhaps in fairness to the naysayers, I should note this research is somewhat atypical for an economics paper. The television-autism hypothesis is not a huge theory in medicine and it certainly has not generated a huge literature anywhere. As implied earlier, the role of this paper is to empirically generate a hypothesis for further testing rather than actually test some existing theory. Given all that, I can see how some would see this finding as coming out of "left field." That being said, a lot of the "armchair epidemiology" studies I blogged about earlier fulfill this role as well, and the medical community dutifully notes and (usually) tests the generated hypothesis in another sample or with better data. Why can't we treat this study the same way? Especially when it is methodologically superior to the vast majority of the "armchair" nonsense.)

Sunday, September 23, 2007

The 'Dar He Blogs Guide to Art Appreciation

I recently went to the Museum of Modern Art in New York City. The trip inspired me to pass on some tips on how to get the most out of a day at the gallery. Enjoy...and see you at the museum!

1) Go with a friend. That way, when things get boring, you'll have someone to talk to.

2) Don't be afraid to laugh at art! Here, Kevin Ard seems highly amused by this abstract piece. This is probably because it looks like it required very little effort to create (but is somehow still worth seven or eight figures). In all seriousness, the audio guide to this piece said that it represented the emptiness and detachment created by technological change. That gets points from me because it sounds a lot like OK Computer. On the other hand, I really don't see it.

3) Modern art can get really abstract: sometimes an unpainted canvas or a twig in the middle of the room is really some valuable and/or famous piece! The key is not to confuse such art with things that aren't part of the exhibit. The gentleman in this picture appears to have made that very mistake.

4) Always go with your bread and butter. I've found that I really like impressionism, surrealism and futurism. I try to anchor my museum visit around these pieces so as to incentivize myself to look at stuff I don't like as much. For example, I made multiple visits to this very fun piece, which captures how a classical waltz party looks if you try to take in the scene after spinning around really fast multiple times.

Thursday, September 20, 2007

Responsible "Science"? Part II

This issue is important enough to warrant a second posting. In what couldn't be better timing (with respect to my post yesterday), Alex Tabarrok at the Marginal Revolution blogs this morning about "Why Most Published Research Findings are False."

This short post references an illuminating earlier thread. The latter is a great read with some really excellent links. The discussion cites a 2005 PLoS article (of the same name) by John Ioannidis, which goes through the nitty gritty of why so many falsehoods exist. Here is how The New Scientist distills his findings:

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.

"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.

My analysis goes through study-design-related reasons for why these studies are bad and only briefly (and indirectly) touches on things like publication bias and selective reporting. In that sense, Ioannidis' paper is a more complete discussion of the issues raised in my earlier post. Furthermore, the Ioannidis critique discusses bodies of research as a whole rather than individual studies, and his discussion has important implications for what we can and cannot learn from meta-analysis. I highly recommend this paper, both on its own merits and as a complement to my rant yesterday. Enjoy!


Wednesday, September 19, 2007

Responsible "Science"? (And Other Interesting Links)

Try out this madlib:

A (Prestigious Academic Institution's Name) University study found that (food item of your choice) is associated with a greater/lesser risk of (disease of your choice).

How often do you hear such studies cited in the popular press? For most of you, the answer will be "quite." Now, how often are the results of these studies quickly overturned/discredited? Again, quite.

The New York Times magazine recently put out an interesting article about whether or not we really know anything about the determinants of health and disease, and about the research that generates the often well-publicized variants of the above madlib (thanks to Brian and Erika for the link). I'm really glad that this article was written because it basically validates this long-standing rant of mine: the vast majority of these oft-quoted medical studies suck.

Why do they suck? Mainly because the researchers who produce them have no idea how to do empirical research and make all sorts of obvious errors. I call this entire branch of pseudo-science "armchair epidemiology" - and the whole enterprise is as dangerous as that phrase is innocuous.

First, here is why I think these studies are of such poor quality:

1) Researchers who put out such studies have no concept of uncertainty and chance: The big thing in statistics is to find results with a p-value of less than 0.05. To abuse the language a bit, this means that we are more than 95% "certain" that our result is not due to random chance.

Here's the rub: if you test 20 coefficients in a regression, chances are pretty good that at least one of them will be significant at the 5% level. If the tests were independent, the probability of at least one false positive would be 1 - 0.95^20, or about 64%. That comes straight from the intuition behind the 5% significance level.

To put it differently, you can test a bunch of different variables and find some significant result about 5% of the time per test, even when there is no "real" association between those two things and no ex ante theoretical reason to believe that those two things would truly be correlated in the real world. For example, you may find that shoes are associated with pancreatic cancer. Is there any theoretical reason to believe that these two things should be correlated? (No!)

Unfortunately, many researchers will take such findings and publish them, without seeing if it passes the "huh?!" test. And the public will listen and stop wearing shoes.
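If you want to see how easily "findings" like this appear, here's a quick simulation (the numbers are purely illustrative, not from any real study). It runs 10,000 pretend studies, each testing 20 variables that have absolutely nothing to do with the outcome, and counts how often at least one comes up "significant" at the 5% level:

```python
import random

random.seed(7)

TESTS_PER_STUDY = 20   # variables tested; none truly associated with the outcome
STUDIES = 10_000
ALPHA = 0.05

# Under the null hypothesis, a p-value is uniformly distributed on [0, 1],
# so each individual test "finds" something 5% of the time by chance alone.
studies_with_a_finding = sum(
    any(random.random() < ALPHA for _ in range(TESTS_PER_STUDY))
    for _ in range(STUDIES)
)

share = studies_with_a_finding / STUDIES
print(f"Share of studies with at least one 'finding': {share:.2f}")
print(f"Theoretical value: 1 - 0.95**20 = {1 - 0.95**20:.2f}")
```

Roughly two-thirds of these pretend studies would have something "publishable" in them, even though every single association is pure noise.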

2) Researchers engage in heavy data mining without correcting for this in their statistical procedures: Raise your hand if you think this is good statistics: take a data set and keep running tons of regressions until you find statistically significant results. Present the best results.

This is called data mining and I'm not a big fan of it. If you use the data to inform further tests, you have to do a correction in order to obtain the true p-value. Think about it: if you beat the stuffing out of your data (i.e., run lots and lots of regressions), by random chance you are more likely to "find stuff" in the dataset. It makes sense that you should correct for this statistically.

If you are going to data mine (i.e., your statistical investigation is not hypothesis or theory driven), the correct procedure would be to run all those regressions, obtain the results, and then either correct your p-values (where the corrections take the mining of the data into account) or try to replicate the results in another data set. The latter is called out-of-sample testing and most people don't do it.
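The simplest of these corrections is the Bonferroni adjustment: if you ran m tests, compare each p-value to 0.05/m instead of 0.05. A minimal sketch, using made-up p-values rather than anything from a real study:

```python
# Hypothetical p-values from 20 regressions run on the same data set.
p_values = [0.003, 0.040, 0.048, 0.110, 0.270] + [0.50] * 15

ALPHA = 0.05
m = len(p_values)

naive_hits = [p for p in p_values if p < ALPHA]           # no correction
bonferroni_hits = [p for p in p_values if p < ALPHA / m]  # corrected threshold

print(f"'Significant' with no correction: {len(naive_hits)}")
print(f"Significant after Bonferroni (threshold 0.05/{m} = {ALPHA / m}): {len(bonferroni_hits)}")
```

Here all three "findings" from the uncorrected analysis evaporate once you account for the 20 tests, which is exactly the point: what looks like signal is often just the arithmetic of running lots of regressions.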

And I haven't even mentioned (3) the whole correlation doesn't imply causation bit.

Fair enough, you might say, but why is this dangerous? Who cares? Here are my answers to that:

1) People care about their health, more so than most other things. And because of that, people (through the news, through the papers, whatever) want to know about the latest research findings. When they do find out, they will listen. In this context, the wrong information can be dangerous (the NYT article linked above has a particularly salient example of this).

2) Bad epidemiology of the "armchair" variety crowds out good epidemiology, and there is a sizable amount of the latter. People will ultimately get tired of the bad science, learn that medical studies are typically bad since the results flip back and forth, and give up on public health research. When all that is said and done, the public might not have patience even for the best of studies.

Final question: if such studies are so obviously bad, why aren't they weeded out (in favor of better studies)? Again, here are my views:

1) It's not obvious to most people that these are bad studies. I think the general public has a pretty poor understanding of basic statistics. This is why every high school/college kid should be made to take a stats course. (And they should take an intro microecon course while they are at it.)

2) The incentives in the medical research market encourage bad studies. After all, it's publish or perish at most medical schools. Quantity often carries the day, and each researcher is incentivized to publish as much as possible, either by turning one study into three smaller papers or by putting out trash like "shoes cause cancer."

A related point: there is more data, more computing power, and more easy-to-use stats packages out there than ever before. While this reduces the cost of doing both good and bad studies, good studies still require a lot more time and care, the very things that go by the wayside when incentives to publish favor quantity.

3) People want to learn about health related things. This (plus 1) creates a huge public demand for such work.

4) Peer reviewed medical journals do not weed out these studies. Rather, they actually encourage trash research by publishing it. At times, some of the top medical journals are guilty of this as well.

I think this happens because a) many reviewers don't know good statistics from bad statistics, b) it pays to publish freaky results and c) by accepting other people's trash, it makes room in the literature for one's own trash.

Now that my rant is over, I'll direct you to some other interesting links:

1) Steve Levitt at the Freakonomics blog has a great post about the Iraq War surge and how good econometrics and data can shed light on the impact of various policies. Recommended reading.

2) Santosh Anagol of BMB fame has started a new series on how "non-economics people misunderstand basic micro-economics." It's great stuff for those with and without an economics background: either way, you'll learn something.

3) Like Seinfeld? You'll love this clip. Trust me: it's worth watching. (Credit to Xiaobo for the link.)

Sunday, September 16, 2007

Should I Buy a Nintendo Wii? (and the Causal Returns to Studying)

I played the Nintendo Wii for the first time about two weeks ago. It was great, mainly because it was pretty easy and involved doing the motions yourself.

At one point in my life, I was pretty good at video games. That was up until the N64 came on to the market. Once those systems with the really complicated controllers were released, I became a video-game zero. So, it's refreshing to have something like the Wii, which allows non-gamers to feel proficient and have a good time.

I'm wondering if I should get a Wii for my apartment. Here are two reasons why I shouldn't:

1) My arm really hurt the day after playing the Wii. I think I overdid it while playing Wii baseball. Apparently, this has happened to other people as well. There's even a name for this condition: "wii-itis."

2) Purchasing a Wii might have an adverse effect on my academics. Furthermore, it might be bad for my roommate's studies, as well.

Todd and Ralph Stinebrickner touch on the second in an interesting paper about the effects of studying on academic performance. The basic idea is this: imagine having a dataset on a sample of students with information on how much they study per day and their grades. With this data, you could run a regression of grades on studying. But would this really allow you to recover the causal effect of studying on academic performance? Probably not: studying is likely associated with other unobserved factors that jointly determine grades, like motivation or intelligence. Also, poor grades in the past might cause an individual to study more (or less).

One way to estimate the causal effect of studying is to randomly assign students to different hours of studying. Obviously, it's not possible to do this. But does the real world generate this kind of random variation on its own?

Stinebrickner and Stinebrickner believe the answer is yes. They note that, in the first year of college, individuals are randomly assigned roommates. If roommates are randomly assigned, so are the characteristics of these roommates, and these characteristics, in turn, might influence how much a given individual studies.

This is where video games come in. The authors, using freshman-year information from panel data they collect on a cohort of students at Berea College, find that individuals who are randomly assigned a roommate who brings video games to school study significantly less than those who are assigned a roommate with other (more productive?) hobbies. The authors use this and a few other sources of random variation to assess the causal effects of studying on academic performance. Not surprisingly, they find that an extra hour of studying is good for your grades.
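To make the selection problem concrete, here's a toy simulation (my own stylized numbers, not the Stinebrickners' data or model). Unobserved "motivation" raises both studying and grades, so the naive regression overstates the effect of studying; the simple Wald/IV estimator built on the randomly assigned roommate recovers something close to the true effect:

```python
import random

random.seed(42)
N = 100_000
TRUE_EFFECT = 0.3  # causal effect of an hour of studying on grades

games, study, grades = [], [], []
for _ in range(N):
    motivation = random.gauss(0, 1)  # unobserved; raises studying AND grades
    z = random.randint(0, 1)         # roommate brings video games; randomly
                                     # assigned, so independent of motivation
    s = 5 - 2 * z + motivation + random.gauss(0, 1)
    g = TRUE_EFFECT * s + motivation + random.gauss(0, 1)
    games.append(z); study.append(s); grades.append(g)

def mean(xs):
    return sum(xs) / len(xs)

# Naive OLS slope of grades on studying: cov(study, grades) / var(study).
ms, mg = mean(study), mean(grades)
ols = (sum((s - ms) * (g - mg) for s, g in zip(study, grades))
       / sum((s - ms) ** 2 for s in study))

# Wald / IV estimator using the binary roommate instrument.
s1 = mean([s for s, z in zip(study, games) if z])
s0 = mean([s for s, z in zip(study, games) if not z])
g1 = mean([g for g, z in zip(grades, games) if z])
g0 = mean([g for g, z in zip(grades, games) if not z])
iv = (g1 - g0) / (s1 - s0)

print(f"True effect: {TRUE_EFFECT}, naive OLS: {ols:.2f}, IV: {iv:.2f}")
```

The naive slope comes out roughly twice the true effect here, while the IV estimate lands near 0.3; the instrument works precisely because random roommate assignment breaks the link with motivation.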

Thursday, September 13, 2007

Protesting Stupidity: AIDS Demonstration in Cape Town

This time, I couldn't help but get riled up.

I've been to protests before, though most of these were of the ultimately trivial "uninformed-college-students-angry-against-the-world" sort. For the most part, I've approached these kinds of gatherings in a decidedly positive (in the Milton Friedman sense) spirit: go see what people are mad about and find out whether their specific concerns are borne out "in the data."

The protest I attended in Cape Town was completely different from these past experiences. This wasn't the kind of demonstration where both sides of the issue had some credibility. Rather, this was a protest against sheer stupidity on the part of elected officials, all of whom should know better.

In particular, demonstrators were protesting the dubious dismissal of South African Deputy Health Minister Nozizwe Madlala-Routledge. The suspicion surrounding the incident lies in Nozizwe's support of aggressive anti-HIV/AIDS policy, which stands in contrast to the essentially anti-science stance of her superiors - including South African President Thabo Mbeki and Health Minister Manto Tshabalala-Msimang - who have gone on record with gems like "HIV doesn't cause AIDS" and "garlic may be as effective as ARVs in treating the disease." You can read about the whole story here, here and here. For a more eloquent account of AIDS politics in South Africa, check out some of the older posts at James Hudspeth's blog. He's a medical student who spent an academic year in South Africa and writes like...well, a real writer. For future reference, you'll find him hanging out under links.

A little bit more about the protest: it was, in a word, electrifying. The atmosphere plus the really obvious cause turned this usually casual observer into an active participant. While it was sad that the whole affair had to come to this, it was uplifting to be able to believe - at least for that one moment - that the protesters would ultimately succeed in their fight against AIDS and all the 'dubborn' (that's dumb + stubborn) people who stand in their way.

The affair was organized by the NGO Treatment Action Campaign, and was filled with singing, dances, and speeches by a variety of different activists (labor union reps, women's rights advocates, and so forth). My favorite speech was by a Quaker gentleman (that's what he introduced himself as), who looked completely out of place up on the dais as he was a good deal older and whiter than his counterparts. He went up on podium and, in a soft, calm voice, clinically annihilated each of the justifications put forth by the government for Nozizwe's dismissal.

The response he received was as loud as he was quiet: completely deafening. "Speak softly and carry a big stick," said Big Teddy R. Indeed.

I'm not a good enough writer to really describe in words how electric the atmosphere was at this protest. Instead, I'm going to let the event speak for itself. Check out this video and soak in the sounds and atmosphere. Even at 0.2 megapixels or whatever, it's still pretty moving.

Tuesday, September 11, 2007

Are Criminals Lead Heads? (and Other Interesting Links)

The sharp drop in violent crime during the 1990s is one of the big stylized facts social scientists of all sorts have been grappling with. Relatively recent work in this area has led to some surprising findings. I'm sure you are all aware of the Freakonomics finding that the legalization of abortion in the 1970s may be associated with the precipitous drop in crime (see below for more and here if you aren't familiar with this finding).

I recently read another equally startling paper by Jessica Wolpaw Reyes of Amherst, who contends that environmental policy driven decreases in childhood exposure to lead during the 1970s can explain over half of the decline in violent crime! Here's the main idea:

There are substantial reasons to expect that a person's lead exposure as a child could affect the likelihood that he might commit a crime as an adult. Childhood lead exposure increases the likelihood of behavioral and cognitive traits such as impulsivity, aggressivity, and low IQ that are strongly associated with criminal behavior. Under the 1970 Clean Air Act, lead was almost entirely removed from gasoline between 1975 and 1985...As each [of the cohorts born during this time period] approaches adulthood, the sharp declines in lead exposure that occurred between 1975 and 1985 would be revealed in the behavior as adults.

While that pretty much hints at the author's empirical strategy, definitely read the paper if you are interested in learning more about the econometrics. I just finished it and it seems like good science to me.

Other links:

1. For more on the short and long term effects of early life exposure to lead, check out these other papers by Prof Reyes.

2. Levitt and Dubner at the Freakonomics blog have put up a video which explores how the former developed his thesis on the abortion-crime link. A bit earlier in this space, I commented about how great it is to follow an analyst through his/her argument and understand the genesis of cool ideas. Here is your chance to do just that.

3. My aunt just started a blog, called "Spotlight" (included under links). It is as informative, insightful and funny as she is. Here is a sample, from an entry about her recent trip to Cambodia:

Most of my friends in California wondered why we picked Cambodia to visit. Despite the fact that we have three biological children and have our hands full taking care of them, the first question they popped was "Are you going there to adopt?" I quickly pushed aside thoughts of putting up some of my own children for adoption in Cambodia, and replied quite seriously that first there was Angkor Wat, and then quite a few other ancient ruins, which my husband and I dig (metaphorically, that is).

You'll notice the Paul Simon quote on the left hand side of her blog. I had my first exposure to "Western music" during a 1987 trip to India, when my blogger aunt (a very able musician herself) opened my eyes to the brilliance of Paul Simon. His album, Graceland, swept the then still relevant Grammys and won a great deal of critical acclaim.

I remember listening to her walkman at night, thinking about the "boy in the bubble, and the baby with the baboon heart," concepts that brought both wonder and fear to a very young soul. About eight years later, Graceland became the very first CD of my nascent music collection. Just last week, I listened to Paul's poetry on my way back from Cape Town (which is appropriate, given that Graceland is an amalgam of Western and African sounds). And now, I am reading the quotation from "You Can Call Me Al," which happens to be my all-time favorite bit of lyrics.

Hmm...a bit of a digression, I guess. But it's amazing how one person's simple gesture to entertain an irritating kid can continue to resonate even as that kid has become an adult (the last part, according to my parents, is debatable).


Monday, September 10, 2007

In Purgatory with Larry Summers (or 'I am not Sexist')

I was accused of being sexist today. Here was the "offensive" remark I made:

Other person: Why don't women play five set matches?
Me: I don't think consumers want to see best of five women's matches. They wouldn't be very interesting, especially given the lack of depth on the women's circuit.

How was that sexist? I was just calling it the way I see it. Was there anything in my statement that indicated that I thought women were inferior or something? No.

Just for the record, when the women's tour was sponsored by the cigarette manufacturer Virginia Slims, the year end tournament (kinda like the men's Tennis Masters) had five-set matches. Only one match ever went to five sets, and the majority of the other matches were brutally uninteresting (see here). Which brings up the point: if there was demand for longer women's matches, wouldn't we be seeing more of these, rather than the elimination of any existing five-setters? And no, I don't think a market failure argument works in the TV sports world.

This is the second time I've been accused of being sexist. The last incident was in 1991, when my sixth grade teacher got really mad at me for asking my mom to bring me a book that I forgot at home. "Why didn't you ask your dad to bring the book?" asked my teacher, while delivering a glowering look.

I was too afraid to explain my side: "Well, gee, my dad is at work right now, about 30 minutes away, and my mom is at home, about 2 minutes away. And heck, my dad would be totally clueless about my textbooks, anyway."

I have a great deal of respect for women's rights and the feminist movement. What I have no patience for are the over-sensitive elements that sit around and get offended at perfectly reasonable arguments, such as why most people would get bored watching five sets of women's tennis (I'm sorry, but that's just true). And using emotionally charged terms like "sexist" in inappropriate contexts is just a lame thing to do. Save it for when it really matters.

A digression: Ok, I agree that there are some men out there who are total pigs. But most of us aren't. Kudos to the people who recognize that. And, trust me, there are people like that. I went to an AIDS-related protest in Cape Town a few weeks back (check this space in the next few days for more on that), and one of the groups represented was a women's rights outfit. The woman who was representing the group began her speech with stories about rape, disrespect and infidelity, all forces that bring HIV/AIDS to innocent women all over Africa who definitely deserve better.

But before she went off lambasting men, the speaker paused and said:

"Give a hand to all the enlightened and good men out there that understand our plight and fight with us."

Thank you.

Saturday, September 8, 2007

Irrational Voters

It's safe to say that these aren't the best of times for President Bush supporters. Depending on your point of view, every month seems to bring a new problem, a new screw-up, or a new impeachable offense. What is interesting is that, despite talk of him being the worst president ever, some 30% of the public still approves of Bush's job performance. "How is this even possible?" asked one of my relatives in India.

Perhaps part of the answer can be found in this very interesting paper by economists Sendhil Mullainathan and Ebonya Washington. Standard economic models of voting posit that individuals express their preferences through their vote. The authors suggest, however, that it is possible for the act of voting to influence beliefs and preferences. How does this occur? As the authors note:

One explanation for the impact of behaviors on beliefs is cognitive dissonance (Festinger, 1957) which refers to one’s internal need for consistency. If an individual performs an activity that is antithetical to his beliefs, the individual may unconsciously change his beliefs to alleviate the discomfort of having inconsistent attitudes and actions.

In order to test for behavioral effects of voting, the authors use an ingenious "natural experiment." First off, note that if we were to take the general population and run a regression of beliefs on voting, the results would be difficult to interpret: does the causality run from voting to beliefs or from beliefs to voting? To get around that problem, Mullainathan and Washington exploit the age discontinuity in voting eligibility (i.e., 17-year-olds can't vote, but 19-year-olds can) and check the beliefs of these individuals two years later. They find that those who voted (the then-19, now-21-year-olds) show more polarized opinions than comparable individuals who did not (then 17, now 19). Note that, for this to work, those who voted and those who could not due to age restrictions should look the same as far as their traits and such. The authors go to some lengths to establish this.
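The design is easy to see with a toy simulation. All numbers below are invented for illustration (they are not from the paper): everyone draws the same distribution of partisan leanings, only the eligible cohort votes, and the act of voting nudges opinions away from neutral, as the cognitive dissonance story predicts.

```python
import random

random.seed(0)

def polarization(eligible, n=10000):
    """Average |opinion| for a cohort two years after an election.

    The 0.3 'dissonance bump' is a made-up effect size: having voted,
    a person's lean is pushed further from neutral in whichever
    direction it already pointed.
    """
    scores = []
    for _ in range(n):
        lean = random.gauss(0, 1)      # underlying partisan lean
        if eligible:                   # old enough to vote -> voted
            lean += 0.3 if lean > 0 else -0.3
        scores.append(abs(lean))
    return sum(scores) / len(scores)

# The eligibility cutoff acts like random assignment near the threshold,
# so the gap in polarization identifies the behavioral effect of voting.
gap = polarization(eligible=True) - polarization(eligible=False)
```

Since both cohorts start from the same distribution of leanings, any gap in average polarization is attributable to having voted, which is exactly the comparison the authors run across the age cutoff.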

I buy it. In 2000, I put my support behind then Governor G. W. Bush. For the next few years, I felt myself supporting the Bush administration's policies less and less (and less and less), but, at least for a few months, I viscerally experienced some weird feeling (shame?) from having acted in one way and, subsequently, believing another. I'm happy to report that I am no longer upset about this and at peace with my anti-neocon stance, but the experience of being dissonant really did resonate with me.

More on voting: I'm really interested in reading this new book The Myth of the Rational Voter: Why Democracies Choose Bad Policies by Bryan Caplan. Supposedly the book is about why voters act stupidly with respect to their best interests.




Friday, September 7, 2007

"Indiana Jonesing" and Research in Cape Town

I watched Raiders of the Lost Ark for the first time when I was about nine years old. Immediately afterwards, I wanted to be a swashbuckling archaeologist when I grew up. And immediately after that, some killjoy destroyed my dream by pointing out how real archaeology can be tedious. The real thrills, apparently, are in the anticipation of finding something interesting and the elucidation of an interesting story through more digging.

Doing health economics is kind of similar. You may be surprised to learn that health econ research is not always about the glamorous women, fast cars and exotic foreign locales. Instead, most of the time the joy comes from testing your theory with data, refining your model to incorporate new insights and tools, and growing intellectually with each additional project. It's more of a private excitement, often understood only by other like-minded nerds.

I have to say that the project I was part of in Cape Town was as close to the Indiana Jones model as I've ever been in my still young research life. A bit of background: the AIDS and Society Research Unit at the University of Cape Town has collected information on about 250 HIV+ individuals in the Khayelitsha township who have been taking ARVs for the last 5 years or so. The survey began when the majority of individuals had been on treatment for about a year or two and three waves have been collected since 2004. The survey questionnaire includes information on treatment adherence, stigma, labor force participation, and various other aspects of socio-economic life.

My colleague Brendan Maughan-Brown and I are working on a project that looks at how ARV patients and their families cope with the loss of social grants. Basically, individuals starting ARVs are often extremely sick (CD4 counts < 200). The South African Government provides such individuals with a monthly disability grant that pays roughly 2.5 times the median salary for a black family (almost all of the sampled individuals are black).

In the last few years, quite a few individuals have been losing these grants as they return to health. The legalese behind grant provision offers a potential explanation for this: individuals in the Western Cape with CD4 counts below 200 and who are unable to work are eligible for grants. When these grants come up for renewal, healthier individuals are no longer able to receive the grant.

That's how it should work, anyway. With some intensive analysis of the data, however, we found that health and the other parameters that should predict grant/no-grant status over time have no explanatory power. There is no correlation between health improvements (measured in a variety of ways) or labor force participation and the loss of a grant over time. Not only that: only individuals below a certain monthly income level should be receiving grants (i.e., there is a means test based on assets and income). However, people across the income distribution appear equally likely to have or lose a grant! Hmm...
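To make the nature of that null result concrete, here is a small sketch (with simulated data, not our actual survey) of the kind of check involved: if grant loss were tracking health recovery, the correlation below would be large; when grant loss is effectively arbitrary, it hovers near zero.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, computed by hand (no external libraries)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Simulated panel: CD4 improvements drawn at random, and grant loss
# assigned at random too -- i.e., unrelated to health, mimicking the
# pattern we actually found. Were the rules followed, the healthiest
# individuals would be losing grants and r would be strongly positive.
n = 500
cd4_improvement = [random.gauss(100, 50) for _ in range(n)]
lost_grant = [1.0 if random.random() < 0.3 else 0.0 for _ in range(n)]
r = corr(cd4_improvement, lost_grant)
```

With 500 observations, a true relationship of any meaningful size would show up clearly; a correlation this close to zero is what "no explanatory power" looks like in practice.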

At the same time, we see in the data that households may engage in a variety of practices (more to come on this) to cope with the loss of the grant: it's a lot of money and, given that unemployment rates are so high in the area (~40%!), the transition from welfare grant to the labor force is fraught with lots of friction. Thus, the loss of a grant is a very, very large negative shock to the household, and people engage in all sorts of coping mechanisms to smooth consumption.

However, we cannot say with certainty which direction the association with the outcomes goes: even controlling for unobserved fixed factors, is it the loss of the grant that precipitated a certain behavior, or vice versa (or are both correlated with some other behavior that changes over time)?

In order to get at this, we decided that we needed to learn more about how disability grants are awarded, especially given that we found virtually no evidence in the data that the official provincial rules for grant distribution were being followed. This is where it got really fun: we spent days meeting social workers, individuals on ARVs, academics and other involved sorts to try and understand the 'grant generation process,' the stuff that's really going on on the ground, legalese notwithstanding.

In essence, we were trying to uncover sources of variation that would allow us to statistically separate the effect of the loss of the grant on various outcomes from the reverse causality and omitted (time-varying) variable bias. In searching for this instrumental variable, we met all sorts of interesting people with really bizarre stories of how grants are provided in practice. We have a few extremely interesting leads that we plan to follow through. I'll definitely keep you posted on the specifics of this in a later post.
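For readers who want the mechanics, the instrumental-variable logic can be sketched in a few lines. Everything here is hypothetical: the "instrument" stands in for whatever source of arbitrary variation in the grant process we eventually settle on. An unobserved confounder biases the naive regression of the outcome on grant loss, while the IV (Wald) estimator recovers the true effect.

```python
import random

random.seed(2)

def cov(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

n = 20000
true_effect = -1.0
data = []
for _ in range(n):
    u = random.gauss(0, 1)   # unobserved confounder (e.g., a health shock)
    z = random.gauss(0, 1)   # instrument: shifts grant loss, nothing else
    x = 0.8 * z + u + random.gauss(0, 1)              # grant loss
    y = true_effect * x + 2.0 * u + random.gauss(0, 1)  # outcome
    data.append((z, x, y))

zs, xs, ys = zip(*data)
beta_ols = cov(xs, ys) / cov(xs, xs)  # naive slope, biased toward zero here
beta_iv = cov(zs, ys) / cov(zs, xs)   # Wald/IV estimate, close to -1.0
```

The naive estimate mixes the causal effect with the confounder's influence; because the instrument moves grant loss but has no direct path to the outcome, the ratio of covariances isolates the causal channel.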

All that aside, the main point is really this: how amazing is it to hunt for statistical identification by interviewing highly interesting and engaging people in the real world, following up on a string of leads like journalists or detectives, and running away from gigantic boulders?

To quote Professor Allen Kelley: "Research is fun!"

Thursday, September 6, 2007

Andre Agassi and Great Analysts

I really enjoyed last night’s Federer-Roddick match. For one thing, it was pretty amazing to watch Roddick play his absolute best and still lose to someone who is capable of raising his game to ridiculous levels. (How does someone make returning a 140 MPH serve look that easy?) This was a feast of really high quality tennis, served with an excellent side dish: Andre Agassi’s commentary.

Andre was fantastic. What I loved most were his numerous strategic and technical tennis insights. Here are some of my favorite nuggets:

On Strategy: Agassi made it clear from the outset that Roddick would have to consistently serve and hit big, staying on the attack as long as possible and preventing Federer from transitioning from defense to offense (the last of which is especially hard to do). That’s what Roddick did, and that’s why the match was so close, at least in the first two stanzas.

On Technique: Agassi’s discussion of what makes a good two-handed backhand was simply awesome. Roddick’s backhand has been criticized by many as being a bit of a weakness. What makes his stroke so different from, say, Marat Safin’s or Rafael Nadal’s? Agassi contends that two-handers who stroke the ball primarily with their non-dominant hand (in Roddick’s case, his left) have weaker and less reliable strokes than those who strike primarily with the dominant hand.

Now I know why my backhand sucks.

On Boris Becker: Agassi pointed out that there is a strong correlation between the position of Becker’s tongue during his service ritual and the ultimate placement of his serve. This is why Agassi had such success returning Becker in their pantheon of great matches.

Agassi’s commentary reminded me a lot of ESPN stalwart and ace football analyst Ron Jaworski’s work. Jaws’ stuff is excellent fare: he breaks down the game play with deep insight that both serious and casual fans can really enjoy. Lucky for us, Jaws will be doing Monday Night Football this year. Hopefully, we’ll see more of Andre in the booth as well.

There is an additional benefit to listening to commentary from people like Jaws or Agassi: it really allows us to get into the mind of a great analyst and follow his chain of thought. Following that chain of logic forces you to think through the problem as well and, as a result, helps you build your own analytical skills.

That’s why I loved my very first economics class, taught by Duke’s Allen Kelley. The whole course was designed to improve our analytical skills and give us practice employing economic tools and reasoning. The recent spate of blogs and popular press books on economics (see links) purport to do the same thing.

Tuesday, September 4, 2007

Polyjuice to Noor

"as for the answer, why would i want to be anyone else?"

Charming, witty, efficient and deep - adjectives that can be used to describe Noor as well as her numerous one-liners. The latter are usually quite good, so it's no surprise that this one wins her the 1st Annual Polyjuice Potion Contest.

2007 has proved to be quite the innings for Noor. She graduated from Harvard with her MPH, got married and, now, has bagged the Polyjuice Potion Contest (and that's listed in increasing order of importance - j/k). In a few months' time, she'll graduate with her MD from Wash U, and go on to combine her training in medicine and public health, as well as a natural bent towards activism and a deep concern for the less fortunate, in what should be a very, very interesting career.

So I extend a hearty handshake and congrats to Noor for her victory. Noor: you'll be receiving your award shortly.

Tej, Maheer, James and Valli: thank you for entering the contest. Noor's answer was phenomenal, but it wasn't a runaway victory by any means; I enjoyed all of your contributions (as well as the emergent discussion on the properties of the potion!).

I quite enjoyed putting this contest out there, so look for more in this space in the coming months.


Sunday, September 2, 2007

Cherry Picking, Health Care and Progressive Auto Insurance Quotes

The lack of universal health insurance coverage in the United States is often attributed to the way U.S. health insurance markets are organized. With the exception of Medicare, Medicaid and other government programs, health insurance is usually obtained from private organizations, usually via employers. While such markets should encourage competition between health insurers and therefore lower rates for customers, many observers point out that such a system has perverse consequences.

For one thing, insurance companies make profits on good risks and tend to engage in practices that either bring in healthy consumers or balance out bad risks with good ones, such as offering lower rates to individuals in risk pools (such as workers in large firms) or advertising at gyms, where only the healthy, or possessors of good habits, tend to go. At the end of the day, an unemployed person trying to buy an individual policy will face extremely high rates. Not only that, if he or she tends to be sickly, the premiums will go through the roof.

What does this have to do with Progressive Auto Insurance? Just as with a health insurance company, Progressive stands to increase their profit margins by insuring safe drivers and not attracting the unsafe ones. I always wondered why Progressive was being so "nice" to us by giving us their own and their competitor’s quotes. Then it hit me: this strategy is a great way to attract good risks (based on actuarial parameters such as age, previous driving record, gender, etc) and drive away the bad ones to other companies. After all, wouldn’t Progressive love to insure Sally, a 35 year old mother of two with a spotless driving record, and suggest that Rex, a 17 year old Mustang driving monster with two speeding tickets, go to the British-accented gecko (or be placed in “good hands”)?
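A back-of-the-envelope example shows why publishing competitors' quotes can be profitable. The drivers, premiums and expected claims below are all invented for illustration: the point is simply that the quote comparison keeps the good risks and politely ushers the bad risks out the door.

```python
# Hypothetical drivers: (name, expected annual claims, our quote,
# best competitor quote). We price Rex above Sally, but a competitor
# still undercuts us on Rex.
drivers = [
    ("Sally", 300, 900, 1100),   # good risk: we are the low bidder
    ("Rex", 2500, 2000, 1800),   # bad risk: competitor undercuts us
]

# With quote comparison, each driver takes the cheapest quote, so we
# only insure those for whom we are the low bidder.
profit_with_comparison = sum(
    ours - claims for _, claims, ours, theirs in drivers if ours <= theirs
)

# Without the comparison, suppose both drivers sign with us.
profit_without = sum(ours - claims for _, claims, ours, _ in drivers)
```

Here Sally is worth +600 to us either way, but Rex costs us 500 in expected losses if he signs; showing him a cheaper competitor's quote converts that loss into the competitor's problem, so expected profit rises from 100 to 600.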

I wanted to test the theory by playing on the Progressive website. Unfortunately, it requires valid e-mail IDs and all sorts of other information that I don’t want to repetitively create/use.

(PS/Aside: The insurance system may also have consequences for health care costs. Each individual insurance company faces particular administrative costs. These costs are often high. Some observers (for example, Paul Krugman) argue that single payer models, in addition to covering all individuals, will be cheaper as well since a single insurer can take advantage of economies of scale in administration.)