Welcome! This is a blog that generally covers issues related to health and development economics. Feel free to visit and comment as often as you'd like.

Wednesday, January 30, 2008

I have an appointment with the doctor this Friday and I just received the third call asking for basic information (address, SSN, birthdate, medical history, etc.) and confirmation. Each call was placed by a different department here at Yale-New Haven and took about 5-10 minutes of my time.
I understand the need to confirm and get important medical information, but is it really necessary to make three calls in order to do this? Don't the different departments talk to each other? It is well known that Yale-New Haven (like many other hospitals) has archaic medical record systems that prevent one department from efficiently obtaining information about a patient from another physician, floor, or department. It's not hard to see how this leads to waste. And this doesn't even account for the fact that medical record-keeping systems across hospitals are rarely interoperable.
One potential solution to this particular inefficiency problem is to create a nationwide health care IT system. This article offers some analysis of how this could be done, along with potential costs and benefits. Some take-home points:
-IT can produce savings, but this alone will not shrink the per capita health care expenditure gap between the U.S. and other OECD nations. In other words, if cost and efficiency are major concerns, IT is no substitute for systemic change.
-Health care IT will likely require substantial public investment. A firm that puts in an IT system that other firms can link to likely faces a first-mover disadvantage: there are significant spillovers from this initial investment. Other firms can reap efficiency gains from IT at a fraction of the cost, since the first firm has already done most of the work. Hence, without government intervention (i.e., funding and/or mandates), one would expect underinvestment in health care IT (see the back-of-the-envelope sketch below). Indeed, this likely explains our current state of affairs and why it took three calls to confirm my appointment.
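Here's a back-of-the-envelope sketch of that spillover argument in Python. All of the numbers are made up purely for illustration; only their ordering matters.

```python
# A toy illustration of the first-mover disadvantage in health care IT.
# All figures are invented for illustration; only their relative sizes matter.

build_cost = 100        # cost for the first hospital to build an interoperable IT system
follow_cost = 20        # cost for a later hospital to link to it (most of the work is done)
private_benefit = 60    # efficiency gain captured by each connected hospital

# The hospital that builds the system first:
first_mover_payoff = private_benefit - build_cost   # 60 - 100 = -40

# A hospital that waits and simply links in:
follower_payoff = private_benefit - follow_cost     # 60 - 20 = +40

# Total payoff if, say, five hospitals end up connected:
n_hospitals = 5
social_payoff = (n_hospitals * private_benefit
                 - build_cost
                 - (n_hospitals - 1) * follow_cost)  # 300 - 100 - 80 = +120

print(first_mover_payoff, follower_payoff, social_payoff)
# No single hospital wants to move first (-40), even though the system is worth
# building for the group as a whole (+120) -- hence underinvestment without
# public funding or a mandate.
```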
Saturday, January 26, 2008
Selling the Mango Burrito
A friend of mine and I went to The Whole Enchilada (21 Whitney Ave, New Haven, CT) yesterday, a small Mexican restaurant whose ambiance and proclivity for neon lighting belie the quality of food served inside. The menu indicated an intriguing new special, "The Mango Burrito." Since there was no description of this dish, we went up to the counter and asked the proprietor/cook for more details. I just had to mention the phrase "mango burrito" and the guy's eyes lit up. "The Maaaaah-ngo Burrito!" he said excitedly in his accented English, before going on to a full description of its contents. I was quickly sold on ordering it, with 90% of my decision coming down to this guy's excitement (he didn't even have to mention what was inside).
In every other restaurant I've been to, when the waiter/waitress reads off the specials I am less than enthused. I think this is because they show no excitement about them. You get "would you be interested in hearing about the specials?" and, regardless of your answer, you hear a long list of weird dishes, delivered with, at best, a poker face. Not too inspiring. In fact, I don't think I'd ever ordered the special until yesterday.
The Whole Enchilada experience suggests to me that there is a better way to peddle the specials: restaurant employees should act like they love them. It would definitely work with me. The mango burrito is the running example here, but many other times, after choosing between a few items and being unsure of my choice, hearing from the waiter or waitress that the dish I picked is "oooh, so good" makes me feel good about my choice and probably enhances the rest of my dining experience. Perhaps there's a nice randomized experiment here: randomly prime tables with "ooh, that's so good, it's my favorite dish" and then assess how people feel about the food and service later based on ratings/tips/etc.?
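If you wanted to actually run that experiment, here is a toy sketch in Python of what the analysis could look like. The sample size, tip distribution, and effect size are all invented for illustration.

```python
# Toy simulation of the "enthusiasm priming" experiment: randomly assign tables
# to receive the excited pitch, then compare tips. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tables = 200

# Randomly assign half the tables to the "ooh, that's so good" treatment
primed = rng.permutation(np.repeat([0, 1], n_tables // 2))

# Hypothetical tip percentages: ~15% baseline, primed tables tip ~1 point more
tips = rng.normal(loc=15 + 1.0 * primed, scale=3.0)

t_stat, p_value = stats.ttest_ind(tips[primed == 1], tips[primed == 0])
print(f"mean tip, primed tables:  {tips[primed == 1].mean():.2f}%")
print(f"mean tip, control tables: {tips[primed == 0].mean():.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```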
If there are potential gains to be had by inducing customers to order the specials, why aren't these sold more aggressively? Some thoughts:
1) Specials rotate quickly and the staff does not have time to sample them, rendering judgments about them impossible.
2) Waiters/waitresses are not invested in the specials in the same way the cooks are. Remember, The Whole Enchilada guy was actually the cook, and his love of the mango burrito can likely be classified as pride in his creation.
3) There are no returns to the waiters/waitresses for pushing the special over the regular menu in terms of tips. The restaurant may profit from selling the specials, though, which suggests incentive incompatibility.
4) Marginal profits from sales are not very high for the specials (i.e., I've relaxed the original premise of the question), and the real benefit of having specials comes from reputation, social network effects and signaling ("restaurants that have specials must be really good"). In this case, the customer needs to be told there are specials, but aggressive sales of these have no additional benefit.
By the way, the mango burrito was damn good. Go eat one if you are in the neighborhood.
Thursday, January 24, 2008
Post-Football Funk
I'm a huge football fan, and, each year after the Super Bowl, I come away with a sense of loss. Up to that point, between August and January, I likely spent each and every weekend watching at least one NFL game. The void left by no football is non-trivial. I try to salvage something and pathetically watch the Pro Bowl the following weekend, only to be disappointed by its total vapidity. (This "priceless pep-talk" by Peyton Manning describes this feeling much better than I just did.)
The post-NFL funk lasts about a month for me, at the end of which the inevitable March Madness comes along to get my spirits up again. As such, February is my least favorite month of the year, and I estimate that my mood and productivity both tank during this time.
I wonder if this holds more generally across the population of football fans. Almost everyone I know feels depressed after the NFL (or college) seasons. Does this have consequences for their health, productivity and happiness, too? How important are these consequences to population health and the economy at large?
I don't think anyone has done a study on this yet. Obviously, establishing causality would be difficult. You'd need some kind of before and after Super Bowl measurement(s) on these various outcomes among NFL fans as well as a control group to difference out general non-NFL trends (like "wow, I'm a bit sick of winter now", etc). But finding a control group would be hard. I'd suggest NBA fans, but they are a little weird.
Cleaner identification is possible by looking at fans of opposing teams playing each other in a single-elimination game. For example, an enterprising soul could look at Patriots and Giants fans before and after the Super Bowl. This article uses a similarly themed strategy for soccer matches and finds that Premier League home team losses are associated with increased incidence of death due to acute myocardial infarctions and strokes.
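For concreteness, here's a rough sketch in Python of what that before/after comparison of winning- and losing-team fans might look like as a difference-in-differences regression. The data are simulated and the half-point "funk" effect is invented for illustration.

```python
# Simulated diff-in-diff: fans of the losing vs. winning team, surveyed before
# and after the Super Bowl. The coefficient on the interaction term is the
# estimated post-loss "funk." All numbers are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500  # surveyed fans per team, per period

df = pd.DataFrame({
    "loser_fan": np.repeat([0, 1], n).tolist() * 2,  # 0 = winning team's fans
    "post": [0] * (2 * n) + [1] * (2 * n),           # 0 = week before, 1 = week after
})
# Hypothetical mood index (0-10): losing fans drop half a point after the game
df["mood"] = 7.0 - 0.5 * df["loser_fan"] * df["post"] + rng.normal(0, 1.0, len(df))

model = smf.ols("mood ~ loser_fan * post", data=df).fit()
print(model.params["loser_fan:post"])  # diff-in-diff estimate, should be near -0.5
```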
With the Steelers out of the playoffs, my arch-enemies the Patriots poised to go undefeated, and eight months of no NFL in front of me, I am in for a pretty dumb February.
But what's that? Duke basketball in the top 5? Maybe the funk will be tempered somewhat.
Sunday, January 20, 2008
Is Moving from State to Nationwide Insurance Markets a Good Idea?
The U.S. health care system places restrictions on where you can buy insurance. In particular, if you're in the market for a policy, your choice is restricted to providers operating within your state. Many market-oriented health care reformers, including all of the Republican Presidential candidates, view this as a source of inefficiency in the health care system and argue that interstate purchase restrictions should be lifted.
The logic of this claim is easy to follow. Currently, the average cost of insurance (premiums and deductibles) varies greatly across states, driven by differences in state mandates. Allowing consumers to purchase plans from other states, where the cost of insurance might be cheaper, puts pressure on insurance companies to lower costs and on state governments to loosen restrictions, which would also help reduce costs through increased efficiency. Access should improve as well, since lower costs matter most for poorer consumers.
But will this actually work out in practice? Vincent, a student of mine from an undergraduate class in health economics last semester, wrote his final paper on this question. He offers an interesting argument against this policy, at least as promoted in isolation or in conjunction with the other market-oriented policies pushed by Republicans. The main idea is that lifting interstate purchase restrictions will drive healthy individuals to buy policies in low-cost states, which are typically those with fewer mandates. Sicker individuals will buy policies in states mandating more generous coverage. As a result, high-mandate states will attract sicker customers while low-mandate states corner the healthy ones. Because sustaining mandates under a nationwide market would become expensive, this situation would then encourage states to loosen such restrictions. Given that these mandates were generally put in place to address access issues in the first place, interstate competition may actually lead to decreased access.
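To see the mechanics, here's a stylized sketch in Python of the pool in a high-mandate state once interstate purchase is allowed. The claim costs, pool sizes, and the assumption that 20% of the remaining healthy enrollees leave for a cheaper out-of-state plan each year are all invented; the point is only the direction premiums move.

```python
# Stylized adverse-selection spiral in a high-mandate state's insurance pool.
# All numbers are invented for illustration.
healthy_cost, sick_cost = 1000, 9000   # hypothetical annual expected claims per enrollee
n_healthy, n_sick = 900, 100           # initial enrollees in the high-mandate pool

for year in range(1, 6):
    pool_size = n_healthy + n_sick
    premium = (n_healthy * healthy_cost + n_sick * sick_cost) / pool_size
    print(f"year {year}: pool = {pool_size}, break-even premium = ${premium:,.0f}")

    # Suppose 20% of the remaining healthy enrollees find a low-mandate,
    # out-of-state plan cheaper and exit each year; the sick stay put.
    n_healthy = int(n_healthy * 0.8)
# Premiums rise each year as the pool gets sicker, pressuring the state to
# loosen its mandates -- the dynamic described above.
```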
Vincent cites the German "sick fund" liberalization as evidence for his hypothesis. While the German system mandates all individuals to have insurance through one of nearly 200 sick funds (or health insurance plans), even universal coverage could not stop adverse selection issues from arising when people were allowed to pick from providers/plans all over the country.
Has there been any other research done on this issue? I'm curious to see how the Presidential candidates have addressed the possibility of an adverse selection "blow up," if they have even recognized this potential problem at all.
Wednesday, January 16, 2008
"Football Guy"
I recently asked my four year old cousin to draw me a picture of a football player. Within a day, she produced the excellent piece of art displayed below. The name of the piece is "Football Guy," and I think it really captures the spirit of the game.
In case you are wondering, my cousin used Paintbrush to create this picture. Given that program's inherent limitations and finickiness, this makes her accomplishment even more remarkable.
To hear more about my cousin, in particular her ambitions to become a princess, see here.
Monday, January 14, 2008
Does Knowing the Price Make Right?
The EconLog has a recent post salivating over the Singapore health care system. It was an interesting piece, but I was actually more intrigued by James A. Donald's comment about how there seem to be no set prices in the U.S. health care system. Implicit in the comment is the contention that providers don't seem to know what anything costs. He's got some great vignettes from recent conversations with various hospital reps. (Here is some additional evidence from the academic literature.)
The lack of prices, transparent or otherwise, is something market-oriented reformers point to as a tremendous problem with the U.S. health care system. Obviously, prices play a big role in consumers' decisions to seek care and providers' decisions to offer it, especially in the context of a third-party payer system. The argument, then, is that nonexistent or opaque prices mean nonexistent or opaque incentives, which in turn lead to inefficient outcomes and high costs.
Here is a related, but slightly more focused, line of inquiry: at the physician level, what are the consequences of providers lacking knowledge about the true costs of their actions? Does knowing the cost of care lead to changes in what sorts of treatments, diagnostic tests, etc are provided?
Martin Andersen, a former Yalie and good friend of mine, now doing his PhD in health economics at Harvard, did some interesting work on this subject last year for his final project in PLSC 512: Experiments in Social Science, a course he and I took together. In particular, he randomly assigned clinical vignette-based surveys to a group of fourth-year medical students. Half of the questionnaires had a clinical scenario followed by a choice of biologically equivalent treatments. The other half were exactly the same, but this time the cost of each treatment was provided as well. Martin found the information to be powerful: providing information on costs or prices led the average student to pick the cheaper options, and the difference was practically and statistically significant even in the small sample he was using.
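I don't have Martin's data, but here's a minimal sketch in Python of how results from an experiment like his could be tabulated and tested; the counts below are invented for illustration.

```python
# Hypothetical 2x2 table for a vignette experiment like the one described above:
# rows = arm (cost info withheld vs. shown), columns = treatment chosen.
# The counts are invented; they are not Martin's actual results.
import numpy as np
from scipy import stats

table = np.array([
    [18, 22],   # no cost information: 18 chose the cheaper option, 22 the pricier one
    [31,  9],   # cost information shown
])

chi2, p_value, dof, expected = stats.chi2_contingency(table)
share_control = table[0, 0] / table[0].sum()
share_treated = table[1, 0] / table[1].sum()
print(f"chose cheaper option: {share_control:.0%} without cost info vs. {share_treated:.0%} with")
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```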
If this result holds more generally, what can we take from it? After decades of innovation on the cost-control front - capitation, pay for performance, utilization review, etc. - perhaps one of the simplest ways to keep things streamlined is to educate physicians about how much their actions cost. At the very least, making costs and prices more transparent to providers might serve as an effective complement to the myriad other strategies we (quixotically?) employ to achieve cost control. In any event, I'd definitely like to see more research in this area.
Sunday, January 13, 2008
Teaching
In many graduate programs, teaching fellowships are a necessary part of one's PhD experience: in exchange for a generous stipend and (hopefully) access to all of the college/university's vast resources, PhD students get the privilege of helping educate some of America's best and brightest.
For many graduate students, the (mandatory) opportunity to teach brings both excitement and apprehension. I know I definitely had mixed feelings. On the one hand, the psychic benefits of educating eager students and capitalizing on all the perks that come with authority are attractive features of the teaching fellowship or teaching assistant program. On the other hand, the more time you spend helping other people, the less time you have for your own research.
In my program, we are required to teach 40 hours. At first glance, this seems like a really good deal. However, that 40 hours actually means four 10-hr/week semesters of teaching. Apparently, compared to those "poor, unfortunate souls" in humanities programs, this isn't too bad, either.
So far I have completed 35 of the 40 required hours. I have to make a decision about the remaining 5 hours. Should I fulfill them by teaching? By a research assistant job? Should I get an outside grant and buy my way out of these responsibilities altogether? Here are the pros and cons of teaching, as I see them:
Pros
1) Teaching students is rewarding, especially when they notice the effort and reciprocate by getting excited about the material. I've had some phenomenal students thus far, especially in the undergraduate health politics and health economics courses. It was great to be able to inspire them and get to know them before they go on to do big things.
2) You can reinforce your knowledge in a subject you already know or learn something completely new. I've TAed health policy and health systems, health and political science, and health econ courses. All of them are centered around U.S. health care, something I knew comparatively little about. Now I have a good handle on how everything works. Not only that, I also have a sense of where the interesting research questions are. So if I can't get a job doing international stuff, I could always study Medicaid or P4P or something.
Cons
1) 10 hours/week is more of a time commitment than you'd think. Actually, it wasn't a big deal during second year, when I TAed in addition to taking a full load of courses, but was more so this past semester, my first year of "full dissertation time." I'm obsessed with finishing this program as quickly as possible - after all, I need to go back to med school! - and in order to do so, I think you need to get off to a strong start early on in the third year. There were many times when I felt that teaching responsibilities (writing exams, grading exams, holding section, etc) were destroying any momentum I was picking up. As such, I've taken this semester off to focus fully on research work.
2) For most students, TAing responsibilities involve teaching section, coming up with exam questions, and grading exams and problem sets. I think you can get pretty good at those things, as well as skills like explaining things well and connecting with students, after two semesters of teaching. After that, there are diminishing returns to TAing as far as making you a better teacher goes. I was lucky in my third iteration in that the professor I TAed for expanded my responsibilities to helping come up with the syllabus, giving lectures, and working with guest speakers. I don't think I'll be as fortunate on the fourth go.
3) Diminishing returns also apply to labor market returns to graduate experiences in teaching. At the research powerhouses, I think they care a lot less about your teaching experience than what papers you've written; this also applies for tenure. If this is true, it might be better to spend time RAing and get on some random publication (or get a grant and focus fully on my own work) than teaching.
(Many of you may have had graduate student TAs that seemed less than happy to be teaching you. Reasons 1-3 may explain their long faces.)
I'm not sure what I'm going to do. I'm definitely applying for the grant, but there is no guarantee I will get one. If it comes down to it, the decision between TA and RA is tougher. I'll keep you posted.
Wednesday, January 9, 2008
Health Economics and the Presidential Election: Some Links
Health care, while seemingly always on everyone's mind, has typically been a second tier issue in major election years. My sense is that this is still true (I'd guess the economy, the Iraq Conflict and immigration comprise the top three voter concerns), but perhaps less so than in previous years.
Even so, it's not a stretch to say that 2008 promises to be an interesting year for health policy. Check out the following:
1) The Healthcare Economist has an excellent series of posts comparing the health care proposals of leading Republican and Democrat candidates. The posts include some well-constructed tables (that could stand alone if need be) and insightful commentary/analysis. I classify these as "must reads."
2) Neoclassical and behavioral economics clash once again, this time in the form of Hillary Clinton and Barack Obama! Check out this NYT article for more (thanks to John for sending the link over). The main thrust is that Hillary's health care plan hinges on rational responses to incentives while Obama apparently thinks simpler, less confusing policies are the way to go. I'm not too sure how salient the behavioral versus neoclassical divide is with respect to the merits of each health care plan, but it's nice to see this stuff in the news.
Saturday, January 5, 2008
How to Innovate: Stop Reading Academic Papers?
One of my professors last semester had this piece of advice for graduate students: if you find an interesting research question, work through how you would attack the problem before doing a literature search. His point was that reading the literature, especially in the problem and research strategy generation phase of a project, stifles your creativity.
A recent NYT article makes a similar point:
This so-called curse of knowledge, a phrase used in a 1989 paper in The Journal of Political Economy, means that once you’ve become an expert in a particular subject, it’s hard to imagine not knowing what you do. Your conversations with others in the field are peppered with catch phrases and jargon that are foreign to the uninitiated. When it’s time to accomplish a task — open a store, build a house, buy new cash registers, sell insurance — those in the know get it done the way it has always been done, stifling innovation as they barrel along the well-worn path.
The rest of the article primarily considers strategies for getting around the curse of knowledge at the organizational level, whereas my professor's tactic tries to hit the problem at the individual level. Here are some questions/thoughts I had:
1) Is the curse of knowledge primarily a manifestation of framing, since an analysis presented in a certain way governs the way one thinks of the problem subsequently?
2) Do the benefits of knowledge outweigh the costs? I hear about a lot of projects that stem from "I read this and that paper and saw that I could make the following substantive improvements." I'm guessing that a lot of research is done that way. Such contributions require a fairly deep understanding of the existing body of research and of where one's own work would differ from it.
3) Following from (2), I wonder if researchers self-select into the "innovation" (extensive margin) path or the "replicate and extend" (intensive margin) path. Is it possible for a single researcher to do both, or does it make more sense to specialize according to comparative advantage?
4) I'm guessing the innovation path is riskier, though if things pan out, the returns will be much higher. If so, to what extent do academic incentives govern whether a given researcher chooses to innovate or incrementally add to the literature? For example, in the medical research world (excluding the more basic-science-oriented work), my guess is that the tacit requirement to publish in quantity drives the marginal researcher towards the "replicate and extend" path. Another example: some argue that academic tenure is justified in that it provides security for researchers with great promise to take on riskier projects.
5) In training graduate students, is there an optimal reading/creating ratio? After all, we learn about approaching problems from how others approach problems. There is really no way around that. But where is the point at which we approach negative marginal returns?
6) What is the biological basis of the curse of knowledge? In my first year neuroscience course, one of the professors pointed out that we start out with many more synaptic connections as infants and children than we possess as adults. His take on this was that, with learning and experience, we strengthen the appropriate connections while "weeding out" the others. "This is why as kids we have all these crazy ideas, but as we grow up we have less and less of these," he said. From an evolutionary perspective, I can see how pruning out the crazy ideas makes sense. However, to the extent that evolution is a satisficing process, perhaps the pruning that goes on is a bit much. Are there ways to engage children such that they retain some of this "innovative craziness" later in life? Is there any research on this?
Tuesday, January 1, 2008
Parental Investments in Children
I watched the Patriots beat the Giants a few days ago (to go 16 - 0 for the regular season) with a twinge of annoyance and envy. The annoyance came from the fact that I really dislike the Patriots. The envy had to do with something else: I wished that I could have had the experience of playing football and setting brilliant records, as well.
For pretty much my entire childhood, my number one desire was to play football. However, my football experiences never really went beyond neighborhood pick-up games (though I was a total beast in those) and my experience with team sports was limited to rec basketball, tee-ball and soccer (all poor approximations of my dream). I recently asked my parents why they didn't enroll me in pee-wee football. My mom's answer: "I knew that Indians wouldn't make it."
My mom's reply is interesting on many levels. Mainly, it indicates that my parents were careful to allocate my time to activities with higher expected returns over my life cycle, where the probabilities used to calculate the expectations came from what they knew of the world (can you name an Indian NFL star?) and what they knew of me ("non-well-built"). (Of course, I don't necessarily think they accounted for all costs, benefits and constraints in their analysis. The strength, mental toughness, competitiveness and "team player" skills football builds would likely have benefited me in the activities and career paths I've chosen to focus on since childhood. In addition, I spent a lot of time as a kid doing "non-productive" activities; that is, I don't think time constraints were binding. As such, I would have gladly substituted some of that time for football practice, I think.)
In general, the investments parents make in their children - the "hows" and "whats" - are a rich area of research in economics. An interesting paper by Kaivan Munshi and Mark Rosenzweig illustrates how parental investments in childhood education in India vary depending on broader economic conditions and traditional caste networks:
This paper addresses the question of how traditional institutions interact with the forces of globalization to shape the economic mobility and welfare of particular groups of individuals in the new economy. We explore the role of one such traditional institution - the caste system - in shaping career choices by gender in Bombay using new survey data on school enrollment and income over the past 20 years. We find that male working class - lower caste - networks continue to channel boys into local language schools that lead to the traditional occupation, despite the fact that returns to non-traditional white collar occupations rose substantially in the 1990s, suggesting the possibility of a dynamic inefficiency. In contrast, lower caste girls, who historically had low labor market participation rates and so did not benefit from the network, are taking full advantage of the opportunities that became available in the new economy by switching rapidly to English schools.
Another interesting question pertains to how parents allocate resources differentially across children. For a given set of parents, each child born to the couple will possess slightly different traits as far as intelligence, health, physical ability, and so on. Will parents tend to direct productive investments (schooling, health care, etc.) toward children whose endowments will likely lead them to be more successful in the labor market (i.e., kids who offer higher marginal returns to various investments)? Or will they tend to invest more in the laggard children? To the extent that these decisions come down to differences in parental preferences, determining whether parents tend to reinforce or equalize child endowments is an empirical question.
Two recent and interesting working papers offer some insight into this question. The first paper, by Marcos Rangel, looks at how mixed race couples in Brazil allocate educational investments across their kids. Rangel finds that parents tend to favor children with lighter complexions, presumably reflecting their better prospects in the Brazilian labor market. The second paper, by Ashlesha Datar, Arkadipta Ghosh, and Neeraj Sood, looks at health care investments among children in India. Their findings indicate that Indian parents favor children with higher birthweights, which has been shown to correlate with better health and economic prospects later in life.
Interestingly, the Datar et al. paper also illustrates the sensitivity of the investment gradient to external socioeconomic factors: the propensity to invest relatively less in the lower birth weight child is found to be higher in areas with higher infant mortality rates. This same finding is highlighted in a well-known paper by Elaina Rose exploring gender bias in India. She finds that the ratio of female to male infant mortality is higher in rural areas during times of drought, when economic constraints are more likely to have bite.