Choice Architecture: Even in “Heads or Tails,” It Matters What’s Presented First

If you’re familiar with behavioural economics, then the results of this study will be right up your alley.

The researchers set out to determine whether there was a “first-toss Heads bias” — that is, whether, when flipping a coin with the choices presented as “Heads or Tails,” people would be biased toward guessing “Heads” (because it was presented first). In running their tests, they found something else that surprised them [Emphasis Added]:

Because of stable linguistic conventions, we expected Heads to be a more popular first toss than Tails regardless of superficial task particulars, which are transient and probably not even long retained. We were wrong: Those very particulars carried the day. Once the response format or verbal instructions put Tails before Heads, a first-toss Tails bias ensued.

Even in something as simple as flipping a coin, where the script “Heads or Tails” is firmly ingrained in our heads, researchers discovered that simply switching the order of the choices changed the frequency with which people chose one option or the other. That’s rather incredible and possibly has implications for everything from policy to polling. However:

There is, of course, no reason to expect that, in normal binary choices, biases would be as large as those we found. In choosing whether to start a sequence of coin tosses with Heads or Tails, people ostensibly attach no importance to the choice and therefore supposedly do not monitor or control it. Since System 1 mental processes (that are intuitive and automatic) bring Heads to mind before Tails, and since there is no reason for System 2 processes (which are deliberative and thoughtful; see, e.g., Kahneman & Frederick, 2002) to interfere with whatever first comes to mind, many respondents start their mental sequence with Heads. However, in real-life questions people often have preferences, even strong ones, for one answer over another; the stronger the preference, the weaker the bias. A direct generalization from Miller and Krosnick (1998) suggests that in choices such as making a first-toss prediction, where there would seem to be no good intrinsic reason to guide the choice, order biases are likely to be more marked than in voting. At the magnitude of bias we found, marked indeed it was. Miller and Krosnick noted with respect to their much smaller bias that “the magnitude of name-order effects observed here suggests that they have probably done little to undermine the democratic process in contemporary America” (pp. 291–292). However, in some contexts, even small biases can sometimes matter, and in less important contexts, sheer bias magnitude may endow it with importance.

OK, so maybe these results don’t add too much to “government nudges,” but it can — at a minimum — give you a slight advantage (over the long haul) when deciding things by flipping coins with your friends. How?

Well, assuming that you are the one doing the flipping, you can say to your friend: “Tails or Heads?” (or “Heads or Tails?”) and then be sure to start the flip with the opposite side of your friend’s call facing up. A few years ago, Stanford math professor Persi Diaconis showed that the side facing up before the flip is slightly more likely to be the side that lands facing up.
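To see how that tiny edge plays out over many flips, here’s a quick simulation sketch in Python. The 51% same-side probability is roughly what Diaconis and colleagues estimated; treat the exact figure, and the whole setup, as an illustrative assumption:

```python
import random

def flip(start_side, p_same=0.51):
    """One flip: the side that starts face-up lands face-up with probability
    p_same (roughly the bias Diaconis and colleagues estimated)."""
    other = "T" if start_side == "H" else "H"
    return start_side if random.random() < p_same else other

def win_rate(n_games=100_000):
    wins = 0
    for _ in range(n_games):
        call = random.choice(["H", "T"])     # your friend calls a side
        start = "T" if call == "H" else "H"  # you start the coin opposite side up
        if flip(start) != call:              # you win whenever the call misses
            wins += 1
    return wins / n_games

print(f"Long-run win rate: {win_rate():.3f}")  # hovers around 0.51, not 0.50
```

A one-percentage-point edge won’t win you any single toss, but over enough flips it quietly tilts the ledger your way.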

Bar-Hillel, M., Peer, E., & Acquisti, A. (2014). “Heads or tails?”: A reachability bias in binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), 1656–1663. PMID: 24773285

What’s the Status Quo From the Other Side: List of Biases in Judgment and Decision-Making, Part 14

It’s Monday, so you know what that means — cognitive bias! When I write that, I sort of imagine a live television audience shouting in chorus: “cognitive bias!” Wouldn’t that be fun? Well, maybe it wouldn’t, but it’s kind of funny to think about. I’ve only got a couple more biases that I’d like to mention, so let’s get right to today’s — the status quo bias.

The status quo bias, like many of the previous biases we’ve talked about, is exactly what it sounds like: a preference for how things currently are. You may even look at this bias as some people’s inability to accept change or a fear of change, but that probably wouldn’t be completely accurate. Let’s go back to one of the journal articles we looked at with previous biases — Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias:

A large-scale experiment on status quo bias is now being conducted (inadvertently) by the states of New Jersey and Pennsylvania. Both states now offer a choice between two types of automobile insurance: a cheaper policy that restricts the right to sue, and a more expensive one that maintains the unrestricted right. Motorists in New Jersey are offered the cheaper policy as the default option, with an opportunity to acquire an unrestricted right to sue at a higher price. Since this option was made available in 1988, 83 percent of the drivers have elected the default option. In Pennsylvania’s 1990 law, however, the default option is the expensive policy, with an opportunity to opt for the cheaper kind. The potential effect of this legislative framing manipulation was studied by Hershey, Johnson, Meszaros, and Robinson (1990). They asked two groups to choose between alternative policies. One group was presented with the New Jersey plan while the other was presented with the Pennsylvania plan. Of those subjects offered the New Jersey plan, only 23 percent elected to buy the right to sue whereas 53 percent of the subjects offered the Pennsylvania plan retained that right. On the basis of this research, the authors predict that more Pennsylvanians will elect the right to sue than New Jerseyans. Time will tell.

Another example:

One final example of a presumed status quo bias comes courtesy of the Journal of Economic Perspectives staff. Among Carl Shapiro’s comments on this column was this gem: “You may be interested to know that when the AEA was considering letting members elect to drop one of the three Association journals and get a credit, prominent economists involved in that decision clearly took the view that fewer members would choose to drop a journal if the default was presented as all three journals (rather than the default being 2 journals with an extra charge for getting all three). We’re talking economists here.”

You can see how important this bias can be in the decisions you make. Should I sell my house (when the market’s hot) or should I hold onto it? You might be more liable to hold onto your house, even though there are economic gains to be had by selling it and, in fact, economic losses in keeping it!

As we’ve mentioned with some of the other biases, this bias can operate in tandem with other biases. For instance, think about the scenario I just mentioned and how that might also be similar to the endowment effect or loss aversion.

Ways for Avoiding the Status Quo Bias

1) Independent Evaluation

It really can be as easy as this: do a cost-benefit analysis of the situation/decision, or have someone do it for you. In this way, you’ll be able to see the pros/cons of your decision in a new light. Of course, you may still succumb to the status quo bias, but you might be less likely to do so.

2) Role Reversal

While the independent evaluation makes “good sense” in trying to avoid this bias, doing some sort of role reversal will probably be the most effective. That is, look at the decision/situation from the other perspective. If it’s a negotiation, imagine that you’re in your negotiating partner’s shoes and you’re actually doing the trade from that side. Evaluate the deal. This may help to shake loose the status quo bias.

If you liked this post, you might like one of the other posts in this series.

Perspective and the Framing Effect: List of Biases in Judgment and Decision-Making, Part 5

Since I was going to talk about the framing effect last week (and opted for the planning fallacy instead because of circumstances), I thought I’d get into the framing effect this week. The framing effect is a very easy bias to understand, in that its description isn’t as complicated as some of the other biases’. In short, the framing effect is the way people react differently to the same choice depending on whether it is presented in terms of gains or losses.

The famous example of the framing effect comes from a 1981 paper by Kahneman (whom I’ve mentioned before) and Tversky:

Problem 1: Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. [72 percent]

If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. [28 percent]

As you can see from the percentages in brackets, people opted for the sure thing. Now, let’s look at the second part of this study:

If Program C is adopted 400 people will die. [22 percent]

If Program D is adopted there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. [78 percent]

Did you notice something? Program C is identical to Program A, and yet the percentage of people opting for Program C dropped tremendously! Similarly, notice that Program D’s percentage went way up — even though it’s the same thing as Program B. This is the framing effect in action. Is it frightening to you that we’re so susceptible to changing our minds based simply on how a choice is framed? If it’s not, it certainly should be.
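One way to see that the two framings describe identical outcomes is to do the expected-value arithmetic yourself. Here’s a minimal sketch in Python (the program labels follow the excerpt above):

```python
# Expected number of deaths (out of 600) under each program.
total = 600

prog_a = total - 200      # 200 saved for sure -> 400 die
prog_b = (2 / 3) * total  # 1/3 chance all 600 saved, 2/3 chance none saved
prog_c = 400              # 400 die for sure
prog_d = (2 / 3) * total  # 1/3 chance nobody dies, 2/3 chance all 600 die

print(prog_a, prog_c)  # 400 400     -> A and C describe the same outcome
print(prog_b, prog_d)  # 400.0 400.0 -> B and D describe the same gamble
```

The only thing that changes between the two problems is whether those identical numbers are worded as lives saved or lives lost.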

Ways for Avoiding the Framing Effect

1) Reframe the question

It may seem obvious, but you’d be surprised how many people don’t consider “reframing” the frame with which they’re looking at a situation. For instance, in the example from earlier, instead of looking at it as a choice between Program A and Program B, you could reframe Program A so that it looks like Program C and do the same with Program B, so that it looks like Program D. As a result, you’d be getting a “fuller” picture of the choice.

2) Empathy — assume someone else’s perspective

Many choices involve other people. As a result, it might be worth putting yourself in the shoes of another person to see how they would view a given situation. This is similar to the reframe, but is more specific in that it might help you remove yourself a little bit from the decision. That is, when we’re faced with a choice, our personal biases can have a big impact on the decision we make. When we imagine how someone else might make the decision, we’re less likely to succumb to those personal biases.

3) Parse the question

Some questions present us with a dichotomous choice: are apples good or bad? Should we exercise in the morning or the evening? Are gap years helpful or harmful? When faced with a question like this, I would highly recommend parsing the question. That is, are we sure that apples can only be good or bad? Are we sure that exercising in the morning or the evening are our only options? Oftentimes, answers to questions aren’t simply this or that. In fact, more often than not, there is a great deal of grey area. Unfortunately, when a question is framed in such a way, it becomes very difficult to see the possibility of that grey area.

If you liked this post, you might like one of the other posts in this series.

Get a Second Opinion Before You Succumb to the Planning Fallacy: List of Biases in Judgment and Decision-Making, Part 4

I know that I said that I was going to be talking about a new bias in judgment and decision-making every Monday and I know that today is Tuesday. To be honest — I underestimated how long it would take me to prepare for my seminar in International Relations. Aside: if you want to challenge yourself, take a course in a subject you know very little about and be amazed at how much you feel like you’ve been dropped into the ocean and told to swim! It can be a little unnerving at first, but if you’re into exploring and open to new experiences, it can be quite satisfying. Anyway, yesterday I’d planned to talk about the framing effect, but since I so conveniently demonstrated the planning fallacy, I thought I’d talk about that instead.

This post being written/published a day late is a direct result of my falling into the trap of the planning fallacy. I planned for the preparation for my International Relations class to take a certain amount of time. When that preparation took longer than I had anticipated, I had no time left to write about a bias in judgment and decision-making. The planning fallacy is our tendency to underestimate how long we’ll need to complete a task — even when we’ve underestimated similar tasks in the past.

This is something that even the best of us fall prey to. In fact, one of the biggest names in cognitive biases, Daniel Kahneman (Nobel Prize in economics, but a PhD in psychology!), has said that even he still has a hard time with the planning fallacy. Of course, that doesn’t excuse us from trying to counteract its effects.

Before we get into ways for avoiding the planning fallacy, I want to share an excerpt from an oft-cited study when discussing the planning fallacy [emphasis added]:

Participants were provided with a series of specific confidence levels and were asked to indicate the completion time corresponding to each confidence level. In this manner, the participants indicated times by which they were 50% certain they would finish their projects (and 50% certain they would not), 75% certain they would finish, and 99% certain they would finish. When we examined the proportion of subjects who finished by each of these forecasted times, we found evidence of overconfidence. Consider the academic projects: only 12.8% of the subjects finished their academic projects by the time they reported as their 50% probability level, only 19.2% finished by the time of their 75% probability level, and only 44.7% finished by the time of their 99% probability level. The results for the 99% probability level are especially striking: even when they make a highly conservative forecast, a prediction that they feel virtually certain that they will fulfill, people’s confidence far exceeds their accomplishments.

There were a lot of numbers/percentages offered in the excerpt, so I’ve also included a visual representation of the data in the graph below. The graph comes from a book chapter by a couple of the same authors and depicts the data from the preceding excerpt.

[Graph: proportion of participants finishing their projects by their 50%, 75%, and 99% confidence-level forecasts]
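If you want to run the same calibration check on your own projects, the arithmetic is simple: record the completion times you forecast at each confidence level, then count how often you actually finished by them. Here’s a minimal sketch in Python with entirely made-up numbers:

```python
# Each record: (50% forecast, 75% forecast, 99% forecast, actual days to finish).
# All numbers are hypothetical, for illustration only.
projects = [
    (10, 14, 21, 25),
    (5, 7, 12, 9),
    (20, 25, 40, 44),
    (8, 10, 15, 18),
]

for label, idx in (("50%", 0), ("75%", 1), ("99%", 2)):
    finished = sum(actual <= forecasts[idx] for *forecasts, actual in projects)
    # If you were well calibrated, roughly 50%/75%/99% of your projects
    # would finish by the corresponding forecast.
    print(f"Finished by the {label} forecast: {finished}/{len(projects)}")
```

Chances are your actual finish rates fall well short of the stated confidence levels, just as the study participants’ did.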

Ways for Avoiding the Planning Fallacy

With the first three biases I talked about, awareness was a key step in overcoming the bias. While you could make that argument for the planning fallacy, too, one of its hallmarks is that people know they’ve erred in the past and still make the mistake of underestimating. So, we’ll need to move beyond awareness to defend against this bias.

1) Data is your friend

No, I don’t mean Data from Star Trek, but now that I think about it, Data (the character) might be a good way to frame this way of avoiding the planning fallacy. For those of you not familiar, Data is a human-like android. Think about how Data would estimate the time needed to complete a project: he would be precise and data-driven, likely looking at past projects and how long they took to finish. To put it more broadly, if you have statistics on similar past projects, absolutely use them in estimating the completion time of the new project.
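Here’s a minimal sketch of that data-driven estimate in Python, assuming you’ve logged how long similar past projects actually took (the durations below are hypothetical):

```python
import statistics

# Hypothetical durations (in days) of similar past projects.
past_durations = [12, 15, 9, 20, 14, 18, 11]

# The "outside view": base the new estimate on how past projects
# actually went, not on a step-by-step plan for the new one.
typical = statistics.median(past_durations)
buffered = statistics.quantiles(past_durations, n=10)[8]  # ~90th percentile

print(f"Typical duration: {typical} days")
print(f"Conservative (90th percentile) estimate: {buffered:.1f} days")
```

The point isn’t these particular statistics; it’s that the estimate comes from how similar projects actually went rather than from how smoothly you imagine this one going.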

2) Get a second opinion

When we think about the completion time of one project in relation to another, we often think about the nuances that make this project different from that one — and, by extension, why this project won’t take as long. Planning fallacy. If you can, ask someone who has experience completing projects in the area you’re estimating. When you ask this person, be sure not to tell them all the “various ways this project is different,” because it probably isn’t, and doing so will only cloud their predictive ability. You’ll probably hear an estimate that’s larger than you expected, but I bet it’s a lot closer to the real completion time than the estimate you made by thinking about all the ways this project was going to be different from every other project like it.

If you liked this post, you might like the first three posts in this series.

Loss Aversion and the Big Picture: List of Biases in Judgment and Decision-Making, Part 2

I think I’m going to make a habit of posting something new to my series on biases in judgment and decision-making every Monday. Last Monday, we looked at sunk costs. Today, we’re going to look at loss aversion.

As much as I can, I’m trying to write about the different biases by themselves. Sunk costs are closely associated with loss aversion, so I could have included them in the first post. Similarly, the endowment effect is closely associated with loss aversion, so I could have written about it here. But learning about the biases one at a time may make it easier to focus on each bias for that week. So, without further ado: loss aversion.

Loss aversion is the idea that we’d rather avoid losses than reap rewards. Put more simply: we’d rather not lose something than acquire something. Like we did with the sunk cost fallacy, let’s look at some examples of loss aversion to get a better understanding of this bias. The implication of loss aversion is that someone who loses $100 (or $1000) will lose more satisfaction than someone who gains $100 (or $1000) will gain satisfaction. If we imagine a continuum where both of the people in the above example start at 0, the person who loses money will end up farther from 0 (in dissatisfaction) than the person who gains money will (in satisfaction). This is a rather basic example, so let’s look at something a little juicier: golf.
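Before we get there, it’s worth noting how Kahneman and Tversky modeled this asymmetry: prospect theory’s value function is steeper for losses than for gains. Here’s a small sketch using their commonly cited parameter estimates (a loss-aversion coefficient of about 2.25 and a curvature of about 0.88; treat the exact numbers as illustrative assumptions):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory-style value function: gains are diminished by the
    curvature alpha; losses are further amplified by the coefficient lam."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(value(100))   # ~57.5   (the "pleasure" of gaining $100)
print(value(-100))  # ~-129.5 (the "pain" of losing $100, over twice as large)
```

In other words, a loss of $100 weighs on us roughly twice as much as a gain of $100 lifts us.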

In golf, the difference between winning and losing is sometimes one stroke (or one putt). A short excerpt from Kahneman’s book, Thinking, Fast and Slow:

Every stroke counts in golf, and in professional golf every stroke counts a lot. Failing to make par is a loss, but missing a birdie putt is a foregone gain, not a loss. Pope and Schweitzer reasoned from loss aversion that players would try a little harder when putting for par (to avoid a bogey) than when putting for a birdie. They analyzed more than 2.5 million putts in exquisite detail to test that prediction.

They were right. Whether the putt was easy or hard, at every distance from the hole, the players were more successful when putting for par than for a birdie. The difference in their rate of success when going for par (to avoid a bogey) or for a birdie was 3.6%. This difference is not trivial. Tiger Woods was one of the “participants” in their study. If in his best years Tiger Woods had managed to putt as well for birdies as he did for par, his average tournament score would have improved by one stroke and his earnings by almost $1 million per season. [Emphasis added]

That’s an incredible statistic. The only difference between putting for par and putting for birdie is that missing par registers as a “loss,” and professional golfers are 3.6% more successful when putting for par? Wow! As the excerpt said, that accounted for almost $1 million per season for Tiger Woods in his best years.

Ways for Avoiding Loss Aversion

As with the sunk cost fallacy, one of the most important ways to avoid loss aversion is to recognize it. That is, knowing that humans have a tendency toward loss aversion is an important first step in not falling into its trap.

1) What’s the big picture?

In our example of golf, that might mean knowing where you are in relation to the other players you’re competing with in the tournament (rather than where your ball is in relation to the hole and what specific stroke you’re about to hit). In business, one might examine a decision about one business unit in relation to the entire company (rather than looking myopically at the one business unit).

2) Am I afraid of losing something?

This may seem like an obvious solution, but it’s pretty important. If, before making a decision, you ask yourself (or have your team ask itself), “am I afraid to lose something here?”, you might find that you are, and that realization could help you or your company avoid falling into the trap of loss aversion.

3) Do you really expect to never lose anything — ever?

Loss is inevitable. Sometimes, you won’t make that par putt (or that birdie putt). Sometimes, when you negotiate a deal, you won’t get the best deal. Sometimes, the decision to sell that business unit might result in losses somewhere else. If you can come to grips with the fact that every decision you make won’t be perfect and that sometimes you will lose, you may begin to shift your expectations about loss.