Tag Archives: Math

Choice Architecture: Even in “Heads or Tails,” It Matters What’s Presented First

If you’re familiar with behavioural economics, then the results of this study will be right up your alley.

The researchers set out to determine whether there was a “first-toss Heads bias”: that is, when flipping a coin and the choices are presented as “Heads or Tails,” a bias towards people guessing “Heads” (because it was presented first). Through running their tests, they found something else that surprised them [emphasis added]:

Because of stable linguistic conventions, we expected Heads to be a more popular first toss than Tails regardless of superficial task particulars, which are transient and probably not even long retained. We were wrong: Those very particulars carried the day. Once the response format or verbal instructions put Tails before Heads, a first-toss Tails bias ensued.

Even in something as simple as flipping a coin, where the script “Heads or Tails” is firmly ingrained in our heads, researchers discovered that simply switching the order of the choices changed how often people chose one option or the other. That’s rather incredible, and it may have implications for everything from policy to polling. However:

There is, of course, no reason to expect that, in normal binary choices, biases would be as large as those we found. In choosing whether to start a sequence of coin tosses with Heads or Tails, people ostensibly attach no importance to the choice and therefore supposedly do not monitor or control it. Since System 1 mental processes (that are intuitive and automatic) bring Heads to mind before Tails, and since there is no reason for System 2 processes (which are deliberative and thoughtful; see, e.g., Kahneman & Frederick, 2002) to interfere with whatever first comes to mind, many respondents start their mental sequence with Heads. However, in real-life questions people often have preferences, even strong ones, for one answer over another; the stronger the preference, the weaker the bias. A direct generalization from Miller and Krosnick (1998) suggests that in choices such as making a first-toss prediction, where there would seem to be no good intrinsic reason to guide the choice, order biases are likely to be more marked than in voting. At the magnitude of bias we found, marked indeed it was. Miller and Krosnick noted with respect to their much smaller bias that “the magnitude of name-order effects observed here suggests that they have probably done little to undermine the democratic process in contemporary America” (pp. 291–292). However, in some contexts, even small biases can sometimes matter, and in less important contexts, sheer bias magnitude may endow it with importance.

OK, so maybe these results don’t add too much to “government nudges,” but, at a minimum, they can give you a slight advantage (over the long haul) when deciding things by flipping coins with your friends. How?

Well, assuming that you are the one doing the flipping, you can ask your friend “Tails or Heads?” (or “Heads or Tails?”), and then start the coin with the side opposite your friend’s guess facing up. A few years ago, Stanford math professor Persi Diaconis showed that the side facing up before the flip is slightly more likely to be the side that lands facing up.
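To see how small (but real) that edge is, here’s a quick simulation. It’s only a sketch: it assumes the roughly 51% same-side bias Diaconis and colleagues estimated for a caught coin, and it assumes your friend’s guess tells you nothing else.

```python
import random

SAME_SIDE_PROB = 0.51  # assumed same-side bias for a caught coin (Diaconis et al.)

def play_round(rng):
    """One game: friend calls a side, you start the coin opposite-side up."""
    friend_call = rng.choice(["H", "T"])
    start_up = "T" if friend_call == "H" else "H"  # opposite of the call faces up
    # The coin lands with its starting side up ~51% of the time
    result = start_up if rng.random() < SAME_SIDE_PROB else friend_call
    return result != friend_call  # you "win" when your friend's call is wrong

rng = random.Random(42)
wins = sum(play_round(rng) for _ in range(100_000))
print(wins / 100_000)  # hovers around 0.51 over the long haul
```

Not a get-rich scheme, obviously: a 51/49 edge only shows up over a lot of flips.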

Bar-Hillel, M., Peer, E., & Acquisti, A. (2014). “Heads or tails?”: A reachability bias in binary choice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(6), 1656–1663. PMID: 24773285

Do Percentages Matter in a One-Time Decision?

I write a lot about decision-making. It’s clearly something that interests me. As a result, I often find myself thinking about how to make better decisions or how to help people make better decisions. That’s why I’m already up to Part 10 of my series on decision-making (and I’ve got at least 4 more to go). I’m not including today’s post as part of that series, but it serves as an interesting addendum. Meaning, it should at least give you something to think about. So, here we go!

As I said, I often find myself thinking about how to optimize decisions. Oftentimes, when people are trying to make a decision about something in the future, there may be percentages attached to the success of each option. For example, if you’re the elected leader of a country, you might have to decide whether to launch a mission to rescue citizens who are being held hostage. When you’re speaking with your military and security advisors, they may tell you the likelihood of success of the different options you have on the table.

I was going to end the example there and move into my idea, but I think it might make it easier to understand, if I really go into detail on the example.

So, you’re the President of the United States and you’ve got citizens who are being held hostage in Mexico (but not by the government of Mexico). The Chairman of the Joint Chiefs of Staff presents a plan of action for rescuing the citizens. After hearing the plan, you ask the Chairman what the chance of success is, and he tells you 60%. The other option you have is to continue to pursue a diplomatic solution in tandem with the Mexican government. As the President, what do you do?

So, my wondering is whether that 60% number really matters that much. In fact, I would argue that the only “numbers” that would be useful in this situation are 100%, 0%, and whether the number is greater than or less than 50 (to keep this a list of three numbers, we could call that last one ‘x’). This sounds silly, right? A mission that has an 80% chance of success would make you more inclined to choose that mission, right? The problem is that 20% of the time, that mission is still going to fail. And my point is that since this is a one-time decision (meaning, it’s astronomically unlikely that the identical situation would occur again), there won’t be iterations such that 80% of the time, the decision to carry out that mission will be successful.

I suppose the argument against this idea is that a mission with only a 51% chance of success has a 49% chance of failure, and one would presume that there are more factors that could lead to failure at those percentages (or at least a higher chance of those failures coming to fruition).

I realize that this idea is off-the-wall, but I’d be interested to read an article in a math journal that explains why this is wrong (using reasoning beyond what I’ve explained here) or… why it’s right!
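For what it’s worth, here’s a sketch of the standard counterargument (a toy simulation, not a proof): although no single decision ever repeats, a decision-maker faces many *different* one-time decisions over a career, and the probabilities still pay off across that whole collection, even though each member of it happens exactly once. The uniform distribution of probabilities below is my own illustrative assumption.

```python
import random

rng = random.Random(7)

# 100,000 distinct one-time decisions, each with its own success probability
outcomes = []
for _ in range(100_000):
    p = rng.random()       # this scenario's advertised chance of success
    ok = rng.random() < p  # the scenario is resolved exactly once, never repeated
    outcomes.append((p, ok))

# Realized success rates for "60%-ish" vs "80%-ish" missions
bucket_60 = [ok for p, ok in outcomes if 0.55 <= p < 0.65]
bucket_80 = [ok for p, ok in outcomes if 0.75 <= p < 0.85]

rate_60 = sum(bucket_60) / len(bucket_60)
rate_80 = sum(bucket_80) / len(bucket_80)
print(rate_60, rate_80)  # lands close to 0.60 and 0.80 respectively
```

Every scenario here is unique and resolved once, yet the 80%-ish missions still succeed noticeably more often than the 60%-ish ones. That’s the frequentist case for caring about the number even in one-shot choices: the iterations happen across your lifetime of decisions rather than within any single one.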

Perspective and the Framing Effect: List of Biases in Judgment and Decision-Making, Part 5

Since I was going to talk about the framing effect last week (and opted for the planning fallacy instead because of circumstances), I thought I’d get into the framing effect this week. The framing effect is a very easy bias to understand, in that it’s not as complicated in its description as some of the other biases are. In short, the framing effect is how people can react differently to choices depending on whether the circumstances are presented as gains or losses.

The famous example of the framing effect comes from a 1981 paper by Kahneman (whom I’ve mentioned before) and Tversky:

Problem 1: Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs are as follows: If Program A is adopted, 200 people will be saved. [72 percent]

If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. [28 percent]

As you can see from the percentages in brackets, people opted for the sure thing. Now, let’s look at the second part of this study:

If Program C is adopted 400 people will die. [22 percent]

If Program D is adopted there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. [78 percent]

Did you notice something? Program C is identical to Program A, and yet the percentage of people who were opting for Program C dropped tremendously! Similarly, notice that Program D’s percentage went way up — even though it’s the same thing as Program B. This is the framing effect in action. Is it frightening to you that we’re so susceptible to changing our mind based simply on how a choice is framed? If it’s not, it certainly should be.
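One way to see why this is pure framing is to check the expected outcomes, which are identical across all four programs. A quick back-of-the-envelope calculation, using the numbers from the quoted problem:

```python
TOTAL = 600  # people expected to die if nothing is done

# Expected number of people saved under each program
program_a = 200          # "200 people will be saved"
program_b = 600 / 3      # 1/3 chance all 600 saved, 2/3 chance none saved
program_c = TOTAL - 400  # "400 people will die" leaves 200 saved
program_d = 600 / 3      # 1/3 chance nobody dies (600 saved), 2/3 chance all die

print(program_a, program_b, program_c, program_d)  # 200 200.0 200 200.0
```

All four options come out to 200 expected survivors; the only thing that changes between the two versions of the problem is whether the identical outcome is described as lives saved or lives lost.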

Ways for Avoiding the Framing Effect

1) Reframe the question

It may seem obvious, but you’d be surprised how many people don’t consider “reframing” the way they’re looking at a situation. For instance, in the example from earlier, instead of looking at it as a choice between Program A and Program B, someone could reframe Program A so that it looks like Program C and do the same with Program B, so that it looks like Program D. As a result, one would then be getting a “fuller” picture of the choice.

2) Empathy — assume someone else’s perspective

Many choices involve another person in some way. As a result, it might be worth it to put yourself in the shoes of that other person to see how they would view the situation. This is similar to the reframe, but is more specific in that it might serve to help the person remove themselves a little bit from the decision. That is, when we’re faced with a choice, our personal biases can have a big impact on the decision we make. When we imagine how someone else might make this decision, we’re less likely to succumb to our personal biases.

3) Parse the question

Some questions present us with a dichotomous choice: are apples good or bad? Should we exercise in the morning or the evening? Are gap years helpful or harmful? When faced with a question like this, I would highly recommend parsing the question. That is, are we sure that apples can only be good or bad? Are we sure that exercising in the morning or the evening are our only options? Oftentimes, answers to questions aren’t simply this or that. In fact, more often than not, there is a great deal of grey area. Unfortunately, when a question is framed in such a way, it makes it very difficult to see the possibility of the grey area.

If you liked this post, you might like one of the other posts in this series.

Every Game Counts The Same: Does It Really?

In most sports, there is a “regular” season and a “post” season. That is, the teams play against each other for a set number of games to jockey for position in the playoffs. As I write this, I’m thinking about baseball in particular, as it is getting very near to the end of its season. As the season comes to a close, many teams are either jockeying for position in the playoffs or struggling to remain one of the teams that will get to play in the playoffs.

I was having a conversation with someone the other day about the relative importance of each game, i.e., “every game counts.” Some people like to say that games at the end of the season “count more” than games at the beginning of the season. They’ll tell you quite a fancy story about how and why the games at the end mean more to a team than the games at the beginning. And I want to believe them. I want to believe that there’s a formula that accounts for “time” in the relative importance of games. To my knowledge, there isn’t, and a game won at the beginning of the season is equal to a game won at the end of the season.

Looking at it mathematically: there are 162 games in a season. So, every game is worth 1/162nd of a team’s record. If a team wins a game on May 6th, that game is worth 1/162nd of that team’s record. If a team loses on June 12th, that game is still worth 1/162nd of that team’s record. And if a team wins the last game of the season (!), that game is still worth 1/162nd of that team’s record.
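The underlying math is just commutativity: a season record is a sum of equally weighted wins and losses, so reordering the games can’t change the final winning percentage. A trivial sketch (the 55% win rate is an arbitrary number I picked for illustration):

```python
import random

rng = random.Random(1)

# A 162-game season: True = win, one entry per game, in calendar order
season = [rng.random() < 0.55 for _ in range(162)]
pct_in_order = sum(season) / 162

# The very same games "played" in a scrambled order
shuffled = season[:]
rng.shuffle(shuffled)
pct_shuffled = sum(shuffled) / 162

print(pct_in_order == pct_shuffled)  # True: each game is worth exactly 1/162
```

Move any win from September to April and the final record is untouched, which is all “every game counts the same” really claims.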

I think a lot of people get confused or misled into believing that games at the end of the season mean more because of cultural bias. It is often written and spoken of that games at the end of the season mean more than games at the beginning of the season. As a result, people begin to believe this and say it themselves. At the end of the day (literally), the last game of the season has the same weight on a team’s record as a game at the beginning of the season.

Note 1: this line of thinking doesn’t apply to those sports that use a more sophisticated way of measuring the success of their teams. For instance, some sports, like soccer, often use “goal differential” as a way of distinguishing the relative placement of their teams.

Note 2: for sports that have relatively “short” seasons, like the NFL, one could argue that a game later in the season is worth more because of the various tiebreakers that are applied to winning percentage, etc., but the sentiment of every game counting the same still holds.