Summary: Thinking in Bets by Annie Duke

1. Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.

2. Pete Carroll was a victim of our tendency to equate the quality of a decision with the quality of its outcome. Poker players have a word for this: “resulting.” When I started playing poker, more experienced players warned me about the dangers of resulting, cautioning me to resist the temptation to change my strategy just because a few hands didn’t turn out well in the short run.

3. Hindsight bias is the tendency, after an outcome is known, to see the outcome as having been inevitable.

4. “Our thinking can be divided into two streams, one that is fast, automatic, and largely unconscious, and another that is slow, deliberate, and judicious.”

5. The first system, “the reflexive system, seems to do its thing rapidly and automatically, with or without our conscious awareness.” The second system, “the deliberative system . . . deliberates, it considers, it chews over the facts.”

6. The differences between the systems are more than just labels. Automatic processing originates in the evolutionarily older parts of the brain, including the cerebellum, basal ganglia, and amygdala. Our deliberative mind operates out of the prefrontal cortex.

7. The prefrontal cortex doesn’t control most of the decisions we make every day. We can’t fundamentally get more out of that unique, thin layer of prefrontal cortex. “It’s already overtaxed.”

8. Our goal is to get our reflexive minds to execute on our deliberative minds’ best intentions.

9. The quality of our lives is the sum of our decision quality and our luck. In chess, luck is limited in its influence, so it’s easier to read the results as a signal of decision quality. That more tightly tethers chess players to rationality.

10. That’s chess, but life doesn’t look like that. It looks more like poker, where all that uncertainty gives us the room to deceive ourselves and misinterpret the data.

11. But getting comfortable with “I’m not sure” is a vital step to being a better decision-maker. We have to make peace with not knowing.

12. What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of “I’m not sure.”

13. If we misrepresent the world at the extremes of right and wrong, with no shades of grey in between, our ability to make good choices—choices about how we are supposed to be allocating our resources, what kind of decisions we are supposed to be making, and what kind of actions we are supposed to be taking—will suffer.

14. Daniel Kahneman and Amos Tversky’s work on loss aversion, part of prospect theory (which won Kahneman the Nobel Prize in Economics in 2002), showed that losses in general feel about two times as bad as wins feel good.
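A minimal numerical sketch (mine, not the book’s) of what that roughly two-to-one asymmetry implies, assuming a loss-aversion factor of exactly 2: a fair coin flip for equal stakes is neutral in dollars but feels like a losing bet.

```python
# Illustrative sketch of loss aversion, not from the book.
# Assumes a loss-aversion factor of 2, matching the "about two times" claim.

LOSS_AVERSION = 2.0  # losses weigh roughly twice as much as equivalent gains


def felt_value(outcome: float) -> float:
    """Subjective value of a monetary outcome under simple loss aversion."""
    return outcome if outcome >= 0 else LOSS_AVERSION * outcome


# A fair coin flip for +$100 / -$100 is neutral in expected dollars...
expected_dollars = 0.5 * 100 + 0.5 * (-100)                         # 0.0
# ...but feels like a losing proposition once losses are weighted more heavily.
expected_feeling = 0.5 * felt_value(100) + 0.5 * felt_value(-100)   # -50.0

print(expected_dollars, expected_feeling)
```

Kahneman and Tversky’s own estimate of the factor was closer to 2.25, but any factor above 1 produces the same qualitative effect: symmetric gambles feel worse than their expected value suggests.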

15. No matter how far we get from the familiarity of betting at a poker table or in a casino, our decisions are always bets.

16. In most of our decisions, we are not betting against another person. Rather, we are betting against all the future versions of ourselves that we are not choosing.

17. Psychologist Daniel Gilbert wrote, “In fact, believing is so easy, and perhaps so inevitable, that it may be more like involuntary comprehension than it is like rational assessment.”

18. Two years later, Gilbert and colleagues demonstrated through a series of experiments that our default is to believe that what we hear and read is true. Even when that information is clearly presented as being false, we are still likely to process it as true.

19. We form beliefs without vetting most of them, and maintain them even after receiving clear, corrective information.

20. Truthseeking, the desire to know the truth regardless of whether the truth aligns with the beliefs we currently hold, is not naturally supported by the way we process information. We might think of ourselves as open-minded and capable of updating our beliefs based on new information, but the research conclusively shows otherwise. Instead of altering our beliefs to fit new information, we do the opposite, altering our interpretation of that information to fit our beliefs.

21. Flaws in forming and updating beliefs have the potential to snowball. Once a belief is lodged, it becomes difficult to dislodge.

22. This irrational, circular information-processing pattern is called motivated reasoning. The way we process new information is driven by the beliefs we hold, strengthening them. Those strengthened beliefs then drive how we process further information, and so on.

23. “Wanna bet?” triggers us to engage in that third step (vetting the belief) that we only sometimes get to. Being asked if we are willing to bet money on it makes it much more likely that we will examine our information in a less biased way, be more honest with ourselves about how sure we are of our beliefs, and be more open to updating and calibrating our beliefs.

24. The more we recognize that we are betting on our beliefs (with our happiness, attention, health, money, time, or some other limited resource), the more we are likely to temper our statements, getting closer to the truth as we acknowledge the risk inherent in what we believe.

25. Admitting we are not sure is an invitation for help in refining our beliefs, and that will make our beliefs much more accurate over time as we are more likely to gather relevant information.

26. By communicating our own uncertainty when sharing beliefs with others, we are inviting the people in our lives to act like scientists with us. This advances our beliefs at a faster clip because we miss out on fewer opportunities to get new information, information that would help us to calibrate the beliefs we have.

27. How we figure out what—if anything—we should learn from an outcome becomes another bet. As outcomes come our way, figuring out whether those outcomes were caused mainly by luck or whether they were the predictable result of particular decisions we made is a bet of great consequence.

28. We have the opportunity to learn from the way the future unfolds to improve our beliefs and decisions going forward. The more evidence we get from experience, the less uncertainty we have about our beliefs and choices. Actively using outcomes to examine our beliefs and bets closes the feedback loop, reducing uncertainty. This is the heavy lifting of how we learn.

29. If making the same decision again would predictably result in the same outcome, or if changing the decision would predictably result in a different outcome, then the outcome following that decision was due to skill.

30. If, however, an outcome occurs because of things that we can’t control (like the actions of others, the weather, or our genes), the result would be due to luck.

31. Stanford law professor and social psychologist Robert MacCoun studied accounts of auto accidents and found that in 75% of accounts, the victims blamed someone else for their injuries. In multiple-vehicle accidents, 91% of drivers blamed someone else. Most remarkably, MacCoun found that in single-vehicle accidents, 37% of drivers still found a way to pin the blame on someone else.

32. Blaming the bulk of our bad outcomes on luck means we miss opportunities to examine our decisions to see where we can do better. 

33. Taking credit for the good stuff means we will often reinforce decisions that shouldn’t be reinforced and miss opportunities to see where we could have done better.

34. Human tendency: we take credit for good things and deflect blame for bad things.

35. Black-and-white thinking, uncolored by the reality of uncertainty, is a driver of both motivated reasoning and self-serving bias. If our only options are being 100% right or 100% wrong, with nothing in between, then information that potentially contradicts a belief requires a total downgrade, from right all the way to wrong.

36. Watching is an established learning method. There is an entire industry devoted to collecting other people’s outcomes.

37. When any of us makes decisions in life away from the poker table, we always have something at risk: money, time, health, happiness, etc. When it’s someone else’s decision, we don’t have to pay to learn. They do.

38. When it comes to watching the bad outcomes of other people, we load the blame on them, quickly and heavily. We see this pattern of blaming others for bad outcomes and failing to give them credit for good ones all over the place.

39. When we treat outcome fielding as a bet, it pushes us to field outcomes more objectively into the appropriate buckets because that is how bets are won. Winning feels good. Winning is a positive update to our personal narrative. Winning is a reward. With enough practice, reinforced by the reward of feeling good about ourselves, thinking of fielding outcomes as bets will become a habit of mind.

40. Once we start actively training ourselves in testing alternative hypotheses and perspective taking, it becomes clear that outcomes are rarely 100% luck or 100% skill.

41. If the ship’s navigator introduces a one-degree navigation error, it would start off as barely noticeable. Unchecked, however, the ship would veer farther and farther off course and would miss London by miles, as that one-degree miscalculation compounds mile over mile. Thinking in bets corrects your course. And even a small correction will get you more safely to your destination.
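A back-of-the-envelope calculation (my numbers, not the book’s) gives a feel for how fast that one degree compounds, assuming a straight-line course of a few thousand miles:

```python
import math

# Back-of-the-envelope sketch: lateral drift from a constant heading error
# over a straight-line course. Distances are hypothetical, not from the book.


def drift_miles(distance_miles: float, error_degrees: float = 1.0) -> float:
    """Off-course distance after travelling distance_miles with a fixed heading error."""
    return distance_miles * math.tan(math.radians(error_degrees))


print(round(drift_miles(100), 1))    # ~1.7 miles off after the first 100 miles
print(round(drift_miles(3000), 1))   # ~52.4 miles off after a 3,000-mile crossing
```

Caught after 100 miles, the error costs less than two miles of correction; left unchecked for the whole crossing, it puts the ship roughly fifty miles from the harbor.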

42. Recruiting help is key to creating faster and more robust change, strengthening and training our new truthseeking routines.

43. In fact, as long as there are three people in the group (two to disagree and one to referee), the truthseeking group can be stable and productive.

44. In combination, the advice of these experts in group interaction adds up to a pretty good blueprint for a truthseeking charter: 

  • A focus on accuracy (over confirmation), which includes rewarding truthseeking, objectivity, and open-mindedness within the group; 

  • Accountability, for which members have advance notice; and 

  • Openness to a diversity of ideas.

45. We win bets by relentlessly striving to calibrate our beliefs and predictions about the future to more accurately represent the world. In the long run, the more objective person will win against the more biased person. In that way, betting is a form of accountability to accuracy.

46. A productive decision group can harness our desire for approval by rewarding accuracy and intellectual honesty with social approval.

47. It is one thing to commit to rewarding ourselves for thinking in bets, but it is a lot easier if we get others to do the work of rewarding us.

48. I experienced firsthand the power of a group’s approval to reshape individual thinking habits. I got my fix by trying to be the best credit-giver, the best mistake-admitter, and the best finder-of-mistakes-in-good-outcomes.

49. “The only way in which a human being can make some approach to knowing the whole of a subject, is by hearing what can be said about it by persons of every variety of opinion, and studying all modes in which it can be looked at by every character of mind. No wise man ever acquired his wisdom in any mode but this; nor is it in the nature of human intellect to become wise in any other manner.”

50. We should guard against gravitating toward clones of ourselves. We should also recognize that it’s really hard: the norm is toward homogeneity; we’re all guilty of it; and we don’t even notice that we’re doing it.

51. In other words, the opinions of group members aren’t much help if it is a group of clones.

52. When presenting a decision for discussion, we should be mindful of details we might be omitting and be extra-safe by adding anything that could possibly be relevant. On the evaluation side, we must query each other to extract those details when necessary.

53. Just as we can recruit other people to be our decision buddies, we can recruit other versions of ourselves to act as our own decision buddies.

54. When we make in-the-moment decisions (and don’t ponder the past or future), we are more likely to be irrational and impulsive. This tendency we all have to favor our present-self at the expense of our future-self is called temporal discounting.

55. We are built for temporal discounting, for using the resources that are available to us now as opposed to saving them for a future version of us that we aren’t particularly in touch with in the moment of the decision.

56. We’re not perfectly rational when we ponder the past or the future and engage deliberative mind, but we are more likely to make choices consistent with our long-term goals when we can get out of the moment and engage our past- and future-selves.

57. Reconnaissance has been part of advance military planning for as long as horses have been used in battle.

58. To start, we imagine the range of potential futures. This is also known as scenario planning.

59. The idea is to consider a broad range of possibilities for how the future might unfold to help guide long-term planning and preparation.

60. After identifying as many of the possible outcomes as we can, we want to make our best guess at the probability of each of those futures occurring.

61. By at least trying to assign probabilities, we will naturally move away from the default of 0% or 100%, away from being sure it will turn out one way and not another. Anything that moves us off those extremes is going to be a more reasonable assessment than not trying at all.
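As a minimal sketch of what assigning those probabilities might look like (the futures and numbers below are hypothetical, not from the book), the only requirements are that each estimate sits between 0% and 100% and that the scenarios together cover the whole range of outcomes:

```python
# Hypothetical scenario-planning sketch; the futures and probabilities are made up.
scenarios = {
    "land the new client":   0.30,
    "land a smaller client": 0.45,
    "land no new business":  0.25,
}

# The estimates must cover all the futures we can imagine, so they should sum to 1.
assert abs(sum(scenarios.values()) - 1.0) < 1e-9

for future, p in scenarios.items():
    print(f"{p:>5.0%}  {future}")
```

Even rough numbers like these move us off the extremes: none of the futures is treated as certain, and none is treated as impossible.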

62. When it comes to advance thinking, standing at the end and looking backward is much more effective than looking forward from the beginning.

63. The most common form of working backward from our goal to map out the future is known as backcasting.

64. Backcasting makes it possible to identify when there are low-probability events that must occur to reach the goal. That could lead to developing strategies to increase the chances those events occur or to recognizing the goal is too ambitious.
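One hypothetical way to see why a single low-probability prerequisite matters (my sketch, not the book’s, and it assumes the steps are independent): chaining the required events together shows how one weak link caps the overall chance of reaching the goal.

```python
# Hypothetical backcasting sketch: probability of each event the goal depends on.
# Assumes the steps are independent, which real plans rarely are.
required_steps = {
    "prototype ships on time": 0.80,
    "key partner signs on":    0.25,   # the weak link
    "funding round closes":    0.70,
}

overall = 1.0
for step, p in required_steps.items():
    overall *= p

print(f"chance of reaching the goal: {overall:.0%}")  # ~14%
# With a 25% step in the chain, either work on raising that 25% or rethink the goal.
```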

65. Working backward helps even more when we give ourselves the freedom to imagine an unfavorable future.

66. A premortem is an investigation into something awful, but before it happens.

67. Backcasting and premortems complement each other. Backcasting imagines a positive future; a premortem imagines a negative one.

68. Despite the popular wisdom that we achieve success through positive visualization, it turns out that incorporating negative visualization makes us more likely to achieve our goals.

69. We start a premortem by imagining why we failed to reach our goal: our company hasn’t increased its market share; we didn’t lose weight; the jury verdict came back for the other side; we didn’t hit our sales target.