1

I have been reading Daniel Kahneman’s book Thinking, Fast and Slow. Kahneman is a psychologist who won the Nobel Memorial Prize in Economics in 2002 for his work on judgement and decision-making. With Amos Tversky, to whom the book is dedicated, he developed prospect theory as an improvement on expected utility theory’s account of decision-making in economics. They had earlier proposed the heuristics and biases model of intuitive judgement. In his later work Kahneman has developed the idea that measuring well-being is problematic because there are two selves in play, an experiencing self and a remembering self, and they don’t agree, raising some interesting questions about the pursuit of happiness.

Two academic papers are included as appendices, but the book itself is written for non-specialists. Its target is water-cooler conversation and gossip. We are mostly good intuitive grammarians, but we are not good intuitive statisticians or logicians. In order to recognise our mistakes we need, Kahneman suggests, on the analogy with medicine, a set of precise diagnostic labels, where the labels bind together illness and symptoms, possible antecedents and causes, possible developments and consequences, possible interventions and cures. With these to hand, we can improve our recognition of errors and possibly devise counter-measures.

Kahneman proposes a two-systems model of the mind. System 1, or fast thinking, is the intuitive mind. It operates quickly and automatically and without effort, generating impressions, feelings and intentions. However, it is also impulsive and impatient. System 2, or slow thinking, is who we think we are. System 2 is the introspective mind, the mind which consciously reasons. Introspection requires attention and effort, but system 2 is lazy, possesses limited knowledge and makes mistakes. Most of the time it is content to endorse the impressions, feelings and intentions generated by system 1.

System 1 works by association of ideas. Its function is to maintain our model of the world and what is normal in it. It seeks the familiar. It represents sets of things by prototypes and judges probability by representativeness, by closeness to type, rather than by the size of the populations. It looks for the story, for patterns of causality and intention, even when they are not present. It is reluctant to look for evidence that would spoil the narrative, neglecting ambiguity and suppressing doubt. Kahneman frequently uses the phrase ‘what you see is all there is’.

System 1 is therefore a machine for generating illusions: the illusion of understanding, because it is easier to construct a coherent story when there is little information; the illusion of validity, because it is a machine for jumping to conclusions and is not designed to notice the size of the jumps; and the illusion of control, because in our intuitive sense-making machinery hindsight leads us to underestimate our surprise at the turn of events and to neglect the role of fortune and probability. System 1 creates a model of the world that is simpler and more predictable than the world actually is. This serves to promote resilience and persistence in the face of obstacles, but it also leads to overconfidence. System 1 can register cognitive ease, but it has no warning system to signal that it is reaching its limits and becoming unreliable, and there is no easy way to distinguish an intuitively skilled response from a rough-and-ready heuristic response.

Although the heuristics and biases approach focuses on the weaknesses, it should be remembered that intuition has access to a vast repertoire of skills and that system 1 gets more right than it gets wrong.

2

System 1 generates basic assessments, substituting an easier question when it can’t find an answer to the original question. Substitution is the essential idea behind the heuristics and biases approach to intuitive judgements.

Heuristics and biases is the name Kahneman and Tversky gave to their model of intuitive thinking. A heuristic is a simple procedure which helps find an adequate though often imperfect answer. In their original paper, Kahneman and Tversky identified three heuristics: judging by representativeness, judging by the availability of information, and the technique of anchoring and adjustment. They identified 20 biases, or systematic errors, which these heuristics introduce into our thinking. At the time, the prevailing view was that human beings are basically rational and that emotion explains most occasions when people depart from rationality. The heuristics and biases approach suggests that errors are instead the product of simplifying heuristics that introduce systematic biases.

It matters, I think, that they are systematic. If biases aren’t correlated, presumably a wisdom-of-crowds type of correction would operate, where errors one way are balanced by errors the other way. Over a sufficiently large sample the errors will cancel each other out. However, this won’t happen if they are systematic and therefore correlated.
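A quick simulation illustrates the point (a sketch of my own; the normal error model and the numbers are illustrative assumptions, not Kahneman’s):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0
n_judges = 10_000

# Uncorrelated errors: each judge is off, but averaging across the
# crowd converges on the true value.
unbiased = true_value + rng.normal(0, 20, n_judges)

# Systematic bias: every judge shares the same upward error, so no
# amount of averaging can remove it.
biased = true_value + 15 + rng.normal(0, 20, n_judges)

print(f"crowd mean, uncorrelated errors: {unbiased.mean():.1f}")  # ~100
print(f"crowd mean, systematic bias:     {biased.mean():.1f}")    # ~115
```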

One challenge to this approach comes from studies of expert intuition. Expert intuition is a kind of recognition. Kahneman quotes Herbert Simon, the first psychologist to win the Nobel Prize:

‘The situation has provided a cue; this cue has given the expert access to information stored in memory, and the information provides the answer. Intuition is nothing more and nothing less than recognition’.

Kahneman engaged in an adversarial collaboration with Gary Klein, the leading advocate of this view, to try to map the boundary between the successful acquisition of expert intuition and the flawed intuitions described by the heuristics and biases approach. They finally agreed that acquiring expert intuition requires a high-validity environment, one sufficiently regular to be predictable. It also requires that practitioners receive immediate and accurate feedback. In low-validity environments, on the other hand, it is the heuristics of judgement that are invoked.

3

Kahneman and Tversky met in 1969, and the paper on heuristics and biases published in 1974 was the outcome of their first five years’ collaboration. Their next project was a study of decision-making which led to the publication of a paper proposing prospect theory, a new decision theory, in 1979.

At the time, thinking in this field was grounded in expected utility theory, which is a theory of rational choice. Utility is the concept used in economics and decision theory to capture the psychological value of options, that is, of the things that can be chosen. A utility function assigns a number to each available option, with the number representing the option’s utility rather than, say, its monetary value. The rational choice is then the option with the highest utility. The problem utility is used to try to solve is the subjectivity of usefulness. The theory is grounded in what Kahneman calls psychophysics, a term I hadn’t heard before. Psychophysics is the study of the relationship between a quantity in the material world and some hypothesised quantity in the mental world.
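In the standard formulation (my notation, not the book’s), the expected utility of a gamble is each outcome’s utility weighted by its probability:

```latex
EU = \sum_i p_i \, u(x_i)
```

and the rational choice is the option with the highest expected utility.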

A baseline starting point might be that a quantity such as $100 correlates to some analogous quantity of utility. But if this were right, $200 would correlate to twice the amount of utility, and simple observation suggests that an additional income of $100 has more value to a poor person than to a wealthy person. This leads to the idea of diminishing marginal utility: as the amounts increase, the utility continues to increase, but each increment adds less and less additional utility. Perhaps it is the percentage increase rather than the absolute amount that is important, so that an additional 10% has a similar value at every income level.
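A logarithmic utility function, which is the one Bernoulli proposed and which Kahneman discusses, has exactly this percentage property: equal percentage increases add equal increments of utility whatever the starting level:

```latex
u(x) = \log x \quad\Rightarrow\quad u(1.1\,x) - u(x) = \log 1.1 \;\text{ for every } x
```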

This idea can be used to explain risk aversion, the observation that given the choice between a fair gamble and a lower but definite pay-out, most people will choose the lower amount. Suppose the gamble is on the toss of a coin and pays $100 for heads and $200 for tails. The fair value of a gamble is each outcome weighted by its probability, so in this simple case the gamble is worth $150. Given the choice, would you take the gamble or accept $150? People tend to opt for the sure thing, and diminishing marginal utility offers an explanation. The difference between $100 and $150 is 50%, while the difference between $150 and $200 is only 33%. On this reasoning you will probably prefer any amount above about $141 to the gamble, because at that pay-out the differences in percentage terms are equal.
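Under a logarithmic utility function this threshold can be computed directly. A minimal sketch (the choice of log utility is my illustrative assumption):

```python
import math

# A 50/50 gamble paying $100 or $200, evaluated with log utility.
outcomes, probs = [100, 200], [0.5, 0.5]

expected_value = sum(p * x for p, x in zip(probs, outcomes))
expected_utility = sum(p * math.log(x) for p, x in zip(probs, outcomes))

# The certainty equivalent is the sure amount whose utility equals the
# gamble's expected utility; for log utility it is the geometric mean.
certainty_equivalent = math.exp(expected_utility)

print(f"expected value:       ${expected_value:.2f}")        # 150.00
print(f"certainty equivalent: ${certainty_equivalent:.2f}")  # 141.42
```

Any sure pay-out above the certainty equivalent of about $141 is then preferred to the gamble.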

However, this can’t be right. Consider the difference between this gamble and similar gambles where the amounts are, firstly, $10 and $20 and, secondly, $1,000 and $2,000. If diminishing marginal utility is correct, then on average the balance of decisions to gamble should be similar in each case, but my guess is that, offered the choice, more people will gamble for $20 than will gamble for $2,000 because, given average incomes, forgoing $10 is trivial but missing a windfall $1,000 is disappointing.

There are also the circumstances to consider. Suppose this opportunity comes up just at the moment when, for whatever reason, you absolutely need to buy an air ticket. The ticket costs $105 and you have no other funds available. My guess is that in these circumstances you would choose any certain pay-out over $104 instead of the gamble. Conversely, if the ticket costs $195, you will probably choose the gamble unless the certain pay-out is more than $194.

This kind of scenario raises the question of just how useful the concept of utility actually is. Utility as a concept is designed to capture how material quantities such as amounts of money map to psychological pay-offs. The mapping may not be one-for-one, but I can see the value of the idea if there is some other predictable relationship. My feeling, however, is that there really isn’t a mapping. Utility theories depend on the ‘other things being equal’ condition to make them plausible, but the other things never really are equal.

Kahneman and Tversky developed prospect theory to correct the flaws they found in expected utility theory, but they did not challenge the concept of utility as such. Prospect theory is based on gains and losses rather than on states of wealth, and from this fact are derived a number of modifications to expected utility theory: the idea of a reference point; the idea of loss aversion; the idea that probabilities are not weighted in a linear fashion; and the idea that sensitivity to gains and losses diminishes the further from the reference point you go.

Firstly, expected utility theory doesn’t include any kind of reference point. For example, if two people each have $50,000 today, they should, within the framework of expected utility theory, be equally happy. For someone earning $50,000, the utility of an extra $5,000 is the difference between the utility of $50,000 and the utility of $55,000, and it therefore follows that the negative utility of losing $5,000, of moving from $55,000 to $50,000, is the same.

But the psychology of gains and losses doesn’t work like that. Suppose that two people each have $50,000, but yesterday one of them had $10,000 and the other had $100,000. More than likely the first will be happy and the second despondent. Similarly, if two people receive a bonus of $10,000, but one was expecting $5,000 and the other $20,000 it is also likely one will be cheerful and the other glum. The prior state and the prior expectation are reference points from which the utility of the sums is judged.

Loss aversion is the idea that, when directly compared, losses loom larger than gains. If we take a similar gamble to the one above, but in this case heads loses $100 and tails gains $100, you will probably not take the gamble, because the prospect of paying out $100 you already have is more painful than the prospect of gaining $100 that you never had. How loss averse we are will vary. Kahneman suggests the experimental results for the loss aversion ratio fall somewhere between 1.5 and 2.5, so a potential gain of between $150 and $250 will be needed before someone is willing to risk losing $100. These are small sums for most people, but as the sums get bigger the loss aversion coefficient will increase, and there are catastrophic losses that we will not risk however low the probability and however high the pay-out.
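The arithmetic behind those figures (my sketch, treating value as linear in the amounts with a loss-aversion coefficient λ): a 50/50 gamble that risks $100 for a potential gain G is acceptable only when

```latex
\tfrac{1}{2}\,G - \tfrac{1}{2}\,\lambda \cdot 100 \ge 0
\quad\Longrightarrow\quad
G \ge 100\,\lambda
```

so λ between 1.5 and 2.5 gives the $150 to $250 range.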

Loss aversion is about the relative strength of motives. The theory proposes that gains and losses are always evaluated relative to a reference point and that, when directly compared, losses loom larger than gains. Loss aversion is the gravitational force that holds our lives together near the reference points. Kahneman suggests that it is the most important contribution of psychology to behavioural economics. It explains phenomena such as the endowment effect, which is the observation that we care more about losing something we already own than about acquiring the same thing. The reference point and loss aversion are also significant in explaining the difficulty of reaching agreement in negotiations. Concessions are hard to make because they mean giving up something that you currently have, and loss aversion means that the party making the concession will feel the loss more than the party receiving the concession will feel the gain.

The idea behind the non-linear weighting of probabilities is also intuitive. In expected utility theory, a change of probability of 5% is weighted the same whether it is a difference between 100% and 95% or between 0% and 5%. But the first is the difference between certain and probable and the second is the difference between impossible and possible and the psychology of these gaps is different.
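Prospect theory captures this with a decision-weight function that overweights small probabilities and underweights moderate-to-high ones. One commonly used form (this comes from Tversky and Kahneman’s later cumulative prospect theory paper, with γ estimated at about 0.61 for gains, rather than from this book) is

```latex
w(p) = \frac{p^{\gamma}}{\left( p^{\gamma} + (1-p)^{\gamma} \right)^{1/\gamma}}
```

which makes the move from 0% to 5% count for much more than the move from 60% to 65%.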

The idea that sensitivity to gains and losses diminishes with distance from the reference point is similar to the idea of diminishing marginal utility, but this time the baseline is the reference point rather than zero, and it works both for gains and for losses, although the losses count more heavily.
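Putting loss aversion and diminishing sensitivity together gives the characteristic S-shaped value function. The functional form and parameter estimates below come from Tversky and Kahneman’s 1992 cumulative prospect theory paper, not from this book:

```latex
v(x) =
\begin{cases}
x^{\alpha} & x \ge 0 \\
-\lambda\,(-x)^{\alpha} & x < 0
\end{cases}
\qquad \alpha \approx 0.88, \quad \lambda \approx 2.25
```

The exponent α below 1 produces diminishing sensitivity on both sides of the reference point, and λ greater than 1 makes the loss limb steeper than the gain limb.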

Kahneman concedes that prospect theory has its own limitations. Like expected utility theory, prospect theory does not take account of the role of disappointment and anticipation in the evaluation of reference points. Suppose, for example, you are offered three prospects: a lottery ticket with a one in a million chance of winning $1 million, a sweepstake ticket with a 10% chance of winning $12, and, let’s say, a lawsuit with a 90% chance of paying out $1 million. The alternative in each case is gaining nothing, so in each case the reference point is zero and has the same value. But whereas in the first two cases no one will be upset at not winning, coming away with nothing in the last case will be very disappointing. There is a similar issue with regret. Suppose that in the lawsuit scenario, in one case the counter-offer is $50 and in another the counter-offer is $150,000. In both cases, if you turn down the counter-offer and lose you will be disappointed, but in the second case you may also regret the spurned offer.

I think the underlying problem here is that utility is a flawed concept. It isn’t just that theories have to be simplified in order to be useful; they have to be simplified because, if they were sophisticated enough to apply to real decision-making, the fundamental assumption, that usefulness can be thought of as a kind of psychological quantity mapping to material quantities, would be shown to be wrong.

4

The final section is concerned with Kahneman’s more recent research into utility and experience. Utility in economics and ethics has tended to mean different things. To the utilitarians such as Bentham it meant experienced utility. To economists it has meant decision utility. The two are expected to coincide because agents are rational; they know their own tastes and make good decisions.

Kahneman’s study of experience led to two findings about retrospective assessments, which he calls the peak-end rule and duration neglect. The peak-end rule is the idea that the memory of an experience is dominated by the peak moment and the experience at the end, rather than being the sum of the negative and positive experiences throughout the episode. Duration neglect is the finding that the duration of an experience is neglected in evaluations. Duration neglect is a normal feature of storytelling, because caring for people often takes the form of concern, not so much for their feelings as such, as for the quality of their stories. Lives are represented by the type of the story rather than its duration. In experiments where test subjects were asked to judge stories of people’s lives, adding five less happy years at the end of the story was consistently judged to make the whole life worse.
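A common way to operationalise the peak-end rule in the experimental literature is to score a remembered episode as the average of its most intense moment and its final moment. A sketch (the episodes and ratings are invented for illustration):

```python
# Moment-by-moment pain ratings (0 = none, 10 = worst).
# Episode B repeats episode A and then adds a milder tail.
episode_a = [2, 4, 7, 8]        # ends at its peak
episode_b = [2, 4, 7, 8, 5, 3]  # same peak, gentler ending

def experienced_pain(ratings):
    """What the experiencing self accumulates: the sum over moments."""
    return sum(ratings)

def remembered_pain(ratings):
    """Peak-end rule: memory keeps roughly the mean of peak and end."""
    return (max(ratings) + ratings[-1]) / 2

for name, episode in [("A", episode_a), ("B", episode_b)]:
    print(name, "experienced:", experienced_pain(episode),
          "remembered:", remembered_pain(episode))

# B contains strictly more total pain than A, yet the peak-end rule
# scores it as the better memory: duration is neglected.
```

This is the logic of the cold-hand experiment Kahneman describes, in which subjects preferred to repeat a longer painful episode simply because it ended more gently.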

The life of the experiencing self is a series of moments. To the experiencing self, the value of an episode is the sum of the values of its moments, something which can be evaluated using a method called experience sampling. It appears that the distribution of happiness isn’t equal and that a small fraction of the population does most of the suffering. Not unexpectedly, poverty amplifies every other misfortune. On the other hand, the remembering self tells stories and makes choices, and time is not represented properly in either. The evaluation of life and the experience of living diverge.

From this, Kahneman suggests that there are two selves: the experiencing self and the remembering self. We identify with our remembering selves and care about the quality of our stories. The remembering self is a construction of system 2, but duration neglect and the peak-end rule originate in system 1. The remembering self neglects the fact that time is the ultimate finite resource, and the tyranny of the remembering self means that we privilege its preferences and forget the experiences of our experiencing self.

In Kahneman’s picture, the introspective mind is somewhat lazy; we get engaged when there is something novel or surprising to pay attention to but the rest of the time we tend to go with our intuitions. The doubt I have is this. Many mornings I go for an early morning walk along the sea-front. It is good exercise and an opportunity to think. The last few days I have been turning over the ideas in this book during my walk. From time to time I remember why I come to this particular place at this time and stop to watch the sun rising, the lights of the town across the bay and the fishing boats pulled up on the sand. But because this is a routine I usually pay little attention to what is near at hand.

We live entirely in the present, but our introspective self is not usually occupied with navigating the current environment. The stream of consciousness is more often than not concerned with other things, sometimes quite remote from our surroundings. There is not just the remembering self; there is also the anticipating self and the imagining self, and these tend to dominate our introspection and reflection. How much of the time are we really occupied making judgements and decisions, and how salient are these as activities among all the activities that our minds are engaged in?

5

The question of what it means to be rational is one of the threads that runs through the book. The psychological assumption of economic theory at the time Kahneman and Tversky were developing prospect theory was the rational-agent model, and expected utility theory is the foundation of that model. Kahneman says that he was surprised to be shown an economics paper by a colleague which stated in its first line that the governing assumption of economic theory is that ‘the agent of economic theory is rational, selfish, and his tastes do not change’. The economists down the hall seemed to be studying an entirely different tribe from the psychologists.

Kahneman argues that the logical consistency of preferences that is axiomatic for the rational-agent model is a mirage. Behavioural economics is interested in the heuristics people deploy to come to decisions and in how the way a decision is framed changes the outcome. I think this theme gets complicated because there seem to be two different critiques involved. What does it mean to say that the logical consistency of preferences axiomatic for the rational-agent model is a mirage? Does it mean that human beings cannot act like rational agents because they lack the capacity, and therefore fall short of rationality? Or does it mean that the rational-agent model is built on simplifying assumptions and tautologies for the purpose of theoretical modelling and doesn’t capture what it means to be rational? To be rational, our decisions should be consistent with our understanding of the world, our values and interests, and our sense of ourselves. Rationality in this sense is closely connected with the idea of having reasons and being able to give justifications to ourselves and to others. Kahneman notes this standard of rationality sympathetically, but he doesn’t develop the idea directly.

Making mistakes isn’t in itself evidence of a failure of rationality. It isn’t ignorance as such that is a failure; being unaware of our level of ignorance is. It is a failure because, although we don’t know what we don’t know, attention and engagement should reveal the existence of the gaps. If we didn’t assume that what we see is all there is, we would be attentive to the things we should be expecting to see but don’t. We should be aware that we are poor intuitive statisticians and think attentively when we recognise situations where statistical thinking is applicable.

Kahneman makes it clear that system 1 and system 2 are not components of the mind as such. They are a way of characterising patterns of thinking to make them more vivid and less cumbersome to describe. They broadly represent the intuitive and the introspective activities of the mind. Our intuitions are reactive to events and work largely through the association of ideas. They operate on the expected, the familiar and the routine. Intuition is recognition. Introspection is conscious and reflective thinking that pays attention to the unexpected, the unfamiliar and the innovative. Introspection can deploy calculations and algorithms, statistical analysis and logical deduction.

In practice, the time and information available to make decisions are limited, so heuristics, coupled with simple correction algorithms to adjust for biases, may be a better strategy, doing a good enough job at a lower cost in effort than trying to calculate every scenario. Is the use of heuristics irrational if they are known to lead to systematic errors of judgement and decision-making? Heuristics are short cuts. Most of the time they will generate a good enough solution, and it would be irrational to do a calculation to find a precise answer if the good enough solution is sufficient for the situation. It will also be rational to use heuristics, despite systematic biases, if exhaustive analysis is impractical.

What is perhaps not clear in this account is that heuristics must be used both in our intuitive thinking and in our introspective thinking, because the introspective self also has to make judgements and decisions under uncertainty. Heuristics are necessary when information is unavailable or insufficient, or when seeking the algorithmic solution is out of proportion to the situation and a rough order-of-magnitude answer will suffice.

Both intuitive and introspective thinking are error-prone. Scientific thinking, which is prototypically introspective thinking, has its own systematic biases, including what Kahneman calls theory-induced blindness and the ability to ignore anomalous information. These kinds of failures of conception are familiar from Thomas Kuhn’s account of science (*), which Kahneman seems to have in mind in his discussion.

You cannot correct visual illusions even when you know they are misleading, though you can override them. Cognitive illusions, by contrast, we can correct. Although we may never be able to think intuitively about statistical relationships and probabilities, we can recognise the situations where errors are likely to occur. By the time I read the appendix I understood the importance of sample size, and therefore that the smaller one is the right answer to the question of which of two hospitals, one large and one small, would be more likely to report days with significant discrepancies between male and female birth rates.
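A quick simulation makes the sample-size point concrete. Kahneman and Tversky’s original version of the problem has the large hospital delivering about 45 babies a day and the small one about 15, and asks which records more days on which over 60% of the babies born are boys:

```python
import numpy as np

rng = np.random.default_rng(0)
days = 100_000

def share_of_skewed_days(births_per_day):
    """Fraction of days on which more than 60% of births are boys,
    assuming each birth is a boy with probability 0.5."""
    boys = rng.binomial(births_per_day, 0.5, size=days)
    return np.mean(boys / births_per_day > 0.60)

print(f"large hospital (45 births/day): {share_of_skewed_days(45):.1%}")
print(f"small hospital (15 births/day): {share_of_skewed_days(15):.1%}")

# Smaller daily samples fluctuate more, so the small hospital reports
# roughly twice as many skewed days (about 15% versus about 7%).
```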