Veterans Affairs Medical Center, Coatesville PA, USA
University of Cape Town, South Africa
MANAGERIAL AND DECISION ECONOMICS
Manage. Decis. Econ. (2015)
Published online in Wiley Online Library
(wileyonlinelibrary.com) DOI: 10.1002/mde.2715
The field of behavioral economics (BE) has been defined by the study of anomalies in choice, that is, choices that do not obey what is called rational choice theory or expected utility theory. Expected utility theory holds that an intelligent and well-informed agent (‘economic man’) will make choices that maximize his expected utility, which by implication means avoiding choice patterns that would leave him (her hereafter) vulnerable in competitive markets—for instance, to being money pumped, making intransitive choices, or making choices that could be reversed by reframing. Research in two disparate schools has found that people commit a number of apparent violations of the maximizing principle. These traditions have formed the two legs on which BE stands: behavioral and cognitive.
During the 1970s and 1980s, both schools found anomalies in expected utility theory that called for new approaches. The behavioral school revealed anomalies in choice as a function of intertemporal differences in motivation, thus creating the topics of dynamic inconsistency and hyperbolic and quasihyperbolic delay discounting. The cognitive school revealed anomalies in choice as a function of cognitive framing, which are summarized in prospect theory. These anomalies have remained the cardinal phenomena of BE. Critics have called the resulting literature ‘a ragbag’, but I argue that the 12 best-known framing effects have coherent motivational roots. Most anomalies that persist after reflection can be understood as strategies in intertemporal bargaining, for maximizing hyperbolically discounted utility (or reward).
The term “behavioral economics” (BE) was coined by John Kagel and Robin Winkler to refer to the application of Skinnerian behavioral analysis to economic choices (1972). (A separate simultaneous coinage to name a sociological approach fell into disuse, and the Journal of Behavioral Economics became the Journal of Socio-Economics in 1990.) BE initially involved the verification of classical economic patterns by lab experiment—studying mental patients in token economies (Fisher et al., 1978), and, more radically, using pigeons’ choices to replicate familiar economic phenomena (Kagel et al., 1975). Pigeons would peck one of two keys to get intermittent deliveries of grain in an opening between the keys. Their preferences demonstrated phenomena such as cost sensitivity and consumption patterns as a function of budget sets.
Parametric behavioral experimentation did not just replicate the patterns described by economics; it tested them. The first contradictory finding arose from the study of delayed reward. When the two keys each gave a pigeon grain for the first peck after an unpredictable interval, the birds would peck in inverse proportion to the average of the intervals on the two keys (“concurrent variable interval schedule”-- Herrnstein, 1961). That is, their relative pecking rates exactly matched the relative rates at which grain was delivered. When the sizes of the deliveries were varied, relative pecking rates were observed to match the sizes. When the effective peck was followed by a delay before the reward, relative pecking rates were in inverse proportion to those delays (Chung & Herrnstein, 1967). This “matching law” suggested that if the design were changed so that a single peck was rewarded on each trial, the value of the reward might also be inversely proportional to its delay-- which turned out to be the case. The curve that best describes the value of a delayed reward as a function of its delay is a hyperbola (Ainslie, 1975; Mazur, 1987), the most important feature of which is that its rate of decline becomes less as delay increases. This function predicts that subjects will reverse their preferences from a larger, later (LL) reward to a smaller, sooner (SS) alternative as the pair get closer. Nonhuman animals do so (Ainslie & Herrnstein, 1981; Woolverton et al., 2007). Human subjects do not show this pattern when small amounts of money are delivered over the course of an experimental session (Logue et al., 1986), but they show it regularly if the reward is comfort, for instance relief from a loud noise (Navarick, 1982), or, more importantly, if money rewards are offered not in minutes but in weeks, months, or years (Green et al., 1994). The pattern has been reported in scores of experiments (Frederick et al., 2002; Kirby, 1997; Green et al., 2005).
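The hyperbola in question is Mazur’s, value = amount / (1 + k × delay). A minimal numerical sketch (the amounts, delays, and the discount parameter k below are arbitrary illustrations, not fitted values) shows the predicted reversal from LL to SS as the pair draw near:

```python
# Mazur's hyperbolic discount function; k is an arbitrary illustrative value.
def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

def preferred(t):
    # t = time remaining until the SS reward; the LL reward arrives 6 units later
    ss = hyperbolic(50, t)        # smaller, sooner: 50 units
    ll = hyperbolic(100, t + 6)   # larger, later: 100 units
    return "LL" if ll > ss else "SS"

print(preferred(10.0))   # → LL  (both rewards still distant)
print(preferred(0.5))    # → SS  (preference reverses as SS draws near)
```

With an exponential curve the ratio of the two values would be constant over time, so no such crossover could occur.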
Expected utility theory (EUT) assumes that an individual’s motives remain consistent over time in the absence of new information—that is, that she will tend to order her preferences in the future just as she does in the present. EUT therefore disregards what hyperbolic discounting predicts will be a fundamental human mistrust of future selves, and all the consequences of that mistrust. In EUT an agent can stand apart from the events of the moment and construct a preference that will be good at all moments, in the absence of further information. By contrast, with hyperbolic discounting preference is specific to the moment. An individual who derives present satisfaction from the prospect of her future choices will be threatened by the uncertainty of that prospect.
The most obvious consequence is a relationship of limited warfare among successive motivational states. An agent who wants to be able to make a plan that steers close to SS rewards has to find ways to keep her preference from changing temporarily—from impulses. The early experiments demonstrated the predicted motivation to do this even in pigeons and rats, which can learn to use committing devices whose only effect is to forestall their own future reversals of preference (Ainslie, 1974; Deluty et al., 1983). Economists have since found examples of similar commitments in long-term human economic behavior (Laibson, 1997), and have explored them theoretically (O’Donoghue & Rabin, 2001); but the usefulness of direct precommitments in everyday life is limited.
The additive property of the relatively high tails of hyperbolic curves suggested a richer possibility for commitment, intertemporal bargaining: When hyperbolic but not exponential curves from a series of rewards at various delays are added together, their relatively high tails sum to produce more incentive to choose the LL rewards against a series of SS alternatives than does any single LL reward against its SS alternative (Ainslie, 1975, 2001, 2005a, 2012). Thus if a person perceives her current choice as a test case of whether she will choose LL or SS rewards in similar circumstances in the future, she will have more incentive to choose the LL reward than if she does not notice such a predictive implication. She faces something like a repeated prisoner’s dilemma against her own expected motivational states, and the lines she finds to divide cooperation from defection will define resolute intentions, what I have called personal rules.
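This bundling effect can be illustrated with the same hyperbola (again with arbitrary illustrative numbers): a choice that favors SS when made in isolation favors LL when it is perceived as a test case for a whole series of similar choices:

```python
def hyperbolic(amount, delay, k=1.0):
    return amount / (1 + k * delay)

def series_value(amount, first_delay, n=10, period=10.0):
    """Summed present value of n repeats of the same reward, one per period."""
    return sum(hyperbolic(amount, first_delay + i * period) for i in range(n))

# A single SS (50 units, 2 time units away) beats a single LL (100 units, 8 away):
assert hyperbolic(50, 2) > hyperbolic(100, 8)
# But summed over ten similar future choices, the LL series is worth more,
# because the relatively high tails of the hyperbolic curves add up:
assert series_value(50, 2) < series_value(100, 8)
```

Exponential curves summed the same way would never produce this reversal, since each pair in the series would preserve the same value ratio.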
Economists Richard Thaler and Hersh Shefrin proposed a concept similar to personal rules by pointing out that people govern their spending with mental accounts (Thaler & Shefrin, 1981). Even though money is fungible, people value it differently depending on context. Thaler illustrated his proposal of mental accounting with a story of having just won $300 in a football lottery, and facing the suggestion that he invest it to increase his lifetime income by $20 a year (1990). The counterintuitive suggestion that he should put his football winnings into a long-term investment rather than celebrating with them demonstrated the functional existence of a mental account, conceived as something like “not-too-big windfalls,” that permitted nonsaving of this money, but which also implied another account in which a frivolous $300 expenditure would be forbidden. We might ask why a senior economics professor would feel joy at winning a tiny fraction of his salary to begin with, considering that it did not much increase his total assets. The obvious reason is that the winning gave him an occasion to spend the money frivolously, which a raise in salary of that much would not have done.
But where does the value of such an occasion come from? If accounts are a form of self-control, what enforces them? Why not designate lots of payments as windfalls? The answer suggested by the intertemporal bargaining hypothesis is that spending a one-time windfall does not set a precedent for future spending, and so does not reduce the person’s expectation of future self-control. That is, it does not violate the tacit self-enforcing contract she has made with her expected future selves. Thaler and Shefrin were aware of the test-case hypothesis and of some form of discount curve whose rate declined with delay (Shefrin & Thaler, 1978, pp. 4-5, 24-30), but did not include either in the published article, leaving the means of enforcing mental accounts unspecified. Nevertheless, it is clear that mental accounting represents an application of intertemporal bargaining, with the accounts defended by the threat of perceiving a lapse, just as with other examples of willpower based on personal rules.
Summed discount curves from series of choices let hyperbolic valuations approach consistent, exponentially discounted ones, and within limits are sufficient to enforce a personal rule to “evaluate future options according to bank rates” (Ainslie, 1991). Nevertheless, the economists who have focused on the preference reversal problem have generally been uncomfortable with the shifting valuations as a function of time implied by hyperbolic discount curves. Some have modeled personal rules (Bénabou & Tirole, 2004; Ross et al., 2008), but most current theories of willpower posit dualistic motivational systems to avoid the implication of hyperbolic discount curves (Ross is an exception; the others are reviewed in Ainslie, 2012). Many authors have compromised by adopting the “quasi-hyperbolic” step function proposed by David Laibson (1997). However, this function accounts only for preference reversals caused by emotional (“visceral”) arousal, and, I have argued, imperfectly for those (Ainslie, 2010, 2012), which has limited its applicability to sophisticated forms of self-control.
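The difference between the two functions is easy to see in a sketch (all parameter values here are hypothetical). Laibson’s β-δ function discounts sharply between “now” and every later period but exponentially thereafter, so a preference reversal can occur only when the SS option becomes immediate; a hyperbolic curve’s discount rate keeps falling with delay, so the reversal can occur well before either option is due:

```python
def quasi_hyperbolic(amount, delay, beta=0.7, delta=0.95):
    """Laibson's beta-delta step function (parameters hypothetical)."""
    return amount if delay == 0 else beta * delta ** delay * amount

def hyperbolic(amount, delay, k=0.3):
    return amount / (1 + k * delay)

def pref(value_fn, t):
    # SS: 50 units at time t; LL: 100 units at time t + 10 (arbitrary amounts)
    return "SS" if value_fn(50, t) > value_fn(100, t + 10) else "LL"

# Beta-delta: SS wins only when it is immediate (the step at delay 0)
print(pref(quasi_hyperbolic, 0), pref(quasi_hyperbolic, 1))   # → SS LL
# Hyperbolic: the reversal occurs while both options are still delayed
print(pref(hyperbolic, 5), pref(hyperbolic, 10))              # → SS LL
```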
The latest development in the motivational analysis of anomalies has been neuroimaging. BE is now taken to include observations of the neural processes that subtend motivation, although this field also has its own name, neuroeconomics (Montague & Berns, 2002; Ross et al., 2008; Glimcher et al., 2009). The results of real-time fMRI of human brains and microelectrode studies in animal brains at first suggested that options learned in different ways and at different ranges of delay are evaluated by different brain processes. For instance, choices with an imminent option might be evaluated in lower brain centers than choices between long-term options (McClure et al., 2004). Or the relationship of impulse and control might depend on some process that assigns decision-making variously to Pavlovian, habitual, episodic, or goal-directed systems (Rangel et al., 2008). Pavlovian systems are thought of as automatic; habitual systems as learned by trial and error; episodic systems as repeating the most salient past behavior (Lengyel & Dayan, 2007; cf. the “availability heuristic” of Tversky & Kahneman, 1973); and goal-directed systems as based on the calculation of weighted probabilities. However, as Rangel et al. point out, all these response systems have to compete for expression in a common currency of value (ibid., p. 3)-- even the Pavlovian system, as I have long argued (see Ainslie & Engel, 1974). These can be called systems to the extent that each is associated with activity in, and connections between, different locations in the brain, and their functions can sometimes be dissociated by injury (Bechara, 2004) or experiment (Berridge, 2003); but in the intact organism they are coordinated (Kable & Glimcher, 2007). All final outputs that are substitutable for one another must still compete on a commensurable basis, the common dimension of which is choosability (e.g. Montague & Berns, 2002), best called rewardingness.
Dyscoordination among valuation processes is not a usual cause of inconsistent choice over time, but hyperbolic discounting is. Thus game-theoretic solutions may be more informative than studying the balance among brain motivational centers, at least for the foreseeable future.
Simultaneously with the early behavioral research, and independently, Kahneman and Tversky were discovering a large variety of systematic errors that typical subjects made in estimating the probability of events, failing to appreciate randomness, regression to the mean, bias by mental availability, and the influence of experimenter suggestion, among others (Tversky & Kahneman, 1973; Kahneman et al., 1982). Kahneman and Tversky soon began to study how cognitive framing affects the choices subjects make, a topic they named “prospect theory” (1979; Kahneman et al., 1982). They gave subjects thought experiments exemplifying violations of EUT. In the 1980s this cognitive approach was also given the name “behavioral economics,” since it confronted economics with experimental findings. Ironically, it was part of a movement against behaviorism in psychology.
The original rationale of behaviorism was to cure psychology of the various philosophical assumptions that had persisted from the introspective method, and that were leading to empirical dead ends (e.g. Titchener, 1909/1926). The cogito of behaviorism might have been, “I choose, therefore I prefer,” except that behaviorists came to regard “prefer” as introducing extra connotations—choice in a given context should itself be the endpoint of research—and the “I” was too small a sample from which to generalize (Skinner, 1938). Unfortunately the useful discipline of seeing what could be discovered by the manipulation of external contingencies morphed into a philosophical stance-- that mental processes are superfluous to the explanation of choice, and indeed might be merely epiphenomena (e.g. Rachlin, 1985). This was more limiting in psychology than was the analogous norm in economics, the ordinalism that was based entirely on revealed preference (e.g. Robbins, 1935). Inevitably, psychology had a paroxysm of revulsion at the endeavor to explain human choice without mental constructs. In this “cognitive revolution” (Gardner, 1985; Baars, 1986, pp. 4-10, 141-196) processes of perceiving, imagining, reasoning, and generalizing became the focus of psychology, while the behavioral concepts of reward and motivation have been generally avoided (see, e.g., Baumeister & Heatherton, 1996, and my commentary, 1996) and sometimes demonized (e.g. Ryan & Deci, 2000, p. 16; Silvia, 2001, p. 278). As so often happens, the revolution went too far, for the highly developed behavioral methodology for quantifying motivation is not antithetical to mental models. On the contrary, it provides them with a much-needed unifying construct-- a mechanism by which cognitive processes are selected, as I will argue.
A long list of anomalies in expressed preference has now been described (Kahneman, 2003; Thaler, 1992), based mostly on how subjects frame their choices—ostensibly by cognitive predispositions that have not been tested in competitions for utility. To some commentators they appear to be a ragbag with no unifying features, an inadequate basis for a field of study. Richard Posner responded to a review of them (Jolls et al., 1998) by writing that the proposed field of behavioral economics was just the set of anomalies in EUT: “It is economics minus the assumption that people are rational maximizers of their satisfactions. Its relation to standard economics is thus a bit like the relation of non-Euclidean to Euclidean geometry, though with the important difference that non-Euclidean geometry is as theoretically rigorous as Euclidean geometry, whereas behavioral economics is… antitheoretical” (Posner, 1998, p. 1552). “It would not be surprising if many of these phenomena turned out to be unrelated to each other, just as the set of things that are not edible by man include stones, toadstools, thunderclaps, and the Pythagorean theorem” (ibid., p. 1560). However, in the rest of this article I will suggest that they are not as random as they seem, and have much simpler underpinnings.
The first way to simplify the array of anomalies is to eliminate those that subjects themselves come to recognize as errors. I define errors as the preferences that a subject will change after thinking about them. In one famous example, subjects said that a hypothetical social activist was more likely to be a feminist bank teller than she was to be a bank teller tout court. Even Stanford undergraduates who had had statistics courses committed the error in 36% of instances (Tversky & Kahneman, 1983). However, subjects would obviously not have persisted in that belief after having been debriefed. If most subjects persist in a preference after reflection, and especially after exposure to counter-arguments, it should be considered robust. By this test many supposed anomalies are not robust phenomena, even though they may demonstrate common heuristics that people use in estimating the probability or value of an outcome.
Gerd Gigerenzer has pointed out that an abstract presentation of such a choice to subjects often keeps them from accessing perfectly sound logic that they have developed in more familiar contexts (Gigerenzer et al., 2012). In one example, a simple Bayesian problem is presented to sixth graders: Ten percent of people in a village are liars, and eighty percent of the liars have red noses. The remaining ninety percent of the villagers tell the truth, and ten percent of those have red noses. What is the probability that a villager with a red nose is a liar? No child got it right. But if the problem was presented as “10 out of every 100 people will lie, and of those eight have a red nose. Of the remaining 90 people, 9 have a red nose,” 54% got it right. (The corresponding numbers for MBA students were 47% and 76%-- Zhu & Gigerenzer, 2006; see also Gigerenzer, 2005).
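For the record, the Bayesian answer is the same in either presentation; the natural-frequency version makes the arithmetic transparent:

```python
# 10 of every 100 villagers lie; 8 of those 10 have red noses.
# Of the other 90 truth-tellers, 9 have red noses.
red_nosed_liars = 8
red_nosed_truthtellers = 9

# P(liar | red nose) = 8 / (8 + 9) = 8/17
p_liar_given_red_nose = red_nosed_liars / (red_nosed_liars + red_nosed_truthtellers)
print(round(p_liar_given_red_nose, 2))   # → 0.47
```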
Many framing-type anomalies are ones in which few subjects could be imagined to persist after debriefing, that is, on full reflection. By contrast, the persistence of a framing effect despite exposure to logical analysis, debriefing, counterarguments or simple re-phrasing in familiar terminology suggests a robust basis that needs explanation. We have seen how hyperbolic delay discounting demands modification of EUT. But it makes this demand because of its own call for maximization of discounted expected reward. Where choice patterns fail to maximize exponentially discounted utility and persist on reflection, we should look at whether they fail to maximize hyperbolically discounted utility, taking strategic responses to the hyperbolic curve into account.
Economist Colin Camerer suggested that ten examples could be explained on the basis of two prospect-theoretical principles, the asymmetric valuation of gains and losses and the overweighting of very high and very low probabilities (2004). Without contradicting his analysis I will discuss a dozen examples that partially overlap his, and suggest how they may be brought together by the implications of hyperbolic delay discounting—that is, how they represent reward maximizing by the logic of either impulsiveness or impulse control.
Four kinds of mechanism define four categories:
Endowment effect. People decline to sell a good they own for a price they would not pay to buy the same good—the endowment effect, first demonstrated by economist Jack Knetsch by giving college students either chocolate bars or coffee mugs and then offering each group the other prize: Few members of either group opted to trade (1989). This often-replicated effect creates value in these small transactions that is typically almost equal to the value of the goods themselves (Novemsky and Kahneman, 2005, p. 124). By contrast, EUT calls it rational to assess resources the same whether you might gain them or lose them. However, such an attitude involves an obvious danger, in that the resources you might gain are infinite whereas what you have to lose is finite. Advice-givers have often warned against weighing opportunities for gain equally with potential losses, for instance in “A bird in the hand is worth two in the bush” or medicine’s Hippocratic rule, “First do no harm.” Evolution itself seems to have introduced the same principle in the widespread instinct to defend territory (Ardrey, 1966). For instance, inborn asymmetries give elk extra incentive for guarding harems, and motivate dogs to guard “their” turf. The evolution of dispositions against valuing options without regard to ownership suggests that such valuing is maladaptive. In people this instinct is manifested in “the pain of paying” (Prelec & Loewenstein, 1998), which among other effects makes it prudent for governments to collect taxes directly from payrolls, before citizens have endowed the money with ownership.
An inborn readiness to endow possessions with extra value can create an impulse-control problem in itself. The capacity to derive reward from ownership is sometimes seen in an extreme form, “compulsive” hoarding (Pertusa et al., 2010). Short of compulsion, part of the pleasure of collecting objects depends on the premium for ownership, but this pleasure puts you at a disadvantage in the marketplace. The evolved instinct to defend existing resources has created a temptation that must be controlled in turn. For instance, traders who buy collectables for resale have to avoid feeling endowed with their goods (Haigh & List, 2005). Novemsky and Kahneman say this is merely a matter of intention—“Goods that are exchanged as intended are not evaluated as losses” (2005, p. 124). However, such intention is not casual, but entails the cost of resisting the temptation to endow the goods. Conversely, once the goods are endowed, people cannot disendow them by casually shifting their intentions. Thus the endowment effect can be maladaptive in a market economy, but it appears to have deep roots as a way to prevent risk-taking in just such free exchange.
Gain/loss asymmetry. Contrary to EUT, people tend to value a potential gain less than the avoidance of the same potential loss-- gain/loss asymmetry (Thaler, 1981). This case could be considered to be an example of the endowment effect, but it is also found between goods that we don’t yet possess (Carmon et al., 2003). Our powers of foresight lead us to establish property in expectations, events that we “count on” and which we value over similar others. This form of asymmetry seems to have arisen in evolution as soon as foresight itself, which is found to a significant degree only in the primates (de Waal, 2007, pp. 184-187): M. Keith Chen and colleagues showed capuchin monkeys one apple slice on one side of their cage and two apple slices on the other, then had the monkeys choose a side. Each side delivered 2 slices with a 50% chance and 1 slice with a 50% chance. The monkeys strongly preferred the side that displayed 1 slice over the side that displayed 2 slices. That is, when the odds for outcomes were the same, they preferred getting pleasant surprises to getting unpleasant surprises (Chen et al., 2006). In another experiment where the monkeys got a sure 1 slice on either side, they preferred the side on which a single slice had been displayed over the side where 2 slices had been displayed. Had the experimenters tried, they could almost certainly have titrated this preference against maximization of reward, perhaps finding indifference between a 45% chance of pleasant surprise against a 55% chance of unpleasant, suggesting a source of reward beyond the apple slices themselves (see section on Preference for Delayed Reward).
In a related example, people say they would pay more to prevent a future delivery from being delayed to a later date than they would pay to speed up delivery from that date (Loewenstein, 1988). This case also would seem to reflect an endowment of expectations.
Some preference phenomena that have been called anomalous are manifestations of the intertemporal bargaining/mental accounting that I discussed above. They have been discussed in both the cognitive and the motivational literature:
Less thrift in exceptional cases. People’s willingness to pay for a good goes up when they are on holiday, or when the good is attached to a larger purchase. For example, Kahneman and Tversky’s subjects reported a willingness to drive twenty minutes to save $5 on a $15 purchase, but not on a $125 purchase (1984). But a personal rule not to waste money will seem important for small amounts only if those small amounts are spent routinely, so that they add up. If there is a factor that makes the choice at hand infrequent, for instance if it occurs on a vacation trip or is attached to another purchase that can’t be made often because it is large, the person can credibly claim an exception.
Borrowing to protect savings. People have been regularly observed to pay high credit card interest rather than spend money they already have. Thaler gives the example of a couple who have saved $15,000 toward a dream home and put it in a money market account at 10%. They then finance a new car at 15% (1985—a time of rocketing inflation). Similarly, people spend money they could be investing rather than accept zero-interest loans. Such examples show a willingness to incur costs to avoid violating the boundaries of mental accounts, as Thaler himself has argued. Hyperbolic discount curves and the consequent intertemporal bargaining need be invoked only to account for how these boundaries are enforced—by the test-case mechanism described above. Thaler has mentioned this mechanism: He described a couple who can afford no more than $20 a night on wine, but could afford an occasional $30 bottle of champagne. Such bottles would be worth the $30 to them, “but they don’t trust themselves to resist the temptation to increase their wine budget unreasonably if they break the $20 barrier” (1999, p. 195). However, he did not relate this case to the hyperbolic discount function. We would expect people to be more rigid in maintaining the boundaries of mental accounts—for instance, to avoid dipping into an investment account to save credit card interest—the more they were worried about self-control, but this prediction has not been tested.
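The cost of respecting the account boundary in Thaler’s example is easy to state, assuming (hypothetically) that the entire $15,000 is financed for one year at the quoted rates:

```python
# Annual cost of the mental-account boundary in Thaler's car-loan example,
# under the simplifying (hypothetical) assumption that the full $15,000
# is borrowed for one year while the savings stay invested.
principal = 15_000
savings_rate, loan_rate = 0.10, 0.15

interest_paid = principal * loan_rate        # cost of the car loan
interest_earned = principal * savings_rate   # yield of the protected savings
annual_cost = interest_paid - interest_earned
print(annual_cost)   # roughly $750 a year to keep the accounts separate
```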
Cooperation in one-shot games. In competitive bargaining games people often fail to maximize their prospective outcomes (Henrich et al., 2005). The most striking example of this is their tendency to cooperate in one-shot prisoner’s dilemmas, similar public goods games, and ultimatum games. This cooperation is called an anomaly because the experimenter counts on subjects’ having obeyed her instructions: to assume that their moves will have no consequences outside of the one-shot game. But adult subjects will certainly have formed personal rules for private conduct in situations involving sharing and trust. Even if no one else can see them doing something selfish, they will see themselves doing it, and for the trifling rewards offered to experimental subjects at that. Thus they may not let the experimenter grant them permission for an exception (Ainslie, 2005b). Self-signals touching on character are apt to have high stakes (Prelec & Bodner, 2003). Again, we would expect people for whom this had been an issue to be especially unwilling to play defector.
Magnitude effect. People’s rate of discounting delayed money seems to be lower the larger the amount. Since the first studies of intertemporal decision-making, subjects have reported more patience for LL options the higher the amounts under consideration. In the first systematic study of human discounting, Thaler’s student subjects reported discount rates that declined not only with delay but also with amount, for instance going from 277% for a $30 prize to 62% for a $3000 prize (1981). Ainslie and Haendel found that hospital staff (mean age = 41) reported that they would wait twice as long to double $1000 as to double $10 (1983). Leonard Green and his collaborators did the first quantitative study of the human discount curve as a function of amount (Green et al., 1994). They found decreasingly steep curves as index amounts increased from $100 to $1000 and then $10,000, whether subjects had mean ages of 12, 20, or 68. Since then this effect has been noted regularly, with real money as well as hypothetical, to the extent that Scholten and Read called it “the most robust anomaly in intertemporal choice” (2010, p. 927). However, some recent experiments have found that subjects do not discount smaller prizes at a greater rate, but simply subtract a fixed transaction cost from the value of delayed rewards. Economists Jess Benhabib and colleagues asked subjects what amount of actual money “today” would make them indifferent between that and a reference amount, from $10 to $100, paid at specified times from 3 days to 6 weeks (2010). Their results suggested that subjects simply subtracted about $4 from the value of any delayed reward. Economists Steffen Andersen and colleagues also offered actual money for one of the subjects’ choices selected at random (2011).
They offered much larger reference amounts-- $300 and $600 (with Danish kroner worth about 20 cents) at delays varying from 2 weeks to 12 months, and obtained discount rates lower than in most experiments (median = 5.5%). With the larger amount only double the smaller, they still found that the smaller amount was discounted half again as steeply as the larger (6.6% vs. 4.3%)-- but they reported that this difference disappeared when the earlier option paid after one month (a “front end delay”) instead of immediately, thus making the transaction costs of the two options equal.
The notion of a fixed transaction cost for having to collect money later is sensible, but does not account for the much larger magnitude effects found in earlier experiments. And the Ainslie & Haendel experiment, although crude, did use a front end delay of 1 week (1983). An additional factor is supplied by Thaler and Shefrin’s idea of mental accounts (1981), given motivational force by intertemporal bargaining. Small sums are pocket money, exempt from personal rules for thrift, especially when they would be won in a one-shot psychology experiment. As the sums get larger the rules are more apt to become salient—not proportionately, but, being rules, on a threshold basis. Accordingly, suggesting an investment context to subjects, or giving options in interest rates rather than amounts of money, produces exponential discounting and eliminates the magnitude effect (Read et al., 2011). Significantly, nonhumans, which presumably do not engage in intertemporal bargaining, do not show the magnitude effect (Green et al., 2004).
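Benhabib and colleagues’ point can be sketched numerically (the $4 fee comes from their estimate; the 4-week delay is an illustrative choice): a fixed cost subtracted from any delayed amount mathematically mimics a discount rate that falls as the amount rises:

```python
# A fixed ~$4 fee on any delayed payment implies a per-week discount rate
# that shrinks as the amount grows -- mimicking the magnitude effect.
# The 4-week delay is an illustrative assumption, not an experimental value.
def implied_weekly_rate(amount, fee=4.0, delay_weeks=4):
    present_value = amount - fee
    return (amount / present_value - 1) / delay_weeks

small = implied_weekly_rate(10)    # a $10 prize looks steeply discounted
large = implied_weekly_rate(100)   # a $100 prize looks patiently discounted
assert small > large
```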
Choice aversion. In EUT a wider choice is always at least as valuable as a narrower one, but people are often happier with less choice (Schwartz et al., 2002). The easiest explanation would be to say that choice entails cognitive effort, which is costly in its own right. But sheer computation is not effortful, and is often fun, as in puzzles. The effort of choice is not simply information processing, but the burden of facing consequences with motivational weight, in particular the risk of guilt and/or regret. The motivational constraints that impose or lift guilt and regret have not been explicitly modeled, but hyperbolic discounting supplies a suggestion for each. I have argued that guilt arises from having violated a personal rule, and gets its force from your lost expectation of future self-control in the relevant area. To the extent you rule that you must get full information before making choices, you are vulnerable to the self-accusation that you did not deliberate enough. To do “what I’ve been doing” bypasses this issue. People who resolve only to satisfice—to choose the first adequate option—are less averse to choosing (Schwartz et al., 2002).
Choices that do not test personal rules may still be psychologically risky, but the risk is of regret rather than guilt. I have argued that much of the pain of regret arises from having to resist the urge to believe in the counterfactual (“if only…”), which would explain why regret is stronger the closer the counterfactual is to the fact (Ainslie, 1985). The most strongly motivated example of the status quo bias is undoubtedly the norm among infantrymen not to trade duty in combat patrols. The same logic would apply to trading lottery tickets, if there were any incentive to do so—the wisdom of which is illustrated by the converse case of a man who once failed to play the lottery number he usually played: when he believed his habitual number would have won £2.7 million he shot himself (MacKinnon, 1995). With neither the patrols nor the lottery tickets would there be an occasion for guilt, as long as the starting odds were the same. But a change in the status quo stands out as a focus for regret, whereas passivity is part of the vast and poorly defined set of your non-engagements with the world.
The details of intertemporal bargaining are hard to study, but the effect of being in a choice situation can itself be measured: merely perceiving an SS/LL choice changes the relative values of the options, in a way that reduces the value of SS rewards. When subjects anticipate individual (non-chosen) SS and LL rewards for which they had previously expressed equal preference, activity in brain reward centers is greater when they expect the SS reward than when they expect the LL reward (equal preference confirmed by post-test; Luo et al., 2009). This finding implies that the process of intertemporal choice itself depresses the relative value of SS rewards.
Preference for passivity. Among required choices, people tend to value the more passive option. For instance, people say that they would leave an inheritance invested wherever it was when it arrived (Samuelson & Zeckhauser, 1988). People show the same passivity about choosing between auto insurance plans that do and do not permit litigation: when Pennsylvania and New Jersey offered the same choice, motorists overwhelmingly chose the default option, even though the default was opposite in the two states (Johnson et al., 1993). When maintaining the status quo requires activity, subjects tend to passively let change happen, as shown when subjects were given an experimental investment option that had to be renewed, but without cost—most subjects did not bother (XX). The distinction in this kind of experiment is active choice versus passivity. Passivity turns out to have an advantage, perhaps because it avoids regret even better than actively pursuing the status quo does.
Sunk cost fallacy. Investors commonly include the amount they have already invested in an option in the expected cost of switching to an alternative—the sunk cost fallacy (Arkes & Blumer, 1985). In a well-quantified example, stock traders irrationally hold losing stocks longer than winning stocks (Odean, 1998). This phenomenon reveals something about the attractiveness of passivity. The stocks in question have already lost monetary value, but at least part of the hedonic loss does not occur until, in the revealing terminology of the market, it is “realized.” Realization of a loss occasions immediate psychic punishment, and is accordingly deferred just as hyperbolic discounting predicts. Why a somewhat arbitrary occasion should have such power to punish (or reward) is another question, which I will discuss in the next section.
Preference for delayed reward. In EUT, expected value is the product of the probability that an option will become available and the value it will have when it does. But people often choose to defer a positive event, to savor it (Loewenstein, 1987), and to hasten a negative one, to avoid dread (Berns et al., 2006). People usually save dessert for last, or opt for increasing wages over the years even when greater early wages could be invested at interest (Loewenstein & Prelec, 1993). But what are the properties of this kind of incentive? Savoring seems to conjure additional reward out of thin air; dread rewards attention at least, luring you into intrusive thoughts that decrease net reward (“Cowards die many times before their deaths…”). Savoring requires imagination. This capacity is doubtless limited among nonhumans, although the capuchin monkeys in the Chen et al. (2006) experiments may have been deriving reward from the prospect of getting the apple slices beyond what they would have gotten from unexpected slices. But nonhumans have never been observed to prefer delay of a reward without a compensatory increase in value, and will choose to hasten punishment to reduce it only when the punishment will still be distant (Deluty et al., 1983). Reward through imagination probably comes from the large “default areas” that characterize human brains (Buckner et al., 2008); but this tells us nothing about its properties.
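The idea that savoring can make deferral optimal can be given a toy form in the spirit of Loewenstein’s (1987) anticipation model. The functional forms and parameter values below are illustrative assumptions, not his fitted model: anticipation utility accrues while waiting and grows as the event approaches, while consumption utility is conventionally discounted, so their sum peaks at an interior delay rather than at immediate consumption.

```python
import math

def total_utility(T, u=10.0, alpha=0.5, delta=0.4, r=0.1):
    """Toy anticipation-utility model (illustrative parameters only).

    Waiting T periods for a treat with consumption utility u yields:
    - savoring while waiting, growing as the event nears:
      integral from 0 to T of alpha * u * exp(-delta*(T - t)) dt
    - plus the conventionally discounted consumption utility at time T.
    """
    savoring = (alpha * u / delta) * (1 - math.exp(-delta * T))
    consumption = u * math.exp(-r * T)
    return savoring + consumption

# The utility-maximizing delay is interior: waiting a while beats both
# consuming immediately and deferring indefinitely.
best = max(range(0, 21), key=total_utility)
print("utility-maximizing delay:", best)
```

With these made-up parameters the optimum falls at a moderate delay: a little waiting adds savoring faster than discounting erodes the treat, but indefinite deferral eventually loses the consumption utility itself.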
Preferences for sequences of reward vary among people and among the kinds of goods in question (Frederick & Loewenstein, 2008), which suggests that they depend more on individual bookkeeping than on the fixed properties of the goods themselves. The value of expectations in EUT follows the behavioral concept of secondary reward (Baum, 2005, pp. 277-286)—that is, as a soft currency that must be backed up by the eventual delivery of hard currency in the form of an external reward. This assumption makes some sense, since otherwise you might be able to reward yourself at will; but on the other hand, many implied explanations of common motives involve highly fanciful chains of association, chains that somehow do not extinguish despite years without leading to their primary rewards. I have argued elsewhere that behavioral science needs to make room for the concept that non-predictive information and even pure imagination are sometimes rewarding in their own right, endogenously, limited only by the person’s appetite for them (Ainslie, 2013a). With hyperbolic discounting this appetite degenerates readily into daydreaming if satisfied ad lib, and must be built up by obstacles to its satisfaction—generally, by gambles on outcomes. These gambles can take the form of questions, puzzles, works of fiction, or sports events, as well as aspirations that have actual instrumental value. The occasions for endogenous reward that such gambles define perform best when they are singular—uncommon and standing out from other potential occasions—and surprising—not too predictable. Singularity is the same property that makes a hand of cards valuable, or a visual pattern interesting—its unusualness, or what some behavioral psychologists have equated to its complexity (Berlyne, 1974) or information value (Garner, 1970).
Outcomes of a sports event will have descending singularity as you go from watching a current championship game in person to watching its broadcast, to the broadcast of a regular current game, to a game already played but which you haven’t seen, to the rebroadcast of a game you have seen—singularity that is compounded by the singularity of the events in the game. Furthermore, a team’s successes and failures will become increasingly singular over the years you support it, because they will stand out from the outcomes of teams you have not supported or have not supported for as long.
To the extent that your gambles restrict the gratification of appetite to the right degree, they form consumption capital. That is, they acquire what could be called hedonic importance (as opposed to the instrumental kind), a property that grows with use much like the endowment of ownership described in Section A1. Indeed, the factors that have been reported to invite endowment entail singularity: The endowment effect has been shown to be proportional to the length of time a subject has owned the object (Strahilevitz & Loewenstein, 1998); goods of fixed value are not endowed; goods that are readily substitutable for others are less endowed; a specific option on a good not actually possessed can be endowed; and designation of a finite purse that can be used to buy goods makes the money itself endowed (reviewed in Novemsky & Kahneman, 2005, who also confirm that endowment builds with time of possession). Of course singularity does not automatically evoke endowment; you have to participate in giving it meaning, just as you have to take an interest in card games or pictures for them to occasion reward. The aesthetic psychologist Michael Kubovy has pointed out that the early aesthetics of the Berlyne school erred in making beauty a function of stimulus complexity, whereas “complexity should be relevant to pleasures of the mind only insofar as it contributes to the generation of emotions” (1999, p. 142)—that is, insofar as you have made it hedonically important. Thus savoring is the exemplar of a much wider reward-getting strategy, potentially divorced from instrumental value but not necessarily so (see next section).
Goal-setting. Recent articles on “goal setting” have explored people’s ability to designate criteria that will occasion reward over a wide range of difficulties. Contrary to EUT, rewardingness does not depend on objective instrumentality but only on accepting an optimal risk of failure (Koch & Nafziger, 2011). But this is exactly the strategy for optimizing endogenous reward.
Actual accomplishment is usually a good source of singularity. The stated object of goal-setting is instrumental, and the process cannot grossly violate your personal rules for testing reality without reducing the singularity of your goals. But your ability to specify what goals will reward you implies two potentially conflicting bases of reward—external and endogenous. Your responsiveness to endogenous reward can parasitize your criteria for getting external reward, and in doing so create incentives to bend rules for testing reality. The resulting anomalies to EUT include clinging to inefficient but satisfying production methods, estimating the value of tasks by their difficulty, and seeking riches by casino gambling (discussed in Ainslie, 2013b).
Overvaluation of improbable outcomes. Contrary to EUT, people value small changes in probability at the extreme ends of the scale over larger changes nearer the middle. Most famous is an example of Allais’ paradox (Savage, 1954, pp. 102-104), which has proven robust on reflection (Slovic & Tversky, 1974): Subjects are told to imagine they can draw a ball from one of two urns. In both urns 89 of 100 balls pay $1 million. But in urn A the other balls are also worth $1 million, whereas in urn B 10 balls are worth $5 million and 1 ball is worth nothing. Subjects usually choose urn A. Then subjects are asked to make the same choice, except that in both urns the 89 balls that used to be worth $1 million are worth nothing. Now the subjects choose B, which always had the greatest expected value according to EUT. The difference is that in the first choice there was a single ball that was worth nothing, while in the second choice most were worth nothing. The risk of drawing a worthless ball was overvalued, but only when it was remote. Serious consequences of this overvaluation include people’s tendency to bet on long shots—lottery tickets (Cook & Clotfelter, 1993) and unpromising race horses (Jullien & Salanié, 2000), and, conversely, to buy insurance for rare events such as dismemberment and public utility failures (Cicchetti & Dubin, 1994). But to the extent that a person is controlling her imagination with rules for testing reality, the categorical difference between “cannot” and “might” will overshadow straightforward proportionality. However small, a chance of winning is still a real possibility, so savoring is not forbidden; with no chance, savoring is just another daydream. Conversely, a tiny chance that you will lose a large sum of money means that you can’t rule out the dread of it, and thus might be vulnerable to urges to panic. 
The hucksters who sell dismemberment insurance trade on the same threat with regard to having legs cut off. Therefore, the assumption called into question by the Allais paradox—that all percentage points of risk are equally important—should not stand the test of reflection, and in fact it does not.
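The expected values in this version of Allais’ problem are easy to check directly. A minimal sketch, with the payoffs and probabilities as stated above:

```python
M = 1_000_000

def expected_value(lottery):
    """Expected value of a lottery given as {payoff: probability}."""
    return sum(payoff * p for payoff, p in lottery.items())

# First presentation: 89 of 100 balls pay $1M in both urns.
urn_A1 = {1 * M: 1.00}                        # the other 11 balls also pay $1M
urn_B1 = {1 * M: 0.89, 5 * M: 0.10, 0: 0.01}  # 10 pay $5M, 1 pays nothing
# Second presentation: those 89 balls become worthless in both urns.
urn_A2 = {1 * M: 0.11, 0: 0.89}
urn_B2 = {5 * M: 0.10, 0: 0.90}

for name, urn in [("A1", urn_A1), ("B1", urn_B1),
                  ("A2", urn_A2), ("B2", urn_B2)]:
    print(f"EV({name}) = ${expected_value(urn):,.0f}")
```

Urn B has the higher expected value in both presentations ($1.39M vs. $1M, and $500,000 vs. $110,000), yet typical subjects choose A in the first and B in the second; all that changes is whether the single worthless ball is a remote exception or one among many.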
To the extent that utility determines choice, it is turning out to be a strict function of an integrated network of brain reward centers, the operation of which is just beginning to be understood. We can visualize their locations but know little of their syntax. Even without this syntax, the finding that the different centers discount delayed reward in unison and hyperbolically is moderately secure (Kable & Glimcher, 2007). It suggests that, whatever cognitive patterns are available to a person, her choices must fit within the bounds of an intertemporal bargaining model. The possibility of discerning the constraints of singularity and surprisingness on endogenous reward promises to bring the realm of imagination within the bounds of this model, so that cognition can be understood as serving the maximization of discounted expected reward. It is no more irrational for a person to fit her choices to the needs of intertemporal bargaining than for a nation to base economic choices on the realities of political faction. For an exponential discounter without intertemporal factions the above anomalies would be pointless deviations from reward-getting; but although a hyperbolic discounter may still make mistakes, there is no reason to say that any of these processes are misguided.
1. This article is a U.S. Government work and is in the public domain in the USA.
2. Some of these had been previously described under the rubric of “behavioral decision-making,” for instance the failure of insurance-buying decisions to track expected value (Slovic et al., 1977).
3. He actually named three, but his “reflection effect” seems to encompass his “loss aversion.”
Ainslie, G. (1974) Impulse control in pigeons. Journal of the Experimental Analysis of Behavior 21, 485-489.
Ainslie, G. (1975) Specious reward: A behavioral theory of impulsiveness and impulse control. Psychological Bulletin 82, 463-496.
Ainslie, G. (1985) Rationality and the emotions: A picoeconomic approach. Social Science Information 24, 355-374.
Ainslie, G. (1991) Derivation of "rational" economic behavior from hyperbolic discount curves. American Economic Review 81, 334-340.
Ainslie, G. (1996) Studying self-regulation the hard way. Psychological Inquiry 7, 16-20.
Ainslie, G. (2005a) Précis of Breakdown of Will. Behavioral and Brain Sciences 28(5), 635-673.
Ainslie, G. (2005b) You can’t give permission to be a bastard: Empathy and self-signalling as uncontrollable independent variables in bargaining games. Behavioral and Brain Sciences 28, 815-816.
Ainslie, G. (2010) The core process in addictions and other impulses: Hyperbolic discounting versus conditioning and cognitive framing. In What Is Addiction? D. Ross, H. Kincaid, D. Spurrett, and P. Collins (eds). MIT, pp. 211-245.
Ainslie, G. (2012) Pure hyperbolic discount curves predict “eyes open” self-control. Theory and Decision 73, 3-34. 10.1007/s11238-011-9272-5
Ainslie, G. (2013a) Grasping the impalpable: The role of endogenous reward in choices, including process addictions. Inquiry 56, 446-469. DOI: 10.1080/0020174X.2013.806129. http://www.tandfonline.com/eprint/8fGTuFsnfFunYJKJ7aA7/full
Ainslie, G. (2013b) Money as MacGuffin: A factor in gambling and other process addictions. In Neil Levy, ed., Addiction and Self-Control: Perspectives from Philosophy, Psychology, and Neuroscience. Oxford University Press, pp. 16-37
Ainslie, G. and Engel, B. T. (1974) Alteration of classically conditioned heart rate by operant reinforcement in monkeys. Journal of Comparative and Physiological Psychology 87, 373-383.
Ainslie, G. and Haendel, V. (1983) The motives of the will. In Etiologic Aspects of Alcohol and Drug Abuse, E. Gottheil, K. Druley, T. Skoloda, H. Waxman (eds). Charles C. Thomas, pp. 119-140.
Ainslie, G. and Herrnstein, R. (1981) Preference reversal and delayed reinforcement. Animal Learning and Behavior 9, 476-482.
Andersen, S., Harrison, G., Lau, M., & Rutstroem, E. (2011). Discounting behavior and the magnitude effect. Working paper: Durham Research Online http://www.dur.ac.uk/business/faculty/working-papers/
Ardrey, R. (1966). The territorial imperative. New York: Atheneum.
Arkes, H. R. and Blumer, C. (1985) The psychology of sunk cost. Organizational Behavior and Human Decision Processes 35, 124-140.
Baars, B. J. (1986) The Cognitive Revolution in Psychology. Guilford.
Baum, W. M. (2005) Understanding Behaviorism 2d Edition. Blackwell.
Baumeister, R. F. and Heatherton, T. (1996) Self-regulation failure: An overview. Psychological Inquiry 7, 1-15.
Bechara, A. (2004) The role of emotion in decision-making: Evidence from neurological patients with orbitofrontal damage. Brain and Cognition 55, 30-40.
Bénabou, R. and Tirole, J. (2004) Willpower and personal rules. Journal of Political Economy 112, 848-886.
Benhabib, J., Bisin, A., and Schotter, A. (2010) Present-bias, quasi-hyperbolic discounting, and fixed costs. Games and Economic Behavior 69(2), 205-223. DOI: 10.1016/j.geb.2009.11.003.
Berlyne, D. E. (1974) Studies in the New Experimental Aesthetics. Washington, D.C.: Hemisphere.
Berns, G. S., Chappelow, J., Cekic, M., Zink, C. F., Pagnoni, G., and Martin-Skurski, M. E. (2006) Neurobiological substrates of dread. Science 312, 754-758.
Berridge, K. C. (2003) Pleasures of the brain. Brain and Cognition 52, 106-128.
Buckner, R. L., Andrews-Hanna, J. R., and Schacter, D. L. (2008) The brain’s default network: anatomy, function, and relevance to disease. Annals of the New York Academy of Sciences 1124, 1-38. DOI: 10.1196/annals.1440.011.
Camerer, C. F. (2004) Prospect theory in the wild: Evidence from the field. In Advances in Behavioral Economics, C. F. Camerer, G. Loewenstein, and M. Rabin (eds). Russell Sage, pp. 148-161.
Carmon, Z., Wertenbroch, K., & Zeelenberg, M. (2003). Option attachment: When deliberating makes choosing feel like losing. Journal of Consumer Research, 30(1), 15-29.
Chen, M. K., Lakshminarayanan, V., and Santos, L. R. (2006) How basic are behavioral biases? Evidence from Capuchin monkey trading behavior. Journal of Political Economy 114, 517-537.
Chung, S. and Herrnstein, R. J. (1967) Choice and delay of reinforcement. Journal of the Experimental Analysis of Behavior 10, 67-74.
Cicchetti, C. and Dubin, J. (1994) A micro-econometric analysis of risk-aversion and the decision to self-insure. Journal of Political Economy 102, 169-186.
Cook, P. I., and Clotfelter, C. T. (1993) The peculiar scale economies of lotto. American Economic Review 83, 634-643.
Deluty, M.Z., Whitehouse, W.G., Millitz, M. and Hineline, P. (1983) Self-control and commitment involving aversive events. Behavioral Analysis Letters, 3, 213-219.
de Waal, F. (2007) Chimpanzee Politics. Johns Hopkins U.
Frederick, S. and Loewenstein, G. (2008) Conflicting motives in evaluations of sequences. Journal of Risk and Uncertainty 37, 221-235. DOI: 10.1007/s11166-008-9051-z.
Frederick, S., Loewenstein, G., and O’Donoghue, T. (2002) Time discounting and time preference: A critical review. Journal of Economic Literature 40, 351-401.
Gardner, H. (1985) The Mind’s New Science: A History of the Cognitive Revolution. Basic.
Garner, W. R. (1970) Good patterns have few alternatives: Information theory’s concept of redundancy helps in understanding the Gestalt concept of goodness. American Scientist 58, 34-42.
Gigerenzer, G. (2005) I think, therefore I err. Social Research 72, 195-218.
Gigerenzer, G., Fiedler, K., & Olsson, H. (2012). Rethinking cognitive biases as environmental consequences. In P. M. Todd, G. Gigerenzer, & the ABC Research Group. Ecological Rationality: Intelligence in the World (pp. 80–110). New York: Oxford University Press.
Glimcher, P. W., Camerer, C., Poldrack, R. A., and Fehr, E. (eds) Neuroeconomics: Decision Making and the Brain. Elsevier.
Green, L., Fry, A., and Myerson, J. (1994) Discounting of delayed rewards: A life-span comparison. Psychological Science 5, 33-36.
Green, L., Myerson, J., Holt, D. D., Slevin, J. R., and Estle, S. J. (2004) Discounting of delayed food rewards in pigeons and rats: Is there a magnitude effect? Journal of the Experimental Analysis of Behavior 81, 39-50.
Green, L., Myerson, J., & Macaux, E. W. (2005). Temporal discounting when the choice is between two delayed rewards. Journal of Experimental Psychology: Learning, Memory, & Cognition 31, 1121-1133.
Haigh, M. and List, J. A. (2005) Do professional traders exhibit myopic loss aversion? An experimental analysis. Journal of Finance 60(1), 523-534.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M., Barr, A., Ensminger, J., Henrich, N. S., Hill, K., Gil-White, F., Gurven, M., Marlowe, F. W., Patton, J. Q., and Tracer, D. (2005) “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behavioral and Brain Sciences 28, 795-855.
Herrnstein, R. (1961) Relative and absolute strengths of response as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior 4, 267-272.
Johnson, E. J., Hershey, J., Meszaros, J., & Kunreuther, H. (1993). Framing, probability distortions, and insurance decisions (pp. 35-51). Springer Netherlands.
Jolls, C., Sunstein, C. R. and Thaler, R. (1998) A Behavioral Approach to Law and Economics, Stanford Law Review 50, 1471-1550.
Jullien, B. and Salanié, B. (2000) Estimating preferences under risk: The case of racetrack bettors. Journal of Political Economy 108(3), 503-530.
Kable, J. W. and Glimcher, P. W. (2007) The neural correlates of subjective value during intertemporal choice. Nature Neuroscience 10, 1625-1633.
Kagel, J. H. and Winkler, R. C. (1972) Behavioral economics: Areas of cooperative research between economics and applied behavioral analysis. Journal of Applied Behavior Analysis 5, 335-342.
Kagel, J.H., Battalio, R.C., Rachlin, H., Green, L., Basmann, R.L., and Klemm, W.R. (1975) Experimental studies of consumer demand behavior using laboratory animals. Economic Inquiry, 13, 22-38.
Kahneman, D. (2003) Maps of bounded rationality: Psychology for behavioral economics. American Economic Review 93, 1449-1475.
Kahneman, D., Slovic, P. and Tversky, A. (1982) Judgment Under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.
Kahneman, D. & Tversky, A. (1984) Choices, values, and frames. American Psychologist 39, 341-350.
Kirby, K. N. (1997) Bidding on the future: Evidence against normative discounting of delayed rewards. Journal of Experimental Psychology: General 126, 54-70.
Knetsch, J. L. (1989) The endowment effect and evidence of nonreversible indifference curves. American Economic Review 79, 1277-1284.
Koch, A. K., & Nafziger, J. (2011). Self‐regulation through Goal Setting. The Scandinavian Journal of Economics, 113(1), 212-227.
Kubovy, M. (1999) On the pleasures of the mind. In Well-Being: The Foundations of Hedonic Psychology, Kahneman, D. Diener, E., and Schwartz, N. (eds). Russell Sage.
Laibson, D. (1997) Golden eggs and hyperbolic discounting. Quarterly Journal of Economics 112, 443-477.
Lengyel, M., & Dayan, P. (2007). Hippocampal contributions to control: the third way. In NIPS (Vol. 20, pp. 889-896).
Loewenstein, G. (1987) Anticipation and the valuation of delayed consumption. The Economic Journal 97, 666-685.
Loewenstein, G. (1988) Frames of mind in intertemporal choice. Management Science 34, 200-214.
Loewenstein, G. and Prelec, D. (1993) Preferences for sequences of outcomes. Psychological Review 100, 91-108.
Logue, A. W., Pena-Correal, T. E., Rodriguez, M. L., and Kabela, E. (1986) Self-control in adult humans: Variations in positive reinforcer amount and delay. Journal of the Experimental Analysis of Behavior 46, 113-127.
Luo, S., Giragosian, L., Ainslie, G., Monterosso, J. (2009) Behavioral and neural evidence of incentive bias for immediate rewards relative to preference-matched delayed rewards. Journal of Neuroscience, 29(47):14820-14827. PMCID: PMC2821568
MacKinnon, I. (1995) Lottery Loser Killed Himself for Just £27. The Independent June 16.
Mazur, J. E. (1997) Choice, delay, probability, and conditioned reinforcement. Animal Learning and Behavior 25, 131-147.
McClure, S. M., Laibson, D. I., Loewenstein, G., and Cohen, J. D. (2004) The grasshopper and the ant: Separate neural systems value immediate and delayed monetary rewards. Science 306, 503-507.
Montague, P. R. and Berns, G. S. (2002) Neural economics and the biological substrates of valuation. Neuron 36, 265-284.
Navarick, D.J. (1982) Negative reinforcement and choice in humans. Learning and Motivation 13, 361-377.
Novemsky, N. and Kahneman, D. (2005) The boundaries of loss aversion. Journal of Marketing Research 42, 119-128.
Odean, T. (1998) Are investors reluctant to realize their losses? Journal of Finance 53, 1775-1798.
O’Donoghue, T. and Rabin, M. (2001) Choice and procrastination. The Quarterly Journal of Economics 116, 121-160.
Pertusa, A., Frost, R. O., Fullana, M. A., Samuels, J., Steketee, G., Tolin, D. Sanjaya S., Leckman, J. F.& Mataix-Cols, D. (2010). Refining the diagnostic boundaries of compulsive hoarding: a critical review. Clinical psychology review, 30(4), 371-386.
Posner, R. (1998) Rational choice, behavioral economics, and the law. Stanford Law Review 50, 1551-1575.
Prelec, D. and Bodner, R. (2003) Self-signaling and self-control. In, Time and Decision: Economic and Psychological Perspectives on Intertemporal Choice, G. Loewenstein, D. Read, and R. Baumeister (eds). Russell Sage, pp. 277-298.
Prelec, D. and Loewenstein, G. F. (1998) The red and the black: Mental accounting of savings and debt. Marketing Science 17, 4-28.
Rachlin, H. (1985) Pain and behavior. Behavioral and Brain Sciences 8, 43-83.
Rangel, A., Camerer, C., and Montague, P. R. (2008) A framework for studying the neurobiology of value-based decision making. Nature Reviews 9, 1-12.
Read, D., Frederick, S., & Scholten, M. (2011). Outcome framing in intertemporal choice: the drift model. Working paper: Available at SSRN 1933099
Robbins, L. (1935/1984) An Essay on the Nature and Significance of Economic Science. NYU.
Ross, D., Sharp, C., Vuchinich, R., and Spurrett, D. (2008) Midbrain Mutiny: The Picoeconomics and Neuroeconomics of Disordered Gambling. MIT.
Ryan, R. M., and Deci, E. L. (2000) When rewards compete with nature: The undermining of intrinsic motivation and self-regulation. In Intrinsic and Extrinsic Motivation: The Search for Optimal Motivation and Performance, Sansone, C., and Harackiewicz, J. M. (eds). Academic, 13-54.
Samuelson, W., & Zeckhauser, R. (1988). Status quo bias in decision making. Journal of Risk and Uncertainty, 1(1), 7-59.
Savage, L. J. (1954) The Foundations of Statistics. 2d ed. Wiley.
Scholten, M. and Read, D. (2010) The psychology of intertemporal tradeoffs. Psychological Review 117(3), 925-944.
Schwartz, B., Ward, A., Monterosso, J., Lyubomirsky, S., White, K., and Lehman, D. R. (2002) Maximizing versus satisficing: Happiness is a matter of choice. Journal of Personality and Social Psychology 83, 1178-1197.
Shefrin, H. and Thaler, R. (1978) An Economic Theory of Self-Control, Stanford, Calif.: National Bureau of Economic Research, Working Paper No. 208.
Silvia, P. J. (2001) Interest and interests: The psychology of constructive capriciousness. Review of General Psychology 5, 270-290.
Slovic, P., Fischhoff, B, and Lichtenstein, S. (1977) Behavioral decision theory. Annual Review of Psychology 28, 1-39.
Slovic, P., and Tversky, A. (1974) Who accepts Savage’s axioms? Behavioral Science 19, 368-373.
Skinner, B. F. (1938) The Behavior of Organisms. Appleton-Century-Crofts.
Strahilevitz, M. and Loewenstein, G. (1998) The effect of ownership history on the valuation of objects. Journal of Consumer Research 25, 276-289.
Thaler, R. (1981) Some empirical evidence on dynamic inconsistency. Economics Letters 8, 201-207.
Thaler, R (1985) Mental accounting and consumer choice. Marketing Science 4, 199-214.
Thaler, R. (1990) Anomalies: Saving, fungibility, and mental accounts. Journal of Economic Perspectives 4, 193-205.
Thaler, R. (1992) The Winner’s Curse: Paradoxes and Anomalies of Economic Life. Princeton U.
Thaler, R. H. (1999). Mental accounting matters. Journal of Behavioral Decision Making, 12(3), 193-206.
Thaler, R. and Shefrin, H. (1981) An economic theory of self-control. Journal of Political Economy 89,392-406.
Titchener, E.B. (1909/1926) Lectures on the Experimental Psychology of the Thought Processes. New York: Macmillan.
Tversky, A. and Kahneman, D. (1973) Availability: A heuristic for judging frequency and probability. Cognitive Psychology 5, 207-232.
Tversky, A and Kahneman, D. (1983) Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review 90, 293-315.
Woolverton, W. L., Myerson, J., & Green, L. (2007). Delay discounting of cocaine by rhesus monkeys. Experimental and Clinical Psychopharmacology, 15(3), 238-244.
Zhu, L. and Gigerenzer, G. (2006) Children can solve Bayesian problems: The role of representation in mental computation. Cognition 98, 287-308.