Abstract of target article: (copyright 2013 Cambridge University Press)
Why does performing certain tasks cause the aversive experience of mental effort and concomitant deterioration in task performance? One explanation posits a physical resource that is depleted over time. We propose an alternative explanation that centers on mental representations of the costs and benefits associated with task performance. Specifically, certain computational mechanisms, especially those associated with executive function, can be deployed for only a limited number of simultaneous tasks at any given moment. Consequently, the deployment of these computational mechanisms carries an opportunity cost – that is, the next-best use to which these systems might be put. We argue that the phenomenology of effort can be understood as the felt output of these cost/benefit computations. In turn, the subjective experience of effort motivates reduced deployment of these computational mechanisms in the service of the present task. These opportunity cost representations, then, together with other cost/benefit calculations, determine effort expended and, everything else equal, result in performance reductions. In making our case for this position, we review alternative explanations for both the phenomenology of effort associated with these tasks and for performance reductions over time. Likewise, we review the broad range of relevant empirical results from across sub-disciplines, especially psychology and neuroscience. We hope that our proposal will help to build links among the diverse fields that have been addressing similar questions from different perspectives, and we emphasize ways in which alternative models might be empirically distinguished.
Veterans Affairs Medical Center, Coatesville PA, USA
University of Cape Town, South Africa
Published in Behavioral and Brain Sciences 36, 679–680, 2013
Self-control is a necessary component of subjective effort, but it depends only on farsighted motivation, with no additional, depletable resource. The aversiveness of boring tasks probably comes from their interference with endogenous reward, a new and potentially controversial concept. The self-control needed to stick with any kind of aversive experience increases as the values of the competing motives draw closer together.
Kurzban et al. have ably demonstrated that mental fatigue from doing repetitive tasks is a motivational phenomenon, rather than a matter of resource depletion. In doing so they propose an alternative answer to the basic question of why boredom is aversive: To be adaptive in the evolutionary sense, boredom is said to be a sort of meter that warns us about wasting our attention. But this could be said about nonreward in general. Hard evidence is sparse for their hypothesis about calculating cost-effectiveness, as it is for an alternative possibility that I propose: Monotonous tasks interfere with a baseline level of reward that does not depend on external contingencies, and these tasks require increasing amounts of self-control as this interference continues. I would argue that the latter model offers a more general account of the mental effort required for unrewarding activities.
The remarkable feature of monotonous tasks is that they seem to be worth less than nothing – that is, less than we would get by sitting idle. The authors assign the latter option (daydreaming) a utility of 2 units (sect. 2.4.1) – but what generates those 2 units of reward? Examination of this question can tell us something about our basic mental economies. People do not normally experience aversively low levels of reward in the absence of external sources. During idleness – in what is being called the “default mode” (Spreng et al. 2009) – for example, in daydreaming, we seem to generate our own reward. Challenging tasks facilitate this process regardless of the external incentives for them, as in Csikszentmihalyi’s “flow” (1990), whereas boring tasks are characterized by a structured attention that restricts it. When even dull experimental tasks are made more challenging, they become less depleting, as in the authors’ example of Converse and DeShon (2009; sect. 3.2.2). Conversely, the extent to which external incentives can reduce fatigue in monotonous tasks is limited; even an awareness of watching for enemy warplanes does not prevent it (Mackworth 1948).
The phenomenology of this reward is familiar, but its causal properties are theoretically problematic. The defining feature of reward is that it selects for behaviors. In conventional utility theory a person cannot generate her own reward, since that would short-circuit the process that constrains her to behave adaptively. On the other hand, the model on which all motivation comes from the expectation of some event-constrained reward is hard to fit to human experience. For one thing, activities that do not perceptibly lead to primary rewards should extinguish – not only doing math problems in experiments, but also playing with smartphones and daydreaming (sect. 2.4.1). For another, the rewardingness of the outcomes of many tasks (in the extreme, puzzles or solitaire) is related more to the nature of the tasks themselves than to whatever events they might predict. Early hypotheses that such examples are based on long chains of secondary reward and broad generalizations (e.g., Dollard & Miller 1950) have not been subjected to later scrutiny, perhaps because they have seemed to be the only possible way that a behavioral economy could be designed. Even precise modern models of reward, such as temporal difference theory (e.g., Daw & Doya 2006), depict a process constrained to seek external events.
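That last point can be made concrete with a minimal TD(0) value update in its standard textbook form (a sketch of my own construction, not anything from the target article): the prediction error that drives learning is computed from an externally delivered reward signal, so the agent has no way within the scheme to mint reward for itself.

```python
# Minimal TD(0) value update (standard form; illustrative sketch only).
# The point: the prediction error delta depends on an externally delivered
# reward r -- the agent cannot generate its own reward within this scheme.

alpha, gamma = 0.1, 0.9          # learning rate, discount factor
V = {"s": 0.0, "s_next": 0.0}    # value estimates for two states

def td_update(s, r, s_next):
    # delta is driven entirely by the external reward signal r
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

# A unit of reward arrives from the environment; the value of "s" is nudged up.
delta = td_update("s", r=1.0, s_next="s_next")
```

Nothing in the update rule gives the learner a lever on `r` itself, which is exactly the externally constrained design feature at issue.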
And yet it is possible that the great imaginative power that allows people to outthink our evolutionary predecessors has an inseparable, nonadaptive side feature: the ability to coin reward. I have argued elsewhere (Ainslie 2013) that the utility-based decision sciences (for example, economics and behavioral and evolutionary psychology) should no longer assume all mental reward to be secondary to some innately determined primary. They should at least allow for the possibility that people can generate reward arbitrarily, limited only by our appetites for the processes involved (variously, an emotion, or curiosity, interest, suspense…). In this approach, the short-circuiting of the selection process is prevented, but only partially, by the incentive not to waste appetite. The hyperbolically based urge for premature gratification of appetites creates a countervailing incentive to link this gratification to external events that are singular and surprising – to bet on them, as it were – as limited occasions for this endogenous reward. Accordingly, someone who invests interest in a game of solitaire must protect this interest by not cheating, and someone who uses a novel to occasion reward must avoid reading ahead. The experience of daydreaming suggests that to some extent we can do without current occasions for reward; but the pathologies of sensory deprivation (Zubeck 1973) demonstrate that even endogenous reward deteriorates without some external occasions. Monotonous tasks accelerate this deterioration. Their defining feature is that they require the person to attend to bad (profuse, unsurprising) occasions for endogenous reward.
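For readers unfamiliar with it, the hyperbolic shape referred to here is the standard single-parameter discount function (as in Ainslie 2012):

```latex
V = \frac{A}{1 + kD}
```

where $V$ is the present value of a reward of amount $A$ available at delay $D$, and $k$ indexes the steepness of discounting. Because value rises sharply as $D$ approaches zero, imminently available gratification is disproportionately tempting – the urge for premature gratification invoked above.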
The ability to coin endogenous reward may interfere with adaptive goal seeking less than might be expected, because optimal occasioning of reward overlaps extensively with the realistic performance of instrumental tasks: Benchmarks of accomplishment also make excellent occasions, in addition to their (predictive, extinguishable) secondary rewarding effects. By the same token, however, people have an incentive to believe in the instrumental effectiveness of tasks that offer good occasions, a possible explanation for the stubborn inefficiency of many ostensibly productive activities, from “X-inefficiency” (March 1978) to pathological gambling. The short span of human evolution should only have required the endogenously rewarding side effect of imagination to be not too maladaptive.

Kurzban et al. do without a concept of self-control, positing only prioritization of tasks (sect. 2.3); but low-priority tasks should demand mental effort only to the extent that they elicit self-control, and even then only when the self-control requires attention. Despite the concreteness of resource-depletion theories, they are correct on that point. And these theories extend to mental effort in activities that are directly aversive, such as cold pressor endurance or prolonged handgrip, as well as those that are unrewarding because vigilance for performance errors feels like wasting resources (in the present authors’ proposal) or prevents endogenous reward (as I hypothesize). In all these cases adherence to a less immediately rewarding policy should be felt as mental effort only to the extent that it demands ongoing executive function. Forgoing even a strongly motivated activity such as smoking is not experienced as effortful when the person has no doubt that she will succeed (Dar et al. 2005).
The closer the value of the forgone opportunities comes to the motivational basis for the unrewarding activity, the more executive function must be devoted to forestalling contrary urges and weighing whether to continue. (See Ainslie 2012 for my model of self-control. Behavioral economists have recently made related proposals: Fudenberg & Levine 2006; Gul & Pesendorfer 2004.) The aversiveness of sensing inadequate motivation for self-control is therefore an example of a familiar phenomenon, cognitive dissonance (Brehm & Cohen 1962). This mechanism can indeed produce the appearance that “the estimation of opportunity costs gives rise to the phenomenology of mental effort” (sect. 3.3, para. 3), where “opportunity costs [are] equal to the value of the next-best use of … mental processes” (sect. 2.3.1, para. 2). However, the authors’ model implies that subjective effort should be a linear function of forgone opportunity; it is more apt to be an accelerating function up to the point where choice reverses. In either case, mental effort can be accounted for entirely within a motivational model, which I take to be the authors’ main point.
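The contrast between the two candidate functional forms can be sketched numerically (a toy of my own, with arbitrary units and an invented parameterization, not a fitted model from either paper):

```python
# Toy comparison of two candidate effort functions (illustrative only;
# the parameterization is invented, not taken from either paper).
# V_TASK is the fixed motivational value sustaining the unrewarding activity;
# v_alt is the value of the forgone alternative (the opportunity cost).

V_TASK = 10.0

def effort_linear(v_alt, k=0.5):
    # The form implied by Kurzban et al.'s model:
    # effort proportional to the forgone opportunity's value.
    return k * v_alt

def effort_accelerating(v_alt, k=0.5):
    # The alternative suggested here: effort accelerates as v_alt
    # approaches V_TASK, blowing up at the point where choice reverses.
    assert v_alt < V_TASK, "beyond the reversal point the task is abandoned"
    return k * v_alt / (V_TASK - v_alt)

alts = [2.0, 4.0, 6.0, 8.0]
lin = [effort_linear(v) for v in alts]
acc = [effort_accelerating(v) for v in alts]

# Equal increments for the linear form; growing increments as the
# forgone value closes in on the task's value for the accelerating form.
lin_steps = [b - a for a, b in zip(lin, lin[1:])]
acc_steps = [b - a for a, b in zip(acc, acc[1:])]
```

Either shape keeps mental effort entirely inside a motivational model; they differ only in how steeply felt effort should rise as the competing motives converge, which is in principle measurable.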
Ainslie, G. (2012) Pure hyperbolic discount curves predict “eyes open” self-control. Theory and Decision 73:3–34. [doi:10.1007/s11238-011-9272-5]
Ainslie, G. (2013) Grasping the impalpable: The role of endogenous reward in process addictions. Inquiry 56:446–69.
Brehm, J. W. & Cohen, A. R. (1962) Explorations in cognitive dissonance. Wiley.
Converse, P. D. & DeShon, R. P. (2009) A tale of two tasks: Reversing the self-regulatory resource depletion effect. Journal of Applied Psychology 94:1318–24.
Csikszentmihalyi, M. (1990) Flow: The psychology of optimal experience. Harper and Row.
Dar, R., Stronguin, F., Marouani, R., Krupsky, M. & Frenk, H. (2005) Craving to smoke in orthodox Jewish smokers who abstain on the Sabbath: A comparison to a baseline and a forced abstinence workday. Psychopharmacology 183:294–99.
Daw, N. D. & Doya, K. (2006) The computational neurobiology of learning and reward. Current Opinion in Neurobiology 16:199–204.
Dollard, J. & Miller, N. E. (1950) Personality and psychotherapy: An analysis in terms of learning, thinking, and culture. McGraw-Hill.
Fudenberg, D. & Levine, D. (2006) A dual-self model of impulse control. American Economic Review 96:1449–76.
Gul, F. & Pesendorfer, W. (2004) Self-control, revealed preference and consumption choice. Review of Economic Dynamics 7:243–64.
Mackworth, N. H. (1948) The breakdown of vigilance during prolonged visual search. Quarterly Journal of Experimental Psychology 1:6–21.
March, J. G. (1978) Bounded rationality, ambiguity, and the engineering of choice. Bell Journal of Economics 9:587–610.
Spreng, R. N., Mar, R. A. & Kim, A. S. N. (2009) A common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience 21(3):489–510.
Zubeck, J. P. (1973) Behavioral and physiological effects of prolonged sensory and perceptual deprivation: A review. In: Man in isolation and confinement, ed. J. Rasmussen, pp. 9–83. Aldine.