Humans often cooperate in experimental games even when it’s not the strategy that will win them the most money (Ledyard, 1995; Cadsby and Maynes, 1999). They do this in one-shot games, where there is no chance the other could punish them if they took advantage, and in games with anonymous strangers, for whom they have no material reason to care. We see this cooperative tendency typically in the initial rounds of experimental public goods games (e.g., Offerman et al. 1996), but also in real-world common-resource problems (Ostrom 2000). From a naive economic or Darwinian perspective, which expects humans to behave in a mostly rational and self-interested way, this behaviour is puzzling.
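To make the incentive structure concrete, here is a minimal sketch of a linear public goods game. The parameters (group size, endowment, multiplier) are hypothetical and chosen only to illustrate why defection is the payoff-maximising strategy, not taken from any particular experiment:

```python
# Illustrative one-shot linear public goods game (hypothetical parameters).
N = 4            # group size
ENDOWMENT = 20   # tokens per player
MULTIPLIER = 1.6 # pooled contributions are multiplied by this, then shared equally

def payoff(own_contribution, others_contributions):
    """Tokens kept plus an equal share of the multiplied pot."""
    pot = own_contribution + sum(others_contributions)
    return (ENDOWMENT - own_contribution) + MULTIPLIER * pot / N

# Suppose the other three players each contribute their full endowment:
cooperate = payoff(ENDOWMENT, [ENDOWMENT] * (N - 1))  # 32.0
defect    = payoff(0,         [ENDOWMENT] * (N - 1))  # 44.0
```

Because each contributed token returns only MULTIPLIER/N = 0.4 to the contributor, contributing nothing dominates regardless of what others do, even though full contribution by everyone maximises the group total. This is the gap between observed and payoff-maximising behaviour that the rest of this section tries to explain.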
One explanation is that this cooperative behaviour is basically a mistake: either the result of evolutionary adaptation to past environments, or of social heuristics learnt through daily life. What is fitness- or payoff-maximising in those environments gets misapplied to novel environments like the lab. This may also explain human cooperation in modern society, which often occurs in one-off interactions with complete strangers.
The evolutionary maladaptation hypothesis (reviewed in Boyd and Richerson, 2006; El Mouden et al., 2012; Raihani and Bshary, 2015) typically assumes that humans are instead adapted to life in small social groups with high relatedness (e.g., Johnson et al., 2003). However, some authors make a maladaptation argument based on different assumptions and selection pressures, such as cooperation as a signal of parental-investment quality, or as a by-product of the development of deadlier weapons (Phillips, 2015). The hypothesis goes by various names, including the mismatch hypothesis (Hagen and Hammerstein, 2006), the evolutionary legacy hypothesis (Burnham and Johnson, 2005), and the big mistake (Raihani and Bshary, 2015). It is also an old hypothesis; Boyd and Richerson (2006) cite Alexander (1974, 1987), Hamilton (1975), and Tooby and Cosmides (1989) as their earliest sources.
There are three basic ingredients required for the maladaptation hypothesis to work. First, the proximate mechanism of cooperative behaviour needs to be based on rules or dispositions of a general nature, rather than determined on a purely case-by-case basis (Güth and Kliemt, 1998). There seems to be good evidence that humans have domain-specific cognition specifically for social situations (Tooby and Cosmides, 1992). For example, people find it easier to solve a logic problem called the Wason card-selection test if the logic of the problem is framed in terms of social contracts (Sugiyama et al., 2002; Stone et al., 2002). Vervet monkeys also show clearer logic in social than in non-social problems (Cheney and Seyfarth (1990), cited in Boyd and Richerson (2006)). A general-purpose mechanism is also needed when the cause is social heuristics rather than genes. For example, in the dual-process model of decision-making, intuitive behaviour is shaped by social heuristics resulting from the individual’s past experiences (Rand, 2016). Either way, we should expect that humans’ ability to use an appropriate reasoning strategy will depend upon the way in which the game is framed (Haselton et al., 2015), and there is plenty of evidence for that (e.g., Hsu (2003); Cartwright et al. (2019)).
The second ingredient needed is an adaptively relevant ancestral environment that would have selected for the behaviour. What the ‘adaptively relevant environment’ is depends on which trait we’re considering (Burnham, 2013). Typically (though not always), the maladaptation hypothesis is premised on a social environment of humans living in small groups composed mostly of kin. This was true for about 95% of our species’ history (Hill et al. (2011), cited in Rusch (2018)). However, there is some argument (below) about whether relatedness was high enough and migration low enough to have selected for the kinds of behaviours that are observed.
The third ingredient is change that was recent and rapid enough that humans have not yet had the chance to adapt. A nice analogous example is humans’ preferences for salty, fatty and sweet foods (e.g., Burnham and Johnson, 2005). These food preferences would have been adaptive in our evolutionary past, but in the modern environment they are mostly maladaptive. Similarly, it is argued that the cultural changes that have taken place during the past 10,000 years, particularly increased group size and frequent encounters with strangers, have occurred far too quickly for human behaviour to adapt (Barrett et al. (2002) cited in Phillips (2015)).
Evidence for various maladaptation hypotheses comes from manipulation experiments in laboratory games. For example, exposing experimental subjects to pictures of stylised eyes increases cooperative behaviour (the watching-eye effect, Haley and Fessler (2005)). This points to an unconscious mechanism that is alert to cues that others are watching, which is consistent with the indirect reciprocity (reputation) hypothesis. Arguably, there were few circumstances in the ancestral environment that involved truly anonymous interactions (Hagen and Hammerstein, 2006). Therefore, if humans did not evolve in a selective environment of frequent interaction with strangers, then there is no reason to expect them to maximise their payoff in laboratory games with strangers (Hagen and Hammerstein, 2006). Another example comes from a meta-analysis by Balliet et al. (2014) of the body of experimental work. They found that the results from experimental manipulations in laboratory-based games were consistent with the hypothesis of bounded generalised reciprocity, i.e., indirect reciprocity with ingroup members (Yamagishi and Mifune, 2008).
The above discussion concerns genetic maladaptation, but the same general principles apply to the non-genetic case, where learned experience is encoded as social heuristics. Standard rational choice theory assumes that individuals can ‘jump’ to the utility-maximising strategy; but in reality, humans typically arrive at a solution gradually through an adaptive learning process (Güth and Kliemt, 1998). There are examples of this in the review by Cadsby and Maynes (1999), where participants in threshold public goods games (PGGs) seemed to adjust their behaviour over rounds towards the Nash equilibrium.
The best evidence for social heuristics comes from cross-cultural studies, where game behaviour can often be connected to daily-life experience (Henrich et al., 2005). For example, the Orma connected the experimental PGG to harambee, their real-life cultural practice for raising funds for community projects. They correspondingly made higher contributions in the experimental PGG than did comparable cultures without a similar practice.
The cross-cultural studies highlight three points. First, they demonstrate that the cultural environment is important in determining behaviour in one-shot games / initial rounds. Variation between cultures was probably not because intrinsic altruistic preferences varied, but because the cognitive heuristics resulting from the cultural environment varied (Heintz, 2005). Second, framing matters. It was the Orma themselves who started calling the PGG the harambee game, and they behaved accordingly. This mirrors framing effects observed with Western subjects. For example, people will contribute more to a ‘community game’ than a ‘Wall Street game’ (Ross and Ward (1996) and Pillutla and Chen (1999), cited in Henrich et al. (2005)). Third, there are likely multi-layered explanations for behaviour. The real-life harambee includes a punishment element, which makes it likely that the real-life Nash equilibrium is to contribute to the public good. Therefore, Binmore (2005) argued that initial cooperative behaviour in the lab, although a violation of the expectations from game-theoretic analysis in the lab setting, may yet be the result of game-theoretic mechanisms that drove learned behaviour or cultural norms towards that Nash equilibrium in the real world.
Case study: the strong reciprocity debate
Much of the literature I found discussing the maladaptation hypothesis took place in the context of a debate about strong reciprocity. Strong reciprocity is the observation that people will repay favours and punish non-cooperators even in anonymous one-shot encounters with genetically unrelated strangers (key reference: Fehr and Henrich (2003)). The emphasis is typically on the punishment aspect of the behaviour. Strong-reciprocity theorists argued that this behaviour could not be explained as a maladaptation resulting from individual selection, and favoured a cultural group-selection explanation instead. Other authors argued back that maladaptation was a sufficient explanation, or took issue with perceived misunderstandings of the maladaptation hypothesis (one good summary is found in Hagen and Hammerstein (2006)). Therefore, this literature provided me with an opportunity to learn common arguments and misunderstandings that can arise. Below, I highlight five.
(1) Mistake = Misunderstanding? One objection that strong-reciprocity theorists made to the maladaptation argument is that it seemed to imply that experimental subjects misunderstood the game. The maladaptation argument is premised on the idea that, in the adaptively relevant environment, there is a good chance that games are not truly one-shot and anonymous, even if they seem to be. However, post-experiment surveys suggest that subjects really did believe what experimenters told them about the conditions (Fehr and Henrich, 2003). For example, 96% of subjects in an ultimatum game reported that they believed the experimenters’ assurance that their identities would stay anonymous. Further, Fehr and Henrich (2003) interpret the speed with which subjects change strategies in response to changing game conditions as evidence that their behaviour is “mediated by sophisticated, conscious, cognitive acts”, as opposed to the possibility that “a cognitively inaccessible mechanism drives the baseline pattern of reciprocal responses” (but see Rand (2016) regarding how speed of decision affects cooperation).
However, the fact that subjects understand the game and believe it is anonymous does not rule out maladaptation as an explanation. In one memorable analogy, Hagen and Hammerstein (2006) pointed out that people are still aroused by pornography even though they know they will never have contact with any of the subjects.
This made me wonder what, exactly, the proximate mechanism is that can override intellectual understanding. One possible mechanism is happiness, broadly defined (El Mouden et al., 2012). In the case of strong reciprocity, we might feel happy to punish someone we think deserved it, and that warm glow may outweigh the price we paid to do so. The important thing to keep in mind is that, here, happiness is a proximate, not ultimate, goal; it is the means to an inclusive-fitness-maximising end in the context of the adaptively relevant ancestral environment. El Mouden et al. (2012) had a rather nice way of putting it [paraphrased]:
Humans are free to do what they want, but they are not free to want what they want
Another possible proximate mechanism is commitment (Akdeniz and van Veelen, 2021). The commitment mechanism has a particularly clear logic in the ultimatum game, where the proposer’s knowledge of the responder’s commitment to a fair outcome forces the proposer to make a fair proposal. Akdeniz and van Veelen (2021) suggest that a similar logic may also apply to strong reciprocity, e.g., a commitment to punish defectors in a PGG. One thing I find very appealing about the commitment mechanism is that irrational behaviour and ‘mistakes’ are a necessary feature: after all, it’s not really a moral commitment if you change your mind depending on the circumstances (cf. Atran, 2017). Therefore, a commitment mechanism would produce exactly the kind of behaviour we observe.
(2) Poorly tuned yet responsive? The second objection is that there seems to be a contradiction between claiming, on one hand, that cooperative behaviour is poorly tuned to cues about who is a stranger versus kin, and, on the other, observing responsiveness to very subtle cues like fake watching eyes. Fehr and Henrich (2003) state this very strongly: “Indirect reciprocity can only account for cooperation in one-shot encounters if our behavioral rules are not contingent upon the likelihood that our actions will be observed by others”.
One counter-argument is basically that proximate cue-response mechanisms are tuned, but not perfectly (Burnham and Johnson, 2005). For example, regarding indirect reciprocity above, Hagen and Hammerstein (2006) argue that it’s unlikely that many interactions in ancestral environments were truly anonymous, so the response will be poorly tuned to that scenario. In addition, error management theory predicts a bias away from making the more costly error (Haselton et al., 2015). It is plausible that the risk to one’s reputation from cheating once and being observed is greater than the payoff difference that one could earn by cheating.
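The error-management logic can be made concrete with a toy expected-value comparison (my own illustration, with entirely hypothetical numbers, not taken from any of the cited papers):

```python
# Toy error-management comparison (all numbers hypothetical).
# If being caught cheating carries a large reputational cost, a rule biased
# towards cooperating can beat one that trusts imperfect anonymity cues.
def expected_gain_from_cheating(p_observed, cheat_gain, reputation_cost):
    """Expected payoff of cheating relative to cooperating."""
    return cheat_gain - p_observed * reputation_cost

# Even a small chance of being observed flips the sign when the
# reputational stake dwarfs the one-off gain from cheating:
expected_gain_from_cheating(0.05, 10, 500)  # -15.0: cheating doesn't pay
expected_gain_from_cheating(0.05, 10, 100)  # 5.0: cheating pays
```

Under these assumptions, a blanket “behave as if watched” disposition avoids the costlier error, which is consistent with responsiveness to subtle watching cues coexisting with poor tuning to genuine anonymity.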
However, this counter-argument hinges on some assumptions about the adaptively relevant ancestral environment:
(3) The adaptively relevant ancestral environment. The third objection is that the typical assumption — that humans evolved in isolated groups with close kin and known individuals (e.g., Johnson et al. (2003)) — is wrong. Phillips (2015) cites Hill et al. (2011) saying that, in modern hunter-gatherer societies, “most individuals are unrelated and regularly change membership of groups”. Fehr and Henrich (2003) also argue that encounters with strangers were common, pointing to evidence such as the existence of rituals used to bring strangers peacefully into a camp. If the relevant human ancestral environment offered a wide range of social scenarios — including interactions with zero probability of repeating — then it should have selected against the kinds of one-shot, anonymous cooperation we see in the lab.
I have difficulty knowing whether this argument is likely or not without something to quantify it. Ideally, someone would have estimated the likely range of degrees of assortativity and used that to parameterise a good model. So far, I have read one paper, by Rusch (2018), that did something like that. They used the relatedness measured in hunter-gatherer and horticulturalist societies (including that of Hill et al. (2011), cited above) to parameterise a linear one-shot PGG model, and found that cooperation can be maintained in group sizes much larger than previously thought. But most models I’ve read have a fixed scenario, and this question needs a model that looks at scenario variability and some kind of cue-response / phenotypic plasticity.
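To get a rough feel for the numbers involved, here is a textbook Hamilton’s-rule sketch for a linear one-shot PGG. This is a back-of-envelope calculation of my own, not Rusch (2018)’s actual model, and the multiplier value is purely illustrative:

```python
# Back-of-envelope inclusive-fitness threshold for a linear one-shot PGG.
# Standard Hamilton's-rule logic, NOT Rusch (2018)'s model; m is illustrative.
def critical_relatedness(N, m):
    """Minimum within-group relatedness r for contributing to be favoured.

    Contributing 1 unit changes the actor's own payoff by (m/N - 1) and each
    of the other N-1 members' payoffs by m/N. The r-weighted sum is positive
    when r > (N - m) / (m * (N - 1)).
    """
    return (N - m) / (m * (N - 1))

# The required relatedness climbs towards 1/m as groups grow:
critical_relatedness(4, 1.6)   # 0.5
critical_relatedness(40, 1.6)  # ~0.615
```

Even this crude version shows why the debate hinges on measured relatedness: at realistic group sizes the threshold sits well above the relatedness reported for hunter-gatherer bands, which is why Rusch (2018)’s finding that ancestral kinship patterns can nevertheless sustain cooperation in large groups is interesting.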
(4) Regarding kin selection. Strong-reciprocity theorists raised a series of objections against kin-selection explanations specifically.
Fehr and Henrich (2003) argued that behavioural anomalies cannot be the result of misfiring of kin recognition because humans can readily distinguish kin from non-kin. Humans and other animals use three cues to identify kin: (1) physical similarity (including looks and smell); (2) familiarity; and (3) proximity of residence (reviewed in Kurland and Gaulin (2005), cited in Krebs (2015)). However, kin recognition can misfire. Boyd and Richerson (2006) give two examples of misfiring, both cases where raising unrelated children together triggers their incest-avoidance cue: the reluctance of Israeli kibbutz age-mates to marry; and the low reproductive success of Taiwanese minor marriages, where parents arrange a future spouse for their child by adopting an opposite-sex child and rearing the two together. Note, though, that they provided both these examples while writing in support of Fehr and Henrich’s (2003) arguments.
Strong-reciprocity theorists agree that kinship plays a role in cooperation. I’ve read elsewhere that both foragers and nomadic herders build groups based on kinship, both real and imagined (Næss, n.d.), and social psychologists have found that kinship cues increase prosocial behaviour (Krebs, 2015). Rather, it seems that they are questioning whether these cues can misfire consistently and to such a degree that would support human-style cooperation.
Richerson and Boyd (1999) argue that when kin selection is scaled up to large groups, it has very different qualities from what we observe in humans. For example, cooperation in social insects is maintained by high relatedness within groups and produces a sterile worker caste. Further, human cooperation is vulnerable to nepotistic exploitation, not supported by it, as evidenced by crime rates in societies that are organised along kinship lines.
Evidence in support of objections 3 and 4 above also comes from comparisons with nonhuman primates. Other primates have levels of relatedness and migration between groups similar to those of human hunter-foragers, yet they do not show the kind of non-selective altruism suggested by the maladaptation hypothesis, and are instead very sensitive to differences in relatedness (Boyd and Richerson, 2006). Further, other primates do not mistakenly cooperate in novel social environments, e.g., when forced into larger social groups in zoos (Fehr and Henrich, 2003). Fehr and Henrich (2003) write: “For the same reason that humans mistakenly cooperate in the modern context, the maladaptation hypothesis predicts that nonhuman primates should ‘mistakenly’ cooperate in such novel social environments”. I note that it is possible that this literature is now out of date and that cooperation with strangers does occur in other primates (e.g., Schmelz et al. (2017), to read), though obviously not to the degree seen in humans. This highlights that a good explanation of human cooperation should also explain why the same degree of cooperation did not evolve in other, closely related primates.
(5) Coexistence of different types. The final argument is that the maladaptation hypothesis doesn’t explain the observed inter-individual variability in cooperative behaviour. Specifically, Fehr and Henrich (2003) observe that human populations seem to be split into a certain proportion of individuals who are strong reciprocators and another proportion who are selfish. They ask: “How can the maladaptation account explain the existence of completely self-interested behavior? If the maladaptation account is correct, why do we not observe everybody engaging in strongly reciprocal behavior?”. I haven’t yet looked into the details of this inter-individual variation, but I found a suggestion in Stephens (2005) that there is a cooperative-behaviour polymorphism in humans, and Burnham (2013) cites evidence that behaviour is heritable in the ultimatum game (Wallace et al., 2007) and trust game (Cesarini et al., 2008).
Taken at face value, I didn’t understand Fehr and Henrich’s (2003) argument; however, I think I might understand it if I reinterpret it as being about the need for a group-based explanation. A polymorphism is not a problem for a maladaptation argument: we know from, e.g., social-learning models that a possible evolutionary steady state is a polymorphism between cooperators and punishers (or contributors to a punishment institution). Therefore, there doesn’t seem to be anything about the coexistence of different types that rules out (mal)adaptation. However, some authors (e.g., Sober and Wilson) use the label ‘group selection’ for mechanisms similar to what we might model using Wright’s infinite-island model. The mechanism is still individual selection — and that’s how you model it — but there is a sense in which the islands are groups that are competing. Actually, the mechanism is usually that competition between individuals is exported outside the group (Taylor, 1992; West et al., 2006), and the kinds of social-learning models that can produce polymorphism (a recent example is García and Traulsen (2019)) also work by exporting competition (imitation is global). Therefore, what I think Fehr and Henrich (2003) are saying is that the polymorphism observed in real humans looks similar to what is predicted by ‘group selection’ models; therefore, the likely explanation involves a group-based mechanism.
I took home 5 key messages from the strong-reciprocity debate:
- Regardless of which hypothesis for human cooperation one favours, the cooperative behaviour observed in the experimental games is a maladaptation.
- Maladaptation does not mean that subjects misunderstand how the game works.
- But maladaptation does require an adaptively relevant environment that didn’t select against the behaviour observed.
- A good explanation should also address why other primates do not show the same level of cooperation as humans.
- The qualitative features of human cooperation should be consistent with what we would expect if the proposed mechanism was ‘scaled up’ to modern-sized societies.
Akdeniz, A. and van Veelen, M. (2021). The evolution of morality and the role of commitment, Evolutionary Human Sciences 3: e41.
Atran, S. (2017). Scott Atran on sacred values. URL: http://traffic.libsyn.com/socialsciencebites/AtranMixSesM.mp3?dest-id=92667
Balliet, D., Wu, J. and De Dreu, C. K. (2014). Ingroup favoritism in cooperation: A meta-analysis., Psychological Bulletin 140(6): 1556.
Binmore, K. (2005). Economic man–or straw man?, Behavioral and Brain Sciences 28(6): 817–818.
Boyd, R. and Richerson, P. J. (2006). Solving the puzzle of human cooperation, in S. C. Levinson and P. Jaisson (eds), Evolution and culture: A Fyssen Foundation Symposium, MIT Press, Cambridge, MA, pp. 105–132.
Burnham, T. C. (2013). Toward a neo-Darwinian synthesis of neoclassical and behavioral economics, Journal of Economic Behavior & Organization 90: S113–S127.
Burnham, T. C. and Johnson, D. D. (2005). The biological and evolutionary logic of human cooperation, Analyse & Kritik 27(1): 113–135.
Cadsby, C. B. and Maynes, E. (1999). Voluntary provision of threshold public goods with continuous contributions: experimental evidence, Journal of Public Economics 71(1): 53-73.
Cartwright, E., Stepanova, A. and Xue, L. (2019). Impulse balance and framing effects in threshold public good games, Journal of Public Economic Theory 21(5): 903–922.
El Mouden, C., Burton-Chellew, M., Gardner, A. and West, S. A. (2012). What do humans maximize?, in S. Okasha and K. Binmore (eds), Evolution and Rationality: Decisions, Cooperation and Strategic Behaviour, Cambridge University Press, Cambridge, pp. 23–49.
Fehr, E. (2003). The puzzle of human cooperation, Nature 421: 912.
Fehr, E. and Henrich, J. (2003). Is strong reciprocity a maladaptation?, in P. Hammerstein (ed.), Genetic and Cultural Evolution of Cooperation, MIT Press, Cambridge, Massachusetts, pp. 55–82.
García, J., & Traulsen, A. (2019). Evolution of coordinated punishment to enforce cooperation from an unbiased strategy space. Journal of the Royal Society Interface, 16(156), 20190127.
Güth, W. and Kliemt, H. (1998). The indirect evolutionary approach: Bridging the gap between rationality and adaptation, Rationality and Society 10(3): 377–399.
Hagen, E. H. and Hammerstein, P. (2006). Game theory and human evolution: A critique of some recent interpretations of experimental games, Theoretical Population Biology 69(3): 339–348.
Haley, K. J. and Fessler, D. M. (2005). Nobody’s watching?: Subtle cues affect generosity in an anonymous economic game, Evolution and Human behavior 26(3): 245–256.
Haselton, M. G., Nettle, D. and Murray, D. R. (2015). The evolution of cognitive bias, in D. Buss (ed.), The handbook of evolutionary psychology, Wiley Online Library, Hoboken, NJ, pp. 1–20. URL: https://onlinelibrary.wiley.com/doi/full/10.1002/9781119125563.evpsych241
Heintz, C. (2005). The ecological rationality of strategic cognition, Behavioral and Brain Sciences 28(6): 825–826.
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R., Alvard, M., Barr, A., Ensminger, J., Henrich, N. S., Hill, K., Gil-White, F., Gurven, M., Marlowe, F. W., Patton, J. Q. and Tracer, D. (2005). “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies, Behavioral and Brain Sciences 28(6): 795–815.
Hill, K. R., Walker, R. S., Božičević, M., Eder, J., Headland, T., Hewlett, B., Hurtado, A. M., Marlowe, F., Wiessner, P. and Wood, B. (2011). Co-residence patterns in hunter-gatherer societies show unique human social structure, Science 331(6022): 1286–1289.
Hsu, L.-C. (2003). Effects of framing, group size, and the contribution mechanism on cooperation in threshold public goods and common resources experiments, Academia Economic Papers 31(1): 1–31.
Johnson, D. D., Stopka, P. and Knights, S. (2003). The puzzle of human cooperation, Nature 421(6926): 911–912.
Krebs, D. L. (2015). Prosocial behavior, in V. Zeigler-Hill, L. L. M. Welling and T. K. Shackelford (eds), Evolutionary perspectives on social psychology, Springer, Switzerland, pp. 231–242.
Ledyard, J. O. (1995). Public goods: A survey of experimental research, in J. Kagel and A. Roth (eds), Handbook of Experimental Economics, Princeton University Press, Princeton, chapter 2, pp. 111-193.
Næss, M. W. (n.d.). From hunter-gatherers to nomadic pastoralists: forager bands do not tell the whole story of the evolution of human cooperation.
Nowak, M. A., Sasaki, A., Taylor, C. and Fudenberg, D. (2004). Emergence of cooperation and evolutionary stability in finite populations, Nature 428(6983): 646–650.
Offerman, T., Sonnemans, J. and Schram, A. (1996). Value orientations, expectations and voluntary contributions in public goods, The Economic Journal 106(437): 817–845.
Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic Perspectives, 14(3), 137-158.
Phillips, T. (2015). Human altruism and cooperation explainable as adaptations to past environments no longer fully evident in the modern world, The Quarterly Review of Biology 90(3): 295–314.
Raihani, N. J. and Bshary, R. (2015). Why humans might help strangers, Frontiers in Behavioral Neuroscience 9: 39.
Rand, D. G. (2016). Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation, Psychological Science 27(9): 1192–1206.
Rand, D. G., Tarnita, C. E., Ohtsuki, H. and Nowak, M. A. (2013). Evolution of fairness in the one-shot anonymous ultimatum game, Proceedings of the National Academy of Sciences 110(7): 2581–2586.
Richerson, P. J. and Boyd, R. (1999). Complex societies, Human Nature 10(3): 253–289.
Rusch, H. (2013). What niche did human cooperativeness evolve in, Ethics and Politics 15(2): 82–100.
Rusch, H. (2018). Ancestral kinship patterns substantially reduce the negative effect of increasing group size on incentives for public goods provision, Journal of Economic Psychology 64: 105–115.
Rusch, H. and Luetge, C. (2016). Spillovers from coordination to cooperation: Evidence for the interdependence hypothesis?, Evolutionary Behavioral Sciences 10(4): 284.
Schmelz, M., Grueneisen, S., Kabalak, A., Jost, J. and Tomasello, M. (2017). Chimpanzees return favors at a personal cost, Proceedings of the National Academy of Sciences 114(28): 7462–7467.
Stephens, C. (2005). Strong reciprocity and the comparative method, Analyse & Kritik 27(1): 97–105.
Stone, V. E., Cosmides, L., Tooby, J., Kroll, N. and Knight, R. T. (2002). Selective impairment of reasoning about social exchange in a patient with bilateral limbic system damage, Proceedings of the National Academy of Sciences 99(17): 11531–11536.
Sugiyama, L. S., Tooby, J. and Cosmides, L. (2002). Cross-cultural evidence of cognitive adaptations for social exchange among the shiwiar of ecuadorian amazonia, Proceedings of the National Academy of Sciences 99(17): 11537–11542.
Taylor, P. D. (1992). Inclusive fitness in a homogeneous environment. Proceedings of the Royal Society B, 249(1326), 299-302.
Tooby, J. and Cosmides, L. (1992). The psychological foundations of culture, in J. H. Barkow, L. Cosmides and J. Tooby (eds), The Adapted Mind: Evolutionary Psychology and the Generation of Culture, Oxford University Press, New York, pp. 19–136.
West, S. A., Gardner, A., Shuker, D. M., Reynolds, T., Burton-Chellow, M., Sykes, E. M., Edward, M., Guinnee, M. A. and Griffin, A. S. (2006). Cooperation and the scale of competition in humans. Current Biology, 16(11), 1103-1106.
Yamagishi, T. and Mifune, N. (2008). Does shared group membership promote altruism? Fear, greed, and reputation, Rationality and Society 20(1): 5–30.