<h2>Visiting Hisashi Ohtsuki (2023-04-29)</h2>
<p>For the past 5 weeks, I’ve been visiting Hisashi Ohtsuki at SOKENDAI in Japan to work on our evolutionary game theory project. We’ve been collaborating with Hisashi over the past couple of years, exploring questions related to the evolution of cooperation using a novel mathematical framework that accounts for higher-order genetic associations. This work resulted in a <a href="https://doi.org/10.1038/s41598-022-24590-y">publication</a> a few months ago.</p>
<p>We’ve been putting the final touches on our extension of the higher-order genetic associations approach to games with many different strategies, and I’ll be here for another month so we can explore some new ideas we have for modelling homophilic group formation.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2023/04/messy_board.jpg">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2023/04/messy_board.jpg" alt="Working on these equations together helped us gain an intuition for the new formulations we've been creating." />
</a>
<figcaption><span>Working on these equations together helped us gain an intuition for the new formulations we've been creating.</span></figcaption>
</figure>
<p>Hisashi’s group is interested in all kinds of theoretical problems in ecology and evolution, so I gave a talk showcasing some of the conservation/biodiversity work we do in the Chisholm Lab.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2023/04/presentation.jpg">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2023/04/presentation.jpg" alt="Presenting some of our conservation themed work to Hisashi's lab." />
</a>
<figcaption><span>Presenting some of our conservation themed work to Hisashi's lab.</span></figcaption>
</figure>
<p>Apart from that, I brought my Brompton with me, and I’ve been having a lot of fun on the weekends exploring the bay area.</p>
<figure style="max-width: 700px; margin: auto;">
<a href="/wp-content/uploads/2023/04/brompton.jpg">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2023/04/brompton.jpg" alt="" />
</a>
</figure>

<h2>Evolution of cooperation (2022-12-01)</h2>
<p>When people play public goods games in lab-based experiments,
they often cooperate even though it goes against their self interest.
Many authors have pointed out the ways in which such experiments can be unrealistic,
e.g., by enforcing anonymity between participants,
and have discussed how the incentives in a one-shot encounter differ from when interactions are repeated within the same group.
In our <a href="https://www.nature.com/articles/s41598-022-24590-y">recent paper</a>,
we focused on another common assumption that has received less attention:
the benefits in real-life public goods games are almost never linear.
We were interested in the implications of this nonlinearity for the evolution of human cooperation.</p>
<p>Consider, for example, the method used by Australian Aborigines in southwestern Victoria
to hunt kangaroos.
As discussed in Balme (2018),
in the early nineteenth century,
communal gatherings were associated with mass-scale communal hunting of kangaroos and emus.
People would form a large circle to encircle the animals.
Then they would move inwards, yelling to frighten the animals,
until they were concentrated in a small area where they could be killed.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/12/Kangaroos_Maranoa.jpeg">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/12/Kangaroos_Maranoa.jpeg" alt="Figure 1. Kangaroos in their native grassland habitat. Contributed to Wikipedia by user AWS10." />
</a>
<figcaption><span>Figure 1. Kangaroos in their native grassland habitat. Contributed to Wikipedia by user AWS10.</span></figcaption>
</figure>
<p>Let’s consider what the benefits function of this encircling technique might look like.
A single individual cannot successfully encircle the animals.
As the number of hunters increases, at first the likelihood of success is small,
but the likelihood increases at an increasing rate.
However, once the number of hunters reaches a certain level,
the animals are already surrounded, and additional hunters become increasingly superfluous.
Therefore,
we can expect something like a sigmoid relationship between the number of hunters and the benefit.
In the most extreme case, the benefits function would have a threshold shape:
a minimum number of hunters is needed to surround the prey, and beyond that,
additional hunters make no difference (Fig. 2).
This threshold game is an \(n\)-player generalisation of the 2-player
<a href="https://en.wikipedia.org/wiki/Stag_hunt">Stag Hunt</a>.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/12/threshold_game.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/12/threshold_game.png" alt="Figure 2. A hypothetical threshold public goods game involving a prehistoric hunt. A minimum number of hunters (5) must cooperate to successfully surround and kill an animal or their efforts are wasted. A cooperator's payoff is \(W\) if the threshold is met and \(X\) if it is not (blue line). All members share the meat (\(n = 8\)); therefore, the highest payoff goes to defectors (red line) regardless of whether the hunt is successful (payoff \(Y\)) or not (payoff \(Z\)). However, if an individual is likely to be the pivotal hunter, i.e., the hunter that brings the group above the threshold for a successful hunt, then they are incentivised to cooperate. The incentive is that the payoff to a defector when the threshold is not met is less than the payoff to a cooperator when the threshold is met (\(Z < W\))." />
</a>
<figcaption><span>Figure 2. A hypothetical threshold public goods game involving a prehistoric hunt. A minimum number of hunters (5) must cooperate to successfully surround and kill an animal or their efforts are wasted. A cooperator's payoff is \(W\) if the threshold is met and \(X\) if it is not (blue line). All members share the meat (\(n = 8\)); therefore, the highest payoff goes to defectors (red line) regardless of whether the hunt is successful (payoff \(Y\)) or not (payoff \(Z\)). However, if an individual is likely to be the pivotal hunter, i.e., the hunter that brings the group above the threshold for a successful hunt, then they are incentivised to cooperate. The incentive is that the payoff to a defector when the threshold is not met is less than the payoff to a cooperator when the threshold is met (\(Z < W\)).</span></figcaption>
</figure>
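<p>To make the threshold shape concrete, the payoffs in Fig. 2 can be written as simple functions of the number of cooperating partners. A minimal Python sketch follows; the threshold, benefit, and cost values are illustrative assumptions, not parameter values from the paper.</p>

```python
# A sketch of the threshold-game payoffs in Fig. 2.
# The threshold (tau = 5), benefit, and cost values are illustrative
# assumptions, not parameter values from the paper.
def cooperator_payoff(k_others, tau=5, benefit=2.0, cost=1.0):
    # k_others = number of cooperators among the n-1 *other* group members
    hunters = k_others + 1  # the focal cooperator also hunts
    return (benefit if hunters >= tau else 0.0) - cost

def defector_payoff(k_others, tau=5, benefit=2.0):
    # a defector free-rides on the other cooperators' effort
    return benefit if k_others >= tau else 0.0
```

<p>For a fixed total number of cooperators in the group, defectors always earn more (\(Y > W\) and \(Z > X\)), but a pivotal cooperator who switched to defecting would drop the group below the threshold and earn \(Z = 0\) instead of \(W = 1\).</p>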
<p>We expect that a group who hunts together will include some family members;
however, modelling kin selection in groups is difficult when the benefits function is nonlinear.
In short, unlike 2-player interactions,
where the model can be parameterised with a single relatedness factor \(r\) (i.e., Hamilton’s \(r\)),
in group interactions,
the model must account for all possible combinations of types within the group,
which means accounting for all possible kin + nonkin combinations.
For a more detailed discussion of this, see Allen & Nowak (2016), particularly Eq. 6;
and Van Cleve (2015), particularly Eq. 21.</p>
<p>Hisashi Ohtsuki gave a recipe for describing the dynamics in terms of those kin + nonkin combinations (Ohtsuki, 2014).
Let \(a_k\) and \(b_k\) be the payoff functions for Cooperators and Defectors, respectively,
when \(k\) of the other \(n-1\) group members are Cooperators.
Then the change in the proportion of Cooperators \(p\) in the population is</p>
\[\begin{equation}
\Delta p \propto
\sum_{k=0}^{n-1} \sum_{l=k}^{n-1} (-1)^{l-k} {l \choose k} {n-1 \choose l}
\left[ (1-\rho_1) \rho_{l+1} a_k - \rho_1(\rho_l - \rho_{l+1}) b_k \right].
\tag{1}
\end{equation}\]
<p>where</p>
\[\begin{equation}
\rho_l = \sum_{m=1}^l \theta_{l \rightarrow m} p^m.
\label{rho_l}
\tag{2}
\end{equation}\]
<p>The \(\theta_{l \rightarrow m}\) in Eq. 2 are called the higher-order relatedness coefficients,
and they can be thought of as generalisations of the dyadic relatedness coefficient, Hamilton’s \(r\),
to \(l\) individuals.
Relatedness \(r\) is the probability that,
if we draw 2 individuals without replacement from the group,
they will share a common ancestor and therefore their strategy will be identical by descent.
\(\theta_{l \rightarrow m}\) is the probability that,
if we draw \(l\) individuals without replacement from the group,
they will share exactly \(m\) common ancestors.</p>
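<p>Given numerical values for the \(\theta_{l \rightarrow m}\), Eqs. 1 and 2 are mechanical to evaluate. Here is a minimal Python sketch; the convention \(\rho_0 = 1\) and the caller-supplied \(\theta\) table are assumptions of the sketch.</p>

```python
from math import comb

def rho(l, p, theta):
    # Eq. 2: rho_l = sum_{m=1}^{l} theta[l][m] * p^m
    # (rho_0 = 1 is taken as a convention here -- an assumption of this sketch)
    if l == 0:
        return 1.0
    return sum(theta[l][m] * p**m for m in range(1, l + 1))

def delta_p_direction(p, n, a, b, theta):
    # Eq. 1, up to the positive proportionality factor:
    # sum over k and l of (-1)^(l-k) C(l,k) C(n-1,l)
    #   * [ (1 - rho_1) rho_{l+1} a_k - rho_1 (rho_l - rho_{l+1}) b_k ]
    rho1 = rho(1, p, theta)
    total = 0.0
    for k in range(n):          # k = 0, ..., n-1
        for l in range(k, n):   # l = k, ..., n-1
            w = (-1) ** (l - k) * comb(l, k) * comb(n - 1, l)
            total += w * ((1 - rho1) * rho(l + 1, p, theta) * a(k)
                          - rho1 * (rho(l, p, theta) - rho(l + 1, p, theta)) * b(k))
    return total
```

<p>As a sanity check, for \(n = 2\) with clonal pairs (\(\theta_{2 \rightarrow 1} = 1\)), the expression reduces to \(p(1-p)(a_1 - b_0)\): a cooperator always meets a cooperator and a defector always meets a defector.</p>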
<p>However, Ohtsuki (2014) did not provide a general method for obtaining the \(\theta_{l \rightarrow m}\) parameter values.
In our paper, we created 3 homophilic group-formation models that allowed us to parameterise
Eqns. 1 and 2 (Fig. 3).
In short, we assumed that groups are formed sequentially by current members attracting/recruiting new members.
We modelled homophily (the tendency to attract similar others) as an exogenous parameter,
and when homophily is high, new members are more likely to be kin of an existing member
and thus identical by descent.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/12/group_formation.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/12/group_formation.png" alt="Figure 3. Examples of a group of five individuals forming according to the rules of the three homophilic group-formation models: (a) leader driven, (b) members attract, and (c) members recruit. In the leader-driven model, only the leader recruits/attracts new members, and they recruit/attract a kin with probability \(h\) and nonkin with probability \(1-h\). In the members-attract model, every member has an equal chance to attract a new member who is kin, but nonkin are also attracted to the group itself with constant collective weighting that has a negative relationship with \(h\). In the members-recruit model, every member has an equal chance to recruit the next new member, and they recruit a kin with probability \(h\) and nonkin with probability \(1-h\)." />
</a>
<figcaption><span>Figure 3. Examples of a group of five individuals forming according to the rules of the three homophilic group-formation models: (a) leader driven, (b) members attract, and (c) members recruit. In the leader-driven model, only the leader recruits/attracts new members, and they recruit/attract a kin with probability \(h\) and nonkin with probability \(1-h\). In the members-attract model, every member has an equal chance to attract a new member who is kin, but nonkin are also attracted to the group itself with constant collective weighting that has a negative relationship with \(h\). In the members-recruit model, every member has an equal chance to recruit the next new member, and they recruit a kin with probability \(h\) and nonkin with probability \(1-h\).</span></figcaption>
</figure>
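<p>To give a flavour of where the \(\theta_{l \rightarrow m}\) come from, here is a deliberately simplified Monte Carlo sketch loosely in the spirit of the members-recruit model. The recruitment rule below is an assumption of this sketch; the paper derives the \(\theta\) coefficients analytically from the actual group-formation models.</p>

```python
import random

def sample_ancestors(n, h, rng):
    # Simplified members-recruit-style group formation: the group grows one
    # member at a time; a uniformly chosen current member recruits the
    # newcomer, who is kin (same ancestral lineage) with probability h and
    # founds a new lineage otherwise. This rule is an assumption of this
    # sketch, not the paper's exact model.
    lineages = [0]
    next_lineage = 1
    while len(lineages) < n:
        recruiter = rng.choice(lineages)
        if rng.random() < h:
            lineages.append(recruiter)     # kin of the recruiter
        else:
            lineages.append(next_lineage)  # unrelated newcomer
            next_lineage += 1
    return lineages

def estimate_theta(n, l, m, h, trials=10000, seed=0):
    # Monte Carlo estimate of theta_{l -> m}: the probability that l members
    # drawn without replacement share exactly m distinct ancestors.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        draw = rng.sample(sample_ancestors(n, h, rng), l)
        hits += len(set(draw)) == m
    return hits / trials
```

<p>At the extremes the estimate is exact: with \(h = 1\) every draw shares a single common ancestor (\(\theta_{l \rightarrow 1} = 1\)), and with \(h = 0\) all members are unrelated (\(\theta_{l \rightarrow l} = 1\)).</p>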
<p>We were interested in the question of how cooperation in non-linear games first arose.
We know that, for a benefits function shaped like the hunting examples above,
provided the function and game have suitable parameter values,
then once the number of cooperators in the population is high enough,
a coexistence between cooperators and defectors is evolutionarily stable.
Peña et al. (2014) provides a particularly beautiful way to analyse this mathematically.
However, the ancestral state was presumably a population of all defectors,
and it can also be shown that cooperators cannot invade a population of all defectors,
which raises the question of how cooperation got started in the first place.</p>
<p>To understand why cooperators can persist but cannot invade,
we can take the threshold game scenario as an example (Fig. 2).
Even if you are in a group of complete strangers (no kinship or friendship incentives to cooperate),
the interaction is anonymous (no reputational incentives),
and you know you will never meet these people again (no reciprocity or fear-of-punishment incentives),
it can still make sense to cooperate in a threshold game.
Cooperation may be in your self interest
if you know that enough others will cooperate because
your contribution might be the one to push the public-goods benefit above the threshold.
However, if cooperation is rare in the population,
then few others are likely to cooperate,
and therefore it never makes sense to cooperate because your contribution is unlikely to make a difference.
This implies that the first cooperators could never gain a foothold in the population and thus cooperation could never evolve.</p>
<p>However, the reasoning above assumes groups are formed at random with non-family members / strangers.
In our paper, we showed that if groups in the past tended to include family members,
then cooperation could evolve.
If instead of being grouped with strangers,
you are in a group with family members, then if you are a cooperator,
your fellow group members are likely to be cooperators as well because they share your genes.
This positive assortment between cooperators means that cooperation can gain a foothold in the population,
and cooperation can evolve.</p>
<p>In addition, once cooperation has evolved by kin selection,
then even if circumstances change and recruitment of strangers to the group becomes more common,
cooperation can nevertheless persist.
Whether or not it persists depends on how much homophily is lost and on the game’s parameter values
(Fig. 4),
but it is possible for cooperation to persist in some circumstances even if homophily is lost altogether
(Fig. 4, point D).</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/12/main_fig.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/12/main_fig.png" alt="Figure 4. Two examples of how genetic homophily affects the evolutionary dynamics in our model, showing possible trajectories of cooperation as human homophily decreased over time due to changing social environments (blue lines). The evolutionary dynamics separates into qualitatively different regimes depending on the homophily level \(h\): Cooperators cannot persist (dark shading), Defectors can both invade and persist (red shading), and Cooperators can invade (blue shading). In the ancestral past, homophily was high (point A), which allowed Cooperators to invade (B). As homophily decreased (decreasing \(h\)), Cooperation persisted even into the region where it could not invade (C). Depending on the parameter values, Cooperation can either persist even if homophily disappears entirely (D), or Cooperation will be lost below a certain level of homophily (E)." />
</a>
<figcaption><span>Figure 4. Two examples of how genetic homophily affects the evolutionary dynamics in our model, showing possible trajectories of cooperation as human homophily decreased over time due to changing social environments (blue lines). The evolutionary dynamics separates into qualitatively different regimes depending on the homophily level \(h\): Cooperators cannot persist (dark shading), Defectors can both invade and persist (red shading), and Cooperators can invade (blue shading). In the ancestral past, homophily was high (point A), which allowed Cooperators to invade (B). As homophily decreased (decreasing \(h\)), Cooperation persisted even into the region where it could not invade (C). Depending on the parameter values, Cooperation can either persist even if homophily disappears entirely (D), or Cooperation will be lost below a certain level of homophily (E).</span></figcaption>
</figure>
<p>So why do people cooperate in lab-based public goods games?
A lot of attention has been paid to the ways in which lab-based games are unrealistic:
in daily life we often interact with people we will meet again and are rarely truly anonymous,
so the social heuristics we use in daily life lead us astray in this artificial environment.
We emphasise that another way lab games are unrealistic is that people are not used to playing linear games,
and we speculate that this might explain some of the cooperative behaviour.</p>
<p>There is some evidence supporting the idea that people in lab-based games behave as though they’re playing a nonlinear game.
People generally prefer to condition their contributions on the level of contribution from others
(Fischbacher et al 2001, Chaudhuri 2011, Thöni & Volk 2018),
which only makes sense from a self-interested perspective if the game is nonlinear.
They will even do this when playing against a computer,
which suggests that this behaviour isn’t purely about a sense of fairness or caring for the welfare of others
(Burton-Chellew et al. 2016).
Chat logs from computer-networked games also reveal a common misperception of linear games as some type of coordination problem
(Cox & Stoddard 2018) like a threshold game.
Thus, it seems likely that some people are just genuinely confused about what the self-interested payoff-maximising strategy is
when the game is linear, and they expect payoffs similar to a nonlinear game.</p>
<p>Our idea that people cooperate because they are “confused” is similar to the
evolutionary maladaptation hypothesis; however, there are some key differences as well.
Roughly speaking,
the evolutionary maladaptation hypothesis is the idea that when humans cooperate with strangers,
we basically do so because our behavioural programming mistakes those strangers for kin
(Burnham et al. 2005, Hagen & Hammerstein 2006, El-Mouden et al. 2012).
For most of our evolutionary history,
we have lived in small groups composed mostly of relatives.
In that environment,
indiscriminately cooperating with others around you was a good strategy because chances were they were your relatives.
However, our social environment has changed very rapidly recently, in evolutionary terms,
so that now we often interact with nonkin, and evolution hasn’t yet had a chance to catch up.
Thus our behaviour is “maladaptive” because cooperating with relatives provided inclusive fitness benefits,
whereas cooperating indiscriminately with strangers does not.</p>
<p>In contrast, in our model, cooperating with strangers is not maladaptive.
Although past kin selection is needed to explain how cooperation first got started,
once it is established in the population,
cooperating with strangers can be in one’s self interest.</p>
<p>Our model also implies a different narrative about how/why cooperation persisted as humans transitioned from
mostly interacting with family to interacting with nonkin.
In the evolutionary maladaptation hypothesis,
cooperation was extended to nonkin maladaptively due to, e.g., rapidly increasing population size.
This seems to imply somehow that cooperating with nonkin was an “easy” mistake to make.
In our model, cooperation persisted because it was evolutionarily stable.
We expect that it might have been quite challenging to extend cooperation
to nonkin because that means overcoming kin bias and a suspicion of strangers or outgroup members.
Our view seems to sit more easily with cross-cultural empirical studies showing that cooperative behaviour
with strangers/nonkin is not as universal in small-community societies
as it is in societies with high market integration, etc. (Henrich et al. 2005, Henrich et al. 2010).</p>
<p>Our work is published now in <a href="https://www.nature.com/articles/s41598-022-24590-y">Scientific Reports</a>,
and we are currently working on extending these modelling techniques from situations
with only 2 strategies to many strategies (Cooperate, Defect, Coordinate, Punish, etc.).</p>
<h3>References</h3>
<p>Allen, B. and Nowak, M. A. (2016). There is no inclusive fitness at the level of the individual, Current Opinion in Behavioral Sciences 12:122-128.</p>
<p>Balme, J. (2018). Communal hunting by Aboriginal Australians: Archaeological and ethnographic evidence. In Manipulating Prey: Development of Large-Scale Kill Events Around the Globe, eds Carlson, K. & Bement, L., University Press of Colorado, Boulder, Colorado, 42-62.</p>
<p>Burnham, T. C. & Johnson, D. D. (2005). The biological and evolutionary logic of human cooperation. Anal. Kritik 27, 113-135.</p>
<p>Burton-Chellew, M. N., El Mouden, C. & West, S. A. (2016). Conditional cooperation and confusion in public-goods experiments. Proceedings of the National Academy of Sciences 113:1291-1296.</p>
<p>Chaudhuri, A. (2011). Sustaining cooperation in laboratory public goods experiments: A selective survey of the literature. Exp. Econ. 14, 47-83.</p>
<p>Cox, C. A. & Stoddard, B. (2018). Strategic thinking in public goods games with teams. J. Public Econ. 161, 31-43.</p>
<p>El-Mouden, C., Burton-Chellew, M., Gardner, A. & West, S. A. (2012). What do humans maximize? In Evolution and Rationality: Decisions, Cooperation and Strategic Behaviour, eds Okasha, S. & Binmore, K., Cambridge University Press, Cambridge, 23-49.</p>
<p>Fischbacher, U., Gächter, S. & Fehr, E. (2001). Are people conditionally cooperative? Evidence from a public goods experiment. Econ. Lett. 71, 397-404.</p>
<p>Hagen, E. H. & Hammerstein, P. (2006). Game theory and human evolution: A critique of some recent interpretations of experimental games. Theor. Popul. Biol. 69, 339-348.</p>
<p>Henrich, J. et al. (2005) “Economic man” in cross-cultural perspective: Behavioral experiments in 15 small-scale societies. Behav. Brain Sci. 28:795-815.</p>
<p>Henrich, J., et al. (2010). Markets, religion, community size, and the evolution of fairness and punishment. Science, 327(5972):1480-1484.</p>
<p>Ohtsuki, H. (2014). Evolutionary dynamics of n-player games played by relatives. Philosophical Transactions of the Royal Society B: Biological Sciences 369, 20130359.</p>
<p>Peña, J., Lehmann, L. & Nöldeke, G. (2014). Gains from switching and evolutionary stability in multi-player matrix games. Journal of Theoretical Biology 346:23-33.</p>
<p>Thöni, C. & Volk, S. (2018). Conditional cooperation: Review and refinement. Econ. Lett. 171, 37-40.</p>
<p>Van Cleve, J. (2015). Social evolution and genetic interactions in the short and long term. Theoretical Population Biology 103:2-26.</p>

<h2>Example numerical analysis of replicator dynamics with more than 2 strategies (2022-11-29)</h2>
<p>Recently, I set a task for a student to use Python to analyse the replicator dynamics of a game with three strategies,
including using the Jacobian to determine the stability of the steady state.
The purpose of this blog post is to share the solution in case that’s useful to someone.</p>
<h3>The game</h3>
<p>For this task,
we will consider the replicator dynamics of a 2-player game with 3 strategies.</p>
<p>Recall the general equation for the replicator dynamics is</p>
\[\begin{equation}
\dot{p}_i = p_i (f_i - \bar{f})
\end{equation}\]
<p>where \(p_i\) is the proportion of \(i\)-strategists in the population,
\(f_i\) is the fitness effect of strategy \(i\),
and \(\bar{f}\) is the average fitness in the population.</p>
<p>The fitness effect is the expected payoff</p>
\[\begin{equation}
f_i = \sum_j p_j \pi(i \mid j)
\end{equation}\]
<p>where \(\pi(i \mid j)\) is the payoff to an \(i\)-strategist who has been paired against a \(j\)-strategist.
The average fitness is</p>
\[\begin{equation}
\bar{f} = \sum_j f_j p_j.
\end{equation}\]
<p>For this example, the payoffs are given by the matrix</p>
\[\begin{equation}
\pi =
\begin{pmatrix}
0 & 1 & 4 \\
1 & 4 & 0 \\
-1 & 6 & 2 \\
\end{pmatrix},
\end{equation}\]
<p>where \(\pi(i \mid j)\) is given by element \(\pi_{i,j}\).
For example,
when strategy 1 plays against strategy 2, strategy 1 receives payoff 1;
when strategy 1 plays against strategy 3, strategy 1 receives payoff 4;
and so on.
This example is taken from Ohtsuki <em>et al.</em> (2006).</p>
<h3>Plot the dynamics</h3>
<p>To gain an intuition for the dynamics, we will first plot them.</p>
<p><a href="http://web.evolbio.mpg.de/~boettcher//other/2016/egtsimplex.html">Marvin Böttcher</a>
has written a handy utility called <strong>egtsimplex</strong>
for plotting the dynamics of a 3-strategy game on a simplex. It can be downloaded from
the GitHub repository here: <a href="https://github.com/marvinboe/egtsimplex">https://github.com/marvinboe/egtsimplex</a>.
The repository includes an example that we can look at to see how it works.</p>
<p>First, we need to create a function that defines the dynamics and returns \(\dot{\boldsymbol{p}}\).</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">calc_dotps</span><span class="p">(</span><span class="n">pis</span><span class="p">,</span> <span class="n">t</span><span class="p">):</span>
<span class="c1"># matrix of payoffs between R1, R2, and R3
</span> <span class="n">payoffs</span> <span class="o">=</span> <span class="p">[</span>
<span class="p">[</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span>
<span class="p">[</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span>
<span class="p">]</span>
<span class="c1"># replicator dynamics: \dot{p}_i = p_i (f_i - \bar{f})
</span> <span class="c1"># where p_i is proportion of i in the population
</span> <span class="c1"># f_i is fitness effect of strategy i
</span> <span class="c1"># f_i = \sum_j p_j pay(i|j) where pay(i|j) is the payoff to i playing against j
</span> <span class="c1"># \bar{f} is the average fitness in the population
</span> <span class="c1"># \bar{f} = \sum_j f_j p_j
</span>
<span class="c1"># calculate the fitness of each strategy in the population
</span>
<span class="n">fis</span> <span class="o">=</span> <span class="nb">list</span><span class="p">()</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span>
<span class="n">fi</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span>
<span class="n">fi</span> <span class="o">+=</span> <span class="n">payoffs</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="n">j</span><span class="p">]</span><span class="o">*</span><span class="n">pis</span><span class="p">[</span><span class="n">j</span><span class="p">]</span>
<span class="n">fis</span><span class="p">.</span><span class="n">append</span><span class="p">(</span><span class="n">fi</span><span class="p">)</span>
<span class="c1"># average fitness in the population
</span> <span class="n">fbar</span> <span class="o">=</span> <span class="nb">sum</span><span class="p">(</span><span class="n">fis</span><span class="p">[</span><span class="n">j</span><span class="p">]</span><span class="o">*</span><span class="n">pis</span><span class="p">[</span><span class="n">j</span><span class="p">]</span> <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">))</span>
<span class="c1"># calculate the derivatives
</span> <span class="n">dotps</span> <span class="o">=</span> <span class="p">[</span><span class="n">pis</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">fis</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">-</span><span class="n">fbar</span><span class="p">)</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">)]</span>
<span class="k">return</span> <span class="n">dotps</span></code></pre></figure>
<p>Above, I chose to define each fitness effect in a for-loop to be as explicit as possible about the connection
to the equations above.</p>
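<p>For comparison, the same dynamics can be written more compactly with NumPy; this vectorised version is just an equivalent alternative to the loop above, keeping the same <code class="language-plaintext highlighter-rouge">(pis, t)</code> signature.</p>

```python
import numpy as np

# payoff matrix between R1, R2, and R3, as in the example above
payoffs = np.array([[ 0, 1, 4],
                    [ 1, 4, 0],
                    [-1, 6, 2]], dtype=float)

def calc_dotps_vec(pis, t):
    p = np.asarray(pis, dtype=float)
    f = payoffs @ p        # f_i = sum_j pi(i|j) p_j
    fbar = f @ p           # average fitness \bar{f}
    return p * (f - fbar)  # replicator equation
```

<p>Note that the components of the returned vector sum to zero whenever the \(p_i\) sum to one, so trajectories stay on the simplex.</p>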
<p>To plot the dynamics, we create a <code class="language-plaintext highlighter-rouge">simplex_dynamics</code> object called <code class="language-plaintext highlighter-rouge">dynamics</code>, and use the method <code class="language-plaintext highlighter-rouge">plot_simplex()</code>:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">import</span> <span class="nn">egtsimplex</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="n">dynamics</span> <span class="o">=</span> <span class="n">egtsimplex</span><span class="p">.</span><span class="n">simplex_dynamics</span><span class="p">(</span><span class="n">calc_dotps</span><span class="p">)</span>
<span class="n">fig</span><span class="p">,</span><span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="p">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">dynamics</span><span class="p">.</span><span class="n">plot_simplex</span><span class="p">(</span><span class="n">ax</span><span class="p">,</span> <span class="n">typelabels</span><span class="o">=</span><span class="p">[</span><span class="s">'R1'</span><span class="p">,</span> <span class="s">'R2'</span><span class="p">,</span> <span class="s">'R3'</span><span class="p">])</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<p>That produces a nice graph like the one below.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/plot_dynamics.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/plot_dynamics.png" alt="Replicator dynamics on the simplex produced using egtsimplex by Marvin Böttcher" />
</a>
<figcaption><span>Replicator dynamics on the simplex produced using egtsimplex by Marvin Böttcher</span></figcaption>
</figure>
<p>In the process of finding the dynamics,
<strong>egtsimplex</strong> stored the fixed points it found…</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">fp_xy</span> <span class="o">=</span> <span class="n">dynamics</span><span class="p">.</span><span class="n">fixpoints</span>
<span class="n">fp_xy</span>
<span class="n">array</span><span class="p">([[</span> <span class="mf">1.00000000e+00</span><span class="p">,</span> <span class="o">-</span><span class="mf">6.69773909e-14</span><span class="p">],</span>
<span class="p">[</span> <span class="mf">5.00000000e-01</span><span class="p">,</span> <span class="mf">8.66025404e-01</span><span class="p">],</span>
<span class="p">[</span> <span class="mf">3.57142857e-01</span><span class="p">,</span> <span class="mf">2.47435830e-01</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">3.01051047e-13</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.99845407e-13</span><span class="p">],</span>
<span class="p">[</span><span class="o">-</span><span class="mf">4.18950369e-15</span><span class="p">,</span> <span class="o">-</span><span class="mf">2.21083405e-15</span><span class="p">],</span>
<span class="p">[</span> <span class="mf">5.76254503e-14</span><span class="p">,</span> <span class="mf">3.29021429e-14</span><span class="p">]])</span></code></pre></figure>
<p>… however, these are in \((x,y)\) coordinates for plotting.
To get the barycentric coordinates, i.e., the actual quantities of \(p_1\), \(p_2\), and \(p_3\),
we can use the <code class="language-plaintext highlighter-rouge">xy2ba()</code> method:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">fp_ba</span> <span class="o">=</span> <span class="p">[</span><span class="n">dynamics</span><span class="p">.</span><span class="n">xy2ba</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span><span class="p">,</span> <span class="n">y</span> <span class="ow">in</span> <span class="n">fp_xy</span><span class="p">]</span>
<span class="n">fp_ba</span>
<span class="p">[</span><span class="n">array</span><span class="p">([</span> <span class="mf">2.62163913e-14</span><span class="p">,</span> <span class="mf">1.00000000e+00</span><span class="p">,</span> <span class="o">-</span><span class="mf">7.72715225e-14</span><span class="p">]),</span>
<span class="n">array</span><span class="p">([</span><span class="mf">3.31994121e-14</span><span class="p">,</span> <span class="mf">3.74364097e-18</span><span class="p">,</span> <span class="mf">1.00000000e+00</span><span class="p">]),</span>
<span class="n">array</span><span class="p">([</span><span class="mf">0.5</span> <span class="p">,</span> <span class="mf">0.21428571</span><span class="p">,</span> <span class="mf">0.28571429</span><span class="p">]),</span>
<span class="n">array</span><span class="p">([</span> <span class="mf">1.00000000e+00</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.85694097e-13</span><span class="p">,</span> <span class="o">-</span><span class="mf">2.30639537e-13</span><span class="p">]),</span>
<span class="n">array</span><span class="p">([</span> <span class="mf">1.00000000e+00</span><span class="p">,</span> <span class="o">-</span><span class="mf">2.94854279e-15</span><span class="p">,</span> <span class="o">-</span><span class="mf">2.60257234e-15</span><span class="p">]),</span>
<span class="n">array</span><span class="p">([</span><span class="mf">1.0000000e+00</span><span class="p">,</span> <span class="mf">3.8651550e-14</span><span class="p">,</span> <span class="mf">3.8064861e-14</span><span class="p">])]</span></code></pre></figure>
<p>The interior steady state is the 3rd one in <code class="language-plaintext highlighter-rouge">fp_ba</code> above.
We can already tell from the plot that there are oscillatory dynamics,
but it’s not immediately obvious whether the interior steady state is an attractor, repellor, or neutral.
Let’s try plotting a trajectory that starts nearby.</p>
<p>To plot a trajectory, I’ll use <code class="language-plaintext highlighter-rouge">solve_ivp</code> with the <code class="language-plaintext highlighter-rouge">LSODA</code> method.
I know that \(\sum_j p_j = 1\), so I’ll write a lambda function that takes that constraint into account
and reduces the number of state variables from 3 to 2.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">from</span> <span class="nn">scipy.integrate</span> <span class="kn">import</span> <span class="n">solve_ivp</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="c1"># a function that accepts t and the population proportions for the first
# two variables and returns the dv/dt for the first two variables
</span><span class="n">fnc</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">t</span><span class="p">,</span> <span class="n">p</span><span class="p">:</span> <span class="n">calc_dotps</span><span class="p">([</span><span class="n">p</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">p</span><span class="p">[</span><span class="mi">1</span><span class="p">],</span> <span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">-</span><span class="n">p</span><span class="p">[</span><span class="mi">1</span><span class="p">]],</span> <span class="n">t</span><span class="p">)[:</span><span class="mi">2</span><span class="p">]</span>
<span class="c1"># numerically integrate the dynamics
</span><span class="n">sol</span> <span class="o">=</span> <span class="n">solve_ivp</span><span class="p">(</span><span class="n">fnc</span><span class="p">,</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">50</span><span class="p">],</span> <span class="p">[</span><span class="mf">0.5</span><span class="p">,</span> <span class="mf">0.2</span><span class="p">],</span> <span class="n">method</span><span class="o">=</span><span class="s">'LSODA'</span><span class="p">)</span>
<span class="c1"># the solution is stored in y
</span><span class="n">pt</span> <span class="o">=</span> <span class="n">sol</span><span class="p">.</span><span class="n">y</span><span class="p">.</span><span class="n">T</span>
<span class="c1"># look at the first 10 points
</span><span class="n">pt</span><span class="p">[:</span><span class="mi">10</span><span class="p">]</span>
<span class="n">array</span><span class="p">([[</span><span class="mf">0.5</span> <span class="p">,</span> <span class="mf">0.2</span> <span class="p">],</span>
<span class="p">[</span><span class="mf">0.5036369</span> <span class="p">,</span> <span class="mf">0.19880293</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.50716848</span><span class="p">,</span> <span class="mf">0.19788863</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.5138994</span> <span class="p">,</span> <span class="mf">0.196637</span> <span class="p">],</span>
<span class="p">[</span><span class="mf">0.51973676</span><span class="p">,</span> <span class="mf">0.19656723</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.52426076</span><span class="p">,</span> <span class="mf">0.19769877</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.52731171</span><span class="p">,</span> <span class="mf">0.20026023</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.52808714</span><span class="p">,</span> <span class="mf">0.20405974</span><span class="p">],</span>
<span class="p">[</span><span class="mf">0.52646399</span><span class="p">,</span> <span class="mf">0.2088587</span> <span class="p">],</span>
<span class="p">[</span><span class="mf">0.52363774</span><span class="p">,</span> <span class="mf">0.21300196</span><span class="p">]])</span></code></pre></figure>
<p>To add the trajectory to the plot,
we will need to revert <code class="language-plaintext highlighter-rouge">pt</code> above from the 2-dimensional back to the 3-dimensional system,
and then convert the 3-dimensional barycentric coordinates to \((x,y)\) for plotting.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">traj_xy</span> <span class="o">=</span> <span class="nb">list</span><span class="p">()</span>
<span class="k">for</span> <span class="n">p1</span><span class="p">,</span> <span class="n">p2</span> <span class="ow">in</span> <span class="n">pt</span><span class="p">:</span>
<span class="n">p3</span> <span class="o">=</span> <span class="mi">1</span> <span class="o">-</span> <span class="n">p1</span> <span class="o">-</span> <span class="n">p2</span>
<span class="n">traj_xy</span><span class="p">.</span><span class="n">append</span><span class="p">(</span><span class="nb">list</span><span class="p">(</span><span class="n">dynamics</span><span class="p">.</span><span class="n">ba2xy</span><span class="p">([</span><span class="n">p1</span><span class="p">,</span> <span class="n">p2</span><span class="p">,</span> <span class="n">p3</span><span class="p">])))</span>
<span class="n">traj_x</span><span class="p">,</span> <span class="n">traj_y</span> <span class="o">=</span> <span class="nb">zip</span><span class="p">(</span><span class="o">*</span><span class="n">traj_xy</span><span class="p">)</span>
<span class="c1"># plot the simplex dynamics again and put the trajectory on top in red
</span><span class="n">fig</span><span class="p">,</span><span class="n">ax</span> <span class="o">=</span> <span class="n">plt</span><span class="p">.</span><span class="n">subplots</span><span class="p">()</span>
<span class="n">dynamics</span><span class="p">.</span><span class="n">plot_simplex</span><span class="p">(</span><span class="n">ax</span><span class="p">,</span> <span class="n">typelabels</span><span class="o">=</span><span class="p">[</span><span class="s">'R1'</span><span class="p">,</span> <span class="s">'R2'</span><span class="p">,</span> <span class="s">'R3'</span><span class="p">])</span>
<span class="n">ax</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">traj_x</span><span class="p">[</span><span class="mi">0</span><span class="p">]],</span> <span class="p">[</span><span class="n">traj_y</span><span class="p">[</span><span class="mi">0</span><span class="p">]],</span> <span class="n">color</span><span class="o">=</span><span class="s">'red'</span><span class="p">)</span>
<span class="n">ax</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">traj_x</span><span class="p">,</span> <span class="n">traj_y</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'red'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<p>That produces the figure below, which suggests that the interior steady state is unstable.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/plot_dynamics_with_trajectory.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/plot_dynamics_with_trajectory.png" alt="A trajectory of the replicator dynamics (start point marked with dot)" />
</a>
<figcaption><span>A trajectory of the replicator dynamics (start point marked with dot)</span></figcaption>
</figure>
<h3>Use the eigenvalues of the Jacobian matrix to assess the stability of the interior steady state</h3>
<p>Ohtsuki and Nowak (2006) determined the eigenvalues of the Jacobian matrix analytically,
but let’s use this example to learn how we would do so numerically.</p>
<p>First, we need to get an expression for each element of the <a href="https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant">Jacobian matrix</a>.</p>
<p>Recall the replicator dynamics:</p>
\[\begin{equation}
\dot{p}_i = p_i (f_i - \bar{f})
\end{equation}\]
<p>Each element of the Jacobian matrix</p>
\[\begin{equation}
J_{i,k} = \left. \frac{\partial \dot{p}_i}{\partial p_k} \right|_{\boldsymbol{p}^*} = \delta_{i,k} \left. [f_i - \overline{f}] \right|_{\boldsymbol{p}^*}
+ p_i^* \left( \left. \frac{\partial f_i}{\partial p_k} \right|_{\boldsymbol{p}^*} - \left. \frac{\partial \overline{f}}{\partial p_k} \right|_{\boldsymbol{p}^*} \right)
\end{equation}\]
<p>where \(\delta_{i,k}\) is the Kronecker delta. At \(\boldsymbol{p}^*\), \(f_i = \overline{f}\), so the first term vanishes and</p>
\[\begin{equation}
J_{i,k} = p_i^* \left( \left. \frac{\partial f_i}{\partial p_k} \right|_{\boldsymbol{p}^*} - \left. \frac{\partial \overline{f}}{\partial p_k} \right|_{\boldsymbol{p}^*} \right)
\tag{1}
\end{equation}\]
<p>Write the expression for each derivative individually, replacing the last variable: \(p_m = 1 - \sum_{j=1}^{m-1} p_j\).</p>
<p>To get the left-hand term in the Jacobian element, start with</p>
\[\begin{align}
f_i &= \sum_{j=1}^m p_j \pi(i \mid j) \\
&= \left[ \sum_{j=1}^{m-1} p_j \pi(i \mid j) \right] + \pi(i \mid m) \left[ 1 - \sum_{j=1}^{m-1} p_j \right] \\
&=\left[ \sum_{j=1}^{m-1} p_j [\pi(i \mid j) - \pi(i \mid m)] \right] + \pi(i \mid m)
\end{align}\]
<p>Therefore:</p>
\[\begin{equation}
\left. \frac{\partial f_i}{\partial p_k} \right|_{\boldsymbol{p}^*} = \pi(i \mid k) - \pi(i \mid m)
\tag{left-hand term of (1)}
\end{equation}\]
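<p>We can double-check this result symbolically. Below is a quick sketch using SymPy (not used elsewhere in this post), with the \(3 \times 3\) payoff matrix from the numerical example later on and the last variable eliminated via the constraint:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python">import sympy as sp

# strategy frequencies with the last one eliminated: p3 = 1 - p1 - p2
p1, p2 = sp.symbols('p1 p2')
p = [p1, p2, 1 - p1 - p2]
pays = [[0, 1, 4], [1, 4, 0], [-1, 6, 2]]  # pays[i][j] = pi(i | j)
m = 2  # index of the eliminated (last) strategy

for i in range(3):
    f_i = sum(p[j] * pays[i][j] for j in range(3))
    for k in range(2):
        # df_i/dp_k should equal pi(i | k) - pi(i | m)
        assert sp.simplify(sp.diff(f_i, p[k]) - (pays[i][k] - pays[i][m])) == 0</code></pre></figure>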
<p>For the right-hand term in the Jacobian element,
split the population average fitness effect into three terms</p>
\[\begin{equation}
\overline{f} = \sum_j p_j f_j =
\left[ \sum_{j=1, j\neq k}^{m-1} p_j f_j \right] + p_k f_k + p_m f_m
\end{equation}\]
<p>and take the derivatives of each term separately.</p>
<p>The first term:</p>
\[\begin{align}
\left. \frac{\partial}{\partial p_k} \left[ \sum_{j=1, j \neq k}^{m-1} p_j f_j \right] \right|_{\boldsymbol{p}^*} &=
\sum_{j=1, j \neq k}^{m-1} p_j^* \left. \frac{\partial f_j}{\partial p_k} \right|_{\boldsymbol{p}^*} \\
&=\sum_{j=1, j \neq k}^{m-1} p_j^* [\pi(j \mid k) - \pi(j \mid m)]
\end{align}\]
<p>The second term:</p>
\[\begin{align}
\left. \frac{\partial \left[ p_k f_k \right]}{\partial p_k} \right|_{\boldsymbol{p}^*} &=
\left. f_k \right|_{\boldsymbol{p}^*} + p_k^* \left. \frac{\partial f_k}{\partial p_k} \right|_{\boldsymbol{p}^*} \\
&= \left[ \sum_{j=1}^m p^*_j \pi(k \mid j) \right] + p_k^* [\pi(k \mid k) - \pi(k \mid m)]
\end{align}\]
<p>The third term:</p>
\[\begin{align}
\left. \frac{\partial \left[ p_m f_m \right]}{\partial p_k} \right|_{\boldsymbol{p}^*} &=
\left. \frac{\partial }{\partial p_k} \left[1-\sum_{j=1}^{m-1} p_j \right] \right|_{\boldsymbol{p}^*} \left. f_m \right|_{\boldsymbol{p}^*} +
p_m^* \left. \frac{\partial f_m}{\partial p_k} \right|_{\boldsymbol{p}^*} \\
&= - \left[ \sum_{j=1}^m p_j^* \pi(m \mid j) \right] + p_m^* [\pi(m \mid k) - \pi(m \mid m)]
\end{align}\]
<p>So summing them together to get the right-hand term in the Jacobian element:</p>
\[\begin{equation}
\left. \frac{\partial \overline{f}}{\partial p_k} \right|_{\boldsymbol{p}^*}
= \sum_{j=1}^m p_j^* [ \pi(j \mid k) - \pi(j \mid m) + \pi(k \mid j) - \pi(m \mid j) ]
\tag{right-hand term of (1)}
\end{equation}\]
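<p>This expression can also be verified symbolically; in fact, the identity holds for any \(\boldsymbol{p}\), not only at \(\boldsymbol{p}^*\). A small SymPy sketch, using the payoff matrix from the numerical example below, compares direct differentiation of \(\overline{f}\) (with \(p_m\) eliminated) against the formula:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python">import sympy as sp

p1, p2 = sp.symbols('p1 p2')
p = [p1, p2, 1 - p1 - p2]  # the constraint eliminates p3
pays = [[0, 1, 4], [1, 4, 0], [-1, 6, 2]]  # pays[j][k] = pi(j | k)
m = 2  # index of the eliminated strategy

# fitnesses f_j and the population mean fitness fbar
f = [sum(p[l] * pays[j][l] for l in range(3)) for j in range(3)]
fbar = sum(p[j] * f[j] for j in range(3))

for k in range(2):
    direct = sp.diff(fbar, p[k])
    formula = sum(p[j] * (pays[j][k] - pays[j][m] + pays[k][j] - pays[m][j])
                  for j in range(3))
    assert sp.simplify(direct - formula) == 0</code></pre></figure>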
<p>Let’s now code the Jacobian.</p>
<p>First, code the payoffs so \(\pi(j \mid k) =\) <code class="language-plaintext highlighter-rouge">pays[j][k]</code>
and the interior steady state \(p_j^* =\) <code class="language-plaintext highlighter-rouge">ps[j]</code>.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">pays</span> <span class="o">=</span> <span class="p">[</span> <span class="p">[</span> <span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4</span><span class="p">],</span> <span class="p">[</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4</span><span class="p">,</span> <span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">2</span><span class="p">]</span> <span class="p">]</span>
<span class="n">ps</span> <span class="o">=</span> <span class="n">fp_ba</span><span class="p">[</span><span class="mi">2</span><span class="p">]</span> <span class="c1"># recall the interior steady state was the third element</span></code></pre></figure>
<p>Code the right-hand term of (1),
\(\begin{equation}
\left. \frac{\partial \overline{f}}{\partial p_k} \right|_{\boldsymbol{p}^*}
\end{equation}\)</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">m</span> <span class="o">=</span> <span class="mi">2</span> <span class="c1"># because Python counts indices 0, 1, 2 instead of 1, 2, 3
</span><span class="n">dfbar_dps</span> <span class="o">=</span> <span class="p">[</span> <span class="nb">sum</span><span class="p">(</span><span class="n">ps</span><span class="p">[</span><span class="n">j</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">pays</span><span class="p">[</span><span class="n">j</span><span class="p">][</span><span class="n">k</span><span class="p">]</span><span class="o">-</span><span class="n">pays</span><span class="p">[</span><span class="n">j</span><span class="p">][</span><span class="n">m</span><span class="p">]</span> <span class="o">+</span> <span class="n">pays</span><span class="p">[</span><span class="n">k</span><span class="p">][</span><span class="n">j</span><span class="p">]</span> <span class="o">-</span> <span class="n">pays</span><span class="p">[</span><span class="n">m</span><span class="p">][</span><span class="n">j</span><span class="p">])</span> <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">))</span> <span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">)]</span></code></pre></figure>
<p>Code the full \(J_{i,k}\) in (1)</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">J</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">array</span><span class="p">([</span> <span class="p">[</span><span class="n">ps</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="o">*</span><span class="p">(</span><span class="n">pays</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="n">k</span><span class="p">]</span> <span class="o">-</span> <span class="n">pays</span><span class="p">[</span><span class="n">i</span><span class="p">][</span><span class="n">m</span><span class="p">]</span> <span class="o">-</span> <span class="n">dfbar_dps</span><span class="p">[</span><span class="n">k</span><span class="p">])</span> <span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">)]</span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">2</span><span class="p">)</span> <span class="p">])</span></code></pre></figure>
<p>We find that the Jacobian matrix is</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">J</span>
<span class="n">array</span><span class="p">([[</span><span class="o">-</span><span class="mf">0.67857143</span><span class="p">,</span> <span class="o">-</span><span class="mf">1.75</span>      <span class="p">],</span>
<span class="p">[</span> <span class="mf">0.78061224</span><span class="p">,</span>  <span class="mf">0.75</span>      <span class="p">]])</span></code></pre></figure>
<p>and its eigenvalues</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">w</span><span class="p">,</span> <span class="n">v</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">linalg</span><span class="p">.</span><span class="n">eig</span><span class="p">(</span><span class="n">J</span><span class="p">)</span> <span class="c1"># w are the eigenvalues, v are the eigenvectors
</span><span class="n">w</span>
<span class="n">array</span><span class="p">([</span><span class="mf">0.03571429</span><span class="o">+</span><span class="mf">0.92513099j</span><span class="p">,</span> <span class="mf">0.03571429</span><span class="o">-</span><span class="mf">0.92513099j</span><span class="p">])</span></code></pre></figure>
<p>The maximum real part is positive, so the interior steady state is unstable.</p>
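<p>As a sanity check on the analytic Jacobian, we can rebuild the reduced two-variable replicator dynamics from the same payoff matrix and approximate the Jacobian by central finite differences. This sketch is self-contained (it redefines the dynamics rather than reusing <code class="language-plaintext highlighter-rouge">calc_dotps</code>), and its eigenvalues should match those above:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python">import numpy as np

pays = np.array([[0, 1, 4], [1, 4, 0], [-1, 6, 2]], dtype=float)

def reduced_dotp(p12):
    # replicator dynamics for (p1, p2), with p3 = 1 - p1 - p2
    p = np.array([p12[0], p12[1], 1.0 - p12[0] - p12[1]])
    f = pays @ p        # fitness of each strategy
    fbar = p @ f        # population mean fitness
    return (p * (f - fbar))[:2]

pstar = np.array([0.5, 3/14])  # interior steady state (p1*, p2*)
h = 1e-6
J_fd = np.empty((2, 2))
for k in range(2):
    e = np.zeros(2)
    e[k] = h
    # central finite difference in the k-th direction
    J_fd[:, k] = (reduced_dotp(pstar + e) - reduced_dotp(pstar - e)) / (2 * h)

w = np.linalg.eigvals(J_fd)
print(w)  # approx. 0.0357 +/- 0.9251j, as before</code></pre></figure>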
<h3>References</h3>
<p>Ohtsuki, H., & Nowak, M. A. (2006).
<a href="https://www.sciencedirect.com/science/article/abs/pii/S0022519306002426">The replicator equation on graphs</a>. Journal of Theoretical Biology, 243(1), 86-97.</p>
<h2>Group nepotism and the Brothers Karamazov Game</h2>
<p><em>2022-11-10, <a href="https://nadiah.org/2022/11/10/group-nepotism">https://nadiah.org/2022/11/10/group-nepotism</a></em></p>
<p>I recently read a classic paper by anthropologist Jones (2000), <a href="https://www.researchgate.net/profile/Doug-Jones/publication/271653816_Group_Nepotism_and_Human_Kinship/links/5577240608ae7521586e1319/Group-Nepotism-and-Human-Kinship.pdf"><em>‘Group nepotism and human kinship’</em></a>, which argues that collective action may explain some of the key features of human cooperation between kin. Collective action here is action by a group (e.g., clan, tribe) that has sufficient ingroup solidarity to decide how to act as a collective. The key features it may help explain are: 1. the ‘axiom of amity’, that one is obliged to help kin just because they’re kin; 2. that kin groups come in many sizes and can be quite large; and 3. that human notions of kinship can be very different from biological kinship, sometimes occurring between individuals with no known genealogical relationship.</p>
<p>The paper gives many detailed examples and arguments for why real-world social kinship has the features we would expect if group nepotism — combining relatedness with collective action — was an important mechanism (and also a section on ethnocentrism), to which I cannot do justice here. Instead, the purpose of this blog post is simply to teach myself about the basic idea by working through the first model in the paper.</p>
<h3>Model background</h3>
<p>The first model is called the Brothers Karamazov Game, named after the Dostoevsky novel. I’ve never read <em>The Brothers Karamazov</em>, but I gather there are three brothers — Ivan, Alyosha, and Dmitri — and Dmitri is prone to making poor life choices. The question is, should Ivan and Alyosha help out their brother Dmitri?</p>
<p>When Ivan and Alyosha act independently, then the rule for helping Dmitri to a benefit \(B\) at a cost \(C\) follows directly from Hamilton’s rule. The coefficient of relatedness between full siblings is \(r=1/2\); therefore, Ivan (or Alyosha) should help Dmitri if the benefit to Dmitri is at least twice the cost, \(B/C > 1/r = 2\). However, Jones shows this constraint can be made much easier to overcome if Ivan and Alyosha act together.</p>
<p>Jones introduces a situation he calls “conditional nepotism”: Ivan proposes to Alyosha that he will help out Dmitri <em>if and only if</em> Alyosha agrees to do the same. If Alyosha agrees to this, then when we add up Ivan’s inclusive fitness costs and benefits (details below), the pooling of nepotistic effort leads to a condition for helping \(B/C > 1/r_c\), where \(r_c > r\). In other words, provided Alyosha agrees, Ivan should treat Dmitri as though he is even <em>more</em> related to him than he really is. This is an interesting result because it might help explain why helping between kin — culturally defined — is common in hunter-gatherer societies even though relatedness coefficients between pairs of individuals are often quite low.</p>
<h3>Approach</h3>
<p>I will now work through the sexual haploid case in the first appendix. However, instead of using Jones’ method, I will try performing the calculations using the framework provided in Ohtsuki (2014; Proc R Soc B).</p>
<p>Ohtsuki’s method allows us to describe the replicator dynamics of two strategies \(A\) and \(B\) in nonlinear games played between kin by taking into account the <em>higher-order genetic associations</em> between group members, beyond their dyadic relatedness. Dyadic relatedness \(r\) above is the probability that 2 individuals randomly drawn without replacement from the group will have strategies that are identical by descent, denoted \(\theta_{2 \rightarrow 1}\) in Ohtsuki’s scheme. We can also say in shorthand that the two individuals “share a common ancestor”. Higher-order relatedness generalises this concept to larger samples. For example, \(\theta_{3 \rightarrow 1}\) is the probability that three individuals randomly drawn share a common ancestor, and in general, \(\theta_{l \rightarrow m}\) is the probability that, if we draw \(l\) individuals from the group, they share \(m\) common ancestors.</p>
<p>The higher-order genetic associations \(\theta_{l \rightarrow m}\) can then be used to find the probabilities \(\rho_l\) that \(l\) players randomly sampled from the group without replacement are \(A\)-strategists (Eq. 2.5):</p>
\[\rho_l = \sum_{m=1}^l \theta_{l \rightarrow m} p^m\]
<p>where \(p\) is the proportion of \(A\)-strategists in the population and \(\rho_0=1\).</p>
<p>The reasoning behind the \(\rho_l\) equation can be seen by considering the probability that two randomly sampled group members are \(A\)-strategists, \(\rho_2\). Either the two individuals have the same ancestor, or they have different ancestors. Assuming that the proportion of \(A\)-strategists in the ancestral population can be approximated by the proportion of \(A\)-strategists now, then:</p>
\[\rho_2 =
\quad
\underbrace{\theta_{2 \rightarrow 1}}_{\substack{\text{same} \\ \text{ancestor}}}
\quad
\underbrace{p}_{\substack{\text{ancestor} \\ \text{=A}} }
\quad
+
\quad
\underbrace{\theta_{2 \rightarrow 2}}_{\substack{\text{two} \\ \text{ancestors}} }
\quad
\underbrace{p^2}_{\substack{\text{both} \\ \text{=A}}}\]
<p>Let’s define the payoffs to the two strategies using two functions: \(a_k\) is the payoff to \(A\)-strategists when \(k\) of the \(n-1\) other group members are \(A\)-strategists, and \(b_k\) is the payoff to \(B\)-strategists when \(k\) of the \(n-1\) other group members are \(A\)-strategists.</p>
<p>Then Ohtsuki (2014) shows that the change in the proportion \(A\)-strategists in the population, \(p\), over one generation is proportional to (Eq. 2.4)</p>
\[\Delta p \propto
\sum_{k=0}^{n-1} \sum_{l=k}^{n-1} (-1)^{l-k} {l \choose k} {n-1 \choose l}
\left[ (1-\rho_1) \rho_{l+1} a_k - \rho_1(\rho_l - \rho_{l+1}) b_k \right].\]
<p>This equation is derived from first principles from the Price equation (see Ohtsuki (2014) appendices).</p>
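<p>Before turning to the symbolic treatment, a small numerical sketch can make Eq. 2.4 concrete. The payoff functions here are an illustrative additive helping game of my own choosing (not from the paper): each \(A\)-strategist pays total cost \(C\) and confers benefit \(B\) on each of the other two group members. With the sibling \(\rho_l\) expressions derived below, the sign of \(\Delta p\) for such additive payoffs reduces to the Hamilton-like condition \(2rB > C\):</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python">from math import comb

def delta_p(n, a, b, rho):
    # Eq. 2.4 of Ohtsuki (2014), up to a positive constant of proportionality;
    # rho[l] is the probability that l sampled group members are A-strategists
    total = 0.0
    for k in range(n):
        for l in range(k, n):
            w = (-1)**(l - k) * comb(l, k) * comb(n - 1, l)
            total += w * ((1 - rho[1]) * rho[l + 1] * a(k)
                          - rho[1] * (rho[l] - rho[l + 1]) * b(k))
    return total

# illustrative additive helping game among n = 3 siblings (my own example):
# an A-strategist pays total cost C and gives benefit B to each co-member
B, C, p = 3.0, 1.0, 0.5
rho = [1.0, p, p/2 + p**2/2, p/4 + 3*p**2/4]  # sibling rho_l, derived below
a = lambda k: B*k - C   # payoff to an A-strategist with k A co-members
b = lambda k: B*k       # payoff to a B-strategist with k A co-members

print(delta_p(3, a, b, rho))  # 0.5 = p(1-p)(2rB - C) with r = 1/2</code></pre></figure>
<p>Because these payoffs are linear in \(k\), the higher-order association terms drop out and only dyadic relatedness matters; setting \(B\) below the threshold \(C/(2r) = 1\) flips \(\Delta p\) negative.</p>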
<h3>Results</h3>
<h4>Dynamics equation</h4>
<p>The number of terms in the dynamics equation quickly becomes unwieldy, so I’ll use <a href="https://www.sagemath.org/">SageMath</a>
below to work through the algebra.</p>
<p>First, I will prepare the symbolic function “\(\Delta p \propto \ldots\)”</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="o">%</span><span class="n">display</span> <span class="n">latex</span>
<span class="n">var</span><span class="p">(</span><span class="s">'B, C'</span><span class="p">)</span> <span class="c1"># benefit and cost of helping behaviour
</span><span class="n">n</span> <span class="o">=</span> <span class="mi">3</span> <span class="c1"># size of the group (3 brothers)
</span>
<span class="c1"># a placeholder for the payoff functions a_k and b_k we'll define later
</span><span class="n">a</span> <span class="o">=</span> <span class="n">function</span><span class="p">(</span><span class="s">'a'</span><span class="p">)</span>
<span class="n">b</span> <span class="o">=</span> <span class="n">function</span><span class="p">(</span><span class="s">'b'</span><span class="p">)</span>
<span class="c1"># a placeholder for the rho function we'll define later
</span><span class="n">rho</span> <span class="o">=</span> <span class="n">function</span><span class="p">(</span><span class="s">'rho'</span><span class="p">)</span>
<span class="n">var</span><span class="p">(</span><span class="s">'l,k'</span><span class="p">)</span> <span class="c1"># sum counters
</span>
<span class="c1"># Eq. 2.4 from Ohtsuki (2014)
</span><span class="n">delta_pr</span> <span class="o">=</span> <span class="nb">sum</span><span class="p">(</span><span class="nb">sum</span><span class="p">((</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">^</span><span class="p">(</span><span class="n">l</span><span class="o">-</span><span class="n">k</span><span class="p">)</span> <span class="o">*</span> <span class="n">binomial</span><span class="p">(</span><span class="n">l</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="o">*</span> <span class="n">binomial</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">l</span><span class="p">)</span> <span class="o">*</span> <span class="p">(</span> <span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">rho</span><span class="p">(</span><span class="mi">1</span><span class="p">))</span><span class="o">*</span><span class="n">rho</span><span class="p">(</span><span class="n">l</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="n">a</span><span class="p">(</span><span class="n">k</span><span class="p">)</span> <span class="o">-</span> <span class="n">rho</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span><span class="o">*</span><span class="p">(</span><span class="n">rho</span><span class="p">(</span><span class="n">l</span><span class="p">)</span><span class="o">-</span><span class="n">rho</span><span class="p">(</span><span class="n">l</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span><span class="o">*</span><span class="n">b</span><span class="p">(</span><span class="n">k</span><span class="p">)</span> <span class="p">),</span> <span class="n">l</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span 
class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">),</span> <span class="n">k</span><span class="p">,</span> <span class="mi">0</span><span class="p">,</span> <span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">delta_pr</span></code></pre></figure>
\[\begin{gather}
-{\left(a\left(0\right) + 2 \, b\left(1\right) - 3 \, b\left(0\right)\right)} \rho\left(1\right)^{2} - {\left({\left(\rho\left(1\right) - 1\right)} a\left(2\right) - {\left(2 \, a\left(1\right) - a\left(0\right) - 2 \, b\left(1\right) + b\left(0\right)\right)} \rho\left(1\right) - b\left(2\right) \rho\left(1\right) + 2 \, a\left(1\right) - a\left(0\right)\right)} \rho\left(3\right) \\
- {\left({\left(2 \, a\left(1\right) - 2 \, a\left(0\right) - 4 \, b\left(1\right) + 3 \, b\left(0\right)\right)} \rho\left(1\right) + b\left(2\right) \rho\left(1\right) - 2 \, a\left(1\right) + 2 \, a\left(0\right)\right)} \rho\left(2\right) - {\left(b\left(0\right) \rho\left(0\right) - a\left(0\right)\right)} \rho\left(1\right)
\end{gather}\]
<p>The example in Section 4(b) of Ohtsuki (2014) provides us with expressions for the higher-order relatedness terms between siblings. Define the dyadic relatedness \(r = \theta_{2 \rightarrow 1}\) as before, and define the triplet relatedness \(s = \theta_{3 \rightarrow 1}\). Then it can be shown:</p>
\[\begin{align}
\theta_{1 \rightarrow 1} &= 1, \\
\theta_{2 \rightarrow 1} &= r, \\
\theta_{2 \rightarrow 2} &= 1-r, \\
\theta_{3 \rightarrow 1} &= s, \\
\theta_{3 \rightarrow 2} &= 3r - 3s, \\
\theta_{3 \rightarrow 3} &= 1 - 3r + 2s.
\end{align}\]
<p>Between siblings, \(r = 1/2\) and \(s = 1/4\), therefore</p>
\[\begin{align}
\rho_1 &= \theta_{1 \rightarrow 1} p = p, \\
\rho_2 &= \theta_{2 \rightarrow 1} p + \theta_{2 \rightarrow 2} p^2 = \frac{p}{2} + \frac{p^2}{2}, \\
\rho_3 &= \theta_{3 \rightarrow 1} p + \theta_{3 \rightarrow 2} p^2 + \theta_{3 \rightarrow 3} p^3 = \frac{p}{4} + \frac{3p^2}{4}
\end{align}\]
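<p>As a quick sanity check (my own addition, not from Ohtsuki’s paper), we can verify these sibling coefficients with exact arithmetic in plain Python:</p>

```python
from fractions import Fraction as F

r, s = F(1, 2), F(1, 4)  # sibling dyadic and triplet relatedness

# theta_{l -> j} coefficients from Ohtsuki (2014), Section 4(b)
theta = {
    (1, 1): F(1),
    (2, 1): r, (2, 2): 1 - r,
    (3, 1): s, (3, 2): 3*r - 3*s, (3, 3): 1 - 3*r + 2*s,
}

# each row must sum to 1, so that rho_l = 1 when p = 1
for l in (1, 2, 3):
    assert sum(theta[(l, j)] for j in range(1, l + 1)) == 1

# coefficients of (p, p^2) in rho_2 and of (p, p^2, p^3) in rho_3
rho2 = [theta[(2, 1)], theta[(2, 2)]]
rho3 = [theta[(3, 1)], theta[(3, 2)], theta[(3, 3)]]
assert rho2 == [F(1, 2), F(1, 2)]
assert rho3 == [F(1, 4), F(3, 4), 0]  # the p^3 term vanishes for siblings
```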
<p>We will substitute these expressions for \(\rho_l\) into the symbolic function:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">var</span><span class="p">(</span><span class="s">'p'</span><span class="p">)</span>
<span class="n">delta_pr</span> <span class="o">=</span> <span class="n">delta_pr</span><span class="p">.</span><span class="n">subs</span><span class="p">({</span>
<span class="n">rho</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="mi">1</span><span class="p">,</span>
<span class="n">rho</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="n">p</span><span class="p">,</span>
<span class="n">rho</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="n">p</span><span class="o">/</span><span class="mi">2</span> <span class="o">+</span> <span class="n">p</span><span class="o">^</span><span class="mi">2</span><span class="o">/</span><span class="mi">2</span><span class="p">,</span>
<span class="n">rho</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span> <span class="n">p</span><span class="o">/</span><span class="mi">4</span> <span class="o">+</span> <span class="mi">3</span><span class="o">*</span><span class="n">p</span><span class="o">^</span><span class="mi">2</span><span class="o">/</span><span class="mi">4</span><span class="p">,</span>
<span class="p">})</span>
<span class="n">delta_pr</span></code></pre></figure>
\[\begin{gather}
-p^{2} {\left(a\left(0\right) + 2 \, b\left(1\right) - 3 \, b\left(0\right)\right)} + \frac{1}{4} \, {\left(3 \, p^{2} + p\right)} {\left(p {\left(2 \, a\left(1\right) - a\left(0\right) - 2 \, b\left(1\right) + b\left(0\right)\right)} - {\left(p - 1\right)} a\left(2\right) + p b\left(2\right) - 2 \, a\left(1\right) + a\left(0\right)\right)} \\
- \frac{1}{2} \, {\left(p^{2} + p\right)} {\left(p {\left(2 \, a\left(1\right) - 2 \, a\left(0\right) - 4 \, b\left(1\right) + 3 \, b\left(0\right)\right)} + p b\left(2\right) - 2 \, a\left(1\right) + 2 \, a\left(0\right)\right)} + p {\left(a\left(0\right) - b\left(0\right)\right)}
\end{gather}\]
<p>Now we have an expression proportional to \(\Delta p\) to which we can apply different payoff functions \(a_k\) and \(b_k\).</p>
<h4>Unconditional helping vs. defection</h4>
<p>First, let’s consider the case where the two brothers always help Dmitri regardless of what the other does. We already know from Hamilton’s Rule whether this behaviour can evolve, so this case also serves as a sanity check on the calculations.</p>
<p>Let the \(A\) strategy be unconditional helping and \(B\) be defection.
When the focal player is playing the role of Dmitri, which occurs with probability \(1/3\), their payoffs are independent of their strategy</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\begin{pmatrix}
0 & B & 2B \\
0 & B & 2B
\end{pmatrix}\]
<p>If the focal is not a Dmitri (which occurs with probability \(2/3\)), the payoffs to the focal depend on whether or not they help the Dmitri</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\begin{pmatrix}
-C & -C & -C \\
0 & 0 & 0
\end{pmatrix}\]
<p>Therefore, when the brothers act independently, the expected payoffs are</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\frac{1}{3}
\begin{pmatrix}
-2C & B-2C & 2B-2C \\
0 & B & 2B
\end{pmatrix}\]
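<p>The role-weighted averaging above is easy to confirm in plain Python (a sketch of my own, not from Jones’ paper; I represent each payoff as a pair of coefficients on \(B\) and \(C\)):</p>

```python
from fractions import Fraction as F

# each payoff is a pair (coefficient of B, coefficient of C)
def combine(dmitri, helper):
    # focal is Dmitri with prob. 1/3, a potential helper with prob. 2/3
    return [tuple(F(1, 3)*d + F(2, 3)*h for d, h in zip(dp, hp))
            for dp, hp in zip(dmitri, helper)]

dmitri_row = [(0, 0), (1, 0), (2, 0)]        # 0, B, 2B for either strategy
a_row = combine(dmitri_row, [(0, -1)] * 3)   # helpers pay -C regardless
b_row = combine(dmitri_row, [(0, 0)] * 3)    # defectors pay nothing

# matches (1/3) * (-2C, B-2C, 2B-2C; 0, B, 2B)
assert a_row == [(0, F(-2, 3)), (F(1, 3), F(-2, 3)), (F(2, 3), F(-2, 3))]
assert b_row == [(0, 0), (F(1, 3), 0), (F(2, 3), 0)]
```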
<p>Substituting these payoffs into the expression proportional to \(\Delta p\)</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">uncond_delta_pr</span> <span class="o">=</span> <span class="n">delta_pr</span><span class="p">.</span><span class="n">subs</span><span class="p">({</span>
<span class="n">a</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="n">B</span><span class="o">/</span><span class="mi">3</span> <span class="o">-</span> <span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">/</span><span class="mi">3</span> <span class="o">-</span> <span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="mi">0</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="n">B</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">/</span><span class="mi">3</span>
<span class="p">})</span>
<span class="n">uncond_delta_pr</span> <span class="o">=</span> <span class="n">uncond_delta_pr</span><span class="p">.</span><span class="n">expand</span><span class="p">().</span><span class="n">factor</span><span class="p">()</span>
<span class="n">uncond_delta_pr</span></code></pre></figure>
\[-\frac{1}{3} \, {\left(B - 2 \, C\right)} {\left(p - 1\right)} p\]
<p>The condition for the \(A\)-strategists (the ones who help Dmitri) to increase is</p>
\[B-2C > 0\]
<p>which rearranges to</p>
\[B/C > 2 = 1/r\]
<p>as expected.</p>
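<p>For readers who want to double-check the pipeline end to end, here is a plain-Python (non-Sage) re-implementation of the double sum from the first code cell, with the sibling \(\rho_l\) and the unconditional-helper payoffs baked in; it agrees with the factored Sage result. This is my own sketch, not code from either paper:</p>

```python
from fractions import Fraction as F
from math import comb

def rho(l, p):
    # sibling identity probabilities rho_0..rho_3 (r = 1/2, s = 1/4)
    return [1, p, p/2 + p**2/2, p/4 + 3*p**2/4][l]

def delta_pr(a, b, p, n=3):
    # plain-Python version of the double sum in the first Sage cell
    total = 0
    for k in range(n):
        for l in range(k, n):
            term = (1 - rho(1, p)) * rho(l + 1, p) * a[k] \
                   - rho(1, p) * (rho(l, p) - rho(l + 1, p)) * b[k]
            total += (-1)**(l - k) * comb(l, k) * comb(n - 1, l) * term
    return total

B, C = F(3), F(1)                               # B/C = 3 > 2
a = [-2*C/3, B/3 - 2*C/3, 2*B/3 - 2*C/3]        # unconditional helpers
b = [0, B/3, 2*B/3]                             # defectors
for p in (F(1, 10), F(1, 2), F(9, 10)):
    closed_form = -F(1, 3) * (B - 2*C) * (p - 1) * p
    assert delta_pr(a, b, p) == closed_form
```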
<h4>Conditional nepotism vs. defection</h4>
<p>Now let’s consider Jones’ scenario, where Ivan and Alyosha help Dmitri if and only if they both agree, i.e., they are both conditional nepotists.</p>
<p>You might notice in the first appendix that Jones talks about a ‘conditional nepotists’ vs ‘Hamiltonian nepotists’ scenario,
which confused me at first, because when you look at the payoffs in Table A2, there’s no helping by the Hamiltonian \(H\) type.
I <em>think</em> what’s happening is he is actually restricting his attention to the situation \(B/C < 2\),
and assuming that what he calls ‘Hamiltonian nepotists’ will decide to not help Dmitri in this situation,
i.e., act like defectors (see wording of the paragraph right after Eq. A1).
Therefore, the scenario in question is actually ‘conditional nepotists vs defectors’.
Hopefully this will become clear by the time we’ve gone through all the cases (see the summary figure at the end).</p>
<p>Let the \(A\) strategy be conditional nepotism and the \(B\) strategy be defection.
When the focal player is a Dmitri (probability \(1/3\)), their payoffs are again independent of their strategy</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\begin{pmatrix}
0 & 0 & 2B \\
0 & 0 & 2B
\end{pmatrix}\]
<p>When the focal player is not a Dmitri (probability 2/3), their payoffs are</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\begin{pmatrix}
0 & -C/2 & -C \\
0 & 0 & 0
\end{pmatrix}\]
<p>where \(a_1 = -C/2\) because, if there is one other helper in the group, there is half a chance that the other helper is also not a Dmitri and thus the \(A\)-strategist gives the help.</p>
<p>Therefore, when the brothers act as conditional nepotists, the expected payoffs are</p>
\[\begin{pmatrix}
a_0 & a_1 & a_2 \\
b_0 & b_1 & b_2
\end{pmatrix}
=
\frac{1}{3}
\begin{pmatrix}
0 & -C & 2B-2C \\
0 & 0 & 2B
\end{pmatrix}\]
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">cond_delta_pr</span> <span class="o">=</span> <span class="n">delta_pr</span><span class="p">.</span><span class="n">subs</span><span class="p">({</span>
<span class="n">a</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="mi">0</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="o">-</span><span class="n">C</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">/</span><span class="mi">3</span> <span class="o">-</span> <span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="mi">0</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="mi">0</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">/</span><span class="mi">3</span>
<span class="p">})</span>
<span class="n">cond_delta_pr</span> <span class="o">=</span> <span class="n">cond_delta_pr</span><span class="p">.</span><span class="n">expand</span><span class="p">().</span><span class="n">factor</span><span class="p">()</span>
<span class="n">cond_delta_pr</span></code></pre></figure>
\[-\frac{1}{6} \, {\left(2 \, B p - 2 \, C p + B - 2 \, C\right)} {\left(p - 1\right)} p\]
<p>The condition for conditional nepotists to increase matches the equation that Jones presents in the appendix (Eq. A1)</p>
\[2Bp - 2Cp + B - 2C > 0\]
<p>or</p>
\[\frac{B}{C} > \frac{2 + 2p}{1 + 2p} \equiv \frac{1}{r_c}.\]
<p>As \(p\) increases from 0 to 1, \(r_c\) above increases from 1/2 to 3/4, meaning that when \(p\) is high, the brothers will treat Dmitri as though he is more related to them than he really is.
Conditional nepotists can invade defectors when (sub in \(p=0\)) \(B/C > 2\).
Defectors can invade conditional nepotists (sub in \(p=1\), reverse condition) when \(B/C < 4/3\).</p>
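<p>These invasion conditions and the separatrix are easy to confirm numerically from the factored expression above (my own sketch, plain Python with exact fractions):</p>

```python
from fractions import Fraction as F

def cond_delta(p, B, C):
    # expression proportional to Delta p, conditional nepotists vs defectors
    return -F(1, 6) * (2*B*p - 2*C*p + B - 2*C) * (p - 1) * p

B, C = F(3), F(2)                      # 4/3 < B/C = 3/2 < 2: bistable regime
p_star = (2*C - B) / (2*B - 2*C)       # separatrix
assert p_star == F(1, 2)
assert cond_delta(F(1, 4), B, C) < 0   # below the separatrix, defectors win
assert cond_delta(F(3, 4), B, C) > 0   # above it, conditional nepotists win
assert cond_delta(p_star, B, C) == 0
```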
<p>Let’s plot some examples of the \(\Delta p\) vs \(p\) function to get a more intuitive feel. When \(B/C = 2.5 > 2\), we see below that \(\Delta p\) is always greater than zero.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cond_1.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cond_1.png" alt="Change in proportion of conditional nepotists when \(B/C = 2.5\)" />
</a>
<figcaption><span>Change in proportion of conditional nepotists when \(B/C = 2.5\)</span></figcaption>
</figure>
<p>When \(B/C = 1.5\) (satisfying \(4/3 < B/C < 2\)), we see that \(\Delta p\) is less than zero when \(p\) is small, and greater than zero when \(p\) is large. There are two stable steady states — all Defectors and all Conditional Nepotists — and a separatrix between them. The position of the separatrix is found by setting the \(B/C\) condition above to an equality: \(p^{\star} = \frac{2C-B}{2B-2C}\).</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cond_2.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cond_2.png" alt="Change in proportion of conditional nepotists when \(B/C = 1.5\)." />
</a>
<figcaption><span>Change in proportion of conditional nepotists when \(B/C = 1.5\).</span></figcaption>
</figure>
<p>When \(B/C = 1.2 < 4/3\), \(\Delta p\) is always below zero. All-defectors is the evolutionary endpoint.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cond_3.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cond_3.png" alt="Change in proportion of conditional nepotists when \(B/C = 1.2\)." />
</a>
<figcaption><span>Change in proportion of conditional nepotists when \(B/C = 1.2\).</span></figcaption>
</figure>
<h4>Conditional nepotism vs. unconditional helping</h4>
<p>For completeness, we should now consider the case of ‘conditional nepotists’ vs ‘unconditional helpers’.
Let the \(A\) strategy be conditional nepotism and the \(B\) strategy be unconditional helping.
Substituting in the expected payoffs:</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">x_delta_pr</span> <span class="o">=</span> <span class="n">delta_pr</span><span class="p">.</span><span class="n">subs</span><span class="p">({</span>
<span class="n">a</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="p">(</span><span class="n">B</span><span class="o">-</span><span class="n">C</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">a</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">0</span><span class="p">):</span> <span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">1</span><span class="p">):</span> <span class="p">(</span><span class="n">B</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span><span class="p">,</span>
<span class="n">b</span><span class="p">(</span><span class="mi">2</span><span class="p">):</span> <span class="p">(</span><span class="mi">2</span><span class="o">*</span><span class="n">B</span><span class="o">-</span><span class="mi">2</span><span class="o">*</span><span class="n">C</span><span class="p">)</span><span class="o">/</span><span class="mi">3</span>
<span class="p">})</span>
<span class="n">x_delta_pr</span> <span class="o">=</span> <span class="n">x_delta_pr</span><span class="p">.</span><span class="n">expand</span><span class="p">().</span><span class="n">factor</span><span class="p">()</span>
<span class="n">x_delta_pr</span></code></pre></figure>
\[-\frac{1}{6} \, {\left(2 \, B p - 2 \, C p - B + 2 \, C\right)} {\left(p - 1\right)} p\]
<p>The condition for conditional nepotists to increase is</p>
\[B(2p-1) > C(2p-2)\]
<p>which splits up into two cases. When \(p > 1/2\), the condition becomes</p>
\[\frac{B}{C} > \frac{2p-2}{2p-1}\]
<p>and, since the right-hand side is negative for \(p > 1/2\), conditional nepotists can always grow. When \(p < 1/2\), the condition becomes</p>
\[\frac{B}{C} < \frac{2p-2}{2p-1}\]
<p>and conditional nepotists can only invade (\(p=0\)) if \(B/C < 2\).</p>
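<p>Again we can confirm both cases numerically from the factored expression above (my own sketch, plain Python):</p>

```python
from fractions import Fraction as F

def x_delta(p, B, C):
    # proportional to Delta p, conditional nepotists vs unconditional helpers
    return -F(1, 6) * (2*B*p - 2*C*p - B + 2*C) * (p - 1) * p

# B/C < 2: conditional nepotists increase from any interior frequency
B, C = F(3), F(2)
assert all(x_delta(F(i, 10), B, C) > 0 for i in range(1, 10))

# B/C > 2: an interior separatrix appears at p* = (B - 2C) / (2B - 2C)
B, C = F(6), F(1)
p_star = (B - 2*C) / (2*B - 2*C)
assert p_star == F(2, 5)
assert x_delta(p_star, B, C) == 0
assert x_delta(p_star - F(1, 10), B, C) < 0  # nepotists cannot invade
assert x_delta(p_star + F(1, 10), B, C) > 0  # but resist invasion once common
```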
<p>Let’s again plot some examples to obtain the intuition. First, let’s plot an example where \(B/C < 2\). The \(p^*=0\) steady state is unstable, \(p^* =1\) is stable, and there are no interior equilibria. Therefore, conditional nepotists can invade, and an all-conditional-nepotists population is the evolutionary endpoint.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cross_1.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cross_1.png" alt="Change in proportion of conditional nepotists when \(B/C = 1.5\)." />
</a>
<figcaption><span>Change in proportion of conditional nepotists when \(B/C = 1.5\).</span></figcaption>
</figure>
<p>Now let’s see how the situation changes as we move to the \(B/C > 2\) space. In the plot below, we see how, when we reach \(B/C = 2\), an interior separatrix appears, so now there are two evolutionarily stable states: all conditional nepotists, and all unconditional helpers. So conditional nepotists cannot invade, but once established, they can resist invasion by unconditional helpers.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cross_2.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cross_2.png" alt="Change in proportion of conditional nepotists for a range of \(B/C\) values." />
</a>
<figcaption><span>Change in proportion of conditional nepotists for a range of \(B/C\) values.</span></figcaption>
</figure>
<p>As \(B/C\) becomes larger, the separatrix moves to the right. When we plot an example with large \(B/C\), the separatrix is close to \(p \approx 1/2\), as suggested above.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/cross_3.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/cross_3.png" alt="Change in proportion of conditional nepotists for \(B/C = 100\)." />
</a>
<figcaption><span>Change in proportion of conditional nepotists for \(B/C = 100\).</span></figcaption>
</figure>
<h4>Summary of dynamics</h4>
<p>All evolutionary endpoints are monomorphic, so I summarised the evolutionary dynamics in terms of pairwise invasibility below.</p>
<p>When benefits are high compared to costs (\(B/C > 2\)), defectors cannot prevail.
The evolutionary endpoints are conditional nepotists or unconditional helpers.
When benefits are intermediate (\(4/3 < B/C < 2\)),
unconditional helpers are no longer evolutionarily stable. Either defectors or conditional nepotists will prevail.
When benefits are low (\(B/C < 4/3\)), defectors prevail.</p>
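<p>The three regimes can be wrapped up in a tiny helper function (my own summary of the thresholds derived above, not from Jones’ paper):</p>

```python
from fractions import Fraction as F

def regime(bc_ratio):
    # thresholds from the pairwise invasion analysis above
    if bc_ratio > 2:
        return 'conditional nepotists or unconditional helpers'
    if bc_ratio > F(4, 3):
        return 'conditional nepotists or defectors'
    return 'defectors'

assert regime(F(5, 2)) == 'conditional nepotists or unconditional helpers'
assert regime(F(3, 2)) == 'conditional nepotists or defectors'
assert regime(F(6, 5)) == 'defectors'
```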
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/11/summary.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/summary.png" alt="A summary of the evolutionary dynamics of the Brothers Karamazov game across three regimes determined by the cost-benefit ratio. White circles indicate evolutionary endpoints, directions of arrows indicate whether populations are resistant to invasion or invasible, and solid lines on edges indicate separatrices." />
</a>
<figcaption><span>A summary of the evolutionary dynamics of the Brothers Karamazov game across three regimes determined by the cost-benefit ratio. White circles indicate evolutionary endpoints, directions of arrows indicate whether populations are resistant to invasion or invasible, and solid lines on edges indicate separatrices.</span></figcaption>
</figure>
<p>Conditional nepotists can never invade. However, if the benefits are close to but just above \(B/C = 2\),
the separatrix between conditional nepotists and unconditional helpers is close to zero.
This suggests that, if a cluster of unconditional helpers have a brainwave and strike upon the idea of making a deal like Jones describes,
then they might be able to invade and explore the new space of potential collaborative situations not previously accessible to the unconditional
helpers.
The separatrix between conditional nepotists and defectors is near 0 when \(B/C\) is near \(2\), so they will (at least initially)
be quite resistant to invasion by defectors.</p>
<h3>Conclusion</h3>
<p>The Brothers Karamazov model above is only the first of three models that Jones (2000) discusses. In the second model, Jones considers a large donor group facing the choice of whether to help another group of relatives. Through a similar principle to “conditional nepotism” above, the donor group decides as a collective whether or not to help, i.e., “group nepotism”. Analogous to \(r_c\) above, a “group coefficient of relatedness” \(r_g\) is obtained from the condition \(B/C > 1/r_g\), and it too can exceed the dyadic relatedness \(r\).</p>
<p>Table 1 below compares estimates of the dyadic relatedness (labelled \(r_{11}\) in the table) and the group coefficient of relatedness for a sampling of tribal societies. Group relatedness is often much higher than dyadic relatedness, sometimes nearing 1. This suggests that, <em>if</em> groups of relatives can somehow coordinate themselves and act as a collective, then they will render help more easily than one would expect from individual inclusive fitness considerations alone.</p>
<figure style="max-width: 700px; margin: auto;">
<a href="/wp-content/uploads/2022/11/Jones_Table.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/11/Jones_Table.png" alt="" />
</a>
</figure>
<p>Jones proposes that such collective behaviour might be achieved by “mutual coercion, mutually agreed upon”, and gives many real-life anthropological examples where individuals are punished for not obeying norms about how one ought to treat their (culturally defined) kin. Models that also include coercion (punishment) could be used to learn more about the exact conditions under which this can evolve. So I have more reading to do.</p>
<h3>Further reading</h3>
<p>Jones, D. (15 June, 2016). Blog post: “Beating Hamilton’s Rule”, <a href="https://logarithmichistory.wordpress.com/2016/06/15/beating-hamiltons-rule/">https://logarithmichistory.wordpress.com/2016/06/15/beating-hamiltons-rule/</a></p>
<h3>References</h3>
<p>Jones, D. (2000). Group nepotism and human kinship. <a href="https://www.journals.uchicago.edu/doi/abs/10.1086/317406">Current Anthropology</a>, 41(5): 779-809</p>
<p>Ohtsuki, H. (2014). Evolutionary dynamics of n-player games played by relatives. <a href="https://royalsocietypublishing.org/doi/full/10.1098/rstb.2013.0359">Philosophical Transactions of the Royal Society B: Biological Sciences</a>, 369(1642), 20130359.</p>
<h3>Lab meeting about threshold games</h3>
<p>Last month,
I chose <a href="https://www.nature.com/articles/s41598-020-62626-3">a paper by Kris De Jaegher in <em>Scientific Reports</em></a> for our
weekly <a href="https://ryanchisholm.com/">lab</a>-meeting discussion.
We used the paper to teach ourselves about threshold games,
and the purpose of this blog post is to summarise the things we learnt.</p>
<h3>Replicator dynamics revision</h3>
<p>De Jaegher used the replicator dynamics approach.
Replicator dynamics assumes a well-mixed, infinitely large population of players that reproduces asexually,
where the number of offspring they produce depends upon the payoff they receive from a game-theoretic ‘game’,
as illustrated below:</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/800px-Game_Diagram_AniFin.gif">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/800px-Game_Diagram_AniFin.gif" alt="Replicator dynamics animation, created by HowieKor (Creative Commons)" />
</a>
<figcaption><span>Replicator dynamics animation, created by HowieKor (Creative Commons)</span></figcaption>
</figure>
<p>We start with a population of \(N\) individuals with different game strategies.
Individuals are randomly selected to form groups and play the game.
The payoff from the game determines how many offspring they have.
Offspring inherit the strategies of their parents (clonal reproduction), and the cycle repeats.</p>
<p>Let’s rederive the dynamics.
The number of individuals pursuing strategy \(i\) changes according to</p>
\[\frac{dn_i}{dt} = \dot{n_i} = n_i (\beta + f_i),\]
<p>where \(\beta\) is the background reproduction rate (apart from the game’s effect),
and \(f_i\) is the fitness effect of playing strategy \(i\).</p>
<p>Denote the proportion of \(i\)-strategists in the population</p>
\[p_i = \frac{n_i}{N}\]
<p>where \(N\) is the total population size.</p>
<p>We want to know \(\frac{dp_i}{dt}\).
Rearrange the \(p_i\) definition and take the derivatives</p>
\[n_i = p_i N\]
\[\dot{n_i} = p_i \dot{N} + \dot{p_i} N\]
\[\dot{p_i} = \frac{1}{N} (\dot{n_i} - p_i \dot{N})\]
<p>So we need to sort out \(\dot{N}\)</p>
\[\begin{align}
\dot{N} &= \sum_i \dot{n_i} \\
&= \beta \sum_i n_i + \sum_i f_i n_i \\
&= \beta N + N \sum_i f_i \frac{n_i}{N} \\
&= \beta N + N \sum_i f_i p_i \\
\dot{N} &= N (\beta +\bar{f}) \\
\end{align}\]
<p>Substitute our equations for \(\dot{N}\) and \(\dot{n_i}\) into the equation for \(\dot{p_i}\),
and we obtain the dynamics of strategy proportions</p>
\[\frac{dp_i}{dt} = \dot{p_i} = p_i (f_i - \bar{f})\]
<p>In the De Jaegher paper,
players face the binary choice of cooperating or defecting.
Denote cooperators and defectors by C and D, respectively,
and define \(p = p_C = 1-p_D\).
Then the equation governing the dynamics simplifies to</p>
\[\begin{align}
\dot{p}
&= p \, (f_C - [p f_C + (1-p) f_D]) \\
\dot{p} &= p (1-p) (f_C - f_D)
\end{align}\]
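<p>To build intuition, here is a minimal forward-Euler integration of this two-strategy equation in plain Python (my own sketch; the payoff functions are placeholders, not De Jaegher’s):</p>

```python
def simulate(p0, f_C, f_D, dt=0.01, steps=2000):
    # forward-Euler integration of p' = p (1 - p) (f_C - f_D)
    p = p0
    for _ in range(steps):
        p += dt * p * (1 - p) * (f_C(p) - f_D(p))
    return p

# toy frequency-independent payoffs: cooperators strictly worse off
p_end = simulate(0.5, f_C=lambda p: 1.0, f_D=lambda p: 1.5)
assert p_end < 0.01   # cooperation is driven out

# and the reverse: cooperators strictly better off
p_end = simulate(0.5, f_C=lambda p: 1.5, f_D=lambda p: 1.0)
assert p_end > 0.99   # cooperation fixes
```

As expected from \(\dot{p} = p(1-p)(f_C - f_D)\), the frequency simply runs towards whichever boundary the payoff difference favours; frequency-dependent \(f_C\) and \(f_D\), as in the threshold game below, can produce interior equilibria instead.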
<h3>Threshold game</h3>
<p>In the threshold game, a group of \(n\) players is formed (note: this \(n\) is the group size, not the population size of the previous section),
and if the group contains \(k\) or more cooperators,
then they provide a benefit \(b\).
The benefit \(b\) is received by all group members regardless of whether that member was a cooperator or defector</p>
\[b_i =
\begin{cases}
b &\text{if } i >= k \text{ (threshold met)}\\
0 &\text{otherwise (threshold not met)}
\end{cases}\]
<p>Throughout the paper, De Jaegher normalises \(b=1\).</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="kn">import</span> <span class="nn">numpy</span> <span class="k">as</span> <span class="n">np</span>
<span class="kn">from</span> <span class="nn">scipy.special</span> <span class="kn">import</span> <span class="n">binom</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="k">as</span> <span class="n">plt</span>
<span class="kn">from</span> <span class="nn">scipy.optimize</span> <span class="kn">import</span> <span class="n">brentq</span>
<span class="n">b</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">x</span><span class="p">,</span> <span class="n">k</span><span class="p">:</span> <span class="mi">1</span> <span class="k">if</span> <span class="n">x</span> <span class="o">>=</span> <span class="n">k</span> <span class="k">else</span> <span class="mi">0</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">xV</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">7</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span>
<span class="n">bV</span> <span class="o">=</span> <span class="p">[</span><span class="n">b</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xV</span><span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'7 player game w. threshold $k=5$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">xV</span><span class="p">,</span> <span class="n">bV</span><span class="p">,</span> <span class="s">'-o'</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'no. cooperators in group'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s">'benefit $b$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xticks</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">7</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/benefit_v_nocoops.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/benefit_v_nocoops.png" alt="Benefit provided to each player in the threshold game as a function of the number of cooperators in the group." />
</a>
<figcaption><span>Benefit provided to each player in the threshold game as a function of the number of cooperators in the group.</span></figcaption>
</figure>
<p>Everyone receives the benefit, but only cooperators pay the cost \(c\).
So the total payoff to cooperators is</p>
\[\text{cooperator payoff} =
\begin{cases}
b - c & \text{if threshold met,} \\
-c & \text{if threshold not met.}
\end{cases}\]
<p>and defectors receive</p>
\[\text{defector payoff} =
\begin{cases}
b & \text{if threshold met,} \\
0 & \text{if threshold not met.}
\end{cases}\]
<p>Within any given group, defectors always do better than cooperators, since both receive the same benefit but only cooperators pay the cost.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.4</span>
<span class="n">xV</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span>
<span class="c1"># payoffs
</span><span class="n">pay_C</span> <span class="o">=</span> <span class="p">[</span> <span class="n">b</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xV</span><span class="p">[</span><span class="mi">1</span><span class="p">:]</span> <span class="p">]</span>
<span class="n">pay_D</span> <span class="o">=</span> <span class="p">[</span> <span class="n">b</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="k">for</span> <span class="n">x</span> <span class="ow">in</span> <span class="n">xV</span><span class="p">[:</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">xV</span><span class="p">[</span><span class="mi">1</span><span class="p">:],</span> <span class="n">pay_C</span><span class="p">,</span> <span class="s">'-o'</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'cooperators'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">xV</span><span class="p">[:</span><span class="o">-</span><span class="mi">1</span><span class="p">],</span> <span class="n">pay_D</span><span class="p">,</span> <span class="s">'-o'</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'red'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'defectors'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'7 player game w. threshold $k=5$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'no. of cooperators in group'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s">'total payoff'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xticks</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/payoff_v_nocoops.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/payoff_v_nocoops.png" alt="Payoff to cooperators and defectors in the threshold game as a function of the number of cooperators in the group." />
</a>
<figcaption><span>Payoff to cooperators and defectors in the threshold game as a function of the number of cooperators in the group.</span></figcaption>
</figure>
<h3>Fitness effects</h3>
<p>The payoffs above tell us what happens in a particular game with a particular number of cooperators and defectors,
but what we need are the fitness effects of being a cooperator and a defector,
\(f_C\) and \(f_D\), which are averaged over all the games played.</p>
<p>The replicator dynamics assumes that groups are formed randomly from an infinite population,
so the number of cooperators among the other players is binomially distributed.
Therefore, the fitness effect is the sum of the payoff from each possible game multiplied by the probability that the player will end up in that game</p>
\[f_C(p) = \sum_{i=0}^{n-1} \underbrace{ {n-1 \choose i} p^i (1-p)^{n-1-i}}_{\text{Pr grouped with $i$ cooperators}} \: b_{i+1} - c,\]
\[f_D(p) = \sum_{i=0}^{n-1} \underbrace{ {n-1 \choose i} p^i (1-p)^{n-1-i}}_{\text{Pr grouped with $i$ cooperators}} \: b_{i}.\]
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">f_c</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="nb">sum</span><span class="p">(</span><span class="n">binom</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">l</span><span class="p">)</span> <span class="o">*</span> <span class="n">p</span><span class="o">**</span><span class="n">l</span> <span class="o">*</span> <span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">)</span><span class="o">**</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="o">-</span><span class="n">l</span><span class="p">)</span> <span class="o">*</span> <span class="n">b</span><span class="p">(</span><span class="n">l</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="k">for</span> <span class="n">l</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">))</span> <span class="o">-</span> <span class="n">c</span>
<span class="n">f_d</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="nb">sum</span><span class="p">(</span><span class="n">binom</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">l</span><span class="p">)</span> <span class="o">*</span> <span class="n">p</span><span class="o">**</span><span class="n">l</span> <span class="o">*</span> <span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">)</span><span class="o">**</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="o">-</span><span class="n">l</span><span class="p">)</span> <span class="o">*</span> <span class="n">b</span><span class="p">(</span><span class="n">l</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="k">for</span> <span class="n">l</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">))</span></code></pre></figure>
<h3>Pivot probability</h3>
<p>The pivot probability is the probability that a player will end up in a game where \(k-1\) of the other \(n-1\) players are cooperators</p>
\[\pi_k = {n-1 \choose k-1} p^{k-1} (1-p)^{n-k}.\]
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">pi_k_fnc</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="n">binom</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">k</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="o">*</span> <span class="n">p</span><span class="o">**</span><span class="p">(</span><span class="n">k</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="o">*</span> <span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">)</span><span class="o">**</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="n">k</span><span class="p">)</span>
<span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.4</span>
<span class="c1"># y is the number of the other n-1 players who are cooperators
</span><span class="n">yV</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">))</span>
<span class="c1"># payoffs
</span><span class="n">pay_C</span> <span class="o">=</span> <span class="p">[</span> <span class="n">b</span><span class="p">(</span><span class="n">y</span><span class="o">+</span><span class="mi">1</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span> <span class="k">for</span> <span class="n">y</span> <span class="ow">in</span> <span class="n">yV</span> <span class="p">]</span>
<span class="n">pay_D</span> <span class="o">=</span> <span class="p">[</span> <span class="n">b</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">k</span><span class="p">)</span> <span class="k">for</span> <span class="n">y</span> <span class="ow">in</span> <span class="n">yV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">yV</span><span class="p">,</span> <span class="n">pay_C</span><span class="p">,</span> <span class="s">'-o'</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'cooperators'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">yV</span><span class="p">,</span> <span class="n">pay_D</span><span class="p">,</span> <span class="s">'-o'</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'red'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'defectors'</span><span class="p">)</span>
<span class="c1"># show the pivot
</span><span class="n">plt</span><span class="p">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">text</span><span class="o">=</span><span class="s">''</span><span class="p">,</span> <span class="n">xy</span><span class="o">=</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="mi">0</span><span class="p">),</span> <span class="n">xytext</span><span class="o">=</span><span class="p">(</span><span class="mi">4</span><span class="p">,</span> <span class="mi">1</span><span class="o">-</span><span class="n">c</span><span class="p">),</span> <span class="n">arrowprops</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">arrowstyle</span><span class="o">=</span><span class="s">'<->'</span><span class="p">,</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">4</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'7 player game w. threshold $k=5$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'no. of cooperators among $n-1$ other players'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="s">'total payoff'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xticks</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">n</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<p>Why is the pivot probability significant?
First, consider just a single game,
and imagine I know that I am the pivotal player.
If I am a cooperator, then it is in my interests to remain a cooperator
because if I switch then I will reduce my own payoff.
If I am a defector and I have the option of switching strategies,
I would want to switch to cooperating.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/show_pivot.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/show_pivot.png" alt="Payoff to cooperators and defectors in the threshold game as a function of the number of cooperators among the other members of the group. If I am the pivotal player, then it is in my interests to cooperate (marked with an arrow)." />
</a>
<figcaption><span>Payoff to cooperators and defectors in the threshold game as a function of the number of cooperators among the other members of the group. If I am the pivotal player, then it is in my interests to cooperate (marked with an arrow).</span></figcaption>
</figure>
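<p>To put numbers on the pivotal-player argument above, here is a minimal standalone sketch. The benefit function <code>b</code> is re-written here as the simple threshold indicator used throughout this post.</p>

```python
# threshold benefit: 1 if at least k of the n players cooperate, else 0
b = lambda x, k: 1 if x >= k else 0

n, k, c = 7, 5, 0.4
y = k - 1  # pivotal situation: k-1 of the other n-1 players cooperate

pay_if_cooperate = b(y + 1, k) - c  # my contribution meets the threshold
pay_if_defect = b(y, k)             # without me the threshold is missed

# cooperating pays whenever c < 1
print(pay_if_cooperate, pay_if_defect)
```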
<p>We can also intuit that,
if the probability that I will end up in a game where I am the pivotal player is high,
then assuming I have a fixed strategy that I play all the time,
it might be in my interests to be a cooperator… We’ll return to this point soon.</p>
<h3>First example of the threshold game</h3>
<p>Let’s start by plotting an example to get a sense of the dynamics.
Recall</p>
\[\dot{p} = p (1-p) (f_C - f_D)\]
<p>where \(f_C\) and \(f_D\) are the fitness effects defined above. Let us plot this.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># dynamics equation
</span><span class="n">delta_p</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="n">p</span><span class="o">*</span><span class="p">(</span><span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">)</span><span class="o">*</span><span class="p">(</span><span class="n">f_c</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">f_d</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">))</span>
<span class="c1"># some parameter values
</span><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.3</span>
<span class="c1"># how Delta p changes with p
</span><span class="n">pV</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">60</span><span class="p">)</span>
<span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="c1"># solve for stable steady state
</span>
<span class="c1"># the p-value at the peak (I'll explain this bit later)
</span><span class="n">p_hat_fnc</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="p">(</span><span class="n">k</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span><span class="o">/</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">p_hat</span> <span class="o">=</span> <span class="n">p_hat_fnc</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="n">eql0</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">:</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span>
<span class="n">p_s</span> <span class="o">=</span> <span class="n">brentq</span><span class="p">(</span><span class="n">eql0</span><span class="p">,</span> <span class="n">p_hat</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_s</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># also plot defector steady state
</span><span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># show changes with arrows
</span><span class="n">y</span> <span class="o">=</span> <span class="mf">0.01</span>
<span class="n">p_u</span> <span class="o">=</span> <span class="n">brentq</span><span class="p">(</span><span class="n">eql0</span><span class="p">,</span> <span class="n">p_hat</span><span class="p">,</span> <span class="mf">0.1</span><span class="p">)</span> <span class="c1"># solve for unstable steady state
</span><span class="n">plt</span><span class="p">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">text</span><span class="o">=</span><span class="s">''</span><span class="p">,</span> <span class="n">xy</span><span class="o">=</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">xytext</span><span class="o">=</span><span class="p">(</span><span class="n">p_u</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">arrowprops</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">arrowstyle</span><span class="o">=</span><span class="s">'->'</span><span class="p">,</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">text</span><span class="o">=</span><span class="s">''</span><span class="p">,</span> <span class="n">xy</span><span class="o">=</span><span class="p">(</span><span class="n">p_u</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">xytext</span><span class="o">=</span><span class="p">(</span><span class="n">p_s</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">arrowprops</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">arrowstyle</span><span class="o">=</span><span class="s">'<-'</span><span class="p">,</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">annotate</span><span class="p">(</span><span class="n">text</span><span class="o">=</span><span class="s">''</span><span class="p">,</span> <span class="n">xy</span><span class="o">=</span><span class="p">(</span><span class="n">p_s</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">xytext</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">y</span><span class="p">),</span> <span class="n">arrowprops</span><span class="o">=</span><span class="nb">dict</span><span class="p">(</span><span class="n">arrowstyle</span><span class="o">=</span><span class="s">'->'</span><span class="p">,</span> <span class="n">linewidth</span><span class="o">=</span><span class="mi">2</span><span class="p">))</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=5$, $n=7$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'lower right'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylim</span><span class="p">((</span><span class="o">-</span><span class="mf">0.06</span><span class="p">,</span> <span class="mf">0.02</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<p>In the figure below,
we find that the all-defector population is an evolutionarily stable steady state,
and the all-cooperator population is unstable.
There is also an evolutionarily stable state with a mix of cooperators and defectors
(at \(p^{\star} \approx 0.746\)), separated from the all-defector state by an unstable interior steady state.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/dpdt_v_p.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/dpdt_v_p.png" alt="How the change in the proportion of cooperators varies with the current proportion of cooperators \(p\). There are two stable steady states (marked with solid dots) and two unstable steady states. The arrows indicate the direction in which the population will evolve given its current \(p\) value." />
</a>
<figcaption><span>How the change in the proportion of cooperators varies with the current proportion of cooperators \(p\). There are two stable steady states (marked with solid dots) and two unstable steady states. The arrows indicate the direction in which the population will evolve given its current \(p\) value.</span></figcaption>
</figure>
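<p>As a sanity check, the interior stable steady state described above can be reproduced in a few self-contained lines (a sketch assuming SciPy is available; <code>pi_k</code> re-implements the pivot-probability formula).</p>

```python
from scipy.optimize import brentq
from scipy.special import binom

# pivot probability: exactly k-1 cooperators among the other n-1 players
pi_k = lambda p, k, n: binom(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)

n, k, c = 7, 5, 0.3
p_hat = (k - 1) / (n - 1)  # peak of pi_k; the stable root lies to its right

# the stable interior steady state solves pi_k(p) - c = 0 on [p_hat, 0.9]
p_s = brentq(lambda p: pi_k(p, k, n) - c, p_hat, 0.9)
print(round(p_s, 3))  # approximately 0.746
```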
<p>We can also see some of this directly by inspecting the equation for the dynamics.
Recall</p>
\[\frac{dp}{dt} = p (1-p) (f_C(p) - f_D(p)).\]
<p>Steady states occur when \(\frac{dp}{dt} = 0\),
so there are two trivial steady states</p>
\[p = 0,\]
<p>and</p>
\[p = 1,\]
<p>and any interior steady states solve</p>
\[0 = f_C(p) - f_D(p).\]
<p>Let’s focus on these interior steady states.
Recall</p>
\[f_C(p) = \sum_{i=0}^{n-1} { n-1 \choose i } p^i (1-p)^{n-1-i} b_{i+1} - c,\]
<p>and</p>
\[f_D(p) = \sum_{i=0}^{n-1} { n-1 \choose i } p^i (1-p)^{n-1-i} b_i,\]
<p>but the \(b_i\) values are 1 when \(i \geq k\) and 0 otherwise.
Subtracting the two sums term by term, the coefficient \(b_{i+1} - b_i\) vanishes except at \(i = k-1\), so we can simplify</p>
\[f_C(p) - f_D(p) = \underbrace{ {n-1 \choose k-1} p^{k-1} (1-p)^{n-k}}_{\pi_k(p)} - c,\]
<p>and notice that the pivot probability appears in it.</p>
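<p>This simplification is easy to verify numerically. The sketch below (assuming SciPy) re-implements \(f_C\), \(f_D\), and \(\pi_k\) from the formulas above and checks that \(f_C - f_D = \pi_k - c\) at a few values of \(p\).</p>

```python
from scipy.special import binom

b = lambda x, k: 1 if x >= k else 0  # threshold benefit

# fitness effects: payoffs averaged over binomially distributed group make-ups
f_c = lambda p, c, k, n: sum(binom(n-1, i) * p**i * (1-p)**(n-1-i) * b(i+1, k)
                             for i in range(n)) - c
f_d = lambda p, c, k, n: sum(binom(n-1, i) * p**i * (1-p)**(n-1-i) * b(i, k)
                             for i in range(n))
pi_k = lambda p, k, n: binom(n-1, k-1) * p**(k-1) * (1-p)**(n-k)

n, k, c = 7, 5, 0.3
for p in (0.1, 0.5, 0.9):
    # the two sums collapse to the single pivot term
    assert abs((f_c(p, c, k, n) - f_d(p, c, k, n)) - (pi_k(p, k, n) - c)) < 1e-12
```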
<p>In summary, the dynamics are</p>
\[\frac{dp}{dt} = p \, (1-p) \, \overbrace{ \underbrace{ {n-1 \choose k-1} p^{k-1} (1-p)^{n-k} }_{\pi_k(p)} - c}^{f_C(p) - f_D(p)}\]
<p>Let’s plot just the function whose zeros are the interior steady states</p>
\[\pi_k(p) - c = 0.\]
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># our \pi_k(p) - c = 0 function to solve
</span><span class="n">Delta_f</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">:</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span>
<span class="c1"># some parameter values
</span><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.3</span>
<span class="c1"># how the delta f function changes with p
</span><span class="n">Delta_fV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">Delta_f</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">Delta_fV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$f_C - f_D$'</span><span class="p">)</span>
<span class="c1"># the p-value at the peak (see further below for explanation)
</span><span class="n">p_hat</span> <span class="o">=</span> <span class="n">p_hat_fnc</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="c1"># solve for stable steady state
</span><span class="n">eql0</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">:</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span>
<span class="n">p_s</span> <span class="o">=</span> <span class="n">brentq</span><span class="p">(</span><span class="n">eql0</span><span class="p">,</span> <span class="n">p_hat</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_s</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'where $\pi_k(p)-c=0$'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=5$, $n=7$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'$f_C(p) - f_D(p)$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/fCminusfD_v_p.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/fCminusfD_v_p.png" alt="How the difference in fitness effects changes with the proportion of cooperators. This function also isolates the interior steady states, which solve \( \pi_k(p) - c = 0 \) (stable steady state shown with a solid dot)." />
</a>
<figcaption><span>How the difference in fitness effects changes with the proportion of cooperators. This function also isolates the interior steady states, which solve \( \pi_k(p) - c = 0 \) (stable steady state shown with a solid dot).</span></figcaption>
</figure>
<h3>The peak</h3>
<p>De Jaegher found that,
when \(1 < k < n\), both \(f_C - f_D\) and \(\pi_k\) have a peak at \(\hat{p} = \frac{k-1}{n-1}\).
They found this by using the fact that the function is unimodal and locating where its derivative equals zero.</p>
\[f_C(p) - f_D(p) = {n-1 \choose k-1} p^{k-1} (1-p)^{n-k} - c,\]
<p>so</p>
\[\frac{d(f_C(p) - f_D(p))}{dp}
= {n-1 \choose k-1} (1-p)^{n-k-1} p^{k-2} \bigl( (k-1)(1-p) + (k-n) p \bigr).\]
<p>The peak is located where \(\frac{d(f_C(p) - f_D(p))}{dp} = 0\),
that is,</p>
\[0 = (k-1)(1-\hat{p}) + (k-n) \hat{p},\]
<p>which gives the peak location</p>
\[\hat{p} = \frac{k-1}{n-1}.\]
<p>Because \(f_C - f_D\) and \(\pi_k\) differ only by a constant,
this point is also the location of the peak of the pivot probability function.</p>
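<p>As a quick numerical check of this peak location (a self-contained sketch: <code>pivot_prob</code> is our own stand-in for the \(\pi_k\) helper used in the plotting code below), a brute-force grid search lands on the analytic \(\hat{p}\):</p>

```python
from math import comb

def pivot_prob(p, k, n):
    # pi_k(p): probability that exactly k-1 of the other n-1 players cooperate
    return comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)

n, k = 7, 5
p_hat = (k - 1) / (n - 1)  # analytic peak location, (k-1)/(n-1)

# brute-force search over a fine grid for the maximiser of pi_k
grid = [i / 1000 for i in range(1001)]
p_best = max(grid, key=lambda p: pivot_prob(p, k, n))
print(p_hat, p_best)  # both ~ 0.667
```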
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># some parameter values
</span><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.3</span>
<span class="c1"># how the gain function changes with p
</span><span class="n">Delta_fV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">Delta_f</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">Delta_fV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$f_C - f_D$'</span><span class="p">)</span>
<span class="c1"># the p-value at the peak
</span><span class="n">p_hat</span> <span class="o">=</span> <span class="n">p_hat_fnc</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axvline</span><span class="p">(</span><span class="n">p_hat</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">ls</span><span class="o">=</span><span class="s">'dashed'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'peak $\hat{p}$'</span><span class="p">)</span>
<span class="c1"># solve for steady state
</span><span class="n">eql0</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">:</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span>
<span class="n">p_s</span> <span class="o">=</span> <span class="n">brentq</span><span class="p">(</span><span class="n">eql0</span><span class="p">,</span> <span class="n">p_hat</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_s</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=5$, $n=7$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'$f_C(p) - f_D(p)$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/where_the_peak.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/where_the_peak.png" alt="Example showing the location of the peak." />
</a>
<figcaption><span>Example showing the location of the peak.</span></figcaption>
</figure>
<p>Knowing where this peak is and
whether it is above or below the line tells us about the dynamics, as will become clearer in the examples below.</p>
<h3>Examples for Result 1</h3>
<p>Below, we’ll go through examples for each of the cases in Result 1.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># keep group-size parameter value constant
</span><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="c1"># grid size for plotting
</span><span class="n">pV</span> <span class="o">=</span> <span class="n">np</span><span class="p">.</span><span class="n">linspace</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">60</span><span class="p">)</span></code></pre></figure>
<h4>When \(k=1\)</h4>
<p>For the minimal threshold \(k = 1\),
the game has a unique interior fixed point where a fraction \(p_1^{\text{II}} = 1-c^{1/(n-1)}\) of players cooperates (the Volunteer’s Dilemma).</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">k</span> <span class="o">=</span> <span class="mi">1</span>
<span class="c1"># how Delta p changes with p
</span><span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="c1"># fixed point
</span><span class="n">p_s</span> <span class="o">=</span> <span class="mi">1</span><span class="o">-</span><span class="n">c</span><span class="o">**</span><span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_s</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'$p_1^{II}$'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=1$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/example_volunteers.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/example_volunteers.png" alt="Example dynamics for the Volunteer's Dilemma." />
</a>
<figcaption><span>Example dynamics for the Volunteer's Dilemma.</span></figcaption>
</figure>
<h4>When \(1 < k < n\)</h4>
<p><strong>Low cost</strong></p>
<p>When participation costs are small (\(c < \bar{c}_k\)),
the game has both a stable fixed point where all players defect (\(p = 0\))
and a stable interior fixed point \(p_k^{\text{II}}\),
given implicitly by \(\pi_k(p_k^{\text{II}}) = c\).</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">k</span> <span class="o">=</span> <span class="mi">5</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.3</span>
<span class="c1"># how Delta p changes with p
</span><span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="c1"># the p-value at the peak
</span><span class="n">p_hat</span> <span class="o">=</span> <span class="n">p_hat_fnc</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="c1"># the pivot probability at the p-value at the peak
</span><span class="n">c_bar</span> <span class="o">=</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p_hat</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="c1"># solve for steady state
</span><span class="n">eql0</span> <span class="o">=</span> <span class="k">lambda</span> <span class="n">p</span><span class="p">:</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="o">-</span> <span class="n">c</span>
<span class="n">p_s</span> <span class="o">=</span> <span class="n">brentq</span><span class="p">(</span><span class="n">eql0</span><span class="p">,</span> <span class="n">p_hat</span><span class="p">,</span> <span class="mf">0.9</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_s</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'$p_k^{II}$'</span><span class="p">)</span>
<span class="c1"># also plot defector ss
</span><span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=5$, '</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> <span class="o">+</span> <span class="sa">r</span><span class="s">' = $c < \bar{c}_k$ = '</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="mi">1000</span><span class="o">*</span><span class="n">c_bar</span><span class="p">)</span><span class="o">/</span><span class="mi">1000</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/example_hybrid.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/example_hybrid.png" alt="Example dynamics for the Hybrid Game." />
</a>
<figcaption><span>Example dynamics for the Hybrid Game.</span></figcaption>
</figure>
<p><strong>High cost</strong></p>
<p>When participation costs are high, \(c > \bar{c}_k\),
the game has a unique stable fixed point where all players defect (\(p = 0\)).</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">c</span> <span class="o">=</span> <span class="mf">0.35</span>
<span class="c1"># how Delta p changes with p
</span><span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="c1"># the p-value at the peak
</span><span class="n">p_hat</span> <span class="o">=</span> <span class="n">p_hat_fnc</span><span class="p">(</span><span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span>
<span class="c1"># plot defector ss
</span><span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=5$, '</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> <span class="o">+</span> <span class="sa">r</span><span class="s">' = $c > \bar{c}_k$ = '</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="nb">int</span><span class="p">(</span><span class="mi">1000</span><span class="o">*</span><span class="n">c_bar</span><span class="p">)</span><span class="o">/</span><span class="mi">1000</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/example_PD.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/example_PD.png" alt="Example dynamics for the situation they liken to the Prisoner's Dilemma." />
</a>
<figcaption><span>Example dynamics for the situation they liken to the Prisoner's Dilemma.</span></figcaption>
</figure>
<p>What’s the significance of this \(\bar{c}_k\) value?
Recall</p>
\[f_C(p) - f_D(p) = \underbrace{ {n-1 \choose k-1} p^{k-1} (1-p)^{n-k}}_{\pi_k(p)} - c.\]
<p>They’ve defined</p>
\[\bar{c}_k = \pi_k(\hat{p}),\]
<p>where \(\hat{p}\) is the probability at the peak.
So we know that, if \(\bar{c}_k > c\),
the \(f_C - f_D\) line will be above the zero line at the peak and there will be an interior equilibrium,
and if \(\bar{c}_k < c\),
the peak is below the line and there are no interior equilibria.</p>
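<p>We can verify this numerically for the examples above (a standalone sketch; <code>pivot_prob</code> again mirrors \(\pi_k\)): for \(k=5\) and \(n=7\), the critical cost \(\bar{c}_k \approx 0.329\) sits between the low-cost example (\(c = 0.3\)) and the high-cost example (\(c = 0.35\)).</p>

```python
from math import comb

def pivot_prob(p, k, n):
    # pi_k(p): probability that exactly k-1 of the other n-1 players cooperate
    return comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)

n, k = 7, 5
p_hat = (k - 1) / (n - 1)
c_bar = pivot_prob(p_hat, k, n)  # the critical cost bar{c}_k
print(round(c_bar, 3))           # 0.329
# the two example costs straddle the critical value
print(0.3 < c_bar)               # True: interior equilibrium exists for c = 0.3
print(c_bar < 0.35)              # True: no interior equilibrium for c = 0.35
```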
<h4>When \(k = n\)</h4>
<p>Our final case is the maximum threshold,
\(k=n\).
Here,
the game has both a stable steady state where all players defect (\(p = 0\))
and a stable steady state where all players cooperate (\(p = 1\)).
The unstable interior steady state \(p_n^{\text{I}} = c^{1/(n-1)}\)
separates their basins of attraction.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">k</span> <span class="o">=</span> <span class="n">n</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.3</span>
<span class="c1"># how Delta p changes with p
</span><span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'blue'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="sa">r</span><span class="s">'$\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="c1"># separatrix: the unstable interior steady state
</span><span class="n">p_u</span> <span class="o">=</span> <span class="n">c</span><span class="o">**</span><span class="p">(</span><span class="mi">1</span><span class="o">/</span><span class="p">(</span><span class="n">n</span><span class="o">-</span><span class="mi">1</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="n">p_u</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">facecolors</span><span class="o">=</span><span class="s">'none'</span><span class="p">,</span> <span class="n">edgecolors</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="s">'$p_n^I$'</span><span class="p">)</span>
<span class="c1"># trivial equilibria are steady states
</span><span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="mi">0</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">scatter</span><span class="p">([</span><span class="mi">1</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">s</span><span class="o">=</span><span class="mi">50</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="s">'$k=n$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'best'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/example_staghunt.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/example_staghunt.png" alt="Example dynamics for the situation they liken to a Stag Hunt." />
</a>
<figcaption><span>Example dynamics for the situation they liken to a Stag Hunt.</span></figcaption>
</figure>
<h4>Bringing it all together</h4>
<p>We can place each of these examples on De Jaegher’s Fig. 4.
The 4 regions of the figure represent four different qualitative regimes for the dynamics (game types).</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/DeJaegher_Fig4_spaceout.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/DeJaegher_Fig4_spaceout.png" alt="Placing each of our examples onto De Jaegher's Fig. 4. The four regions represent four qualitatively different regimes for the dynamics." />
</a>
<figcaption><span>Placing each of our examples onto De Jaegher's Fig. 4. The four regions represent four qualitatively different regimes for the dynamics.</span></figcaption>
</figure>
<h3>Examples for Result 2</h3>
<p>De Jaegher found that the threshold level has a U-shaped effect on the level of cooperation,
which can be seen in their Fig. 4 (the shape of the division between the blue and green region).
Let’s plot how the dynamics change as the threshold level varies.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.33</span>
<span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
<span class="c1"># how Delta p changes with p
</span> <span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">k</span><span class="p">))</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="sa">r</span><span class="s">'the full $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'upper center'</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="s">'$k=$'</span><span class="p">,</span> <span class="n">ncol</span> <span class="o">=</span> <span class="mi">7</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="s">'x-small'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylim</span><span class="p">((</span><span class="o">-</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.06</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<p>Here,
we see that cooperation can evolve both for low and for high thresholds, but not for intermediate thresholds.
This is perhaps surprising because, intuitively, a higher threshold seems more difficult to attain.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/threshold_dpdt.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/threshold_dpdt.png" alt="The effect of different threshold levels on the dynamics. In this example, cooperation can evolve if the threshold is low or high, but not for intermediate values." />
</a>
<figcaption><span>The effect of different threshold levels on the dynamics. In this example, cooperation can evolve if the threshold is low or high, but not for intermediate values.</span></figcaption>
</figure>
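<p>To make the U-shape concrete, we can check directly which interior thresholds \(1 < k < n\) satisfy \(c < \bar{c}_k\) in this example (a standalone sketch; <code>pivot_prob</code> is our stand-in for \(\pi_k\)):</p>

```python
from math import comb

def pivot_prob(p, k, n):
    # pi_k(p): probability that exactly k-1 of the other n-1 players cooperate
    return comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)

n, c = 7, 0.33
supports_coop = []
for k in range(2, n):  # interior thresholds 1 < k < n
    c_bar = pivot_prob((k - 1) / (n - 1), k, n)
    if c < c_bar:      # cost below the critical value: stable interior equilibrium
        supports_coop.append(k)
print(supports_coop)   # [2, 6]: only the lowest and highest interior thresholds
```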
<p>This U-shape emerges from the pivot probabilities.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.33</span>
<span class="k">for</span> <span class="n">k</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">n</span><span class="o">+</span><span class="mi">1</span><span class="p">):</span>
<span class="c1"># how Delta p changes with p
</span> <span class="n">pi_kV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">pi_kV</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">k</span><span class="p">))</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="sa">r</span><span class="s">'just the $\pi_k$ bit'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'$\pi_k(p)$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'upper center'</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="s">'$k=$'</span><span class="p">,</span> <span class="n">ncol</span> <span class="o">=</span> <span class="mi">7</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="s">'xx-small'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/threshold_pivot.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/threshold_pivot.png" alt="Different threshold levels have a U-shaped effect on the pivot probability." />
</a>
<figcaption><span>Different threshold levels have a U-shaped effect on the pivot probability.</span></figcaption>
</figure>
<h3>Group size</h3>
<p>De Jaegher discusses the possibility of stabilising a game by changing the group size.
If cooperation is more likely to persist at low or high thresholds,
then perhaps increasing or decreasing the group size, shifting the relative position of the threshold, could stabilise cooperation.</p>
<p>It turns out this is true only for decreasing the group size:
increasing the group size has a negative effect on cooperation.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.33</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">4</span>
<span class="c1"># plot the previous
</span><span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">ls</span><span class="o">=</span><span class="s">'dashed'</span><span class="p">)</span>
<span class="c1"># plot for a range
</span><span class="n">nV</span> <span class="o">=</span> <span class="p">[</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">12</span><span class="p">]</span>
<span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">nV</span><span class="p">:</span>
<span class="c1"># how Delta p changes with p
</span> <span class="n">delta_pV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">delta_p</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">c</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">delta_pV</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">n</span><span class="p">))</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="sa">r</span><span class="s">'the full $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'change in coops $\frac{dp}{dt}$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'upper center'</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="s">'$n=$'</span><span class="p">,</span> <span class="n">ncol</span> <span class="o">=</span> <span class="mi">7</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="s">'x-small'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylim</span><span class="p">((</span><span class="o">-</span><span class="mf">0.1</span><span class="p">,</span> <span class="mf">0.06</span><span class="p">))</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/dpdt_vary_n.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/dpdt_vary_n.png" alt="In this example, we have kept the threshold level \(k=4\) constant, but varied the group size. Decreasing the group size allows the interior steady state, with coexistence of cooperators and defectors, to emerge; but increasing the group size has a negative effect on cooperation." />
</a>
<figcaption><span>In this example, we have kept the threshold level \(k=4\) constant, but varied the group size. Decreasing the group size allows the interior steady state, with coexistence of cooperators and defectors, to emerge; but increasing the group size has a negative effect on cooperation.</span></figcaption>
</figure>
<p>The reason lies in the effect of group size on the pivot probability: the larger the group, the less likely any one player is to be pivotal, and so the lower the benefit of being a cooperator.</p>
<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">n</span> <span class="o">=</span> <span class="mi">7</span>
<span class="n">c</span> <span class="o">=</span> <span class="mf">0.33</span>
<span class="n">k</span> <span class="o">=</span> <span class="mi">4</span>
<span class="c1"># plot the previous
</span><span class="n">pi_kV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">pi_kV</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">,</span> <span class="n">ls</span><span class="o">=</span><span class="s">'dashed'</span><span class="p">)</span>
<span class="c1"># plot for a range
</span><span class="n">nV</span> <span class="o">=</span> <span class="p">[</span><span class="mi">4</span><span class="p">,</span> <span class="mi">5</span><span class="p">,</span> <span class="mi">6</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">12</span><span class="p">]</span>
<span class="k">for</span> <span class="n">n</span> <span class="ow">in</span> <span class="n">nV</span><span class="p">:</span>
<span class="c1"># how pivot probability changes with p
</span> <span class="n">pi_kV</span> <span class="o">=</span> <span class="p">[</span> <span class="n">pi_k_fnc</span><span class="p">(</span><span class="n">p</span><span class="p">,</span> <span class="n">k</span><span class="p">,</span> <span class="n">n</span><span class="p">)</span> <span class="k">for</span> <span class="n">p</span> <span class="ow">in</span> <span class="n">pV</span> <span class="p">]</span>
<span class="n">plt</span><span class="p">.</span><span class="n">plot</span><span class="p">(</span><span class="n">pV</span><span class="p">,</span> <span class="n">pi_kV</span><span class="p">,</span> <span class="n">label</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">n</span><span class="p">))</span>
<span class="c1"># decorate plot
</span><span class="n">plt</span><span class="p">.</span><span class="n">title</span><span class="p">(</span><span class="sa">r</span><span class="s">'just the $\pi_k$ bit'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">axhline</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">color</span><span class="o">=</span><span class="s">'black'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">xlabel</span><span class="p">(</span><span class="s">'propn cooperators $p$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">ylabel</span><span class="p">(</span><span class="sa">r</span><span class="s">'$\pi_k(p)$'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">legend</span><span class="p">(</span><span class="n">loc</span><span class="o">=</span><span class="s">'upper center'</span><span class="p">,</span> <span class="n">title</span><span class="o">=</span><span class="s">'$n=$'</span><span class="p">,</span> <span class="n">ncol</span> <span class="o">=</span> <span class="mi">7</span><span class="p">,</span> <span class="n">fontsize</span><span class="o">=</span><span class="s">'xx-small'</span><span class="p">)</span>
<span class="n">plt</span><span class="p">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/08/pivot_vary_n.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/08/pivot_vary_n.png" alt="The larger the group size is, the less likely it is that a player is the pivotal player." />
</a>
<figcaption><span>The larger the group size is, the less likely it is that a player is the pivotal player.</span></figcaption>
</figure>
<h3>References</h3>
<p>De Jaegher, K. (2020). <a href="https://www.nature.com/articles/s41598-020-62626-3">High thresholds encouraging the evolution of cooperation in threshold public-good games</a>. <em>Scientific Reports</em>, <strong>10</strong>(1), 1-10.</p>Predator dilution effect synchronises fish migration2022-07-01T04:44:54+00:002022-07-01T04:44:54+00:00https://nadiah.org/2022/07/01/fish-synchrony<p>I recently came across <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13790">a paper by Kaj Hulthen and others in <em>Journal of Animal Ecology</em></a>
showing good empirical evidence for <a href="https://nadiah.org/wp-content/uploads/2019/03/Harts_et_al-2016-Oikos.pdf">a model</a> I coauthored a few years ago with Anna Harts and <a href="https://www.kokkonuts.org/">Hanna Kokko</a>.
Hulthen <em>et al.</em> (2022) were interested in the migration timing of roach (<em>Rutilus rutilus</em>).
Specifically, they were interested in how the combination of selection for early arrival plus high predation risk could explain the relatively high migration synchrony in spring compared to autumn.</p>
<p>Roach migrate from lake to stream in autumn,
and from stream to lake in spring.
In the lake in spring, zooplankton numbers peak,
and arriving too late may mean missing out on foraging and also mating opportunities.
However,
lake pike and piscivorous birds present a high predation risk.
Individuals can reduce their predation risk by arriving later than the others, when there are already many other roach in the lake. This provides ‘safety in numbers’, also known as the predator dilution effect.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/07/Rutilus_rutilus_Prague_Vltava_3.jpg">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/07/Rutilus_rutilus_Prague_Vltava_3.jpg" alt="A roach. By Karelj - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=14954932 https://commons.wikimedia.org/wiki/User:Karelj." />
</a>
<figcaption><span>A roach. By Karelj - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=14954932 https://commons.wikimedia.org/wiki/User:Karelj.</span></figcaption>
</figure>
<p>According to our model (Harts <em>et al.</em> 2016),
the net effect of selection for both early and late arrival (relative to conspecifics) is to favour synchronous arrival,
and this is also what Hulthen <em>et al.</em> found.</p>
<p>Hulthen <em>et al.</em> gathered highly detailed individual-based tracking data.
They surveyed two different lake-and-stream systems,
lake Krankesjön and lake Søgård,
over 7 and 9 years, tracking 4093 and 1909 individuals, respectively.
They used return migration as a proxy for survival,
and measured synchrony in arrival time using Cagnacci <em>et al.</em> (2011, 2016)’s circular variable \(\rho\).</p>
<p>The circular variable is a rather neat way of summarising how synchronous timings are.
The days of the year are evenly spaced along the perimeter of a circle with radius 1,
and each arrival is encoded as a vector from the origin to the day.
Synchrony is then measured as the length of the vector that results from taking the average of all the arrival vectors.
The length varies from \(\rho = 0\) to 1, where low values indicate low synchrony and high values indicate high synchrony.</p>
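<p>As an illustrative sketch (my own, not their code), the calculation takes only a few lines of Python: each arrival day is mapped to an angle, and \(\rho\) is the length of the mean of the unit vectors:</p>

```python
import math

def circular_rho(days, period=365):
    # map each day of the year to an angle on the unit circle
    angles = [2 * math.pi * d / period for d in days]
    # mean of the unit vectors pointing at each arrival day
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    # rho is the length of the mean vector: 0 = no synchrony, 1 = perfect
    return math.hypot(x, y)
```

<p>Identical arrival days give \(\rho = 1\), while days spread evenly around the year give \(\rho = 0\).</p>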
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/07/circular_variable.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/07/circular_variable.png" alt="An example calculating the synchrony between two dates for (a) highly synchronous (b) less synchronous dates. The length of the average vector (blue) is the circular variable \( \rho \).
The more synchronous the dates are, the longer the average vector is." />
</a>
<figcaption><span>An example calculating the synchrony between two dates for (a) highly synchronous (b) less synchronous dates. The length of the average vector (blue) is the circular variable \( \rho \).
The more synchronous the dates are, the longer the average vector is.</span></figcaption>
</figure>
<p>Hulthen <em>et al.</em> (2022) found that migration during spring, from streams to lakes, was more synchronous than migration during autumn.
They also found that there was a survival cost associated with early migration in the spring but not the autumn,
consistent with a predator dilution effect.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/07/Hulthen22_Fig2.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/07/Hulthen22_Fig2.png" alt="Comparing the relative migration timing of individuals who survived and did not survive to the next year. Non-survivors migrated earlier in the spring, but not in the autumn, consistent with a predator dilution effect that selects against arriving early relative to the bulk of the population in spring. Adapted from their Fig. 2." />
</a>
<figcaption><span>Comparing the relative migration timing of individuals who survived and did not survive to the next year. Non-survivors migrated earlier in the spring, but not in the autumn, consistent with a predator dilution effect that selects against arriving early relative to the bulk of the population in spring. Adapted from their Fig. 2.</span></figcaption>
</figure>
<p>I was curious to see the synchrony result visually,
so I downloaded their data and plotted the synchrony in each lake in each year for spring and autumn
(Python script <a href="https://github.com/nadiahpk/hulthen-2022-playground">here</a>).
The effect is strong enough that it can be seen just by looking at this plot.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/07/my_plot.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/07/my_plot.png" alt="The synchrony (\(\rho\)) for each year for each lake in spring (red) versus autumn (blue). Synchrony is higher in spring than autumn." />
</a>
<figcaption><span>The synchrony (\(\rho\)) for each year for each lake in spring (red) versus autumn (blue). Synchrony is higher in spring than autumn.</span></figcaption>
</figure>
<p>It’s really interesting to me to see these results for fish.
As Hulthen <em>et al.</em> note, a lot of the work on migratory timing is for birds,
and certainly I had bird examples in mind when working on the model simply because that is my background.
But evidently the concepts of early arrival and predator dilution are general and apply to many taxonomic groups.</p>
<p>Another recent <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13308">study by Pärssinen <em>et al.</em> (2020)</a>
also found evidence of the predator dilution effect in fish.
Hybrids of roach and bream had intermediate migration time that left them vulnerable to predation by cormorants.
The special thing about predator dilution is that, all else being equal, the particular timing that evolves is evolutionarily stable but not convergence stable. Essentially, this means that the timing that evolves is arbitrary,
contingent on initial conditions or chance clustering.
This implies there could be a great many timings that initiate or maintain divergence, provided they are far enough apart and each is maintained by a large enough population.</p>
<h3>References</h3>
<p>Cagnacci, F., Focardi, S., Ghisla, A., van Moorter, B., Merrill, E. H., Gurarie, E., Heurich, M., Mysterud, A., Linnell, J., Panzacchi, M., May, R., Nygard, T., Rolandsen, C., & Hebblewhite, M. (2016). How many routes lead to migration? Comparison of methods to assess and characterize migratory movements. Journal of Animal Ecology, 85, 54–68.</p>
<p>Cagnacci, F., Focardi, S., Heurich, M., Stache, A., Hewison, A. J. M., Morellet, N., Kjellander, P., Linnell, J. D. C., Mysterud, A., Neteler, M., Delucchi, L., Ossi, F., & Urbano, F. (2011). Partial migration in roe deer: Migratory and resident tactics are end points of a behavioural gradient determined by ecological factors. Oikos, 120, 1790–1802.</p>
<p>Harts, A. M. F., Kristensen, N. P., & Kokko, H. (2016). <a href="https://nadiah.org/wp-content/uploads/2019/03/Harts_et_al-2016-Oikos.pdf">Predation can select for later and more synchronous arrival times in migrating species</a>. Oikos, 125, 1528–1538.</p>
<p>Hulthén, K., Chapman, B. B., Nilsson, P. A., Hansson, L. A., Skov, C., Brodersen, J., & Brönmark, C. (2022). <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13790">Timing and synchrony of migration in a freshwater fish: Consequences for survival</a>. Journal of Animal Ecology, In Press</p>
<p>Pärssinen, V., Hulthén, K., Brönmark, C., Skov, C., Brodersen, J., Baktoft, H., Chapman, B. B., Hansson, L-A. & Nilsson, P. A. (2020). <a href="https://besjournals.onlinelibrary.wiley.com/doi/full/10.1111/1365-2656.13308">Maladaptive migration behaviour in hybrids links to predator‐mediated ecological selection</a>. Journal of Animal Ecology, 89(11), 2596–2604.</p>Human cooperation and social network expansion over time2022-04-01T04:44:54+00:002022-04-01T04:44:54+00:00https://nadiah.org/2022/04/01/declining-homophily<p>It is generally held that human cooperation first evolved in our ancestral past,
when we tended to live in small groups composed mostly of family members,
and when cooperation could therefore be selected for by kin selection.
This narrative is also sometimes invoked to explain the origin of cooperative mechanisms that are primarily about
cooperation between nonkin.
For example,
in the iterated Prisoner’s Dilemma,
cooperation between nonkin can be maintained by the tit-for-tat strategy,
and a population of tit-for-tat players can resist invasion by defectors.
However, tit-for-tat strategists cannot invade a population of defectors (presumably the primordial strategy),
and so the question remains how it got started in the first place.
Axelrod & Hamilton (1981) proposed that one way could have been if
dispersal was low enough in the past that cooperative types tended to cluster together.</p>
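<p>A toy simulation (my own sketch, not Axelrod & Hamilton's tournament code, assuming the standard Prisoner's Dilemma payoffs T=5, R=3, P=1, S=0) makes both claims concrete:</p>

```python
# row player's payoff for each (own move, opponent move) pair
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat1, strat2, rounds=100):
    # total payoff to each player over the iterated game
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return s1, s2

def tft(opp_history):
    # tit-for-tat: cooperate first, then copy the opponent's last move
    return 'C' if not opp_history else opp_history[-1]

def alld(opp_history):
    # always defect
    return 'D'
```

<p>Over 100 rounds, a lone defector among tit-for-tat players earns 104 while resident tit-for-tat players earn 300 against each other, so defectors cannot invade; conversely, a lone tit-for-tat player among defectors earns 99 against the residents' 100, so tit-for-tat cannot invade either.</p>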
<p>Over the course of the human lineage,
there is a general pattern of expanding social networks and declining relatedness between interacting individuals.
DNA analysis back to 45 ka shows a general decline in background relatedness over time,
with a marked change at the Neolithic Demographic Transition (Ringbauer et al., 2021),
when the advent of farming in each region coincided with a sudden increase in population size (Bocquet-Appel, 2011).
In industrialised societies,
falling mortality and fertility has also reduced the size of kin networks (David-Barrett, 2019),
motivating new bases for social identity (David-Barrett, 2020).
In the popular imagination,
the lifeways of hunter-gatherer people represent our closest analogue to what the deep ancestral past must have been like;
however, modern hunter-gatherers also maintain expansive social networks with hundreds of unrelated individuals
(Hill et al., 2011; Bird et al., 2019),
and groups congregate seasonally for communal hunting and socialising (Kelly, 2013; Balme, 2018).</p>
<p>Not a lot is known about social structure before 45 ka (Graeber and Wengrow, 2018),
and so inferences must be made on the basis of fossils and other material evidence.
One potential indicator of social structure is the distance that materials were transported from their source.
Before around 1.6 Ma, raw-material transport distances are comparable to chimpanzee home-range sizes (∼ 13 km),
indicating relatively isolated social groups composed mostly of kin (Marwick, 2003).
Distances and occurrences of material transport subsequently increased over the course of the Early and Middle Stone Age.
For example,
approximately 295-320 ka, obsidian and ochre were transported 25-50 km (as the crow flies) (Brooks et al., 2018).
After ~130 ka, raw-material transport distances frequently exceeded 300 km (Marwick, 2003).
These distances may be indicative of networks of exchange,
which implies increased language abilities (Marwick, 2003) and notions of relatedness beyond genetic kin (Moutsiou, 2012).
However, we must also be cautious when trying to infer what these exchanges meant,
because humans are quirky and sometimes transport materials over long distances for unexpected reasons (Graeber and Wengrow, 2018).</p>
<p>The types of materials that are found can also be indicative of something social.
Tool sophistication and symbolic development may be indicative of cognitive and social development,
and their stylistic diversity indicative of cultures and the flow of information between them.
In the early period (~1.5-0.4 Ma),
evidence of material innovation is scarce;
but nonetheless,
encephalisation increased over this period,
which some authors attribute to the demands of increasing social complexity (Gamble et al., 2011).
The transport of ochre mentioned above, from 295-320 ka,
is notable because ochre is used by modern people as a pigment, either for artwork or body ornamentation,
and as a potential indicator of one’s group identity (Brooks et al., 2018).</p>
<p>The appearance of beads (> 142 ka; Sehasseh et al., 2021) may be important because they can be used
to communicate social identity (e.g., group membership and marital status) to strangers.
The use of beads greatly increased around the same time that population sizes increased (40-45 ka, Kuhn et al., 2001),
which further supports the idea that beads were used in this way.
Beads were also transported long distances;
for example, shell beads found in the Kimberley, Australia, from 30 ka,
were transported > 300 km from their source (Balme and Morse, 2006).
It is interesting to note that some modern hunter-gatherers use the exchange of beads
as the substrate for indoctrinating children into socially defined notions of kinship (Wiessner, 1998).
We also find long-distance transport of other materials potentially used to communicate identity,
e.g., ochre from 32 ka transported 125 km in central Australia (Smith et al., 1998).</p>
<p>An expanding social network could have provided the opportunity to experiment with different styles of
large-scale collective action (Graeber and Wengrow, 2021).
For example,
the use of nets is an indicator of communal hunting,
particularly of the integration of labour from women, children, and the elderly (Soffer et al., 2001).
Evidence of large-scale fishing operations occur from 27 ka in Australia (Balme, 1995).</p>
<p>Acquiring meat may have been one of our ancestors’ first collaborative activities,
and collaborative foraging in general has been linked to the early stages of human cooperation
(Tomasello et al. 2012).
Unfortunately, it is unclear exactly how animal products were first acquired by our ancestors:
were they working together to take down large prey in some way (Domínguez-Rodrigo et al., 2021),
or were they opportunistically finding and/or snatching scraps from animals taken down by other carnivores
(Pobiner, 2020)?
Perhaps scavenging itself was a collaborative activity if that meant confronting and scaring away
large, dangerous carnivores who were still at their meals (Bickerton & Szathmáry, 2011)?</p>
<p>In the figure below,
I’ve sketched out a rough timeline of some of the elements above.
This should help orient different narratives about how cooperation evolved.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/04/key_dates_image.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/04/key_dates_image.png" alt="Figure 1: A rough timeline of transport distances, social care, and hunting over the course of the human lineage. Sources: [1] Thompson et al. (2019); [2] Cunha (2016); [3] Domínguez-Rodrigo et al. (2005); [4] Domínguez-Rodrigo et al. (2021); [5] Bramble (2004); [6] Pobiner (2020); [7] Bickerton & Szathmáry (2011); [8] Wilkins et al. (2012); [9] Oakley et al. (1977); [10] Allington-Jones (2015); [11] Lombard (2016); [12] Balme (2018); [13] Marwick (2003); [14] Balme and Morse (2006); [15] Smith et al. (1998); [16] Brooks et al. (2018); [17] Balme et al. (2009); [18] Sehasseh et al. (2021); [19] Gamble et al. (2011); [20] Kuhn et al. (2001)." />
</a>
<figcaption><span>Figure 1: A rough timeline of transport distances, social care, and hunting over the course of the human lineage. Sources: [1] Thompson et al. (2019); [2] Cunha (2016); [3] Domínguez-Rodrigo et al. (2005); [4] Domínguez-Rodrigo et al. (2021); [5] Bramble (2004); [6] Pobiner (2020); [7] Bickerton & Szathmáry (2011); [8] Wilkins et al. (2012); [9] Oakley et al. (1977); [10] Allington-Jones (2015); [11] Lombard (2016); [12] Balme (2018); [13] Marwick (2003); [14] Balme and Morse (2006); [15] Smith et al. (1998); [16] Brooks et al. (2018); [17] Balme et al. (2009); [18] Sehasseh et al. (2021); [19] Gamble et al. (2011); [20] Kuhn et al. (2001).</span></figcaption>
</figure>
<h3>References</h3>
<p>Allington-Jones, L. (2015). The Clacton spear: The last one hundred years. Archaeological Journal, 172(2), 273-296.</p>
<p>Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211(4489), 1390-1396.</p>
<p>Balme, J., Davidson, I., McDonald, J., Stern, N., & Veth, P. (2009). Symbolic behaviour and the peopling of the southern arc route to Australia. Quaternary International, 202(1-2), 59-68.</p>
<p>Balme, J. (1995). 30,000 years of fishery in western New South Wales, Archaeology in Oceania 30(1): 1–21.</p>
<p>Balme, J. (2018). Communal hunting by aboriginal australians: archaeological and ethnographic evidence, in K. Carlson and L. Bemet (eds), Manipulating Prey: Development of Large-Scale Kill Events Around the Globe, University of Colorado Press, Boulder, Colorado, pp. 42–62.</p>
<p>Balme, J. and Morse, K. (2006). Shell beads and social behaviour in Pleistocene Australia, Antiquity 80(310): 799–811.</p>
<p>Bickerton, D., & Szathmáry, E. (2011). Confrontational scavenging as a possible source for language and cooperation. BMC Evolutionary Biology, 11(1), 1-7.</p>
<p>Bird, D. W., Bird, R. B., Codding, B. F. and Zeanah, D. W. (2019). Variability in the organization and size of hunter-gatherer groups: Foragers do not live in small-scale societies, Journal of human evolution 131: 96–108.</p>
<p>Bocquet-Appel, J.-P. (2011). When the world’s population took off: the springboard of the Neolithic Demographic Transition, Science 333(6042): 560–561.</p>
<p>Bramble, D. M., & Lieberman, D. E. (2004). Endurance running and the evolution of Homo. Nature, 432(7015), 345-352.</p>
<p>Brooks, A. S., Yellen, J. E., Potts, R., Behrensmeyer, A. K., Deino, A. L., Leslie, D. E., Ambrose, S. H., Ferguson, J. R., d’Errico, F., Zipkin, A. M., Whittaker, S., Post, J., Veatch, E. G., Foecke, K. and Clark, J. B. (2018). Long-distance stone transport and pigment use in the earliest Middle Stone Age, Science 360(6384): 90–94.</p>
<p>Cunha, E. (2016). Compassion between humans since when? What the fossils tell us. Etnográfica. Revista do Centro em Rede de Investigação em Antropologia, 20(3)), 653-657.</p>
<p>David-Barrett, T. (2019). Network effects of demographic transition, Scientific Reports 9(1): 1–10.</p>
<p>David-Barrett, T. (2020). Herding friends in similarity-based architecture of social networks, Scientific Reports 10(1): 1–6.</p>
<p>Domínguez-Rodrigo, M., Pickering, T. R., Semaw, S., & Rogers, M. J. (2005). Cutmarked bones from Pliocene archaeological sites at Gona, Afar, Ethiopia: implications for the function of the world’s oldest stone tools. Journal of Human Evolution, 48(2), 109-121.</p>
<p>Domínguez-Rodrigo, M., Baquedano, E., Organista, E., Cobo-Sánchez, L., Mabulla, A.,
Maskara, V., Gidna, A., Pizarro-Monzo, M., Aramendi, J. et al. (2021) Early Pleistocene
faunivorous hominins were not kleptoparasitic, and this impacted the evolution of human
anatomy and socio-ecology. Scientific Reports, 11(1), 1–13.</p>
<p>Gamble, C., Gowlett, J. and Dunbar, R. (2011). The social brain and the shape of the Palaeolithic, Cambridge Archaeological Journal 21(1): 115–136.</p>
<p>Graeber, D., & Wengrow, D. (2018). How to change the course of human history. Eurozine. Retrieved from https://www. eurozine. com/change-course-human-history.</p>
<p>Graeber, D. and Wengrow, D. (2021). The dawn of everything: A new history of humanity, Farrer, Straus and Giroux, New York.</p>
<p>Hill, K. R., Walker, R. S., Božičević, M., Eder, J., Headland, T., Hewlett, B., Hurtado, A. M., Marlowe, F., Wiessner, P. and Wood, B. (2011). Co-residence patterns in hunter-gatherer societies show unique human social structure, Science 331(6022): 1286–1289.</p>
<p>Kelly, R. L. (2013). The Lifeways of Hunter-Gatherers: The Foraging Spectrum, Cambridge University Press, Cambridge.</p>
<p>Kuhn, S. L., Stiner, M. C., Reese, D. S. and Güleç, E. (2001). Ornaments of the earliest Upper Paleolithic: New insights from the Levant, Proceedings of the National Academy of Sciences 98(13): 7641–7646.</p>
<p>Lombard, M. (2016). Mountaineering or ratcheting? Stone Age hunting weapons as proxy for the evolution of human technological, behavioral and cognitive flexibility. In The nature of culture (pp. 135-146). Springer, Dordrecht.</p>
<p>Marwick, B. (2003). Pleistocene exchange networks as evidence for the evolution of language, Cambridge Archaeological Journal 13(1): 67–81.</p>
<p>Moutsiou, T. (2012). Changing scales of obsidian movement and social networking, Unravelling the Palaeolithic: Ten years of research at the Centre for the Archaeology of Human Origins (CAHO, University of Southampton), British Archaeological Reports, pp. 85–95.</p>
<p>Oakley, K. P., Andrews, P., Keeley, L. H., & Clark, J. D. (1977). A reappraisal of the Clacton spearpoint. In Proceedings of the Prehistoric Society (Vol. 43, pp. 13-30). Cambridge University Press.</p>
<p>Pobiner, B. L. (2020). The zooarchaeology and paleoecology of early hominin scavenging. Evolutionary Anthropology: Issues, News, and Reviews, 29(2), 68-82.</p>
<p>Ringbauer, H., Novembre, J. and Steinrücken, M. (2021). Parental relatedness through time revealed by runs of homozygosity in ancient dna, Nature Communications 12(1): 1–11.</p>
<p>Soffer, O., Adovasio, J. M., & Hyland, D. C. (2001). Perishable technologies and invisible people: nets, baskets, and “Venus” wear ca. 26,000 BP. Enduring Records: the Environmental and Cultural Heritage, 233-45.</p>
<p>Thompson, J. C., Carvalho, S., Marean, C. W., & Alemseged, Z. (2019). Origins of the human predatory pattern: The transition to large-animal exploitation by early hominins. Current Anthropology, 60(1), 1-23.</p>
<p>Tomasello, M., Melis, A. P., Tennie, C., Wyman, E. & Herrmann, E. (2012) Two key steps in the evolution of human cooperation: The interdependence hypothesis. Current Anthropology, 53(6), 673–692.</p>
<p>Sehasseh, E. M., Fernandez, P., Kuhn, S., Stiner, M., Mentzer, S., Colarossi, D., Clark, A., Lanoe, F., Pailes, M., Hoffmann, D. et al. (2021). Early Middle Stone Age personal ornaments from Bizmoune Cave, Essaouira, Morocco, Science Advances 7(39): eabi8620.</p>
<p>Smith, M., Fankhauser, B. and Jercher, M. (1998). The changing provenance of red ochre at puritjarra rock shelter, central australia: Late pleistocene to present, 64: 275–292.</p>
<p>Wiessner, P. (1998). Indoctrinability and the evolution of socially defined kinship, in I. Eibl-Eibesfeldt and F. Salter (eds), Indoctrinability, ideology and warfare: evolutionary perspectives, Berghahn Books, Oxford, pp. 133–150.</p>
<p>Wilkins, J., Schoville, B. J., Brown, K. S., & Chazan, M. (2012). Evidence for early hafted hunting technology. Science, 338(6109), 942-946.</p>nadiahkristensenIt is generally held that human cooperation first evolved in our ancestral past, when we tended to live in small groups composed mostly of family members, and when cooperation could therefore be selected for by kin selection. This narrative is also sometimes invoked to explain the origin of cooperative mechanisms that are primarily about cooperation between nonkin. For example, in the iterated Prisoner’s Dilemma, cooperation between nonkin can be maintained by the tit-for-tat strategy, and a population of tit-for-tat players can resist invasion by defectors. However, tit-for-tat strategists cannot invade a population of defectors (presumably the primordial strategy), and so the question remains how it got started in the first place. Axelrod & Hamilton (1981) proposed that one way could have been if dispersal was low enough in the past that cooperative types tended to cluster together.Stochastic evolutionary dynamics of the Volunteers’ Dilemma2022-03-01T04:44:54+00:002022-03-01T04:44:54+00:00https://nadiah.org/2022/03/01/tutic-model<p>The purpose of this blog post is to use a recent paper by Tutić (2021) to teach myself about stochastic
evolutionary game theory.</p>
<p>Tutić’s (2021) model concerns the Volunteer’s Dilemma, which is a public goods game where the public
good is provided if at least one group member cooperates. In the replicator dynamics, cooperators
can always invade a population of defectors. However, as the group size increases, the proportion
of cooperators in the population at the evolutionary steady-state declines (see Tutić (2021) Fig. 6),
which increases the risk that cooperators will be lost from a finite population where stochastic forces are strong.</p>
<p>The game payoffs to defectors are 0 if no other group members are cooperators (no public good produced),
and \(\beta\) if there is at least one.
The payoff to cooperators is always \(\beta - \gamma\), where \(\gamma\) is the cost of providing the public good.
Tutić (2021) used the default values of \(\beta = 4\) and \(\gamma = 2\).</p>
<p>The contribution of the game to fitness is governed by selection strength \(w\),
so the fitness of cooperators and defectors is</p>
\[\begin{align}
\pi^w_c(i, h) &= 1 - w + w \pi_c(i, h), \\
\pi^w_d(i, h) &= 1 - w + w \pi_d(i, h),
\end{align}\]
<p>where \(\pi_x(i, h)\) are the expected payoffs in a population (size \(n\)) with \(i\) cooperators from games with \(h\) players.
They are,</p>
\[\begin{align}
\pi_c(i, h) &= \beta - \gamma, \\
\pi_d(i, h) &= \beta \left( 1 - \frac{ {n-i-1 \choose h-1} }{ {n-1 \choose h-1} } \right).
\end{align}\]
<p>The combinatorial term is the probability that all other members of the defector’s group are defectors;
it follows from the hypergeometric distribution of the number of cooperators in a randomly formed group.</p>
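<p>The payoffs translate directly into code (a sketch in Python, assuming the paper’s default parameter values; <code>n</code> is the population size, and Python’s <code>math.comb</code> returns 0 when the top index is smaller than the bottom, which handles the edge cases):</p>

```python
from math import comb

def payoff_c(i, h, n, beta=4.0, gamma=2.0):
    """Expected payoff to a cooperator: always beta - gamma."""
    return beta - gamma

def payoff_d(i, h, n, beta=4.0):
    """Expected payoff to a defector when the population of size n
    contains i cooperators and groups have h members.

    The combinatorial ratio is the hypergeometric probability that all
    h - 1 other group members are defectors (no public good produced)."""
    p_all_defect = comb(n - i - 1, h - 1) / comb(n - 1, h - 1)
    return beta * (1.0 - p_all_defect)
```

<p>With the defaults, a cooperator always receives 2, while a defector’s payoff rises from 0 (no cooperators in the population) towards 4 as cooperators become common.</p>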
<p>The dynamics are modelled as a Moran process,
and we’re interested in the transitions in the number of cooperators \(i\).
The probability of a transition from \(i\) cooperators to \(i+1\) is</p>
\[\begin{equation}
p_{i, i+1} = \left( \frac{n-i}{n} \right) \left( \frac{\pi^w_c(i,h) i}{\pi^w_c(i,h) i + \pi^w_d(i,h)(n-i)} \right),
\end{equation}\]
<p>where the first bracketed term is the probability that a defector dies,
and the second term is the probability that they are replaced by a cooperator,
which depends on the average fitness of cooperators above.
By similar reasoning,</p>
\[\begin{equation}
p_{i, i-1} = \left( \frac{i}{n} \right) \left( \frac{\pi^w_d(i,h) (n-i)}{\pi^w_c(i,h) i + \pi^w_d(i,h)(n-i)} \right),
\end{equation}\]
<p>and \(p_{i, i} = 1 - p_{i, i+1} - p_{i, i-1}\).</p>
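<p>Putting the pieces together, the full transition matrix over \(i = 0, \ldots, n\) can be assembled as follows (a minimal Python sketch; the state indexing and parameter defaults are my choices, not from the paper’s code):</p>

```python
import numpy as np
from math import comb

def transition_matrix(n, h, w, beta=4.0, gamma=2.0):
    """Moran-process transition matrix P[i, j] for the number of
    cooperators; states 0 and n are absorbing."""
    def fit_c(i):  # fitness of a cooperator under selection strength w
        return 1.0 - w + w * (beta - gamma)
    def fit_d(i):  # fitness of a defector
        p_all_d = comb(n - i - 1, h - 1) / comb(n - 1, h - 1)
        return 1.0 - w + w * beta * (1.0 - p_all_d)
    P = np.zeros((n + 1, n + 1))
    P[0, 0] = P[n, n] = 1.0
    for i in range(1, n):
        tot = fit_c(i) * i + fit_d(i) * (n - i)
        up = ((n - i) / n) * (fit_c(i) * i / tot)   # defector dies, cooperator reproduces
        dn = (i / n) * (fit_d(i) * (n - i) / tot)   # cooperator dies, defector reproduces
        P[i, i + 1], P[i, i - 1], P[i, i] = up, dn, 1.0 - up - dn
    return P
```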
<p>Tutić (2021) used the method of Nowak (2006) to find the cooperator fixation probabilities.
There seems to be a typo in the second equation on page 7 (the ‘\(\ldots\)’);
according to page 99 of Nowak (2006),
the fixation probability given initial \(i\) should read</p>
\[\begin{equation}
x_i = x_1 \left( 1 + \sum_{j=1}^{i-1} \prod_{k=1}^j \frac{\pi_d^w(k,h)}{\pi_c^w(k,h)} \right).
\end{equation}\]
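<p>This formula is straightforward to evaluate numerically from cumulative products of the fitness ratios. The sketch below is my own Python (with the paper’s default payoffs assumed); at neutrality (\(w = 0\)) it recovers the standard result \(x_i = i/n\), which is a useful sanity check:</p>

```python
from math import comb

def fixation_probs(n, h, w, beta=4.0, gamma=2.0):
    """Cooperator fixation probabilities x_1 .. x_{n-1} via the
    standard birth-death chain formula (Nowak 2006, p. 99)."""
    def ratio(k):  # pi_d^w(k, h) / pi_c^w(k, h)
        fc = 1.0 - w + w * (beta - gamma)
        p_all_d = comb(n - k - 1, h - 1) / comb(n - 1, h - 1)
        fd = 1.0 - w + w * beta * (1.0 - p_all_d)
        return fd / fc
    prods, acc = [], 1.0
    for k in range(1, n):       # prods[j-1] = prod_{k=1}^{j} ratio(k)
        acc *= ratio(k)
        prods.append(acc)
    x1 = 1.0 / (1.0 + sum(prods))
    return [x1 * (1.0 + sum(prods[:i - 1])) for i in range(1, n)]
```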
<p>To find cooperator fixation probabilities,
I decided instead to use the general Markov approach.
Warren Weckesser at Colgate University has made available some <a href="http://math.colgate.edu/~wweckesser/math312Spring05/handouts/MarkovChains.pdf">nice lecture notes</a>,
but note that my transition matrix \(P\) is the transpose of what Weckesser uses,
to match the convention used in some other references below.</p>
<p>Define the transition matrix \(P(i,j) = p_{i,j}\),
which is the probability that the population will transition from \(i\) to \(j\) cooperators.
Reorder the transition matrix into its canonical form,</p>
\[\begin{equation}
P =
\begin{bmatrix}
I & \mathbf{0} \\
R & Q \\
\end{bmatrix},
\end{equation}\]
<p>where, in our case, \(I\) is a \(2 \times 2\) matrix corresponding to the absorbing states \(i=0\) and \(i=n\).
The fundamental matrix</p>
\[\begin{equation}
N = (I - Q)^{-1}.
\end{equation}\]
<p>The \(i\)th row of \(NR\) gives the probabilities of ending up in each of the absorbing states (columns)
given that the process started in the \(i\)th transient state.
My Fig. 1 below matches Tutić’s (2021) Figs. 2 and 5.</p>
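<p>A minimal sketch of this calculation (my own Python, assuming the absorbing states sit at the first and last indices of \(P\)). Applied to the Moran transition matrix for this model, the second column of \(NR\) gives the cooperator fixation probabilities in Fig. 1:</p>

```python
import numpy as np

def absorption_probs(P):
    """Extract Q (transient-to-transient) and R (transient-to-absorbing)
    from P, i.e., reorder to canonical form implicitly, then return NR.
    Row i-1 of NR gives the probabilities of ending in the all-defect
    (column 0) and all-cooperate (column 1) states from i cooperators."""
    n = P.shape[0] - 1
    trans = list(range(1, n))
    Q = P[np.ix_(trans, trans)]
    R = P[np.ix_(trans, [0, n])]
    N = np.linalg.inv(np.eye(n - 1) - Q)   # fundamental matrix
    return N @ R
```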
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_1.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_1.png" alt="Figure 1: The fixation probabilities of cooperators obtained from \(NR\)." />
</a>
<figcaption><span>Figure 1: The fixation probabilities of cooperators obtained from \(NR\).</span></figcaption>
</figure>
<p>Tutić (2021) used simulations to explore the long-term dynamical behaviour,
reporting that, when \(h=5\),
most simulations went to fixation unless selection was strong, e.g., \(w=0.8\).</p>
<p>\(N(i, j)\) gives the expected number of times that the process is in the \(j\)th
transient state given that it started in the \(i\)th transient state.
Therefore, the sum of the \(i\)th row of \(N\)
gives the expected number of times that the process will be in some transient state
given that the process started in the \(i\)th transient state.</p>
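<p>In code this is a one-line extension of the fundamental-matrix calculation (again my own sketch, with the same state-ordering assumption as above):</p>

```python
import numpy as np

def expected_transient_time(P):
    """Expected number of steps before absorption, indexed by the
    initial number of cooperators i = 1 .. n-1: the row sums of the
    fundamental matrix N = (I - Q)^{-1}."""
    n = P.shape[0] - 1
    Q = P[1:n, 1:n]
    N = np.linalg.inv(np.eye(n - 1) - Q)
    return N.sum(axis=1)
```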
<p>Fig. 2 shows the expected transient times I found using \(N\)
when \(h=5\) and \(w=0.8\) for different initial values of \(i\).
For most initial values, the transient time is quite long,
as Tutić (2021) reports.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_2.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_2.png" alt="Figure 2: The transient times for different initial number of cooperators \(i\).
Parameters \(h=5\) and \(w=0.8\)." />
</a>
<figcaption><span>Figure 2: The transient times for different initial number of cooperators \(i\).
Parameters \(h=5\) and \(w=0.8\).</span></figcaption>
</figure>
<p>When an absorbing Markov chain spends a long time in transient states,
the quasi-stationary behaviour can be characterised using the methods reviewed in van Doorn and Pollett
(2013) and used in e.g., Day and Possingham’s (1995) paper about metapopulation persistence.
Following Day and Possingham (1995),
the left eigenvector corresponding to the maximal eigenvalue \(\mu_1\) of \(Q\) gives the quasi-stationary distribution.
\(\mu_1\) is always less than 1,
and the closer \(\mu_1\) is to 1, the longer the process continues before absorption.
Van Doorn and Pollett (2013)
define the spectral gap \(\gamma\) (not to be confused with the cost above) as the distance between the two largest eigenvalues;
if \(\gamma\) is substantially larger than the decay parameter \(\alpha = 1-\mu_1\),
then the dynamics will exhibit quasi-stationary behaviour,
i.e., relatively fast convergence to the limiting conditional distribution, and eventual evanescence after a much longer time.</p>
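<p>A sketch of how these quantities can be extracted with NumPy (my own code, not from either paper; for a birth&ndash;death chain the eigenvalues of \(Q\) are real, but I take real parts defensively):</p>

```python
import numpy as np

def quasi_stationary(P):
    """Quasi-stationary distribution over the transient states, the
    decay parameter alpha = 1 - mu_1, and the spectral gap between
    the two largest eigenvalues of Q."""
    n = P.shape[0] - 1
    Q = P[1:n, 1:n]
    evals, evecs = np.linalg.eig(Q.T)   # columns are left eigenvectors of Q
    order = np.argsort(-evals.real)
    mu1, mu2 = evals.real[order[0]], evals.real[order[1]]
    v = np.abs(evecs[:, order[0]].real)  # leading left eigenvector
    return v / v.sum(), 1.0 - mu1, mu1 - mu2
```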
<p>Fig. 3 shows the quasi-stationary distribution found from the leading left eigenvector of \(Q\).
The expected value matches the expected value Tutić (2021) observed in simulations
(compare with Fig. 6 in Tutić (2021)).
The system spends most of its time near the all-defector state,
which puts it at risk of eventually losing the cooperators from the population.</p>
<p>I found \(\mu_1 = 0.9999989\),
which is close to 1 and indicates the transient dynamics will continue a long time before fixation.
I found \(\gamma = 5.5 \times 10^{-3}\) and \(\alpha = 1 \times 10^{-6}\),
which indicates quasi-stationary behaviour.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_3.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_3.png" alt="Figure 3: The quasi-stationary distribution. Parameters \(h=5\) and \(w=0.8\)." />
</a>
<figcaption><span>Figure 3: The quasi-stationary distribution. Parameters \(h=5\) and \(w=0.8\).</span></figcaption>
</figure>
<p>The model assumes that, if there is more than one cooperator in the group,
then they will all pay the cost \(\gamma\) to provide the public good.
I wondered, what would happen if only one cooperator provided the good,
e.g., if they draw straws or one is randomly chosen to do the job first,
or if they split the costs between them?</p>
<p>I modified the cooperator payoff</p>
\[\begin{equation}
\pi_c(i, h) = \beta - \sum_{k=0}^{h-1} \frac{\gamma}{k+1} \cdot \frac{ {i-1 \choose k} {n-i \choose h-1-k} }{ {n-1 \choose h-1} },
\end{equation}\]
<p>where the sum is over \(k\), the number of other group members who are cooperators;
the first fraction is the focal cooperator’s share of the cost when \(\gamma\) is split among \(k+1\) cooperators;
and the second fraction is the probability of being grouped with \(k\) other cooperators
(hypergeometric distribution again).</p>
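<p>A sketch of the modified payoff in Python (my code; as before, <code>math.comb</code> returning 0 for impossible draws keeps the sum over \(k\) correct at the boundaries):</p>

```python
from math import comb

def payoff_c_split(i, h, n, beta=4.0, gamma=2.0):
    """Cooperator payoff when the k + 1 cooperators in a group split
    the provisioning cost gamma equally."""
    expected_cost = sum(
        (gamma / (k + 1))
        * comb(i - 1, k) * comb(n - i, h - 1 - k) / comb(n - 1, h - 1)
        for k in range(h)   # k = number of OTHER cooperators in the group
    )
    return beta - expected_cost
```

<p>When the focal individual is the only cooperator in the population (\(i=1\)), this reduces to \(\beta-\gamma\) as in the original model; when everyone cooperates (\(i=n\)), the cost shrinks to \(\gamma/h\).</p>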
<p>In the new model, group size still has a negative effect on fixation probabilities,
but the effect is less severe than in the original model,
and it is weaker still when the initial number of cooperators is high (Fig. 4).</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_4.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_4.png" alt="Figure 4: The fixation probabilities of cooperators for the new model where costs are split between cooperators." />
</a>
<figcaption><span>Figure 4: The fixation probabilities of cooperators for the new model where costs are split between cooperators.</span></figcaption>
</figure>
<p>A population with an initially modest number of cooperators is now more likely to go to the all-cooperator absorbing state
(Fig. 5).</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_5.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_5.png" alt="Figure 5: The fixation probabilities of cooperators for (a) the original model, and (b) the new model where costs are split between cooperators." />
</a>
<figcaption><span>Figure 5: The fixation probabilities of cooperators for (a) the original model, and (b) the new model where costs are split between cooperators.</span></figcaption>
</figure>
<p>However,
the population still spends most of its quasi-stationary time near the all-defect absorbing state (Fig. 6).
Therefore, while there is a better chance in the new model that the population will be absorbed to the all-cooperator state,
the overall story that the original Tutić (2021) model tells still holds:
cooperation becomes harder to maintain as group size increases.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/03/tutic_6.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/03/tutic_6.png" alt="Figure 6: Quasi-stationary behaviour in the new model where costs are split between cooperators, (a) transient times, (b) quasi-stationary distribution. \(\gamma = 6.8 \times 10^{-4}\) and \(\alpha = 3.45 \times 10^{-5}\). For parameters \(h=10\) and \(w=0.8\)." />
</a>
<figcaption><span>Figure 6: Quasi-stationary behaviour in the new model where costs are split between cooperators, (a) transient times, (b) quasi-stationary distribution. \(\gamma = 6.8 \times 10^{-4}\) and \(\alpha = 3.45 \times 10^{-5}\). For parameters \(h=10\) and \(w=0.8\).</span></figcaption>
</figure>
<p>The code I used for this blog post is available on Github: <a href="https://github.com/nadiahpk/tutic-2021-playground">tutic-2021-playground</a>.</p>
<h3>References</h3>
<p>Day, J. R. and Possingham, H. P. (1995). A stochastic metapopulation model with variability in patch size and position, Theoretical Population Biology 48(3): 333–360.</p>
<p>Nowak, M. A. (2006). Evolutionary dynamics: exploring the equations of life, Harvard University Press.</p>
<p>Tutić, A. (2021). <a href="https://www.tandfonline.com/doi/abs/10.1080/0022250X.2021.1988946">Stochastic evolutionary dynamics in the volunteer’s dilemma</a>, The Journal of Mathematical Sociology: 1–20.</p>
<p>van Doorn, E. A. and Pollett, P. K. (2013). Quasi-stationary distributions for discrete-state models, European Journal of Operational Research 230(1): 1–14.</p>
<h2><a href="https://nadiah.org/2021/02/01/antal-weak-selection">Using coalescence models to approximate interaction probabilities in weak selection</a> (2022-02-01)</h2>
<p>To calculate the conditions under which selection favours cooperation,
<a href="https://www.pnas.org/doi/pdf/10.1073/pnas.0902528106">Antal et al. (2009)</a>
used interaction probabilities calculated from a coalescence model. However, a coalescence model
is a neutral model: it assumes no selective differences between types. So why can they assume <em>no
selective difference</em> between types in a <em>selective model</em>? The purpose of this blog post is to explore the
explanation detailed in their SI.</p>
<p>First, a quick overview of the paper. The model involves the simultaneous evolution of two traits:
(1) the strategy in a one-shot Prisoner’s Dilemma, either Cooperate or Defect; and (2) a phenotypic
tag, modelled as an integer from \(-\infty\) to \(\infty\). Both the tag and strategy are inherited clonally with
a small chance of mutation, which means individuals who have the same tag are likely to have the
same strategy. The correlation between tag and strategy favours the evolution of cooperation, similar
to the Green Beard concept. If Cooperators only cooperate with other individuals who have the
same tag as themselves, that reduces their chances of being taken advantage of by Defectors. Before
this paper, most tag-based models had found that it was difficult to obtain cooperation in well-mixed populations, suggesting that some spatial structure was needed. In this paper, however, they
investigate a well-mixed population, and discover a simple benefits/costs condition for cooperation to
evolve: \(\frac{b}{c} > 1 + \frac{2}{\sqrt{3}}\).</p>
<p>They assume a Cooperator only cooperates with a matching-tag partner,
paying cost \(c\) to provide a benefit \(b\) to the partner;
otherwise, it behaves like a Defector.
The Defector contributes nothing;
it receives \(b\) from a partner who cooperates and \(0\) from a partner who defects.
Let \(m_i\) be the number of Cooperators and \(n_i\) be the total number of individuals
(Cooperators + Defectors) with tag \(i\).
The average payoff to a Cooperator is the total payoff divided by the number of Cooperators</p>
\[f_C = \frac{\sum_i m_i (b m_i - c n_i)}{\sum_i m_i},\]
<p>and similarly for Defectors</p>
\[f_D = \frac{\sum_i (n_i - m_i) b m_i }{N - \sum_i m_i}.\]
<p>Selection favours cooperation if Cooperators have higher fitness than Defectors,
\(f_C > f_D\), which is</p>
\[b \sum_i m_i^2 - c \sum_i m_i n_i > \frac{(b-c)}{N} \sum_{ij} m_i m_j n_j.
\label{single_s_condition}
\tag{1}\]
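<p>Condition (1) is just an algebraic rearrangement of \(f_C &gt; f_D\), which can be checked numerically on random configurations (a Python sketch I wrote, not from the paper; note the double sum factorises as \(\left(\sum_i m_i\right)\left(\sum_j m_j n_j\right)\)):</p>

```python
import random

def payoffs(m, n_vec, b, c):
    """Average payoffs (f_C, f_D) for a configuration with m[i]
    Cooperators among n_vec[i] individuals carrying tag i."""
    N, M = sum(n_vec), sum(m)
    f_C = sum(mi * (b * mi - c * ni) for mi, ni in zip(m, n_vec)) / M
    f_D = sum((ni - mi) * b * mi for mi, ni in zip(m, n_vec)) / (N - M)
    return f_C, f_D

def condition_margin(m, n_vec, b, c):
    """LHS minus RHS of Eq. (1); positive iff f_C > f_D."""
    N = sum(n_vec)
    lhs = b * sum(mi * mi for mi in m) - c * sum(mi * ni for mi, ni in zip(m, n_vec))
    rhs = (b - c) / N * sum(m) * sum(mj * nj for mj, nj in zip(m, n_vec))
    return lhs - rhs
```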
<p>Eq. \ref{single_s_condition} is the equation for a single configuration of the
population \(s = (\mathbf{m}, \mathbf{n})\).
In order to find the condition for evolution of cooperation overall,
we must average this condition over every possible population configuration \(s\).
Let \(\pi(s)\) be the probability that the population finds itself in configuration \(s\).
We’ll indicate the averaging by angle brackets, i.e.,</p>
\[\langle \bullet \rangle = \sum_s \bullet \: \pi(s)\]
<p>Then the condition for the evolution of cooperation is</p>
\[b \Bigg \langle \sum_i m_i^2 \Bigg \rangle - c \Bigg \langle \sum_i m_i n_i \Bigg \rangle > \frac{(b-c)}{N} \Bigg \langle \sum_{ij} m_i m_j n_j \Bigg \rangle.
\tag{Ant.1}
\label{Ant.1}\]
<p>The terms in the angle brackets look like they’d be proportional to probabilities:
the probability of drawing two cooperators with the same tag,
the probability of drawing a cooperator and any other with the same tag,
and a more complicated probability involving three types.</p>
<p>So far, so good.</p>
<p>However,
they say that they are “Averaging these quantities over every possible configuration of the population,
<em>weighted by their stationary probability under neutrality</em>” (emphasis added).
So, if we let \(\pi^{(0)}(s)\) denote the stationary probability of configuration \(s\) under the neutral model,
and indicate this average by a subscript 0, i.e.,</p>
\[\langle \bullet \rangle_0 = \sum_s \bullet \: \pi^{(0)}(s)\]
<p>then, in fact, they are evaluating</p>
\[b \Bigg \langle \sum_i m_i^2 \Bigg \rangle_0 - c \Bigg \langle \sum_i m_i n_i \Bigg \rangle_0 > \frac{(b-c)}{N} \Bigg \langle \sum_{ij} m_i m_j n_j \Bigg \rangle_0.
\label{avg_at_ss}
\tag{2}\]
<p>Why is it possible to replace the full averaging in Eq. \ref{Ant.1} with the averaging at the neutral steady state
in Eq. \ref{avg_at_ss}?
The answer is given in their SI.</p>
<p>In the SI in Section 3,
for a given population configuration \(s\),
they derive the fitness of the Cooperator with tag \(i\).
Start with the effective payoffs:</p>
\[f_{C,i} = 1 + \delta (b m_i - c n_i)\]
\[f_{D,i} = 1 + \delta b m_i\]
<p>Assume that every individual interacts with every other individual once per generation.
Then, the fitness of the Cooperator</p>
\[\begin{align}
w_{C,i}
&= \frac{\text{payoff to Cooperator with tag }i}{\text{population-average payoff}} \\
&= \frac{N f_{C,i}}{\sum_j \left[ m_j f_{C,j} + (n_j - m_j) f_{D,j} \right]} \\
&= \frac{1 + \delta(b m_i - c n_i)}{1 + \frac{\delta(b-c)}{N} \sum_j m_j n_j}
\end{align}\]
<p>For brevity, define \(A = b m_i - c n_i\) and \(B = \frac{(b-c)}{N} \sum_j m_j n_j\).
Do a Taylor expansion around \(a = 0\)</p>
\[\begin{align}
w_{C,i}(\delta) &= w_{C,i}(a) + w_{C,i}'(a)(\delta-a) + \frac{w_{C,i}''(a)}{2} (\delta-a)^2 + \ldots \nonumber \\
&= \frac{1+a A}{1+a B} + \left( \frac{A}{Ba + 1} - \frac{B(Aa + 1)}{(B a + 1)^2} \right) (\delta - a) + \mathcal{O}(\delta^2) \nonumber \\
&= 1 + \delta(A-B) + \mathcal{O}(\delta^2) \nonumber \\
&= 1 + \delta \left( bm_i -cn_i - \frac{b-c}{N} \sum_j m_j n_j \right) + \mathcal{O}(\delta^2)
\tag{Ant.SI.30}
\label{Ant.SI30}
\end{align}\]
<p>We can verify that, when selection \(\delta = 0\), the Cooperator’s fitness \(w_{C,i} = 1\), as we’d hope.</p>
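<p>The expansion can also be sanity-checked numerically: the error of the first-order approximation should shrink quadratically in \(\delta\), so shrinking \(\delta\) by a factor of 10 should shrink the error by roughly a factor of 100 (a small Python check I wrote; \(A\) and \(B\) are the shorthands defined above):</p>

```python
def w_exact(delta, A, B):
    """Exact fitness w_{C,i} = (1 + delta*A) / (1 + delta*B)."""
    return (1.0 + delta * A) / (1.0 + delta * B)

def w_linear(delta, A, B):
    """Weak-selection approximation 1 + delta*(A - B) from Eq. (Ant.SI.30)."""
    return 1.0 + delta * (A - B)
```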
<p>For selection to favour Cooperators,
the average change in the proportion of Cooperators due to selection must be greater than zero, i.e.,</p>
\[\langle \Delta p(s) \rangle > 0.
\label{invasion_condition}
\tag{3}\]
<p>For a given population configuration</p>
\[\begin{align}
\Delta p(s)
&= \frac{1}{N} \left( \text{number of Cooperator offspring born} - \text{number of Cooperator adults died} \right) \nonumber \\
&= \frac{1}{N} \left( \sum_i m_i w_{C,i} - \sum_i m_i \right)
\end{align}\]
<p>We can again verify that, when selection \(\delta = 0\), the change due to selection equals 0.
\(\Delta p(s)\) has the Taylor expansion</p>
\[\Delta p(s) = 0 + \frac{\delta}{N} \sum_i m_i \left. \frac{d w_{C,i}}{d\delta} \right|_{\delta=0} + \mathcal{O}(\delta^2)
\tag{Ant.SI.35}
\label{AntSI.35}\]
<p>Recall we want to average over all possible configurations, that is, to find</p>
\[\bigl\langle \Delta p(s) \bigr\rangle = \sum_s \pi(s) \Delta p(s)
\label{how_to_average}
\tag{4}\]
<p>The stationary probabilities can also be Taylor expanded</p>
\[\pi(s) = \pi^{(0)}(s) + \delta \pi^{(1)}(s) + \mathcal{O}(\delta^2)
\tag{Ant.SI.36}
\label{AntSI.36}\]
<p>\(\pi^{(0)}(s)\) is the stationary probability when \(\delta=0\), which is the stationary probability at neutrality.
And \(\pi^{(1)}(s)\) is some very complicated expression that we don’t know about.</p>
<p>But the good news is that, if we substitute Eq. \ref{AntSI.35} and \ref{AntSI.36}
into Eq. \ref{how_to_average},
then the complicated \(\pi^{(1)}(s)\) and higher terms end up multiplied by \(\delta^2\) and higher terms,
and so when \(\delta\) is very small (weak selection), those terms can be dropped:</p>
\[\begin{align}
\bigl\langle \Delta p(s) \bigr\rangle
&= \sum_s \pi(s) \Delta p(s) \\
& = \sum_s
\left[ \pi^{(0)}(s) + \delta \pi^{(1)}(s) + \mathcal{O}(\delta^2) \right]
\cdot
\left[ \frac{\delta}{N} \sum_i m_i \left. \frac{d w_{C_i}}{d\delta} \right|_{\delta=0} + \mathcal{O}(\delta^2) \right]
\\
& = \sum_s
\pi^{(0)}(s) \frac{\delta}{N} \sum_i m_i \left. \frac{d w_{C_i}}{d\delta} \right|_{\delta=0}
+ \mathcal{O}(\delta^2) \\
& \approx \sum_s \pi^{(0)}(s) \Delta p(s) \label{neut} \tag{5}
\end{align}\]
<p>Eq. \ref{neut} indicates an average taken over configurations at the neutral steady state, i.e.,</p>
\[\bigl\langle \Delta p(s) \bigr\rangle
\approx \bigl\langle \Delta p(s) \bigr \rangle_0
= \frac{\delta}{N} \biggl\langle \sum_i m_i \left. \frac{d w_{C_i}}{d\delta} \right|_{\delta=0} \biggr \rangle_0\]
<p>Let’s evaluate that term inside the angle brackets</p>
\[\begin{align}
w_{C,i} &= 1 + \delta \left( b m_i - c n_i - \frac{b-c}{N} \sum_j m_j n_j \right) + \mathcal{O}(\delta^2) \nonumber \\
\frac{d w_{C,i}}{d\delta} &= b m_i - c n_i - \frac{b-c}{N} \sum_j m_j n_j + \mathcal{O}(\delta) \nonumber \\
\left. \frac{d w_{C,i}}{d\delta} \right|_{\delta=0} &= b m_i - c n_i - \frac{b-c}{N} \sum_j m_j n_j \nonumber \\
\sum_i m_i \left. \frac{d w_{C_i}}{d\delta} \right|_{\delta=0}
&= b \sum_i m_i^2 - c \sum_i m_i n_i - \frac{b-c}{N} \sum_{ij} m_i m_j n_j
\label{eq99}
\tag{6}
\end{align}\]
<p>Remember that, for selection to favour cooperators, we want our invasion condition \(\langle \Delta p(s) \rangle > 0\)
(Eq. \ref{invasion_condition}), i.e.,</p>
\[\biggl\langle \sum_i m_i \left. \frac{d w_{C_i}}{d\delta} \right|_{\delta=0} \biggr \rangle_0 > 0
\label{eq98}
\tag{7}\]
<p>Substitute Eq. \ref{eq99} into the invasion condition Eq. \ref{eq98} and do some re-arranging</p>
\[b \Bigg \langle \sum_i m_i^2 \Bigg \rangle_0 - c \Bigg \langle \sum_i m_i n_i \Bigg \rangle_0 > \frac{(b-c)}{N} \Bigg \langle \sum_{ij} m_i m_j n_j \Bigg \rangle_0\]
<p>which is the Eq. \ref{avg_at_ss} that they use.</p>
<p>In conclusion, the reason why they can replace the full population-configuration distribution with the distribution
at the neutral steady state is because of weak selection and the structure of the game.
Under weak selection, the effect of the game on fitness is scaled by \(\delta \ll 1\).
The change in the proportion of Cooperators due to selection has no zeroth-order term (\(\delta^0\)); its leading term is linear in \(\delta\).
Therefore, when the averaging is done,
the non-neutral terms in the configuration distribution (\(\delta^1\) and higher) are multiplied by \(\delta\),
which gives them order \(\delta^2\) and higher, and so they can be dropped.</p>
<h3>References</h3>
<p>Antal, T., Ohtsuki, H., Wakeley, J., Taylor, P. D. and Nowak, M. A. (2009). Evolution of cooperation by phenotypic similarity, Proceedings of the National Academy of Sciences 106(21): 8597–8600.</p>
<h2><a href="https://nadiah.org/2021/12/12/sundaic-birds">Island bird diversity not influenced by historical connection to the mainland</a> (2022-01-01)</h2>
<p>The number of species on an island generally increases with island size and connectance to the mainland.
However, if an island’s characteristics have changed in the recent past, that pattern will be disrupted.
In <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/jbi.14293">Keita Sin’s recent paper in Journal of Biogeography</a>,
we tested if that expectation was borne out for birds on the Sundaic islands.
Perhaps surprisingly, we found that it was not.</p>
<p>The number of species on an island is generally determined by immigration and extinction (MacArthur & Wilson, 1967).
Over long timescales,
these two forces reach a stochastic equilibrium, producing the well-known species-richness patterns.
Larger islands permit larger populations that are less likely to go extinct,
and proximity to the mainland and connectance to it (e.g., via stepping-stone islands) increase the chance of colonisation by new species.
As a consequence, larger islands and islands with greater connectance to the mainland have higher species richness.</p>
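<p>The equilibrium logic can be illustrated with a toy deterministic version of the MacArthur&ndash;Wilson model (entirely illustrative Python; the rate functions and parameter values are my inventions, not from Sin et al.):</p>

```python
def richness(pool, imm, ext, t_end=100.0, dt=0.01):
    """Integrate dS/dt = imm*(1 - S/pool) - ext*S by Euler steps:
    immigration of new species slows and extinction accelerates as
    richness S grows, so S relaxes to imm*pool / (imm + ext*pool)."""
    S, steps = 0.0, int(t_end / dt)
    for _ in range(steps):
        S += dt * (imm * (1.0 - S / pool) - ext * S)
    return S
```

<p>Raising the immigration rate (e.g., greater connectance) or lowering the per-species extinction rate (e.g., a larger island) raises the equilibrium richness, reproducing the classic pattern.</p>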
<p>However, these general species-richness relationships assume static conditions.
If an island has recently decreased in size or connectance to the mainland,
then we would expect it to have more species than similar islands that have not changed
(see <a href="https://nadiah.org/2020/02/22/transient-dynamics-in-neutral-models/">previous blog post</a>).
This is because it takes time for species diversity to decline to reach its new, lower stochastic-equilibrium value.
The mechanism is similar to extinction debt, where there is a time delay between habitat fragmentation and species extinction.</p>
<p>Over the past 20,000 years, sea-level changes have repeatedly altered the size and connectance of Sundaic islands.
During the Last Glacial Maximum, the sea level was about 120 m below its present level, and the exposed Sunda shelf formed a large continuous landmass with Asia.
More recently, around 7000 years ago, sea level peaked 3&ndash;5 m above the current level, and low-lying islands were completely submerged.</p>
<p>The particular ways in which each island changed depend on the specific topography of the shelf around that island.
As a consequence, two islands of similar size and connectance today can have very different histories of size and connection to the mainland,
providing a natural laboratory for macroecological theory.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/01/sunda_map.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/01/sunda_map.png" alt="Figure 1: A map of the study islands. Note the Sunda shelf in light blue. At the Last Glacial Maximum (20,000 years ago), the Sunda shelf was above sea level." />
</a>
<figcaption><span>Figure 1: A map of the study islands. Note the Sunda shelf in light blue. At the Last Glacial Maximum (20,000 years ago), the Sunda shelf was above sea level.</span></figcaption>
</figure>
<p>Previous work has found that historical changes to island characteristics can influence species richness today.
For example, in Baja California, the diversity of lizards was higher than expected on islands that were more recently connected to the mainland (Wilcox, 1978).
Similar results have also been found for certain birds on satellite islands of New Guinea (Diamond, 1972).
However, some studies have found little effect (e.g., Sundaic mammals, Heaney (1984)).</p>
<p>We found that an island’s historical characteristics had little effect on species richness.
Two islands of similar size had similar diversity regardless of how long they had been disconnected from the mainland, or whether they had been recently submerged or not.
This implies that, for Sundaic birds, the stochastic immigration-extinction equilibrium is reached relatively rapidly.
Extinctions occur soon after an island becomes disconnected from the mainland,
and highly dispersive species rapidly colonise islands that had been recently submerged.</p>
<figure style="max-width: 700px; margin: auto; padding-bottom: 20px;">
<a href="/wp-content/uploads/2022/01/keita_fig_3.png">
<img style="max-height: auto; max-width: 100%;" src="/wp-content/uploads/2022/01/keita_fig_3.png" alt="Figure 2: The relationship between island species richness and (a) island area, (b) isolation in a 10 km buffer, and (c) distance to the mainland. The best model (by AIC) to explain species richness on islands included these 3 variables. It did not include change in the island's area, time of isolation from the mainland, whether it had been submerged, or whether it was a deep-sea versus shelf island." />
</a>
<figcaption><span>Figure 2: The relationship between island species richness and (a) island area, (b) isolation in a 10 km buffer, and (c) distance to the mainland. The best model (by AIC) to explain species richness on islands included these 3 variables. It did not include change in the island's area, time of isolation from the mainland, whether it had been submerged, or whether it was a deep-sea versus shelf island.</span></figcaption>
</figure>
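To illustrate the model-selection logic behind Figure 2, here is a minimal sketch of comparing candidate regression models by Gaussian AIC, using ordinary least squares. All variable names, coefficients, and data below are invented for illustration; this is not the study's data or code.

```python
import numpy as np

def ols_aic(X, y):
    """Fit OLS by least squares and return the Gaussian AIC
    (up to an additive constant shared across models)."""
    X = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)          # residual sum of squares
    n, k = len(y), X.shape[1] + 1              # +1 for the error variance
    return n * np.log(rss / n) + 2 * k

# Synthetic data loosely mimicking the setup (hypothetical values):
# log-area and distance drive log-richness; a "history" covariate does not.
rng = np.random.default_rng(0)
n = 60
log_area = rng.uniform(-2, 4, n)     # log island area
distance = rng.uniform(0, 50, n)     # km to mainland
history = rng.integers(0, 2, n)      # e.g. recently submerged (1) or not (0)
log_richness = 1.5 + 0.25 * log_area - 0.01 * distance + rng.normal(0, 0.2, n)

# Compare a geography-only model with one that adds the history covariate;
# AIC penalises the extra parameter unless it buys enough fit.
aic_geo = ols_aic(np.column_stack([log_area, distance]), log_richness)
aic_hist = ols_aic(np.column_stack([log_area, distance, history]), log_richness)
print(aic_geo, aic_hist)
```

In the paper's analysis, the model preferred by AIC was the one containing only the present-day geographic variables, which is exactly the kind of outcome this comparison would show when the historical covariate adds no explanatory power.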
<p>The one effect of island history was on <em>island endemism</em>: most island endemics were found on deep-sea islands with no historic land connection to the mainland.
This was true even though some shelf islands are more isolated today than the deep-sea islands in the dataset.
This implies that,
although birds are a relatively dispersive taxonomic group,
long-distance overwater colonisation is generally unsuccessful among the less dispersive species.</p>
<p>A special note needs to be made about the uniqueness of the dataset collected by Keita and the team.
Although ornithological knowledge of the region is fairly comprehensive,
the overwhelming majority of surveys have concentrated on large islands,
creating a serious bias in the literature.
In contrast, the surveys conducted for this study focused on small islands (less than 10 sq-km),
providing valuable data to test theory across the range of mechanisms that determine island species diversity.</p>
<p><a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/jbi.14293">Sin, Y. C. K., N. P. Kristensen, C. Y. Gwee, R. A. Chisholm, and F. E. Rheindt. 2022. Bird diversity on shelf islands does not benefit from recent land-bridge connections. Journal of Biogeography, 49(1):189–200</a>.</p>
<h3>References</h3>
<p>Diamond, J. M. (1972). Biogeographic kinetics: Estimation of relaxation times for avifaunas of southwest Pacific islands. Proceedings of the National Academy of Sciences of the United States of America, 69(11), 3199–3203.</p>
<p>Heaney, L. R. (1984). Mammalian species richness on islands on the Sunda Shelf, Southeast Asia. Oecologia, 61(1), 11–17.</p>
<p>MacArthur, R. H., &amp; Wilson, E. O. (1967). The theory of island biogeography. Princeton University Press.</p>
<p>Wilcox, B. A. (1978). Supersaturated island faunas: A species-age relationship for lizards on post-Pleistocene land-bridge islands. Science, 199(4332), 996–998.</p>