Assessing the Relationship of Ancient and Modern Populations

Sequencing DNA from deceased individuals can inform whether the individuals that currently live in a location are descended from populations that occupied that location in the past.

genetics. Advances in DNA isolation (Dabney et al. 2013), library preparation (Meyer et al. 2012), bone sampling (Pinhasi et al. 2015), and sequence capture (Haak et al. 2015) make it possible to obtain genome-wide data from hundreds of samples (Allentoft et al. 2015; Haak et al. 2015; Mathieson et al. 2015; Fu et al. 2016). Analysis of these data can provide new insight into recent evolutionary processes, which leave faint signatures in modern genomes, including natural selection (Jewett et al. 2016; Schraiber et al. 2016) and population replacement (Lazaridis et al. 2014; Sjödin et al. 2014).
One of the most powerful uses of aDNA is to assess the continuity of ancient and modern populations. In many cases, it is unclear whether populations that occupied an area in the past are the direct ancestors of the current inhabitants of that area, and this can be next to impossible to assess using only modern genomes. Questions of population continuity and replacement have particular relevance for the spread of cultures and technology in humans (Lazaridis et al. 2016). For instance, recent work showed that modern South Americans are descended from people associated with the Clovis culture that inhabited North America over 10,000 years ago, further enhancing our understanding of the peopling of the Americas (Rasmussen et al. 2014).
Despite its utility in addressing difficult-to-answer questions in evolutionary biology, aDNA also has several limitations. Most strikingly, DNA decays rapidly following the death of an organism, resulting in highly fragmented, degraded starting material for sequencing (Sawyer et al. 2012). Thus, ancient data are frequently sequenced to low coverage, and have a significantly higher rate of incorrectly called nucleotides than modern samples. When working with diploid data, as in aDNA extracted from plants and animals, the low coverage prevents genotypes from being called with confidence.
Several strategies are commonly used to address the low-coverage data. One of the most common approaches is to sample a random read from each covered site, and use that as a haploid genotype call (Skoglund et al. 2012; Allentoft et al. 2015; Haak et al. 2015; Mathieson et al. 2015; Fu et al. 2016; Lazaridis et al. 2016). Many common approaches to the analysis of aDNA, such as the use of F-statistics (Green et al. 2010; Patterson et al. 2012), are designed with this kind of dataset in mind. F-statistics can be interpreted as linear combinations of simpler summary statistics, and can often be understood in terms of testing a tree-like structure relating populations. Nonetheless, despite the simplicity and appeal of this approach, it has several drawbacks. Primarily, it throws away reads from sites that are covered more than once, resulting in a potential loss of information from expensive, difficult-to-acquire data. Moreover, as shown by Peter (2016), F-statistics are fundamentally based on heterozygosity, which is determined by samples of size 2, and thus limited in power. Finally, these approaches are also strongly impacted by sequencing error, postmortem damage (PMD), and contamination.
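As a concrete illustration of the F-statistic framework mentioned above, the admixture f3-statistic of Patterson et al. (2012) can be computed directly from allele frequencies. The sketch below is illustrative only; it omits the finite-sample bias correction used by the published estimator, and the function name is ours:

```python
import numpy as np

def f3(c, a, b):
    """Admixture f3-statistic: the average of (c - a)(c - b) over SNPs.

    a, b, c are arrays of derived-allele frequencies in populations A, B,
    and the target C. Following Patterson et al. (2012), a significantly
    negative f3 indicates that C is admixed between A and B.
    Note: the published estimator includes a correction for finite sample
    size, which this sketch omits.
    """
    return np.mean((c - a) * (c - b))

# Toy frequencies at 5 SNPs; C is intermediate between A and B.
a = np.array([0.9, 0.8, 0.1, 0.2, 0.7])
b = np.array([0.1, 0.2, 0.9, 0.8, 0.3])
c = 0.5 * a + 0.5 * b  # C is a 50:50 mixture of A and B
print(f3(c, a, b))     # negative, consistent with admixture
```

Because c lies between a and b at every SNP, each term (c − a)(c − b) is negative, which is the signal the statistic exploits.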
On the other hand, several approaches exist to work with either genotype likelihoods or the raw read data. Genotype likelihoods are the probabilities of the read data at a site, given each of the three possible diploid genotypes at that site. They can be used in the calculation of population genetic statistics, or likelihood functions, to average over uncertainty in the genotype (Korneliussen et al. 2014). Many such approaches assume that genotype likelihoods are fixed by the SNP calling algorithm [although they may be recalibrated to account for aDNA-specific errors, as in Jónsson et al. (2013)]. With low coverage data, however, an increase in accuracy is expected if genotype likelihoods are coestimated with other parameters of interest, due to the covariation between processes that influence read quality and genetic diversity, such as contamination.
A recent method that coestimates demographic parameters, along with error and contamination rates, by using genotype likelihoods showed that there can be significant power to assess the relationship of a single ancient sample to a modern population (Racimo et al. 2016). Nonetheless, they found that, for very low coverage data, inferences were not reliable. Thus, they were unable to apply their method to the large number of extremely low coverage (<1×) genomes that are available. Moreover, they were unable to explore the tradeoffs that come with a limited budget: can we learn more by sequencing fewer individuals to high coverage, or more individuals at lower coverage?
Here, we develop a novel maximum likelihood approach for analyzing low coverage aDNA in relation to a modern population. We work directly with raw read data and explicitly model errors due to sequencing and postmortem damage. Crucially, our approach incorporates data from multiple individuals that belong to the same ancient population, which we show substantially increases power and reduces error in parameter estimates. We then apply our new methodology to ancient human data, and show that we can perform accurate demographic inference, even from very low coverage samples, by analyzing them jointly.

Sampling alleles in ancient populations
We assume a scenario in which allele frequencies are known with high accuracy in a modern population. Suppose that an allele is known to be at frequency x ∈ (0, 1) in the modern population, and we wish to compute the probability of obtaining k copies of that allele in a sample of n (0 ≤ k ≤ n) chromosomes from an ancient population. As we show in the Appendix, conditioning on the frequency of the allele in the modern population minimizes the impact of ascertainment, and allows this approach to be used for SNP capture data.
To calculate the sampling probability, we assume a simple demographic model, in which the ancient individuals belong to a population that split off from the modern population g1 generations ago, and subsequently existed as an isolated population for g2 generations. Further, we assume that the modern population has effective size N_e^(1), that the ancient population has effective size N_e^(2), and measure time in diffusion units, t_i = g_i/(2N_e^(i)). If we know the conditional probability f(y; x, t1, t2) that an allele is at frequency y in the ancient population, given that it is at frequency x in the modern population, then the sampling probability is simply an integral,

P_{n,k}(x) = ∫_0^1 C(n, k) y^k (1 − y)^(n−k) f(y; x, t1, t2) dy.

Thus, we must compute the binomial moments of the allele frequency distribution in the ancient population. In the Appendix, we show that these moments can be computed using matrix exponentiation,

P_{n,k}(x) = C(n, k) (e^{t2 Q} e^{t1 Q_Y} h_n)_k,

where (v)_i indicates the ith element of the vector v, h_n = ((1 − x)^n, x(1 − x)^(n−1), ..., x^n)^T, and Q and Q_Y are sparse matrices whose entries are given in the Appendix. This result has an interesting interpretation: the matrix Q_Y can be thought of as evolving the allele frequencies back in time, from the modern population to the common ancestor of the ancient and modern populations, while Q evolves the allele frequencies forward in time, from the common ancestor to the ancient population (Figure 1).

Because of the fragmentation and degradation of DNA that is inherent in obtaining sequence data from ancient individuals, it is difficult to obtain the high coverage data necessary to make high-quality genotype calls. To address this, we instead work directly with the raw read data, and average over all possible genotypes, weighted by their probability of producing the data. Specifically, we follow Nielsen et al. (2012) in modeling the probability of the read data in the ancient sample, given that the derived allele is at count k among the n sampled chromosomes at site l, as

ℙ(R_l | k) = ∏_i Σ_{g_{i,l} = 0}^{2} ℙ(R_{i,l} | g_{i,l}) ℙ(g_{i,l} | k),

where R_{i,l} = (a_{i,l}, d_{i,l}) are the counts of ancestral and derived reads in individual i at site l, g_{i,l} ∈ {0, 1, 2} indicates the possible genotype of individual i at site l (i.e., 0 = homozygous ancestral, 1 = heterozygous, 2 = homozygous derived), and ℙ(R_{i,l} | g_{i,l}) is the probability of the read data at site l for individual i, assuming that the individual truly has genotype g_{i,l}. We use a binomial sampling with error model, in which the probability that a truly derived site appears ancestral (and vice versa) is given by ε. We emphasize that the parameter ε will capture both sequencing error as well as PMD [cf. Racimo et al. (2016), who found that adding an additional parameter to specifically model PMD does not improve inferences]. Thus,

ℙ(R_{i,l} | g_{i,l}) = Binomial(d_{i,l}; a_{i,l} + d_{i,l}, p_{g_{i,l}}), where p_0 = ε, p_1 = 1/2, p_2 = 1 − ε.

Combining these two aspects together, by summing over possible allele frequencies weighted by their probabilities, we obtain our likelihood of the ancient data at site l,

ℙ(R_l | x_l) = Σ_{k=0}^{n} ℙ(R_l | k) P_{n,k}(x_l).  (3)
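The binomial error model and the genotype averaging described above can be sketched in a few lines. This is an illustrative reimplementation, not the released continuity code; for simplicity it conditions on a population derived-allele frequency y with Hardy-Weinberg genotype proportions, and all function names are ours:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of k successes in n independent Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def read_prob(a, d, g, eps):
    """P(reads | genotype) under the binomial error model of the text.

    a, d: counts of ancestral and derived reads at a site.
    g in {0, 1, 2}: number of derived alleles in the true genotype.
    A truly derived base is read as ancestral (and vice versa) with
    probability eps, so each read is derived with probability
    eps (g = 0), 1/2 (g = 1), or 1 - eps (g = 2).
    """
    p_derived = {0: eps, 1: 0.5, 2: 1.0 - eps}[g]
    return binom_pmf(d, a + d, p_derived)

def site_likelihood(reads, y, eps):
    """P(read data | derived-allele frequency y), averaging out genotypes.

    reads: list of (a_i, d_i) pairs, one per ancient individual.
    Genotypes are summed out assuming Hardy-Weinberg proportions,
    P(g | y) = Binomial(2, y), in the style of Nielsen et al. (2012).
    """
    like = 1.0
    for a, d in reads:
        like *= sum(binom_pmf(g, 2, y) * read_prob(a, d, g, eps)
                    for g in (0, 1, 2))
    return like

# Two low-coverage individuals: one derived read; one read of each kind.
reads = [(0, 1), (1, 1)]
print(site_likelihood(reads, y=0.3, eps=0.01))
```

Summing this per-site quantity against the sampling probabilities P_{n,k}(x) would then give the full likelihood of Equation 3.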

Data availability
The most recent Python implementations of the described methods are available at www.github.com/schraiber/continuity/. A snapshot of the code used as of the publication of the manuscript is available at https://zenodo.org/record/1054127.

Impact of coverage and number of samples on inferences
To explore the tradeoff of sequencing more individuals at lower depth compared to fewer individuals at higher coverage, we performed simulations using msprime (Kelleher et al. 2016) combined with custom scripts to simulate error and low coverage data. Briefly, we assumed a Poisson distribution of reads at every site with mean given by the coverage, and then simulated reads by drawing from the binomial distribution described in the Methods. First, we examined the impact of coverage and number of samples on the ability to recover the drift times in the modern and ancient populations. Figure 2 shows results for data simulated with t1 = 0.02 and t2 = 0.05, corresponding to an ancient individual who died 300 generations ago from a population of effective size 1000. The populations split 400 generations ago, and the modern population has an effective size of 10,000. We simulated 180,000 SNPs by simulating 100,000 500-bp fragments. Inferences of t1 can be relatively accurate even with only one low coverage ancient sample (Figure 2A). However, inferences of t2 benefit much more from increasing the number of ancient samples, as opposed to coverage (Figure 2B).

Figure 1 The generative model. Alleles are found at frequency x in the modern population, and are at frequency y in the ancient population. The modern population has effective size N_e^(1) and has evolved for t1 generations since the common ancestor of the modern and ancient populations, while the ancient population is of size N_e^(2) and has evolved for t2 generations. Ancient diploid samples are taken and sequenced to possibly low coverage, with errors. Arrows indicate that the sampling probability can be calculated by evolving alleles backward in time from the modern population, and then forward in time to the ancient population.
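The read-simulation scheme described above (a Poisson number of reads per site, each read flipped according to the binomial error model) might be sketched as follows; the function name and toy genotypes are ours, not from the released scripts:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_reads(genotypes, coverage, eps):
    """Simulate ancient read counts as described in the text.

    genotypes: array of true diploid genotypes (0, 1, or 2 derived
    alleles), one entry per site. The number of reads at each site is
    Poisson(coverage); each read is derived with probability eps, 1/2,
    or 1 - eps for genotypes 0, 1, 2 (the binomial error model of the
    Methods). Returns (ancestral, derived) read-count arrays.
    """
    n_reads = rng.poisson(coverage, size=genotypes.shape)
    p_derived = np.choose(genotypes, [eps, 0.5, 1.0 - eps])
    derived = rng.binomial(n_reads, p_derived)
    return n_reads - derived, derived

genos = rng.integers(0, 3, size=1000)  # toy genotypes at 1000 sites
anc, der = simulate_reads(genos, coverage=0.5, eps=0.01)
print((anc + der).mean())              # close to the 0.5x target coverage
```

At 0.5× coverage most sites are covered zero or one times, which is exactly the regime where, as shown below, single sites carry no information about heterozygosity.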
Supplemental Material, Table S1 shows that there is very little change in the average estimated parameter, indicating that most of the change in RMSE is due to decreased sampling variance. Thus, two individuals sequenced to 0.5× coverage have a much lower error than a single individual sequenced to 2× coverage, even though there is very little bias in either case. To explore this effect further, we derived the sampling probability of alleles covered by exactly one sequencing read (see Appendix). We found that sites covered only once have no information about t2, suggesting that evidence of heterozygosity is very important for inferences about t2. Finally, though we showed through simulation that there is sufficient power to disentangle t1 from t2, estimates of these parameters are negatively correlated, due to the necessity of fitting the total drift time t1 + t2 (Figure S1; all supplementary legends can be found in File S1).
We next examined the impact of coverage and sampling on the power to reject the hypothesis that the ancient individuals came from a population that is directly ancestral to the modern population. We analyzed both low coverage (0.5×) and higher coverage (4×) datasets consisting of one (for both low and high coverage samples) or five individuals (only for low coverage). We simulated data with parameters identical to the previous experiment, except we now examined the impact of varying the age of the ancient sample from 0 generations ago through to the split time with the modern population. We then performed a likelihood ratio test comparing the null model of continuity, in which t2 = 0, to a model in which the ancient population is not continuous. Figure 3 shows the power of the likelihood ratio test. For a single individual sequenced to low coverage, we see that the test only has power for very recently sampled ancient individuals (i.e., samples that are highly diverged from the modern population). However, the power increases dramatically as the number of individuals or the coverage per individual is increased; sequencing five individuals to 0.5× coverage results in essentially perfect power to reject continuity. Nonetheless, for samples that are very close to the divergence time, it will be difficult to determine if they are ancestral to the modern population or not, because differentiation is incomplete.
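The likelihood ratio test of continuity can be sketched as below. One caveat: because the null value t2 = 0 lies on the boundary of the parameter space, we assume here the 50:50 mixture null distribution of Self and Liang (1987); the text does not specify which null distribution was used, so this choice is an assumption of the sketch:

```python
from math import erf, sqrt

def lrt_continuity(lnl_full, lnl_null):
    """Likelihood ratio test of the continuity null hypothesis (t2 = 0).

    lnl_full: maximized log-likelihood with t1 and t2 free.
    lnl_null: maximized log-likelihood with t2 fixed at 0.
    Since t2 = 0 is on the boundary, the statistic is compared to a
    50:50 mixture of a point mass at 0 and a chi-square with 1 df
    (Self and Liang 1987) -- an assumption of this sketch.
    Returns (statistic, p-value).
    """
    stat = 2.0 * (lnl_full - lnl_null)
    if stat <= 0:
        return stat, 1.0
    # chi2_1 survival function: P(Z^2 > s) = 2 * (1 - Phi(sqrt(s)))
    phi = 0.5 * (1.0 + erf(sqrt(stat) / sqrt(2.0)))
    return stat, 0.5 * (2.0 * (1.0 - phi))

# Hypothetical log-likelihoods for a pooled ancient panel.
stat, p = lrt_continuity(lnl_full=-12345.6, lnl_null=-12351.2)
print(stat, p)  # stat is about 11.2; p is well below 0.05
```

A small p-value rejects the null model of continuity, as in Figure 3.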

Impact of admixture
We examined two possible ways that admixture can result in violations of the model to assess their impact on inference. In many situations, there may have been secondary contact between the population from which the ancient sample is derived and the modern population used as a reference. We performed simulations of this situation by modifying the simulation corresponding to Figure 2 (a 300-generation-old ancient sample from a population of size 1000, split from a population of size 10,000 400 generations ago) to include subsequent admixture from the ancient population into the modern population 200 generations ago (NB: this admixture occurred more recently than the ancient sample). In Figure 4, we show the results for admixture proportions ranging from 0 to 50%. Counterintuitively, estimates of t1 initially decrease before increasing again. This is likely a result of the increased heterozygosity caused by admixture, which acts to artificially inflate the effective size of the modern population, and thus decrease t1. As expected, t2 is estimated to be smaller the more admixture there is; indeed, for an admixture rate of 100%, the modern and ancient populations are continuous. The impact on t2 appears to be linear, and is well approximated by (1 − f)t2 if the admixture fraction is f.
In other situations, there may be admixture from an unsampled "ghost" population into the modern population. If the ghost admixture is of a high enough proportion, it is likely to cause a sample that is, in fact, a member of a directly ancestral population to appear not to be ancestral. We explored this situation by augmenting our simulations in which the ancient sample is continuous with an outgroup population that diverged from the modern population 0.04 time units ago (corresponding to 800 generations ago), and contributed genes to the modern population 0.01 time units ago (corresponding to 200 generations ago). We then assessed the impact on rejecting continuity using the likelihood ratio test (Figure 5). As expected, we see that low-power sampling strategies (such as a single individual sequenced to low coverage) are very minimally impacted by ghost admixture. However, for more powerful sampling strategies, moderate rates of ghost admixture (≈10%) result in rejection of continuity.

Impact of contamination
We also explored the impact of foreign DNA contamination on inferences made using this approach. Briefly, we modified the simulations to include a chance c of a read being from a modern sample instead of the ancient sample when simulating reads. We again simulated data corresponding to Figure 2, with a 300-generation-old ancient sample from a population of size 1000, split from a population of size 10,000 400 generations ago. In Figure 6, we see that relatively modest amounts of contamination can result in estimating zero, or near-zero, drift times. Interestingly, for the same contamination fraction, higher coverage samples are impacted slightly less. Together, this suggests that contamination will result in samples being falsely inferred to be directly continuous with the modern population.
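The contamination model described above, in which each read is drawn from a modern contaminant with probability c, might be sketched as follows (function name and toy values are ours; the contaminant read is assumed to carry the derived allele with the modern population frequency, subject to the same error rate):

```python
import numpy as np

def contaminated_read_probs(genotypes, modern_freq, c, eps):
    """Per-read probability of observing the derived allele with contamination.

    With probability c a read comes from a modern contaminant, where the
    derived allele is at frequency modern_freq; with probability 1 - c it
    comes from the ancient individual, using the same binomial error
    model as in the Methods. Sketch of the simulation in the text.
    """
    p_endogenous = np.choose(genotypes, [eps, 0.5, 1.0 - eps])
    p_contaminant = (1.0 - eps) * modern_freq + eps * (1.0 - modern_freq)
    return (1.0 - c) * p_endogenous + c * p_contaminant

# Toy ancient genotypes at four sites, 10% contamination, no error.
genos = np.array([0, 0, 1, 2])
print(contaminated_read_probs(genos, modern_freq=0.8, c=0.1, eps=0.0))
```

Because the mixture pulls every site's read probability toward the modern frequency, contamination mimics continuity, consistent with the near-zero drift times estimated in Figure 6.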

Application to ancient humans
We applied our approach to ancient human data from Mathieson et al. (2015), which is primarily derived from a SNP capture approach that targeted 1.2 million SNPs. Based on sampling location and associated archeological materials, the individuals were grouped into a priori panels, which we used to specify population membership when analyzing individuals together. We analyzed all samples for their relationship to the CEU individuals from the 1000 Genomes Project Consortium (2015). Based on our results, which suggested that extremely low coverage samples would yield unreliable estimates, we excluded panels that are composed of only a single individual sequenced to <2× coverage. We computed maximum likelihood estimates of t1 and t2 for individuals as grouped into populations (Figure 7A and Table 1). We observe that t2 is significantly greater than zero for all populations according to the likelihood ratio test. Thus, none of these populations are consistent with directly making up a large proportion of the ancestry of modern CEU individuals. Strikingly, we see that t2 ≫ t1, despite the fact that these samples died in the past, and thus belonged to a lineage that must have existed for fewer generations since the population split than the modern samples. This suggests that all of the ancient populations are characterized by extremely small effective population sizes.
We further explored the relationship between the dates of the ancient samples and the parameters of the model by plotting t1 and t2 against the mean sample date of all samples in that population (Figure 7, B and C). We expected to find that t1 correlated with sample age, under the assumption that samples were members of relatively short-lived populations that diverged from the "main-stem" of CEU ancestry. Instead, we see no correlation between t1 and sample time, suggesting that the relationship of these populations to the CEU is complicated, and not summarized well by the age of the samples. On the other hand, we see a strong positive correlation between t2 and sampling time (P < 1 × 10^−4). Because t2 is a compound parameter, it is difficult to directly interpret this relationship. However, it is consistent with the most ancient samples belonging to populations with the smallest effective sizes, consistent with previous observations.
Finally, we examined the impact of grouping individuals into populations in real data. We see that estimates of t1 for low coverage samples are typically lower when analyzed individually than when pooled with other individuals of the same panel (Figure 8A); because Table S1 shows that there is no downward bias in t1 for low coverage, this suggests that there may be some heterogeneity in these panels. On the other hand, there is substantial bias toward overestimating t2 when analyzing samples individually, particularly for very low coverage samples (Figure 8B). This again shows that, for estimates that rely on heterozygosity in ancient populations, pooling many low coverage individuals can significantly improve estimates.

Discussion
aDNA presents unique opportunities to enhance our understanding of demography and selection in recent history. However, it also comes equipped with several challenges, due to postmortem DNA damage (Sawyer et al. 2012). Several strategies have been developed to deal with the low quality of aDNA data, from relatively simple options like sampling a read at random at every site (Green et al. 2010) to more complicated methods making use of genotype likelihoods (Racimo et al. 2016). Here, we presented a novel maximum likelihood approach for making inferences about how ancient populations are related to modern populations, by analyzing read counts from multiple ancient individuals and explicitly modeling the relationship between the two populations. We explicitly condition on the allele frequency in a modern population; as we show in the Appendix, this renders our method robust to ascertainment in modern samples. Thus, it can be used with SNP capture data. Moreover, confidence intervals can be calculated using a nonparametric bootstrap, although this will be computationally intensive for large ancient panels, such as those considered in this manuscript. Using this approach, we examined some aspects of sampling strategy for aDNA analysis, and we applied our approach to ancient humans.
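The nonparametric bootstrap mentioned above could be sketched as follows. This is a generic percentile bootstrap that resamples sites with replacement; the released software may implement the resampling unit differently (e.g., over blocks of linked sites), and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_ci(per_site_data, estimator, n_boot=200, alpha=0.05):
    """Nonparametric bootstrap confidence interval, resampling sites.

    per_site_data: array with one entry (or row) per site.
    estimator: function mapping such an array to a parameter estimate.
    Sites are resampled with replacement n_boot times and the estimator
    recomputed, giving a (1 - alpha) percentile interval.
    """
    n_sites = len(per_site_data)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_sites, size=n_sites)
        reps.append(estimator(per_site_data[idx]))
    lo, hi = np.quantile(reps, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy example: bootstrap the mean derived-allele dosage across 500 sites.
data = rng.binomial(2, 0.3, size=500) / 2.0
lo, hi = bootstrap_ci(data, np.mean)
print(lo, hi)  # interval bracketing roughly 0.3
```

In practice the estimator would be the full maximum likelihood fit of (t1, t2), which is why the procedure is expensive for large panels.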
We found that sequencing many individuals from an ancient population to low coverage (0.5–1×) can be a significantly more cost-effective strategy than sequencing fewer individuals to relatively high coverage. For instance, we saw from simulations that far more accurate estimates of the drift time in an ancient population can be obtained by pooling two individuals at 0.5× coverage than by sequencing a single individual to 2× coverage (Figure 2). We saw this replicated in our analysis of the real data: low coverage individuals showed a significant amount of variation and bias in estimating the model parameters that was substantially reduced when individuals were analyzed jointly in a population (Figure 8). To explore this further, we showed that sites sequenced to 1× coverage in a single individual retain no information about the drift time in the ancient population. This can be intuitively understood because the drift time in the ancient population is strongly related to the amount of heterozygosity in the ancient population: an ancient population with a longer drift time will have lower heterozygosity at sites shared with a modern population. When a site is only sequenced once in a single individual, there is no information about the heterozygosity of that site. We also observed a pronounced upward bias in estimates of the drift time in the ancient population from low coverage samples. We speculate that this is because the few sites covered more than once are likely to appear homozygous, thus deflating the estimate of heterozygosity in the ancient population. Thus, for analysis of SNP data, we recommend that aDNA sampling be conducted to maximize the number of individuals from each ancient population that can be sequenced to 1×, rather than attempting to sequence fewer individuals to high coverage.
This suggestion can be complicated when samples have vastly different levels of endogenous DNA, where it may be cost effective to sequence high quality samples to higher coverage. In that case, we recommend sequencing samples to at least 3–4× coverage; as evidenced by Figure 2 and Figure 3, single samples at <4× coverage provide extremely limited information about the drift time in the ancient population, and, thus, little power to reject continuity.
When we looked at the impact of model misspecification, we saw several important patterns. First, the influence of admixture from the ancient population on inferences of t2 is approximately linear, suggesting that, if there are estimates of the amount of admixture between the modern and ancient population, a bias-corrected estimate of t2 could be produced (Figure 4B). The impact on inference of t1 is more complicated: admixture actually reduces estimates of t1 (Figure 4A). This is likely because admixture increases the heterozygosity in the modern population, thus causing the amount of drift time to seem reduced. In both cases, the bias is not impacted by details of sampling strategy, although the variance of the estimates is affected in a way consistent with Figure 2.
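The approximately linear bias in t2 suggests a simple plug-in correction when an external estimate of the admixture fraction f is available. This is our illustration of the idea suggested by Figure 4B, not a routine from the paper's released software:

```python
def correct_t2_for_admixture(t2_hat, f):
    """Bias-correct the ancient drift time for a known admixture fraction f.

    The simulations show the estimated t2 is approximately (1 - f) * t2
    when a fraction f of the modern population derives from the ancient
    one, so a simple plug-in correction divides the estimate by (1 - f).
    Illustrative only; assumes f is known without error.
    """
    if not 0.0 <= f < 1.0:
        raise ValueError("admixture fraction must be in [0, 1)")
    return t2_hat / (1.0 - f)

# E.g., an estimate of 0.04 with 20% admixture corrects to about 0.05.
print(correct_t2_for_admixture(0.04, 0.2))
```

No analogous correction is offered for t1, whose response to admixture is nonmonotonic.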
Of particular interest in many studies of ancient populations is the question of direct ancestry: are the ancient samples members of a population that contributed substantially to a modern population? We emphasize that this does not mean that the particular samples were direct ancestors of any modern individuals; indeed, this is exceedingly unlikely for old samples (Donnelly 1983; Chang 1999; Baird et al. 2003; Rohde et al. 2004). Instead, we are asking whether an ancient sample was a member of a population that is directly continuous with a modern population. Several methods have been proposed to test this question, but thus far they have been limited to many individuals sequenced at a single locus (Sjödin et al. 2014) or to a single individual with genome-wide data (Rasmussen et al. 2014). Our approach provides a rigorous, maximum-likelihood framework for testing questions of population continuity using multiple low coverage ancient samples. We saw from simulations (Figure 3) that data from single, low coverage individuals result in very little power to reject the null hypothesis of continuity unless the ancient sample is very recent (i.e., it has been diverged from the modern population for a long time). Nonetheless, when low coverage individuals are pooled together, or a single high coverage individual is used, there is substantial power to reject continuity for all but the most ancient samples (i.e., samples dating from very near the population split time).

Table 1 legend: Pop, population name; cov, mean coverage of individuals in the population; date, mean date of individuals in the population; t1, maximum likelihood estimate of t1 in the full model; t2, maximum likelihood estimate of t2 in the full model; LnL, maximum likelihood value in the full model; t1 (cont), maximum likelihood estimate of t1 in the model where t2 = 0; LnL (cont), maximum likelihood value in the model where t2 = 0.
Because many modern populations may have experienced admixture from unsampled "ghost" populations, we also performed simulations to test the impact of ghost admixture on the probability of falsely rejecting continuity. We find that single ancient samples do not provide sufficient power to reject continuity, even for high levels of ghost admixture, while increasingly powerful sampling schemes (adding more individuals or increasing coverage per individual) reject continuity at higher rates. However, in these situations, whether we regard rejection of continuity as a false or true discovery is somewhat subjective: how much admixture from an outside population is required before considering a population to not be directly ancestral? In future work, it will be extremely important to estimate the "maximum contribution" of the population an ancient sample comes from (cf. Sjödin et al. 2014).
To gain new insights from empirical data, we applied our approach to ancient samples throughout Europe. Notably, we rejected continuity for all populations that we analyzed. This is unsurprising, given that European history is extremely complicated, and has been shaped by many periods of admixture (Lazaridis et al. 2014, 2016; Haak et al. 2015). Thus, modern Europeans have experienced many periods of "ghost" admixture (relative to any particular ancient sample). Nonetheless, our results show that none of these populations are even particularly close to directly ancestral, as our simulations have shown that rejection of continuity will not occur with low levels of ghost admixture.
Second, we observed that the drift time in the ancient population was much larger than the drift time in the modern population. If the ancient sample were a contemporary sample, the ratio t1/t2 would be an estimator of the ratio N_e^(2)/N_e^(1); in fact, because the ancient sample existed for fewer generations since the common ancestor of the ancient and modern populations, t1/t2 acts as an upper bound on N_e^(2)/N_e^(1). Moreover, this is unlikely to be due to unmodeled error in the ancient samples: error would be expected to increase the heterozygosity in the ancient sample, and thus decrease our estimates of t2. Another potential complication is the fact that modern Europeans are a mixture of multiple ancestral populations (Lazaridis et al. 2014; Haak et al. 2015). As shown through simulation, admixture increases heterozygosity in the modern population, and thus decreases estimates of t1. However, even very large amounts of ghost admixture did not result in the order-of-magnitude differences we see in the real data, suggesting that ghost admixture cannot account for all the discrepancy between modern and ancient Ne. Thus, we find strong support for the observation that ancient Europeans were often members of small, isolated populations. We interpret these two results together as suggestive that many ancient samples found thus far in Europe were members of small populations that ultimately went locally extinct. Nonetheless, there may be many samples that belonged to larger metapopulations, and further work is necessary to specifically examine those cases.
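The upper-bound argument can be made explicit. Writing g1 and g2 for the numbers of generations elapsed in the modern and ancient lineages since the split (notation introduced here for clarity), the drift times are t_i = g_i/(2N_e^(i)), so:

```latex
\frac{t_1}{t_2}
  = \frac{g_1 / \bigl(2N_e^{(1)}\bigr)}{g_2 / \bigl(2N_e^{(2)}\bigr)}
  = \frac{g_1}{g_2} \cdot \frac{N_e^{(2)}}{N_e^{(1)}}
  \;\ge\; \frac{N_e^{(2)}}{N_e^{(1)}},
\qquad \text{since } g_2 \le g_1,
```

with equality when the ancient sample is contemporary (g1 = g2). An observed t2 much larger than t1 therefore forces N_e^(2) to be much smaller than N_e^(1).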
We further examined the effective sizes of ancient populations through time by looking for a correlation between the age of the ancient populations and the drift time leading to them (Figure 7C). We saw a strong positive correlation; although this drift time is a compound parameter, which complicates interpretation, it appears that the oldest Europeans were members of the smallest populations, and that effective population size has grown through time as agriculture spread through Europe.
We anticipate that the further development of methods that explicitly account for differential drift times in ancient and modern samples will become important as aDNA research becomes even more integrated into population genomics. This is because many common summary methods, such as the use of Structure (Pritchard et al. 2000) and Admixture (Alexander et al. 2009), are sensitive to differential amounts of drift between populations (Falush et al. 2016). As we have shown in ancient Europeans, ancient samples tend to come from isolated subpopulations with a large amount of drift, thus confounding such summary approaches. Moreover, standard population genetics theory shows that allele frequencies are expected to be deterministically lower in ancient samples, even if they are direct ancestors of a modern population. Intuitively, this arises because the alleles must have arisen at some point from new mutations, and thus were at lower frequencies in the past. A potentially fruitful avenue to combine these approaches moving forward may be to separate regions of the genome based on ancestry components, and assess the ancestry of ancient samples relative to specific ancestry components, rather than to genomes as a whole.
Our current approach leaves several avenues for improvement. We use a relatively simple error model that wraps up both PMD and sequencing error into a single parameter. While Racimo et al. (2016) showed that adding an additional parameter for PMD-related error does not significantly change results, the recent work of Kousathanas et al. (2017) shows that building robust error models is challenging, and essential to estimating heterozygosity properly. Although our method is robust to nonconstant demography, because we consider only alleles that are segregating in both the modern and the ancient population, we are losing information by not modeling new mutations that arise in the ancient population. Similarly, we only consider a single ancient population at a time, albeit with multiple samples. Ideally, ancient samples would be embedded in complex demographic models that include admixture, detailing their relationships to each other and to modern populations (Patterson et al. 2012; Lipson and Reich 2017). However, inference of such complex models is difficult, and, though there has been some progress in simplified cases (Pickrell and Pritchard 2012; Lipson et al. 2014), it remains an open problem due to the difficulty of simultaneously inferring a nontree-like topology along with demographic parameters. Software such as momi (Kamm et al. 2017), which can compute the likelihood of SNP data in an admixture graph, may make it possible to integrate over genotype uncertainty in larger settings than considered here.
To proceed with calculating (4), note that the conditional probability of an allele being at frequency y in the ancient population, given that it is at frequency x in the modern population, can be calculated as

f(y; x, t1, t2) = f(x, y; t1, t2) / φ(x),

where f(x, y; t1, t2) is the joint probability of the allele frequencies in the modern and ancient populations, and φ(x) is the frequency spectrum in the modern population.
Assuming that the ancestral population of the modern and ancient samples was at equilibrium, the joint distribution of allele frequencies can be computed by sampling alleles from the frequency spectrum of the ancestor, and evolving them forward in time via the Wright-Fisher diffusion. This can be written mathematically as

f(x, y; t1, t2) = ∫_0^1 f(z, x; t1) f(z, y; t2) φ(z) dz.

We now expand the frequency spectrum in terms of the speed measure and the probability of loss, and use reversibility with respect to the speed measure to rewrite this integral; rearranging terms and exchanging the order of integration shows that the formula takes the form of nested expectations. Specifically,

∫_0^1 g(y) f(z, y; t2) dy = E_z[g(Y_{t2})] ≡ h(z)

and

∫_0^1 h(z) f_Y(x, z; t1) dz = E^Y_x[h(Z_{t1})] = E_x[g(Y); t1, t2].

We now use (5) to note that, if A denotes the event that the allele was ascertained,

f(y; x, t1, t2 | A) = f(A | x, y) f(y; x, t1, t2) / f(A | x) = f(y; x, t1, t2),

where the final equality follows by recognizing that f(A | x, y) = f(A | x), since the allele was ascertained in the modern population. Thus, we see that the ascertainment is removed by conditioning, and we recover the original formula. Note that the robustness to ascertainment is only exact if the allele is ascertained in the modern population, but it is expected to be very close to true, so long as the allele is ascertained in a population closer to the modern population than to the ancient population.