McDonald–Kreitman test

    The McDonald–Kreitman test is a statistical test often used by evolutionary and population biologists to detect and measure adaptive evolution within a species: it determines whether adaptive evolution has occurred and estimates the proportion of substitutions that resulted from positive selection (also known as directional selection). To do this, the McDonald–Kreitman test compares the amount of variation within a species (polymorphism) to the divergence between species (substitutions) at two types of sites, neutral and nonneutral. A substitution refers to a nucleotide that is fixed within one species while a different nucleotide is fixed within a second species at the same position of homologous DNA sequences. A site is nonneutral if it is either advantageous or deleterious.

    The two types of sites can be either synonymous or nonsynonymous within a protein-coding region. In a protein-coding sequence of DNA, a site is synonymous if a point mutation at that site would not change the amino acid; such a change is also known as a silent mutation. Because a silent mutation does not alter the amino acid encoded by the sequence, the phenotype, or observable trait, of the organism is generally unchanged. A site in a protein-coding sequence of DNA is nonsynonymous if a point mutation at that site changes the amino acid, which can change the organism's phenotype. Typically, silent mutations in protein-coding regions are used as the "control" in the McDonald–Kreitman test.

    In 1991, John H. McDonald and Martin Kreitman devised the test while studying differences in the amino acid sequence of the alcohol dehydrogenase (Adh) gene in Drosophila (fruit flies). They proposed the method to estimate the proportion of substitutions that are fixed by positive selection rather than by genetic drift.

    To set up the McDonald–Kreitman test, we first arrange the data for the species being investigated in a two-way contingency table, as shown below:

                       Fixed   Polymorphic
        Synonymous     Ds      Ps
        Nonsynonymous  Dn      Pn
    • Ds: the number of synonymous substitutions per gene
    • Dn: the number of non-synonymous substitutions per gene
    • Ps: the number of synonymous polymorphisms per gene
    • Pn: the number of non-synonymous polymorphisms per gene

    To obtain Ds, Dn, Ps, and Pn, count the number of differences of each type in the protein-coding region and enter them in the corresponding cell of the contingency table; a simplified counting sketch is shown below.
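
    As an illustration only, the following Python sketch tallies the four counts from aligned, in-frame coding sequences. It assumes a list of sequences sampled from the focal species plus a single outgroup sequence (a hypothetical input format chosen here), that each variable codon differs by a single nucleotide, and it uses Biopython only for codon translation; real analyses handle complications (multiple differences per codon, ambiguous sites, per-site counting) that this sketch ignores.

        from Bio.Seq import Seq  # Biopython, used only to translate codons

        def classify_codon_sites(ingroup_seqs, outgroup_seq):
            """Tally Ds, Dn, Ps and Pn over the codons of aligned coding sequences.

            ingroup_seqs -- aligned, in-frame sequences sampled from the focal species
            outgroup_seq -- one aligned, in-frame sequence from the second species
            Simplifying assumption: each variable codon carries a single difference.
            """
            counts = {"Ds": 0, "Dn": 0, "Ps": 0, "Pn": 0}
            for start in range(0, len(outgroup_seq) - 2, 3):
                in_codons = {s[start:start + 3] for s in ingroup_seqs}
                out_codon = outgroup_seq[start:start + 3]
                in_aas = {str(Seq(c).translate()) for c in in_codons}
                out_aa = str(Seq(out_codon).translate())
                if len(in_codons) > 1:
                    # variable within the focal species -> polymorphism
                    counts["Ps" if len(in_aas) == 1 else "Pn"] += 1
                elif in_codons != {out_codon}:
                    # monomorphic within the focal species but different from
                    # the outgroup -> fixed difference (substitution)
                    counts["Ds" if in_aas == {out_aa} else "Dn"] += 1
            return counts

        # Toy data: three ingroup alleles and one outgroup sequence
        ingroup = ["ATGAAACGT", "ATGAAACGC", "ATGAAACGT"]
        outgroup = "ATGGAACGT"
        print(classify_codon_sites(ingroup, outgroup))
        # -> {'Ds': 0, 'Dn': 1, 'Ps': 1, 'Pn': 0}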

    The null hypothesis of the McDonald–Kreitman test is that the ratio of nonsynonymous to synonymous variation between species equals the ratio of nonsynonymous to synonymous variation within a species (i.e. Dn/Ds = Pn/Ps). When positive or negative selection (natural selection) acts on nonsynonymous variation, the ratios are no longer equal. When negative selection is at work and deleterious mutations contribute appreciably to polymorphism, the ratio of nonsynonymous to synonymous variation between species is lower than the ratio within species (i.e. Dn/Ds < Pn/Ps). When positive selection is at work, the ratio of nonsynonymous to synonymous variation within species is lower than the ratio between species (i.e. Dn/Ds > Pn/Ps). Because mutations under positive selection spread through a population rapidly, they contribute little to polymorphism but strongly to divergence.

    Using an equation derived by Smith and Eyre-Walker, we can estimate the proportion of base substitutions fixed by natural selection, α, with the following formula:

        α = 1 − (Ds × Pn) / (Dn × Ps)

    Alpha represents the proportion of substitutions driven by positive selection and can take any value between -∞ and 1. Negative values of α are produced by sampling error or violations of the model, such as the segregation of slightly deleterious amino acid mutations. As above, the null hypothesis is that α = 0, in which case Dn/Ds equals Pn/Ps.
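
    As a minimal sketch of how the test is carried out in practice, the following Python fragment builds the 2×2 contingency table from hypothetical counts (illustrative values, not real data), tests for a departure from neutrality with Fisher's exact test (one common choice for a 2×2 table of counts; other tests of independence are also used), and computes α with the formula above.

        from scipy.stats import fisher_exact

        # Hypothetical counts for one gene (illustrative values, not real data)
        Dn, Ds = 20, 10   # fixed nonsynonymous / synonymous differences between species
        Pn, Ps = 5, 15    # nonsynonymous / synonymous polymorphisms within the species

        # 2x2 contingency table mirroring the table above:
        # rows = synonymous / nonsynonymous, columns = fixed / polymorphic
        table = [[Ds, Ps],
                 [Dn, Pn]]

        odds_ratio, p_value = fisher_exact(table)

        # Smith & Eyre-Walker estimator of the proportion of adaptive substitutions
        alpha = 1 - (Ds * Pn) / (Dn * Ps)

        print(f"p-value for departure from neutrality: {p_value:.4f}")
        print(f"alpha estimate: {alpha:.3f}")   # 0.833 for these counts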

    Neutrality index

    The neutrality index (NI) quantifies the direction and degree of departure from neutrality (where the Pn/Ps and Dn/Ds ratios are equal). Assuming that silent mutations are neutral, a neutrality index greater than 1 (i.e. NI > 1) indicates an excess of amino acid polymorphism, which is expected when negative (purifying) selection is weeding out deleterious alleles before they can fix. Conversely, a neutrality index lower than 1 (i.e. NI < 1) indicates an excess of nonsilent divergence, which occurs when positive selection is at work in the population: natural selection favors a particular phenotype over others, and the favored allele increases in frequency toward fixation. The neutrality index is computed with the following equation:

        NI = (Pn / Ps) / (Dn / Ds)
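
    Continuing the hypothetical counts used in the earlier sketch, the neutrality index can be computed the same way. Note that, with these definitions, α = 1 − NI.

        # Neutrality index for the same hypothetical counts (NI < 1 suggests positive selection)
        NI = (Pn / Ps) / (Dn / Ds)
        print(f"neutrality index: {NI:.3f}")    # 0.167 for these counts, i.e. alpha = 1 - NI = 0.833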

    Sources of error with the McDonald–Kreitman test

    One drawback of the McDonald–Kreitman test is that, like every other statistical test, it is vulnerable to error. Many factors can contribute to errors in estimates of the level of adaptive evolution, including the presence of slightly deleterious mutations, variation of mutation rates across the genome, variation in coalescent histories across the genome, and changes in the effective population size. All these factors result in α being underestimated. However, according to research by Charlesworth (2008), Andolfatto (2008), and Eyre-Walker (2006), none of these factors except the presence of slightly deleterious mutations is significant enough to make the McDonald–Kreitman test unreliable.

    In general, the McDonald–Kreitman test is often thought to be unreliable because it tends to significantly underestimate the degree of adaptive evolution in the presence of slightly deleterious mutations. A slightly deleterious mutation is one on which negative selection acts only very weakly, so that its fate is determined by both selection and random genetic drift. If slightly deleterious mutations are segregating in the population, it becomes difficult to detect positive selection, and the degree of positive selection is underestimated. Weakly deleterious mutations have a larger chance of contributing to polymorphism than strongly deleterious mutations, but still have low probabilities of fixation. This biases the McDonald–Kreitman test's estimate of the degree of adaptive evolution, resulting in a dramatically lower estimate of α. By contrast, strongly deleterious mutations contribute to neither polymorphism nor divergence, so they do not bias estimates of α. The presence of slightly deleterious mutations is strongly linked to genes that have experienced the greatest reduction in effective population size: soon after a recent reduction in effective population size, such as a bottleneck, slightly deleterious mutations make up a larger share of the polymorphism in protein-coding regions. For more information on why population size affects the tendency of slightly deleterious mutations to increase in frequency, refer to the article Nearly neutral theory of molecular evolution.

    Additionally, as with every statistical test, the McDonald–Kreitman test carries the risk of type I and type II errors. A type I error is the rejection of a null hypothesis that is in fact true, and it is a particular concern here: the McDonald–Kreitman test is very vulnerable to type I error because many factors can lead to the accidental rejection of a true null hypothesis. Such factors include variation in recombination rate, nonequilibrium demography, small sample sizes, and comparisons involving recently diverged species. All of these factors can affect the test's ability to detect positive selection and to gauge the level of positive selection acting on a species, and this often leads to false positives, that is, incorrect rejections of the null hypothesis.

    While performing the McDonald–Kreitman test, scientists also have to avoid making too many type II errors (failing to reject a null hypothesis that is in fact false); otherwise the results become too unreliable to be useful.

    Error-correcting Mechanisms of the McDonald–Kreitman test

    Experimentation with the McDonald–Kreitman test and with ways to improve its accuracy continues. The most important error to correct for is the severe underestimation of α in the presence of slightly deleterious mutations, discussed in the previous section "Sources of error with the McDonald–Kreitman test." One proposed adjustment is to remove polymorphisms below a specific frequency from the data set, which increases the estimated number of substitutions attributable to adaptive evolution. To minimize the impact of slightly deleterious mutations, polymorphisms below a certain cutoff frequency, such as <8% or <5%, are excluded (there is still much debate about what the best cutoff value should be); a small filtering sketch is shown below. Excluding polymorphisms under the cutoff reduces the bias created by slightly deleterious mutations, since fewer of them are counted, and drives the estimate of α up. The degree of adaptive evolution is then no longer so severely underestimated, making the McDonald–Kreitman test more reliable.
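
    A minimal sketch of that filtering step, under the assumption that each polymorphism is available as a (site type, derived-allele frequency) pair (a hypothetical input format chosen here), might look like this:

        # Hypothetical polymorphism list: (site type, derived-allele frequency)
        polymorphisms = [("nonsynonymous", 0.02), ("nonsynonymous", 0.40),
                         ("synonymous", 0.03), ("synonymous", 0.25),
                         ("synonymous", 0.60)]

        CUTOFF = 0.05  # proposed cutoffs in the literature include 5% and 8%

        # Drop low-frequency polymorphisms, then recount Pn and Ps
        kept = [(kind, freq) for kind, freq in polymorphisms if freq >= CUTOFF]
        Pn = sum(1 for kind, _ in kept if kind == "nonsynonymous")
        Ps = sum(1 for kind, _ in kept if kind == "synonymous")
        print(Pn, Ps)  # 1 2 after filtering, versus 2 3 before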

    Another necessary adjustment is to control for type I error in the McDonald–Kreitman test (see the discussion in the previous section "Sources of error with the McDonald–Kreitman test"). One way to avoid type I errors is to avoid populations that have undergone a recent bottleneck, that is, a recent decrease in effective population size. To make the analysis as accurate as possible, it is also best to use large sample sizes, although there is still debate about how large "large" needs to be. For applications to non-coding DNA, Peter Andolfatto suggests establishing significance levels by coalescent simulation with recombination in genome-wide scans for selection; this improves the accuracy of the test and helps avoid false positives. Given these possible ways to avoid type I errors, scientists should choose the populations they analyze carefully, so as to avoid analyzing populations that will lead to inaccurate results.
