Assay sensitivity
Assay sensitivity is a property of a clinical trial defined as the ability of a trial to distinguish an effective treatment from a less effective or ineffective intervention. Without assay sensitivity, a trial is not internally valid and is not capable of comparing the efficacy of two interventions.
Importance
Lack of assay sensitivity has different implications for trials intended to show a difference greater than zero between interventions (superiority trials) and trials intended to show non-inferiority. Non-inferiority trials attempt to rule out some margin of inferiority between a test and control intervention, i.e., to show that the test intervention is worse than the control, if at all, by no more than a chosen amount (the non-inferiority margin).
If a trial intended to demonstrate efficacy by showing superiority of a test intervention to control lacks assay sensitivity, it will fail to show that the test intervention is superior and will fail to lead to a conclusion of efficacy.
In contrast, if a trial intended to demonstrate efficacy by showing a test intervention is non-inferior to an active control lacks assay sensitivity, the trial may find an ineffective intervention to be non-inferior and could lead to an erroneous conclusion of efficacy.
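To make the contrast concrete, the sketch below states the two decision rules in code. The numbers are purely illustrative and assume the trial's analysis is summarized by a confidence interval for the treatment difference (test minus control); this is a simplified picture, not any particular trial's analysis plan.

```python
# A minimal sketch (illustrative numbers only) of the two decision rules.
# Superiority: the confidence interval for the treatment difference
# (test minus control) must lie entirely above zero.
# Non-inferiority: the interval must lie entirely above minus the
# pre-specified margin, i.e. a deficit as large as the margin is ruled out.

ci_lower, ci_upper = -0.037, 0.077   # hypothetical 95% CI for the difference
margin = 0.05                        # pre-specified non-inferiority margin

superiority_shown = ci_lower > 0            # False: the interval includes zero
non_inferiority_shown = ci_lower > -margin  # True: a 5-point deficit is excluded

print(f"Superiority shown: {superiority_shown}")
print(f"Non-inferiority shown: {non_inferiority_shown}")
```

With these hypothetical numbers the same trial result supports a conclusion of non-inferiority but not of superiority, which is why a lack of assay sensitivity is far more dangerous in the non-inferiority setting: an ineffective test intervention can still "pass".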
When two interventions within a trial are shown to have different efficacy (i.e., when one intervention is superior), that finding itself directly demonstrates that the trial had assay sensitivity (assuming the finding is not related to random or systematic error). In contrast, a trial that demonstrates non-inferiority between two interventions, or an unsuccessful superiority trial, generally does not contain such direct evidence of assay sensitivity. However, the idea that non-inferiority trials lack assay sensitivity has been disputed.
Differences in sensitivity
Assay sensitivity for a non-inferiority trial depends upon the chosen margin of inferiority ruled out by the trial and upon the design of the planned trial. The chosen margin cannot be larger than the effect size that the control intervention has reliably and reproducibly demonstrated compared to placebo or no treatment in past superiority trials.

For instance, if previous superiority trials provide reliable and reproducible evidence of a 10% effect size for the control intervention compared to placebo, an appropriately designed non-inferiority trial that rules out the test intervention being as much as 5% less effective than the control would have assay sensitivity. With the same data, however, a non-inferiority trial designed to rule out only a deficit as large as 15% may not have assay sensitivity: because the deficit ruled out is larger than the control's effect over placebo, such a trial would not ensure that the test intervention is any more effective than placebo.

The choice of the margin is sometimes problematic in non-inferiority trials. Because investigators desire larger margins to decrease the sample size needed to perform a trial, the chosen margin is sometimes larger than the effect size of the control compared to placebo. In addition, a valid non-inferiority trial is not possible when there is a lack of data demonstrating a reliable and reproducible effect of the control compared to placebo.
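The margin check described above can be stated compactly. The sketch below uses the article's illustrative numbers (a 10% control-versus-placebo effect and candidate margins of 5% and 15%); it is a simplified rule of thumb, not a complete method for selecting a margin.

```python
# A minimal sketch of the margin check described above, using the article's
# illustrative numbers: the margin of inferiority ruled out by the trial must
# be smaller than the effect the control has reliably and reproducibly shown
# over placebo in past superiority trials (here taken to be 10 percentage points).

reliable_control_effect = 0.10   # control vs placebo, from past superiority trials

def margin_preserves_assay_sensitivity(margin, reliable_effect=reliable_control_effect):
    """True if ruling out a deficit of `margin` versus the control still
    guarantees the test intervention is more effective than placebo."""
    return margin < reliable_effect

for margin in (0.05, 0.15):
    verdict = "can" if margin_preserves_assay_sensitivity(margin) else "cannot"
    print(f"A margin of {margin:.0%} {verdict} support assay sensitivity.")
```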
In addition to choosing a margin based upon credible past evidence, to have assay sensitivity the planned non-inferiority trial must be designed in a way similar to the past trials that demonstrated the effectiveness of the control compared to placebo, the so-called "constancy assumption". In this way, non-inferiority trials share a feature with externally (historically) controlled trials. This also means that non-inferiority trials are subject to some of the same biases as historically controlled trials: the effect of a drug observed in a past trial may not hold in a current trial because of changes in medical practice, differences in disease definitions or changes in the natural history of the disease, differences in the timing and definition of outcomes, use of concomitant medications, and so on.
The finding of "difference" or "no difference" between two interventions is not a direct demonstration of the internal validity of the trial unless another internal control confirms that the study methods can show a difference, if one exists, over the range of interest (i.e., the trial contains a third group receiving placebo). Since most clinical trials do not contain such an internal "negative" control (a placebo group) to validate the trial internally, the data used to evaluate the trial's validity come from past trials external to the current trial.
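As an illustration of the internal check just described, a three-arm design can demonstrate assay sensitivity within the trial itself. The sketch below uses hypothetical response rates and simple normal-approximation confidence intervals, not any specific trial's analysis plan: the control-versus-placebo comparison must show superiority before the test-versus-control comparison is read against the non-inferiority margin.

```python
# A minimal sketch, with hypothetical response rates, of how a third (placebo)
# arm provides an internal check of assay sensitivity: control must be shown
# superior to placebo before the test-versus-control comparison is interpreted
# against the non-inferiority margin. Normal-approximation intervals are used
# purely for illustration.
from math import sqrt
from statistics import NormalDist

def wald_ci_diff(p1, n1, p2, n2, alpha=0.05):
    """Normal-approximation confidence interval for the difference p1 - p2."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return diff - z * se, diff + z * se

n = 800                                      # patients per arm (illustrative)
p_test, p_ctrl, p_placebo = 0.70, 0.70, 0.58
margin = 0.05                                # pre-specified non-inferiority margin

# Internal check of assay sensitivity: control superior to placebo.
ctrl_vs_placebo_lower, _ = wald_ci_diff(p_ctrl, n, p_placebo, n)
assay_sensitivity_shown = ctrl_vs_placebo_lower > 0

# Primary comparison: test versus control, read against the margin.
test_vs_ctrl_lower, _ = wald_ci_diff(p_test, n, p_ctrl, n)
non_inferiority_shown = test_vs_ctrl_lower > -margin

print(f"Control superior to placebo (internal assay sensitivity): {assay_sensitivity_shown}")
print(f"Test non-inferior to control (margin {margin:.0%}): {non_inferiority_shown}")
```

When no placebo arm is feasible, this check cannot be performed inside the trial, which is why the evidence for assay sensitivity must then come from the external historical trials discussed above.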