Minimal important difference

    The minimal important difference (MID) or minimal clinically important difference (MCID) is the smallest change in a treatment outcome that an individual patient would identify as important and which would indicate a change in the patient's management.

    Purpose

    Over the years, great steps have been taken in reporting what really matters in clinical research. In the past, a clinical researcher might simply have reported: "in my own experience treatment X does not do well for condition Y". The use of a P value cut-off point of 0.05 was introduced by R.A. Fisher; this led to study results being described as either statistically significant or non-significant. Although the p-value made research outcomes more objective, using it as a rigid cut-off point can have potentially serious consequences: (i) clinically important differences observed in studies might be statistically non-significant (a type II error, or false negative result) and therefore be unfairly ignored; this is often the result of studying a small number of subjects; (ii) even the smallest difference in measurements can be shown to be statistically significant by increasing the number of subjects in a study. Such a small difference could be irrelevant (i.e., of no clinical importance) to patients or clinicians. Thus, statistical significance does not necessarily imply clinical importance.

    Over the years clinicians and researchers have moved away from physical and radiological endpoints towards patient-reported outcomes. However, using patient-reported outcomes does not solve the problem of small differences being statistically significant but possibly clinically irrelevant.

    In order to study clinical importance, the concept of the minimal clinically important difference (MCID) was proposed by Jaeschke et al. in 1989. The MCID is the smallest change in an outcome that a patient would identify as important. The MCID therefore offers a threshold above which a change in outcome is experienced as relevant by the patient; this avoids the problem of mere statistical significance. Schunemann and Guyatt recommended the term minimally important difference (MID) to remove the "focus on 'clinical' interpretations" (2005, p. 594).

    Methods of determining the MID

    There are several techniques to calculate the MID. They fall into three categories: distribution-based methods, anchor-based methods and the Delphi method.

    Distribution-based methods

    These techniques are derived from statistical measures of spread of data: the standard deviation, the standard error of measurement and the effect size, usually expressed as a standardized mean difference (SMD; also known as Cohen's d in psychology).

    1. Using the one-half standard deviation benchmark of an outcome measure entails that patients who improve by more than one-half of the outcome score's standard deviation have achieved a minimal clinically important difference.
    2. The standard error of measurement is the variation in scores due to the unreliability of the scale or measure used. A change smaller than the standard error of measurement is therefore likely to be the result of measurement error rather than a true observed change. Patients achieving a difference in outcome score of at least one standard error of measurement would have achieved a minimal clinically important difference.
    3. The effect size is obtained by dividing the difference between the means of the baseline and post-treatment scores by the standard deviation of the baseline scores. An effect size cut-off point can be used to define the MID in the same way as the one-half standard deviation and the standard error of measurement (a computational sketch of these first three estimates follows this list).
    4. Item response theory (IRT) can also produce an estimate of the MID, using judges who respond to clinical vignettes illustrating different scenarios.
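
    As an illustration, and not part of the original method descriptions, the following Python sketch computes the first three distribution-based estimates; the data, the reliability value of 0.75 and the effect-size cut-off of 0.5 are assumed purely for the example.

        import math

        # Hypothetical baseline and post-treatment scores (illustrative only).
        baseline = [42.0, 55.0, 48.0, 60.0, 51.0, 47.0, 58.0, 53.0]
        post = [50.0, 61.0, 49.0, 70.0, 60.0, 52.0, 66.0, 57.0]

        def mean(xs):
            return sum(xs) / len(xs)

        def sd(xs):
            m = mean(xs)
            return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

        baseline_sd = sd(baseline)

        # 1. One-half standard deviation benchmark.
        mid_half_sd = 0.5 * baseline_sd

        # 2. Standard error of measurement: SD * sqrt(1 - reliability);
        #    the reliability coefficient of the scale is assumed to be known.
        reliability = 0.75
        mid_sem = baseline_sd * math.sqrt(1 - reliability)

        # 3. Effect size: mean change divided by the baseline SD,
        #    compared against a chosen cut-off (assumed to be 0.5 here).
        effect_size = (mean(post) - mean(baseline)) / baseline_sd
        mid_effect_size = 0.5 * baseline_sd

        print(f"0.5 SD benchmark:       {mid_half_sd:.2f}")
        print(f"1 SEM benchmark:        {mid_sem:.2f}")
        print(f"Observed effect size:   {effect_size:.2f}")
        print(f"Effect-size based MID:  {mid_effect_size:.2f}")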

    Anchor-based method

    The anchor-based method compares changes in scores against an "anchor" as a reference. The anchor establishes whether the patient is better after treatment than at baseline, according to the patient's own experience.

    A popular anchoring approach is to ask the patient at a specific point during treatment: "Do you feel that the treatment improved things for you?". Answers to anchor questions can vary from a simple "yes" or "no" to ranked options, e.g., "much better", "slightly better", "about the same", "somewhat worse" and "much worse". The difference between the average change in scale score of patients who answered "better" and of those who answered "about the same" provides the benchmark for the anchor method.
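
    The calculation implied by this benchmark can be sketched as follows; the change scores and group labels are assumed for illustration and are not taken from any study.

        # Hypothetical change scores grouped by the answer to the anchor question.
        changes_by_answer = {
            "much better": [18.0, 22.0, 20.0],
            "slightly better": [9.0, 11.0, 8.0, 12.0],
            "about the same": [2.0, -1.0, 3.0, 0.0],
            "somewhat worse": [-6.0, -4.0],
        }

        def mean(xs):
            return sum(xs) / len(xs)

        # Benchmark: difference between the average change of patients who felt
        # (slightly) better and of those who felt about the same.
        mid_anchor = (mean(changes_by_answer["slightly better"])
                      - mean(changes_by_answer["about the same"]))
        print(f"Anchor-based MID estimate: {mid_anchor:.1f}")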

    An interesting variation on the anchor-based method is to establish the anchor before treatment. The patient is asked what minimal outcome would be necessary to undergo the proposed treatment. This method allows for more personal variation, as one patient might require more pain relief, whereas another strives for more functional improvement.

    Different anchor questions and different numbers of possible answers have been proposed. Currently there is no consensus on the one right question, nor on the best set of answers.

    Delphi method

    The Delphi method relies on a panel of experts who reach consensus regarding the MID. The expert panel receives information about the results of a trial. The experts review it separately and each provides a best estimate of the MID. Their responses are averaged, and this summary is sent back with an invitation to revise their estimates. The process is repeated until consensus is achieved.
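
    A toy simulation of this feedback loop is sketched below; the initial estimates, the rule that each expert moves halfway toward the panel mean, and the consensus threshold are all assumptions made for the example, since real panels revise their judgements qualitatively.

        # Toy Delphi simulation: estimates are averaged, fed back, and revised
        # until the panel's estimates are close enough to count as consensus.
        estimates = [5.0, 12.0, 8.0, 20.0, 10.0]  # initial MID estimates (assumed)

        rounds = 0
        while max(estimates) - min(estimates) > 1.0:   # consensus threshold (assumed)
            panel_mean = sum(estimates) / len(estimates)
            # Each expert revises halfway toward the fed-back summary (assumed rule).
            estimates = [e + 0.5 * (panel_mean - e) for e in estimates]
            rounds += 1

        consensus = sum(estimates) / len(estimates)
        print(f"Consensus MID of about {consensus:.1f} reached after {rounds} rounds")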

    Shortcomings

    The anchor-based method is not suitable for conditions in which most patients improve and few remain unchanged. High post-treatment satisfaction leaves insufficient discriminative ability for the calculation of an MID. A possible solution to this problem is a variation on the calculation: a 'substantial clinical benefit' score, which is not based on comparing patients who improve with those who do not, but on comparing patients who improve with those who improve a lot.

    MID calculation is of limited additional value for treatments that show effects only in the long run. For example, tightly regulated blood glucose in the case of diabetes might cause discomfort because of the accompanying hypoglycemia (low blood sugar), and the perceived quality of life might actually decrease; however, regulation reduces severe long-term complications and is therefore still warranted. The calculated MID also varies widely depending on the method used; currently there is no preferred method of establishing the MID.

    There is no consensus regarding the optimal technique, but distribution-based methods have been criticized. For example, use of the standard error of measurement (SEM) is based on anecdotal observations that it is approximately equal to 1/2 SD when the reliability is 0.75. But Revicki et al. question why 1 SEM should "have anything to do with the MID? The SEM is estimated by the product of the SD and the square root of 1-reliability of a measure. The SEM is used to set the confidence interval (CI) around an individual score, that is, the observed score plus or minus 1.96 SEMS constitutes the 95% CI. In fact, the reliable change index proposed early by Jacobson and Truax [12] is based on defining change using the statistical convention of exceeding 2 standard errors" (p. 106).
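
    The arithmetic behind the quoted observation can be made explicit; the standard deviation and the individual score below are assumed values chosen only to illustrate the formulas in the quotation.

        import math

        sd = 10.0            # assumed scale standard deviation
        reliability = 0.75   # reliability value cited in the text

        # Standard error of measurement: SD * sqrt(1 - reliability).
        # With reliability 0.75 this is exactly 0.5 * SD, which is where the
        # "approximately 1/2 SD" observation comes from.
        sem = sd * math.sqrt(1 - reliability)

        # 95% confidence interval around an individual score (score assumed).
        score = 60.0
        ci_low, ci_high = score - 1.96 * sem, score + 1.96 * sem

        # "Exceeding 2 standard errors", the convention mentioned for the
        # reliable change index in the quotation.
        reliable_change = 2 * sem

        print(f"SEM = {sem:.2f} (= 0.5 * SD)")
        print(f"95% CI around {score:.0f}: {ci_low:.1f} to {ci_high:.1f}")
        print(f"Change exceeding ~{reliable_change:.1f} points passes the 2-SEM convention")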

    Caveats

    The MID varies according to the disease and the outcome instrument, but it does not depend on the treatment method. Therefore, two different treatments for a similar disease can be compared using the same MID if the outcome measurement instrument is the same. The MID may also differ depending on the baseline level, and it seems to change over time after treatment for the same disease.
