Use of Confidence Intervals in Interpreting Nonstatistically Significant Results
Alexander T. Hawkins, MD, MPH; Lauren R. Samuels, PhD
The goal of much of medical research is to determine which of 2 or more therapeutic approaches is most effective in a given situation. The power of a study is the probability of detecting a true treatment effect of a given magnitude and is highly dependent on the number of patients studied. When a retrospective observational study design is used, researchers have little or no control over the sample size, and thus little control over the power to detect a particular treatment effect. When such a study yields nonstatistically significant results (referred to as nonsignificant results in this article), an important question is whether the lack of statistical significance was likely due to a true absence of difference between the approaches or due to insufficient power. To address this issue, some researchers may consider conducting a power calculation for the completed study. However, power calculations, even for randomized clinical trials, are irrelevant once a study has been completed.1,2 In contrast, careful use of confidence intervals (CIs) can aid in the interpretation of nonsignificant findings across all study designs.
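To make this concrete, the following minimal sketch (with entirely hypothetical counts not taken from any study) computes a 95% CI for a risk difference between two groups using the normal (Wald) approximation, and shows how a CI that spans both no difference and a clinically meaningful difference signals an inconclusive, likely underpowered result rather than evidence of equivalence.

```python
# Hypothetical example: 95% CI for a risk difference between two groups,
# using the normal (Wald) approximation. All numbers are assumptions for
# illustration only.
import math

# Hypothetical counts: events / patients in each group
events_a, n_a = 30, 200   # group A: 15% event rate
events_b, n_b = 38, 200   # group B: 19% event rate

p_a, p_b = events_a / n_a, events_b / n_b
diff = p_b - p_a  # observed risk difference

# Standard error of the difference between two independent proportions
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

z = 1.96  # critical value for a 95% CI
lower, upper = diff - z * se, diff + z * se

print(f"Risk difference: {diff:.3f} (95% CI, {lower:.3f} to {upper:.3f})")
# Here the CI runs from roughly -0.03 to 0.11. Because it crosses 0, the
# result is nonsignificant at the 5% level; but because it also fails to
# exclude differences large enough to matter clinically, the study is
# inconclusive rather than evidence of no difference. A narrow CI centered
# near 0 would instead support a true absence of a meaningful difference.
```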