Statistical significance

From Citizendium
Revision as of 22:40, 3 August 2009 by Robert Badgett
Plot of the standard normal probability density function.[1]

In statistics, statistical significance is a "term indicating that the results obtained in an analysis of study data are unlikely to have occurred by chance, and the null hypothesis is rejected. When statistically significant, the probability of the observed results, given the null hypothesis, falls below a specified level of probability (most often P < 0.05)."[2] The P-value, which is used to represent the likelihood that the observed results are due to chance, is defined as "the probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed."[3]
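As an illustration of this definition (not drawn from the cited sources), a minimal Python sketch converts a standard-normal test statistic into a two-sided P-value; the test statistic z = 2.5 is hypothetical:

```python
from math import erfc, sqrt

def two_sided_p_value(z: float) -> float:
    """P(|Z| >= |z|) when Z follows the standard normal distribution,
    i.e. the probability of a result at least as extreme as z under the null."""
    return erfc(abs(z) / sqrt(2))

# Hypothetical test statistic: z = 2.5 gives p ~ 0.012, below the
# conventional 0.05 threshold, so the null hypothesis would be rejected.
print(two_sided_p_value(2.5))
```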

Hypothesis testing

Usually, the null hypothesis is that there is no difference between two samples with regard to the factor being studied.[4]

Statistical errors

Two types of error can occur in testing the null hypothesis:

Type I error (alpha error)

Type I error, also called alpha error, is the rejection of a correct null hypothesis. The probability of this error is usually expressed by the p-value. Usually the null hypothesis is rejected if the p-value, or the chance of a type I error, is less than 5%. However, this threshold may be adjusted when multiple hypotheses are tested.[5]
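One such adjustment is Hochberg's step-up procedure from the reference above; a minimal Python sketch follows (the p-values in the example are hypothetical):

```python
def hochberg_reject(p_values, alpha=0.05):
    """Hochberg's step-up procedure: returns the indices of hypotheses
    rejected while controlling the family-wise type I error rate at alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    # Find the largest k (1-based) such that p_(k) <= alpha / (m - k + 1);
    # the k smallest p-values are then all rejected.
    k_star = 0
    for k in range(1, m + 1):
        if p_values[order[k - 1]] <= alpha / (m - k + 1):
            k_star = k
    return {order[i] for i in range(k_star)}

# With these four hypothetical p-values, the procedure rejects all four
# nulls, whereas a plain Bonferroni cutoff of 0.05/4 would reject only two.
print(hochberg_reject([0.01, 0.04, 0.03, 0.005]))
```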

Type II error (beta error)

Type II error, also called beta error, is the acceptance of an incorrect null hypothesis. This error may occur when the sample size is too small to provide adequate power to detect a statistically significant difference.[6][7][8]
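The relationship between sample size, power, and the type II (beta) error rate can be sketched as follows; this assumes a one-sample two-sided z-test with known standard deviation, and all numbers are hypothetical:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2)))

def power_z_test(delta: float, sigma: float, n: int, z_crit: float = 1.96) -> float:
    """Power of a two-sided z-test to detect a true mean shift `delta`,
    given known standard deviation `sigma` and sample size `n`.
    The type II (beta) error rate is 1 minus this value."""
    shift = delta * sqrt(n) / sigma
    return normal_cdf(shift - z_crit) + normal_cdf(-shift - z_crit)

# A hypothetical small study: n = 16 gives only ~52% power to detect a
# half-standard-deviation effect, so a "negative" result is weak evidence.
print(power_z_test(delta=0.5, sigma=1.0, n=16))
```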

Philosophical approaches to error testing

Frequentist method

This approach uses mathematical formulas to calculate deductive probabilities (p-value) of an experimental result.[3] This approach can generate confidence intervals.
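As a sketch of such a confidence interval, here is a normal-approximation 95% interval for a mean (the data are hypothetical; for small samples a t-based interval would be preferable):

```python
from math import sqrt

def mean_ci_95(xs):
    """Approximate 95% confidence interval for the population mean,
    using the normal approximation: mean +/- 1.96 * standard error."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    se = sqrt(var / n)  # standard error of the mean
    return (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical measurements
print(mean_ci_95([4.1, 5.2, 6.3, 4.8, 5.6]))
```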

A problem with the frequentist analyses of p-values is that they may overstate "statistical significance".[3][9]

Likelihood or Bayesian method

Some argue that the P-value should be interpreted in light of how plausible the hypothesis is, based on the totality of prior research and physiologic knowledge.[10][3][9][11][12] This approach can generate Bayesian 95% credibility intervals.[13]
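A minimal sketch of how such a credibility interval can fold prior knowledge into the estimate, assuming a conjugate normal prior for the effect and a normal likelihood (all numbers hypothetical):

```python
from math import sqrt

def normal_posterior(prior_mean, prior_sd, data_mean, data_se):
    """Conjugate normal-normal update: combine a N(prior_mean, prior_sd^2)
    prior with an estimate data_mean +/- data_se; returns (mean, sd)."""
    w_prior = 1.0 / prior_sd ** 2   # precision of the prior
    w_data = 1.0 / data_se ** 2     # precision of the data
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, sqrt(post_var)

def credibility_interval_95(mean, sd):
    return (mean - 1.96 * sd, mean + 1.96 * sd)

# A sceptical prior centred on no effect pulls a hypothetical estimate of
# 2.0 back toward 0; the 95% credibility interval reflects both sources.
m, s = normal_posterior(prior_mean=0.0, prior_sd=1.0, data_mean=2.0, data_se=1.0)
print(credibility_interval_95(m, s))
```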

The Bayesian method has been proposed for adaptive trial designs for comparative effectiveness research.[14] In the United States, Medicare's Centers for Medicare and Medicaid Services (CMS) is investigating this role.[15]

Bayesian inference updates the prior odds of the null hypothesis by multiplying them by the Bayes factor:[9]

  Posterior odds of the null hypothesis = Prior odds × Bayes factor

The Bayesian analysis creates a Bayes factor. Unlike the traditional P-value, the Bayes factor is not a probability of rejecting the null hypothesis, but a ratio of probabilities. A value greater than 1 supports the null hypothesis, whereas a value less than 1 supports the alternative hypothesis. The equation for the Bayes factor is:[9]

  Bayes factor = Prob(observed data, given the null hypothesis) / Prob(observed data, given the alternative hypothesis)
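As an illustration of this ratio (an assumed binomial example, not taken from the cited paper): suppose 14 successes in 20 trials, with a null of p = 0.5 and an alternative of p = 0.7:

```python
from math import comb

def binomial_likelihood(k: int, n: int, p: float) -> float:
    """Probability of k successes in n trials with success probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def bayes_factor(k: int, n: int, p_null: float, p_alt: float) -> float:
    """B = P(data | null) / P(data | alternative); B < 1 favours the alternative."""
    return binomial_likelihood(k, n, p_null) / binomial_likelihood(k, n, p_alt)

# Hypothetical data: 14/20 successes. B ~ 0.19 < 1, so the data support
# the alternative (p = 0.7) over the null (p = 0.5).
print(bayes_factor(14, 20, p_null=0.5, p_alt=0.7))
```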

Goodman gives the following three methods of interpreting an example Bayes Factor of 1/2:[9]

  1. Objective probability: "The observed results are half as probable under the null hypothesis as they are under the alternative."
  2. Inductive evidence: "The evidence supports the null hypothesis half as strongly as it does the alternative."
  3. Subjective probability: "The odds of the null hypothesis relative to the alternative hypothesis after the experiment are half what they were before the experiment."

The Minimum Bayes Factor is proposed by Goodman as another way to help readers accustomed to p-values make Bayesian interpretations:[9]

  Minimum Bayes Factor = e^(−Z²/2)

Note that the Minimum Bayes Factor when p = 0.05 (Z = 1.96) is 0.15. Starting from prior odds of 1, this Bayes factor leads to a posterior probability of the null hypothesis of about 13%, far higher than the 5% suggested by the frequentist p-value.
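Goodman's calculation can be sketched directly; the prior odds of 1 (a 50% prior probability for the null) are the assumption behind the 13% figure:

```python
from math import exp

def minimum_bayes_factor(z: float) -> float:
    """Goodman's minimum Bayes factor for a standard-normal statistic z."""
    return exp(-z ** 2 / 2)

def posterior_probability_null(bayes_factor: float, prior_odds: float = 1.0) -> float:
    """Posterior probability of the null after scaling prior odds by the Bayes factor."""
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

# z = 1.96 (p = 0.05) gives a minimum Bayes factor of ~0.15 and, from even
# prior odds, a posterior probability of the null of ~13%.
mbf = minimum_bayes_factor(1.96)
print(mbf, posterior_probability_null(mbf))
```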

Interpretation of the Bayes Factor[16]

  Bayes Factor (B)   Interpretation of support for the alternative hypothesis
  > 1.00             increases the odds of the null hypothesis
  0.32–1.00          "not worth more than a bare mention"
  0.100–0.320        "substantial support"
  0.032–0.100        "strong support"
  0.010–0.032        "very strong support"
  < 0.010            "decisive support"

References

  1. Anonymous (2006). "Normal Distribution", NIST/SEMATECH e-Handbook of Statistical Methods. Gaithersburg, MD: National Institute of Standards and Technology. Retrieved 2009-02-10.
  2. Anonymous. JAMAevidence Glossary. American Medical Association. Retrieved 2009-02-10.
  3. Goodman SN (1999). "Toward evidence-based medical statistics. 1: The P value fallacy". Ann Intern Med 130: 995–1004. PMID 10383371.
  4. Mosteller F, Bailar JC (1992). Medical Uses of Statistics. Boston, MA: NEJM Books. ISBN 0-910133-36-0.
  5. Hochberg Y (1988). "A sharper Bonferroni procedure for multiple tests of significance". Biometrika 75 (4): 800–802. DOI:10.1093/biomet/75.4.800.
  6. Altman DG, Bland JM (1995). "Absence of evidence is not evidence of absence". BMJ 311 (7003): 485. PMID 7647644. PMC 2550545.
  7. Detsky AS, Sackett DL (1985). "When was a 'negative' clinical trial big enough? How many patients you needed depends on what you found". Arch Intern Med 145 (4): 709–12. PMID 3985731.
  8. Young MJ, Bresnitz EA, Strom BL (1983). "Sample size nomograms for interpreting negative clinical studies". Ann Intern Med 99 (2): 248–51. PMID 6881780.
  9. Goodman SN (1999). "Toward evidence-based medical statistics. 2: The Bayes factor". Ann Intern Med 130 (12): 1005–13. PMID 10383350.
  10. Browner WS, Newman TB (1987). "Are all significant P values created equal? The analogy between diagnostic tests and clinical research". JAMA 257: 2459–63. PMID 3573245.
  11. Diamond GA, Kaul S (2004). "Prior convictions: Bayesian approaches to the analysis and interpretation of clinical megatrials". J Am Coll Cardiol 43 (11): 1929–39. DOI:10.1016/j.jacc.2004.01.035. PMID 15172393.
  12. Ioannidis JP (2008). "Effect of formal statistical significance on the credibility of observational associations". Am J Epidemiol 168 (4): 374–83; discussion 384–90. DOI:10.1093/aje/kwn156. PMID 18611956.
  13. Gelfand AE, Banerjee S, Carlin BP (2003). Hierarchical Modeling and Analysis for Spatial Data. Boca Raton: Chapman & Hall/CRC. ISBN 1-58488-410-X.
  14. Luce BR, Kramer JM, Goodman SN, et al. (2009). "Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change". Ann Intern Med 151 (3). PMID 19567619.
  15. Anonymous. Health Care: Technology Assessment Subdirectory Page. Agency for Healthcare Research and Quality. Retrieved 2009-08-03.
  16. Jeffreys H (1998) [1961]. Theory of Probability. Oxford: Clarendon Press. ISBN 0-19-850368-7.