Applied statistics

From Citizendium
imported>Nick Gardner

Revision as of 13:48, 4 July 2009

This article is developing and not approved.

Applied statistics provides both a familiar source of information and a notorious source of error and misinformation. Popular errors commonly arise from misplaced confidence in intuitive interpretations, and some serious errors have arisen from misuse by mathematicians and other professionals. Deliberate misrepresentation of statistics by politicians and marketing professionals is so commonplace that genuine use is often treated with suspicion. The use of statistics is nevertheless unavoidable, and its misinterpretation can usually be avoided given a grasp of a few readily understood concepts.

Overview: the basics

Statistics are observations that are recorded in numerical form. It is essential to their successful handling to accept that statistics are not facts, and therefore incontrovertible, but observations about facts, and therefore fallible. The reliability of the information that they provide depends not only upon their successful interpretation, but also upon the accuracy with which the facts are observed and the extent to which they truly represent the subject matter of that information. An appreciation of the means by which statistics are collected is thus an essential part of the understanding of statistics, and is at least as important as a familiarity with the tools that are used in its interpretation.

The basic laws of chance from which much of statistics theory has been derived are little more than a formalisation of intuitive concepts, and the use of the resulting algorithms for the solution of many everyday statistical problems should require only a grasp of basic mathematical principles. Failures of interpretation by professional users suggest, however, that "probability blindness" is an inherent characteristic of the human brain that prevents the effective employment of intuition for that purpose.
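How little mathematics those basic laws demand can be seen from a classic question of the kind studied by Pascal: the chance of throwing at least one six in four throws of a fair die. The following sketch (illustrative Python; the helper function and figures are not part of the original text) needs nothing beyond the multiplication rule for independent events:

```python
# The multiplication rule: the probability that independent events all fail
# is the product of their individual failure probabilities, so the chance of
# at least one success is 1 minus that product.

def prob_at_least_one(p_single: float, trials: int) -> float:
    """Probability of at least one success in a number of independent
    trials, each with success probability p_single."""
    return 1.0 - (1.0 - p_single) ** trials

# Chance of at least one six in four throws of a fair die.
p = prob_at_least_one(1.0 / 6.0, 4)
print(round(p, 4))  # 0.5177
```

Intuition often suggests 4/6 for this question; the formal calculation shows why intuition is a poor guide even in so simple a case.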

Success in the use of more advanced statistics theory depends not so much upon mathematical ability as upon well-considered discrimination in the application of its theorems.

The collection of statistics

The methodology adopted for the collection of observations has a profound influence upon the problem of extracting useful information from the resulting statistics. That problem is at its easiest when the collecting authority can minimise disturbing influences by conducting a "controlled experiment"[1]. A range of more complex methodologies (and associated software packages), referred to as "the design of experiments"[2], is available for use when the collecting authority has various lesser degrees of control. The object of the design in each case is to facilitate the testing of an hypothesis by helping to exclude the influence of factors that the hypothesis does not take into account. At the furthest extreme from the controlled experiment, no such help can be provided through the physical elimination of extraneous influences - and, if they are to be eliminated, it must be done after they have been identified by a purely analytical technique termed the "analysis of variance"[3]. For example, the rôle of the authorities that collect economic statistics is necessarily passive, and the testing of economic hypotheses involves the use of a version of the analysis of variance termed "econometrics"[4] (sometimes confused with economic modelling, which is a purely deterministic technique).

The taking of samples[5] reduces the cost of collecting observations, but increases the opportunities for the generation of false information. One source of error arises from the fact that every time a sample is taken there will be a different result. That source of error is readily quantified as the sample's "standard error", or as the "confidence interval" within which the mean observation may be expected to lie[6]. It cannot be eliminated, but it can be reduced to an acceptable level by increasing the size of the sample. The other source of error arises from the likelihood that the characteristics of the sample differ from those of the "population" that it is intended to represent. That source of error does not diminish with sample size and cannot be estimated by a mathematical formula. Careful attention to what is known about the composition of the "population", and to the reflection of that composition in the sample, is the only available precaution. The composition of the respondents to an opinion poll, for example, is normally chosen to reflect as far as possible the composition of the intended "population" as regards sex, age, income bracket and so on. The remaining difference is referred to as the "sample bias", and undetected bias has sometimes been a major source of misinformation.
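The first of those two sources of error can be quantified with a short calculation. The sketch below (illustrative Python; the sample values are invented, and the multiplier 1.96 assumes an approximately normal sampling distribution) computes a sample mean, its standard error, and the conventional 95 per cent confidence interval built from it:

```python
import math

def mean_and_95ci(sample):
    """Return the sample mean and an approximate 95% confidence interval
    for the population mean, using the normal approximation."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation (n - 1 divisor), then the standard error
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    se = sd / math.sqrt(n)
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

heights = [170, 165, 180, 175, 168, 172, 177, 169]   # invented sample, in cm
mean, (low, high) = mean_and_95ci(heights)
print(f"mean {mean:.1f}, 95% CI ({low:.1f}, {high:.1f})")
```

Quadrupling the sample size halves the standard error, which is why this source of error, unlike sample bias, can be driven down by collecting more observations.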

The use by statisticians of the term "population" refers, not to people in general, but to the category of things or people about which information is sought. A precise definition of the target population is an essential starting point in a statistical investigation, and also a possible source of misinformation. Difficulty can arise when, as often happens, the definition has to be arbitrary. If the intended population were the output of the country's farmers, for example, it might be necessary to draw an arbitrary dividing line between farmers and owners of smallholdings such as market gardens. Any major change over time in the relative output of farm products by the included and excluded categories might then lead to misleading conclusions. Technological change, such as the change from typewriters to word processors, has sometimes given rise to serious difficulties in the construction of the price indexes used in the correction of GDP for inflation[7]. Since there is no objective solution to those problems, it is inevitable that national statistics embody an element of judgement exercised by the professional statisticians in the statistics authorities.

National statistics also embody some further arbitrary or subjective adjustments that are intended to increase their usefulness, but constitute another possible source of error. The early provisional release of results involves the arbitrary imputation of figures in the place of late or invalid responses to enquiries, and substantial amendments to the published figures sometimes then continue for some months after their initial release. Decisions are also taken from time to time to exclude or include transitory changes such as whether the transfer abroad for repair of a multi-million dollar airliner is to be recorded as an export and its return as an import. Judgmental adjustments are also made to resolve conflicts between duplicate measures of national output. National statistics authorities employ research teams for the purpose of maintaining and improving the reliability of their output in those and other respects.

Politicians in the major democracies have seldom had any influence upon the collection and publication of national statistics, and most countries have sought to allay suspicions to the contrary by delegating those functions to public bodies that are free from possible government influence.

Statistical inference

Although statistics is sometimes thought of as a branch of mathematics, some of its findings can be successfully interpreted by verbal inference, and there are others that require only the use of a few simply-expressed rules (such as those set out in paragraph 1 of the tutorials subpage). However, there is evidence to suggest that most people confidently prefer an intuitive approach, unaware of the "probability blindness"[8][9] that is characteristic of the human brain. Educated professionals seem not to be immune from overconfidence in that respect, of which there have been several examples involving the medical profession. For example, the following question was put to the staff and students of the Harvard Medical School: "If a test for a disease that has a prevalence rate of 1 in 1000 has a false positive rate of 5 per cent, what is the chance that a person who has been given a positive result actually has the disease?" Forty-five per cent of the respondents gave the intuitive answer of 95 per cent, when the true answer is about 2 per cent[10] (see paragraph 2 of the tutorials subpage). No harm was done by those mistakes, but similar overconfidence by an eminent expert cost the English mother, Sally Clark, her liberty (see paragraph 3 of the tutorials subpage).
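The Harvard answer can be checked with a few lines of arithmetic using Bayes' theorem. In the sketch below (illustrative Python; as in the original question, the test is assumed to detect every true case), almost all positive results come from the healthy 999 in every 1000, which is why the correct answer is so far from the intuitive one:

```python
# Bayes' theorem for a screening test, assuming 100% sensitivity:
# P(disease | positive) = P(positive | disease) P(disease) / P(positive).

def prob_disease_given_positive(prevalence, false_positive_rate):
    true_positives = prevalence * 1.0                       # every case detected
    false_positives = (1.0 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

p = prob_disease_given_positive(prevalence=0.001, false_positive_rate=0.05)
print(f"{p:.0%}")  # prints "2%", not the intuitive 95%
```

Out of 1000 people tested, roughly one has the disease but about fifty of the healthy majority also test positive, so a positive result is far more likely to be a false alarm than a true case.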

The development of the statistics theory that has enabled more complex statistical problems to be tackled has been the work of mathematical geniuses such as Bernoulli, Laplace and Pascal, but the skills required for the effective use of statistics are different from those required for the understanding of the mathematical derivation of their theorems. Statistics theory is mainly concerned with the special circumstances under which events are governed solely by chance, without any influence from human action or recognisable natural forces, whereas applied statistics is usually concerned with its use under less restricted circumstances. Also, the terminology of statistical theory attaches precise meanings to some everyday words, such as "confidence" and "significance", that may not always be applicable under less restricted circumstances. When, for example, a degree of statistical confidence is expressed in the statement that the strength of a type of steel will not fall below a stated level, that degree of confidence may apply only to certain circumstances - and not, for example, under conditions of widely varying temperatures. (This is a particularly important consideration in connection with economic and financial statistics, which are often based upon quarterly data collected over a dozen or more years, during which they may be affected by a variety of changing circumstances.) Thus the successful user of statistics has to combine an awareness of the available theoretical tools of inference with an appreciation of the extent to which they can safely be applied to a particular problem - if indeed they can be applied at all, bearing in mind the financial disasters that have resulted from mistaken reliance upon statistics in situations containing deterministic risks[11].
The user who plans to employ those tools for the analysis of data must also be prepared to spend a good deal of time acquiring a grasp of the relevant theorems of statistics theory, and mastering the intricacies of the free statistical software that is available for that purpose. Managers who supervise such work, and users of its results, may seek to be excused from such expenditure of effort, but cannot escape responsibility for acquiring an understanding of statistical concepts that is at least sufficient for an awareness of the limitations of such analysis. And, as the statistician M J Moroney has emphasised, there can never be any question of making a decision solely on the basis of a statistical test: an engineer doing a statistical test must remain an engineer, an economist must remain an economist, a pharmacist a pharmacist[12].

A useful contribution of statistics theory to the interpretation of results obtained from a sample is its quantification of the intuitive concept of "significance" in a way that enables an objective answer to be given to the question of how likely it is that what might appear to be information is really only a matter of chance (although the way that question is usually put by statisticians is "whether the result is significant at the 5 per cent level"). If - and only if - it can be established by other methods that the sample used was not biased, then one of a variety of statistical tests can be used to answer that question[13]. When established, the conclusion is best reported in jargon-free English, using a phrase such as "this result could arise by chance once in twenty trials". Equally useful is its quantification of the concept of "confidence", enabling an objective answer to be given to a question such as "how confident are we that the toxicity of a drug does not exceed a stipulated level?" (or that a structure can bear at least a stipulated load, or that a variable lies within a stipulated confidence interval). Among the most powerful of the techniques of statistical analysis is the use of "correlation" and the closely related technique of "regression" to explore a relationship when the available evidence is subject to errors that are attributable solely to chance. It could be used, for example, to estimate the average constant of proportionality in a (hypothesised) linear relation between IQs and examination marks. In a more complex case it is used to explore the relationship between household income and household saving, taking account of other factors believed to affect that relationship. But, besides being very powerful, regression methods are especially prone to the production of false information. Success in interpreting the data requires the tackling of problems that are often more difficult than the statistical manipulation[14].
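The IQ-and-examination-marks example above can be sketched as an ordinary least-squares calculation. The figures below are invented for illustration (illustrative Python; a real investigation would also examine the residuals and the significance of the fitted slope before drawing any conclusion):

```python
# Ordinary least squares for a hypothesised linear relation y = slope * x + intercept:
# the slope is the covariance of x and y divided by the variance of x.

def least_squares(xs, ys):
    """Return the least-squares slope and intercept for paired observations."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

iqs = [95, 100, 105, 110, 115, 120]    # invented IQ scores
marks = [52, 55, 61, 64, 70, 75]       # invented examination marks
slope, intercept = least_squares(iqs, marks)
print(f"slope {slope:.3f}, intercept {intercept:.1f}")
```

The fitted slope is the "average constant of proportionality" referred to above; the ease of producing such a number, regardless of whether the hypothesised relation is genuine, is precisely why regression is so prone to the production of false information.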

Applications

Notes and references

  1. In a controlled experiment, a "control group" that is in all relevant respects similar to the experimental group receives a "placebo", while the experimental group receives the treatment that is on trial.
  2. Valerie Easton and John McCall: The Design of Experiments and ANOVA, STEPS 1997
  3. Anova Manova
  4. Econometrics 2005
  5. Valerie Easton and John McCall: Sampling, STEPS 1997
  6. Robin Levine-Wissing and David Thiel: Confidence Intervals, AP Statistics Tutorial
  7. See the article on Gross domestic product
  8. Daniel Kahneman and Amos Tversky: "Prospect Theory", Econometrica, Vol 47, No 2, 1979
  9. Massimo Piattelli-Palmarini: Inevitable Illusions: How Mistakes of Reason Rule Our Minds, Chapter 4, "Probability Illusions", Wiley, 1994
  10. Michael Eysenck and Mark Keane: Cognitive Psychology, page 483 [1]
  11. Among the factors held to have contributed to the Crash of 2008 was the use of judgement-free statistical risk assessments in a situation containing deterministic risks, such as the bursting of a real estate bubble [2]
  12. M J Moroney: Facts from Figures, page 218, Penguin 1951
  13. For example the procedure of the tutorial Tests of Significance and its following chapters, in Stat Trek Statistics Tutorials [3]
  14. M J Moroney: Facts from Figures, page 303, Penguin 1951