Difference between revisions of "Statistics"

From Nordan Symposia
[[Image:lighterstill.jpg]]

[[Image:Attractor.png|right|frame|<center>Attractor</center>]]
'''Statistics''' is a [[Mathematics|mathematical science]] pertaining to the collection, analysis, interpretation or explanation, and presentation of [[data]]. It is applicable to a wide variety of [[academic discipline]]s, from the physical and social [[science]]s to the [[humanities]]. Statistics are also used for making informed decisions.
==Origin==
German ''Statistik'' [[study]] of [[political]] [[facts]] and figures, from New Latin ''statisticus'' of politics, from Latin ''status'' [[state]]
*[http://en.wikipedia.org/wiki/18th_century 1770]
==Definition==
*1:  a branch of [[mathematics]] dealing with the collection, [[analysis]], [[interpretation]], and presentation of masses of numerical [[data]]
*2:  a collection of [[quantitative]] data
==Description==
'''Statistics''' is described as a [[mathematical]] body of [[science]] that pertains to the collection, analysis, interpretation or explanation, and presentation of [[data]], or as a branch of mathematics concerned with collecting and interpreting data. Because of its [[empirical]] roots and its focus on [[applications]], statistics is typically considered a distinct mathematical science rather than a branch of mathematics. Some tasks a statistician undertakes are less mathematical: for example, ensuring that data collection is carried out in a way that produces valid [[conclusions]], coding data, or reporting results in ways [[comprehensible]] to those who must use them.
  
Statisticians improve data [[quality]] by developing specific [[experiment]] designs and survey samples. Statistics itself also provides tools for [[prediction]] and [[forecasting]] through the use of data and [http://en.wikipedia.org/wiki/Statistical_model statistical models]. Statistics is applicable to a wide variety of [http://en.wikipedia.org/wiki/Academic_discipline academic disciplines], including natural and [[social sciences]], [[government]], and [[business]]. Statistical consultants can help [[organizations]] and companies that don't have in-house expertise relevant to their particular questions.
  
The word '''''statistics''''' is also the plural of '''''[[statistic]]''''' (singular), which refers to the result of applying a statistical algorithm to a set of data, as in [[economic statistics]], [[crime statistics]], etc.
Statistical methods can summarize or describe a collection of [[data]]; this is called ''[http://en.wikipedia.org/wiki/Descriptive_statistics descriptive statistics]'', and it is particularly useful in communicating the results of [[experiments]] and [[research]]. In addition, data patterns may be modeled in a way that accounts for [[random]]ness and [[uncertainty]] in the observations.
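As a concrete sketch of descriptive statistics, the following Python snippet (standard library only, with made-up exam scores rather than data from the text) reduces a sample to a few numerical summaries:

```python
import statistics

# Hypothetical sample of exam scores (illustrative data)
scores = [72, 85, 90, 68, 77, 85, 92, 60]

summary = {
    "mean": statistics.mean(scores),      # arithmetic average
    "median": statistics.median(scores),  # middle value of the sorted data
    "stdev": statistics.stdev(scores),    # sample standard deviation
}
print(summary)
```

Three numbers now describe the whole collection; nothing here yet makes a claim beyond the sample itself.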
  
These models can be used to draw [[inferences]] about the [[process]] or [[population]] under study—a practice called [http://en.wikipedia.org/wiki/Inferential_statistics inferential statistics]. Inference is a vital element of scientific advance, since it provides a way to draw conclusions from data that are subject to random variation. As part of the [[scientific method]], the conclusions are then tested further against new observations; descriptive statistics and analysis of the new data tend to provide more information as to the [[truth]] of the [[propositions]] being investigated.

"Applied statistics" comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns both the logical [[arguments]] underlying the [[justification]] of approaches to [http://en.wikipedia.org/wiki/Statistical_inference statistical inference], as well as encompassing ''[http://en.wikipedia.org/wiki/Mathematical_statistics mathematical statistics]''. Mathematical statistics includes not only the [[manipulation]] of [[probability]] distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of ''[http://en.wikipedia.org/wiki/Computational_statistics computational statistics]'' and the design of experiments.

Statistics is closely related to [http://en.wikipedia.org/wiki/Probability_theory probability theory], with which it is often grouped. The [[difference]] is, roughly, that probability theory starts from the given [[parameters]] of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population. Statistics has many ties to [http://en.wikipedia.org/wiki/Machine_learning machine learning] and [http://en.wikipedia.org/wiki/Data_mining data mining].[http://en.wikipedia.org/wiki/Statistics]

==History==

===Etymology===

The word ''statistics'' ultimately derives from the [[New Latin]] term ''statisticum collegium'' ("council of state") and the [[Italian language|Italian]] word ''statista'' ("[[statesman]]" or "[[politician]]"). The [[German language|German]] ''Statistik'', first introduced by [[Gottfried Achenwall]] (1749), originally designated the analysis of [[data]] about the [[state]], signifying the "science of state" (then called ''political arithmetic'' in English). It acquired the meaning of the collection and classification of data generally in the early [[19th century]]. It was introduced into English by [[Sir John Sinclair]].

Thus, the original principal purpose of ''Statistik'' was the collection of data to be used by governmental and (often centralized) administrative bodies. The collection of data about states and localities continues, largely through [[List of national and international statistical services|national and international statistical services]]. In particular, [[censuses]] provide regular information about the [[population]].
  
===Origins in probability===
 
 
The mathematical methods of statistics emerged from [[probability theory]], which can be dated to the correspondence of [[Pierre de Fermat]] and [[Blaise Pascal]] (1654). [[Christiaan Huygens]] (1657) gave the earliest known scientific treatment of the subject. [[Jakob Bernoulli]]'s ''[[Ars Conjectandi]]'' (posthumous, 1713) and [[Abraham de Moivre]]'s ''[[Doctrine of Chances]]'' (1718) treated the subject as a branch of mathematics.<ref> See [[Ian Hacking]]'s ''The Emergence of Probability'' for a history of the early development of the very concept of mathematical probability. </ref> In the modern era, the work of [[Kolmogorov]] has been instrumental in formulating the fundamental model of probability theory, which is used throughout statistics.
 
 
The [[theory of errors]] may be traced back to [[Roger Cotes]]' ''Opera Miscellanea'' (posthumous, 1722), but a memoir prepared by [[Thomas Simpson]] in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the [[axiom]]s that positive and negative errors are equally probable, and that there are certain assignable limits within which all errors may be supposed to fall; continuous errors are discussed and a probability curve is given.
 
 
[[Pierre-Simon Laplace]] (1774) made the first attempt to deduce a rule for the combination of observations from the principles of the theory of probabilities. He represented the law of probability of errors by a curve. He deduced a formula for the mean of three observations. He also gave (1781) a formula for the law of facility of error (a term due to [[Lagrange]], 1774), but one which led to unmanageable equations. [[Daniel Bernoulli]] (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
 
 
The [[method of least squares]], which was used to minimize errors in data [[measurement]], was published independently by [[Adrien-Marie Legendre]] (1805), [[Robert Adrain]] (1808), and [[Carl Friedrich Gauss]] (1809). Gauss had used the method in his famous 1801 prediction of the location of the [[dwarf planet]] [[Ceres (dwarf planet)|Ceres]]. Further proofs were given by Laplace (1810, 1812), Gauss (1823), [[James Ivory (mathematician)|James Ivory]] (1825, 1826), Hagen (1837), [[Friedrich Bessel]] (1838), [[W. F. Donkin]] (1844, 1856), [[John Herschel]] (1850), and [[Morgan Crofton]] (1870).
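In modern terms, the method of least squares for a single predictor has a simple closed form. The following sketch (illustrative data, roughly following y = 2x) fits a line by minimizing the sum of squared errors:

```python
# Closed-form ordinary least squares for a line y = a + b*x
# (single predictor; made-up data roughly following y = 2x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Slope: covariance of x and y divided by the variance of x.
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
a = mean_y - b * mean_x  # intercept
print(a, b)
```

The fitted slope comes out very close to 2, as the construction of the data suggests it should.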
 
 
Other contributors were Ellis (1844), [[Augustus De Morgan|De Morgan]] (1864), [[Glaisher]] (1872), and [[Giovanni Schiaparelli]] (1875). Peters's (1856) formula for <math>r</math>, the probable error of a single observation, is well known.
 
 
In the [[nineteenth century]] authors on the general theory included Laplace, [[Sylvestre Lacroix]] (1816), Littrow (1833), [[Richard Dedekind]] (1860), Helmert (1872), [[Hermann Laurent]] (1873), Liagre, Didion, and [[Karl Pearson]]. [[Augustus De Morgan]] and [[George Boole]] improved the exposition of the theory.
 
 
[[Adolphe Quetelet]] (1796-1874), another important founder of statistics, introduced the notion of the "average man" (''l'homme moyen'') as a means of understanding complex social phenomena such as [[crime rates]], [[marriage rates]], or [[suicide rates]].
 
 
===Statistics today===
 
 
During the 20th century, the creation of precise instruments for [[public health]] concerns ([[epidemiology]], [[biostatistics]], etc.) and economic and social purposes ([[unemployment]] rate, [[econometrics]], etc.) necessitated substantial advances in statistical practices: the Western [[welfare state]]s developed after [[World War I]] had to possess specific knowledge of the "population".
 
 
Today the use of statistics has broadened far beyond its origins as a service to a state or government. Individuals and organizations use statistics to understand data and make informed decisions throughout the natural and social sciences, medicine, business, and other areas.
 
 
Statistics is generally regarded not as a subfield of mathematics but rather as a distinct, albeit allied, field. Many [[university|universities]] maintain separate mathematics and statistics [[academic department|department]]s. Statistics is also taught in departments as diverse as [[psychology]], [[education]], and [[public health]].
 
 
===Important contributors to statistics===
 
 
* [[Thomas Bayes]]
 
* [[Pafnuty Chebyshev]]
 
* [[Sir David Cox (statistician)|Sir David Cox]]
 
* [[Gertrude Mary Cox|Gertrude Cox]]
 
* [[George Dantzig]]
 
* [[W. Edwards Deming]]
 
* [[Bruno de Finetti]]
 
* [[Ronald Fisher|Sir Ronald Fisher]]
 
 
* [[Francis Galton|Sir Francis Galton]]
 
* [[Carl Friedrich Gauss]]
 
* [[William Sealey Gosset]] ("Student")
 
* [[Andrey Kolmogorov]]
 
* [[Aleksandr Lyapunov]]
 
* [[Abraham De Moivre]]
 
* [[Isaac Newton]]
 
* [[Florence Nightingale]]
 
 
* [[Blaise Pascal]]
 
* [[Karl Pearson]]
 
* [[Adolphe Quetelet]]
 
* [[Walter A. Shewhart]]
 
* [[Charles Spearman]]
 
* [[John Tukey]]
 
* [[C. R. Rao]]
 
* [[Rene Descartes]]
 
* [[George E. P. Box]]
 
 
 
==Conceptual overview==
 
In applying statistics to a scientific, industrial, or societal problem, one begins with a process or [[statistical population|population]] to be studied. This might be a population of people in a country, of crystal grains in a rock, or of goods manufactured by a particular factory during a given period. It may instead be a process observed at various times; data collected about this kind of "population" constitute what is called a [[time series]].
 
 
For practical reasons, rather than compiling data about an entire population, one usually instead studies a chosen subset of the population, called a [[sampling (statistics)|sample]]. Data are collected about the sample in an observational or [[experiment]]al setting. The data are then subjected to statistical analysis, which serves two related purposes: description and inference.
 
 
*[[Descriptive statistics]] can be used to summarize the data, either numerically or graphically, to describe the sample. Basic examples of numerical descriptors include the [[mean]] and [[standard deviation]]. Graphical summarizations include various kinds of charts and graphs.
 
*[[Inferential statistics]] is used to model patterns in the data, accounting for randomness and drawing inferences about the larger population. These inferences may take the form of answers to yes/no questions ([[hypothesis testing]]), estimates of numerical characteristics ([[estimation]]), descriptions of association ([[correlation]]), or modeling of relationships ([[regression analysis|regression]]). Other [[mathematical model|modeling]] techniques include [[ANOVA]], [[time series]], and [[data mining]].
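To make the inferential half concrete, here is a minimal Python sketch of one common inference: an approximate 95% confidence interval for a population mean, using the normal quantile 1.96 (a simplification; the data are hypothetical):

```python
import statistics
from math import sqrt

# Hypothetical sample of measurements.
sample = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7, 5.0, 4.9]

n = len(sample)
mean = statistics.mean(sample)
se = statistics.stdev(sample) / sqrt(n)  # standard error of the mean
# Approximate 95% confidence interval (normal approximation).
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(mean, ci)
```

The interval is a statement about the unobserved population mean, not just about the sample, which is what distinguishes this from a descriptive summary.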
 
 
The concept of correlation is particularly noteworthy. Statistical analysis of a [[data set]] may reveal that two variables (that is, two properties of the population under consideration) tend to vary together, as if they are connected. For example, a study of annual income and age of death among people might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated. However, one cannot immediately infer the existence of a causal relationship between the two variables; see [[correlation does not imply causation]]. The correlated phenomena could be caused by a third, previously unconsidered phenomenon, called a [[lurking variable]].
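As an illustration with made-up numbers, the Pearson correlation coefficient quantifies such co-variation; note that it measures linear association only and, as the text stresses, says nothing about causation:

```python
from math import sqrt

# Pearson correlation coefficient from its definition
# (hypothetical paired observations of two variables).
xs = [20, 30, 40, 50, 60]
ys = [65, 70, 72, 75, 78]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / sqrt(
    sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
)
print(r)  # close to 1: strong positive association, not proof of causation
```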
 
 
If the sample is representative of the population, then inferences and conclusions made from the sample can be extended to the population as a whole. A major problem lies in determining the extent to which the chosen sample is representative. Statistics offers methods to estimate and correct for randomness in the sample and in the data collection procedure, as well as methods for designing robust experiments in the first place; see [[experimental design]].
 
 
The fundamental mathematical concept employed in understanding such randomness is [[probability]]. [[Mathematical statistics]] (also called [[statistical theory]]) is the branch of [[applied mathematics]] that uses [[probability theory]] and [[mathematical analysis|analysis]] to examine the theoretical basis of statistics.
 
 
The use of any statistical method is valid only when the system or population under consideration satisfies the basic mathematical assumptions of the method. [[Misuse of statistics]] can produce subtle but serious errors in description and interpretation &mdash; subtle in that even experienced professionals sometimes make such errors, and serious in that they may affect social policy, medical practice and the reliability of structures such as bridges and nuclear power plants.
 
 
Even when statistics is correctly applied, the results can be difficult to interpret for a non-expert. For example, the [[statistical significance]] of a trend in the data &mdash; which measures the extent to which the trend could be caused by random variation in the sample &mdash; may not agree with one's intuitive sense of its significance. The set of basic statistical skills (and skepticism) needed by people to deal with information in their everyday lives is referred to as [[statistical literacy]].
 
 
==Statistical methods==
 
===Experimental and observational studies===
 
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or [[independent variable]]s on response or [[dependent variable]]s.  There are two major types of causal statistical studies, experimental studies and observational studies.  In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed.  The difference between the two types is in how the study is actually conducted. Each can be very effective.
 
 
An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation may have modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead data are gathered and correlations between predictors and the response are investigated.
 
 
An example of an experimental study is the famous [[Hawthorne studies]], which attempted to test changes to the working environment at the Hawthorne plant of the Western Electric Company.  The researchers were interested in whether increased illumination would increase the productivity of the [[assembly line]] workers.  The researchers first measured productivity in the plant, then modified the illumination in an area of the plant to see if changes in illumination would affect productivity.  As it turned out, productivity improved under all the experimental conditions (see [[Hawthorne effect]]).  However, the study is today heavily criticized for errors in experimental procedures, specifically the lack of a [[control group]] and [[double-blind|blindedness]].
 
 
An example of an observational study is one that explores the correlation between smoking and lung cancer.  This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis.  In this case, the researchers would collect observations of both smokers and non-smokers and then look at the number of cases of lung cancer in each group.
 
 
The basic steps for an experiment are to:
 
# [[planning statistical research|plan the research]] including determining information sources, research subject selection, and [[ethics|ethical]] considerations for the proposed research and method,
 
# [[Design of experiments|design the experiment]] concentrating on the system model and the interaction of independent and dependent variables,
 
# [[summary statistics|summarize a collection of observations]] to feature their commonality by suppressing details ([[descriptive statistics]]),
 
# reach consensus about what [[statistical inference|the observations tell us]] about the world we observe ([[statistical inference]]),
 
# document and present the results of the study.
 
 
===Levels of measurement===
 
:''See: [[Levels of measurement|Stanley Stevens' "Scales of measurement" (1946): nominal, ordinal, interval, ratio]]''
 
There are four types of measurements or measurement scales used in statistics.  The four types or [[level of measurement|levels of measurement]] (nominal, ordinal, interval, and ratio) have different degrees of usefulness in statistical [[research]].  Ratio measurements, where both a zero value and distances between different measurements are defined, provide the greatest flexibility in statistical methods that can be used for analysing the data.  Interval measurements have meaningful distances between measurements but no meaningful zero value (such as IQ measurements or temperature measurements in [[Kelvin]]).  Ordinal measurements have imprecise differences between consecutive values but a meaningful order to those values. Nominal measurements have no meaningful rank order among values.
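A small Python sketch of why the level of measurement matters: for ordinal data the median is meaningful, while the mean of arbitrary category codes is not (the survey responses and coding below are hypothetical):

```python
import statistics

# Ordinal survey responses (hypothetical): the order of categories is
# meaningful, but the "distance" between them is not.
order = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}
responses = ["poor", "fair", "good", "good", "excellent"]

coded = [order[r] for r in responses]
median_code = statistics.median(coded)  # valid: depends only on order
# statistics.mean(coded) would treat the arbitrary spacing between
# category codes as real, which is not justified for ordinal data.
print(median_code)
```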
 
 
===Statistical techniques===
 
Some well-known statistical [[Statistical hypothesis testing|tests]] and [[procedure]]s for [[research]] [[observation]]s are:
 
* [[Student's t-test]]
 
* [[chi-square test]]
 
* [[Analysis of variance]] (ANOVA)
 
* [[Mann-Whitney U]]
 
* [[Regression analysis]]
 
* [[Factor Analysis]]
 
* [[Correlation]]
 
* [[Pearson product-moment correlation coefficient]]
 
* [[Spearman's rank correlation coefficient]]
 
 
==Specialized disciplines==
 
Some fields of inquiry use applied statistics so extensively that they have [[specialized terminology]]. These disciplines include:
 
 
* [[Actuarial science]]
 
* [[Applied Information Economics]]
 
* [[Biostatistics]]
 
* [[Business statistics]]
 
* [[Data mining]] (applying statistics and [[pattern recognition]] to discover knowledge from data)
 
* [[Economic statistics]] (Econometrics)
 
* [[Energy statistics]]
 
* [[Engineering statistics]]
 
* [[Epidemiology]]
 
* [[Geography]] and [[Geographic Information Systems]], more specifically in [[Spatial analysis]]
 
* [[Demography]]
 
* [[Psychological statistics]]
 
* [[Quality]]
 
* [[Social statistics]] (for all the ''social'' sciences)
 
* [[Statistical literacy]]
 
* [[Statistical survey]]s
 
* [[Process analysis]] and [[chemometrics]] (for analysis of data from [[analytical chemistry]] and [[chemical engineering]])
 
* [[Reliability engineering]]
 
* [[Image processing]]
 
* Statistics in various sports, particularly [[Baseball statistics|baseball]] and [[Cricket statistics|cricket]]
 
 
Statistics forms a key tool in business and manufacturing as well.  It is used to understand variability in measurement systems, to control processes (as in [[statistical process control]] or SPC), to summarize data, and to make data-driven decisions.  In these roles it is a key tool, and perhaps the only reliable one.
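A simplified sketch of the SPC idea follows. Real individuals charts typically estimate spread from moving ranges; here the sample standard deviation is used for brevity, and the measurements are made up:

```python
import statistics

# Hypothetical process measurements.
measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.3, 10.0, 9.9]

center = statistics.mean(measurements)
sigma = statistics.stdev(measurements)
# Control limits at three standard deviations from the center line.
ucl, lcl = center + 3 * sigma, center - 3 * sigma

out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(center, (lcl, ucl), out_of_control)
```

Points falling outside the control limits would signal that the process may have drifted and warrants investigation.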
 
 
==Statistical computing==
 
The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of [[linear model]]s, but powerful computers, coupled with suitable numerical [[algorithms]], caused a resurgence of interest in [[nonlinear regression|nonlinear models]] (especially [[neural networks]] and [[decision tree]]s) and the creation of new types, such as [[generalized linear model|generalised linear model]]s and [[multilevel model]]s.
 
 
Increased computing power has also led to the growing popularity of computationally-intensive methods based on [[resampling (statistics)|resampling]], such as permutation tests and the [[bootstrapping (statistics)|bootstrap]], while techniques such as [[Gibbs sampling]] have made Bayesian methods more feasible. The computer revolution has implications for the future of statistics, with a new emphasis on "experimental" and "empirical" statistics.  A large number of both general and special purpose [[List of statistical packages|statistical packages]] are now available to practitioners.
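A minimal bootstrap sketch in Python (with illustrative data): resampling the observed sample with replacement approximates the sampling distribution of a statistic without distributional assumptions:

```python
import random
import statistics

random.seed(42)  # reproducible resampling
data = [3.1, 2.7, 4.5, 3.8, 2.9, 5.0, 3.3, 4.1]  # hypothetical sample

# Draw 2000 bootstrap resamples (with replacement), recording the median.
boot_medians = sorted(
    statistics.median(random.choices(data, k=len(data))) for _ in range(2000)
)
# Percentile 95% interval: the 2.5th and 97.5th percentiles.
ci = (boot_medians[49], boot_medians[1949])
print(statistics.median(data), ci)
```

This is exactly the kind of computationally intensive procedure that was impractical before cheap computing power.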
 
 
== Misuse ==
 
 
There is a general perception that statistical knowledge is all too frequently intentionally [[Misuse of statistics|misused]], by finding ways to interpret the data that are favorable to the presenter. A famous quote, variously attributed, but thought to be from [[Benjamin Disraeli, 1st Earl of Beaconsfield|Benjamin Disraeli]]<ref>cf. "Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists" by Joel Best. Professor Best attributes it to Disraeli, rather than [[Mark Twain]] or others. </ref> is, "There are three types of lies - lies, damn lies, and statistics." The well-known book ''How to Lie with Statistics'' by [[Darrell Huff]] discusses many cases of deceptive uses of statistics, focusing on misleading graphs. By choosing (or rejecting, or modifying) a certain sample, results can be manipulated; throwing out [[outliers]] is one means of doing so. This may be the result of outright fraud or of subtle and unintentional bias on the part of the researcher. Thus, Harvard President [[Lawrence Lowell]] wrote in 1909 that statistics, "like veal pies, are good if you know the person that made them, and are sure of the ingredients."
 
 
As further studies contradict previously announced results, people may become wary of trusting such studies. One might read a study that says (for example) "doing X reduces high blood pressure", followed by a study that says "doing X does not affect high blood pressure", followed by a study that says "doing X actually worsens high blood pressure". Often the studies were conducted on different groups with different protocols, or a small-sample study that promised intriguing results has not held up to further scrutiny in a large-sample study. However, many readers may not have noticed these distinctions, or the media may have oversimplified this vital contextual information, and the public's distrust of statistics is thereby increased.
 
 
However, deeper criticisms come from the fact that the hypothesis testing approach, widely used and in many cases required by law or regulation, forces one hypothesis to be 'favored' (the [[null hypothesis]]), and can also seem to exaggerate the importance of minor differences in large studies. A difference that is highly statistically significant can still be of no practical significance.
 
 
:''See also [[Hypothesis test#Criticism|criticism of hypothesis testing]] and [[Null hypothesis#Controversy|controversy over the null hypothesis]].''
 
 
In the fields of psychology and medicine, especially with regard to the approval of new drug treatments by the [[Food and Drug Administration]], criticism of the hypothesis testing approach has increased in recent years. One response has been a greater emphasis on the [[p-value|''p''-value]] over simply reporting whether a hypothesis was rejected at the given level of significance. Here again, however, this summarises the evidence for an effect but not the size of the effect. One increasingly common approach is to report [[confidence interval]]s instead, since these indicate both the size of the effect and the uncertainty surrounding it. This aids in interpreting the results, as the confidence interval for a given parameter simultaneously indicates both statistical significance and effect size.
 
 
Note that both the ''p''-value and confidence interval approaches are based on the same fundamental calculations as those entering into the corresponding hypothesis test. The results are stated in a more detailed format, rather than the yes-or-no finality of the hypothesis test, but use the same underlying statistical methodology.
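That shared basis can be shown in a few lines of Python (normal approximation, made-up data): the same z statistic yields both the two-sided p-value and the 95% confidence interval:

```python
import statistics
from math import erf, sqrt

# Two-sided z test of H0: mu = 5.0 (normal approximation, hypothetical data).
sample = [5.3, 5.1, 5.4, 5.6, 5.2, 5.5, 5.0, 5.3]
mean = statistics.mean(sample)
se = statistics.stdev(sample) / sqrt(len(sample))

z = (mean - 5.0) / se                      # the shared test statistic
p_value = 1 - erf(abs(z) / sqrt(2))        # two-sided tail area under H0
ci = (mean - 1.96 * se, mean + 1.96 * se)  # 95% interval on the same scale
print(p_value, ci)
```

Here the p-value falls below 0.05 exactly when the interval excludes the null value 5.0: the same calculation, reported in two formats of differing detail.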
 
 
A truly different approach is to use [[Bayesian inference|Bayesian methods]]. This approach has been criticized as well, however. The strong desires to see good drugs approved and to see harmful or useless drugs restricted remain in tension ([[type I and type II errors]] in the language of hypothesis testing).
 
 
In his book ''Statistics As Principled Argument'', [[Robert P. Abelson]] articulates the position that statistics serves as a standardized means of settling disputes between scientists who could otherwise each argue the merits of their own positions ''[[ad infinitum]]''. From this point of view, statistics is principally a form of rhetoric. This can be taken as a positive or a negative, but as with any means of settling a dispute, statistical methods can succeed only as long as both sides accept the approach and agree on the method to be used.
 
 
==See also==
 
 
* [[List of basic statistics topics]]
 
* [[List of statistical topics]]
 
*[[Analysis of variance]] (ANOVA)
 
*[[CHAID]]
 
*[[Central limit theorem]]
 
*[[Confidence interval]]
 
*[[Correlation does not imply causation]]
 
*[[Data]]
 
*[[Data mining]]
 
*[[Extreme value theory]]
 
{{col-break}}
 
*[[Forecasting]]
 
*[[Instrumental variables estimation]]
 
*[[List of academic statistical associations]]
 
*[[List of national and international statistical services]]
 
*[[List of publications in statistics]]
 
*[[List of statisticians]]
 
*[[Machine learning]]
 
*[[Multivariate statistics]]
 
*[[Prediction interval]]
 
*[[Predictive analytics]]
 
{{col-break}}
 
*[[Regression analysis]]
 
*[[Resampling (statistics)]]
 
*[[SOCR]]
 
*[[Statistical phenomena]]
 
*[[Statistician]]
 
*[[Structural equation modeling]]
 
*[[Trend estimation]]
 
*[[Scientific visualization]]
 
*[[Common mode failure]]
 
 
 
==External links==
 
===General sites and organizations===
 
* [http://lib.stat.cmu.edu/ Statlib: Data, Software and News from the Statistics Community (Carnegie Mellon)]
 
* [http://isi.cbs.nl/ International Statistical Institute]
 
* [http://www.mathcs.carleton.edu/probweb/probweb.html Probability Web]
 
* [http://www.statsci.org StatSci.org: Statistical Science Web]
 
* [http://www.conceptstew.co.uk/PAGES/freeresources.html Statistics Glossary - and other teaching and learning resources]
 
* [http://www.census.gov/main/www/stat_int.html Statistical Agencies (International)]
 
* [http://www.ats.ucla.edu/stat/ Statistical resources archive at UCLA]
 
* [http://statpages.org/ StatPages.net (statistical calculations, free software, etc.)]
 
* [http://freestatistics.altervista.org/ Free Statistics (free and open source software, data and tutorials)]
 
* [http://www.amstat.org/ American Statistical Association]
 
 
===Online courses and textbooks===
 
{{wikibooks}}
 
{{Wikiversity}}
 
* [http://www.stat.ucla.edu/~dinov/courses_students.html A variety of class notes and educational materials on probability and statistics]
 
* [http://www.statsoft.com/textbook/stathome.html Electronic Statistics Textbook (StatSoft,Inc., The Statistics Homepage)]
 
* [http://davidmlane.com/hyperstat/index.html HyperStat Online:  An Introductory Statistics Textbook and Online Tutorial for Help in Statistics Courses (David Lane)]
 
* [http://www.micquality.com/downloads/ref-primer.pdf Primer in Statistics Reference (MiC Quality)] (PDF)
 
* [http://www.richland.cc.il.us/james/lecture/m170/ Statistics: Lecture Notes (from a professor at Richland Community College)]
 
* [http://www2.chass.ncsu.edu/garson/pa765/statnote.htm Statnotes: Topics in Multivariate Analysis, by G. David Garson]
 
* [http://sportsci.org/resource/stats/ "A New View of Statistics" by Will G. Hopkins]
 
* [http://www.StatisticalPractice.com "The Little Handbook of Statistical Practice"] by [http://www.tufts.edu/~gdallal/ Dr. Gerard E. Dallal], [[Tufts University]].
 
* [http://www.itl.nist.gov/div898/handbook/ NIST Engineering Statistics Handbook]
 
 
===Other resources===
 
* [http://www.informath.org/StatDis.pdf Disputes in statistical analyses] a single-page review (PDF).
 
* [http://www.york.ac.uk/depts/maths/histstat Materials for the History of Statistics (Univ. of York)]
 
* [http://www.realityclock.com Reality Clock] - Statistics that reflect the issues facing society and the world today.
 
* [http://www.ericdigests.org/1993/marriage.htm Resampling: A Marriage of Computers and Statistics (ERIC Digests)]
 
* [http://www.ericdigests.org/2000-2/resources.htm Resources for Teaching and Learning about Probability and Statistics (ERIC Digests)]
 
* [http://www.csdassn.org/software_reports.cfm Software Reports (by the International Association for Statistical Computing)]
 
* [http://www.amstat.org/sections/sis/ Statistics in Sports (Section of the ASA)]
 
* [http://www.r-project.org/ The R Project for Statistical Computing] (free software for statistical computing)
 
  
 
[[Category: General Reference]]

[[Category: Statistics]]

Revision as of 11:44, 25 May 2014
