Statisticians have had much to celebrate recently. 2013 marked the 300th anniversary of Jacob Bernoulli's Ars Conjectandi, which used probability theory to explore how the properties of statistics change as more observations are taken. It was also the 250th anniversary of Thomas Bayes' essay on how humans can sequentially learn from experience, steadily updating their beliefs as more data become available (1). And it was the International Year of Statistics (2). Now that the bunting has been taken down, it is a good time to take stock of recent developments in statistical science and examine its role in the age of Big Data.
Much of the enthusiasm for statistics hangs on the ever-increasing availability of large data sets, particularly when something has to be ranked or classified. Such situations arise, for example, when deciding which book to recommend, working out where your arm is when you practice golf swings in front of a games console, or (if you're a security agency) deciding whose private e-mail to read first. Purely data-based approaches, under the banner of machine learning, have been highly successful in speech recognition, real-time interpretation of moving images, and online translation.
The future lies in uncertainty
D. J. Spiegelhalter
Science, 18 July 2014, Vol. 345, No. 6194, pp. 264-265