Graduate Student / Postdoc Seminar

Obvious Yet Overlooked Means for Testing Statistical Theories: An Informal Presentation

Speaker: Mark Tygert

Location: Warren Weaver Hall 1302

Date: Friday, March 12, 2010, 1 p.m.

Synopsis:

There are at least two obvious ways to check whether measured data disagree with a specified statistical theory. For definiteness, suppose that an experiment produces independent and identically distributed (i.i.d.) draws, and that we would like to test whether the draws do not arise from the probability density that the proposed statistical theory specifies.

One test is to estimate the cumulative distribution function (the indefinite integral of the probability density function) using the empirical data, and then to consider the discrepancy between the empirical distribution and the actual distribution predicted by the theory. Kolmogorov and Smirnov introduced such a test in the 1930s, along with a profound analysis of its significance levels, and by now there are many interesting variants.
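For concreteness, here is a minimal sketch of this first kind of test in Python, assuming NumPy and SciPy are available and taking a standard normal density as the hypothesized theory; the simulated data and the choice of null distribution are illustrative only and are not taken from the talk.

    import numpy as np
    from scipy import stats

    # Illustrative data: 200 draws to be tested against a standard normal theory.
    rng = np.random.default_rng(0)
    draws = rng.normal(size=200)

    # Kolmogorov-Smirnov statistic: the largest discrepancy between the empirical
    # distribution function of the draws and the CDF predicted by the theory.
    statistic, p_value = stats.kstest(draws, stats.norm.cdf)
    print(f"KS statistic = {statistic:.4f}, p-value = {p_value:.4f}")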

A different, seemingly equally obvious test is to check whether any of the given i.i.d. draws has a probability that is substantially smaller than expected under the specified probability density function. Such testing for generalized outliers is far from standard practice, even though it can be much more powerful in many circumstances (indeed, the indefinite integral in the definition of the cumulative distribution function used by the Kolmogorov-Smirnov test can smooth over variations in the probability density function). We will discuss two variations on this alternative, complementary test, and note when they are more effective than the classical approaches.
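One possible realization of such a generalized-outlier test, not necessarily either of the variants discussed in the talk, is to take the smallest hypothesized density among the draws as the test statistic and to calibrate it by Monte Carlo simulation under the null. The following sketch again assumes Python with NumPy and SciPy and a standard normal null; the function name, the injected outlier, and the simulation count are hypothetical choices made for illustration.

    import numpy as np
    from scipy import stats

    def min_density_p_value(draws, density, sampler, n_sim=10000, rng=None):
        """Monte Carlo p-value for the smallest hypothesized density among the draws."""
        rng = np.random.default_rng() if rng is None else rng
        observed = density(np.asarray(draws)).min()
        n = len(draws)
        # Null distribution of the statistic: simulate i.i.d. samples of size n
        # from the hypothesized density and record the minimum density each time.
        sims = np.array([density(sampler(n, rng)).min() for _ in range(n_sim)])
        # One-sided p-value: how often a genuine sample from the theory looks at
        # least as extreme as the observed data.
        return (np.count_nonzero(sims <= observed) + 1) / (n_sim + 1)

    # Example: 99 standard normal draws plus one gross outlier at 6.
    rng = np.random.default_rng(0)
    draws = np.concatenate([rng.normal(size=99), [6.0]])
    p = min_density_p_value(draws, stats.norm.pdf, lambda n, r: r.normal(size=n), rng=rng)
    print(f"Monte Carlo p-value = {p:.4f}")

A single draw landing far out in the tail makes the minimum density tiny and the p-value small, whereas the smoothing effect of the indefinite integral can leave the Kolmogorov-Smirnov statistic nearly unchanged by the same draw.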