BONFERRONI, Carlo Emilio (1892-1960)

The problem of multiple tests of significance. In many kinds of research, particularly surveys and studies using batteries of tests, and with certain statistical methods (such as multiple regression and analysis of variance), there are frequently many (more than, say, 10) statistical hypotheses being tested (perhaps on different combinations of dependent and independent variables). In such studies, setting alpha = 0.05 does not provide sufficient protection against Type I error. As the number of separate hypothesis tests increases within a single study, the true alpha-level for the entire study (regardless of where you think you have set it) becomes inflated. An approximate formula for how much alpha increases as a function of the number of hypotheses you test (assuming the tests are independent) is

actual alpha-level = 1 - (1 - alpha you think you are setting)^j

where j is the number of significance tests being done or contemplated. Thus, if we think we are setting alpha at 0.05 in our study, but we are contemplating testing 20 statistical hypotheses (say 20 t-tests), our actual chance of claiming at least one significant result when there should not be one is 1 - (1 - 0.05)^20 = 0.64. That is, we really have a 64% chance of making a Type I error. This is quite a substantial chance! Moreover, you do not know which results are errors and which are not.

Bonferroni worked on this problem. He devised a simple way to help control the alpha-inflation problem. To have a study with an overall alpha-level equal to some pre-specified level (say 0.05), merely divide this level equally among the j tests of significance being contemplated. Thus, for each significance test, we use

new alpha-level = desired alpha for entire study / j
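The correction itself is a one-line computation; a sketch under the same assumptions as above (the function name bonferroni_alpha is illustrative):

```python
def bonferroni_alpha(overall_alpha, j):
    """Per-test significance level under the Bonferroni correction:
    divide the desired overall alpha equally among the j tests.
    """
    return overall_alpha / j

# With 20 tests and a desired overall alpha of 0.05,
# each individual test is run at 0.0025.
print(bonferroni_alpha(0.05, 20))  # 0.0025
```

Running the 20 tests at 0.0025 each keeps the family-wise error rate just under 0.05 (1 - (1 - 0.0025)^20 is about 0.049), which is why the correction is often described as conservative.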
