The power of a hypothesis test is the probability of not committing a Type II error, where a Type II error is the failure to reject the null hypothesis when the null hypothesis is false. Equivalently, power is the probability of correctly rejecting a false null hypothesis.
The effect size is the difference between the true value and the value specified in the null hypothesis.
Effect size = True value – Hypothesized value
For example, suppose the null hypothesis states that a population mean equals 100. A researcher might ask: what is the probability of rejecting the null hypothesis if the true population mean equals 90? In this example, the effect size is 90 – 100 = -10. The farther the true value lies from the hypothesized value, the more likely the null hypothesis is to be rejected, so the probability of committing a Type II error is reduced. With this in mind, we can summarize as follows.
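The power in this example can be computed directly. The text does not specify a particular test, so the sketch below assumes a two-sided one-sample z-test with a known population standard deviation; the values sigma = 20, n = 25, and alpha = 0.05 are illustrative assumptions, not from the text.

```python
from statistics import NormalDist

def z_test_power(mu_true, mu0, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample z-test of H0: mu = mu0,
    when the true population mean is mu_true (sigma assumed known)."""
    std_norm = NormalDist()
    se = sigma / n ** 0.5                       # standard error of the sample mean
    z_crit = std_norm.inv_cdf(1 - alpha / 2)    # two-sided critical value
    shift = (mu_true - mu0) / se                # standardized true effect
    # Reject H0 when |Z| > z_crit; under the true mean, Z ~ N(shift, 1)
    return std_norm.cdf(-z_crit - shift) + 1 - std_norm.cdf(z_crit - shift)

# The example from the text: hypothesized mean 100, true mean 90
# (sigma = 20 and n = 25 are illustrative assumptions)
print(round(z_test_power(mu_true=90, mu0=100, sigma=20, n=25), 3))
```

Under these assumptions the power works out to roughly 0.7, so the probability of a Type II error is roughly 0.3.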
Factors That Affect Power
The power of a hypothesis test is affected by three factors.
- Sample size (n). Other things being equal, the greater the sample size, the greater the power of the test, because larger samples yield more precise estimates of the parameter in question.
- Significance level (α). The higher the significance level, the higher the power of the test. Increasing the significance level shrinks the region of acceptance, so you are more likely to reject the null hypothesis. This means you are less likely to fail to reject the null hypothesis when it is false; i.e., less likely to make a Type II error. Hence, the power of the test is increased.
- The “true” value of the parameter being tested. The greater the difference between the “true” value of a parameter and the value specified in the null hypothesis, the greater the power of the test. That is, the greater the effect size, the greater the power of the test.
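All three factors can be checked empirically. The Monte Carlo sketch below estimates power by repeatedly drawing samples from the true distribution and counting how often a two-sided z-test rejects the null hypothesis; the values sigma = 20 and the specific choices of n, alpha, and true mean are illustrative assumptions, varied one at a time.

```python
import random
from statistics import NormalDist

def simulated_power(mu_true, mu0, sigma, n, alpha=0.05, trials=5000, seed=1):
    """Estimate the power of a two-sided one-sample z-test by simulation:
    sample from the true distribution and count rejections of H0."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(trials):
        sample_mean = sum(rng.gauss(mu_true, sigma) for _ in range(n)) / n
        z = (sample_mean - mu0) / (sigma / n ** 0.5)
        if abs(z) > z_crit:
            rejections += 1
    return rejections / trials

# Vary each factor in isolation (sigma = 20 is an illustrative assumption):
print(simulated_power(95, 100, 20, n=25),
      simulated_power(95, 100, 20, n=100))            # larger n -> more power
print(simulated_power(95, 100, 20, 100, alpha=0.01),
      simulated_power(95, 100, 20, 100, alpha=0.10))  # larger alpha -> more power
print(simulated_power(98, 100, 20, 100),
      simulated_power(90, 100, 20, 100))              # larger effect -> more power
```

Each printed pair should show the second estimate exceeding the first, matching the three factors listed above.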
In addition, the probability of committing a Type II error increases as the probability of committing a Type I error decreases. For a fixed sample size, it is impossible to simultaneously decrease the probabilities of both a Type I error and a Type II error.
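This tradeoff can be made concrete with the same two-sided z-test setup. The sketch below assumes a true mean of 95, a hypothesized mean of 100, sigma = 20, and n = 100 (all illustrative, not from the text), and shows that as alpha shrinks, beta (the probability of a Type II error) grows.

```python
from statistics import NormalDist

# Type I / Type II tradeoff for a two-sided z-test of H0: mu = 100
# (true mean 95, sigma = 20, n = 100 are illustrative assumptions)
std = NormalDist()
se = 20 / 100 ** 0.5          # standard error = 2
shift = (95 - 100) / se       # standardized true effect = -2.5
for alpha in (0.10, 0.05, 0.01):
    z_crit = std.inv_cdf(1 - alpha / 2)
    power = std.cdf(-z_crit - shift) + 1 - std.cdf(z_crit - shift)
    beta = 1 - power          # probability of a Type II error
    print(f"alpha={alpha:.2f}  beta={beta:.3f}")
```

Each step down in alpha produces a step up in beta; only a larger sample size (or a larger effect) can reduce both at once.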