In statistics, parametric tests are tests that make assumptions about the underlying distribution of data.
Common parametric tests include:
- One sample t-test
- Two sample t-test
- Paired samples t-test
- One-way ANOVA
In order for the results of parametric tests to be valid, the following four assumptions should be met:
1. Normality – Data in each group should be normally distributed.
2. Equal Variance – Data in each group should have approximately equal variance.
3. Independence – Data in each group should be randomly and independently sampled from the population.
4. No Outliers – There should be no extreme outliers.
This tutorial provides a brief explanation of each assumption along with how to check if each assumption is met.
Assumption 1: Normality
Parametric tests assume that each group is roughly normally distributed.
If the sample sizes of each group are small, we can use a formal statistical test such as the Shapiro-Wilk test to check for normality.
If the p-value of the test is less than a certain significance level, then the data is likely not normally distributed.
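As a quick illustration, here is a minimal Python sketch using SciPy's Shapiro-Wilk test; the group values are made-up example data and 0.05 is just one commonly used significance level.

```python
from scipy import stats

# Made-up example data for one group
group = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 13.9, 15.2, 14.4, 15.8]

# Shapiro-Wilk test for normality
stat, p_value = stats.shapiro(group)
print(f"Shapiro-Wilk statistic = {stat:.3f}, p-value = {p_value:.3f}")

# A small p-value suggests the data is likely not normally distributed
if p_value < 0.05:
    print("Likely not normally distributed")
else:
    print("No evidence against normality")
```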
However, if the sample sizes are large then it’s better to use a Q-Q plot to visually check if the data is normally distributed.
If the data points roughly fall along a straight diagonal line in a Q-Q plot, then the dataset likely follows a normal distribution.
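A minimal sketch of a Q-Q plot using SciPy and Matplotlib; the data here is simulated purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Simulate a large sample for illustration
rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=5, size=200)

# Plot sample quantiles against theoretical normal quantiles
stats.probplot(data, dist="norm", plot=plt)
plt.title("Q-Q plot")
plt.show()
```

If the points hug the diagonal reference line, the normality assumption is reasonable; systematic curvature suggests it is not.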
Assumption 2: Equal Variance
Parametric tests assume that the variance of each group is roughly equal.
We can visually check if this assumption is met by creating side-by-side boxplots for each group to see if the boxplots of each group are roughly the same size.
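For example, here is a minimal Matplotlib sketch of side-by-side boxplots; the group values are made up for illustration.

```python
import matplotlib.pyplot as plt

# Made-up example data for two groups
group1 = [22, 25, 27, 24, 26, 28, 23, 25]
group2 = [30, 31, 29, 33, 32, 30, 34, 31]

# Side-by-side boxplots make it easy to compare the spread of each group
plt.boxplot([group1, group2], labels=["Group 1", "Group 2"])
plt.ylabel("Value")
plt.title("Side-by-side boxplots")
plt.show()
```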
Another way to check if this assumption is met is to use the following rule of thumb: If the ratio of the largest variance to the smallest variance is less than 4, then we can assume the variances are approximately equal and use the two sample t-test.
For example, suppose group 1 has a variance of 24.5 and group 2 has a variance of 15.2. The ratio of the larger sample variance to the smaller sample variance would be calculated as:
Ratio: 24.5 / 15.2 = 1.61
Since this ratio is less than 4, we could assume that the variances between the groups are approximately equal.
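A short Python sketch of this rule of thumb, assuming made-up sample data; the variances here are computed from the data rather than taken from the example above.

```python
import numpy as np

# Made-up example data for two groups
group1 = np.array([12.1, 15.3, 18.2, 10.5, 14.8, 16.9])
group2 = np.array([13.0, 15.5, 14.2, 16.1, 12.8, 15.0])

# Sample variances (ddof=1 gives the unbiased sample variance)
var1 = np.var(group1, ddof=1)
var2 = np.var(group2, ddof=1)

# Rule of thumb: ratio of larger to smaller variance should be less than 4
ratio = max(var1, var2) / min(var1, var2)
print(f"Variance ratio = {ratio:.2f}")

if ratio < 4:
    print("Variances can be treated as approximately equal")
else:
    print("Variances look unequal")
```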
Assumption 3: Independence
Parametric tests assume that the observations in each group are independent of observations in every other group.
The easiest way to check this assumption is to verify that the data was collected using a probability sampling method – a method in which every member of the population has a known, non-zero probability of being selected for the sample.
Examples of probability sampling methods include:
- Simple random sampling
- Stratified random sampling
- Cluster random sampling
- Systematic random sampling
If one of these methods was used to collect the data, we can assume that this assumption is met.
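As a simple illustration of the first method in the list above, here is a minimal NumPy sketch of simple random sampling from a hypothetical population of ID numbers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population of 500 member IDs
population = np.arange(1, 501)

# Draw 50 members without replacement: every member has an equal chance of selection
sample = rng.choice(population, size=50, replace=False)
print(sample)
```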
Assumption 4: No Outliers
Parametric tests assume that there are no extreme outliers in any group that could adversely affect the results of the test.
One way to visually check for outliers is to create boxplots for each group to see if there are any clear outliers that are far larger or smaller than the rest of the observations in the group.
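Boxplots flag points beyond 1.5 times the interquartile range as potential outliers; here is a minimal Python sketch of that rule, using made-up data that contains one suspicious value.

```python
import numpy as np

# Made-up example data; 60 is a suspicious value
group = np.array([22, 24, 25, 23, 26, 27, 25, 24, 60])

# Compute the interquartile range (IQR)
q1, q3 = np.percentile(group, [25, 75])
iqr = q3 - q1

# Points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged as potential outliers
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = group[(group < lower) | (group > upper)]
print("Potential outliers:", outliers)
```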
Another way to detect outliers is to perform Grubbs’ Test, which is a formal statistical test that can be used to identify outliers.
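Since the two-sided Grubbs' Test can be computed directly from its textbook formula, here is a minimal Python sketch using SciPy's t-distribution; the data values are made up and the function name is only for illustration.

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """Two-sided Grubbs' test: returns (G, critical value, outlier flag)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)

    # Grubbs' statistic: largest absolute deviation from the mean, in standard deviations
    g = np.max(np.abs(x - mean)) / sd

    # Critical value based on the t-distribution with n - 2 degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))

    return g, g_crit, g > g_crit

# Made-up example data with one suspicious value
data = [21.1, 20.8, 22.3, 21.5, 20.9, 35.4, 21.7, 22.0]
g, g_crit, is_outlier = grubbs_test(data)
print(f"G = {g:.3f}, critical value = {g_crit:.3f}, outlier detected: {is_outlier}")
```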
The following tutorials explain how to perform Grubbs' Test in various statistical software packages:
- How to Conduct Grubbs’ Test in Excel
- How to Perform Grubbs’ Test in R
- How to Perform Grubbs’ Test in Python
Additional Resources
The following tutorials explain how to check the assumptions of other statistical tests.
How to Check Assumptions of Linear Regression
How to Check Assumptions of Logistic Regression
How to Check ANOVA Assumptions
How to Check Confidence Interval Assumptions