White’s test is used to determine if heteroscedasticity is present in a regression model.
Heteroscedasticity refers to the unequal scatter of residuals at different levels of a response variable in a regression model. It violates a key assumption of linear regression: that the residuals are equally scattered (i.e., have constant variance) at each level of the response variable.
This tutorial explains how to perform White’s test in R to determine whether or not heteroscedasticity is a problem in a given regression model.
Example: White’s Test in R
In this example we will fit a multiple linear regression model using the built-in R dataset mtcars.
Once we’ve fit the model, we’ll use the bptest function from the lmtest library to perform White’s test to determine if heteroscedasticity is present.
Step 1: Fit a regression model.
First, we will fit a regression model using mpg as the response variable and disp and hp as the two explanatory variables.
#load the dataset
data(mtcars)

#fit a regression model
model <- lm(mpg ~ disp + hp, data = mtcars)

#view model summary
summary(model)

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 30.735904   1.331566  23.083   <2e-16 ***
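Before running a formal test, it can help to inspect the residuals visually. The snippet below is a supplementary sketch (not one of the tutorial's numbered steps) that plots residuals against fitted values; a funnel or fan shape would suggest heteroscedasticity.

```r
#fit the regression model
model <- lm(mpg ~ disp + hp, data = mtcars)

#plot residuals vs. fitted values as an informal visual check
plot(fitted(model), resid(model),
     xlab = "Fitted values", ylab = "Residuals",
     main = "Residuals vs. Fitted")
abline(h = 0, lty = 2)
```

If the vertical spread of the points looks roughly constant across the x-axis, there is no obvious visual sign of heteroscedasticity, but a formal test like White's test gives a more objective answer.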
Step 2: Perform White’s test.
Next, we will use the following syntax to perform White’s test to determine if heteroscedasticity is present:
#load lmtest library
library(lmtest)

#perform White's test
bptest(model, ~ disp*hp + I(disp^2) + I(hp^2), data = mtcars)

	studentized Breusch-Pagan test

data:  model
BP = 7.0766, df = 5, p-value = 0.215
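Calling bptest with this formula is equivalent to running White's auxiliary regression by hand: regress the squared residuals on the predictors, their squares, and their cross product, then use n·R² from that regression as the test statistic. A minimal sketch of the manual calculation, assuming the same model as above:

```r
#fit the original model
model <- lm(mpg ~ disp + hp, data = mtcars)

#auxiliary regression: squared residuals on predictors, squares, and interaction
u2  <- resid(model)^2
aux <- lm(u2 ~ disp*hp + I(disp^2) + I(hp^2), data = mtcars)

#test statistic: n * R-squared, compared to a chi-squared distribution
#with df equal to the number of auxiliary regressors (5 here)
stat    <- nrow(mtcars) * summary(aux)$r.squared
p_value <- pchisq(stat, df = 5, lower.tail = FALSE)
stat
p_value
```

The statistic and p-value should match the bptest output above, which can be a useful sanity check that the formula passed to bptest specifies the auxiliary regression you intended.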
Here is how to interpret the output:
- The test statistic is χ2 = 7.0766.
- The degrees of freedom are 5.
- The corresponding p-value is 0.215.
White’s test uses the following null and alternative hypotheses:
- Null (H0): Homoscedasticity is present.
- Alternative (HA): Heteroscedasticity is present.
Since the p-value is not less than 0.05, we fail to reject the null hypothesis. We do not have sufficient evidence to say that heteroscedasticity is present in the regression model.
What To Do Next
If you fail to reject the null hypothesis of White's test, there is no evidence of heteroscedasticity and you can proceed to interpret the output of the original regression.
However, if you reject the null hypothesis, you have evidence that heteroscedasticity is present in the data. In this case, the standard errors shown in the output table of the regression may be unreliable.
There are a couple common ways that you can fix this issue, including:
1. Transform the response variable.
You can try transforming the response variable, for example by taking its log, square root, or cube root. This often reduces or eliminates heteroscedasticity.
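For example, a log transformation of the response can be sketched as follows (using the same mtcars model from above); re-running White's test on the transformed model shows whether the transformation helped:

```r
#refit the model with a log-transformed response
log_model <- lm(log(mpg) ~ disp + hp, data = mtcars)

#re-run White's test on the transformed model
library(lmtest)
bptest(log_model, ~ disp*hp + I(disp^2) + I(hp^2), data = mtcars)
```

If the new p-value is well above 0.05, the transformation has addressed the heteroscedasticity; note that coefficients of this model are then interpreted on the log scale.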
2. Use weighted regression.
Weighted regression assigns a weight to each data point based on the variance of its fitted value. Essentially, this gives small weights to data points that have higher variances, which shrinks their squared residuals. When the proper weights are used, this can eliminate the problem of heteroscedasticity.
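One common way to estimate the weights (an illustrative approach, not the only one) is to regress the absolute residuals on the fitted values and use the inverse of the squared fitted values from that auxiliary regression. A sketch using the same mtcars model:

```r
#fit the original model
model <- lm(mpg ~ disp + hp, data = mtcars)

#estimate weights: inverse of the squared fitted values from a regression
#of absolute residuals on fitted values
wt <- 1 / lm(abs(resid(model)) ~ fitted(model))$fitted.values^2

#refit the model using weighted least squares
wls_model <- lm(mpg ~ disp + hp, data = mtcars, weights = wt)
summary(wls_model)
```

Observations with larger estimated residual variance receive smaller weights, so they contribute less to the fitted coefficients, which is exactly how weighted regression counteracts heteroscedasticity.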