This tutorial explains how to read and interpret the F-distribution table.
What is the F-Distribution Table?
The F-distribution table shows the critical values of the F distribution. To use the table, you only need three values:
- The numerator degrees of freedom
- The denominator degrees of freedom
- The alpha level (common choices are 0.01, 0.05, and 0.10)
The following table shows the F-distribution table for alpha = 0.10. The numbers along the top of the table represent the numerator degrees of freedom (labeled as DF1 in the table), and the numbers along the left-hand side represent the denominator degrees of freedom (labeled as DF2 in the table).
The critical values within the table are often compared to the F statistic of an F test. If the F statistic is greater than the critical value found in the table, then you can reject the null hypothesis of the F test and conclude that the results of the test are statistically significant.
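If you have statistical software available, you can also compute these critical values directly instead of looking them up in a printed table. The following is a minimal sketch, assuming the Python library scipy is installed; the alpha level, degrees of freedom, and F statistic shown are purely illustrative:

```python
# A minimal sketch: compute an F critical value from the same three
# inputs the table uses, then apply the decision rule described above.
# Assumes scipy is installed; the numbers below are illustrative.
from scipy.stats import f

alpha = 0.05        # significance level
dfn, dfd = 2, 9     # numerator and denominator degrees of freedom
f_stat = 5.09       # F statistic from your test

f_crit = f.ppf(1 - alpha, dfn, dfd)  # critical value (about 4.26 for these inputs)

if f_stat > f_crit:
    print("Reject the null hypothesis: the result is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")
```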
Examples of How to Use the F-Distribution Table
The F-distribution table is used to find the critical value for an F test. The three most common scenarios in which you’ll conduct an F test are as follows:
- F test in regression analysis to test for the overall significance of a regression model.
- F test in ANOVA (analysis of variance) to test for an overall difference between group means.
- F test to find out if two populations have equal variances.
Let’s walk through an example of how to use the F-distribution table in each of these scenarios.
F Test in Regression Analysis
Suppose we conduct a multiple linear regression analysis using hours studied and prep exams taken as predictor variables and final exam score as the response variable. When we run the regression analysis, we receive the following output:
| Source     | SS      | df | MS     | F    | P     |
|------------|---------|----|--------|------|-------|
| Regression | 546.53  | 2  | 273.26 | 5.09 | 0.033 |
| Residual   | 483.13  | 9  | 53.68  |      |       |
| Total      | 1029.66 | 11 |        |      |       |
In regression analysis, the F statistic is calculated as regression MS / residual MS. This statistic indicates whether the regression model provides a better fit to the data than a model that contains no independent variables. In essence, it tests whether the regression model as a whole is useful.
In this example, the F statistic is 273.26 / 53.68 = 5.09.
Suppose we want to know if this F statistic is significant at level alpha = 0.05. Using the F-distribution table for alpha = 0.05, with numerator degrees of freedom 2 (the df for Regression) and denominator degrees of freedom 9 (the df for Residual), we find that the F critical value is 4.2565.
Since our F statistic (5.09) is greater than the F critical value (4.2565), we can conclude that the regression model as a whole is statistically significant.
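If you prefer to verify this with software rather than the table, here is a minimal sketch (assuming scipy is installed) that reproduces the F statistic, critical value, and p-value from the quantities in the regression output above:

```python
# Reproduce the regression F test from the ANOVA-table quantities above.
# Assumes scipy is installed.
from scipy.stats import f

ms_regression, ms_residual = 273.26, 53.68   # mean squares from the output
df_regression, df_residual = 2, 9            # numerator and denominator df

f_stat = ms_regression / ms_residual                   # ≈ 5.09
f_crit = f.ppf(1 - 0.05, df_regression, df_residual)   # ≈ 4.2565
p_value = f.sf(f_stat, df_regression, df_residual)     # ≈ 0.033

print(f_stat > f_crit)  # True -> reject the null; the model is significant
```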
F test in ANOVA
Suppose we want to know whether or not three different studying techniques lead to different exam scores. To test this, we recruit 15 students and randomly assign 5 students to each of the three studying techniques, which they use for one month in preparation for an exam. Once all of the students take the exam, we conduct a one-way ANOVA to find out whether or not studying technique has an impact on exam scores. The following table shows the results of the one-way ANOVA:
| Source    | SS    | df | MS   | F    | P     |
|-----------|-------|----|------|------|-------|
| Treatment | 58.8  | 2  | 29.4 | 1.74 | 0.217 |
| Error     | 202.8 | 12 | 16.9 |      |       |
| Total     | 261.6 | 14 |      |      |       |
In an ANOVA, the F statistic is calculated as Treatment MS / Error MS. This statistic is used to test whether the mean exam score is equal across all three groups.
In this example, the F statistic is 29.4 / 16.9 = 1.74.
Suppose we want to know if this F statistic is significant at level alpha = 0.05. Using the F-distribution table for alpha = 0.05, with numerator degrees of freedom 2 (the df for Treatment) and denominator degrees of freedom 12 (the df for Error), we find that the F critical value is 3.8853.
Since our F statistic (1.74) is not greater than the F critical value (3.8853), we conclude that there is not a statistically significant difference between the mean scores of the three groups.
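The same check can be done with software. The sketch below (assuming scipy is installed) uses the mean squares and degrees of freedom from the ANOVA table above:

```python
# Check the one-way ANOVA result above against the F critical value.
# Assumes scipy is installed.
from scipy.stats import f

ms_treatment, ms_error = 29.4, 16.9   # mean squares from the ANOVA table
df_treatment, df_error = 2, 12        # numerator and denominator df

f_stat = ms_treatment / ms_error                   # ≈ 1.74
f_crit = f.ppf(1 - 0.05, df_treatment, df_error)   # ≈ 3.8853

print(f_stat > f_crit)  # False -> fail to reject the null hypothesis
```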
F test for Equal Variances of Two Populations
Suppose we want to know whether or not the variances for two populations are equal. To test this, we can conduct an F-test for equal variances in which we take a random sample of 25 observations from each population and find the sample variance for each sample.
The test statistic for this F-Test is defined as follows:
F statistic = s₁² / s₂²
where s₁² and s₂² are the sample variances of sample 1 and sample 2, respectively. The further this ratio is from one, the stronger the evidence for unequal population variances.
The critical value for the F-Test is defined as follows:
F Critical Value = the value found in the F-distribution table with n₁ − 1 numerator degrees of freedom and n₂ − 1 denominator degrees of freedom (where n₁ and n₂ are the two sample sizes) and a significance level of α.
Suppose the sample variance for sample 1 is 30.5 and the sample variance for sample 2 is 20.5. This means that our test statistic is 30.5 / 20.5 = 1.488. To find out if this test statistic is significant at alpha = 0.10, we can find the critical value in the F-distribution table associated with alpha = 0.10, numerator df = 24, and denominator df = 24. This value turns out to be 1.7019.
Since our F statistic (1.488) is not greater than the F critical value (1.7019), we conclude that there is not a statistically significant difference between the variances of these two populations.
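Here is a minimal sketch (assuming scipy is installed) of the same comparison, using the sample variances and sample sizes from the example above:

```python
# F test for equal variances, using the sample variances from the example.
# Assumes scipy is installed.
from scipy.stats import f

s1_sq, s2_sq = 30.5, 20.5   # sample variances
n1, n2 = 25, 25             # sample sizes
alpha = 0.10

f_stat = s1_sq / s2_sq                      # ≈ 1.488
f_crit = f.ppf(1 - alpha, n1 - 1, n2 - 1)   # ≈ 1.7019

print(f_stat > f_crit)  # False -> no evidence the population variances differ
```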
Additional Resources
For a complete set of F-distribution tables for alpha values 0.001, 0.01, 0.025, 0.05, and 0.10, check out this page.