Update: The code for these animations is available here.
Another Update: I think some of the explanations on this page may be easier to follow with more color. I have some updated visuals here that include colors.
The Frisch-Waugh-Lovell theorem states that within a multivariate regression of y on x1 and x2, the coefficient for x2 (call it β2) will be exactly the same as if you had instead run a regression on the residuals of y and x2 after regressing each one on x1 separately.
The point of this post is not to explain the FWL theorem in linear algebraic detail, or explain why it’s useful (basically, it’s a fundamental intuition about what multivariate regression does and what it means to “partial” out the effects of two regressors). If you want to learn more about that, there’s some great stuff already on Google.
The point of this post is to simply provide an animation of this theorem. I find that the explanations of this theorem are often couched in lots of linear algebra, and it may be hard for some people to understand what’s going on exactly. I hope this animation can help with that.
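For reference, the two-regressor setup used throughout this post is

$$
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon,
$$

and the theorem says that the fitted coefficient on x2 equals the slope from regressing the residuals of y on the residuals of x2, where both sets of residuals come from regressions on x1 (plus a constant). The same holds for x1 with the roles of the two regressors swapped.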
Our Data
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(42069)

# x2 is generated from x1 (plus noise), and y is generated from both x1 and x2
df = pd.DataFrame({'x1': np.random.uniform(0, 10, size=50)})
df['x2'] = 4.9 + df['x1'] * 0.983 + 2.104 * np.random.normal(0, 1.35, size=50)
df['y'] = 8.643 - 2.34 * df['x1'] + 3.35 * df['x2'] + np.random.normal(0, 1.65, size=50)
df['const'] = 1

# Multivariate regression of y on a constant, x1, and x2
model = sm.OLS(
    endog=df['y'],
    exog=df[['const', 'x1', 'x2']]
).fit()
model.summary()
```
The output of the above:
OLS Regression Results
| Dep. Variable: | y | R-squared: | 0.977 |
|---|---|---|---|
| Model: | OLS | Adj. R-squared: | 0.976 |
| Method: | Least Squares | F-statistic: | 997.5 |
| Date: | Sat, 26 Dec 2020 | Prob (F-statistic): | 3.22e-39 |
| Time: | 17:11:39 | Log-Likelihood: | -95.281 |
| No. Observations: | 50 | AIC: | 196.6 |
| Df Residuals: | 47 | BIC: | 202.3 |
| Df Model: | 2 | | |
| Covariance Type: | nonrobust | | |

| | coef | std err | t | P>\|t\| | [0.025 | 0.975] |
|---|---|---|---|---|---|---|
| const | 9.4673 | 0.546 | 17.337 | 0.000 | 8.369 | 10.566 |
| x1 | -2.2003 | 0.128 | -17.213 | 0.000 | -2.458 | -1.943 |
| x2 | 3.1931 | 0.081 | 39.647 | 0.000 | 3.031 | 3.355 |

| Omnibus: | 0.120 | Durbin-Watson: | 1.914 |
|---|---|---|---|
| Prob(Omnibus): | 0.942 | Jarque-Bera (JB): | 0.279 |
| Skew: | -0.095 | Prob(JB): | 0.870 |
| Kurtosis: | 2.687 | Cond. No. | 27.3 |
The Animation
Here is what would happen if we actually ran a univariate regression on the residuals after factoring out x1.
(The animation takes a few seconds, so you might need to wait for it to restart to get the full effect.)
Notice that the slope in the final block ends up equaling 3.1931, which is the coefficient for x2 in the multivariate regression.
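If you'd like to check this numerically rather than visually, here's a short sketch (reusing df and sm from the code above) that runs the residual-on-residual regression the animation walks through:

```python
# Partial x1 out of both y and x2
y_resid = sm.OLS(df['y'], df[['const', 'x1']]).fit().resid    # y with x1 factored out
x2_resid = sm.OLS(df['x2'], df[['const', 'x1']]).fit().resid  # x2 with x1 factored out

# Regress residuals on residuals; the slope reproduces the x2 coefficient (3.1931)
print(sm.OLS(y_resid, x2_resid).fit().params)
```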
Getting the coefficient for x1 is more interesting; one thing that happens in the multivariate regression is that x1's coefficient is negative despite the fact that x1 is positively correlated with y. What gives? Well, the following animation helps to show where that comes from:
You can mostly see here what’s happening: after we take out the effect of x2 on y, what we’re left with is a negative relationship between x1 and y. Put another way: there is a negative correlation between x1 and the residuals from the regression of y on x2.
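The same kind of numeric check works here too (again just a sketch, reusing df from above):

```python
# Residuals of y after taking out the effect of x2 (and a constant)
y_resid_x2 = sm.OLS(df['y'], df[['const', 'x2']]).fit().resid

# This correlation is negative, matching the sign of the x1 coefficient
print(np.corrcoef(df['x1'], y_resid_x2)[0, 1])

# Full FWL version: also partial x2 out of x1, then regress residuals on residuals;
# the slope reproduces the x1 coefficient (-2.2003)
x1_resid = sm.OLS(df['x1'], df[['const', 'x2']]).fit().resid
print(sm.OLS(y_resid_x2, x1_resid).fit().params)
```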