Causality/2
-
25. Debiasing with Propensity Score | Causality/2 | 2025. 4. 15. 16:15
https://matheusfacure.github.io/python-causality-handbook/Debiasing-with-Propensity-Score.html
Previously, we saw how to go from a biased dataset to one where the treatment looked as good as randomly assigned. We used orthogonalization for that. That technique was based on predicting the treatment and the outcome and then replacing both with their predictions’ residuals. That alone is a powerful ..
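The excerpt above describes orthogonalization as predicting both treatment and outcome and keeping only the residuals. A minimal sketch of that idea, on made-up synthetic data (the column names and the true effect of 2.0 are illustrative assumptions, not the handbook's code):

```python
# Orthogonalization (debiasing via residuals) on synthetic confounded data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 2))                                # confounders
t = x @ np.array([1.0, -0.5]) + rng.normal(size=n)         # confounded treatment
y = 2.0 * t + x @ np.array([0.7, 0.3]) + rng.normal(size=n)  # true effect = 2.0

df = pd.DataFrame({"x1": x[:, 0], "x2": x[:, 1], "t": t, "y": y})
X = df[["x1", "x2"]]

# Step 1: predict treatment and outcome from the confounders,
# then replace each with its residual.
t_res = df["t"] - LinearRegression().fit(X, df["t"]).predict(X)
y_res = df["y"] - LinearRegression().fit(X, df["y"]).predict(X)

# Step 2: regress outcome residuals on treatment residuals. By the
# Frisch-Waugh-Lovell theorem, this slope recovers the debiased effect.
effect = LinearRegression().fit(t_res.to_frame(), y_res).coef_[0]
print(round(effect, 2))
```

In the residualized data, the treatment no longer depends on the confounders, which is what "looked as good as randomly assigned" means here.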
-
24. Debiasing with Orthogonalization | Causality/2 | 2025. 4. 15. 07:39
https://matheusfacure.github.io/python-causality-handbook/Debiasing-with-Orthogonalization.html
Previously, we saw how to evaluate a causal model. By itself, that’s a huge deed. Causal models estimate the elasticity δy/δt, which is an unseen quantity. Hence, since we can’t see the ground truth of what our model is estimating, we had to be very creative in how we would go about evaluating them. T..
-
22. Synthetic Difference-in-Differences | Causality/2 | 2025. 4. 10. 17:25
https://matheusfacure.github.io/python-causality-handbook/25-Synthetic-Diff-in-Diff.html
In previous chapters, we looked into both Difference-in-Differences and Synthetic Control methods for identifying the treatment effect with panel data (data where we have multiple units observed across multiple time periods). It turns out we can merge both approaches into a single estimator. This new Syntheti..
-
21. The Difference-in-Differences Saga | Causality/2 | 2025. 4. 10. 14:40
https://matheusfacure.github.io/python-causality-handbook/24-The-Diff-in-Diff-Saga.html
After discussing treatment effect heterogeneity, we will now switch gears a bit, back into average treatment effects. Over the next few chapters, we will cover some recent developments in panel data methods. A panel is a data structure that has repeated observations across time. The fact that we observe the sa..
-
20. [R-learner, Double ML] Debiased/Orthogonal Machine Learning | Causality/2 | 2025. 4. 10. 11:55
https://matheusfacure.github.io/python-causality-handbook/22-Debiased-Orthogonal-Machine-Learning.html
The next meta-learner we will consider actually came before they were even called meta-learners. As far as I can tell, it came from an awesome 2016 paper that sprung a fruitful field in the causal inference literature. The paper was called Double Machine Learning for Treatment and Causal Paramet..
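The Double/Debiased ML recipe from the paper named above combines flexible nuisance models with cross-fitting, then runs a residual-on-residual regression. A rough sketch on synthetic data (the data-generating process and model choices are assumptions for illustration, not the chapter's implementation):

```python
# Double ML sketch: cross-fitted nuisance models + residual regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                                  # confounders
t = np.sin(X[:, 0]) + rng.normal(scale=0.5, size=n)          # nonlinear confounding
y = 1.5 * t + X[:, 0] ** 2 + rng.normal(scale=0.5, size=n)   # true effect = 1.5

# Cross-fitting: each unit's nuisance prediction comes from a model
# trained on the other folds, which removes overfitting bias.
t_hat = cross_val_predict(GradientBoostingRegressor(), X, t, cv=3)
y_hat = cross_val_predict(GradientBoostingRegressor(), X, y, cv=3)

# Final stage: regress outcome residuals on treatment residuals;
# the slope should land near the true effect of 1.5.
ate = LinearRegression().fit((t - t_hat).reshape(-1, 1), y - y_hat).coef_[0]
print(round(ate, 2))
```

The ML models here only handle the nuisance predictions; the causal parameter itself still comes from the simple final-stage regression.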
-
19. Meta Learners | Causality/2 | 2025. 4. 10. 09:43
https://matheusfacure.github.io/python-causality-handbook/21-Meta-Learners.html
Just to recap, we are now interested in finding treatment effect heterogeneity, that is, identifying how units respond differently to the treatment. In this framework, we want to estimate the conditional treatment effect or, in the continuous case, E[δYi(t)|X]. In other words, we want to know how sensitive the units are to the treatment. This is super..
-
18. [F-learner] Plug-and-Play Estimators | Causality/2 | 2025. 4. 10. 07:56
https://matheusfacure.github.io/python-causality-handbook/20-Plug-and-Play-Estimators.html
So far, we’ve seen how to debias our data in the case where the treatment is not randomly assigned, which results in confounding bias. That helps us with the identification problem in causal inference. In other words, once the units are exchangeable, or Y(0),Y(1)⊥T|X, it becomes possible to learn the treatm..
-
17. Heterogeneous Treatment Effects and Personalization | Causality/2 | 2025. 4. 9. 05:45
https://matheusfacure.github.io/python-causality-handbook/18-Heterogeneous-Treatment-Effects-and-Personalization.html
From Predictions to Causal Inference
In the last chapter, we briefly covered Machine Learning models. ML models are tools for what I called predictions or, more technically, estimating the conditional expectation function E[Y|X]. In other words, ML is incredibly useful when you wan..