    Causal Effect Inference with Deep Latent-Variable Models
    Causality/paper 2025. 2. 28. 07:39

    https://arxiv.org/pdf/1705.08821

    https://github.com/AMLab-Amsterdam/CEVAE

    NeurIPS 2017


    Abstract

    Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of confounders, factors that affect both an intervention and its outcome. A carefully designed observational study attempts to measure all important confounders. However, even if one does not have direct access to all confounders, there may exist noisy and uncertain measurements of proxies for confounders. We build on recent advances in latent variable modeling to simultaneously estimate the unknown latent space summarizing the confounders and the causal effect. Our method is based on Variational Autoencoders (VAEs), which follow the causal structure of inference with proxies. We show our method is significantly more robust than existing methods, and matches the state-of-the-art on previous benchmarks focused on individual treatment effects.


    1. Introduction

    Understanding the causal effect of an intervention t on an individual with features X is a fundamental problem across many domains. Examples include understanding the effect of medications on a patient’s health, or of teaching methods on a student’s chance of graduation. With the availability of large datasets in domains such as healthcare and education, there is much interest in developing methods for learning individual-level causal effects from observational data [42, 53, 25, 43].

     

    The most crucial aspect of inferring causal relationships from observational data is confounding. A variable which affects both the intervention and the outcome is known as a confounder of the effect of the intervention on the outcome. On the one hand, if such a confounder can be measured, the standard way to account for its effect is by “controlling” for it, often through covariate adjustment or propensity score re-weighting [39]. On the other hand, if a confounder is hidden or unmeasured, it is impossible in the general case (i.e. without further assumptions) to estimate the effect of the intervention on the outcome [40]. For example, socio-economic status can affect both the medication a patient has access to, and the patient’s general health. Therefore socio-economic status acts as a confounder between the medication and health outcomes, and without measuring it we cannot in general isolate the causal effect of medications on health measures. Henceforth we will denote observed potential confounders² by X, and unobserved confounders by Z.

    (² Including observed covariates which do not affect the intervention or outcome, and therefore are not truly confounders.)
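
    Concretely, the two standard ways of “controlling” for an observed confounder X mentioned above can be written as follows (a textbook sketch, not specific to this paper):

    p(y | do(t)) = Σ_x p(y | t, X = x) p(X = x)   (covariate adjustment / back-door formula)

    E[y | do(t = 1)] = E[ 1{t = 1} · y / e(X) ],  where e(x) = p(t = 1 | X = x)   (propensity score re-weighting)

    Both expressions are valid only when X contains all confounders, which is exactly the assumption that fails when some confounders Z remain hidden.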

     

    In most real-world observational studies we cannot hope to measure all possible confounders. For example, in many studies we cannot measure variables such as personal preferences or most genetic and environmental factors. An extremely common practice in these cases is to rely on so-called “proxy variables” [38, 6, 36, Ch. 11]. For example, we cannot measure the socio-economic status of patients directly, but we might be able to get a proxy for it by knowing their zip code and job type. One of the promises of using big-data for causal inference is the existence of myriad proxy variables for unmeasured confounders.

     

    How should one use these proxy variables? The answer depends on the relationship between the hidden confounders, their proxies, the intervention and outcome [31, 37]. Consider for example the causal graphs in Figure 1: it is well known [20, 15, 18, 31, 41] that it is often incorrect to treat the proxies X as if they are ordinary confounders, as this would induce bias. See the Appendix for a simple example of this phenomenon. The aforementioned papers give methods which are guaranteed to recover the true causal effect when proxies are observed. However, the strong guarantees these methods enjoy rely on strong assumptions. In particular, it is assumed that the hidden confounder is either categorical with a known number of categories, or that the model is linear-Gaussian.

     

    In practice, we cannot know the exact nature of the hidden confounder Z: whether it is categorical or continuous, or if categorical how many categories it includes. Consider socio-economic status (SES) and health. Should we conceive of SES as a continuous or ordinal variable? Perhaps SES as a confounder comprises two dimensions: an economic one (related to wealth and income) and a social one (related to education and cultural capital). Z might even be a mix of continuous and categorical, or be high-dimensional itself. This uncertainty makes causal inference a very hard problem even with proxies available. We propose an alternative approach to causal effect inference tailored to the surrogate-rich setting when many proxies are available: estimation of a latent-variable model where we simultaneously discover the hidden confounders and infer how they affect treatment and outcome. Specifically, we focus on (approximate) maximum-likelihood based methods.

     

    Although in many cases learning latent-variable models is computationally intractable [50, 7], the machine learning community has made significant progress in the past few years developing computationally efficient algorithms for latent-variable modeling. These include methods with provable guarantees, typically based on the method-of-moments (e.g. Anandkumar et al. [4]); as well as robust, fast heuristics such as variational autoencoders (VAEs) [27, 46], based on stochastic optimization of a variational lower bound on the likelihood, using so-called recognition networks for approximate inference.
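
    For reference, the variational lower bound (ELBO) that a VAE maximizes, with a recognition network q(z | x) approximating the posterior, is the standard one:

    log p(x) ≥ E_{q(z|x)}[ log p(x | z) ] − KL( q(z | x) || p(z) ).

    Stochastic optimization of this bound with the reparameterization trick is what makes VAEs fast in practice; CEVAE (Section 3) optimizes a bound of this form, with the observations being the triple (X, t, y) rather than X alone.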

     

    Our paper builds upon VAEs. This has the disadvantage that little theory is currently available to justify when learning with VAEs can identify the true model. However, they have the significant advantage that they make substantially weaker assumptions about the data generating process and the structure of the hidden confounders. Since their recent introduction, VAEs have been shown to be remarkably successful in capturing latent structure across a wide range of previously difficult problems, such as modeling images [19], volumes [24], time-series [10] and fairness [34].

     

    We show that in the presence of noisy proxies, our method is more robust against hidden confounding, in experiments where we successively add noise to known confounders. Towards that end we introduce a new causal inference benchmark using data about twin births and mortalities in the USA. We further show that our method is competitive on two existing causal inference benchmarks. Finally, we note that our method does not currently deal with the related problem of selection bias, and we leave this to future work.

    Related work.

    Proxy variables and the challenges of using them correctly have long been considered in the causal inference literature [54, 14]. Understanding the best way to derive and measure possible proxy variables is an important part of many observational studies [13, 29, 55]. Recent work by Cai and Kuroki [9], Greenland and Lash [18], building on the work of Greenland and Kleinbaum [17], Selén [47], has studied conditions for causal identifiability using proxy variables. The general idea is that in many cases one should first attempt to infer the joint distribution p(X, Z) between the proxy and the hidden confounders, and then use that knowledge to adjust for the hidden confounders [55, 41, 32, 37, 12]. For the example in Figure 1, Cai and Kuroki [9], Greenland and Lash [18], Pearl [41] show that if Z and X are categorical, with X having at least as many categories as Z, and with the matrix p(X, Z) being full-rank, one could identify the causal effect of t on y using a simple matrix inversion formula, an approach called “effect restoration”. Conditions under which one could identify more general and complicated proxy models were recently given by [37].
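
    To sketch the idea behind effect restoration (our paraphrase of the idea, not the exact formulas of the cited papers): under the graph of Figure 1, X is independent of (t, y) given Z, so

    p(X = x, y | t) = Σ_z p(X = x | Z = z) · p(y, Z = z | t).

    If the matrix M with entries M_{x,z} = p(X = x | Z = z) is known (for example from an external or calibration study) and invertible, this linear system can be inverted to recover p(y, Z | t), after which one can adjust for Z as if it had been observed. The works cited above characterize when M, and hence the causal effect, is identifiable from the observed data alone.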


    2. Identification of Causal Effect

    Throughout this paper we assume the causal model in Figure 1. For simplicity and compatibility with prior benchmarks we assume that the treatment t is binary, but our proposed method does not rely on that. We further assume that the joint distribution p (Z, X, t, y) of the latent confounders Z and the observed confounders X can be approximately recovered solely from the observations (X, t, y). While this is impossible if the hidden confounder has no relation to the observed variables, there are many cases where this is possible, as mentioned in the introduction. For example, if X includes three independent views of Z [4, 22, 16, 2]; if Z is categorical and X is a Gaussian mixture model with components determined by Z [5]; or if Z is comprised of binary variables and X are so-called “noisy-or” functions of Z [23, 8]. Recent results show that certain VAEs can recover a very large class of latent-variable models [51] as a minimizer of an optimization problem; the caveat is that the optimization process is not guaranteed to achieve the true minimum even if it is within the capacity of the model, similar to the case of classic universal approximation results for neural networks.


    2.1. Identifying Individual Treatment Effect
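
    The quantities of interest are the standard individual and average treatment effects,

    ITE(x) := E[y | X = x, do(t = 1)] − E[y | X = x, do(t = 0)],   ATE := E[ ITE(x) ].

    The identification argument is, roughly, that if the joint p(Z, X, t, y) is recovered, then the interventional distribution follows by adjusting for the latent confounder rather than the proxies:

    p(y | X, do(t = 1)) = ∫ p(y | X, t = 1, Z = z) p(Z = z | X) dz,

    and analogously for do(t = 0), which yields ITE(x) and therefore the ATE.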


    3. Causal Effect Variational AutoEncoder

    The approach we take in this paper to the problem of learning the latent variable causal model is by using variational autoencoders [27, 46] to infer the complex non-linear relationships between X and (Z, t, y) and approximately recover p (Z, X, t, y). Recent work has dramatically increased the range and type of distributions which can be captured by VAEs [51, 45, 28]. The drawback of these methods is that because of the difficulty of guaranteeing global optima of neural net optimization, one cannot ensure that any given instance will find the true model even if it is within the model class. We believe this drawback is offset by the strong empirical performance across many domains of deep neural networks in general, and VAEs in particular. Specifically, we propose to parametrize the causal graph of Figure 1 as a latent variable model with neural net functions connecting the variables of interest. The flexible non-linear nature of neural nets will hopefully allow us to approximate well the true interactions between the treatment and its effect.
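
    As a rough illustration of what “neural net functions connecting the variables of interest” means here (a sketch, not the authors’ code; the layer widths, ELU activations and continuous-outcome head are assumptions, see the linked repository for the real implementation), the decoder side of the model could look like:

    import torch
    import torch.nn as nn

    class CEVAEDecoder(nn.Module):
        # Sketch of p(x | z), p(t | z), p(y | t, z), each parametrized by a neural net (continuous y).
        def __init__(self, z_dim=20, x_dim=25, h=200):
            super().__init__()
            self.x_net = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, x_dim))
            self.t_net = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, 1))
            # TARnet-style outcome model: one head per treatment arm, conditioned on z
            self.y0_net = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, 1))
            self.y1_net = nn.Sequential(nn.Linear(z_dim, h), nn.ELU(), nn.Linear(h, 1))

        def forward(self, z, t):
            # z: (batch, z_dim) latent sample; t: (batch, 1) float tensor of treatment assignments
            x_param = self.x_net(z)                                   # parameters of p(x | z)
            t_logit = self.t_net(z)                                   # logit of p(t = 1 | z)
            y_mean = t * self.y1_net(z) + (1 - t) * self.y0_net(z)    # mean of p(y | t, z)
            return x_param, t_logit, y_mean

    An encoder network q(z | x, t, y) plays the role of the approximate posterior, and both parts are trained jointly by maximizing the variational lower bound.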

     

    Our design choices are mostly typical for VAEs: we assume the observations factorize conditioned on the latent variables, and use an inference network [27, 46] which follows a factorization of the true posterior. For the generative model we use an architecture inspired by TARnet [48], but instead of conditioning on observations we condition on the latent variables z; see details below. For the following, xi corresponds to an input datapoint (e.g. the feature vector of a given subject), ti corresponds to the treatment assignment, yi to the outcome of the particular treatment, and zi corresponds to the latent hidden confounder. Each of the corresponding factors is described as:
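
    (A sketch of the continuous-outcome case; see the paper’s Section 3 for the exact forms.)

    p(z_i) = Π_j N(z_ij | 0, 1)
    p(x_i | z_i) = Π_j p(x_ij | z_i),   with each p(x_ij | z_i) a Bernoulli or Gaussian whose parameters are neural net functions of z_i
    p(t_i | z_i) = Bern( σ(f_1(z_i)) )
    p(y_i | t_i, z_i) = N( t_i f_2(z_i) + (1 − t_i) f_3(z_i), σ̂² ),

    where σ(·) is the logistic function and f_1, f_2, f_3 are neural networks; the two outcome heads f_2 and f_3, one per treatment arm, are what makes the decoder TARnet-like.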


    4. Experiments

    Evaluating causal inference methods is always challenging because we usually lack ground-truth for the causal effects. Common evaluation approaches include creating synthetic or semi-synthetic datasets, where real data are modified in a way that allows us to know the true causal effect, or using real-world data where a randomized experiment was conducted. Here we compare with two existing benchmark datasets where there is no need to model proxies, IHDP [21] and Jobs [33], often used for evaluating individual-level causal inference. In order to specifically explore the role of proxy variables, we create a synthetic toy dataset, and introduce a new benchmark based on data of twin births and deaths in the USA.


    4.1. Benchmark Datasets

     

    For the second benchmark we consider the task described in [48] and closely follow their procedure. It uses a dataset obtained by the study of [33, 49], which concerns the effect of job training (treatment) on employment after training (outcome). Because part of the dataset comes from a randomized controlled trial, we can estimate the “true” causal effect. Following [48] we report the absolute error on the Average Treatment effect on the Treated (ATT), which is E[ITE(X) | t = 1]. For the individual causal effect we use the policy risk, which acts as a proxy for the individual treatment effect. The results after averaging over 10 train/validation/test splits can be seen in Table 2. As we can observe, CEVAE is competitive with the state-of-the-art, while overall achieving the best estimate on the out-of-sample ATT.
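
    For reference, the policy risk used in [48] is, roughly, one minus the expected outcome when individuals are treated according to the learned policy π(x) (treat iff the estimated ITE is positive):

    R_pol(π) = 1 − ( E[y | t = 1, π(x) = 1] · p(π(x) = 1) + E[y | t = 0, π(x) = 0] · p(π(x) = 0) ),

    which can be estimated on the randomized subset of the Jobs data; lower is better.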


    4.2. Synthetic Experiment on Toy Data

    We fit CEVAE with a binary latent z, matching the true model, as well as with a 5-dimensional continuous z, in order to investigate the robustness of CEVAE w.r.t. model misspecification. We evaluate across sample sizes N ∈ {1000, 3000, 5000, 10000, 30000} and provide the results in Figure 3. We see that no matter how many samples are given, LR1, LR2 and TARnet are not able to improve their error in estimating the ATE directly from the proxies. On the other hand, CEVAE achieves significantly lower error. When the latent model is correctly specified (CEVAE bin) we do better even with a small sample size; when it is not (CEVAE cont) we require more samples for the latent space to imitate more closely the true binary latent variable.


    4.3. Binary treatment outcome on Twins

    [30, 2, 5]. We note that there might still be proxies for the confounder in the other variables, such as the incompetent cervix covariate which is a known risk factor for early birth. Having created the dataset, we focus our attention on two tasks: Inferring the mortality of the unobserved twin (counterfactual), and inferring the average treatment effect. We compare with TARnet, LR1 and LR2. We vary the number of hidden layers for TARnet and CEVAE (nh in the figures). We note that while TARnet with 0 hidden layers is equivalent to LR2, CEVAE with 0 hidden layers still infers a latent space and is thus different. The results are given respectively in Figures 4(a) (higher is better) and 4(b) (lower is better).

     

    For the counterfactual task, we see that for small proxy noise all methods perform similarly. This is probably due to the gestation length feature being very informative; for LR1, the noisy codings of this feature form 6 of the top 10 most predictive features for mortality, the others being sex (males are more at risk), and 3 risk factors: incompetent cervix, mother lung disease, and abnormal amniotic fluid. For higher noise, TARnet, LR1 and LR2 see roughly similar degradation in performance; CEVAE, on the other hand, is much more robust to increasing proxy noise because of its ability to infer a cleaner latent state from the noisy proxies. Of particular interest is CEVAE nh = 0, which does much better for counterfactual inference than the equivalent LR2, probably because LR2 is forced to rely directly on the noisy proxies instead of the inferred latent state. For inference of average treatment effect, we see that at the low noise levels CEVAE does slightly worse than the other methods, with CEVAE nh = 0 doing noticeably worse. However, similar to the counterfactual case, CEVAE is significantly more robust to proxy noise, achieving quite a low error even when the direct proxies are completely useless at noise level 0.5.


    5. Conclusion

    In this paper we draw a connection between causal inference with proxy variables and the groundbreaking work in the machine learning community on latent variable models. Since almost all observational studies rely on proxy variables, this connection is highly relevant.

     

    We introduce a model which is the first attempt at tying these two ideas together: the Causal Effect Variational Autoencoder (CEVAE), a neural network latent variable model used for estimating individual and population causal effects. In extensive experiments we showed that it is competitive with the state-of-the-art on benchmark datasets, and more robust to hidden confounding, both on a toy artificial dataset and on modifications of real datasets such as the newly introduced Twins dataset. For future work, we plan to employ the expanding set of tools available for latent variable models (e.g. Kingma et al. [28], Tran et al. [51], Maaløe et al. [35], Ranganath et al. [44]), as well as to further explore connections between method-of-moments approaches such as Anandkumar et al. [5] and the methods for effect restoration given by Kuroki and Pearl [32] and Miao et al. [37].


     
