Causality: from learning to generative models
Causality is a fundamental concept in multiple disciplines. Causal questions arise in fields ranging from medical research to engineering, and from philosophy to physics. The last few decades have witnessed the development of a mathematical model of probabilistic causation by Judea Pearl and many others. In this modeling framework, directed acyclic graphs arise as natural objects for capturing causal relations between random variables; the directed acyclic graph that captures these relations is called the causal graph of the system. A fundamental problem is to learn the causal graph over a set of observed variables. In this thesis, we propose new algorithms for learning causal graphs in various settings. 1) First, we consider the setting where observational data is available but we are not allowed to perform new experiments: in Chapter 2 without any unobserved common causes, and in Chapter 3 with an unobserved common cause between two discrete/categorical variables. 2) Second, we consider scenarios where we are allowed to perform experiments: in Chapter 4, each experiment has an associated cost and our goal is to learn the causal graph while minimizing this cost; in Chapter 5, there are unobserved common causes and we aim to minimize the number of experiments needed to learn both the causal graph on the observed variables and the locations of the latents. After the causal graph is learned, the next problem is to fit a functional model that can sample from the causal model. 3) Third, in Chapter 6 we propose the use of neural networks, specifically generative adversarial networks, for learning the causal model when the causal graph is known and there are no latent variables.
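To make the notion of a functional (structural) causal model over a known causal graph concrete, the following is a minimal sketch, not the thesis's method: it assumes a toy three-node DAG with linear-Gaussian structural equations (the graph, weights, and function names are all illustrative choices), and shows that sampling proceeds in topological order so every variable is generated only after its parents.

```python
import random

# Hypothetical example graph: X -> Y, X -> Z, Y -> Z.
# Each node maps to the list of its parents in the causal graph.
graph = {"X": [], "Y": ["X"], "Z": ["X", "Y"]}

def topological_order(g):
    """Return the nodes of a DAG so that parents precede children."""
    order, visited = [], set()
    def visit(v):
        if v in visited:
            return
        visited.add(v)
        for p in g[v]:
            visit(p)
        order.append(v)
    for v in g:
        visit(v)
    return order

# Assumed structural equations: each node is a linear function of its
# parents plus independent Gaussian noise (the exogenous variable).
weights = {"Y": {"X": 2.0}, "Z": {"X": 0.5, "Y": -1.0}}

def sample(g, rng):
    """Draw one joint sample by evaluating each mechanism in causal order."""
    values = {}
    for v in topological_order(g):
        noise = rng.gauss(0.0, 1.0)
        values[v] = noise + sum(weights.get(v, {}).get(p, 0.0) * values[p]
                                for p in g[v])
    return values

rng = random.Random(0)
draw = sample(graph, rng)
```

In the thesis setting, the hand-written linear mechanisms above are what a learned generative model (e.g. a generative adversarial network, as in Chapter 6) would replace, while the topological-order sampling scheme stays the same.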