causal_analysis — revisions 2017/11/30 11:37 and 2018/12/21 11:35 (current), by admin
  
This calculus entails finding and applying controlled interventions to an evolving object to estimate how its algorithmic information content is affected, in terms of positive or negative shifts towards or away from randomness, in connection to causation. The approach is an alternative to statistical approaches for inferring causal relationships and for formulating theoretical expectations from perturbation analysis.
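As a rough illustration of this perturbation calculus, the sketch below uses compressed length (via `zlib`) as a crude, computable stand-in for algorithmic information content; the bit-flip intervention and all names are illustrative assumptions, not the estimator used in the work referenced here.

```python
import zlib

def complexity(bits):
    # Compressed length as a crude, computable proxy for
    # algorithmic (Kolmogorov) complexity.
    return len(zlib.compress(bytes(bits)))

def perturbation_shifts(bits):
    # Flip each bit in turn (a controlled intervention) and record how
    # the complexity estimate moves: positive shifts push the object
    # towards randomness, negative shifts towards structure.
    base = complexity(bits)
    shifts = []
    for i in range(len(bits)):
        flipped = list(bits)
        flipped[i] ^= 1  # intervention: toggle one bit
        shifts.append(complexity(flipped) - base)
    return shifts

# A highly structured object: single-bit interventions tend to
# increase its estimated information content.
structured = [0, 1] * 32
shifts = perturbation_shifts(structured)
print(sum(s > 0 for s in shifts), "of", len(shifts), "flips shift towards randomness")
```

A real analysis would replace `zlib` with a proper algorithmic-complexity estimator; compression is only a coarse upper bound.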

http://web.cs.ucla.edu/~kaoru/theoretical-impediments.pdf Theoretical Impediments to Machine Learning

Current machine learning systems operate, almost exclusively, in a purely statistical mode, which puts severe theoretical limits on their performance. We consider the feasibility of leveraging counterfactual reasoning in machine learning tasks and identify areas where such reasoning could lead to major breakthroughs in machine learning applications.

https://arxiv.org/abs/1805.06826 The Blessings of Multiple Causes

We propose the deconfounder, an algorithm that combines unsupervised machine learning and predictive model checking to perform causal inference in multiple-cause settings. The deconfounder infers a latent variable as a substitute for unobserved confounders and then uses that substitute to perform causal inference. We develop theory for when the deconfounder leads to unbiased causal estimates, and show that it requires weaker assumptions than classical causal inference. We analyze its performance in three types of studies: semi-simulated data around smoking and lung cancer, semi-simulated data around genome-wide association studies, and a real dataset about actors and movie revenue. The deconfounder provides a checkable approach to estimating close-to-truth causal effects.
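A minimal sketch of the two-stage idea under strong simplifying assumptions: toy linear data, and a substitute confounder estimated from held-aside causes (their mean, standing in for the probabilistic factor model the paper fits to all causes); every name and number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                        # unobserved confounder
causes = u[:, None] + rng.normal(size=(n, 20))
# Only the first two causes truly affect the outcome; the hidden
# confounder u affects every cause and the outcome.
y = 1.0 * causes[:, 0] + 0.5 * causes[:, 1] + 2.0 * u + rng.normal(size=n)

def ols(X, y):
    # Least-squares coefficients, intercept appended last.
    X = np.column_stack([X, np.ones(len(y))])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive: regress the outcome on the three causes of interest; the
# estimates absorb the confounder and are biased.
naive = ols(causes[:, :3], y)

# Deconfounder-style: infer a substitute confounder from the remaining
# causes and adjust for it alongside the causes of interest.
substitute = causes[:, 3:].mean(axis=1)
adjusted = ols(np.column_stack([causes[:, :3], substitute]), y)

print("naive effect of null cause:   ", round(naive[2], 2))
print("adjusted effect of null cause:", round(adjusted[2], 2))
```

The null cause's naive estimate is pushed away from zero by the shared confounder; adjusting for the substitute pulls it back much closer to the truth.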

https://arxiv.org/abs/1808.06581v1 The Deconfounded Recommender: A Causal Inference Approach to Recommendation

We develop the deconfounded recommender, a strategy to leverage classical recommendation models for causal predictions. The deconfounded recommender uses Poisson factorization on which movies users watched to infer latent confounders in the data; it then augments common recommendation models to correct for potential confounding bias.

https://arxiv.org/abs/1808.06316v1 Discovering Context Specific Causal Relationships

In this paper, by taking advantage of highly efficient decision tree induction and the well-established causal inference framework, we propose the Tree-based Context Causal rule discovery (TCC) method for efficient exploration of context-specific causal relationships from data. Experiments with both synthetic and real-world data sets show that TCC can effectively discover context-specific causal rules from the data.
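A toy sketch of the underlying idea only, not the TCC algorithm itself: splitting on a context variable, as decision tree induction would, exposes a causal rule that holds in one context but not the other. The data and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Synthetic data with a context-specific causal rule: the exposure
# raises the outcome probability only when context == 1.
context = rng.integers(0, 2, size=n)
exposure = rng.integers(0, 2, size=n)
outcome = rng.random(n) < 0.2 + 0.4 * exposure * context

def risk_difference(mask):
    # Within one context stratum, compare outcome rates between
    # exposed and unexposed units.
    e, o = exposure[mask], outcome[mask]
    return o[e == 1].mean() - o[e == 0].mean()

# Splitting on the context variable reveals that the causal rule
# holds only in context 1.
for c in (0, 1):
    print(f"context={c}: risk difference = {risk_difference(context == c):+.2f}")
```

Pooling the two contexts would dilute the effect to roughly half its true strength; the stratified estimates recover it in the right context.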

https://arxiv.org/abs/1808.07804 Transfer Learning for Estimating Causal Effects using Neural Networks

We develop new algorithms for estimating heterogeneous treatment effects, combining recent developments in transfer learning for neural networks with insights from the causal inference literature. By taking advantage of transfer learning, we are able to efficiently use different data sources that are related to the same underlying causal mechanisms. We compare our algorithms with those in the extant literature using extensive simulation studies based on large-scale voter persuasion experiments and the MNIST database.

https://arxiv.org/abs/1803.04929 SAM: Structural Agnostic Model, Causal Discovery and Penalized Adversarial Learning

We present the Structural Agnostic Model (SAM), a framework to estimate end-to-end non-acyclic causal graphs from observational data. In a nutshell, SAM implements an adversarial game in which a separate model generates each variable, given real values from all others. In tandem, a discriminator attempts to distinguish between the joint distributions of real and generated samples. Finally, a sparsity penalty forces each generator to consider only a small subset of the variables, yielding a sparse causal graph. SAM scales easily to hundreds of variables.

https://openreview.net/pdf?id=BJE-4xW0W CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training

http://openaccess.thecvf.com/content_cvpr_2017/papers/Lopez-Paz_Discovering_Causal_Signals_CVPR_2017_paper.pdf Discovering Causal Signals in Images

http://proceedings.mlr.press/v48/johansson16.pdf Learning Representations for Counterfactual Inference

We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning.

http://proceedings.mlr.press/v70/hartford17a/hartford17a.pdf Deep IV: A Flexible Approach for Counterfactual Prediction

https://arxiv.org/abs/1807.09341 Learning Plannable Representations with Causal InfoGAN

http://aclweb.org/anthology/D18-1488 Challenges of Using Text Classifiers for Causal Inference (code: github.com/zachwooddoughty/emnlp2018-causal)

https://openreview.net/forum?id=H1ltQ3R9KQ Causal Reasoning from Meta-reinforcement Learning

https://openreview.net/forum?id=Byldr3RqKX Tinkering with black boxes: counterfactuals uncover modularity in generative models

https://openreview.net/forum?id=rkxt8oC9FQ Perfect Match: A Simple Method for Learning Representations For Counterfactual Inference With Neural Networks