

Causality 101 with Robert Osazuwa Ness - #342
Jan 27, 2020
Robert Osazuwa Ness, an ML Research Engineer at Gamalon and an instructor at Northeastern University, dives into the intriguing world of causality. He discusses how understanding causal relationships can enhance model accuracy and improve algorithmic fairness. Ness explains disentangled representations and their importance in causal inference through examples like variational autoencoders. He also shares details about a new collaborative study group focused on causal modeling, inviting community participation to deepen knowledge in this essential area.
Defining Causality
- Causality is complex, intertwining philosophical questions with practical applications like estimating causal effects.
- It goes beyond basic statistical analysis by considering interventions: how taking an action changes the data-generating process itself.
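The distinction between conditioning on an observation and intervening on a variable can be sketched with a toy simulation. The model below (a hypothetical example, not from the episode) has a confounder Z that drives both X and Y, so observing X = 1 tells you a lot about Y even though X has no causal effect on Y at all; forcing X = 1, Pearl's do-operator, reveals the true (null) effect.

```python
import random

random.seed(0)
N = 100_000

def sample(do_x=None):
    """One draw from a toy structural causal model: Z -> X, Z -> Y."""
    z = random.random() < 0.5              # confounder
    if do_x is None:
        # X usually copies Z (observational mechanism).
        x = z if random.random() < 0.9 else not z
    else:
        x = do_x                           # intervention overrides the mechanism
    y = z                                  # Y depends only on Z, never on X
    return x, y

# Observational: condition on seeing X = 1.
obs = [y for x, y in (sample() for _ in range(N)) if x]
p_obs = sum(obs) / len(obs)                # ≈ 0.9: X "predicts" Y via Z

# Interventional: force X = 1 with do(X = 1).
intv = [y for _, y in (sample(do_x=True) for _ in range(N))]
p_do = sum(intv) / len(intv)               # ≈ 0.5: no causal effect of X on Y

print(f"P(Y=1 | X=1)     = {p_obs:.2f}")
print(f"P(Y=1 | do(X=1)) = {p_do:.2f}")
```

The gap between the two numbers is exactly what "going beyond basic statistical analysis" buys you: the conditional probability is great for prediction but misleading as a guide to action.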
Two Views of Causality
- Many view causality either as abstract philosophy or a simple add-on to existing machine learning techniques.
- This reflects a cultural clash between traditional causal inference and modern machine learning practices.
The IID Assumption
- Traditional machine learning models assume independent and identically distributed (IID) data.
- Real-world actions based on model predictions often violate this assumption, impacting future data distributions.
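A minimal sketch of how acting on predictions breaks the IID assumption (a hypothetical lending scenario of my own, not from the episode): a model is trained on the full applicant population, but once its decisions gate who gets a loan, the labels collected afterwards come from a filtered, non-representative slice of that population.

```python
import random

random.seed(1)

def applicant():
    """An applicant whose repayment probability equals an observed score."""
    score = random.random()
    repaid = random.random() < score
    return score, repaid

# Round 1: labels observed for everyone -- an IID sample of the population.
pop = [applicant() for _ in range(50_000)]
base_rate = sum(r for _, r in pop) / len(pop)          # ≈ 0.5

# Round 2: a deployed model approves only score >= 0.7, so repayment
# labels now arrive only for approved applicants. The new training data
# is drawn from a shifted distribution -- no longer IID with round 1.
approved = [(s, r) for s, r in (applicant() for _ in range(50_000))
            if s >= 0.7]
selected_rate = sum(r for _, r in approved) / len(approved)  # ≈ 0.85

print(f"repayment rate, full population: {base_rate:.2f}")
print(f"repayment rate, post-selection:  {selected_rate:.2f}")
```

A model retrained on the post-selection data would see an inflated repayment rate, illustrating why feedback loops between predictions and actions undermine the IID premise.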