
Data Science Decoded
We discuss seminal mathematical papers (sometimes really old!) that have shaped and established the fields of machine learning and data science as we know them today. The goal of the podcast is to introduce you to the evolution of these fields from a mathematical and slightly philosophical perspective.
We will discuss the contribution of these papers, not just from a pure math aspect but also how they influenced the discourse in the field, which areas were opened up as a result, and so on.
Our podcast episodes are also available on our YouTube channel:
https://youtu.be/wThcXx_vXjQ?si=vnMfs
Latest episodes

May 30, 2025 • 41min
Data Science #30 - The Bootstrap Method (1979)
In the 30th episode we review the bootstrap method. Introduced by Bradley Efron in 1979, the bootstrap is a non-parametric resampling technique that approximates a statistic's sampling distribution by repeatedly drawing with replacement from the observed data, allowing estimation of standard errors, confidence intervals, and bias without relying on strong distributional assumptions.

Its ability to quantify uncertainty cheaply and flexibly underlies many staples of modern data science and AI: it powers model evaluation and feature stability analysis, inspired ensemble methods like bagging and random forests, and informs uncertainty calibration for deep-learning predictions, making contemporary models more reliable and robust.

Efron, B. "Bootstrap methods: Another look at the jackknife." The Annals of Statistics 7.1 (1979): 1-26.
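To make the resampling loop concrete, here is a minimal sketch in Python; the exponential toy data, the choice of the median as the statistic, and the 2,000 resamples are illustrative assumptions, not details from Efron's paper:

```python
# A minimal bootstrap sketch: approximate the standard error of a
# statistic by resampling with replacement from the observed data.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)  # illustrative observed sample

def bootstrap_se(sample, statistic, n_resamples=2000, rng=rng):
    """Standard error of `statistic` estimated from bootstrap replicates."""
    n = len(sample)
    replicates = np.array([
        statistic(rng.choice(sample, size=n, replace=True))
        for _ in range(n_resamples)
    ])
    return replicates.std(ddof=1)

print(bootstrap_se(data, np.median))  # no distributional assumptions needed
```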

May 23, 2025 • 41min
Data Science #29 - The Chi-squared Automatic Interaction Detection (CHAID) algorithm (1979)
In the 29th episode, we go over the 1979 paper by Gordon Vivian Kass that introduced the CHAID algorithm. CHAID (Chi-squared Automatic Interaction Detection) is a tree-based partitioning method for exploring large categorical data sets by iteratively splitting records into mutually exclusive, exhaustive subsets based on the most statistically significant predictors rather than maximal explanatory power. Unlike its predecessor, AID, CHAID embeds each split in a chi-squared significance test (with Bonferroni-corrected thresholds), allows multi-way divisions, and handles missing or "floating" categories gracefully.

In practice, CHAID proceeds by merging the predictor categories that are least distinguishable (stepwise grouping) and then testing whether any compound categories merit a further split, ensuring parsimonious, stable groupings without overfitting. Through its significance-driven, multi-way splitting and built-in bias correction against predictors with many levels, CHAID yields intuitive decision trees that highlight the strongest associations in high-dimensional categorical data.

In modern data science, CHAID's core ideas underpin contemporary decision-tree algorithms (e.g., CART, C4.5) and ensemble methods like random forests, where statistical rigor in splitting criteria and robust handling of missing data remain critical. Its emphasis on automated, hypothesis-driven partitioning has influenced automated feature selection, interpretable machine learning, and scalable analytics workflows that transform raw categorical variables into actionable insights.
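As a rough illustration of the significance-driven splitting step (not Kass's full algorithm, which also merges categories stepwise before testing), here is a sketch assuming one categorical predictor, a binary target, and invented toy data:

```python
# Sketch of a CHAID-style split decision: test whether a categorical
# predictor is significantly associated with the target via chi-squared,
# with a Bonferroni correction over the candidate predictors tested.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "region":  ["N", "N", "S", "S", "E", "E", "W", "W"] * 25,
    "churned": [0, 1, 1, 1, 0, 0, 0, 1] * 25,
})

def chaid_split_pvalue(df, predictor, target):
    """Chi-squared test of independence for a candidate split."""
    table = pd.crosstab(df[predictor], df[target])
    chi2, p, dof, _ = chi2_contingency(table)
    return p

candidates = ["region"]
alpha = 0.05 / len(candidates)  # Bonferroni-corrected threshold
p = chaid_split_pvalue(df, "region", "churned")
print(f"p = {p:.4g}; split is {'significant' if p < alpha else 'not significant'}")
```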

May 23, 2025 • 39min
Data Science #28 - The Bloom filter algorithm (1970)
In the 28th episode, we go over Burton Bloom's Bloom filter from 1970, a groundbreaking data structure that enables fast, space-efficient set membership checks by allowing a small, controllable rate of false positives. Unlike traditional methods that store the full data, Bloom filters use a compact bit array and multiple hash functions, trading exactness for speed and memory savings. This idea transformed modern data science and big data systems, powering tools like Apache Spark, Cassandra, and Kafka, where fast filtering and memory efficiency are critical for performance at scale.
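A minimal sketch of the idea in Python; the bit-array size, the number of hash functions, and the use of SHA-256/MD5 with double hashing are illustrative choices, not Bloom's original construction:

```python
# A toy Bloom filter: k bit positions per item, set on add, all checked
# on lookup. False positives are possible; false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k               # bit-array size, number of hashes
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        # Derive k indices from two digests via double hashing.
        h1 = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        h2 = int.from_bytes(hashlib.md5(item.encode()).digest()[:8], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("spark")
print("spark" in bf, "kafka" in bf)  # True, (almost certainly) False
```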

Apr 2, 2025 • 32min
Data Science #27 - The History of Least Squares (1877)
Mansfield Merriman's 1877 paper traces the historical development of the Method of Least Squares, crediting Legendre (1805) for introducing the method, Adrain (1808) for the first formal probabilistic proof, and Gauss (1809) for linking it to the normal distribution. He evaluates multiple proofs, including Laplace's (1810) general probability-based derivation, and highlights later refinements by various mathematicians. The paper underscores the method's fundamental role in statistical estimation, probability theory, and error minimization, solidifying its place in scientific and engineering applications.
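As a quick illustration of the method the paper traces (a minimal numpy sketch with made-up data, not anything from Merriman's text), fitting a line by minimizing the sum of squared residuals:

```python
# Least squares in one step: fit y ~ a + b*x by solving the normal
# equations (A^T A) beta = A^T y, here via numpy's lstsq.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

A = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta)  # intercept and slope minimizing the squared error
```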

Mar 23, 2025 • 33min
Data Science #26 - The First Gradient Descent algorithm by Cauchy (1847)
In this episode, we review Cauchy's 1847 paper, which introduced an iterative method for solving simultaneous equations by minimizing a function using its partial derivatives. Instead of elimination, he proposed progressively reducing the function's value through small updates, forming an early version of gradient descent. His approach allowed systematic approximation of solutions, influencing numerical optimization.

This work laid the foundation for machine learning and AI, where gradient-based methods are essential. Modern stochastic gradient descent (SGD) and deep-learning training algorithms follow Cauchy's principle of stepwise minimization. His ideas power optimization in neural networks, making AI training efficient and scalable.
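Here is a minimal sketch of the idea on a toy quadratic; the function, the fixed step size, and the iteration count are illustrative assumptions rather than Cauchy's original setup:

```python
# Gradient descent on f(x, y) = x^2 + 3y^2: repeatedly move a small
# step against the gradient, progressively reducing the function value.
import numpy as np

def grad(p):                       # partial derivatives of f
    x, y = p
    return np.array([2 * x, 6 * y])

p = np.array([4.0, -2.0])          # starting point
step = 0.1                         # fixed step size
for _ in range(100):
    p = p - step * grad(p)

print(p)  # converges toward the minimizer (0, 0)
```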

Feb 4, 2025 • 33min
Data Science #24 - The Expectation-Maximization (EM) algorithm Paper review (1977)
In the 24th episode we go over the paper: Dempster, Arthur P., Nan M. Laird, and Donald B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society: Series B (Methodological) 39.1 (1977): 1-22.

The Expectation-Maximization (EM) algorithm is an iterative method for finding Maximum Likelihood Estimates (MLEs) when data is incomplete or contains latent variables. It alternates between the E-step, where it computes the expected value of the missing data given current parameter estimates, and the M-step, where it maximizes the expected complete-data log-likelihood to update the parameters. This process repeats until convergence, ensuring a monotonic increase in the likelihood function.

EM is widely used in statistics and machine learning, especially in Gaussian Mixture Models (GMMs), hidden Markov models (HMMs), and missing-data imputation. Its ability to handle incomplete data makes it invaluable for problems in clustering, anomaly detection, and probabilistic modeling. The algorithm guarantees stable convergence, though it may reach local maxima, depending on initialization.

In modern data science and AI, EM has had a profound impact, enabling unsupervised learning in natural language processing (NLP), computer vision, and speech recognition. It serves as a foundation for probabilistic graphical models like Bayesian networks and Variational Inference, which power applications such as chatbots, recommendation systems, and deep generative models. Its iterative nature has also inspired optimization techniques in deep learning, such as Expectation-Maximization-inspired variational autoencoders (VAEs), demonstrating its ongoing influence in AI advancements.
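A minimal sketch of EM for a two-component one-dimensional Gaussian mixture; the synthetic data, initial guesses, and fixed iteration count are illustrative assumptions, not from the paper:

```python
# EM for a 2-component 1-D Gaussian mixture: the E-step computes each
# component's posterior responsibility for each point; the M-step
# re-estimates weights, means, and standard deviations from them.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

w, mu, sigma = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities, shape (n, 2).
    dens = w * norm.pdf(x[:, None], mu, sigma)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: maximize the expected complete-data log-likelihood.
    nk = resp.sum(axis=0)
    w = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sigma)  # likelihood increases monotonically across iterations
```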

Jan 14, 2025 • 38min
Data Science #23 - The Markov Chain Monte Carlo (MCMC) Paper review (1953)
In the 23rd episode we review the 1953 paper: Metropolis, Nicholas, et al. "Equation of state calculations by fast computing machines." The Journal of Chemical Physics 21.6 (1953): 1087-1092, which introduced the Monte Carlo method for simulating molecular systems, particularly focusing on two-dimensional rigid-sphere models.
The study used random sampling to compute equilibrium properties like pressure and density, demonstrating a feasible approach for solving analytically intractable statistical mechanics problems.
The work pioneered the Metropolis algorithm, a key development in what later became known as Markov Chain Monte Carlo (MCMC) methods.
By validating the Monte Carlo technique against free volume theories and virial expansions, the study showcased its accuracy and set the stage for MCMC as a powerful tool for exploring complex probability distributions.
This breakthrough has had a profound impact on modern AI and ML, where MCMC methods are now central to probabilistic modeling, Bayesian inference, and optimization.
These techniques enable applications like generative models, reinforcement learning, and neural network training, supporting the development of robust, data-driven AI systems.
YouTube: https://www.youtube.com/watch?v=gWOawt7hc88&t
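A minimal sketch of the Metropolis acceptance rule, applied here to a standard normal target instead of the paper's rigid-sphere system; the proposal width and chain length are illustrative choices:

```python
# Metropolis algorithm: propose a symmetric random move, accept it with
# probability min(1, target(proposal) / target(current)); the resulting
# Markov chain samples from the target distribution.
import numpy as np

rng = np.random.default_rng(0)

def target(x):                     # unnormalized density is enough
    return np.exp(-0.5 * x * x)

x, samples = 0.0, []
for _ in range(10_000):
    proposal = x + rng.uniform(-1.0, 1.0)   # symmetric proposal
    if rng.random() < target(proposal) / target(x):
        x = proposal                        # accept; otherwise keep x
    samples.append(x)

print(np.mean(samples), np.std(samples))    # roughly 0 and 1
```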

Jan 7, 2025 • 48min
Data Science #22 - The theory of dynamic programming, Paper review (1954)
We review Richard Bellman's "The Theory of Dynamic Programming" paper from 1954, which revolutionized how we approach complex decision-making problems through two key innovations. First, his Principle of Optimality established that optimal solutions have a recursive structure: each sub-decision must be optimal given the state resulting from previous decisions. Second, he introduced the concept of focusing on immediate states rather than complete historical sequences, providing a practical way to tackle what he termed the "curse of dimensionality."

These foundational ideas directly shaped modern artificial intelligence, particularly reinforcement learning. The mathematical framework Bellman developed - breaking complex problems into smaller, manageable subproblems and making decisions based on the current state - underpins many contemporary AI achievements, from game-playing agents like AlphaGo to autonomous systems and robotics. His work essentially created the theoretical backbone that enables modern AI systems to handle sequential decision-making under uncertainty.

The principles established in this 1954 paper continue to influence how we design AI systems today, particularly in reinforcement learning and neural-network architectures dealing with sequential decision problems.
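A minimal sketch of the Principle of Optimality in action, as value iteration on a toy four-state chain; the states, rewards, and discount factor are invented for illustration:

```python
# Value iteration: each state's value is the best achievable immediate
# reward plus the discounted value of the resulting state, applying
# Bellman's recursion until the values converge.
import numpy as np

n_states, gamma = 4, 0.9
actions = ["left", "right"]
reward = np.array([0.0, 0.0, 0.0, 1.0])   # reward for entering each state

def step(s, a):                            # deterministic transitions
    return max(s - 1, 0) if a == "left" else min(s + 1, n_states - 1)

V = np.zeros(n_states)
for _ in range(100):
    V = np.array([
        max(reward[step(s, a)] + gamma * V[step(s, a)] for a in actions)
        for s in range(n_states)
    ])

print(V)  # values increase toward the rewarding terminal state
```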

Dec 25, 2024 • 60min
Data Science #21 - Steps Toward Artificial Intelligence
In the 1st episode of the second season we review the legendary Marvin Minsky's "Steps Toward Artificial Intelligence" from 1961.
It is a foundational work in the field of AI that outlines the challenges and methodologies for developing intelligent problem-solving systems. The paper categorizes AI challenges into five key areas: Search, Pattern Recognition, Learning, Planning, and Induction.
It emphasizes how computers, limited by their ability to perform only programmed actions, can enhance problem-solving efficiency through heuristic methods, learning from patterns, and planning solutions to narrow down possible options.
The significance of this work lies in its conceptual framework, which established a systematic approach to AI development.
Minsky highlighted the need for machines to mimic cognitive functions like recognizing patterns and learning from experience, which form the basis of modern machine learning algorithms.
His emphasis on heuristic methods provided a pathway to make computational processes more efficient and adaptive by reducing exhaustive searches and using past data to refine problem-solving strategies.
The paper is pivotal as it set the stage for advancements in AI by introducing the integration of planning, adaptive learning, and pattern recognition into computational systems.
Minsky's insights continue to influence AI research and development, including neural networks, reinforcement learning, and autonomous systems, bridging theoretical exploration and practical applications in the quest for artificial intelligence.

Dec 9, 2024 • 60min
Data Science #20 - The Rao-Cramér bound (1945)
In the 20th episode, we review the seminal paper by Rao which introduced the Cramér-Rao bound:
Rao, Calyampudi Radhakrishna (1945). "Information and the accuracy attainable in the estimation of statistical parameters." Bulletin of the Calcutta Mathematical Society 37: 81-89.
The Cramér-Rao Bound (CRB) sets a theoretical lower limit on the variance of any unbiased estimator for a parameter.
It is derived from the Fisher information, which quantifies how much the data tells us about the parameter. This bound provides a benchmark for assessing the precision of estimators and helps identify efficient estimators that achieve this minimum variance.
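In the standard textbook formulation (stated here for n i.i.d. observations, not quoted from Rao's paper), the bound reads:

```latex
% Cramér-Rao bound for an unbiased estimator \hat{\theta} of \theta,
% where I(\theta) is the Fisher information of a single observation.
\operatorname{Var}\!\left(\hat{\theta}\right) \;\ge\; \frac{1}{n\, I(\theta)},
\qquad
I(\theta) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\,\log f(X;\theta)\right)^{2}\right]
```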
The CRB connects to key statistical concepts we have covered previously:
Consistency: Estimators approach the true parameter as the sample size grows, ensuring they become arbitrarily accurate in the limit. While consistency guarantees convergence, it does not necessarily imply the estimator achieves the CRB in finite samples.
Efficiency: An estimator is efficient if it reaches the CRB, minimizing variance while remaining unbiased. Efficiency represents the optimal use of data to achieve the smallest possible estimation error.
Sufficiency: Working with sufficient statistics ensures no loss of information about the parameter, increasing the chances of achieving the CRB. Additionally, the CRB relates to KL divergence, as Fisher information reflects the curvature of the likelihood function and the divergence between true and estimated distributions.
In modern data science and AI, the CRB plays a foundational role in uncertainty quantification, probabilistic modeling, and optimization. It informs the design of Bayesian inference systems, regularized estimators, and gradient-based methods like natural gradient descent. By highlighting the tradeoffs between bias, variance, and information, the CRB provides theoretical guidance for building efficient and robust machine learning models.