
Compiled Conversations: Machine Learning Fundamentals, Part 1 with Shannon Wirtz
Shannon Wirtz, product analyst at Angi, joins us to explore the foundations of machine learning - breaking down the terminology, concepts, and approaches that form the bedrock of modern ML systems.
We start by understanding what machine learning actually means in practice, how it differs from traditional rules-based programming, and where it fits within the broader landscape of AI and deep learning. Shannon shares insights from his professional experience with ML models, from predicting customer behavior to classification tasks.
The conversation covers everything from the fundamental building blocks (models, features, training sets) to the different paradigms of learning - supervised, unsupervised, semi-supervised, self-supervised, and reinforcement learning. We explore why generalization is critical, how bias and variance affect model performance, and why the “garbage in, garbage out” principle is so important in ML.
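If you want to see the generalization point in code before (or after) listening, here is a minimal sketch, using only numpy and not taken from the episode, that compares training error with held-out error as model flexibility grows; the most flexible fit typically looks best on the training data and worst on unseen data, which is the overfitting side of the bias-variance story.

```python
# Minimal sketch (numpy only, illustrative): compare training error and
# held-out error for polynomial models of increasing flexibility fitted
# to the same noisy data.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 40))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)   # noisy target

x_train, y_train = x[::2], y[::2]                          # half for training
x_test, y_test = x[1::2], y[1::2]                          # half held out

for degree in (1, 4, 15):                                  # rigid -> flexible
    model = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```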
Topics include:
- What machine learning means and how it differs from traditional programming
- The relationship between AI, machine learning, and deep learning
- Core ML concepts: models, training sets, samples, instances, datasets
- Classification vs regression problems
- Parameters vs hyperparameters in model training
- Generalization: why models must work on unseen data
- Bias and variance: understanding overfitting and underfitting
- Learning paradigms: supervised, unsupervised, semi-supervised, self-supervised, reinforcement
- Online vs batch learning approaches
- Instance-based vs model-based learning
- Anomaly detection and change point detection
- Features and the “garbage in, garbage out” principle
- The curse of dimensionality: why adding more features isn’t always better
- Dimension reduction techniques including PCA
- Model families: linear/logistic regression, decision trees, k-means, SVMs (see the code sketch after this list)
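As a rough illustration of how several of the listed pieces fit together (a classification task, a train/test split to check generalization, PCA for dimension reduction, and logistic regression as the model family), here is a small sketch assuming scikit-learn; it is illustrative only, not code discussed in the episode.

```python
# Rough sketch (assumes scikit-learn): classification with a train/test split,
# PCA for dimension reduction, and logistic regression as the model family.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)                 # 64 pixel features per 8x8 image
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Scale features, compress 64 dimensions down to 20 principal components,
# then fit a logistic regression classifier on the reduced representation.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```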
Shannon also shares practical examples from his work, including predicting tradesperson behavior, handling missing data, and the importance of understanding your data’s context and history before training models.
Whether you’re new to machine learning or looking to solidify your grasp of the fundamentals, this episode provides a comprehensive foundation for understanding how ML systems work and why certain approaches are chosen for different problems.
This is Part 1 of a 2-part series. In Part 2, we’ll explore ensemble learning, neural networks, model training and evaluation, interpretation techniques, and practical learning resources.
Show Links
- Shannon Wirtz on LinkedIn
- Overfitting and Underfitting
- Bias-Variance Tradeoff
- Semi-supervised Learning
- Reinforcement Learning
- Linear Regression
- Logistic Regression
- k-means Clustering
- Decision Trees
- Support Vector Machines
- Principal Component Analysis (PCA)
- Robust Principal Component Analysis
- Curse of Dimensionality
- Spurious Correlations
- Kaggle
