

Data Skeptic
Kyle Polich
The Data Skeptic Podcast features interviews and discussion of topics related to data science, statistics, machine learning, artificial intelligence and the like, all from the perspective of applying critical thinking and the scientific method to evaluate the veracity of claims and efficacy of approaches.
Episodes

Oct 26, 2018 • 25min
Being Bayesian
Dive into the intriguing world of Bayesianism through the eyes of a parrot named Yoshi. Discover how her changing food preferences highlight the art of probability and the importance of prior beliefs. The hosts demystify Bayesian statistics, illustrating how past choices can predict future likes, all while discussing the nuances of decision-making under uncertainty. Explore how sensory signals play a crucial role in shaping beliefs about what our pets enjoy, emphasizing the need for constant updates to our understanding.
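
As a rough sketch of the updating machinery behind the parrot example, the snippet below runs a Beta-Binomial update; the prior parameters and Yoshi's trial counts are invented for illustration, not taken from the episode.

```python
# A minimal sketch of the Beta-Binomial update behind the episode's parrot
# example. The prior parameters and trial counts are made up for illustration.
from scipy import stats

# Prior belief that Yoshi likes a given food: Beta(2, 2), weakly informative.
alpha, beta = 2.0, 2.0

# Hypothetical observations: Yoshi accepted the food 3 times out of 10 offers.
accepted, refused = 3, 7

# Bayesian update: the Beta prior is conjugate to the Binomial likelihood,
# so the posterior is simply Beta(alpha + accepted, beta + refused).
posterior = stats.beta(alpha + accepted, beta + refused)

print(f"Posterior mean P(likes food) = {posterior.mean():.3f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```

Each new offer updates the posterior, which then serves as the prior for the next observation, capturing the "constant updates" the episode emphasizes.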

Oct 19, 2018 • 33min
Modeling Fake News
This is our interview with Dorje Brody about his recent paper with David Meier, How to model fake news. This paper uses the tools of communication theory and a sub-topic called filtering theory to describe the mathematical basis for an information channel which can contain fake news. Thanks to our sponsor Gartner.

Oct 12, 2018 • 27min
The Louvain Method for Community Detection
The podcast explores community detection in social networks using the Louvain Method. It discusses the concept of communities, the strength of connections within a community, and the theory behind the Louvain Method. The speakers also explore the potential use of the method in identifying interest-based communities and detecting fake news on social networks. Additionally, they discuss the spread of information within communities and the risk of spreading fake information.
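
For listeners who want to experiment, NetworkX (version 2.8 and later) ships a Louvain implementation; below is a minimal sketch on a toy graph, not data from the episode.

```python
# A minimal sketch of Louvain community detection using NetworkX (>= 2.8).
# The graph here is a toy example, not data discussed in the episode.
import networkx as nx
from networkx.algorithms.community import louvain_communities

# A small social network with two tight friend groups and one weak bridge.
G = nx.Graph()
G.add_edges_from([
    ("ann", "bob"), ("bob", "cal"), ("ann", "cal"),   # group 1
    ("dee", "eve"), ("eve", "fay"), ("dee", "fay"),   # group 2
    ("cal", "dee"),                                   # bridge between groups
])

# Louvain greedily merges nodes to maximize modularity: the density of edges
# inside communities relative to a random graph with the same degree sequence.
communities = louvain_communities(G, seed=42)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```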

Oct 5, 2018 • 32min
Cultural Cognition of Scientific Consensus
In this episode, our guest is Dan Kahan, who discusses his research into how people consume and interpret science news. In an era of fake news, motivated reasoning, and alternative facts, important questions need to be asked about how people understand new information. Dan is a member of the Cultural Cognition Project at Yale University, a group of scholars interested in studying how cultural values shape public risk perceptions and related policy beliefs. In a paper titled Cultural cognition of scientific consensus, Dan and co-authors Hank Jenkins-Smith and Donald Braman discuss the "cultural cognition of risk": the tendency of individuals to form risk perceptions that are congenial to their values. They establish experimentally that individuals update their beliefs about scientific information through the lens of their pre-existing cultural beliefs, so that on topics such as climate change, nuclear waste disposal, and concealed-carry handgun permits, people exposed to the same evidence often reach opposing conclusions. The study presents both correlational and experimental evidence that cultural cognition shapes individuals' beliefs about the existence of scientific consensus, and the process by which they form such beliefs. The findings of this and other studies tell us that even when people are given accurate information about a scientific consensus, they still interpret it through that cultural lens. The implications of this dynamic for science communication and public policy-making are also discussed.

Sep 28, 2018 • 26min
False Discovery Rates
False discovery rate (FDR) control is a methodology that can be useful when struggling with the problem of multiple comparisons. In any experiment, if the experimenter checks more than one dependent variable, then they are making multiple comparisons. Naturally, if you make enough comparisons, you will eventually find some spurious correlation. Classically, people applied the Bonferroni correction. In essence, this procedure dictates that you should lower your significance threshold (raise your standard of evidence) in proportion to the number of variables you're considering. While effective, this methodology is strict about preventing false positives (Type I errors). You aren't likely to find evidence for a hypothesis that is actually false using Bonferroni. However, that zeal to avoid Type I errors may introduce some Type II errors: hypotheses that are actually true which you fail to notice. This episode covers an alternative known as false discovery rates. The essence of this method is to control the expected proportion of false positives among the discoveries you declare, rather than the chance of making even a single false positive, which recovers statistical power while still keeping mistaken findings rare.
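
The standard procedure of this kind is Benjamini-Hochberg, sketched below on made-up p-values: sort the p-values, then reject every hypothesis up to the largest rank k whose p-value falls under (k/m)·q.

```python
# A minimal sketch of the Benjamini-Hochberg procedure, the classic way to
# control the false discovery rate. The p-values below are invented.
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        # ... and reject every hypothesis ranked at or below it.
        reject[order[: k + 1]] = True
    return reject

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(p_values, q=0.05))  # rejects 0.001 and 0.008
# Bonferroni at the same level rejects only p < 0.05 / 8 = 0.00625,
# i.e. just the 0.001 result: stricter, but at the cost of power.
```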

Sep 21, 2018 • 30min
Deep Fakes
Digital videos can be described as sequences of still images and associated audio. Audio is easy to fake. What about video? A video can easily be broken down into a sequence of still images replayed rapidly in sequence. In this context, videos are simply very high dimensional sequences of observations, ripe for input into a machine learning algorithm. The availability of commodity hardware, clever algorithms, and well-designed software to implement those algorithms at scale makes it possible to do machine learning on video, but to what end? There are many answers, one interesting approach being the technology called "DeepFakes". The "Deep" of DeepFakes refers to deep learning, and the "fake" refers to the function of the software: to take a real video of a human being and digitally alter their face to match someone else's face. Two well-known examples are the Barack Obama video voiced by Jordan Peele and the many clips showcasing the versatility of Nick Cage. This software produces curiously convincing fake videos. Yet, there's something slightly off about them. Surely machine learning can be used to tell real from fake... right? Siwei Lyu and his collaborators certainly thought so, and demonstrated the idea by identifying a novel, detectable feature commonly missing from videos produced by the DeepFakes software: natural eye blinking. In this episode, we discuss this use case for deep learning, detecting fake videos, and the threat of fake videos in the future.
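
One common building block for measuring blinks is the eye aspect ratio of Soukupová and Čech; the sketch below uses it with a conventional threshold, assuming six eye landmarks per frame are already extracted by a face-landmark detector. It is an illustration of the general idea, not the authors' actual detector.

```python
# A minimal sketch of blink counting via the eye aspect ratio (EAR).
# Assumes six (x, y) eye landmarks per frame from a face-landmark detector;
# the EAR trace at the bottom is synthetic, not from a real video.
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 |p1-p4|); drops sharply on a blink."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count runs of at least min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# Synthetic trace: open eyes (~0.3) with one two-frame blink (~0.1).
trace = [0.31, 0.30, 0.29, 0.11, 0.09, 0.28, 0.30]
print(count_blinks(trace))  # -> 1; a long clip with ~0 blinks is suspicious
```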

Sep 14, 2018 • 19min
Fake News Midterm
In this episode, Kyle reviews what we've learned so far in our series on Fake News and talks briefly about where we're going next.

Sep 7, 2018 • 19min
Quality Score
Two weeks ago we discussed click through rates (CTRs) and their usefulness and limits as a metric. Today, we discuss a related metric known as quality score. While that phrase has probably been used to mean dozens of different things in different contexts, our discussion focuses on the notion of quality score encountered in Search Engine Marketing (SEM). SEM is the practice of purchasing keyword-targeted ads shown to customers using a search engine. Most SEM is managed via an auction mechanism: the advertiser states the price they are willing to pay, and in real time, the search engine serves users advertisements and charges the advertiser. But how do search engines decide which ads to show and what price to charge? This is a complicated question requiring a multi-part answer to address completely. In this episode, we focus on one part of that equation: the quality score the search engine assigns to the ad in context. This quality score is calculated from several factors, including crawling the destination page (also called the landing page) and predicting how applicable the content found there is to the ad itself.
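
A stylized sketch of the textbook auction model makes the role of quality score concrete: rank ads by bid times quality score, and charge the winner just enough to beat the runner-up. Real SEM auctions use many more signals; the advertiser names, bids, and scores below are invented.

```python
# A stylized sketch of the textbook ad-auction model: rank = bid x quality
# score, winner pays just enough to keep its rank above the runner-up.
# Real SEM auctions use many more signals; these numbers are invented.
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float            # max cost-per-click the advertiser will pay, in $
    quality_score: float  # engine's estimate of ad/landing-page relevance

    @property
    def ad_rank(self) -> float:
        return self.bid * self.quality_score

ads = [
    Ad("acme", bid=2.00, quality_score=4.0),
    Ad("globex", bid=3.00, quality_score=2.0),
    Ad("initech", bid=1.50, quality_score=7.0),
]

ranked = sorted(ads, key=lambda a: a.ad_rank, reverse=True)
winner, runner_up = ranked[0], ranked[1]

# Generalized second-price flavor: pay the minimum bid that still beats
# the runner-up's ad rank (real systems add a small increment).
price = runner_up.ad_rank / winner.quality_score
print(f"{winner.advertiser} wins, paying about ${price:.2f} per click "
      f"despite bidding ${winner.bid:.2f}")
```

Note that initech wins with the lowest bid and pays only about $1.14 per click: a high quality score both wins the auction and discounts the price, which is exactly why advertisers care about it.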

Aug 31, 2018 • 40min
The Knowledge Illusion
Kyle interviews Steven Sloman, Professor in the School of Cognitive, Linguistic, and Psychological Sciences at Brown University. Steven is co-author of The Knowledge Illusion: Why We Never Think Alone and Causal Models: How People Think about the World and Its Alternatives. Steven shares his perspective and research into how people process information and what this teaches us about the existence of, and belief in, fake news.

Aug 24, 2018 • 32min
Click Through Rates
A Click Through Rate (CTR) is the proportion of clicks to impressions of some item of content shared online. This terminology is most commonly used in digital advertising but applies just as well to content websites might choose to feature on their homepage or in search results. A CTR is intuitively appealing as a metric for optimization. After all, if users are disinterested in some content, under normal circumstances, it's reasonable to assume they would ignore it rather than click on it. On the other hand, the best content is likely to elicit a high CTR as users signal their interest by following the hyperlink. In the advertising world, a website could charge per impression, per click, or per action. Both impression-based and action-based pricing shift risk asymmetrically between the publisher and the advertiser, while paying per click (CPC-based advertising) strikes a nicer balance. For this and other reasons, many digital advertising platforms (such as Google AdWords) use CPC as the payment mechanism. When charging per click, an advertising platform will value a high CTR when selecting which ad to show. As we learned in our episode on Goodhart's Law, once a measure is turned into a target, it ceases to be a good measure. While CTR alone does not entirely drive most online advertising algorithms, it does play an important role. Thus, advertisers are incentivized to adopt strategies that maximize CTR. On the surface, this sounds like a great idea: provide internet users what they are looking for, and be rewarded with their attention and lower advertising costs. However, one possible unintended consequence of this type of optimization is the creation of ads designed solely to generate clicks, regardless of whether users are happy with the page they visit after clicking. So, at least in part, platforms that optimize for higher CTRs will favor content that does a good job of getting viewers to click, and getting a user to view a page is not synonymous with getting a user to appreciate its content. The gap between the algorithmic goal and the user experience could be one of the factors promoting the creation of fake news.
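
One practical wrinkle worth a sketch: raw CTR comparisons are noisy for items with few impressions. A common remedy (our own illustration, not something from the episode) is to shrink each estimate toward a prior CTR; the prior strength and click counts below are invented.

```python
# A minimal sketch of why raw CTR comparisons mislead on small samples,
# and a common remedy: smoothing toward a prior. Numbers are invented.
def raw_ctr(clicks, impressions):
    return clicks / impressions

def smoothed_ctr(clicks, impressions, prior_ctr=0.02, prior_strength=100):
    """Shrink toward prior_ctr; equivalent to a Beta prior with
    prior_strength pseudo-impressions at the prior click rate."""
    return (clicks + prior_ctr * prior_strength) / (impressions + prior_strength)

# Ad A: 2 clicks from 10 impressions. Ad B: 150 clicks from 10,000.
for name, clicks, imps in [("A", 2, 10), ("B", 150, 10_000)]:
    print(name,
          f"raw={raw_ctr(clicks, imps):.3f}",
          f"smoothed={smoothed_ctr(clicks, imps):.4f}")
# Raw CTR says A (0.200) crushes B (0.015); smoothing tempers A's tiny
# sample toward the prior (0.036 vs 0.015), a far more cautious ranking.
```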


