Increments

Ben Chugg and Vaden Masrani
Sep 16, 2020 • 1h 29min

#11 - Debating Existential Risk

Vaden's arguments against Bayesian philosophy and existential risk are examined by someone who might actually know what they're talking about, i.e., not Ben. After writing a critique of our conversation in Episode 7, which started off a series of blog posts, our good friend Mauricio (who studies political science, economics, and philosophy) kindly agrees to come on the podcast and try to figure out who's more confused. Does Vaden convert? We apologize for the long wait between this episode and the last one. It was all Vaden's fault. Hit us up at incrementspodcast@gmail.com!

Note from Vaden: Upon relistening, I've just learned my new computer chair clicks in the most annoying possible way every time I get enthusiastic. My apologies - I'll work on being less enthusiastic in future episodes.

Second note from Vaden: Yeesh, lots of audio issues with this episode - I replaced the file with a cleaned up version at 5:30pm September 17th. Still learning...
Aug 13, 2020 • 1h 16min

#10 (C&R Series, Ch. 4) - Tradition

Traditions, what are you good for? Absolutely nothing? In this episode of Increments, Ben and Vaden begin their series on Conjectures and Refutations by looking at the role tradition plays in society, and examine one tradition in particular - the critical tradition. No monkeys were harmed in the making of this episode.

References:
- C&R, Chapter 4: Towards a Rational Theory of Tradition

Podcast shoutout:
- Jennifer Doleac and Rob Wiblin on policing, law and incarceration
- James Forman Jr. on the US criminal legal system

audio updated 26/12/2020
Aug 7, 2020 • 1h 23min

#9 - Facial Recognition Technology with Stephen Caines

The talented Stephen Caines punctures the cloud of confusion that is Ben and Vaden's conception of facial recognition technology. We talk about the development and usage of facial recognition in the private and public spheres, the dangers and merits of the technology, and Vaden's plan to use it at bars. For God's sake, don't give that man a GPU.

Stephen is a legal technologist with a passion for access to justice. He is a 2019 graduate of the University of Miami School of Law with a concentration in the Business of Innovation, Law, and Technology. While in law school, his work focused on public interest, legal aid organizations, and non-profits. He was a 2018 Access to Justice Technology Fellow and has worked with the Legal Services of Greater Miami, Inc. on a variety of technology initiatives aimed at optimizing their operations. Additionally, he worked on the legislative and technology policy team of the Cyber Civil Rights Initiative. Stephen's current work focuses on developing standards and best practices for the safe and ethical implementation of technology in the public sector.

References:
- Stephen's website
- Perpetual Lineup Project (out of Georgetown)
- Stephen on the Our Data podcast
- IBM, Amazon, and Microsoft put moratoria on some aspects of their FRT technology
- Clearview AI

Special Guest: Stephen Caines.
Jul 28, 2020 • 1h 11min

#8 - Philosophy of Probability III: Conjectures and Refutations

On the same page at last! Ben comes to the philosophical confessional to announce his probabilistic sins. The Bayesians will be pissed (with high probability). At least Vaden doesn't make him kiss anything. After too much agreement and self-congratulation, Ben and Vaden conclude the mini-series on the philosophy of probability, and "announce" an upcoming mega-series on Conjectures and Refutations.

References:
- My Bayesian Enlightenment by Eliezer Yudkowsky

Rationalist community blogs:
- Less Wrong
- Slate Star Codex
- Marginal Revolution

Yell at us at incrementspodcast@gmail.com.
Jul 7, 2020 • 1h 38min

#7 - Philosophy of Probability II: Existential Risks

Back down to earth we go! Or try to, at least. In this episode Ben and Vaden attempt to ground their previous discussion on the philosophy of probability by focusing on a real-world example, namely the book The Precipice by Toby Ord, recently featured on the Making Sense podcast. Vaden believes in arguments, and Ben argues for beliefs.

Quotes:

"A common approach to estimating the chance of an unprecedented event with earth-shaking consequences is to take a skeptical stance: to start with an extremely small probability and only raise it from there when a large amount of hard evidence is presented. But I disagree. Instead, I think the right method is to start with a probability that reflects our overall impressions, then adjust this in light of the scientific evidence. When there is a lot of evidence, these approaches converge. But when there isn't, the starting point can matter. In the case of artificial intelligence, everyone agrees the evidence and arguments are far from watertight, but the question is where does this leave us? Very roughly, my approach is to start with the overall view of the expert community that there is something like a one in two chance that AI agents capable of outperforming humans in almost every task will be developed in the coming century. And conditional on that happening, we shouldn't be shocked if these agents that outperform us across the board were to inherit our future. Especially if when looking into the details, we see great challenges in aligning these agents with our values."
- The Precipice, p. 165

"Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China's spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non-fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification."
- Global Catastrophes and Trends, p. 226

"And while computers have been used for many years to assemble other computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them, raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies."
- Global Catastrophes and Trends, p. 26

References:
- Global Catastrophes and Trends: The Next Fifty Years
- The Precipice: Existential Risk and the Future of Humanity
- Making Sense podcast w/ Ord (clip starts around 40:00)
- Repugnant conclusion
- Arrow's theorem
- Balinski–Young theorem
Jul 2, 2020 • 1h 17min

#6 - Philosophy of Probability I: Introduction

Don't leave yet - we swear this will be more interesting than it sounds... but a drink will definitely help. Ben and Vaden dive into the interpretations behind probability. What do people mean when they use the word, and why do we use this one tool to describe different concepts? The rowdiness truly kicks in when Vaden releases his pent-up critique of Bayesianism, thereby losing both his friends and his PhD position. But at least he's ingratiated himself with Karl Popper.

References:
- Vaden's slides on a 1975 paper by Irving John Good titled "Explicativity, Corroboration, and the Relative Odds of Hypotheses". The paper is I.J. Good's response to Karl Popper, and in the presentation I compare the two philosophers' views on probability, epistemology, induction, simplicity, and content.
- Diversity in Interpretations of Probability: Implications for Weather Forecasting
- Andrew Gelman, Philosophy and the practice of Bayesian statistics

Popper quote: "Those who identify confirmation with probability must believe that a high degree of probability is desirable. They implicitly accept the rule: ‘Always choose the most probable hypothesis!’ Now it can be easily shown that this rule is equivalent to the following rule: ‘Always choose the hypothesis which goes as little beyond the evidence as possible!’ And this, in turn, can be shown to be equivalent, not only to ‘Always accept the hypothesis with the lowest content (within the limits of your task, for example, your task of predicting)!’, but also to ‘Always choose the hypothesis which has the highest degree of ad hoc character (within the limits of your task)!’" (Conjectures and Refutations, p. 391)

Get in touch at incrementspodcast@gmail.com.

audio updated 13/12/2020
Jun 18, 2020 • 1h 17min

#5 - Incrementalism Revisited: Defund the Police

In their first somber episode, Ben and Vaden discuss the protests and political tensions surrounding the murder of George Floyd. They talk about defunding the police, the importance of philosophy in politics, and honest conversation as the only peaceful means of error-correction.

References:
- https://8cantwait.org/
- https://www.8toabolition.com/
- Study which found that body cameras did not have a statistically significant effect

Errata:
- The Ta-Nehisi Coates quote is "essential below", not "eternal under". The full quote is: "It is truly horrible to understand yourself as the essential below of your country."
- Things That Make White People Uncomfortable was written by Michael Bennett, not Michael Barnet.

Love and complaints both welcome at incrementspodcast@gmail.com.
Jun 8, 2020 • 1h 31min

#4 - The Hubris of Computer Scientists

Are computer scientists recklessly applying their methods to other fields without sufficient thoughtfulness? What are computer scientists good for anyway? Ben, in true masochistic fashion, worries that computer scientists are overstepping their bounds. Vaden analyzes his worries with a random forest and determines that they are only 10% accurate, but then proceeds to piss off his entire field by arguing that we're nowhere close to true artificial intelligence.

References:
- "Good" isn't good enough, Ben Green
- "How close are we to creating artificial intelligence?", David Deutsch, Aeon
- "Artificial Intelligence - The Revolution Hasn't Happened Yet", Michael Jordan, Medium
- "Deep Learning: A Critical Appraisal", Gary Marcus

Errata:
- Vaden says "every logarithmic curve starts with exponential growth". This should be "every logistic curve starts with exponential growth".
- Vaden says "95 degree accuracy". This should be "95 percent accuracy."
- The three main rationalists were Descartes, Spinoza, and Leibniz, and the three main empiricists were Bacon, Locke, and Hume. (Not whatever Vaden said.)
May 25, 2020 • 1h 23min

#3 - Incrementalism vs Revolution: Prison Abolition

The hosts debate incremental reform versus radical change in the context of prison abolition, discussing the potential harms of incremental reforms. They explore critiques of the current prison system and restorative justice as an alternative. The conversation delves into the philosophical implications of incrementalism and revolution, using historical examples to weigh the benefits of each approach.
May 22, 2020 • 1h 30min

#2 - Consequentialism II: Strange Beliefs

The podcast delves into topics like moral decision-making perspectives, differing beliefs, valuing future generations, and ethical implications of long-term consequentialism. Discussions also cover career decision-making, hidden motivations, radical proposals, and the debate between radical reform and incremental change within a liberal community.
