
The Gradient: Perspectives on AI

Latest episodes

Feb 22, 2024 • 1h 59min

Cameron Jones & Sean Trott: Understanding, Grounding, and Reference in LLMs

In episode 112 of The Gradient Podcast, Daniel Bashir speaks to Cameron Jones and Sean Trott.

Cameron is a PhD candidate in the Cognitive Science Department at the University of California, San Diego. His research compares how humans and large language models process language about world knowledge, situation models, and theory of mind.

Sean is an Assistant Teaching Professor in the Cognitive Science Department at the University of California, San Diego. His research interests include probing large language models, ambiguity in languages, how ambiguous words are represented, and pragmatic inference. He previously completed his PhD at UCSD.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:55) Cameron’s background
* (06:00) Sean’s background
* (08:15) Unexpected capabilities of language models and the need for embodiment to understand meaning
* (11:05) Interpreting results of Turing tests, separating what humans and LLMs do when behaving as though they “understand”
* (14:27) Internal mechanisms, interpretability, how we test theories
* (16:40) Languages are efficient, but for whom?
* (17:30) Initial motivations: lexical ambiguity
* (19:20) The balance of meanings across wordforms
* (22:35) Tension between speaker- and comprehender-oriented pressures in lexical ambiguity
* (25:05) Context and potential vs. realized ambiguity
* (27:15) LLM-ology
* (28:30) Studying LLMs as models of human cognition and as interesting objects of study in their own right
* (30:03) Example of explaining away effects
* (33:54) The internalist account of belief sensitivity—behavior and internal representations
* (37:43) LLMs and the False Belief Task
* (42:05) Hypothetical on observed behavior and inferences about internal representations
* (48:05) Distributional Semantics Still Can’t Account for Affordances
* (50:25) Tests of embodied theories and limitations of distributional cues
* (53:54) Multimodal models and object affordances
* (58:30) Language and grounding, other buzzwords
* (59:45) How could we know if LLMs understand language?
* (1:04:50) Reference: as a thing words do vs. ontological notion
* (1:11:38) The Role of Physical Inference in Pronoun Resolution
* (1:16:40) World models and world knowledge
* (1:19:45) EPITOME
* (1:20:20) The different tasks
* (1:26:43) Confounders / “attending” in LM performance on tasks
* (1:30:30) Another hypothetical, on theory of mind
* (1:32:26) How much information can language provide in service of mentalizing?
* (1:35:14) Convergent validity and coherence/validity of theory of mind
* (1:39:30) Interpretive questions about behavior w/r/t theory of mind
* (1:43:35) Does GPT-4 Pass the Turing Test?
* (1:44:00) History of the Turing Test
* (1:47:05) Interrogator strategies and the strength of the Turing Test
* (1:52:15) “Internal life” and personality
* (1:53:30) How should this research impact how we assess / think about LLM abilities?
* (1:58:56) Outro

Links:
* Cameron’s homepage and Twitter
* Sean’s homepage and Twitter
* Research — Language and NLP
* Languages are efficient, but for whom?
* Research — LLM-ology
* Do LLMs know what humans know?
* Distributional Semantics Still Can’t Account for Affordances
* In Cautious Defense of LLM-ology
* Should Psycholinguists use LLMs as “model organisms”?
* (Re)construing Meaning in NLP
* Research — language and grounding, theory of mind, reference [insert other buzzwords here]
* Do LLMs have a “theory of mind”?
* How could we know if LLMs understand language?
* Does GPT-4 Pass the Turing Test?
* Could LMs change language?
* The extended mind and why it matters for cognitive science research
* EPITOME
* The Role of Physical Inference in Pronoun Resolution

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Feb 15, 2024 • 60min

Nicholas Thompson: AI and Journalism

In episode 111 of The Gradient Podcast, Daniel Bashir speaks to Nicholas Thompson.

Nicholas is the CEO of The Atlantic. Previously, he served as editor-in-chief of Wired and editor of NewYorker.com. Nick also co-founded Atavist, which sold to Automattic in 2018. Publications under Nick’s leadership have won numerous National Magazine Awards and Pulitzer Prizes, and one WIRED story he edited was the basis for the movie Argo. Nick is also the co-founder of Speakeasy AI, a software platform designed to foster constructive online conversations about the world’s most pressing problems.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:12) Nick’s path into journalism
* (03:25) The Washington Monthly — a turning point
* (05:09) Perspectives from different positions in the journalism industry
* (08:16) What is great journalism?
* (09:42) Example from The Atlantic
* (11:00) Other examples/pieces of good journalism
* (12:20) Pieces on aging
* (12:56) Mortality and life-force associated with running — Nick’s piece in WIRED
* (15:30) On urgency
* (18:20) The job of an editor
* (22:23) AI in journalism — benefits and limitations
* (26:45) How AI can help writers, experimentation
* (28:40) Examples of AI in journalism and issues: CNET, Sports Illustrated, Nick’s thoughts on how AI should be used in journalism
* (32:20) Speakeasy AI and creating healthy conversation spaces
* (34:00) Details about Speakeasy
* (35:12) Business pivots and business model trouble
* (35:37) Remaining gaps in fixing conversational spaces
* (38:27) Lessons learned
* (40:00) Nick’s optimism about Speakeasy-like projects
* (43:14) Social simulacra, a “Troll Westworld,” algorithmic adjustments in social media
* (46:11) Lessons and wisdom from journalism about engagement, more on engagement in social media
* (50:27) Successful and unsuccessful futures for AI in journalism
* (54:17) Previous warnings about synthetic media, Nick’s perspective on risks from synthetic media in journalism
* (57:00) Stop trying to build AGI
* (59:13) Outro

Links:
* Nicholas’s Twitter and website
* Speakeasy AI
* Writing
* “To Run My Best Marathon at Age 44, I Had to Outrun My Past” in WIRED
* “The year AI actually changes the media business” in NiemanLab’s Predictions for Journalism 2023

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Feb 8, 2024 • 1h 59min

Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs

In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.

Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He was the president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:11) Professor Kambhampati’s background
* (06:07) Explanation in AI
* (18:08) What people want from explanations—vocabulary and symbolic explanations
* (21:23) The realization of new concepts in explanation—analogy and grounding
* (30:36) Thinking and language
* (31:48) Conscious and subconscious mental activity
* (36:58) Tacit and explicit knowledge
* (42:09) The development of planning as a research area
* (46:12) RL and planning
* (47:47) What makes a planning problem hard?
* (51:23) Scalability in planning
* (54:48) LLMs do not perform reasoning
* (56:51) How to show LLMs aren’t reasoning
* (59:38) External verifiers and backprompting LLMs
* (1:07:51) LLMs as cognitive orthotics, language and representations
* (1:16:45) Finding out what kinds of representations an AI system uses
* (1:31:08) “Compiling” system 2 knowledge into system 1 knowledge in LLMs
* (1:39:53) The Generative AI Paradox, reasoning and retrieval
* (1:43:48) AI as an ersatz natural science
* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering
* (1:58:33) Outro

Links:
* Professor Kambhampati’s Twitter and homepage
* Research and Writing — Planning and Human-Aware AI Systems
* A Validation-structure-based theory of plan modification and reuse (1990)
* Challenges of Human-Aware AI Systems (2020)
* Polanyi vs. Planning (2021)
* LLMs and Planning
* Can LLMs Really Reason and Plan? (2023)
* On the Planning Abilities of LLMs (2023)
* Other
* Changing the nature of AI research

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Feb 1, 2024 • 56min

Russ Maschmeyer: Spatial Commerce and AI in Retail

In episode 109 of The Gradient Podcast, Daniel Bashir speaks to Russ Maschmeyer.

Russ is the Product Lead for AI and Spatial Commerce at Shopify. At Shopify, he leads a team that looks at how AI can better empower entrepreneurs, with a particular interest in how image generation can help make the lives of business owners and merchants more productive. He previously led design for multiple services at Facebook and co-founded Primer, an AR-enabled interior design marketplace.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:50) Russ’s background and a hacked Kinect sensor
* (06:00) Instruments and emotion, embodiment and accessibility
* (08:45) Natural language as input and generative AI in creating emotive experiences
* (10:55) Work on search queries and recommendations at Facebook, designing for search
* (16:35) AI in the retail and entrepreneurial landscape
* (19:15) Shopify and AI for business owners
* (22:10) Vision and directions for AI in commerce
* (25:01) Personalized experiences for shopping
* (28:45) Challenges for creating personalized experiences
* (31:49) Intro to spatial commerce
* (34:48) AR/VR devices and spatial commerce
* (37:30) MR and AI for immersive product search
* (41:35) Implementation details
* (48:05) WonkaVision and difficulties for immersive web experiences
* (52:10) Future projects and directions for spatial commerce
* (55:10) Outro

Links:
* Russ’s Twitter and homepage
* With a Wave of the Hand, Improvising on Kinect in The New York Times
* Shopify Spatial Commerce Projects
* MR and AI for immersive product search
* A more immersive web with a simple optical illusion
* What if your room had a reset button?

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jan 25, 2024 • 1h 8min

Benjamin Breen: The Intersecting Histories of Psychedelics and AI Research

In episode 108 of The Gradient Podcast, Daniel Bashir speaks to Professor Benjamin Breen.

Professor Breen is an associate professor of history at UC Santa Cruz specializing in the history of science, medicine, globalization, and the impacts of technological change. He is the author of multiple books, including The Age of Intoxication: Origins of the Global Drug Trade and the more recent Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science, which you can pre-order now.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:05) Professor Breen’s background
* (04:47) End of history narratives / millenarian thinking in AI/technology
* (09:53) Transformative technological change and societal change
* (16:45) AI and psychedelics
* (17:23) Techno-utopianism
* (26:08) Technologies as metaphors for humanity
* (32:34) McLuhanist thinking / brain as a computational machine, Prof. Breen’s skepticism
* (37:13) Issues with overblown narratives about technology
* (42:46) Narratives about transformation and their impacts on progress
* (45:23) The historical importance of today’s AI landscape
* (50:05) International aspects of the history of technology
* (53:13) Doomerism vs optimism, why doomerism is appealing
* (57:58) Automation, meta-skills, jobs — advice for early career
* (1:01:08) LLMs and (history) education
* (1:07:10) Outro

Links:
* Professor Breen’s Twitter and homepage
* Books
* Tripping on Utopia: Margaret Mead, the Cold War, and the Troubled Birth of Psychedelic Science
* The Age of Intoxication: Origins of the Global Drug Trade
* Writings
* Into the mystic
* ‘Alien Jesus’
* Simulating History with ChatGPT

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jan 18, 2024 • 2h 13min

Ted Gibson: The Structure and Purpose of Language

In episode 107 of The Gradient Podcast, Daniel Bashir speaks to Professor Ted Gibson.

Ted is a Professor of Cognitive Science at MIT. He leads the TedLab, which investigates why languages look the way they do; the relationship between culture and cognition, including language; and how people learn, represent, and process language.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:13) Prof Gibson’s background
* (05:33) The computational linguistics community and NLP, engineering focus
* (10:48) Models of brains
* (12:03) Prof Gibson’s focus on behavioral work
* (12:53) How dependency distances impact language processing
* (14:03) Dependency distances and the origin of the problem
* (18:53) Dependency locality theory
* (21:38) The structures languages tend to use
* (24:58) Sentence parsing: structural integrations and memory costs
* (36:53) Reading strategies vs. ordinary language processing
* (40:23) Legalese
* (46:18) Cross-dependencies
* (50:11) Number as a cognitive technology
* (54:48) Experiments
* (1:03:53) Why counting is useful for Western societies
* (1:05:53) The Whorf hypothesis
* (1:13:05) Language as Communication
* (1:13:28) The noisy channel perspective on language processing
* (1:27:08) Fedorenko lab experiments—language for thought vs. communication and Chomsky’s claims
* (1:43:53) Thinking without language, inner voices, language processing vs. language as an aid for other mental processing
* (1:53:01) Dependency grammars and a critique of Chomsky’s grammar proposals, LLMs
* (2:08:48) LLM behavior and internal representations
* (2:12:53) Outro

Links:
* Ted’s lab page and Twitter
* Re-imagining our theories of language
* Research — linguistic complexity and dependency locality theory
* Linguistic complexity: locality of syntactic dependencies (1998)
* The Dependency Locality Theory: A Distance-Based Theory of Linguistic Complexity (2000)
* Consequences of the Serial Nature of Linguistic Input for Sentential Complexity (2005)
* Large-scale evidence of dependency length minimization in 37 languages (2015)
* Dependency locality as an explanatory principle for word order (2020)
* Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex (2022)
* A resource-rational model of human processing of recursive linguistic structure (2022)
* Research — language processing / communication and cross-linguistic universals
* Number as a cognitive technology: Evidence from Pirahã language and cognition (2008)
* The communicative function of ambiguity in language (2012)
* The rational integration of noisy evidence and prior semantic expectations in sentence interpretation (2013)
* Color naming across languages reflects color use (2017)
* How Efficiency Shapes Human Language (2019)

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jan 11, 2024 • 2h 11min

Harvey Lederman: Propositional Attitudes and Reference in Language Models

In episode 106 of The Gradient Podcast, Daniel Bashir speaks to Professor Harvey Lederman.

Professor Lederman is a professor of philosophy at UT Austin. He has broad interests in contemporary philosophy and in the history of philosophy: his areas of specialty include philosophical logic, the Ming dynasty philosopher Wang Yangming, epistemology, and philosophy of language. He has recently been working on incomplete preferences, on trying in the philosophy of language, and on Wang Yangming’s moral metaphysics.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:15) Harvey’s background
* (05:30) Higher-order metaphysics and propositional attitudes
* (06:25) Motivations
* (12:25) Setup: syntactic types and ontological categories
* (25:11) What makes higher-order languages meaningful and not vague?
* (25:57) Higher-order languages corresponding to the world
* (30:52) Extreme vagueness
* (35:32) Desirable features of languages and important questions in philosophy
* (36:42) Higher-order identity
* (40:32) Intuitions about mental content, language, context-sensitivity
* (50:42) Perspectivism
* (51:32) Co-referring names, identity statements
* (55:42) The paper’s approach, “know” as context-sensitive
* (57:24) Propositional attitude psychology and mentalese generalizations
* (59:57) The “good standing” of theorizing about propositional attitudes
* (1:02:22) Mentalese
* (1:03:32) “Does knowledge imply belief?” — when a question does not have good standing
* (1:06:17) Sense, Reference, and Substitution
* (1:07:07) Fregeans and the principle of Substitution
* (1:12:12) Follow-up work to this paper
* (1:13:39) Do Language Models Produce Reference Like Libraries or Like Librarians?
* (1:15:02) Bibliotechnism
* (1:19:08) Inscriptions and reference, what it takes for something to refer
* (1:22:37) Derivative and basic reference
* (1:24:47) Intuition: n-gram models and reference
* (1:28:22) Meaningfulness in sentences produced by n-gram models
* (1:30:40) Bibliotechnism and LLMs, disanalogies to n-grams
* (1:33:17) On other recent work (vector grounding, do LMs refer?, etc.)
* (1:40:12) Causal connections and reference, how bibliotechnism makes good on the meanings of sentences
* (1:45:46) RLHF, sensitivity to truth and meaningfulness
* (1:48:47) Intelligibility
* (1:50:52) When LLMs produce novel reference
* (1:53:37) Novel reference vs. find-replace
* (1:56:00) Directionality example
* (1:58:22) Human intentions and derivative reference
* (2:00:47) Between bibliotechnism and agency
* (2:05:32) Where do invented names / novel reference come from?
* (2:07:17) Further questions
* (2:10:04) Outro

Links:
* Harvey’s homepage and Twitter
* Papers discussed
* Higher-order metaphysics and propositional attitudes
* Perspectivism
* Sense, Reference, and Substitution
* Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Jan 4, 2024 • 1h 30min

Eric Jang: AI is Good For You

In episode 105 of The Gradient Podcast, Daniel Bashir speaks to Eric Jang.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:25) Updates since Eric’s last interview
* (06:07) The problem space of humanoid robots
* (08:42) Motivations for the book “AI is Good for You”
* (12:20) Definitions of AGI
* (14:35) ~ AGI timelines ~
* (16:33) Do we have the ingredients for AGI?
* (18:58) Rediscovering old ideas in AI and robotics
* (22:13) Ingredients for AGI
* (22:13) Artificial Life
* (25:02) Selection at different levels of information—intelligence at different scales
* (32:34) AGI as a collective intelligence
* (34:53) Human in the loop learning
* (37:38) From getting correct answers to doing things correctly
* (40:20) Levels of abstraction for modeling decision-making — the neurobiological stack
* (44:22) Implementing loneliness and other details for AGI
* (47:31) Experience in AI systems
* (48:46) Asking for Generalization
* (49:25) Linguistic relativity
* (52:17) Language vs. complex thought and Fedorenko experiments
* (54:23) Efficiency in neural design
* (57:20) Generality in the human brain and evolutionary hypotheses
* (59:46) Embodiment and real-world robotics
* (1:00:10) Moravec’s Paradox and the importance of embodiment
* (1:05:33) How embodiment fits into the picture—in verification vs. in learning
* (1:10:45) Nonverbal information for training intelligent systems
* (1:11:55) AGI and humanity
* (1:12:20) The positive future with AGI
* (1:14:55) The negative future — technology as a lever
* (1:16:22) AI in the military
* (1:20:30) How AI might contribute to art
* (1:25:41) Eric’s own work and a positive future for AI
* (1:29:27) Outro

Links:
* Eric’s book
* Eric’s Twitter and homepage

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Dec 28, 2023 • 1h 36min

2023 in AI, with Nathan Benaich

In episode 104 of The Gradient Podcast, Daniel Bashir speaks to Nathan Benaich.

Nathan is Founder and General Partner at Air Street Capital, a VC firm focused on investing in AI-first technology and life sciences companies. Nathan runs a number of communities focused on AI, including the Research and Applied AI Summit, and leads Spinout.fyi to improve the creation of university spinouts. Nathan co-authors the State of AI Report.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (02:00) Updates in Nathan World — Air Street’s second fund, spinouts
* (07:30) Events: Research and Applied AI Summit, State of AI Report launches
* (09:50) The State of AI: main messages, the increasing role of subject matter experts
* Research
* (14:13) Open and closed-source
* (17:55) Benchmarking and evaluation, small/large models and industry verticals
* (21:10) “Vibes” in LLM evaluation
* (24:00) Codegen models, personalized AI, curriculum learning
* (26:20) The exhaustion of human-generated data, lukewarm content, synthetic data
* (29:50) Opportunities for AI applications in the natural sciences
* (35:15) Reinforcement Learning from Human Feedback and alternatives
* (38:30) Industry
* (39:00) ChatGPT and productivity
* (42:37) General app wars, ChatGPT competitors
* (45:50) Compute—demand, supply, competition
* (50:55) Export controls and geopolitics
* (54:45) Startup funding and compute spend
* (59:15) Politics
* (59:40) Calls for regulation, regulatory divergence
* (1:04:40) AI safety
* (1:07:30) Nathan’s perspective on regulatory approaches
* (1:12:30) The UK’s early access to frontier models, standards setting, regulation difficulties
* (1:17:20) Jailbreaking, constitutional AI, robustness
* (1:20:50) Predictions!
* (1:25:00) Generative AI misuse in elections and politics (and, this prediction coming true in Bangladesh)
* (1:26:50) Progress on AI governance
* (1:30:30) European dynamism
* (1:35:08) Outro

Links:
* Nathan’s homepage and Twitter
* The 2023 State of AI Report
* Bringing Dynamism to European Defense
* A prediction coming true: How AI is disrupting Bangladesh’s election
* Air Street Capital is hiring a full-time Community Lead!

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Dec 21, 2023 • 46min

Kathleen Fisher: DARPA and AI for National Security

In episode 103 of The Gradient Podcast, Daniel Bashir speaks to Dr. Kathleen Fisher.

As the director of DARPA’s Information Innovation Office (I2O), Dr. Kathleen Fisher oversees a portfolio that includes most of the agency’s AI-related research and development efforts, including the recent AI Forward initiative. AI Forward explores new directions for AI research that will result in trustworthy systems for national security missions. This summer, roughly 200 participants from the commercial sector, academia, and the U.S. government attended workshops that generated ideas to inform DARPA’s next phase of AI exploratory projects. Dr. Fisher previously served as a program manager in I2O from 2011 to 2014. As a program manager, she conceptualized, created, and executed programs in high-assurance computing and machine learning, including Probabilistic Programming for Advancing Machine Learning (PPAML), which made building ML applications easier. She was also a co-author of a recent paper about the threats posed by large language models.

Since 2018, DARPA has dedicated over $2 billion in R&D funding to AI research. The agency has been generating groundbreaking research and development for 65 years, leading to game-changing military capabilities and icons of modern society, from initiating the research field that produced self-driving cars to developing the technology that led to Apple’s Siri.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (01:30) Kathleen’s background
* (05:05) Intersections between programming languages and AI
* (07:15) Neuro-symbolic AI, trade-offs between flexibility and guarantees
* (09:45) History of DARPA and the Information Innovation Office (I2O)
* (13:55) DARPA’s perspective on research
* (17:10) Galvanizing a research community
* (20:06) DARPA’s recent investments in AI and AI Forward
* (26:35) Dual-use nature of generative AI, identifying and mitigating security risks, Kathleen’s perspective on short-term and long-term risk (note: the “Gradient podcast” Kathleen mentions is from Last Week in AI)
* (30:10) Concerns about deployment and interaction
* (32:20) Outcomes from AI Forward workshops and themes
* (36:10) Incentives in building and using AI technologies, friction
* (38:40) Interactions between DARPA and other government agencies
* (40:09) Future research directions
* (44:04) Ways to stay up to date on DARPA’s work
* (45:40) Outro

Links:
* DARPA I2O website
* Probabilistic Programming for Advancing Machine Learning (PPAML) (Archived)
* Assured Neuro Symbolic Learning and Reasoning (ANSR)
* AI Cyber Challenge
* AI Forward
* Identifying and Mitigating the Security Risks of Generative AI Paper
* FoundSci Solicitation
* FACT Solicitation
* Semantic Forensics (SemaFor)
* GARD Open Source Resources
* I2O Newsletter signup

Get full access to The Gradient at thegradientpub.substack.com/subscribe
