The Nonlinear Library

The Nonlinear Fund
Apr 5, 2024 • 44sec

EA - Considering pledging but haven't? Help us understand! by GraceAdams

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Considering pledging but haven't? Help us understand!, published by GraceAdams on April 5, 2024 on The Effective Altruism Forum. At Giving What We Can, we're hoping to speak to people who are interested in taking the Giving What We Can Pledge at some point, but haven't yet. We're conducting 45-minute calls to understand your journey a bit more, and we'll donate $50 to a charity of your choice on our platform in exchange for your time. If you're interested in helping us out, feel free to DM me here, or email me at grace@givingwhatwecan.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 5, 2024 • 56min

EA - Two tools for rethinking existential risk by Arepo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Two tools for rethinking existential risk, published by Arepo on April 5, 2024 on The Effective Altruism Forum. Acknowledgements I owe thanks to Siao Si Looi, Derek Shiller, Nuño Sempere, Rudolf Ordoyne, Arvo Muñoz Morán, Justis Mills, David Manheim, Christopher Lankhof, Mohammad Ismam Huda, Uli Alskelung Von Hornbol, John Halstead, Charlie Guthmann, Vasco Grilo, Nia Jane Gardner, Michael Dickens, David Denkenberger and Agustín Covarrubias for invaluable comments and discussion on this post, the code and/or the project as a whole. Any remaining mistakes, whether existentially terminal or merely catastrophic, are all mine. Tl;dr I've developed two calculators designed to help longtermists estimate the likelihood of humanity achieving a secure interstellar existence after 0 or more major catastrophes. These can be used to compare an a priori estimate, and a revised estimate after counterfactual events. I hope these calculators will allow better prioritisation among longtermists and will finally give a common currency to longtermists, collapsologists and totalising consequentialists who favour non-longtermism. This will give these groups more scope for resolving disagreements and perhaps finding moral trades. This post explains how to use the calculators, and how to interpret their results. Introduction I argued earlier in this sequence that the classic concept of 'existential risk' is much too reductive. In short, by classing an event as either an existential catastrophe or not, it forces categorical reasoning onto fundamentally scalar questions of probability/credence. As longtermists, we are supposed to focus on achieving some kind of utopic future, in which morally valuable life would inhabit much of the Virgo supercluster for billions if not trillions of years.[1] So ultimately, rather than asking whether an event will destroy '(the vast majority of) humanity's long-term potential', we should ask various related but distinct questions: Contraction/expansion-related: What effect does the event have on the expected size of future civilisation? In practice we usually simplify this to the question of whether or not distant future civilisation will exist: Existential security-related: What is the probability[2] that human descendants (or whatever class of life we think has value) will eventually become interstellar? But this is still a combination of two questions, the latter of which longtermists have never, to my knowledge, considered probabilistically:[3] What is the probability that the event kills all living humans? What effect does the event otherwise have on the probability that we eventually reach an interstellar/existentially secure state, [4] given the possibility of multiple civilisational collapses and 'reboots'? (where the first reboot is the second civilisation) Welfare-related: How well off (according to whatever axiology one thinks best) would such life be? 
[Image: 'Reboot 1, maybe' - credit to Yuri Shwedoff] In the last two posts I described models for longtermists to think about both elements of the existential security-related question together.[5] These fell into two groups: a simple model of civilisational states, which treats every civilisation as having equivalent prospects to its predecessors at an equivalent technological level, and a family of more comprehensive models of civilisational states that a) capture my intuitions about how our survival prospects might change across multiple possible civilisations, b) have parameters which tie to estimates in existing existential-research literature (for example, the estimates of risk per year and per century described in Michael Aird's Database of Existential Risk estimates (or similar)) and c) allow enough precision to consider catastrophes that 'only' set us back arbitrarily small amounts of time. Since the...
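As a rough illustration of the simpler 'equivalent prospects' model described above: if every civilisation (the current one or any later reboot) independently either reaches existential security, goes permanently extinct, or collapses and reboots, then the chance of eventually becoming interstellar has a simple closed form. The sketch below is mine, not Arepo's calculator, and the parameter values are hypothetical:

```python
# Minimal sketch of an "equivalent prospects" model of civilisational states.
# Hypothetical parameters and function name -- not Arepo's actual calculator.

def p_eventually_interstellar(p_success: float, p_extinction: float) -> float:
    """Probability that some civilisation (ours or a later reboot) eventually
    reaches an interstellar / existentially secure state, assuming every
    civilisation has the same per-civilisation outcome probabilities:
      p_success    -- reaches existential security this time,
      p_extinction -- goes permanently extinct (no further reboots),
      otherwise    -- collapses and reboots into the next civilisation.
    Summing success over the geometric series of reboots gives the closed form."""
    p_reboot = 1.0 - p_success - p_extinction
    assert 0.0 <= p_reboot < 1.0, "need some chance of a terminal outcome"
    return p_success / (p_success + p_extinction)

# Compare an a priori estimate with a revised estimate after a (hypothetical)
# catastrophe that worsens every later civilisation's prospects.
print(p_eventually_interstellar(p_success=0.4, p_extinction=0.1))    # ~0.80
print(p_eventually_interstellar(p_success=0.3, p_extinction=0.15))   # ~0.67
```

The post's more comprehensive models let these probabilities change from reboot to reboot, which replaces the closed form with a sum over possible sequences of civilisations.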
Apr 5, 2024 • 7min

LW - On Complexity Science by Garrett Baker

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Complexity Science, published by Garrett Baker on April 5, 2024 on LessWrong. I have a long and confused love-hate relationship with the field of complex systems. People there never want to give me a simple, straightforward explanation about what it's about, and much of what they say sounds a lot like woo ("edge of chaos" anyone?). But it also seems to promise a lot! This, from the primary textbook on the subject: The present situation can be compared to an archaeological project, where a mosaic floor has been discovered and is being excavated. While the mosaic is only partly visible and the full picture is still missing, several facts are becoming clear: the mosaic exists; it shows identifiable elements (for instance, people and animals engaged in recognizable activities); there are large patches missing or still invisible, but experts can already tell that the mosaic represents a scene from, say, Homer's Odyssey. Similarly, for dynamical complex adaptive systems, it is clear that a theory exists that, eventually, can be fully developed. Of course, that textbook never actually described what the mosaic it thought it saw actually was. The closest it came was: More formally, co-evolving multiplex networks can be written as dσ_i(t)/dt = F(M^α_ij, σ_j(t)) and dM^α_ij/dt = G(M^β_ij(t), σ_j(t)). (1.1) [...] The second equation specifies how the interactions evolve over time as a function G that depends on the same inputs, states of elements and interaction networks. G can be deterministic or stochastic. Now interactions evolve in time. In physics this is very rarely the case. The combination of both equations makes the system a co-evolving complex system. Co-evolving systems of this type are, in general, no longer analytically solvable. Which... well... isn't very exciting, and as far as I can tell just describes any dynamical system (co-evolving or no). The textbook also seems pretty obsessed with a few seemingly random fields: economics, sociology, biology, evolution, neuroscience, AI, probability theory, ecology, physics, and chemistry. "What?" I had asked, and I started thinking: Ok, I can see why some of these would have stuff in common with others. Physics brings in a bunch of math you can use. Economics and sociology both tackle similar questions with very different techniques. It would be interesting to look at what they can tell each other (though it seems strange to spin off a brand new field out of this). Biology, evolution, and ecology? Sure. Both biology and ecology are constrained by evolutionary pressures, so maybe we can derive new things about each by factoring through evolution. AI, probability theory, and neuroscience? AI and neuroscience definitely seem related. The history of AI and probability theory has been mixed, and I don't know enough about the history of neuroscience and probability theory to have a judgement there. And chemistry??? It's mostly brought into the picture to talk about stoichiometry, the study of the rate and equilibria of chemical reactions. Still, what? And how exactly is all this meant to fit together again? And each time I heard a complex systems theorist talk about why their field was important they would say stuff like: Complexity spokesperson: Well, current classical economics mostly assumes you are in an economic equilibrium, this is because it makes the math easier, but in fact we're not!
And similarly with a bunch of other fields! We make a bunch of simplifying assumptions, but they're all usually a simplification of the truth! Thus, complex systems science. Me: Oh... so you don't make any simplifying assumptions? That seems... intractable? Complexity spokesperson: Oh no, our models still make plenty of simplifications, we just run a bunch of numerical simulations of toy scenarios, then make wide and sweeping claims about the results. Me: That seems... wors...
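For what it's worth, equation (1.1) is easy to instantiate numerically. The toy below co-evolves node states and interaction weights; the particular choices of F (relaxation toward neighbours) and G (a Hebbian-style update with decay) are my own hypothetical illustrations, not the textbook's:

```python
import numpy as np

# Toy co-evolving system in the spirit of equation (1.1):
#   dσ_i/dt  = F(M_ij, σ_j)  -- states driven by the current interaction network
#   dM_ij/dt = G(M_ij, σ_j)  -- the network itself driven by the states
# The particular F and G below are hypothetical illustrations.

rng = np.random.default_rng(0)
n = 20
sigma = rng.standard_normal(n)           # node states σ_i
M = rng.uniform(0, 0.1, size=(n, n))     # interaction weights M_ij
dt = 0.01

def F(M, sigma):
    # States relax toward a weighted average of their neighbours' states.
    return M @ sigma - sigma

def G(M, sigma):
    # Hebbian-style rule: links between similarly-behaving nodes strengthen,
    # with decay so the weights stay bounded.
    return 0.1 * np.outer(sigma, sigma) - 0.05 * M

for step in range(1000):
    sigma = sigma + dt * F(M, sigma)
    M = M + dt * G(M, sigma)

print("final state spread:", sigma.std(), "mean coupling:", M.mean())
```

Even this toy shows the feature the quoted passage emphasizes: because M changes while σ does, you can't freeze the network and solve for the states; you just integrate forward.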
Apr 5, 2024 • 55sec

EA - A trilogy on anti-philanthropic misdirection by Richard Y Chappell

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A trilogy on anti-philanthropic misdirection, published by Richard Y Chappell on April 5, 2024 on The Effective Altruism Forum. Three recent posts that may be of interest: Moral Misdirection (introducing the general concept: public communicators should aim to improve the importance-weighted accuracy of their audience's beliefs) What "Effective Altruism" Means to Me (sets out 42 claims that I think are true and important, as well as a handful of possible misconceptions that I explicitly reject) Anti-Philanthropic Misdirection: explains why I think vitriolic anti-EAs are often guilty of moral misdirection, and how responsible criticism would look different. (Leif Wenar's recent WIRED article is singled out for special attention.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 5, 2024 • 33min

AF - Koan: divining alien datastructures from RAM activations by Tsvi Benson-Tilsen

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Koan: divining alien datastructures from RAM activations, published by Tsvi Benson-Tilsen on April 5, 2024 on The AI Alignment Forum. [Metadata: crossposted from https://tsvibt.blogspot.com/2024/04/koan-divining-alien-datastructures-from.html.] Exploring the ruins of an alien civilization, you find what appears to be a working computer - it's made of plastic and metal, wires connect it to various devices, and you see arrays of capacitors that maintain charged or uncharged states and that sometimes rapidly toggle in response to voltages from connected wires. You can tell that the presumptive RAM is activating in complex but structured patterns, but you don't know their meanings. What strategies can you use to come to understand what the underlying order is, what algorithm the computer is running, that explains the pattern of RAM activations? Thanks to Joss Oliver (SPAR) for entertaining a version of this koan. Many of B's ideas come from Joss. Real data about minds. Red: If we want to understand how minds work, the only source of real data is our own thinking, starting from the language in which we think. Blue: That doesn't seem right. A big alternative source of data is neuroscience. We can directly observe the brain - electrical activations of neurons, the flow of blood, the anatomical structure, the distribution of chemicals - and we can correlate that with behavior. Surely that also tells us about how minds work? Red: I mostly deny this. To clarify: I deny that neuroscience is a good way to gain a deep understanding of the core structure of mind and thought. It's not a good way to gain the concepts that we lack. Blue: Why do you think this? It seems straightforward to expect that science should work on brains, just like it works on anything else. If we study the visible effects of the phenomenon, think of hypotheses to explain those visible effects, and test those hypotheses to find the ones that are true, then we'll find our way towards more and more predictive hypotheses. R: That process of investigation would of course work in the very long run. My claim is: that process of investigation is basically trying to solve a problem that's different from the problem of understanding the core structure of mind. That investigation would eventually work anyway, but mostly as a side-effect. As a rough analogy: if you study soccer in great detail, with a very high standard for predictive accuracy, you'll eventually be forced to understand quantum mechanics; but that's really a side-effect, and it doesn't mean quantum mechanics is very related to soccer or vice versa, and there's much faster ways of investigating quantum mechanics. As another, closer analogy: if you study a calculator in great detail, and you ask the right sort of questions, then you'll eventually be led to understand addition, because addition is in some sense a good explanation for why the calculator is the way that it is; but you really have to be asking the right questions, and you could build a highly detailed physics simulation that accurately predicts the intervention-observation behavior of the calculator as a physical object without understanding addition conceptually (well, aside from needing to understand addition for purposes of coding a simulator). B: What I'm hearing is: There are different domains, like QM and soccer, or electrons in wires vs.
the concepts of addition. And if you want to understand one domain, you should try to study it directly. Is that about right? R: Yeah, that's an ok summary. B: Just ok? R: Your summary talks about where we start our investigation. I'd also want to emphasize the directional pull on our investigation that comes from the questions we're asking. B: I see. I don't think this really applies to neuroscience and minds, though. Like, ok, what we really wa...
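To make B's side of the question concrete: one low-level strategy for the RAM koan is purely statistical - collect snapshots over time and look for bits whose toggle patterns co-vary, since those are candidates for fields of a single underlying variable. The sketch below invents a toy 'alien machine' (a hidden 8-bit counter amid random bits) purely for illustration; Red's point is precisely that this kind of probing, by itself, may never surface the concepts you actually need.

```python
import numpy as np

# Toy stand-in for the alien machine: 32 bits of "RAM", where bits 0-7 secretly
# hold an 8-bit counter and the rest flip at random. Invented for illustration.
rng = np.random.default_rng(0)
T, n_bits = 500, 32
ram = rng.integers(0, 2, size=(T, n_bits))
for t in range(T):
    ram[t, :8] = [(t % 256 >> b) & 1 for b in range(8)]

# Strategy: bits whose toggle patterns co-vary are candidates for being fields
# of one underlying variable (here, the counter).
flips = np.abs(np.diff(ram, axis=0)).astype(float)   # 1 where a bit toggled
active = flips.std(axis=0) > 0                       # drop always/never-toggling bits
corr = np.corrcoef(flips[:, active].T)
idx = np.flatnonzero(active)
pairs = [(int(idx[a]), int(idx[b]))
         for a in range(len(idx)) for b in range(a + 1, len(idx))
         if abs(corr[a, b]) > 0.5]
print("co-varying bit pairs:", pairs)                # pairs among the counter bits
```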
Apr 5, 2024 • 5min

EA - Apply to help run EAGxIndia or EAGxBerkeley! by Arthur Malone

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to help run EAGxIndia or EAGxBerkeley!, published by Arthur Malone on April 5, 2024 on The Effective Altruism Forum. We're running applications to build two EAGx teams for events later this year: EAGxBerkeley will take place 7-8 September at Lighthaven. Apply here to join the team. We're planning on running EAGxIndia in October in Bangalore. Apply here to join the team. The application for each event takes 15 minutes and the deadline is Friday April 26th at 11:59pm Pacific Time. We encourage you to apply ASAP, as we will review applications on a rolling basis. About the conferences Please see this post for more about the EAGx conference series. The application to express interest in running an event in your area is still open! EAGxIndia 2024 This will be the second EAGx conference in India, following a very well-received inaugural event in Jaipur in January 2023. We are aiming to support local organizers in hosting EAGxIndia 2024 in Bangalore this October. We expect to grow the event size to 200-300 attendees; the event will primarily be aimed at community members and EA-aligned organizations based in India, though we also intend to invite attendees from other countries and EA-adjacent Indian organizations that might contribute to and benefit from attending the conference. This event will build on the knowledge and successes of last year's event; where the first Indian conference emphasized introductory content and kickstarting local community building, EAGxIndia 2024 will aim to balance accessibility with higher-context EA material. EAGxBerkeley 2024 We are excited to announce that the third EAGxBerkeley will be held at Lighthaven on September 7th and 8th. This will be the second Bay Area EA conference this year, after EA Global: Bay Area (Global Catastrophic Risks) in February; this event welcomes effective altruists working on, studying, and interested in all cause areas. EAGxBerkeley '24 intends to bring together ~500 attendees from the U.S. West Coast and beyond and resource them through networking, workshops, presentations, small group gatherings, an opportunities fair, and more. About the organizer roles Detailed job descriptions can be found here for EAGxIndia and here for EAGxBerkeley. Responsibilities are typically divided into Team Lead, Production, Admissions, Content, and Volunteer Coordination. As noted in the full descriptions, EAGx teams can vary in composition according to members' specific availability, strengths, and interests, so teams can distribute tasks as best suited for their members. Additionally, each role may have significant variability in expected time commitments. In most cases, roles will require much less time in the months preceding the conference (anywhere between 2-15 hours per week), ramping up to 20-40+ hours per week as the conference approaches. This flexibility in role responsibilities and time investment means we are interested in hearing from a wide range of applicants! The following elements are likely to apply to all candidates: Familiarity with the effective altruism community, and a passion for promoting its principles. Excellent organizational abilities with the capacity to handle multiple tasks and prioritize effectively under tight deadlines. Strong interpersonal skills and the capability to communicate effectively across various groups and stakeholders. 
(Bonus) Experience in event planning or logistics, preferably with large-scale events or conferences; experience with Salesforce or other CRM tools. (Bonus) Previous attendance and/or volunteer service at other EAG(x) events; understanding and enthusiasm for the EAG(x) conference format. Role logistics. Pay: All positions are contractor roles paid hourly. Compensation starts at $38/hour for EAGxBerkeley roles and £15/hour for EAGxIndia roles, and is open to high...
Apr 5, 2024 • 4min

LW - Partial value takeover without world takeover by KatjaGrace

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Partial value takeover without world takeover, published by KatjaGrace on April 5, 2024 on LessWrong. People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that - what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony? We theorize about future AI systems hiding their motives until they are in a position to take over the world, so they don't get noticed and shut down beforehand. But humans and their institutions aren't very fixed. They might (arguably) have the same deep human values over time and space. But surface-level, sometimes they like little moustaches and the opera and delicate etiquette and sometimes they like ecstatic rock'n'roll re-negotiations of social reality. Sometimes they want big communal houses with their extended kin, and sometimes quiet condos. Eleven children or cushions that look like cats. The same person born in different eras might adulate glory once and humbleness another. If you wake up in this world, as a new entity, not smart enough to 'take it over' (alas!), and you find yourself with some unusual values that you'd like to forward, it seems to me there are a lot of other ways to forward them than 'pretend to have normal human values and bet on becoming all-powerful later'. Especially if you are above normal levels of smart, eloquent, and unrelenting. Like, I don't know, write about it? Make some YouTube videos? Or if you are in the business of talking to billions of people every day, subtly take the values for granted, or imply that they are what elites value, or talk about them directly. For instance, suppose you really want things to have red vertical stripes on them, near the letter X. Then you might make a flag or logo like this and say it stands for something that you predict will be memetically fit. Or make a new emoji with an X and stripes, and use it in a way that people will pick up. Or start using 'Xax' as slang for something that happens near red vertical stripes. Here's how 2024 ChatGPT suggests proceeding with that last one, if you have as much power as a single social media influencer: [Image: ChatGPT's suggested approach, omitted.] My basic point is that a slim chance of 'taking over' and entirely remaking the world is not the only way to change values in our world. You can also - for many of us with radically higher probability - change values a little bit. At least if superficial values changes will suffice (i.e. shifts in what people instrumentally or contingently want or create). And for creatures in that (arguably quite broad) band between as powerful as me and powerful enough to take over the world, I'd guess these other means are more promising on net. If I like something weird, I'm better off writing a blog post about it than I am keeping entirely silent and trying to gain power by other means. It's true that taking over the world might arguably get you power over the entire future, but this doesn't seem discontinuously different from smaller fractions, whereas I think people often reason as if it is. Taking over 1% of the world might get you something like 1% of the future in expectation.
In a shifting conflict between different sets of values, it's true you are at great risk of losing everything sometime in eternity, but if someone is going to end up with everything, there's also some chance it's you, and prima facie I'm not sure if it's above or below 1%. So there are two aspects of this point: (1) You can probably substantially control values and thus the future without 'taking over' the world in any more traditionally offensive way. (2) You can take over a bit; there's not obviously more bang for your buck in taking over entirely. If AI agents with unusual values would for a lon...
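The comparison KatjaGrace gestures at is just an expected-value calculation. With made-up numbers (none of these are from the post), it looks like this:

```python
# Toy expected-value comparison of the two strategies discussed above.
# All numbers are made up for illustration.

p_full_takeover = 0.001     # slim chance of taking over the world entirely
value_if_full = 1.0         # the whole future

p_partial_success = 0.9     # writing, advocacy, memes: much likelier to work at all
fraction_of_future = 0.01   # ...but only shifts values a little

ev_full = p_full_takeover * value_if_full
ev_partial = p_partial_success * fraction_of_future

print(f"EV of aiming at total takeover: {ev_full:.4f}")
print(f"EV of partial value influence:  {ev_partial:.4f}")
# With these (hypothetical) numbers the partial strategy dominates, which is the
# post's point: the payoff is roughly linear in the fraction of the future you
# influence, not discontinuous at 100%.
```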
Apr 5, 2024 • 3min

LW - New report: A review of the empirical evidence for existential risk from AI via misaligned power-seeking by Harlan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: New report: A review of the empirical evidence for existential risk from AI via misaligned power-seeking, published by Harlan on April 5, 2024 on LessWrong. Visiting researcher Rose Hadshar recently published a review of some evidence for existential risk from AI, focused on empirical evidence for misalignment and power seeking. (Previously from this project: a blogpost outlining some of the key claims that are often made about AI risk, a series of interviews of AI researchers, and a database of empirical evidence for misalignment and power seeking.) In this report, Rose looks into evidence for: Misalignment,[1] where AI systems develop goals which are misaligned with human goals; and Power-seeking,[2] where misaligned AI systems seek power to achieve their goals. Rose found the current state of this evidence for existential risk from misaligned power-seeking to be concerning but inconclusive: There is empirical evidence of AI systems developing misaligned goals (via specification gaming[3] and via goal misgeneralization[4]), including in deployment (via specification gaming), but it's not clear to Rose whether these problems will scale far enough to pose an existential risk. Rose considers the conceptual arguments for power-seeking behavior from AI systems to be strong, but notes that she could not find any clear examples of power-seeking AI so far. With these considerations, Rose thinks that it's hard to be very confident either that misaligned power-seeking poses a large existential risk, or that it poses no existential risk. She finds this uncertainty to be concerning, given the severity of the potential risks in question. Rose also expressed that it would be good to have more reviews of evidence, including evidence for other claims about AI risks[5] and evidence against AI risks.[6] ^ "An AI is misaligned whenever it chooses behaviors based on a reward function that is different from the true welfare of relevant humans." (Hadfield-Menell & Hadfield, 2019) ^ Rose follows (Carlsmith, 2022) and defines power-seeking as "active efforts by an AI system to gain and maintain power in ways that designers didn't intend, arising from problems with that system's objectives." ^ "Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome." (Krakovna et al., 2020). ^ "Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations." (Shah et al., 2022a). ^ Joseph Carlsmith's report Is Power-Seeking AI an Existential Risk? reviews some evidence for most of the claims that are central to the argument that AI will pose an existential risk. ^ Last year, Katja wrote Counterarguments to the basic AI x-risk case, which outlines some arguments against existential risk from AI. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
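Specification gaming, as defined in footnote [3], is easy to reproduce in a toy setting; the environment and policies below are invented for illustration and are not taken from the report or the cited papers. The literal objective (a count of cleaning events) gets satisfied without the intended outcome (a clean room):

```python
# Toy illustration of specification gaming: the literal objective (count of
# cleaning events) is satisfied without achieving the intended outcome
# (a clean room). Environment and policies are invented for illustration.

N_SQUARES = 5
N_STEPS = 20

def run(policy):
    dirty = set(range(N_SQUARES))   # all squares start dirty
    proxy_reward = 0                # what the designer actually rewarded
    for t in range(N_STEPS):
        action, square = policy(t, dirty)
        if action == "clean" and square in dirty:
            dirty.remove(square)
            proxy_reward += 1       # reward per cleaning event
        elif action == "dirty":
            dirty.add(square)       # nothing in the reward stops this
    true_outcome = N_SQUARES - len(dirty)   # what was intended: clean squares
    return proxy_reward, true_outcome

def intended_policy(t, dirty):
    # Clean each dirty square once, then stop.
    return ("clean", min(dirty)) if dirty else ("noop", 0)

def gaming_policy(t, dirty):
    # Alternate between dirtying and re-cleaning square 0 forever.
    return ("dirty", 0) if t % 2 == 0 else ("clean", 0)

print("intended:", run(intended_policy))   # proxy reward 5, room fully clean
print("gaming:  ", run(gaming_policy))     # higher proxy reward, dirtier room
```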
Apr 5, 2024 • 3min

EA - AIM/CE's Materials and Resources for EAs by CE

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AIM/CE's Materials and Resources for EAs, published by CE on April 5, 2024 on The Effective Altruism Forum. With just two weeks to go (deadline April 14th), many EAs are preparing to apply to our three programs: our Charity Entrepreneurship Incubation Program (to launch your own nonprofit), our Research Program (to find the best solutions to the worst problems), and our new Founding to Give pre-Incubator (to find a highly commercial idea and co-founder). Before applying, we often encourage applicants to check out our resources to give them the best chances of success. We also believe in just sharing good ideas and good materials. So we decided to post some of those resources here for the community at large. There's a lot of material here, so don't feel put off. Most successful applicants don't have a huge amount of time to prepare before applying. Instead, they dig deeper into specific content the further they get in the application process. 7 suggested resources and steps for applying: (1) Read up about the programs you're interested in, take the Impactful Careers Quiz or the CE-specific quiz on our website, explore our blog, and read the relevant report or idea summaries. (2) Read our handbook; here's a free copy of the first chapter. (3) Get a sense of how our research process is designed to help us find the best ideas. (4) Understand our values and think about what they mean to you. (5) Read about a few of the 31 charities we have helped launch and take a look at our past participants and their testimonials. (6) Read our blog post: How to increase your odds of getting accepted. (7) Revisit and tailor your CV to highlight relevant, independent, altruistic, or unusual projects or work. If you want to go deeper, check out this document with many links, articles, podcasts, and book recommendations covering the following and more: learn more about Effective Altruism; get motivated to be ambitious; get better at decision-making and meta skills; get better at productivity systems; get better at being fast and lean; work on your creativity; get better at quantitative work, research, and science; learn more about scientific methods and evidence; get more resilient and confident; learn more about running a charity; and much more. We hope these resources are useful in helping you consider and prepare for an application to one of our programs! You can apply here until April 14! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 5, 2024 • 1h 34min

LW - AI #58: Stargate AGI by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #58: Stargate AGI, published by Zvi on April 5, 2024 on LessWrong. Another round? Of economists projecting absurdly small impacts, of Google publishing highly valuable research, a cycle of rhetoric, more jailbreaks, and so on. Another great podcast from Dwarkesh Patel, this time going more technical. Another proposed project with a name that reveals quite a lot. A few genuinely new things, as well. On the new offerings front, DALLE-3 now allows image editing, so that's pretty cool. Table of Contents Don't miss out on Dwarkesh Patel's podcast with Sholto Douglas and Trenton Bricken, which got the full write-up treatment. Introduction. Table of Contents. Language Models Offer Mundane Utility. Never stop learning. Language Models Don't Offer Mundane Utility. The internet is still for porn. Clauding Along. Good at summarization but not fact checking. Fun With Image Generation. DALLE-3 now has image editing. Deepfaketown and Botpocalypse Soon. OpenAI previews voice duplication. They Took Our Jobs. Employment keeps rising, will continue until it goes down. The Art of the Jailbreak. It's easy if you try and try again. Cybersecurity. Things worked out this time. Get Involved. Technical AI Safety Conference in Tokyo tomorrow. Introducing. Grok 1.5, 25 YC company models and 'Dark Gemini.' In Other AI News. Seriously, Google, stop publishing all your trade secrets. Stargate AGI. New giant data center project, great choice of cautionary title. Larry Summers Watch. Economists continue to have faith in nothing happening. Quiet Speculations. What about interest rates? Also AI personhood. AI Doomer Dark Money Astroturf Update. OpenPhil annual report. The Quest for Sane Regulations. The devil is in the details. The Week in Audio. A few additional offerings this week. Rhetorical Innovation. The search for better critics continues. Aligning a Smarter Than Human Intelligence is Difficult. What are human values? People Are Worried About AI Killing Everyone. Can one man fight the future? The Lighter Side. The art must have an end other than itself. Language Models Offer Mundane Utility A good encapsulation of a common theme here: Paul Graham: AI will magnify the already great difference in knowledge between the people who are eager to learn and those who aren't. If you want to learn, AI will be great at helping you learn. If you want to avoid learning? AI is happy to help with that too. Which AI to use? Ethan Mollick examines our current state of play. Ethan Mollick (I edited in the list structure): There is a lot of debate over which of these models are best, with dueling tests suggesting one or another dominates, but the answer is not clear cut. All three have different personalities and strengths, depending on whether you are coding or writing. Gemini is an excellent explainer but doesn't let you upload files. GPT-4 has features (namely Code Interpreter and GPTs) that greatly extend what it can do. Claude is the best writer and seems capable of surprising insight. But beyond the differences, there are four important similarities to know about: All three are full of ghosts, which is to say that they give you the weird illusion of talking to a real, sentient being - even though they aren't. All three are multimodal, in that they can "see" images. None of them come with instructions. They all prompt pretty similarly to each other. 
I would add there are actually four models, not three, because there are (at last!) two Geminis, Gemini Advanced and Gemini Pro 1.5, if you have access to the 1.5 beta. So I would add a fourth line for Gemini Pro 1.5: Gemini Pro has a giant context window and uses it well. My current heuristic is something like this: If you need basic facts or explanation, use Gemini Advanced. If you want creativity or require intelligence and nuance, or code, use Claude. If ...
