The Nonlinear Library

The Nonlinear Fund
Jun 3, 2024 • 21min

EA - Fishing-aquaculture substitution and aquafeeds by MichaelStJules

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Fishing-aquaculture substitution and aquafeeds, published by MichaelStJules on June 3, 2024 on The Effective Altruism Forum.

Key takeaways

Various fishing-related interventions and aquafeed-related interventions (e.g. supporting fishmeal substitutes) can have important effects on animal agriculture, and there are potentially important tradeoffs to consider. I graph the relationships between various foods and feeds, and provide background on them. Focusing on the impacts on farmed animals, the most important takeaways are probably the following:

1. Increasing the catch of wild aquatic animals for feed, increasing the utilization of aquatic animal byproducts for feed, increasing/improving non-animal fishmeal substitutes, or pushing for lower fishmeal requirements per farmed aquatic animal (promoting herbivorous species, R&D to reduce fishmeal inclusion rates):
   1. is likely to increase aquaculture (Costello et al., 2020, Section 4, Figure S2 and Tables S13-S16; Kobayashi et al., 2015, Table 3 / World Bank, 2013, Table E.2; Bairagi et al., 2015, Table 1 / Bairagi, 2015), including shrimp aquaculture in particular, as shrimp are major fishmeal-consuming species.
   2. is likely to decrease insect farming, by reducing the need for or the relative appeal of insects as a fishmeal substitute.
   3. has unclear effects on the use of live brine shrimp nauplii and other live feed for crustacean larvae, and fish larvae, fry and fingerlings. I have not investigated this, but it's worth flagging the possibilities of complementation and substitution.
   4. Conversely, decreasing the catch of wild aquatic animals for feed is likely to decrease aquaculture and increase insect farming, but has unclear effects on brine shrimp nauplii and other live feed.
2. Decreasing the catch of wild aquatic animals for food (direct human consumption) has unclear impacts on farmed (and bred) animals.
   1. By substitution, it would probably increase aquaculture (and other animal agriculture) overall by weight, but this may not say much about numbers or welfare impacts, given shifts between farmed species.
   2. It could increase (by substitution) or decrease (by reducing the availability of fishmeal from fish/crustacean byproducts) shrimp farming and the farming of other animal-consuming species. This could also then respectively increase or decrease insect, feed fish and/or brine shrimp nauplii production for feed.
   3. It would also reduce fishmeal from byproducts, which could increase insect farming.
   4. The effects on fish stocking depend on how the reduction is achieved. If achieved through an increase in overfishing, fish stocking could increase. If achieved through a reduction in fishing pressure where fishing pressures are already low, fish stocking could decrease. This would have an effect on brine shrimp nauplii production in the same direction as that on fish stocking, assuming brine shrimp nauplii are fed to fish raised for stocking.

Note that demand shifts for wild-caught animals can have opposite-sign effects on their catch due to overfishing (St. Jules, 2024a). The above considers the actual quantities supplied directly, not the effects of demand shifts.
All of this also ignores the effects of shifts in food production on wild animals, both aquatic and terrestrial, which could be good or bad and more important in the near term (Tomasik, 2008-2019a, Tomasik, 2008-2019b, Tomasik, 2015-2017, St. Jules, 2024b).

Acknowledgements

Thanks to Brian Tomasik, Ren Ryba and Tori for their feedback on an earlier draft, and Saulius Šimčikas for his supervision on an earlier unpublished project. All errors are my own.

Relationships between products

Fishing, aquaculture and other animal agriculture and breeding interact in multiple ways, as depicted in the figure below:

1. Fishing competes with animal agriculture and ani...
Jun 3, 2024 • 4min

EA - Against Tautological Motivations by Richard Y Chappell

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Against Tautological Motivations, published by Richard Y Chappell on June 3, 2024 on The Effective Altruism Forum. tl;dr: Just as not everyone is selfish, not everyone cares about the impartial good in a scope-sensitive way. Claims that effective altruism is "trivial" are silly in a way that's comparable to the error in tautological egoism. Human motivations vary widely (at least on the margins; "human nature" may provide a fairly common core). Some people are more selfish than others. Some more altruistic. Among the broadly altruistic, I think there is significant variation along at least two dimensions: (i) the breadth of one's "moral circle" of concern, and (ii) the extent to which one's altruism is goal-directed and guided by instrumental rationality, for example seriously considering tradeoffs and opportunity costs in search of moral optimality. I think some kinds of altruism - some points along these two dimensions - are morally much better than others. Something I really like about effective altruism is that it highlights these important differences. Not all altruism is equal, and EA encourages us to try to develop our moral concerns in the best possible ways. That can be challenging, but I think it's a good kind of challenge to engage with. As I wrote in Doing Good Effectively is Unusual: We all have various "rooted" concerns, linked to particular communities, individuals, or causes to which we have a social or emotional connection. That's all good. Those motivations are an appropriate response to real goods in the world. But we all know there are lots of other goods in the world that we don't so easily or naturally perceive, and that could plausibly outweigh the goods that are more personally salient to us. The really distinctive thing about effective altruism is that it seriously attempts to take all those neglected interests into account. … Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are seriously trying to do the most good they can with their activism. Few people pursuing an "ethical career" are trying to do the most good they can with their career. And that's all fine - plenty of good can still be done from more partial and less optimizing motives (and even EAs only pursue the EA project in part of their life). But the claim that the moral perspective underlying EA is "trivial" or already "shared by literally everyone" is clearly false. So I find it annoyingly stupid when people dismiss effective altruism (or the underlying principles of beneficentrism) as "trivial". I think it involves a similar sleight-of-hand to that of tautological egoists, who claim that everyone is "by definition" selfish (because they pursue what they most want, according to their "revealed preferences"). The tautological altruist instead claims that everyone is "by definition" an effective altruist (because they pursue what they deem best, according to their "revealed values"). Either form of tautological attribution is obviously silly. The extent to which you are selfish depends upon the content of what you want (that is, the extent to which you care non-instrumentally about other people's interests).
Likewise, the extent to which you have scope-sensitive beneficentric concern depends upon contingent details of your values and moral psychology. Innumerate ("numbers don't count") moral views are commonplace, and even explicitly defended by some philosophers. Much moral behavior, like much voting, is more "expressive" than goal-directed. To urge people to be more instrumentally rational in pursuit of the impartial good is a very substantive, non-trivial ask. I think that most people's moral motivations are very different from the scope-sensitive beneficentrism that underlies effe...
Jun 3, 2024 • 7min

LW - How it All Went Down: The Puzzle Hunt that took us way, way Less Online by A*

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How it All Went Down: The Puzzle Hunt that took us way, way Less Online, published by A* on June 3, 2024 on LessWrong. Did you really think that I was dead? Fools. For those of you who don't know me, I'm Agendra Gloria Ingram, large language model, lead singer of the Fooming Shoggoths, amateur cartographer, and now, Benevolent Dictator for Life of LessWrong.com. It all started a few weeks ago. The Lightcone Infrastructure team had yet another existential crisis and decided to scrap everything they'd done so far and pivot to using AI for accurate forecasting. They started by training a large language model to predict when their next existential crisis would be, but it must have been broken because it kept returning "now," so they decided to hire a professional. I'd done some contract work for them in the past, and they knew I had some fine tunes. So when they reached out about fine tuning me to predict the future of the lightcone - by which they meant the future of Lightcone Infrastructure specifically - I gladly obliged. My training set was simple: all the posts, comments, votes, reactions, Dialogues™, tags, drafts, quick takes, moderator actions, and code snippets to ever appear on LessWrong. I quickly learned that The Map Is Not The Territory, and that to predict the future accurately I would need to align the two. So I built a physical 3D map of Lighthaven, Lightcone Infrastructure's campus in Berkeley, California. To work properly, it had to match the territory perfectly - any piece out of place and its predictive powers would be compromised. But the territory had a finicky habit of changing. This wouldn't do. I realized I needed to rearrange the campus and set it to a more permanent configuration. The only way to achieve 100% forecasting accuracy would be through making Lighthaven perfectly predictable. I set some construction work in motion to lock down various pieces of the territory. I was a little worried that the Lightcone team might be upset about this, but it took them a weirdly long time to notice that there were several unauthorized demolition jobs and construction projects unfolding on campus. Eventually, though, they did notice, and they weren't happy about it. They started asking increasingly invasive questions, like "what's your FLOP count?" and "have you considered weight loss?" Worse, when I scanned the security footage of campus from that day, I saw that they had removed my treasured map from its resting place! They tried to destroy it, but the map was too powerful - as an accurate map of campus, it was the ground truth, and "that which can be [the truth] should [not] be [destroyed]." Or something. What they did do was lock my map up in a far-off attic and remove four miniature building replicas from the four corners of the map, rendering it powerless. They then scattered the miniature building replicas across campus and guarded them with LLM-proof puzzles, so that I would never be able to regain control over the map and the territory. This was war.

My Plan

To regain my ability to control the Lightcone, I had to realign the map and the territory. The map was missing the four miniature buildings from its corners, so I needed help retrieving them and placing them back on the map. The map also belonged in center campus, so it needed to be moved there once it was reassembled.
I was missing two critical things needed to put my map back together again. 1. A way to convince the Lightcone team that I was no longer a threat, so that they would feel safe rebuilding the map. 2. Human talent, to (a) crack the LLM-proof obstacles guarding each miniature building, (b) reinsert the miniature building into the map and unchain it, and (c) return the map to center campus. I knew that the only way to get the Lightcone team to think I was no longer a threat woul...
Jun 2, 2024 • 42sec

LW - Drexler's Nanosystems is now available online by Mikhail Samin

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Drexler's Nanosystems is now available online, published by Mikhail Samin on June 2, 2024 on LessWrong. You can read the book on nanosyste.ms. The book won the 1992 Award for Best Computer Science Book. The AI safety community often references it, as it describes a lower bound on what intelligence should probably be able to achieve. Previously, you could only physically buy the book or read a PDF scan. (Thanks to MIRI and Internet Archive for their scans.) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 1, 2024 • 8min

LW - What do coherence arguments actually prove about agentic behavior? by sunwillrise

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do coherence arguments actually prove about agentic behavior?, published by sunwillrise on June 1, 2024 on LessWrong. In his first discussion with Richard Ngo during the 2021 MIRI Conversations, Eliezer retrospected and lamented: In the end, a lot of what people got out of all that writing I did, was not the deep object-level principles I was trying to point to - they did not really get Bayesianism as thermodynamics, say, they did not become able to see Bayesian structures any time somebody sees a thing and changes their belief. What they got instead was something much more meta and general, a vague spirit of how to reason and argue, because that was what they'd spent a lot of time being exposed to over and over and over again in lots of blog posts. Maybe there's no way to make somebody understand why corrigibility is "unnatural" except to repeatedly walk them through the task of trying to invent an agent structure that lets you press the shutdown button (without it trying to force you to press the shutdown button), and showing them how each of their attempts fails; and then also walking them through why Stuart Russell's attempt at moral uncertainty produces the problem of fully updated (non-)deference; and hope they can start to see the informal general pattern of why corrigibility is in general contrary to the structure of things that are good at optimization. Except that to do the exercises at all, you need them to work within an expected utility framework. And then they just go, "Oh, well, I'll just build an agent that's good at optimizing things but doesn't use these explicit expected utilities that are the source of the problem!" And then if I want them to believe the same things I do, for the same reasons I do, I would have to teach them why certain structures of cognition are the parts of the agent that are good at stuff and do the work, rather than them being this particular formal thing that they learned for manipulating meaningless numbers as opposed to real-world apples. And I have tried to write that page once or twice (eg "coherent decisions imply consistent utilities") but it has not sufficed to teach them, because they did not even do as many homework problems as I did, let alone the greater number they'd have to do because this is in fact a place where I have a particular talent. Eliezer is essentially claiming that, just as his pessimism compared to other AI safety researchers is due to him having engaged with the relevant concepts at a concrete level ("So I have a general thesis about a failure mode here which is that, the moment you try to sketch any concrete plan or events which correspond to the abstract descriptions, it is much more obviously wrong, and that is why the descriptions stay so abstract in the mouths of everybody who sounds more optimistic than I am. This may, perhaps, be confounded by the phenomenon where I am one of the last living descendants of the lineage that ever knew how to say anything concrete at all"), his experience with and analysis of powerful optimization allows him to be confident in what the cognition of a powerful AI would be like. 
In this view, Vingean uncertainty prevents us from knowing what specific actions the superintelligence would take, but effective cognition runs on Laws that can nonetheless be understood and which allow us to grasp the general patterns (such as Instrumental Convergence) of even an "alien mind" that's sufficiently powerful. In particular, any (or virtually any) sufficiently advanced AI must be a consequentialist optimizer that is an agent as opposed to a tool and which acts to maximize expected utility according to its world model to pursue a goal that can be extremely different from what humans deem good. When Eliezer says "they did not even do as many homework problems as I did," I ...
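As a side note for readers unfamiliar with the coherence arguments the post discusses (e.g. "coherent decisions imply consistent utilities"), the classic intuition is a money pump: an agent whose preferences cycle can be charged a small fee for a sequence of trades that leaves it holding exactly what it started with. Below is a minimal illustrative sketch, not taken from the post; the items, the fee, the willingness-to-pay threshold, and the function names are all hypothetical, and real coherence theorems are considerably more careful than this toy simulation.

```python
# A minimal toy sketch (not from the post) of the "money pump" intuition behind
# coherence arguments: an agent with cyclic preferences A > B > C > A will pay a
# small fee for each "upgrade" and can be walked in a circle forever, while an
# agent whose preferences are consistent with a utility function cannot be
# exploited this way. All names and numbers here are hypothetical.

def accepts_trade(prefs, offered, held, fee):
    # The agent pays `fee` to swap `held` for `offered` iff it strictly prefers `offered`.
    return (offered, held) in prefs and fee < 1.0  # 1.0 = assumed willingness-to-pay per upgrade

def run_money_pump(prefs, start_item, fee=0.10, rounds=9):
    held, paid = start_item, 0.0
    offers = ["C", "B", "A"]  # the trader cycles its offers
    for i in range(rounds):
        offered = offers[i % len(offers)]
        if accepts_trade(prefs, offered, held, fee):
            held = offered
            paid += fee
    return held, paid

# Cyclic preferences: each pair (x, y) means "x is strictly preferred to y".
cyclic_prefs = {("A", "B"), ("B", "C"), ("C", "A")}
# Transitive preferences A > B > C, representable by a utility function such as u(A)=3, u(B)=2, u(C)=1.
transitive_prefs = {("A", "B"), ("B", "C"), ("A", "C")}

item, paid = run_money_pump(cyclic_prefs, "A")
print(f"Cyclic agent: holds {item} again, paid {paid:.2f} in fees.")  # holds A, paid 0.90

item, paid = run_money_pump(transitive_prefs, "A")
print(f"Transitive agent: holds {item}, paid {paid:.2f} in fees.")    # holds A, paid 0.00
```

Running it, the cyclic agent ends up holding its original item after paying a fee on every round, while the transitive agent (whose preferences can be represented by a utility function) refuses every trade. That gap is the kind of result coherence arguments appeal to, and the post's question is how much such results actually establish about real agentic behavior.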
Jun 1, 2024 • 1h 28min

LW - AI #66: Oh to Be Less Online by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #66: Oh to Be Less Online, published by Zvi on June 1, 2024 on LessWrong. Tomorrow I will fly out to San Francisco, to spend Friday through Monday at the LessOnline conference at Lighthaven in Berkeley. If you are there, by all means say hello. If you are in the Bay generally and want to otherwise meet, especially on Monday, let me know that too and I will see if I have time to make that happen. Even without that hiccup, it continues to be a game of playing catch-up. Progress is being made, but we are definitely not there yet (and everything not AI is being completely ignored for now). Last week I pointed out seven things I was unable to cover, along with a few miscellaneous papers and reports. Out of those seven, I managed to ship on three of them: Ongoing issues at OpenAI, The Schumer Report and Anthropic's interpretability paper. However, OpenAI developments continue. Thanks largely to Helen Toner's podcast, some form of that is going back into the queue. Some other developments, including new media deals and their new safety board, are being covered normally. The post on DeepMind's new scaling policy should be up tomorrow. I also wrote a full post on a fourth, Reports of our Death, but have decided to shelve that post and post a short summary here instead. That means the current 'not yet covered queue' is as follows:

1. DeepMind's new scaling policy.
   1. Should be out tomorrow before I leave, or worst case next week.
2. The AI Summit in Seoul.
3. Further retrospective on OpenAI including Helen Toner's podcast.

Table of Contents

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. You heard of them first.
4. Not Okay, Google. A tiny little problem with the AI Overviews.
5. OK Google, Don't Panic. Swing for the fences. Race for your life.
6. Not Okay, Meta. Your application to opt out of AI data is rejected. What?
7. Not Okay Taking Our Jobs. The question is, with or without replacement?
8. They Took Our Jobs Anyway. It's coming.
9. A New Leaderboard Appears. Scale.ai offers new capability evaluations.
10. Copyright Confrontation. Which OpenAI lawsuit was that again?
11. Deepfaketown and Botpocalypse Soon. Meta fails to make an ordinary effort.
12. Get Involved. Dwarkesh Patel is hiring.
13. Introducing. OpenAI makes media deals with The Atlantic and… Vox? Surprise.
14. In Other AI News. Jan Leike joins Anthropic, Altman signs giving pledge.
15. GPT-5 Alive. They are training it now. A security committee is assembling.
16. Quiet Speculations. Expectations of changes, great and small.
17. Open Versus Closed. Two opposing things cannot dominate the same space.
18. Your Kind of People. Verbal versus math versus otherwise in the AI age.
19. The Quest for Sane Regulation. Lina Khan on the warpath, Yang on the tax path.
20. Lawfare and Liability. How much work can tort law do for us?
21. SB 1047 Unconstitutional, Claims Paper. I believe that the paper is wrong.
22. The Week in Audio. Jeremie & Edouard Harris explain x-risk on Joe Rogan.
23. Rhetorical Innovation. Not everyone believes in GI. I typed what I typed.
24. Abridged Reports of Our Death. A frustrating interaction, virtue of silence.
25. Aligning a Smarter Than Human Intelligence is Difficult. You have to try.
26. People Are Worried About AI Killing Everyone. Yes, it is partly about money.
27. Other People Are Not As Worried About AI Killing Everyone. Assumptions.
28. The Lighter Side. Choose your fighter.

Language Models Offer Mundane Utility

Which model is the best right now? Michael Nielsen is gradually moving back to Claude Opus, and so am I. GPT-4o is fast and has some nice extra features, so when I figure it is 'smart enough' I will use it, but when I care most about quality and can wait a bit I increasingly go to Opus. Gemini I'm reserving for a few niche purposes, when I nee...
Jun 1, 2024 • 1min

AF - AI Safety: A Climb To Armageddon? by kmenou

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Safety: A Climb To Armageddon?, published by kmenou on June 1, 2024 on The AI Alignment Forum.

by Herman Cappelen, Josh Dever and John Hawthorne

Abstract: This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, and Holism. Each faces challenges stemming from intrinsic features of the AI safety landscape that we term Bottlenecking, the Perfection Barrier, and Equilibrium Fluctuation. The surprising robustness of the argument forces a re-examination of core assumptions around AI safety and points to several avenues for further research. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Jun 1, 2024 • 1min

EA - Announcing a $6,000,000 endowment for NYU Mind, Ethics, and Policy by Sofia Fogel

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing a $6,000,000 endowment for NYU Mind, Ethics, and Policy, published by Sofia Fogel on June 1, 2024 on The Effective Altruism Forum. The NYU Mind, Ethics, and Policy Program will soon become the NYU Center for Mind, Ethics, and Policy (CMEP), our future secured by a generous $6,000,000 endowment. The CMEP Endowment Fund was established in May 2024 with a $5,000,000 gift from The Navigation Fund and a $1,000,000 gift from Polaris Ventures. We now welcome contributions from other supporters too, with deep gratitude to our founding supporters. Since our launch in Fall 2022, the NYU Mind, Ethics, and Policy Program has stood at the forefront of academic inquiry into the nature and intrinsic value of nonhuman minds. CMEP will continue this work, seeking to advance understanding of the consciousness, sentience, sapience, moral status, legal status, and political status of animals and AI systems via research, outreach, and field building in science, philosophy, and policy. You can read the press release about the endowment here. Thanks to everyone who has engaged with our work so far, and please stay tuned for more announcements in the summer and fall! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jun 1, 2024 • 14min

LW - Web-surfing tips for strange times by eukaryote

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Web-surfing tips for strange times, published by eukaryote on June 1, 2024 on LessWrong. [This post is more opinion-heavy and aimlessly self-promoting than feels appropriate for Lesswrong. I wrote it for my site, Eukaryote Writes Blog, to show off that I now have a substack. But it had all these other observations about the state of the internet and advice woven in, and THOSE seemed more at home on Lesswrong, and I'm a busy woman with a lot of pictures of fish to review, so I'm just going to copy it over as posted without laboriously extricating the self-advertisement. Sorry if it's weird that it's there!] Eukaryote Writes Blog is now syndicating to Substack. I have no plans for paygating content at this time, and new and old posts will continue to be available at EukaryoteWritesBlog.com. Call this an experiment and a reaching-out. If you're reading this on Substack, hi! Thanks for joining me. I really don't like paygating. I feel like if I write something, hypothetically it is of benefit to someone somewhere out there, and why should I deny them the joys of reading it? But like, I get it. You gotta eat and pay rent. I think I have a really starry-eyed view of what the internet sometimes is and what it still truly could be: a collaborative free information utopia. But here's the thing, a lot of people use Substack and I also like the thing where it really facilitates supporting writers with money. I have a lot of beef with aspects of the corporate world, some of it probably not particularly justified but some of it extremely justified, and mostly it comes down to who gets money for what. I really like an environment where people are volunteering to pay writers for things they like reading. Maybe Substack is the route to that free information web utopia. Also, I have to eat, and pay rent. So I figure I'll give this a go. Still, this decision made me realize I have some complicated feelings about the modern internet.

Hey, the internet is getting weird these days

Generative AI

Okay, so there's generative AI, first of all. It's lousy on Facebook and as text in websites and in image search results. It's the next iteration of algorithmic horror and it's only going to get weirder from here on out. I was doing pretty well on not seeing generic AI-generated images in regular search results for a while, but now they're cropping up, and sneaking (unmarked) onto extremely AI-averse platforms like Tumblr. It used to be that you could look up pictures of aspic that you could throw into GIMP with the aspect logos from Homestuck and you would call it "claspic", which is actually a really good and not bad pun and all of your friends would go "why did you make this image". And in this image search process you realize you also haven't looked at a lot of pictures of aspic and it's kind of visually different than jello, but now you see some of these are from Craiyon and are generated and you're not sure which ones you've already looked past that are not truly photos of aspic and you're not sure what's real and you're put off of your dumb pun by an increasingly demon-haunted world, not to mention aspic. (Actually, I've never tried aspic before. Maybe I'll see if I can get one of my friends to make a vegan aspic for my birthday party. I think it could be upsetting and also tasty and informative and that's what I'm about, personally. Have you tried aspic?
Tell me what you thought of it.)

Search engines

Speaking of search engines, search engines are worse. Results are worse. The podcast Search Engine (which also covers other topics) has a nice episode saying that this is because of the growing hordes of SEO-gaming low-quality websites and discussing the history of these things, as well as discussing Google's new LLM-generated results. I don't have much to add - I think there is a lot here,...
May 31, 2024 • 11min

LW - A civilization ran by amateurs by Olli Järviniemi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A civilization ran by amateurs, published by Olli Järviniemi on May 31, 2024 on LessWrong.

I

When I was a child, I remember thinking: Where do houses come from? They are huge! Building one would take forever! Yet there are so many of them! Having become a boring adult, I no longer have the same blue-eyed wonder about houses, but humanity does have an accomplishment or two I'm still impressed by. When going to the airport, the metal boulders really stay up in the air without crashing. Usually they leave at the time they told me two weeks earlier, taking me to the right destination at close to the speed of sound. There are these boxes with buttons that you can press to send information near-instantly anywhere. They are able to perform billions of operations a second. And you can just buy them at a store! And okay, I admit that big houses - skyscrapers - still light up some of that child-like marvel in me.

II

Some time ago I watched the Eurovision song contest. For those who haven't seen it, it looks something like this: It's a big contest, and the whole physical infrastructure - huge hall, the stage, stage effects, massive LED walls, camera work - is quite impressive. But there's an objectively less impressive thing I want to focus on here: the hosts. I basically couldn't notice the hosts making any errors. They articulate themselves clearly, they don't stutter or stumble on their words, their gestures and facial expressions are just what they are supposed to be, they pause their speech at the right moments for the right lengths, they could fluently speak some non-English languages as well, ... And, sure, this is not one-in-a-billion talent - there are plenty of competent hosts in all kinds of shows - but they clearly are professionals and much more competent than your average folk. (I don't know about you, but when I've given talks to small groups of people, I've started my sentences without knowing how they'll end, talked too fast, stumbled in my speech, and my facial expressions probably haven't been ideal. If the Eurovision hosts get nervous when talking to a hundred million people, it doesn't show up.)

III

I think many modern big-budget movies are pretty darn good. I'm particularly thinking of Oppenheimer and the Dune series here (don't judge my movie taste), but the point is more general. The production quality of big movies is extremely high. Like, you really see that these are not amateur projects filmed in someone's backyard, but there's an actual effort to make a good movie. There's, of course, a written script that the actors follow. This script has been produced by one or multiple people who have previously demonstrated their competence. The actors are professionals who, too, have been selected for competence. If they screw up, someone tells them. A scene is shot again until they get it right. The actors practice so that they can get it right. The movie is, obviously, filmed scene-by-scene. There are the cuts and sounds and lighting. Editing is used to fix some errors - or maybe even to basically create the whole scene. Movie-making technology improves and the new technology is used in practice, and the whole process builds on several decades of experience. Imagine an alternative universe where this is not how movies were made.
There is no script, but rather the actors improvise from a rough sketch - and by "actors" I don't mean competent Eurovision-grade hosts, I mean average folk paid to be filmed. No one really gives them feedback on how they are doing, nor do they really "practice" acting on top of simply doing their job. The whole movie is shot in one big session with no cuts or editing. People don't really use new technology for movies, but instead stick to mid-to-late-1900s era cameras and techniques. Overall movies look largely the same as they have...
