The Nonlinear Library

The Nonlinear Fund
Apr 23, 2024 • 5min

LW - Forget Everything (Statistical Mechanics Part 1) by J Bostock

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forget Everything (Statistical Mechanics Part 1), published by J Bostock on April 23, 2024 on LessWrong. EDIT: I somehow missed that John Wentworth and David Lorell are also in the middle of a sequence on this same topic here. I will see where this goes from here! Introduction to a sequence on the statistical thermodynamics of some things and maybe eventually everything. This will make more sense if you have a basic grasp of quantum mechanics, but if you're willing to accept "energy comes in discrete units" as a premise then you should be mostly fine. The title of this post has a double meaning: Forget the thermodynamics you've learnt before, because statistical mechanics starts from information theory. The main principle of doing things with statistical mechanics can be summed up as follows: Forget as much as possible, then find a way to forget some more. Particle(s) in a Box All of practical thermodynamics (chemistry, engines, etc.) relies on the same procedure, although you will rarely see it written like this: (1) take systems which we know something about, (2) allow them to interact in a controlled way, (3) forget as much as possible. If we have set up our systems correctly, the information that is lost will allow us to learn some information somewhere else. For example, consider a particle in a box. What does it mean to "forget everything"? One way is forgetting where the particle is, so our knowledge of the particle's position could be represented by a uniform distribution over the interior of the box. Now imagine we connect this box to another box: If we forget everything about the particle now, we should also forget which box it is in! If we instead have a lot of particles in our first box, we might describe it as a box full of gas. If we connect this to another box and forget where the particles are, we would expect to find half in the first box and half in the second box. This means we can explain why gases expand to fill space without reference to anything except information theory. A new question might be, how much have we forgotten? Our knowledge of the gas particle has gone from the distribution over boxes 1 and 2 given by P(Box 1) = 1, P(Box 2) = 0 to the distribution P(Box 1) = 0.5, P(Box 2) = 0.5, which is the loss of 1 bit of information per particle. Now let's put that information to work. The Piston Imagine a box with a movable partition. The partition restricts particles to one side of the box. If the partition moves to the right, then the particles can access a larger portion of the box: In this case, to forget as much as possible about the particles means to assume they are in the largest possible space, which involves the partition being all the way over to the right. Of course there is the matter of forgetting where the partition is, but we can safely ignore this as long as the number of particles is large enough. What if we have a small number of particles on the right side of the partition? We might expect the partition to move some, but not all, of the way over, when we forget as much as possible. Since the region in which the pink particles can live has decreased, we have gained knowledge about their position. By coupling forgetting and learning, anything is possible. The question is, how much knowledge have we gained?
Maths of the Piston Let the walls of the box be at coordinates 0 and 1, and let x be the horizontal coordinate of the piston. The position of each green particle can be expressed as a uniform distribution over (0,x), which has entropy log2(x), and likewise each pink particle's position is uniform over (x,1), giving entropy log2(1-x). If we have ng green particles and np pink particles, the total entropy becomes ng log2(x) + np log2(1-x), which is maximised at x = ng/(ng+np). This means that the total volume occupied by each population of particles is proportion...
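To make the piston result above concrete, here is a minimal numerical sketch (not part of the original post; the particle counts and grid resolution are arbitrary illustrative choices) that checks both the one-bit-per-particle figure and the entropy-maximising partition position:

```python
# Minimal numerical check of the piston maths above (not part of the original
# post). The particle counts n_g and n_p are arbitrary illustrative values.
import numpy as np

n_g, n_p = 30, 10  # green particles confined to (0, x), pink particles to (x, 1)

def total_entropy(x):
    # Differential entropy (in bits) of n_g uniform distributions on (0, x)
    # plus n_p uniform distributions on (x, 1).
    return n_g * np.log2(x) + n_p * np.log2(1 - x)

xs = np.linspace(0.001, 0.999, 100_000)
x_best = xs[np.argmax(total_entropy(xs))]
print(f"numerical argmax of total entropy: x = {x_best:.4f}")
print(f"predicted n_g/(n_g + n_p):         x = {n_g / (n_g + n_p):.4f}")

# The earlier two-box example: doubling a particle's accessible volume raises
# its entropy by log2(2) = 1 bit, i.e. we forget one bit per particle.
print(f"bits forgotten per particle when the volume doubles: {np.log2(2):.0f}")
```

With 30 green and 10 pink particles the numerical argmax lands at x close to 0.75, matching ng/(ng+np).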
Apr 23, 2024 • 10min

EA - Should we break up Google DeepMind? by Hauke Hillebrandt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Should we break up Google DeepMind?, published by Hauke Hillebrandt on April 23, 2024 on The Effective Altruism Forum. Regulators should review the 2014 DeepMind acquisition. When Google bought DeepMind in 2014, no regulator, not the FTC, not the EC's DG COMP, nor the CMA, scrutinized the impact. Why? AI startups have high value but low revenues. And so they avoid regulation (and tax, see below). Buying start-ups with low revenues flies under the thresholds of EU merger regulation[1] or the CMA's 'turnover test' (despite it being a 'relevant enterprise' under the National Security and Investment Act). In 2020, the FTC ordered Big Tech to provide info on M&A from 2010-2019 that it didn't report (UK regulators should urgently do so as well given that their retrospective powers might only be 10 years).[2] Regulators should also review the 2023 Google-DeepMind internal merger. DeepMind and Google Brain are key players in AI. In 2023, they merged into Google DeepMind. This compromises independence, reduces competition for AI talent and resources, and limits alternatives for collaboration partners. Though they are both part of Google, regulators can scrutinize this, regardless of corporate structure. For instance, UK regulators have intervened in M&A of enterprises already under common ownership - especially in Tech (cf UK regulators ordered FB to sell GIPHY). And so, regulators should consider breaking up Google DeepMind as per recent proposals: A new paper 'Unscrambling the eggs: breaking up consummated mergers and dominant firms' by economists at Imperial cites Google DeepMind as a firm that could be unmerged.[3] A new Brookings paper also argues that if other means to ensure fair markets fail, then as a last resort, foundation model firms may need to be broken up on the basis of functions, akin to how we broke up AT&T.[4] Relatedly, while some top economists agree that we should designate Google Search as a 'platform utility' and break it apart from any participant on that platform, most agree that we should explore this further to weigh costs and benefits.[5] Indeed, the EU accuses Google of abusing dominance in ad tech and may force it to sell parts of its firm.[6] Kustomer, a firm of a similar size to DeepMind bought by Facebook, recently spun out again and shows this is possible. Finally, DeepMind itself has in the past tried to break away from Google.[7] Since DeepMind's AI improves all Google products, regulators should work cross-departmentally to scrutinize both mergers above on the following grounds: Market dominance: Google dominates the field of AI, surpassing all universities in terms of high-quality publications. Tax avoidance: Despite billions in UK profits yearly, Google is only taxed $60M.[8] DeepMind is only taxed ~$1M per year on average.[9],[10] We should tax them more fairly. DeepMind's recent revenue jump is due to creative accounting: it doesn't have many revenue streams, and almost all of them are based on how much Google arbitrarily pays for internal services.
Indeed, Google just waived $1.5B in DeepMind's 'startup debt'[11],[12] despite DeepMind's CEO boasting that they have a unique opportunity as part of Google and its dozens of billion-user products by immediately shipping their advances into them[13] and saving Google hundreds of millions in energy costs.[14] About 85% of the innovations causing the recent AI boom came from Google DeepMind.[15] DeepMind also holds 560 patents,[16] and this IP is very hard to value and tax. Such a bad precedent might either cause more tax avoidance by OpenAI, Microsoft AI, Anthropic, Palantir, and A16z setting up UK offices, or give Google an unfair edge over these smaller firms. Public interest concerns: DeepMind's AI improves YouTube's algorithm and thus DeepMind indirectly polarizes voters.[17] Regulators s...
Apr 23, 2024 • 7min

LW - Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm) by Ruby

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Take the wheel, Shoggoth! (Lesswrong is trying out changes to the frontpage algorithm), published by Ruby on April 23, 2024 on LessWrong. For the last month, @RobertM and I have been exploring the possible use of recommender systems on LessWrong. Today we launched our first site-wide experiment in that direction. (In the course of our efforts, we also hit upon a frontpage refactor that we reckon is pretty good: tabs instead of a clutter of different sections. For now, only for logged-in users. Logged-out users see the "Latest" tab, which is the same-as-usual list of posts.) Why algorithmic recommendations? A core value of LessWrong is to be timeless and not news-driven. However, the central algorithm by which attention allocation happens on the site is the Hacker News algorithm[1], which basically only shows you things that were posted recently, and creates a strong incentive for discussion to always be centered around the latest content. This seems very sad to me. When a new user shows up on LessWrong, it seems extremely unlikely that the most important posts for them to read were all written within the last week or two. I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility. Very simple. When I vote, I basically know the full effect this has on what is shown to other users or to myself. But I think the cost of that simplicity has become too high, especially as older content makes up a larger and larger fraction of the best content on the site, and people have been becoming ever more specialized in the research and articles they publish on the site. So we are experimenting with changing things up. I don't know whether these experiments will ultimately replace the Hacker News algorithm, but as the central attention allocation mechanism on the site, it definitely seems worth trying out and iterating on. We'll be trying out a bunch of things, from reinforcement-learning based personalized algorithms, to classical collaborative filtering algorithms, to a bunch of handcrafted heuristics that we'll iterate on ourselves. The Concrete Experiment Our first experiment is Recombee, a recommendations SaaS, since spinning up our RL agent pipeline would be a lot of work. We feed it user view and vote history. So far, it seems that it can be really good when it's good, often recommending posts that people are definitely into (and more so than posts in the existing feed). Unfortunately it's not reliable across users for some reason and we've struggled to get it to reliably recommend the most important recent content, which is an important use-case we still want to serve. Our current goal is to produce a recommendations feed that both makes people feel like they're keeping up to date with what's new (something many people care about) and also suggests great reads from across LessWrong's entire archive. The Recommendations tab we just launched has a feed using Recombee recommendations. We're also getting started using Google's Vertex AI offering. A very early test makes it seem possibly better than Recombee. We'll see. (Some people on the team want to try throwing relevant user history and available posts into an LLM and seeing what it recommends, though cost might be prohibitive for now.) Unless you switch to the "Recommendations" tab, nothing changes for you.
"Latest" is the default tab and is using the same old HN algorithm that you are used to. I'll feel like we've succeeded when people switch to "Recommended" and tell us that they prefer it. At that point, we might make "Recommended" the default tab. Preventing Bad Outcomes I do think there are ways for recommendations to end up being pretty awful. I think many readers have encountered at least one content recommendation algorithm that isn't givi...
Apr 23, 2024 • 8min

EA - On failing to get EA jobs: My experience and recommendations to EA orgs by Ávila Carmesí

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On failing to get EA jobs: My experience and recommendations to EA orgs, published by Ávila Carmesí on April 23, 2024 on The Effective Altruism Forum. This is an anonymous account (Ávila is not a real person). I am posting on this account to avoid potentially negative effects on my future job prospects. SUMMARY: I've been rejected from 18 jobs or internships, 12 of which are "in EA." I briefly spell out my background information and show all my rejections. Then, I list some recommendations to EA orgs on how they can (hopefully) improve the hiring process. This post probably falls under the category of "it's hard, even for high-achievers, to get an EA job." But there's still the (probably bigger) problem of what there is for "mediocre" EAs to do in a movement that prizes extremely high-achieving individuals. If this post improves hiring a little bit at a few EA orgs, I will be a happy person. BACKGROUND Entry-level EA jobs and internships have been getting very competitive. It is common for current applicants to hear things like "out of 600, we can only take 20" (CHAI), or "only 3% of applicants made it this far" (IAPS), or "It's so competitive it's probably not even worth applying" (GovAI representative). So far, I haven't been accepted to any early-career AI safety opportunities, and I've mostly been rejected in the first round. ABOUT ME I'll keep this section somewhat vague to protect my anonymity. I'm mostly applying to AI safety-related jobs and internships. I am graduating from a top university with honors and a perfect GPA. I have 3 stellar letters of recommendation, 3 research internships in different areas, and part-time work at a research lab; I lead two relevant student clubs and have also worked part-time at 3 other non-research (though still academic) jobs. I can show very high interest in, and engagement with, the programs I am applying to. I've co-authored several conference papers and have done independent research. I've done a couple of "cool" things that show potential (but mentioning them here might compromise my anonymity). I've also gotten my resume reviewed by two hiring professionals who said it looked great. Most of this research and leadership experience is very relevant to the jobs I am applying to. One potentially big thing working against me is that I'm neither a CS nor public policy/IR person (or something super policy-relevant like that). JOBS/INTERNSHIPS/FUNDING I'VE APPLIED TO Rejections:
Horizon Junior Fellowship - Rejected on round 3/4
GovAI summer fellowship - Rejected first round
ERA<>Krueger Lab - Rejected first round
fp21 internship - Never heard back
BERI (full-time job) - Rejected first round
MIT FutureTech (part-time) - Job filled before interview
PIBBSS Fellowship - Rejected first round
Berkeley Risk and Security Lab - Never heard back
CLR Fellowship - Rejected first round
ERA Fellowship - Rejected first round
CHAI Internship - Rejected first round
UChicago XLab - Rejected first round
EA LTFF research grant - Rejected
Open Phil research grant - Rejected
Acceptances: None yet!
Note: I've also applied to jobs that align with my principles but are not at EA orgs. I'm also still applying to jobs, so this is not (yet) a pity party. MY EXPECTATIONS Although I expected these to be quite competitive, I was surprised to be eliminated during the first round for so many of them.
That's because most of these are specifically meant for early-career people and I'd say I have a great resume/credentials/demonstrated skills for an early career person. RECOMMENDATIONS TO EA ORGS As someone who's spent a lot of time doing EA org applications, below are some tentative thoughts on how to (probably) improve them. Please let me know what you think in the comments. Increase the required time-commitment as the application progresses. By this I mean, start out with sh...
Apr 22, 2024 • 4min

LW - Funny Anecdote of Eliezer From His Sister by Daniel Birnbaum

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funny Anecdote of Eliezer From His Sister, published by Daniel Birnbaum on April 22, 2024 on LessWrong. This comes from a podcast called 18Forty, whose main demographic is Orthodox Jews. Eliezer's sister (Hannah) came on and talked about her Sheva Brachos, which is essentially the marriage ceremony in Orthodox Judaism. People here have likely not seen it, and I thought it was quite funny, so here it is: https://18forty.org/podcast/channah-cohen-the-crisis-of-experience/ David Bashevkin: So I want to shift now and I want to talk about something that, full disclosure, we recorded this once before and you had major hesitation for obvious reasons. It's very sensitive what we're going to talk about right now, but really for something much broader, not just because it's a sensitive personal subject, but I think your hesitation has to do with what does this have to do with the subject at hand? And I hope that becomes clear, but one of the things that has always absolutely fascinated me about you and really increased my respect for you exponentially, is that you have dedicated much of your life and the focus of your research on relationships and particularly the crisis of experience in how people find and cultivate relationships. And your personal background on this subject to me really provides a lot of context for how I see you speaking. I'm mentioning this for two reasons. Your maiden name is? Channah Cohen: Yudkowsky. David Bashevkin: Yudkowsky. And many of our listeners, though not all of our listeners will recognize your last name. Your older brother is world famous. It's fair to say, world famous researcher in artificial intelligence. He runs a blog that, I don't know if they're still posting on it, was called LessWrong. He wrote like a massive gazillion page fan fiction of Harry Potter. Your brother is Eliezer Yudkowsky. Channah Cohen: Yes. David Bashevkin: You shared with me one really beautiful anecdote about Eliezer that I insist on sharing because it's so sweet. He spoke at your sheva brachos. Channah Cohen: Yes. David Bashevkin: And I would not think that Eliezer Yudkowsky would be the best sheva brachos speaker, but it was the most lovely thing that he said. What did Eliezer Yudkowsky say at your sheva brachos? Channah Cohen: Yeah, it's a great story because it was mind-blowingly surprising at the time. And it is, I think, the only thing that anyone said at a sheva brachos that I actually remember. He got up at the first sheva brachos and he said, when you die after 120 years, you're going to go up to shamayim [this means heaven] and Hakadosh Baruch Hu [this means God]. And again, he used these phrases. Channah Cohen: Yeah. Hakadosh Baruch Hu will stand the man and the woman in front of him and he will go through a whole list of all the arguments you ever had together, and he will tell you who was actually right in each one of those arguments. And at the end he'll take a tally, and whoever was right more often wins the marriage. And then everyone kind of chuckled and Ellie said, "And if you don't believe that, then don't act like it's true." David Bashevkin: What a profound… If you don't believe that, then don't act like it's true. Don't spend your entire marriage and relationship hoping that you're going to win the test to win the marriage.
What a brilliant… Channah Cohen: What a great piece of advice. David Bashevkin: What a brilliant presentation. I never would've guessed that Eliezer Yudkowsky would enter into my sheva brachos wedding lineup, but that is quite beautiful and I can't thank you enough for sharing that. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Apr 22, 2024 • 12min

EA - Priors and Prejudice by MathiasKB

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Priors and Prejudice, published by MathiasKB on April 22, 2024 on The Effective Altruism Forum. This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest. I Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping. But many Effective Samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn't become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures. The Scandinavian societal model, which lifted the working class and brought weekends, universal suffrage, maternity leave, education, and universal healthcare, can be traced all the way back to the 1870s, when the union and social democratic movements got their start. In many developing countries wage theft is still commonplace. When employees can't be certain they'll get paid what was promised in the contract they signed and they can't trust the legal system to have their back, society settles on far fewer surplus-producing work arrangements than is optimal. Work to improve the capacity of the existing legal structure is fraught with risk. One risks strengthening the oppressive arms used by the ruling and capitalist classes to stay in power. A safer option may be to strengthen labour unions, who can take up these fights on behalf of their members. Being in inherent opposition to capitalist interests, unions are much less likely to be captured and co-opted. Though there is much uncertainty, unions present a promising way to increase contract enforcement and help bring about the conditions necessary for economic development, a report by Reassess Priorities concludes. Compelled by the anti-randomista arguments, some Effective Samaritans begin donating to the 'Developing Unions Project', which funds unions in developing countries and does political advocacy to increase union influence. A well-regarded economist writes a scathing criticism of Effective Samaritanism, stating that they are blinded by ideology and that there isn't sufficient evidence to show that increases in labor power lead to increases in contract enforcement. The article is widely discussed on the Effective Samaritan Forum. One commenter writes a highly upvoted response, arguing that absence of evidence isn't evidence of absence. The professor is too concerned with empirical evidence, and fails to engage sufficiently with the object-level arguments for why the Developing Unions Project is promising. Additionally, why are we listening to an economics professor anyways? Economics is completely bankrupt as a science, resting on empirically false, ridiculous assumptions, and is filled with activists doing shoddy science to confirm their neoliberal beliefs.
I sometimes imagine myself trying to convince the Effective Samaritan why I'm correct to hold my current beliefs, many of which have come out of the rationalist diaspora. I explain how I'm not fully bought into the analysis of labor historians, which credits labor unions and the Social Democratic movements for making Scandinavia uniquely wealthy, equitable and happy. If this were a driving factor, how come the descendants of Scandinavians who migrated to the US long before are doing just as well in America? Besides, even if I don't know enough to ...
Apr 22, 2024 • 8min

LW - AI Regulation is Unsafe by Maxwell Tabarrok

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Regulation is Unsafe, published by Maxwell Tabarrok on April 22, 2024 on LessWrong. Concerns over AI safety and calls for government control over the technology are highly correlated but they should not be. There are two major forms of AI risk: misuse and misalignment. Misuse risks come from humans using AIs as tools in dangerous ways. Misalignment risks arise if AIs take their own actions at the expense of human interests. Governments are poor stewards for both types of risk. Misuse regulation is like the regulation of any other technology. There are reasonable rules that the government might set, but omission bias and incentives to protect small but well organized groups at the expense of everyone else will lead to lots of costly ones too. Misalignment regulation is not in the Overton window for any government. Governments do not have strong incentives to care about long-term, global costs or benefits, and they do have strong incentives to push the development of AI forwards for their own purposes. Noticing that AI companies put the world at risk is not enough to support greater government involvement in the technology. Government involvement is likely to exacerbate the most dangerous parts of AI while limiting the upside. Default government incentives Governments are not social welfare maximizers. Government actions are an amalgam of the actions of thousands of personal welfare maximizers who are loosely aligned and constrained. In general, governments have strong incentives for myopia, violent competition with other governments, and negative sum transfers to small, well organized groups. These exacerbate existential risk and limit potential upside. The vast majority of the costs of existential risk occur outside of the borders of any single government and beyond the election cycle for any current decision maker, so we should expect governments to ignore them. We see this expectation fulfilled in governments' reactions to other long-term or global externalities, e.g. debt and climate change. Governments around the world are happy to impose trillions of dollars in direct cost and substantial default risk on future generations because costs and benefits to these future generations hold little sway in the next election. Similarly, governments spend billions subsidizing fossil fuel production and ignore potential solutions to global warming, like a carbon tax or geoengineering, because the long term or extraterritorial costs and benefits of climate change do not enter their optimization function. AI risk is no different. Governments will happily trade off global, long term risk for national, short term benefits. The most salient way they will do this is through military competition. Government regulations on private AI development will not stop them from racing to integrate AI into their militaries. Autonomous drone warfare is already happening in Ukraine and Israel. The US military has contracts with Palantir and Anduril which use AI to augment military strategy or to power weapons systems. Governments will want to use AI for predictive policing, propaganda, and other forms of population control. The case of nuclear tech is informative. This technology was strictly regulated by governments, but they still raced with each other and used the technology to create the most existentially risky weapons mankind has ever seen.
Simultaneously, they cracked down on civilian use. Now, we're in a world where all the major geopolitical flashpoints have at least one side armed with nuclear weapons and where the nuclear power industry is worse than stagnant. Governments' military ambitions mean that their regulation will preserve the most dangerous misuse risks from AI. They will also push the AI frontier and train larger models, so we will still face misalignment risks. These may ...
Apr 22, 2024 • 28min

EA - 'The AI Dilemma: Growth vs Existential Risk': An Extension for EAs and a Summary for Non-economists by TomHoulden

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'The AI Dilemma: Growth vs Existential Risk': An Extension for EAs and a Summary for Non-economists, published by TomHoulden on April 22, 2024 on The Effective Altruism Forum. In this post I first summarize a recent paper by Chad Jones focused on the decision to deploy AI that has the potential to increase both economic growth and existential risk (section 1). Jones offers some simple insights which I think could be interesting for effective altruists and may be influential for how policy-makers think about trade-offs related to existential risk. I then consider some extensions which make the problem more realistic, but more complicated (section 2). These extensions include the possibility of pausing deployment of advanced AI to work on AI safety, as well as allowing for the possibility of economic growth outside of deployment of AI (I show this weakens the case for accepting high levels of risk from AI). At times, I have slightly adjusted notation used by Jones where I thought it would be helpful to further simplify some of the key points.[1] I. Summary AI may boost economic growth to a degree never seen before. Davidson (2021), for example, suggests a tentative 30% probability of greater than 30% growth lasting at least ten years before 2100. As many in the effective altruism community are acutely aware, advanced AI may also pose risks, perhaps even a risk of human extinction. The decision problem that Jones introduces is: given the potential for unusually high economic growth from AI, how much existential risk should we be willing to tolerate to deploy this AI? In his simple framework, Jones demonstrates that this tolerance is mainly determined by three factors: the growth benefits that AI may bring, the threat that AI poses, and the parameter that underlies how utility is influenced by consumption levels. Here, I will talk in the language of a 'social planner' who applies some discount to future welfare; a discount rate in the range of 2%-4% seems to be roughly in line with the rate applied in the US and UK,[2] though longtermists may generally choose to calibrate with a lower discount rate (e.g. <1%). In the rest of this post when I say 'it is optimal to...' or something to this effect, this is just shorthand for: 'for a social planner who gets to make decisions about AI deployment with a discount rate X, it is optimal to...'. The Basic Economic Framework Utility functions (Bounded and unbounded) A utility function is an expression which assigns some value to particular states of the world for, let's say, individual people. Here, Jones (and often macroeconomics more generally) assumes that utility for an individual is just a function of their consumption. The so-called 'constant relative risk aversion' utility function assumes utility is given by u(c) = ū + c^(1-γ)/(1-γ), where c is consumption, and γ (>0) and ū will be helpful to calibrate this utility function for real-world applications: γ adjusts the curvature and ū scales utility up or down.[3] There is a key difference between these two functions (more specifically, when γ>1 vs γ<1): for γ>1 utility is bounded above, while for γ<1 utility is not. A utility function is bounded above if, as consumption increases to infinity, utility rises toward an upper bound that isn't infinite. A utility function is unbounded above if, as consumption increases to infinity, utility does too.
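As a quick numerical illustration of the boundedness point (my own sketch, not from the post, and assuming the additive-ū CRRA form given above with arbitrary parameter values):

```python
# Quick illustration of the bounded vs unbounded distinction above (my own
# sketch, not from the post). It assumes the additive-u-bar CRRA form
# u(c) = u_bar + c**(1 - gamma) / (1 - gamma); parameter values are arbitrary.
import numpy as np

def crra_utility(c, gamma, u_bar=5.0):
    return u_bar + c ** (1 - gamma) / (1 - gamma)

consumption = np.array([1e2, 1e4, 1e6])
for gamma in (2.0, 0.5):  # gamma > 1 vs gamma < 1
    u = crra_utility(consumption, gamma)
    label = "bounded above, approaching u_bar" if gamma > 1 else "unbounded above"
    print(f"gamma = {gamma}: u(c) at c = 1e2, 1e4, 1e6 -> {np.round(u, 4)}  ({label})")
```

For γ>1 the consumption term shrinks toward zero as c grows, so utility approaches the bound ū; for γ<1 it grows without limit.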
The distinction between bounded and unbounded utility functions becomes particularly important when considering the growth benefits of AI, since prolonged periods of high growth can cause us to move along the x-axis (of the above plot) quite far. In the most extreme case, Jones considers what happens to our willingness to deploy AI when that AI will be guaranteed to deliver an economic singularity (infinite growth in finite time). In this case we can see that if uti...
Apr 22, 2024 • 1h 12min

LW - On Llama-3 and Dwarkesh Patel's Podcast with Zuckerberg by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Llama-3 and Dwarkesh Patel's Podcast with Zuckerberg, published by Zvi on April 22, 2024 on LessWrong. It was all quiet. Then it wasn't. Note the timestamps on both of these. Dwarkesh Patel did a podcast with Mark Zuckerberg on the 18th. It was timed to coincide with the release of much of Llama-3, very much the approach of telling your story directly. Dwarkesh is now the true tech media. A meteoric rise, and well earned. This is two related posts in one. First I cover the podcast, then I cover Llama-3 itself. My notes are edited to incorporate context from later explorations of Llama-3, as I judged that the readability benefits exceeded the purity costs. Podcast Notes: Llama-3 Capabilities (1:00) They start with Llama 3 and the new L3-powered version of Meta AI. Zuckerberg says "With Llama 3, we think now that Meta AI is the most intelligent, freely-available assistant that people can use." If this means 'free as in speech' then the statement is clearly false. So I presume he means 'free as in beer.' Is that claim true? Is Meta AI now smarter than GPT-3.5, Claude 2 and Gemini Pro 1.0? As I write this it is too soon to tell. Gemini Pro 1.0 and Claude 3 Sonnet are slightly ahead of Llama-3 70B on the Arena leaderboard. But it is close. The statement seems like a claim one can make within 'reasonable hype.' Also, Meta integrates Google and Bing for real-time knowledge, so the question there is if that process is any good, since most browser use by LLMs is not good. (1:30) Meta are going in big on their UIs, top of Facebook, Instagram and Messenger. That makes sense if they have a good product that is robust, and safe in the mundane sense. If it is not, this is going to be at the top of chat lists for teenagers automatically, so whoo boy. Even if it is safe, there are enough people who really do not like AI that this is probably a whoo boy anyway. Popcorn time. (1:45) They will have the ability to animate images and it generates high quality images as you are typing and updates them in real time as you are typing details. I can confirm this feature is cool. He promises multimodality, more 'multi-linguality' and bigger context windows. (3:00) Now the technical stuff. Llama-3 follows tradition in training models in three sizes, here the 8b and 70b that released on 4/18, and a 405b that is still training. He says 405b is already around 85 MMLU and they expect leading benchmarks. The 8b Llama-3 is almost as good as the 70b Llama-2. The Need for Inference (5:15) What went wrong earlier for Meta and how did they fix it? He highlights Reels, with its push to recommend 'unconnected content,' meaning things you did not ask for, and not having enough compute for that. They were behind. So they ordered double the GPUs they needed. They didn't realize the type of model they would want to train. (7:30) Back in 2006, what would Zuck have sold for when he turned down $1 billion? He says he realized if he sold he'd just build another similar company, so why sell? It wasn't about the number; he wasn't in a position to evaluate the number. And I think that is actually wise there. You can realize that you do not want to accept any offer someone would actually make. (9:15) When did making AGI become a key priority? Zuck points out Facebook AI Research (FAIR) is 10 years old as a research group.
Over that time it has become clear you need AGI, he says, to support all their other products. He notes that training models on coding generalizes and helps their performance elsewhere, and that was a top focus for Llama-3. So Meta needs to solve AGI because if they don't 'their products will be lame.' It seems increasingly likely, as we will see in several ways, that Zuck does not actually believe in 'real' AGI. By 'AGI' he means somewhat more capable AI. (13:40) What will the Llama that makes cool produ...
Apr 22, 2024 • 30min

EA - Motivation gaps: Why so much EA criticism is hostile and lazy by titotal

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Motivation gaps: Why so much EA criticism is hostile and lazy, published by titotal on April 22, 2024 on The Effective Altruism Forum. Disclaimer: While I criticize several EA critics in this article, I am myself on the EA-skeptical side of things (especially on AI risk). Introduction I am a proud critic of effective altruism, and in particular a critic of AI existential risk, but I have to admit that a lot of the criticism of EA is hostile, or lazy, and is extremely unlikely to convince a believer. Take this recent Leif Wenar TIME article as an example. I liked a few of the object level critiques, but many of the points were twisted, and the overall point was hopelessly muddled (are they trying to say that voluntourism is the solution here?). As people have noted, the piece was needlessly hostile to EA (and incredibly hostile to Will MacAskill in particular). And he's far from the only prominent hater. Émile Torres views EA as a threat to humanity. Timnit Gebru sees the whole AI safety field as racist nutjobs. In response, @JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors? There are a few answers to this. Good writing is hard regardless of the subject matter. More inflammatory rhetoric gets more clicks, shares and discussion. EA figures have been involved in bad things (like SBF's fraud), so nasty words in response are only to be expected. I think there's a more interesting explanation though, and it has to do with motivations. I think the average EA-critical person doesn't hate EA, although they might dislike it. But it takes a lot of time and effort to write an article and have it published in TIME magazine. If Leif Wenar didn't hate EA, he wouldn't have bothered to write the article. In this article, I'm going to explore the concept of motivation gaps, mainly using the example of AI x-risk, because the gaps are particularly stark there. I'm going to argue that for certain causes, the critiques being hostile or lazy is the natural state of affairs, whether or not the issue is actually correct, and that you can't use the unadjusted quality of each side's critiques to judge an issue's correctness. No door to door atheists Disclaimer: These next sections contain an analogy between logical reasoning about religious beliefs and logical reasoning about existential risk. It is not an attempt to smear EA as a religion, nor is it an attack on religion. Imagine a man, we'll call him Dave, who, for whatever reason, has never once thought about the question of whether God exists. One day he gets a knock on his door, and encounters two polite, well dressed and friendly gentlemen who say they are spreading the word about the existence of God and the Christian religion. They tell him that a singular God exists, and that his instructions for how to live life are contained within the Holy Bible. They have glossy brochures, well-prepared arguments and evidence, and represent a large organisation with a significant following and social backing by many respected members of society. He looks at their website and finds that, wow, a huge number of people believe this, there is a huge field called theology explaining why God exists, and some of the smartest people in history have believed it as well. Dave is impressed, but resolves to be skeptical.
He takes their information and informs them that while he finds them convincing, he wants to hear the other side of the story as well. He tells them that he'll wait for the atheist door-to-door knockers to come and make their case, so he can decide for himself. Dave waits for many months, but to his frustration, no atheists turn up. Another point for the Christians. He doesn't give up though, and looks online, and finds the largest atheist forum he can find, r/atheism. Dave is shoc...
