The Nonlinear Library: LessWrong

The Nonlinear Fund
May 10, 2024 • 10min

LW - How to be an amateur polyglot by arisAlexis

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How to be an amateur polyglot, published by arisAlexis on May 10, 2024 on LessWrong.

Setting the stage

Being a polyglot is a problem of definition first. Who can be described as a polyglot? At what level do you actually "speak" the given language? Some sources say a polyglot speaks more than 4 languages, others more than 6. My take is that it doesn't matter. I am more interested in the definition of when you speak a language. If you can greet and order a coffee in 20 languages, do you actually speak them? I don't think so. Do you need to present a scientific document or write a newspaper-worthy article to be considered? That's too much. I think the best definition is that you can go out with a group of native speakers, understand what they are saying, and participate in a discussion ranging from everyday stuff to maybe work-related stuff, without switching too often to English or using Google Translate. It's OK to pause and maybe ask for a specific word, or ask the group if your message got across. This is what I am aiming for when I study a specific language.

Why learn a foreign language when soon we will have AI auto-translate from our glasses and other wearables? This is a valid question for work-related purposes, but socially it's not. You can never interact through glasses translating another language while having dinner with friends, or on a date, for example. The small things that make you part of the culture are hidden in the language. The respect and the motivation to blend in are irreplaceable.

For reference, here are the languages I speak, at approximate levels:

Greek - native
English - proficient (C2)
Spanish - high level (C1), active learning
French - medium level (B2), active learning
Italian - coffee+ level (B1), active learning
Dutch - survival level (A2), in hibernation

Get started

Firstly, I think the first foreign language you learn could be taught in a formal way with an experienced teacher. That will teach you how to structure your thought process and how to learn efficiently. It's common in Europe and non-English-speaking countries to learn a second language at school. This guide is not about how to learn formally, though. It's about how to take up new foreign languages without a permanent teacher (I will expand on this later).

One of the most important things when learning a language is motivation. You either love the culture, the language itself (how it sounds and reads), a loved one, or you are moving there or doing a long-term stay. If you hate the language - it is mandatory that you learn it, but you'd rather not - then none of this will work. I found that to be the case with Dutch: while I did like the culture, I found the language pretty bad-sounding (almost ridiculous hhh-hhh sounds) - sorry if you are Dutch. That resulted in me learning the minimum in 7 years, while I picked up Italian in a summer. Now that you have found your calling, let's proceed.

Methods & Tools

I wholeheartedly recommend Memrise as an app for learning. It's vastly better than Duolingo and much less repetitive and boring. It reminds you of words you have forgotten at regular intervals, using spaced-repetition techniques. It's much more focused on everyday interactions, and its unique selling point is videos of random people.
It's genius that they ask native speakers on the street to pronounce words and phrases for you. Having a visual reference makes it much more engaging, and it sticks. In my experience, trying to learn a new word takes maybe 10 fictional time units, but if I am in a real conversation and someone corrects me, it takes just that one time, and I will forever remember the face of the person correcting me and the place. To a smaller degree, that's how Memrise works. But we need to be a bit more structured. After learning everyday phrases ...
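For readers curious about the mechanics, here is a minimal sketch in Python of the core spaced-repetition idea the post describes: review gaps grow while you keep remembering and collapse when you forget. The doubling rule and the Card fields are illustrative assumptions, not Memrise's actual algorithm.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Card:
    word: str
    interval_days: int = 1    # gap until the next review
    due: date = date.today()  # first review: today

def review(card: Card, remembered: bool) -> Card:
    # Core spaced-repetition rule: the gap doubles each time you remember,
    # and collapses back to one day when you forget.
    card.interval_days = card.interval_days * 2 if remembered else 1
    card.due = date.today() + timedelta(days=card.interval_days)
    return card

card = Card("la biblioteca")
card = review(card, remembered=True)   # next review in 2 days
card = review(card, remembered=True)   # then 4 days
card = review(card, remembered=False)  # forgot: back to 1 day
```

Real systems (SM-2 and its descendants) tune the growth factor per item, which is how forgotten words resurface at just the right intervals.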
May 10, 2024 • 46min

LW - My thesis (Algorithmic Bayesian Epistemology) explained in more depth by Eric Neyman

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My thesis (Algorithmic Bayesian Epistemology) explained in more depth, published by Eric Neyman on May 10, 2024 on LessWrong.

In March I posted a very short description of my PhD thesis, Algorithmic Bayesian Epistemology, on LessWrong. I've now written a more in-depth summary for my blog, Unexpected Values. Here's the full post:

***

In January, I defended my PhD thesis. My thesis is called Algorithmic Bayesian Epistemology, and it's about predicting the future.

In many ways, the last five years of my life have been unpredictable. I did not predict that a novel bat virus would ravage the world, causing me to leave New York for a year. I did not predict that, within months of coming back, I would leave for another year - this time of my own free will, to figure out what I wanted to do after graduating. And I did not predict that I would rush to graduate in just seven semesters so I could go work on the AI alignment problem.

But the topic of my thesis? That was the most predictable thing ever. It was predictable from the fact that, when I was six, I made a list of who I might be when I grow up, and then attached probabilities to each option. Math teacher? 30%. Computer programmer? 25%. Auto mechanic? 2%. (My grandma informed me that she was taking the under on "auto mechanic".) It was predictable from my life-long obsession with forecasting all sorts of things, from hurricanes to elections to marble races. It was predictable from that time in high school when I was deciding whether to tell my friend that I had a crush on her, so I predicted a probability distribution over how she would respond, estimated how good each outcome would be, and calculated the expected utility. And it was predictable from the fact that like half of my blog posts are about predicting the future or reasoning about uncertainty using probabilities. So it's no surprise that, after a year of trying some other things (mainly auction theory), I decided to write my thesis about predicting the future.

If you're looking for practical advice for predicting the future, you won't find it in my thesis. I have tremendous respect for groups like Epoch and Samotsvety: expert forecasters with stellar track records whose thorough research lets them make some of the best forecasts about some of the world's most important questions. But I am a theorist at heart, and my thesis is about the theory of forecasting. This means that I'm interested in questions like:

How do I pay Epoch and Samotsvety for their forecasts in a way that incentivizes them to tell me their true beliefs?
If Epoch and Samotsvety give me different forecasts, how should I combine them into a single forecast?
Under what theoretical conditions can Epoch and Samotsvety reconcile a disagreement by talking to each other?
What's the best way for me to update how much I trust Epoch relative to Samotsvety over time, based on the quality of their predictions?

If these sorts of questions sound interesting, then you may enjoy consuming my thesis in some form or another. If reading a 373-page technical manuscript is your cup of tea - well then, you're really weird, but here you go! If reading a 373-page technical manuscript is not your cup of tea, you could look at my thesis defense slides (PowerPoint, PDF),[1] or my short summary on LessWrong.
On the other hand, if you're looking for a somewhat longer summary, this post is for you! If you're looking to skip ahead to the highlights, I've put a * next to the chapters I'm most proud of (5, 7, 9).

Chapter 0: Preface

I don't actually have anything to say about the preface, except to show off my dependency diagram. (I never learned how to make diagrams in LaTeX. You can usually do almost as well in Microsoft Word, with way less effort!)

Chapter 1: Introduction

"Algorithmic Bayesian epistemology" (the title of the...
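To give a flavor of the first question above: the standard tool is a "strictly proper scoring rule", a payment scheme under which reporting your true belief maximizes your expected payment. Here is a quick numerical check of that property for the quadratic (Brier-style) score, as a sketch; the 0.7 belief is an arbitrary example, and this illustration is mine rather than anything from the thesis:

```python
import numpy as np

def quadratic_score(report, outcome):
    # Brier-style score, oriented so that higher is better.
    return 1 - (outcome - report) ** 2

belief = 0.7                      # the forecaster's true probability of the event
reports = np.linspace(0, 1, 101)  # every report they might make instead

# Expected score of each possible report, under the forecaster's own belief.
expected = (belief * quadratic_score(reports, 1)
            + (1 - belief) * quadratic_score(reports, 0))

print(reports[np.argmax(expected)])  # 0.7: honest reporting maximizes expected score
```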
May 10, 2024 • 8min

LW - We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming" by Lukas Gloor

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming", published by Lukas Gloor on May 10, 2024 on LessWrong.

Predicting the future is hard, so it's no surprise that we occasionally miss important developments. However, several times recently, in the contexts of Covid forecasting and AI progress, I noticed that I missed some crucial feature of a development I was interested in getting right, and it felt to me like I could've seen it coming if only I had tried a little harder. (Some others probably did better, but I could imagine that I wasn't the only one who got things wrong.) Maybe this is hindsight bias, but if there's something to it, I want to distill the nature of the mistake.

First, here are the examples that prompted me to take notice:

Predicting the course of the Covid pandemic: I didn't foresee the contribution from sociological factors (e.g., "people not wanting to get hospitalized" - Zvi called it "the control system"). As a result, I overpredicted the difference between countries with a lockdown policy vs ones without. (Note that this isn't necessarily an update against the cost-effectiveness of lockdowns, because the update goes both ways: lockdowns saved fewer lives than I would've predicted naively, but costs to the economy were also lower compared to the counterfactual, because people already social-distanced more than expected of their own accord, since they were reading the news about crowded hospitals and knew close contacts who were sick with the virus.)

Predicting AI progress: Not foreseeing that we'd get an Overton window shift in AI risk awareness. Many EAs were arguably un(der)prepared for the possibility of a "chat-gpt moment," where people who weren't paying attention to AI progress previously got to experience a visceral sense of where AI capabilities progress is rapidly heading. As a result, it is now significantly easier to make significant policy asks to combat AI risks.

Not foreseeing wide deployment of early-stage "general" AI and the possible irrelevance of AI boxing. Early discussions of AI risk used to involve this whole step about whether a superhuman AI system could escape and gain access to the internet. No one (to my knowledge?) highlighted that the future might well go as follows: "There'll be gradual progress on increasingly helpful AI tools. Companies will roll these out for profit and connect them to the internet. There'll be discussions about how these systems will eventually become dangerous, and safety-concerned groups might even set up testing protocols ("safety evals"). Still, it'll be challenging to build regulatory or political mechanisms around these safety protocols so that, when they sound the alarm at a specific lab that the systems are becoming seriously dangerous, this will successfully trigger a slowdown and change the model release culture from 'release by default' to one where new models are air-gapped and where the leading labs implement the strongest forms of information security."
If we had understood the above possibility earlier, the case for AI risks would have seemed slightly more robust, and (more importantly) we could've started sooner with the preparatory work that ensures that safety evals aren't just handled company-by-company in different ways, but that they are centralized and connected to a trigger for appropriate slowdown measures, industry-wide or worldwide.

Concerning these examples, it seems to me that:

1. It should've been possible to either foresee these developments or at least highlight the scenario that happened as one that could happen/is explicitly worth paying attention to.

2. The failure mode at play involves forecasting well on some narrow metrics but not paying attention to changes in the world brought about by the exact initial thin...
May 10, 2024 • 45min

LW - AI #63: Introducing Alpha Fold 3 by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #63: Introducing Alpha Fold 3, published by Zvi on May 10, 2024 on LessWrong.

It was a remarkably quiet announcement. We now have Alpha Fold 3; it does a much improved job predicting all of life's molecules and their interactions. It feels like everyone including me then shrugged and went back to thinking about other things. No cool new toy for most of us to personally play with, no existential risk impact, no big trades to make, ho hum. But yes, when we look back at this week, I expect what we remember will be Alpha Fold 3.

Unless it turns out that it is Sophon, a Chinese technique to potentially make it harder to fine-tune an open model in ways the developer wants to prevent. I do not expect this to get the job done that needs doing, but it is an intriguing proposal.

We also have 95 theses to evaluate in a distinct post, OpenAI sharing the first draft of their model spec, Apple making a world-class anti-AI and anti-iPad ad that they released thinking it was a pro-iPad ad, more fun with the mysterious gpt2, and more. The model spec from OpenAI seems worth pondering in detail, so I am going to deal with that on its own some time in the coming week.

Table of Contents

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Agents, simple and complex.
4. Language Models Don't Offer Mundane Utility. No gadgets, no NPCs.
5. GPT-2 Soon to Tell. Does your current model suck? In some senses.
6. Fun With Image Generation. Why pick the LoRA yourself?
7. Deepfaketown and Botpocalypse Soon. It's not exactly going great.
8. Automation Illustrated. A look inside perhaps the premier slop mill.
9. They Took Our Jobs. Or are we pretending this to help the stock price?
10. Apple of Technically Not AI. Mistakes were made. All the feels.
11. Get Involved. Dan Hendrycks has a safety textbook and free online course.
12. Introducing. Alpha Fold 3. Seems like a big deal.
13. In Other AI News. IBM, Meta and Microsoft in the model game.
14. Quiet Speculations. Can we all agree that a lot of intelligence matters a lot?
15. The Quest for Sane Regulation. Major labs fail to honor their commitments.
16. The Week in Audio. Jack Clark on Politico Tech.
17. Rhetorical Innovation. The good things in life are good.
18. Open Weights are Unsafe and Nothing Can Fix This. Unless, maybe? Hmm.
19. The Lighter Side. Mmm, garlic bread. It's been too long.

Language Models Offer Mundane Utility

How much utility for how much cost? Kapoor and Narayanan argue that with the rise of agent-based systems, you have to evaluate different models on coding tasks based on dollar cost versus quality of results. They find that a simple 'ask GPT-4 and turn the temperature slowly up on retries if you fail' is as good as the agents they tested on HumanEval, while costing less. They mention that perhaps it is different with harder and more complex tasks.

How much does cost matter? If you are using such queries at scale without humans in the loop, or doing them in the background on a constant basis as part of your process, then cost potentially matters quite a bit. That is indeed the point of agents. Or if you are serving lots of customers constantly for lots of queries, those costs can add up fast. Thus all the talk about the most cost-efficient approach. There are also other purposes for which cost at current margins is effectively zero.
If you are a programmer who must evaluate, use, and maintain the code the AI outputs, what percentage of total costs (including your labor costs) is AI inference? In the most obvious baseline case, something akin to 'a programmer asks for help on tasks,' query speed potentially matters, but being slightly better at producing good code, or even slightly better at producing code that is easier for the human to evaluate, understand, and learn from, is going to crush...
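To make the baseline concrete, here is a minimal sketch of the 'retry with rising temperature' strategy described above. `ask_model` and `passes_tests` are hypothetical placeholders (an LLM call and the benchmark's unit tests), and the linear temperature ramp is an assumption rather than Kapoor and Narayanan's exact schedule:

```python
def ask_model(prompt: str, temperature: float) -> str:
    # Hypothetical placeholder: swap in a real LLM client call here.
    raise NotImplementedError

def passes_tests(code: str) -> bool:
    # Hypothetical placeholder: run the task's unit tests against `code`.
    raise NotImplementedError

def solve_with_retries(prompt: str, max_tries: int = 10, max_temp: float = 1.0):
    """Ask once deterministically, then retry at gradually higher temperature."""
    for attempt in range(max_tries):
        temperature = max_temp * attempt / max(1, max_tries - 1)
        candidate = ask_model(prompt, temperature=temperature)
        if passes_tests(candidate):
            return candidate, attempt + 1  # solution, plus model calls spent
    return None, max_tries  # failed within the retry budget
```

The cost comparison then follows directly: total cost is roughly (calls spent) times (price per call), which is exactly the dollar-versus-quality axis they advocate measuring.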
May 10, 2024 • 8min

LW - Why Care About Natural Latents? by johnswentworth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why Care About Natural Latents?, published by johnswentworth on May 10, 2024 on LessWrong.

Suppose Alice and Bob are two Bayesian agents in the same environment. They both basically understand how their environment works, so they generally agree on predictions about any specific directly-observable thing in the world - e.g. whenever they try to operationalize a bet, they find that their odds are roughly the same. However, their two world models might have totally different internal structure, different "latent" structures which Alice and Bob model as generating the observable world around them. As a simple toy example: maybe Alice models a bunch of numbers as having been generated by independent rolls of the same biased die, and Bob models the same numbers using some big complicated neural net.

Now suppose Alice goes poking around inside of her world model, and somewhere in there she finds a latent variable Λ_A with two properties (the Natural Latent properties):

Λ_A approximately mediates between two different observable parts of the world X_1, X_2
Λ_A can be estimated to reasonable precision from either one of the two parts

In the die/net case, the die's bias (Λ_A) approximately mediates between e.g. the first 100 numbers (X_1) and the next 100 numbers (X_2), so the first condition is satisfied. The die's bias can be estimated to reasonable precision from either the first 100 numbers or the second 100 numbers, so the second condition is also satisfied.

This allows Alice to say some interesting things about the internals of Bob's model.

First: if there is any latent variable (or set of latent variables, or function of latent variables) Λ_B which mediates between X_1 and X_2 in Bob's model, then Bob's Λ_B encodes Alice's Λ_A (and potentially other stuff too). In the die/net case: during training, the net converges to approximately match whatever predictions Alice makes (by assumption), but the internals are a mess. An interpretability researcher pokes around in there, and finds some activation vectors which approximately mediate between X_1 and X_2. Then Alice knows that those activation vectors must approximately encode the bias Λ_A. (The activation vectors could also encode additional information, but at a bare minimum they must encode the bias.)

Second: if there is any latent variable (or set of latent variables, or function of latent variables) Λ'_B which can be estimated to reasonable precision from just X_1, and can also be estimated to reasonable precision from just X_2, then Alice's Λ_A encodes Bob's Λ'_B (and potentially other stuff too). Returning to our running example: suppose our interpretability researcher finds that the activations along certain directions can be precisely estimated from just X_1, and the activations along those same directions can be precisely estimated from just X_2. Then Alice knows that the bias Λ_A must give approximately all the information which those activations give. (The bias could contain more information - e.g. maybe the activations in question only encode the rate at which a 1 or 2 is rolled, whereas the bias gives the rate at which each face is rolled.)
Third, putting those two together: if there is any latent variable (or set of latent variables, or function of latent variables) Λ''_B which approximately mediates between X_1 and X_2 in Bob's model, and can be estimated to reasonable precision from either one of X_1 or X_2, then Alice's Λ_A and Bob's Λ''_B must be approximately isomorphic - i.e. each encodes the other. So if an interpretability researcher finds that activations along some directions both mediate between X_1 and X_2, and can be estimated to reasonable precision from either of X_1 or X_2, then those activations are approximately isomorphic to what Alice calls "the bias of the die".

So What Could We Do With That?

We'll give a couple relatively-...
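A small simulation may help make the running example concrete. This is my sketch, not the post's code; the bias vector and sample sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary bias for a six-sided die: this plays the role of the latent Λ_A.
true_bias = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])

rolls = rng.choice(6, size=200, p=true_bias)
X1, X2 = rolls[:100], rolls[100:]

def estimate_bias(x):
    # Empirical face frequencies: an estimate of the latent from one part alone.
    return np.bincount(x, minlength=6) / len(x)

# Second property: the latent is recoverable, up to sampling noise, from either part.
print(estimate_bias(X1))
print(estimate_bias(X2))

# First property (mediation) holds by construction: conditional on the bias,
# the rolls are i.i.d., so X1 carries no further information about X2.
```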
May 9, 2024 • 11min

LW - Dyslucksia by Shoshannah Tekofsky

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dyslucksia, published by Shoshannah Tekofsky on May 9, 2024 on LessWrong.

The curious tale of how I mistook my dyslexia for stupidity - and talked, sang, and drew my way out of it.

Sometimes I tell people I'm dyslexic and they don't believe me. I love to read, I can mostly write without error, and I'm fluent in more than one language. Also, I don't actually technically know if I'm dyslectic cause I was never diagnosed. Instead I thought I was pretty dumb, but if I worked really hard no one would notice. Later I felt inordinately angry about why anyone could possibly care about the exact order of letters when the gist is perfectly clear even if if if I right liike tis. I mean, clear to me anyway.

I was 25 before it dawned on me that all the tricks I was using were not remotely related to how other people process language. One of my friends of six years specialized in dyslexia, and I contacted her, full of excitement about my latest insight.

"Man, guess what? I realized I am dyslectic! This explains so much! I wish someone had told me sooner. It would have saved me so much grief."
"Oh, yeah, I know."
"Wait, what?"
"You are very obviously dyslectic."
"Wait, why didn't you tell me?"
"You didn't seem bothered."
"Oh…"

Turns out my dyslexia was a public secret that dated back all the way to my childhood (and this was obviously unrelated to my constitutional lack of self-awareness). Anyway. How come I kind of did fine? I'm fluent in English (not my native language), wrote my PhD thesis of 150 pages in 3 months without much effort, and was a localization tester for Dutch-English video game translation for two years. I also read out loud till the age of 21, trace every letter like it's a drawing, and need to sing new word sounds to be able to remember them. I thought everyone had to, but no one sent me the memo. Dear reader, not everyone has to.

When I recently shared my information processing techniques with old and new friends, they asked if I had ever written them down so maybe other people could use them too. I hadn't. So here is my arsenal of alternative information processing techniques.

Read Out Loud

Honestly, I didn't realize there was an age where you were supposed to stop doing this. In school you obviously had to whisper to yourself. At home you go to your room and read at normal volume. If it's a fiction book, you do voices for the different characters. It's great. I remember my sister sometimes walking into my room when I was little cause she said it sounded like so much fun in there. It totally was. Later I found out my mother made sure my siblings never made me aware it was unusual I was still reading out loud. Instead she signed me up for competitions to read books on the local radio. This was before the widespread internet and audiobooks. Later I'd read to my parents sometimes, who were always excited about how much energy I threw into the endeavor. I didn't know any different.

In college I was still reading out loud. Research papers have a voice. Mathematical equations especially. They take longer to say out loud than to read in your head, but you can never be sure what's on the page if you don't. According to my brain, anyway. When I was 22 I moved in with my first boyfriend and reading out loud got a little obstructive. I started subvocalizing, and that was definitely less fun.
I still subvocalize now. But if I struggle to follow a passage, I go back to reading it out loud. I've probably read out this essay a dozen times by now. I keep checking the cadence of every sentence. It's easier to spot word duplications, cause I find myself repeating myself. Missing words also stick out like inverted potholes. They destroy the flow. So I jump back and smooth them over. Sometimes when I talk, I finish the sentence differently than it's written. Then I go back and ...
May 9, 2024 • 7min

LW - some thoughts on LessOnline by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: some thoughts on LessOnline, published by Raemon on May 9, 2024 on LessWrong.

I mostly wrote this for facebook, but it ended up being a whole-ass post so I figured I'd put it here too.

I'm helping run "LessOnline: A Festival of Writers Who Are Wrong On the Internet (But Striving To Be Less So)". I'm incentivized to say nice things about the event. So, grain of salt and all. But, some thoughts, which roughly break down into:

The vibe: preserving the cozy/spaciousness of a small retreat at a larger festival
The audience: "Reunion for the Extended Family Blogosphere, both readers and writers."
Manifest, and Summer Camp

...

I. The Vibe

I've been trying to explain the vibe I expect and it's tricksy. I think the vibe will be something like "CFAR Reunion meets Manifest." But a lot of people haven't been to a CFAR Reunion or to Manifest. I might also describe it like "the thing the very first EA Summit (before EA Global) was like, before it became EA Global and got big." But very few people went to that either.

Basically: I think this will do a pretty decent job of having the feel of a smaller (~60 person), cozy retreat, while being more like 200 - 400 people. Lightcone has run several ~60 person private retreats, which succeeded in being a really spacious intellectual environment, with a pretty high hit rate for meeting new people who you might want to end up having a several-hour conversation with. Realistically, with a larger event there'll be at least some loss of "cozy/spaciousness", and a somewhat lower hit rate for people you want to talk to with the open invites.

But, I think Lightcone has learned a lot about how to create a really nice vibe. We've built our venue, Lighthaven, with "warm, delightful, focused intellectual conversation" as a primary priority. Whiteboards everywhere, lots of nooks, and a fractal layout that makes it often feel like you're in a secluded private conversation by a firepit, even though hundreds of other people are nearby (often at another secluded private conversation with _their_ own firepit!)

(It's sort of weird that this kind of venue is extremely rare. Many events are hotels, which feel vaguely stifling and corporate. And the nice spacious retreat centers we've used don't score well on the whiteboard front, and surprisingly not even that well on "lots of nooks")

...

Large events tend to use "Swap Card" for causing people to meet each other. I do find Swap Card really good for nailing down a lot of short meetings. But it somehow ends up with a vibe of ruthless efficiency - lots of back-to-back 30 minute meetings, instead of a feeling of organic discovery. The profile feels like a "job fair professional" sort of thing.

Instead we're having a "Names, Faces, and Conversations" document, where people write in a giant google doc about what questions and ideas are currently alive for them. People are encouraged to comment inline if they have thoughts, and +1 if they'd be into chatting about it. Some of this hopefully turns into 1-1 conversations, and if more people are interested it can organically grow into "hey let's hold a small impromptu group discussion about that in the Garden Nook"

...

We'll also have a bunch of stuff that's just plain fun.
We're planning a puzzle hunt that spans the event, and a dance concert led by the Fooming Shoggoths, with many songs that didn't make it onto their April 1st album. And the venue itself just lends itself to a feeling of whimsy and discovery.

...

Another thing we're doing is encouraging people to bring their kids, and providing day care to make that easier. I want this event to feel like something you can bring your whole life/self to. By default these sorts of events tend to not be very kid-friendly.

...

II. The Audience

So that was a lot of words about The Vibe. The second question is "who a...
May 9, 2024 • 57min

LW - Dating Roundup #3: Third Time's the Charm by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dating Roundup #3: Third Time's the Charm, published by Zvi on May 9, 2024 on LessWrong.

The first speculated on why you're still single. We failed to settle the issue. A lot of you were indeed still single. So the debate continues.

The second gave more potential reasons, starting with the suspicion that you are not even trying, and also many ways you are likely trying wrong.

The definition of insanity is trying the same thing over again expecting different results. Another definition of insanity is dating in 2024. Can't quit now.

You're Single Because Dating Apps Keep Getting Worse

A guide to taking the perfect dating app photo. This area of your life is important, so if you intend to take dating apps seriously then you should take photo optimization seriously, and of course you can then also use the photos for other things.

I love the 'possibly' evil here.

Misha Gurevich: possibly evil idea: Dating app that trawls social media and websites and creates a database of individuals regardless of whether they opt in or not, including as many photos and as much contact information as can be found. Obviously this would be kind of a privacy violation and a lot of people would hate it. But I imagine a solid subset of singles who are lonely but HATE the app experience would be grateful to be found this way.

No big deal, all we are doing is taking all the data about private citizens on the web and presenting it to any stranger who wants it in easy form, as if you might want to date them. Or stalk them. Or do anything else, really. And you thought AI training data was getting out of hand before.

All right, so let's consider the good, or at least not obviously evil, version of this. There is no need to fill out an intentional profile, or engage in specific actions, other than opting in. We gather all the information off the public web. We use AI to amalgamate all the data, assemble in-depth profiles and models of all the people. If it thinks there is a plausible match, then it sets it up. Since we are in danger of getting high on the creepiness meter, let's say the woman gets to select who gets contacted first, then if both want to match in succession you put them in contact. Ideally you'd also use AI to facilitate in various other ways: let people say what they actually want in natural language, let the AI ask follow-up questions to find potential matches, or do checks first (e.g. 'I would say yes if you can confirm that he…'), and so on.

There is definitely not enough deep work being done trying to overturn the system. Bumble gives up its one weird trick, goes back to men messaging first.

Melissa Chen: The evolution of Bumble:
Sick of men inboxing women ("the patriarchy is so creepy and icky!")
Starts dating app to reverse the natural order (women now make the first move! So empowering! So brave & stunning!)
Women complain it's exhausting
Reinstate the natural law

Hardcore Siege: It's such a ridiculous headline. I have never gotten an opener on Bumble besides "hey"; women never actually work to start a conversation or have a good opener, they're literally just re-approving the ability of the man to start the conversation.

Outa: Anyone that's used it would tell you that 99% of the time they would just leave a "hey" or "."
Casey Handmer: AFAIK no one has yet made a dating app where the cost of sending messages is increased if you're a creep. This would be technologically easy to do, and would let the market solve the problem.

Several interesting things here.

1. Many 'women never actually initiated the conversation' responses. Women say 'hey' to bypass the requirement almost all the time. That is not obviously useless as a secondary approval, but it presumably is not worth the bother.

2. This was among women who self-selected into the app with mandatory female openers, so yeah, women really really...
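For what it's worth, Handmer's mechanism is easy to state in code. A toy sketch with an invented pricing formula (nothing here comes from an actual app):

```python
def message_price_cents(base: int, reports: int, messages_sent: int) -> int:
    # Toy rule: price grows quadratically with the share of a sender's past
    # messages that recipients reported, pricing out persistent bad behavior.
    report_rate = reports / max(1, messages_sent)
    return round(base * (1 + 10 * report_rate) ** 2)

print(message_price_cents(base=10, reports=0, messages_sent=100))   # 10 cents
print(message_price_cents(base=10, reports=20, messages_sent=100))  # 90 cents
```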
May 8, 2024 • 16min

LW - Designing for a single purpose by Itay Dreyfus

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Designing for a single purpose, published by Itay Dreyfus on May 8, 2024 on LessWrong.

If you've ever been to Amsterdam, you've probably visited, or at least heard about, the famous cookie store that sells only one cookie. I mean, not a single piece, but a single flavor. I'm talking about Van Stapele Koekmakerij of course - where you can get one of the world's most delicious chocolate chip cookies. If you don't arrive at opening hour, you're likely to find a long queue extending from the store's doorstep through the street where it resides. When I visited the city a few years ago, I watched the sensation myself: a nervous crowd waited as the rumor of 'out of stock' cookies spread across the line.

The store, despite becoming a landmark for tourists, stands for an idea that seems to be forgotten in our culture: crafting for a single purpose.

In the tech scene where I'm coming from, and which you might come from too, this approach is often perceived as singular, and not in a positive sense. We've been taught to go big or go home - raise millions in funding, build a big company, hire more and more employees, and hope for the desired exit. Anything less is considered a kind of failure.

From a personal perspective, I've seen this attitude in almost every branding session I ran with startup founders. Again and again, they struggled to distill their primary focus. Moreover, when discussing competitors, it often seemed their startup competed in every possible field. In a way, that fear of committing reflects the human nature of FOMO - deliberately giving up on something(s) and experiencing the potential loss of other benefits.

This mindset has also seeped into our collective body of work, especially in software. A product, which often starts as a weird small creature, gradually evolves into a multi-arm octopus, which sadly became the norm for VCware[1]. And so we've been left with bloated, bigger, and… worse software.

The idea of maintaining a small scope in product has already appeared in my writing in various forms: in niche product design I explored the effect of growth on design, and in defense of Twitter, I wrote about the bloated era of incumbent culture. But in between there seems to be a different attitude that not many choose to embrace, which, like in Van Stapele's case, seeks a real purpose.

Going back to basics as a way to find purpose

In a tweet posted a few months ago, Jeff Sheldon described his renewed approach to photography after getting a new camera. It enlightened my eyes.

I'm not a professional photographer, and never have been. But my beloved Canon 700D still serves me often while traveling. Besides learning about ISO and shutter speed settings, being familiar with the mechanics of a DSLR camera also introduced me to the practice of shooting photos in RAW format, which means capturing photos at the highest quality level. But the super heavy file format marks only the start of the process in modern photography. The rest belongs to the post-processing act: the daunting work of polishing, enhancing, and fixing images.

When I returned from vacation, I hoped to edit my captures. Then I noticed something weird. When comparing my photos to some stunning photos I saw online, it seemed like my camera's output wasn't as good as those shared photos.
Doubting my gear, I then, again, noticed something I should probably have known: it wasn't about the camera, but the editing. I realized professionally made photos were overly edited, often detached from their original conditions. It appeared that what you see isn't what you get. I wondered, has photography become an art of photo manipulation?

To respectful photographers, this might appear like a false accusation. The time spent sitting in front of the photo editor is at the heart of many camera enthusiasts' practice. After all, that's why a camera is set to sh...
May 7, 2024 • 5min

LW - Observations on Teaching for Four Weeks by ClareChiaraVincent

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Observations on Teaching for Four Weeks, published by ClareChiaraVincent on May 7, 2024 on LessWrong.

I just finished a program where I taught two classes of high school seniors, two classes a day for four weeks, as part of my grad program. This experience was a lot of fun and it was rewarding, but it was really surprising, and even if only in small ways it prompted me to update my beliefs about the experience of being a professor. Here are the three biggest surprises I encountered.

1: The Absent-Minded Professor Thing is Real

I used to be confused and even a little bit offended when, at my meetings with my advisor every week, he wouldn't be able to remember anything about my projects, our recent steps, or what we talked about last week. Now I get it. Even after just one week of classes, my short-term and long-term memory were both entirely shot. I would tell students things like, "send that to me in an email, otherwise I'll forget" because I would. Now that the program is over, things are slowly getting better, but I'm still recovering.

I can't really tell why this happened, but there are two obvious theories. The first is just that two classes at the same time is too many names and faces (plus other personal details) at once, and so much information just overwhelmed me. The other is that there's something unusual about teaching in particular. I noticed that I was doing a lot more task-switching than normal. Most jobs and most of my research experience involve working on projects for long blocks of time, multiple hours or sometimes multiple days, with few distractions aside from basics like eating and sleeping and commuting. But teaching involves changing your focus over and over.

I've led recitation sections as a teaching assistant, but for some reason this was so much worse. That makes me think that it's more likely to be the task-switching. As a recitation leader, you have to remember a lot of names and faces too. But once you're outside of class you can mostly go back to work as normal; there's not so much task-switching.

This project was in a high school, but my students were all seniors, so I think this is what it would be like to teach college too. Most of them were already 18, so you can barely tell the difference. I was helping them with projects, so I think it's a bit like being a PhD advisor too. So it could also be the load of keeping track of lots of research projects, more than just keeping track of lots of people.

2: Teaching Makes You Dehydrated

For this program I taught only two days a week, just two classes, on Monday and Wednesday afternoon. But even with only two classes per day and two days per week, I became seriously and uncomfortably dehydrated. This had all kinds of weird knock-on effects on my digestion and my ability to concentrate. It was really very unpleasant.

Part of this is that you have to be talking and meeting all the time. But mostly I got dehydrated because of the logistics. If you drink enough water, then halfway through the class you have to go to the bathroom, and you're either super uncomfortable and distracted all session or you have to awkwardly walk out in the middle of class. Even if it doesn't hit right away, a 10-minute break between classes isn't enough time to go to the bathroom, especially since some students show up early for the next class and others stay late.
So you're trapped. I had some success on days when I showed videos and could sneak out the back while they were watching. But overall this was bad for my teaching and my quality of life.

3: Teaching is a Grueling Job Even Under the Best Circumstances

I didn't really like high school. Classes were too easy and too boring, and even though no one was asking very much of me, I felt like I was being taken advantage of. Implicitly I assumed that the teachers were the ones ta...
