
Artificial Ignorance

Latest episodes

Aug 7, 2024 • 32min

How a $3000/hour escort uses AI to automate sex work

Adelyn Moore, a self-described "autistic courtesan" and escort charging over $2000 per hour, shares her unique insights on the intersection of sex work and artificial intelligence. She discusses how AI-generated content struggles with realism, pointing out current technological limitations. Adelyn also explores the balance between authenticity and automation in personal interactions and the evolving complexities of intimacy in escort services. Her candid perspective challenges societal assumptions about the adult industry and highlights its innovative potential.
Jun 27, 2024 • 37min

Saving the world with AI and government grants

Helena Merk, CEO of Streamline Climate, discusses reshaping climate tech funding with AI, highlighting bureaucratic hurdles in grant applications. She emphasizes AI's role in optimizing processes to accelerate global climate response, reducing errors, and saving time for startups. The podcast explores the importance of prompt engineering in grant applications, AI's impact on renewable energy financing, and challenges in automating grant eligibility. Additionally, it touches on partnering for carbon dioxide removal and navigating the intersection of climate, technology, and entrepreneurship.
Apr 24, 2024 • 41min

Getting to the top of the GPT Store (and building an AI-native search engine, too)

Exploring the success of Consensus, an AI-native search engine for research papers, in the GPT Store, and the value of vector search. Also discussed: the complexities of search engines beyond semantic similarity, and potential future AI applications in the NFL.
Apr 10, 2024 • 36min

How Intercom is transforming customer support with AI

Fergal Reid, VP of AI at Intercom, discusses how the company is using generative AI to transform customer support. They talk about the launch of Fin AI Copilot, which can learn from past conversations and improve agent interactions. Intercom is shifting from chatbots to AI copilots that augment support staff, enhancing efficiency and user experience.
Feb 26, 2024 • 40min

Bridging AI and human creativity

Something I'm often thinking about is AI's ongoing impact on the arts. Clearly, Midjourney and Stable Diffusion have unlocked a new engine for creativity, but it's just that: an engine. Most of us wouldn't get much value out of a V8 if it were just dropped in our garage, and most professionals probably can't go from diffusion model to productive workflow without some extra steps. So designers, especially UX and Figma designers, are still safe from AI for the time being.

But there is a lot of change on the horizon - and one of the best people to discuss that change is Harrison Telyan, the co-founder of NUMI, which offers startups access to a guild of vetted, professional designers for a flat monthly subscription. Before founding NUMI, Harrison was the founding designer of Imgur, and he's a graduate of RISD, the Rhode Island School of Design, a world-class design program. Harrison and I talked about his experience rapidly scaling a prior business in Africa, how AI is eating the design world (and the jobs at risk of being eaten), NUMI's unique, engineering-esque approach to providing a design service, and much more.

Three key takeaways:

Real feedback comes from paying customers. In Harrison's experience, founders can be reluctant to reach out and talk to their customers directly - and sometimes are even reluctant to charge customers at all.

"[Something] that I see a lot in founders is how unwilling, maybe not even unwilling, but they have forgotten to actually start the business at some point. I always recommend you chop up your customers in half and start charging them - you will see very quickly the type of feedback that you'll get when you try to separate someone from their money. That's when the real feedback comes."

AI has a ways to go before replacing talented designers. Harrison is bullish about AI's impact on the design community - but he also admits that areas like entry-level graphic design work (as opposed to higher-level brand identity or UX work) are going to be at risk from AI pretty soon.

"The real problem that I see, though, is none of these [AI] companies have design leaders behind their prompting or their code, and so naturally they're capped. … I'm looking at the landscape and I'm quite bullish on how AI is going to serve the design community. We hear all the time from Guild members at NUMI, is AI gonna replace me? No. It's just gonna allow you to do work faster, more efficiently, and it's gonna take away the rote administrative stuff of design."

Not all design agencies are the same. At first, it's easy to think of NUMI as just another "agency." But Harrison pushes back on that label - first, because they think of their design community as a guild, not as independent contractors, and second, because they're building tools and education for the guild to get better, rather than subcontracting work.

"We always cringe at the word agency when someone's describing us because on the surface, call us whatever you want, but we know what we are. And what we are is a company that was started by designers for designers. And that may not mean much, but when you look at our competition, all of them were started by people in marketing, and then they just create these commodified versions of us that ask for the lowest price at the highest quality with the most communication. We just take a different approach, and that approach is: how can you lift up the designer through technology? How can you remove all the BS from the admin side of what they have to do so that they can get back to designing? It comes down to leveraging tech to remove the BS, to make the designer move faster and put them up on a pedestal. It's actually very similar to how Airbnb thinks about its hosts. Put them up on a pedestal and the rest will work itself out. And that's what we do."

And three things I learned for the first time:

* Boda bodas are bicycle and motorcycle taxis commonly found in East Africa.
* Figma plugins suffer from bit rot - they need to be regularly maintained to keep up with underlying platform changes.
* Many founders seek design services too early, when they really need to be experimenting and talking to customers as much as possible.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.ignorance.ai/subscribe
Jan 25, 2024 • 44min

Funding a new generation of AI companies

The podcast discusses the process of finding product-market fit in startups and the importance of determination and early wins. It explores the potential of AI, compares it to the adoption of smartphones, and emphasizes the need for safety measures in AI and robotics. The host also highlights exciting AI companies and their products, and provides an overview of the HF0 program and its investment terms.
Oct 18, 2023 • 41min

The AI email startup that's taking on Gmail

A few weeks ago, I was lucky enough to sit down with Andrew Lee, the CEO of Shortwave and the cofounder of Firebase.

If you're not familiar with Firebase, it's a platform for developers who don't want to host their own backend infrastructure. After being acquired by Google in 2014, it's now a key part of Google Cloud Platform and is used by millions of businesses.

These days, Andrew works on Shortwave, an email client that started as a replacement for Google Inbox but has quickly become a leader in the AI-for-email space. The Shortwave AI assistant can summarize threads, search your inbox, or write your replies for you.

Full disclosure: I've been using Shortwave for a couple of months now, and I'm a pretty big fan. I used to use Google Inbox pretty heavily, so it was refreshing to find a worthy successor. Having the ability to summarize emails was the cherry on top - I wanted that feature in Gmail so badly, I made my own Chrome extension to do it. But Shortwave's approach is much, much more thorough than my slapped-together extension.

In our conversation, Andrew and I dove into the company's AI architecture, what he learned building Firebase and how he's applying that to Shortwave, and how he thinks about competing with Google and Gmail.

Five key takeaways:

With AI, being lean is a big advantage. Shortwave is able to outpace Google because it can iterate faster with new AI technology, and it doesn't have to worry about working with potential "competitors."

"I think we have a few advantages. One is we're just a startup. We can move really fast. So we have something live that works today. Google has Duet AI, which hasn't launched anything at this level. It has some very basic writing features, but most of the stuff we've talked to salespeople about is 'coming next year.' I used to work at Google. I have some good insight into why it's hard for a big company to move very quickly. People at Google are very sharp and they're good at what they do, but it is a very big challenge to move a huge organization with billions of people forward at a rapid clip. And so we can outrun them. We also have the benefit of being able to use the best technology, wherever it is. I think Google is gonna be extremely reluctant to just start calling the OpenAI API, for example. I think they'll be very reluctant to use open source models from Microsoft, which we do."

Making a fast, capable AI app takes more than "just" a few LLM calls. Shortwave's architecture, which they recently detailed in a great blog post, shows the lengths they've gone to in order to build something that is both more capable than a basic ChatGPT integration and lightning fast.

"Every time you make a request in our product, there's a dozen LLM calls going out, and there's a cross encoder that's running, and there's embeddings that are being generated. The first thing we do is we take your query and the history that you've had with the chat and a bunch of contextual information about the state of the world. For example, what's open on your screen, whether you've written a draft, what time zone you're in, things like that. All so we can figure out what you're talking about, and we ask an LLM, 'What information would be useful in answering this question?' Do we check your calendar? Do we search email history? Do we pull in documentation? Do we look at your settings? Do we take the current thread and just stick it in there? There's a whole bunch of stuff that we can do. And once we've determined that, it allows us to kind of modularize our system where we say, 'Hey, we know we need these three things.' And each one of those pieces of information can then go off in parallel and load information."
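The route-then-fan-out pattern Andrew describes can be sketched in a few lines. This is a hypothetical illustration, not Shortwave's actual code: the router here is a trivial keyword stand-in for what would really be an LLM call, and all fetcher names and strings are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fetchers: each loads one kind of context for the prompt.
FETCHERS = {
    "calendar": lambda q: "calendar: flight to NYC departs Friday 9am",
    "email_search": lambda q: f"email search results for {q!r}",
    "settings": lambda q: "settings: time zone US/Pacific",
}

def route(query: str) -> list[str]:
    """Decide which tools to consult (stand-in for an LLM routing call)."""
    tools = ["email_search"]
    if any(w in query.lower() for w in ("when", "time", "friday")):
        tools.append("calendar")
    return tools

def answer(query: str) -> str:
    tools = route(query)
    # Fan out: each selected fetcher loads its context in parallel.
    with ThreadPoolExecutor() as pool:
        chunks = list(pool.map(lambda t: FETCHERS[t](query), tools))
    # Merge everything into one "master prompt" for the final LLM call.
    return "CONTEXT:\n" + "\n".join(chunks) + f"\nQUESTION: {query}"

print(answer("When am I leaving on Friday?"))
```

The point of the pattern is that the expensive context-loading steps are independent once the router has picked them, so they can run concurrently and the user only waits for the slowest one.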
"The most interesting one by far is our AI search infrastructure, where we go off and we use a bunch of AI tech to find emails that are relevant to the query and allow it to answer questions that are answerable in your email history. But then we take the output of all those tools, we bring them back together, we throw them in a big master prompt, and we have it answer your question. And we do that whole thing - the dozen or so calls, the cross encoder, and the embeddings, and all of that - in three seconds."

The current RAG approaches have significant limitations. RAG (retrieval augmented generation) is currently the most popular way of giving LLMs "long-term memory": fetching relevant documents and handing them to a prompt. But Andrew discussed why that doesn't work amazingly well, and how they're trying to work around it.

"The standard approach to document search that AI folks are doing is the embedding plus vector database approach, where you take all of the history, you embed it, you store that in a vector database somewhere, and then you take the query, you embed that, you do a search with cosine similarity, you find the relevant documents, you pull those in and you put them in a prompt. But it doesn't actually produce as good of results as you might like, because it only works in cases where the documents that answer your question have semantic similarity with the question itself. Sometimes this is true, right? But if I say, 'When am I leaving on Friday,' what you're really looking for is the time of your flight, and that email doesn't have the word 'leaving' in there at all. So we wanted to go a step further and say, okay, we want to be even smarter than this. We wanna find things that don't necessarily have semantic similarity, and still answer your question, pull those in. So the way we do that is we have a whole bunch of different what we call fetchers - a whole bunch of different methods for loading those emails into memory. We look for names, we look for time ranges, we look for labels, we look for keywords. There's a few other things that we look for. And then we go and we just load all of those emails. We're going to pull all the things that are semantically similar, and the ones that match relevant keywords, and the ones in the date range, and the ones involving the right people, et cetera."

As always, talking to users is incredibly important. This is one of the things that Y Combinator drills into its founders, and with good reason. Shortwave spent over a year experimenting with crazy collaboration features, but ultimately came back to focus on a great single-user experience.

"When I started this company, I said to myself, I'm not going to be like all those other second-time founders that think they know everything. That jump in and think it's going to be easy. I'm going to do this from first principles, and we're gonna talk to our users and we're going to iterate really fast, and we're going to be scrappy. And we did that. We talked to our users, we were disciplined. But it was still just a brutal refresher on how much you have to do that. Like how much you have to talk to users, how much you have to be willing to admit your ignorance and throw out stuff that isn't working. We tried all kinds of features. The current state of the product is iteration number, I don't know, 10 or something. For the first year of the product, basically everybody churned. Because we had this much more crazy rethink of how email works, which in retrospect was not a particularly good idea."

Sometimes backwards compatibility is inevitable. Many founders are trying to build new and better software, and as a result ship their minimum viable product (MVP) with a bare-bones set of features. But sometimes you need to support the entire universe of features that your customers actually want - especially if you have established competitors.

"One of the decisions I wish I would've made earlier is to say that we're going all in on supporting the full breadth of email. There's a lot of stuff in email that feels ancient, that you might, starting out fresh, be like, we shouldn't bother doing this. A good example would be BCC. Kids these days haven't heard of a carbon copy, much less a blind carbon copy. It's kind of this weird, esoteric thing. And for a while we didn't have it. We said, we're gonna build a different primitive that's gonna do some of the things that BCC does. And I think what we learned was people are so used to some of these things. And in order to play nicely with existing systems - to play nicely with Gmail, to play nicely with other people's email clients - you really have to support these things fully. You can build cool stuff, but it has to be layered on top. So you can build a nicer interface doing X, Y, Z, but the underlying stuff needs to be totally standard. And I think we should have accepted that much sooner and said, we are just going to support everything that email does and then build simplifications on top as workflows, rather than trying to simplify the underlying model."

And three things I learned for the first time:

* Gmail pioneered the idea of threaded conversations in email, which was not something email was originally designed for. As a result, Gmail still has a setting where you can disable threads entirely!
* Firebase originally started as an app called SendMeHome, which was meant to help people return lost items. The founders pivoted twice, listening to what their users wanted, and eventually landed on Firebase.
* HyDE (Hypothetical Document Embeddings) is a RAG technique that involves generating a fake "answer" document that might contain relevant words the query itself lacks (like "leaving" vs. "flight" from the quotes above), and using it as a stepping stone to find the right underlying documents.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.ignorance.ai/subscribe
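A minimal sketch of both the plain cosine-similarity retrieval Andrew criticizes and the HyDE workaround, using Andrew's "leaving" vs. "flight" example. Everything here is a toy: bag-of-words counts stand in for a real embedding model, and a hard-coded string stands in for the LLM-generated hypothetical document.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" - a real system would use a model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "your flight departs friday at 9am",   # the email that answers the query
    "lunch with sam on monday",            # an irrelevant email
]
query = "when am i leaving on friday"

# Plain retrieval: compare the query embedding to each document.
# "leaving" never appears in the flight email, so the match is weak.
plain = [cosine(embed(query), embed(d)) for d in docs]

# HyDE: first generate a *hypothetical* answer document, then search
# with its embedding instead. (Hard-coded here; in practice an LLM
# writes this text from the query.)
hypothetical = "your flight departs friday morning"
hyde = [cosine(embed(hypothetical), embed(d)) for d in docs]

print(plain, hyde)
```

With these toy inputs, plain cosine similarity actually scores the irrelevant lunch email slightly higher (the only overlaps are stopwords like "on"), while the hypothetical document shares "flight"/"departs"/"friday" with the right email and retrieves it easily.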
