This Friday we’re doing a special crossover event in SF with Dylan Patel of SemiAnalysis (previous guest!), and we will do a live podcast on site. RSVP here.
Also join us on June 25-27 for the biggest AI Engineer conference of the year!
Replicate is one of the most popular AI inference providers, reporting over 2 million users as of their $40m Series B with a16z. But how did they get there?
The Definitive Replicate Story (warts and all)
Their overnight success took 5 years of building, and it all started with arXiv Vanity, a 2017 vacation project that re-renders arXiv papers (from their LaTeX source) as semantic web pages that reflow nicely, with better typography and whitespace.
From there, Ben and Andreas’ idea was to build tools to make ML research more robust and reproducible by making it easy to share code artefacts alongside papers. They had previously created Fig, which made it easy to spin up dev environments; it was eventually acquired by Docker and turned into `docker-compose`, the industry standard way to define services from containerized applications.
2019: Cog
The first iteration of Replicate was a Fig-equivalent for ML workloads which they called Cog; it made it easy for researchers to package all their work and share it with peers for review and reproducibility.
But they found that researchers were terrible users: they’d do all this work for a paper, publish it, and then never return to it again.
“We talked to a bunch of researchers and they really wanted that.... But how the hell is this a business, you know, like how are we even going to make any money out of this?
…So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. Do you want like a deployment platform for deploying models? Do you want a central place for versioning models? We were trying to think of lots of different products we could sell that were related to this thing…
So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day.”
The team graduated YCombinator with no customers, no product and nothing to demo - which was fine because demo day got canceled as the YC W’20 class graduated right into the pandemic. The team spent the next year exploring and building Covid tools.
2021: CLIP + GAN = PixRay
In early 2021, OpenAI released CLIP. Overnight, dozens of Discord servers got spun up to hack on CLIP + GANs. Unlike academic researchers, this community was constantly releasing new checkpoints and builds of models.
PixRay was one of the first models being built on Replicate, and it quickly started taking over the community. Chris Dixon has a famous 2010 post titled “The next big thing will start out looking like a toy”; image generation would have definitely felt like a toy in 2021, but it gave Replicate its initial boost.
2022: Stable Diffusion
In August 2022 Stable Diffusion came out, and all the work they had been doing to build this infrastructure for CLIP / GAN models became the best way for people to share their Stable Diffusion fine-tunes:
And like the first week we saw people making animation models out of it. We saw people make game texture models that use circular convolutions to make repeatable textures. We saw a few weeks later, people were fine tuning it so you could put your face in these models and all of these other ways. […] So tons of product builders wanted to build stuff with it. And we were just sitting in there in the middle, as the interface layer between all these people who wanted to build, and all these machine learning experts who were building cool models. And that's really where it took off. Incredible supply, incredible demand, and we were just in the middle.
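One detail in that quote is worth unpacking: the "circular convolutions" behind repeatable game textures simply wrap the convolution around the image edges so the output tiles seamlessly. A minimal PyTorch sketch of the idea (illustrative, not any particular model's code):

```python
import torch
import torch.nn as nn

# "Circular" padding wraps the left edge around to the right (and top to
# bottom), so features computed at one border line up with the opposite
# border and the generated texture tiles seamlessly.
conv = nn.Conv2d(
    in_channels=3,
    out_channels=16,
    kernel_size=3,
    padding=1,
    padding_mode="circular",  # the key change vs. ordinary zero padding
)

x = torch.randn(1, 3, 64, 64)  # a dummy 64x64 RGB input
y = conv(x)
print(y.shape)  # torch.Size([1, 16, 64, 64])
```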
(Stable Diffusion also spawned Latent Space as a newsletter)
The landing page paved the cowpath for the intense interest in diffusion model APIs.
2023: Llama & other multimodal LLMs
By 2023, Replicate’s growing visibility in the Stable Diffusion indie hacker community came from top AI hackers like Pieter Levels and Danny Postma, each making millions off their AI apps:
Meta then released LLaMA 1 and 2 (our coverage of it), greatly pushing forward the SOTA open source model landscape. Demand for text LLMs and other modalities rose, and Replicate broadened its focus accordingly, culminating in an $18m Series A and $40m Series B from a16z (at a $350m valuation).
Building standards for the AI world
Now that the industry is evolving from toys to enterprise use cases, all these companies are working to set standards for their own space. We cover this at ~45 mins in the podcast. Some examples:
* LangChain has been trying to establish “chain” as the standard mental model for putting multiple prompts and models together, and the “LangChain Expression Language” to go with it. (Our episode with Harrison)
* LlamaHub for packaging RAG utilities. (Our episode with Jerry)
* Ollama’s Modelfile to define runtimes for different model architectures. These are usually targeted at local inference.
* Cog (by Replicate) to create environments to which you can easily attach CUDA devices and make it easy to spin up inference on remote servers.
* GGUF as the file format for ggml-based executors.
None of them have really broken out yet, but this is going to become a fiercer competition as the market matures.
Full Video Podcast
As a reminder, all Latent Space pods now come in full video on our YouTube, with bonus content that we cut for time!
Show Notes
* Free $10 credit for Latent Space readers
* Andreas Jansson (Ben’s co-founder)
* Charlie Holtz (Replicate’s Hacker in Residence)
* Fig (now Docker Compose)
* Command Line Interface Guidelines (clig)
* Apple Human Interface Guidelines
* PixRay
* VQGAN-CLIP by Rivers Have Wings
Timestamps
* [00:00:00] Introductions
* [00:01:17] Low latency is all you need
* [00:04:08] Evolution of CLIs
* [00:05:59] How building ArxivVanity led to Replicate
* [00:11:37] Making ML research replicable with containers
* [00:17:22] Doing YC in 2020 and pivoting to tools for COVID
* [00:20:22] Launching the first version of Replicate
* [00:25:51] Embracing the generative image community
* [00:28:04] Getting reverse engineered into an API product
* [00:31:25] Growing to 2 million users
* [00:34:29] Indie vs Enterprise customers
* [00:37:09] How Unsplash uses Replicate
* [00:38:29] Learnings from Docker that went into Cog
* [00:45:25] Creating AI standards
* [00:50:05] Replicate's compute availability
* [00:53:55] Fixing GPU waste
* [01:00:39] What's open source AI?
* [01:04:46] Building for AI engineers
* [01:06:41] Hiring at Replicate
Transcript
Alessio [00:00:00]: Hey everyone, welcome to the Latent Space podcast. This is Alessio, partner and CTO in Residence at Decibel Partners, and I'm joined by my co-host Swyx, founder of Smol AI.
Swyx [00:00:14]: Hey, and today we have Ben Firshman in the studio. Welcome Ben.
Ben [00:00:18]: Hey, good to be here.
Swyx [00:00:19]: Ben, you're a co-founder and CEO of Replicate. Before that, you were most notably founder of Fig, which became Docker Compose. You also did a couple of other things before that, but that's what a lot of people know you for. What should people know about you that, you know, outside of your, your sort of LinkedIn profile?
Ben [00:00:35]: Yeah. Good question. I think I'm a builder and tinkerer, like in a very broad sense. And I love using my hands to make things. So like I work on, you know, things may be a bit closer to tech, like electronics. I also like build things out of wood and I like fix cars and I fix my bike and build bicycles and all this kind of stuff. And there's so much, I think I've learned from transferable skills, from just like working in the real world to building things, building things in software. And you know, it's so much about being a builder, both in real life and, and in software that crosses over.
Swyx [00:01:11]: Is there a real world analogy that you use often when you're thinking about like a code architecture or problem?
Ben [00:01:17]: I like to build software tools as if they were something real. So I wrote this thing called the command line interface guidelines, which was a bit like sort of the Mac human interface guidelines, but for command line interfaces, I did it with the guy I created Docker Compose with and a few other people. And I think something in there, I think I described that your command line interface should feel like a big iron machine where you pull a lever and it goes clunk and like things should respond within like 50 milliseconds as if it was like a real life thing. And like another analogy here is like in the real life, you know, when you press a button on an electronic device and it's like a soft switch and you press it and nothing happens and there's no physical feedback of anything happening, then like half a second later, something happens. Like that's how a lot of software feels, but instead like software should feel more like something that's real where you touch, you pull a physical lever and the physical lever moves, you know, and I've taken that lesson of kind of human interface to, to software a ton. You know, it's all about kind of low latency of feeling, things feeling really solid and robust, both the command lines and, and user interfaces as well.
Swyx [00:02:22]: And how did you operationalize that for Fig or Docker?
Ben [00:02:27]: A lot of it's just low latency. Actually, we didn't do it very well for Fig in the first place. We used Python, which was a big mistake where Python's really hard to get booting up fast because you have to load up the whole Python runtime before it can run anything. Okay. Go is much better at this where like Go just instantly starts.
Swyx [00:02:45]: You have to be under 500 milliseconds to start up?
Ben [00:02:48]: Yeah, effectively. I mean, I mean, you know, perception of human things being immediate is, you know, something like a hundred milliseconds. So anything like that is, is yeah, good enough.
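As a rough way to see that budget for yourself, you can time a CLI's cold start; this is a quick sketch (assuming `python3` and `ls` are on your PATH), not a rigorous benchmark:

```python
import subprocess
import time

def cold_start_ms(cmd: list) -> float:
    """Time how long a command takes to start, run, and exit, in milliseconds."""
    start = time.perf_counter()
    subprocess.run(cmd, capture_output=True, check=True)
    return (time.perf_counter() - start) * 1000

# A Python process that does nothing still pays the interpreter boot cost;
# a small compiled binary like `ls` is usually well under the ~100 ms
# "feels instant" budget mentioned in the conversation.
for cmd in (["python3", "-c", "pass"], ["ls"]):
    print(" ".join(cmd), f"{cold_start_ms(cmd):.1f} ms")
```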
Swyx [00:02:57]: Yeah. Also, I should mention, since we're talking about your side projects, well, one thing is I am maybe one of a few fellow people who have actually written something about CLI design principles because I was in charge of the Netlify CLI back in the day and had many thoughts. One of my fun thoughts, I'll just share it in case you have thoughts, is I think CLIs are effectively starting points for scripts that are then run. And the moment one of the script's preconditions are not fulfilled, typically they end. So the CLI developer will just exit the program. And the way that I designed, I really wanted to create the Netlify dev workflow was for it to be kind of a state machine that would resolve itself. If it detected a precondition wasn't fulfilled, it would actually delegate to a subprogram that would then fulfill that precondition, asking for more info or waiting until a condition is fulfilled. Then it would go back to the original flow and continue that. I don't know if that was ever tried or is there a more formal definition of it? Because I just came up with it randomly. But it felt like the beginnings of AI in the sense that when you run a CLI command, you have an intent to do something and you may not have given the CLI all the things that it needs to do, to execute that intent. So that was my two cents.
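A minimal sketch of the pattern Swyx describes, with hypothetical preconditions and resolvers rather than the real Netlify CLI: each failed check delegates to a sub-step that fulfills it, then the main flow resumes instead of exiting.

```python
import os

# Each precondition is a (check, resolve) pair: if the check fails, the CLI
# delegates to the resolver (which might prompt the user or wait), then
# returns to the original flow instead of exiting.
def logged_in() -> bool:
    return "FAKE_AUTH_TOKEN" in os.environ

def log_in() -> None:
    token = input("Paste your auth token: ")
    os.environ["FAKE_AUTH_TOKEN"] = token

def config_exists() -> bool:
    return os.path.exists("app.toml")

def create_config() -> None:
    with open("app.toml", "w") as f:
        f.write("# generated by the dev command\n")

PRECONDITIONS = [(logged_in, log_in), (config_exists, create_config)]

def dev_command() -> None:
    for check, resolve in PRECONDITIONS:
        if not check():
            resolve()  # fulfill the precondition instead of bailing out
    print("all preconditions met, starting dev server...")

if __name__ == "__main__":
    dev_command()
```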
Ben [00:04:08]: Yeah, that reminds me of a thing we sort of thought about when writing the CLI guidelines, where CLIs were designed in a world where the CLI was really a programming environment and it's primarily designed for machines to use all of these commands and scripts. Whereas over time, the CLI has evolved to humans. It was back in a world where the primary way of using computers was writing shell scripts effectively. We've transitioned to a world where actually humans are using CLI programs much more than they used to. And the current sort of best practices about how Unix was designed, there's lots of design documents about Unix from the 70s and 80s, where they say things like, command line commands should not output anything on success. It should be completely silent, which makes sense if you're using it in a shell script. But if a user is using that, it just looks like it's broken. If you type copy and it just doesn't say anything, you assume that it didn't work as a new user. I think what's really interesting about the CLI is that it's actually a really good, to your point, it's a really good user interface where it can be like a conversation, where it feels like you're, instead of just like you telling the computer to do this thing and either silently succeeding or saying, no, you did, failed, it can guide you in the right direction and tell you what your intent might be, and that kind of thing in a way that's actually, it's almost more natural to a CLI than it is in a graphical user interface because it feels like this back and forth with the computer, almost funnily like a language model. So I think there's some interesting intersection of CLIs and language models actually being very sort of closely related and a good fit for each other.
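One common way to reconcile those two audiences, in the spirit Ben describes, is to detect whether a human is on the other end and only print feedback when they are. A small illustrative sketch:

```python
import shutil
import sys

def copy(src: str, dst: str) -> None:
    shutil.copy(src, dst)
    # Talk to humans, stay quiet for scripts: only print the confirmation
    # when stdout is an interactive terminal, so classic silence-on-success
    # is preserved in pipelines but new users still get feedback.
    if sys.stdout.isatty():
        print(f"copied {src} -> {dst}")

if __name__ == "__main__":
    copy(sys.argv[1], sys.argv[2])
```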
Swyx [00:05:59]: Yeah, I'll say one of the surprises from last year, I worked on a coding agent, but I think the most successful coding agent of my cohort was Open Interpreter, which was a CLI implementation. And I have chronically, even as a CLI person, I have chronically underestimated the CLI as a useful interface. You also developed arXiv Vanity, which you recently retired after a glorious seven years.
Ben [00:06:22]: Something like that.
Swyx [00:06:23]: Which is nice, I guess, HTML PDFs.
Ben [00:06:27]: Yeah, that was actually the start of where Replicate came from. Okay, we can tell that story. So when I quit Docker, I got really interested in science infrastructure, just as like a problem area, because it is like science has created so much progress in the world. The fact that we're, you know, can talk to each other on a podcast and we use computers and the fact that we're alive is probably thanks to medical research, you know. But science is just like completely archaic and broken and it's like 19th century processes that just happen to be copied to the internet rather than take into account that, you know, we can transfer information at the speed of light now. And the whole way science is funded and all this kind of thing is all kind of very broken. And there's just so much potential for making science work better. And I realized that I wasn't a scientist and I didn't really have any time to go and get a PhD and become a researcher, but I'm a tool builder and I could make existing scientists better at their job. And if I could make like a bunch of scientists a little bit better at their job, maybe that's the kind of equivalent of being a researcher. So one particular thing I dialed in on is just how science is disseminated in that all of these PDFs, quite often behind paywalls, you know, on the internet.
Swyx [00:07:34]: And that's a whole thing because it's funded by national grants, government grants, then they're put behind paywalls. Yeah, exactly.
Ben [00:07:40]: That's like a whole, yeah, I could talk for hours about that. But the particular thing we got dialed in on was, interestingly, these PDFs are also, there's a bunch of open science that happens as well. So math, physics, computer science, machine learning, notably, is all published on arXiv, which is actually a surprisingly old institution.
Swyx [00:08:00]: Some random Cornell.
Ben [00:08:01]: Yeah, it was just like somebody in Cornell who started a mailing list in the 80s. And then when the web was invented, they built a web interface around it. Like it's super old.
Swyx [00:08:11]: And it's like kind of like a user group thing, right? That's why there are all these like numbers and stuff.
Ben [00:08:15]: Yeah, exactly. Like it's a bit like something, yeah. That's where all basically all of math, physics and computer science happens. But it's still PDFs published to this thing. Yeah, which is just so infuriating. The web was invented at CERN, a physics institution, to share academic writing. Like there are figure tags, there are like author tags, there are heading tags, there are cite tags. You know, hyperlinks are effectively citations because you want to link to another academic paper. But instead, you have to like copy and paste these things and try and get around paywalls. Like it's absurd, you know. And now we have like social media and things, but still like academic papers as PDFs, you know. This is not what the web was for. So anyway, I got really frustrated with that. And I went on vacation with my old friend Andreas. So we were, we used to work together in London on a startup, at somebody else's startup. And we were just on vacation in Greece for fun. And he was like trying to read a machine learning paper on his phone, you know, like we had to like zoom in and like scroll line by line on the PDF. And he was like, this is f*****g stupid. So I was like, I know, like this is something we discovered our mutual hatred for this, you know. And we spent our vacation sitting by the pool, like making LaTeX to HTML converters, making the first version of arXiv Vanity. Anyway, that then became a whole thing. And the story is, we shut it down recently because we caught the eye of arXiv. They were like, oh, this is great. We just haven't had the time to work on this. And what's tragic about arXiv is it's like this project of Cornell that's like, they can barely scrounge together enough money to survive. I think it might be better funded now than it was when we were, we were collaborating with them. And compared to these like scientific journals, it's just that this is actually where the work happens. But they just have a fraction of the money that like these big scientific journals have, which is just so tragic. But anyway, they were like, yeah, this is great. We can't afford to like do it, but do you want to like as a volunteer integrate arXiv Vanity into arXiv?
Swyx [00:10:05]: Oh, you did the work.
Ben [00:10:06]: We didn't do the work. We started doing the work. We did some. I think we worked on this for like a few months to actually get it integrated into arXiv. And then we got like distracted by Replicate. So a guy called Dan picked up the work and made it happen. Like somebody who works on one of the, the piece of the libraries that powers arXiv Vanity. Okay.
Swyx [00:10:26]: And the relationship with arXiv Sanity?
Ben [00:10:28]: None.
Swyx [00:10:30]: Did you predate them? I actually don't know the lineage.
Ben [00:10:32]: We were after. We were both users of arXiv Sanity, which is like a sort of arXiv...
Ben [00:10:37]: Which is Andrej's RecSys on top of arXiv.
Ben [00:10:40]: Yeah. Yeah. And we were both users of that. And I think we were trying to come up with a working name for arXiv and Andreas just like cracked a joke of like, oh, let's call it arXiv Vanity. Let's make the papers look nice. Yeah. Yeah. And that was the working name and it just stuck.
Swyx [00:10:52]: Got it.
Ben [00:10:53]: Got it.
Alessio [00:10:54]: Yeah. And then from there, tell us more about why you got distracted, right? So Replicate, maybe it feels like an overnight success to a lot of people, but you've been building this since 2019. Yeah.
Alessio [00:11:04]: So what prompted the start?
Ben [00:11:05]: And we've been collaborating for even longer.
Ben [00:11:07]: So we created arXiv Vanity in 2017. So in some sense, we've been doing this almost like six, seven years now, a classic seven year.
Swyx [00:11:16]: Overnight success.
Ben [00:11:17]: Yeah. Yes. We did arXiv Vanity and then worked on a bunch of like surrounding projects. I was still like really interested in science publishing at that point. And I'm trying to remember, because I tell a lot of like the condensed story to people because I can't really tell like a seven year history. So I'm trying to figure out like the right. Oh, we got room. The right length.
Swyx [00:11:35]: We want to nail the definitive Replicate story here.
Ben [00:11:37]: One thing that's really interesting about these machine learning papers is that these machine learning papers are published on arXiv and a lot of them are actual fundamental research. So like should be like prose describing a theory. But a lot of them are just running pieces of software that like a machine learning researcher made that did something, you know, it was like an image classification model or something. And they managed to make an image classification model that was better than the existing state of the art. And they've made an actual running piece of software that does image segmentation. And then what they had to do is they then had to take that piece of software and write it up as prose and math in a PDF. And what's frustrating about that is like if you want to. So this was like Andreas. Andreas was a machine learning engineer at Spotify. And some of his job was like he did pure research as well. Like he did a PhD and he was doing a lot of stuff internally. But part of his job was also being an engineer and taking some of these existing things that people have made and published and trying to apply them to actual problems at Spotify. And he was like, you know, you get given a paper which like describes roughly how the model works. It's probably missing lots of crucial information. There's sometimes code on GitHub. More and more there's code on GitHub. But back then it was kind of relatively rare. But it's quite often just like scrappy research code and didn't actually run. And, you know, there was maybe the weights that were on Google Drive, but they accidentally deleted the weights off Google Drive, you know, and it was like really hard to like take this stuff and actually use it for real things. We just started talking together about like his problems at Spotify and I connected this back to my work at Docker as well. I was like, oh, this is what we created containers for. You know, we solved this problem for normal software by putting the thing inside a container so you could ship it around and it kept on running. So we were sort of hypothesizing about like, hmm, what if we put machine learning models inside containers so they could actually be shipped around and they could be defined in like some production ready formats and other researchers could run them to generate baselines and people who wanted to actually apply them to real problems in the world could just pick up the container and run it, you know. Normally in this part of the story I skip forward to be like, and then we created Cog, this container stuff for machine learning models, and we created Replicate, the place for people to publish these machine learning models. But there's actually like two or three years between that. The thing we then got dialed into was Andreas was like, what if there was a CI system for machine learning? It's like one of the things he really struggled with as a researcher is generating baselines. So when like he's writing a paper, he needs to like get like five other models that are existing work and get them running.
Swyx [00:14:21]: On the same evals.
Ben [00:14:22]: Exactly, on the same evals so you can compare apples to apples because you can't trust the numbers in the paper.
Swyx [00:14:26]: So you can be Google and just publish them anyway.
Ben [00:14:31]: So I think this was coming from the thinking of like there should be containers for machine learning, but why are people going to use that? Okay, maybe we can create a supply of containers by like creating this useful tool for researchers. And the useful tool was like, let's get researchers to package up their models and push them to the central place where we run a standard set of benchmarks across the models so that you can trust those results and you can compare these models apples to apples and for like a researcher for Andreas, like doing a new piece of research, he could trust those numbers and he could like pull down those models, confirm it on his machine, use the standard benchmark to then measure his model and you know, all this kind of stuff. And so we started building that. That's what we applied to YC with, got into YC and we started sort of building a prototype of this. And then this is like where it all starts to fall apart. We were like, okay, that sounds great. And we talked to a bunch of researchers and they really wanted that and that sounds brilliant. That's a great way to create a supply of like models on this research platform. But how the hell is this a business, you know, like how are we even going to make any money out of this? And we're like, oh s**t, that's like the, that's the real unknown here of like what the business is. So we thought it would be a really good idea to like, okay, before we get too deep into this, let's try and like reduce the risk of this turning into a business. So let's try and like research what the business could be for this research tool effectively. So we went and talked to a bunch of companies trying to sell them something which didn't exist. So we're like, hey, do you want a way to share research inside your company so that other researchers or say like the product manager can test out the machine learning model? They're like, maybe. And we were like, do you want like a deployment platform for deploying models? Like, do you want like a central place for versioning models? Like we're trying to think of like lots of different like products we could sell that were like related to this thing. And terrible idea. Like we're not sales people and like people don't want to buy something that doesn't exist. I think some people can pull this off, but we were just like, you know, a bunch of product people, products and engineer people, and we just like couldn't pull this off. So we then got halfway through our YC batch. We hadn't built a product. We had no users. We had no idea what our business was going to be because we couldn't get anybody to like buy something which didn't exist. And actually there was quite a way through our, I think it was like two thirds the way through our YC batch or something. And we're like, okay, well we're kind of screwed now because we don't have anything to show at demo day. And then we then like tried to figure out, okay, what can we build in like two weeks that'll be something. So we like desperately tried to, I can't remember what we've tried to build at that point. And then two weeks before demo day, I just remember it was all, we were going down to Mountain View every week for dinners and we got called on to like an all hands Zoom call, which was super weird. We're like, what's going on? And they were like, don't come to dinner tomorrow. And we realized, we kind of looked at the news and we were like, oh, there's a pandemic going on. We were like so deep in our startup. 
We were just like completely oblivious to what was going on around us.
Swyx [00:17:20]: Was this Jan or Feb 2020?
Ben [00:17:22]: This was March 2020.
Swyx [00:17:25]: Yeah. Because I remember Silicon Valley at the time was early to COVID. Like they started locking down a lot faster than the rest of the US.
Ben [00:17:32]: Yeah, exactly. And I remember, yeah, soon after that, like there was the San Francisco lockdowns and then like the YC batch just like stopped. There wasn't demo day and it was in a sense a blessing for us because we just kind of
Swyx [00:17:43]: In the normal course of events, you're actually allowed to defer to a future demo day. Yeah.
Ben [00:17:51]: So we didn't even take any defer because it just kind of didn't happen.
Swyx [00:17:55]: So was YC helpful?
Ben [00:17:57]: Yes. We completely screwed up the batch and that was our fault. I think the thing that YC has become incredibly valuable for us has been after YC. I think there was a reason why we couldn't, didn't need to do YC to start with because we were quite experienced. We had done some startups before. We were kind of well connected with VCs, you know, it was relatively easy to raise money because we were like a known quantity. You know, if you go to a VC and be like, Hey, I made this piece of-
Swyx [00:18:24]: It's Docker Compose for AI.
Ben [00:18:26]: Exactly. Yeah. And like, you know, people can pattern match like that and they can have some trust, you know what you're doing. Whereas it's much harder for people straight out of college and that's where like YC sweet spot is like helping people straight out of college who are super promising, like figure out how to do that.
Swyx [00:18:40]: No credentials.
Ben [00:18:41]: Yeah, exactly. We don't need that. But the thing that's been incredibly useful for us since YC has been, this was actually, I think, so Docker was a YC company and Solomon, the founder of Docker, I think told me this. He was like, a lot of people underestimate the value of YC after you finish the batch. And his biggest regret was like not staying in touch with YC. I might be misattributing this, but I think it was him. And so we made a point of that. And we just stayed in touch with our batch partner, Jared at YC, who has been fantastic.
Ben [00:19:10]: Jared Friedman. All of like the team at YC, there was the growth team at YC when they were still there and they've been super helpful. And two things have been super helpful about that is like raising money, like they just know exactly how to raise money. And they've been super helpful during that process in all of our rounds, like we've done three rounds since we did YC and they've been super helpful during the whole process. And also just like reaching a ton of customers. So like the magic of YC is that you have all of, like there's thousands of YC companies, I think, on the order of thousands, I think. And they're all of your first customers. And they're like super helpful, super receptive, really want to like try out new things. You have like a warm intro to every one of them basically. And there's this mailing list where you can post about updates to your products, which is like really receptive. And that's just been fantastic for us. Like we've just like got so many of our users and customers through YC. Yeah.
Swyx [00:20:00]: Well, so the classic criticism or the sort of, you know, pushback is people don't buy you because you are both from YC. But at least they'll open the email. Right. Like that's the... Okay.
Ben [00:20:13]: Yeah. Yeah. Yeah.
Swyx [00:20:16]: So that's been a really, really positive experience for us. And sorry, I interrupted with the YC question. Like you were, you make it, you just made it out of the YC, survived the pandemic.
Ben [00:20:22]: I'll try and condense this a little bit. Then we started building tools for COVID weirdly. We were like, okay, we don't have a startup. We haven't figured out anything. What's the most useful thing we could be doing right now?
Swyx [00:20:32]: Save lives.
Ben [00:20:33]: So yeah. Let's try and save lives. I think we failed at that as well. We had a bunch of products that didn't really go anywhere. We kind of worked on, yeah, a bunch of stuff like contact tracing, which turned out to not really be a useful thing. Andreas worked on like a DoorDash for people delivering food to people who are vulnerable. What else did we do? The meta problem of like helping people direct their efforts to what was most useful and a few other things like that. It didn't really go anywhere. So we're like, okay, this is not really working either. We were considering actually just like doing like work for COVID. We have this decision document early on in our company, which is like, should we become a like government app contracting shop? We decided no.
Swyx [00:21:11]: Because you also did work for the gov.uk. Yeah, exactly.
Ben [00:21:14]: We had experience like doing some like-
Swyx [00:21:17]: And the Guardian and all that.
Ben [00:21:18]: Yeah. For like government stuff. And we were just like really good at building stuff. Like we were just like product people. Like I was like the front end product side and Andreas was the back end side. So we were just like a product. And we were working with a designer at the time, a guy called Mark, who did our early designs for Replicate. And we were like, hey, what if we just team up and like become and build stuff? And yeah, we gave up on that in the end for, I can't remember the details. So we went back to machine learning. And then we were like, well, we're not really sure if this is going to work. And one of my most painful experiences from previous startups is shutting them down. Like when you realize it's not really working and having to shut it down, it's like a ton of work and it's people hate you and it's just sort of, you know. So we were like, how can we make something we don't have to shut down? And even better, how can we make something that won't page us in the middle of the night? So we made an open source project. We made a thing which was an open source Weights and Biases, because we had this theory that like people want open source tools. There should be like an open source, like version control, experiment tracking like thing. And it was intuitive to us and we're like, oh, we're software developers and we like command line tools. Like everyone loves command line tools and open source stuff, but machine learning researchers just really didn't care. Like they just wanted to click on buttons. They didn't mind that it was a cloud service. It was all very visual as well, that you need lots of graphs and charts and stuff like this. So it wasn't right. Like it was right. We actually were building something that Andreas made at Spotify for just like saving experiments to cloud storage automatically, but other people didn't really want this. So we kind of gave up on that. And then that was actually originally called Replicate and we renamed that out of the way. So it's now called Keepsake and I think some people still use it. Then we sort of came back, we looped back to our original idea. So we were like, oh, maybe there was a thing in that thing we were originally sort of thinking about of like researchers sharing their work and containers for machine learning models. So we just built that. And at that point we were kind of running out of the YC money. So we were like, okay, this like feels good though. Let's like give this a shot. So that was the point we raised a seed round. We raised seed round. Pre-launch. We raised pre-launch and pre-team. It was an idea basically. We had a little prototype. It was just an idea and a team. But we were like, okay, like, you know, bootstrapping this thing is getting hard. So let's actually raise some money. Then we made Cog and Replicate. It initially didn't have APIs, interestingly. It was just the bit that I was talking about before of helping researchers share their work. So it was a way for researchers to put their work on a webpage such that other people could try it out and so that you could download the Docker container. We cut the benchmarks thing of it because we thought that was just like too complicated. But it had a Docker container that like, you know, Andreas in a past life could download and run with his benchmark and you could compare all these models apples to apples. So that was like the theory behind it. That kind of started to work. 
It was like still when like, you know, it was a long time pre-AI hype and there was lots of interesting stuff going on, but it was very much in like the classic deep learning era. So sort of image segmentation models and sentiment analysis and all these kinds of things, you know, that people were using deep learning models for. And we were very much building for research because all of this stuff was happening in research institutions, you know, the sort of people who'd be publishing to arXiv. So we were creating accompanying material for their models, basically, you know, they wanted a demo for their models and we were creating accompanying material for it. What was funny about that is they were like not very good users. Like they were doing great work obviously, but the way that research worked is that they just made like one thing every six months and they just fired and forgot it. Like they published this piece of paper and like, done, I've published it. So they like output it to Replicate and then they just stopped using Replicate. You know, they were like once every six monthly users and that wasn't great for us, but we stumbled across this early community. This was early 2021 when OpenAI created CLIP and people started smushing CLIP and GANs together to produce image generation models. And this started with, you know, it was just a bunch of like tinkerers on Discord, basically. There was an early model called Big Sleep by Advadnoun. And then there was VQGAN-CLIP by Rivers Have Wings, which was like a bit more popular. And it was all just people like tinkering on stuff in Colabs and it was very dynamic and it was people just making copies of Colabs and playing around with things and forking them. And to me, I saw this and I was like, oh, this feels like open source software, like so much more than the research world where like people are publishing these papers.
Swyx [00:25:48]: You don't know their real names and it's just like a Discord.
Ben [00:25:51]: Yeah, exactly. But crucially, it was like people were tinkering and forking and things were moving really fast and it just felt like this creative, dynamic, collaborative community in a way that research wasn't really, like it was still stuck in this kind of six month publication cycle. So we just kind of latched onto that and started building for this community. And you know, a lot of those early models were published on Replicate. I think the first one that was really primarily on Replicate was one called Pixray, which was sort of mid 2021 and it had a really cool like pixel art output, but it also just like produced general, you know, the sort of, they weren't like crisp in images, but they were quite aesthetically pleasing, like some of these early image generation models. And you know, that was like published primarily on Replicate and then a few other models around that were like published on Replicate. And that's where we really started to find our early community and like where we really found like, oh, we've actually built a thing that people want and they were great users as well. And people really want to try out these models. Lots of people were like running the models on Replicate. We still didn't have APIs though, interestingly, and this is like another like really complicated part of the story. We had no idea what a business model was still at this point. I don't think people could even pay for it. You know, it was just like these web forms where people could run the model.
Swyx [00:27:06]: Just for historical interest, which Discords were they and how did you find them? Was this the LAION Discord? Yeah, LAION. This is Eleuther.
Ben [00:27:12]: Eleuther, yeah. It was the Eleuther one. These two, right? There was a channel where VQGAN-CLIP, this was early 2021, where VQGAN-CLIP was set up as a Discord bot. I just remember being completely just like captivated by this thing. I was just like playing around with it all afternoon and like the sort of thing. In Discord. Oh s**t, it's 2am. You know, yeah.
Swyx [00:27:33]: This is the beginnings of Midjourney.
Ben [00:27:34]: Yeah, exactly. And Stability. It was the start of Midjourney. And you know, it's where that kind of user interface came from. Like what's beautiful about the user interface is like you could see what other people are doing. And you could riff off other people's ideas. And it was just so much fun to just like play around with this in like a channel full of a hundred people. And yeah, that just like completely captivated me and I'm like, okay, this is something, you know. So like we should get these things on Replicate. Yeah, that's where that all came from.
Swyx [00:28:00]: And then you moved on to, so was it APIs next or was it Stable Diffusion next?
Ben [00:28:04]: It was APIs next. And the APIs happened because one of our users, our web form had like an internal API for making the web form work, like with an API that was called from JavaScript. And somebody like reverse engineered that to start generating images with a script. You know, they did like, you know, Web Inspector, copy as cURL, like figured out what the API request was. And it wasn't secured or anything.
Swyx [00:28:28]: Of course not.
Ben [00:28:29]: They started generating a bunch of images and like we got tons of traffic and like what's going on? And I think like a sort of usual reaction to that would be like, hey, you're abusing our API, and to shut them down. And instead we're like, oh, this is interesting. Like people want to run these models. So we documented the API in a Notion document, like our internal API in a Notion document and like messaged this person being like, hey, you seem to have found our API. Here's the documentation. That'll be like a thousand bucks a month, please, with a Stripe form, like we just clicked some buttons to make. And they were like, sure, that sounds great. So that was our first customer.
Swyx [00:29:05]: A thousand bucks a month.
Ben [00:29:07]: It was a surprising amount of money. That's not casual. It was on the order of a thousand bucks a month.
Swyx [00:29:11]: So was it a business?
Ben [00:29:13]: It was the creator of PixRay. Like it was, he generated NFT art. And so he like made a bunch of art with these models and was, you know, selling these NFTs effectively. And I think lots of people in his community were doing similar things. And like he then referred us to other people who were also generating NFTs and he joined us with models. We started our API business. Yeah. Then we like made an official API and actually like added some billing to it. So it wasn't just like a fixed fee.
Swyx [00:29:40]: And now people think of you as the hosted models API business. Yeah, exactly.
Ben [00:29:44]: But that just turned out to be our business, you know, but what ended up being beautiful about this is it was really fulfilling. Like the original goal of what we wanted to do is that we wanted to make this research that people were making accessible to like other people and for it to be used in the real world. And this was like the just like ultimately the right way to do it because all of these people making these generative models could publish them to replicate and they wanted a place to publish it. And software engineers, you know, like myself, like I'm not a machine learning expert, but I want to use this stuff, could just run these models with a single line of code. And we thought, oh, maybe the Docker image is enough, but it's actually super hard to get the Docker image running on a GPU and stuff. So it really needed to be the hosted API for this to work and to make it accessible to software engineers. And we just like wound our way to this. Yeah.
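As a concrete picture of what that reverse-engineered workflow looked like from the user's side, here is a hedged sketch: a script firing plain HTTP requests at a prediction endpoint and polling for the result. The URL and field names are placeholders, not Replicate's actual internal API of the time.

```python
import time
import requests

API = "https://example.com/api/predictions"  # placeholder endpoint

# Kick off a prediction the way the web form's JavaScript would have,
# then poll until the model finishes and hand back the output URLs.
def generate(prompt: str) -> list:
    resp = requests.post(API, json={"model": "pixray", "prompt": prompt})
    resp.raise_for_status()
    prediction_id = resp.json()["id"]

    while True:
        status = requests.get(f"{API}/{prediction_id}").json()
        if status["state"] in ("succeeded", "failed"):
            break
        time.sleep(2)  # be polite: poll, don't hammer
    return status.get("output", [])

print(generate("a pixel art castle at sunset"))
```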
Swyx [00:30:30]: Two years to the first paying customer. Yeah, exactly.
Alessio [00:30:33]: Did you ever think about becoming Midjourney during that time? You have like so much interest in image generation.
Swyx [00:30:38]: I mean, you're doing fine for the record, but, you know, it was right there, you were playing with it.
Ben [00:30:46]: I don't think it was our expertise. Like I think our expertise was DevTools rather than, like, Midjourney is almost like a consumer product, you know? Yeah. So I don't think it was our expertise. It certainly occurred to us. I think at the time we were thinking about like, oh, maybe we could hire some of these people in this community and make great models and stuff like this. But we ended up more being at the tooling. Like I think like before I was saying, like I'm not really a researcher, but I'm more like the tool builder, the behind the scenes. And I think both me and Andreas are like that.
Swyx [00:31:09]: I think this is an illustration of the tool builder philosophy. Something you latch on to in DevTools is when you see people behaving weird, it's not their fault, it's yours. And you want to pave the cow paths is what they say, right? Like the unofficial paths that people are making, like make it official and make it easy for them and then maybe charge a bit of money.
Alessio [00:31:25]: And now fast forward a couple of years, you have 2 million developers using Replicate. Maybe more. That was the last public number that I found.
Ben [00:31:33]: It's 2 million users. Not all those people are developers, but a lot of them are developers, yeah.
Alessio [00:31:38]: And then 30,000 paying customers was the number. Latent Space runs on Replicate. So we're a small podcaster and we host Whisper diarization on Replicate. And we're paying, so Latent Space is in the 30,000. You raised a $40 million Series B. I would say that maybe the Stable Diffusion time, August '22, was like really when the company started to break out. Tell us a bit about that and the community that came out of it, and I know now you're expanding beyond just image generation.
Ben [00:32:06]: Yeah, like I think we kind of set ourselves, like we saw there was this really interesting image, generative image world going on. So we kind of, you know, like we're building the tools for that community already, really. And we knew Stable Diffusion was coming out. We knew it was a really exciting thing, you know, it was the best generative image model so far. I think the thing we underestimated was just like what an inflection point it would be, where it was, I think Simon Willison put it this way, where he said something along the lines of it was a model that was open source and tinkerable and like, you know, it was just good enough and open source and tinkerable such that it just kind of took off in a way that none of the models had before. And like what was really neat about Stable Diffusion is it was open source so you could like, compared to like DALL-E, for example, which was like sort of equivalent quality. And like the first week we saw like people making animation models out of it. We saw people make like game texture models that like use circular convolutions to make repeatable textures. We saw, you know, a few weeks later, like people were fine tuning it so you could make, put your face in these models and all of these other-
Swyx [00:33:10]: Textual inversion.
Ben [00:33:11]: Yep. Yeah, exactly. That happened a bit before that. And all of this sort of innovation was happening all of a sudden. And people were publishing on Replicate because you could just like publish arbitrary models on Replicate. So we had this sort of supply of like interesting stuff being built. But because it was a sufficiently good model, there was also just like a ton of people building with it. They were like, oh, we can build products with this thing. And this was like about the time where people were starting to get really interested in AI. So like tons of product builders wanted to build stuff with it. And we were just like sitting in there in the middle, it's like the interface layer between like all these people who wanted to build and all these like machine learning experts who were building cool models. And that's like really where it took off. We were just sort of incredible supply, incredible demand, and we were just like in the middle. And then, yeah, since then, we've just kind of grown and grown really. And we've been building a lot for like the indie hacker community, these like individual tinkerers, but also startups and a lot of large companies as well who are sort of exploring and building AI things. Then kind of the same thing happened like middle of last year with language models and Llama 2, where the same kind of Stable Diffusion effect happened with Llama. And Llama 2 was like our biggest week of growth ever because like tons of people wanted to tinker with it and run it. And you know, since then we've just been seeing a ton of growth in language models as well as image models. Yeah. We're just kind of riding a lot of the interest that's going on in AI and all the people building in AI, you know. Yeah.
Swyx [00:34:29]: Kudos. Right place, right time. But also, you know, took a while to position for the right place before the wave came. I'm curious if like you have any insights on these different markets. So Pieter Levels, notably very loud person, very picky about his tools. I wasn't sure actually if he used you. He does. So you mentioned him on your Series B blog post, and Danny Postma as well, his competitor, all in that wave. What are their needs versus, you know, the more enterprise or B2B type needs? Did you come to a decision point where you're like, okay, you know, how serious are these indie hackers versus like the actual businesses that are bigger and perhaps better customers because they're less churny?
Ben [00:35:04]: They're surprisingly similar because I think a lot of people right now want to use and build with AI, but they're not AI experts and they're not infrastructure experts either. So they want to be able to use this stuff without having to like figure out all the internals of the models and, you know, like touch PyTorch and whatever. And they also don't want to be like setting up and booting up servers. And that's the same all the way from like indie hackers just getting started because like obviously you just want to get started as quickly as possible, all the way through to like large companies who want to be able to use this stuff, but don't have like all of the experts on staff, you know. Big companies like Google and so on do actually have a lot of experts on staff, but the vast majority of companies don't. And they're all software engineers who want to be able to use this AI stuff, but they just don't know how to use it. And it's like, you really need to be an expert and it takes a long time to like learn the skills to be able to use that. So they're surprisingly similar in that sense. I think it's also kind of unfair to the indie community, like they're surprisingly not churny or spiky, like they're building real established businesses, which is like, kudos to them, like building these really like large, sustainable businesses, often just as solo developers. And it's kind of remarkable how they can do that actually, and it's a credit to a lot of their like product skills. And you know, we're just like there to help them, being like their machine learning team effectively to help them use all of this stuff. A lot of these indie hackers are some of our largest customers, like alongside some of our biggest customers that you would think would be spending a lot more money than them, but yeah.
Swyx [00:36:35]: And we should name some of these. So you have them on your landing page: you have BuzzFeed, you have Unsplash, Character AI. What do they power? What can you say about their usage?
Ben [00:36:43]: Yeah, totally. It's kind of various things.
Swyx [00:36:46]: Well, I mean, I'm naming them because they're on your landing page. So you have logo rights. It's useful for people to, like, I'm not imaginative. I see monkey see monkey do, right? Like if I see someone doing something that I want to do, then I'm like, okay, Replicate's great for that.
Ben [00:37:00]: Yeah, yeah, yeah.
Swyx [00:37:01]: So that's what I think about case studies on company landing pages is that it's just a way of explaining like, yep, this is something that we are good for. Yeah, totally.
Ben [00:37:09]: I mean, it's, these companies are doing things all the way up and down the stack at different levels of sophistication. So like Unsplash, for example, they actually publicly posted this story on Twitter where they're using BLIP to annotate all of the images in their catalog. So you know, they have lots of images in the catalog and they want to create a text description of it so you can search for it. And they're annotating images with, you know, an off the shelf, open source model, you know, we have this big library of open source models that you can run. And you know, we've got lots of people running these open source models off the shelf. And then most of our larger customers are doing more sophisticated stuff. So they're like fine tuning the models, they're running completely custom models on us. A lot of these larger companies are like, using us for a lot of their, you know, inference, but it's like a lot of custom models and them like writing the Python themselves because they've got machine learning experts on the team. And they're using us for like, you know, their inference infrastructure effectively. And so it's like lots of different levels of sophistication where like some people are using these off the shelf models. Some people are fine tuning models. So like Levels, Pieter Levels is a great example where a lot of his products are based off like fine tuning image models, for example. And then we've also got like larger customers who are just like using us as infrastructure effectively. So yeah, it's like all things up and down, up and down the stack.
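The off-the-shelf usage Ben describes is roughly a one-liner with Replicate's Python client today. A hedged sketch (the model identifier, version hash, and input field name are placeholders for whichever captioning model you pick, and `REPLICATE_API_TOKEN` must be set in your environment):

```python
import replicate  # pip install replicate; reads REPLICATE_API_TOKEN from the env

# Caption an image with an off-the-shelf model hosted on Replicate, in the
# spirit of the Unsplash/BLIP example. The model slug, version hash, and
# input field name below are placeholders: adapt them to the model you use.
caption = replicate.run(
    "some-org/image-captioner:0123456789abcdef",  # placeholder identifier
    input={"image": "https://example.com/photo.jpg"},
)
print(caption)
```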
Alessio [00:38:29]: Let's talk a bit about Cog and the technical layer. So there are a lot of GPU clouds. I think people have different pricing points. And I think everybody tries to offer a different developer experience on top of it, which then lets you charge a premium. Why did you want to create Cog?
Ben [00:38:46]: You worked at Docker.
Alessio [00:38:47]: What were some of the issues with traditional container runtimes? And maybe yeah, what were you surprised with as you built it?
Ben [00:38:54]: Cog came right from the start, actually, when we were thinking about this, you know, evaluation, the sort of benchmarking system for machine learning researchers, where we wanted researchers to publish their models in a standard format that was guaranteed to keep on running, that you could replicate the results of, like that's where the name came from. And we realized that we needed something like Docker to make that work, you know. And I think it was just like natural from my point of view of like, obviously that should be open source, that we should try and create some kind of open standard here that people can share. Because if more people use this format, then that's great for everyone involved. I think the magic of Docker is not really in the software. It's just like the standard that people have agreed on, like, here are a bunch of keys for a JSON document, basically. And you know, that was the magic of like the metaphor of real containerization as well. It's not the containers that are interesting. It's just like the size and shape of the damn box, you know. And it's a similar thing here, where really we just wanted to get people to agree on like, this is what a machine learning model is. This is how a prediction works. This is what the inputs are, this is what the outputs are. So Cog is really just a Docker container that attaches to a CUDA device, if it needs a GPU, that has an OpenAPI specification as a label on the Docker image. And the OpenAPI specification defines the interface for the machine learning model, like the inputs and outputs effectively, or the params in machine learning terminology. And you know, we just wanted to get people to kind of agree on this thing. And it's like general purpose enough, like we weren't saying like, some of the existing things were like at the graph level, but we really wanted something general purpose enough that you could just put anything inside this and it was like future compatible and it was just like arbitrary software. And you know, it'd be future compatible with like future inference servers and future machine learning model formats and all this kind of stuff. So that was the intent behind it. It just came naturally that we wanted to define this format. And that's been really working for us. Like a bunch of people have been using Cog outside of Replicate, which is kind of our original intention, like this should be how machine learning is packaged and how people should use it. Like it's common to use Cog in situations where like maybe they can't use the SaaS service because I don't know, they're in a big company and they're not allowed to use a SaaS service, but they can use Cog internally still. And like they can download the models from Replicate and run them internally in their org, which we've been seeing happen. And that works really well. People who want to build like custom inference pipelines, but don't want to like reinvent the world, they can use Cog off the shelf and use it as like a component in their inference pipelines. We've been seeing tons of usage like that and it's just been kind of happening organically. We haven't really been trying, you know, but it's like there if people want it and we've been seeing people use it. So that's great. Yeah.
So a lot of it is just sort of philosophical, of, like, this is how it should work from my experience at Docker, you know. And there's just a lot of value from the core being open, I think, in that other people can share it and it's an integration point. So, you know, if Replicate, for example, wanted to work with a testing system, like a CI system or whatever, we can just interface at the Cog level: that system just needs to produce Cog models, and then you can test your models on that CI system before they get deployed to Replicate. It's just a format that we can get everyone to agree on, you know.
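For readers who want to see what that agreed-on shape looks like in practice, here is a minimal sketch of a Cog predictor: a typed Python predict function that Cog turns into an OpenAPI schema and attaches to the built Docker image. The model here is a stand-in for illustration; a real predictor would load actual weights in setup, and the exact current API is in the Cog docs.

```python
# predict.py — a minimal Cog predictor sketch (hypothetical model, for illustration).
# Cog reads the type annotations on predict() and generates the OpenAPI schema
# that describes the model's inputs and outputs.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Load weights once when the container starts, not on every request.
        # In a real model this might load a transformers or diffusers pipeline.
        self.model = lambda prompt: prompt.upper()  # stand-in for a real model

    def predict(
        self,
        prompt: str = Input(description="Text to run through the model"),
        temperature: float = Input(default=0.7, ge=0.0, le=2.0),
    ) -> str:
        # The JSON-level interface Ben describes: strings and numbers in,
        # a string out, with no tensors exposed to the caller.
        return self.model(prompt)
```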
Alessio [00:41:55]: What do you think, I guess, Docker got wrong? Because if I look at a Docker Compose file and a Cog definition, first of all, Cog is kind of like the Dockerfile plus the Compose, versus in Docker Compose you're just exposing the services. And also Docker Compose is very ports-driven, versus you have the actual, you know, predict, this is what you have to run.
Ben [00:42:16]: Yeah.
Alessio [00:42:17]: Any learnings and maybe tips for other people building container based runtimes, like how much should you separate the API services versus the image building or how much you want to build them together?
Ben [00:42:29]: I think it was coming from two sides. We were thinking about the design from the point of view of user needs, what are their problems and what problems can we solve for them, but also what the interface should be for a machine learning model. And it was the combination of those two things that led us to this design. So the thing I talked about before was a little bit about the interface around the machine learning model. We realized that we wanted to be general purpose. We wanted to be at the JSON, human-readable level rather than the tensor level. So it was an OpenAPI specification that wrapped a Docker container, and that's where that design came from. And it's really just a wrapper around Docker, so we were standing on shoulders there, but Docker on its own is too low level; it's just arbitrary software. So we wanted to have an OpenAPI specification that defined the function, effectively, that is the machine learning model, but also how that function is written and how that function is run, which is all defined in code and stuff like that. So it's a bunch of abstraction on top of Docker to make that work, and that's where that design came from. But the core problems we were solving for users were that Docker is really hard to use and productionizing machine learning models is really hard. So on the first part of that, we knew we couldn't use Dockerfiles. Dockerfiles are hard enough for software developers to write. I'm saying this with love as somebody who worked on Docker and worked on Dockerfiles, but they're really hard to use. You need to know a bunch about Linux, basically, because you're running a bunch of CLI commands. You need to know a bunch about Linux and best practices and how apt works and all this kind of stuff. So we're like, OK, we can't get to that level. We need something that machine learning researchers will be able to understand, people who are used to Colab notebooks. And what they understand is: I need this version of Python, I need these Python packages, and somebody told me to apt-get install something. You know? If there was sudo in there, I don't really know what that means. So we tried to create a format that was at that level, and that's what cog.yaml is. We were really trying to imagine what that machine learning researcher is going to understand, you know, and trying to build for them. Then the productionizing machine learning models thing is like, OK, how can we package up all of the complexity of productionizing machine learning models, like picking CUDA versions, hooking it up to GPUs, writing an inference server, defining a schema, doing batching, all of these really gnarly things that everyone does again and again, and just, you know, provide that as a tool. And that's where that side of it came from. So it's combining those user needs with, you know, the sort of world need of a common standard for what a machine learning model is. That's how we thought about the design. I don't know whether that answers the question.
Alessio [00:45:12]: Yeah. So your idea was like, hey, you really want what Docker stands for in terms of standard, but you actually don't want people to do all the work that goes into Docker.
Ben [00:45:22]: It needs to be higher level, you know?
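For a concrete sense of that higher-level, JSON-shaped interface: once a Cog container is built and running, a client talks to it over plain HTTP rather than at the tensor level. The sketch below assumes a container serving locally on port 5000 and a predictor that takes a prompt string; the specific input field is illustrative, since each model's schema comes from its own predict signature.

```python
# call_cog.py — rough sketch of calling a locally running Cog container over HTTP
# (hypothetical model and input field; port assumed to be 5000).
import requests

COG_URL = "http://localhost:5000"

# The container serves the OpenAPI schema describing the model's inputs/outputs.
schema = requests.get(f"{COG_URL}/openapi.json").json()
print(sorted(schema.get("paths", {})))

# Make a prediction with JSON inputs matching the predictor's signature.
resp = requests.post(
    f"{COG_URL}/predictions",
    json={"input": {"prompt": "a studio photo of a rainbow colored corgi"}},
)
resp.raise_for_status()
print(resp.json().get("output"))
```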
Swyx [00:45:25]: So I want to note, for the listener, that you're not the only standard out there. As with any standard, there must be 14 of them. You are surprisingly friendly with Ollama, who are your former colleagues from Docker, who came out with the Modelfile. Mozilla came out with llamafile. And then I don't know if this is in the same category even, but I'm just going to throw it in there: Hugging Face has the transformers and diffusers libraries, which are a way of disseminating models that obviously people use. How would you compare and contrast your approach of Cog versus all of these?
Ben [00:45:53]: It's kind of complementary, actually, which is kind of neat in that a lot of transformers, for example, is lower level than Cog. So it's a Python library effectively, but you still need to like...
Swyx [00:46:04]: Expose them.
Ben [00:46:05]: Yeah. You still need to turn that into an inference server. You still need to install the Python packages and that kind of thing. So lots of Replicate models are transformers models and diffusers models inside Cog, you know? So that's the level it sits at. It's very complementary in some sense. We're kind of working on integration with Hugging Face such that you can deploy models from Hugging Face into Cog models and push them to Replicate and stuff like that. And some of these things like llamafile, and what Ollama are working on, are also very complementary, in that they're doing a lot of the running these things locally on laptops, which is not a thing that works very well with Cog. Cog is really designed around servers and attaching to CUDA devices and NVIDIA GPUs and this kind of thing. So we're actually figuring out ways that those things can be interoperable, because, you know, they should be, and they are quite complementary, in that you should be able to take a model on Replicate and run it on your local machine, and you should be able to take a model on your machine and run it in the cloud.
Swyx [00:47:02]: Is the base layer something like, is it at the GGUF level, which by the way, I need to get a primer on the different formats that have emerged, or is it at the star-dot-file level, which is Modelfile, llamafile, whatever, or is it at the Cog level? I don't know, to be honest.
Ben [00:47:16]: And I think this is something we still have to figure out. There's a lot yet, like exactly where those lines are drawn. I don't know exactly. I think this is something we're trying to figure out ourselves, but I think there's certainly a lot of promise in these systems interoperating. We just want things to work together, you know; we want to try and reduce the number of standards. So the more these things can interoperate and, you know, convert between each other and that kind of stuff, the better.
Swyx [00:47:34]: Cool. Well, there's a foundation for that.
Alessio [00:47:36]: Andreas comes out of Spotify, Erik from Modal also comes out of Spotify. You worked at Docker and the Ollama guys worked at Docker. Did both you and Andreas know that there was somebody else you worked with that had a kind of, not similar idea, but was interested in the same thing? Or did you just say, oh, I know those people, they're doing something very similar?
Ben [00:47:58]: We learned about both early on actually, yeah, because we know them both quite well. And it's funny how we're all seeing the same problems and, you know, trying to fix the same problems that we're all seeing. I think the Ollama one's particularly funny because I joined Docker through my startup. Funnily, actually, the thing which worked for my startup was Compose, but we were actually working on another thing, which was a bit like EC2 for Docker. So we were working on productionizing Docker containers. And the Ollama guys were working on a thing called Kitematic, which was a bit like a desktop app for Docker. And our companies both got bought by Docker at the same time. And you know, Kitematic turned into Docker Desktop, and then our thing turned into Compose. And it's funny how we're both applying the things we saw at Docker to the AI world, but they're building the local environment for it and we're building the cloud for it. And yeah, that's just really pleasing. And I think, you know, we're collaborating closely because there's just so much opportunity for working there. You have a hammer.
Swyx [00:49:06]: Everything's a nail.
Ben [00:49:07]: Yeah, exactly. Exactly. So a lot of where we're coming from with AI is that we're all, on the Replicate team, people who have built developer tools in the past. We've got a team where, like, I worked at Docker, and we've got people who worked at Heroku and GitHub and in the iOS ecosystem and all this kind of thing, the previous generation of developer tools, where we figured out a bunch of stuff. And then AI has come along, and we just don't yet have those tools and abstractions to make it easy to use. So we're trying to take the lessons that we learned from the previous generation of stuff and apply them to this new generation of stuff. And obviously there's a bit of nuance there, because the trick is to take the right lessons and do new stuff where it makes sense. You can't just cut and paste, you know. But that's how we're approaching this: we're trying, as much as possible, to take some of those lessons we learned from, you know, how Heroku and GitHub were built, for example, and apply them to AI.
Swyx [00:50:05]: We should also talk a little bit about your compute availability. We're trying to ask this of all, you know, it's Compute Provider Month. Do you own your own GPUs? How many do you have access to? What do you feel about the tightness of the GPU market?
Ben [00:50:17]: We don't own our own GPUs. We've got a few that we play around with, but not for production workloads. And we are primarily built on public clouds, so primarily GCP and CoreWeave and like some smatterings elsewhere.
Swyx [00:50:29]: None from NVIDIA, which is your newest investor?
Ben [00:50:31]: We work with NVIDIA, so, you know, they're kind of helping us get GPU availability. GPUs are hard to get hold of. Like if you go to AWS and ask for one A100, they won't give you an A100. But if you go to AWS and say, I would like 100 A100s in two years, they're like, sure, we've got some. And I think the problem is like that makes sense from their point of view. They want just like reliable, sustained usage. They don't want like spiky usage and like wastage in their infrastructure, which makes total sense. But that makes it really hard for startups, you know, who are wanting to just like get hold of GPUs. I think we're in a fortunate position where we can aggregate demand so we can make commits to cloud providers. And then, you know, we actually have good availability, like, you know, we don't have infinite availability, obviously, but, you know, if you want an A100 from Replicate, you can get it. But, you know, we're seeing other companies pop up as well, like SF Compute's a great example of this, where they're doing the same idea for training almost where, you know, a lot of startups need to be able to train a model, but they can't get hold of GPUs from large cloud providers. So SF Compute is like letting people rent, you know, 10 H100s for two days, which is just impossible otherwise. And, you know, what they're effectively doing there is they're aggregating demand such that they can make a big commit to the cloud provider and then let people use smaller chunks of it. And that's kind of what we're doing with Replicate as well. We're aggregating demand such that we make big commits to the cloud providers. And you know, then people can run like a 100 millisecond API request on an A100.
Swyx [00:51:51]: So, you know, coming from a finance background, this sounds surprisingly similar to banks, where the job of a bank is maturity transformation, is what you call it. You take short term deposits, which technically can be withdrawn at any time, and you turn that into long term loans for mortgages and stuff, and you pocket the difference in interest. And that's the bank.
Ben [00:52:09]: Yeah, that's exactly what we're doing.
Swyx [00:52:11]: So you run a bank.
Ben [00:52:12]: Yeah, it's a bank, right, yeah. And it's very much a finance problem as well, because we have to make bets on the future demand and value of GPUs, yeah.
Swyx [00:52:21]: What are you... Okay, I don't know how much you can disclose, but what are you forecasting? Down? Up a lot? Yeah. Up 10x?
Ben [00:52:30]: I can't really say. We're projecting our growth with some educated guesses about what kind of models are going to come out and what kind of hardware they will need, you know? We need to bet that, like, okay, maybe language models are getting larger, so we need to have GPUs with a lot of RAM, or multi-GPU nodes, or maybe models are getting smaller and we actually need smaller GPUs. You know, we have to make some educated guesses about that kind of stuff, yeah.
Swyx [00:52:50]: Yeah. Speaking of which, the mixture of experts models must be throwing a spanner into the planning.
Ben [00:52:56]: Not so much. We've got like multi-node A100 machines, which can run those, and multi-node H100 machines, which can run those, no problem. So we're set up for that. Okay.
Swyx [00:53:04]: Right. I didn't expect it to be so easy. My impression was that the amount of RAM per model was increasing a lot, especially on a sort of per-parameter, per-active-parameter basis, going from Mixtral being eight experts to the DeepSeek MoE models, I don't know if you saw them, being like 30, 60 experts, and you can see it keep going up, I guess.
Ben [00:53:26]: Yeah. I think we might run into problems at some point, and yeah, I don't know exactly what's going on there. I think something that we're finding, which is kind of interesting, like I don't know this in depth, you know, we're certainly seeing a lot of good results from lower precision models. So like, you know, 90% of the performance with just like much less RAM required. That means that we can run them on GPUs we have available, and it's good for customers as well because it runs faster, and like they want that trade-off, you know, where it's just slightly worse, but like way faster and cheaper.
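To make the lower-precision point concrete, here is a rough sketch of loading a language model with 4-bit weights using Hugging Face transformers and bitsandbytes. The model name is only an example, and the exact quality and memory trade-off Ben cites (roughly 90% of the quality for much less RAM) will vary by model and precision; the general idea is that weights stored in 4 bits take a fraction of the fp16 footprint.

```python
# quantized_load.py — sketch of loading a model with 4-bit weights to cut GPU RAM
# (example model name; requires transformers, bitsandbytes, and a CUDA GPU).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example; other causal LMs work similarly

# NF4 quantization with bf16 compute: a 7B model's ~14 GB fp16 footprint
# shrinks to roughly 4-5 GB of weights on the GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers on whatever GPUs are available
)

inputs = tokenizer("The trade-off with quantization is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```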
Alessio [00:53:55]: Do you see a lot of GPU waste in terms of people running the thing on a GPU that is, like, too advanced? I think we use a T4 to run Whisper, so we're at the bottom end of it. Yeah. Any thoughts? At one of the hackathons we were at, people were like, oh, how do I get access to H100s? And it's like, you need to run, like... Dude, you don't need H100s.
Ben [00:54:14]: You don't need H100s. Yeah. Yeah. Well, if you want low latency, sure, spend a lot of money on the H100. Yeah. We see a ton of that kind of stuff, and it's surprisingly hard to optimize these models right now, so a lot of people are just running really unoptimized models. We're doing the same, honestly. A lot of models on Replicate have just not been optimized very well. So something we want to be able to help people with is optimizing those models. Either we show people how to with guides, or we make it easier to use some of these more optimized inference servers, or we show people how to compile the models, or we do that automatically, or something like that. But that's only something we're exploring, because there's so much wastage. It's not just wasting the GPUs, it's also a bad experience, and the models run slow. So the models on Replicate are almost all pushed by our community. People have pushed those models themselves, but it's a distribution where there's a long tail of lots of models that people have pushed and a big head of the models most people run. So for models like Llama 2 and Stable Diffusion, you know, we work with Meta and Stability to maintain those models, and we've done a ton of optimization to make them really fast. So those models are optimized, but the long tail is not, and there's a lot of wastage there.
Alessio [00:55:32]: And going into the, well, it's already the new year. Do you see the customer demand and the GPU like hardware demand kind of like staying together? Because I think a lot of people are saying, oh, there's like hundreds of thousands of GPUs being shipped this year. Like the crunch is going to be over, but you also have like millions of people that now care about using AI. You know, how do you see the two lines progressing? Are you seeing customer demand is going to outpace the GPU growth? Do you see them together? Do you see maybe a lot of this like model improvement work kind of helping alleviate
Ben [00:56:04]: that? That's a really good question. From our point of view, demand is not outpacing supply GPUs, like we have enough, from our point of view, we have enough GPUs to go around, but that might change for sure. Yeah.
Alessio [00:56:15]: That's a very nicely put way to respond, as a startup founder.
Swyx [00:56:21]: So as your frame did more, it's like sort of picking the wrong box model, whereas yours is more about maybe the inference stack, if you can call it. Were you referencing VLLM? What other sort of techniques are you referencing? Also keeping in mind that when I talk to your competitors, and I don't know if we don't have to name any of them, but they are working on trying to optimize the kinds of models. Like they basically, they'll quantize their models for you with their special stack. So you basically use their versions of Llamatu, you use their versions of Mistral, and that's one way to approach it. I don't see it as the replicate DNA to do that because that would be like sort of, you would have to slap the replicate house brand on something, which I mean, just comment on any of that. What do you mean when you say optimize models?
Ben [00:57:05]: Things like quantizing the models. You can imagine a way that we could help people quantize their models if we want to. We've had success using inference servers like vLLM and TensorRT-LLM, and we're using those kinds of things to serve language models. We've had success with things like AITemplate, which compiles the models. And then there are some really just boring things, like making the code more efficient. When people are just writing some Python code, it's really easy to write inefficient Python code. So there are really boring things like that as well, but it's a whole smash of things like that.
Swyx [00:57:40]: You will do that for a customer? Like you look at their code and-
Ben [00:57:43]: Yeah, we've certainly helped some of our customers do some of that stuff. And a lot of the popular models on Replicate, we've rewritten them to use that stuff as well. The Stable Diffusion that we run, for example, is compiled with AITemplate to make it super fast, and it's all open source, so you can see all of this stuff on GitHub if you want to see how we do it. But you can imagine ways that we could help people. It's almost like it could be built into the Cog layer, maybe, where we could help people use these fast inference servers or use AITemplate to compile their models to make them faster. Whether it's manual, semi-manual, or automatic, we're not really sure, but that's something we want to explore because it benefits everyone.
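As one example of the optimized-inference-server route Ben describes, here is a minimal sketch of batched generation with vLLM's offline API. The model name and sampling settings are placeholders; the point is that continuous batching and paged KV-cache management do much of the work that hand-rolled, per-request generation loops leave on the table.

```python
# vllm_sketch.py — rough example of batched generation with vLLM (example model name).
from vllm import LLM, SamplingParams

# vLLM handles continuous batching and paged KV-cache memory management,
# which is most of the win over a naive generation loop.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")

params = SamplingParams(temperature=0.7, max_tokens=128)
prompts = [
    "Explain what an inference server does in one sentence.",
    "Why is batching requests good for GPU utilization?",
]

for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```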
Swyx [00:58:21]: And then on the competitive piece, there was a price war on Mixtral last year, this last December. As far as I can tell, you guys did not enter that war. You have Mixtral, but it's just regular pricing. I think also some of these players are probably losing money on their pricing. You don't have to say anything, but the break even is somewhere between 50 to 75 cents per million tokens served. How are you thinking about like just the overall competitiveness in the market? How should people choose when everyone's an API?
Ben [00:58:50]: So for Llama 2 and Mistral, I think, not Mixtral, I can't remember exactly, we have similar performance and similar price to some of these other services. We're not bargain basement like some of the others, because, to your point, we don't want to burn tons of money. But we're pricing it sensibly and sustainably, to a point where we think it's competitive with other people, because we want developers using Replicate and we don't want to price it such that it's only affordable by big companies. We want to make it cheap enough that developers can afford it, but we also don't want super cheap prices, because then it's almost like your customers are hostile, and the more customers you get, the worse it gets. So we're pricing it sensibly, but still to the point where hopefully it's cheap enough to build on. And the thing we really care about, obviously we want models on Replicate to be comparable to other people's. But I think the really crucial thing about Replicate, and the way we think about it, is that, particularly with open source, it's not just the API for the model that is the important bit. Quite often with open source models, the whole point of open source is that you can tinker on it and customize it and fine tune it and smush it together with another model, like LLaVA, for example. And you can't do that if it's just a hosted API, because you can't touch the code. So what we want to do with Replicate is build a platform that's actually open. We've got all of these models where the performance and price is on par with everything else, but if you want to customize a model, you can fine tune it, you can go to GitHub and get the source code for it, edit the source code, and push up your own custom version, this kind of thing. Because that's the crucial thing for open source machine learning: being able to tinker on it and customize it. And we think that's really important to make open source AI work.
Alessio [01:00:39]: You mentioned open source. How do you think about levels of openness? When Llama 2 came out, I wrote a post about this, about how there's open source, there's open weights, and then there are restrictive weights. It was on the front page of Hacker News, so there were all sorts of comments from people. So I'm always curious to hear your thoughts. What do you think it's okay for people to license? What's okay for people to not release?
Ben [01:01:03]: You know, before it was just closed-source big models and open-source little models, purely open source stuff. And we're now seeing lots of variations where, you know, model companies put restrictive licenses on their models that mean they can only be used for non-commercial purposes, and a lot of the open source crowd complains that it's not true open source, and all this kind of thing. And I think a lot of that is coming from philosophy, you know, the sort of free software movement kind of philosophy. And I don't think it's necessarily a bad thing. I think it's good that model companies can make money out of their models. That's what will incentivize people to make more models and this kind of thing. And I think it's totally fine, if somebody made something, to ask for some money in return if you're making money out of it. And I think there are some really interesting midpoints as well, where people are releasing the model so you can still tinker on it, but the person who trained the model still wants to get a cut if you're making a bunch of money out of it. And I think that's good, and that's going to make the ecosystem more sustainable. I don't think anybody's really figured it out yet. We're going to see more experimentation with this and more people trying to figure out what the business models are around building models and how to make money out of it, and we'll just see where it ends up. And I think it's something we want to support at Replicate as well, because we believe in open source. We think it's great, but there are also going to be lots of models which are closed source, and there's probably going to be a long tail of a bunch of people building models that don't have the reach that OpenAI has. And hopefully as Replicate we can help those people find developers and help them make money and that kind of thing.
Alessio [01:02:46]: I think the compute requirements of AI kind of change the thing. I started an open source company. I'm a big open source fan. And before, it was kind of man-hours that was really all that went into open source; it wasn't much monetary investment. Well, not that man-hours are not worth a lot, but if you think about Llama 2, it's like $25 million all in. You can't just spin up a Discord and spend $25 million. So I think it's net positive for everybody that Llama 2 is open source, and whether it fits the open source term, I think people, like you're saying, kind of argue on the semantics of it. But all we care about is that Llama 2 is open, because if Llama 2 wasn't open source today, if Mistral was not open source, we would be in a bad spot, you know?
Ben [01:03:33]: And I think the nuance here is making sure that these models are still tinkerable, because the beautiful thing about Llama 2 as a base model is that, yeah, it costs $25 million to train to start with, but then you can fine tune it for like 50 bucks. And that's what's so beautiful about the open source ecosystem. And something that completely surprised me: I think a lot of people assumed that open source machine learning just wasn't going to be practical, because it's so expensive to train these models. But fine tuning is unreasonably effective, and people are getting really good results out of it, and it's really cheap. So people can effectively create open source models really cheaply, and there's going to be this sort of ecosystem of tons of models being made. And I think the risk there, from a licensing point of view, is that we need to make sure the licenses let people do that, because if you release a big model under a non-commercial license and people can't fine tune it, you've lost the magic of it being open. And I'm sure there are ways to structure that such that the person paying $25 million feels like they're compensated somehow, so they keep on training models and people can keep on fine tuning them. But I guess we just have to figure out exactly how that plays out.
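To illustrate why fine-tuning a released base model is so much cheaper than the original training run, here is a rough LoRA sketch using the peft library. The base model, target modules, and hyperparameters are illustrative; the key point is that only a tiny fraction of parameters is trained, which is what makes fifty-dollar fine-tunes on top of a twenty-five-million-dollar base model plausible.

```python
# lora_sketch.py — LoRA fine-tuning setup sketch (example model and hyperparameters).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Example base model; Llama 2 weights are gated and require accepting the license.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA adds small low-rank adapter matrices to the attention projections;
# the multi-billion-parameter base model stays frozen.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
# Typically reports a fraction of a percent of parameters as trainable,
# which is why a fine-tune fits in a few GPU-hours rather than a cluster-month.
# From here you would hand `model` to a normal Trainer loop with your dataset.
```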
Swyx [01:04:46]: Excellent. So just wanted to round it out. You've been excellent, very open. I should have started my intro with this, but I feel like you found the sort of AI engineer crew before I did. And, you know, something that really resonated with me in the Series B announcement was that you put in some stats about how there are two orders of magnitude more software engineers than there are machine learning engineers, about 30 million software engineers and 500,000 machine learning engineers. You can maybe plus or minus one of those orders of magnitude, but it's around that ballpark. And so obviously there will be a lot more engineers than there will be ML engineers. How do you see this group? Is it all software engineers? Are they going to specialize? What would you advise someone trying to become an AI engineer? Is this a legitimate career path?
Ben [01:05:30]: Yeah, absolutely. I mean, it's very clear that AI is going to be a large part of how we build software in the future. Ignoring it now is a bit like being a software developer in the 90s and ignoring the Internet. You just need to learn about this stuff. You need to figure this stuff out. I don't think it needs to be super low level. The metaphor here is that you don't need to be digging down into the PyTorch level if you don't want to, in the same way that a software engineer in the 90s didn't need to understand how network stacks work to be able to build a website. But you need to understand the shape of this thing, how to hold it, what it's good at and what it's not. And that's really important. So, yeah, I'd certainly advise people to just start playing around with it: get a feel for how language models work, get a feel for how these diffusion models work, get a feel for what fine tuning is and how it works, because some of your job might be building datasets, and get a feel for how prompting works, because some of your job might be writing a prompt. Those are all really important skills to figure out.
Swyx [01:06:36]: Yeah. Well, thanks for building the definitive platform for doing all that.
Ben [01:06:41]: Yeah, of course.
Alessio [01:06:42]: Any final calls to action? Who should come work at Replicate? Anything for the audience?
Ben [01:06:47]: Yeah, well, I mean, we're hiring. If you click on Jobs at the bottom of replicate.com, there are some jobs. And I just encourage you to try out AI, even if you think you're not smart enough. The whole reason I started this company is because I was looking at the cool stuff that Andreas was making. Andreas is a proper machine learning person with a PhD, you know, and I was just, you know, a sort of lowly software engineer. I was like, you're doing really cool stuff and I want to be able to do that. And by working together, we've now made it accessible to dummies like me. And I just encourage anyone who wants to try this stuff out to give it a try. I would also encourage people who are tool builders. The limiting factor now on AI is not the technology; the technology has made incredible advances and there are just so many incredible machine learning models that can do a ton of stuff. The limiting factor is making that accessible to people who build products, because it's really hard to use this stuff right now. And obviously we're building some of that stuff at Replicate, but there's just a ton of other tooling and abstractions that need to be built out to make this stuff usable. So I encourage people who like building developer tools to get stuck into it as well, because that's going to make this stuff accessible to everyone.
Swyx [01:07:58]: Yeah, I especially want to highlight you have a hacker in residence job opening available, which not every company has, which means just join you and hack stuff. I think Charlie Holtz is doing a fantastic job of that.
Ben [01:08:09]: Yeah, effectively. Like most of our, a lot of our job is just like showing people how to use AI. So we've just got a team of like software developers and people have kind of figured this stuff out who are writing about it, who are making videos about it, who are making example applications to show people what you can do with this stuff.
Swyx [01:08:26]: Yeah. In my world that used to be called DevRel, but now it's hacker in residence.
Ben [01:08:31]: And this came from Zeke, who's another one of our hackers.
Swyx [01:08:38]: Tell me this came from Chroma, because I want to start that one.
Ben [01:08:41]: We developed it independently. Like, Anton actually was like, hey, we came up with that first. But I think we came up with it independently, because the story behind this is we originally called it the DevRel team. Yeah. And DevRel's cursed now. Zeke was like, that sounds so boring. I don't want to go to someone and say I'm a developer relations person, or a developer advocate or something. So we were like, okay, what's the way we can make this sound the most fun? All right, you're a hacker.
Swyx [01:09:10]: I would say like that is consistently the vibe I get from Replicate. Everyone on your team I interact with. When I go to your San Francisco office, like that's the vibe that you're generating. Like it's a hacker space more than an office. And you hold fantastic meetups there. And I think you're a really positive presence in our community. So thank you for doing all that. And it's instilling the hacker vibe and culture into AI.
Ben [01:09:31]: I'm really glad that's working. Cool. That's a wrap.
Alessio [01:09:34]: I think so. Thank you so much for coming on, man.
Ben [01:09:36]: Yeah, of course. Thank you. This is a lot of fun.