4min chapter


#304: asyncio all the things with Omnilib

Talk Python To Me

CHAPTER

Async IO - Scaling the Wait Times

The idea is that you can generally do somewhere around 256 concurrent network requests in a single process before you really start to overload the event loop. The real problem, at the end of the day, is that the way the asyncio framework and event loop work, each task you give them basically gets added to a round-robin queue of all the things it has to work on. So what we've actually seen is that you end up with cases where you technically time out the request, because it's taken too long for Python or asyncio to get back to the network request before it hits something like a TCP interrupt. So this was essentially the answer to that - run an
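The transcript cuts off before the full answer, but the problem described above - too many in-flight requests for the event loop to service in time - is commonly mitigated in a single process by bounding concurrency. Below is a minimal sketch of that pattern, assuming aiohttp for the HTTP calls and using the ~256 figure from the discussion as the cap; it illustrates the general technique, not any specific Omnilib implementation.

```python
import asyncio
import aiohttp

# Assumed cap from the discussion: roughly 256 in-flight requests per
# process before round-robin scheduling starts causing timeouts.
MAX_CONCURRENT = 256

async def fetch(session: aiohttp.ClientSession,
                sem: asyncio.Semaphore, url: str) -> str:
    # The semaphore bounds how many requests are in flight at once,
    # so the event loop can get back to each socket before it times out.
    async with sem:
        async with session.get(url) as resp:
            return await resp.text()

async def main(urls: list[str]) -> list[str]:
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, sem, u) for u in urls))

if __name__ == "__main__":
    results = asyncio.run(main(["https://example.com"] * 1000))
    print(len(results), "responses")
```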

