
#304: asyncio all the things with Omnilib

Talk Python To Me

Async IO - Scaling the Wait Times

The idea is that you can generally do somewhere around 256 concurrent network requests on a single process before you really start to overload the event loop. The real problem at the end of the day is that the way the asyncio framework and event loops work, each task you give them basically gets added to a round-robin queue of all the things the loop has to work on. And so what we've actually seen is you end up with cases where you technically time out the request because it's taken too long for Python or asyncio to get back to the network request before it hits something like a TCP interrupt. So this was essentially the answer to that - run an
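One common way to keep the number of in-flight requests bounded, along the lines described above, is to gate the work behind an asyncio.Semaphore. The sketch below is a minimal illustration, not the Omnilib approach itself: the 256 cap is just the rule of thumb from the quote, fetch and the example URLs are placeholder names, and asyncio.sleep stands in for a real network call so the snippet runs on its own.

import asyncio

# Rule-of-thumb cap from the quote: roughly 256 in-flight network
# requests per process before the event loop's round-robin queue
# gets long enough that tasks start timing out.
MAX_CONCURRENCY = 256

async def fetch(url: str, sem: asyncio.Semaphore) -> str:
    # The semaphore lets at most MAX_CONCURRENCY coroutines past this
    # point; the rest wait here instead of flooding the event loop.
    async with sem:
        # Placeholder for a real network call (e.g. an HTTP GET).
        await asyncio.sleep(0.1)
        return f"fetched {url}"

async def main() -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENCY)
    urls = [f"https://example.com/{i}" for i in range(1000)]
    # gather schedules all 1000 tasks, but only 256 run their network
    # section at any one time thanks to the semaphore.
    results = await asyncio.gather(*(fetch(u, sem) for u in urls))
    print(f"completed {len(results)} requests")

if __name__ == "__main__":
    asyncio.run(main())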

