Canisis and Kanesusi - How to Scale a Big Data Cluster
With Kanesusi, you get very strict, predictable concurrency. Every container runs one job at a time. The clusters are completely uniform in size, so we try to size them evenly by splitting the work into more or less similarly sized jobs. And it runs in series, using a fixed capacity: four gigabytes of RAM and a single CPU per container in the infrastructure. Or similar in Lambda, right? You configure it for four gigabytes, and you're using pretty much one CPU. This makes monitoring and troubleshooting a lot easier, actually.
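The "splitting the work into more or less similarly sized jobs" idea above can be sketched with a simple greedy balancing heuristic. This is a hypothetical illustration, not code from the episode: `split_evenly` and the task sizes are made up, and the 4 GB / 1 CPU figure only motivates why each bucket should carry a similar load.

```python
# Hypothetical sketch: partition task sizes into N buckets of roughly
# equal total size (greedy "longest processing time first" heuristic),
# so each fixed-capacity container (e.g. 4 GB RAM, 1 CPU) gets a
# similarly sized job. All names here are illustrative.
import heapq

def split_evenly(task_sizes, n_containers):
    buckets = [[] for _ in range(n_containers)]
    # Min-heap of (current bucket total, bucket index).
    heap = [(0, i) for i in range(n_containers)]
    heapq.heapify(heap)
    # Assign each task, largest first, to the currently lightest bucket.
    for size in sorted(task_sizes, reverse=True):
        total, i = heapq.heappop(heap)
        buckets[i].append(size)
        heapq.heappush(heap, (total + size, i))
    return buckets

jobs = split_evenly([7, 3, 6, 2, 5, 4, 1], n_containers=3)
print(sorted(sum(b) for b in jobs))  # → [9, 9, 10]
```

Because every container is identical and runs one job at a time, balancing job sizes like this is what keeps utilization uniform and makes monitoring straightforward: any container that deviates from the norm stands out.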