Monitoring Your Data Pipelines
The orchestration system is actually smart enough to do things like say, hey, I know I had auto-scaled down to just five nodes, but now I need 30 nodes. That type of behavior is quite difficult if you try to code it up yourself. We have a bunch of customers that have essentially integrated all of the access data that comes out of Ascend into tools like Sumo Logic or Splunk. And so that's an infrastructure efficiency. Then the fourth is actually, in our world, Spark efficiency, which is: how much data are you reprocessing unnecessarily? So oftentimes it's, are you sending the smallest, most compact, most efficient jobs to your processing engine?
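The auto-scaling behavior described above can be sketched in a few lines. This is an illustrative toy, not the product's actual logic: the function name `desired_nodes` and the `tasks_per_node` parameter are assumptions, with the five-node floor and 30-node burst ceiling taken from the example in the transcript.

```python
import math

def desired_nodes(pending_tasks: int, tasks_per_node: int,
                  min_nodes: int = 5, max_nodes: int = 30) -> int:
    """Pick a cluster size that matches the queued work, within bounds.

    When the queue is empty the cluster idles at min_nodes; when a big
    job lands, the target jumps toward max_nodes without manual tuning.
    """
    needed = math.ceil(pending_tasks / tasks_per_node)
    return max(min_nodes, min(max_nodes, needed))

print(desired_nodes(0, 10))    # -> 5: scaled down while idle
print(desired_nodes(300, 10))  # -> 30: burst back up for a large job
```

The point of the transcript is that the orchestrator makes this decision continuously for you; hand-rolling it means also handling the timing, hysteresis, and failure cases this sketch ignores.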
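The Spark-efficiency point, avoiding unnecessary reprocessing, is commonly implemented by fingerprinting input partitions and only submitting jobs for the ones that changed. A minimal sketch, assuming a content-hash approach; the names `fingerprint` and `plan_jobs` are hypothetical, not an API from the transcript:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash used to detect whether a partition changed."""
    return hashlib.sha256(data).hexdigest()

def plan_jobs(partitions: dict, last_run: dict) -> list:
    """Return only the partition keys whose data changed since the
    previous run, so unchanged data is never reprocessed."""
    return [
        key for key, data in partitions.items()
        if last_run.get(key) != fingerprint(data)
    ]

# Example: only the new partition is sent to the processing engine.
previous = {"2024-01-01": fingerprint(b"orders-day-1")}
current = {
    "2024-01-01": b"orders-day-1",  # unchanged, skipped
    "2024-01-02": b"orders-day-2",  # new, processed
}
print(plan_jobs(current, previous))  # -> ['2024-01-02']
```

This is the sense in which jobs become "smallest, most compact": the work submitted is proportional to what actually changed, not to the total size of the dataset.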