An Opinionated Look At End-to-end Code Only Analytical Workflows With Bruin
Nov 11, 2024
Burak Karakan, co-founder of Bruin and a seasoned software engineer, discusses the advantages of a code-only approach to data workflows. He emphasizes how a unified data management tool can simplify analytics for mobile gaming companies. Burak details Bruin's open-source architecture, which allows small teams to efficiently manage their data sources. He also covers the evolution of the Bruin toolchain and its role in enhancing collaboration. Finally, he stresses the need for improved data quality and accessibility in analytical systems.
Bruin focuses on a unified, code-only data platform to streamline workflows specifically for small to mid-sized companies, enhancing data management efficiency.
Datafold’s monitoring capabilities allow real-time visibility and control, enabling proactive detection of discrepancies and anomalies in data processes to maintain integrity.
The flexibility of Bruin's code-driven workflows empowers users to seamlessly integrate Python and SQL, fostering composability and adaptability in data pipelines.
Deep dives
Real-time Data Monitoring
Datafold’s monitors provide automatic monitoring for data processes, allowing users to catch discrepancies and anomalies in real time, right at the source. By monitoring cross-database data differences, schema changes, key metrics, and custom data tests, the tool enhances data integrity and helps prevent costly mistakes. This proactive approach ensures that data issues are addressed before they escalate, offering greater visibility and control over data management. Maintaining a smoothly running data stack is critical for businesses that rely on accurate and timely data.
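The schema-change side of this kind of monitoring can be sketched generically. The snippet below is an illustrative example only, not Datafold's actual API: the `diff_schema` function and the baseline/current mappings are assumptions for demonstration, showing how a monitor might compare a table's current columns against a stored baseline and report additions, removals, and type changes.

```python
# Illustrative sketch of schema-change detection; this is NOT
# Datafold's API, just the general pattern of comparing a table's
# current {column: type} mapping against a stored baseline.

def diff_schema(baseline: dict, current: dict) -> dict:
    """Compare two {column: type} mappings and report discrepancies."""
    added = {c: t for c, t in current.items() if c not in baseline}
    removed = {c: t for c, t in baseline.items() if c not in current}
    changed = {
        c: (baseline[c], current[c])
        for c in baseline.keys() & current.keys()
        if baseline[c] != current[c]
    }
    return {"added": added, "removed": removed, "changed": changed}

# Hypothetical warehouse table before and after an upstream change.
baseline = {"user_id": "BIGINT", "amount": "NUMERIC", "created_at": "TIMESTAMP"}
current = {"user_id": "BIGINT", "amount": "FLOAT", "event_ts": "TIMESTAMP"}

report = diff_schema(baseline, current)
```

A real monitor would run a comparison like this on a schedule and alert when the report is non-empty, catching the change before downstream models break.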
Building Code-only Data Systems
The focus of Bruin is on creating code-only data platforms that target small to mid-sized companies, particularly in the gaming industry, where data plays a crucial role. Bruin's approach stems from recognizing the challenges faced by teams that lack dedicated data infrastructure, and aims to let them analyze large volumes of clickstream data effectively. By offering simplified tools for data ingestion, transformation, quality, and governance, Bruin reduces the complexity often associated with data handling in smaller organizations. This enables teams to put their data to work without overwhelming resource demands.
Unified Toolchain Evolution
Bruin represents a shift from the fragmented modern data stack towards a more unified toolchain that integrates the various stages of data management. Instead of relying on multiple disparate tools for different tasks, Bruin provides an all-in-one solution that simplifies the workflow from data integration to analytical processes. The current focus is on addressing the shortcomings of the prior disaggregation phase by improving observability and reducing data access complexity. This holistic approach aims to increase the efficiency and effectiveness of data teams who need to navigate varied data tasks seamlessly.
Flexibility and Extensibility in Workflow
Bruin emphasizes flexibility through code-driven workflows, enabling users to incorporate Python alongside SQL within their data pipelines. This composability allows teams to run custom scripts and models easily while leveraging existing data structures within the platform. By treating different programming assets as first-class citizens, users can efficiently transition between various tasks without being locked into specific methodologies or requiring separate infrastructures. This design aligns with the ongoing trend towards making data workflows more adaptable and user-friendly.
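The SQL-plus-Python composability described above can be illustrated generically. The sketch below is not Bruin's actual asset syntax; it only demonstrates the pattern of a SQL transformation step feeding a Python step within one pipeline, using an in-memory SQLite database with made-up table names.

```python
import sqlite3

# Generic sketch of mixing SQL and Python steps in one pipeline.
# This is NOT Bruin's asset syntax; it illustrates the pattern of a
# SQL model whose output is consumed directly by custom Python logic.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(1, 4.0), (1, 6.0), (2, 3.0)],
)

# SQL step: aggregate raw events into a per-user model.
conn.execute(
    """
    CREATE TABLE user_revenue AS
    SELECT user_id, SUM(revenue) AS total
    FROM events
    GROUP BY user_id
    """
)

# Python step: custom logic on top of the SQL model's output.
rows = conn.execute(
    "SELECT user_id, total FROM user_revenue ORDER BY user_id"
).fetchall()
high_value = [user_id for user_id, total in rows if total >= 5.0]
```

Treating both steps as assets in a single flow, rather than splitting them across separate orchestration and transformation tools, is the composability the code-only approach is after.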
Future Directions and Community Integration
Moving forward, Bruin aims to enhance its user experience by providing more templates designed to accelerate the setup of data projects. This initiative is complemented by a push to deepen the semantic understanding of data assets and improve lineage tracking across workflows. Additionally, there is a desire to expand the execution capabilities of Bruin, allowing for remote execution of tasks to accommodate larger volumes of data processing. Engaging with the community through open-source contributions is also a priority, fostering a collaborative environment for improving and building upon Bruin's functionalities.
Summary
The challenges of integrating all of the tools in the modern data stack have led to a new generation of tools that focus on a fully integrated workflow. At the same time, there have been many approaches to how much of the workflow is driven by code vs. not. Burak Karakan is of the opinion that a fully integrated workflow that is driven entirely by code offers a beneficial and productive means of generating useful analytical outcomes. In this episode he shares how Bruin builds on those opinions and how you can use it to build your own analytics without having to cobble together a suite of tools with conflicting abstractions.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Imagine catching data issues before they snowball into bigger problems. That’s what Datafold’s new Monitors do. With automatic monitoring for cross-database data diffs, schema changes, key metrics, and custom data tests, you can catch discrepancies and anomalies in real time, right at the source. Whether it’s maintaining data integrity or preventing costly mistakes, Datafold Monitors give you the visibility and control you need to keep your entire data stack running smoothly. Want to stop issues before they hit production? Learn more at dataengineeringpodcast.com/datafold today!
Your host is Tobias Macey and today I'm interviewing Burak Karakan about the benefits of building code-only data systems
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Bruin is and the story behind it?
Who is your target audience?
There are numerous tools that address the ETL workflow for analytical data. What are the pain points that you are focused on for your target users?
How does a code-only approach to data pipelines help in addressing the pain points of analytical workflows?
How might it act as a limiting factor for organizational involvement?
Can you describe how Bruin is designed?
How have the design and scope of Bruin evolved since you first started working on it?
You call out the ability to mix SQL and Python for transformation pipelines. What are the components that allow for that functionality?
What are some of the ways that the combination of Python and SQL improves ergonomics of transformation workflows?
What are the key features of Bruin that help to streamline the efforts of organizations building analytical systems?
Can you describe the workflow of someone going from source data to warehouse and dashboard using Bruin and Ingestr?
What are the opportunities for contributions to Bruin and Ingestr to expand their capabilities?
What are the most interesting, innovative, or unexpected ways that you have seen Bruin and Ingestr used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Bruin?
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.