
Nick Schrock

Discussing the importance of orchestration and a central location for managing data systems

Top 5 podcasts with Nick Schrock

Ranked by the Snipd community
76 snips
Oct 28, 2019 • 1h 8min

Build Maintainable And Testable Data Applications With Dagster

Summary

Despite the fact that businesses have relied on useful and accurate data to succeed for decades now, the state of the art for obtaining and maintaining that information still leaves much to be desired. In an effort to create a better abstraction for building data applications, Nick Schrock created Dagster. In this episode he explains his motivation for creating a product for data management, how the programming model simplifies the work of building testable and maintainable pipelines, and his vision for the future of data programming. If you are building dataflows, then Dagster is definitely worth exploring.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
This week’s episode is also sponsored by Datacoral, an AWS-native, serverless data infrastructure that installs in your VPC. Datacoral helps data engineers build and manage the flow of data pipelines without having to manage any infrastructure, meaning you can spend your time on data transformations and business needs rather than pipeline maintenance. Raghu Murthy, founder and CEO of Datacoral, built data infrastructures at Yahoo! and Facebook, scaling from terabytes to petabytes of analytic data. He started Datacoral with the goal to make SQL the universal data programming language. Visit dataengineeringpodcast.com/datacoral today to find out more.
You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, Corinium Global Intelligence, Alluxio, and Data Council. Upcoming events include the combined events of the Data Architecture Summit and Graphorum, the Data Orchestration Summit, and Data Council in NYC. Go to dataengineeringpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
Your host is Tobias Macey and today I’m interviewing Nick Schrock about Dagster, an open source system for building modern data applications.

Interview

Introduction
How did you get involved in the area of data management?
Can you start by explaining what Dagster is and the origin story for the project?
In the tagline for Dagster you describe it as "a system for building modern data applications". There are a lot of contending terms that one might use in this context, such as ETL, data pipelines, etc. Can you describe your thinking as to what the term "data application" means, and the types of use cases that Dagster is well suited for?
Can you talk through how Dagster is architected and some of the ways that it has evolved since you first began working on it?
What do you see as the current industry trends that are leading us away from full stack frameworks such as Airflow and Oozie for ETL and into an abstracted programming environment that is composable with different execution contexts?
What are some of the initial assumptions that you had which have been challenged or updated in the process of working with users of Dagster?
For someone who wants to extend Dagster, or integrate it with other components of their data infrastructure, such as a metadata engine, what interfaces do you provide for extensibility?
For someone who wants to get started with Dagster, can you describe a typical workflow for writing a data pipeline? Once they have something working, what is involved in deploying it?
One of the things that stands out about Dagster is the strong contracts that it enforces between computation nodes, or "solids". Why do you feel that those contracts are necessary, and what benefits do they provide during the full lifecycle of a data application?
Another difficult aspect of data applications is testing, both before and after deploying to a production environment. How does Dagster help in that regard?
It is also challenging to keep track of the entirety of a DAG for a given workflow. How does Dagit keep track of the task dependencies, and what are the limitations of that tool?
Can you give an overview of where you see Dagster fitting in the overall ecosystem of data tools?
What are some of the features or capabilities of Dagster which are often overlooked that you would like to highlight for the listeners?
Your recent release of Dagster includes a built-in scheduler, as well as a built-in deployment capability. Why did you feel that those were necessary capabilities to incorporate, rather than continuing to leave them as end-user considerations?
You have built a new company around Dagster in the form of Elementl. How are you approaching sustainability and governance of Dagster, and what is your path to sustainability for the business?
What should listeners be keeping an eye out for in the near to medium future from Elementl and Dagster?
What is on your roadmap that you consider necessary before creating a 1.0 release?

Contact Info

@schrockn on Twitter
schrockn on GitHub
LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Links

Dagster Elementl ETL GraphQL React Matei Zaharia DataOps Episode Kafka Fivetran Podcast Episode Spark Supervised Learning DevOps Luigi Airflow Dask Podcast Episode Kubernetes Ray Maxime Beauchemin Podcast Interview Dagster Testing Guide Great Expectations Podcast.__init__ Interview Papermill Notebooks At Netflix Episode DBT Podcast Episode

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
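For context on the "solids" and pipelines that these show notes refer to, here is a minimal sketch of what a Dagster pipeline looked like in this era, assuming the pre-1.0 Python API (`@solid`, `@pipeline`, `execute_pipeline`). The function names and sample data are illustrative rather than anything from the episode, and later releases renamed solids to "ops" and pipelines to "jobs".

```python
# A toy pipeline in the pre-1.0 Dagster API: each @solid is a computation node,
# and the @pipeline body wires them into a DAG by calling them functionally.
from dagster import execute_pipeline, pipeline, solid


@solid
def load_users(context):
    # In a real pipeline this would read from a database or API; solids can also
    # declare typed inputs and outputs to enforce the contracts discussed above.
    context.log.info("loading users")
    return [{"name": "ada", "active": True}, {"name": "bob", "active": False}]


@solid
def count_active(context, users):
    # Receives the output of load_users because of how the pipeline wires them together.
    return sum(1 for user in users if user["active"])


@pipeline
def user_metrics_pipeline():
    # The call graph here defines the dependencies; Dagster turns it into a DAG
    # that Dagit can render and the built-in scheduler can run.
    count_active(load_users())


if __name__ == "__main__":
    result = execute_pipeline(user_metrics_pipeline)
    assert result.success
```

Because each solid takes its upstream dependencies as ordinary function inputs, it can also be invoked in isolation with stubbed values, which is the testing story the interview digs into.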
47 snips
Jul 24, 2022 • 58min

Re-Bundling The Data Stack With Data Orchestration And Software Defined Assets Using Dagster

Summary

The current stage of evolution in the data management ecosystem has resulted in domain- and use-case-specific orchestration capabilities being incorporated into various tools. This complicates the work involved in making end-to-end workflows visible and integrated. Dagster has invested in bringing insights about external tools’ dependency graphs into one place through its "software defined assets" functionality. In this episode Nick Schrock discusses the importance of orchestration and a central location for managing data systems, the road to Dagster’s 1.0 release, and the new features coming with Dagster Cloud’s general availability.

Announcements

Hello and welcome to the Data Engineering Podcast, the show about modern data management.
When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their new managed database service you can launch a production ready MySQL, Postgres, or MongoDB cluster in minutes, with automated backups, 40 Gbps connections from your application hosts, and high throughput SSDs. Go to dataengineeringpodcast.com/linode today and get a $100 credit to launch a database, create a Kubernetes cluster, or take advantage of all of their other services. And don’t forget to thank them for their continued support of this show!
RudderStack helps you build a customer data platform on your warehouse or data lake. Instead of trapping data in a black box, they enable you to easily collect customer data from the entire stack and build an identity graph on your warehouse, giving you full visibility and control. Their SDKs make event streaming from any app or website easy, and their state-of-the-art reverse ETL pipelines enable you to send enriched data to any cloud tool. Sign up free… or just get the free t-shirt for being a listener of the Data Engineering Podcast at dataengineeringpodcast.com/rudder.
Data teams are increasingly under pressure to deliver. According to a recent survey by Ascend.io, 95% in fact reported being at or over capacity. With 72% of data experts reporting demands on their team going up faster than they can hire, it’s no surprise they are increasingly turning to automation. In fact, while only 3.5% report having current investments in automation, 85% of data teams plan on investing in automation in the next 12 months. That’s where our friends at Ascend.io come in. The Ascend Data Automation Cloud provides a unified platform for data ingestion, transformation, orchestration, and observability. Ascend users love its declarative pipelines, powerful SDK, elegant UI, and extensible plug-in architecture, as well as its support for Python, SQL, Scala, and Java. Ascend automates workloads on Snowflake, Databricks, BigQuery, and open source Spark, and can be deployed in AWS, Azure, or GCP. Go to dataengineeringpodcast.com/ascend and sign up for a free trial. If you’re a Data Engineering Podcast listener, you get credits worth $5,000 when you become a customer.
Your host is Tobias Macey and today I’m interviewing Nick Schrock about software defined assets and improving the developer experience for data orchestration with Dagster.

Interview

Introduction
How did you get involved in the area of data management?
What are the notable updates in Dagster since the last time we spoke? (November 2021)
One of the core concepts that you introduced and then stabilized in recent releases is the "software defined asset" (SDA). How have your users reacted to this capability? What are the notable outcomes in development and product practices that you have seen as a result?
What are the changes to the interfaces and internals of Dagster that were necessary to support SDAs? How did the API design shift from the initial implementation once the community started providing feedback?
You’re releasing the stable 1.0 version of Dagster as part of something called "Dagster Day" on August 9th. What do you have planned for that event, and what does the release mean for users who have been refraining from using the framework until now?
Along with your 1.0 commitment to a stable interface in the framework, you are also opening your cloud platform for general availability. What are the major lessons that you and your team learned in the beta period? What new capabilities are coming with the GA release?
A core thesis in your work on Dagster is that developer tooling for data professionals has been lacking. What are your thoughts on the overall progress that has been made as an industry? What are the sharp edges that still need to be addressed?
A core facet of product-focused software development over the past decade-plus is CI/CD and the use of pre-production environments for testing changes, which is still a challenging aspect of data-focused engineering. How are you thinking about those capabilities for orchestration workflows in the Dagster context? What are the missing pieces in the broader ecosystem that make this a challenge even with support from tools and frameworks? How has the situation improved in the recent past, and how do you see it evolving in the near future? What role does the SDA approach have in pushing on these capabilities?
What are the most interesting, innovative, or unexpected ways that you have seen Dagster used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on bringing Dagster to 1.0 and the cloud platform to GA?
When is Dagster/Dagster Cloud the wrong choice?
What do you have planned for the future of Dagster and Elementl?

Contact Info

@schrockn on Twitter
schrockn on GitHub
LinkedIn

Parting Question

From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements

Thank you for listening! Don’t forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
To help other people find the show please leave a review on Apple Podcasts and tell your friends and co-workers.

Links

Dagster Day Dagster 1st Podcast Episode 2nd Podcast Episode Elementl GraphQL Unbundling Airflow Feast Spark SQL Dagster Cloud Branch Deployments Dagster custom I/O manager LakeFS Iceberg Project Nessie Prefect Prefect Orion Astronomer Temporal

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

Support Data Engineering Podcast
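To make the "software defined assets" idea in these notes concrete, here is a minimal sketch assuming roughly the 1.0-era Python API (`@asset` and `materialize`); the asset names and sample data are illustrative and not taken from the episode.

```python
# Two software-defined assets: each function declares a piece of data that should
# exist, and upstream dependencies are inferred from parameter names, giving
# Dagster an asset graph to schedule, materialize, and display.
from dagster import asset, materialize


@asset
def raw_orders():
    # Stand-in for data landed by an ingestion tool; in practice an I/O manager
    # would persist this to a warehouse or object store.
    return [{"id": 1, "amount": 20.0}, {"id": 2, "amount": 35.5}]


@asset
def order_revenue(raw_orders):
    # Naming the parameter after the upstream asset records the dependency edge.
    return sum(order["amount"] for order in raw_orders)


if __name__ == "__main__":
    # Materialize both assets in dependency order; the same graph is what Dagit
    # and Dagster Cloud visualize.
    result = materialize([raw_orders, order_revenue])
    assert result.success
```

Modeling pipelines as assets rather than anonymous tasks is what lets Dagster fold dependency graphs from external tools into a single, central view, which is the "re-bundling" argument the episode makes.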
Jul 9, 2024 • 1h 3min

#176 - Nick Schrock and Wes McKinney - Composable Data Stacks, Open Table Formats, and More

Nick Schrock and Wes McKinney discuss composable data stacks, open table formats, managing complexity, and trends in AI and ML. They also explore challenges in data management and hardware acceleration for data processing, and share reflections on working in data.
Feb 6, 2024 • 48min

Nick Schrock (Founder, Dagster Labs & Co-Creator, GraphQL) - Facebook Eng Culture & Modern Data Stack Consolidation

Nick Schrock, founder of Dagster Labs and previously co-creator of GraphQL at Facebook, discusses Facebook's culture of urgency and decentralization. He dives into the problem GraphQL was created to solve and how it was rolled out within Facebook. The episode also covers the modern data stack and how Dagster Labs provides value to data engineering workflows. Nick's decision to transition from CEO to CTO is highlighted, along with his advice for other founders in the open source space.
Dec 20, 2023 • 24min

148: Present & Future of Data Engineering

Megan Dibble, a data operations expert, and Nick Schrock, founder of Dagster Labs, delve into the dynamic world of data engineering. They clarify the distinctions between data engineering and data analytics, and introduce the hybrid role of the analytics engineer. The discussion traces the evolution of roles in data engineering, spotlighting the shift towards a software engineering mindset. They also tackle challenges like vendor fatigue, the need for quality data, and strategies for effective data orchestration, emphasizing its vital role in decision-making.