
Data Mesh Radio

Latest episodes

Mar 25, 2022 • 1h 1min

#46 Designing a Data Literacy Approach for Data Engineers - Interview w/ Dan Sullivan

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Please Rate and Review us on your podcast app of choice!
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.
Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Dan's LinkedIn: https://www.linkedin.com/in/dansullivanpdx/
Dan's Email: dan.sullivan at 4mile.io

In this episode, Scott interviewed Dan Sullivan, Principal Data Architect at 4 Mile Analytics.

A key point Dan brought up is tech debt around data. Taking on tech debt should ALWAYS be a very conscious choice. But the way most organizations work with data, it is much more of an unconscious choice, especially by data producers, who take on debt that the data engineering teams will have to pay down. We need to find ways to deliver value quickly but with discipline.

Zhamak has mentioned in a few talks that data engineers may soon not exist in orgs deploying data mesh. Dan somewhat agrees that data engineering will change a lot: right now, there is a big rush to build out the initial iterations of data products (the industry definition). Going forward, Dan thinks there will be a need for data engineers who can really understand consumer needs and build the interactions, e.g. the SDKs, to leverage data.

Dan's 3 key pillars for driving data literacy for data engineers are domain knowledge, learning, and collaboration. Data engineers should pair with business people to acquire domain knowledge, they should be given the opportunity to spend time on things like online training to learn, and they should collaborate across the organization instead of just being ticket tacklers.

Per Dan, not all data engineers are the same - some come from a data analyst/data science background but many come from a software engineering background. So we can't train all data engineers as if they're the same, but we do need them to have a well-rounded background. A big need is for them to understand more about the data consumers and/or producers, so embedding them in the domains can really help.

For driving buy-in with data engineers, Dan points to the problems typically being around incentives. Data engineering is often hampered by organizational issues and a lack of clear direction, so if you can tackle those, you can often win over DEs. In any organization, but especially one implementing data mesh, standards, protocols, and contracts are all very important. However, most data engineering teams are not given the time to create them. They take a lot of effort and are hard to get right!

Dan talked about how data can take a lot of useful practices from Agile, especially the fast-cycle feedback loop. And data people really need to think more about the user experience (UX) for data.

Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.
All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
Mar 22, 2022 • 55min

#45 Data Governance in Data Mesh: Address the Macro and the Micro - Interview w/ Mohammad Syed

Mohammad's latest data governance article: https://www.linkedin.com/pulse/some-thoughts-data-governance-mohammad-syed/
Book mentioned: Disrupting Data Governance: A Call to Action by Laura Madsen

In this episode, Scott interviewed Mohammad Syed, Lead Strategist - Data at Caruthers and Jackson, about how data mesh governance has to be different from what we've done historically.

Per Mohammad, data governance in data mesh is very different from governance for either a data lake or a data warehouse. The warehouse focuses on high-level quality and usability, but at the expense of context and agility. The data lake is about metadata and lineage, but at the severe expense of usability - schema on query is not fun for consumers - and often quality. For most data organizations, governance has been very macro focused - governing the data warehouse or lake as a whole. That is part of why data governance has become a major bottleneck - the focus is on the macro but the individual requests are the micro.

In data mesh, governance can shift to maximizing the value of the data instead of mostly preventing risk. Of course, there is a balance between local maximization - the value of each data product - and global maximization - the value at the overall mesh level.

A key focus of data mesh governance is enabling - especially enabling the domains to govern their data products. Mohammad made the point that you need to enable your domains by creating the technical and business definitions of a "good" data product. Then the governance team needs to teach teams about the quality definitions, e.g. data product consumability. There is a need for policies of course, but mostly focus on frameworks to enable policy creation and enforcement - decentralize!

A key point Mohammad made: governance only works with informed governors - you must teach domains to govern properly. Transparency is key to making data governance work.

Mohammad emphasized that the "good" data product definition leads to the separation of data quality and data product quality. A data product might be more valuable for other reasons - or less costly - by having relaxed data quality standards. In a data warehouse implementation, there is really only a single definition of "good" quality, but that just won't work in data mesh. We really need to develop better frameworks for what data quality means at the micro level.

To get data governance right, strategy and maturity are crucial - what are you actually trying to accomplish? Data mesh for the sake of data mesh is worthless, just like any other paradigm.
Mar 21, 2022 • 1h 15min

#44 A Pragmatic Approach to Getting Started with Data Mesh at Northern Trust - Interview w/ Khanh Chau

Khanh's LinkedIn: https://www.linkedin.com/in/khanhnchau/

In this episode, Scott interviewed Khanh Chau, Lead Architect for the Data Mesh Initiative at Northern Trust.

Khanh believes you have to be passionate about making data better to do a good job implementing data mesh. And it is DEFINITELY a journey, so you need patience and vision. Also, each journey is unique - you can't just copy/paste from another organization. You need to make failure okay, but you should look to make it easy to fail fast, measure, and adjust.

Khanh talked about the need for exec buy-in before heading down the data mesh path. They got that buy-in by proving that the total cost of ownership of data was quite high, as consumers had to do a LOT of work to get the data to a usable state. When speaking internally, the business people were very excited to participate if it meant they could get quality data. Some of the IT/data engineering folks were harder to convince; it was especially hard to get them to shed layers of not-useful technology. Some IT teams were easier to convince - they had felt the impact of a few too many middle-of-the-night data downtime incidents. Other teams hadn't felt that pain, so they were harder to win over. There was also the incentive of additional possibilities - data mesh meant they could do things they couldn't do before.

Khanh talked about making the platform the easy and right path for 80% of use cases. They focused on making things easy to configure - basically: specify what transformations you want to do, and the platform automatically provisions the pipelines. Their goal was to make it easy to make good progress quickly; their time to initial deploy went from 2-3 months per data service to 2-3 weeks per data product, and they hope to drive it down further.

Northern Trust has been moving forward with data mesh for about 7 months as part of their high-level digital transformation initiative. On the data side, they had previously focused on data virtualization and data federation, but it was not delivering the results they wanted. It was not as scalable as they wanted - it was taking 2-3 months to launch each new data service. They also did not have great information on who was consuming the data and why.

For their data mesh proof of concept, Khanh and team set a timeline of 9 weeks. They needed to prove value by then or data mesh would be a very tough sell internally. Khanh talked about the need to sell data mesh as a paradigm shift in order to get people out of technology-focused thinking.

Northern Trust decided to take a pragmatic approach, e.g. not pushing all aspects of data ownership fully left. Khanh and team focused on finding a "happy balance" on data product SLAs and quality - improvement was necessary, but the team preferred done to perfect.

A big focus and key driver for Northern Trust has been building muscle and learning/evolving along the way. It's important to evolve quickly and not build muscle in the wrong way. Northern Trust is still in the early days of figuring out interoperability between data products - it's more of an art than a science. Khanh believes bi-temporality is more important right now than interoperability.

There are a lot of great learnings to take away from the Northern Trust journey.
Mar 20, 2022 • 20min

Weekly Episode Summaries and Programming Notes - Week of Mar 20, 2022 - Data Mesh Radio

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Please Rate and Review us on your podcast app of choice!
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
Mar 18, 2022 • 1h 15min

#43 Applying Resilience Engineering Practices to Scale Data Sharing - Interview w/ Tim Tischler

Books/posts/papers mentioned:
Blameless PostMortems and a Just Culture by John Allspaw - Link
The Theory of Graceful Extensibility: Basic rules that govern adaptive systems by David D. Woods - Link
The Field Guide to Understanding 'Human Error' by Sidney Dekker - Link

In this episode, Scott interviewed Tim Tischler, Principal Engineer at Wayfair. Prior to Wayfair, Tim worked as a Site Reliability Champion at New Relic and is well known in the "human factors" and resilience engineering space.

Per Tim, our current work culture is overly action-item driven - every meeting must generate a set of action items. This prevents people from having learning-focused meetings designed exclusively for context sharing. Humans' brains work differently in learning mode versus fixing mode, and we ask totally different questions in each. To scale our knowledge sharing, we need the space for learning-focused meetings.

A good way to center learning-focused meetings, be they "show and tell" or event storming sessions, is sharing stories - human communication has been founded on story sharing through the millennia. Tim's "show and tell" and event storming sessions at Wayfair have had extremely positive reviews so far.

Tim sees ticket-based interactions - just throwing requirements on someone's JIRA backlog or similar - as fundamentally flawed. If Team A gives Team B requirements, Team B just looks to close the ticket instead of getting both sides in the room to exchange context and negotiate. Tim prefers two modes of interaction over ticket systems: #1 - no-human-touch, automated interactions, e.g. an API; and #2 - high-touch, high context sharing interactions.

For resilience engineering specifically, you should apply learnings to each data product AND the mesh as a whole. Part of that is a broad acceptance that you are in a highly dynamic, highly changing org - there will be changes! A few anti-patterns from resilience engineering that apply to data mesh: 1) a hub-and-spoke relationship model where one person is the key glue - this is bad at a human level and even worse at a technical level :); 2) business leaders pushing for metrics without sharing the specific context - the results end up as completely empty and useless things you are tracking; and 3) not embedding the people building platforms into the teams they are building the platform for - they must really understand the workflows.
Mar 15, 2022 • 59min

#42 Self-Serve Consumption Means Empowerment, Not Chaos - Interview w/ Ust Oldfield

Ust's LinkedIn: https://www.linkedin.com/in/ust-oldfield/
Ust's Twitter: @UstDoesTech / https://twitter.com/UstDoesTech

In this episode, Scott interviewed Ust Oldfield, Principal Consultant at Advancing Analytics. They covered the concept of self-serve from a consumer standpoint in data mesh, and some ideas around how to get it right.

According to Ust, the overall data and analytics industry is just starting to move from data consumers only consuming what others have prepared towards self-serve data consumption. But it is important to still provide those prepared reports so 1) people are working from the same info / on the same page and 2) you give people an easy - and maintained - path to important business information.

Ust also mentioned that one key to getting self-serve right is not just enabling consumers to get to the data they want - they really need to be able to understand what they are seeing, so documentation, sample queries, and other similar tactics are crucial. Consumers also need training on how to use the platform - in general, training for self-serve data consumption is very lacking across the industry right now.

How do we share information at scale? Forums? "Show and tell"? Office hours? Neither Ust nor Scott had great answers just yet. Time will tell.

Ust finished with a recommendation for those building out their self-serve platforms for data consumption: spend a lot of time interviewing your data consumers to figure out what will empower them, rather than just trying to deliver what you would want. Also, make sure to enable those who just want to consume data as prepared - for those who want to be spoon-fed the info, that's fine; allow them to self-select, as that is a valid approach to leveraging data.
Mar 14, 2022 • 1h 5min

#41 Winning Over Application Developers: Agency not Autonomy - Interview w/ Jessitron (AKA Jessica Kerr)

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here (info gated).

Jessitron's pithy tweet: https://twitter.com/jessitron/status/1471352190149087233
Jessitron's Twitter: @jessitron / https://twitter.com/jessitron
Jessitron's emails: jessitron @ gmail.com OR jessitron @ honeycomb.io
Jessitron's presentation, Principles of Collaborative Automation: https://jessitron.com/2020/08/03/talk-principles-of-collaborative-automation/
Jessitron's blog post, To share the work, share the decisions: https://jessitron.com/2022/02/01/to-share-the-work-share/
Book mentioned: Why Information Grows: The Evolution of Order, from Atoms to Economies by César A. Hidalgo

In this episode, Scott interviewed Jessica Kerr, more widely known as Jessitron, a Principal Developer Advocate at Honeycomb.io. Or, as many affectionately refer to her, the Empress of Software. You're probably gonna enjoy this one :)

Scott asked Jessitron on because she had a very pithy tweet about data mesh being "conscious design for unexpected use" and because she knows the developer mindset extremely well - and many folks are having trouble getting developers bought in to sharing their data well.

Jessitron started off by discussing one of the big issues with application development: despite the tooling and process advancements of the last 20 years, it has all somehow only made application development harder. So the starting advice is: don't just add more to developers' plates. It probably won't go well!

Her biggest point was that giving application developers agency in how they share their data is key. Autonomy is just passing over the responsibility without the help. Application developers want guidance/direction toward the target outcome, but they want to make the choices on how to achieve it while being given the resources to do so. The information and capability to do their job is key.

For driving buy-in, start with the why, not the ask. Let them know why their data is valuable. And BE SPECIFIC! The conversation should be about their potential impact - not just the negative of "you changed this and it broke" but the aspirational. You want them to start thinking about how you can work together to enable them to share their data in a high context / highly meaningful way.

Data mesh is going to be a big culture shift for application developers - you can't just put something high priority on the backlog. You need to give them the space - meaning enough points or whatever on their backlog - to really understand and learn how to share their data well. Focus on teaching them how - possibly via an internal hackathon to start building that muscle, or even a cross-functional pair programming-like initiative to show each other your ways of working and share knowledge. Also, show them the impact they are having along the way as they get going - that will motivate them to do more.

Be very conscious of language. The interview with the NAV team building their application platform talked about this a lot. Application developers and data people don't speak the same language. Work with them to put it into their language.
Mar 13, 2022 • 18min

Weekly Episode Summaries and Programming Notes - Week of Mar 13, 2022 - Data Mesh Radio

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Please Rate and Review us on your podcast app of choice!
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here.
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
All music used this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
Mar 11, 2022 • 1h 10min

#40 Getting Data-as-a-Product Right and Other Learnings From Adevinta's Data Mesh Journey - Interview w/ Xavier Gumara Rigol

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here (info gated).

Xavier's Twitter: @xgumara / https://twitter.com/xgumara
Xavier's LinkedIn: https://www.linkedin.com/in/xgumara/
Adevinta meetup presentation: https://www.youtube.com/watch?v=av6cT_r4orQ
Xavier's Medium articles:
https://medium.com/adevinta-tech-blog/building-a-data-mesh-to-support-an-ecosystem-of-data-products-at-adevinta-4c057d06824d
https://medium.com/adevinta-tech-blog/treating-data-as-a-product-at-adevinta-c1dce5d394c5
https://towardsdatascience.com/data-as-a-product-vs-data-products-what-are-the-differences-b43ddbb0f123

Scott interviewed Xavier Gumara Rigol, who has been helping lead Adevinta's data mesh implementation as Area Manager for Experimentation and Analytics Enablement. They discussed the data as a product concept and learnings from Adevinta's journey thus far. Xavi has put out some great articles and did a Data Mesh Learning meetup, all linked above.

One key aspect of data as a product is understanding the need for data product evolution, both relative to maturity and to what is consumed. This is a common theme in many data mesh conversations, as historically data consumption has resisted evolution and change. Consumers need to really understand that the business is evolving, so what they consume will too. If you manage data products well, it won't be a sudden change - but if we are trying to share insights into a domain, those insights will change.

When thinking about data product maturity, it's totally okay to start by thinking of a data product as a single table or view. Xavi also mentioned some pitfalls of forced data product evolution - e.g. getting it wrong, as changes can be quite costly to backfill. Adding new attributes is easy, but computing something 3 to 6 months in hindsight can cost a lot in compute charges. To do evolution right, versioning and deprecation plans are key.

To get data as a product right, Xavi recommends starting by prioritizing which data you want to make available; this is a process, not a switch to flip. You should figure out which data is important for each domain and at the broader organization level.

Applying data as a product thinking to your data sets is easier said than done. While data mesh is a leading proponent, companies not doing data mesh can also use data as a product thinking - Adevinta started down this path before embarking on their data mesh journey.

For Adevinta's data mesh journey, they started with every data product being a single table. Data was originally centrally managed, so interoperability was already established. However, the documentation was lacking and the general usability wasn't great. They spent their first few quarters just focusing on splitting their monolithic data production into separate pipelines for each domain instead of one giant cluster. The giant cluster was becoming a major bottleneck, as changes were hard and maintainability was getting harder every day. Now, each domain essentially has one data product but with multiple dimensions/tables. Each product is layered, and each layer has different granularity and SLAs.

A few other notable points: Xavi believes all data products should be accessible via SQL, but definitely not only SQL. Templates/blueprints for data products are incredibly useful and important. The tooling/practices to prevent application changes from breaking the data are still very lacking - Adevinta uses data model reviews, but it's still not perfect.
Mar 9, 2022 • 23min

#39 Driving Buy-In for Data Mesh - Mesh Musings 7

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/
Transcript provided by Scott Hirleman here.

An episode all about driving buy-in from different personas for data mesh.
