In this episode, Frederik Nielsen emphasizes considering the behaviors you want to incentivize when making data technology and architecture choices. He highlights the importance of building a composable, decentralized platform that can adapt to changing business requirements. He also notes that going too wide with a data mesh implementation initially can hinder finding the right groove and building momentum. Additionally, Frederik discusses how cost transparency and making teams responsible for their own costs can drive data mesh adoption and enable better decision-making around trade-offs. He emphasizes the need to tie data mesh to tangible use cases that align with strategic business goals and priorities. Lastly, he advocates for leveraging management consultants while ensuring clear internal data ownership for long-term success.
Frederik shares the insight that it can be more advantageous to start implementing data mesh with the more data-mature teams, as they can move faster. This allows for a gradual rollout, giving less data-mature teams time to improve their data management capabilities before adopting data mesh. He also mentions the importance of creating pre-configured approaches, or golden paths, that let less data-mature teams start creating data products, bridging the gap and easing their transition to data mesh.
Frederik emphasizes the need to connect data mesh to tangible use cases that align with strategic business goals and priorities in order to gain buy-in and engagement from higher levels of the organization. By demonstrating how data mesh can drive specific business outcomes, the chances of success increase significantly. This involves showing how data mesh can enable personalization, omni-channel experiences, and other critical business objectives.
Frederik highlights the role of cost transparency in driving data mesh adoption. Teams want to understand their costs, and many organizations are aiming to reduce expenses. Making teams responsible for their own costs makes it easier to spot cost-related trade-offs and make conscious decisions, such as taking on technical debt. Frederik also emphasizes the importance of decentralization and a composable platform so the organization can adapt to changing business requirements and teams can own and manage their own data.
In the preparation phase of a data mesh journey, Frederik recommends focusing on user journeys and identifying specific areas where data mesh can solve problems or provide value. By mapping out user journeys and understanding the data requirements at each step, it becomes evident where decentralization and handoffs between teams are necessary. Frederik suggests starting with use cases aligned with business objectives, gradually building capabilities, and assigning ownership of data products to the relevant teams. He also advises being mindful of technology choices and how they influence behavior and scalability.
Please Rate and Review us on your podcast app of choice!
Get involved with Data Mesh Understanding's free community roundtables and introductions: https://landing.datameshunderstanding.com/
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
Episode list and links to all available episode transcripts here.
Provided as a free resource by Data Mesh Understanding. Get in touch with Scott on LinkedIn.
Transcript for this episode (link) provided by Starburst. You can download their Data Products for Dummies e-book (info-gated) here and their Data Mesh for Dummies e-book (info-gated) here.
Frederik's LinkedIn: https://www.linkedin.com/in/frederikgnielsen/
In this episode, Scott interviewed Frederik Nielsen, Engineering Manager at Pandora (the jewelry one, not the music one).
Some key takeaways/thoughts from Frederik's point of view:
Frederik started with a bit about how their initial data mesh journey began - and it wasn't great. It was led by management consultants and was focused on real-time data with a very tangible use case. However, two things came from it: 1) a better understanding of what data mesh should actually be used for and 2) buy-in around a very specific use case at the highest levels. So while there was a misinterpretation of data mesh and the use case wasn't the best fit, there was still excitement about the term - and, to some extent, its actual meaning - broadly across the organization. Making it tangible got people to see the potential benefits.
Cost transparency has been a major driver for data mesh internally according to Frederik. Because the costs in a large monolithic stack are very opaque, decomposing the architecture has led to a far better understanding of the cost of individual pieces of work. Because inflation concerns were a big factor for retail in 2023, there was a bigger focus on cost reductions. Being able to give teams the freedom to take different approaches but making them responsible for the costs has led to better cost efficiency - teams can choose more costly methods but those decisions are more exposed. Also, because you have much finer-grained control, there are far more levers to pull when it comes to cost savings, e.g. shutting off test and dev environments at night or scaling up and down dynamically.
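As a concrete illustration of that kind of cost lever, below is a minimal sketch of a nightly job that stops non-production compute. It assumes an AWS environment with EC2 instances tagged env=dev or env=test - the cloud provider, tagging scheme, and region are all assumptions for illustration, not details from the episode.
```python
# Hypothetical sketch of one cost lever Frederik mentions: shutting off
# dev/test environments at night. Assumes EC2 instances are tagged with
# env=dev or env=test (an invented tagging convention). Intended to run
# on a nightly schedule, e.g. via cron.
import boto3

def stop_non_prod_instances(region: str = "eu-west-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    # Find running instances tagged as dev or test environments.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        inst["InstanceId"]
        for res in reservations
        for inst in res["Instances"]
    ]
    if instance_ids:
        # Stop (not terminate) so the environments can restart each morning.
        ec2.stop_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    stop_non_prod_instances()
```
Because each team owns its own slice of infrastructure, a lever like this can be applied per team rather than negotiated across a monolith - which is exactly the finer-grained control Frederik describes.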
Frederik talked about a common pattern when moving to data mesh: some teams are more data mature than others. There will be plenty of teams that need help when it comes to data mesh, especially with building good data products. For teams that aren't as mature, they are considering creating a sort of golden path or easy-button approach that makes things relatively pre-configured instead of requiring many complex decisions.
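To make the idea concrete, here is a minimal sketch of what such a golden path could look like: a scaffold where the platform team bakes in defaults so a less mature team only supplies the domain-specific pieces. Every field name and default here is invented for illustration - the episode doesn't specify what Pandora's pre-configured approach looks like.
```python
# Hypothetical "golden path" scaffold: sensible platform defaults are
# pre-configured so a team only decides what the product is and who owns it.
from dataclasses import dataclass, field

@dataclass
class DataProductConfig:
    name: str
    owning_team: str
    # Defaults a platform team might bake in so every data product is
    # consistent without each team making these decisions from scratch.
    output_format: str = "parquet"
    refresh_schedule: str = "0 2 * * *"  # nightly
    quality_checks: list = field(
        default_factory=lambda: ["not_null_keys", "freshness"]
    )
    access_policy: str = "internal-read"

def scaffold_data_product(name: str, owning_team: str) -> DataProductConfig:
    """Create a data product definition on the golden path: only the
    domain-specific decisions are left to the team."""
    return DataProductConfig(name=name, owning_team=owning_team)

# A less data-mature team gets a working, policy-compliant starting point:
config = scaffold_data_product("store-sales-daily", "retail-ops")
```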
When driving buy-in at the wider level for data mesh, Frederik talked about pitching data mesh as an entire organizational transformation versus pitching it use case by use case. He believes it's probably better to lead with use cases, but it can be hard to keep the complete picture of everything you need in view when you are concentrating on specific use cases. It's always a balance between what is needed only for the use case and what is good for the overall company approach to data.
For Frederik, there are two big company strategic priorities: personalization and omni-channel experience (a consistent experience across in-store and online). So much of what they have been focusing on is finding use cases that tie into at least one of those priorities, because then there will be executive support. Constantly tying the data work back to what people care about shows an understanding of the business instead of doing data work for the sake of data work. However, these are very big challenges spanning many domains and teams, so doing things in a scalable way - keeping data products independent while still maintaining high interoperability - is crucial.
When discussing bottlenecks, Frederik talked about how the signal that the centralized data team was becoming a bottleneck was the expanding time between a data request and the actual delivery. The backlog was ballooning even though the data team was quite productive. Many people will feel the pain of the increasing time to delivery - leverage that while still showing a productive team. If you are executing well but aren't succeeding, you need a new strategy.
Frederik talked about the fact that your data technology and architecture decisions will incentivize certain behaviors. A monolithic platform incentivizes monolithic ownership and handing off work, responsibilities, etc. When they introduced Kafka, it enabled them to push ownership upstream to data producers because the new technology made it easier for producers to own their data. It's of course difficult to incentivize your desired behaviors, but always think about what you want to happen and try to make that the easy/happy path.
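A minimal sketch of that ownership shift, assuming the kafka-python client: the producing domain team publishes its own events to a topic it owns and defines the contract for, rather than handing raw data off to a central team. The broker address, topic name, and event shape are all invented for illustration.
```python
# Hypothetical illustration of producer-owned data on Kafka: the order
# domain publishes to a topic it owns, so ownership sits upstream with
# the producing team instead of a central data team.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.internal:9092",  # hypothetical broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# The order domain owns this topic and the contract of what goes on it.
order_event = {
    "order_id": "o-12345",
    "store_id": "s-042",
    "total_amount": 129.00,
    "currency": "DKK",
}
producer.send("orders.domain.order-placed.v1", value=order_event)
producer.flush()
```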
When it comes to ownership of data, Frederik thinks maturity really matters. When you go down the path of data mesh, trying to get every domain to be really advanced with data just isn't realistic. Some teams don't see data as their focus, so if they won't leverage much data for analytical or ML/AI use cases, they are less likely to want to own their data - and, frankly, less capable of doing so.
Circling back to tangible use cases, Frederik talked about one key use case that saw big uptake and that they couldn't have accomplished before going the data mesh route. Being able to tie something to actual impact - whether a business capability or a direct effect on a business metric - really helped people get more interested. Similarly, when trying to find new use cases, the team did a lot of user journey mapping. The data for a user journey lives in many systems, so you need lots of teams participating to make the data available, but it can have a big impact on the business. Many companies probably can't do something that complex in their existing architecture - and you can use that inability as a selling point for data mesh.
Learn more about Data Mesh Understanding: https://datameshunderstanding.com/about
Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/
If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/
If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here
All music used in this episode was found on PixaBay and was created by (including slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf