Advanced Lakehouse Management With The LakeKeeper Iceberg REST Catalog
Apr 21, 2025
Viktor Kessler, co-founder of Vakamo and developer of Lakekeeper, dives into the world of advanced lakehouse management with a focus on Apache Iceberg. He discusses the pivotal role of metadata in making data actionable and the evolution of data catalogs. Viktor highlights innovative features of Lakekeeper, such as its integration with OpenFGA for access control, its implementation in Rust, and its deployment on Kubernetes. He also addresses the challenges of migrating data catalogs and the importance of community involvement in open-source projects for better data management.
Lakekeeper, an Apache Iceberg REST catalog, manages the metadata that ties together the storage and compute components of a lakehouse.
Integrating OpenFGA with Lakekeeper enables centralized, fine-grained permissions for data access and management.
Datafold's AI-powered migration agent significantly accelerates data migrations, achieving timelines up to ten times faster than traditional methods.
Deep dives
AI-Powered Data Migration
Data migrations can often take months or even years, straining resources and sapping team morale. Datafold's AI-powered migration agent aims to expedite this process, enabling migrations to complete up to ten times faster than traditional manual methods. The agent combines AI code translation with automated data validation to ensure seamless transitions, and Datafold backs the timeline with a written guarantee, addressing a significant pain point in data management.
Understanding Lakehouses and Iceberg Catalogs
Lakekeeper is an open-source Apache Iceberg REST catalog written in Rust and a pivotal component for building lakehouses. A lakehouse requires three core components: storage, compute, and a catalog that manages table formats and lifecycle. The discussion emphasizes the catalog's essential role in managing metadata, akin to an information schema in a database. This contrasts with the traditional reliance on the Hive Metastore, shifting toward a more flexible and actionable approach to metadata.
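As a concrete illustration, here is a minimal sketch of connecting to an Iceberg REST catalog from PyIceberg. The endpoint, warehouse, and catalog names are assumptions for a local Lakekeeper deployment, not values from the episode:

```python
# pip install pyiceberg
from pyiceberg.catalog import load_catalog

# Assumed local Lakekeeper endpoint and warehouse name -- adjust to your deployment.
# Lakekeeper implements the standard Iceberg REST catalog API, so any
# REST-capable client (PyIceberg, Trino, Spark, ...) can connect this way.
catalog = load_catalog(
    "lakekeeper",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/catalog",
        "warehouse": "my-warehouse",
    },
)

# The catalog is the single source of truth for namespaces and tables.
print(catalog.list_namespaces())
```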
The Evolving Role of Metadata
In recent years, there has been a resurgence of interest in the catalog ecosystem, particularly in how metadata can become actionable rather than just a necessary component. Traditionally, the emphasis has been on data management, often sidelining the importance of well-governed metadata. The historical reliance on Hive Metastore limited insights into metadata management, leading to issues with ownership and outdated information. By innovating in the catalog space and emphasizing metadata's role, organizations can enhance governance and leverage metadata for effective data operations.
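To make "actionable metadata" concrete, the sketch below continues from the hypothetical catalog above and queries a table's snapshot history through PyIceberg; the table name is invented for illustration:

```python
# Continuing the hypothetical catalog from the previous sketch.
table = catalog.load_table("analytics.events")  # invented namespace.table

# Every commit is recorded in the table's snapshot log, so freshness and
# lineage questions become simple queries rather than guesswork.
for entry in table.history():
    print(entry.snapshot_id, entry.timestamp_ms)

# Richer metadata is exposed as Arrow tables (to_pandas() requires pandas).
print(table.inspect.snapshots().to_pandas())
```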
Challenges in Authorization and Security
Data systems today often face challenges with fragmented authorization models, where different tools manage permissions in isolation, complicating access and security management. It is essential to centralize authorization to ensure consistent access controls across various tools like Trino and Lakekeeper. By implementing OpenFGA alongside Lakekeeper, granular permissions can be organized effectively, managing access to data based on user roles and requirements. This approach simplifies the authorization landscape, creating a more secure and efficient data ecosystem.
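As a rough sketch of what a centralized check looks like, the snippet below calls OpenFGA's HTTP check endpoint directly. The store ID, relation, and object names are illustrative placeholders, not Lakekeeper's actual authorization model:

```python
import requests

FGA_URL = "http://localhost:8080"   # assumed local OpenFGA server
STORE_ID = "your-store-id"          # the OpenFGA store backing the catalog

def can_access(user: str, relation: str, obj: str) -> bool:
    """Ask OpenFGA whether `user` holds `relation` on `obj`."""
    resp = requests.post(
        f"{FGA_URL}/stores/{STORE_ID}/check",
        json={"tuple_key": {"user": user, "relation": relation, "object": obj}},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["allowed"]

# Illustrative names: every engine (Trino, Spark, ...) can defer to the
# same answer instead of maintaining its own permission tables.
if can_access("user:anne", "select", "table:analytics.events"):
    print("anne may read analytics.events")
```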
The Future of Lakekeeper and Open Source Contribution
The roadmap for Lakekeeper includes supporting HDFS and implementing a central access governance model, known as Lake Share, that integrates both data and AI access. The ongoing evolution of Lakekeeper demonstrates a commitment to the diverse needs of users in multi-cloud environments while providing flexibility in handling metadata. Community contributions, such as adding DuckDB support or enhancing the platform's scalability, showcase the collaborative spirit of open-source development. The call for broader participation in open source is emphasized, as contributions can significantly boost individual careers while driving technological innovation forward.
Summary
In this episode of the Data Engineering Podcast, Viktor Kessler, co-founder of Vakamo, talks about the architectural patterns in the lakehouse enabled by a fast and feature-rich Iceberg catalog. Viktor shares his journey from data warehouses to developing the open-source project Lakekeeper, an Apache Iceberg REST catalog written in Rust that facilitates building lakehouses with essential components like storage, compute, and catalog management. He discusses the importance of metadata in making data actionable, the evolution of data catalogs, and the challenges and innovations in the space, including integration with OpenFGA for fine-grained access control and managing data across formats and compute engines.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management
Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
Your host is Tobias Macey and today I'm interviewing Viktor Kessler about architectural patterns in the lakehouse that are unlocked by a fast and feature-rich Iceberg catalog
Interview
Introduction
How did you get involved in the area of data management?
Can you describe what Lakekeeper is and the story behind it?
What is the core of the problem that you are addressing?
There has been a lot of activity in the catalog space recently. What are the driving forces that have highlighted the need for a better metadata catalog in the data lake/distributed data ecosystem?
How would you characterize the feature sets/problem spaces that different entrants are focused on addressing?
Iceberg as a table format has gained a lot of attention and adoption across the data ecosystem. The REST catalog format has opened the door for numerous implementations. What are the opportunities for innovation and improving user experience in that space?
What is the role of the catalog in managing security and governance? (AuthZ, auditing, etc.)
What are the channels for propagating identity and permissions to compute engines? (how do you avoid head-scratching about permission denied situations)
Can you describe how Lakekeeper is implemented?
How have the design and goals of the project changed since you first started working on it?
For someone who has an existing set of Iceberg tables and catalog, what does the migration process look like?
What new workflows or capabilities does Lakekeeper enable for data teams using Iceberg tables across one or more compute frameworks?
What are the most interesting, innovative, or unexpected ways that you have seen Lakekeeper used?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on Lakekeeper?
When is Lakekeeper the wrong choice?
What do you have planned for the future of Lakekeeper?
From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.