

Open||Source||Data
Charna Parkey
What can we learn from AI-native development through stimulating conversations with developers, regulators, academics, and people like you who drive development forward, seek to understand its impact, and work to mitigate risk in this new world?
Join Charna Parkey and the community shaping the future of open source data, open source software, data in AI, and much more.
Episodes

Nov 15, 2023 • 58min
Throwback: The AI-Native Stack with Mikiko Bazeley, Zain Hasan, and Tuana Celik
This episode features a panel discussion with Mikiko Bazeley, Head of MLOps at Featureform; Zain Hasan, Senior Developer Advocate at Weaviate; and Tuana Celik, Developer Advocate at deepset.

In this episode, Mikiko, Zain, and Tuana discuss what open source data means to them, how their companies fit into the AI-first ecosystem, and how jobs will need to evolve with the AI-native stack.

-------------------

“We're almost part of a fancy new AI robot kitchen that you'd find in Tokyo, in some ways. I see a virtual feature store as, yes, you can have a bunch of your ingredients tossed into a closet. Or, what you can do is you can essentially have a nice way to organize them. You can have a way to label them, to capture information.” – Mikiko Bazeley

“I really like that analogy as well. I like how Mikiko put it where a vector search engine is really extracting value from what you've already got. [...] So where I see vector search engines, really, is if we think of these embedding providers as the translators to take all of our unstructured data and bring it into vector space into a common machine language, vector search engines are essentially the workhorses that allow us to compute and search over these objects in vectorized format. They're essentially the calculators of the AI stack.” – Zain Hasan

“Haystack, I would really position as the kitchen. I need Mikiko to bring the apples. I need Zain to bring the pears. I need Hugging Face or OpenAI to bring the oranges to make a good fruit salad. But, Haystack will provide the spoons and the pans and the knives to make that into something that works together.” – Tuana Celik

-------------------

Episode Timestamps:

(02:58): What open source data means to the panelists
(09:11): What interested the panelists about AI/ML
(24:10): Mikiko explains Featureform
(27:00): Zain explains Weaviate
(30:23): Tuana explains deepset
(36:00): The panelists discuss how their companies fit into the AI-first ecosystem
(44:58): How jobs need to evolve with the AI-native stack
(54:35): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Mikiko
Visit Featureform
LinkedIn - Connect with Zain
Visit Weaviate
LinkedIn - Connect with Tuana
Visit deepset
Visit Data-centric AI

Nov 1, 2023 • 38min
How We Should Think About Data Reliability for Our LLMs with Mona Rakibe
This episode features an interview with Mona Rakibe, CEO and Co-founder of Telmai, an AI-based data observability platform built for open architecture. Mona is a veteran in the data infrastructure space and has held engineering and product leadership positions that drove product innovation and growth strategies for startups and enterprises. She has served companies like Reltio, EMC, Oracle, and BEA, where AI-driven solutions have played a pivotal role.

In this episode, Sam sits down with Mona to discuss the application of LLMs, cleaning up data pipelines, and how we should think about data reliability.

-------------------

“When this push of large language model generative AI came in, the discussions shifted a little bit. People are more keen on, ‘How do I control the noise level in my data, in-stream, so that my model training is proper or is not very expensive, we have better precision?’ We had to shift a little bit that, ‘Can we separate this data in-stream for our users?’ Like good data, suspicious data, so they train it on little bit pre-processed data and they can optimize their costs. There's a lot that has changed from even people, their education level, but use cases also just within the last three years. Can we, as a tool, let users have some control and what they define as quality data reliability, and then monitor on those metrics was some of the things that we have done. That's how we think of data reliability. Full pipeline from ingestion to consumption, ability to have some human’s input in the system.” – Mona Rakibe

-------------------

Episode Timestamps:

(01:04): The journey of Telmai
(05:30): How we should think about data reliability, quality, and observability
(13:37): What open source data means to Mona
(15:34): How Mona guides people on cleaning up their data pipelines
(26:08): LLMs in real life
(30:37): A question Mona wishes to be asked
(33:22): Mona’s advice for the audience
(36:02): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Mona
Learn more about Telmai

Oct 18, 2023 • 41min
Throwback: Open Source Innovation, The GPL for Data, and The Data In to Data Out Ratio with Larry Augustin
This episode features an interview with Larry Augustin, angel investor and advisor to early-stage technology companies. Larry previously served as the Vice President for Applications at AWS, where he was responsible for application services like Pinpoint, Chime, and WorkSpaces.

Before joining AWS, Larry was the CEO of SugarCRM, an open source CRM vendor. He was also the founder and CEO of VA Linux, where he launched SourceForge. Among the group who coined the term “open source”, Larry has sat on the boards of several open source and Linux organizations.

In this episode, Sam and Larry discuss who owns the rights to data, the data in to data out ratio, and why Larry is an open source titan.

-------------------

"People are willing to give up so much of their personal information because they get an awful lot back. And privacy experts come along and say, ‘Well, you're taking all this personal information’. But then most people look at that and say, ‘But I get a lot of value back out of that.’ And it's this data ratio value question, which is: for a little in, I get a lot back. That becomes a key element in this. And I think there has to be some kind of similar thought process around open source data in general, which is if I contribute some data into this, I'm going to get a lot of value back. So this data in to data out ratio, I think it's an incredibly important one. And it gets everyone in the mindset of, ‘How do I provide more and more and take less and less?’ It's a principle of application development that I like a lot. And I think there's a similar concept here around open source data. Are there models or structures that we can come up with where people can contribute small amounts of data and as a result of that, they get back a lot of value.” – Larry Augustin

-------------------

Episode Timestamps:

(02:52): How Larry is spending his time now after AWS
(06:25): What drove Larry to open source
(18:41): What is the GPL for data?
(24:28): Areas of progress in open source data
(28:57): The data in to data out ratio
(36:39): Larry’s advice for folks in open source

-------------------

Links:

LinkedIn - Connect with Larry
Twitter - Follow Larry

Sep 27, 2023 • 45min
Reframing Machine Learning and AI-Assisted Development with Jorge Torres
This episode features an interview with Jorge Torres, Co-founder and CEO of MindsDB. MindsDB is a virtual AI database that works with existing data to help developers build AI-centered apps. In 2008, Jorge began his work on scaling solutions using machine learning as the first full-time engineer at Couchsurfing, growing the company from a few thousand users to a few million. He has also served a number of data-intensive start-ups and was a visiting scholar at UC Berkeley researching machine learning automation and explainability.

In this episode, Sam and Jorge discuss the inspiration and challenges behind MindsDB, classic data science AI versus applied AI, and time series transformers.

-------------------

“So much data in the world is time series data, so much data. Even data that people don't know is time series, it's time series. So long as it’s moving over time, it is time series data. Whether you store it or not, that's a different thing. For having a pre-trained model on time series data, it even enabled the fact that you don't have to store all the historical data. You can just take the model and start passing data as it comes through, and then you get out the forecast. So you don't even have to have the historical data. All you need to have is the data at that given instance, and you can pass it to the model and you get an output. It's mind blowing.” – Jorge Torres

-------------------

Episode Timestamps:

(05:20): The inspiration behind MindsDB
(10:20): Classic data science AI approach vs. applied AI
(22:09): What open source data means to Jorge
(28:51): What excites Jorge about Nixtla and time series transformers
(37:07): A question Jorge wishes to be asked
(40:20): Jorge’s advice for the audience
(41:38): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Jorge
Learn more about MindsDB open source code
Learn more about MindsDB

Sep 6, 2023 • 1h 10min
A Sam Ramji Feature: The Evolution of Open Source, Kubernetes, and AI's Forward Journey
Sam Ramji discusses Microsoft's transformation, the impact of Kubernetes, and the rapid acceleration of AI research and development. Topics include defragmentation of the industry by Kubernetes, the transformative power of AI, the concept of cognitive economy, and implications of advancements in robotics, AI, and clean energy.

Aug 23, 2023 • 46min
The Importance of Open Source Data for Generative AI, Now and in the Future with Abby Kearns
This episode features an interview with Abby Kearns, technology executive, board director, and angel investor. Her career has spanned executive leadership, product marketing, product management, and consulting across Fortune 500 companies and startups, including Puppet, Cloud Foundry Foundation, and Verizon. Abby currently serves as a board director for Lightbend, Stackpath, and Invoke.

In this episode, Sam sits down with Abby to discuss the betrayal source license, the role open source plays in AI, and empowering trust.

-------------------

“There's so much happening so quickly that I think open source has the power to help harness a lot of that innovative conversation. In a way that I think it's going to be really, really hard to match in a proprietary way. I think open source and the ability, given the fact that we're talking about AI and data, the two are very interrelated at this point. AI is not super interesting without data. I think the power of open source right now and what's happening, I think it has to happen in open source and I think it really has to have that level of transparency and visibility. But, always the ability for everyone to step up and understand what's happening at this moment in time and shape it.” – Abby Kearns

-------------------

Episode Timestamps:

(00:50): Sam and Abby discuss the betrayal source license
(14:12): What open source data means to Abby
(23:30): Abby dives into the companies she’s investing in
(34:30): How nonprofits can empower trust
(38:32): A question Abby wishes to be asked
(40:21): Abby’s advice for the audience
(43:53): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Abby
Twitter - Follow Abby
Read Design the Life You Love

Aug 9, 2023 • 34min
The Value of Reproducibility and Ease of AI Deployment with Daniel Lenton
This episode features an interview with Daniel Lenton, Founder and CEO of Ivy, where the team is on a mission to unify the fragmented AI stack. Prior to Ivy, Daniel was a Robotics Research Engineer at Dyson and a Deep Learning Research Scientist for Amazon Prime Air. During his PhD, Daniel explored the intersection between learning-based geometric representations, ego-centric perception, spatial memory, and visuomotor control for robotics.

In this episode, Sam and Daniel discuss the inspiration behind Ivy, open source reproducibility, and democratizing AI.

-------------------

"There's too much amazing stuff going on, from too many different parties. We just want to be the objective source of truth to show you the data and show you where your model will be doing best, and continue to do this as a service or something like this. This is high-level, some of the areas we see and going into, we really want to be a useful tool for anybody that wants to just kind of understand this fragmented complex space quickly and intuitively, and we are trying to be the tool that does that." – Daniel Lenton

-------------------

Episode Timestamps:

(01:00): What open source data means to Daniel
(05:37): The challenges of building Ivy
(15:37): The future of Ivy
(25:19): Who should know about Ivy
(28:46): Daniel’s advice for the audience
(32:00): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Daniel
Learn more about Ivy

Jul 26, 2023 • 50min
ML Engineering Teams and Niche Chat Bot Experiences with Demetrios Brinkmann
This episode features an interview with Demetrios Brinkmann, Founder of the MLOps Community, an organization for people to share best practices around MLOps. Demetrios fell into the Machine Learning Operations world and has since interviewed leading names around MLOps, data science, and machine learning.

In this episode, Sam sits down with Demetrios to discuss LLM in production use cases, ML engineering teams, and the LLM Survey Report from the MLOps Community.

-------------------

"I think the most novel ones that I saw from the survey were when a chat bot would prompt a human as opposed to the human prompting the chat bot. It's almost like you have this LLM coach. And in that way, it's not necessarily like this isn't LLM in production that an end user is getting that's not outside the business or that is outside the business. It's more like internally, you can think about maybe it's an accountant and the accountant is filing my taxes for the year. As they're filing them, the LLM is prompting them on different tax laws that maybe they weren't thinking about or different ways that they could file things." – Demetrios Brinkmann

-------------------

Episode Timestamps:

(04:30): LLMs as the new standard
(19:26): Key LLM in production use cases
(31:18): What open source data means to Demetrios
(34:36): What Demetrios is seeing in open source AI models
(42:44): One question Demetrios wishes to be asked
(44:41): Demetrios’s advice for the audience
(47:19): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Demetrios
Read the LLM Survey Report
Listen to The MLOps Podcast

Jul 12, 2023 • 4min
Building With Trust, Inspiration, and Reputation with Jaya Gupta, Yuliia Tkachova, and Omoju Miller
This bonus episode features conversations from season 5 of the Open||Source||Data podcast. In this episode, you’ll hear from Jaya Gupta, Partner at Foundation Capital; Yuliia Tkachova, Co-founder and CEO of Masthead Data; and Omoju Miller, Founder and CEO of Fimio.

Sam sat down with each guest to discuss how they are building foundations for trust, inspiration, and reputation as we all race into the AI-centric future.

You can listen to the full episodes from Jaya Gupta, Yuliia Tkachova, and Omoju Miller by clicking the links below.

-------------------

Episode Timestamps:

(00:49): Jaya Gupta
(01:48): Yuliia Tkachova
(03:03): Omoju Miller

-------------------

Links:

Listen to Jaya’s episode
Listen to Yuliia’s episode
Listen to Omoju’s episode

Jun 28, 2023 • 34min
FMOps and a Founder's Automated Future with Jaya Gupta
This episode features an interview with Jaya Gupta, Partner at Foundation Capital, where she leads early-stage investments across the enterprise software stack. Previously, Jaya was a Senior Business Analyst at McKinsey & Company focusing on software diligence and helping startups expand their go-to-market strategies.

In this episode, Sam and Jaya discuss her journey to Foundation Model Ops, how software is becoming more accessible, and the democratization of AI tools.

-------------------

"At the end of the day, FMOps isn't just about the new tools. It's actually more about the new builders, the new workflows, and a completely new market of customers. I was on the other day, looking at LangChain's page of integrations, I don't know if you've seen it, but it's like Anyscale, Databricks, all these other huge legendary companies are integrating with LangChain, and I think it's clear that there's a huge community that is building something real and valuable." – Jaya Gupta

-------------------

Episode Timestamps:

(01:05): What open source data means to Jaya
(08:51): Jaya’s journey to Foundation Model Ops
(15:58): How software is becoming more accessible
(23:04): The democratization of AI tools
(27:01): One question Jaya wishes to be asked
(29:32): Jaya’s advice for the audience
(31:51): Backstage takeaways with executive producer, Audra Montenegro

-------------------

Links:

LinkedIn - Connect with Jaya
Follow Jaya on Twitter
Learn more about FMOps