
Generative AI in the Real World: Putting AI in the Hands of Farmers with Rikin Gandhi
Aug 22, 2025
34:54
Rikin Gandhi, CTO of Digital Green, talks with Ben Lorica about using generative AI to help farmers in developing countries become more productive. Farmer.Chat integrates training videos, weather and crop information, and other data sources into a multimodal app that farmers can use in real time.
Points of Interest
- 0:45: Digital Green helps farmers become more productive. Two years ago, Digital Green developed Farmer.Chat, an app that uses generative AI to combine local-language training videos with weather data, market information, and other sources.
- 2:09: Our primary data source is our library of 10,000 videos in 40 languages that have been produced by farmers. We integrate additional sources for weather and market information. More recently, we’ve added information support tools.
- 3:38: We have a smartphone app. Users who only have feature phones can call into a number and interact with a bot.
- 5:00: Prior to Farmer.Chat, our work was primarily offline: videos shown on mobile projectors to in-person audiences. Sending content to phones flips the paradigm: rather than attending a screening, farmers can ask questions relevant to their own situations.
- 6:40: When did you realize that generative AI opened up new possibilities? It was a gradual transition from offline videos on projectors. COVID made it impossible to get groups of farmers together, and more farmers came online in the same period.
- 8:17: We had a deterministic bot before Farmer.Chat. But users had to traverse a tree to get the information they wanted. That tree was challenging to create and difficult to use.
- 9:33: With GPT-3, we saw that we could move away from the complexity and cost of a deterministic bot.
- 11:15: Did ChatGPT alert you to more possibilities? ChatGPT has scoured open internet knowledge. Farmers are looking for location- and time-specific information. Even in the earliest version of ChatGPT, we saw that it had a lot of this information. Putting that world knowledge together with our videos was powerful.
- 13:07: Accuracy, precision, and recall are all important. Are you fine-tuning and using RAG to make sure you're accurate? We had problems with hallucinations even within our knowledge base. We implemented reranking and filtering, which reduced hallucinations to <1%. We've created a golden Q&A set for evaluation. (See the retrieve-rerank-filter sketch after this list.)
- 16:01: People are now talking about GraphRAG, the use of knowledge graphs for RAG. Can you create a knowledge graph because you know your data so well? Many concepts in agriculture are related; crop calendars, for example, capture how crops develop over time. We're trying to build those relations into the system. (See the knowledge-graph sketch after this list.)
- 17:05: We're leveraging agentic orchestration for the overall pipeline. Based on the user's query, we may be able to answer directly rather than going through the full RAG pipeline. (See the router sketch after this list.)
- 18:44: Your situation is inherently multimodal: video, speech-to-text, voice. Is this a challenge? We're now using tools like GPT Vision to get descriptive metadata about what's in the videos; that metadata becomes part of the database. We began with text queries, then added voice support, and now people can take a photo of a crop or an animal. (See the frame-description sketch after this list.)
- 21:04: Foundation models are becoming multimodal. What's your user interface today, and what are you moving towards? We started with messaging apps that users already use, plugging the bot into that ecosystem. We're migrating toward an interface that isn't text first: putting video first, so farmers can speak and take a video. For many farmers, this is the first time they've interacted with a bot. Autoprompts are important so that farmers know the bot has weather and locale-specific information.
- 23:57: What are the specific challenges around AI, such as privacy, security, and ethics? Agriculture is often a sensitive subject; there's a lot of personally identifiable information. We try to mask that information so it isn't used to train models. Farmers need to be able to trust that their information won't be taken away from them. (See the PII-masking sketch after this list.)
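Code Sketches
The episode describes these techniques without implementation detail, so the sketches below are illustrative reconstructions, not Digital Green's code; every helper name, threshold, and schema is an assumption. First, the retrieve-rerank-filter pattern from 13:07: retrieve candidate passages, rescore the shortlist, and decline to answer when nothing clears a relevance threshold. Simple term overlap stands in here for a real embedding model and cross-encoder.

```python
def overlap_score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query terms found in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / max(len(q_terms), 1)


def answer(query: str, knowledge_base: list[str],
           top_k: int = 5, min_score: float = 0.5) -> str:
    # Stage 1: cheap first-pass retrieval over the whole knowledge base.
    candidates = sorted(knowledge_base,
                        key=lambda p: overlap_score(query, p),
                        reverse=True)[:top_k]
    # Stage 2: rerank the shortlist (here the same toy scorer; in practice
    # a stronger cross-encoder rescores these few candidates).
    reranked = sorted(((overlap_score(query, p), p) for p in candidates),
                      reverse=True)
    # Stage 3: filter -- decline rather than hallucinate when no passage
    # clears the threshold.
    grounded = [p for score, p in reranked if score >= min_score]
    if not grounded:
        return "I don't have reliable information on that yet."
    return grounded[0]
```

The design point worth noticing is the refusal branch: filtering out answers below a confidence floor is what pushes hallucinations toward zero, at the cost of sometimes saying "I don't know."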
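For the GraphRAG direction mentioned at 16:01, one way to picture the idea: encode crop-calendar relations as typed edges and expand a query's entities to their graph neighbors before retrieval. The crops, stages, and relation names here are invented for illustration.

```python
# Toy crop-calendar knowledge graph: (head, relation) -> tail entities.
# All entities and relation names are illustrative, not a real schema.
CROP_GRAPH = {
    ("wheat", "has_stage"): ["sowing", "tillering", "flowering", "harvest"],
    ("tillering", "needs"): ["nitrogen top-dressing", "first irrigation"],
    ("flowering", "at_risk_of"): ["rust", "aphids"],
}


def related_concepts(entity: str, hops: int = 2) -> set[str]:
    """Collect graph neighbors of `entity` to enrich a retrieval query."""
    frontier, seen = {entity}, {entity}
    for _ in range(hops):
        reached = set()
        for (head, _relation), tails in CROP_GRAPH.items():
            if head in frontier:
                reached.update(tails)
        frontier = reached - seen
        seen |= frontier
    return seen - {entity}


# A question about wheat can now also retrieve passages about its growth
# stages and the risks tied to each stage:
print(related_concepts("wheat"))
```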
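The agentic orchestration at 17:05, where a query goes either straight to a tool, to a direct reply, or through the RAG pipeline, can be sketched as an intent router. A production system would likely classify with an LLM call; keyword rules keep this sketch self-contained, and the route names are made up.

```python
def route(query: str) -> str:
    """Pick a handler for the query before invoking any pipeline."""
    q = query.lower()
    if any(word in q for word in ("weather", "rain", "forecast")):
        return "weather_tool"    # live weather API; no retrieval needed
    if any(word in q for word in ("price", "market", "sell")):
        return "market_tool"     # live market-price lookup
    if q.rstrip("!.") in ("hi", "hello", "thanks", "thank you"):
        return "direct_reply"    # greeting; skip every pipeline
    return "rag_pipeline"        # default: grounded retrieval + generation


assert route("Will it rain this week?") == "weather_tool"
assert route("How do I treat leaf rust on wheat?") == "rag_pipeline"
```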
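For the video metadata step at 18:44, a sketch of describing one sampled frame with a vision model via the OpenAI Python client. The episode only names GPT Vision; the model choice, the prompt, and the idea of sampling a JPEG frame per scene are assumptions. Requires `pip install openai` and an API key in `OPENAI_API_KEY`.

```python
import base64

from openai import OpenAI

client = OpenAI()


def describe_frame(jpeg_path: str) -> str:
    """Return a text description of one sampled video frame, suitable for
    indexing alongside the video's transcript."""
    with open(jpeg_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the crops, pests, or farming practices "
                         "visible in this frame in one or two sentences."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```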
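Finally, for the privacy point at 23:57, a toy PII-masking pass that strips obvious identifiers from chat logs before they are stored or used for training. Real deployments layer NER-based detection on top; these regexes and labels are illustrative only.

```python
import re

# Illustrative patterns only; a real system would also catch names,
# addresses, and IDs with an NER model.
PATTERNS = {
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "GPS":   re.compile(r"-?\d{1,3}\.\d{3,},\s*-?\d{1,3}\.\d{3,}"),
}


def mask_pii(text: str) -> str:
    """Replace each detected identifier with a bracketed type label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


print(mask_pii("Call me at +91 98765 43210 about my plot at 28.6139, 77.2090"))
# -> "Call me at [PHONE] about my plot at [GPS]"
```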
