

Simplifying On-Device AI for Developers with Siddhika Nevrekar - #697
Aug 12, 2024
Siddhika Nevrekar, Head of AI Hub at Qualcomm Technologies, discusses simplifying on-device AI for developers. She highlights the shift from cloud to local device processing, emphasizing privacy and offline access. The conversation covers the challenges of optimizing AI across varied hardware and the collaboration needed between AI frameworks and device manufacturers. Siddhika also introduces Qualcomm AI Hub, which aims to streamline model testing and foster innovation across IoT, autonomous vehicles, and AI-integrated user experiences.
AI Snips
On-Device ML Realization
- Siddhika initially doubted on-device ML's feasibility due to model size and data requirements.
- Working on Apple's Face ID, which runs on the Apple Neural Engine, changed her perspective.
On-Device AI Motivations
- Developers move AI models to devices for cost savings, privacy, and connectivity.
- On-device AI lets developers leverage the 'free cloud' in everyone's pocket.
Democratizing On-Device AI
- Make on-device AI accessible to all developers, not just specialists.
- This will unlock unimaginable AI experiences by broadening developer participation.