
From Theme Parks to Tesla: Building Data Products That Work

DataTalks.Club


Practical Inference: Hardware and Local LLM Deployment

Abouzar shares his experience running smaller LLMs locally on devices such as a Raspberry Pi and an NVIDIA Orin to improve team productivity.
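For a sense of what running a smaller LLM locally can look like in practice, here is a minimal sketch using llama-cpp-python with a small quantized GGUF model. The model path, parameters, and prompt are placeholders, and this is not necessarily the stack Abouzar describes in the episode.

```python
# Minimal sketch (assumed setup, not the one from the episode): running a
# small quantized LLM locally with llama-cpp-python, which runs CPU-only on
# boards like a Raspberry Pi and can use the GPU on devices like an Orin.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-model-q4_k_m.gguf",  # hypothetical local GGUF file
    n_ctx=2048,   # context window; keep small on low-memory devices
    n_threads=4,  # roughly match the board's CPU core count
)

result = llm(
    "Summarize yesterday's standup notes in three bullet points:\n",
    max_tokens=128,
    temperature=0.2,
)
print(result["choices"][0]["text"])
```

A 1-3B parameter model quantized to 4 bits typically fits in a few gigabytes of RAM, which is what makes this kind of on-device inference feasible for small internal productivity tools.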
