Local AI Models with Joe Finney

.NET Rocks!

Local LLMs: Speed, Context, and Hardware Constraints

Joe explains performance trade-offs for local LLMs: slower responses, limited context windows, and GPU/NPU/RAM requirements.
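As a rough illustration of the RAM/VRAM point: a local model's weight footprint is roughly its parameter count times bits per weight, and the KV cache grows linearly with the context window, which is why bigger contexts demand more memory. The C# sketch below uses assumed Llama-style dimensions (7B parameters, 32 layers, 4096 hidden size), not figures from the episode.

// Back-of-envelope memory estimate for running an LLM locally.
// Model dimensions are illustrative assumptions (a Llama-style 7B
// model), not numbers quoted in the episode.
double WeightGb(double paramsBillion, double bitsPerWeight) =>
    paramsBillion * 1e9 * bitsPerWeight / 8 / 1e9;

// KV cache scales linearly with the context window:
// 2 (keys and values) * layers * hiddenSize * bytes, per token.
double KvCacheGb(int contextTokens, int layers, int hiddenSize,
                 int bytesPerValue = 2) =>
    2.0 * layers * hiddenSize * bytesPerValue * contextTokens / 1e9;

// Example: 7B parameters quantized to 4 bits, with a 4K-token context.
// Models using grouped-query attention cache fewer heads, so treat
// the KV figure as an upper bound.
var weights = WeightGb(7, 4);            // ~3.5 GB of weights
var kv = KvCacheGb(4096, 32, 4096);      // ~2.1 GB of KV cache at fp16
Console.WriteLine(
    $"weights ≈ {weights:F1} GB, KV cache ≈ {kv:F1} GB, " +
    $"total ≈ {weights + kv:F1} GB");

Under these assumptions the model needs roughly 5-6 GB before runtime overhead, which is why a quantized 7B model is about the practical ceiling for a machine with 8 GB of GPU or unified memory.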
