
Local AI Models with Joe Finney
.NET Rocks!
Local LLMs: Speed, Context, and Hardware Constraints
Joe explains the performance trade-offs of running LLMs locally: slower response times than hosted models, smaller practical context windows, and the GPU/NPU and RAM capacity needed to hold model weights.
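The RAM and GPU constraints discussed come down largely to whether the model's weights fit in memory. A rough back-of-envelope estimate (a hypothetical helper, not from the episode; the 1.2x overhead factor for KV cache and activations is an assumption):

```python
def estimate_model_memory_gb(params_billion: float,
                             bytes_per_param: float,
                             overhead: float = 1.2) -> float:
    """Rough memory footprint of an LLM's weights in GB.

    params_billion: parameter count in billions (e.g. 7 for a 7B model)
    bytes_per_param: 2.0 for fp16, 1.0 for 8-bit, 0.5 for 4-bit quantization
    overhead: assumed multiplier for KV cache and activations (hypothetical)
    """
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return weight_bytes * overhead / 1e9


# A 7B model quantized to 4 bits: 7e9 * 0.5 bytes = 3.5 GB of weights,
# roughly 4.2 GB with the assumed 1.2x runtime overhead.
print(f"{estimate_model_memory_gb(7, 0.5):.1f} GB")
```

This is why 4-bit quantized 7B-class models run comfortably on consumer GPUs or unified-memory laptops, while the same model at fp16 (roughly 16.8 GB by this estimate) does not.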