
Local AI Models with Joe Finney
.NET Rocks!
Hardware, Context Windows, and Performance
Hosts discuss GPUs, NPUs, context window limits, and hardware tradeoffs for running LLMs locally.
