
Local AI Models with Joe Finney
.NET Rocks!
Hardware, Context Windows, and Performance (38:31)
Hosts discuss GPUs, NPUs, context window limits, and hardware tradeoffs for running LLMs locally.
