
Local GenAI LLMs with Ollama and Docker

DevOps and Docker Talk: Cloud Native Interviews and Tooling


Discussing Ollama's Integration with Docker and GPU Utilization

Exploring the integration and architecture of Ollama running outside of a Docker stack on macOS to take advantage of GPU acceleration, and discussing its complexity, resource requirements, and the potential for building web apps on top of it. Listeners are encouraged to visit Ollama's website, download it, and try it out.
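For context, here is a minimal sketch of how a web app could talk to a natively installed Ollama instance, which serves an HTTP API on port 11434 by default. The model name "llama3" is an assumption; substitute any model you have pulled.

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes Ollama is installed natively (e.g., the macOS app) and is
# listening on its default port, 11434. The model name "llama3" is an
# assumption; use any model pulled via `ollama pull <name>`.
import requests

def generate(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to Ollama's /api/generate endpoint."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(generate("Why run LLMs locally instead of in the cloud?"))
```

If the web app itself runs in a Docker container on macOS while Ollama runs natively on the host (to keep GPU access), the container can reach it at http://host.docker.internal:11434 instead of localhost.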

