Join the hosts and guest in a discussion about self-hosting, AI, and more. Topics include training AI to play Pokemon, exploring local language models, configuring LLMs, and using Chat UI. They also talk about AI technologies, setting up GPUs, software updates, Bitcoin developments, troubleshooting home servers, storage solutions, Kubernetes challenges, and listener contributions.
Training an AI to play Pokemon with reinforcement learning showcases the human creativity behind AI models.
tlm simplifies system commands by letting users interact with the terminal in a conversational style.
Tailscale serves as an efficient networking solution, providing secure connections between devices while keeping setup simple.
Deep dives
The Introduction of the Guest and Discussion on Home Labs and Self-Hosting
The podcast episode kicks off with a special guest, Wes Payne, joining the hosts in a kitchen setting. They delve into various topics including home labs, self-hosting, and AI applications, sharing their excitement about the subjects ahead.
Training AI to Play Pokemon with Reinforcement Learning, and Home Automation with YAML
They highlight a remarkable video featuring an AI trained to play Pokemon with reinforcement learning, emphasizing the human creativity behind AI models. The conversation then shifts to home automation, focusing on simplifying YAML generation for notifications and showcasing the practical applications of this technology.
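That YAML angle is easy to sketch: hand a plain-English request to a locally running model and let it draft the automation for review. Below is a minimal illustration of the idea, assuming Ollama on its default port with a placeholder model name and prompt; it is not the exact tooling used on the show.

```python
# Minimal sketch (not the episode's exact workflow): ask a local Ollama model
# to draft a Home Assistant notification automation in YAML.
# Assumptions: Ollama is running on its default port and a model named
# "llama3" has been pulled; the prompt is a placeholder.
import json
import urllib.request

prompt = (
    "Write a Home Assistant automation in YAML that sends a mobile "
    "notification when the front door opens after 10 PM."
)

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default REST endpoint
    data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

# The draft still deserves a human read before it lands in automations.yaml.
print(reply["response"])
```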
Terminal-Style Interaction with the Computer Using tlm
The discussion moves to tlm, a tool that lets users interact with their computer in a conversational style. They illustrate how tlm handles tasks like retrieving network interface information from a plain-English request, demonstrating a user-friendly and efficient approach to system commands.
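The core idea behind a tool like tlm can be sketched in a few lines: send the plain-English request to a local model, ask for a single shell command back, and show it to the user rather than running it blindly. This is a rough illustration, not tlm's actual implementation; the Ollama endpoint and model name are assumptions.

```python
# Rough sketch of the idea behind a terminal copilot like tlm (not its real code):
# turn a plain-English request into a shell command using a local model.
import json
import urllib.request

def suggest_command(request_text: str) -> str:
    """Ask a local Ollama model for one shell command (model name is a placeholder)."""
    payload = {
        "model": "codellama",
        "prompt": f"Reply with exactly one shell command and nothing else: {request_text}",
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

# Asking for network interface info in plain English might come back
# as something like `ip addr show`; print it for the user to confirm.
print(suggest_command("show my network interface information"))
```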
Tailscale's Networking Solution and Home Server Cost Discussion
Tailscale is introduced as a networking tool that establishes secure, fast connections between devices with minimal setup. The conversation also touches on the cost of setting up a home server, with listener contributions providing insight into budget considerations and server setup.
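To make the Tailscale point concrete: once two machines join the same tailnet, one can reach the other by its MagicDNS name as though they shared a LAN. A minimal sketch, with a hypothetical hostname:

```python
# Minimal sketch: check that a service on a tailnet peer is reachable via its
# MagicDNS name. The hostname and tailnet suffix below are hypothetical.
import socket

HOST = "homeserver.example-tailnet.ts.net"  # placeholder MagicDNS name
PORT = 22                                   # e.g. SSH on the home server

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} is reachable over the tailnet")
except OSError as err:
    print(f"could not reach {HOST}:{PORT}: {err}")
```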
Troubleshooting Server Hardware and the Concept of Declarative Deployments with Nix
The hosts share a detailed account of troubleshooting server hardware issues, particularly focusing on an HBA card causing system lockups. They further discuss the concept of declarative deployments using Nix, highlighting the potential for collaborative server builds and streamlined application deployment.
Open WebUI - Open WebUI is an extensible, feature-rich, and user-friendly self-hosted WebUI for various LLM runners; supported LLM runners include Ollama and OpenAI-compatible APIs.
Ollama - Get up and running with large language models, locally.
LM Studio - Discover, download, and run local LLMs
- Run LLMs on your laptop, entirely offline
- Use models through the in-app Chat UI or an OpenAI compatible local server (see the sketch after this list)
- Download any compatible model files from HuggingFace repositories
- Discover new & noteworthy LLMs in the app's home page
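Both Ollama and LM Studio's bundled server speak an OpenAI-compatible API, so a standard client can point at either. A minimal sketch assuming LM Studio's default local port (1234) with some model already loaded; the model name is a placeholder, and switching the base URL to http://localhost:11434/v1 targets Ollama instead.

```python
# Sketch: use the standard OpenAI Python client against a local
# OpenAI-compatible server (LM Studio here; Ollama works the same way).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local server
    api_key="not-needed-locally",         # local servers ignore the key, but the client requires one
)

chat = client.chat.completions.create(
    model="local-model",  # placeholder; use whatever model the server has loaded
    messages=[{"role": "user", "content": "In one sentence, what is a reverse proxy?"}],
)
print(chat.choices[0].message.content)
```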
Lunch at SCaLE - Let's put an official time down on the calendar to get together. The Yardhouse has always been a solid go-to, so sit down and break bread with the Unplugged crew during the lunch break on Saturday!