
Fast Inference with Hassan El Mghari

Software Huddle

00:00

Intro

This chapter delves into the complexities of using open-source models, highlighting the expertise required across LLM inference frameworks like vLLM and TRT-LLM. It also reflects on the original vision of a company leveraging GPU resources from crypto and discusses common pitfalls newcomers face in AI app development.

