
Fast Inference with Hassan El Mghari

Software Huddle


Intro

This chapter explores the challenges of serving open-source models, highlighting the expertise required across LLM inference frameworks such as vLLM and TRT-LLM. It also reflects on the company's original vision of leveraging GPU resources from crypto mining, and covers common pitfalls newcomers face in AI app development.

Transcript
