
#356: Tips for ML / AI startups

Talk Python To Me

Speech Recognition Models Aren't as Expensive as They Used to Be

2min Snip

Every three to four months we're throwing out the current network architecture and using a different one that's giving us better results, so it's just not a good trade-off right now. We will work on making the models smaller, more compute efficient, and less costly to run, but right now our speech recognition model does inference on a GPU. It still uses GPUs. I mean, we could run it on CPU, but it's just not as parallelizable as running it on GPUs. Oh, interesting. How do you evaluate the stuff?
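To make the GPU-versus-CPU point concrete, here is a minimal sketch (not the guest's actual pipeline; the layer sizes and batch shape are illustrative assumptions) of running a stand-in speech model on a GPU when one is available and falling back to CPU otherwise, using PyTorch:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a speech recognition encoder; the real model
# discussed in the episode is not shown in the snip.
model = nn.Sequential(
    nn.Conv1d(80, 256, kernel_size=3, padding=1),  # 80 mel-filterbank channels (assumed)
    nn.ReLU(),
    nn.Conv1d(256, 256, kernel_size=3, padding=1),
)

# Prefer the GPU when present; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# A batch of 16 utterances, 1,000 feature frames each -- the kind of
# workload that parallelizes well on a GPU but runs much slower on CPU.
features = torch.randn(16, 80, 1000, device=device)

with torch.no_grad():
    logits = model(features)

print(logits.shape, "computed on", device)
```

The same code path works on either device; the difference the speakers describe is throughput, since batched matrix work is far more parallelizable on a GPU.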
