
Stellar inference speed via AutoNAS

Practical AI: Machine Learning, Data Science, LLM

CHAPTER

How Does Your Platform Integrate With Your DevOps Pipeline?

We look at our platform as an end-to-end platform, from development to production. We provide two production tools: one is called Infery, and the second is called RTiC. Infery is a lightweight edge inference engine that can easily be integrated into a monolithic application. RTiC is a container server, so that solution can be easily deployed by DevOps with the model inside, fetched from the model repository we provide as part of our SaaS offering. These are the two ways to get a Deci-optimized model into a production environment.
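To make the two integration paths concrete, here is a minimal Python sketch. It is an illustration, not Deci's actual API: ONNX Runtime stands in for an in-process engine like Infery, a plain HTTP POST stands in for a container server like RTiC, and the model filename, input tensor name, and endpoint URL are all assumed placeholders.

```python
# Illustrative sketch of the two deployment paths described above.
# ONNX Runtime stands in for a lightweight in-process engine (Infery-style);
# a plain HTTP call stands in for a container model server (RTiC-style).
# Model path, input name, and endpoint are hypothetical placeholders.
import numpy as np
import onnxruntime as ort
import requests

# Path 1: embed a lightweight inference engine directly in a monolithic app.
session = ort.InferenceSession("deci_optimized_model.onnx")  # assumed filename
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {"input": batch})  # input name depends on the model

# Path 2: call a containerized model server over HTTP. DevOps deploys the
# container with the model inside, fetched from a model repository.
resp = requests.post(
    "http://model-server:8080/predict",  # assumed endpoint
    json={"inputs": batch.tolist()},
)
print(outputs[0].shape, resp.status_code)
```

The trade-off the answer hints at: the in-process path avoids a network hop and suits edge or monolithic deployments, while the container path keeps the model behind a service boundary that DevOps can deploy and scale independently.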
