
#174 - Odyssey Text-to-Video, Groq LLM Engine, OpenAI Security Issues

Last Week in AI


Focusing on Lightning Fast Inference for Language Models

Groq has increased its token processing speed to 1,200 tokens per second and launched a developer console that makes it easy to migrate from OpenAI to Groq. With a custom LPU (Language Processing Unit) designed for language models, Groq reflects a trend toward investing more compute in inference rather than training, catering specifically to transformers and language models.
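In practice, an "easy transition from OpenAI" usually means an OpenAI-compatible API: the request body follows the OpenAI chat-completions schema, so only the base URL and API key change. A minimal sketch of that idea, where the endpoint path and model id are illustrative assumptions, not details from the episode:

```python
import json

# Assumed OpenAI-compatible base URL; an OpenAI client would simply be
# pointed here instead of at api.openai.com.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

# The request body keeps the familiar OpenAI chat-completions shape,
# so existing client code needs no structural changes.
payload = {
    "model": "llama3-8b-8192",  # hypothetical model id for illustration
    "messages": [{"role": "user", "content": "Hello"}],
}

# This JSON body would be POSTed to f"{GROQ_BASE_URL}/chat/completions"
# with an Authorization header carrying the Groq API key.
print(json.dumps(payload))
```

Because the schema is shared, switching providers becomes a configuration change rather than a code rewrite.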

