1min snip

Nvidia Part III: The Dawn of the AI Era (2022-2023)

Acquired

NOTE

Cores, parallel, on chip memory

Computer performance is limited by the von Neumann bottleneck: the CPU and memory communicate over a single channel, the bus, and as CPUs get faster and memories larger, that single channel becomes an ever-tighter constraint. Overcoming it requires moving beyond the serial von Neumann model to parallel processing with many processors or cores. NVIDIA's hardware advances, together with software optimized by AI researchers, have made massively parallel execution practical. Yet for today's massive language models, the binding constraint is no longer clock speed or core count but the amount of on-chip memory. This shift underscores the significance of the work done by NVIDIA and in data centers.
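The point about memory, not compute, being the constraint can be sketched with a simple roofline-style check. The numbers below are illustrative assumptions (not from the episode): an accelerator with 100 TFLOP/s of compute but only 2 TB/s of memory bandwidth, generating one token with a hypothetical 70B-parameter model in 16-bit precision.

```python
def bound_by(flops, bytes_moved, peak_flops, mem_bandwidth):
    """Roofline-style check: is a workload limited by compute or by memory?

    Compares the time the math would take at peak compute speed against
    the time just moving the data would take at peak memory bandwidth.
    """
    compute_time = flops / peak_flops          # seconds if compute were the limit
    memory_time = bytes_moved / mem_bandwidth  # seconds if memory were the limit
    return "memory" if memory_time > compute_time else "compute"


# Assumed accelerator specs (illustrative, not any specific chip):
PEAK_FLOPS = 100e12   # 100 TFLOP/s of compute
BANDWIDTH = 2e12      # 2 TB/s of memory bandwidth

# Generating one token streams every weight through the chip once.
# Hypothetical 70B-parameter model, 2 bytes/weight (16-bit), ~2 FLOPs/weight:
flops_per_token = 2 * 70e9          # ~140 GFLOPs of arithmetic
bytes_per_token = 2 * 70e9          # ~140 GB of weights moved

print(bound_by(flops_per_token, bytes_per_token, PEAK_FLOPS, BANDWIDTH))
# → "memory": moving the weights takes ~0.07 s, the math only ~0.0014 s
```

Under these assumptions the chip spends roughly 50x longer waiting on memory than computing, which is why on-chip memory, not raw FLOPs, gates large-model performance.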
