1min snip

Brain Inspired AI

Data Skeptic

NOTE

Connections and Weight Changes in the Transformer Model

The transformer model has many interconnections between neurons, which allow it to learn new knowledge by adjusting parameter weights. However, this same process can cause the network to forget previously learned information. To optimize the transformer, we can apply rules from the brain to make it more efficient and sparse while preserving its power. A key advantage of the transformer is that it can capture long-distance dependencies by gathering information from all inputs at once. As in the brain, short-term and long-term memories are stored in different locations.
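The note's point about gathering information from all inputs is what scaled dot-product self-attention does: every position attends to every other position in a single step, so long-distance dependencies need no recurrence. Below is a minimal NumPy sketch of that mechanism; the dimensions and names (`seq_len`, `d_model`, the random weight matrices) are illustrative assumptions, not details from the episode.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative, not
# the episode's code). Shapes and names here are assumptions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Each position attends to all positions, capturing
    long-distance dependencies in one step."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len): all input pairs
    # Softmax over positions: how much each output gathers from each input
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mix of information from all inputs

rng = np.random.default_rng(0)
seq_len, d_model = 8, 16             # hypothetical sizes for the sketch
X = rng.normal(size=(seq_len, d_model))
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                     # (8, 16): each row mixes all 8 inputs
```

Because the attention weights span the full sequence, the cost grows quadratically with sequence length; the brain-inspired sparsity mentioned above is one way researchers try to cut that cost without losing the model's power.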
