
#80: Is AWS Bedrock the OpenAI killer, with Randall Hunt
Real World Serverless with theburningmonk
Tokenization Process and Vector Stores
This chapter discusses the tokenization process of the GPT transformer model, the limitations of earlier language models, and the use of vector stores for document storage and retrieval. It also explores context management in AWS Bedrock and how models are trained with parameter-efficient fine-tuning.