The Cost and Latency of Defining Complex Graph Structures
Every call to GPT is going to take around two to three seconds if you give it 3,000 or 4,000 tokens. So there are some practical constraints on making this graph overly complicated. For a lot of tree structures, practically speaking, what has worked well are slightly shallower structures. But generally speaking, as these models get better and cost and latency come down, this notion of repeatedly or recursively querying the LLM for more information has already proven to be pretty useful in a lot of cases.
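As a rough illustration of the tradeoff described here, the sketch below shows what recursively querying a model over a tree might look like, with a depth cap to keep total latency bounded. The `query_llm` function, prompt wording, fan-out, and depth limit are all hypothetical placeholders for illustration, not anything specific from the episode.

```python
import time

# Hypothetical stand-in for a real LLM client call; swap in your provider's API.
def query_llm(prompt: str) -> str:
    time.sleep(2.5)  # each call costs roughly 2-3 seconds at a few thousand tokens
    return f"expanded({prompt})"

def expand_node(topic: str, depth: int, max_depth: int = 1) -> dict:
    """Recursively ask the model to expand a node, keeping the tree shallow.

    A small max_depth keeps latency manageable: total time grows with the
    number of nodes, and every node costs one multi-second model call.
    """
    node = {
        "topic": topic,
        "summary": query_llm(f"Summarize: {topic}"),
        "children": [],
    }
    if depth < max_depth:
        # In practice the subtopics would be parsed out of the model's response;
        # here we just fan out to two placeholder children.
        for subtopic in (f"{topic} / detail A", f"{topic} / detail B"):
            node["children"].append(expand_node(subtopic, depth + 1, max_depth))
    return node

# A one-level-deep tree already means 3 sequential calls, i.e. roughly 7-8 seconds.
tree = expand_node("episode outline", depth=0, max_depth=1)
```

The point of the depth cap is simply that call count, and therefore latency and cost, compounds with every extra level, which is why shallower structures tend to work better in practice today.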