Using a Closed-Source API for Application-Dependent LLM Output
Building applications on top of LLM output can be challenging. Using a closed-source API removes the headache of deploying your own LLM, and if your business is not about serving models, it makes sense to rely on a hosted service. Inference speed and token limits can be problematic, though open-source tools exist to work around them. Hallucination remains a major concern, so exposing prompts to end users should be done carefully.

When it comes to optimization and correctness, the right approach depends on the programming language and the existing tooling. Unit tests and minimal code changes with gradual optimization are recommended, and validating any recommended change before applying it is crucial to minimizing risk.
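To make the hosted-service point concrete, here is a minimal sketch of calling a closed-source LLM API from application code. It assumes the openai Python package and an API key in the environment; the model name and the summarize_transcript helper are illustrative assumptions, not something prescribed in the episode.

```python
# Minimal sketch: using a hosted, closed-source LLM API instead of
# deploying your own model. Assumes the `openai` Python package and an
# OPENAI_API_KEY in the environment; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def summarize_transcript(transcript: str) -> str:
    """Hypothetical helper: summarize a podcast transcript via a hosted model."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Summarize the transcript in three sentences."},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # keep output as stable as the API allows
    )
    return response.choices[0].message.content
```

The trade-off this captures is the one described above: no model deployment or scaling work on your side, in exchange for dependence on the provider's latency, token limits, and pricing.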
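On the hallucination concern, one common defensive pattern is to treat model output as untrusted input and validate it before it reaches application logic. The sketch below assumes the model was prompted to return JSON with a tags field; the schema and function name are hypothetical examples, not part of the episode.

```python
# Sketch: treating LLM output as untrusted input before it reaches
# application logic. Assumes the model was prompted to return JSON like
# {"tags": ["ai", "podcasts"]}; the schema is a hypothetical example.
import json

def parse_episode_tags(raw_output: str) -> list[str]:
    try:
        data = json.loads(raw_output)
    except json.JSONDecodeError:
        return []  # malformed or hallucinated output: fail safe, not loud
    if not isinstance(data, dict):
        return []
    tags = data.get("tags", [])
    if not isinstance(tags, list):
        return []
    # Accept only short, non-empty strings; drop anything else.
    return [t for t in tags if isinstance(t, str) and 0 < len(t) <= 40]

print(parse_episode_tags('{"tags": ["ai", "podcasts", 42]}'))  # ['ai', 'podcasts']
```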
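For the unit-test-first, minimal-change advice, a characterization test can gate an LLM-suggested rewrite: the test pins down current behavior and must still pass after the change is applied. The slugify function here is a hypothetical optimization target, used only to show the workflow.

```python
# Sketch of validating an LLM-recommended change with a unit test.
# `slugify` stands in for any function an LLM was asked to optimize;
# the test documents current behavior, so the suggested rewrite is
# applied only if the test still passes afterwards.
import re

def slugify(title: str) -> str:
    """Current implementation: lowercase, collapse non-alphanumerics to hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_behavior_is_preserved():
    # Characterization tests: run before and after applying the LLM's change.
    assert slugify("AI-Powered Podcast Player!") == "ai-powered-podcast-player"
    assert slugify("  Episode 42:  LLM APIs ") == "episode-42-llm-apis"

if __name__ == "__main__":
    test_slugify_behavior_is_preserved()
    print("behavior preserved; safe to apply the suggested change")
```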