Growing excitement around long context windows in Large Language Models (LLMs) is expected to supercharge RAG applications. Because longer context windows allow more documents to be passed to the LLM, there may be less emphasis on re-ranking after retrieval. However, effectively infinite context windows are unlikely to disrupt RAG significantly in the next few years, especially when dealing with tens of millions of documents.
The majority of enterprise data exists in heterogeneous formats such as HTML, PDF, PNG, and PowerPoint. However, large language models do best when fed clean, curated data. This presents a major data cleaning challenge.
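To make the challenge concrete, here is a minimal sketch of one small piece of it: turning raw HTML into clean text suitable for downstream indexing. This uses only the Python standard library and is purely illustrative; it is not Unstructured's actual pipeline, which handles many more formats and edge cases.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> content."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # tracks nesting inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text found outside script/style blocks.
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def clean_html(raw: str) -> str:
    """Strip markup and return the document's visible text."""
    parser = TextExtractor()
    parser.feed(raw)
    return " ".join(parser.parts)


raw = "<html><style>p{color:red}</style><body><p>Hello, <b>world</b>!</p></body></html>"
print(clean_html(raw))  # markup and CSS removed, text preserved
```

Even this toy version hints at why the problem is hard: every format (PDF, PNG, PowerPoint) needs its own extraction logic, and the output must still be normalized before it reaches a vector database.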
Unstructured is focused on extracting and transforming complex data to prepare it for vector databases and LLM frameworks.
Crag Wolfe is Head of Engineering and Matt Robinson is Head of Product at Unstructured. They join the podcast to talk about data cleaning in the LLM age.
Sean’s been an academic, startup founder, and Googler. He has published works covering a wide range of topics from information visualization to quantum computing. Currently, Sean is Head of Marketing and Developer Relations at Skyflow and host of Partially Redacted, a podcast about privacy and security engineering. You can connect with Sean on Twitter @seanfalconer.
The post Unstructured Data and LLMs with Crag Wolfe and Matt Robinson appeared first on Software Engineering Daily.