Hosts interview Travers Smith's CTO, Director of Legal Technology, and AI Manager. They discuss the firm's cautious approach to deploying cutting-edge legal AI, covering generative models, reasoning applications, and document analysis. Travers Smith open-sourced its generative AI chatbot to promote responsible AI adoption. The team prioritizes model safety, minimizing subtle errors and hallucination risks, and foresees AI transforming tasks like contract review, though not yet final work product. The guests also discuss the challenges and future vision of AI in the legal field.
Podcast summary created with Snipd AI
Quick takeaways
Travers Smith is cautious about using generative AI for legal advice due to the risk of hallucination and copyright issues.
Travers Smith emphasizes the importance of structuring prompts effectively to improve the usefulness of AI tools.
Being model-agnostic is crucial for successful implementation of AI models in the legal industry to mitigate reliance on a single model.
Deep dives
Obstacles in Using Generative AI Tools
One obstacle to conducting legal work accurately with generative AI tools is the risk of hallucination and copyright issues: the generated content may be inaccurate or contain copyrighted material. Travers Smith is therefore cautious about using generative AI for legal advice. Another challenge is balancing legal expertise with the computational side of the models; structuring prompts to get the desired output requires a combination of legal and computer-science knowledge. These obstacles have led Travers Smith to focus more on the extractive and reasoning capabilities of the models, which offer value with less risk.
Collaboration with 273 Ventures on YCNbot
Travers Smith's collaboration with 273 Ventures, particularly Dan Katz and Mike Bommarito, focused on making the YCNbot interface more user-friendly and compatible with different models; 273 Ventures also provided feedback on the code. The goal was a tool that can connect to various models through APIs while keeping data interactions secure and encrypted. This flexibility lets Travers Smith switch models and adapt to changing needs or potential restrictions in the future. The collaboration also highlights the importance of partnerships in implementing open-source code within law firms.
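The multi-model design described here can be sketched as a small provider abstraction. This is a hypothetical illustration, not YCNbot's actual code: the class and method names (`ChatProvider`, `complete`, `Chatbot`) are assumptions, and the providers return placeholders rather than calling real vendor APIs.

```python
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """Minimal interface every model backend must implement."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class OpenAIProvider(ChatProvider):
    # A real deployment would call the vendor's API over an encrypted
    # connection; this stub just returns a labelled placeholder.
    def complete(self, prompt: str) -> str:
        return f"[openai] response to: {prompt}"


class AnthropicProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        return f"[anthropic] response to: {prompt}"


class Chatbot:
    """Depends only on the interface, so the underlying model can be
    swapped without touching application code."""

    def __init__(self, provider: ChatProvider):
        self.provider = provider

    def ask(self, prompt: str) -> str:
        return self.provider.complete(prompt)


bot = Chatbot(OpenAIProvider())
print(bot.ask("Summarise this clause"))
bot.provider = AnthropicProvider()  # switch vendors at runtime
print(bot.ask("Summarise this clause"))
```

Because application code only ever sees `ChatProvider`, adding a new vendor, or dropping one that becomes restricted, is a matter of writing one new subclass.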
Ensuring Safety and Ethics in AI Experimentation
To leverage AI technology responsibly and ethically, Travers Smith advises legal professionals to invest time in learning how to prompt effectively. While developing the technology may not be a lawyer's responsibility, understanding how to structure prompts can greatly improve the usefulness of AI tools. Travers Smith also suggests that legal information vendors focus on addressing the hallucination problem in generative AI tools to ensure accuracy and reliability. Finally, maintaining a balance between legal research tools and human judgment is crucial to avoid the pitfalls of relying solely on AI-generated content.
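The kind of prompt structuring discussed above can be illustrated with a simple template builder. This is a generic sketch, not a Travers Smith artifact; the section names (role, context, task, constraints) are assumptions about one common way to structure a prompt.

```python
def build_prompt(role: str, context: str, task: str,
                 constraints: list[str]) -> str:
    """Assemble a prompt from labelled sections so the model receives
    the role, context, task, and constraints explicitly rather than
    buried in free-form text."""
    lines = [
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)


prompt = build_prompt(
    role="You are a contracts analyst.",
    context="The user has supplied a commercial lease.",
    task="Extract the termination notice period.",
    constraints=[
        "Quote the relevant clause verbatim.",
        "If the clause is absent, say so rather than guessing.",
    ],
)
print(prompt)
```

Explicit constraints such as "say so rather than guessing" are one lightweight way to reduce, though not eliminate, hallucinated answers.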
Suggestions for Legal Information Vendors
Travers Smith suggests that legal information vendors prioritize the hallucination problem to improve the accuracy of generative AI tools, with particular attention to subtle inaccuracies and copyrighted content. By increasing the reliability of these tools, vendors can better support legal research. Travers Smith emphasizes that judges and legal professionals must be able to trust generated content when AI is used in legal arguments, and that the risks of relying solely on AI-generated legal information must be mitigated.
The Importance of Being Model-Agnostic in AI Adoption
Key challenges in adopting AI models include the provenance of training data and the rush to deploy the technology quickly. Being model-agnostic is crucial for successful implementation, as it lets businesses leverage multiple models and avoid reliance on any single one. Applying the models safely, with proper governance and safety features in place, is another key consideration. The goal is to provide access to the technology while addressing its risks and limitations.
The Open Source Future of AI in Law and Potential Ownership Issues
There is a growing movement toward open-sourcing AI technology in the legal industry. By making AI tools and resources accessible to all, the benefits can be distributed more equitably rather than concentrated in a few large tech organizations. The law itself should be open and accessible, so that businesses can use AI to reason over legal outcomes. Challenges remain, however, such as hallucination and potential mistakes about legal precedent; proper regulation, thoughtful feature development, and model improvements are essential to address them.
In this episode of The Geek in Review, hosts Greg Lambert and Marlene Gebauer interview three guests from UK law firm Travers Smith about their work on AI: Chief Technology Officer Oliver Bethel, Director of Legal Technology Sean Curran, and AI Manager Sam Lansley. They discuss Travers Smith's approach to testing and applying AI tools like generative models.
A key focus is finding ways to safely leverage AI while mitigating risks like copyright issues and hallucination. Travers Smith built an internal chatbot called YCNbot to experiment with generative AI through secure enterprise APIs. They are being cautious on the generative side but see more revolutionary impact from reasoning applications like analyzing documents.
Travers Smith has open sourced tools like YCNbot to spur responsible AI adoption. Collaboration with 273 Ventures helped build in multi-model support. The team is working on reducing dependence on manual prompting and increasing document analysis capabilities. They aim to be model-agnostic to hedge against reliance on a single vendor.
On model safety, Travers Smith emphasizes training data legitimacy, multi-model flexibility, and probing hallucination risks. They co-authored a paper on subtle errors in legal AI. Dedicated roles like prompt engineers are emerging to interface between law and technology. Travers Smith is exploring AI for tasks like contract review but not yet for work product.
When asked about the crystal ball for legal AI, the guests predicted the need for equitable distribution of benefits, growth in reasoning applications vs. generative ones, and movement toward more autonomous agents over manual prompting. Info providers may gain power over intermediaries applying their data.
This wide-ranging discussion provides an inside look at how one forward-thinking firm is advancing legal AI in a prudent and ethical manner. With an open source mindset, Travers Smith is exploring boundaries and sharing solutions to propel the responsible use of emerging technologies in law.