BUYER'S GUIDE
The Definitive Guide to Choosing a Gen AI Legal Research Solution
A Framework for Legal Gen AI Success
The emergence of commercially available Legal AI tools — Generative Artificial Intelligence (Gen AI) tools trained specifically for the legal profession — has shifted the industry conversation beyond the initial shock over the power of this new technology and toward a practical examination of how AI will transform the day-to-day practice of law.
A 2024 LexisNexis survey of managing partners and C-suite leaders at leading law firms and Fortune 1000 companies found that the vast majority of legal executives (90%) expect their investment in Gen AI technologies to increase over the next five years, and that roughly half (53%) of Law360 Pulse® Leaderboard firms have already purchased Gen AI tools. This rapid adoption has been accompanied by a surge in vendors vying for the attention of law firm and corporate leaders, so it is important to separate the Gen AI hype from the reality of how these tools can improve the way your organization operates and how your lawyers serve their clients.
Key AI Capabilities to Evaluate
It can feel overwhelming to sort through all the technical terminology and innovation breakthroughs related to Gen AI. But for practical purposes, it doesn’t need to be so complicated if you start by focusing on the key functionalities of a Legal AI solution.
There are five specific criteria you should evaluate when considering an investment in a Legal AI product:
Privacy & Security
The rise of powerful generative AI models that can create synthetic media, text, code, and other content raises important questions around privacy and security. As these systems leverage large datasets and can mimic existing content and styles, there are risks that need to be thoughtfully addressed for the safety of your organization and clients.
Some of the core concerns include:
- Data Privacy - How training data is obtained and used
- Data Bias - Potential to perpetuate unfair biases in the training data
- Misinformation - Creating convincing fake or misleading content
- Attribution - Properly crediting source materials used by generative models
- Transparency - Lack of detail in how models work
- Regulation - Need for thoughtful rules and governance of the technology
When evaluating a Legal AI solution, there are a few important questions to ask about how the model is trained and how your data is handled:
- Will my search history and usage information be stored or used to train the model?
- How does your Gen AI solution address the security concerns raised by public tools such as ChatGPT?
- How does your Gen AI solution protect customers’ Intellectual Property and your own?
- What data security measures are in place for any original generated content?
- What safeguards exist to prevent exposure of privileged or confidential client information through generated content?
The Gen AI Model Itself
Not all Gen AI models are created equal.
Public fascination with the potential of these tools was sparked by the release of ChatGPT, but many are unaware of the rapid improvements in the underlying technology since that initial product launch. You do not need to become a software engineer to be a savvy buyer of a Legal AI solution, but it does help to understand a bit about Large Language Models (LLMs) and how they operate.
Different LLMs have strengths and weaknesses. Here are five core differences between LLMs that are used to power various Legal AI solutions in the market:
- Architecture — LLMs have different underlying “neural network architectures” that impact their capabilities. For example, some are better at certain tasks such as translation or summarization.
- Size — LLMs can range from millions to trillions of parameters. Larger models are generally more capable, but smaller models can sometimes be more efficient.
- Training Data — The data used to train LLMs affects their knowledge and performance. Models trained on legal data will have different strengths than those trained on general purpose text.
- Fine-Tuning — LLMs can be fine-tuned on niche datasets to improve their domain-specific capabilities.
- Public vs. Proprietary — Open-source LLMs offer transparency into how they are built and trained, while proprietary models are typically optimized and supported by their developers, which can translate into higher-quality, more consistent responses.
There is another strategy available to Gen AI development teams: a multi-model approach that draws on more than one LLM in the creation of a new tool. By combining the outputs of different models, a solution can surpass the predictions and performance of any single model, letting users benefit from the unique capabilities of each LLM while balancing out each one's weaknesses, as the sketch below illustrates.
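To make the multi-model idea concrete, here is a minimal sketch of one way such an approach might be wired together. The model names, the ask_model() helper and the routing rules are all hypothetical placeholders for illustration, not a description of any vendor's actual implementation.

```python
# Hypothetical sketch of a multi-model approach: route each task to the
# LLM best suited for it, then have a second model review the draft.
# ask_model() is a stand-in for whatever interface a provider actually uses.

def ask_model(model_name: str, prompt: str) -> str:
    """Placeholder for a call to a specific LLM (assumed, not a real API)."""
    return f"[{model_name} response to: {prompt[:40]}...]"

# Assumed routing table: different (hypothetical) models for different tasks.
TASK_ROUTES = {
    "summarize": "model-tuned-for-summarization",
    "research":  "model-tuned-for-retrieval-and-citation",
    "draft":     "model-tuned-for-long-form-drafting",
}

def answer(task: str, prompt: str) -> str:
    primary = TASK_ROUTES.get(task, "general-purpose-model")
    draft = ask_model(primary, prompt)
    # A second model critiques the first model's output, so the strengths
    # of one model help offset the weaknesses of another.
    review = ask_model("general-purpose-model",
                       f"Check this draft for unsupported statements:\n{draft}")
    return review

print(answer("summarize", "Summarize the key holdings of Smith v. Jones."))
```

The design illustrated here is simple task routing plus a second-model review; real products may ensemble or cross-check models in more sophisticated ways.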
When evaluating a Legal AI solution, here are some questions to pose to the provider about their Gen AI model:
- Do you use a single model or a multi-model approach for creating your product?
- What is the average time the AI takes to return an answer?
- Are there any limitations on the number of prompts you can pose each day?
- What is the underlying architecture of the AI model and how does that design impact its capabilities to perform legal-specific tasks?
- Was the Gen AI model trained on legal-specific data or open-source data?
- Does your AI solution incorporate a retrieval-augmented generation framework to find and link relevant source documents?
Answer Quality
Any legal research solution is only as good as the breadth and depth of the information repository from which it draws its answers. It is important to choose a Legal AI solution that is powered by a global database of authoritative legal content and that deploys semantic search technology to understand your question's intent, pick up on related terms and contextually relevant documents, and surface the most comprehensive, verifiable set of results responsive to your query.
There are a few important measures you can use to understand the inner workings of a Legal AI solution, helping you to assess the quality of its answers:
Comprehensiveness of Results
LLMs require massive data sets, so the provider must be able to draw on a large repository of authoritative, up-to-date legal content that serves as the grounding data for the model and enables it to deliver comprehensive results.
Semantic Search Capability
Semantic search can understand the underlying meaning of your search query, reading between the lines of the words you typed to grasp your intent, and then matching your query to related concepts. This is distinct from keyword search, which simply retrieves answers that match the text entered in the search box. Semantic search is a superior model for a Legal AI solution because it increases the precision of your results, delivering answers that are more relevant and saving you the time required to wade through extraneous information.
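The distinction can be illustrated with a small sketch. The toy embed() function, the concept lexicon and the example documents below are invented for illustration; a production system would use a trained embedding model over a full legal corpus.

```python
# Toy comparison of keyword search vs. semantic search.
# embed() is a made-up stand-in for a trained embedding model.
import math

DOCS = [
    "Employee termination requires written notice under the statute.",
    "The court held that dismissal of staff without cause was unlawful.",
    "Zoning variances must be approved by the planning board.",
]

def keyword_search(query: str, docs: list[str]) -> list[str]:
    """Return docs sharing at least one literal query term."""
    terms = set(query.lower().split())
    return [d for d in docs if terms & set(d.lower().split())]

# Hypothetical concept lexicon: maps related words onto shared "concepts".
CONCEPTS = {"firing": "termination", "fired": "termination",
            "dismissal": "termination", "termination": "termination",
            "employee": "employment", "staff": "employment"}

def embed(text: str) -> dict[str, int]:
    """Crude bag-of-concepts vector; a real system uses a neural embedding."""
    vec: dict[str, int] = {}
    for w in text.lower().split():
        c = CONCEPTS.get(w.strip(".,"), w.strip(".,"))
        vec[c] = vec.get(c, 0) + 1
    return vec

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[k] * b.get(k, 0) for k in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def semantic_search(query: str, docs: list[str]) -> list[str]:
    """Rank docs by similarity of meaning rather than literal term overlap."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

query = "firing an employee"
print(keyword_search(query, DOCS))   # misses the 'dismissal of staff' case
print(semantic_search(query, DOCS))  # ranks the related-concept case highly
```

In this toy example, a keyword search for "firing an employee" misses the opinion about "dismissal of staff", while the semantic ranking surfaces it because the two phrases map to the same underlying concepts.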
Citation Validation & Grounding
Legal industry observers are by now very familiar with the fact that early adoption of Gen AI by law firms was not without its troubles, most notably the risk posed by open-web Gen AI tools that infamously "hallucinated" case citations that did not exist. Traditional Gen AI models struggle with legal use cases because the underlying content feeding them may be dated or lack citation authority, and the models themselves are prone to factual and conceptual hallucinations.
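One common way providers reduce this risk is a retrieval-augmented generation (RAG) pattern: retrieve authoritative documents first, instruct the model to answer only from those documents, and then verify every citation in the output against the retrieved set. The sketch below is illustrative only; retrieve(), generate() and the bracketed citation format are assumptions rather than a description of any particular product.

```python
# Illustrative RAG-with-citation-check loop; retrieve() and generate()
# are hypothetical stand-ins for a provider's actual components.
import re

def retrieve(query: str) -> list[dict]:
    """Assumed retriever over an authoritative legal database."""
    return [
        {"id": "case-001", "text": "Smith v. Jones, 123 F.3d 456 (holding X)."},
        {"id": "stat-042", "text": "Statute Y, section 4: notice is required."},
    ]

def generate(prompt: str) -> str:
    """Assumed LLM call; a real model would draft the grounded answer."""
    return "Notice is required before termination [stat-042]."

def answer_with_citations(query: str) -> dict:
    sources = retrieve(query)
    context = "\n".join(f"[{s['id']}] {s['text']}" for s in sources)
    prompt = (
        "Answer using ONLY the sources below and cite each statement "
        f"with the source id in brackets.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
    draft = generate(prompt)
    # Validate that every cited id actually exists in the retrieved set;
    # anything else is flagged rather than passed through to the user.
    known = {s["id"] for s in sources}
    cited = set(re.findall(r"\[([\w-]+)\]", draft))
    unverified = cited - known
    return {"answer": draft, "sources": sorted(cited & known),
            "unverified_citations": sorted(unverified)}

print(answer_with_citations("Is notice required before terminating an employee?"))
```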
Here are some questions to consider when evaluating a Legal AI solution on answer quality:
- What is the size of the primary and secondary law database that your solution accesses to surface authoritative legal content in response to search queries?
- Does your tool require a separate subscription to access those primary and secondary sources or is it all integrated under one product experience?
- What options are available for combining keyword and semantic search techniques while conducting research?
- Is semantic search available across all content types (e.g., case law, statutes, practical guidance, etc.) and do you offer any training for learning how to improve semantic search techniques?
- Does the Legal AI output provide in-line citations and links back to the original source material used for creating its answers?
- What steps do you take to minimize hallucination risks?
Performance
The promise of Legal AI technology is to deliver answers to search queries quickly, saving lawyers valuable time on tedious tasks so they have more time to focus on creative problem-solving and strategic thinking.
Look for a Legal AI solution that excels at the specific legal tasks you need — such as legal research or summarizing documents — and then conduct some due diligence to evaluate its real-world speed in generating outputs.
As this guide has explained, not all Legal AI tools are created equal, so it’s important to assess the empirical data surrounding performance of the tool when placed in the hands of practicing lawyers.
For example, a Legal AI solution should accelerate execution of these daily tasks:
- Legal research — A Legal AI tool can rapidly analyze thousands of documents to identify those most relevant to your matter.
- Case summarization — Your selected Legal AI solution should be able to summarize a case into a succinct, relevant overview that lets legal professionals quickly grasp the pertinent information, including key legal holdings, material facts, controlling law, the court's rationale and the outcome of the case (a prompt sketch illustrating this appears after this list).
- Insightful recommendations — A Legal AI solution can uncover additional persuasive precedent cases and analysis to help strengthen litigation arguments or transactional deal points that lawyers may have overlooked.
- Advanced data visualizations — Look for a Legal AI tool that can generate enhanced legal analytics based on a rapid analysis of similar matters and produce interactive visual aids to highlight key data points.
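As a concrete illustration of the case-summarization point above, a prompt that asks for the specific fields a lawyer needs tends to make summaries predictable and easy to scan. The field list and the summarize_case() helper below are illustrative assumptions, not any vendor's actual template.

```python
# Hypothetical structured-summary prompt; ask_model() is a stand-in
# for the Legal AI provider's actual interface.

SUMMARY_FIELDS = [
    "Key legal holdings",
    "Material facts",
    "Controlling law",
    "Court's rationale",
    "Outcome",
]

def ask_model(prompt: str) -> str:
    """Placeholder for an LLM call (assumed, not a real API)."""
    return "\n".join(f"{field}: ..." for field in SUMMARY_FIELDS)

def summarize_case(opinion_text: str) -> str:
    fields = "\n".join(f"- {f}" for f in SUMMARY_FIELDS)
    prompt = (
        "Summarize the following opinion for a practicing lawyer. "
        f"Return exactly these sections:\n{fields}\n\nOpinion:\n{opinion_text}"
    )
    return ask_model(prompt)

print(summarize_case("(full text of the opinion would go here)"))
```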
Here are some questions to pose when evaluating a Legal AI solution on performance:
- For what specific use cases has your tool proven to improve lawyer performance?
- Do you have any documented statistics to illustrate time savings in various use cases by practicing lawyers in their day-to-day workflow?
- How has your solution performed in head-to-head comparisons with competitive solutions in the legal market?
Adherence to Ethical AI Principles
Responsible AI, sometimes referred to as ethical or trustworthy AI, is a set of principles used to document and monitor how AI systems should be developed, deployed and governed so that they remain ethical and compliant with the law.
The Legal AI solution you select should be developed with a framework that is guided by pre-defined principles, ethics and rules.
This is an important risk management guardrail that increases confidence that the product used by your lawyers will not expose your organization to reputational or financial damage in the future.
Responsible AI Principles at RELX
RELX, the parent company of LexisNexis, is a global provider of information-based analytics and decision tools for professional and business customers.
RELX established a core set of Responsible AI Principles that provide high-level guidance for any RELX professionals — including those at LexisNexis — who are working on products that involve the delivery of machine-driven insights to customers. These principles provide a risk-based framework drawing on best practices from within our company and other organizations.
Here are some questions to consider when choosing a provider for your Legal AI solution based on ethical AI principles:
- Do you have a formal responsible AI framework with defined principles and policies that guide your approach?
- How are ethics and compliance with laws and regulations addressed within your responsible AI policies?
- What processes do you have in place to monitor AI systems for unintended bias, fairness and other ethical risks?
Generative AI is a new category of technology that has the potential to transform the way that law is practiced all over the world. The way your firm or organization deploys Gen AI is crucial to your ability to serve clients most efficiently and effectively.
It is important to select a Legal AI solution that has been trained specifically for the legal profession to ensure that you are adopting a responsible and transparent Gen AI platform. This requires the identification of the key features and functionalities to look for in a Legal AI product and understanding the ethical principles that should be followed in its development.
Use this guide as your checklist. These criteria come directly from empirical research and real-world lessons.
Gen AI can unlock amazing potential, but the tech alone isn't enough. With Legal AI, what matters most is having the right experience behind it. Make sure the solution you choose is producing outputs that are grounded in authoritative content so you can trust the quality of the responses you receive. Focus on selecting a tool that has proven its speed and performance in the day-to-day workflow of practicing lawyers. And choose a Legal AI partner that combines leading-edge Gen AI capabilities with human insights from the legal industry to develop tools in a responsible way.
The future of legal research is AI. Lexis+ AI platform is your AI legal assistant for the journey, with the experience to guide you every step of the way.