Law Firm AI Policy Template
A free, carrier-ready policy mapped to ABA Formal Opinion 512 and state bar guidance. For small and mid-size US law firms.
What this template does
A written AI policy is typically the first document a malpractice carrier requests at renewal when asking about AI use, and the first deliverable under the supervisory duties ABA Formal Opinion 512 ties to Rules 5.1 and 5.3. This template is mapped to Opinion 512 section by section and is designed for firms of 5 to 50 attorneys without dedicated compliance staff. It produces documentation that reads as serious to a carrier underwriter and that addresses the obligations a disciplinary investigator would expect a firm to have thought through.
Every section below maps to a Model Rule and to the specific passage in Opinion 512 that the section documents. For the full rule-by-rule walkthrough of the Opinion, see ABA Formal Opinion 512: A Practitioner's Guide.
What's in the template
Eleven sections, each mapped to a Rule of Professional Conduct (RPC) and to the part of Opinion 512 that drives it:
- Scope and definitions (Opinion 512, Introduction)
- Approved AI tools (RPC 1.1, 1.6; Opinion 512 §§ A, B)
- Prohibited uses (RPC 1.6; Opinion 512 § B)
- Client data and confidentiality (RPC 1.6; Opinion 512 § B)
- Client disclosure and communication (RPC 1.4; Opinion 512 § C)
- Verification of AI-assisted filings (RPC 3.3, 8.4(c); Opinion 512 § D)
- Supervision and training (RPC 5.1, 5.3; Opinion 512 § E)
- Billing treatment (RPC 1.5; Opinion 512 § F)
- Competence and training requirements (RPC 1.1; Opinion 512 § A)
- Incident response (RPC 1.6, 1.4)
- Policy review and update cadence (RPC 1.1, 5.1)
The template
The sections below are the policy itself. Adapt bracketed placeholders ([like this]) to your firm. The language is written to be used largely as-is; trim any section that does not apply to your practice.
1. Scope and definitions
This policy governs the use of generative artificial intelligence ("AI") tools by all attorneys and staff of [Firm Name] ("the Firm") in connection with firm work and client matters. "AI tool" means any software that generates text, images, audio, video, code, or other content in response to a user prompt, including general-purpose tools (such as ChatGPT, Claude, Gemini, and Copilot), legal-specific tools (such as Harvey, Spellbook, Westlaw Precision AI, Lexis+ AI, and Paxton AI), and AI features embedded in other software the Firm already uses. This policy applies to use on firm devices, personal devices used for firm work, and any third-party platform where firm or client information is processed.
2. Approved AI tools
The Firm maintains a written list of approved AI tools. Only tools on the approved list may be used in connection with firm or client work. The approved-tools list is maintained by [Managing Partner / AI Committee / designated partner] and is reviewed at least annually and whenever a new tool is proposed.
Approval of a tool requires, at minimum: (a) review of the provider's Terms of Use, privacy policy, and data-handling representations; (b) confirmation of whether the provider trains on submitted content, and under what conditions; (c) confirmation of whether and how data is retained after use; (d) confirmation of the jurisdictions in which data is stored and processed; and (e) documentation of the intended firm use case.
Current approved tools: [list each tool, the approved use case, and the partner responsible for reviewing it].
3. Prohibited uses
The following uses are prohibited regardless of which tool is used: (a) inputting any information relating to the representation of a client into a tool that is not on the approved list; (b) inputting any client information into any consumer or free-tier AI tool, including personal accounts on ChatGPT, Claude, Gemini, or similar services, unless the provider's enterprise terms have been reviewed and approved under Section 2; (c) relying solely on AI output to render legal advice, negotiate a client matter, or execute any task requiring the exercise of professional judgment; (d) submitting AI-generated content to any court, arbitrator, or other tribunal without completing the verification protocol in Section 6; and (e) any use that would violate this policy, applicable rules of professional conduct, a client's engagement terms, or outside counsel guidelines.
4. Client data and confidentiality
Before inputting any information relating to the representation of a client into any AI tool, the attorney responsible for the matter must confirm that: (a) the tool is on the approved list; (b) the approved use case covers the intended use; and (c) either the approved-tool review establishes that the tool does not retain or train on submitted content in a manner that raises a material confidentiality risk, or the client has given tool-specific informed consent under Section 5.
ABA Formal Opinion 512 provides that a client's informed consent is required before inputting client information into a self-learning AI tool whose output could lead, directly or indirectly, to the disclosure of that information. Opinion 512 further states that boilerplate engagement-letter language authorizing AI use is not sufficient for this purpose.
Where informed consent is required, the responsible attorney must explain to the client, in a form documented in the matter file: (a) why the tool is being used; (b) the specific categories of client information that will be input; (c) the risks of disclosure and the ways in which disclosed information could be used against the client's interests; and (d) the benefits of the proposed use. Consent is documented in the matter file with date, signatory, and the specific tool and use case authorized.
5. Client disclosure and communication
The Firm discloses its AI practices to clients as follows:
- Default engagement letter language. The Firm's standard engagement letter includes a plain-language paragraph describing the Firm's use of AI tools and inviting the client to request additional detail or to restrict AI use.
- On request. Any client question about whether or how AI was used on the client's matter is answered accurately and promptly.
- Tool-specific consent. Where Section 4 requires informed consent for a specific tool and use case, the consent language is provided to the client before the tool is used, and consent is documented as described in Section 4.
- Outside counsel guidelines. Where a client's engagement terms or outside counsel guidelines require disclosure of AI use, the Firm complies with those terms regardless of whether this policy would otherwise require disclosure.
- Significant-decision consultation. Where AI output will influence a significant decision in the representation (including litigation outcome evaluation, jury analysis, or material drafting judgments), the responsible attorney consults with the client about the use before relying on the output.
6. Verification of AI-assisted filings
Any document submitted to a court, arbitrator, or other tribunal that was drafted, researched, or substantively supported by an AI tool is subject to the following verification protocol before filing:
- Every legal citation, including case names, reporters, docket numbers, statutes, rules, and regulatory references, is independently verified against a primary source.
- Every quotation attributed to a cited authority is independently verified against the cited authority.
- The holding, reasoning, and procedural posture of each cited case are independently confirmed against the primary source, not against the AI tool's summary.
- The filing is reviewed by the responsible attorney for misleading arguments, omitted controlling authority, and factual overstatement.
- The verification is logged in the matter file by the attorney performing it, with date and tool identified.
This protocol applies whether the AI tool used was general-purpose or legal-specific, and whether citations were generated by the tool or only summarized by it.
7. Supervision and training
[Managing Partner / AI Committee / designated partner] is responsible for firm-wide AI governance under Rules 5.1 and 5.3 of the applicable rules of professional conduct. Supervising attorneys are responsible for the AI-assisted work product of attorneys and staff they supervise. No AI-assisted work product leaves the Firm without review by a responsible supervising attorney.
Where the Firm relies on third-party AI providers, the Firm's diligence on the provider includes reference checks and credentials, review of security policies and protocols, confidentiality terms, conflicts screening where applicable, and review of whether the provider retains or claims proprietary rights to submitted content. Diligence is documented in the vendor file for each approved tool.
8. Billing treatment
The Firm bills AI-assisted work in accordance with Rule 1.5 and ABA Formal Opinion 512. Specifically:
- Hourly matters. The Firm bills only for time actually expended, including time spent prompting the AI tool and time spent reviewing its output. Time saved by AI-driven efficiency is not billed.
- Flat and contingent matters. Where AI materially compresses the work contemplated by a flat or contingent fee, the responsible partner considers whether the fee remains reasonable and, if not, raises the question with the client.
- Tool costs as overhead. AI tools that function like standard office infrastructure (for example, AI features embedded in word processing or email) are treated as overhead and are not billed to clients.
- Tool costs as pass-through expenses. AI tools billed on a per-matter or per-use basis, where the cost is specifically attributable to a client matter, are billed as expenses at actual cost, with no surcharge unless separately agreed in writing with the client.
- Learning time. Time spent by Firm personnel learning to use an AI tool that the Firm will use regularly for clients is not billed to any client.
9. Competence and training requirements
Every attorney and staff member using an approved AI tool must complete training on that tool before use, covering at minimum: (a) the tool's capabilities and known limitations; (b) the confidentiality posture of the tool, including the provider's data retention and training practices; (c) prohibited uses; and (d) the verification protocol in Section 6. Training completion is logged in each user's training file with date and tool identified.
Training is refreshed whenever an approved tool is materially updated or whenever a new approved tool is added. Attorneys are expected to maintain ongoing technological competence consistent with Rule 1.1 and Comment [8], including through CLE or equivalent continuing education.
10. Incident response
An AI incident includes, without limitation: (a) inadvertent input of client information into an unapproved tool; (b) disclosure of client information through an AI tool's output; (c) submission of an AI-generated misstatement to a court, opposing counsel, or third party; and (d) any provider-side breach, outage, or service disruption affecting firm data.
On identification of an incident, the individual who identifies it must notify [Managing Partner / AI Committee] within 24 hours. The Firm then assesses whether client notification is required under Rule 1.4 or applicable state rules, whether bar reporting is required, whether remedial disclosure to a tribunal is required under Rule 3.3, and whether any insurance notice obligation is triggered. The incident, the assessment, and the response are documented.
11. Policy review and update cadence
This policy is reviewed and re-approved at least annually, and on any material change to: (a) the approved-tools list; (b) the applicable rules of professional conduct or authoritative bar guidance; or (c) the Firm's practice areas. The current version date and approver are recorded at the top of the policy.
Policy adopted: [date]. Approved by: [name, title]. Version: [1.0].
How to use this template
- Adapt the approved-tools list. Section 2 is the most firm-specific part. Walk through the AI tools actually in use at the firm (including shadow AI on personal devices) before drafting the approved list.
- Have the policy reviewed by your malpractice carrier or broker. Carriers are increasingly asking about AI at renewal. A policy reviewed pre-renewal is documentation on the record; a policy produced in response to a claim is not.
- Get managing partner sign-off. Rules 5.1 and 5.3 assign responsibility to managerial lawyers. The policy should be adopted by the managing partner or the firm's governing committee, not delegated.
- Train the firm before going live. Section 9's training requirements take effect when the policy does. Schedule the initial training session and log completion before the effective date.
State-specific annexes
ABA Formal Opinion 512 is the national baseline, not the ceiling. Several states have issued their own guidance, and firms should check their state's rules before finalizing a policy. Among the state sources to read alongside Opinion 512:
- Florida Bar Ethics Opinion 24-1 (January 2024)
- North Carolina State Bar 2024 FEO 1 (adopted 2024)
- California State Bar Practical Guidance on Generative AI (November 2023)
- Pennsylvania and Philadelphia Joint Formal Opinion 2024-200 (2024)
- Texas, New York, Illinois, and other state opinions and practice guides issued since 2024
For the current state-by-state tracker, with primary-source citations for each opinion and court order, see the state tracker. Where state guidance is stricter than Opinion 512, the state rule controls.
Carrier-readiness note
The documentation this template produces is what malpractice carriers have started asking about at renewal: the written policy itself, the approved-tools list with vendor review, training records, a verification protocol for filings, and an incident response procedure. Many carriers now include AI-specific questions on renewal applications; some have begun offering credits for documented AI governance. For more on what carriers are looking at and how the documentation in this template addresses it, see For Carriers.
Next steps
The template above is the free version. For a carrier-ready packet that bundles this policy with state-specific annexes for your jurisdiction, vendor due diligence checklists for the most common legal AI tools, engagement-letter and informed-consent language, and a pre-filing verification form, join the waitlist and we will send it when it launches.
Last verified against ABA Formal Opinion 512: 2026-04-24.