ABA Formal Opinion 512: A Practitioner's Guide
ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (July 29, 2024).
What Opinion 512 is, and why it matters
Formal Opinion 512 is the ABA Standing Committee on Ethics and Professional Responsibility's first formal opinion addressing generative artificial intelligence. It was issued July 29, 2024. In a single 15-page document, it applies six Model Rules to lawyer use of GAI tools and frames the obligations as follows:
"To ensure clients are protected, lawyers using generative artificial intelligence tools must fully consider their applicable ethical obligations, including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal, and to charge reasonable fees."
Opinion 512, syllabus (p. 1).
The opinion is advisory, not binding. The ABA Model Rules are not the rules of professional conduct in any jurisdiction until a state supreme court adopts them. But Opinion 512 has become the de facto national baseline for two reasons. First, 49 of 50 states (California is the outlier) have adopted the Model Rules' core structure, so the Opinion's rule-by-rule framing maps cleanly onto most state RPCs. Second, state bar AI opinions issued since July 2024 routinely cite and track Opinion 512's analysis, with state-specific overlays rather than wholesale departures.
The practical consequence: if a firm cannot document that its AI practices address each of the six rules below, a malpractice carrier, a disciplinary investigator, or a sanctioning court has a ready-made framework for identifying gaps. The sections that follow translate each rule into the obligation and the documentation that firms should keep.
Rule 1.1: Competence
Model Rule 1.1 requires competent representation, and Comment [8] extends that duty to "the benefits and risks associated with relevant technology." Opinion 512 applies this to GAI by stating the standard plainly:
"To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use."
Opinion 512, at 2–3.
The duty is ongoing. A one-time training in 2024 does not satisfy Rule 1.1 two years later if the tool, the model, or the firm's use of it has materially changed. The Opinion is explicit on this point:
"This is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools' benefits and risks."
Opinion 512, at 3.
The Opinion recognizes several acceptable ways to maintain competence: reading about GAI tools aimed at the legal profession, attending CLE, and consulting colleagues or external experts who are proficient with the specific tool in use. The practical documentation a firm should keep is narrow: records showing that attorneys and staff have been trained on each approved tool, and that the training has been refreshed when the tool materially changes.
Rule 1.6: Confidentiality
Rule 1.6 prohibits a lawyer from revealing information relating to the representation of a client without informed consent, and Rule 1.6(c) requires reasonable efforts to prevent inadvertent or unauthorized disclosure. Opinion 512 applies these duties to GAI input with particular force in the context of self-learning tools, where prompts and client information may be retained and later surface in another user's session:
"[B]ecause many of today's self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client's informed consent is required prior to inputting information relating to the representation into such a GAI tool."
Opinion 512, at 7.
Informed consent in this context requires more than the language most engagement letters already contain. The Opinion is direct about this:
"To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient."
Opinion 512, at 7.
To satisfy Rule 1.6, the Opinion describes a baseline review duty: every lawyer using a GAI tool should read and understand the Terms of Use, privacy policy, and related contractual terms for that tool, or delegate the review to a colleague or external expert who has. The specific questions to resolve before inputting client information: who has access to inputs and outputs; whether and how the provider retains data; whether the tool trains on submitted content; and how the tool behaves if access is revoked. Firms should document the vendor review for each approved tool, and keep a record of which matters, if any, required tool-specific informed consent.
Rule 1.4: Communication with clients
Rule 1.4 governs when a lawyer must communicate with a client about the means of representation. Opinion 512 does not impose a blanket duty to disclose GAI use on every matter. The facts control. But it identifies three categories where disclosure is required:
- When the client asks whether GAI was used.
- When the engagement letter or outside counsel guidelines require disclosure.
- When informed consent is required under Rule 1.6 (because client information will be input into the tool), when GAI use is relevant to the basis of the fee under Rule 1.5, or when the tool's output will influence a significant decision in the representation.
"[L]awyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client's outside counsel guidelines."
Opinion 512, at 8.
Even when Rule 1.4 does not require disclosure, the Opinion notes that a firm may still choose to describe its GAI practices in the engagement agreement as a matter of effective client communication. That is the most common way firms are operationalizing this rule: a short, plain-language paragraph in the engagement letter that explains the firm's AI use, names any categories of tools in use, and invites the client to request additional detail.
Rule 1.5: Fees and billing
Rule 1.5 requires reasonable fees and reasonable expenses. Opinion 512 applies this to GAI in two ways that have direct operational consequence.
First, efficiency gains from GAI cannot be billed at the pre-GAI hourly rate. The Opinion states the rule and illustrates it with a 15-minute drafting example:
"GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time."
Opinion 512, at 12.
The same principle extends to flat and contingent fees. If a flat fee was set against pre-GAI assumptions about time required, the Opinion signals that the fee may no longer be reasonable if GAI materially compresses the work.
Second, time spent learning a GAI tool that the firm will use regularly cannot be billed to the client:
"[A] lawyer may not charge a client to learn about how to use a GAI tool or service that the lawyer will regularly use for clients because lawyers must maintain competence in the tools they use, including but not limited to GAI technology."
Opinion 512, at 14.
The Opinion also distinguishes overhead-type tool costs (for example, a grammar-check feature bundled into word processing) from client-specific tool expenses (for example, a per-matter charge from a contract-review vendor). Firms should have a billing policy that maps each approved tool to one of those two categories and discloses the billing treatment to clients in advance.
Rules 5.1 and 5.3: Supervisory duties
Rules 5.1 and 5.3 assign responsibility to managerial and supervisory lawyers for the conduct of other lawyers and nonlawyer assistants at the firm. Opinion 512 treats GAI tools as falling within this framework in two ways. Internally, managing partners must establish firm-wide policies; supervisory lawyers must train and oversee staff use. The Opinion's formulation:
"Managerial lawyers must establish clear policies regarding the law firm's permissible use of GAI, and supervisory lawyers must make reasonable efforts to ensure that the firm's lawyers and nonlawyers comply with their professional obligations when using GAI tools."
Opinion 512, at 10.
Externally, the Opinion applies Rule 5.3(b)'s duty of reasonable efforts to the GAI tool's provider. The Opinion carries forward the diligence framework from the ABA's prior cloud-computing and outsourcing opinions and applies it to GAI vendors: reference checks and credentials, review of security policies and protocols, confidentiality agreements, conflicts screening where applicable, and attention to whether the provider retains or asserts proprietary rights to submitted information.
The documentation that follows from Rules 5.1 and 5.3 is concrete: a written firm AI policy (see the policy template), training completion records for each attorney and staff member, a vendor due diligence file for each approved tool, and a supervision protocol for AI-assisted work product that goes out the door.
Rule 3.3: Candor toward the tribunal
Rule 3.3 prohibits false statements of law or fact to a tribunal and requires remedial action if a lawyer comes to know that material evidence was false. Rule 8.4(c) prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation. Opinion 512 applies both to GAI-assisted filings and is deliberately direct about the risk:
"Even an unintentional misstatement to a court can involve a misrepresentation under Rule 8.4(c). Therefore, output from a GAI tool must be carefully reviewed to ensure that the assertions made to the court are not false."
Opinion 512, at 10.
The scope of the pre-filing review duty is broader than just citation checking:
"In judicial proceedings, duties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments."
Opinion 512, at 10.
This is where the rule meets the emerging body of sanctions cases. Courts applying Rule 3.3 (or its state analogs) to AI-generated hallucinations have imposed sanctions ranging from modest fines and mandatory CLE to fee awards exceeding $1.5 million and referrals to state bar discipline.
Notable sanctions cases in this line:
- Mata v. Avianca, Inc. (S.D.N.Y.): $5,000
- Wadsworth v. Walmart Inc. (D. Wyo.): $3,000 (Ayala) + $1,000 each (Morgan, Goody); Ayala's pro hac vice admission revoked
- Garner v. Kadince (Utah Ct. App.): attorney fees + client fee refund + $1,000 to an access-to-justice nonprofit
- Mezu v. Mezu (Md. App. Ct.)
- Johnson v. Dunn (N.D. Ala.)
- Jason M. Hatfield, P.A. v. Pirani (W.D. Ark.): $1,578,172 attorney fees + $93,388 costs
- People v. Crabill (Colo. OPDJ): 1 year + 1 day suspension; 90 days active
- Shahid v. Esaam (Ga. Ct. App.): $2,500
- Kohls v. Ellison (D. Minn.)
- Creech v. City of Raleigh (N.C. Ct. App.)
Opinion 512 compliance checklist
A 12-item checklist, each item mapped to the rule it documents. A firm that can answer yes to every item has the documentation a malpractice carrier or disciplinary investigator is likely to request.
- Written AI policy in force (Rules 5.1, 5.3). The firm has a current policy, dated within the last 12 months, naming approved tools, prohibited uses, and supervisory responsibility.
- Approved-tools list, reviewed (Rules 1.1, 1.6). Each tool on the list has a completed vendor review covering Terms of Use, privacy policy, data retention, and training-on-input behavior.
- Prohibited-tools list (Rule 1.6). The policy identifies consumer tools (including personal ChatGPT, Gemini, Claude, and similar) that are not approved for firm work, and explains why.
- Tool-specific informed consent language (Rule 1.6). For any self-learning tool into which client information will be input, the firm has client-facing consent language that meets the Opinion's "not boiler-plate" standard.
- Engagement letter disclosure (Rule 1.4). The firm's default engagement letter describes its AI use in plain language, or documents the decision not to disclose and the reasoning.
- Training records (Rules 1.1, 5.1, 5.3). Every attorney and staff member using an approved tool has completed tool-specific training, logged by date.
- CLE tracking for technology competence (Rule 1.1). The firm tracks ongoing CLE or equivalent training so that technological competence is maintained as tools change.
- Pre-filing verification protocol (Rule 3.3). Any filing that used AI research assistance is logged, with the citation-verification step documented by the attorney who reviewed it.
- Billing policy for AI time (Rule 1.5). The firm has written guidance on billing AI-assisted work: hourly rules, flat-fee treatment, overhead vs. pass-through expense classification for each approved tool.
- Supervision protocol for AI-assisted work product (Rules 5.1, 5.3). Any work product generated with AI assistance is reviewed by a supervising lawyer before it leaves the firm, with the review logged.
- Incident response procedure (Rule 1.6). The firm has a written procedure for responding to an AI-related confidentiality incident, including client notification and bar reporting where applicable.
- Annual policy review (Rules 1.1, 5.1). The firm reviews and re-dates the AI policy at least annually, and on any material tool change.
State cross-reference
Opinion 512 is the national baseline. Several states have issued their own formal opinions or practical guidance that add to, clarify, or in some cases diverge from the ABA's framing. Firms should read Opinion 512 and their state's guidance together, not one in place of the other. Among the more detailed state-level sources to date:
- Florida Bar Ethics Opinion 24-1 (January 2024), which predates Opinion 512 and is repeatedly cited in it.
- North Carolina State Bar 2024 FEO 1 (adopted 2024), the first formal NC opinion on GAI.
- California State Bar Practical Guidance on Generative AI (November 2023), cited in Opinion 512 as a source on confidentiality.
- Pennsylvania and Philadelphia Joint Formal Opinion 2024-200, cited in Opinion 512 on the risk of cross-representation leakage in self-learning tools.
- New York, Texas, Illinois, and a growing list of other state bars and local ethics committees that have issued AI-specific guidance since 2024.
For the current state tracker, with primary-source citations for each state's guidance, see the state tracker.
Next steps for your firm
- If your firm does not yet have a written AI policy, start with the law firm AI policy template, which is mapped to Opinion 512 section-by-section.
- If your malpractice renewal is approaching, our carrier documentation guide describes what carriers have started asking at renewal and how Opinion 512 documentation addresses it.
- Check your state's guidance on the state tracker. Where state rules are stricter than Opinion 512, the state rule controls.
Primary source
ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024). Full text: americanbar.org (PDF).
Last verified against primary source: 2026-04-24.