{"id":150370,"date":"2026-05-05T07:36:20","date_gmt":"2026-05-05T15:36:20","guid":{"rendered":"https:\/\/xira.com\/p\/2026\/05\/05\/california-bar-proposes-rule-requiring-lawyers-to-verify-every-ai-output-and-five-other-ai-focused-ethics-changes\/"},"modified":"2026-05-05T07:36:20","modified_gmt":"2026-05-05T15:36:20","slug":"california-bar-proposes-rule-requiring-lawyers-to-verify-every-ai-output-and-five-other-ai-focused-ethics-changes","status":"publish","type":"post","link":"https:\/\/xira.com\/p\/2026\/05\/05\/california-bar-proposes-rule-requiring-lawyers-to-verify-every-ai-output-and-five-other-ai-focused-ethics-changes\/","title":{"rendered":"California Bar Proposes Rule Requiring Lawyers to Verify Every AI Output \u2014 and Five Other AI-Focused Ethics Changes"},"content":{"rendered":"<p>When using any technology \u2014 including AI \u2014 a lawyer \u201cmust independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.\u201d<\/p>\n<p>That language appears in a new comment to Rule 1.1 on competence proposed by the State Bar of California\u2019s Standing Committee on Professional Responsibility and Conduct (COPRAC) as part of a <a href=\"https:\/\/www.calbar.ca.gov\/public\/public-meetings-comment\/public-comment\/public-comment-archives\/2026-public-comment\/proposed-amendments-rules-professional-conduct-related-artificial-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">package of AI-related amendments<\/a> to six of the state\u2019s Rules of 
Professional Conduct.<\/p>\n<p>The <a href=\"https:\/\/www.lawnext.com\/wp-content\/uploads\/2026\/05\/Proposed-Amended-Rules-AI-clean-redline.pdf\" rel=\"nofollow noopener\" target=\"_blank\">proposed changes<\/a> would, for the first time, write specific AI obligations into California\u2019s rules. The changes span the rules on competence, client communication, confidentiality, candor toward tribunals, and supervision of both lawyers and other staff.<\/p>\n<p>Unfortunately, I am reporting this a bit late, as the public comment period on the proposals closed yesterday, May 4. But the rulemaking process is still in early stages and the amendments are far from final. For anyone tracking how bar regulators are treating AI in legal practice, these proposals are worth a close read regardless.<\/p>\n<h3>Initiated by Supreme Court<\/h3>\n<p>The rulemaking was set in motion by the California Supreme Court itself. In <a href=\"https:\/\/www.lawnext.com\/wp-content\/uploads\/2026\/05\/Letter-State-Bar-AI_Redacted.pdf\" rel=\"nofollow noopener\" target=\"_blank\">an Aug. 22, 2025, letter<\/a>\u00a0to the state bar\u2019s interim executive director, the court\u2019s clerk and executive officer directed COPRAC to consider whether the guiding principles from the bar\u2019s November 2023 \u201c<a href=\"https:\/\/www.calbar.ca.gov\/sites\/default\/files\/portals\/0\/documents\/ethics\/Generative-AI-Practical-Guidance.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law<\/a>\u201d should be incorporated into the formal rules.<\/p>\n<p>The court also directed the bar to consider guidance specifically addressing \u201cagentic AI\u201d tools \u2014 systems that can plan and execute tasks with little or no human intervention.<\/p>\n<p>COPRAC approved the proposed amendments at its March 13, 2026, meeting and opened the 45-day comment period. 
Rather than drafting a standalone AI rule, the committee wove new language into six existing rules, reflecting a view that AI sharpens existing ethical duties rather than creating entirely new ones.<\/p>\n<p>California\u2019s 2023 practical guidance was a \u201cliving document\u201d with no binding authority; these proposed amendments would change that by making AI-specific obligations part of the enforceable rules.<\/p>\n<p>Most states that have addressed AI in legal practice have done so through ethics opinions, which carry persuasive but not always disciplinary force. California\u2019s approach, if finalized, would be more muscular.<\/p>\n<p>I have tracked the adoption of the duty of technology competence across jurisdictions on <a href=\"https:\/\/www.lawnext.com\/tech-competence\" rel=\"nofollow noopener\" target=\"_blank\">a dedicated page on this blog<\/a>. These proposals are among the most detailed and comprehensive AI-specific rule amendments I have seen any state bar put forward.<\/p>\n<h3>Amendments to Rule 1.1, Competence<\/h3>\n<p>The existing rule requires lawyers to maintain learning and skill sufficient for competent representation. The proposed amendments add two new comments specific to AI.<\/p>\n<p>The first simply extends the existing technology-competence language to make explicit that the duty to stay abreast of \u201cthe benefits and risks associated with relevant technology\u201d includes artificial intelligence.<\/p>\n<p>The second, and more consequential, comment states that when using technology, including AI, a lawyer \u201cmust independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.\u201d<\/p>\n<p>In other words, under this proposed change, the lawyer must personally and independently evaluate what AI tools produce before relying on the output. 
There is no carve-out for routine tasks or low-stakes matters.<\/p>\n<h3>Amendments to Rule 1.4, Communication with Clients<\/h3>\n<p>A new Comment 5 to Rule 1.4 addresses when lawyers must disclose their use of AI to clients.<\/p>\n<p>The proposed language provides that when a lawyer\u2019s use of technology, including AI, \u201cpresents a significant risk or materially affects the scope, cost, manner, or decision-making process of representation,\u201d the lawyer must communicate \u201csufficient information regarding the use of technology to permit the client to make informed decisions regarding the representation.\u201d<\/p>\n<p>The comment adds that lawyers must continue to evaluate their communication obligations throughout a representation based on \u201cthe novelty of the technology, risks associated with the use of the technology, scope of the representation, and sophistication of the client.\u201d<\/p>\n<p>Notably, this does not create a blanket disclosure requirement every time a lawyer uses AI. The trigger is a \u201csignificant risk\u201d or \u201cmaterial\u201d effect on the representation.<\/p>\n<p>More routine use may not require affirmative disclosure, depending on the circumstances. 
But the obligation is ongoing \u2014 it must be reassessed as the representation evolves.<\/p>\n<h3>Amendments to Rule 1.6, Confidential Information of a Client<\/h3>\n<p>The confidentiality rule, which prohibits lawyers from revealing confidential client information, gets a new Comment 2 that expands the definition of \u201creveal\u201d to encompass AI use.<\/p>\n<p>Under the proposed language, \u201creveal\u201d includes \u201cexposing confidential information to technological systems, including artificial intelligence tools, where such exposure creates a material risk that the information may be accessed, retained, or used, whether by the technological system or another user of that technological system, in a manner inconsistent with the lawyer\u2019s duty of confidentiality.\u201d<\/p>\n<p>This means that inputting client information into an AI tool \u2014 even if the lawyer never intends for anyone else to see it \u2014 can constitute a revelation of confidential information under the rules if there is a material risk the system or its other users could access, retain or use that data.<\/p>\n<p>Lawyers using cloud-based AI tools with unclear or unfavorable data retention and training policies need to pay attention to this.<\/p>\n<h3>Amendments to Rule 3.3, Candor Toward the Tribunal<\/h3>\n<p>This amendment directly addresses the AI hallucination problem that has generated judicial sanctions and considerable alarm across the profession.<\/p>\n<p>A new Comment 3 states that \u201ca lawyer\u2019s duty of candor towards the tribunal includes the obligation to verify the accuracy and existence of cited authorities, including ensuring no cited authority is fabricated, misstated, or taken out of context, before submission to a tribunal, including any cited authorities generated or assisted by artificial intelligence or other technological tools.\u201d<\/p>\n<p>The existing rule already prohibits knowingly misquoting authority or citing overruled decisions. 
The new comment makes explicit that AI-generated citations are not exempt from those obligations, and that the verification duty extends specifically to fabricated, misstated or decontextualized authority.<\/p>\n<p>In the wake of now-notorious sanctions cases involving AI-hallucinated citations, this comment codifies what many courts have already been saying in their opinions.<\/p>\n<h3>Amendments to Rule 5.1, Responsibilities of Managerial and Supervisory Lawyers<\/h3>\n<p>The proposed amendment adds AI governance to the list of matters that managerial lawyers at law firms must address through internal policies and procedures.<\/p>\n<p>The existing comment already refers to policies for conflicts, calendaring and client funds. The new language adds that managerial lawyers must make reasonable efforts to establish procedures \u201cgoverning the use of artificial intelligence, in accordance with the Rules of Professional Conduct.\u201d<\/p>\n<p>Law firm leaders, practice group chairs and managing partners will need to ensure their firms have actual, functioning AI governance policies, not just aspirational statements, if this rule is finalized.<\/p>\n<h3>Amendments to Rule 5.3, Responsibilities Regarding Nonlawyer Assistants<\/h3>\n<p>A corresponding amendment to the rule on supervising nonlawyer personnel adds AI to the scope of supervision.<\/p>\n<p>The existing comment states that lawyers must give nonlawyer assistants \u201cappropriate instruction and supervision concerning all ethical aspects of their employment.\u201d The proposed amendment adds \u201cincluding the use of technology in the provision of legal services, such as artificial intelligence.\u201d<\/p>\n<p>This extends the AI supervision obligation to paralegals, legal assistants, law clerks and any other staff who use AI tools in their work. 
Given that AI tools are proliferating throughout law firm operations at every level, this makes sense as a practical clarification.<\/p>\n<h3>The Takeaway<\/h3>\n<p>A few things stand out to me about California\u2019s approach.<\/p>\n<p>First, by embedding these obligations in the enforceable rules rather than guidance documents, these changes would underscore and make explicit ethical duties that are already implicit in the existing rules. While some might argue that amending the existing rules is unnecessary, plenty of lawyers have been proving otherwise.<\/p>\n<p>Second, the independent verification requirement in Rule 1.1 is worth emphasizing. It does not say lawyers should generally be careful with AI output. It says they <em>must independently review, verify and exercise professional judgment<\/em> regarding <em>any<\/em> output used in client representation. That is a strict standard, and one that cuts against any casual reliance on AI-generated work product.<\/p>\n<p>Third, the confidentiality amendment\u2019s expansion of \u201creveal\u201d is practically significant. Lawyers accustomed to thinking of confidentiality as a disclosure-to-humans concept will need to rethink how they select and use AI tools in light of this definition.<\/p>\n<p>Finally, while the proposals do not explicitly address agentic AI, which the court\u2019s letter specifically asked the bar to consider, they do address it implicitly.<\/p>\n<p>The independent verification requirement in Rule 1.1 and the supervisory obligations in Rules 5.1 and 5.3 are directly relevant to agentic workflows. If a lawyer deploys an AI agent that researches, drafts and revises a brief with limited oversight, these rules would squarely apply.<\/p>\n<p>Although the comment period has closed, the rulemaking process continues. COPRAC will review public input and could modify the proposals before they advance. 
The California Supreme Court ultimately has authority over the Rules of Professional Conduct. Whether and when these amendments might take effect remains to be seen.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When using any technology \u2014 including AI \u2014 a lawyer \u201cmust independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.\u201d That language appears in a new comment to Rule 1.1 on competence proposed by the State Bar of California\u2019s Standing Committee [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":150371,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[24],"tags":[],"class_list":["post-150370","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-lawsite"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/xira.com\/p\/wp-content\/uploads\/2026\/05\/169-Welcome_to_California__sign_along_southbound_U.S._Route_95_entering_San_Bernardino_County_California_from_Clark_County_Nevada-HcktCx.jpg?fit=960%2C540&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/150370","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/comments?post=150370"}],"version-history":[{"count":0,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/150370\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/m
edia\/150371"}],"wp:attachment":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media?parent=150370"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/categories?post=150370"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/tags?post=150370"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}