
One of the most striking findings from the recently released 8am 2026 Legal Industry Report was that only 11% of firms required mandatory AI training, and only 9% had a written, enforced policy on AI use. Meanwhile, 69% of the 1,300 legal professionals surveyed reported using general-purpose AI tools for work-related purposes.
In other words, the majority of law firm employees are using AI with virtually no guidance or guardrails. How does your law firm compare? Do you have an AI policy in place, and have you educated your staff about appropriate AI usage?
Unfortunately, your firm’s employees have been experimenting even as your AI exploratory committee carefully and methodically researches the issue to determine whether now is the time to invest. If your small law firm doesn’t yet have AI governance in place, the reality is that you’re probably too late: the call is coming from inside the house.
AI is not a future problem, and adapting to it is no longer optional; it demands your attention now. Your first step should be to reduce AI-related risk by drafting a policy, explaining it to your employees, and training them on appropriate AI use.
At a minimum, the policy should address:
- which AI tools are approved for use,
- what client and matter information can never be entered into any AI system,
- the requirement that all AI-generated work product be carefully reviewed before use, and
- compliance with any disclosure obligations imposed by court rules.
Fortunately, it’s easier than ever to create governance by leveraging the very tools your employees are already using. Whether it’s ChatGPT, Gemini, or Claude, AI can assist with drafting your firm’s policy and with building a training program that ensures your employees understand how and when they can use AI in the workplace.
If you’re not sure where to start, here’s a suggested approach using Claude, though you can accomplish the same thing with Gemini or ChatGPT. You’ll find it’s easier than you might expect, and with just a few hours’ work, you’ll have created governance documents and a training program for your lawyers and staff.
First, make sure you’re on a paid tier and have confirmed in your settings that your inputs won’t be used to train the model; this helps keep your firm’s internal documents and policies private.
Then, create a Project in Claude (or a Gem in Gemini, or a Project in ChatGPT). A Project in Claude is a dedicated workspace that retains context across every conversation, and the more relevant material you add to it, the better grounded its output will be.
Upload documents into the Project that define your firm’s obligations and risk profile. These can include:
- Existing office policies or employee handbook sections
- Malpractice carrier guidance on AI (most carriers have issued something by now)
- Relevant state bar ethics opinions
- Sample law firm AI policies
- Any vendor agreements for AI tools that the firm already has in place
Next, provide context about your firm that carries over into all project conversations. In a Claude Project, there’s a field called “Project instructions.” Enter relevant information for Claude to consider before responding to each query, such as the firm’s size, practice areas, and applicable jurisdictions.
Once you’ve done that, start a new conversation and ask Claude to draft your firm’s AI policy. A simple starting prompt works fine: “Draft an AI usage policy for our firm based on the uploaded documents and firm description. For each section, note which source you’re drawing from and flag any gaps or conflicts with our jurisdiction’s rules.”
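If someone at your firm is comfortable with a bit of scripting, the same drafting step can also be run against the Anthropic API instead of the Projects interface. The sketch below is illustrative only: it assumes the anthropic Python package and an ANTHROPIC_API_KEY environment variable, the model name and file names are placeholders, the firm description stands in for the Project instructions, and the text files stand in for the uploaded documents.

```python
from pathlib import Path

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stands in for the "Project instructions" field: firm size, practice areas, jurisdictions.
firm_context = (
    "We are a five-attorney firm practicing family law and estate planning "
    "in New York. Write for a non-technical audience."
)

# Stands in for the uploaded Project documents (ethics opinions, carrier guidance, etc.).
reference_text = "\n\n---\n\n".join(
    Path(name).read_text() for name in ["ethics_opinion.txt", "carrier_guidance.txt"]
)

prompt = (
    "Draft an AI usage policy for our firm based on the reference documents and firm "
    "description. For each section, note which source you're drawing from and flag any "
    "gaps or conflicts with our jurisdiction's rules.\n\n"
    "Reference documents:\n" + reference_text
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use whichever model your plan offers
    max_tokens=4000,
    system=firm_context,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)  # the first draft, ready for attorney review
```

Either way, what comes back is a first draft for attorney review, not a finished policy.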
The first draft won’t be perfect, so work through it section by section. Ask Claude to compare it to the ethics opinions and sample AI policies you added to the Project, identify gaps, and revise accordingly. Then carefully review the final draft yourself to confirm it meets your firm’s needs.
Once the policy is in good shape, open a new conversation in the same Project. Ask Claude to assist with building a training outline using the AI policy and any relevant firm context. Based on that document, have Claude create a 60-minute onboarding module for all staff. With that completed, you’re ready to start training your staff on appropriate and permissible AI use in the firm.
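And if you took the scripted route sketched earlier, the training step follows the same pattern; once again the file name and model ID are placeholders, and the result is a starting outline for you to refine, not a finished curriculum.

```python
from pathlib import Path

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Same stand-in for the Project instructions used in the earlier sketch.
firm_context = (
    "We are a five-attorney firm practicing family law and estate planning in New York."
)

# Placeholder path to the approved policy, saved as plain text.
policy_text = Path("ai_policy_final.txt").read_text()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute your available model
    max_tokens=4000,
    system=firm_context,
    messages=[{
        "role": "user",
        "content": (
            "Using the AI policy below, create a 60-minute onboarding training module "
            "for all staff, with an agenda, key talking points, and discussion "
            "questions.\n\n" + policy_text
        ),
    }],
)

print(response.content[0].text)  # the training outline, ready for your review
```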
The bottom line: Drafting AI governance isn’t as difficult as it might seem, and there’s no better time than now to get started. Law firm employees are already using AI off the books, in large part because law firms aren’t moving fast enough.
Meanwhile, the governance gap isn’t shrinking on its own. Fortunately, reducing it can be accomplished faster and more efficiently than ever. Once you’ve shared the policy and trained your staff, rest easy knowing that you’ve reduced your firm’s immediate risk. And, even better, through the process of creating the governance, you’ve gained enough AI fluency to make more informed decisions about technology adoption that will guide your firm’s success in an AI-led world.
Nicole Black is a Rochester, New York attorney and Principal Legal Insight Strategist at 8am, the team behind 8am MyCase, LawPay, CasePeer, and DocketWise. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-author of Social Media for Lawyers: The Next Frontier, and co-author of Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at niki.black@mycase.com.
