
The biggest argument I’ve gotten into lately — in the legal tech space anyway — is over so-called “Agentic AI.” I say “so-called” because most of the tools billing themselves as “agentic” don’t bear much resemblance to the “Agentic AI” being talked about in every other sector. Consumer AI companies extol the virtues of agents that autonomously make reservations for you based on scanning your horoscope that morning. “Agentic” is the buzzword of the hour. It’s the word that excites the VCs setting their money on fire on AI investments and intrigues the technophiles. And so legal tech companies need to adopt that vernacular too.

However, lawyers considering new products aren’t necessarily psyched about the idea of AI using black-box decision-making. Because the buzzword we use for that in this profession is “malpractice.”

The good news is that, despite the moniker, most of the products described as agentic in the legal space more closely resemble a batch file of professionally manicured chat prompts than anything truly autonomous. Which is good! The providers behind these elaborate automations have spent a lot of time and money making sure the AI delivers the best possible results. AI hallucinations are real, but the greatest source of error remains between the keyboard and the chair. Bad prompts lead to bad results… and even hallucinated ones. Lawyers — whether in-house or at a firm — are likely to feel a lot better about a product described as “an expert-curated workflow to maximize AI’s potential while protecting against errors” than about an “autonomous agent.”

The legal industry gets its cues from the tech providers, and those providers need to be able to communicate what they can offer in terms that lawyers are ready to hear.

Plat4orm and Lumen Advisory Group just dropped a report to help translate technobabble into legalese: From Hours to Outcomes: The Legal Tech Executive Playbook for Value Creation in the AI Era. It’s the first in a planned series of playbooks, this one offering a strategic guide for legal tech providers on steering their own clients through the AI waters. As someone who interviews tech providers all the time, I can usually tell when a company is represented by folks like Plat4orm and when it isn’t. This guide offers a slice of insight into why.

[Screenshot: the playbook’s sample pitch for a secure AI drafting tool trained on a client’s own contract data]

AI providers will always talk about time savings, but how they describe those savings matters. Silicon Valley tech bros describe time savings in terms of AI “taking over” decisions. They gush about having built something to replace humans. And, yes, they’ll probably drop something about it being “agentic” and “autonomous.”

Contrast that with the description above. Note that words like “secure” and “trained on their own contract data” show up before anyone mentions time. Note how it stresses that the AI created “a strong first draft,” implicitly reassuring the lawyer customer that we’re only talking about a draft out of the gate. The lawyer’s contribution remains “high-value” and “expert” — keeping those egos stroked — even as the pitch describes a literal decimation of billable time.

Don’t frame it as billed time lost; focus on real time gained. “Reframe the conversation from ‘hours saved’ to ‘strategic capacity unlocked,’” as the playbook explains.

An MIT study found that some 95% of generative AI pilots fail to deliver measurable business impact. There’s no single cause for this, but at least part of it is the general confusion among lawyers over what all this stuff even means. How do you take the plunge and sink resources into AI — and once you do, how do you commit to overcoming the adoption hurdle — when you aren’t even sure you’re making the right AI decisions? The resulting inaction ends up like a middle school dance: everyone standing awkwardly along the walls while the unruly kids spike the punch with bootleg Four Lokos when no one’s looking. People using ChatGPT for legal research are the Four Lokos kids of this analogy.

What this playbook offers is a responsible chaperone for that dance.


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
