Navigating The AI Revolution: What You Need To Know If Your Marketing Agency Uses AI

Ed note: This article first appeared in Strategies & Voices, a publication of the Legal Marketing Association.

In an era when artificial intelligence (AI) is proliferating, many have proclaimed the obsolescence of marketing agencies — or, at the very least, the death of certain marketing functions. Can we do away with copywriters now that ChatGPT can automate copy? What about graphic designers now that Canva has Magic Design, an AI design generator?

But how much more efficient and effective does AI actually make marketing agencies? What should clients expect? And should clients be concerned about how their agencies are using AI?

AI Isn’t a New Concept for Marketing Agencies

If we were to play a word association game with “AI,” most of us would immediately think of ChatGPT or another popular large language model (LLM), but these aren’t the first AI tools to affect the marketing industry. This technology might feel shiny and new due to recent dramatic jumps in its progress, but it’s been slowly permeating our workflows for years.

It’s likely that your marketing agency is already using AI and has been for some time. At the very least, it’s probably using Grammarly — the trusty digital writing assistant that combs through your copy to detect errors and plagiarism, suggest changes or spot gaps in writing.

The case for implementing AI into legal marketing is familiar by now: These tools are multitasking powerhouses that reduce time spent on routine, mundane tasks so marketers can devote more energy to high-level, strategic work and people management.

HubSpot’s 2023 report on the state of AI in marketing found that the most common uses across all industries are:

Content creation (48%)
Analyzing or reporting on data (45%)
Learning how to do things (45%)
Conducting research (32%)

In-house marketers should expect AI and automation to increase their agencies’ efficiency (HubSpot’s report found it can free up almost a month of additional work time per year). But, as ever when operating in the legal industry, extreme caution is paramount. Below is a breakdown of why generative AI requires careful implementation and oversight, and isn’t always your best bet for a first draft.

Where Generative AI Fails — and How to Bridge the Gap

For the more left-brained humans among us, it’s the dream: a writing robot that systematizes content creation and pumps out copy in mere seconds. Much like the trusty calculator did for math, generative AI could revolutionize how we write. (Yes, William Shakespeare, Emily Dickinson, et al. are definitely turning in their graves.)

But hold up: there are multiple reasons why agencies — especially those serving law firms — should not use AI for pure content creation.

1. Effective Law Firm Marketing Depends on Building Authority and Credibility

In a crowded market, leveraging a law firm’s reputation and subject matter experience can set it apart from competitors. If your marketing materials provide reliable information and valuable insights, readers will seek them out. The problem is that purely AI-generated content is the opposite of this; it lacks authenticity and genuine expertise. And because AI pulls from existing resources, your content might inadvertently include previously used, published or trademarked material.

To avoid this issue, use human insights to inform your content and guide overall strategy and messaging rather than relying solely on AI. Implement duplicate-content and plagiarism-detection tools to augment your AI use.

2. Using AI for Pure Content Creation Will Give You Low-Quality, Uncopyrightable Content

Everyone loves an accent, except when it comes to AI-generated content. Text generated by tools such as Anthropic’s Claude, ChatGPT and Perplexity has a certain cadence that sets it apart from human-written content. But unlike the soothing tones of Morgan Freeman’s narration or the warm hug of Dolly Parton’s Southern twang, this telltale “accent” is neither charming nor unique. That’s because these tools generate text from patterns in existing content, so the output tends to read as generic and derivative. This can negatively impact SEO, as search engines prioritize unique and valuable content that meets users’ needs. What’s more, the U.S. Copyright Office’s current position is that a work’s author must be human, not AI, and it’s unclear how much human involvement is necessary for copyright to apply.

To avoid this issue, have human editors draft and refine content, use AI to supplement their work and regularly evaluate the results.

3. Current AI Tools Have a Hallucination Problem

Sure, humans might get the odd stress headache, but generative AI is prone to frequent hallucinations (so frequent, in fact, that a Stanford University study found generative AI tools designed for the legal industry produce false or misleading information between 17% and 33% of the time). That’s because this technology was designed to predict the next word in a sequence, and what the model predicts should come next, based on learned patterns, isn’t necessarily what is true. This can lead to misrepresentations about a lawyer, firm or legal matter in marketing copy — and for a notoriously risk-averse industry, that’s a big deal. Just look at what happened to Air Canada when its chatbot gave inaccurate information to a customer.

To avoid this issue, have human editors copyedit and fact-check all AI-generated content. Use AI to synthesize blocks of text into quick bullet points rather than relying on it for market-ready messaging.
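For illustration, here is a minimal sketch of that kind of workflow in Python, assuming the OpenAI client library and an illustrative model name; the point is that the output is explicitly labeled a draft for a human editor, never published as-is, and only non-confidential text goes in.

```python
# Minimal sketch (not a prescribed workflow): ask an LLM to condense
# non-confidential source text into draft bullet points, then hand the result
# to a human editor for copyediting and fact-checking.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_bullets(source_text: str) -> str:
    """Return draft bullet points summarizing source_text (human review required)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Condense the user's text into 3-5 short bullet points. "
                    "Do not add facts that are not in the text."
                ),
            },
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    bullets = draft_bullets("Paste a non-confidential practice-group overview here.")
    print("DRAFT ONLY - requires human copyedit and fact-check:\n")
    print(bullets)
```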

4. AI Platforms Are a Data Privacy and Security Minefield

Anything you put into an AI platform could be used to generate copy and content for other users. Take ChatGPT’s terms of use, which say OpenAI can use inputs and outputs “to provide, maintain, develop and improve our services.” Similarly, text-to-image generator Midjourney’s terms of use say it has a perpetual license “to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense and distribute text and image prompts you input into the services, as well as any assets produced by you through the service.”

To avoid this issue, never enter confidential information into a third-party, public AI tool, review all terms and conditions and train your staff on the associated data privacy and security concerns. For housing confidential data, look to technology such as Meta’s Llama run through a tool like LM Studio, since these models can run locally on your own computer.
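As a rough illustration of what “running locally” looks like in practice, here is a minimal sketch that queries a Llama model served by LM Studio’s OpenAI-compatible local server; the endpoint URL, placeholder API key and model identifier are assumptions tied to LM Studio’s defaults and will vary with your setup. Because the model runs on your own machine, the prompt never leaves it.

```python
# Minimal sketch: send a prompt to a locally hosted model (e.g., a Llama model
# served by LM Studio's OpenAI-compatible local server) so confidential text
# stays on your own machine. The base_url, api_key placeholder and model name
# are assumptions based on LM Studio's defaults; adjust them to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
    api_key="lm-studio",                  # placeholder; the local server does not check it
)

response = client.chat.completions.create(
    model="local-model",  # identifier of whichever model you have loaded locally
    messages=[
        {
            "role": "user",
            "content": "Summarize this confidential client memo in three bullet points: ...",
        },
    ],
)
print(response.choices[0].message.content)
```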

5. Algorithmic Biases Lurk Within the System

Depending on the data used to train their underlying models, AI platforms can unintentionally perpetuate biases by reinforcing stereotypes, encouraging unfair treatment and even discriminating against certain groups of people.

To avoid this issue, don’t rely on AI to make decisions or lead initiatives. Look to platforms that use diverse data sets, conduct regular audits and involve human oversight to spot and rectify potential issues.

Recommendations for In-House Marketers

Armed with these cautionary tales, you can take several steps to evaluate and optimize your marketing agency’s use of AI. This involves gaining a clear understanding of how AI is being used and what that means for your firm, reviewing contracts and ensuring your agency is following best practices.

Here are some considerations for discussing AI with your firm’s agency:

Ask your firm’s agency what tools they’re already using and what they’re using them for.
Discuss what you’re comfortable and uncomfortable with, and why.
Inquire about the tools your agency is considering using, along with the pros and cons of implementing them.
Consider reviewing your contract to ensure it aligns with your expectations and adheres to your boundaries. If there’s currently no mention of AI use, consider adding clauses to address how it should or shouldn’t be implemented.

If your firm’s agency is following best practices for implementing AI, you should be able to answer “yes” to the following questions:

Does the agency disclose its AI use to clients for transparency?
Does it remove all personal identifiers to anonymize data (see the redaction sketch after this list)?
Does it make sure no sensitive data is ever fed into an AI platform?
Do humans review all the agency’s output before clients receive it?
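To make the anonymization question concrete, here is a minimal sketch of the kind of pre-processing an agency might run before any text reaches an AI platform; the patterns and placeholder names below are illustrative assumptions, not a complete anonymization solution.

```python
# Minimal sketch: strip common personal identifiers from text before it is
# sent to any AI platform. The regex patterns and name list are illustrative
# only; real anonymization needs broader coverage plus human review.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")


def redact(text: str, known_names: list[str]) -> str:
    """Replace email addresses, phone numbers and known names with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text


if __name__ == "__main__":
    sample = "Contact Jane Doe at jane.doe@example.com or (904) 555-0123 about the merger."
    print(redact(sample, known_names=["Jane Doe"]))
    # -> Contact [NAME] at [EMAIL] or [PHONE] about the merger.
```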

Your Agency Still Needs Humans

As a human myself, I’m thrilled to write the above heading. And it’s true: AI won’t replace your marketing agency, but it can level up your agency’s operations and content if implemented strategically.

While AI excels at analyzing vast amounts of data and automating time-consuming tasks, it needs the contextual understanding and creative intuition of human marketers. Effective legal marketing requires a deep understanding of market dynamics, relationship building, client preferences, cultural nuances and brand development. With these skills, we can interpret AI-generated insights in a meaningful way.

As with any great hire assigned to a new role, allow AI to explore its most enjoyable strengths (which happen to be completing your most tedious, routine and time-consuming tasks) and rely on its human colleagues for strategic direction, critical oversight and creative innovation.

Michelle Calcote King is the principal and president of Reputation Ink, a public relations and thought leadership marketing agency serving B2B professional services firms. She sits on the board of LMA’s Southeast Region, hosts the thought leadership podcast Spill the Ink and has been recognized twice by Lawdragon as one of the 100 Global Leaders in Legal Strategy & Consulting. Say hello to Michelle on LinkedIn or at [email protected].