In a never-ending game of cat and mouse, cybersecurity software and hardware companies are using AI to help combat the world’s increasingly sophisticated cyberattacks. It’s a modern-day version of who can one-up the other: cyberattackers or cybersecurity companies.

Much has been written about artificial intelligence over the past few years, along with insight into how it will transform the world, for better or worse, including the legal sector. Many firms are already taking advantage of AI in their daily activities, most recently through Microsoft’s Copilot, and AI is being integrated into more and more legal applications in addition to the continued popularity of ChatGPT.

AI has also made cyberattacks more effective by powering targeted phishing campaigns and automating the generation of exploits at the click of a mouse. Attackers are no longer required to be skilled programmers and expert hackers. AI allows the average user to become a professional hacker in about the time it takes to run an internet search on the subject.

At the same time, cybersecurity software and hardware companies are using AI to further their development and security capabilities to help fight increasingly sophisticated cyberattacks.

GenAI Explained

Generative AI (GenAI) is driving the development of next-generation cybersecurity protections, allowing organizations to strengthen their defensive capabilities and posture. To understand how, it helps to have some background on GenAI.

GenAI is a subset of artificial intelligence that uses generative models to produce new content. GenAI models use the underlying patterns and structures of their training data to create new output based on inputs, which often come in the form of prompts. Essentially, a GenAI model is provided with seed information, learns to recognize correlations, and continues to evolve as more data is fed into it. Why not use these models, software that can learn and evolve just like the cybersecurity threats we face, to help thwart sophisticated cyberattacks? That is exactly what cybersecurity companies are doing, and it is what everyone now expects.
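To make that prompt-and-response pattern concrete, here is a minimal, purely illustrative sketch in Python of how a security tool might hand a suspicious email to a generative model for a verdict. The call_genai_model function is a hypothetical stand-in for whatever vendor API a real product would use, and the canned verdict it returns is invented for the example.

# Illustrative only: demonstrates the prompt-in, analysis-out pattern described above.
# call_genai_model is a hypothetical placeholder, not any specific vendor's API.

def call_genai_model(prompt: str) -> str:
    """Hypothetical stand-in for a real GenAI API call; an actual tool would
    send the prompt to its vendor's model and return the generated text."""
    return "Verdict: likely phishing. Urgent wire request from an unfamiliar domain."

def assess_email(sender: str, subject: str, body: str) -> str:
    # The model's "input" is simply a prompt built from the data we want analyzed.
    prompt = (
        "You are a security assistant. Classify the following email as "
        "'likely phishing' or 'likely legitimate' and explain briefly.\n"
        f"From: {sender}\nSubject: {subject}\n\n{body}"
    )
    return call_genai_model(prompt)

if __name__ == "__main__":
    print(assess_email(
        sender="accounts@paymnt-portal.example",
        subject="URGENT: wire transfer needed today",
        body="Please send $48,000 to the account below before 3 p.m.",
    ))

In a real product, the prompt construction and the model call happen inside the vendor’s software. The point is simply that the model works from whatever data it is handed, which is why the data-access questions discussed below matter so much to law firms.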

CrowdStrike, an American cybersecurity company providing endpoint security and threat intelligence, released a survey in which over 80% of respondents said they plan to adopt, or have already integrated, GenAI into their cybersecurity frameworks. Most prefer seamless integration into existing cybersecurity tools and solutions that have already been implemented and have earned trust over the past several years, rather than separate, standalone AI products. It’s not uncommon for cybersecurity vendors to push out new AI-powered features in an update or major release without end users noticing, so you may already have cybersecurity software (next-generation antivirus or endpoint detection and response software) that has begun integrating AI.

A distinction is also drawn between different types of artificial intelligence, with a strong recommendation to use AI solutions built and explicitly designed for cybersecurity rather than generic models, which lack the specialized training that such a focus provides. Purpose-built AI cybersecurity models can be more effective at threat detection, response, and risk mitigation than generic ones. Cybersecurity companies are also using GenAI to help fill the shortage of skilled workers by automating the mind-numbing, repetitive tasks that often overwhelm understaffed cybersecurity professionals.

The integration of GenAI doesn’t come without hesitation from many businesses, including law firms. Many law firms and attorneys are concerned about using AI in their business processes and about giving AI models access to sensitive, confidential information. It all comes down to trust and familiarity, especially when working with relatively new, cutting-edge technology that is still being developed and evolving quickly. Being cautious is prudent in most cases, especially since attorneys are ethically responsible for protecting their clients’ information. Trust is critical, and it is only the first step. What safeguards are in place to keep specific data out of AI’s grasp? How do law firms prevent staff members from using “unapproved” AI solutions that may grant access to firm data that should be off-limits to the model?

Law firms should seriously consider adopting clear AI usage policies that specify the tools and solutions that staff can use—if allowed. It’s better to be out in front of your staff members, preventing “Shadow AI” problems before they occur. Law firms are notoriously slow to change and adapt, but this is one area where being proactive rather than reactive is in the law firm’s best interest.

Law Firms and the Next Generation of Cybersecurity

For the foreseeable future, there will be law firms and attorneys who resist the adoption of AI, in any form, at all costs. Even with the ability to put safeguards in place, such as policies, procedures, and technical and security controls, doubts will remain about the benefits and usage of artificial intelligence. The potential for GenAI’s integration into cybersecurity software, hardware, and controls is vast, potentially transforming and tilting the cybersecurity battle in favor of the “good guys” for once. There are concerns to be thought through and worked out, but at the end of the day, we cannot afford to fall farther and farther behind cyberattackers who are “all in” on AI.

To have any chance of thwarting future cyberattacks, we must determine how to integrate and adopt AI responsibly and as effectively as possible.

Michael C. Maschke is the President and Chief Executive Officer of Sensei Enterprises, Inc. Mr. Maschke is an EnCase Certified Examiner (EnCE), a Certified Computer Examiner (CCE #744), an AccessData Certified Examiner (ACE), a Certified Ethical Hacker (CEH), and a Certified Information Systems Security Professional (CISSP). He is a frequent speaker on IT, cybersecurity, and digital forensics, and he has co-authored 14 books published by the American Bar Association. He can be reached at mmaschke@senseient.com.

Sharon D. Nelson is the co-founder of and consultant to Sensei Enterprises, Inc. She is a past president of the Virginia State Bar, the Fairfax Bar Association, and the Fairfax Law Foundation. She is a co-author of 18 books published by the ABA. snelson@senseient.com

John W. Simek is the co-founder of and consultant to Sensei Enterprises, Inc. He is a Certified Information Systems Security Professional (CISSP), a Certified Ethical Hacker (CEH), and a nationally known digital forensics expert. He is a co-author of 18 books published by the ABA. jsimek@senseient.com



