{"id":147679,"date":"2026-04-01T06:30:27","date_gmt":"2026-04-01T14:30:27","guid":{"rendered":"https:\/\/xira.com\/p\/2026\/04\/01\/deepfakes-a-problem-in-search-of-a-problem\/"},"modified":"2026-04-01T06:30:27","modified_gmt":"2026-04-01T14:30:27","slug":"deepfakes-a-problem-in-search-of-a-problem","status":"publish","type":"post","link":"https:\/\/xira.com\/p\/2026\/04\/01\/deepfakes-a-problem-in-search-of-a-problem\/","title":{"rendered":"Deepfakes: A Problem In Search Of A Problem?"},"content":{"rendered":"<p>I asked a room full of lawyers and legal professionals recently how many of them had come across deepfakes in litigation. Not a single hand went up. Is the deepfake phenomenon a problem that\u2019s really not one? Or is it like the hallucinated case citation problem once was: skepticism that hadn\u2019t caught up with reality?<\/p>\n<p>I was giving a presentation on deepfakes with the esteemed jurist, <a href=\"https:\/\/en.wikipedia.org\/wiki\/Xavier_Rodriguez\" rel=\"nofollow noopener\" target=\"_blank\">Xavier Rodriguez<\/a>, at ABA\u2019s <a href=\"https:\/\/www.techshow.com\/\" rel=\"nofollow noopener\" target=\"_blank\">TECHSHOW<\/a> to some 50 or so lawyers and legal professionals when I asked my deepfakes question. Judge Rodriguez is a federal district judge for the Western District of Texas and a leading voice on technology and AI in the federal judiciary. I <a href=\"https:\/\/abovethelaw.com\/2025\/12\/the-deepfake-courtroom-problem-a-colorado-blue-ribbon-study-sheds-some-light-and-offers-a-start-to-solutions\/\" rel=\"nofollow noopener\" target=\"_blank\">have written<\/a> before about the threat of AI-generated deepfakes and, like Judge Rodriguez, fear its impact on our judicial system.<\/p>\n<p>The fact that not one person raised their hands is significant. Granted, the sample size was small, but TECHSHOW typically draws some pretty savvy tech people and litigators. 
If anyone should be aware of and sensitive to the potential problem, it\u2019s them.<\/p>\n<p>We shouldn\u2019t have been all that surprised that no hands went up, though. After all, the <a href=\"https:\/\/www.uscourts.gov\/forms-rules\/records-rules-committees\/committee-reports\/advisory-committee-evidence-rules-may-2024\" rel=\"nofollow noopener\" target=\"_blank\">Advisory Committee on Evidence Rules<\/a>, which proposes changes to the Federal Rules of Evidence, recently rejected a proposed amendment to Rule 901 that would have strengthened authentication requirements. One major reason: the Committee <a href=\"https:\/\/legal-forum.uchicago.edu\/print-archive\/deepfakes-court-how-judges-can-proactively-manage-alleged-ai-generated-material#:~:text=The%20quality%20of%20AIM%20is,track%20its%20origin.\" rel=\"nofollow noopener\" target=\"_blank\">reportedly<\/a> thought a change was premature given how few reported cases involve deepfake evidence. The Committee opted for a wait-and-see approach.<\/p>\n<p>But with all the publicity about deepfakes and the dangers they pose to our judicial system and society, you have to ask why they aren\u2019t showing up more. Are deepfakes just a problem in search of a problem (to paraphrase the old saying about a solution in search of a problem)?<\/p>\n<p><strong>What\u2019s the Why?<\/strong><\/p>\n<p>There could be several reasons why we apparently aren\u2019t yet seeing a deepfake problem in our courtrooms.<\/p>\n<p>Maybe litigants aren\u2019t yet savvy enough to create the kind of deepfake that passes the realistic-looking test one would need for litigation. For those with some tech knowledge, it seems pretty easy to create a convincing fake. But for those with less tech background, maybe it isn\u2019t.<\/p>\n<p>Or perhaps litigants still have enough respect for, and outright fear of, a black-robed judge to balk at brazenly offering fake evidence. 
After all, committing what is in essence perjury should give anyone pause.<\/p>\n<p>Or maybe, as one litigator whom I know well and respect told me after the presentation, deepfakes are occurring, but lawyers and judges aren\u2019t catching them. After all, we have been conditioned for years by the photography and audio recording industries to believe that what you see in a picture or hear in a recording is, in fact, real. So, we assume things are real today when they aren\u2019t.<\/p>\n<p>And the ability to use AI to create extremely realistic but fake evidence is a fairly recent phenomenon. It burst upon us all quickly and continues to develop rapidly. So, our minds have not yet caught up with the fact that its use could create a problem for our litigation system.<\/p>\n<p><strong>It\u2019s Easy \u2014 And Tempting<\/strong><\/p>\n<p>I tend to doubt the first two reasons because it\u2019s so easy to manufacture convincing evidence. Certainly, in the criminal law arena, the opportunity for defendants to deploy deepfakes would seem ripe for the taking. Manufacturing a picture to establish an alibi. Creating an audio recording to suggest someone else committed the crime. The list could go on and on. Indeed, prosecutors have told me they are very worried about just this.<\/p>\n<p>Another area ripe for abuse is family law. Someone seeking a TRO creates an audio recording suggesting, for example, domestic abuse. That puts a judge in a tough spot: treating the recording as fake could be devastating if that call is wrong.<\/p>\n<p>But it\u2019s not just the bad guys. Even well-meaning people might be tempted to cross the line. Over my career, I saw litigants and witnesses constantly convince themselves of a version of the facts that was just not correct. Their minds would embellish the version they wanted and add things to it that simply didn\u2019t happen. 
Indeed, it\u2019s often not conscious; it\u2019s human nature.<\/p>\n<p>And now the line from mental embellishment to manufactured proof would be easy to cross. I had a case once that turned on whether a fire protective device was or was not present in a building. One person was sure it wasn\u2019t there when in fact it was. In the age of deepfakes, it would be easy to create a picture showing what the mind\u2019s eye was certain was true: a building with no device in it.<\/p>\n<p>Or suppose one side had a picture showing the device was there, and their adversary concluded that picture was fake. The temptation to counter it with another fake would be high.<\/p>\n<p><strong>Skepticism vs. Reality<\/strong><\/p>\n<p>Which brings us back to the unscientific poll in our presentation and the Rules Committee attitude: why aren\u2019t we seeing the problem in our courtrooms?<\/p>\n<p>Judge Rodriguez made a good point in our discussion, one I alluded to above: there is a presumption of validity for photos, recordings, and videos. It\u2019s the notion that a picture is worth a thousand words. So, skepticism about what we see has not yet caught up with reality. At least in the courtroom.<\/p>\n<p>But more and more people are rightfully questioning what they are seeing on social media and elsewhere. There is increased publicity about the deepfake phenomenon as the development of AI has created greater opportunity for realistic deepfakes. Maybe we just aren\u2019t there yet.<\/p>\n<p>It\u2019s like hallucinated cases. Most people knew that LLMs could hallucinate from the time they burst on the scene. Yet it wasn\u2019t until later that the first instance of a hallucinated citation popped up in a courtroom. Now it happens all the time.<\/p>\n<p>The reality is that lawyers and judges have not yet realized that virtually any piece of evidence, the realism of which we have taken for granted, could now be fake. 
And that routine authentication may need to take on a whole new meaning.<\/p>\n<p>But if it does, litigation will turn into a sideshow of battles over whether each piece of evidence is real. And, even worse, as Judge Rodriguez also pointed out, fact finders \u2014 judges and juries \u2014 won\u2019t or can\u2019t believe any piece of evidence. It could turn our litigation system, designed for fact finding, on its head: endless fights, with no one believing anything they see or hear.<\/p>\n<p>That threat is real. Waiting and seeing is not an option.<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\">\n<p><em><strong>Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes\u00a0<a href=\"https:\/\/www.techlawcrossroads.com\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TechLaw Crossroads<\/a>, a blog devoted to examining the tension between technology, the law, and the practice of law<\/strong><\/em>.<\/p>\n<p>The post <a href=\"https:\/\/abovethelaw.com\/2026\/04\/deepfakes-a-problem-in-search-of-a-problem\/\" rel=\"nofollow noopener\" target=\"_blank\">Deepfakes: A Problem In Search Of A Problem?<\/a> appeared first on <a href=\"https:\/\/abovethelaw.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Above the Law<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I asked a room full of lawyers and legal professionals recently how many of them had come across deepfakes in litigation. Not a single hand went up. Is the deepfake phenomenon a problem that\u2019s really not one? Or is it like the hallucinated case citation problem once was: skepticism that hadn\u2019t caught up with reality? [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":147680,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[16],"tags":[],"class_list":["post-147679","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-above_the_law"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/xira.com\/p\/wp-content\/uploads\/2026\/04\/GettyImages-2031734739-6nZ5JR.jpg?fit=788%2C443&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/147679","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/comments?post=147679"}],"version-history":[{"count":0,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/147679\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media\/147680"}],"wp:attachment":[{"href":"https:\/
\/xira.com\/p\/wp-json\/wp\/v2\/media?parent=147679"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/categories?post=147679"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/tags?post=147679"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}