{"id":128084,"date":"2025-07-24T03:44:33","date_gmt":"2025-07-24T11:44:33","guid":{"rendered":"https:\/\/xira.com\/p\/2025\/07\/24\/partner-who-wrote-about-ai-ethics-fired-for-citing-fake-ai-cases\/"},"modified":"2025-07-24T03:44:33","modified_gmt":"2025-07-24T11:44:33","slug":"partner-who-wrote-about-ai-ethics-fired-for-citing-fake-ai-cases","status":"publish","type":"post","link":"https:\/\/xira.com\/p\/2025\/07\/24\/partner-who-wrote-about-ai-ethics-fired-for-citing-fake-ai-cases\/","title":{"rendered":"Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases"},"content":{"rendered":"<figure class=\"wp-block-image alignright size-full is-resized\"><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" width=\"400\" height=\"500\" src=\"https:\/\/i0.wp.com\/abovethelaw.com\/wp-content\/uploads\/sites\/4\/2015\/06\/Robot-Lawyer.jpg?resize=400%2C500&#038;ssl=1\" alt=\"\" class=\"wp-image-111012\" title=\"\"><figcaption><\/figcaption><\/figure>\n<p>Most of the time, when a lawyer unwittingly cites a bunch of fake cases spit out by artificial intelligence, it\u2019s because they never bothered to figure out how the product worked or even superficially consider the ethical implications. They plead with the judge that they\u2019re just <a href=\"https:\/\/abovethelaw.com\/2025\/07\/lawyer-cites-ai-hallucinations-responds-with-pretentious-meditation-on-nature-of-being\/\" rel=\"nofollow noopener\" target=\"_blank\">a humble scribe of Ashurbanipal<\/a> who couldn\u2019t possibly grasp the powerful forces involved in asking a mansplaining-as-a-service bot to magic up some cases. 
As an excuse it <a href=\"https:\/\/abovethelaw.com\/2025\/07\/judge-trolls-lawyer-over-flowery-excuses-for-ai-hallucinations\/\" rel=\"nofollow noopener\" target=\"_blank\">doesn\u2019t always work<\/a>, but tales of ignorance have, thus far, <a href=\"https:\/\/abovethelaw.com\/2025\/07\/mike-lindell-lawyers-earn-pillow-soft-sanction-after-letting-ai-do-the-thinking\/\" rel=\"nofollow noopener\" target=\"_blank\">stayed many a judge\u2019s hand<\/a>.<\/p>\n<p>But when the hallucinations come from a lawyer who once published the article \u201c<a href=\"https:\/\/web.archive.org\/web\/20240909150316\/https:\/\/professionalliabilitymatters.com\/law-practice-management\/ai-in-the-legal-profession-ethical-considerations\/\" rel=\"nofollow noopener\" target=\"_blank\">Artificial Intelligence in the Legal Profession: Ethical Considerations<\/a>,\u201d there\u2019s not a ton of wiggle room.<\/p>\n<p>Goldberg Segalla\u2019s Danielle Malaty, who authored the article about ethics, is now out after taking responsibility for a fake cite in a Chicago Housing Authority filing asking the judge to reconsider a jury\u2019s $24 million verdict in a lead paint poisoning case. The Authority is said to have learned about the lead paint hazard in 1992 and it\u2019s hard to contest liability for a harm you\u2019ve known about since <em>End of the Road<\/em> charted. But the firm struck gold with an Illinois Supreme Court cite,\u00a0<em>Mack v. 
Anderson<\/em>, that could not have supported the CHA\u2019s argument better\u2026 because it was invented out of thin microchips by ChatGPT.<\/p>\n<p>From the <a href=\"https:\/\/www.chicagotribune.com\/2025\/07\/17\/chicago-housing-authority-lawyers-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">Chicago Tribune<\/a>:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>At the hearing, Danielle Malaty, the attorney responsible for the mistake, told the judge she did not think ChatGPT could create fictitious legal citations and did not check to ensure the case was legitimate. Three other Goldberg Segalla attorneys then reviewed the draft motion \u2014 including Mason, who served as the final reviewer \u2014 as well as CHA\u2019s in-house counsel, before it was filed with the court. Malaty was terminated from Goldberg Segalla, where she had been a partner, following her use of AI. The firm, at the time, had an AI policy that banned its use.<\/p>\n<\/blockquote>\n<p>How did this happen? Was the firm huffing the same lead paint that Chicago Housing doesn\u2019t want to pay for foisting on kids?<\/p>\n<p>According to the Tribune account, lead counsel on the case, Larry Mason, said that \u201cAn exhaustive investigation revealed that one attorney, in direct violation of Goldberg Segalla\u2019s AI use policy, used AI technology and failed to verify the AI citation before including the case and surrounding sentence describing its fictitious holding.\u201d Not quite sure what this policy even means\u2026 has the firm banned \u201cAI\u201d generally, because that\u2019s dumb. It\u2019s going to be embedded in the guts of everything lawyers do soon enough \u2014 a general objection to AI is like lawyers in the 90s informing the court that they\u2019re committed to never allowing online legal research. 
Hopefully the policy is more nuanced than Mason suggests because blanket policies, paradoxically, only encourage lawyers to go rogue.<\/p>\n<p>But more important than the \u201cAI policy\u201d is the part where \u201cThree other Goldberg Segalla attorneys then reviewed the draft motion \u2014 including Mason, who served as the final reviewer.\u201d Don\u2019t blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that? Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might\u2019ve ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.<\/p>\n<p>Malaty will shoulder most of the blame as the link in the workflow who should\u2019ve known better. That said, her article about AI ethics, written last year, doesn\u2019t actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations. 
<\/p>\n<p>Meanwhile, \u201cCHA continues to contest the ruling and is seeking a verdict in its favor, a new trial on liability or a new trial on damages or to lower the verdict.\u201d Maybe Claude can give them an out.<\/p>\n<hr>\n<p><strong><em><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignright  wp-image-443318\" src=\"https:\/\/i0.wp.com\/abovethelaw.com\/wp-content\/uploads\/2016\/11\/Headshot-300x200.jpg?resize=188%2C125&#038;ssl=1\" alt=\"Headshot\" width=\"188\" height=\"125\" title=\"\"><a href=\"http:\/\/abovethelaw.com\/author\/joe-patrice\/\" target=\"_blank\" rel=\"noopener nofollow\">Joe Patrice<\/a>\u00a0is a senior editor at Above the Law and co-host of <a href=\"http:\/\/legaltalknetwork.com\/podcasts\/thinking-like-a-lawyer\/\" target=\"_blank\" rel=\"noopener nofollow\">Thinking Like A Lawyer<\/a>. Feel free to\u00a0<a href=\"mailto:joepatrice@abovethelaw.com\">email<\/a> any tips, questions, or comments. Follow him on\u00a0<a href=\"https:\/\/twitter.com\/josephpatrice\" target=\"_blank\" rel=\"noopener nofollow\">Twitter<\/a>\u00a0or <a href=\"https:\/\/bsky.app\/profile\/joepatrice.bsky.social\" rel=\"noopener nofollow\" target=\"_blank\">Bluesky<\/a> if you\u2019re interested in law, politics, and a healthy dose of college sports news. 
Joe also serves as a <a href=\"https:\/\/www.rpnexecsearch.com\/josephpatrice\" target=\"_blank\" rel=\"noopener nofollow\">Managing Director at RPN Executive Search<\/a>.<\/em><\/strong><\/p>\n<p>The post <a href=\"https:\/\/abovethelaw.com\/2025\/07\/partner-who-wrote-about-ai-ethics-fired-for-citing-fake-ai-cases\/\" rel=\"nofollow noopener\" target=\"_blank\">Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases<\/a> appeared first on <a href=\"https:\/\/abovethelaw.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Above the Law<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Most of the time, when a lawyer unwittingly cites a bunch of fake cases spit out by artificial intelligence, it\u2019s because they never bothered to figure out how the product worked or even superficially consider the ethical implications. 
They plead with the judge that they\u2019re just a humble scribe of Ashurbanipal who couldn\u2019t possibly grasp [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":128042,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[16],"tags":[],"class_list":["post-128084","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-above_the_law"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/xira.com\/p\/wp-content\/uploads\/2025\/07\/Headshot-300x200-UTscLW.jpg?fit=300%2C200&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/128084","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/comments?post=128084"}],"version-history":[{"count":0,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/128084\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media\/128042"}],"wp:attachment":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media?parent=128084"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/categories?post=128084"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/tags?post=128084"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}