{"id":123401,"date":"2025-06-20T08:02:52","date_gmt":"2025-06-20T16:02:52","guid":{"rendered":"https:\/\/xira.com\/p\/2025\/06\/20\/why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will\/"},"modified":"2025-06-20T08:02:52","modified_gmt":"2025-06-20T16:02:52","slug":"why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will","status":"publish","type":"post","link":"https:\/\/xira.com\/p\/2025\/06\/20\/why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will\/","title":{"rendered":"Why Making Social Media Companies Liable For User Content Doesn\u2019t Do What Many People Think It Will"},"content":{"rendered":"<figure class=\"wp-block-image alignright size-full is-resized\"><img data-recalc-dims=\"1\" decoding=\"async\" loading=\"lazy\" width=\"724\" height=\"483\" src=\"https:\/\/i0.wp.com\/abovethelaw.com\/wp-content\/uploads\/sites\/4\/2025\/06\/GettyImages-1704413556.jpg?resize=724%2C483&#038;ssl=1\" alt=\"\" class=\"wp-image-1163576\" title=\"\"><figcaption><\/figcaption><\/figure>\n<p>Brazil\u2019s Supreme Court appears close to ruling that\u00a0<a href=\"https:\/\/apnews.com\/article\/brazil-social-media-supreme-court-user-content-33312c07ddfae598f4d673d1141d6a4f\" rel=\"nofollow noopener\" target=\"_blank\">social media companies should be liable for content hosted on their platforms<\/a>\u2014a move that appears to represent a significant departure from the country\u2019s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it\u2019s likely to backfire in ways that make the underlying problems worse, not better.<\/p>\n<p>The core issue is that most people fundamentally misunderstand both how content moderation works and what drives platform incentives. 
There\u2019s a persistent myth that companies could achieve near-perfect moderation if they just \u201ctried harder\u201d or faced sufficient legal consequences. This ignores the mathematical reality of what happens when you attempt to moderate billions of pieces of content daily, and it misunderstands how liability actually changes corporate behavior.<\/p>\n<p>Part of the confusion, I think, stems from people\u2019s failure to understand\u00a0<a href=\"https:\/\/www.techdirt.com\/2019\/11\/20\/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well\/\" rel=\"nofollow noopener\" target=\"_blank\">the impossibility of doing content moderation well at scale<\/a>. There is a widespread, mistaken assumption that social media platforms could do perfect (or very good) content moderation if they just tried harder or had more incentive to do better. Without denying that\u00a0<em>some<\/em>\u00a0entities (*cough* ExTwitter *cough*) have made it clear they don\u2019t care at all, most others do try to get this right, and\u00a0<a href=\"https:\/\/www.techdirt.com\/2025\/02\/24\/john-olivers-content-moderation-episode-isnt-just-funny-its-absolutely-accurate\/\" rel=\"nofollow noopener\" target=\"_blank\">discover over and over again<\/a>\u00a0how impossible that is.<\/p>\n<p>Yes, we can all point to examples of platform failures that are depressing, and where it seems obvious that things should have been done differently, but the failures are not there because \u201cthe laws don\u2019t require it.\u201d The failures are because it\u2019s impossible to do this well at scale. Some people will always disagree with how a decision comes out, and other times there are no \u201cright\u201d answers. Also, sometimes, there\u2019s just too much going on at once, and no legal regime in the world can possibly fix that.<\/p>\n<p>Given all of that, what we really want are\u00a0<em>better overall incentives<\/em>\u00a0for the companies to do better. 
Some people (again, falsely) seem to think the only incentives are regulatory. But that\u2019s not true. Incentives come in all sorts of shapes and sizes\u2014and much more powerful than regulations are things like\u00a0<em>the users themselves,<\/em>\u00a0<a href=\"https:\/\/www.techdirt.com\/2022\/11\/21\/twitters-former-head-of-trust-safety-explains-why-for-all-his-billions-elon-musk-cant-magically-decide-how-twitter-will-work\/\" rel=\"nofollow noopener\" target=\"_blank\">along with advertisers and other business partners<\/a>.<\/p>\n<p>Importantly, content moderation is also a constantly moving and evolving issue. People who are trying to game the system are constantly adjusting. New kinds of problems arise out of nowhere. If you\u2019ve never\u00a0<a href=\"https:\/\/moderatormayhem.engine.is\/\" rel=\"nofollow noopener\" target=\"_blank\">done content moderation<\/a>, you have no idea how many \u201cedge cases\u201d there are. Most people\u2014incorrectly\u2014assume that most decisions are easy calls and you may occasionally come across a tougher one.<\/p>\n<p>But there are constant edge cases, unique scenarios, and unclear situations. Because of this, every service provider\u00a0<strong>will make many, many mistakes<\/strong>\u00a0every day. There\u2019s no way around this. It\u2019s partly the law of large numbers. It\u2019s partly the fact that humans are fallible. It\u2019s partly the fact that decisions need to be made quickly without full information. And a lot of it is that those making the decisions just don\u2019t know what the \u201cright\u201d approach is.<\/p>\n<p>The way to get better is\u00a0<em>constant adjusting<\/em>\u00a0and experimenting. Moderation teams need to be adaptable. They need to be able to respond quickly. 
And they need the freedom to experiment with new approaches to deal with bad actors trying to abuse the system.<\/p>\n<p><strong>Putting legal liability on the platform makes all of that more difficult<\/strong><\/p>\n<p>Now, here\u2019s where my concerns about the potential ruling in Brazil come in: if there is\u00a0<em>legal liability,<\/em>\u00a0it creates a scenario that is actually\u00a0<em>less likely<\/em>\u00a0to lead to good outcomes. First, it effectively requires companies to replace moderators with lawyers. If your company is now making decisions that come with significant legal liability, that likely requires a much higher level of expertise. Even worse, it\u2019s creating a job that most people with law degrees are unlikely to want.<\/p>\n<p>Every social media company has at least some lawyers who work with their trust &amp; safety teams to review the really challenging cases, but when legal liability could accrue for every decision, it becomes much, much worse.<\/p>\n<p>More importantly, though, it makes it\u00a0<em>way more difficult<\/em>\u00a0for trust &amp; safety teams to experiment and adapt. Once decisions carry the potential of legal liability, it becomes much more important for the companies to have some sort of plausible deniability\u2014some way to express to a judge \u201clook, we\u2019re doing the same thing we always have, the same thing every company has always done\u201d to cover themselves in court.<\/p>\n<p>But that means that these trust &amp; safety efforts get hardened into place, and teams are less able to adapt or to experiment with better ways to fight evolving threats. It\u2019s a disaster for companies that want to do the right thing.<\/p>\n<p>The next problem is that such a regime creates a real heckler\u2019s veto. If\u00a0<em>anyone<\/em>\u00a0complains about\u00a0<em>anything,<\/em>\u00a0companies are quick to take it down, because the risk of ruinous liability just isn\u2019t worth it. 
And we now\u00a0<a href=\"https:\/\/cyberlaw.stanford.edu\/blog\/2021\/02\/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws\/\" rel=\"nofollow noopener\" target=\"_blank\">have\u00a0<em>decades<\/em>\u00a0of evidence<\/a>\u00a0showing that increasing liability on platforms leads to massive overblocking of information. I recognize that some people feel this is acceptable collateral damage\u2026 right up until it impacts them.<\/p>\n<p>This dynamic should sound familiar to anyone who\u2019s studied internet censorship. It\u2019s exactly how China\u2019s Great Firewall originally operated\u2014not through explicit rules about what was forbidden, but by\u00a0<a href=\"https:\/\/www.techdirt.com\/2016\/05\/04\/why-growing-unpredictability-chinas-censorship-is-feature-not-bug\/\" rel=\"nofollow noopener\" target=\"_blank\">telling service providers<\/a>\u00a0that the punishment would be severe if anything \u201cbad\u201d got through. The government created deliberate uncertainty about where the line was, knowing that companies would respond with massive overblocking to avoid potentially ruinous consequences. The result was far more comprehensive censorship than direct government mandates could have achieved.<\/p>\n<p>Brazil\u2019s proposed approach follows this same playbook, just with a different enforcement mechanism. 
Rather than government officials making vague threats, it would be civil liability creating the same incentive structure: when in doubt, take it down, because the cost of being wrong is too high.<\/p>\n<p>People may be okay with that, but I would think that in a country with a history of dictatorships and censorship, they would like to be a bit more cautious before handing the government a similarly powerful tool of suppression.<\/p>\n<p>It\u2019s especially disappointing in Brazil, which a decade ago put together\u00a0<a href=\"https:\/\/www.techdirt.com\/2014\/03\/27\/brazils-marco-civil-internet-civil-rights-law-finally-passes-with-key-protections-largely-intact\/\" rel=\"nofollow noopener\" target=\"_blank\">the Marco Civil<\/a>, an internet civil rights law that was designed to protect user rights and civil liberties\u2014including around intermediary liability. The Marco Civil remains an example of more thoughtful internet lawmaking (way better than we\u2019ve seen almost anywhere else, including the US). So this latest move feels like backsliding.<\/p>\n<p>Either way, the longer-term fear is that this would actually limit the ability of smaller, more competitive social media players to operate in Brazil, as it will be way too risky. The biggest players (Meta) aren\u2019t likely to leave, but they have buildings full of lawyers who can fight these lawsuits (and often, likely, win). A study we conducted a few years back detailed how as countries ratcheted up their intermediary liability, the end result was, repeatedly,\u00a0<a href=\"https:\/\/copia.is\/library\/dont-shoot-the-message-board\/\" rel=\"nofollow noopener\" target=\"_blank\">fewer online places to speak<\/a>.<\/p>\n<p>That doesn\u2019t actually improve the social media experience at all. It just gives more of it to the biggest players with the worst track records. 
Sure, a few lawsuits may extract some cash from these companies for failing to be perfect, but it\u2019s not like they can wave a magic wand and not let any \u201ccriminal\u201d content exist. That\u2019s not how any of this works.<\/p>\n<p><strong>Some responses to issues raised by critics<\/strong><\/p>\n<p>When I wrote about this on a brief Bluesky thread, I received hundreds of responses\u2014many quite angry\u2014that revealed some common misunderstandings about my position. I\u2019ll take the blame for not expressing myself as well as I should have, and I\u2019m hoping the points above lay out more clearly how this could backfire in dangerous ways. But, since some of the points were repeated at me over and over again (sometimes with clever insults), I thought it would be good to address some of the arguments directly:<\/p>\n<p><strong>But social media is bad, so if this gets rid of all of it, that\u2019s good.<\/strong>\u00a0I get that many people hate social media (though, there was some irony in people sending those messages to me on social media). But really, what most people hate is what they see on social media. And as I keep explaining, the way we fix that is with more experimentation and more user agency\u2014not handing everything over to Mark Zuckerberg and Elon Musk or the government.<\/p>\n<p><strong>Brazil doesn\u2019t have a First Amendment, so shut up and stop with your colonialist attitude.<\/strong>\u00a0I got this one repeatedly and it\u2019s\u2026 weird? I never suggested Brazil had a First Amendment, nor that it should implement the equivalent. I simply pointed out the inevitable impact of increasing intermediary liability on speech. You can decide (as per the comment above) that you\u2019re fine with this, but it has nothing to do with my feelings about the First Amendment. I wasn\u2019t suggesting Brazil import American free speech laws either. 
I was simply pointing out what consequences this one change to the law might create.<\/p>\n<p><strong>Existing social media is REALLY BAD, so we need to do this.<\/strong>\u00a0This is the classic \u201csomething must be done, this is something, we will do this\u201d response. I\u2019m not saying nothing must be done. I\u2019m just saying this particular approach will have significant consequences that people would do well to think through.<\/p>\n<p><strong>It only applies to content after it\u2019s been adjudicated as criminal.<\/strong>\u00a0I got that one a few times from people. But, from my reading, that\u2019s not true at all. That\u2019s what the\u00a0<em>existing law<\/em>\u00a0already does. These rulings would expand it greatly from what I can tell. Indeed, the article notes how this would change things from existing law:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>The current legislation states social media companies can only be held responsible if they do not remove hazardous content after a court order.<\/em><\/p>\n<p><em>[\u2026.]<\/em><\/p>\n<p><em>Platforms need to be pro-active in regulating content, said Alvaro Palma de Jorge, a law professor at the Rio-based Getulio Vargas Foundation, a think tank and university.<\/em><\/p>\n<p><em>\u201cThey need to adopt certain precautions<\/em>\u00a0<strong><em>that are not compatible with simply waiting for a judge to eventually issue a decision<\/em><\/strong>\u00a0<em>ordering the removal of that content,\u201d Palma de Jorge said.<\/em><\/p>\n<\/blockquote>\n<p><strong>You\u2019re an anarchocapitalist who believes that there should be no laws at all, so fuck off.<\/strong>\u00a0This one actually got sent to me a bunch of times in various forms. I even got added to a block list of anarchocapitalists. 
Really not sure how to respond to that one other than saying \u201cum, no, just look at anything I\u2019ve written for the past two and a half decades.\u201d<\/p>\n<p><strong>America is a fucking mess right now, so clearly what you are pushing for doesn\u2019t work.<\/strong>\u00a0This one was the weirdest of all. Some people sending variations on this pointed to multiple horrific examples of US officials trampling on Americans\u2019 free speech, saying \u201csee? this is what you support!\u201d as if I support those things, rather than consistently fighting back against them. Part of the reason I\u2019m suggesting this kind of liability can be problematic is because I want to\u00a0<em>stop<\/em>\u00a0other countries from heading down a path that gives governments the power to stifle speech like the US is doing now.<\/p>\n<p>I get that many people are\u2014reasonably!\u2014frustrated about the terrible state of the world right now. And many people are equally frustrated by the state of internet discourse. I am too. But that doesn\u2019t mean\u00a0<em>any<\/em>\u00a0solution will help. Many will make things much worse. 
And the solution Brazil is moving towards seems quite likely to make the situation worse there.<\/p>\n<p><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will\/\" rel=\"nofollow noopener\" target=\"_blank\">Why Making Social Media Companies Liable For User Content Doesn\u2019t Do What Many People Think It Will<\/a><\/p>\n<p><strong>More Law-Related Stories From Techdirt:<\/strong><\/p>\n<p><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/scotus-simply-ignores-precedent-rather-than-overruling-it-in-allowing-trump-to-fire-officials-congress-deemed-independent\/\" rel=\"nofollow noopener\" target=\"_blank\">SCOTUS Simply Ignores Precedent, Rather Than Overruling It, In Allowing Trump To Fire Officials Congress Deemed Independent<\/a><br \/><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/feds-arrest-yet-another-democrat-for-the-crime-of-helping-others-under-attack-from-ice\/\" rel=\"nofollow noopener\" target=\"_blank\">Feds Arrest Yet Another Democrat For The Crime Of Helping Others Under Attack From ICE<\/a><br \/><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/surprise-minnesota-killer-used-data-brokers-to-target-and-murder-politicians\/\" rel=\"nofollow noopener\" target=\"_blank\">Surprise: Minnesota Killer Used Data Brokers To Target And Murder Politicians<\/a><\/p>\n<p>The post <a href=\"https:\/\/abovethelaw.com\/2025\/06\/why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will\/\" rel=\"nofollow noopener\" target=\"_blank\">Why Making Social Media Companies Liable For User Content Doesn\u2019t Do What Many People Think It Will<\/a> appeared first on <a href=\"https:\/\/abovethelaw.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Above the Law<\/a>.<\/p>\n<figure class=\"wp-block-image alignright size-full is-resized\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" width=\"724\" height=\"483\" 
src=\"https:\/\/i0.wp.com\/abovethelaw.com\/wp-content\/uploads\/sites\/4\/2025\/06\/GettyImages-1704413556.jpg?resize=724%2C483&#038;ssl=1\" alt=\"\" class=\"wp-image-1163576\" title=\"\"><figcaption><\/figcaption><\/figure>\n<p>Brazil\u2019s Supreme Court appears close to ruling that\u00a0<a href=\"https:\/\/apnews.com\/article\/brazil-social-media-supreme-court-user-content-33312c07ddfae598f4d673d1141d6a4f\" rel=\"nofollow noopener\" target=\"_blank\">social media companies should be liable for content hosted on their platforms<\/a>\u2014a move that appears to represent a significant departure from the country\u2019s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it\u2019s likely to backfire in ways that make the underlying problems worse, not better.<\/p>\n<p>The core issue is that most people fundamentally misunderstand both how content moderation works and what drives platform incentives. There\u2019s a persistent myth that companies could achieve near-perfect moderation if they just \u201ctried harder\u201d or faced sufficient legal consequences. This ignores the mathematical reality of what happens when you attempt to moderate billions of pieces of content daily, and it misunderstands how liability actually changes corporate behavior.<\/p>\n<p>Part of the confusion, I think, stems from people\u2019s failure to understand\u00a0<a href=\"https:\/\/www.techdirt.com\/2019\/11\/20\/masnicks-impossibility-theorem-content-moderation-scale-is-impossible-to-do-well\/\" rel=\"nofollow noopener\" target=\"_blank\">the impossibility of doing content moderation well at scale<\/a>. There is a very wrong assumption that social media platforms could do perfect (or very good) content moderation if they just tried harder or had more incentive to do better. 
Without denying that\u00a0<em>some<\/em>\u00a0entities (*cough* ExTwitter *cough*) have made it clear they don\u2019t care at all, most others do try to get this right, and\u00a0<a href=\"https:\/\/www.techdirt.com\/2025\/02\/24\/john-olivers-content-moderation-episode-isnt-just-funny-its-absolutely-accurate\/\" rel=\"nofollow noopener\" target=\"_blank\">discover over and over again<\/a>\u00a0how impossible that is.<\/p>\n<p>Yes, we can all point to examples of platform failures that are depressing and seem obvious that things should have been done differently, but the failures are not there because \u201cthe laws don\u2019t require it.\u201d The failures are because it\u2019s impossible to do this well at scale. Some people will always disagree with how a decision comes out, and other times there are no \u201cright\u201d answers. Also, sometimes, there\u2019s just too much going on at once, and no legal regime in the world can possibly fix that.<\/p>\n<p>Given all of that, what we really want are\u00a0<em>better overall incentives<\/em>\u00a0for the companies to do better. Some people (again, falsely) seem to think the only incentives are regulatory. But that\u2019s not true. Incentives come in all sorts of shapes and sizes\u2014and much more powerful than regulations are things like\u00a0<em>the users themselves,<\/em>\u00a0<a href=\"https:\/\/www.techdirt.com\/2022\/11\/21\/twitters-former-head-of-trust-safety-explains-why-for-all-his-billions-elon-musk-cant-magically-decide-how-twitter-will-work\/\" rel=\"nofollow noopener\" target=\"_blank\">along with advertisers and other business partners<\/a>.<\/p>\n<p>Importantly, content moderation is also a constantly moving and evolving issue. People who are trying to game the system are constantly adjusting. New kinds of problems arise out of nowhere. 
If you\u2019ve never\u00a0<a href=\"https:\/\/moderatormayhem.engine.is\/\" rel=\"nofollow noopener\" target=\"_blank\">done content moderation<\/a>, you have no idea how many \u201cedge cases\u201d there are. Most people\u2014incorrectly\u2014assume that most decisions are easy calls and you may occasionally come across a tougher one.<\/p>\n<p>But there are constant edge cases, unique scenarios, and unclear situations. Because of this, every service provider\u00a0<strong>will make many, many mistakes<\/strong>\u00a0every day. There\u2019s no way around this. It\u2019s partly the law of large numbers. It\u2019s partly the fact that humans are fallible. It\u2019s partly the fact that decisions need to be made quickly without full information. And a lot of it is that those making the decisions just don\u2019t know what the \u201cright\u201d approach is.<\/p>\n<p>The way to get better is\u00a0<em>constant adjusting<\/em>\u00a0and experimenting. Moderation teams need to be adaptable. They need to be able to respond quickly. And they need the freedom to experiment with new approaches to deal with bad actors trying to abuse the system.<\/p>\n<p><strong>Putting legal liability on the platform makes all of that more difficult<\/strong><\/p>\n<p>Now, here\u2019s where my concerns about the potential ruling in Brazil get to: if there is\u00a0<em>legal liability,<\/em>\u00a0it creates a scenario that is actually\u00a0<em>less likely<\/em>\u00a0to lead to good outcomes. First, it effectively requires companies to replace moderators with lawyers. If your company is now making decisions that come with significant legal liability, that likely requires a much higher type of expertise. 
Even worse, it\u2019s creating a job that most people with law degrees are unlikely to want.<\/p>\n<p>Every social media company has at least some lawyers who work with their trust &amp; safety teams to review the really challenging cases, but when legal liability could accrue for every decision, it becomes much, much worse.<\/p>\n<p>More importantly, though, it makes it\u00a0<em>way more difficult<\/em>\u00a0for trust &amp; safety teams to experiment and adapt. Once things include the potential of legal liability, then it becomes much more important for the companies to have some sort of plausible deniability\u2014some way to express to a judge \u201clook, we\u2019re doing the same thing we always have, the same thing every company has always done\u201d to cover themselves in court.<\/p>\n<p>But that means that these trust &amp; safety efforts get hardened into place, and teams are less able to adapt or to experiment with better ways to fight evolving threats. It\u2019s a disaster for companies that want to do the right thing.<\/p>\n<p>The next problem with such a regime is that it creates a real heckler\u2019s veto-type regime. If\u00a0<em>anyone<\/em>\u00a0complains about\u00a0<em>anything,<\/em>\u00a0companies are quick to take it down, because the risk of ruinous liability just isn\u2019t worth it. And we now\u00a0<a href=\"https:\/\/cyberlaw.stanford.edu\/blog\/2021\/02\/empirical-evidence-over-removal-internet-companies-under-intermediary-liability-laws\/\" rel=\"nofollow noopener\" target=\"_blank\">have\u00a0<em>decades<\/em>\u00a0of evidence<\/a>\u00a0showing that increasing liability on platforms leads to massive overblocking of information. I recognize that some people feel this is acceptable collateral damage\u2026 right up until it impacts them.<\/p>\n<p>This dynamic should sound familiar to anyone who\u2019s studied internet censorship. 
It\u2019s exactly how China\u2019s Great Firewall originally operated\u2014not through explicit rules about what was forbidden, but by\u00a0<a href=\"https:\/\/www.techdirt.com\/2016\/05\/04\/why-growing-unpredictability-chinas-censorship-is-feature-not-bug\/\" rel=\"nofollow noopener\" target=\"_blank\">telling service providers<\/a>\u00a0that the punishment would be severe if anything \u201cbad\u201d got through. The government created deliberate uncertainty about where the line was, knowing that companies would respond with massive overblocking to avoid potentially ruinous consequences. The result was far more comprehensive censorship than direct government mandates could have achieved.<\/p>\n<p>Brazil\u2019s proposed approach follows this same playbook, just with a different enforcement mechanism. Rather than government officials making vague threats, it would be civil liability creating the same incentive structure: when in doubt, take it down, because the cost of being wrong is too high.<\/p>\n<p>People may be okay with that, but I would think that in a country with a history of dictatorships and censorship, they would like to be a bit more cautious before handing the government a similarly powerful tool of suppression.<\/p>\n<p>It\u2019s especially disappointing in Brazil, which a decade ago put together\u00a0<a href=\"https:\/\/www.techdirt.com\/2014\/03\/27\/brazils-marco-civil-internet-civil-rights-law-finally-passes-with-key-protections-largely-intact\/\" rel=\"nofollow noopener\" target=\"_blank\">the Marco Civil<\/a>, an internet civil rights law that was designed to protect user rights and civil liberties\u2014including around intermediary liability. The Marco Civil remains an example of more thoughtful internet lawmaking (way better than we\u2019ve seen almost anywhere else, including the US). 
So this latest move feels like backsliding.<\/p>\n<p>Either way, the longer-term fear is that this would actually limit the ability of smaller, more competitive social media players to operate in Brazil, as it will be way too risky. The biggest players (Meta) aren\u2019t likely to leave, but they have buildings full of lawyers who can fight these lawsuits (and often, likely, win). A study we conducted a few years back detailed how as countries ratcheted up their intermediary liability, the end result was, repeatedly,\u00a0<a href=\"https:\/\/copia.is\/library\/dont-shoot-the-message-board\/\" rel=\"nofollow noopener\" target=\"_blank\">fewer online places to speak<\/a>.<\/p>\n<p>That doesn\u2019t actually improve the social media experience at all. It just gives more of it to the biggest players with the worst track records. Sure, a few lawsuits may extract some cash from these companies for failing to be perfect, but it\u2019s not like they can wave a magic wand and not let any \u201ccriminal\u201d content exist. That\u2019s not how any of this works.<\/p>\n<p><strong>Some responses to issues raised by critics<\/strong><\/p>\n<p>When I wrote about this on a brief Bluesky thread, I received hundreds of responses\u2014many quite angry\u2014that revealed some common misunderstandings about my position. I\u2019ll take the blame for not expressing myself as clearly as I should have and I\u2019m hoping the points above lay out the argument more clearly regarding how this could backfire in dangerous ways. But, since some of the points were repeated at me over and over again (sometimes with clever insults), I thought it would be good to address some of the arguments directly:<\/p>\n<p><strong>But social media is bad, so if this gets rid of all of it, that\u2019s good.<\/strong>\u00a0I get that many people hate social media (though, there was some irony in people sending those messages to me on social media). 
But, really what most people hate is what they see on social media. And as I keep explaining, the way we fix that is with more experimentation and more user agency\u2014not handing everything over to Mark Zuckerberg and Elon Musk or the government.<\/p>\n<p><strong>Brazil doesn\u2019t have a First Amendment, so shut up and stop with your colonialist attitude.<\/strong>\u00a0I got this one repeatedly and it\u2019s\u2026 weird? I never suggested Brazil had a First Amendment, nor that it should implement the equivalent. I simply pointed out the inevitable impact of increasing intermediary liability on speech. You can decide (as per the comment above) that you\u2019re fine with this, but it has nothing to do with my feelings about the First Amendment. I wasn\u2019t suggesting Brazil import American free speech laws either. I was simply pointing out what the consequences of this one change to the law might create.<\/p>\n<p><strong>Existing social media is REALLY BAD, so we need to do this.<\/strong>\u00a0This is the classic \u201csomething must be done, this is something, we will do this\u201d response. I\u2019m not saying nothing must be done. I\u2019m just saying this particular approach will have significant consequences that it would help people to think through.<\/p>\n<p><strong>It only applies to content after it\u2019s been adjudicated as criminal.<\/strong>\u00a0I got that one a few times from people. But, from my reading, that\u2019s not true at all. That\u2019s what the\u00a0<em>existing law<\/em>\u00a0was. These rulings would expand it greatly from what I can tell. 
Indeed, the article notes how this would change things from existing law:<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>The current legislation states social media companies can only be held responsible if they do not remove hazardous content after a court order.<\/em><\/p>\n<p><em>[\u2026.]<\/em><\/p>\n<p><em>Platforms need to be pro-active in regulating content, said Alvaro Palma de Jorge, a law professor at the Rio-based Getulio Vargas Foundation, a think tank and university.<\/em><\/p>\n<p><em>\u201cThey need to adopt certain precautions<\/em>\u00a0<strong><em>that are not compatible with simply waiting for a judge to eventually issue a decision<\/em><\/strong>\u00a0<em>ordering the removal of that content,\u201d Palma de Jorge said.<\/em><\/p>\n<\/blockquote>\n<p><strong>You\u2019re an anarchocapitalist who believes that there should be no laws at all, so fuck off.<\/strong>\u00a0This one actually got sent to me a bunch of times in various forms. I even got added to a block list of anarchocapitalists. Really not sure how to respond to that one other than saying \u201cum, no, just look at anything I\u2019ve written for the past two and a half decades.\u201d<\/p>\n<p><strong>America is a fucking mess right now, so clearly what you are pushing for doesn\u2019t work.<\/strong>\u00a0This one was the weirdest of all. Some people sending variations on this pointed to multiple horrific examples of US officials trampling on Americans\u2019 free speech, saying \u201csee? this is what you support!\u201d as if I support those things, rather than consistently fighting back against them. 
Part of the reason I\u2019m suggesting this kind of liability can be problematic is that I want to\u00a0<em>stop<\/em>\u00a0other countries from heading down a path that gives governments the power to stifle speech like the US is doing now.<\/p>\n<p>I get that many people are\u2014reasonably!\u2014frustrated by the terrible state of the world right now. And many people are equally frustrated by the state of internet discourse. I am too. But that doesn\u2019t mean\u00a0<em>any<\/em>\u00a0solution will help. Many will make things much worse. And the solution Brazil is moving towards seems quite likely to make the situation worse there.<\/p>\n<p><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/why-making-social-media-companies-liable-for-user-content-doesnt-do-what-many-people-think-it-will\/\" rel=\"nofollow noopener\" target=\"_blank\">Why Making Social Media Companies Liable For User Content Doesn\u2019t Do What Many People Think It Will<\/a><\/p>\n<p><strong>More Law-Related Stories From Techdirt:<\/strong><\/p>\n<p><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/scotus-simply-ignores-precedent-rather-than-overruling-it-in-allowing-trump-to-fire-officials-congress-deemed-independent\/\" rel=\"nofollow noopener\" target=\"_blank\">SCOTUS Simply Ignores Precedent, Rather Than Overruling It, In Allowing Trump To Fire Officials Congress Deemed Independent<\/a><br \/><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/feds-arrest-yet-another-democrat-for-the-crime-of-helping-others-under-attack-from-ice\/\" rel=\"nofollow noopener\" target=\"_blank\">Feds Arrest Yet Another Democrat For The Crime Of Helping Others Under Attack From ICE<\/a><br \/><a href=\"https:\/\/www.techdirt.com\/2025\/06\/18\/surprise-minnesota-killer-used-data-brokers-to-target-and-murder-politicians\/\" rel=\"nofollow noopener\" target=\"_blank\">Surprise: Minnesota Killer Used Data Brokers To Target And Murder 
Politicians<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Brazil\u2019s Supreme Court appears close to ruling that\u00a0social media companies should be liable for content hosted on their platforms\u2014a move that appears to represent a significant departure from the country\u2019s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it\u2019s likely to backfire in ways that make [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":123392,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[16],"tags":[],"class_list":["post-123401","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-above_the_law"],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/xira.com\/p\/wp-content\/uploads\/2025\/06\/GettyImages-1704413556-bQEP3M.jpeg?fit=724%2C483&ssl=1","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/123401","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/comments?post=123401"}],"version-history":[{"count":0,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/posts\/123401\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media\/123392"}],"wp:attachment":[{"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/media?parent=123401"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/categories?post=123401"},{"ta
xonomy":"post_tag","embeddable":true,"href":"https:\/\/xira.com\/p\/wp-json\/wp\/v2\/tags?post=123401"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}