Mastodon, an alternative social network to Twitter, has a serious problem with child sexual abuse material (CSAM), according to researchers from Stanford University. In just two days, the researchers found over 100 instances of known CSAM across over 325,000 posts on Mastodon. They also found hundreds of posts containing CSAM-related hashtags and links pointing to CSAM trading and the grooming of minors. One Mastodon server was even taken down for a period of time due to CSAM being posted. The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

  • @Mandy@beehaw.org · 19 points · 9 months ago

    Pedos that got banned from one platform turn to other platforms that haven’t banned them yet.

    In other news: the sky is blue

    • @jarfil@beehaw.org · 2 points · 9 months ago

      While white knights propose ways to control everyone, everywhere, every time, in the name of catching the pedos, who will just hop to the next platform (or already have).

  • Cylinsier · 9 points · 9 months ago

    The researchers suggest that decentralized networks like Mastodon need to implement more robust moderation tools and reporting mechanisms to address the prevalence of CSAM.

    I agree, but who’s going to pay for it? Those aren’t just freely available additions to any application that you only need to toggle on.

    • @zephyrvs@lemmy.ml · 2 points · 9 months ago (edited)

      The researchers can’t be taken seriously if they don’t acknowledge that you can’t force free software to do something its users don’t want it to do.

      Even if we started way down at the stack and we added a CSAM hash scanner to the Linux kernel, people would just fork the kernel and use their own build without it.

      Same goes for nginx or any other web server or web proxy. Same goes for Tor. Same goes for Mastodon or any other Fedi/ActivityPub implementation.

      It. Does. Not*. Work.

      * Please, prove me wrong; I’m not all-knowing. But short of total surveillance, I see no technical solution to this.

    • @abhibeckert@beehaw.org · 5 points · 9 months ago (edited)

      I agree, but who’s going to pay for it?

      How about the police/the taxpayer?

      If university researchers can find the stuff, then police can find it too. There should be an established way to flag the user (or even the entire instance) so that content can be removed from the fediverse while simultaneously asking for all data that is available to try to catch the criminals.

      And of course, if regular users come across anything illegal they will report it too, and it should be removed quickly (I’d hope immediately in many cases, especially if the post was by a brand new/untrusted account).

  • alyaza [they/she] (mod) · 10 points · 9 months ago

    not surprised at all. this is a growing pain here too, because this was previously something handled invisibly by platforms, and federation makes it fall to individual sysadmins and whoever they have on staff. the tools for this stuff are, in general, not here yet–and as people have noted, there are potential conflicts between those tools and some of the principles of federation that can’t be totally handwaved.

  • 🦊 OneRedFox 🦊 · 28 points · 9 months ago

    Yeah, I recall that the Japanese instances have a big problem with that shit. As for the rest of us, Facebook actually open-sourced some efficient hashing algorithms for dealing with CSAM; Fediverse platforms could implement these, which would just leave the issue of getting an image hash database to check against. All the big platforms could probably chip in to get access to one of those private databases and then release a public service for use by the ecosystem.
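
    A rough sketch of how such matching could work. Note that this is not Facebook’s actual algorithm (their open-sourced PDQ is a perceptual hash designed to survive resizing and re-encoding); the toy database below is made up, and exact SHA-256 matching is used purely for illustration:

```python
import hashlib

# Toy stand-in for a shared database of known-bad image hashes.
# (Real deployments use perceptual hashes such as Facebook's
# open-source PDQ so that altered copies still match; the entry
# below is just the SHA-256 of the bytes b"abc".)
KNOWN_BAD_HASHES = {
    "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
}

def should_block(image_bytes: bytes) -> bool:
    """Return True if the uploaded bytes match the hash database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(should_block(b"abc"))        # matches the toy database -> True
print(should_block(b"cat photo"))  # unknown content -> False
```

The appeal of the perceptual-hash approach is that instances never need to host or view the material itself; they only compare opaque hashes against a list maintained by a trusted clearinghouse.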

    • @zephyrvs@lemmy.ml · 6 points · 9 months ago

      That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it wasn’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

      We’re not gonna fix society using tech unless we’re all hooked up to some all-knowing AI under government control.

      • 🦊 OneRedFox 🦊 · 5 points · 9 months ago

        That’d be useless though, because first, it’d probably be opt-in via configuration settings, and even if it wasn’t, people would just fork and modify the code base or simply switch to another ActivityPub implementation.

        No it wouldn’t, because it’d still be significantly easier for instances to deal with CSAM content with this functionality built into the platforms. And I highly doubt there’s going to be a mass migration from any Fediverse platform that implements such a feature (though honestly I’d be down to defederate with any instance that takes serious issue with this).

        • @zephyrvs@lemmy.ml · 1 point · 9 months ago

          And the instances who want to engage with that material would all opt for the fork and be done with it. That’s all I meant.

  • @while1malloc0@beehaw.org · 60 points · 9 months ago (edited)

    While the study itself is a good read and I agree with its conclusions—Mastodon and decentralized social media need better moderation tools—it’s hard not to read the Verge headline as misleading. One of the study’s authors gives more context here: https://hachyderm.io/@det/110769470058276368. Basically, most of the hits came from a large Japanese instance that no one federates with; the author even calls out that the blunt instrument most Mastodon admins use is to blanket-defederate from instances hosted in Japan due to their more lax (than the US) laws around CSAM. But the headline seems to imply that there’s a giant seedy underbelly to places like mastodon.social[1] that is rife with abuse material. I suppose that’s a marketing problem of federated software in general.

    1. There is a seedy underbelly of mainstream Mastodon instances, but it’s mostly people telling you how you’re supposed to use Mastodon if you previously used Twitter.

    • @jherazob@beehaw.org · 12 points · 9 months ago (edited)

      The person outright rejects defederation as a solution when it IS the solution: if an instance is in favor of this kind of thing, you don’t want to federate with them, period.

      I also find the number of calls for a “Fediverse police” in that thread worrying. Scanning every image that gets uploaded to your instance with a third-party tool is an issue too: on one hand, you definitely don’t want this kind of shit to even touch your servers, and on the other, you don’t want anybody dictating that, say, anti-union or similar memes get flagged and denounced, with the person who made them marked, targeted, and receiving a nice Pinkerton visit.

      This is a complicated problem.

      Edit: I see somebody suggested checking the observations against the common and well-used Mastodon blocklists, to see if the shit is contained on defederated instances, and the author said this was something they wanted to check, so I hope there’s a follow-up.

      • @Pseu@beehaw.org · 2 points · 9 months ago (edited)

        The person outright rejects defederation as a solution when it IS the solution

        It’s the solution in the sense that it removes the material from view of users on the mainstream instances. It is not a solution to the overall problem of CSAM and the child abuse that creates such material. There is an argument to be made that that is the only responsibility of instance admins, and that anything past that is the responsibility of law enforcement. This is sensible, but it invites law enforcement to start overtly trawling the Fediverse for offending content and creates an uncomfortable situation for admins and users, as they will go after admins who simply do not have the tools to effectively monitor for CSAM.

        Defederation also obviously does not prevent users of one’s own instance from posting CSAM. Even unknowingly hosting CSAM can easily lead to the admins being prosecuted and the instance taken down. Section 230 does not apply to material that is illegal on the federal level, and SESTA requires removal of material that violates even state-level sex trafficking laws.

  • FIash Mob #5678 · 6 points · 9 months ago (edited)

    Mastodon.art doesn’t.

    And the beauty of Mastodon is you can block an entire instance, as can your admin, when something awful is posted. Mastodon even has a hashtag they use as an alert for this kind of thing. (#Fediblock)

    • aes <she/her> · 13 points · 9 months ago

      This is a whataboutist counterpoint at best. Universities and their researchers are not a monolith.

    • @sanzky@beehaw.org · 2 points · 9 months ago

      This is just bad press. The actual study is quite good and offers solid recommendations on how to improve moderation on the fediverse.

  • sub_o · 3 points · 9 months ago

    I think some of the problematic instances have been defederated; IIRC there’s a large Japanese instance that was defederated a long time ago due to child abuse content. Still, since I’ve been seeing increases in hate speech and dog-whistling misogyny and homophobia on some instances, I won’t be surprised if CSAM has been traded under our noses.

    The main issue is that, with so many users nowadays and small moderation teams, especially on the larger instances, it’s hard to moderate and tackle CSAM problems effectively. I really wish larger instances would limit user registrations or start splitting off into smaller, more manageable ones.

    Also, since this material is being traded using certain hashtags, blocking those hashtags might not be a bad idea.
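
    A minimal sketch of that hashtag-blocking idea (the blocklist entries below are placeholders, and a real instance would wire a check like this into its posting and federation pipeline rather than a standalone function):

```python
import re

# Placeholder blocklist; admins would maintain this from user reports.
BLOCKED_HASHTAGS = {"exampletag1", "exampletag2"}

HASHTAG_RE = re.compile(r"#(\w+)")

def post_allowed(text: str) -> bool:
    """Reject posts containing any blocked hashtag (case-insensitive)."""
    tags = {tag.lower() for tag in HASHTAG_RE.findall(text)}
    return tags.isdisjoint(BLOCKED_HASHTAGS)

print(post_allowed("nice sunset #photography"))  # no blocked tags -> True
print(post_allowed("trade here #ExampleTag1"))   # blocked tag -> False
```

Of course, hashtag filters are easy to evade with misspellings and new tags, so they would complement, not replace, hash-based scanning and reporting.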