Elon Musk’s X failed to block a California law that requires social media companies to disclose their content-moderation policies.

U.S. District Judge William Shubb rejected the company’s request in an eight-page ruling on Thursday.

“While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law,” Shubb wrote, per Reuters.

The legislation, signed into law in 2022 by California Gov. Gavin Newsom, requires social media companies to publicly disclose their policies regarding hate speech, disinformation, harassment and extremism on their platforms. They must also report data on their enforcement of these policies.

  • Drivebyhaiku@lemmy.world · 10 months ago

    It would appear so, but anything to do with digital spaces is murky.

    We kind of treat digital space the way we do physical space: since the digital space is owned, the people who own it get to set the rules and policies which govern that space. But just like a shopping mall can’t eject you for the sole reason of being a specific race, certain justifications within moderation policies are theoretically grounds for constitutional protections.

    However, it is a fucking mess to try and use a court to actually enforce these laws the way we do in physical spaces. Here in Canada, for instance, uttering threats, performing hate speech to a crowd, and scribbling swastikas on things are illegal. But do that over a video game chat or some form of anonymizing social media and suddenly you’re dealing with citizens of other countries with different laws, plus a layer of difficulty in determining the source that would require a warrant to obtain. Even if both people are Canadian, you would need a court date, documentation that the law was appropriately followed in obtaining all your evidence, proof of guilt, and a decision on where the defendant must physically show up to defend themselves. And even if you do prove assault by uttering threats, or hate speech violations… they would probably just get a fine or community service.

    Nobody has time for that.

    So if you want to enforce the protections of these laws, either you hold the platform responsible for internal policing of the law and determine whether it is discharging its duty properly, by giving citizens a means to check for and report violations of its own internal policies for later review and a means to pursue civil cases… or you go hands off and give a platform’s users the means to check and make informed choices based on their own personal standards and ethical principles. Every moderation policy leaves a burden on someone; the question is who.

    So it might be a transparency law, but it also opens the door for applying constitutional protections to users by holding the business accountable if there are glaring oversights in their digital fiefdoms… but such laws are basically inert until someone tries to challenge them.