cross-posted from: https://lemmy.zip/post/63748731

Ashley MacIsaac, who is seeking $1.5m in civil lawsuit, says inaccurate information led to concert cancellation

  • Warl0k3@lemmy.world · 49 points · 6 days ago

    Google’s AI Overview about MacIsaac now includes the statement: “In late 2025 and 2026, he made headlines for taking legal action against Google.”

    lmao fuck off google.

  • hOrni@lemmy.world · 39 points · 6 days ago

    Now they'll accuse random people to draw attention away from the pedo in chief.

    • EmpathicVagrant@lemmy.world · 8 points · 6 days ago

      Semantic satiation. If you say it enough, it loses meaning and becomes normalized to people. That's part of the intended effect when they accuse others of what they're doing.

  • leadore@lemmy.world · 5 points · 6 days ago

    Only $1.5 million? That's peanuts, especially to Google, and especially for such heinous defamation. It should be at least $100 million.

  • merc@sh.itjust.works · 3 points · 5 days ago

    Internet companies have become way too used to hiding behind Section 230 of the Communications Decency Act, which lets them say that anything on their sites was created by a user, not by them, and that they're therefore not liable. This made sense when the Internet was just chronological forums and operators had no way to know that something defamatory or illegal had been posted.

    They've managed to maintain that fiction even when hand-picking the content they want to show. Now they have algorithms that scan everything people post, even videos, decide which content will generate the most views, clicks, and engagement, and choose to prioritize that content, even if it is illegal.

    But now, with AI, they're actually the ones generating the content. There's no user to hide behind anymore. Let's hope the courts rule that Section 230 doesn't protect them here, and that they're liable for false claims made by their AI just as they'd be liable if an employee or the boss had posted them.