• Concave1142@lemmy.world · 1 month ago

    Until the AI companies find a way around it. Love the idea, so hopefully it causes at least 3 days of struggle for the AI crawlers.

    Having said that… can someone else please put this in place too, so we don’t end up with Cloudflare hosting everything, one intern away from a global outage? Please? Pretty please?

    • orclev@lemmy.world · 1 month ago

      The problem is that the biggest service Cloudflare provides is DDoS protection, and doing that requires having more bandwidth available than your attacker. Having enough bandwidth to withstand modern botnet-powered DDoS attacks is ridiculously expensive (and bandwidth is a finite resource; there’s only so much backbone infrastructure). Basically, it’s economically infeasible for multiple companies to provide the service Cloudflare does. You might be able to get away with two, but it’s unlikely you could manage more than that without some of them going bankrupt.
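
      For a rough sense of scale, here’s a back-of-envelope sketch in Python. Every number is an illustrative assumption, not a real quote: recent record attacks have been reported in the multi-Tbps range, and wholesale transit pricing varies widely.

          # Back-of-envelope cost of provisioning enough transit capacity to
          # absorb a large volumetric DDoS attack. All figures are assumptions.
          ATTACK_TBPS = 5.0            # assumed peak attack size, in terabits/s
          COST_PER_MBPS_MONTH = 0.10   # assumed wholesale transit, USD/Mbps/month

          attack_mbps = ATTACK_TBPS * 1_000_000            # Tbps -> Mbps
          monthly_cost = attack_mbps * COST_PER_MBPS_MONTH

          print(f"~${monthly_cost:,.0f}/month")            # ~$500,000/month
          # ...and that is just raw transit at one attack's peak, before
          # scrubbing hardware, peering agreements, and global PoPs.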

      • acosmichippo@lemmy.world · 1 month ago

        When a critical service is only economical for one business to provide (a natural monopoly), that’s when government should be stepping in.

      • Kowowow@lemmy.ca · 1 month ago

        I wonder if it would be a good investment for a country to build its own, then down the line expand to sell the same service to others.

  • yesman@lemmy.world · 1 month ago

    This is not about stopping bot-scrapers; it’s about charging them.

    • snooggums@lemmy.world · 1 month ago

      Hopefully people will price their content out of reach of the bot-scrapers, effectively stopping them.
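
      For anyone wondering what “charging them” looks like mechanically, here’s a minimal sketch of a pay-per-crawl gate built on HTTP 402 (a real, long-dormant status code). The Crawler-Payment-Token header and the user-agent check are simplifying assumptions of mine; Cloudflare’s actual mechanism is more involved.

          # Minimal pay-per-crawl gate, standard library only. HTTP 402
          # "Payment Required" is real; the payment header is hypothetical.
          from http.server import BaseHTTPRequestHandler, HTTPServer

          AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")  # real AI user agents

          class PayPerCrawlHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  ua = self.headers.get("User-Agent", "")
                  is_ai_bot = any(bot in ua for bot in AI_CRAWLERS)
                  has_paid = self.headers.get("Crawler-Payment-Token")  # hypothetical
                  if is_ai_bot and not has_paid:
                      self.send_response(402)  # Payment Required
                      self.end_headers()
                      self.wfile.write(b"Payment required to crawl this content.\n")
                  else:
                      self.send_response(200)
                      self.end_headers()
                      self.wfile.write(b"Hello, human (or paying crawler).\n")

          if __name__ == "__main__":
              HTTPServer(("", 8000), PayPerCrawlHandler).serve_forever()

      Set the price high enough and “charging” and “stopping” become the same thing.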

  • chunes@lemmy.world · 1 month ago

    I can’t wait to be denied access to websites because of it. Even more than I already am, that is.

  • Imgonnatrythis@sh.itjust.works · 1 month ago

    I really wish the answer were a legally enforced robots.txt that let any organization or individual posting data to the web easily spell out what the permissions are. I often use an LLM as a search engine, and most of the time the citations are pretty decent; I use those to link out to the source content. I run a small blog, and I’d love to get indexed by an LLM, not blocked, as long as I was assured a reference link for any content used and had some legal recourse if I found my data was being misused. I don’t love the answer being another mega-corporation posing as a white knight, looking to skim some money off the “loophole” that is AI copyright infringement.
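
    For reference, robots.txt today is purely advisory. A well-behaved crawler checks it like this (Python standard library; GPTBot is a real AI crawler user-agent string, and the URL is a placeholder):

        # How a compliant crawler consults robots.txt. Nothing but goodwill
        # makes it obey the answer; that's the missing "legally enforced" part.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.set_url("https://example.com/robots.txt")  # placeholder site
        rp.read()

        if rp.can_fetch("GPTBot", "https://example.com/blog/post-1"):
            print("crawl permitted")
        else:
            print("crawl disallowed (on the honor system)")

    The attribution terms I actually want (a required reference link, legal recourse for misuse) have no standard robots.txt syntax at all; any directive expressing them would be a hypothetical extension.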