That’s just BattleBots with a different name.
You’re not wrong.
Ok, I now need a screensaver that I can tie to a Cloudflare instance that visualizes the generated “maze” and a bot’s attempts to get out.
You should probably just let an AI generate that.
They should wire the actions and reactions of each system into actual battle bots and then televise the event for our entertainment.
Then get bored when it devolves into a wedge meta.
Somehow one of them still invents Tombstone.
Putting a chopped-down lawnmower blade in front of a thing and having it spin at hard-drive speeds is honestly kinda terrifying…
This is some fucking stupid situation: we finally got somewhat faster internet, and now these bots messing with each other are hogging the bandwidth.
Especially since the solution I cooked up for my site works just fine and took a lot less work. It’s simply to identify the incoming requests from these damn bots – which is not difficult, since they ignore all directives and sanity and try to slam your site with like 200+ requests per second, which makes ’em easy to spot – and simply IP-ban them. This is considerably simpler and doesn’t require an entire nuclear-plant-powered AI to combat the opposition’s nuclear-plant-powered AI.
In fact, anybody who doesn’t exhibit a sane crawl rate gets blocked from my site automatically. For a while, most of them were coming from Russian IP address zones for some reason. These days Amazon is the worst offender – I guess their Rufus AI or whatever the fuck it is tries to pester other retail sites to “learn” about products rather than sticking to its own domain.
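For anyone who wants to roll their own, here’s a minimal sketch of that kind of rate-based auto-ban – the window, the threshold, and the iptables call are all illustrative assumptions, not my actual setup:

```python
# Minimal sketch of a rate-based auto-ban, assuming a Python app that
# sees the real client IP (e.g. behind a reverse proxy). The window,
# threshold, and iptables call are illustrative, not a real deployment.
import subprocess
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # sliding window to measure crawl rate over
MAX_REQUESTS = 200    # anything past this in one window gets banned
recent = defaultdict(deque)  # ip -> timestamps of recent requests
banned = set()

def ban(ip: str) -> None:
    banned.add(ip)
    # Hypothetical enforcement: drop all further packets from this IP.
    subprocess.run(["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"])

def allow_request(ip: str) -> bool:
    """Record one request from `ip`; return False once it's banned."""
    if ip in banned:
        return False
    now = time.time()
    q = recent[ip]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()  # forget hits that fell out of the window
    if len(q) > MAX_REQUESTS:
        ban(ip)
        return False
    return True
```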
Fuck 'em. Route those motherfuckers right to /dev/null.
Geez, that’s a lot of requests!
It sure is. Needless to say, I noticed it happening.
The only problem with applying that solution to generic websites is that schools and institutions can have many legitimate users behind one IP address, and many sites don’t want to risk accidentally blocking them.
That’s fair in those applications. I only run an e-commerce website, though, so that doesn’t come into play.
It’s what I’ve been saying about technology for the past decade or two: we’ve hit an upper limit to our technological development, and that limit is individual human greed. Small groups of people, or massively wealthy people, hinder or delay any further development because they’re always trying to find ways to make money off it, prevent others from making money off it, or monopolize an area or section of society. Capitalism is literally our world’s bottleneck, and it’s being choked off by an oddly shaped gold bar at this point.
Lol, website traffic accounts for like 1% of the bandwidth budget. One Netflix movie is like 20k web pages.
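Back-of-envelope on that, assuming ~6 GB for an HD movie and ~300 KB per page load (both rough guesses, not measured figures):

```python
# Rough sanity check: assuming ~6 GB for an HD movie and ~300 KB
# transferred per web page (both round-number guesses).
movie = 6 * 1024**3          # bytes in one movie
page = 300 * 1024            # bytes in one page load
print(movie // page)         # -> 20971, i.e. roughly 20k pages
```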
I have no idea why the makers of LLM crawlers think it’s a good idea to ignore bot rules. The rules are there for a reason and the reasons are often more complex than “well, we just don’t want you to do that”. They’re usually more like “why would you even do that?”
Ultimately you have to trust what the site owners say. The reason why, say, your favourite search engine returns the relevant Wikipedia pages and not bazillion random old page revisions from ages ago is that Wikipedia said “please crawl the most recent versions using canonical page names, and do not follow the links to the technical pages (including history)”. Again: Why would anyone index those?
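And honoring those rules is trivial – Python’s standard library will parse robots.txt for you, so ignoring it is a deliberate choice. A minimal sketch, with a placeholder bot name:

```python
# Minimal sketch of a crawler that actually honors robots.txt, using
# Python's standard-library parser. "ExampleBot/1.0" is a placeholder.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://en.wikipedia.org/robots.txt")
rp.read()  # fetch and parse the site's crawl rules

url = "https://en.wikipedia.org/w/index.php?title=Example&action=history"
if rp.can_fetch("ExampleBot/1.0", url):
    print("allowed to fetch", url)
else:
    print("site says no -- skip it")  # old revisions, history pages, etc.
```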
Because you are coming at this from the perspective of a reasonable person.
These people are billionaires who expect to get everything for free. Rules are for the plebs; just take it already.
Because it takes work to obey the rules, and you get less data for it. A competitor who ignores them could grab more and gain some vague advantage from it.
I wouldn’t be surprised if the crawlers they use are bare-basic utilities set up to just grab everything, without worrying about rules and the like.
So the world is now wasting energy and resources to generate AI content in order to combat AI crawlers, by making them waste more energy and resources. Great! 👍
The energy cost of inference is overstated. Small models, or “sparse” models like DeepSeek, are not expensive to run. Training is a one-time cost that still pales in comparison to, like, making aluminum.
Doubly so once inference goes more on-device.
Basically, only Altman and his tech-bro acolytes want AI to be cost-prohibitive so he can have a monopoly. Also, he’s full of shit, and everyone in the industry knows it.
AI as it’s implemented has plenty of enshittification, but the energy cost is kind of a red herring.
And soon the already AI-flooded net will be filled with so much nonsense that it becomes impossible for anyone to get any real work done. Sigh.
Some of us are only here to crank hog.
AROOO!
I guess this is what the first iteration of the Blackwall looks like.
“I used the AI to destroy the AI”
We had to kill the internet, to save the internet.
I’m imagining a sci-fi spin on this where AI generators are used to keep AI crawlers in a loop, and they accidentally end up creating some unique AI culture or relationship in the process.
Considering how many false positives Cloudflare serves, I see nothing but misery coming from this.
Lol, I work in healthcare, and Cloudflare regularly blocks incoming electronic orders because the clinical notes “resemble” SQL injection. Nurses type all sorts of random stuff in their notes, so there’s no managing that. Drives me insane!
Will this make the already-inaccurate AI results even worse? While I’m rooting against shitty AI usage, the general population still trusts it, and making the results worse will most likely make people believe even more wrong stuff.
You have thirteen hours in which to solve the labyrinth before your baby AI becomes one of us, forever.
While AI David Bowie sings you rock lullabies.
Damned
~~Arasaka~~ Cloudflare ice walls are such a pain.
I swear someone released this exact thing a few weeks ago.
I introduce to you, the Trace Buster Buster!
If you’ve never seen the movie The Big Hit, it’s great.