How Cloudflare uses generative AI to slow down, confuse, and waste the resources of AI Crawlers and other bots that don’t respect “no crawl” directives.
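Cloudflare hasn't published the code behind this, but the general idea is simple to sketch. Below is a minimal, hypothetical illustration (every name and the decoy text are made up; this is not Cloudflare's implementation): a server that answers a misbehaving crawler with pre-generated filler pages, each linking only to more filler, so the bot wastes its crawl budget in a maze.

```python
# Hypothetical sketch of the "labyrinth" idea, not Cloudflare's code:
# serve deterministic decoy pages whose links lead only to more decoys,
# so a crawler that ignores "no crawl" directives burns time and bandwidth.
import hashlib
from http.server import BaseHTTPRequestHandler, HTTPServer

DECOYS = [  # in practice, AI-generated filler prepared offline
    "Plausible but irrelevant prose about soil acidity in wetland farming.",
    "Plausible but irrelevant prose about nineteenth-century maritime law.",
]

def decoy_page(path: str) -> bytes:
    """Deterministically pick filler text and two onward decoy links."""
    h = hashlib.sha256(path.encode()).hexdigest()
    body = DECOYS[int(h, 16) % len(DECOYS)]
    links = "".join(f'<a href="/maze/{h[i:i + 8]}">more</a> ' for i in (0, 8))
    return f"<html><body><p>{body}</p>{links}</body></html>".encode()

class MazeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real deployment would route only known-bad bots here;
        # this toy server sends every request into the maze.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(decoy_page(self.path))

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), MazeHandler).serve_forever()
```

Per the blurb above, these pages are only ever served to bots that don't respect the "no crawl" directives, so human visitors never see them.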
Here’s the key distinction:
This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the wishes of the sites whose data they're profiting from, then there's no issue.
People want AI models to be reliable, but they also want them to operate with integrity in the first place, and not profit from the work of people who have explicitly opted their work out of training.
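For what it's worth, "respecting the request" is mechanically trivial. Here's a hedged example using Python's standard library (the URL and user-agent string are placeholders, not any real crawler):

```python
# Check a site's "don't scrape" request before fetching anything.
# urllib.robotparser ships with Python; URL and user-agent are placeholders.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's directives

if rp.can_fetch("ExampleTrainingBot", "https://example.com/some/page"):
    print("allowed: the site permits this fetch")
else:
    print("disallowed: the site opted out, so skip it")
```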
I’m a person.
I don't want AI, period.
We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet judgement day.
Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich as is.
That is simply not how "AI" models today are structured; that idea is a fabrication based entirely on science-fiction media.
An LLM is a series of matrix multiplication problems that a query's tokens are run through. It has no capability to be overworked, to know whether it has been used before (outside of its context window, which is itself just previously stored tokens added to the math problem), to change itself, or to arbitrarily access system resources.
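To make that concrete, here is a toy sketch of why inference is stateless. This is not any real model's architecture, just the shape of one: fixed weight matrices and a pure function from input tokens to output scores, with nothing persisting between calls.

```python
# Toy illustration: "inference" as a pure function over fixed matrices.
# Random weights stand in for a trained model; not a real LLM architecture.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 100, 16
W_embed = rng.normal(size=(VOCAB, DIM))   # token embeddings (frozen)
W_hidden = rng.normal(size=(DIM, DIM))    # stand-in for attention/MLP layers
W_out = rng.normal(size=(DIM, VOCAB))     # projection back to the vocabulary

def next_token_logits(tokens: list[int]) -> np.ndarray:
    """Same tokens in, same scores out: the model keeps no state,
    learns nothing at inference time, and cannot modify its own weights."""
    x = W_embed[tokens]            # (sequence, DIM)
    x = np.tanh(x @ W_hidden)      # the "series of matrix multiplications"
    return x[-1] @ W_out           # scores for whatever token comes next

prompt = [5, 42, 7]
print(np.array_equal(next_token_logits(prompt), next_token_logits(prompt)))
# True: no memory of having "been used before"
```

The context window mentioned above corresponds to `tokens` here: the only "memory" is whatever the caller passes back in on the next call.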
You must be fun at parties.
Good job.
What's next? Call me a fool for thinking Olestra stains are capable of sentience, and tell me that's not how Olestra works?