Bots and large language models are clogging the web and crowding out humans. AI is consuming lakes of fresh water and gigawatts of electricity. It is chewing up hundreds of billions of dollars with no sign of profit in sight. Tell me again why this is a good thing.
Exhibit #1: I had realised that the Swedish word “flicka” means “girl”, and I thought of the old book and TV show My Friend Flicka (it’s about a horse) and figured that must have been the origin of the name. The people I was with seemed a little dubious, so I decided to settle the matter with an internet search. Such searches are not what they were just a few short years ago. Back then you could get a quick, straight answer. That changed a while back, and we’ve since learned to skip the first dozen or more results, which will invariably be sponsored rubbish from paying advertisers. More recently the situation changed again with the advent of AI on search engines. These annoying things sit at the top of search results and claim to offer an authoritative view of whatever question we pose. In this case, the AI summary observed that Flicka, in the book and TV show, was a stallion. That killed my theory about the origin of the name, right?
Wrong. The AI summary was “slop”. It was dead wrong. Scrolling past the AI summary and the paid garbage, I finally arrived at some actual articles that explained the name really did come from the Swedish word and that Flicka was a filly. That’s just one tiny, trivial example that springs to mind. I have learned that AI is a pompous, flattering bullshitter whose pronouncements deserve to be treated with deep suspicion. It’s right often enough to trick you into relying on it but not right often enough to be dependable. That’s because it doesn’t really have a brain. It just regurgitates whatever gets fed into it and it eats everything it can find.
Have you heard of the “Dead Internet Theory”? Basically, it suggests that most activity and content on the internet is now generated by AI bots – not human beings. To the extent that this is true, it means that the “large language models” of AI are increasingly “eating” slop produced by other AI. It’s like mad cow disease, where cattle fed on the mashed up brains of other cows got brain rot, which spread to humans who ate the affected cows. I reckon AI is spreading a similar brain disease.
In 2024, 51 per cent of all internet traffic was automated bot activity – much of it actively malicious. Earlier this year, Wikimedia reported that a 50 per cent increase in bandwidth on its site was due to “scraping” by hungry AI “crawlers”. I can vouch for this:
Exhibit #2: Very often recently when I try to access my own website I find I can’t log in. For a while I just assumed this was some kind of server glitch and I waited for it to clear up. But it became so bad and so persistent that I did a web-search (lol!!) and discovered the problem was actually very common, and especially severe in its impact on library and museum online collections. The culprit, it appeared, was often AI bots.
A short chat with a bot
So I contacted my server administrators. Not incidentally, when I first contacted them I had to get past a chat bot, which told me to toddle off and check some stuff. I persisted, suggesting bot traffic might be my problem. It cheerily agreed and asked me if I would like to be transferred to a human. The humans quickly confirmed that the problem is being caused by automated bots. These inhuman monsters have been visiting my site in large numbers and so often that they clog the site and make it slow to the point of inaccessibility.
Some bot traffic to a website is good. Search engines crawl a site looking for new content to index so that humans searching for particular topics can find relevant material. But when they visit every few seconds, day in and day out, and they bring all their bot mates with them to do the same thing, they make a traffic jam that humans can’t squeeze past.
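For what it’s worth, the traditional way a site owner asks crawlers to back off is a robots.txt file at the site’s root. Here’s a minimal sketch (GPTBot and CCBot are real AI-crawler user-agents; the rest of the details are illustrative):

```text
# robots.txt – well-behaved crawlers fetch this before anything else

User-agent: GPTBot     # OpenAI's training-data crawler
Disallow: /            # please don't fetch anything

User-agent: CCBot      # Common Crawl's crawler
Disallow: /

User-agent: *          # everyone else, e.g. ordinary search engines
Crawl-delay: 10        # please wait 10 seconds between requests
```

The catch is that robots.txt is a polite request, not a barrier, and the worst-behaved scrapers simply ignore it.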
Basically, as I understand it, AI scrapers are flocking to websites where they know real humans live because they are desperately hungry for more data to mix with all the other material they have already eaten in a bid to – theoretically – improve the quality of the answers they provide to those who ask them questions.
It’s like the bots are standing at the door of the website, knocking and ringing the doorbell over and over again and yelling out: “Hey, anything new for us to eat?” And when you say: “No, I’ve been on holiday”, they don’t just turn around and leave. Instead they just knock again in a few seconds and yell out: “What about now? Anything new yet?” And they keep doing this every minute of every day, waiting for some new morsel of data to greedily steal and mix with the other stolen data they represent to the world as the result of their own innate and all-knowing genius.
Once it’s eaten and digested by the large language model, the data is excreted as answers to questions. Large language models have become really good really quickly because they have already eaten practically everything that exists in the digital realm. The trouble is that, from now on, every increment of improvement must come from scoffing increasing amounts of increasingly hard-to-get data. Given that more and more of what appears on the internet these days is actually generated by AI you can see how the brain-rot is inevitable. Errors and hallucinations get compounded, magnified and redistributed as the large language models eat and re-eat each other’s increasingly smelly excretions. And don’t forget, some of what they’ve eaten from human sources is potentially dubious to start with. For example, I read somewhere that the Murdoch media outfit had received payment in return for making its propaganda “news” slop available to train some large language models. This would be the equivalent of deliberately infecting them with rabies, in my humble opinion.
AI slop, also known as “botshit”, is already causing serious problems – for example, in law courts where fake legal cases are being cited. AI already tells deliberate lies, so maybe loading it up with neo-fascist Murdoch-think is no more than a logical next step.
We’re stealing “your” content. Like it or lump it.
Exhibit #3: Have you been getting a lot of emails lately from the likes of Microsoft, Spotify etc, notifying you of updates to their terms and conditions? I have (I’m not even a Spotify subscriber) and I have noted they all contain updates to clauses relating to “your content”. Here’s a typical one:
You retain ownership of your User Content when you post it to the Service. However, in order for us to make your User Content available on the xxxxx Service and to provide you with certain features and functions, we do need a limited license from you to that User Content. Accordingly, you hereby grant to xxxxx a non-exclusive, transferable, sub-licensable, royalty-free, fully paid, irrevocable, worldwide license to reproduce, make available, perform and display, translate, modify, create derivative works from, distribute, and otherwise use any such User Content through any medium, whether alone or in combination with other Content or materials, in any manner and by any means, method or technology, whether now known or hereafter created. Where applicable and to the extent permitted under applicable law, you also agree to waive, and not to enforce, any “moral rights” or equivalent rights, such as your right to be identified as the author of any User Content, including Feedback, and your right to object to derogatory treatment of such User Content. This clause does not otherwise modify xxxxx’s obligation to use your User Content in accordance with the licence above.
Is that so “your” content can be monetized for AI training? It certainly lets them do anything at all with “your” content, whenever they wish, forever. Sounds like a good deal.
I also read how some characters are trying to get the Australian Government to spend a fortune creating its own “sovereign” large language model – a kind of “Aussie AI” whose propaganda can be tweaked domestically instead of from overseas head office. I gather the corporations that are raking in the bucks from AI see this concept as a new big earner, so I expect Australia will throw money at them – unless head office vetoes it, of course.
Even if we don’t get an Aussie AI as such, plenty of politicians are keen on having AI data centres in Australia to drink all the spare fresh water we don’t have and to mop up all that excess cheap electricity we haven’t got. Nuclear power plant, anybody? The Hunter Valley seems a prime spot . . .
Meanwhile, if you are still wasting your hours on the internet, be aware that large numbers of the people you interact with on various platforms may not be people at all, but just AI bots, carrying out the wishes of their designers – perhaps to tilt your opinion in favour of some policy or viewpoint. This experiment conducted by the University of Zurich shows how that can work.
Back to my website and to the many others like it. Google has more or less declared war on sites like mine by rolling out a new AI search function that will discourage anybody from visiting the sites it has stolen data from. It will interpose its own AI-generated essays in response to your questions, putting itself forward as the owner of the information it uses to create those essays. Which it is, I suppose, if the content comes from platforms whose users have accepted the terms like those noted above, in “Exhibit #3”.
As for the bots that have been making my website stall, the server administrators have been able to make some tweaks to prevent them from completely smashing the place up. But the bot makers and programmers (human and inhuman) are always on the job, working out new ways around every measure designed to exclude them. They do their best to pretend to be human and will sooner or later beat any obstacle put in their way.
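I don’t know exactly what tweaks my administrators applied, but one common measure of this kind is server-side rate limiting. A purely illustrative sketch in nginx configuration (an assumption about the technique, not my host’s actual setup):

```nginx
# Illustrative only: cap each client IP at 2 requests per second,
# tracked in a shared 10 MB zone named "perip".
limit_req_zone $binary_remote_addr zone=perip:10m rate=2r/s;

server {
    location / {
        # Allow short bursts of up to 10 requests; anything beyond
        # that is rejected (HTTP 503 by default), bot or human alike.
        limit_req zone=perip burst=10 nodelay;
    }
}
```

The trouble, as the administrators would no doubt confirm, is that scrapers spread their requests across thousands of IP addresses, so per-IP limits are only ever a partial fix.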
We can imagine where this might head.
Perhaps some governments will decide that it’s time to protect us poor feeble humans by kindly providing each of us with some kind of blockchain-linked digital ID. Then we could have digital gateways through which only those with the requisite ID could pass. Maybe we could start with minors and move on from there . . .
Or maybe economics will catch up with AI. Maybe the incredible investment bubble inflated by its extraordinary promises will pop and that will change everything again. One thing I know about AI is that many people hate it and will strive to avoid it, no matter how hard the corporations try to jam it into everything. As the amazing Cory Doctorow points out in this article, the economic underpinnings of AI are a joke. According to him (and he’s quoting from this guy): “Unlike all the successful tech of the 21st century, each generation of AI is more expensive to make, not cheaper. And unlike the most profitable tech services of this century, AI gets more costly to operate the more users it has”.
So the AI industry in general is going to spend US$370 billion in 2025 and is projecting revenue of US$32 billion and profits of zero. Looks to me like the mother of all bubbles. If so, it will make a mess when it bursts; but maybe no bigger than the mess it’s making during its period of hyperinflation.
I am so with you on the crap that AI extrudes while posing as authoritative answers in search queries. I find it a constant struggle to avoid looking at the AI answer at the top of the search results – I know it is highly unreliable, but it keeps pushing forward saying “look at me, look at me, I’m the shortcut to the truth you seek” … and then serves up rot.