
The Bot Invasion: Dead Internet Theory Explained


Watch the full video on YouTube: https://youtu.be/UoG7HsPCB7M


If you’re terminally online like I am, then you probably know that accusing someone of being a bot is a regular feature of any argument, and arguing is the official means of communicating online, so the Internet is just full of people accusing each other of being bots. It’s an insult that basically means whatever’s being argued sounds a lot like official propaganda from one side or another. Calling someone a bot is fighting words in the virtual world, and the accusation is bandied about incessantly, probably mostly by people under the age of eleven. But unfortunately for us, there’s actually a lot of truth to the underlying concept.


Let’s talk about the Dead Internet Theory and what the data actually says about fake accounts and bot activity on the web.


I’m Kevin Lankes, and I’m your host for the day we’re all working the mines to power the Internet forums for the robot meme lords.


Like a lot of other pseudo-intellectual bro philosophy, the Dead Internet Theory began on a message board. But this one quickly spread into popular culture, and mainstream news outlets picked it up fast because it actually fell in line with what legitimate studies have been reporting for a while now.


The Dead Internet Theory says that content made by AI and bots has either surpassed content generated by humans or it will very soon. And because AI models are trained and grow by scraping content on the Internet to analyze and mimic, the theory says that pretty soon the whole web will just be AI bots scraping other AI bots’ content and spitting out slop that other AI bots will scrape to post more crap, forever and ever amen.


The first question we need to ask is whether or not this is really happening. And spoiler alert -- it is. Estimates show that roughly half of all Internet traffic in 2024 came from bots. And AI is improving by leaps and bounds, so it’s very possible that the majority of Internet traffic at this very moment is completely artificial.


Okay, so aside from just kind of feeling gross about that fact, is there anything genuinely worrying about this? Is it a problem that most traffic on the Internet is non-human? Turns out, there are some issues that could result from this, and they range from kinda irritating to completely catastrophic. So we should probably talk about them.


Especially because malicious bot activity is exceedingly common, making up a third of all Internet traffic in the U.S. But it has completely taken over some countries, with 71% of Internet traffic in Ireland made up of so-called bad bots, and 68% in Germany. Bad bot! (kendo hit?)


So what are these bad bots doing? They’re not out smoking cloves and stealing your girlfriend, but, well, actually, maybe they are. You see, artificial Internet traffic from bots and AI agents is part of a massive misinformation infrastructure that’s been building up on the web. This has been happening for years, ever since the ad-revenue model of digital media took over the system that governs who gets paid and how much. Content operations have been pushing totally hysterical nonsense to get clicks for a long time now, and that’s probably not news to most people, even though most people are still very much falling for it. And now it’s even easier, because all of it can be done in an automated, mass-produced fashion with no human labor involved.


If I personally wanted to, I could set up a system that generates an entire news site with constantly refreshing articles providing real news, then markets and disseminates those articles widely across the web using a completely automated framework that relies almost exclusively on artificial intelligence. Once the system was in place, I could do all of it with one mouse click. I don’t think people realize just how easy this is, just like I don’t think most people ever realized how easy it is to build a fake news empire and make lots of money from scare tactics and unethical fudging of facts, data, politics, and science. Before, you had to do it manually; now you can just have the robots do it. So that’s what’s happening.


And that’s one of the biggest issues with the Dead Internet Theory: the idea that there are so many bots and AI agents roaming around now, huge clusters of them each owned by a specific person or organization, and they could all theoretically be for hire. A bad actor could buy their way into the White House, for instance, by weaponizing fake information spread through bazillions of artificial accounts, hijacked accounts, and even sites that resemble real news sites. And, you know, there are reports suggesting this scenario isn’t fiction but actually happened during the last election cycle.


A 2018 study in Nature found that bots were spreading misinformation from unreliable sources, and that this already posed a massive threat to global democracies. A 2019 study found that bots become even more active after a major event like a school shooting, working to drive the narrative about those events in the public conversation. The same thing happens in larger geopolitical moments, such as the 2024 election and the Russian invasion of Ukraine. All these bots are being mobilized to spread propaganda in order to sway public opinion one way or the other.


And those hijacked accounts I mentioned before are real people’s accounts, like a Facebook or Twitter profile. Usually these are accounts that have been dormant and out of use for a while; bots will attempt to log into them, and sometimes they succeed. Then they have completely legitimate social media profiles to leverage for whatever endgame they’re aiming for. Of course, account takeovers also happen outside of social media, with bank accounts and investment accounts, and with all these bot armies at one’s disposal, that’s much easier than it used to be. It’s actually an entire cottage industry, because account takeovers are a product that can be packaged and sold. As much as 11% of all account logins are actually attempts to hijack and weaponize legitimate accounts that belong to regular people.
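
On the defense side, that 11% figure is exactly what login systems try to filter out. Here’s a minimal, purely illustrative Python sketch of one common countermeasure: spotting credential stuffing by flagging any IP address that cycles through lots of different usernames in a short window. The thresholds and function names are my own assumptions, not any real platform’s implementation:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300   # how far back we look (assumed value)
MAX_ACCOUNTS = 10      # distinct usernames one IP may try in that window (assumed)

# ip address -> recent (timestamp, username) login attempts
attempts: dict[str, deque] = defaultdict(deque)

def looks_like_credential_stuffing(ip: str, username: str) -> bool:
    """Flag an IP that cycles through many different accounts quickly --
    the signature of a bot replaying leaked username/password lists."""
    now = time.time()
    q = attempts[ip]
    q.append((now, username))
    # Drop attempts that have aged out of the window.
    while q and now - q[0][0] > WINDOW_SECONDS:
        q.popleft()
    distinct_accounts = {u for _, u in q}
    return len(distinct_accounts) > MAX_ACCOUNTS
```

Real systems layer this with device fingerprinting, CAPTCHA challenges, and breached-password checks, but the core idea is the same: one human rarely tries ten different accounts in five minutes.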


Another thing AI agents are doing is running fake profiles on social media sites that automate fun or cool or just stupid-funny images and memes, and they build a following because people are liking and subscribing to those accounts. You know, because funny. See Exhibit A: Shrimp Jesus. This is a place where real life intersects with tech bro pseudophilosophy, and the Dead Internet Theory gets it right. Because if you’re running your own eclectic mix of fake AI accounts, then of course you’re going to have all of them interact with each other, like each other’s posts, comment, and follow each other, because what you’re doing is legitimizing their presence so that real people will start to follow them, too. And then suddenly you have a bunch of big accounts with a genuine audience, and you get to share whatever you want to all those people.


Weaponized bots and malicious artificial traffic are everywhere, and they cross all industries. The biggest portion of these robot goblins shows up in the government sector, with finance, gaming, and entertainment following close behind. And again, this could be anything from account takeovers to coordinated misinformation campaigns. You don’t like something about that upcoming Disney movie? Well, there’s a whole army of bots ready to spew propaganda about it to make sure everyone else hates it, too.


This is a big problem, because the bots are manipulating information in exactly the places people go to get it. 56% of people prefer getting their news in a digital format. Lots of people still watch TV news, but over half of us are pretty much glued to our phones when it comes to absorbing the information that tells us what to believe about the world. And social media is the second most preferred source of news. This is why all these artificial accounts and all this bot traffic are such a gigantic threat.


No wonder we’re struggling to break through the noise and genuinely engage with each other anymore, because over half of what we’re engaging with isn’t actually each other. It’s all bullshit and a lot of it is intentionally harmful bullshit whether we’re conscious of being harmed or not.


Then there’s the related idea inherent in the Dead Internet Theory: the musing that, given the way the AI arms race is going right now, sometime soon the Internet could just be AI GPTs talking to each other for the rest of time, engaging with each other in weirder and weirder ways, looping forever until it’s just AI slop all the way down. There’s no specified end goal to this, and it’s confusing to even think about where it might lead, but it’s easy to see how that situation could render our most vital communication tool completely useless.


But the real concern is that the system we’ve developed, and the technology it runs on, allows bad actors to potentially control the narrative around important moments in society, politics, and culture. The public can be swayed to think what the campaigns are designed to make them think, and pushed toward a specific result.


So how do we escape a dead Internet? What can you or I actually do to combat the fact that robots are out to get our very souls? The first thing we can do is evaluate and improve our information and media literacy. This is an incredibly important skill, especially these days, and we didn’t have enough time to get everyone on board with it before our lives were completely taken over by the Decepticons. Misinformation was rampant before robots were spreading it, and people weren’t prepared then; they definitely aren’t prepared now that it’s so easy to propagate.


Some quick tips: follow reputable sources. Reputable doesn’t mean elitist or whatever; it means sources where, if someone gets a fact wrong, there are consequences for that employee. The Associated Press, Reuters, NPR, the Guardian, and weirdly enough Wired are all great places to start right now. You can also download Ground News, an app that highlights bias in news organizations and provides you with additional coverage of a topic from outlets across the spectrum. It’s really helpful, and even though I would totally take their money without hesitation, they’re not a sponsor or anything. I just think the concept is really great.


If you find yourself encountering weird angry shitheads on social media, the first thing to do is ignore them. The second thing to do, if you fail at the first thing, is to check out their profile, go through their post history, and see how old the account is. If you’re looking at a week-old account that does nothing but shitpost, guess what -- you’re looking at a bot. And if your friends and family have stopped posting updates on the really hideous kids you’re related to but wish you weren’t, and suddenly they’re inviting you to their Ponzi-scheme breakfast smoothie club where you have to take out a loan from the company CEO to join up, then your loved one has been the victim of an account takeover.
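
If you want to see how mechanical that gut check really is, here’s a toy Python sketch of it. The Profile fields and the cutoff numbers are hypothetical -- my own back-of-the-napkin version of “new account, firehose posting rate, nothing original,” not any platform’s actual detector:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Profile:
    created_at: datetime    # when the account was registered
    post_count: int         # total posts on the profile
    original_posts: int     # posts that aren't reposts or copy-paste

def smells_like_a_bot(p: Profile) -> bool:
    """The week-old-shitposter heuristic: brand-new account,
    inhuman posting rate, and almost nothing original."""
    age_days = max((datetime.now(timezone.utc) - p.created_at).days, 1)
    posts_per_day = p.post_count / age_days
    originality = p.original_posts / p.post_count if p.post_count else 0.0
    return age_days <= 7 and posts_per_day > 40 and originality < 0.1

# A six-day-old account with 350 posts, almost all reposts: flagged.
suspect = Profile(datetime.now(timezone.utc) - timedelta(days=6), 350, 4)
print(smells_like_a_bot(suspect))  # True
```

Real bot farms try to defeat exactly these checks by aging accounts and mixing in filler posts, which is why the manual once-over of a profile is still worth doing.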


And don’t you get caught in a takeover. Make sure to change your passwords regularly, and use strong gibberish passwords instead of the combination on your luggage, which is one two three four. Google will make stupid-complicated passwords for you; all you have to do is save them in its password manager. Or get a password tool like 1Password that stores all your info and keeps everything secure for you.
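
For the DIY crowd, generating a gibberish password is a few lines in most languages. Here’s a minimal Python sketch using the standard library’s cryptographically secure secrets module; the length and character set are just sensible defaults I picked, not an official recommendation:

```python
import secrets
import string

def strong_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation
    using a cryptographically secure random source (not the plain
    random module, which is predictable)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())  # something like 'k>Q}2v!pR#...' -- paste it straight into your password manager
```

The point isn’t this exact snippet; it’s that a machine-generated string beats anything you’d invent, and a password manager means you never have to remember it.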


This one’s probably too much to ask, but we could all collectively stop using generative AI as much as we are. These big tech companies are only in this arms race because everyone is slobbering all over each other to get the latest and greatest large language model. I’m going to do a whole video on the realities of AI, but in short, it’s not even very good, and it’s nowhere near the utopian singularity or the apocalyptic cyborg empire that either side is claiming. It’s just a predictive text generator, or a predictive pixel generator. It can’t think or reason; it just tries to predict which token -- a character or word fragment -- should come next. That’s all it does.


Maybe someday we’ll have some kind of artificial intelligence technology that can genuinely compare to the output of the human brain, but we’re far, far away right now. Worse still, by some estimates the current models completely make shit up close to half the time you use them -- hallucinating as much as 40% of the time. Which, again, is not a bug; hallucinating is just something these models do, because they can’t think or understand what you’re actually asking. They just predict text to put down on the screen for you.
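
To make “it just predicts what comes next” concrete, here’s a toy next-character predictor in Python. It’s a simple Markov chain, not how a real LLM is built, but it’s the same basic move: count what tended to follow each snippet of text, then sample from those counts. No understanding anywhere in the loop:

```python
import random
from collections import Counter, defaultdict

def train(text: str, order: int = 3) -> dict:
    """Count which character follows each `order`-length context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        model[text[i:i + order]][text[i + order]] += 1
    return model

def generate(model: dict, seed: str, n: int = 80) -> str:
    """Repeatedly pick a likely next character -- no thinking involved."""
    out = seed
    order = len(seed)
    for _ in range(n):
        counts = model.get(out[-order:])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]
    return out

corpus = "the internet is full of bots and the bots are full of it " * 20
model = train(corpus)
print(generate(model, "the"))
```

A real LLM swaps the lookup table for a huge neural network and characters for tokens, but the job description is the same: given what came before, guess what comes next.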


I’m going to predict my foot up some tech bro’s ass pretty soon here, because from where I’m sitting, I see this whole AI bubble collapsing pretty hard -- at least on the consumer side. The enterprise usefulness of spreading targeted misinformation is unfortunately likely here to stay. So get equipped and be mindful.


Please make sure to like and subscribe so we can continue to build a community of real people to combat the impending robot army. I got my hunting license when I was 12, because America, so I’ll pick them off from the second-floor window if you barricade the doors.


Until then, let’s do some f*cking good about fake bot bullshit and garbage tech bro tools. Oh, that’s a double entendre. You couldn’t predict that, ya robot bitch.





Episode sources:
