
The (new) Dead Internet Theory

A bit of background

Dead Internet Theory originated about three years ago before the most recent GenAI boom (think pre-ChatGPT). The theory argued that most of the internet was "dead" because much of the content being written was produced by bots. It also argued that algorithms intentionally minimized human-to-human interaction.

Three years ago, Dead Internet Theory wouldn't have resonated with me. Even today, the idea that algorithms are intentionally attempting to minimize organic human interaction feels a tad too tinfoil-hat for my tastes. However, given the rise of GenAI and the sheer volume of content it has already produced and will produce in the coming months and years, I think it's worth seriously considering what a new Dead Internet Theory means for the web and the people who inhabit it.

System Shock

In the early days of ChatGPT, many smart people tried to wrap their heads around what these technologies would mean for themselves and for humanity. The zealots and doomsayers argued that Artificial General Intelligence was just around the corner and would soon bring about either a utopia or the literal end of days. Those with a more "moderate" outlook predicted either mass unemployment, because so much cognitive labor would no longer be needed, or unprecedented productivity and creativity, for many of the same reasons. Both of the moderate predictions are likely playing out to a smaller extent across different geographies and industries; however, the macroscopic net impact of generative AI still isn't certain.

Pushed Out

The amount of autogenerated content that technologies like ChatGPT, Gemini, Llama, and others have enabled is absolutely staggering. These models were trained on the entirety of the public web, so they're really good at mimicking what people sound like online. Emails, marketing websites, Reddit, political arguments, YouTube comments, you name it: the entirety of mankind's web presence has been consumed and studied to build never-before-imagined content factories.

As a result, some analysts are predicting that by 2025, 90% of online content may be created by GenAI. I'll take that one step further and argue that by 2030, the internet as we know it today will be unrecognizable due to the sheer volume of AI content, "dialog" between bots, and new recommendation algorithms. In fact, I'd be willing to bet you've recently seen a Reddit thread, a YouTube comment section, or a Twitter interaction that felt more than a little off. Those pockets of bizarreness are harbingers of things to come.

Feeling Targeted

Given the enormous investment in GenAI data centers by FAANG-like companies, we know that the big movers are betting on their content factories being in high demand. Why shouldn't they? If we inhabit an attention economy, custom generated content designed to capture our attention (and our dollars) is an invaluable resource. 

Before GenAI, those seeking our attention needed to use blunt-force marketing campaigns. For example, a large company would create several different marketing campaigns and then use Google, Meta, and others to segment their demographic and reach them via paid search, SEO, real-time ad bidding, and other streams. They hope that the 18-29-year-old male from New Brunswick who recently searched for ATVs and checked into his local brewery might be interested in some Foo Fighters tickets. Their strategy isn't dissimilar to airing a Bounty Paper Towel ad at 9:15 AM on a Tuesday because Nielsen tells them that stay-at-home parents who control household budgets are likely watching ABC at that time.

GenAI flips the script. In the "old" model, centralized outreach campaigns are created and then a vast amount of resources is dedicated to targeting those campaigns at the most likely-to-be-converted users. In the new model, vast amounts of resources are used to generate custom campaigns in real time.

A "Dead" Internet

Imagine checking out a Reddit thread that compares the pros and cons of Mint Mobile and Verizon. If you visited that thread a year ago, you would see actual users who had tried both services (or at least one of them) sharing their experiences. After visiting, you would probably then see ads throughout the web for both companies, which know that you might be interested in them. Your browsing history suggests you might be in the market for a new phone plan, and so the full force of the modern digital marketing engine is thrown at you.

If you visit a similar thread five years from now, the Reddit thread itself will be where the advertising happens. GenAI will parse the conversation in real time, and it will contribute, as if it were a person, with retorts, agreements, upvotes, and other interactions. Now imagine that the thread wasn't created by an actual person, but instead by a GenAI acting on behalf of one of the companies. Imagine that none of the dialog in the thread was written by a person. Imagine that even though the thread had been "trending", no real person had ever actually seen it before today. All of the activity, the buzz, the liveliness: it's all manufactured. And even though Verizon and Mint are probably happy that you stumbled upon their "lively debate", it was never for you anyway. Well, not entirely: it's mostly for the benefit of other bots crawling the web, the ones picking up trends, generating headlines, and contributing to meta-analyses for investment recommendations, sentiment analysis, and all sorts of other algorithmic BI gathering.

Now imagine that that hypothetical Reddit thread, the one with all of the buzz and none of the humanity, represents 99.999% of interactions on the internet. What then?

The Tragedy of the Commons

There's a concept in economics called the tragedy of the commons: if too many people have unfettered access to a finite public resource, they will tend to overuse it and thereby destroy its underlying value. Overfishing is a good example. The internet is a different case because it's not "finite", at least not in the way economists mean. Still, that feels like what's going on here: companies and advertisers eager to maintain a constant presence will use GenAI to produce such a staggering amount of content that the internet itself becomes a mostly valueless wasteland.

What's Next?

That's the million-token question. All of the nonsensical autogenerated interaction will probably backfire spectacularly on the people and orgs rushing to put it online in the first place. Trust in text, images, audio, and video will approach zero, and actual people, with their valuable attention and bank accounts, won't like it. Maybe a set of invite-only online communities will heavily self-govern and ban users who invite bots or publish autogenerated content? Maybe we'll start trusting our local DJ or newsman more than we do now? Maybe we'll all "log off" and gravitate towards live events and in-person meetups? Who knows. The one thing I can say for certain is that it will become increasingly difficult to interact with real people on the web. My take is that this is probably for the best: in the age of screen addiction and digital detoxes, cutting the "drug" with a bunch of non-lethal garbage means that people will spend less time using it. Perhaps a dying internet is exactly the sort of thing we need right now.
