My love of RSS is something I’m not shy about. I think it’s a pretty excellent one-to-many communication tool. But if you look at its history, it had one major flaw (which is also partly why I love it). And in many people’s minds, RSS is a dead technology.
This blog post came directly from a Slack discussion with the local tech scene that went a little something like this:
NewsletterAuthor: I've migrated my newsletter to Ghost! Here's the new URL.
Kevin: Oh cool! For those of us using RSS readers, here's the new feed URL.
OtherUser: Wait, didn't Google kill RSS?
So, this post is going to be a bit of a “what is RSS?”, “is RSS really dead?”, and “what’s going on with syndication on the internet anyways?”
I think we’re at an interesting inflection point between a couple of possible directions, and we could progress in a way that returns a lot of power to content creators. The pendulum is swinging back to a utility-based economy from an attention-based economy, and this is a good thing, even if we’re in a moment that’s painful for the ad-revenue-based paradigm of the 2005-2025 world.
This post is my potted history of communication technology, and why I think things are improving for individual creators.
This is a deep dive. Here’s the tl;dr:
RSS isn’t dead; it just became the invisible plumbing that powers the entire podcast ecosystem. We “killed” text RSS because it abstracted away the ads and revenue for blogs and news, but audio RSS stuck around because bandwidth limits made it essential and because listening let creators bake ads directly into the media (unlike blogs).
We’re seeing a migration from walled gardens (Substack) to owned platforms (Ghost) driven by “Subscriber RSS” — which lets readers support creators directly while still reading their work in the app of their choice (shout out to 404 Media for nailing this).
The future isn’t just humans subscribing, though. With the rise of “agentic browsing,” AI bots don’t look at ads. Instead, they’ll likely use protocols like HTTP 402 to pay micropayments for access to data (or subscription to content APIs or MCPs). The internet is moving from an attention economy (ads) back to a utility economy (paid delivery), and protocols are the only way that works.
What is RSS?
RSS stands for Really Simple Syndication. The Wikipedia article is a really nice introduction that goes into the history more carefully. But I want to introduce the concept and a bit of that history here.
Here’s how the internet works, in a really simple kind of lens:
You have a computer, which has a piece of software known as a web browser.
If you’re publishing content to a website, you typically do so in a format for a web browser. It’s published as HTML and JavaScript (and likely talks to some computers elsewhere for images and other things), which is collected together and rendered as a formatted experience inside of Chrome, Edge, Firefox, Opera, etc. In this case, what’s curated for you is the visual experience of a website.
When I want to see what content you have published on your website, I start up my computer, open my browser, and type the address of your website. My browser makes a request to your server, which returns the HTML and JavaScript for your website, displayed as a page. How this appears is determined by files that describe the overall style of the content (CSS).
Both of us have the costs of running our computers (electricity, hardware), and costs for sending and retrieving data over the internet.
But what matters to me as a reader isn’t really how you’ve chosen to style your writing. In fact, rather than going about making requests to a bunch of servers myself, it’d be much nicer for me as a human being if I had an application that would make those requests for me, and I could view the bits that I want, all inside of that app, with uniform styling. If there was a uniform way of building content that apps could read, that would help readers consume content.
It was an attractive idea back in 1995, but it has evolved a bit over time. The fundamental idea is that you can mark up the essential elements of a site using XML and plain text, build that into a feed, and have that feed retrieved by a separate application or portal. Originally, this was built for use in Netscape, but with Netscape’s acquisition by AOL, it went through an interesting history of parties trying to revive and maintain the standard.
This history is why Atom, a different standard similar to RSS, was created.
But basically, as of about 2005, RSS was the standard way to syndicate mainstream technical blogs. Companies like Google were releasing applications that let you save and access lists of RSS feeds, viewing content from multiple websites in a single portal, arranged chronologically.
It became a vital part of how I used the web, reading web comics, blogs, and more. With an RSS reader, I can open up a single page, and see news and posts from multiple sources, organized in a way that lets me direct my attention without distractions.
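If you’re curious what a reader is actually doing under the hood, here’s a minimal sketch in Python using only the standard library. The feed URL is a placeholder, and real readers handle caching, encodings, and Atom too, but the core loop really is this simple:

```python
# A minimal sketch of what an RSS reader does: fetch the feed's XML,
# pull out each item's title and link, and display them uniformly.
# The feed URL is a placeholder, not a real endpoint.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # hypothetical feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# An RSS 2.0 document is <rss><channel>...<item>...</item></channel></rss>
for item in tree.getroot().iter("item"):
    title = item.findtext("title", default="(untitled)")
    link = item.findtext("link", default="")
    pub_date = item.findtext("pubDate", default="")
    print(f"{pub_date}  {title}\n    {link}")
```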
The “Murder” of RSS
I put “murder” in quotes, since I think RSS survived. More on that later. But a few factors led to the decline of RSS in the 2010s.
Google Reader was the most popular RSS reader. It was added to the Google graveyard in 2013. I was using it heavily, but I was also hitting the point where I was seeing too much content in my feed for it to be something I could stay on top of. There was definitely a saturation point for blog content.
So, how does a website make money? After all, it costs money to run a server, and send HTML to all those browsers visiting your pages. The answer was advertisements. By embedding advertisements on your website, you can get paid based on the amount of traffic you have for your website.
At the same time, social media was rapidly expanding. We were viewing feeds of content about the lives of our friends and family. And that would include news and eventually community pages for people who were trying to build a following. If you ran a media company, you could pay to get traffic directed to your site, but you were also hoping that things you posted would go viral. Traffic would come to your site, and you’d be paid based on the number of people who spent time on your site, or clicked on blog posts.
Marketing was the vehicle which funded content creation and content creators were leveraging social media platforms to get their content to their readers. Cory Doctorow talks about this as the “enshittification” of platforms.
Where did this leave RSS and Atom? If your ads were served in JavaScript-powered widgets on your website, they wouldn’t load easily inside an RSS reader. So maybe you start to truncate the content that makes it into your RSS feed to drive traffic to your website. Or maybe you rely on merchandise sales to support your webcomic or your blog. Gradually, your RSS feed shifts from a place to read content to a mere traffic funnel, and your time is better spent on your social media strategy, since social media is getting more and more attention.
This was the main attack on RSS from companies: one driven largely by financial incentives, as companies tried to maximize the capture of attention to sell ads and drive revenue growth from a non-paying user base.
A major blow was when Google retired Google Reader. There were other readers available, but they often kept the more powerful features that let you keep up at scale behind a subscription. Meanwhile, algorithms were getting stronger on Twitter and Facebook, so both discovery and delivery would occur on those platforms. Ad-supported readers like Feedly carried on, but none grabbed a significant portion of Google Reader’s user base. Self-hosting was not as popular, and RSS became more and more of a curiosity.
For many people, if they have an RSS feed at all for their website, it’s vestigial and they’re barely aware it exists. Or you’re a bit of a weird legacy tech nerd trying to reach other tech nerds. This was exactly the dynamic that inspired this post!
What about audio?
Podcasts are a cool technology. It’s interesting to think a bit about why podcasts are the way they are, while other forms of media didn’t go this direction.
The biggest piece of this puzzle is how people were consuming audio vs. text, plus bandwidth limitations. Back in the days of lower data caps, it was common to have a separate device (an MP3 player) to listen to music, lectures, or other kinds of long-form audio content. Because your cable or high-speed internet provider had larger data limits than your phone plan, you could download plenty of audio at home without running into the hard limits on your smartphone.
iTunes was a nascent marketplace for music, but there were also people publishing audio content to various websites, and RSS readers let you collect updates from those websites. Since Apple already made an excellent MP3 player, RSS became a way to get an excellent ecosystem of content onto that device, without Apple having to build their own marketplace/database.
RSS and iTunes 4.9 were the key to unlocking this idea. Basically, an intermediary app became the way to deliver audio content to your device. Instead of you checking for updates, new episodes would be automatically loaded onto your device whenever you charged it via USB at your computer.
If you wanted your audio to be available to people on their iPods, RSS became an easy way to access that market. It was also easy enough to adapt as mobile internet caught up to delivering audio.
What’s kind of wild is that podcasting emerged and stabilized largely because of bandwidth concerns. Audio is relatively cheap: hosting and delivering 60 minutes of audio (~50 MB) to 10,000 users is manageable for a server. But 5 minutes of video is 200 MB, and the costs of a video going viral could bankrupt a company. “Vodcasting” was almost a thing, but YouTube and Google really altered the trajectory here. That’s another story.
It’s also worth remarking on the behavioural differences between audio and video: video requires visual attention, while podcasts can be listened to on the commute, on a run, or while doing dishes.
In a way, audio was perfect for RSS syndication because of its mode of communication, the size of the files, and the variety of devices that could play these files.
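To make that concrete, here’s a toy “podcatcher” sketch in Python. The key detail is that a podcast feed is just ordinary RSS where each item carries an `<enclosure>` element pointing at an audio file. The feed URL below is a placeholder and error handling is omitted:

```python
# A toy "podcatcher": a podcast feed is ordinary RSS where each <item>
# carries an <enclosure> element pointing at an audio file.
# The feed URL is a placeholder; error handling is omitted.
import urllib.request
import xml.etree.ElementTree as ET
from pathlib import Path

FEED_URL = "https://example.com/podcast.xml"  # hypothetical podcast feed

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

for item in tree.getroot().iter("item"):
    enclosure = item.find("enclosure")
    if enclosure is None:
        continue  # not every item has audio attached
    audio_url = enclosure.get("url")
    # Download each episode, like iTunes syncing to an iPod overnight.
    filename = Path(audio_url).name or "episode.mp3"
    print(f"Downloading {item.findtext('title')} -> {filename}")
    urllib.request.urlretrieve(audio_url, filename)
```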
Why do podcasts “work” when blogs don’t?
There are a couple of reasons.
At first glance, it’s very tempting to attribute it to attention. When you listen to a podcast, it’s more difficult to skip past promotional content. You listen to the podcast from start to finish.
The monetization options for blogs and podcasts are strikingly similar, though. You really have the following options:
- Advertisements
- Paying for views/clicks
- Referral link payout
- Subscriptions
- Sales of merchandise or other services
And I’ve seen this play out in both blogs and larger podcasting services. I think the biggest reason why podcasts work while blogging fell apart has to do with the portability of attention. Reading a blog post requires visual attention while listening to a podcast can be done on the go.
If you have to spend time visually attending to either your RSS reader or Facebook to get information, and Facebook has teams of PMs, developers, and UX designers building a system to maximize your attention for ad revenue, while RSS readers fundamentally abstract away the revenue-generating mechanisms, then syndication for text was fundamentally doomed.
The big thing is that in these environments, algorithms shape what content is presented to users, and those algorithms are built to maximize ad revenue, which is secured by keeping people on the platform. External links get downranked unless someone pays for placement, and people only really engage with advertisements that are well targeted to them. Basically, monetization by advertising revenue could be maximized in an environment where companies could control what you see. And multimedia is a way of grabbing attention.
Gradually, the expansion of multimedia as a means of grabbing attention reduced blogging’s share. These platforms, driven largely by advertising as a means of monetization, began to choke out websites as an economically viable medium. Browsers and operating systems began to have news items baked into the interface, trying to bake attention and advertisements into how you access the web. News media continued to maintain websites, but they gradually cut off RSS to focus on systems and sources where they could maintain revenue, which was dwindling rapidly. Capital began to aggregate news providers, and quality declined. Generally speaking, things looked pretty grim for people in media.
The actual story is a bit more complicated of course. Microblogging (Twitter/X, followed by Mastodon and Bluesky) was another form of Web2.0 that cut into the world of blogging, and Instagram also played a part. That and TikTok. The relationship of short form video to attention, marketing, and media consumption is a massive topic, but I’m going to try and limit this post to talking about writing. After all, I’m a writer and blogger.
Also, I’m just going to note that many podcast syndication services now insert ads into audio files based on information from the podcast app being used. Advertising on podcasts is a cool topic in and of itself!
Some countries (notably Australia and Canada) realized that this was pretty bad for the media ecosystem and made attempts to legislate that the social media giants and web platforms contribute to regional media ecosystems.
This backfired and there were interesting results like garbage collection companies becoming de facto local news, posting short videos and getting engagement on social media. As a Canadian, it’s an interesting media landscape. Some of the strongest sources of local news come from perspectives not attached to my local newspaper.
Subscriptions and media
Ok, so newsrooms are failing, bloggers have moved to Instagram, Twitter and Facebook, and social media is thriving on advertisement revenue. There’s a bunch of talented writers who are displaced and can build a following, but need to monetize long-form content.
RSS/Atom are fundamentally ways of syndicating content that emphasize delivering it to people who want to opt in. But if you’re a creator looking to control who has access to your content, they’re fundamentally open: if I post an RSS feed to my website, anyone with that URL can add it to their reader.
But if I build a website that allows me to collect the email addresses of paying subscribers, send them content on a regular basis, and control access to the content, I can get paid and distribute content. This is pretty great. I also get pretty solid analytics on subscriber numbers and readership rates (which helps with selling advertising, if I’m doing that). Some people picked up on the viability of offering this as a service to writers who didn’t have the technical chops to glue together a CMS, a payment service, and an emailing platform themselves, with recommendations of similar pieces of writing to grow the audience and subscription base. Voilà: we have platforms like Medium and Substack.
These platforms can use their aggregated audiences and consumers to drive profitability, paying and supporting star writers, and subsidizing the service for smaller writers. They’re an interesting revival of monetized blogging, and are probably the most popular/widespread means of sharing writing for journalists.
Since writers need to build audiences, they use these newsletter platforms for syndication and subscription management, while building the audience through microblogging services, podcasts, and video media. Here’s Paris Marx talking about his journey with finding a provider for his newsletter (hoping that there would be network effects) and why he left (due to association with less politically savory folks and “profit pressure” making the environment less friendly to creators). We’ll come back to Ghost, but I think it’s really important to note that there are both platforms and self-hosted tools that let you build newsletters.
The core tension is this: email isn’t really a preferred way to read long form articles, and accessing a site through log-in can be a bit of a pain for users. So authors need a way to control access to their content, while users want an easy pipeline to consume content. Newsletters let authors syndicate and manage. Feed readers and email providers become the way we choose to consume the content.
It’s interesting to watch how podcasting is faring in the current technological landscape. Spotify is an audio streaming platform that has started to pay high-profile podcasters to get their audience onto the platform. Some podcasters monetize by offering premium subscription tiers with bonus content. Others form into networks or rafts of related podcasts to share advertising revenue.
In a real sense, the monetization challenges podcasts face are the same ones bloggers face, which is why we see similar things happening to podcasters as in the latter days of blogging. Blog networks, subscription-based content, and membership communities are all strategies to build that revenue in ways that close the ecosystem off.
This tension will likely shape the media to come. So I want to talk about three trends that are breaking away from this landscape and where I think we’re headed.
Three trends
I think there are three main trends for how the internet is evolving after Web 2.0 (I’m not going to talk about Web3, for reasons):
- Paid syndication outside of email — 404, Aftermath, and self-hosted newsletters.
- Algorithmic choice and federated social media — Bluesky and Mastodon.
- Paid access for crawlers — Cloudflare and generative AI-powered browsers.
A look at breaking away from platforms: 404 Media and paid RSS
So platform subscription services are falling victim to the usual profitability pressures. Here’s what I think is a useful model going forward: it’s becoming easier and easier to host and publish your own content to the web. While I use a fairly technically hands-on stack, it’s entirely possible to self-host a newsletter service at a cost comparable to paid hosting services. The overall challenge is balancing the cost of hosting against revenue coming in. And if I were trying to monetize this blog, I wouldn’t want to use invasive advertising. I’d either move to a platform that could handle subscription payments, or I’d look at some kind of passive sponsorship from a very specific sort of advertiser (rather than using traditional Web 2.0 ad markets). Rather than rely on a network, I’d keep it at its own URL. Maybe it wouldn’t be successful, but if I have to enter into an agreement, I want it to reflect my brand the whole way down.
Ghost is a newsletter platform/CMS that is free to use. Many of its themes are optimized for newsletters, but you can easily use it to run something between a blog and a newsletter. You’ll need to deploy a server and DNS, but in theory, you can use it without paying the creators directly. If you don’t want to muck about with that, you can also use their hosting (which has multiple paid subscription tiers based on the size of your audience). Either way, you own your own content and can use your own domain name. As a newsletter service, you can split your content into publicly available posts and subscriber-only posts. Posts can be sent to subscribers. And by default, it has RSS feeds enabled.
404 Media is a collective of tech journalists who previously worked for companies like Vice Media. They do high-quality reporting on technology and society, and when Vice began to contract, they got together to create a small, independently owned site driven by subscriber access. Because their audience is engaged and technologically focused, many of their readers use RSS readers to consume news from a variety of sources, and they knew that forcing paying subscribers to log into a website is a poor user experience.
The solution is a “tokenized” RSS feed. When you subscribe to 404 Media, you get a unique, private RSS URL (e.g., 404media.co/feed/unique_token_123). This feed delivers full-text articles, bypassing the paywall, because the URL is the key. If you cancel your subscription, they stop serving the feed at that URL.
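To illustrate the idea (this is my own minimal sketch, not 404 Media’s actual implementation), here’s roughly what a tokenized feed endpoint could look like in Python with Flask. The token store is a stand-in for a real subscriber database:

```python
# A minimal sketch of a "tokenized RSS" endpoint. Each subscriber gets a
# private feed URL; the token in the path is the key. Flask and the
# in-memory token store are assumptions for illustration only.
from flask import Flask, Response, abort

app = Flask(__name__)

# Stand-in for a real subscriber database keyed by feed token.
ACTIVE_TOKENS = {"unique_token_123": "subscriber@example.com"}

def render_full_feed() -> str:
    """Build the full-text RSS document; stubbed out for the sketch."""
    return "<rss version='2.0'><channel><title>Demo</title></channel></rss>"

@app.route("/feed/<token>")
def private_feed(token: str):
    if token not in ACTIVE_TOKENS:
        # Cancelled or invalid subscription: the feed simply goes dark.
        abort(404)
    return Response(render_full_feed(), mimetype="application/rss+xml")
```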
Their setup uses Ghost, plus tooling that handles subscriptions and the management of tokenized feeds. And it’s cool to see this approach spreading to other collectives of journalists, such as Aftermath.
What’s great about this model is that it respects the user’s choice of reading environment while protecting creator revenue. It treats the RSS reader as a first-class citizen, not a pirating tool. As a user, this feels good. I know that I’m paying the creators directly for their content, but it’s quiet. I can review and see all the content from the site in an environment I control. I set my own pace of consumption and access, and I can rave about and review things without the noise and intrusion of notifications from a social media interface.
It also allows the platform to own or control more of the stack. They decide which authors are publishing within the site (unlike Medium or Substack). It is a bit of an epicycle on the idea of subscription content, but I think it moves the needle forward for both readers and creators.
Bluesky and algorithmic choice
If RSS is “build my own subscription algorithm” and Twitter/X is “consume what the algorithm feeds me”, Bluesky tries to find the middle ground.
Under the hood, Bluesky runs on ATProto, which shares an outlook with RSS: it’s an open data standard for social media posts. On Twitter/Instagram/Facebook, your feed is a black box optimized to make you buy things. On Bluesky, the “algorithm” is just another feed that you can subscribe to. People build and maintain feeds with names like “Tech News”, “Cat Photos” or “Mutuals Only”. An open standard lets different federated servers share content, so in theory a separate app can talk with Bluesky, and you can access content from other services on Bluesky itself. It’s a seriously cool break from platform lock-in.
It’s possible to define your own feeds and lists, and switch between them. I’m subscribed to a local posters feed for the Waterloo Region. But the core idea is that curation is in your hands as a user, while you can still access a firehose of general content structured via ATProto. It’s syndication, with a customizable filter on top.
In addition to syndication, it’s possible to have your own servers which integrate with other publicly available ATProto servers. Mastodon is similar in scope and provision, with ActivityPub playing a similar role to ATProto, and a more diverse range of servers available for users to post on.
The catch with Bluesky and Mastodon is that they are emerging frameworks: getting up to speed with using either (or developing on ATProto/ActivityPub) is more involved than traditional Web 2.0 use. Still, they’re open and not too hard to work with. I’ve built a syndication script for this site, and found that creating a client and making a post is straightforward. The development hurdles for LinkedIn, Reddit, and Twitter have all gotten higher (rate limiting, stricter auth requirements, limited developer programs, etc.).
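For a sense of scale, here’s roughly what the posting half of such a script looks like with the community Python `atproto` SDK. The handle, app password, and post text are placeholders:

```python
# Roughly what a syndication script looks like with the community
# Python `atproto` SDK. Handle and app password are placeholders.
from atproto import Client

client = Client()
client.login("yourname.bsky.social", "app-password-here")

# Announce a new blog post; ATProto stores this as a structured record
# that any feed or client on the network can consume.
post = client.send_post(text="New blog post: Is RSS dead? https://example.com/rss-post")
print(f"Posted: {post.uri}")
```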
Cloudflare, AI, and paid access
If I look at the traffic for this site, it’s overwhelmingly from Ireland. Maybe I have a dedicated readership there, but I think it’s much more likely that web crawlers accessing my site are hosted in data centres in Ireland (if I’m wrong, shout out to my Irish audience — you’re one of my favourite Eurovision countries). The rise in web crawler traffic driven by the proliferation of generative AI is reshaping the fundamental economics of running high-traffic websites.
I pay pennies per month for hosting this site, but for higher traffic sites, this becomes as much of a cost/bandwidth issue as video was for RSS. If social media discovery killed ad revenue for news sites, the expansion of machines crawling their content will be even worse. Crawlers increase traffic, but don’t generate meaningful ad revenue.
And this is just the early days of machine-driven interaction with sites. I don’t know whether “generative AI-powered browsers” will take off, but the idea is that generative AI-powered tools will browse the web for us, acting as an interface for the content posted on websites. They don’t look at ads, and they don’t buy merchandise (or at least I don’t want AI doing that for me). They extract information and present it to readers in a way that is more focused than the noisy world of Web 2.0. It’s natural to prefer the experience of a focused chain of messages with an LLM over the noisy clutter of Twitter or Facebook. It’s part of why I like RSS!
If the web is funded by ads and the “users” incurring costs ignore ads, the economic model collapses.
Enter HTTP 402: Payment Required (it’s an old idea!)
This is an old, mostly unused part of the HTTP standard that companies are talking about reviving. The idea is simple: when a crawler hits your site, instead of blocking it or showing ads that won’t reach a human, your server asks for a micropayment. Basically, it says “if you want to scrape this blog post to summarize it for a user, please pay me $0.01”. The payment is transferred via a digital wallet, and the server delivers the content in a machine-readable format (typically JSON/Markdown).
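Here’s a hypothetical sketch of that flow in Python with Flask. The header names, price, and receipt check are all made up for illustration; the real negotiation schemes (like Cloudflare’s pay-per-crawl) are still being worked out:

```python
# A hypothetical sketch of an HTTP 402 flow for crawlers. The header
# names and receipt check are invented for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

PRICE_USD = 0.01  # asking price per article fetch

def receipt_is_valid(receipt: str) -> bool:
    """Stand-in for verifying a payment receipt with a wallet provider."""
    return receipt == "paid-demo-receipt"

@app.route("/posts/<slug>")
def post_for_machines(slug: str):
    receipt = request.headers.get("X-Payment-Receipt")  # hypothetical header
    if not receipt or not receipt_is_valid(receipt):
        # 402: tell the crawler what access costs instead of blocking it.
        resp = jsonify(error="payment required", price_usd=PRICE_USD)
        resp.status_code = 402
        resp.headers["X-Payment-Address"] = "wallet:example"  # hypothetical
        return resp
    # Paid: return the content in a machine-friendly format.
    return jsonify(slug=slug, format="markdown", content="# Post body...")
```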
In a sense, this is an evolution of paid RSS. The idea is that my hosting service provides content to the consuming service for a fee, with no ad-tech middlemen. It’s being heavily discussed and implemented by providers like Cloudflare. I could also build an MCP server for interacting with my site that includes rate limits and subscription requirements for access to my content. When an agent attempts to interface with the site through the MCP, it’s rejected without a subscription. The question really becomes one of scale, utility, and access.
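As a sketch of that second idea, here’s what gating content behind an MCP tool could look like with the Python MCP SDK’s FastMCP server. The subscription check is a stand-in for a real subscriber store, and a production server would enforce rate limits too:

```python
# A sketch of gating site content behind an MCP tool. The subscription
# check is a stand-in; a real server would also rate-limit requests.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("my-blog")

SUBSCRIBED_KEYS = {"demo-key-123"}  # stand-in for a real subscriber store

@mcp.tool()
def get_post(slug: str, api_key: str) -> str:
    """Return the full text of a post, for subscribed agents only."""
    if api_key not in SUBSCRIBED_KEYS:
        return "Error: a paid subscription is required to read full posts."
    return f"Full markdown content of '{slug}' goes here."

if __name__ == "__main__":
    mcp.run()
```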
There are difficulties and problems with this route, and there are risks of technological capture by providers like Cloudflare. It also leaves me wondering whether this closes off the old ideal of the open web, but we also have to navigate the costs of hosting and creating content somehow.
Summary
This is an interesting historical moment, not just because of the massive social and political upheavals that are going on, but because we’re reaching the breaking point of the Web 2.0 paradigm.
We’ve spent 20 years trying to fund the web with human attention and we ended up with a surveillance economy that feels unsustainable. I think the future consists of a mix of tools for discovery and paying for the content that you want to support or use, and I think we’ll see a mixture of self-funded projects supported by 402 payments for machine access, tokenized RSS feeds, and self-owned newsletter services.
If we’re drifting away from the fully open web, I’m okay with that. I’d much rather have an honest, utility-driven system of communication: one that gives more power to creators and consumers of media, and that preserves our attention.
And best of all, it’s going to be built on plumbing that works.*
* Ok, there are a lot of messy, competing protocols here, and I’m being a little optimistic. But I do think there’s a fighting chance for the internet at the moment.