A Likes Feed

Client-side RSS parsing is fun and awesome.

I recently saw a tweet from Dave Rupert about how Feedbin will give you a URL to a public RSS feed made up of the articles you’ve starred through Feedbin. He made a page on his site that parses that feed and shows the articles as normal links. I then found another post that gives a little more detail on how to actually set it up.

Naturally, I had to try it for myself. After all, I recently integrated search into the site without any server-side nonsense. RSS parsing in the browser? Yes, please!

Feedbin

Feedbin is awesome. I miss Google Reader, but contrary to popular belief, RSS did not die with it. RSS is alive and well. I use Feedbin as a sync service for my curated collection of feeds. I use Reeder to access my feeds most of the time, but the Feedbin web interface is nearly as good. Feedbin isn’t free, but I encourage anyone and everyone to try it out, as it’s a wonderful service.

Enabling the Starred Feed

In order to do any of this, I had to enable the starred articles feed. This can be done by going to the Feedbin General Settings page and enabling the Starred articles feed. The page will then show the URL for that feed, which we’ll need later.

Bonus: External Saving

Not all the fun articles I read come from my RSS feeds, but there is a way to save external links into Feedbin, and even a way to auto-star them!

  1. To save an external page into Feedbin, there is a bookmarklet found under the Newsletters & Pages page of the settings. This will save a page into Feedbin under a special “Pages” feed.
    • If you have the Feedbin mobile app, you can also use a native share extension.
  2. Under the Actions page of the settings, I created a new action. I selected the Pages feed and checked both boxes to mark the item as read and to star the item.

After doing both of these, I can use the bookmarklet (or share extension) to save any page into Feedbin. The action then takes over to mark the item as read and star it. This basically imports it directly into the starred feed without me seeing it in my normal unread feed.

The Feed Page

This next part is similar to what I did for the search page on the site.

Layout Template

I called it feed.ejs thinking that I may want to integrate more RSS feeds into the site using this method. Here’s the entirety of that template:

<main class="container container--postlist" data-feed-url="<%= page.feedUrl %>">
    <h1 class="u-sr-only"><%= page.title %></h1>

    <div class="post post--listitem post--archive-tag">
        <div class="post__inner">
            <%- page.content %>
        </div>
    </div>
</main>

<script src="/js/feed.js" defer></script>

The important parts here are the page.feedUrl value being injected into the document as a data attribute, and the script being loaded at the end.
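
To make that concrete, once the page below is rendered, the opening tag ends up looking something like this, with the feed URL filled in from the page’s front-matter (shown in the next section):

<main class="container container--postlist" data-feed-url="https://feedbin.com/starred/01c4070610ce6d1f90944c88a486a653.xml">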

The Page

Now I needed a page to use that template. I created a file under my source directory at likes/index.md. This ensures it exists at the URL /likes. Here’s that entire page:

---
layout: feed
title: Likes
feedUrl: https://feedbin.com/starred/01c4070610ce6d1f90944c88a486a653.xml
---

Starred articles and such from [Feedbin](https://feedbin.com/). You can also [subscribe to this feed directly](https://feedbin.com/starred/01c4070610ce6d1f90944c88a486a653.xml). [What is this?](/2020/02/a-likes-feed/)

Note the feedUrl property in the front-matter. That’s what connects to page.feedUrl in the template. Each page I create using the template can define its own feed.

Parsing the RSS Feed

This is the fun part.

// Wrapped in an async IIFE so that await works in a plain (non-module) script.
(async () => {
    // Get the feed URL from the data attribute and build the proxied request URL.
    let container = document.querySelector('main[data-feed-url]');
    let params = new URLSearchParams({ url: container.dataset.feedUrl });
    let url = `https://moscardino-cors.azurewebsites.net/api/proxy?${params}`;

    // Fetch the feed and parse the XML natively in the browser.
    let response = await fetch(url);
    let responseText = await response.text();
    let data = new window.DOMParser().parseFromString(responseText, 'text/xml');
    let html = ``;

    // Build a blob of HTML out of each item's title, link, and publish date.
    data.querySelectorAll("item").forEach(el => {
        let url = new URL(el.querySelector('link').textContent);
        let date = new Date(el.querySelector('pubDate').textContent);

        html += createPostItemHtml({
            title: el.querySelector('title').textContent,
            url: url.toString(),
            hostname: url.hostname,
            date: date.toISOString().substring(0, 10)
        });
    });

    // Add the generated markup to the page.
    container.insertAdjacentHTML("beforeend", html);
})();

What’s happening here? Let’s break it down:

  1. The first block of variables is just getting the feed URL from the DOM and building the full request URL with a simple CORS proxy I have running as an Azure Function (a rough sketch of such a proxy appears at the end of this section).
  2. The second block is actually fetching and parsing the feed. The magic, though, is the DOMParser. RSS is just XML. XML can be parsed by the browser natively and then traversed like you traverse the DOM. CSS Tricks has a good guide for this.
  3. The loop is getting each item from the feed and creating a blob of HTML from it. I’m only grabbing the title, date, and URL for each page, since the description contains HTML and I don’t want to deal with that.
    • I left off the createPostItemHtml function because it’s just a wrapper around a template string; a minimal sketch follows this list.
  4. Finally, add our new blob of HTML to the page.
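
For reference, here’s roughly what that createPostItemHtml wrapper could look like. The markup inside is hypothetical (the real function just mirrors the site’s post-list template string); the only requirement is that it turns those four fields into a chunk of HTML:

function createPostItemHtml({ title, url, hostname, date }) {
    // Hypothetical markup; the real function wraps the site's post-list template string.
    return `
        <article class="post post--listitem">
            <a href="${url}">
                <h2>${title}</h2>
                <p><time datetime="${date}">${date}</time> &middot; ${hostname}</p>
            </a>
        </article>`;
}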

Side note: I’m using APIs that are not available in IE. These include URLSearchParams, fetch(), and async/await. I don’t support IE because I don’t want to, and it is marvelously freeing. These new APIs are so nice.
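
As for the one server-side piece, the CORS proxy from step 1: I won’t share the actual function here, but as a rough, hypothetical sketch, an HTTP-triggered Azure Function doing that job could look something like this (Node.js v3 programming model, with node-fetch standing in for an HTTP client). It just fetches the requested URL on the server and echoes it back with a permissive CORS header:

const fetch = require('node-fetch');

module.exports = async function (context, req) {
    // Fetch the requested URL server-side, where CORS doesn't apply...
    const upstream = await fetch(req.query.url);
    const body = await upstream.text();

    // ...then return it with a CORS header so the browser will accept it.
    context.res = {
        status: upstream.status,
        headers: {
            'Content-Type': upstream.headers.get('content-type') || 'text/xml',
            'Access-Control-Allow-Origin': '*'
        },
        body: body
    };
};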

Conclusion

This method is simple and works pretty well. I see this as an alternative to Twitter for sharing interesting articles I find. It’s something that uses (mostly) open web technologies to make a pseudo-social network without interference from a big company.

RSS is great and Feedbin is awesome for providing the starred feed.

Photo by Isiah Gibson on Unsplash

Update: As of 2020-07, I’m now using Pinboard to power the feed.