Creating a back end for a sustainable weekly newsletter

Interested in the quest to create decentralized identity for people online, where users are in control of their personal data, I began creating an Awesome List a few years ago, which quickly grew into its own website, Decentralized-ID.com.

More recently, I'd been chatting with Kaliya Identity Woman on and off for around a year, after initially contacting her about potentially collaborating on decentralized identity. At some point, she proposed the idea of writing a newsletter together under the Identosphere.net domain.

Instead of jumping in head first, like I usually do, we've spent a lot of time figuring out how to run a newsletter sustainably, with as few third-party services as possible, while I learn my way around various web tools.

We're tackling a field that touches every domain, has a deep history, and is currently growing faster than anyone can keep up with. But this problem of fast-moving information streams isn't unique to digital identity, so I figured some indie hackers might appreciate learning our process.

GitHub Pages

Once my 'Awesome List' outgrew the Awesome format, I began learning to create static websites with GitHub Pages and Jekyll.

Static websites are great for security and easy to set up, but if you're an indie hacker, you're gonna want some forms so you can begin collecting subscribers! Forms aren't natively supported by Jekyll or GitHub Pages.

Enter Staticman

Staticman is a comments engine for static websites, but it can be used for any kind of form, with the proper precautions.

It can be deployed to Heroku with the click of a button, made into a GitHub App, or run on your own server. Once set up, it will submit a pull request to your repository with the form details (and offers an optional Mailgun integration).

I set it up on my own server and created a bot account on GitHub with permission to a private repository, which the Staticman app updates with new subscriptions.

Then I made the form and added a staticman.yml config file to the root of the private repository where I'm collecting email addresses.

The Subscription Form

<center>
<h3>Subscribe for Updates</h3>
<form class="staticman" method="POST" action="https://identosphere.net/staticman/v2/entry/infominer33/subscribe/master/subscribe">
    <input name="options[redirect]" type="hidden" value="https://infominer.xyz/subscribed">
    <input name="options[slug]" type="hidden" value="infohub">
    <input name="fields[name]" type="text" placeholder="Name (optional)"><br>
    <input name="fields[email]" type="email" placeholder="Email"><br>
    <input name="fields[message]" type="text" placeholder="Areas of Interest (optional)"><br>
    <input name="links" type="hidden" placeholder="links">
    <button type="submit">Subscribe</button>
</form>
</center>

The staticman.yml config in the root of my private subscribe repo

subscribe:
  allowedFields: ["name", "email", "message"]
  allowedOrigins: ["infominer.xyz","identosphere.net"]
  branch: "master"
  commitMessage: "New subscriber: {fields.name}"
  filename: "subscribe-{@timestamp}"
  format: "yaml"
  generatedFields:
    date:
      type: "date"
      options:
        format: "iso8601" 
  moderation: false
  name: "infominer.xyz"
  path: "{options.slug}" 
  requiredFields: ["email"]
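
With moderation set to false, each submission gets committed straight to the private repo as a YAML file under the slug's directory, rather than waiting in a pull request. A committed entry ends up looking something like this (a made-up example; the exact fields follow from the config above):

name: Jane Doe
email: jane@example.com
message: verifiable credentials
date: "2021-04-12T09:30:00Z"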

Staticman seems to be struggling with GitHub's recent move to change the name of the default branch from master to main (for new repositories), so unfortunately I had to re-create a master branch to get it running.

Planet Pluto Feed Reader

One of the most promising projects I found in pursuit of keeping up with all the info is Planet Pluto Feed Reader, by Gerald Bauer.

In online media a planet is a feed aggregator application designed to collect posts from the weblogs of members of an internet community and display them on a single page. - Planet (Software)

For the uninitiated, I should add that websites generate RSS feeds that can be read by a newsreader, allowing you to keep up with posts from multiple sites without needing to visit each one individually. You very likely use RSS all the time without knowing it: your podcast player, for example, depends on RSS feeds to bring episodes directly to your phone.

Pluto Feed Reader works just like your podcast app, except that instead of an application on your phone that only you can browse, it builds a simple webpage from the feeds you add to it, which can be published on GitHub Pages, your favorite static web host, or your own server in the cloud.
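
Feeds are listed in a plain-text config file (the planetid.ini that the GitHub Action below builds from), one section per feed. A rough sketch, with made-up entries, of what that config looks like:

title = Identosphere BlogCatcher

[exampleblog]
  title = Example Identity Blog
  link  = https://blog.example.com
  feed  = https://blog.example.com/feed.xml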

Pluto is built with Ruby, using the ERB templating language for web-page design.

One of the cool things about ERB is that it lets you use any Ruby function in your web-page template, supporting any capability you might want while rendering your feeds. This project has greatly helped me learn the basics of Ruby while customizing its templates to suit my needs.

Feed Search

I use the RSSHub Radar browser extension to find feeds for sites while I'm browsing. However, that would be a lot of work if I had a whole list of sites to add in bulk.

I found a few simple Python apps that find feeds for me. They aren't infallible, but they do let me find feeds for multiple sites at the same time; all I have to do is format the query and hit enter.

As you can see below, these are not fully formed applications, just a few lines of code. To run them, it's necessary to install Python, install the package with pip (pip install feedsearch-crawler), and type python at the command prompt, which takes you to a Python terminal that will recognize these commands.

From there you can type/paste Python commands for demonstration, practice, or simple scripts like this. I could also put the following scripts into their own feedsearch.py file and run python feedsearch.py, but I haven't gotten around to doing anything like that.

Depending on the site, and the features you're interested in, either of these feed seekers has its merits.

Feedsearch Crawler

from feedsearch_crawler import search, output_opml
import logging

# Log crawl details to a file for debugging
logging.basicConfig(filename='example.log', level=logging.DEBUG)

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    feeds = search(site)                # crawl each site for feed links
    print(output_opml(feeds).decode())  # print the discovered feeds as OPML

Feed seeker

from feed_seeker import generate_feed_urls

sites = ["http://bigfintechmedia.com/Blog/", "http://blockchainespana.com/", "http://blog.deanland.com/"]
for site in sites:
    for url in generate_feed_urls(site):  # yields every feed URL found on the site
        print(url)

GitHub Actions

Pluto Feed Reader is great, but I needed it to run on a regular schedule, so I wouldn't have to run the command every time I wanted to check for new posts. For this, I've used GitHub Actions.

This is an incredible feature of GitHub that lets you spin up a virtual machine with an operating system and the dependencies supporting your application, and run whatever commands you'd like, on a schedule.

name: Build BlogCatcher
on:
  schedule:
    # This action runs every 4 hours.
    - cron:  '0 */4 * * *'
  push:
    paths:
    # It also runs whenever I add a new feed to Pluto's config file.
    - 'planetid.ini'
jobs:
  updatefeeds:
    # Install Ubuntu
    runs-on: ubuntu-latest
    steps:
    # Access my project repo to apply updates after pluto runs
    - uses: actions/checkout@v2
    - name: Set up Ruby
      uses: ruby/setup-ruby@v1
      with:
        ruby-version: 2.6
    - name: Install dependencies
      # Download and install SQLite (needed for Pluto), then delete downloaded installer
      run: | 
        wget http://security.ubuntu.com/ubuntu/pool/main/s/sqlite3/libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
        sudo dpkg -i libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
        rm libsqlite3-dev_3.22.0-1ubuntu0.4_amd64.deb
        gem install pluto && gem install nokogiri && gem install sanitize
    - name: build blogcatcher # This is the command I use to build my pluto project
      run: pluto b planetid.ini -t planetid -o docs
    - name: Deploy Files # This one adds the updates to my project
      run: |
        git remote add gh-token "https://github.com/identosphere/identity-blogcatcher.git"
        git config user.name "github-actions[bot]" 
        git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
        git add .
        git commit -a -m "update blogcatcher"
        git pull
        git push gh-token master

Identosphere Blogcatcher

Identosphere Blogcatcher (source) is a feed aggregator for personal blogs of people who've been working on digital identity through the years, inspired by the original Planet Identity.

We also have a page for companies, and another for organizations working in the field.

Identosphere Weekly Highlights

Last month, Kaliya suggested that since we had these pages up and running smoothly, we were ready to start our newsletter. The aggregator is only a small piece of the back-end information portal we're working towards, not yet enough to make this project as painless and comprehensive as possible, but we had enough to get started.

Every weekend we get together, browse the BlogCatcher, and share essential content others in our field will appreciate.

We'll be publishing our 6th edition at the start of next week, and our subscriber numbers are doing well!

This newsletter is free, and a great opportunity for us to work together on something consistent while developing a few other ideas.

identosphere.substack.com

Setting up a newsletter without a third-party intermediary is more of a challenge than I'm currently up for, so we've settled on Substack for now, which seems to be a trending platform for tech newsletters.

It has a variety of options for both paid and free content, and you can read our content before subscribing.

Support on Patreon

While keeping the newsletter free, we are accepting contributions via Patreon. (Yes, another intermediary, but we can draw upon a large existing user base, and it's definitely easier than setting up a self-hosted alternative.)

So far, we have enough to cover a bit more than server costs, and ideally this will grow to support our efforts and enable us to sustainably continue developing these open informational projects.

Python, Twitter API, and GitHub Actions

Since we're publishing this newsletter, and I've gotten a better handle on my inner state, I decided it was time to come back to Twitter. However, I knew I couldn't do it the old way, manually re-tweeting everything of interest and spending hours a day scrolling multiple accounts trying to stay abreast of important developments.

Instead, I dove into the Twitter API. The benefits of using Twitter programmatically can't be overstated. For my first project, I decided to try an auto-poster, which could let me keep an active Twitter account without having to regularly pay attention to Twitter.

I found a simple guide, How To Write a Twitter Bot with Python and Tweepy, composed of a dozen lines of Python. That script posts a single tweet to your account, but I wanted to post from a pre-made list, so I figured out how to read from a YAML file and then used GitHub Actions to run the script on a regular schedule.
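
For a sense of how little is involved, here's a minimal sketch of that approach (not my actual script; the file name, YAML layout, and environment variable names are all placeholders):

import os
import yaml
import tweepy

# Authenticate with credentials stored in environment variables (placeholder names)
auth = tweepy.OAuthHandler(os.environ["API_KEY"], os.environ["API_SECRET"])
auth.set_access_token(os.environ["ACCESS_TOKEN"], os.environ["ACCESS_SECRET"])
api = tweepy.API(auth)

# tweets.yml is assumed to be a simple YAML list of tweet strings
with open("tweets.yml") as f:
    tweets = yaml.safe_load(f)

api.update_status(tweets[0])  # post the first tweet in the queue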

While that didn't result in anything I'm ready to share here quite yet, somewhere during that process I realized that I could write Python. After playing around with Ruby in ERB to build the BlogCatcher, and running various Python scripts other people wrote, tinkering where necessary, I had eventually pieced together enough knowledge to actually write my own code!

Decentralized ID Weekly Twitter Collections

With that experience as a foundation, I knew I was ready to come back to Twitter, begin making more efficient use of its wealth of knowledge, and see about keeping up my accounts without losing too much hair.

I made a script that searches Twitter for a variety of keywords related to decentralized identity and writes the tweet text and some other attributes to a CSV file. From there, I sort through those tweets, save only the most relevant, and publish a few hundred tweets about decentralized identity to a weekly Twitter collection, which makes our job a lot easier than visiting hundreds of websites to find out what's happening. :D
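
Sketched out, the search-and-save step looks something like this (the query, file name, and columns here are illustrative, and the search call is tweepy 3.x's standard-search API):

import csv
import os
import tweepy

# Authenticate as in the earlier sketch (placeholder environment variable names)
auth = tweepy.OAuthHandler(os.environ["API_KEY"], os.environ["API_SECRET"])
auth.set_access_token(os.environ["ACCESS_TOKEN"], os.environ["ACCESS_SECRET"])
api = tweepy.API(auth)

# Search recent tweets for a keyword and write them to a CSV for manual review
with open("decentralized_id_tweets.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "user", "text", "url"])
    for tweet in tweepy.Cursor(api.search, q="decentralized identity",
                               tweet_mode="extended").items(200):
        writer.writerow([
            tweet.created_at,
            tweet.user.screen_name,
            tweet.full_text.replace("\n", " "),
            "https://twitter.com/%s/status/%s" % (tweet.user.screen_name, tweet.id),
        ])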

Soon these collections will be regularly published to decentralized-id.com, which I found out is an approved method of re-publishing tweets, unlike the ad hoc method I was using before: sharing them to Discord channels (where Discord grabs the metadata and displays a preview image and text), then exporting their contents and re-publishing that.

I do intend to share the source for all of that once I've worked out the kinks and set it running on an action.

Twitter collections I've made so far

Thanks for Reading!

I look forward to any feedback, suggestions, or ideas you have!

Originally published on infominer.xyz, adapted for this context.
