GitHub Pages Hosting

As I mentioned in my post about Dropbox Passwords, I’m looking to cut down on the number of services I pay for each month. One of the areas I’ve decided to trim is my domains; I’m letting a few domains that I never found much use for expire rather than having them automatically renew. Some have been renewing like this for years just because I didn’t want to lose them, despite my never having any real use for them. With fewer domains come fewer websites, to the point where I started to wonder if I could get away with ditching my VPS. I had been using the same VPS for over 2 years, and it served me well. In a world with so many hosting options, though, it seemed like overkill just to run 2 static websites, each of which was only a single page.

One of my sites I placed on Firebase. I’m not a fan of using Google products, but I’ve used Firebase previously (moving my website to an existing, stale Firebase project will be the topic of another post), and the free Spark plan gives me more than enough for a simple site with 1 GB of storage and 10 GB of egress traffic each month.

I wanted to check out some different options for the other site, though. After recently reading one of Kev Quirk’s blog posts, I thought I would give Netlify a shot. Their free Starter plan seems great for a simple hobby site and includes CI (continuous integration) from a git repository. I signed up for an account but quickly disliked the fact that using my own domain meant moving its nameservers to Netlify. While this isn’t horrible, I really prefer to keep managing my DNS in a single place as opposed to scattering nameservers around to wherever my content is hosted. Currently all of my personal domains have DNS hosted in the same place, and I’d like to keep it that way. As a result, I shelved the idea of Netlify and looked to GitHub Pages instead.

I had actually used GitHub Pages before, way back in the day when they were brand new and I set up my first Jekyll-based blog. It wasn’t bad by any stretch, but a lot of it was clunky. I remember having to manually add text files to the repository to configure my custom domain and to host content out of a folder that was named differently than what was expected. Likewise, there were no SSL options, so I ended up putting my GitHub Pages site behind Cloudflare in order to secure it. I figured this would be a good opportunity to see what, if anything, had changed. If I hated it, I wouldn’t be out anything and could continue to look at other options.

The initial setup is still the same as I remember: just create a public repository named username.github.io, swapping in your own GitHub username.

I did this through the GitHub website in less than a minute. Next up I ran git clone to get a local copy of the repository on my laptop in the same directory where I keep all of my other GitHub repos. With my local environment ready, I just copied the handful of files that I had backed up from my VPS into the root directory of the repository; without any other action, GitHub will host content from the root of the repo. Since this is a static, single-page site, I don’t need to worry about compiling it with static site generators like Jekyll or Hugo. I was able to commit the change adding the files, navigate to my site’s GitHub Pages URL, and see my site.
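Condensed, the whole setup is just a few commands. A rough sketch, where username is a stand-in for your GitHub username (the repository name follows GitHub’s documented username.github.io convention, and the backup path is a made-up example):

```shell
# Clone the freshly created Pages repository (username is a stand-in)
git clone https://github.com/username/username.github.io.git
cd username.github.io

# Copy the backed-up static files into the repo root; by default,
# GitHub Pages serves content from the root of the default branch
cp ~/backups/site/* .

# Commit and push; GitHub publishes the site shortly after
git add .
git commit -m "Add static site content"
git push
```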

With the content out of the way, I wanted to set up my custom domain. The GitHub side of the work can now be done through the Settings menu of the repository; it basically replaces the manual work that I previously had to do by adding files to my repository:

The top allows me to change the branch and directory to host content from; in my case I could just leave the defaults. The Custom domain section allows me to type in my domain of choice. This just adds a file named CNAME to my repo containing the domain information. Then I just had to follow the directions for setting up a custom domain in my DNS host’s settings.

Note: It’s a little wonky from the directions, but to make GitHub redirect everything appropriately when using both an apex domain and a subdomain, you follow both sections of the instructions verbatim. For example, I wanted the domain to be, but I also wanted to still redirect to the site. I configured the apex domain via the instructions above, creating 4 A records pointing to different IP addresses. Then I configured a CNAME record for pointing not to, but instead to If you do it this way, GitHub will work it all out under the hood.
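For reference, the records end up shaped like this. The IP addresses below are GitHub Pages’ documented A-record targets at the time of writing, and example.com/username are stand-ins:

```text
; Apex domain: four A records pointing at GitHub Pages
example.com.      A
example.com.      A
example.com.      A
example.com.      A

; www subdomain: CNAME to your *.github.io address, not to the apex
www.example.com.  CNAME  username.github.io.
```

Double-check the current addresses against GitHub’s custom domain documentation before using them, as they can change.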

Immediately after setting up my DNS records, the option for Enforce HTTPS was not available, telling me that the site was not configured properly. I rightly assumed this just meant DNS needed time to propagate. I checked back 15 minutes later (which is the TTL of my DNS records), and it presented me with a new message that the certificate wasn’t finished being created yet. I once again rightly assumed that they were spinning up these certificates through Let’s Encrypt, so I browsed Hacker News for a few minutes until refreshing my repository’s settings showed that the option to force HTTPS was now available. I simply checked the box, waited a few minutes, and then verified that explicitly going to the HTTP version of the site would redirect me successfully to HTTPS. If this doesn’t work for you, chances are that you just didn’t give it enough time. While the tooltip in the GitHub UI says it can take up to 24 hours, it took about 5 minutes for my site.

The last thing to check was that the CI was working so that changes to the repo would be reflected on the site. A few things had changed since I took the backup of my site, giving me some tweaks to test with. For one, I restarted this blog, and I also deleted my Twitter account since Twitter is a cesspool (that might be a good topic for another post…), so I wanted to swap the Twitter link on my site for one pointing to this blog. I first did a git pull to get local copies of files like CNAME that had been created in the cloud, and then I quickly updated my HTML to share the link using the Font Awesome RSS feed icon. After committing and pushing the change, I refreshed the site to confirm it had been updated.

On the whole, there’s really nothing for me to complain about with GitHub Pages. It’s free, I can use the same GitHub account I’m already in every day, I can use a custom domain without moving my DNS, and I get a Let’s Encrypt certificate out of the box. Obviously, though, my use case for it is very simple, and your mileage may vary. With options like this, though, I feel even better about my idea to stop running my own VPS just to host a couple of small, low-traffic websites.

Self-Hosting A Static Website

Earlier this week a friend reached out to me regarding a website. He had just finished developing his very first iOS game and was ready to submit it to Apple for approval. One of Apple’s myriad requirements, though, is a website containing the author’s privacy policy. My friend had no website and no idea how to make one, so he asked me if I could help. It seems wild to me that someone could have the chops to make an iOS app in Objective-C or Swift but not be able to make a website, but each of us has a different skill set.

We first took some early steps gathering requirements. What did he want on the site? Literally just the privacy policy. Where did he want to host it? Wherever was cheapest. Did he have a domain name already? Yes! This was fairly straightforward; he just wanted the very basics. After a bit of discussion I convinced him to write up a quick “about me” type of page so that we could have more than just the privacy policy. From there I could get to work.


The first thing I did was have him head over to Vultr and spin up their cheapest instance. I think this is running him $5 USD per month. I had him pick Ubuntu as the server operating system given that it’s the one I’m most familiar with. My friend has some familiarity with Linux but not a lot of practical knowledge; when I asked him to shoot me SSH credentials for an account with sudo access, he literally sent me the root account from Vultr. Ick.

Configuring The Host


My first goal was to configure the host. I started that off by creating user accounts for each of us:

adduser username
usermod -aG sudo username

After switching users and verifying my new account worked, I disabled root’s ability to log in:

sudo passwd -l root


Next I wanted to change the default SSH port since having 22 open means a million places from across the planet are going to throw garbage traffic at your server. I did this by modifying the SSH config at /etc/ssh/sshd_config, finding the line with #Port 22, uncommenting it, and changing the port to a high number of my friend’s choice. Then I restarted SSH:

sudo systemctl restart ssh
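For reference, that edit can also be scripted rather than done by hand; a sketch assuming 2222 as the hypothetical port choice:

```shell
# Uncomment the "#Port 22" line in sshd_config and set the new port
sudo sed -i 's/^#Port 22$/Port 2222/' /etc/ssh/sshd_config
```

Just remember to allow the new port through the firewall before disconnecting, or you can lock yourself out.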


I wanted to enable the firewall as well, so I opened up the new SSH port along with 80 and 443 for our eventual website:

sudo ufw allow sshPortNumber/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable


I next needed a web server; Nginx has been my go-to choice for a long time. Rather than re-hashing all of the steps, I’ll just recommend following the excellent documentation from DigitalOcean which nicely covers the Nginx configuration. That takes you to the point where you are hosting a website. Then you just need content on it.
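For context, the end state of that guide is a server block along these lines (the domain and paths here are placeholders; match whatever the guide has you create):

```nginx
server {
    listen 80;
    listen [::]:80;

    root /var/www/example.com/html;
    index index.html index.htm;

    server_name example.com www.example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
```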


I’m an advocate of using HTTPS for everything, and with free certificates from Let’s Encrypt there’s no reason not to. Given that we have shell access, using certbot is the way to go. There’s also excellent documentation on that process on Ubuntu with Nginx. I highly recommend selecting the option to redirect any HTTP traffic to HTTPS.


Now for the website itself. I’m not really much of a web developer, and I dislike making anything frontend; I don’t exactly have the best design sense. So I once again opted to leverage Hugo to take care of that for me. I’ve written about the specifics of using Hugo in detail. Since we really just wanted a generic landing page with my friend’s socials and then links to the About and Privacy Policy pages, I ended up going with the Hermit theme. It has a nice, simple look. My friend’s favorite color is mint green, so the default background also works nicely with that when I changed the accent color. The theme nicely includes an exampleSite so that I can steal their config.toml file and also their “About” page to make things even easier for myself.


One of the nice things about Hugo is that, since everything is a simple text file, it’s very easy to compress your entire site and save a backup. Then if something terrible happens to your server, it’s extremely easy to get the site back up and running on a different machine. In this case, I made tarballs for both the finished, compiled site and the Hugo directory storing the configuration and Markdown.

tar -zvcf ~/temp/html_output.tar.gz /var/www/
tar -zvcf ~/temp/hugo_directory.tar.gz /var/www/

With the tarballs created, I used an SFTP client to copy them off the server for safe keeping.
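Restoring from one of those tarballs on a fresh server is just the reverse operation; a sketch using the same example path (GNU tar strips the leading / at archive time, so extracting from the filesystem root lands everything back under /var/www):

```shell
# After copying the tarball to the new machine (scp, sftp, etc.):
sudo tar -zxvf ~/temp/html_output.tar.gz -C /
```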

Wrap Up

In total it took me about an hour and a half to get everything up and running. Having gone through this process many times for websites of my own, I’ve got a decent bit of experience with it, but this shows it still doesn’t take a super long time to get a decent website up and running. The big benefits are:

  1. The site is cheap to run. Even the smallest instance at any VPS provider will be able to handle multiple sites with ease unless they start getting really popular, so if my friend wants to create any other sites in the future he won’t need additional hosting.
  2. Backups are stupid simple. My friend isn’t beholden to a hosting provider or trying to work within the confines of something more expensive like WordPress or Squarespace.

The downsides are present, though, so you have to be cool with them:

  1. Setup takes more technical chops than clicking through a Squarespace template editor. While the documentation for everything in this post is extremely good, if working in a terminal freaks you out then this likely isn’t for you.
  2. Content is authored in Markdown. This likely doesn’t matter for my friend at the moment since he’s not really posting anything new to the site, but it would be something to keep in mind if he decided to start a blog. In that scenario, I usually just SSH to the server and author my content in Vim. You could also author the Markdown elsewhere and copy it to the server, or use SFTP to open the Markdown file on the server from an editor on your local machine. It’s definitely not as simple as a WYSIWYG editor in your browser, though.
  3. Maintenance is something that will need to be done at least periodically. The server will need to be patched. That’s easy enough to do with a simple sudo apt update && sudo apt upgrade and then reboot when necessary, but it’s just another step to keep in mind. Likewise, bouncing the server means that the website will be down, even if it’s typically only for a moment or two.

Being kind of a pretentious technical snob, I personally find it easier to author my content in Markdown in Vim instead of using a WYSIWYG editor in a GUI, but your mileage will vary based on your own preferences.

Fixing Let’s Encrypt Certificates After You Delete Them Like An Idiot

In Episode 11, I discussed how I run a couple of my websites on a Linux server with Nginx as the web server, encrypting connections to them via Let’s Encrypt certificates. Shortly after recording that episode, though, I realized I had messed up my certificate configuration via certbot. If you don’t recall the episode, I had taken my web server, which was only running my first site, and added my second site to it so that both were running on the same server. When I added the second site, I had to re-run certbot and get a certificate for it alongside the certificate I already had for the first site. That’s where I messed up; I got tipped off when I received the following email from Let’s Encrypt letting me know that my certificate was about to expire.

“Your certificate (or certificates) for the names listed below will expire in 10 days (on 07 Jul 19 12:52 +0000). Please make sure to renew your certificate before then, or visitors to your website will encounter errors.

We recommend renewing certificates automatically when they have a third of their
total lifetime left. For Let’s Encrypt’s current 90-day certificates, that means
renewing 30 days before expiration. See for details.”

That seemed odd to me since I knew I had a cron job running to renew the certificates. I checked the expiration of the certificate on the first site and saw that it had nearly two months left. I checked the certificate applied to the second site and saw the same thing. EXACTLY the same thing, in fact. In double-checking the certificate on the second site, I realized that the Common Name was for the first site: I was using the same certificate for both of my sites. Oops. What happened was that when I added the second site and re-ran certbot, I got the following:

My thought at the time was that I needed to select ALL of the sites. In reality, this overwrote the configuration I already had on the first site and applied its certificate to both sites. This is where I decided to be really stupid. I decided that I would delete the existing certificates, re-run certbot twice (once for each site), and then be done. I started off by deleting the certificate that was applied to both sites:

sudo certbot delete --cert-name

I did the same to delete the other certificate. Then I tried to do a vanilla run of certbot to get the menu in my screenshot above and individually configure each of my two sites. Instead of getting that menu, though, I received an error message that my sites were pointing to certificates that didn’t exist. certbot then exited without giving me any further options. The problem was that my configuration files still referenced the certificates I had just nuked. Oops.


After thinking about it for a few seconds, it made sense; certbot can’t know what’s going on and expects me to clean up the mess I made instead of making assumptions about whether or not I should still have certificates. To keep my life simple, I decided to go back to a clean slate on my sites-available configurations, since I knew certbot could redo the configuration as long as I could get it to run successfully. As a result, I just set the configurations for both sites back to a super vanilla setup. Just run a quick %s in Vim to swap the domain in the file below for what I configured on the second site.

server {
        listen 80;
        listen [::]:80;

        root /var/www/;
        index index.html index.htm index.nginx-debian.html;

        location / {
                try_files $uri $uri/ =404;
        }
}

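That %s substitution in Vim can equally be done with sed on a copy of the file; a sketch using hypothetical site names in place of the real domains:

```shell
# site-one and site-two are stand-ins for the two real domains
sudo cp /etc/nginx/sites-available/site-one /etc/nginx/sites-available/site-two
sudo sed -i 's/site-one/site-two/g' /etc/nginx/sites-available/site-two
```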
Once I had that done, I restarted nginx just to make sure it was working and I could hit port 80 for both sites.

sudo systemctl restart nginx

With that working, I was able to re-run certbot and finally get the menu from my initial screenshot. I first configured a certificate for the first site and its www variant. Once that was done, I ran certbot one more time and walked through getting a certificate for the second site and its www variant. In both instances, I opted to have certbot reconfigure the files in sites-available to redirect all HTTP traffic to HTTPS. I restarted Nginx one more time, and finally I had everything configured the way I wanted, with each site using its own certificate.

The moral of the story is to actually troubleshoot the problem instead of just starting off by deleting shit from your server. Also, try staying pink!

PSA: Get Ready For New Let’s Encrypt Validation

If you’re using Let’s Encrypt, now would be a really great time to make sure that you’re ready for them to stop supporting ACME TLS-SNI-01 domain validation. I got an email a couple of days ago (as I assume everyone using Let’s Encrypt did) letting me know this change was coming. I had nothing to actually do, but going through the validation was super easy and is likely worth the time to ensure your site(s) aren’t impacted. March 13th is the deadline for ACME TLS-SNI-01 to no longer function, so there’s still a lot of time to take a couple of minutes and verify you’re in good shape.

*Note: I’m using certbot, which makes this whole thing super easy. If you’re not using certbot then your steps will be different.*

The Let’s Encrypt staging environment has already disabled ACME TLS-SNI-01 validation, so checking against it is a good test. As a certbot user, I also needed to validate that I was running at least version 0.28 of the application, which is simple enough to check via:

certbot --version

That appears to be the latest version offered by the ppa:certbot/certbot PPA.

Testing a certbot run against the staging environment is toggled via the --dry-run switch. If you do a dry run of your renewal against the staging environment and everything comes back successful, you should be in good shape:

sudo certbot renew --dry-run

My certs all validated successfully, so everything is ready to go for the change. I presume if there are any failures then the dry run will alert you to what needs to be fixed; I can’t say for sure since I was lucky enough to not see any of those. Full instructions from Let’s Encrypt are available on their site, though.

Happy encrypting!