GitHub Pages Hosting

As I mentioned in my post about Dropbox Passwords, I’m looking to cut down on the number of services that I pay for each month. One of the areas I’ve decided to cut back on is my domains; I’m letting a few domains that I never found much of a use for expire rather than having them automatically renew. Some had been renewing like this for years simply because I didn’t want to lose them, despite my never having any real use for them. With a decrease in my domains comes a decrease in websites, to the point where I started to wonder if I could get away with ditching my VPS. I had been using the same VPS for over 2 years, and it served me well. In a world with so many hosting options, though, it seemed like overkill just to run 2 static websites, each of which was only a single page.

One of my sites I placed on Firebase. I’m not a fan of using Google products, but I’ve used Firebase previously (moving my website to an existing, stale Firebase project will be the topic of another post), and the free Spark plan gives me more than enough for a simple site with 1 GB of storage and 10 GB of egress traffic each month.

I wanted to check out some different options for my other site, though. After recently reading one of Kev Quirk’s blog posts, I thought I would give Netlify a shot. Their free Starter plan seems great for a simple hobby site and includes CI (continuous integration) from a git repository. I signed up for an account but quickly disliked the fact that using my own domain meant moving its nameservers to Netlify. While this isn’t horrible, I really prefer to manage my DNS in a single place rather than scattering nameservers around to wherever my content happens to be hosted. Currently all of my personal domains have DNS hosted in the same place, and I’d like to keep it that way. As a result, I shelved the idea of Netlify and looked to GitHub Pages instead.

I had actually used GitHub Pages before, way back in the day when the service was brand new and I set up my first Jekyll-based blog. It wasn’t bad by any stretch, but a lot of it was clunky. I remember having to manually add some text files to the repository to configure my custom domain and to host content out of a folder named differently than what was expected. Likewise, there were no SSL options, so I ended up putting my GitHub Pages site behind CloudFlare in order to secure it. I figured this would be a good opportunity to see what, if anything, had changed. If I hated it, I wouldn’t be out anything and could continue to look at other options.

The initial setup is still the same as I remember: just create a public repository whose name is your GitHub username followed by .github.io.

I did this through the GitHub website in less than a minute. Next up, I ran git clone to initialize the repository on my laptop in the same directory where I keep all of my other GitHub repos. With my local environment ready, I just copied the handful of files that I had backed up from my VPS into the root directory of the repository; if I don’t take any other action, GitHub will host content from the root of the repo. Since this is a static, single-page site, I don’t need to worry about compiling it with static site generators like Jekyll or Hugo. I was able to commit the change adding the files, navigate to the site’s github.io URL, and see my site.
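
The whole flow amounts to just a few commands. This is a sketch, with username standing in for an actual GitHub account name and the file names as placeholders:

```shell
# Clone the freshly created Pages repository (username is a placeholder)
git clone https://github.com/username/username.github.io.git
cd username.github.io

# Copy the static site files into the repo root; GitHub Pages serves
# from the root of the default branch unless configured otherwise
cp ~/backup/index.html ~/backup/style.css .

# Commit and push; the site goes live at https://username.github.io
git add .
git commit -m "Add static site content"
git push
```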

With the content out of the way, I wanted to set up my custom domain. The GitHub side of the work can now be done through the Settings menu of the repository; it basically replaces the manual work that I previously had to do by adding files to my repository:

The top section allows me to change the branch and directory to host content from; in my case I could just leave the defaults. The Custom domain section allows me to type in my domain of choice. This just adds a file named CNAME to my repo containing the domain information. Then I just had to follow the directions for setting up a custom domain in my DNS host’s settings.
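
For reference, the CNAME file that the Settings page generates is nothing more than a single line containing the domain you typed in. For a hypothetical example.com, it would look like this:

```
www.example.com
```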

Note: The directions are a little wonky on this point, but to make GitHub redirect everything appropriately when using both an apex domain and a subdomain, you follow both sections of the instructions verbatim. For example, I wanted the apex domain to serve the site, but I also wanted the www subdomain to still redirect to it. I configured the apex domain via the instructions above, creating 4 A records pointing to different IP addresses. Then I configured a CNAME record for the www subdomain pointing not to the apex domain, but instead to my username.github.io address. If you do it this way, GitHub will work it all out under the hood.
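
As a sketch, with example.com standing in for the real domain, the resulting records look roughly like this; the four A record IPs are the GitHub Pages addresses documented in their custom domain instructions:

```
; Apex domain: four A records pointing at GitHub Pages
example.com.      A
example.com.      A
example.com.      A
example.com.      A

; Subdomain: CNAME to the username.github.io address, not to the apex
www.example.com.  CNAME  username.github.io.
```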

Immediately after setting up my DNS records, the option for Enforce HTTPS was not available, telling me that the site was not configured properly. I rightly assumed this just meant DNS needed time to propagate. I checked back 15 minutes later (the TTL of my DNS records), and it presented me with a new message that the certificate wasn’t finished being created yet. I once again rightly assumed that they were spinning up these certificates through Let’s Encrypt, so I browsed Hacker News for a few minutes until refreshing my repository’s settings showed that the option to force HTTPS was now available. I simply checked the box, waited a few minutes, and then verified that explicitly going to the http:// version of the site would redirect me successfully to the https:// version. If this doesn’t work for you, chances are that you just didn’t give it enough time; while the tooltip in the GitHub UI says it can take up to 24 hours, it took about 5 minutes for my site.
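
If you’d rather check on progress than just wait, a couple of quick commands show whether DNS has propagated and whether the HTTP-to-HTTPS redirect is active; example.com is a placeholder here:

```shell
# Check that the A records have propagated
dig +short example.com A

# Check the redirect; a 301 pointing at the https:// URL means
# Enforce HTTPS has taken effect
curl -sI http://example.com | head -n 5
```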

The last thing to check was that the CI was working so that changes to the repo would be reflected on the site. A few things had changed since I took the backup of my site, which gave me some tweaks to test with. For one, I restarted this blog, and I deleted my Twitter account since Twitter is a cesspool (that might be a good topic for another post…), so I wanted to swap the Twitter link on my site with one for this blog. I first did a git pull to get local copies of the files, like CNAME, that had been created in the cloud, and then I quickly updated my HTML to share a link with the Font Awesome RSS feed icon as the content. After committing and pushing the change, I refreshed the site to confirm it had also been updated.

On the whole, there’s really nothing for me to complain about with GitHub Pages. It’s free, I can use the same GitHub account I’m already using every day, I can use a custom domain without moving my DNS, and I get a Let’s Encrypt certificate out of the box. Obviously, though, my use case is very simple, and your mileage may vary. With options like this, I feel even better about my decision to stop running my own VPS just to host a couple of small, low-traffic websites.

It’s Always DNS

There’s a saying among system administrators:

It’s always DNS.

The meaning: whenever there’s an issue, DNS is likely the culprit. This morning, that adage proved itself yet again.

My home network is currently running off of a Cradlepoint router. Cradlepoint’s specialty is making routers that can leverage LTE, so my router is configured to use my home ISP as the primary WAN link, but it will fail over to a cellular connection if my home ISP is unavailable. This is pretty handy, especially considering that I now work from home full-time. That being said, mobile data isn’t cheap here, and the data plan the Cradlepoint is using is paid for by my company. While it’s nice to fail over to LTE while I’m trying to work, I don’t want to be eating through LTE data while I’m just sitting on the couch watching Hulu. As a result, I’ve configured alerting from the router’s cloud management platform to notify me when a failover occurs so that I can troubleshoot the network and tailor my online activity accordingly if I’m going to be on LTE for a while.

This morning was basically the worst kind of start to a weekend morning, when all I want is to hang out with a cup of coffee and catch up on my RSS feeds. I woke up to an email alert from a few hours prior letting me know that my router had failed over to LTE. It happened once around 6 AM for a few minutes, failed back over to my ISP network, maintained that for roughly 40 minutes, and then failed over to LTE again a little before 7 AM. The first step, which I could easily do from bed with my phone, was to check for any outages from my ISP. Logging into my account there showed me that there weren’t any known outages, though.

Finally being forced to shuffle out of bed and into the living room to get eyes on the situation, I saw that the lights on the modem looked normal. I logged into the router’s management interface and verified that everything looked correct. I rebooted the modem to be safe, and the Cradlepoint immediately reconnected to LTE rather than using my modem’s connection. I bounced the Cradlepoint, and the connection status persisted. I disabled LTE on the router, and it listed the Ethernet port as the current WAN link, which seemed good. I tried loading a website, though, and it wouldn’t come up. I then tried to ping one of the OpenDNS servers and also got no response. This was a critical mistake, though I didn’t know it yet at the time.

Thinking now that maybe something was up with my Cradlepoint, I pulled a bin of miscellaneous tech stuff out of the closet and fished through it to find the router from my ISP that I never use. I plugged that in line after my modem, removing the Cradlepoint from the equation, and bounced the modem. The ISP-provided router came online right away with the characteristic blue light that indicates everything is fine. I connected my laptop to its WiFi network and tried to load a webpage… with no success. I once again tried to ping the OpenDNS server, also without any response.

This was when I finally realized the flaw in my troubleshooting. Both the Cradlepoint and my ISP-provided router had been configured by me to hand out the OpenDNS servers with DHCP leases. Literally all of my devices are using the two OpenDNS anycast addresses as their DNS servers. Likewise, the Cradlepoint needs something it can test to determine if a WAN link is up or down so that it can fail over to LTE and fail back to the Ethernet WAN link. I had that set to one of the OpenDNS addresses as well. So what if that was the problem? While still connected to my ISP router’s WiFi network, I tried to ping a different public IP address and immediately got a response. OpenDNS is what was unreachable.
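
The distinction I missed boils down to a simple decision table: probe a raw public IP that is not your resolver, and separately probe the resolver itself. Here’s a hypothetical sketch in Python; the function and its inputs are purely illustrative, not anything the router actually runs:

```python
def diagnose(raw_ip_reachable: bool, resolver_reachable: bool) -> str:
    """Classify an outage by probing two independent targets:
    a raw public IP that is NOT the DNS resolver, and the resolver itself."""
    if not raw_ip_reachable:
        return "link down"   # the WAN link itself is the problem
    if not resolver_reachable:
        return "dns down"    # the link is fine; only the resolver is out
    return "all good"

# My mistake: pinging only the OpenDNS address conflates these two cases.
print(diagnose(raw_ip_reachable=True, resolver_reachable=False))   # dns down
print(diagnose(raw_ip_reachable=False, resolver_reachable=False))  # link down
```

Had I probed both targets up front, the very first ping test would have pointed straight at OpenDNS rather than at my own equipment.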

Ripping the ISP router out of the network, I linked the Cradlepoint back up. I reconfigured it to hand out a different set of public DNS servers, and to leverage one of those for checking the state of the WAN link. As soon as I did that, everything began working, and the Cradlepoint failed back to the Ethernet WAN link on the next check. I should probably rethink this setup, where I’m using the same IP address for DNS as for the WAN state check, but I should also remember that it’s always DNS and check it a little earlier in the process.