Connecting An Existing Firebase Hosting Project To A New Site

As a follow-up to my last post on GitHub Pages, I mentioned that I moved one of my websites to Firebase. Firebase is a platform from Google for creating web and mobile applications. As a PaaS offering, there are a lot of different parts to the service, but as a platform for web applications hosting is naturally one of them. The free Spark plan offers 10 GB of storage, 360 MB of data transfer per day (which works out to 10 GB of bandwidth per month), and support for custom domains and SSL. That’s more than enough for me to host a simple, single page website that’s only made up of static HTML, CSS, and a single image. If anyone is curious, my site is using just 1.8 MB of storage and 15 MB of bandwidth. Note that bandwidth used divided by storage used will not be indicative of total hits due to caching, compression, etc.

I’ve used Firebase before, so I already had my Google account linked up to Firebase, and I even had a project still technically “live” there, though the domain had long since been shifted somewhere else. To be honest, it had been so long since I used Firebase that I almost forgot about it until I just happened to start receiving some well-timed emails from the service informing me that I needed to re-verify ownership of the domain I was using for my defunct project. I had no interest in re-verifying anything, but I did want to start hosting something new there.

The first step for hosting new content was to log in to the Firebase Console. Since I had already used the service, this gave me tiles of my existing projects; in my scenario, I just had a single project for my hosting. I clicked on that tile and was taken to a Project Overview screen, which gave me a high-level look at my project. To get to the hosting-specific functionality, though, I just had to click the Hosting option under the Develop menu on the left.

On the hosting dashboard, the first item listed contains all of the domains associated with the project. Clicking the 3 dots … next to a domain allowed me to delete it; I removed the two entries (apex domain and www) for the domain I used previously. Then I clicked the button for Add a custom domain. I followed the instructions on the screen to add a custom domain; I won’t document the steps here since they’re directly covered through the Firebase custom domain documentation.

With everything configured on the Firebase side, I next needed to crack into the Firebase CLI to link up my local project. I opted to install the standalone CLI, though you can still get it through npm if you prefer to roll that way. The first thing I had to do was link the CLI to my Firebase account. This is different based on whether you’re going to be using the CLI from a system with a GUI or if you’re doing it from a headless system you’re accessing via SSH. I was using it from a headless system where I cannot pop a browser to follow the normal authentication process; as a result I ran:

firebase login --no-localhost

If you’re running this from a system with a GUI, I believe you can just omit the --no-localhost parameter. In the headless setup, though, this gives a Firebase URL to navigate to on another system. I copied it out of my terminal and pasted it into the browser on my laptop, which gave me an authentication code for the CLI. I copied that from my browser, pasted it into my terminal, and that linked the CLI to my account on the Firebase platform.
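For reference, the two invocations look like this; both are standard Firebase CLI commands, and which one you use depends only on whether a browser can be opened locally:

```shell
# On a machine where a browser can be popped open locally:
firebase login

# On a headless machine reached over SSH, where the auth code
# has to be ferried through a browser on another device:
firebase login --no-localhost
```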

Since I was just moving my content from my old VPS to Firebase, I didn’t have to worry about actually creating a website; I already had one that was backed up in a tarball. I simply had to expand my tarball on the same system where I was using the Firebase CLI. I did this by creating a new directory for the project, expanding my tarball that had all of my site’s content, and then copying that content to the project directory:

mkdir ~/laifu
tar -zxvf ~/temp/laifu.tar.gz -C ~/temp
cp -r ~/temp/html ~/laifu

Note: If you look closely at the commands above, you’ll see that after I expand the tarball I’m recursively copying not the entire directory but just the html folder from it. This is because my tarball is of the entire /var/www/ directory that Nginx was previously hosting on my VPS, and the html directory is what contains the content of the site. If your backup stores the content directly (i.e. it’s not in a subfolder), that’s fine. However, you’ll want to make a new folder inside of your project directory to copy the content to, because you do not want the content of the site to sit in the root of the Firebase project’s directory. For example, your mkdir command would look something like: mkdir ~/myproject/html
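As an illustration of that layout, here is a sketch of the "content not in a subfolder" case; every path is a placeholder, and the first few lines just fabricate a stand-in tarball so the example is self-contained:

```shell
# Demo only: fabricate a tiny "backup" tarball of the kind described
# above (in reality this file would already exist, pulled from the
# old server). All paths here are placeholders.
mkdir -p /tmp/demo-src
echo '<h1>hello</h1>' > /tmp/demo-src/index.html
tar -czf /tmp/site.tar.gz -C /tmp/demo-src .

# The actual pattern: a project directory with an html/ subfolder,
# and the backup unpacked straight into that subfolder rather than
# into the project root.
mkdir -p /tmp/myproject/html
tar -xzf /tmp/site.tar.gz -C /tmp/myproject/html
```

The only important part is the last two lines: the site content lands one level below the project root, so the project root stays free for Firebase's own files.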

Once I had the files situated accordingly, I needed to tell Firebase that my directory was a Firebase project. Similar to using git, I did this by navigating to my project directory and running:

firebase init

This gets the ball rolling by asking some questions interactively through the CLI. One question will ask what service the project should be connected to; be sure to pick “Hosting.” After that there should be a prompt for which existing hosting project you’d like to use. The existing project should be listed as an option to be selected. If it’s not there, you can cancel out of the process and verify that your authentication worked correctly by running the following and checking that you see the project. If it’s missing, you may need to redo the authentication (e.g. maybe you were signed in to the wrong Google account when pasting into your browser).

firebase projects:list

After selecting the project, the CLI will ask what to use as the “public directory.” This is essentially asking what directory inside of the project directory contains the web content to be hosted. In my case I picked html since that’s what I named the folder.
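Answering these prompts causes the CLI to write a firebase.json file in the project root. With html as the public directory, it should contain something along these lines (the generated file typically also includes the default ignore patterns shown here, though the exact contents depend on your answers):

```json
{
  "hosting": {
    "public": "html",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}
```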

Be wary of the next couple of prompts, which will trigger regardless of whether or not there’s something in your public directory matching them. When prompted about your 404.html page, opt not to overwrite it unless you really hate your existing one. When prompted about index.html, definitely don’t overwrite it or you’ll lose the first page of your site.

Once that’s all done, you should get a message:

“Firebase initialization complete!”

This means that the directory has been initialized successfully as a Firebase project, but the local content still hasn’t been pushed to the cloud. So the last step is to run the following:

firebase deploy

This will give a “Deploy complete!” message along with a Firebase-specific URL in the format of:

https://PROJECT-ID.web.app

Copying this URL and pasting it into a browser should allow you to verify that the content you expect is now being hosted, even if you’re still waiting for DNS TTLs to expire before you can navigate to the custom domain. The Hosting dashboard of the Firebase console will also show the update in the “Release History” section.

GitHub Pages Hosting

As I had mentioned in my post about Dropbox Passwords, I’m looking to cut down on the number of services that I pay for each month. One of the areas I’ve decided to cut down on is my domains; I’m letting a few domains that I never found much of a use for expire rather than having them renew automatically. Some have been renewing like this for years just because I didn’t want to lose them, despite never having any real use for them. With a decrease in my domains comes a decrease in websites, to the point where I started to wonder if I could get away with ditching my VPS. I had been using the same VPS for over 2 years, and it served me well. In a world with so many hosting options, though, it seemed like overkill just to run 2 static websites, each of which was only a single page.

One of my sites I placed on Firebase. I’m not a fan of using Google products, but I’ve used Firebase previously (moving my website to an existing, stale Firebase project will be the topic of another post), and the free Spark plan gives me more than enough for a simple site with 1 GB of storage and 10 GB of egress traffic each month.

I wanted to check out some different options first, though. After recently reading one of Kev Quirk’s blog posts, I thought I would give Netlify a shot. Their free Starter plan seems great for a simple hobby site and includes CI (continuous integration) from a git repository. I signed up for an account but quickly disliked the fact that leveraging my own domain meant I needed to move its nameservers to Netlify. While this isn’t horrible, I really prefer to keep managing my DNS in a single place as opposed to scattering nameservers around to wherever my content is hosted. Currently all of my personal domains have DNS hosted in the same place, and I’d like to keep it that way. As a result, I shelved the idea of Netlify and looked to GitHub Pages instead.

I had actually used GitHub Pages before, way back in the day when it was brand new and I set up my first Jekyll-based blog. It wasn’t bad by any stretch, but a lot of it was clunky. I remember having to manually add some text files to the repository to configure my custom domain and to host content out of a folder named differently than what was expected. Likewise, there were no SSL options, so I ended up putting my GitHub Pages site behind Cloudflare in order to secure it. I figured this would be a good opportunity to see what, if anything, had changed. If I hated it, I wouldn’t be out anything and could continue to look at other options.

The initial setup is still the same as I remember: just create a public repository named username.github.io, substituting your own GitHub username.

I did this through the GitHub website in less than a minute. Next up, I ran git clone to initialize the repository on my local laptop in the same directory where I keep all of my other GitHub repos. With my local environment ready, I just copied the handful of files that I had backed up from my VPS into the root directory of the repository; without any other action, GitHub will host content from the root of the repo. Since this is a static, single-page site, I don’t need to worry about compiling it with a static site generator like Jekyll or Hugo. I was able to commit the change adding the files, navigate to the repository’s github.io URL, and see my site.
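The whole flow amounts to just a few commands; username here is a placeholder for your GitHub account name, and the backup path is hypothetical:

```shell
# Clone the freshly created Pages repository
git clone https://github.com/username/username.github.io.git
cd username.github.io

# Drop the static site files into the repo root and publish them
cp -r ~/backup/site/* .
git add -A
git commit -m "Add site content"
git push
```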

With the content out of the way, I wanted to set up my custom domain. The GitHub side of the work can now be done through the Settings menu of the repository; it basically replaces the manual work that I previously had to do by adding files to my repository:

The top allows me to change the branch and directory to host content from; in my case I could just leave the defaults. The Custom domain section allows me to type in my domain of choice. This just adds a file named CNAME to my repo containing the domain information. Then I just had to follow the directions for setting up a custom domain in my DNS host’s settings.

Note: It’s a little wonky from the directions, but to make GitHub redirect everything appropriately when using both an apex domain and a www subdomain, you follow both sections of the instructions verbatim. For example, I wanted the apex domain to be the canonical address, but I also wanted the www subdomain to still redirect to the site. I configured the apex domain via the instructions above, creating 4 A records pointing to different IP addresses. Then I configured a CNAME record for the www subdomain pointing not to the apex domain, but instead to my github.io address. If you do it this way, GitHub will work it all out under the hood.
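Concretely, the records end up looking something like this; example.com and username are placeholders, and the four A-record IPs are the ones GitHub’s custom-domain documentation lists for Pages:

```
example.com.        A      185.199.108.153
example.com.        A      185.199.109.153
example.com.        A      185.199.110.153
example.com.        A      185.199.111.153
www.example.com.    CNAME  username.github.io.
```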

Immediately after setting up my DNS records, the option for Enforce HTTPS was not available, telling me that the site was not configured properly. I rightly assumed this just meant DNS needed time to propagate. I checked back 15 minutes later (which is the TTL of my DNS records), and it presented me with a new message that the certificate wasn’t finished being created yet. I once again rightly assumed that they were spinning up these certificates through Let’s Encrypt, so I browsed Hacker News for a few minutes until refreshing my repository’s settings showed that the option to force HTTPS was now available. I simply checked the box, waited a few minutes, and then verified that going explicitly to the http:// version of the site would redirect me successfully to https://. If this doesn’t work for you, chances are that you just didn’t give it enough time. While the tooltip in the GitHub UI says it can take up to 24 hours, it took about 5 minutes for my site.

The last thing to check was that the CI was working so that changes to the repo would be reflected on the site. A few things had changed since I took the backup of my site, meaning there were some tweaks I needed to make that I could use as a test. For one, I restarted this blog, and I deleted my Twitter account since Twitter is a cesspool (that might be a good topic for another post…), so I wanted to swap the Twitter link on my site with one for this blog. I first did a git pull to get local copies of things like the CNAME file that had been made in the cloud, and then I quickly updated my HTML to share a link with the Font Awesome RSS feed icon as the content. After committing and pushing the change, I refreshed the site to confirm it had also been updated.

On the whole, there’s really nothing for me to complain about with GitHub Pages. It’s free, I can use the same GitHub account I’m already in every day, I can use a custom domain without moving my DNS, and I get a Let’s Encrypt certificate out of the box. Obviously, though, my use case for it is very simple, and your mileage may vary. With options like this, though, I feel even better about my idea to stop running my own VPS just to host a couple of small, low-traffic websites.

Salvaging Images From Squarespace

I wrote previously about moving this blog from Squarespace to WordPress. One of my cited concerns with Squarespace was being locked into that particular platform without a lot of options for moving somewhere else. So how did I move my content to WordPress? I was able to export the written content for the posts themselves from within Squarespace, fortunately. Inside of Settings > Advanced is an Import / Export option. The only export offering is WordPress, so I guess it was lucky that’s where I was moving. This gives an XML file with the written content and metadata for each post. Unfortunately, there is no option to export the images that I’ve uploaded over the past year of creating content over at Squarespace; within the XML file the images show up as <div> tags with a link to the Squarespace CDN for the actual image. For example, this is what I see where the image is for the last post I authored over on Squarespace:

<div style="padding-bottom:45.903255462646484%;"
     class=" image-block-wrapper has-aspect-ratio "
     data-animation-role="image" >
  <noscript><img src="" alt="html.png" /></noscript>
  <img class="thumb-image"
       data-src=""
       data-image=""
       data-image-dimensions="1013x465"
       data-image-focal-point="0.5,0.5"
       alt="html.png"
       data-load="false"
       data-image-id="5f175ce6cb20a366ea6f4d62"
       data-type="image" />
</div>

If you think that looks disgusting, that’s because it is. When I imported the XML file into WordPress, I saw an option to download any attachments on each post. I checked that box, but since the images are linked to the Squarespace CDN they’re considered to be HTML content rather than attachments. As a result, WordPress simply embeds the <div> in each post as a custom HTML block that doesn’t actually render the image.

Set on not going through 50 posts to manually save the images out of them, I started looking at the XML to see if I could do anything useful with the image URLs. One thing that immediately concerned me was that, when I wasn’t sure what I was going to do with the domain but knew that I didn’t want to keep it at Squarespace, I marked the Squarespace site as Private, meaning the only way to view the content was to log in. I assumed this meant the image content on the Squarespace CDN would be inaccessible until I made the site public again. After copying an image URL from my XML file, though, I saw that it was still publicly available. Flagging a Squarespace site as private means you can’t load the site directly, but content on Squarespace’s CDN is still accessible. That in itself seems like a problem to me and a very good reason to leave the platform, but in this one case it was working to my benefit. I realized that I could parse all of the image files out of the XML file with a script and download them programmatically.

As you can see from the XML snippet above, images on the Squarespace CDN have URLs like this:

There’s a whole lot of CDN nonsense, followed by a forward slash and the original file name at the very end. While this would be handy for getting the original file name, I didn’t want to end up with dozens of images in a folder where I had no idea what post they belonged to, and I definitely didn’t want to manually correlate the file name with the CDN link in each of the HTML blocks in WordPress.

The XML, though, also includes the title of each post, and I realized that if I was scanning each line of the XML for image tags, I could also check for the title tag and keep a variable constantly updated with it. With this idea, each file name would start with the post title so that images from the same post would be grouped together. Once the dots were connected, it was simple to come up with the following short PowerShell script:
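As a rough sketch of that approach (in Bash rather than PowerShell, with placeholder paths, and not the original script): walk the export line by line, remember the most recent <title>, and download every data-src image with that title as a file-name prefix.

```shell
# Sketch only -- a shell equivalent of the approach described above,
# not the author's actual PowerShell script.
salvage_images() {
  xml="$1"; outdir="$2"; title="untitled"
  mkdir -p "$outdir"
  while IFS= read -r line; do
    # Update the running post title whenever a <title> tag appears,
    # stripping characters that are painful in file names (e.g. "[")
    t=$(printf '%s' "$line" | sed -n 's|.*<title>\([^<]*\)</title>.*|\1|p')
    [ -n "$t" ] && title=$(printf '%s' "$t" | tr -d '[]\\/:*?"<>|')
    # Pull each data-src URL out of the line and fetch it
    printf '%s\n' "$line" \
      | grep -o 'data-src="[^"]*"' \
      | sed 's/^data-src="//; s/"$//' \
      | while IFS= read -r url; do
          file=${url##*/}   # the original name is the last path segment
          curl -s -o "$outdir/$title - $file" "$url"
        done
  done < "$xml"
}
```

Running something like `salvage_images export.xml images` would then drop everything into images/ with the files for each post grouped together by title.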

It downloads all of the images referenced in the XML file, prefixing each file name with the title of the post it belongs to.


I added some extra checks to remove unsavory characters from the file names; for example, while [ is a valid character in most modern filesystems, have you ever tried to programmatically do things from a Bash shell with a file that has one in its name? It’s not pretty.

While this saved me from having to manually download each image from Squarespace, I still had to manually go through each post in WordPress, remove the custom HTML block where each image should have been, and then upload the appropriate image. With the way I downloaded the images, though, I just started at the top of the directory and worked my way through the images alphabetically since each post was grouped together. It sucked, but it could have been a lot worse. If nothing else it made me glad that I moved forward with migrating the site now rather than waiting a few more months for the Squarespace subscription to lapse; I didn’t want to deal with this for any more posts than was strictly necessary.

Dropbox Passwords

I tend to pay for a lot of subscription services. In fact, my friend Mark and I have enough of them between us that we needed not one but two episodes of our podcast just to talk about all of our subscriptions. Since the pandemic means I have nothing better to do with my time than sit around and think about things like how much money I spend on subscriptions, though, I’ve been considering which ones I might be able to do without, which ones I could swap for cheaper services, etc., to save myself a little bit of money each year. It often feels trivial to tack on yet another thing that costs $5 – $10 a month, but over the course of the year it adds up.

Enter Dropbox Passwords, a password manager built into Dropbox. In the past I’ve used Dropbox to sync passwords in conjunction with KeePassX, so having the same functionality built directly into the platform seemed nice. The fact that it’s a feature included with my Dropbox Plus plan and would save me from paying $80 a year for my current password manager is also a nice bonus. First I just had to put it through its paces.


Migrating to a new password manager is typically a fairly painless process. Every password manager I’ve ever used has given me the option to export my passwords to a variety of plaintext file formats. Naturally, having a plaintext file with all of your credentials is a terrible idea, but unless the machine you’re operating on is a digital cesspool it should be fine for the few minutes it takes to import the file somewhere else.

In my case, I exported a CSV and imported it into Dropbox Passwords. I initially got a message that there weren’t any accounts to import. I opened the CSV file, saw that some of the columns had weird headings, and assumed Dropbox didn’t know which fields in the CSV mapped to which fields within Passwords. Their help documentation covers what’s needed:

The columns in your CSV file must be labeled so Dropbox Passwords knows how to import the information. Although Dropbox Passwords can recognize a range of labels, we recommend labeling them “Name”, “Password”, “Username”, “Notes”, and “URL”.

I updated the column headings to match the documentation above, and everything was fine.
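Fixing the header row is a one-liner if your exported columns already happen to be in a consistent order; this sketch fabricates a hypothetical export (the file path, headings, and column order are all assumptions — check them against your own export before renaming anything):

```shell
# Hypothetical export with nonstandard column headings
cat > /tmp/passwords.csv <<'EOF'
title,pass,login,extra,website
Example,hunter2,user@example.com,,https://example.com
EOF

# Rewrite only the header row with the labels Dropbox Passwords
# recommends; the data rows are left untouched.
sed -i '1s/.*/Name,Password,Username,Notes,URL/' /tmp/passwords.csv
```

Remember to delete the plaintext file once the import succeeds.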

Desktop Client

The desktop client for Dropbox Passwords is spartan to say the least. You get fields for:

  • Site Name
  • Username
  • Password
  • URL
  • Notes

That’s it. In other password managers, I frequently leverage either additional passwords or custom fields to add things like app passwords, API keys, etc. While I could store those in the free-form Notes field in Dropbox Passwords, the values aren’t masked out like they would be in other services with dedicated fields for this sort of thing.

After the initial setup where I logged in with my Dropbox credentials, the app gave me a “word list.” This PDF just had 12 random English words on it. It serves as an extra security mechanism that I’ll touch on in the next section.

After the app was set up, it asked me to create a 6-digit PIN. That PIN is used to unlock the app if it times out due to inactivity. It’s worth noting that the browser extensions will not autofill login information if the application is currently locked; more on that later as well.

Mobile Apps

There isn’t too much to say about the mobile apps; they’re basically exactly what you would expect. It is worth mentioning that, at the time of this writing, there’s no iPad version of the app, meaning I’m stuck looking at the blown-up iOS app. It’s not a huge ordeal, though, because aside from logging in initially I almost never open the app itself. Like every other password manager, iOS can be configured to automatically get passwords out of it without requiring an app switch. It also integrates with Face ID and Touch ID on iOS for quick unlocking.

Multi Factor Authentication

Dropbox Passwords automatically implements a sort of MFA. When I logged in to the app on my phone, for example, it gave me a prompt on the desktop client. I had to accept the prompt there to confirm that I was, in fact, trying to configure the app on a phone. Likewise, when I configured the app on my iPad, I received a prompt on both my laptop and my phone.

This is where you might wonder what happens if I don’t have any of those other devices handy. In that case, I can use the word list to log in. I actually ended up doing this one time, and it worked without a hitch. What happens if I also lose the word list? Let’s hope I never find out. It’s nice to know, though, that despite the fact that the content is tied to a Dropbox account, Dropbox account credentials alone aren’t enough to access it.

Browser Extensions

You might wonder why I talked about desktop and mobile clients, switched gears to authentication, and then came back to a “client.” The reason is that the browser extensions are literally just a wrapper that provides integration with the desktop app for things like autofilling credentials. For example, clicking on the Dropbox Passwords extension icon in Safari on macOS doesn’t even open a UI for the extension… it pops open the full Dropbox Passwords client. I see this frequently when nothing autofills in my browser: I click on the icon for the browser extension, and it opens the full app, where I see it requesting my PIN to unlock it.

The reason the wrapper browser extensions are noteworthy is that there are no standalone extensions or even direct web access. If Dropbox Passwords doesn’t have a client on your platform of choice, you’re simply out of luck. For example, I can’t access my passwords when using Manjaro Linux on my Pinebook Pro. I verified this by installing the browser extension; clicking on it brings me to a lovely message that the application isn’t available for my platform.

Where this seems really insane to me is that if I log into my Dropbox account on the web I can see the vault for Dropbox Passwords! But clicking on it gives me the same screen as shown in the image above.

I can’t actually do anything to access it. Even just some kind of web portal like I can access with Bitwarden, LastPass, or 1Password would be better than nothing. I can definitely understand not making a native Linux app a priority, but not having a browser extension or web access in 2020 blows my mind more than a little.

I really hope this is something the Dropbox Passwords team is actively working on. While the overall service isn’t quite as slick or polished as some of its competitors, the fact that it comes included with paid Dropbox Plans is a huge boon; people like myself will have to think twice about paying extra money for a service they already have included with their existing Dropbox subscription. There are some hurdles to overcome for Dropbox Passwords to reach parity with its competitors, but for many people it’ll be good enough as-is.

Unusually Pink Migration

So Long, Squarespace!

If anyone stumbles across this site who was previously an Unusually Pink reader, then you might notice that the site looks a bit different after a few months of hiatus. In its short lifespan of just under 2 years, the site has now moved to its 3rd host. Originally it was hosted on a Vultr VPS that I had been hosting a few other things on, back when I originally bought the domain because I loved the name but had no idea what to do with it. Then Brandi, my former co-host, and I decided to start a podcast; it quickly became apparent that my web development skills weren’t exactly up to par with what we wanted to accomplish. As a result, we moved the site over to Squarespace.

Our podcast lived just long enough for the Squarespace hosting to renew before Brandi and I both decided that things had run their course. It was unfortunate that I had just forked over another year’s worth of money to Squarespace for hosting before reaching that decision. With that being said, you might be wondering why on Earth I’d be re-hosting the site somewhere else if I still have time left on the Squarespace subscription; more on that will come a little later on. With this being my first time using Squarespace, though, I thought I would first share some thoughts after running a site there for a year.

The Good

When I initially decided to move the site from my VPS to Squarespace, it was mainly because I knew I needed hosting somewhere, and it seemed like a good chance to mess around with something new. I had run numerous blogs on a free account along with compiling many of my own blogs with Hugo as I tend to discuss frequently. With us wanting to have a presence online that made us look like we knew what we were doing, though, I figured this was a worthwhile opportunity to justify spending the money on hosting with Squarespace.

Squarespace offers, hands down, the nicest management interface I’ve ever seen. Everything is very slick and inviting, without being overly cluttered and complicated. It’s simple to add new pages to your site or even branches to your site. For example, I originally migrated the blog I had been running under the Unusually Pink domain to Squarespace, but I quickly realized that the best way to handle the show notes for each podcast episode would also be basically a blog. It was trivial to literally add another blog to the site; I just had to tell Squarespace what directory I wanted to host that under and which of the two would be the “main” page of the site. The two were then independent of one another.

Squarespace doesn’t offer nearly as many themes as you’ll find with something like WordPress, but all of the Squarespace themes are highly customizable without having to wander into the realm of HTML and CSS. For example, for any theme I can change literally every color by simply using the menus presented to me. On the flip side, the WordPress theme you see right now only offered a handful of elements for color modification. Even worse, this theme offered more options than many of the others I looked at, where changing anything beyond the text color would’ve involved modifying the CSS.

Finally, Squarespace gives you an absurd amount of information about the traffic to your site, all without the need for any type of plugins. You can simply link up Google credentials to integrate with Google Analytics, for example, and see what people are searching for to reach your site, what position you’re in for the search results, the click percentage, how many impressions you get, etc. It also offers a very slick, interactive map if you want to drill down to the specifics of where your hits stem from.

The Bad

The main purpose for the previous site on Squarespace was blogging. Case in point, there were two blogs hosted on it; one for my own random posts and one for the show notes that went along with each podcast episode. Easily the single biggest nail in the Squarespace coffin is that the service is in no way designed for blogging. That might seem contradictory considering I just said that I hosted not one but two blogs on a single site there, but allow me to elaborate.

Adding a blog to Squarespace just means that when you go to edit the site, you have two different streams of posts you can choose from. You pick the blog, say you want to make a new post, and start to edit the content. This is where things immediately get murky. The editor for authoring content in Squarespace is pretty bad. It tries to break the content of each post down into blocks the way the current WordPress editor does, but it does so in an extremely clunky, unintuitive way. Simple things like handling the appearance of media you upload are often not possible, meaning that I had to resize every photo prior to uploading since I knew there would be no good options for scaling it after the fact. Likewise, trying to embed any sort of content was frequently gated behind a paywall; I couldn’t embed the player for each episode into the post with the show notes because they wanted me to pay more for that privilege. I couldn’t embed tweets but had to just link to them. That may not have been a big deal were it not for the fact that the Squarespace plan I was on was already more than double what I’m paying for hosting now.

As another blow to blogging, Squarespace doesn’t provide any real outlet for managing the posts on the site. While in the management interface, for example, going to one of the two blogs I had added would simply show me a list of posts on the left in chronological order. If the post I needed to modify was at the very bottom of the list because it was old, then I had to just keep scrolling until I got to it, letting the clusters of posts incrementally load the further I scrolled. There weren’t any options to just search for the post I wanted. This may have been a limitation of the theme I selected, but I was equally disappointed that I couldn’t search the blog itself for specific content, either. I frequently author blog posts that I know will help me in the future; they live on a blog as opposed to just in my personal notes because they might also be beneficial to someone else. If I can’t easily get back to that content, though, without mindlessly clicking a “Next” button, that’s a problem. This WordPress blog offers both a search box and sane pagination; neither was an option for my Squarespace deployment. I’d frequently have to search the web for what I wanted, scoped to the URL of my own site, just to reach it. That’s a problem.

The last thing I’ll mention is portability. Admittedly, WordPress might be just as bad at this, but it’s extremely difficult to take content from Squarespace and move it somewhere else. This was the big reason why I didn’t want to continue creating content on Squarespace even though I’ve already paid for the hosting there; I knew that I didn’t want to stick with Squarespace once the current hosting expired, but anything new I posted there would just be more work to move to somewhere else later on. Squarespace offers you the ability to export your content, but it’s to an XML file. While this will get the written content for each post and the metadata about it, it will not include any media. I managed to throw together a bit of a workaround that’ll most likely be the topic of my next post, but it was still a large amount of work to move everything from one host to another.

An obvious question at this point would be:

But aren’t you just in the same position regarding portability after moving to WordPress?

The answer is… maybe. As long as I don’t become disenchanted with the platform as a whole, there are many different WordPress hosting platforms out there. If I want to move from one to another, I can easily export my site or take a backup of it and move the content somewhere else. I had initially tried moving a lot of the content from Squarespace to a Hugo site I already ran, but I very quickly ran into many of the same issues I described with Squarespace regarding management and discoverability; while being lightweight is nice, sometimes having a CMS is beneficial.


Despite the vibe you may get, I don’t dislike Squarespace at all. I feel like their business is really tailored to users who want a professional, mostly static website but who don’t have the skills to create that themselves. For a hobbyist like myself with a focus on blogging, the premium you pay for Squarespace gets you essentially nothing. Any WordPress instance is going to be a better blogging platform, and one that is significantly cheaper at that. On the other hand, if you need to have firm divisions in your site (e.g. a blog for the sake of shitposting and a blog for podcast show notes), you can’t easily do that within WordPress. While you can create multiple pages, such as the About page here, you can’t set up an entirely separate blog.

At least for the moment, what I did with Squarespace for both a blog and podcast repository wouldn’t be possible with WordPress. For a standalone blog, though, the experience is significantly better on WordPress. It’s important to understand what the goal of your site is and what you need out of your platform. When that goal changes, switching platforms might be the best move. Hopefully my next post on how I migrated my images between Squarespace and WordPress can help with that.

Note-Taking With Notable

The Others

I’ve struggled for years with finding a good, reliable, and simple note-taking application that fit my needs and didn’t lock me in to a particular platform. When I started my career, I was using Evernote for handling my notes at work. At the time, the free version of Evernote was pretty solid, which was good because I didn’t have the money to be spending on notes. After a few years, however, Evernote apparently decided that not enough people were paying for the premium version of the product; as a result, they crippled the free version. The free version had previously been limited in the amount of data you could sync in a month, and that alone seemed reasonable. They added to this by limiting the number of devices which could connect to an account. Since having my work laptop, personal laptop, and phone all connect was no longer an option, I decided to look for something else.

At the time, nothing else really stuck out to me. I was working in a very Microsoft-centric environment and was managing Office 365, at the time a fairly new service. I opted to use OneNote since it would integrate into Office 365. I almost immediately hated pretty much everything about OneNote, from the appearance, to how shitty the web app was at the time, to how poorly it would index and search my notes. However, I stuck with it for years because 1.) it was able to import my years of existing notes from Evernote and 2.) inertia made it easy to stay with a product (even if I strongly disliked it) because it meant I didn’t need to invest my time in anything else.

When I finally switched to a new job about a year ago, though, I decided it was time for a fresh start with my notes. I was working in a new role that meant my years of previous notes were no longer going to be nearly as important to me as they were. In the rare instance I needed one, I could easily pop open the (finally improved) OneNote web app and find it; I didn’t need to worry about importing those notes into another system for daily use. Since the job change also marked a switch from Windows to macOS for work, I originally started off using Apple Notes. I rather like Apple Notes in that it’s simple, fast, lightweight, and it syncs nicely between my MacBook, iPhone, and iPad. However, I quickly found that being locked into Apple’s ecosystem for my notes wasn’t what I wanted. For example, while there’s a web app for Apple Notes, it’s clunky and slow. This means accessing my notes from my personal laptop running Linux is a painful experience. Likewise, what if I stopped using Apple products in the future? It makes no sense to be locked into a particular hardware vendor when it comes to something as ubiquitous as note-taking software. While I still use Apple Notes occasionally for quick, personal notes that I’m only accessing from my phone, I didn’t want to continue using it as my primary note-taking application.

Since I was already an avid Dropbox user and had been for many years, I decided to give Dropbox Paper a try. I was initially drawn to it since it seemed like it was basically Markdown, the markup language I prefer to write things in. In fact, all of the posts for this blog are created in Markdown and compiled through Hugo. In reality, though, the syntax wasn’t exactly Markdown but a weird mix where some pieces of Markdown had been cherry-picked (e.g. bold, italics) while others were ignored (e.g. hyperlinks.) The files being created with a .paper extension also meant they weren’t Markdown files I could cleanly edit with anything else; I was locked into Dropbox. What if I wanted to change my cloud storage to something different? That could very well happen if ProtonDrive lives up to my expectations when it releases.

This is when I started to realize that what I really wanted was something that would allow me to easily work with Markdown but that would leave vanilla Markdown files on my system. These files could be synced through whatever means I wanted to use, be it Dropbox, ProtonDrive, iCloud, or anything else; I didn’t want to be dependent upon a particular sync mechanism. Likewise, I needed the files to be Markdown so that I wasn’t dependent upon a particular application, either. I’ve discussed before how I love having all of the posts on this site saved as Markdown because it means that I can (and have!) moved them quickly and easily between different websites. I wanted to have the same flexibility with my notes.


The Good

I did a quick search for note-taking applications that deal with Markdown, and one of the first results I got was Notable. Almost immediately it seemed to fit the bill. It was a simple, lightweight application that dealt with Markdown files. When a file is open in edit mode, I see all of the Markdown syntax I know and love. When I save a file, the Markdown is rendered for easy consumption. While I don’t get a live-preview like I do with Atom, I think this is a much more elegant setup for note-taking and reference.

It’s important to note, pun not intended, that the name for “Notable” gives away the fact that it is focused on notes in particular. When I was discussing my attempts to find a good Markdown editor for my notes, a friend of mine shared with me an episode of the Mac Power Users podcast focused on Markdown. While they list a lot of options (with an obvious focus on software for Apple products), many were not note-specific; some were just Markdown editors. For example, Byword looks cool but seems to be much more focused on a minimalist writing experience than on a note-taking experience. While I could use something like that and simply search through my notes with grep from the CLI, if I wanted to do that I would just use Vim or Atom as my editor and be done with it. I was really looking for something that would allow me to easily categorize and search my notes. Notable does this through tags which can be applied to each of my notes. Tags are used as an organization method; with them it’s easy to then do a text search across the content of either all of my notes or on just the notes with particular tags applied.

All of the notes created in Notable are .md files that live in a directory I choose. At the moment, that directory is inside of my Dropbox folder. This is especially cool for a couple of reasons. First off, Dropbox can render Markdown files. So if I just need to reference one of my notes from another device, I can simply go through Dropbox on the web, open the file, and reference all of my notes. I just have to know the name of the file since the tags are not readily accessible or searchable outside of Notable. All of that information is stored as metadata at the top of each .md file.
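To give an idea of what that looks like, here’s a sketch of one of these files; the exact field names are illustrative (from memory) and may not match what current versions of Notable write:

```markdown
---
title: Example Note
tags: [Networking, DNS]
created: '2020-08-01T12:00:00.000Z'
---

# Example Note

The note body itself is just plain Markdown, so any editor can open it.
```

Because the metadata is just text at the top of the file, other tools can ignore it or parse it as they see fit.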

The Bad

While using Notable has been working well for me after about a month, there are a couple of things that could be better. The immediate problem is that there isn’t any type of mobile app, and even if there were a mobile app I don’t know exactly how it would continue to sync since Dropbox isn’t keeping my files directly on my iPhone and iPad the way that it does on my MacBook. I think the design of Notable would need to be fundamentally changed, and suddenly integration with cloud storage would need to be done at the application level rather than the filesystem level. I don’t think that’s a good solution. Similarly, I also don’t really want to be authoring a bunch of Markdown content on my mobile devices, either. Most of the notes I’d be using on my phone are more personal (e.g. my grocery list) and those I continue to use Apple Notes for. In the instance I need to view some notes from Notable on my mobile device, that’s where opening them from Dropbox and rendering the Markdown works rather well.

Notable itself exists on a wide array of platforms. While it’s fairly simple to install on macOS or Debian-based Linux, I haven’t installed it yet on my Manjaro Linux laptop, where it would be available via the AUR. I didn’t see the point since I haven’t installed Dropbox on this machine, either, and that’s where all of the notes live. On this machine I mainly need to reference blog-specific notes; for those I’m typically once again opening the files from Dropbox on the web. When I want to edit a note, I can use Dillinger to edit the files directly in Dropbox from the cloud. In another life, I made heavy use of Dillinger for authoring blog posts for WordPress via Markdown; this was back when WordPress supported authoring content in Markdown but didn’t support it in their editor.

In very rare cases I’ve wanted to create a new note in Notable but didn’t have access to my MacBook. In that case, from Dropbox I can simply copy an existing note, manually update the metadata to apply the appropriate tags, and then make whatever notes I need. I’ve verified on a few occasions that this works without a hitch, though I suppose you could break something if you really farkle up the metadata.

Overall, the downsides I’ve enumerated here are more minor inconveniences than serious issues. I am curious how well the application will scale; right now I have a few dozen notes saved, and everything is snappy. If I ever reach the number of notes I had in OneNote, though, I wonder whether searching and swapping between notes will stay that fast. The good news is that I’m not really locked into Notable given that the files are just Markdown; if there are any problems in the future, it shouldn’t be too terribly difficult to switch to something else or just work with the files directly if I can’t find a better solution.

If you’re comfortable with Markdown and the idea of controlling your notes without being locked into a particular application for editing and syncing them is important to you, then I would highly recommend checking out Notable. I’m extremely pleased with it right now, and for the low cost of free there’s really no reason not to give it a shot. It’s worth mentioning that while Notable was originally open source, that’s no longer the case. While I’d personally prefer if it was open source, it’s not a dealbreaker for me.

It’s Always DNS

There’s a saying among system administrators:

It’s always DNS.

Meaning that whenever there’s an issue, DNS is likely the culprit. This morning that adage proved itself yet again.

My home network is currently running off of a Cradlepoint router. Cradlepoint’s specialty is making routers that can leverage LTE, so my router is configured to use my home ISP as the primary WAN link, but it will fail over to a cellular connection if my home ISP is unavailable. This is pretty handy, especially considering that I now work from home full-time. That being said, mobile data isn’t cheap here, and the data plan the Cradlepoint is using is paid for by my company. While it’s nice to fail over to LTE while I’m trying to work, I don’t want to be eating through LTE data while I’m just sitting on the couch watching Hulu. As a result, I’ve configured alerting from the router’s cloud management platform to notify me when a failover occurs so that I can troubleshoot the network and tailor my online activity accordingly if I’m going to be on LTE for a while.

This was basically the worst start to a weekend morning, when all I want is to hang out with a cup of coffee and catch up on my RSS feeds. I woke up to an email alert from a few hours prior letting me know that my router had failed over to LTE. It happened once around 6 AM for a few minutes, failed back over to my ISP network, and then maintained that for roughly 40 minutes before failing over to LTE again a little before 7 AM. The first step, which I could easily do from bed with my phone, was to check for any outages from my ISP. Logging into my account there showed me that there weren’t any known outages, though.

Finally being forced to shuffle out of bed and into the living room to get eyes on the situation, I saw that the lights on the modem looked normal. I logged into the router’s management interface and verified that everything looked correct. I rebooted the modem to be safe, and the Cradlepoint immediately reconnected to LTE rather than using my modem’s connection. I bounced the Cradlepoint, and the connection status persisted. I disabled LTE on the router, and it listed the Ethernet port as the current WAN link, which seemed good. I tried loading a website, though, and it wouldn’t come up. I tried to ping one of the OpenDNS servers and also got no response. This was a critical mistake, though I didn’t know it yet at the time.

Thinking now that maybe something was up with my Cradlepoint, I pulled a bin of miscellaneous tech stuff out of the closet and fished through it to find the router from my ISP that I never use. I plugged that in line after my modem, removing the Cradlepoint from the equation, and bounced the modem. The ISP-provided router came online right away with the characteristic blue light that indicates everything is fine. I connected my laptop to its WiFi network and tried to load a webpage… with no success. I once again tried pinging the OpenDNS server, also without any response.

This was when I finally realized the flaw in my troubleshooting. Both the Cradlepoint and my ISP-provided router had been configured by me to hand out the OpenDNS servers with their DHCP leases. Literally all of my devices are using OpenDNS for DNS resolution. Likewise, the Cradlepoint needs something it can test to determine if a WAN link is up or down so that it can fail over to LTE and fail back to the Ethernet WAN link. I had that set to one of the OpenDNS servers as well. So what if that was the problem? While still connected to my ISP router’s WiFi network, I tried to ping a different public IP and immediately got a response. OpenDNS is what was unreachable.

Ripping the ISP router out of the network, I linked the Cradlepoint back up. I reconfigured it to hand out different DNS servers and to leverage one of those for the state of the WAN link. As soon as I did that, everything began working, and the Cradlepoint failed back to the Ethernet WAN link on the next check. I should probably rethink this setup where I’m using the same IP address for DNS as I am for the state of the WAN, but I should also remember that it’s always DNS and check that a little earlier in the process.

Full Content RSS Feeds With Hugo

Last week I made a test post on Mastodon linking to one of my blogs via curl. I was doing this to see how to best handle the formatting for a script I was working on to periodically check my blog and link to any new posts from Mastodon. My friend tomasino, who I’ve known for quite a while through SDF, reached out to ask if there was an RSS feed for it. (Note that I now link to the RSS feed from the menu!) After he subscribed, he noticed that the RSS feed was only showing part of each post. It turns out that my theme, like many others, is using the default RSS template in Hugo. This only publishes a portion of each post to the RSS feed, the idea undoubtedly being that people will navigate to the site to finish reading the content, allowing whatever trackers are in place to see this. Since I don’t have any trackers on my site and don’t care about hits, this just serves to be a pain in the butt for anyone trying to use RSS; I personally hate having to move out of my own RSS reader in order to finish reading something.

With a lot of help from tomasino (who clearly knows significantly more about RSS than I do), I started trying to modify my RSS template to include the full content for each post. Since the Terminal theme is using the default template, I started by taking the base template content and placing that in /layouts/_default/ as index.xml so that I had something to modify. The first thing that I did was modify the <description> tag so that it contained .Content instead of .Summary. This did cause the full content to be displayed in the RSS feed but caused all sorts of HTML encoding problems. Next I tried modifying the <description> block again so that instead of:

.Content | html

It was:

.Content | safeHTML

This was… slightly better. It fixed the HTML encoding problems, but it also caused the paragraph tags to disappear, meaning each post was a wall of text. tomasino’s thought was that I needed a CDATA block, which I also saw mentioned in the Hugo support forum. The problem I quickly ran into was that the block, which I was now adding in addition to the <description> block, needed to look something like this:

<content:encoded><![CDATA[{{ .Content | safeHTML }}]]></content:encoded>

Adding that directly to the layout file caused the leading < to get HTML-encoded, thus breaking the entire thing. Back to hunting on DuckDuckGo, I found several people with the same issue. While a few people in those threads had offered some solutions for how to properly escape things, tomasino ultimately found the cleanest solution. After recompiling my site yet again, the encoding looked good, but the XML was still missing some metadata. Trying to open it in Firefox would give the following error:

XML Parsing Error: prefix not bound to a namespace

It’s worth noting, since I was missing this initially, that Firefox will not render the XML when you’re using the view-source: view. This makes complete sense, but I had overlooked it. You need to actually navigate to the file normally. What was going on here was that I needed to define the namespaces, which I did by just copying the same line from tomasino’s own XML file for a site of his:

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">

After this addition and yet another recompile of the site, everything finally started to appear correctly in RSS readers. Suffice it to say I would’ve preferred if I could simply toggle something in my config.toml file to switch my RSS feed from a summary to the full text, but at least it’s possible to modify this on your own if you have to.
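For anyone making the same change, the pieces discussed here come together in the item template roughly like this; the printf escape shown is a pattern commonly shared on the Hugo forums, not necessarily the exact solution tomasino found:

```xml
<item>
  <title>{{ .Title }}</title>
  <link>{{ .Permalink }}</link>
  <guid>{{ .Permalink }}</guid>
  <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" }}</pubDate>
  <description>{{ .Summary | html }}</description>
  <content:encoded>{{ printf "<![CDATA[%s]]>" .Content | safeHTML }}</content:encoded>
</item>
```

The printf builds the CDATA wrapper and the content as a single string before safeHTML marks it safe, which avoids the leading < getting encoded on its own.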

Python’s Beautiful Soup

In my last post, I more or less just complained about what a dumpster fire developing anything for Twitter is. Originally, though, the post was intended to be about what I was developing for Twitter. It’s nothing amazing or even complicated, but it was a fun learning experience… Twitter itself notwithstanding. I made a Twitter bot that tweets a different shade of pink each day. Since that’ll most likely seem nonsensical to most people, let me explain.


A little over a year ago, a friend and I started a podcast. I won’t go into the backstory of why we named it what we did, but the name of the podcast revolved around the color pink due to an inside joke between my friend and me. We ended up publishing 21 episodes in the span of a year before we decided to stop it. I had moved about an hour away from where I previously lived for a new job, so recording in-person involved a decent bit of travel for one of us. Then the coronavirus pandemic really started to take off in my country, and given what a dumpster fire trying to record a podcast remotely is, my friend and I jointly decided to shutter the podcast. It was a fun experience, but nothing either of us were really wanting to keep putting time and money into. As is typically the case, we reached this conclusion just a month after the hosting for both the podcast and our website renewed. Go figure.

That being said, we had set up social media for the podcast, and that social media was now doing exactly nothing. While I didn’t want to do anything with the Facebook or Instagram accounts that my co-host ran (you couldn’t pay me to touch a Facebook property), I thought about what I could do with the lingering Twitter account. I eventually decided to make a simple bot that would tweet a different shade of the color pink each day.

Python and Beautiful Soup

As I mentioned previously, the actual code to post to Twitter ended up being extremely simple. I just used the Twython library to do the heavy lifting. What ended up being more interesting was how to create the database of colors I would use. After all, I don’t personally know that many different shades of pink, and I wanted to include the RGB and hex color codes for each shade in the daily post. I basically needed a repository of shades of pink. After some DuckDuckGo-fu, I eventually found a page that included not just the RGB and hex color codes, but also a name. It was exactly what I needed.

The only problem was how to get the information from that page into something I could use in my script for the bot. My immediate thought was to copy and paste all of the information, but along with being error-prone over hundreds of shades, that’s also insanely tedious. In a shell script, something like xmllint would fit the bill. Since I was already working in Python, though, I decided to use Beautiful Soup. I had actually used Beautiful Soup one time before on a project years ago where I admittedly didn’t really know Python and most definitely didn’t understand what I was doing with Beautiful Soup; I just ended up copying and pasting a bunch of code from the Internet until things worked the way I wanted.

This time, I took just a little time to read the documentation for Beautiful Soup and understand what I was actually doing. The crux of my script comes down to:

divisions = soup.find_all("div", {"class": "color-inner"})

This gets me each of the div groupings for a color. With each of those groupings defined, it was then simple to get the name, hex, and RGB information I needed:

for division in divisions:
    color_name = division.find("span", {"class": "color-sub"}).get_text()
    color_hex = division.find("span", {"class": "color-id"}).get_text()
    color_rgb = division.find("span", {"class": "color-rgb"}).get_text()
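If it helps to see those calls run, here’s a self-contained sketch against a small inline sample; the markup below is my own stand-in, with only the class names taken from the real page:

```python
from bs4 import BeautifulSoup

# A minimal stand-in for the color listing page's markup; the real page
# has hundreds of these div.color-inner groupings.
html = """
<div class="color-inner">
  <span class="color-sub">Baker-Miller Pink</span>
  <span class="color-id">#FF91AF</span>
  <span class="color-rgb">rgb(255, 145, 175)</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
divisions = soup.find_all("div", {"class": "color-inner"})

# Pull the name, hex, and RGB text out of each grouping.
for division in divisions:
    color_name = division.find("span", {"class": "color-sub"}).get_text()
    color_hex = division.find("span", {"class": "color-id"}).get_text()
    color_rgb = division.find("span", {"class": "color-rgb"}).get_text()
    print(color_name, color_hex, color_rgb)
```

Swapping the inline string for the real page’s HTML (fetched with something like urllib or requests) is the only change needed for the full scrape.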

Instead of trying to copy everything by hand, I had a working script to get all of the colors without needing to worry about human error. Plus, if the source website adds any new colors it’s trivial to re-run the script and get an updated list. I ended up making a map for each color and adding all of the maps to a list.

rows.append({"name": color_name, "hex": color_hex, "rgb": color_rgb})

Then I wrapped it all up at the end by exporting the list of maps to a JSON file.

with open('pinks.json', 'w') as outfile:
    json.dump(rows, outfile)

My other script which actually pushes the post to Twitter ingests this JSON file and then selects a random shade from it.
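That ingestion side can be sketched like this; the file name matches the earlier snippet, but the sample data and message format here are my own, and the actual Twython call is shown only as a comment:

```python
import json
import random

# Stand-in for the pinks.json file produced by the scraping script.
sample = [{"name": "Baker-Miller Pink", "hex": "#FF91AF", "rgb": "rgb(255, 145, 175)"}]
with open("pinks.json", "w") as outfile:
    json.dump(sample, outfile)

# The bot ingests the JSON file and selects a random shade from it.
with open("pinks.json") as infile:
    all_colors = json.load(infile)

color = random.choice(all_colors)
status = f"Today's shade of pink is {color['name']} ({color['hex']}, {color['rgb']})"

# Posting is then a single call through Twython:
# twitter.update_status(status=status)
print(status)
```

Keeping the color repository as a flat JSON file means the posting script never needs to touch the source website at all.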

Twitter Still Sucks

As if further proof were needed that Twitter is garbage, though, I found myself simultaneously amused and irritated just a few days ago when I saw that a daily post had not been completed. When logging into the account for the bot, I received a notification that the account had been flagged for “suspicious activity”, and I had to walk through a verification process before the account could post again. It’s amazing to me that a platform which tolerates the most hateful and dangerous rhetoric chooses to flag a clear bot that makes a single post each day with details on a different shade of the color pink as “suspicious.” It’s just further proof that Twitter really isn’t worth anyone’s time at this point.

My latest project, though, involves pushing data to Mastodon instead of Twitter. This post will serve as the first test of it, so assuming everything works look for a post on that in the near future.

Twitter Development Impressions

I recently had an idea to turn the Twitter account for my defunct podcast into a Twitter bot posting a new shade of pink every day; it makes sense because the podcast and Twitter account were centered around the color pink. I didn’t really think anyone would care about this particular bot, but it seemed like a fun project idea to work on. It ended up being an interesting learning experience, but not at all for the reasons I actually expected.

Developer Account

Getting a Twitter developer account can be either really simple or really irritating, and there’s no discernible difference that dictates what experience you’ll get. I went to the Developer portal and registered for a developer account with my normal Twitter account. This account is clearly me IRL; my name is in it, I have a photo of myself on the account, it links to my personal website, and the post history clearly indicates that the account is a person rather than a bot. As part of registering for the account, I had to describe what I was going to create. I honestly stated that I was just going to create a bot that would tweet a shade of pink each day. Twitter asks questions such as whether you plan to export data out of the service, whether you plan to display information posted to Twitter outside of Twitter, etc. I answered “no” to all of these questions since I wasn’t pulling any data out of the service. I just needed to post.

After I completed the registration form, I received a message that my account was under review and I would receive a notification when that review was completed. I was a little bummed since it happened to be a long weekend for me, and I was hoping that this project would give me something to fill the time. I was hopeful maybe the review would be completed quickly. I was wrong. It took just shy of 2 weeks before the review was completed. I had almost forgotten about the whole thing since I’ve been trying to stay off of Twitter as of late, but then I got an email telling me I was allowed in. Wild.

Authentication

Handling authentication with Mastodon is a relatively simple, straightforward process if you’ve done this sort of thing with… pretty much any API. It’s a little different than the type of things I do for work since creating a client means people other than the person writing the code can be authenticating, but it still makes sense and is well documented. On the other hand, authenticating through Twitter is a complete nightmare. Outside of the specifics of authentication, everything in Twitter’s documentation seems aimed at keeping each individual page as short as possible. As a result, every page links to numerous other pages, and you end up having dozens of browser tabs open just to have some clue as to what your complete workflow looks like. For the OAuth 1.0a option, which is the option to use if you need an account other than the registered developer account to leverage an application, they recommend strongly against making the JWT yourself in favor of leveraging a library… but they don’t actually share any of the particulars about their JWT setup… or even call it a JWT. You very clearly get the impression that Twitter doesn’t want anyone actually using their API. Crazy.

After seeing the poor documentation, I abandoned my ideas of making my bot in Rust or maybe Bash, and instead just decided to use Python with the twython library. I’ve manually parsed together enough JWTs that I didn’t think I cared enough to do it again for this. Seeing the workflow for twython showcased the next bit of crazy, though, which is that the authentication workflow sends the OAuth token to the callback URL. I basically needed to set up something completely different with an HTTP listener for the OAuth token so that I could move forward with authentication. That was entirely more than I wanted to put into this simple bot.

Note: The craziness of this makes me still think I’m not actually understanding the setup properly. I did verify, though, that the response received from where I was running the code included just the HTTP status, so the information is not coming back to the sender by default. Likewise, I couldn’t open my application up to other users without giving a callback URL, so omitting that wasn’t an option, either. Hit me up on Mastodon if I’m just dumb and there’s a reasonable way to handle this that I’m misunderstanding.

Registering Another Developer Account

At this point I realized that I really should have just registered for a developer account with the account I was planning to use with the bot. I started that process again fully expecting to wait another two weeks. I filled out all of the same information during the registration process, but this time when I completed the form I was immediately kicked over to the developer portal to start working on whatever I needed.

I’m still amazed that for the registration with my actual account, the one I use as a human being, I had to wait two weeks to get developer access. When I registered with the account that had been used for my podcast, though, I was allowed access sans review. The podcast Twitter account generally had nothing posted to it other than the automatic posts from Squarespace, had the podcast logo as the profile picture, had no website linked, and quite clearly was not a person… yet that’s the account that got in right away. Okie dokie!


Since I now had the account I was planning to post from in the developer portal, I could spin up my OAuth token directly from there in my browser as opposed to having to leverage 3-legged OAuth to a callback URL. As I started working on the code, though, I quickly realized that the script for the bot to post was going to be insanely easy. Instead, the much more interesting part of the code was the script I wrote to make a little local repository of information on shades of pink that was completely unrelated to Twitter. I had originally planned to cover that in this post, but since this ended up being longer than I expected that’s what I’ll cover next time around.