SSIDs Everywhere

As someone who has been an apartment dweller for a relatively long time, I can say there are some extremely solid perks to it. It’s nice to never have to worry about things like maintenance, for example; if something goes wrong, I open a service ticket and someone shows up to fix the problem. When my furnace wouldn’t start on a cold day, I wasn’t scrambling to figure out who to call. There are some downsides as well, though. For one, many people are frustrated that they can’t customize their home to the degree they’d like with respect to things like the color of the walls. Anyone who knows me knows that I couldn’t possibly care less about that. What does give me grief, though, is how thoroughly my neighbors pollute the wireless spectrum around me.

I had issues with this in my last apartment, which was roughly the same size as my current apartment but with a longer, narrower layout. When I moved my home office to the bedroom, which was at the opposite end of the apartment from my router, I saw a noticeable performance decrease in my home network; this was a problem when I was working from home and an even bigger problem when I was trying to reach Platinum in Overwatch (spoiler alert: I didn’t manage to do it.) At the time, I replaced my cheap home router with a mesh WiFi setup so that I could utterly drown out my neighbors with sheer WiFi dominance. I ended up buying a pack of 3 access points because it was only slightly more expensive than buying 2 access points individually. That was more than enough to blanket my 900 square foot apartment, and I didn’t think too much about my setup after that.

Fast forward to my current apartment, which is located in a much more populous area than where I previously lived. Pre-pandemic, I still didn’t think too much about my home network setup. The only bandwidth-intensive activity I did was video streaming, and that was mainly done from streaming sticks connected to my TV, which sat literally right on top of the router/access point connected to my modem; I never ran into issues with it. After the pandemic kicked into high gear in this area, though, I started living out of my home office and spending unholy amounts of time on web conferences. While they worked well enough most of the time, I’d periodically have spikes of extremely high latency that would cause me to sound like Megatron’s cousin while on calls. This was annoying on work calls and infuriating when trying to record podcast episodes.

At first I assumed that the problem had to be upstream, with my ISP being overloaded since suddenly everyone was staying home all the time. As the problem persisted, though, it made less and less sense to me. It would be reasonable if this behavior happened during the evenings when everyone is sitting around binge-watching their favorite shows because they can’t go out. When my network was choking to death on a 9 AM call, though, I was left scratching my head. Surely not enough people could be doing video calls at that time, right? While latency affects VoIP traffic heavily, video calling isn’t exactly the most bandwidth-intensive thing to be doing in 2020.

Thinking my mesh network was the problem, I even tried ripping it out and replacing it with a single router connected to my modem. While it seemed a bit more stable at first, I still ran into some of the same problems. Finally, I looked at the list of available networks from one of my client devices and realized just how many I could see. I fired up WiFi Explorer and was presented with this nightmare.

If you’re thinking that looks disgusting, you’re correct. Every channel in both the 2.4 GHz and 5 GHz bands is completely packed with networks. I’ve been checking this periodically since realizing it could be the problem, and I regularly see anywhere between 50 and 70 different wireless networks from my apartment. Yikes.
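For anyone who wants to do this kind of periodic check from a terminal instead of a GUI tool like WiFi Explorer, macOS ships a command-line scanner buried in a private framework. As a sketch, here’s how I’d tally networks per channel from its output; the scanner path is the commonly cited one but isn’t a stable, documented interface, and the scan results below are invented for illustration:

```shell
# macOS bundles a CLI scanner inside a private framework; this path is the
# commonly used one, but it's not a stable, documented interface.
AIRPORT="/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport"
# A live scan would be: "$AIRPORT" -s
# Sample scan output (SSID BSSID RSSI CHANNEL), invented for illustration:
scan='MyNet      aa:bb:cc:dd:ee:01 -52  6
Neighbor1  aa:bb:cc:dd:ee:02 -70  6
Neighbor2  aa:bb:cc:dd:ee:03 -80 11
Neighbor3  aa:bb:cc:dd:ee:04 -75 36'

# Tally how many networks sit on each channel (4th column), sorted by channel.
channels=$(printf '%s\n' "$scan" \
  | awk 'NF {count[$4]++} END {for (c in count) print c, count[c]}' \
  | sort -n)
echo "$channels"
```

Piping a real `"$AIRPORT" -s` through the same awk one-liner makes it easy to spot which channels are the most crowded before picking one manually.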

Admittedly, I’m part of the problem. I’m broadcasting a 5 GHz and a 2.4 GHz network from a router that I use exclusively for work. I’m also broadcasting a main and a guest network on my main mesh setup. Each of those SSIDs broadcasts on both the 2.4 GHz and 5 GHz bands from each of the 3 access points, meaning the mesh alone technically accounts for 12 SSIDs (2 networks × 2 bands × 3 access points). Even so, I’m still competing with, at minimum, 40-ish other networks crowding the same spectrum.

After coming to the realization that I may have been blaming all of the wrong things, I adjusted the placement of my access points so that the main router/access point connected to my modem has direct line of sight across my apartment to the mesh access point at my office desk (which moved from the desk proper to a table next to the desk.) Since doing that, knock on wood, things have at least seemed a bit more stable. Either I’ve been having a better experience on web conferences, or no one bothers to complain when my audio suddenly sounds like garbage because they’re just used to that happening from my end.

All that being said, I do still have an issue where the network stack on my MacBook Pro will crash, leaving me with no network connectivity until I disable and then re-enable WiFi. I haven’t managed to find a fix for that particular problem yet, though I imagine having 4 different flavors of VPN client installed probably isn’t doing me any favors.

Ubuntu Linux GRUB Error After 20.04 Upgrade

While I’ve nuked my personal VPS, I still have a VPS that I use for work; it comes in handy for things like running cron jobs, maintaining persistent shells, and generally handling things where a Linux shell seems better than a macOS shell (I’m looking at you, remote PowerShell sessions connecting to Microsoft Exchange.) This week I decided to upgrade it from Ubuntu 18.04 to Ubuntu 20.04. I like to stick to the LTS (long term support) releases for my servers, but I typically prefer to keep even the LTS releases upgraded rather than waiting for them to go end of life. I could have kept using Ubuntu 18.04 with maintenance updates until 2023 and security maintenance until 2028, but what’s the fun in that?

Upgrading a VPS is always a bit of a nerve-wracking situation just because I don’t have local access to the host in case something goes extremely awry. Ubuntu tries to help alleviate this by opening a second SSH daemon on a different port just in case the primary daemon crashes during the upgrade, but if the machine ends up in a non-bootable state I’m still more or less hosed. Fortunately for me, things almost went off without a hitch… almost.

While the upgrade did complete, I received an error toward the end of the process saying that GRUB failed to upgrade successfully. This was mildly terrifying since GRUB is the bootloader; if it’s not working properly, the system won’t boot, and I can’t access the VPS host to troubleshoot it. Luckily, GRUB continued to work in my case, and my system was able to reboot successfully after the 20.04 upgrade and beyond. GRUB just wasn’t getting upgraded. I quickly noticed that I also received an error from GRUB every time I ran sudo apt update && sudo apt upgrade to update my system. Again, the other packages would upgrade successfully, but GRUB would always complain:

dpkg: error processing package grub-pc (--configure):
 installed grub-pc package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 grub-pc
E: Sub-process /usr/bin/dpkg returned an error code (1)

After spending some time just ignoring the problem since it wasn’t exactly critical, I finally decided to do some digging. It turns out that problems like this have plagued Ubuntu upgrades for a while; I found a thread describing the same problem all the way back with an upgrade to Ubuntu 14.04. The solution in that case was to simply “nuke and pave” by removing GRUB and then re-installing it. It’s once again a bit of a white-knuckle situation, since if anything happens between removing and reinstalling GRUB, the system won’t be able to boot. The steps were very similar to the linked thread, with some minor differences in the era of Ubuntu 20.04. The first step was still to purge GRUB:

sudo apt-get purge grub-pc grub-common

Running this command in 2020 already removes /etc/grub.d/, so there’s no need to remove it manually. Instead, I moved straight to re-installing GRUB:

sudo apt-get install grub-pc grub-common

The installation process kicks off an interactive wizard asking which disk(s) GRUB should be installed to. In my case, I only needed it on the main disk, which is /dev/sda. With that done, I updated GRUB and then rebooted:
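For what it’s worth, the interactive disk prompt can also be answered ahead of time with a debconf preseed, which is handy when scripting this across several machines. The grub-pc/install_devices key is the standard debconf question for this package, though /dev/sda is specific to my VPS; the selection line below would be piped into sudo debconf-set-selections before running the install with DEBIAN_FRONTEND=noninteractive set:

```
grub-pc grub-pc/install_devices multiselect /dev/sda
```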

sudo update-grub
sudo reboot

This part kind of sucked, as I was left running nmap against the SSH port for my VPS and hoping that GRUB was properly set up to allow the system to boot. After a nervous 15 seconds, though, the port started to respond again, and I could successfully SSH into the server. Re-checking for updates showed that everything was fine; the errors about a pending GRUB upgrade that couldn’t be installed were gone. Admittedly, going through this upgrade without any specific need for it was probably unnecessary, but the beauty of Ubuntu is its popularity. Rarely will there be an issue someone else hasn’t encountered, solved, and documented before, and this problem was no exception.
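That anxious manual polling can be automated. Here’s a rough sketch of a generic poll-until-up helper; the host and port in the usage comment are placeholders, and nc -z is just one way to probe a TCP port:

```shell
# Retry a command until it succeeds or we run out of attempts.
wait_for() {
  # usage: wait_for <max_attempts> <delay_seconds> <command...>
  attempts=$1
  delay=$2
  shift 2
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$attempts" ] && return 1
    sleep "$delay"
  done
  return 0
}

# Real usage after a reboot would look something like:
#   wait_for 60 2 nc -z my-vps.example.com 22 && echo "SSH is back"
```

With a 2 second delay and 60 attempts, that gives the server about two minutes to come back before giving up.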

Offsite Podcasting

I’ve written before about how difficult it can be to record podcasts remotely, something that has continued to be a struggle throughout the pandemic. For the Unusually Pink Podcast, the irritation of recording remotely was enough to make both Brandi and me decide it was best to throw in the towel after a year. Mark and I have managed to continue doing well with the Same Shade Of Difference podcast, though significantly more work is involved in getting an episode together remotely. I say that like I have anything to do with it, but it’s really all Mark going through the work of recording the episodes we do over Discord, editing the files, adding in the music, and everything else. It’s substantially more effort, and at the end of the day we still don’t end up with as clean and natural a product as we would recording in person; latency means we still periodically talk over one another, even though we use video calls in an attempt to avoid exactly that.

What we’ve been experimenting with lately, though, are offsite recordings. They give us an experience similar to the onsite recordings we used to do from our old podcast studio, just in random locations. We first gave this a shot at Dead Lizard Brewing Company in Orlando when we were there for the Podfest Expo, AKA the last thing we did before the pandemic ground everything to a halt here. That went surprisingly well; you might be shocked at how little even the loudest background noise comes through in the recording.

With how cool the experience at Dead Lizard was, Mark and I have been trying to emulate it from places that allow us to sit outdoors, away from everyone else. We’re racing against time a bit as the weather continues to get colder, but it’s mostly been a great experience thus far. We first recorded at the West Sixth Farm for the “How We Work in the COVID-19 Era” episode. The Farm is a terrific location, with a covered pavilion that has picnic tables, accessible WiFi, and outdoor power we can use. Barley the farm dog came to hang out with us for a little while, and the view while recording is very difficult to top.

The one downside of this setup is that, with things being relatively out in the open, we end up at the mercy of the wind. We attempted to record a second Same Shade Of Difference episode there about a month ago, but we had to scrap the idea; the windscreens for our microphones couldn’t keep up, and the wind would have been a constant disruption throughout the episode.

More recently, we recorded at Idlewild Park in northern Kentucky. This was also a cool experience where we were able to take a quick look around, find a pavilion with no one else at it, and post up to record without having to worry about being too close to anyone. We were fairly close to Cincinnati/Northern Kentucky International Airport, but even that sound barely came through the microphones.

That episode isn’t live yet, but look for it soon over at the Same Shade Of Difference website or your podcast source of choice.

As with anything, there are caveats to offsite recording. Things like electricity and Internet access aren’t guaranteed when you’re trying to stay away from everyone else, which is exactly what makes the West Sixth Farm such an insanely good location. While at Idlewild Park, we just leaned on the batteries for our gear and used the hotspots from our phones and tablets for connectivity. The pavilions at both locations were clutch for the possibility of inclement weather. For both locations, either Mark or I had to travel at least a bit to get there, so having backups of pretty much everything is helpful; spare batteries, memory cards, headphones, and the like can save you from a scrapped recording session. I say this as if I provide any of it, but it’s really all Mark who has both everything you need and backups of everything you need in his bag at all times. I give him a lot of grief over how heavy his backpack is, but I’d be lying if I said it hadn’t saved me more than a few times.

On the whole, I would highly recommend that anyone struggling with remote podcast recording over the Internet try meeting up at an out-of-the-way place to record in person while being outdoors and socially distant, weather permitting. The elephant in the room for this topic, of course, is how you record the episode, since dragging something like the RODECaster Pro out there isn’t feasible, even if it’s technically possible given that the device will run off of D batteries. The recordings in Orlando and at the Farm were done with Mark’s Zoom H4n Pro, and while that device did a terrific job of recording, it still left a lot to be desired with regard to the amount of editing work required afterward. Without any built-in soundboard, someone (AKA Mark) would have to go in and actually edit our recording to insert things like our intro music. To alleviate this, Mark recently got the new PodTrak P4. I won’t share any thoughts on that here, though, as that’s the topic of a future podcast episode! Stay tuned to the Same Shade Of Difference if you’re interested in our take on that particular device.

Updating PowerShellGet

It’s not too often these days that I find myself needing to update the underpinnings of PowerShell. The majority of the PowerShell work I do now is based on PowerShell Core (currently at version 7.0.3), which frequently ships with newer versions of the supporting modules. PowerShell Core began with PowerShell 6 and is built on .NET Core, Microsoft’s open source and cross-platform flavor of .NET. PowerShell version 5 and earlier, known as Windows PowerShell, is the original, Windows-specific variant of PowerShell. Microsoft doesn’t really do any new development work on Windows PowerShell, instead opting to work on PowerShell Core and slowly make the full set of functionality available on all platforms.

This is awesome, but some systems very specifically target Windows PowerShell. That can easily happen since the interpreter even has a different name: Windows PowerShell runs as powershell.exe while PowerShell Core runs as pwsh.exe, in an effort to allow the two versions to co-exist on the same Windows host. As a result, systems that proxy PowerShell commands or scripts down to a target machine on your behalf, and that haven’t been updated to expect PowerShell Core, will generally target Windows PowerShell instead. This was the situation I found myself in last week.

I was attempting to load a script that I had written into a monitoring platform, which then sends my script down to any number of “collector” machines in order for it to execute and do the actual data aggregation. In this case, my script failed because it was calling the MSAL.PS module. MSAL is the Microsoft Authentication Library, and as the name indicates, it facilitates authentication to Azure AD. It replaces the older Azure AD Authentication Library (ADAL), and is honestly much nicer to use. The module needs to be installed first, though, and while I had previously installed it on the target system under PowerShell Core, Windows PowerShell is a completely separate entity with a separate space for modules. I remoted to the system and ran the following to handle the installation from an administrative Windows PowerShell session:

Install-Module -Name MSAL.PS

Instead of joy, I got the following error message:

WARNING: The specified module ‘MSAL.PS’ with PowerShellGetFormatVersion ‘2.0’ is not supported by the current version of PowerShellGet. Get the latest version of the PowerShellGet module to install this module, ‘MSAL.PS’.

Ick… some things were a bit old in the Windows PowerShell installation. This was one of the rare instances where the error message didn’t tell me exactly how to fix the issue, though, so I did a few searches on this exact error. The trick is that updating PowerShellGet involves not one but two steps.

While PowerShellGet is a module specific for discovering and installing PowerShell packages from the PowerShell Gallery, it leverages Microsoft’s much more generic NuGet package manager. To get the latest version of PowerShellGet, I first had to make sure I was using the latest version of NuGet by running:

Install-PackageProvider -Name NuGet -Force

Once that completed, then I was able to successfully update PowerShellGet via:

Install-Module -Name PowerShellGet -Force

Once the update completes, the current PowerShell session will still be running the old version. I just closed PowerShell, re-launched a new administrator instance, and then successfully installed the module via the same cmdlet from earlier:

Install-Module -Name MSAL.PS

Safari 14

Last week the 10.15.7 update to macOS Catalina came with a nice surprise: Safari 14. I was caught off guard by this since I had assumed we wouldn’t see Safari 14 until Big Sur released later this year. It was also a nice surprise for me since Safari has become my browser of choice, not just on my iPhone and iPad but also on my MacBook Pro. The big reason for this is that I do my best to avoid any Chromium-based browser. Over the last few years we’ve seen browser diversity erode more and more as new browsers are built on Chromium (e.g. Brave) while others abandon their own engines in favor of Chromium (e.g. Opera and Edge.) I see this homogeneous browsing platform as pretty bad for the Internet as a whole, as it opens up the possibility for web developers to focus all of their development on Chrome and ignore everything else. This leads to sites that only work in Chrome and ignore web standards, just like we saw back in the day when much of the web was developed with only Internet Explorer 6 in mind. The difference now is the way the web has evolved into an entire platform. In 2004 the main issue was that sites developed just for IE 6 wouldn’t quite render properly in other browsers. In 2020, there are entire web apps that straight up won’t work in non-Chromium browsers. That’s something I can’t support.

The two major browsers moving forward with different engines are Firefox (with Gecko) and Safari (with WebKit.) I was previously using Firefox on my laptops, but I became extremely concerned recently when Mozilla had massive layoffs and switched their mission to focus on revenue. I certainly understand that Mozilla needs to make money in order to continue making Firefox, but when a group lays off their entire incident response team, I don’t exactly feel warm and fuzzy inside about using the product. I still use it on my Linux installations, but on macOS I switched to Safari.

The pleasant part about switching to Safari is that, for the most part, it’s been a very slick browser that I’ve enjoyed. While Safari 14 doesn’t do anything Earth-shattering or even much that’s different from other browsers, it does bring Apple’s offering up to parity with some of the major players. For example, Safari will now finally display favicons for websites on tabs. How it made it this far without supporting them I’ll never understand, but it immediately makes a huge difference in quickly finding the tab I want… and I say this as a person who typically doesn’t have more than 10 tabs open at any given time. Tab addicts (you know who you are) will especially appreciate this when Safari starts stacking tabs on top of one another. As another update to tabs, Safari can now preview the content of a page when the mouse hovers over its tab. This can also be useful for quickly finding the appropriate tab without actually having to switch to anything.

The big change, though, is how Safari communicates with the user about how it has helped protect against invasive tracking. This feature is extremely similar to the Protections Dashboard in Firefox. There’s an icon to the left of the address bar that can be clicked at any time to see a breakdown of trackers on the current page, along with the specifics of which trackers are being blocked:

For a bigger picture, I can also get an overall view of what’s been blocked in the past 30 days. I can see which sites were attempting to be the most invasive, and similar to the per-site rendering, each can be expanded to show which trackers they had embedded:

Similarly, I can click on the Trackers heading in order to see a list of which trackers appear the most frequently across the sites I’m visiting. I can expand those listings to see which specific sites are hosting that tracker:

I don’t think it should come as a surprise to anyone that Google, Bing, and Facebook appear the most frequently after just a short period of testing. It’s also interesting to see trackers from both Facebook and Snapchat when I don’t use either of those “services”. It really shows you how pervasive they are across the Internet.

While I can already hear the Apple-haters I know railing on the fact that Firefox already has this feature, in my opinion it’s nice to see Apple bringing their browser up to feature parity, offering a more transparent and secure browsing experience in a package that doesn’t leverage Chromium and has more than a skeleton crew supporting it. Similarly, you still don’t see anything like this today in Chrome or Edge, likely because the companies behind them both appear relatively high up in the tracker list.

Connecting An Existing Firebase Hosting Project To A New Site

As a follow-up to my last post on GitHub Pages, I mentioned that I moved one of my websites to Firebase. Firebase is a platform from Google for creating web and mobile applications. As a PaaS offering, there are a lot of different parts to the service, but as a platform for web applications, hosting is naturally one of them. The free Spark plan offers 10 GB of storage, 360 MB of data transfer per day (which works out to roughly 10 GB of bandwidth per month), and support for custom domains and SSL. That’s more than enough for me to host a simple, single-page website made up of static HTML, CSS, and a single image. If anyone is curious, my site is using just 1.8 MB of storage and 15 MB of bandwidth. Note that bandwidth used divided by storage used will not be indicative of total hits due to caching, compression, etc.

I’ve used Firebase before, so I already had my Google account linked up to Firebase, and I even had a project still technically “live” there, though the domain had long since been shifted somewhere else. To be honest, it had been so long since I used Firebase that I almost forgot about it until I just happened to start receiving some well-timed emails from the service informing me that I needed to re-verify ownership of the domain I was using for my defunct project. I had no interest in re-verifying anything, but I did want to start hosting something new there.

The first step for hosting new content was to log in to the Firebase Console. Since I had already used the service, this gave me tiles of my existing projects; in my scenario, I just had a single project for my hosting. I clicked on that tile and was taken to a Project Overview screen, which gives a high-level look at the project. To get to the hosting-specific functionality, though, I just had to click the Hosting option under the Develop menu on the left.

On the hosting dashboard, the first item listed contains all of the domains associated with the project. Clicking the 3 dots … next to a domain allowed me to delete it; I removed the two entries (apex domain and www) for the domain I used previously. Then I clicked the button for Add a custom domain. I followed the instructions on the screen to add a custom domain; I won’t document the steps here since they’re directly covered through the Firebase custom domain documentation.

With everything configured on the Firebase side, I next needed to crack into the Firebase CLI to link up my local project. I opted to install the standalone CLI, though you can still get it through npm if you prefer to roll that way. The first thing I had to do was link the CLI to my Firebase account. This differs based on whether you’re using the CLI from a system with a GUI or from a headless system you’re accessing via SSH. I was using it from a headless system where I can’t pop a browser to follow the normal authentication process; as a result, I ran:

firebase login --no-localhost

If you’re running this from a system with a GUI, I believe you can just omit the --no-localhost parameter. In the headless setup, though, this gives a Firebase URL to navigate to on another system. I copied it out of my terminal and pasted it into the browser on my laptop. That produced an authentication code for the CLI; I copied it from my browser, pasted it into my terminal, and that linked the CLI to my account in the Firebase platform.

Since I was just moving my content from my old VPS to Firebase, I didn’t have to worry about actually creating a website; I already had one that was backed up in a tarball. I simply had to expand my tarball on the same system where I was using the Firebase CLI. I did this by creating a new directory for the project, expanding my tarball that had all of my site’s content, and then copying that content to the project directory:

mkdir ~/laifu
tar -zxvf ~/temp/laifu.tar.gz -C ~/temp
cp -r ~/temp/html ~/laifu

Note: If you look closely at the commands above, you’ll see that after expanding the tarball I’m recursively copying not the entire directory but the html folder from it. This is because my tarball is of the entire /var/www/laifu.moe/ directory that Nginx was previously hosting on my VPS, and the html directory is what contains the content of the site. If your backup stores the content directly (i.e. it’s not in a subfolder), that’s fine. However, you’ll want to make a new folder inside of your project directory to copy the content to, because you do not want the content of the site to sit in the root of the Firebase project’s directory. For example, your mkdir command would look something like: mkdir ~/myproject/html

Once I had the files situated accordingly, I needed to tell Firebase that my directory was a Firebase project. Similar to using git, I did this by navigating to my project directory and running:

firebase init

This gets the ball rolling by asking some questions interactively through the CLI. One question will ask which service the project should be connected to; be sure to pick “Hosting.” After that there should be a prompt for which existing hosting project you’d like to use. The existing project should be listed as an option to be selected. If it’s not there, you can cancel out of the process and verify that your authentication worked by running the following and confirming that you see the project. If it’s missing, you may need to redo the authentication (e.g. maybe you were in the wrong Google account when pasting into your browser.)

firebase projects:list

After selecting the project, the CLI will ask what to use as the “public directory.” This is essentially asking what directory inside of the project directory contains the web content to be hosted. In my case I picked html since that’s what I named the folder.
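The answers end up in a firebase.json file at the root of the project. With html as the public directory, mine looked roughly like the following; the ignore list shown is the default one the CLI generates, so treat this as a sketch rather than an exact copy of my file:

```json
{
  "hosting": {
    "public": "html",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}
```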

Be wary of the next couple of prompts, which will trigger regardless of whether or not there’s something in your public directory matching them. When prompted about your 404.html page, opt not to overwrite it unless you really hate your existing one. When prompted about index.html, definitely don’t overwrite it or you’ll lose the first page of your site.

Once that’s all done, you should get a message:

“Firebase initialization complete!”

This means that the directory has been initialized successfully as a Firebase project, but the local content still hasn’t been pushed to the cloud. So the last step is to run the following:

firebase deploy

This will give a “Deploy complete!” message along with a Firebase-specific URL in the format of:

https://project-name-GUID.web.app

Copying this URL and pasting it into a browser should allow you to verify that the content you expect is now being hosted, even if you’re still waiting for DNS TTLs to expire before you can navigate to your custom domain. The Hosting dashboard of the Firebase console will also show the update in the “Release History” section.

GitHub Pages Hosting

As I had mentioned in my post about Dropbox Passwords, I’m looking to cut down on the number of services that I pay for each month. One of the areas I’ve decided to cut down on is my domains; I’m letting a few domains that I never found much of a use for expire rather than having them automatically renew. Some have been renewing like this for years just because I didn’t want to lose them, despite never having any real use for them. With a decrease in my domains comes a decrease in websites, to the point where I started to wonder if I could get away with ditching my VPS. I had been using the same VPS for over 2 years, and it served me well. In a world with so many hosting options, though, it seemed like overkill just to run 2 static websites, each of which was only a single page.

One of my sites I placed on Firebase. I’m not a fan of using Google products, but I’ve used Firebase previously (moving my website to an existing, stale Firebase project will be the topic of another post), and the free Spark plan gives me more than enough for a simple site, with 10 GB of storage and roughly 10 GB of egress traffic each month.

I wanted to check out some different options for jfabhd.com, though. After recently reading one of Kev Quirk’s blog posts, I thought I would give Netlify a shot. Their free Starter plan seems great for a simple hobby site and includes CI (continuous integration) from a git repository. I signed up for an account but quickly disliked the fact that leveraging my own domain meant moving its nameservers to Netlify. While this isn’t horrible, I really prefer to keep managing my DNS in a single place as opposed to scattering nameservers around to wherever my content is hosted. Currently all of my personal domains have DNS hosted in the same place, and I’d like to keep it that way. As a result, I shelved the idea of Netlify and looked to GitHub Pages instead.

I had actually used GitHub Pages before, way back when the service was brand new and I set up my first Jekyll-based blog. It wasn’t bad by any stretch, but a lot of it was clunky. I remember having to manually add some text files to the repository to configure my custom domain and to host content out of a folder that was named differently than what was expected. Likewise, there were no SSL options, so I ended up putting my GitHub Pages site behind CloudFlare in order to secure it. I figured this would be a good opportunity to see what, if anything, had changed. If I hated it, I wouldn’t be out anything and could continue to look at other options.

The initial setup is still the same as I remember: just create a public repository with a name of:

github-account.github.io

I did this through the GitHub website in less than a minute. Next up, I ran git clone to initialize the repository on my laptop, in the same directory where I keep all of my other GitHub repos. With my local environment ready, I just copied the handful of files that I had backed up from my VPS into the root directory of the repository; absent any other configuration, GitHub hosts content from the root of the repo. Since this is a static, single-page site, I don’t need to worry about compiling it with a static site generator like Jekyll or Hugo. I committed the change adding the files, navigated to https://jfaby-noc.github.io, and saw my site.

With the content out of the way, I wanted to set up my custom domain. The GitHub side of the work can now be done through the Settings menu of the repository; it basically replaces the manual work of adding files to my repository that I previously had to do.

The top allows me to change the branch and directory to host content from; in my case, I could just leave the defaults. The Custom domain section allows me to type in my domain of choice. This just adds a file named CNAME to my repo containing the domain information. Then I just had to follow the directions for setting up a custom domain in my DNS host’s settings.

Note: The directions are a little wonky here, but to make GitHub redirect everything appropriately when using both an apex domain and a subdomain, follow both sections of the instructions verbatim. For example, I wanted the domain to be jfabhd.com, but I also wanted www.jfabhd.com to still redirect to the site. I configured the apex domain via the instructions above, creating 4 A records pointing to different IP addresses. Then I configured a CNAME record for www.jfabhd.com pointing not to jfabhd.com, but instead to jfaby-noc.github.io. If you do it this way, GitHub will work it all out under the hood.
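In zone-file terms, the records I ended up with look roughly like this (the A record IPs are the ones GitHub documents for Pages at the time of writing; check their instructions rather than copying these blindly):

```text
jfabhd.com.      900  IN  A      185.199.108.153
jfabhd.com.      900  IN  A      185.199.109.153
jfabhd.com.      900  IN  A      185.199.110.153
jfabhd.com.      900  IN  A      185.199.111.153
www.jfabhd.com.  900  IN  CNAME  jfaby-noc.github.io.
```

The important part is that the www CNAME targets the github.io hostname, not the apex domain.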

Immediately after setting up my DNS records, the option for Enforce HTTPS was not available, telling me that the site was not configured properly. I rightly assumed this just meant DNS needed time to propagate. I checked back 15 minutes later (which is the TTL of my DNS records), and it presented me with a new message that the certificate wasn’t finished being created yet. I once again rightly assumed that they were spinning up these certificates through Let’s Encrypt, so I browsed Hacker News for a few minutes until refreshing my repository’s settings showed that the option to force HTTPS was now available. I simply checked the box, waited a few minutes, and then verified that going explicitly to http://jfabhd.com would redirect me successfully to https://jfabhd.com. If this doesn’t work for you, chances are that you just didn’t give it enough time. While the tooltip in the GitHub UI says it can take up to 24 hours, it took about 5 minutes for my site.

The last thing to check was that the CI was working so that changes to the repo would be reflected on the site. A few things had changed since I took the backup of my site, so there were some tweaks I could test with. For one, I restarted this blog, and I deleted my Twitter account since Twitter is a cesspool (that might be a good topic for another post…), so I wanted to swap the Twitter link on my site for a link to this blog. I first did a git pull to get local copies of the files that had been created in the cloud, like the CNAME file, and then I quickly updated my HTML to share a link with the Font Awesome RSS feed icon as the content. After committing and pushing the change, I refreshed the site to confirm it had been updated.

On the whole, there’s really nothing for me to complain about with GitHub Pages. It’s free, I can use the same GitHub account I’m already in every day, I can use a custom domain without moving my DNS, and I get a Let’s Encrypt certificate out of the box. Obviously, though, my use case for it is very simple, and your mileage may vary. With options like this, though, I feel even better about my idea to stop running my own VPS just to host a couple of small, low-traffic websites.

Salvaging Images From Squarespace

I wrote previously about moving this blog from Squarespace to WordPress. One of my cited concerns with Squarespace was being locked into that particular platform without a lot of options for moving somewhere else. So how did I move my content to WordPress? I was able to export the written content for the posts themselves from within Squarespace, fortunately. Inside of Settings > Advanced is an Import / Export option. The only export offering is WordPress, so I guess it was lucky that’s where I was moving. This gives an XML file with the written content and metadata for each post. Unfortunately, there is no option to export the images that I’ve uploaded over the past year of creating content over at Squarespace; within the XML file the images show up as <div> tags with a link to the Squarespace CDN for the actual image. For example, this is what I see where the image is for the last post I authored over on Squarespace:

<div style="padding-bottom:45.903255462646484%;" class=" image-block-wrapper has-aspect-ratio " data-animation-role="image" > <noscript><img src="https://images.squarespace-cdn.com/content/v1/5cabd40b755be258403ccb99/1595366630913-VD12AA9IHURXLT6CWQ66/ke17ZwdGBToddI8pDm48kKEtlFlv-8yggmb8KJA0a9wUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcpEap199WJ5tA07nqy9HB7RsfdGE2RUqSBzw535kCng92V_tkyiZ3FgjXcK6wugnz/html.png" alt="html.png" /></noscript><img class="thumb-image" data-src="https://images.squarespace-cdn.com/content/v1/5cabd40b755be258403ccb99/1595366630913-VD12AA9IHURXLT6CWQ66/ke17ZwdGBToddI8pDm48kKEtlFlv-8yggmb8KJA0a9wUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcpEap199WJ5tA07nqy9HB7RsfdGE2RUqSBzw535kCng92V_tkyiZ3FgjXcK6wugnz/html.png" data-image="https://images.squarespace-cdn.com/content/v1/5cabd40b755be258403ccb99/1595366630913-VD12AA9IHURXLT6CWQ66/ke17ZwdGBToddI8pDm48kKEtlFlv-8yggmb8KJA0a9wUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcpEap199WJ5tA07nqy9HB7RsfdGE2RUqSBzw535kCng92V_tkyiZ3FgjXcK6wugnz/html.png" data-image-dimensions="1013x465" data-image-focal-point="0.5,0.5" alt="html.png" data-load="false" data-image-id="5f175ce6cb20a366ea6f4d62" data-type="image" /> </div>

If you think that looks disgusting, that’s because it is. When I imported the XML file into WordPress, I saw an option to download any attachments on each post. I checked that box, but since the images are linked to the Squarespace CDN they’re considered to be HTML content rather than attachments. As a result, WordPress simply embeds the <div> in each post as a custom HTML block that doesn’t actually render the image.

Set on not going through 50 posts to manually save the images out of them, I started looking at the XML to see if I could do anything useful with the image URLs. One thing that immediately concerned me was that, when I wasn’t sure what I was going to do with the unusually.pink domain but knew that I didn’t want to keep it at Squarespace, I marked the Squarespace site as Private, meaning the only way to view the content was to log in. I assumed this meant the image content on the Squarespace CDN would be inaccessible until I made the site public again. After copying an image URL from my XML file, though, I saw that it was still publicly available. Flagging a Squarespace site as private means you can’t load the site directly, but content on Squarespace’s CDN is still accessible. That in itself seems like a problem to me and a very good reason to leave the platform, but in this one case it was working to my benefit. I realized that I could parse all of the image URLs out of the XML file with a script and download them programmatically.

As you can see from the XML snippet above, images on the Squarespace CDN have URLs like this:

https://images.squarespace-cdn.com/content/v1/5cabd40b755be258403ccb99/1595366630913-VD12AA9IHURXLT6CWQ66/ke17ZwdGBToddI8pDm48kKEtlFlv-8yggmb8KJA0a9wUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcpEap199WJ5tA07nqy9HB7RsfdGE2RUqSBzw535kCng92V_tkyiZ3FgjXcK6wugnz/html.png

There’s a whole lot of CDN nonsense, followed by a forward slash and the original file name at the very end. While this would be handy for getting the original file name, I didn’t want to end up with dozens of images in a folder where I had no idea what post they belonged to, and I definitely didn’t want to manually correlate the file name with the CDN link in each of the HTML blocks in WordPress.

The XML, though, also includes the title of each post, and I realized that if I was scanning each line of the XML for image tags, I could also check for the title tag and keep a variable updated with the most recent title. With this idea, I would just start each file name with the post title so that images from the same post would be grouped together. Once the dots were connected, it was simple to come up with the following short PowerShell script:
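The original was PowerShell, but the logic is straightforward enough to sketch in any language. Here’s a rough Python equivalent of the same idea; the regexes, function names, and file paths here are illustrative rather than the exact code I ran:

```python
import re
import urllib.request
from pathlib import Path

# Anything that isn't a word character, dot, or hyphen gets collapsed to a
# single hyphen; characters like "[" are legal in most filesystems but are
# miserable to deal with from a shell.
UNSAVORY = re.compile(r"[^\w.-]+")

TITLE_RE = re.compile(r"<title>(.*?)</title>")
IMG_RE = re.compile(r'src="(https://images\.squarespace-cdn\.com/[^"]+)"')

def sanitize(name):
    """Make a string pleasant to use as part of a file name."""
    return UNSAVORY.sub("-", name).strip("-")

def parse_export(lines):
    """Scan the export line by line, tracking the most recent post title and
    yielding (title, image_url) pairs. Duplicates are skipped because the
    Squarespace markup repeats each URL in src, data-src, and data-image."""
    title = "untitled"
    seen = set()
    for line in lines:
        m = TITLE_RE.search(line)
        if m:
            title = m.group(1)
        for url in IMG_RE.findall(line):
            if url not in seen:
                seen.add(url)
                yield title, url

def local_name(title, url):
    """Build a postTitle_fileName.extension name from the CDN URL's last segment."""
    return f"{sanitize(title)}_{sanitize(url.rsplit('/', 1)[-1])}"

def download_all(xml_path, out_dir="squarespace-images"):
    """Download every CDN image referenced in the export into one directory."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    with open(xml_path, encoding="utf-8") as f:
        for title, url in parse_export(f):
            urllib.request.urlretrieve(url, out / local_name(title, url))
```

Calling download_all against the export file drops everything into a single directory, already grouped by post thanks to the title prefix.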

It downloads all of the images referenced in the XML file in the format of:

postTitle_fileName.extension

I added some extra checks to remove unsavory characters from the file names. For example, [ is a valid character in most modern filesystems, but have you ever tried to actually do things programmatically from a Bash shell with a file that has a [ in its name? It’s not pretty.

While this saved me from having to manually download each image from Squarespace, I still had to manually go through each post in WordPress, remove the custom HTML block where each image should have been, and then upload the appropriate image. With the way I downloaded the images, though, I just started at the top of the directory and worked my way through the images alphabetically since each post was grouped together. It sucked, but it could have been a lot worse. If nothing else it made me glad that I moved forward with migrating the site now rather than waiting a few more months for the Squarespace subscription to lapse; I didn’t want to deal with this for any more posts than was strictly necessary.

Dropbox Passwords

I tend to pay for a lot of subscription services. In fact, my friend Mark and I have enough of them between us that we needed not one but two episodes of our podcast just to talk about all of our subscriptions. Since the pandemic means I have nothing better to do with my time than sit around and think about things like how much money I spend on subscriptions, though, I’ve been thinking about which ones I might be able to do without, which ones I could swap for cheaper services, etc. to save myself a little bit of money each year. It often feels trivial to tack on yet another thing that costs $5 – $10 a month, but over the course of the year it adds up.

Enter Dropbox Passwords, a password manager built into Dropbox. In the past I’ve used Dropbox to sync passwords in conjunction with KeePassX, so having the same functionality built directly into the platform seemed nice. The fact that it’s a feature included with my Dropbox Plus plan and would save me from paying $80 a year for my current password manager is also a nice bonus. First I just had to put it through its paces.

Migration

Migrating to a new password manager is typically a fairly painless process. Every password manager I’ve ever used has given me the option to export my passwords to a variety of plaintext file formats. Naturally, having a plaintext file with all of your credentials is a terrible idea, but unless the machine you’re operating on is a digital cesspool it should be fine for the few minutes it takes to import the file somewhere else.

In my case, I exported a CSV and imported it into Dropbox Passwords. I initially got a message that there weren’t any accounts to import. I opened the CSV file and saw that some of the columns had weird headings and assumed Dropbox didn’t know what fields in the CSV mapped to which fields within Passwords. Their help documentation covers what’s needed:

The columns in your CSV file must be labeled so Dropbox Passwords knows how to import the information. Although Dropbox Passwords can recognize a range of labels, we recommend labeling them “Name”, “Password”, “Username”, “Notes”, and “URL”.

I updated the column headings to match the documentation above, and everything was fine.
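For reference, a minimal import file with those headings would look something like this (the values here are obviously made up):

```text
Name,Username,Password,URL,Notes
Example Site,user@example.com,correct-horse-battery-staple,https://example.com,Just an illustrative row
```

Once the headers matched what Dropbox expected, the same file that previously reported “no accounts to import” went through without issue.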

Desktop Client

The desktop client for Dropbox Passwords is spartan to say the least. You get fields for:

  • Site Name
  • Username
  • Password
  • URL
  • Notes

That’s it. In other password managers, I frequently leverage either additional passwords or custom fields to add things like app passwords, API keys, etc. While I could store those in the free-form Notes field in Dropbox Passwords, the values aren’t masked out like they would be in other services with dedicated fields for this sort of thing.

After the initial setup where I logged in with my Dropbox credentials, the app gave me a “word list.” This PDF just had 12 random English words on it. This serves as an extra security mechanism that I’ll touch on in the next section.

After the app was set up, it asked me to create a 6-digit PIN. That PIN is used to unlock the app if it times out due to inactivity. It’s worth noting that the browser extensions will not autofill login information if the application is currently locked; more on that later as well.

Mobile Apps

There isn’t too much to say about the mobile apps; they’re basically exactly what you would expect. It is worth mentioning that, at the time of this writing, there’s no iPad version of the app, meaning I’m stuck looking at the blown-up iOS app. It’s not a huge ordeal, though, because aside from logging in initially I almost never open the app itself. Like every other password manager, iOS can be configured to automatically get passwords out of it without requiring an app switch. It also integrates with Face ID and Touch ID on iOS for quick unlocking.

Multi Factor Authentication

Dropbox Passwords automatically implements a sort of MFA. When I logged in to the app on my phone, for example, it gave me a prompt on the desktop client. I had to accept the prompt there to confirm that I was, in fact, trying to configure the app on a phone. Likewise, when I configured the app on my iPad, I received a prompt on both my laptop and my phone.

This is where you might wonder what happens if I don’t have any of those other devices handy. In that case, I can use the word list to log in. I actually ended up doing this one time, and it worked without a hitch. What happens if I also lose the word list? Let’s hope I never find out. It’s nice to know, though, that despite the fact that the content is tied to a Dropbox account, Dropbox account credentials alone aren’t enough to access it.

Browser Extensions

You might wonder why I talked about desktop and mobile clients, switched gears to authentication, and then came back to a “client.” The reason is that the browser extensions are literally just a wrapper providing integration with the desktop app for things like autofilling credentials. For example, clicking on the Dropbox Passwords extension icon in Safari on macOS doesn’t even open a UI for the extension… it pops open the full Dropbox Passwords client. I see this frequently: nothing autofills in my browser, I click on the icon for the browser extension, and then the full app opens, requesting my PIN to unlock it.

The reason the wrapper browser extensions are noteworthy is that there’s no standalone extension or even direct web access. If Dropbox Passwords doesn’t have a client on your platform of choice, you’re simply out of luck. For example, I can’t access my passwords when using Manjaro Linux on my Pinebook Pro. I verified this by installing the browser extension; clicking on it brings me to a lovely message that the application isn’t available for my platform.

Where this seems really insane to me is that if I log into my Dropbox account on the web, I can see the vault for Dropbox Passwords! But clicking on it just gives me the same message that the application isn’t available for my platform.

I can’t actually do anything to access it. Even just some kind of web portal like I can access with Bitwarden, LastPass, or 1Password would be better than nothing. I can definitely understand not making a native Linux app a priority, but not having a browser extension or web access in 2020 blows my mind more than a little.

I really hope this is something the Dropbox Passwords team is actively working on. While the overall service isn’t quite as slick or polished as some of its competitors, the fact that it comes included with paid Dropbox Plans is a huge boon; people like myself will have to think twice about paying extra money for a service they already have included with their existing Dropbox subscription. There are some hurdles to overcome for Dropbox Passwords to reach parity with its competitors, but for many people it’ll be good enough as-is.

Unusually Pink Migration

So Long, Squarespace!

If anyone stumbles across this site who was previously an Unusually Pink reader, you might notice that it looks a bit different after a few months of hiatus. In its short lifespan of just under 2 years, the site has now moved to its 3rd host. Originally it was hosted on a Vultr VPS that I had been hosting a few other things on, back when I bought the domain because I loved the name but had no idea what to do with it. Then Brandi, my former co-host, and I decided to start a podcast; it quickly became apparent that my web development skills weren’t exactly up to par with what we wanted to accomplish. As a result, we moved the site over to Squarespace.

Our podcast lived just long enough for the Squarespace hosting to renew before Brandi and I both decided that things had run their course. It was unfortunate that I had just forked over another year’s worth of money to Squarespace for hosting before reaching that decision. With that being said, you might be wondering why on Earth I’d be re-hosting the site somewhere else if I still have time left on the Squarespace subscription; more on that will come a little later on. With this being my first time using Squarespace, though, I thought I would first share some thoughts after running a site there for a year.

The Good

When I initially decided to move the site from my VPS to Squarespace, it was mainly because I knew I needed hosting somewhere, and it seemed like a good chance to mess around with something new. I had run numerous blogs on a free WordPress.com account, along with compiling many of my own blogs with Hugo, as I tend to discuss frequently. Since we wanted an online presence that made us look like we knew what we were doing, though, I figured this was a worthwhile opportunity to justify spending the money on Squarespace hosting.

Squarespace offers, hands down, the nicest management interface I’ve ever seen. Everything is very slick and inviting without being overly cluttered and complicated. It’s simple to add new pages or even entire new branches to your site. For example, I originally migrated the blog I had been running under the Unusually Pink domain to Squarespace, but I quickly realized that the best way to handle the show notes for each podcast episode would basically be another blog. It was trivial to literally add a second blog to the site; I just had to tell Squarespace what directory I wanted to host it under and which of the two would be the “main” page of the site. The two were then independent of one another.

Squarespace doesn’t offer nearly as many themes as you’ll find with something like WordPress, but all of the Squarespace themes are highly customizable without having to wander into the realm of HTML and CSS. For example, for any theme I can change literally every color by simply using the menus presented to me. On the flip side, the WordPress theme you see right now only offered a handful of elements for color modification. Even worse, this theme offered more options than many of the others I looked at, where changing anything beyond the text color would’ve involved modifying the CSS.

Finally, Squarespace gives you an absurd amount of information about the traffic to your site, all without the need for any type of plugins. You can simply link up Google credentials to integrate with Google Analytics, for example, and see what people are searching for to reach your site, what position you’re in for the search results, the click percentage, how many impressions you get, etc. It also offers a very slick, interactive map if you want to drill down to the specifics of where your hits stem from.

The Bad

The main purpose for the previous site on Squarespace was blogging. Case in point: there were two blogs hosted on it, one for my own random posts and one for the show notes that went along with each podcast episode. Easily the single biggest nail in the Squarespace coffin is that the service is in no way designed for blogging. That might seem contradictory considering I just said that I hosted not one but two blogs on a single site there, but allow me to elaborate.

Adding a blog to Squarespace just means that when you go to edit the site, you have two different streams of posts you can choose from. You pick the blog, say you want to make a new post, and start to edit the content. This is where things immediately get murky. The editor for authoring content in Squarespace is pretty bad. It tries to break the content of each post down into blocks the way the current WordPress editor does, but it does so in an extremely clunky, unintuitive way. Simple things like controlling the appearance of uploaded media are often not possible, meaning that I had to resize every photo prior to uploading since I knew there would be no good options for scaling it after the fact. Likewise, trying to embed any sort of content was frequently gated behind a paywall; I couldn’t embed the player for each episode into the post with the show notes because they wanted me to pay more for that privilege. I couldn’t embed tweets but had to just link to them. That may not have been a big deal were it not for the fact that the Squarespace plan I was on was already more than double what I’m paying for hosting now.

As another blow to blogging, Squarespace doesn’t provide any real outlet for managing the posts on the site. While in the management interface, for example, going to one of the two blogs I had added would simply show me a list of posts on the left in chronological order. If the post I needed to modify was at the very bottom of the list because it was old, then I had to just keep scrolling until I got to it, letting the clusters of posts incrementally load the further I scrolled. There weren’t any options to just search for the post I wanted. This may have been a limitation of the theme I selected, but I was equally disappointed that I couldn’t search the blog itself for specific content, either. I frequently author blog posts that I know will help me in the future; they live on a blog as opposed to just in my personal notes because they might also be beneficial to someone else. If I can’t easily get back to that content, though, without mindlessly clicking a “Next” button, that’s a problem. This WordPress blog offers both a search box and sane pagination; neither was an option for my Squarespace deployment. I’d frequently have to search the web for what I wanted to find with the URL of my own site to reach it. That’s a problem.

The last thing I’ll mention is portability. Admittedly, WordPress might be just as bad at this, but it’s extremely difficult to take content from Squarespace and move it somewhere else. This was the big reason why I didn’t want to continue creating content on Squarespace even though I’ve already paid for the hosting there; I knew that I didn’t want to stick with Squarespace once the current hosting expired, but anything new I posted there would just be more work to move to somewhere else later on. Squarespace offers you the ability to export your content, but it’s to an XML file. While this will get the written content for each post and the metadata about it, it will not include any media. I managed to throw together a bit of a workaround that’ll most likely be the topic of my next post, but it was still a large amount of work to move everything from one host to another.

An obvious question at this point would be:

But aren’t you just in the same position regarding portability after moving to WordPress?

The answer is… maybe. As long as I don’t become disenchanted with the platform as a whole, there are many different WordPress hosting platforms out there. If I want to move from one to another, I can easily export my site or take a backup of it and move the content somewhere else. I had initially tried moving a lot of the content from Squarespace to a Hugo site I already ran, but I very quickly ran into many of the same issues I described with Squarespace regarding management and discoverability; while being lightweight is nice, sometimes having a CMS is beneficial.

Wrap-Up

Despite the vibe you may get, I don’t dislike Squarespace at all. I feel like their business is really tailored to users who want a professional, mostly static website but who don’t have the skills to create that themselves. For a hobbyist like myself with a focus on blogging, the premium you pay for Squarespace gets you essentially nothing. Any WordPress instance is going to be a better blogging platform, and one that is significantly cheaper at that. Similarly, if you need to have firm divisions in your site (e.g. a blog for the sake of shitposting and a blog for podcast show notes), you can’t easily do that within WordPress. While you can create multiple pages, such as the About page here, you can’t set up an entirely separate blog.

At least for the moment, what I did with Squarespace for both a blog and podcast repository wouldn’t be possible with WordPress. For a standalone blog, though, the experience is significantly better on WordPress. It’s important to understand what the goal of your site is and what you need out of your platform. When that goal changes, moving platforms might be the best move. Hopefully my next post on how I migrated my images between Squarespace and WordPress can help with that.