RememBear Password Manager

A few weeks ago I received an email offering to let me try the premium variant of the RememBear password manager for a year for free. I assume that I received this since I currently have an active TunnelBear subscription that I use for my VPN. I didn’t bother looking into things too closely, but my understanding is that you can normally use RememBear for free, but syncing your passwords between multiple devices requires a premium subscription… meaning that in 2020 when people have a phone, a tablet, and at least one computer, a premium subscription is basically a requirement. I figured that I may as well give it a shot in order to see if it could possibly tide me over until Dropbox Passwords is a bit more mature; a year of free service seems like a good bit of time for the Dropbox team to improve upon their product.

The RememBear apps for macOS, iOS, and iPadOS are all very slick. They feature the same cute, cartoon bear as TunnelBear. If I type in my master password on macOS, the bear will even move its head to follow along as I type. That being said, typing in my master password happens infrequently since the apps on all 3 platforms work really well with Touch ID and Face ID.

The apps on all 3 platforms also do well with auto-filling passwords for me, including with the browser extension for Safari on macOS. Likewise, the iPadOS app is, mercifully, a proper iPad application rather than just a scaled-up iOS app. I did run into a few random bugs on macOS where the application would be blank after unlocking it, showing as if I didn't have any logins stored. I saw another issue where my logins were all listed, but using the search feature wouldn't return any results even if there were matches. In both cases, closing out of the application and re-opening it fixed the problem, and I only encountered each bug once.

Adding new devices or recovering your account is streamlined with a QR code mechanism that leverages another device which already has RememBear configured on it. This makes the setup quick and easy, though it remains unclear what recourse there is if the master password itself is lost and no other devices are configured. They don’t give you anything like the secret word list for Dropbox Passwords or the secret key from 1Password.

One nice benefit RememBear has over the still-immature Dropbox Passwords is that it offers an option for creating and syncing secure notes. I frequently use these for things like saving WiFi pre-shared keys. Unfortunately, I've had to start using them for saving API keys, too, because RememBear doesn't offer the option to add multiple credential fields for each login. I can also store keys, much as I did with Dropbox Passwords, in the provided notes section for each login, though that makes copying and pasting them a bit more annoying than in something like 1Password or Bitwarden, which simply let you add multiple credential fields.

The biggest issue with Dropbox Passwords, though, is unfortunately shared with RememBear: there is no web-only option. Installing the browser extension for Firefox, for example, will not work on my Manjaro Linux laptops because the desktop client is still expected and there is no Linux application. This was really surprising to me since one of the reasons I initially started using TunnelBear as my VPN service was due to the fact that they offered a standalone browser extension that I could use back when I had Chromebooks which couldn’t install any type of full-fledged VPN client. Given that RememBear has been around for at least a few years and is by no means a new product, I did a little bit of digging to see if there were any plans to support Linux or Chrome OS (while I no longer use Chromebooks because Google is pretty gross, Chrome OS support would indicate a standalone browser extension.) The latest I could find was a comment from May of 2019 on the Chrome extension which simply confirmed Linux and Chrome OS weren’t supported.

With its slick, cute, and bear-laden UI, RememBear is probably one of the nicest-looking and most user-friendly password managers around. For the overwhelming majority of people, it also likely ticks all of the boxes they would care about as far as features are concerned. Any Linux users out there, though, will be disappointed with the complete inability to use it with their operating system of choice. Here’s to hoping for web support or a standalone browser extension at some point in the future, but for the time being Linux aficionados are better off sticking with something like 1Password or Bitwarden.

Keychron K2 V2

I recently ran into a problem with my keyboard at home, a Logitech K350, AKA a Logitech Wave; the O key stopped reliably working. Sometimes it would work fine, sometimes it would work only when pressed with an excessive amount of force no touch-typist could reliably muster (at which point the key would clank down awkwardly), and sometimes it just wouldn't work at all. Unsurprisingly, the letter O is one that I use with a good bit of regularity, so this presented a problem. I decided to go ahead and order a new keyboard and started sizing up my options. I didn't do much research, though, before Craft Brew Geek recommended the Keychron K2 V2. He hadn't used one but had seen positive feedback on it from some highly respected people (MKBHD, anyone?), and I think he wanted someone he knew to try it out before he decided if he should get one. I was happy to oblige after doing a little research of my own and seeing almost exclusively positive feedback.

The first thing worth noting is that this is Version 2. That matters because you can find the first version floating around on the cheap, and you don't want to mix them up; the initial offering of the K2 was fraught with problems that V2 tackles nicely. V2 comes in a few different variants based on the lighting and body style that you want, along with how much you want to pay. I went for the variant with RGB lighting (cool but not that important) and an aluminum frame (extremely important.) I had read online that the keyboard sans frame was detrimentally light, without enough weight to properly hold it down on the desk. I did pay more for the frame, but with the build and the features, this is still relatively cheap in the world of mechanical keyboards.

The K2 is a TKL (tenkeyless) keyboard, meaning that it doesn’t have a number pad. Unlike many other TKL models, though, it has 84 keys because it still features dedicated arrow keys and dedicated Home, End, Page Up, and Page Down keys. Memorizing another function layer for keyboards that don’t feature dedicated keys like this isn’t the end of the world, but I view having them as a significant quality of life bonus.

The bigger quality of life bonus, though, falls to the keycaps. The K2 features macOS-based keycaps. It comes with replacement keycaps for Windows in case you use that operating system and need to swap Option for Windows and Command for Alt. This is pretty common among mechanical keyboards, though it's worth mentioning that the K2 has a dedicated physical toggle to control if it's operating in macOS or Windows mode; most keyboards use a software function for that. The bigger difference, though, is in the function row. Along with having the standard F1 – F12 markings, the keycaps also show the corresponding macOS function. As a macOS user, I can't stress enough how helpful it is to have the keycaps appropriately marked so that I know which key is going to turn up my volume and which one will launch Exposé, for example. As another quality of life bonus, the K2 also features a dedicated screenshot key that's the equivalent of hitting Command + Shift + 4 on macOS. That's extremely cool in my opinion.

On top of all of this, Keychron also provides orange keycaps that you can opt to use on Escape and on the key which controls the backlighting. I thought they looked snazzy and decided to use them, though you can swap to more standard keys if you wish. I was also pleasantly surprised that a wire keycap puller was included rather than one of the cheaper plastic ones that are all but guaranteed to scratch up your keycaps. While there are plenty of dedicated keys, there's even more functionality on the K2 tucked behind key combinations with the Function key. The included instruction booklet clearly highlights all of these, and it was a matter of minutes to get everything configured the way I wanted it.

The lighting in my photo isn’t great, but rest assured that you can eschew RGB in favor of locking the lights in to a nice shade of pink.

I'm the type of person who rarely uses any type of adjustable riser on my keyboards, and this was one of the biggest flaws with the original take on the K2; it was almost completely flat. V2 of the K2 has a nice, gentle slope that's pretty much exactly where I want it to be. For those who want more of an angle, there are 2 sets of adjustable feet on the bottom. The official website only calls out a difference of 9 degrees, which has to be for the more drastic of the two. I'd guess the other is 4 degrees, but I wasn't about to dig up a protractor to find out. The default slope makes for a very nice typing experience for me, so I haven't even worried about the feet beyond confirming that they exist.

Typing on the K2 is an overall pleasant experience with one minor issue I’ll touch on later. Keychron offers three choices of switch: Gateron Blues, Browns, and Reds. This was my only real hesitance in deciding to buy the keyboard since I’ve always been a stickler for Cherry switches. I opted for the Gateron Browns even though I would prefer Blues simply because I’m using this keyboard for work, and I don’t need my typing to be any louder than it already is while I’m thundering out 120 WPM on calls. While the Gateron Browns don’t feel quite as nice as Cherry Browns in my opinion, they’re really close. Both Cherry and Gateron Browns even actuate at the same 55 grams. I feel good that my concern over the switches wasn’t warranted.

The K2 can connect to devices either via Bluetooth or USB-C. I’ll likely never use it via Bluetooth if my Plum Nano serves as any indication, so USB-C will be my go-to method. This is where one of the two issues I have with the K2 V2 comes into play. The USB-C port is on the left side of the keyboard rather than the back, which you can see in the top image. The provided USB-C cable accounts for this by connecting at a right-angle so that it can immediately be directed behind the board, and this seems to work fine… as long as I have that cable. If I’m ever forced to use a different, more standard cable, that’s going to make for a janky setup.

The other issue I have with the K2 V2 I hesitate to even really call an “issue”; it’s more something I need to adjust to a little bit as a touch-typist. The right Shift key is a little bit shorter than it would be on a standard ANSI QWERTY keyboard. This is done to allow for the dedicated arrow keys. I would say that 98% of the time (yes, I’m completely making up this number), it’s not an issue. The other 2% of the time, I accidentally extend my pinky just a bit too far and hit the up arrow or awkwardly catch the edge of the Shift key. It’s not a huge ordeal, and as someone who has periodically dealt with using ISO keyboards before I know I’ll adjust quickly; it feels worth mentioning, though. It’s also good practice for me since my Starlabs Lite Mark III that’s currently sitting in customs in the UK has a much shorter right Shift key for the exact same reason.

On the whole, I’ve been extremely pleased with the K2 V2 so far. I’ve used it for a little over a week, generally spending 9 – 14 hours a day on it between work, training, and personal projects. It’s a treat to type on, the functionality is nice without being overkill, and I think the size really hits the sweet spot between not taking over my desk and not forcing me to re-adjust it every 5 lines of code because it’s constantly moving; the aluminum frame that I opted for undoubtedly helps it in that regard. I really do think that for $90 USD you could do significantly worse, and the K2 V2 has features and a build quality I’d be expecting from a mechanical keyboard in the $120 – $150 USD price range. I think this is a great keyboard for anyone, but especially if you’re a macOS user in the market for a mechanical keyboard, the Keychron K2 V2 would be a smart place to start.

Even More Storage: Yearly Bonuses For Paid ProtonMail Accounts

It’s no secret around this blog or for anyone who listened to the Unusually Pink Podcast that I’m a fan of ProtonMail. While it’s unfortunate that privacy in the world of computing often comes with an associated monetary cost, the simple fact of the matter is that if you aren’t paying for your email account then chances are the provider is making money off of your data that happens to be stored in it. Google, for example, is happy to give you free email so that they can scrape your data out of it and make a comprehensive profile about your life for advertising purposes. ProtonMail takes a firm stance against this practice, and they actually do permit users to create completely free email accounts that are never scraped or monitored; their setup actually ensures that they couldn’t access the plaintext content of your email even if they wanted to. Doing this is only possible, though, because some users opt to pay for additional features and thus subsidize the free accounts. I happen to be one of those people who has been paying for a few years now both to help support the ProtonMail mission and to get access to a custom domain in my account.

ProtonMail occasionally likes to give special perks to their paid customers as a token of appreciation; I’ve written before about how they’ve given away a bonus 5 GB of storage for paid accounts. They’ve done the same thing another time since that post, giving me 10 GB of bonus storage on top of the 5 GB that comes with my paid plan. While I said at the time of my original post that I didn’t really need the storage at the moment but was happy to have the extra bits just in case, ProtonMail has subsequently stated that the storage can be shared with the upcoming ProtonDrive secure cloud storage offering once that’s available.

As another token of gratitude, ProtonMail has announced regular storage increases for paid accounts; each year that a paid account remains active, it receives 1 GB of bonus storage on the anniversary. Even better, this is retroactive. Since I’ve had a paid account for 3 years, when the initiative was implemented I immediately received 3 GB of bonus storage. I’ll get a 4th GB on my yearly anniversary coming up later this month:

The general principle is straightforward:

– When you sign up for any paid Proton plan, you are automatically eligible for Storage Bonuses.

– On the one year anniversary of your paid subscription, you will receive 1 GB of additional storage for free that can be used with your ProtonMail inbox. (In the future, your storage will be shared between your ProtonMail inbox and your ProtonDrive vault.)

– This will happen every year, and your Storage Bonuses will accumulate as long as you have a paid plan with Proton without interruption.

While the bonus is currently 1 GB per year, they say it will increase in the future, though it's unclear whether the increase will also be retroactive or will only apply to new bonus accumulations going forward:

The current Storage Bonus is 1 GB per year, but this will be increased in the future.

The full details are available, along with a FAQ, from their support article. The tl;dr is basically that you get bonus storage every year on the anniversary of your paid account as long as you remain with a paid subscription on any Proton product. This means that if you pay for ProtonVPN but not ProtonMail, you’re still eligible for the bonus storage. That eligibility and current storage accumulation continues to apply even if you stop paying for ProtonVPN and start paying for ProtonMail (or ProtonDrive when it comes out) as long as the two periods overlap. Having a lapse in paid subscription will reset the accumulation of bonus storage.

All told, I think it's another nice perk from ProtonMail, and since the storage will eventually be shared with ProtonDrive, a product I'm greatly looking forward to, the more of it I can get my hands on the better.

Books: Hands On Hacking

While I’ve been stuck at home as the global coronavirus pandemic rages on (currently on day 241 of quarantine, for those who listen to the Same Shade Of Difference), I’ve been trying to make the most of my time in captivity with lots of reading, training, and personal projects to learn as much new stuff as I can. One of the items that came on to my radar a few months ago was a new infosec book titled Hands On Hacking from Wiley. Written in part by Hacker Fantastic, who I’ve followed on Twitter for quite a few years across my various accounts, I figured it would be a good refresher for some of the hacking concepts I’ve used before and a primer for newer tooling that I’m not as familiar with.

As you can see from the book’s cover, the idea is to teach “purple teaming”, which is the idea of doing away with the silos for the “red team” that tries to breach systems and the “blue team” that tries to defend them. The book covers the full gamut of hacking, starting with open source information gathering to get as much data as you can about your target before actively engaging with any of their systems all the way through compromising web applications and moving laterally through internal systems.

All throughout, the book uses purple teaming as a focus; it very clearly outlines that taking part in any of the activities covered without the express consent of the owners of the system can carry severe legal penalties. The goal is to assist you with either a career as a penetration tester or to give you the tools and knowledge to be able to pen test and secure your own systems that you manage. You will not read the book and immediately find yourself living the life of a Mr. Robot character.

The book, in my opinion, is very well written. While I was familiar with most of the concepts covered, I think it was written in a way that makes the material approachable even for readers without much prior knowledge in the world of infosec. That being said, while there is a good bit of hand-holding in the introduction to Linux, I think there are some basic, assumed competencies in the world of computing. I don’t think that’s a fault; you really have to draw the line somewhere, and I think the authors did a fantastic job of making everything as approachable as possible.

The book comes with a complete lab environment of virtual machines pre-configured to be exploitable in ways that demonstrate the concepts covered in each chapter, giving readers the option to either read the book purely for information or to work through the labs and practice executing the material discussed. In my mind it's essentially a self-guided, DIY version of something like the excellent Foundstone Ultimate Hacking class that I was fortunate enough to take a few years ago.

If you're already a skilled hacker, is the book going to enlighten you to new, next-level exploits? Definitely not. But if you're a systems administrator who is responsible for managing the servers at your company, a SaaS admin responsible for identities, or a developer responsible for creating applications exposed to the Internet at large, it'll give you a very solid baseline for making sure that your own systems aren't vulnerable to the most egregious of issues. I personally found the open source intelligence gathering chapter very useful; it covered techniques and services for determining the amount of information about your company and specific details regarding the employees that's available to literally anyone with an interest in finding out more. It's allowed me to work through setting up some scripts to automatically check on this and notify me when perhaps more information is leaking out than it should due to things like 3rd party breaches where users may have signed up with a company email address.
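
As a rough illustration of the sort of check I mean (this is my own sketch, not something from the book, and it assumes you have a Have I Been Pwned API key exported as HIBP_API_KEY; the addresses are placeholders):

#!/usr/bin/env bash
# Hypothetical sketch: flag company addresses that show up in known breaches.
for address in alice@example.com bob@example.com; do
  # The v3 breachedaccount endpoint returns nothing (HTTP 404) when the address is clean.
  breaches=$(curl -sf -H "hibp-api-key: ${HIBP_API_KEY}" \
    "https://haveibeenpwned.com/api/v3/breachedaccount/${address}")
  if [ -n "${breaches}" ]; then
    echo "Possible breach exposure for ${address}: ${breaches}"
  fi
  sleep 2  # space out requests to stay under the API's rate limit
done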

Similarly, I think the book is also a good read for leadership-level people who may not need to know the technical details of how hacks are accomplished but need to be mindful of what’s possible and what their employees should be looking for when developing and administering systems. These readers likely don’t need to go through things like achieving the exploits themselves in the lab (though obviously it’s cool if they want to), but the book can serve as a nice reference for what the company’s employees should be looking for when they decide to roll out a new service or application.

Trying Out NextDNS

I recently decided to take a little bit of time to set up NextDNS. While on the surface it's similar to the myriad other DNS resolvers out there you can opt to use instead of the likely horrible DNS servers provided by your ISP, NextDNS is essentially a cloud-based Pi-hole. You can use some of the built-in blacklists in the product to block things like advertising, trackers, malware domains, and more. There are a few key benefits to blocking things at the DNS level rather than relying on something in your web browser to do the blocking. First off, most mobile browsers don't offer the same robust extension ecosystems we're all used to with full computers; you might be able to toggle some settings to block trackers, for example, but advertising is often a different beast altogether (though the really bad advertisements typically also have egregious trackers, meaning that blocking the trackers will block the ads.) Additionally, there are some nefarious browser extensions posing as ad-blockers, and there have been instances of legitimate extensions being sold and turned nefarious. Google's extremely sleazy war on ad-blocking also makes DNS-based filtering attractive, though I'd still recommend people avoid Chromium-based browsers if at all possible. The other nice part is that you can easily configure DNS settings on a router, meaning the same degree of protection can apply to IoT devices with no accessible network settings of their own. This similarly applies to applications on your computer that connect to the network outside of a browser, as we'll see later.
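
To make the idea concrete, here's roughly what that looks like from a terminal. The resolver address is one of the NextDNS anycast IPs as I understand them, so treat both it and the exact response format as assumptions:

# Ask NextDNS for a tracking domain, then ask a plain resolver for comparison.
dig +short telemetry.dropbox.com @45.90.28.0
dig +short telemetry.dropbox.com @1.1.1.1
# With a linked profile and a blocklist covering the name, the first query
# typically comes back empty or as 0.0.0.0, while the second returns real records.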

NextDNS handles this by allowing you to specify your public IP address in their portal, thus linking traffic from your home with your NextDNS profile and all of the configurations that are in place there. The only caveat to it is needing to configure DDNS in some way or simply remembering to go to the portal in order to update the IP address should it ever change. Additionally, they offer apps for basically every major platform for configuring your devices and pointing them to the appropriate account. While you can just update the DNS settings in your host operating system rather than using an app, the app is still required for linking your device to your profile so that it will leverage the appropriate block lists. What I found to be really nice was that the iOS and iPad apps are able to leverage the new iOS 14 DNS profiles, meaning that it doesn’t need to create a shell VPN tunnel just for your DNS requests; that’s a huge win.
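
If memory serves, the Linked IP section of the portal also hands you a per-profile URL that you can hit on a schedule so the linked IP keeps itself current without a full DDNS setup; a cron entry along these lines would do it (the URL below is a made-up placeholder, so copy the real one from your own portal):

# Re-link the current public IP every 15 minutes (placeholder URL).
*/15 * * * * curl -fsS "https://link-ip.nextdns.io/abc123/0123456789abcdef" > /dev/null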

To really test it out, for about 2 weeks I disabled the ad-blockers in my common browsers and tried to let NextDNS handle the brunt of my filtering needs. Getting it configured everywhere was fairly simple. The web portal will tell you what your current public IP is and offers a simple button to update your account to leverage that IP address. This made it simple to get basically everything in my home network using it after I modified the DNS servers in my router. I still went ahead and configured the settings individually on my devices, too, for the rare instance during a global pandemic when I’m on a network other than my home network. The iOS and iPad apps just need you to tap a button to add the new DNS profile to the device. Likewise, the macOS app simply adds a new icon to your tray at the top-right and offers a toggle for turning on your NextDNS settings. Unsurprisingly, there’s no Manjaro Linux app, though the service offers a bevy of examples for configuring your DNS settings on Linux; you’re just stuck in the position of not being able to link the device to your NextDNS account if you happen to leave the home network. The only real problem I ran into was that I had configured Firefox to use DNS over HTTPS and forgotten about it; once I realized I needed an additional change beyond the OS DNS settings, everything was fine.
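
One handy way to sanity-check that a given machine is actually resolving through NextDNS (and through the profile you expect) is their test endpoint; assuming test.nextdns.io still works the way it did for me, it's a one-liner:

# Returns a small JSON blob indicating whether the query went through NextDNS,
# which protocol was used, and which profile answered.
curl -s https://test.nextdns.io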

Operating this way, for the most part browsing the web was business as usual. Not quite as many advertisements were being blocked as I would have expected with a browser extension enabled (more on that later), but on the whole the experience was still positive. What really surprised me, though, was the degree to which IoT devices are just an absolute dumpster fire; checking the metrics NextDNS generates showed that anywhere from 10 – 20% of my total DNS requests were being blocked, but nearly all of the top 10 blocked domains were based on under-the-hood queries being made by my devices phoning home rather than from actual advertising or tracking on web pages. All of this is nicely showcased with graphs in the NextDNS portal:

While I expected the combined privacy invasion of my two Amazon Echo devices to be the worst offender, my single Roku device actually took the top spot by a significant margin. The most notable of my top 10 blocked domains were:

  • scribe.logs.roku.com – 17,226
  • device-metrics-us-2.amazon.com – 1,327
  • telemetry.dropbox.com – 1,121
  • giga.logs.roku.com – 1,095
  • device-metrics-us.amazon.com – 794
  • mads.amazon-adsystem.com – 717

The only non-device domain making the top 10 came from Dropbox's telemetry, the frequency of which was a bit disturbing. Roku really caught me off guard, though, with a single device making over 18,000 queries in just a couple of weeks. While spread across more domains, the two Echo devices made under 3,000 (which is still really bad!)

As a bit of an aside, I was curious if the devices would simply give up on whatever they were phoning home about and drop the information or if they were storing it locally to upload in bulk at the first opportunity. I ended up disabling NextDNS on my router around 9 AM and checked on the traffic of both my Echo and my Roku, neither of which were being actively used at the time. The Roku showed zero data use since the time I had been streaming with it the night before:

The Echo, on the other hand, immediately spiked with network usage to transmit who-knows-what. That’s nice and terrifying:

The other insight I found particularly interesting was just how deeply some of the biggest players on the web have their claws embedded across the Internet. For example, I don’t think any big Internet company is more evil than Facebook (though Google is trying hard), so I created a custom blocklist preventing facebook.com from resolving. This prevents not just Facebook from loading but also some of Facebook’s other properties. For example, I hate and don’t use Instagram, but some friends occasionally send me posts from there. Instagram straight up won’t work if the main Facebook domain can’t be resolved. How gross is that?

NextDNS's free tier gives you 300,000 DNS queries per month with your policies and blacklists applied. After 300k queries, the service acts like a normal DNS resolver; your devices won't suddenly stop having functioning DNS, but they won't be benefiting from any of the blocks you might be expecting. For just a couple of dollars a month, you can get access to unlimited DNS queries, and subscribing for a year gives you a month for free. I found staying on top of my query usage to be a bit confusing, though. As is shown in the screenshot above, my total number of queries for the month is provided, along with how many were blocked. I was averaging around 30,000 queries per day on this graph. I realized after the first week, though, that going into my account settings in the NextDNS portal gave me a different metric for how many of my free queries had been used, and the number there was significantly lower. I ended my testing after using 280,000 queries according to the analytics graph, while my account settings showed that I had used just shy of 200,000 queries. I have no idea how those two numbers can differ by 80,000 queries, especially when the two weeks of testing were done in the same month.

While I liked NextDNS, it wasn't perfect. I had mentioned previously that the blocking wasn't quite as good as what I'd expect from just relying on an extension when browsing the web. The main reason I could see for this is that some companies host trackers and advertising on the same domain they use for other, more legitimate purposes. For example, checking the blocked tracking metrics within Safari 14 showed that bing.com was still high in the running. Naturally a service like NextDNS can't just block the entire Bing domain without breaking plenty of services people actually might want to use. If "close enough" satisfies your needs for blocking ads then I could see NextDNS being a good solution. If you're like me and want to block everything, though, then you're still going to need extensions in your browser, and that makes the value proposition of paying for something like NextDNS for unlimited queries a bit less tenable.

What ultimately made the decision for me, though, was that I ended up running into a handful of issues with the app I used the most: the one on macOS. I'm willing to own that the issue might stem from my setup or my device, as my MacBook has 4 different VPN clients on it that I use (and frequently switch between) for work, the network stack on the device crashes with semi-regular frequency, and my home network is significantly more convoluted than most. What I saw, though, was that my DNS queries would periodically just fail. Trying to dig from the CLI would give me a timeout error, like the NextDNS servers weren't responding. Pings to known IP addresses worked fine. If I turned off the NextDNS macOS application, then everything went back to normal. Toggling it back on would result in broken queries again. Fixing this was a mixed bag that ranged from completely closing and re-launching the app to disabling my wireless network in macOS all the way to rebooting my laptop. Between that problem and the fact that it wasn't a standalone blocking solution for me, I opted not to dive into paying for the service, even though it has a lot of promise and does keep my Roku from being the chattiest device possible.

SSIDs Everywhere

As someone who has been an apartment dweller for a relatively long time, I can say there are some extremely solid perks to it. It's nice to never have to worry about things like maintenance; if something goes wrong, I open a service ticket and someone shows up to fix the problem. When my furnace wouldn't start on a cold day, for example, I wasn't scrambling to figure out who to call. There are some downsides as well, though. For one, many people are frustrated by the fact that they can't customize their home to the degree they'd like with respect to things like the color of the walls. Anyone who knows me knows that I couldn't possibly care less about that. What does give me grief, though, is how thoroughly the wireless spectrum around me is polluted by my neighbors.

I had issues with this in my last apartment, which was roughly the same size as my current apartment but with a longer, narrower layout than my current home. When I moved my home office to the bedroom, which was at the opposite side of the apartment from where my router was, I saw a noticeable performance decrease in my home network; this was a problem when I was working from home and an even bigger problem when I was trying to reach Platinum in Overwatch (spoiler alert, I didn’t manage to do it.) At the time, I replaced my cheap home router with a mesh WiFi setup so that I could utterly drown out my neighbors with sheer WiFi dominance. I ended up buying a pack of 3 access points because it was only slightly more expensive than buying 2 access points individually. That was more than enough to blanket my 900 square foot apartment, and I didn’t really think too much about my setup after that.

Fast forward to my current apartment, which is located in a much more populous area than where I previously lived. Pre-pandemic, I still didn't think too much about my home network setup. The only bandwidth-intensive activity I did was video streaming, and that was mainly done from streaming sticks connected to my TV that sat literally right on top of the router/access point connected to my modem; I never ran into issues with it. After the pandemic kicked into high gear in this area, though, I started living out of my home office and spending unholy amounts of time on web conferences. While they worked well enough most of the time, I'd periodically have spikes of extremely high latency that would cause me to sound like Megatron's cousin while on calls. This was annoying on work calls and infuriating when trying to record podcast episodes.

At first I assumed that the problem had to be upstream with my ISP being overloaded since suddenly everyone was staying home all the time. As the problem persisted, though, it made less and less sense to me. It would be reasonable if this behavior happened during the evenings when everyone is sitting around binge-watching their favorite shows because they can't go out. When my network was choking to death on a 9 AM call, though, I was left scratching my head. Surely not enough people could be doing video calls at that time, right? And while VOIP traffic is hit hard by latency, it isn't exactly the most bandwidth-intensive thing to be doing in 2020.

Thinking my mesh network was the problem, I even tried ripping it out and replacing it with a single router connected to my modem. While it at first seemed to have a bit more stability, I still ran into some of the same problems. Finally, I realized that I was seeing a lot of networks when I just looked at what was available from any of my client devices. I fired up WiFi Explorer and was presented with this nightmare.

If you’re thinking that looks disgusting, you’re correct. Every channel in both the 2.4 GHz and 5 GHz bands is completely packed with networks. I’ve been checking this periodically since realizing it could be the problem, and I regularly see anywhere between 50 and 70 different wireless networks from my apartment. Yikes.
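
If you'd rather not spring for WiFi Explorer, macOS also ships a bundled airport utility that can do a basic scan from the terminal; this is only a rough equivalent, and the path below is the one I'm aware of, so consider it an approximation:

# Scan for nearby networks and count them (the first line of output is a header).
AIRPORT=/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport
$AIRPORT -s | tail -n +2 | wc -l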

Admittedly, I'm part of the problem. I'm broadcasting a 5 GHz and a 2.4 GHz network from a router that I use exclusively for work. I'm also broadcasting a main and a guest network on my mesh setup, with matching SSIDs on both the 2.4 GHz and 5 GHz bands from each of the 3 access points; the mesh alone accounts for 12 broadcast SSIDs, or 14 once the work router is counted. Even so, I'm still competing with, at minimum, 40-ish other devices crowding the same spectrum.

After coming to the realization that I may have been blaming all of the wrong things, I adjusted the setup of my access points so that the main router/access point which is connected to my modem is in direct line of sight across my apartment from the mesh access point at my office desk (which moved from the desk proper to a table next to the desk.) Since doing that, knock on wood, things have at least seemed to be a bit more stable for me. Either I’ve been having a better experience on web conferences or no one bothers to complain to me about it when my audio suddenly sounds like garbage because they’re just used to that happening from my end.

All that being said, I do still have an issue where the network stack on my MacBook Pro will crash, leaving me with no network connectivity until I disable and then re-enable WiFi. I haven’t managed to find a fix for that particular problem yet, though I imagine having 4 different flavors of VPN client installed probably isn’t doing me any favors.

Ubuntu Linux GRUB Error After 20.04 Upgrade

While I’ve nuked my personal VPS, I still have a VPS that I use for work; it comes in handy for things like running cron jobs, maintaining persistent shells, and generally handling things where a Linux shell seems better than a macOS shell (I’m looking at you, remote PowerShell sessions connecting to Microsoft Exchange.) This week I decided to upgrade it from Ubuntu 18.04 to Ubuntu 20.04. I like to stick on the LTS (long term support) releases for my servers, but I do typically prefer to keep even the LTS releases upgraded rather than waiting for them to go end of life. I could have kept using Ubuntu 18.04 with maintenance updates until 2023 and security maintenance until 2028, but what’s the fun in that?

Upgrading a VPS is always a bit of a nerve-wracking situation just because I don’t have local access to the host in case something goes extremely awry. Ubuntu tries to help alleviate this by opening a second SSH daemon on a different port just in case the primary daemon crashes during the upgrade, but if the machine ends up in a non-bootable state I’m still more or less hosed. Fortunately for me, things almost went off without a hitch… almost.
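
For anyone who hasn't done an in-place upgrade on Ubuntu before, the standard path looks roughly like this (assuming update-manager-core is installed, which it normally is on a stock server image):

# Get the current release fully patched first, then kick off the release upgrade.
sudo apt update && sudo apt upgrade
sudo do-release-upgrade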

While the upgrade did complete, I received an error toward the end of the process that GRUB failed to upgrade successfully. This was mildly terrifying since GRUB is the bootloader; if it’s not working properly the system won’t boot, and I can’t access the host of the VPS to troubleshoot it. Luckily, GRUB continued to work in my case, and my system was able to reboot successfully after the 20.04 upgrade and beyond. GRUB just wasn’t getting upgraded. I quickly noticed that I also received an error from GRUB every time I ran sudo apt update && sudo apt upgrade to update my system. Again, the other packages would upgrade successfully, but GRUB would always complain:

dpkg: error processing package grub-pc (--configure):
 installed grub-pc package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
 grub-pc
E: Sub-process /usr/bin/dpkg returned an error code (1)

After spending some time just ignoring the problem since it wasn’t exactly critical, I finally decided to do some digging. It turns out that problems like this have apparently plagued Ubuntu upgrades for a while, as I found a thread with the same problem all the way back with an upgrade to Ubuntu 14.04. The solution in that case was to simply “nuke and pave” by removing GRUB and then re-installing it. It’s once again a bit of a white-knuckle situation since if anything happens between removing and reinstalling GRUB the system will not have the ability to boot. The steps were very similar to the linked thread with some minor differences in the era of Ubuntu 20.04. The first step was still to purge GRUB:

sudo apt-get purge grub-pc grub-common

Running this command in 2020 removes /etc/grub.d/ already, so there’s no reason to manually run the removal. Instead, I next moved straight to re-installing GRUB:

sudo apt-get install grub-pc grub-common

The installation process kicks off an interactive wizard asking which disk(s) GRUB should be installed to. In my case, I only needed it on the main disk, which is /dev/sda. With that done, I updated GRUB and then rebooted:

sudo update-grub
sudo reboot now

This part kind of sucked as I was left running nmap against the SSH port for my VPS and hoping that GRUB was properly set up to allow the system to boot. After a nervous 15 seconds, though, the port started to respond again, and I could successfully SSH into the server. Re-checking for updates showed that everything was fine; the errors about GRUB having a needed upgrade that couldn’t be installed were gone. Admittedly, it was probably unnecessary to go through this upgrade without any specific reason for it, but the beauty of Ubuntu is its popularity. Rarely will there be an issue someone else hasn’t encountered, solved, and documented before, and this problem was no exception.
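
For the curious, the port check I mentioned was nothing fancier than something along these lines, with the hostname being a placeholder:

# Re-run an SSH port scan every 5 seconds until the box answers again.
watch -n 5 nmap -Pn -p 22 my-vps.example.com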

Updating PowerShellGet

It's not too often these days that I find myself needing to update the underpinnings of PowerShell. The majority of the PowerShell work I do now is based on PowerShell Core, the current version of which is 7.0.3 and which typically ships with newer versions of the supporting modules. PowerShell Core began with PowerShell 6 and is built on .NET Core, Microsoft's open source and cross-platform flavor of .NET. PowerShell version 5 and earlier, known as Windows PowerShell, is the original, Windows-specific variant of PowerShell. Microsoft doesn't really do any new development work on Windows PowerShell, instead opting to work on PowerShell Core and slowly make the full set of functionality available on all platforms.

This is awesome, but some systems very specifically target Windows PowerShell. That's easy to do since the interpreter even has a different name; Windows PowerShell is invoked as powershell.exe while PowerShell Core uses pwsh.exe so that the two versions can co-exist on the same Windows host. As a result, systems which proxy PowerShell commands or scripts on your behalf down to a target machine, and which haven't been updated to expect PowerShell Core, will generally target Windows PowerShell instead. This was the situation I found myself in last week.

I was attempting to load a script that I had written into a monitoring platform which will then send my script down to any number of "collector" machines where it executes and does the actual data aggregation. In this case, my script failed because it was calling the MSAL.PS module. MSAL is the Microsoft Authentication Library, and as the name indicates it facilitates authentication to Azure AD. It replaces the older Azure AD Authentication Library (ADAL), and is honestly much nicer to use. The module needs to be installed first, though, and while I had previously installed it on the target system under PowerShell Core, Windows PowerShell is a completely separate entity with a separate space for modules. I remoted to the system and ran the following to handle the installation from an administrative Windows PowerShell session:

Install-Module -Name MSAL.PS

Instead of joy, I got the following error message:

WARNING: The specified module 'MSAL.PS' with PowerShellGetFormatVersion '2.0' is not supported by the current version of PowerShellGet. Get the latest version of the PowerShellGet module to install this module, 'MSAL.PS'.

Ick… some things were a bit old in the Windows PowerShell installation. This was one of the rare instances where the error message didn’t tell me exactly how to fix the issue, though, so I did a few searches on this exact error. The trick is that updating PowerShellGet involves not one but two steps.

While PowerShellGet is a module specifically for discovering and installing PowerShell packages from the PowerShell Gallery, it leverages Microsoft's much more generic NuGet package manager under the hood. To get the latest version of PowerShellGet, I first had to make sure I was using the latest version of the NuGet provider by running:

Install-PackageProvider -Name NuGet -Force

Once that completed, then I was able to successfully update PowerShellGet via:

Install-Module -Name PowerShellGet -Force

Once the update completes, the current PowerShell session will still be running the old version. I just closed PowerShell, re-launched a new administrator instance, and then successfully installed the module via the same cmdlet from earlier:

Install-Module -Name MSAL.PS

Safari 14

Last week the 10.15.7 update to macOS Catalina came with a nice surprise: Safari 14. I was caught off guard by this since I had assumed we wouldn’t see Safari 14 until Big Sur released later this year. It was also a nice surprise for me since Safari has become my browser of choice, not just on my iPhone and iPad, but also on my MacBook Pro. The big reason for this is that I do my best to avoid any Chromium-based browser. Over the last few years we’ve seen diversity in browsers erode more and more as new browsers are built based on Chromium (e.g. Brave) while others abandon their own engines in favor of using Chromium (e.g. Opera and Edge.) I personally see this homogeneous browsing platform as being pretty bad for the Internet as a whole, as it opens up the possibility for web developers to focus all of their development on Chrome and ignore everything else. This leads to sites that only work on Chrome and that ignore web standards, just like we saw back in the day when much of the web was developed with only Internet Explorer 6 in mind. The difference now is the way the web has evolved into an entire platform. In 2004 the main issue was that sites developed just for IE 6 wouldn’t quite render properly on other browsers. In 2020, there are entire web apps that straight up won’t work on non-Chromium browsers. That’s something I can’t support.

The two major browsers moving forward with different engines are Firefox (with Gecko) and Safari (with WebKit.) I was previously using Firefox on my laptops, but I became extremely concerned recently when Mozilla had massive layoffs and switched their mission to focus on revenue. I certainly understand that Mozilla needs to make money in order to continue making Firefox, but when a group lays off their entire incident response team, I don’t exactly feel warm and fuzzy inside about using the product. I still use it on my Linux installations, but on macOS I switched to Safari.

The pleasant part about switching to Safari is that, for the most part, it’s been a very slick browser that I’ve enjoyed. While Safari 14 doesn’t do anything too Earth-shattering or even different from any other browsers, it does bring Apple’s offering up to parity with some of the major players. For example, Safari will now finally display favicons for websites on tabs. How they’ve made it this far without supporting them I’ll never understand, but it immediately makes a huge difference in quickly finding the tab I want… and I say this as a person who typically doesn’t have more than 10 tabs open at any given time. Tab addicts (you know who you are) will especially appreciate this when Safari starts stacking tabs on top of one another. As another update to tabs, Safari can now preview the content of a page when the mouse is hovered over a tab. This can also be useful for quickly finding the appropriate tab without actually having to switch to anything.

The big change, though, is how Safari communicates with the user about how it has helped protect against invasive tracking. This feature is extremely similar to the Protections Dashboard in Firefox. There's an icon to the left of the address bar that can be clicked at any time for a breakdown of trackers on the current page, including the specifics of which ones are being blocked:

For a bigger picture, I can also get an overall view of what’s been blocked in the past 30 days. I can see which sites were attempting to be the most invasive, and similar to the per-site rendering, each can be expanded to show which trackers they had embedded:

Similarly, I can click on the Trackers heading in order to see a list of which trackers appear the most frequently across the sites I’m visiting. I can expand those listings to see which specific sites are hosting that tracker:

I don’t think it should come as a surprise to anyone that Google, Bing, and Facebook appear the most frequently after just a short period of testing. It’s also interesting to see trackers from both Facebook and Snapchat when I don’t use either of those “services”. It really shows you how pervasive they are across the Internet.

While I can already hear the Apple-haters I know railing on the fact that Firefox already has this feature, in my opinion it’s nice to see Apple bringing their browser up to feature parity and offering a more transparent and secure browsing experience to people in a package that also does not leverage Chromium but which does have a support team behind it that’s more than a skeleton crew. Similarly, you still don’t see anything like this today in Chrome or Edge, likely because the companies behind them both appear relatively high up in the tracker list.

Connecting An Existing Firebase Hosting Project To A New Site

As a follow-up to my last post on GitHub Pages: I mentioned there that I moved one of my websites to Firebase. Firebase is a platform from Google for creating web and mobile applications. As a PaaS offering, there are a lot of different parts to the service, but hosting for web applications is naturally one of them. The free Spark plan offers 10 GB of storage, 360 MB of data transfer per day (which works out to roughly 10 GB of bandwidth per month), and support for custom domains and SSL. That's more than enough for me to host a simple, single-page website made up of only static HTML, CSS, and a single image. If anyone is curious, my site is using just 1.8 MB of storage and 15 MB of bandwidth. Note that bandwidth used divided by storage used will not be indicative of total hits due to caching, compression, etc.

I’ve used Firebase before, so I already had my Google account linked up to Firebase, and I even had a project still technically “live” there, though the domain had long since been shifted somewhere else. To be honest, it had been so long since I used Firebase that I almost forgot about it until I just happened to start receiving some well-timed emails from the service informing me that I needed to re-verify ownership of the domain I was using for my defunct project. I had no interest in re-verifying anything, but I did want to start hosting something new there.

The first step for hosting new content was to log in to the Firebase Console. Since I had already used the service, this gave me tiles of my existing projects; in my scenario, I just had a single project for my hosting. I clicked on that tile, and I was taken to a Project Overview screen. This gives me a high-level look at my project. To get to the hosting-specific functionality, though, I just had to click the Hosting option under the Develop menu to the left.

On the hosting dashboard, the first item listed contains all of the domains associated with the project. Clicking the 3 dots … next to a domain allowed me to delete it; I removed the two entries (apex domain and www) for the domain I used previously. Then I clicked the button for Add a custom domain. I followed the instructions on the screen to add a custom domain; I won’t document the steps here since they’re directly covered through the Firebase custom domain documentation.

With everything configured on the Firebase side, I next needed to crack into the Firebase CLI to link up my local project. I opted to install the standalone CLI, though you can still get it through npm if you prefer to roll that way. The first thing I had to do was link the CLI to my Firebase account. The process differs based on whether you're using the CLI from a system with a GUI or from a headless system you're accessing via SSH. I was using it from a headless system where I can't pop a browser to follow the normal authentication process; as a result I ran:

firebase login --no-localhost

If you're running this from a system with a GUI, I believe you just omit the --no-localhost parameter. In the headless setup, though, this gives a Firebase URL to navigate to on another system. I copied it out of my terminal and pasted it into the browser on my laptop. This gives me an authentication code for the CLI. I copied that from my browser, pasted it into my terminal, and that linked the CLI to my account in the Firebase platform.

Since I was just moving my content from my old VPS to Firebase, I didn’t have to worry about actually creating a website; I already had one that was backed up in a tarball. I simply had to expand my tarball on the same system where I was using the Firebase CLI. I did this by creating a new directory for the project, expanding my tarball that had all of my site’s content, and then copying that content to the project directory:

mkdir ~/laifu
tar -zxvf ~/temp/laifu.tar.gz -C ~/temp
cp -r ~/temp/html ~/laifu

Note: If you look closely at the commands above, you'll see that after I expand the tarball I'm recursively copying not the entire directory but the html folder from it. This is due to the fact that my tarball is of the entire /var/www/laifu.moe/ directory that Nginx was previously hosting on my VPS, and the html directory is what contains the content of the site. If your backup is storing the content directly (e.g. it's not in a subfolder) that's fine. However, you'll want to make a new folder inside of your project directory that you copy the content to because you do not want the content of the site to be in the root of the Firebase project's directory. For example, your mkdir command would look something like: mkdir ~/myproject/html

Once I had the files situated accordingly, I needed to tell Firebase that my directory was a Firebase project. Similar to using git, I do this by navigating to my project directory and running:

firebase init

This gets the ball rolling by asking some questions interactively through the CLI. One question will ask what service the project should be connected to; be sure to pick “Hosting.” After that there should be a prompt for which existing hosting project you’d like to use. The existing project should be listed as an option to be selected. If it’s not there, you can cancel out of the process and ensure everything worked correctly with your authentication by running the following and verifying that you see the project. If it’s missing, you may need to redo the authentication (e.g. maybe you were in the wrong Google account when pasting into your browser.)

firebase projects:list

After selecting the project, the CLI will ask what to use as the “public directory.” This is essentially asking what directory inside of the project directory contains the web content to be hosted. In my case I picked html since that’s what I named the folder.

Be wary of the next couple of prompts, which will trigger regardless of whether or not there’s something in your public directory matching them. When prompted about your 404.html page, opt not to overwrite it unless you really hate your existing one. When prompted about index.html, definitely don’t overwrite it or you’ll lose the first page of your site.
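
For reference, the net result of the init step is a firebase.json written to the root of the project directory. It should end up looking roughly like the following, with the "public" value matching whatever directory you picked; the ignore list shown here is just the default the CLI generates:

{
  "hosting": {
    "public": "html",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ]
  }
}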

Once that’s all done, you should get a message:

“Firebase initialization complete!”

This means that the directory has been initialized successfully as a Firebase project, but the local content still hasn’t been pushed to the cloud. So the last step is to run the following:

firebase deploy

This will give a “Deploy complete!” message along with a Firebase-specific URL in the format of:

https://project-name-GUID.web.app

Copying this URL and pasting it into a browser should allow you to verify that the content you expect is now being hosted, even if you're still waiting for DNS TTLs to expire before you can navigate to the custom domain. The Hosting dashboard of the Firebase console will also show the update in the "Release History" section.
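
While waiting on DNS, a quick way to watch for the cutover is to query the apex record and compare it against the A records the Firebase console told you to configure; swap in your own domain, naturally:

dig +short laifu.moe A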