Grep and Sed to Modify All Files in a Directory

I recently decided to rebrand one of my websites, complete with a different domain, title, and author (that’s me!). This is part of the beauty of using a static site generator like Hugo; I updated the domain in my configuration file, and everything else just magically changed when the site was recompiled. The caveat is that I wanted to change the author attribute in each post to a different name to match my new Mastodon profile. In this particular Hugo theme the author is specified in each post rather than in the central config.toml file so that I can have different authors in a single site. This meant I needed to mass modify all of my posts to change the author (at least for the ones that have it since I previously used a theme where the author wasn’t specified.) I knew that this should be possible with sed but couldn’t remember the exact syntax. Since Hugo stores all of the Markdown files for my posts in a single directory, though, I knew it shouldn’t be too complicated.

The syntax sed uses to do find-and-replace is very familiar to me since I use the same syntax in vim all the time. It’s great for those moments when I realize I’ve given a particular variable or function an extremely poor name and need to change every instance in a particular file. So what I really needed in this case was to recall how to use the CLI to change the value in multiple files; specifically I needed to change every file in a particular directory. It didn’t take many searches for me to confirm that this would require another utility to discover each file so that sed could then update them. Ultimately I got up and running with the following commands thanks to what I found on this site:

grep -lr -e "author = \"oldName\"" . | xargs sed -i '' -e "s/author = \"oldName\"/author = \"consoleaccess\"/g"

In this case, grep is discovering the files and then I’m using xargs to redirect the output from grep to the arguments for sed. I won’t rehash the specific parameters of each command, as the original site I linked to above does a terrific job of that. However, it’s worth mentioning that I was able to swap out just the instances of my old account name for the Markdown file’s author property by specifying the full line instead of just the account name; all I had to do was use the \ to escape things like the double-quotes that I needed to include. This way everything runs as efficiently as possible since grep is only returning the files that contain the old name in the author property, and then sed is only changing that single line of each file passed to it.
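For anyone who wants to rehearse the pipeline before pointing it at real posts, here’s a throwaway demo on fake files (GNU sed syntax shown; the macOS/BSD sed in the original command needs the extra '' after -i):

```shell
# Set up a couple of fake posts, only one of which has the author line.
mkdir -p /tmp/posts
printf 'title = "a"\nauthor = "oldName"\n' > /tmp/posts/one.md
printf 'title = "b"\n' > /tmp/posts/two.md

# grep -l lists only the matching files; xargs hands them to sed.
grep -lr -e 'author = "oldName"' /tmp/posts \
  | xargs sed -i -e 's/author = "oldName"/author = "consoleaccess"/g'

grep -r 'author' /tmp/posts
```

Only one.md gets rewritten, since grep never hands two.md to sed.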

Quarantine Time Script

Are you tired of sitting at home wondering how many days you’ve been choosing to quarantine like a responsible adult? Me too! The number of times I’ve been in conversations or working on posts for blogs or social media and thought, “Wait, how long have I been at home now?” followed by wasting time doing rough calendar math in my head was enough that I finally burned some time this weekend putting together a script for it.

In the interest of full disclosure, every time this has come up before I’ve done some very simple PowerShell to calculate it, at least once I was past the point where I could just think of it off the top of my head:

$now = Get-Date
$then = Get-Date -Date "2020/03/11"
($now - $then).Days

Clearly this is extremely simple! I’ve found myself needing a few shell scripts, though, so I figured it would be a good opportunity to write this in Bash instead for a little exposure. The biggest key was to just figure out how the heck to:

  1. Create a date at a specific time.
  2. Subtract the dates.

Date at a specific time

This was pretty easy after a quick DuckDuckGo search. The date utility includes a -d parameter that allows me to give it a string that it’ll use as the date, just like -Date does in PowerShell.
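For example, with GNU date:

```shell
# -d parses an arbitrary date string; +FORMAT controls the output.
date -d "2020/03/11" "+%Y-%m-%d"   # → 2020-03-11
```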

Subtract the dates

The second piece also ended up being much more straightforward than I expected. The date utility similarly includes a few codes I can use to specify how I’d like the date to be formatted, including %s which will give the date in seconds relative to the Unix epoch time. I could get both dates in seconds, subtract the current date from when I started quarantine, and then convert the seconds to days. For those keeping score at home, there are 86,400 seconds in a day.

As an added bonus, date returns the time in seconds just like everything else in the universe that isn’t Java-based. I’m looking at you, Groovy.

Getting a starting date

The easy method would’ve been to hard-code the date when I started quarantine and leave it at that. To make it a little more extensible, though, I instead opted to pass the date as a parameter. Given that people can pass anything as a parameter, though, I put together a regex to enforce the YYYY/MM/DD format on whatever is typed. That being said, I still included an additional check after parsing the starting date regardless since it would still be possible to specify a date that matches the regex but that isn’t real (e.g. 2020/02/31.)


Here’s the code in all of its janky glory.
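The logic boils down to just a few lines; here’s a sketch of the same idea as a shell function (assuming GNU date):

```shell
days_since() {
    local start="$1"
    # Enforce the YYYY/MM/DD format before handing anything to date.
    if ! [[ "$start" =~ ^[0-9]{4}/[0-9]{2}/[0-9]{2}$ ]]; then
        echo "usage: days_since YYYY/MM/DD" >&2
        return 1
    fi
    # date rejects dates that match the regex but aren't real (e.g. 2020/02/31).
    local startSec
    if ! startSec=$(date -d "$start" "+%s" 2>/dev/null); then
        echo "invalid date: $start" >&2
        return 1
    fi
    local nowSec
    nowSec=$(date "+%s")
    # 86,400 seconds in a day.
    echo $(( (nowSec - startSec) / 86400 ))
}

days_since "2020/03/11"
```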

It’s extremely simple, but it was a fun little learning experience to kill some time on a weekend when I was sitting at home… continuing to quarantine…

Hugo and the Implausibly Old Timestamp

Management of one of my blogs is handled through a variety of shell scripts. I have a script for executing hugo to rebuild the site and copy the output of the public directory to the folder where Nginx hosts it, for example. One of my scripts creates a tarball of the site and uses rsync to copy it to another server so that, if my VPS blows up, I can easily retrieve the backup.

After composing yesterday’s post, though, I ran into an error with the backup script. It basically runs the following:

tar -zvcf /home/fail/backups/failti_me.tar.gz blog

This started throwing a warning:

tar: blog/public: implausibly old time stamp 1754-08-30 16:53:05.128654848 -0550

It claimed the public directory where Hugo publishes the compiled site contents to was created in 1754… which is probably a bit older than seems plausible. My blog still published correctly; it was only tar being salty about the weird timestamp. I used stat to check the directory and confirmed that the timestamp on when it was modified was completely borked:

stat blog/public/

That told me:

 File: blog/public/
 Size: 4096        Blocks: 8          IO Block: 4096   directory
 Device: fc01h/64513d    Inode: 512063      Links: 73
 Access: (0755/drwxr-xr-x)  Uid: ( 1000/    john)   Gid: ( 1000/    john)
 Access: 2020-07-21 18:58:28.669384769 -0500
 Modify: 1754-08-30 16:53:05.128654848 -0550
 Change: 2020-07-21 18:58:28.189382080 -0500
 Birth: -

After some searches online I found a GitHub issue thread confirming that plenty of people other than me were seeing the same problem and that it was still present in the current version of Hugo. While I had initially been confused as to why I suddenly started seeing this now since I hadn’t upgraded Hugo or anything like that, I saw a few comments indicating that placing items in Hugo’s static directory seemed to trigger the issue; I had placed an image there from my last post, and it was the first addition to that directory in quite a while. With some additional searching and testing, I verified I could do the following to simply ignore the warning from tar:

tar -zvcf /home/fail/backups/failti_me.tar.gz blog --warning=no-timestamp

I didn’t like the idea of having such a wonky date on my filesystem; as a result, I started searching for how I could fix it by manually adjusting the “Modify” timestamp. touch seemed like a likely candidate, and after reviewing the man page I saw that there was a -t flag for it which would allow me to manually specify the timestamp. I basically just wanted to set it to the current time, so I added the following to my script which recompiles the site, placing it after the build and before rsyncing the contents of the public directory to the Nginx directory.

STUPID=$(date "+%y%m%d%H%M")
touch -t $STUPID /home/john/blog/public/

Sure enough, after running this the resulting tar command has no qualms. Likewise, re-running the stat command from above shows the current date and time as the modified time on the directory. I really hope this bug gets fixed soon since it seems to have been around for a hot minute, but at least I have a workaround for the time being.
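The fix can be reproduced on a scratch directory, too; note that touch with no -t argument at all defaults to the current time, which would shorten the two lines above to one:

```shell
# Backdate a scratch directory, then repair it the same way.
mkdir -p /tmp/fake_public
touch -t 200001010000 /tmp/fake_public   # pretend mtime: Jan 1 2000
stat -c '%y' /tmp/fake_public            # shows the bogus old mtime
touch /tmp/fake_public                   # no -t: defaults to "now"
stat -c '%y' /tmp/fake_public            # back to the present
```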

Making A Hash For The HTML Integrity Property

I caused a little bit of chaos for myself the other night when I updated one of my websites. The site is a static, single page that I use for work-related bookmarks; it’s basically a site I stood up to have something at a domain I wanted to buy. While making a couple of changes to the links, I decided to update the background image. That was a little bit gross to do since the site was originally compiled with Hugo, but after the initial setup I just modified the HTML directly. In this case I used Vim to search the minified CSS to see where the background image was specified and update it to the file I wanted.

After making that change, I refreshed the page only to be greeted by a page with no styling whatsoever.

Of course, originally I didn’t think to open the developer console. Instead, I just noticed that after making the change, none of my CSS was loading. Cool. Checking the source HTML file, I quickly noticed that the link tag in the header where I tell it which CSS file to use had an integrity property. The value of that property was a long string starting with sha256-.

It was already pretty apparent what was happening. The integrity property is giving a SHA256 hash of my CSS file. Since I changed that file, the current hash no longer matched the specified hash. As a result, the CSS file was ignored. I verified this by removing the integrity tag and refreshing the page. Sure enough, everything now loaded as expected. Not wanting to leave it at that, though, I dug a bit more into the property. I eventually found a great MDN article on it. The idea is that you specify this property for files you’re pulling from a CDN, such as you’d commonly use for Bootstrap. Since you don’t control the source for those files, you can opt to not use them if they change without your knowledge. Pretty cool! The developer of my particular Hugo theme decided to include this in their code. While removing it fixed the issue, I figured it would be a good learning experience to use it, even if it didn’t really make sense considering I was hosting the CSS locally on the same server as the website.

The key was to figure out how to create the SHA256 hash. Without reading things completely (my bad; always RTFM), I first just created a hash the way I typically would:

shasum -a 256 ./my.css

I used the output of this to complete my integrity specification in HTML, and was greeted with the same disgusting, CSS-less webpage as before. After a little more reading, I realized the hash needed to be a base64 encoding of the binary digest. I tried again with the following commands:

openssl dgst -sha256 -binary ./my.css | openssl base64

Note that if you remove the pipe and the -binary switch, you’ll see the output of the openssl command matches the output of the shasum command.
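That’s easy to check on a scratch file (the CSS content here is made up):

```shell
echo "body { background: pink; }" > /tmp/my.css

# Hex digest; the same value shasum -a 256 prints:
openssl dgst -sha256 /tmp/my.css

# Base64 of the raw digest bytes, which is what integrity="sha256-..." wants:
openssl dgst -sha256 -binary /tmp/my.css | openssl base64
```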

I tried replacing the old hash in my HTML file with the new one, refreshed the page yet again, and sure enough this time everything worked as expected. As I mentioned, it doesn’t do me a ton of good in this scenario since the HTML and CSS files are all hosted off of the same machine that I control, so if someone else is changing that CSS file I have way bigger problems. It was a good learning experience for the future, though.

Stay pink!

Groovy Programming HttpClient

As a follow-up to my post on creating a JWT in Groovy, I did manage to figure out how to make an HttpClient in Groovy as opposed to making raw connections. You can see this implemented in the GitHub repository I used for the previously linked post. It was honestly pretty easy to do, and there are tons of tutorials out there; the code is essentially the same regardless of whether you’re doing it in Java, Kotlin, or Groovy. Similar to the last time, it’ll be easier to look at all of the code in the GitHub repo, but I’ll call out the specific snippets I’m referencing throughout the post.


I do need a handful of imports to get up and running with this:

import groovy.json.JsonOutput
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.time.Duration

If these fail to import when you try to execute the code, you are likely not operating at Java 11 or above. More on why I know that later…


Create The Payload

In the last post I created a JWT. Now I need to take it and package it as JSON to send to the API endpoint so that I can get an access token. The simplest way to do this is to place it in a Map and then convert the Map to JSON.

Map payloadMap = [auth_token: jwt]
def payloadJson = JsonOutput.toJson(payloadMap)

Create The Client

The process of making requests in this method involves 3 steps. The first is to create the HttpClient object.

def httpClient = HttpClient.newBuilder()
    .connectTimeout(Duration.ofSeconds(5))
    .build()

You can set any number that fits with your use case for the timeout duration; 5 seconds was safe for me. There are a lot of other options for the client so be sure to check out the documentation if you need more.

Create The Request

The request object is where the specifics of the connection are identified, like the URL and the method.

def request = HttpRequest.newBuilder()
    .POST(HttpRequest.BodyPublishers.ofString(payloadJson))
    .uri(URI.create(tokenUrl))    // tokenUrl is assumed to hold the service's token endpoint
    .headers("Content-Type", "application/json; charset=utf-8", "Accept", "*/*")
    .build()

In the second line, I’m specifying POST since I need to send data. I’ll cover a GET example later on. In the same line, I need to specify the format of my payload as being JSON; just throwing a JSON string at the endpoint will not work. If you only have a single header you can use .header instead of .headers. What caught me off guard with the headers is that you specify multiple of them not as key-value pairs like a Map but as a simple list, with each value just following its corresponding key.

Send The Request

With the client and request both created, now it’s time to send the request.

def response = httpClient.send(request, HttpResponse.BodyHandlers.ofString())

This is also pretty straightforward; it’s just worth mentioning that I’m specifying I want the response to be parsed into a string.

Error Checking

I can check the statusCode() method to verify my request was successful and then take action upon the reply.

if( response.statusCode() == 200 ) {
    println "${response.body()}"
} else {
    println "ERROR: Status code: ${response.statusCode()}"
}

From here I can parse the results to a Map and do my normal thing.

GET Example

Using GET is the same as POST except for some details in the request. Obviously POST is replaced with GET, and then I have additional headers to specify.

def request = HttpRequest.newBuilder()
    .GET()
    .uri(URI.create(apiUrl))    // apiUrl is assumed to hold the endpoint being queried
    .headers("Content-Type", "application/json; charset=utf-8", "Accept", "*/*", "Authorization", "Bearer $token")
    .build()

Naturally the exact headers you need will depend on the API you’re calling. Other than that, though, the process is the exact same as it was with POST, including the way you execute the request.

Of course, after I implemented all of this I discovered that the back-end in my environment that was actually executing the code was not running Java 11 or better, so I couldn’t even use this setup. It’s good to know for the future, though!

Creating a JWT in the Groovy Programming Language

On Friday I found myself in a new situation. I was working with APIs for a new service my company has started using, but their setup was a bit more involved than what I’ve typically experienced. Accessing many services via their REST API requires you to follow a few steps to generate an application ID and an API key, you pass those with your request, and you’re done. The downside to this is that it can open up security vulnerabilities; if something happens to your API keys, for example, they can be used for nefarious purposes. Enter the JSON Web Token or JWT. You create a JWT by parsing together a bunch of information, like your application ID, the validity period, etc., sign it with a secret, and exchange it for an access token you can actually use to make your normal API calls.

A JWT has a few advantages. First, it expires. The validity period varies based on the service; the one I was working with was 30 minutes. So if your access token is compromised, it can only work for however much time is left on the token. They can also be configured to leverage a UUID as a one-time nonce to prevent replay attacks.


The new service I was working with used JWTs for authentication, so I had to figure out how to do that. The vendor provided sample code, but they were leveraging Python and using a library in Python to handle the JWT. That didn’t help me too much because leveraging Python in my current setup would be difficult, and calling a library means the sample code didn’t show me how to piece things together. I’ve mentioned before that the best language for the platform I’m dealing with at the moment is Groovy, but after some searches I found essentially no information on parsing together a JWT in Groovy. My other option for languages is PowerShell, so after some searches I found a hero on Reddit who posted the exact code to create a JWT. I modified the code a little bit to account for the properties the service I was leveraging required in the payload; I managed to create a JWT, exchange it for an access token, and make successful requests from the API. Awesome!

Yesterday, though, I found myself sitting at home during another quarantine weekend, and I decided to see if I could recreate that code in Groovy since the PowerShell code was extremely readable; I just had to figure out how to do the same thing in Groovy.

High Level

At a high level, the process of creating a JWT looks like this.

  1. Find the current time and the expiration time for the token, both in Unix time.
  2. Create a UUID to prevent replay attacks.
  3. Create maps for the header and payload.
  4. Convert those maps to JSON and then encode them as UTF-8 base64 strings.
  5. Combine the encoded header and payload. Then create a SHA256 signature for it based on the secret key of the application where I generated the application ID.
  6. Combine the header, payload, and signature together. Pass that to the service with HTTP POST, and receive back the authentication token.
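Before looking at the Groovy, the whole flow can be sketched at the shell with openssl; every value below (the secret, the claims) is made up for illustration, and the real payload fields depend on the service:

```shell
# base64url = base64 with the = padding stripped and +/ swapped for -_
b64url() {
    openssl base64 -A | tr -d '=' | tr '+/' '-_'
}

secret="app-secret-goes-here"                                    # hypothetical
header=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"iss":"me","exp":1700000000}' | b64url)  # hypothetical claims

# HMAC-SHA256 over "header.payload", keyed with the app secret.
sig=$(printf '%s.%s' "$header" "$payload" \
      | openssl dgst -sha256 -hmac "$secret" -binary | b64url)

jwt="$header.$payload.$sig"
echo "$jwt"
```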

The Code

I’ll paste individual snippets here, but the full code is in a GitHub repository. For reference with this post, my function definition looks like this; you can see that I’m passing a lot of information in for everything the payload will need:

def createJWT(JsonSlurper slurper, Integer validSeconds, String appID, String tenantID, String appSecret, String iss)

Unix Time

Getting Unix time was pretty straightforward thanks to the currentTimeMillis method on Java’s System class.

def rightNowMilli = System.currentTimeMillis()
def rightNowSec = Math.round(rightNowMilli / 1000)
def expirationSec = rightNowSec + validSeconds

The only hangup was that I needed the time in seconds, not milliseconds, which is why I divided the value by 1000. After that, I just had to add the number of seconds for the lifetime of the token to get the expiration time; in my case it was 1800 seconds (30 minutes.)


UUID

Next up I needed to generate a UUID as a unique identifier so that no one can try to re-issue this exact same request. There’s a UUID object type already, so I could generate a new, random UUID with:

def jtiValue = UUID.randomUUID().toString()


Header And Payload Maps

For ease of later conversion to JSON, I next created maps for both the header and the payload.

Map header = [alg: "HS256", typ: "JWT"]
Map payload = [exp: expirationSec, iat: rightNowSec, iss: iss, sub: appID, tid: tenantID, jti: jtiValue]

I hard-coded the values in the header map, though it’s worth mentioning that the algorithm (HS256 here) could differ for other services. Likewise, the payload will depend heavily on the service from which a token is being requested; this is where you’re most likely to need to make modifications specific to your use case.

JSON Conversion

Next the maps are converted to JSON strings. Groovy’s JsonOutput library makes this easy with a single method.

def headerJson = JsonOutput.toJson(header)
def payloadJson = JsonOutput.toJson(payload)

Note that I needed to import the library before calling it.

import groovy.json.JsonOutput

Base64 Conversion

The JSON values for the header and payload both need to be converted to base64 strings. I initially started doing this in a far more difficult manner by creating a function that would convert the bytes and return a byte array before stumbling across the fact that Strings have a getBytes method. Note that Strings also have a bytes property I could call directly; in my testing this seemed to give me the same result, but I liked using getBytes instead because I could specify that they were UTF-8.

def headerBase64 = headerJson.getBytes("UTF-8").encodeBase64().toString().split("=")[0].replaceAll("\\+", "-").replaceAll("/", "_")
def payloadBase64 = payloadJson.getBytes("UTF-8").encodeBase64().toString().split("=")[0].replaceAll("\\+", "-").replaceAll("/", "_")

In my testing the replaceAll methods never seemed to change anything, but they’re there for a reason: along with stripping the = padding, swapping + and / for - and _ converts standard base64 into the URL-safe base64url alphabet that JWTs require. Those characters simply don’t appear in every encoded value.


The Signature

This part basically involved a lot of DuckDuckGo searches and piecing together things I found. Most of this is just Java code, and I’m honestly a little surprised that it worked… this commit shows how confident I was feeling. Note that once again I had to make some imports to leverage different crypto libraries:

import javax.crypto.spec.SecretKeySpec
import javax.crypto.Mac

Then the code for it is:

def toBeSigned = headerBase64 + "." + payloadBase64
SecretKeySpec secretKeySpec = new SecretKeySpec(appSecret.getBytes("UTF-8"), "HmacSHA256")
Mac mac = Mac.getInstance("HmacSHA256")
mac.init(secretKeySpec)
byte[] digest = mac.doFinal(toBeSigned.getBytes("UTF-8"))
def signature = digest.encodeBase64().toString().split("=")[0].replaceAll("\\+", "-").replaceAll("/", "_")

This is first concatenating together the header and payload, with a period separating the two base64 encoded values. Then I create a secret key specification using the secret key from my app. Next I instantiate a message authentication code using SHA256. That’s used to create a signature against the aforementioned header/payload combination, the result of which is also a base64 encoded string.


The Final Token

The final step is to simply concatenate together the header, payload, and signature, all separated by periods:

def token = headerBase64 + "." + payloadBase64 + "." + signature

This value is what I return to the caller. I won’t go over the details here, but in the GitHub repository I have code where I actually make a POST against the API endpoint and receive an access token back in exchange for the JWT I created, so I know everything is working. I might update that code in the future to use a newer HttpClient from Java 11+ based on some things I had done last night in Kotlin; if I end up doing that I think it would be a good item for another post in the future.

Until next time, crypto on and stay pink!

GoToot CLI Mastodon Client

While I’m stuck at home for the foreseeable future, I’ve been trying to make the most of my time by using it for some learning instead of simply setting new personal records for how many hours of Netflix and YouTube I can watch in a single month. One of the things I decided to work on was creating a Mastodon client. If you aren’t familiar with Mastodon, it’s a social network most likened to Twitter. Instead of being centralized, though, Mastodon is federated, meaning different people can run their own “instance” of Mastodon, and through federation they can interact with other instances. This is cool for a few reasons, not the least of which is that if there’s an instance with terrible policies letting their users post all sorts of toxic garbage, an instance administrator can simply opt to not federate with that instance, meaning that users from the instances are effectively cut off from one another.

As I mentioned in my last post, I’m not much for making frontend interfaces, so it only made sense that I’d make a CLI-based client. Note that this is by no means an original idea; I was actually inspired by the tootstream client that I regularly use. tootstream is written in Python, and I thought about making my own client in Python as well given that it’s a super fun language. I also thought about writing it in PowerShell since I have a lot of experience with it. Ultimately, though, I did the exact opposite and started working on it in Go since it’s always seemed like a really interesting language that people enjoy but that I’ve never had cause to use. I thought it would be a good learning experience to work with a compiled, statically typed language instead of the interpreted languages that I’m already (decently) experienced with. This is why I ended up stumbling across JSON-to-Go. It’s also been a good experience to work with a slightly lower-level language than what I typically use; to say the least, getting back a byte array from my first HTTP GET request wasn’t something I expected.

I kicked off development of the client this weekend since, being a long weekend, I have even more free time than usual. It’s been quite the learning experience, and while I’ve felt frustrated at times it’s also been very rewarding. Luckily there’s a good bit of documentation and resources out there for Go so that I could look up what I needed without too much pain. The one thing I struggled with finding was how to create an HTTP client in Go that I could POST with that had both headers and form data; that might be a good post for another day.

For my own sanity, I spun up a GitHub repository for my project. It’s still extremely early, and I wouldn’t recommend anyone else try using it yet. Right before firing up this post, I managed to get my first toot posted with it followed shortly by one with corrected formatting. I think next will be displaying the Home and Local timelines, followed by being able to favorite, boost, and reply to those statuses. Suffice to say, there’s a lot of work left to do.

I also think this has been a good experience so far in tempering my expectations a little when learning new programming languages (or learning anything really.) I often endeavor to learn a new language only to not make the progress I think that I should be making, get frustrated, and then give up since I’m doing it for myself rather than because I’m required to for my job or anything like that. In this case it’s been nice to just sit down for a little bit, work on a tiny piece of code, make some incremental progress, and take things at a leisurely pace. If I get stuck, I simply do something else for a little while. As I remembered being the case back in college when I was taking Computer Science classes, the best way to overcome a hurdle in programming often is to stop looking at the problem; I’ve had ideas for solutions pop into my head at all sorts of random times.

Maybe by the time the quarantine is over I’ll have a fully functional client!

Self-Hosting A Static Website

Earlier this week a friend reached out to me regarding a website. He had just finished developing his very first iOS game and was ready to submit it to Apple for approval. One of Apple’s myriad requirements, though, is a website containing the author’s privacy policy. My friend had no website and no idea how to make one, so he asked me if I could help. It seems wild to me that someone could have the chops to make an iOS app in Objective-C or Swift but not be able to make a website, but each of us has a different skill set.

We first took some early steps gathering requirements. What did he want for the site? Literally just the privacy policy. Where did he want to host it? Wherever was the cheapest. Did he have a domain name already? Yes! This was fairly straightforward; he literally just wanted the very basics. After a bit of discussion I convinced him to write up a quick “about me” type of page so that we could have more than just the privacy policy. From there I could get to work.


The first thing I did was have him head over to Vultr and spin up their cheapest instance. I think this is running him $5 USD per month. I had him pick Ubuntu as the server operating system given that it’s the one I’m most familiar with. My friend has some familiarity with Linux but not a lot of practical knowledge; when I asked him to shoot me some SSH credentials with sudo access he literally sent me the root account from Vultr. Ick.

Configuring The Host


My first goal was to configure the host. I started that off by creating user accounts for each of us:

adduser username
usermod -aG sudo username

After switching users and verifying my new account worked, I disabled root’s ability to log in:

sudo passwd -l root


Next I wanted to change the default SSH port since having 22 open means a million places from across the planet are going to throw garbage traffic at your server. I did this by modifying the SSH config at /etc/ssh/sshd_config, finding the line with #Port 22, uncommenting it, and changing the port to a high number of my friend’s choice. Then I restarted SSH:

sudo systemctl restart ssh
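If you’d rather script that edit than open the file by hand, the same change can be rehearsed on a scratch copy of the config first (2222 is just an example port; the real file is /etc/ssh/sshd_config):

```shell
# Practice on a throwaway copy before touching the real config.
cp_conf=/tmp/sshd_config_test
printf '#Port 22\n' > "$cp_conf"

# Uncomment the Port line and set the new value in one pass.
sed -i 's/^#Port 22$/Port 2222/' "$cp_conf"
grep '^Port' "$cp_conf"   # → Port 2222
```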


I wanted to enable the firewall as well, so I opened up the new SSH port along with 80 and 443 for our eventual website, then switched the firewall on:

sudo ufw allow sshPortNumber/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable


I next needed a web server; Nginx has been my go-to choice for a long time. Rather than re-hashing all of the steps, I’ll just recommend following the excellent documentation from DigitalOcean which nicely covers the Nginx configuration. That takes you to the point where you are hosting a website. Then you just need content on it.


I’m an advocate of using HTTPS for everything, and with free certificates from Let’s Encrypt there’s no reason not to. Given that we have shell access, using certbot is the way to go. There’s also excellent documentation on that process on Ubuntu with Nginx. I highly recommend selecting the option to redirect any HTTP traffic to HTTPS.


Now for the website itself. I’m not really much of a web developer, and I dislike making anything frontend; I don’t exactly have the best design sense. So I once again opted to leverage Hugo to take care of that for me. I’ve written about the specifics of using Hugo in detail. Since we really just wanted a generic landing page with my friend’s socials and then links to the About and Privacy Policy pages, I ended up going with the Hermit theme. It has a nice, simple look. My friend’s favorite color is mint green, so the default background also works nicely with that when I changed the accent color. The theme nicely includes an exampleSite so that I can steal their config.toml file and also their “About” page to make things even easier for myself.


One of the nice things about Hugo is that, since everything is a simple text file, it’s very easy to compress your entire site and save a backup. Then if something terrible happens to your server, it’s extremely easy to get the site back up and running on a different machine. In this case, I made tarballs for both the finished, compiled site and the Hugo directory storing the configuration and Markdown.

tar -zvcf ~/temp/html_output.tar.gz /var/www/
tar -zvcf ~/temp/hugo_directory.tar.gz /path/to/hugoDirectory

With the tarballs created, I used an SFTP client to copy them off the server for safe keeping.
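A backup is only as good as its restore, so it’s worth a quick trial run before trusting the tarballs (all paths below are made up for the demo):

```shell
# Build a tiny fake site and back it up.
mkdir -p /tmp/site_src /tmp/restore
echo "hello" > /tmp/site_src/index.html
tar -zcf /tmp/site_backup.tar.gz -C /tmp site_src

# List the contents, then do a trial restore somewhere else.
tar -ztf /tmp/site_backup.tar.gz
tar -zxf /tmp/site_backup.tar.gz -C /tmp/restore
cat /tmp/restore/site_src/index.html   # → hello
```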

Wrap Up

In total it took me about an hour and a half to get everything up and running. Having gone through this process many times for websites of my own, I’ve got a decent bit of experience with the process, but this shows it still doesn’t necessarily take a super long time to get a decent website up and running. The big benefits are:

  1. The site is cheap to run. Even the smallest instance at any VPS provider will be able to handle multiple sites with ease unless they start getting really popular, so if my friend wants to create any other sites in the future he won’t need additional hosting.
  2. Backups are stupid simple, and my friend isn’t beholden to a hosting provider or stuck working within the confines of something more expensive like WordPress or Squarespace.

The downsides are present, though, so you have to be cool with them:

  1. Setup takes more technical chops than clicking through a Squarespace template editor. While the documentation for everything in this post is extremely good, if working in a terminal freaks you out, then this likely isn’t for you.
  2. Content is authored in Markdown. This likely doesn’t matter for my friend at the moment since he’s not really posting anything new to the site, but it would be something to keep in mind if he decided to start a blog. In that scenario, I usually just SSH to the server and author my content in Vim. You could also author the Markdown elsewhere and copy it to the server, or use SFTP to open the Markdown file on the server from an editor on your local machine. It’s definitely not as simple as a WYSIWYG editor in your browser, though.
  3. Maintenance is something that will need to be done at least periodically. The server will need to be patched. That’s easy enough to do with a simple sudo apt update && sudo apt upgrade and then reboot when necessary, but it’s just another step to keep in mind. Likewise, bouncing the server means that the website will be down, even if it’s typically only for a moment or two.

Being a kind of pretentious technical snob, I personally find it easier to author my content in Markdown with Vim than to use a WYSIWYG editor in a GUI, but your mileage will vary based on your own preferences.


Lately I’ve been working through the very arduous (for me) process of learning Go for some personal projects. I selected Go because I typically use interpreted, dynamically typed languages for work, so I thought it would be a good learning experience to work with a compiled, statically typed language. To me at least, Go seemed a bit more approachable than something like C or Rust. I had initially started trying to learn Kotlin since I’ve been working with another JVM-based language in Groovy, but it’s extremely difficult to use Kotlin from just the command line; when I couldn’t figure out how to add an external package to a project without an IDE, I basically gave up on it since it didn’t fit into my workflow at all. Go, on the other hand, has a handy package manager built into the same binary you use to compile your own code.

After going through a book to get the basics of Go down (I won’t link the book because I actually thought it was a pretty terrible source, and I wouldn’t recommend it), I jumped into doing a little bit of API work since that’ll be important for some of the project ideas I’m kicking around. It was fairly simple to look up how to leverage the io/ioutil and net/http packages to make an unauthenticated call to a REST API endpoint. This gives the data back in a byte slice. I can cast that to a string to view it and verify that I got back the expected data, but obviously I can’t do much else useful with the data in a byte slice. In many other languages, this is where I would use some type of JSON library to parse the response into something like a map/hashtable/dictionary. I’m used to languages where the interpreter just kind of figures that out for you based on the syntax of the JSON.

Go isn’t like that, though. Instead, I need to define a struct that matches the format of the API response; with that in place, I can use the encoding/json package to unmarshal the response into the struct. Defining the struct is something I could do manually, but that would be extremely tedious. For example, this is what I see when dumping the byte slice I get back from querying my own Mastodon account as a string:

            "note":"\u003cp\u003eProfessional loser.\u003c/p\u003e",
                    "value":"\u003ca href=\"\" rel=\"me nofollow noopener noreferrer\" target=\"_blank\"\u003e\u003cspan class=\"invisible\"\u003ehttps://\u003c/span\u003e\u003cspan class=\"\"\\u003c/span\u003e\u003cspan class=\"invisible\"\u003e\u003c/span\u003e\u003c/a\u003e",

I really don’t want to have to go through that to create a struct out of it. I started digging around to see if there was a better way and, as is usually the case when dealing with code, came across a Stack Overflow post on the topic. Along with some other helpful information that I used to improve my code a little, one of the replies linked to JSON-to-Go. The service allows me to paste in JSON output like what I included above in this post, and it will automatically generate the corresponding struct. I tried it out, and it nicely gave me the following:

type AutoGenerated struct {
    ID             string        `json:"id"`
    Username       string        `json:"username"`
    Acct           string        `json:"acct"`
    DisplayName    string        `json:"display_name"`
    Locked         bool          `json:"locked"`
    Bot            bool          `json:"bot"`
    Discoverable   bool          `json:"discoverable"`
    Group          bool          `json:"group"`
    CreatedAt      time.Time     `json:"created_at"`
    Note           string        `json:"note"`
    URL            string        `json:"url"`
    Avatar         string        `json:"avatar"`
    AvatarStatic   string        `json:"avatar_static"`
    Header         string        `json:"header"`
    HeaderStatic   string        `json:"header_static"`
    FollowersCount int           `json:"followers_count"`
    FollowingCount int           `json:"following_count"`
    StatusesCount  int           `json:"statuses_count"`
    LastStatusAt   string        `json:"last_status_at"`
    Emojis         []interface{} `json:"emojis"`
    Fields         []struct {
        Name       string    `json:"name"`
        Value      string    `json:"value"`
        VerifiedAt time.Time `json:"verified_at"`
    } `json:"fields"`
}

I pasted it into my code, changed the name from AutoGenerated to something a little more fitting, and sure enough I was now able to Unmarshal my API response into a usable struct… without having to go through the pain of creating the struct myself. Huge kudos to the creator for such an awesome and useful service.

Thoughts On Apple’s WWDC 2020

Yesterday was the keynote for Apple’s 2020 Worldwide Developer Conference. Like so many things right now, the entire conference, keynote included, is virtual due to the coronavirus pandemic. In this case, it’s a blessing for the sessions since it means they’re all free for anyone to stream as opposed to being a $1500+ USD ticket. Admittedly, though, the keynote left something to be desired. Just a few weeks ago at Microsoft Build, I feel like Microsoft crushed it with their keynote. It was still streamed live, and it featured popular Microsoft employees all working remotely. Scott Hanselman really killed it during the keynote with a ton of guests in a way that was still believable and relatable for everyone working from home.

Apple’s keynote was just a recording, and while it had all of the glitz and shine you’d expect from Apple, it really felt more like a 2 hour advertisement at times. At the end of the day, the presentation itself doesn’t matter nearly as much as the content, but in the future I’d like for Apple to make things feel a little more… human.

I thought it would be fun to rank some of the announcements (especially to help organize my thoughts since a discussion on them is a likely podcast topic for me in the near future) at least for products that I actually care about. You won’t see anything about the Apple Watch here, for example, because I don’t own one, don’t plan to buy one, and didn’t even really pay attention to those parts of the keynote. That’s not to say anything bad about the Apple Watch; I just don’t need a health tracker or notifications on my wrist when I sit at home every… single… day.


iOS App Library

The App Library is basically a series of folders where applications are auto-sorted, giving some order to the chaos without forcing users to spend hours manually putting apps into folders themselves. I’m definitely not opposed to the idea, but it gets a solid “meh” from me simply because I don’t have enough apps for that sort of thing to be useful. At the time of this writing, my phone has 3 pages of apps, none of which are even full. I rarely install apps unless I really need them, and I regularly prune any apps I haven’t opened in a few weeks.

App Clips

I like the idea of these; App Clips are small, partial versions of apps that you can access on the fly without needing to open the App Store, search for the correct app, and then download the whole thing. The example they gave is when you need a specific app to pay for parking, a situation I’ve definitely been in before. Beyond that scenario, though, I’m just skeptical over how useful this will be. When I go into stores, am I going to be willing to scan a special barcode to access their App Clip? Most likely that’s going to be a hard pass.


Widgets

Like App Clips, I like the idea of Widgets, and having more options and sizes is cool. I’ll definitely take some time to play around with new ones in my sidebar. What I’m not as enthused about is the ability to start cluttering up the app list with widgets sprinkled everywhere. I’ve seen a few people draw parallels to the home screen of Windows Phone 8, and I don’t think that’s a particularly good thing. Obviously I can just choose to only use widgets in the sidebar where they live today, but I hope they don’t start to change the focus of the app list.



Picture-in-Picture

I’ve seen a lot of people throwing (warranted) shade that this is something which has existed for quite a while on Android, and that’s certainly true. Just because they’re late to the party, though, doesn’t mean Apple shouldn’t add picture-in-picture; that’s just silly. Having recently switched from being a long-time Android user to an iPhone user, picture-in-picture is one of the few things I miss from Android. It’s super handy when you’re watching a video to be able to pop out to picture-in-picture mode and quickly check something else. I also like the look of the intuitive controls Apple seems to have worked out to improve the experience over Android’s, with the ability to easily resize and even hide the video while it plays.

Maps Improvements

The cycling-specific additions to Maps look slick, though they aren’t initially available in my area, and I don’t currently do a lot of cycling regardless. Given my goal of eschewing all things Google, though, since switching to an iPhone I’ve been relying on Apple Maps in lieu of Google Maps. As such, I like seeing the commitment Apple has to improving the product. I also appreciate the fact that they themselves mentioned the privacy of Maps.

macOS Big Sur

One of the most immediately noticeable things about Big Sur is the set of UI tweaks. Most of them are small and subtle, but overall I think they make the operating system look significantly better. Having everything slightly tweaked with the uniformity and cohesion you’d expect from Apple just makes everything look and feel extremely polished. I’m looking forward to using it.

More specifically, I’m also looking forward to getting my hands on the updated versions of the Mail and Messages applications in Big Sur. I live in those applications pretty frequently, and they feel just a little dated in Catalina.

Aww Yeah!

Siri Improvements

I use Siri not infrequently on both my phone and iPad. One of the most jarring things about it is the experience you get when it takes over your entire screen to answer a simple query. The new UI for Siri looks like a massive improvement to me, with Siri appearing as a sphere toward the bottom of the screen and not covering all of the content with which you may have just been interacting. I think this will make for a much better workflow, especially for sequential questions to Siri where your next question is based off of the response to the previous question.

On top of that, it’s also nice to see a focus from Apple on giving Siri the ability to respond to a broader range of questions. While it doesn’t happen all the time, it’s not unexpected when I ask Siri something only to be essentially given a list of links from a web search rather than an actual answer. The more that type of response can be eliminated, the better.

iPadOS Sidebar

This may seem silly, but I’m actually really stoked for the new sidebar UI in iPadOS. I think it’ll add a lot of uniformity to iPad apps while also making better use of the screen real estate you get on an iPad. Too many apps that are “optimized” for an iPad (meaning they aren’t scaled iPhone apps) are still essentially just bigger versions of their iPhone counterparts. The more that can be done to make the iPad a unique thing of its own with its own strengths, the better.

Hell Yeah!

macOS Running A-series ARM Processors

As the most-leaked announcement in human history, I think pretty much everyone was expecting this one. It was still exciting to see it made official, though. Given how stagnant Intel processors have been for the last few generations and the insane performance Apple is getting out of the A12Z Bionic in the iPad Pro, I think this is a smart and exciting move. A chip like the A12Z Bionic in something with active cooling? Sign me up. I was surprised that the first ARM macOS devices will ship before the end of the year, but it seems like the process for porting applications is fairly streamlined. Likewise, having a binary translator like Rosetta 2 for any applications not getting timely love is a nice safety net, though I don’t know if I’d really want to be playing games run through it like they demoed.


At the end of the day, nothing absolutely Earth-shattering was announced at WWDC 2020 other than the macOS architecture switch that everyone already knew about. I still think there are a lot of solid improvements on the horizon, though, and I’m eager to start upgrading my devices to the new software this fall.