Connecting Irssi To Twitch.tv IRC

A couple of days ago Brandi decided to try streaming her latest obsession, Animal Crossing: New Horizons, on Twitch. As the unofficial game of the coronavirus quarantine, it seemed like a good idea. When she told me, I decided to do the comfortable thing and turn on her stream from the Twitch app on my Amazon Fire TV Stick so I could watch it on my TV. This works well for most of the streams I happen to watch because I never actually care about interacting with the chat. While the app on my Fire TV Stick does allow me to view the chat, trying to type anything via a remote’s D-pad and an on-screen keyboard on my TV would be painful, to say the least.

In this particular instance, though, I wanted to be able to chat with Brandi and a few other friends who popped into her stream. I initially had her stream open on my phone and was chatting there, but it was annoying to have to keep waking my phone up to look at the chat. Then I opened her stream in a browser on my Pinebook Pro. I simply paused the stream since I didn’t need it running twice, but this left a tiny sidebar for the chat that was less than ideal.

Being a regular user of IRC, I was aware that Twitch chat is just an IRC channel under the hood; this is pretty obvious given that the bots most decent-sized Twitch streamers employ in their chats are just IRC bots. I’d know since I once wrote a really shitty IRC bot for an IRC server I ran for my team at a previous job. As such, I figured I could just connect a regular IRC client to Twitch’s IRC servers, find Brandi’s channel, and chat that way. Given that I already use IRC regularly, I have a server that I normally connect from via my favorite client, Irssi.

Getting the connection right took a little doing, as up-to-date information on Twitch’s IRC setup is spotty. I ended up combining information from a bunch of different sites before I got everything to work properly.

I’m not going to cover how to get and access Irssi; if you need help with that you can use their own documentation. Assuming you’ve got access to Irssi and you’ve just launched the application, the first thing to do is to define a new network for Twitch; for any of the below examples just replace what appears in [brackets] with your own information:

/network add -nick [username] Twitch

This defines a new IRC network in Irssi and sets your nickname for it, which is just your Twitch.tv username. You could technically name the network anything you like (you don’t have to call it “Twitch”), but simplicity is generally the best approach. The next step is to define the server information for that network.

/server add -auto -ssl -network Twitch irc.twitch.tv 6697 [oauth password]

This defines the server irc.twitch.tv for the Twitch IRC network, specifies the port to use, and gives the password used to authenticate your account. Note: your OAuth password is not the same as your regular password for logging in to Twitch! You must generate it at the OAuth password setup page for Twitch, then copy and paste the generated value into the command above. Leave it prefaced with “oauth:” exactly as Twitch provides it; don’t strip that off the front.

With all of that done, now you can simply connect to your newly defined IRC network via:

/connect Twitch

This should show you a message letting you know that you’ve successfully connected. The channel to join is simply the name of the Twitch account, in all lowercase, prefixed with a hash (#) just like any other IRC channel. So to join a particular channel you would just type:

/join #favoritestreamername

That’s all there is to it! You can now happily chat with a stream from Irssi while watching the actual video feed on another device. I assume the steps above could be pretty easily translated into a different IRC client if you prefer something other than Irssi.

There are, however, just a couple of caveats to keep in mind. As you might imagine, there is no support for Twitch emotes within IRC. When someone else in the chat uses an emote, for example, you will simply see the plaintext rendition of it. So if someone uses the BibleThump emote, you will see that text rather than the character from The Binding of Isaac.

The other main caveat is that, as a guest in the channel, you cannot see the other participants of the chat from IRC. You can, however, still see whatever messages they send, and Irssi will still auto-complete their names if you want to mention them. For example, when running Irssi’s /names command to see a list of channel members, I see only myself:

As you can see in the screenshot above, a friend of ours is also actively using Twitch chat, but I am the only member listed in the channel. This isn’t a big ordeal once you realize it (unless you’re really jonesing to see everyone’s name for some reason), but it did trip me up initially since I had assumed that I was not connecting to the proper channel. I eventually left both IRC and Twitch chat open in a browser until someone else typed something so that I could verify the same message appeared in both places.

On the whole I found this to be a streamlined, elegant way to actually participate in Brandi’s Twitch chat from my low-end laptop while watching her actual video stream on my TV from the comfort of my couch. Plus, you just get to feel awesome for using IRC.

Buy Literally Anything From Bandcamp Today

The coronavirus has been wreaking havoc on industries across the globe, including the music industry. For small, indie artists especially, the revenue from live shows is vital to their ability to continue doing what they love. In a global pandemic, though, all of those shows have been canceled, leaving artists struggling to get by.

Having been social distancing/self quarantining for about two weeks now, I’ve been doing my best to live by the very wise, meaningful words of Craft Brew Geek when he told a group of us:

This is the time when we need to take care of the people who take care of us.

He said this within the context of doing what we can for local restaurants and breweries that are trying to weather this storm. There are plenty of places where we’re regulars and have received preferential treatment for months or even years: getting your drink filled before other people, getting deals on your order, etc. As these businesses face the biggest threat to their existence that they’ve ever seen, we’ve been doing our part by getting carry-out food and drinks whenever possible while leaving the most generous tips we can, because every little bit helps. I’d highly encourage anyone reading this to do the same if it’s within your means.

But restaurants and breweries aren’t the only ones who take care of you. My memory tends to be pretty good, so I trust it when I recall Hayley Williams posting a message on Twitter years ago that amounted to:

Be there for music because music will always be there for you.

I tried searching for the post, but it was so long ago that I’ll never find it. The sentiment is certainly true, though. Everyone has had that song that made an event, be it a party, a roadtrip, or just a special moment with friends, unforgettable. Everyone has felt sorrow and heartache and wondered how they would’ve gotten through it if it weren’t for that one song that knew exactly how they felt.

Music has had such a profound impact on my life, and I can’t imagine where I’d be without it. It pains me to see so many small artists having to cancel their shows, knowing they’re upset at having to let down their fans and worried about how they’re going to continue making ends meet and create the art we all love so much.

That’s why today is such a great day to support indie music. Today Bandcamp is waiving their revenue share on all purchases through the site. This means that 100% of the revenue is going to support artists who need it now more than ever. Dozens of artists I listen to regularly, some of whom were mentioned in our music stats episode, have Bandcamp profiles. Even if you don’t need a digital album because you always use Spotify or Apple Music, consider buying an album anyway just to help out and show some love for the people who have made such a difference in your life.

Pre-orders count as well, so you can show some support for artists with upcoming albums. Also note that most albums have a minimum price but offer you the ability to throw a little extra on top if you want. Please help out if you have the means.

If you don’t have the means, you can still help out! The payouts are garbage, but stream as much music from indie artists as you possibly can. Check out the Twitter profiles for artists you love; many have started doing online shows. For example, on her YouTube channel, Tessa Violet has been streaming what she’s coined “The Something To Look Forward To Tour”. There are still 3 shows left. Tune in for those and look for other artists who are doing the same!

Do whatever you can to support artists and musicians now because every little bit helps. It’s time that we take care of the people who take care of us.

Idiot’s Guide To Figuring Out How A Website Was Hacked

Full Disclosure: This won’t tell you exactly what was wrong with a website. This will just give you a pretty good, quick idea. I’m not in DFIR or even InfoSec. I’m just a sysadmin who has some familiarity with a decent number of systems. It’s also worth mentioning that I did all of these actions from my Linux machine. The same would be possible from macOS or from Windows 10 with the Windows Subsystem for Linux.

Last night, my good buddy Craft Brew Geek shot me a message because a website we both had something of an interest in (I won’t go into more specifics than that to protect the guilty… I’ll just say that it doesn’t belong to either of us) had suddenly exhibited weird behavior. Navigating to the website, either via directly typing their URL into my browser or by searching for them on Google and clicking the link, took me not to the expected website but to a super shady online pharmacy; there’s not enough booze in the world to get me drunk enough to type my credit card information into this site. Since we’re all stuck at home under quarantine, though, I figured I’d kill a little time digging into what, exactly, was going on.

The initial problem is that I navigate to desiredsite.com and it takes me to shadysite.com instead. A common way this type of thing happens without any degree of technical compromise is if someone allows their domain to expire rather than having it automatically renew. When that happens, it’s possible for an attacker to swoop in, buy the domain, and then change the DNS information to point to their desired site. It’s pretty uncommon since most DNS registrars will park domains for a month, giving the original owner time to renew. Failing that, they often go to auction rather than back into the pool. Additionally, under this scenario there would be no reason to redirect to shadysite.com. Still, it doesn’t hurt to check the DNS history through something like SecurityTrails. This showed the last DNS change was 3 months ago for the site in question; there’s no way the site had been redirected for 3 months so I could rule that out.

My next thought was to see if the sites were on the same server. If they were, that would tell me the entire server had been compromised and was configured to serve a different site in response to my request. This was easy enough to check with dig:

dig +short desiredsite.com. a
dig +short shadysite.com. a
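If you want the comparison made for you, a small shell sketch can capture the first A record of each domain and test them; the hostnames here are the same placeholders as above:

```shell
# Grab the first A record for each domain (placeholder hostnames).
a=$(dig +short desiredsite.com. a | head -n 1)
b=$(dig +short shadysite.com. a | head -n 1)

# If the addresses match, both names point at the same box.
if [ "$a" = "$b" ]; then
  echo "same host: $a"
else
  echo "different hosts: $a vs $b"
fi
```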

This gave me two different IP addresses. This tells me the sites aren’t hosted on the same server, which means that desiredsite.com is redirecting me to shadysite.com. For that to be the case, I have to be hitting desiredsite.com first, but then I’m redirected before I see anything. I needed to see what was up with the site before being redirected. Scripts on the web are most commonly executed not on the server side, but locally in the browser. As a result, I used wget to just try to snag the file living at desiredsite.com, which for most websites will be index.html:

wget http://desiredsite.com

This simply downloads the file to my local machine. Nothing is actually executing any scripts it might reference. Sure enough, this gives me an HTML file for desiredsite.com I can open in a text editor. I figured JavaScript was likely being used to handle the redirect. To test this, I turned off JavaScript in Google Chrome and once again navigated to http://desiredsite.com. This caused the expected website to load, albeit kind of broken since JavaScript wasn’t running.
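Before wading into individual scripts, it’s worth grepping the saved page for the usual client-side redirect mechanisms. A quick sketch; the file name is whatever wget saved for you, and the patterns are just common suspects rather than an exhaustive list:

```shell
# Search the saved page for common client-side redirect patterns:
# JavaScript location changes and meta refresh tags.
grep -Ein 'window\.location|document\.location|http-equiv="?refresh' index.html
```

As a bonus, grep exits non-zero when nothing matches, so this doubles as a cheap pass/fail check in a script.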

Diving back into the index.html file, a quick search showed me that there were nearly 60 .js files for JavaScript. Ick. JavaScript can be written to be fairly easy to consume if you’ve got a passing familiarity with computer programming, but most JavaScript on the web is designed to be 1.) minified and 2.) obfuscated to make this nearly impossible. Seriously, this is what a typical JavaScript file looks like. Note how my editor is showcasing the fact that it’s all one line:

Clearly, trying to read through 60 files of that isn’t going to happen; this isn’t my job, and I’m only doing it for fun. However, I still had some options for quickly spotting something flagrant. I saved local copies of all 60 JavaScript files into the same directory, navigated to that directory from my terminal, and then used grep -R to recursively search through every JavaScript file at once.

cd /path/to/javascript
grep -R "search term here" .

What did I search for? I started off by searching for shadysite.com. No dice. Then I searched for the IP address I got for the site from my previous dig command. Also no dice. I didn’t think it would be anything that overt, but it was worth a shot. I decided to look at the source code for shadysite.com to see if there were any clues. I immediately noticed that the entire site was coded around the IP address for the site rather than the domain. For example, links in the source code of most sites are going to look like:

http://mydomain.com/folder/page.html

The links on this particular page were done like this:

http://192.168.254.254/folder/page.html

Obviously that wasn’t the actual IP address in use, but you get the point. This tells me that, unsurprisingly, they run into a lot of problems with their domains getting shut down. So they design the site to be domain agnostic, buy a new domain when the old one is shuttered, and then point it at the same IP address they’ve been using. Some quick searches online showed me a few tools I could use to plug in an IP address and get a historical list of domains tied to that IP. I used ViewDNS.info. This showed me 6 total domains that had pointed to the same IP address, one of which was the domain I was seeing now. I repeated my grep search above with the others to see if there were any hits, but sadly there was still no luck.

At this point, though, I still had a pretty good idea of what was happening. Out of the 60 JavaScript files referenced by the source code for desiredsite.com, most of them were in a sub-directory for WordPress, including some directories that noted they were for WordPress plugins. Having looked at enough compromised websites over the past 15 years, it’s a definite trend that WordPress (and especially WordPress plugins) tend to be Swiss cheese. WordPress plugins are a frequent target for attackers, and most people never think to update them. At this point, if I were determined to get to the bottom of things, it would be much quicker to just point some kind of vulnerability scanner like Nessus at the site and just let it find the vulnerable plugin(s) rather than tracking them down through obfuscated JavaScript.

All told, though, it was a fun exercise to dig into how the site was compromised and come away after only about 30 minutes of work with a pretty good guess.

Google Wifi And The Curse Of Simplification

As you likely know if you listen to the podcast, about 6 months ago I started a new job, and about 3 months ago I moved to be closer to that job. With Brandi’s help, I managed to get moved without too much hassle, but I cut a few corners in getting my new home set up. One of those corners was my home network; after using the “kit” from my ISP to get my Internet service activated I had never bothered to swap out their equipment with my own. Given the number of devices I had connected to the Wifi (2 laptops, a phone, a tablet, 2 streaming sticks, 3 smart speakers, etc.) I didn’t want to bother with getting everything connected to a different network.

That being said, I’ve been spoiled by mesh Wifi and coverage in my bedroom was occasionally problematic; in apartments like mine it’s easy for crowded channels to drown out your signal. I had a little free time one evening earlier this week and decided to finally bite the bullet.

My Wifi setup at my last apartment was Google Wifi with 3 access points. 3 APs may seem overkill for a roughly 900 square foot apartment, but when I bought them the bundle of 3 was essentially the same price as buying 2 individually… plus it ensures that I have total Wifi dominance over my neighbors.

The initial, out-of-box experience with setting up Google Wifi was fairly simple. Connect the first AP to the modem, install the Google Wifi mobile app, have it discover the AP, and then configure it through the app. Once your Wifi is working, repeat the process for the additional APs to get them to form the mesh network. The full instructions are available here.

However, configuring them when they’ve already been configured once turned out to be much more irritating and far less straightforward. Given that I had a brand new phone, I needed to install the Google Wifi app. I did that and logged in with my Google account. Naturally, Google had stored the information about my APs and how they were configured, but told me they were offline. Taking one of the 3, I connected it to my modem and bounced the modem. The light ring on the AP stayed orange, indicating that it didn’t have a WAN link.

I can only assume this is where I made the first of many mistakes that I’ll take partial responsibility for and partially blame on how kludgey this whole experience is when you’re tied to a mobile app. Rather than repeating this process potentially two more times to find the AP that was expecting to be the router (and that’s assuming the configuration was still held despite the devices being powered off for months), I figured I would just factory reset the APs and start from scratch. I could select the factory reset option from the app to clear the APs out of it, but naturally that wouldn’t do anything to the physical devices since they couldn’t connect to the Internet to see that configuration. A quick web search let me know that I needed to hold the button on the back of the AP for 10 seconds while the device is on, pull the power while continuing to hold the button, and then plug the power back in while still holding the button until I saw the light ring. Talk about a secret handshake. Regardless, I did that and then waited until the light ring began to pulse blue, indicating it was ready for setup per the aforementioned instructions.

At this point, I open the app on my phone. It does a quick search, confirms for me that it sees the AP, and then starts trying to connect. For the next 15 minutes I stare at this:

Umm… not great. At this point, I figure something must be amiss in the app. The instructions in step #4 say I should be prompted to scan the QR code on the bottom of the device… a prompt that I never receive. I kill the app and re-launch. I get the exact same experience and end up stuck at the same screen as above.

This is what we in the IT industry call a problem. At this point I’m mumbling “stupid fucking app” to myself, so I grab my laptop and an Ethernet cable. I connect my laptop to the one LAN port on the AP; sure enough, I get a DHCP address and can reach the Internet. So the AP is making a WAN connection. The mobile app just won’t connect to the AP for some reason. I can check the network settings on my laptop, though, to see the local IP of the AP. While being salty at Google for designing this for the less technically inclined, I do a quick nmap to see if ports 80 and/or 443 are open on the AP, indicating HTTP and HTTPS, respectively.

nmap -Pn -p 80,443 192.168.x.y

80 was open while 443 was closed. I open a browser and navigate to:

http://192.168.x.y

Sure enough, this loads a web page on the device. In most consumer routers this is where you’d normally do all of your configuration. In Google Wifi it politely recommends that I get bent and use the mobile app:

Shit. With the app being my only option, I hop back over to the setup instructions online to see what I may have missed. At this point I notice a chat pop-up at the bottom right corner of the screen for Google support. I click on it and get connected to a support agent. After I describe the situation, she instructs me to:

  1. Kill the Google Wifi app on my phone.
  2. Turn off cellular data on my phone.
  3. Open the Wifi network settings, search for the network named on the bottom of the AP, and connect to that.
  4. Re-open the app and now go through the setup.

At this point, it actually connects and gets past the screen I was stuck on so that I could configure the device. I was happy to have it working, but I was also extremely confused; nothing in the setup mentions connecting to the setup network on the device. Nothing like this was mentioned on the common issues page, either.

Regardless, with the Wifi network running, I connect a second AP to build the mesh network. I even clarified with the support agent in chat that for this AP, I don’t connect to the setup network… I should be able to be connected to my new Wifi network, tap the option to add another AP in the app, and have it connect. Unfortunately for me, the exact same thing happens as before. The AP is found, but I again sit endlessly at the “Connecting to Wifi point” screen. The support agent has me reset the AP again to no avail. Next she has me use an Ethernet cable to physically connect the mesh AP to the first AP and then repeat the setup:

Unsurprisingly, this makes no difference. On a whim, I grab my iPad, install the Google Wifi app (which doesn’t actually have an iPad variant and looks like garbage), and repeat the process. The app on my iPad immediately connects and prompts me to scan the QR code on the bottom of the AP. Within two minutes the AP is connected and working.

I then head back to my bedroom where I had already plugged in the third AP during one of the instances of watching the app on my iPhone refuse to connect. It also immediately completes the setup when using my iPad.

At this point I’m just shy of 3 hours into something that I assumed would take me 30 minutes at most, and I still have to go through the painful process of connecting my smart speakers to the new network. I’m glad it’s all working, but I’ve got so many points of confusion.

  1. Why are the setup instructions Google publishes wrong?
  2. WTF is going on between the mobile app and my iPhone 11 that it wouldn’t work, but it works fine on my iPad?
  3. Why on Earth does Google not permit you to do configuration locally and then have the device sync that configuration to the cloud to keep everything in sync?

#3 is a particular sticking point for me. Google only permits the configuration to sync from the cloud down to the device. Why not let me configure the device locally and have it sync any changes back to the cloud, so the sync works in both directions instead of one? I understand wanting to leverage the app to (hopefully) simplify the process for people who would find browsing to their router way too confusing, but refusing to allow a local configuration when you know how to connect and the shitty app doesn’t work is infuriating to me.

That being said, this was 100% my own fault for not doing the research to realize that Google Wifi doesn’t permit this prior to making the purchase. And to be completely clear, I’m not opposed to the app. The first setup was good, and I love being able to quickly check and configure my network from my phone regardless of where I am. I just feel like it should be mandatory to allow some degree of local access for when the app goes haywire. Unfortunately, this appears to be standard practice. I verified that Eero also uses an app-based setup, and that if you don’t have data on your phone, you still need to use your phone but do some janky local connections between your old router and your Eero. How that is less complicated than connecting a laptop and opening a browser I’ll never know.

Despite my copious amounts of salt, I’m glad I finally got everything connected. As Wifi 6 becomes more prevalent, though, I’ll be on the lookout for new networking hardware. This time I’ll be sure to verify that whatever I buy at least offers local configuration options in conjunction with an app.

SSHFS On macOS

Having switched to a MacBook Pro for work when I started a new job about 6 months ago, I’ve been on an interesting journey in finding new software to fit my workflow after over a decade of operating primarily on Windows for my professional computing. Given that I’ve used Linux for years at a personal level, I can typically do anything I need on macOS by opening a Terminal; it’s only when I’ve got to operate at the graphical level that I can get a little tripped up by the differences between macOS and Windows.

I recently found myself needing to copy some files from a cloud VPS running on Linode. The reverse had been simple enough; copying files from my local machine to the VPS was as simple as running one rsync command. Note that I swap the SSH port on all of my VPS instances to something non-standard so that I don’t get so many random root authentication attempts when nefarious people see that port 22 is open. Even though I disable root login I still want to discourage the attempts for anything that has SSH open to the Internet. If you think I’m overreacting and you have a server in the same position, just do this and see how you feel after:

sudo tail /var/log/auth.log
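If you’d rather have a number than a wall of log lines, count the failed attempts instead; this assumes the Debian-family log location, so adjust the path on other distros:

```shell
# Count failed SSH login attempts in the auth log
# (path assumes Debian/Ubuntu; RHEL-family systems use /var/log/secure).
sudo grep -c "Failed password" /var/log/auth.log
```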

It’s low-key terrifying. That being said, it was still easy enough to rsync anything from my MacBook Pro to the VPS (usually a script I had been working on that I figured would take hours to run and thus would be better suited to running on my VPS) via:

rsync --delete -azP -e "ssh -p #####" LocalFolder/ user@server.domain:/local/file/path/

Note that the ##### should be replaced by the SSH port number on your server if you’re using something other than the default of port 22. You can verify the SSH port in use by checking /etc/ssh/sshd_config on your server and looking for the Port entry.
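If you just want that one value out of the config, a quick sketch (using the stock OpenSSH config path) pulls it with awk and falls back to the default of 22 when no Port line is set:

```shell
# Print the configured SSH port, defaulting to 22 if no Port line exists.
port=$(awk '/^Port /{print $2}' /etc/ssh/sshd_config)
echo "${port:-22}"
```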

After finishing one particularly long-running script, though, I had a file on my VPS that I needed to copy back to my MacBook. I could have swapped the rsync source and destination to pull the file down (the laptop initiates the SSH connection either way, so its lack of a public IP address doesn’t matter), but I wanted something more browsable than crafting one-off commands for every transfer. In the Windows world, I would’ve typically used something like WinSCP to copy the files. FileZilla also works but, in my previous experience, has been super buggy. I tried looking in the macOS app store, and I was almost immediately disappointed. Most of the offerings either 1.) looked super shady with bad reviews or 2.) were subscription-based applications. Don’t get me wrong… I’m not particularly opposed to subscription services. In fact, I have a podcast episode waiting to be published on this very topic. But for something like a simple file transfer? I refuse to pay a subscription for something so basic.

As a result, I ended up looking for open-source alternatives, and after just a little hunting I stumbled across sshfs. I installed it from Homebrew:

brew install sshfs

Before sshfs will work, though, you also need to install FUSE for macOS, which I just did from their website. Note that after upgrading my device from Mojave to Catalina I needed a new version of FUSE; luckily the system is good about letting you know when your version is out of date.

What sshfs allows me to do is to mount a remote filesystem like a local filesystem that I can then use to copy files back and forth. To be extra basic, I created my mount point in my ~/Downloads folder:

mkdir ~/Downloads/sftp

It took me some messing around to figure out the exact syntax I needed to successfully mount the filesystem for my VPS, but I eventually ended up with a shell script that just included this line:

sudo sshfs -o allow_other,defer_permissions -p ##### user@server.domain:/local/file/path /Full/Path/ToLocal/MountPoint/

Just like before, change the port number, server, and file path information to match your actual setup. After running this, you can open Finder in macOS and easily copy files to and from the server without worrying about port forwarding. The only caveat I’ve noticed is that macOS sometimes becomes extremely unhappy if the remote filesystem is left mounted and the local client loses connectivity to the remote system, for example because the laptop fell asleep and disconnected from WiFi. As a result, I only leave the connection up when I’m actively working with it. When I need to disconnect the mounted remote filesystem, I can either click the “eject” icon next to it in Finder or run:

sudo umount /Full/Path/ToLocal/MountPoint

In this case the mount point is ~/Downloads/sftp. I now use this setup on a nearly daily basis, and I’ve not run into any issues. The free, open-source solution is working great without forcing me to try out random shady-looking apps or coughing up money for a subscription just to move files between devices.