During this era, I asked one of my users to create some ANSI art for me:
And they agreed!
11 days later, they uploaded the ANSI art to my BBS.
You may be wondering what the deal with these Windows XP-era screenshots is. Well, I used to back up my BBS with a Colorado Tape Backup drive. Here’s a picture of one I found on the Internet:
In 2001, I found a 250 MB tape that was the last backup of my old BBS and restored it to my computer. Feeling nostalgic, I logged in to it locally and took a few screenshots. When I first logged in to it, it displayed the ANSI art that my user had made for me:
It scrolled by pretty fast, so I took a series of screenshots of it and then used Photoshop to combine them. I made a little webpage for it so I could look at it every now and then and went on with my life.
Shortly after this, my hard drive died. I think it was an IBM Deathstar. I have no idea what happened to the 250 MB tape, but I think it’s safe to say that even if I do find it, it probably isn’t going to work.
Luckily, thanks to the power of the web, I still have these screenshots. Unluckily, I never actually uploaded the original ANSI file, so I don’t have that. Every now and then I’ll search for it on the Internet, but I’ve concluded that it never made it into an artpack and thus I had the only copy of it, until I didn’t.
It’s not the most beautiful ANSI art, but it is something that someone made for me and I’ve always been a little bummed that I can’t look at it in one of the many ANSI viewers (or DOS emulators) that exist today.
Let’s fix that!
The first thing to understand about ANSI art is that it combines two things: characters from the IBM Code page 437 and ANSI escape sequences that do things like change colors and move the cursor around.
I found this extremely handy page that shows all the different characters in Codepage 437.
Characters #0 - #31 are control characters, and #127 is DEL, so we can ignore those. The rest of them are used in ANSI art, although the shade blocks and half blocks are predominantly used.
To type one of these weird characters on an IBM PC, you would hold down ALT and type the character code on your numpad. When you released ALT, the character would show up. But most artists would use a program like TheDraw or ACiDDraw to design their art.
Speaking of TheDraw, let’s take a look at the color selection screen from it:
There are sixteen foreground colors and eight background colors. Ignore 16-31, I captured this screenshot mid-blink.
Changing the foreground and background colors and writing the characters from Codepage 437 produces the ANSI art that we know today:
To accurately display ANSI art, it’s important to use an appropriate IBM PC font, like this one. This will ensure that your art looks the way that it was intended by the artist.
The strategy for conversion that I came up with was: split the screenshot into individual characters. For each character, generate every possible permutation of background color, foreground color, and character from Codepage 437 and compare it to the character in the screenshot, then pick the one that is the most similar.
There’s probably a lot of different ways to do this, but I figured the easiest would be to make a webpage and use the Canvas API.
As a test case, I used a program called ansilove to take an existing .ANS file and generate a PNG of it. It even came with an example ANS file:
The image is 640x464. Assuming that we have an 8x16 font, this means that it contains 80x29 characters.
We create a canvas and load the image into it:
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d", { willReadFrequently: true });
const img = new Image();
img.addEventListener("load", (e) => {
ctx.drawImage(img, 0, 0);
});
img.src = 'input.png';
Next, we have to have a list of the foreground and background colors. I found that iTerm2 has a color scheme for CGA that looks accurate, so I loaded it into iTerm and then extracted the hex codes from it, double-checking against TheDraw’s color picker. This gave me two lists:
var fgColors = [
"000000",
"aa0000",
"00aa00",
"aa5500",
"0000aa",
"aa00aa",
"00aaaa",
"aaaaaa",
"555555",
"ff5555",
"55ff55",
"ffff55",
"5555ff",
"ff55ff",
"55ffff",
"feffff"
];
var bgColors = [
"000000",
"aa0000",
"00aa00",
"aa5500",
"0000aa",
"aa00aa",
"00aaaa",
"aaaaaa",
];
Now we need all the characters in Codepage 437 to loop over. Luckily, Unicode provides a text file that translates all CP437 codes to their UTF-8 equivalents.
I took this text file, removed the control characters and DEL, and created an array of their UTF-8 counterparts.
Then I created another canvas, looped over each background color, foreground color, and character, and wrote them to the canvas using the IBM PC font.
const char1canvas = document.getElementById("char1");
const char1ctx = char1canvas.getContext("2d", { willReadFrequently: true });
const char2canvas = document.getElementById("char2");
const char2ctx = char2canvas.getContext("2d", { willReadFrequently: true });
char1ctx.fillStyle = "#000000";
char1ctx.fillRect(0,0,8,16);
var imgData = [];
for (var i = 0; i < bgColors.length; i++) {
for (var j = 0; j < fgColors.length; j++) {
if (bgColors[i] == fgColors[j]) {
continue;
}
for (var k = 0; k < chars.length; k++) {
char2ctx.fillStyle = "#" + bgColors[i];
char2ctx.fillRect(0,0,8,16);
char2ctx.fillStyle = "#" + fgColors[j];
char2ctx.font = "16px xx437";
char2ctx.fillText(chars[k], 0, 12);
if (!imgData[i]) {
imgData[i] = [];
}
if (!imgData[i][j]) {
imgData[i][j] = [];
}
imgData[i][j][k] = char2ctx.getImageData(0,0,8,16);
}
}
}
For some reason I had to offset the fillText by 12 pixels to get it to correctly write to the canvas as I would expect. I have no idea why, but CSS has never been a strength of mine. After each time that we write the character, we store the image data of the result in a lookup table, by background color, foreground color, and character.
Next, we loop over each character section of the original image’s canvas and extract the image data of this section:
for (var y = 0; y <= 28; y++) {
for (var x = 0; x <= 79; x++) {
char1ctx.drawImage(canvas, x*8, y*16, 8, 16, 0, 0, 8, 16);
var imgChar1 = char1ctx.getImageData(0,0,8,16);
}
}
I found a library called pixelmatch that allows you to compare two sets of ImageData. It returns the number of mismatched pixels and, if you want it, a diff of the two.
var result = pixelmatch(imgChar1.data, imgChar2.data, null, 8, 16, {
threshold: 0.1,
});
So then we can loop over every permutation of background color, foreground color, and character and compare it to the image’s character and pick the one that has the lowest number of mismatched pixels - ideally 0.
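Sketched in JavaScript, that search over the lookup table might look like this. This is only a sketch: diffPixels stands in for pixelmatch so the snippet is self-contained, and imgData is assumed to be the lookup table built earlier, indexed by background color, foreground color, and character.

```javascript
// Stand-in for pixelmatch: count pixels whose RGBA values differ.
function diffPixels(a, b) {
  var mismatched = 0;
  for (var p = 0; p < a.length; p += 4) {
    // compare the RGBA channels of each pixel
    if (a[p] !== b[p] || a[p + 1] !== b[p + 1] ||
        a[p + 2] !== b[p + 2] || a[p + 3] !== b[p + 3]) {
      mismatched++;
    }
  }
  return mismatched;
}

// Find the (background, foreground, character) combination whose rendered
// image most closely matches the target ImageData - ideally 0 mismatches.
function findBestMatch(target, imgData) {
  var best = { mismatch: Infinity, bg: -1, fg: -1, char: -1 };
  for (var i = 0; i < imgData.length; i++) {
    for (var j = 0; j < (imgData[i] || []).length; j++) {
      if (!imgData[i][j]) continue; // skipped fg == bg combinations
      for (var k = 0; k < imgData[i][j].length; k++) {
        var mismatch = diffPixels(target.data, imgData[i][j][k].data);
        if (mismatch < best.mismatch) {
          best = { mismatch: mismatch, bg: i, fg: j, char: k };
        }
      }
    }
  }
  return best;
}
```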
Next we create an output canvas and using the identified combinations for each character, write back the same ANSI art to that canvas:
The image on the left is the input image and the image on the right is the generated ANSI art. It looks pretty good! The heart in the middle of ANSI and LOVE has been converted to a rectangular bullet - possibly because I excluded the control characters and the heart happened to be one of them.
But this is only useful if we can generate the ANSI art file. Let’s go back to the ANSI escape codes:
\033[0m resets everything back to normal. \033[31m will set the foreground to red - the possible foreground colors are 30-37. \033[44m will set the background to blue - the possible background colors are 40-47. In order to get the bright foreground colors, we simply have to set the bold attribute with \033[1m. We just have to remember to use \033[0m to reset it after we’re done.
So for each character, we just always reset it, set the background color, set the foreground color, and optionally set bold. Then we write the character.
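As it happens, the fgColors and bgColors arrays from earlier are already in ANSI order (index 0 is black = 30/40, index 1 is red = 31/41, and so on), so the lookup-table indices map straight to escape codes. A sketch of emitting one character - emitChar is a hypothetical helper, not code from the converter:

```javascript
var ESC = "\x1b"; // \033 in octal

// Emit one character: reset, set background, set foreground, and set bold
// if the foreground index is one of the bright colors (8-15).
function emitChar(bgIndex, fgIndex, ch) {
  var out = ESC + "[0m";                         // reset everything
  out += ESC + "[" + (40 + bgIndex) + "m";       // background: 40-47
  out += ESC + "[" + (30 + (fgIndex % 8)) + "m"; // foreground: 30-37
  if (fgIndex >= 8) {
    out += ESC + "[1m";                          // bold for bright colors
  }
  return out + ch;
}
```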
But if we just write this to a string, this will give us a file that contains ANSI escape codes but UTF-8 characters - we need to convert it to a DOS format.
There’s a useful package called iconv-lite that can encode different character encodings, so we can do something like:
iconv.encode(ansiTxt, 'cp437');
I reverse-engineered how this CP437 converter outputs the file, and I thought the way that it downloaded it was pretty clever, so I incorporated that as well:
var c = document.createElement("a");
c.href = "data:text/plain;base64," + iconv.encode(ansiTxt, 'cp437').toString('base64');
c.download = 'output.ans';
document.body.appendChild(c);
c.click();
document.body.removeChild(c);
This automatically started downloading the ANS file with the correct encodings.
Finally, it was time to send my screenshot through it!
The screenshots that I had taken in 2001 were one screen at a time, or 80x25 characters. This was the first one:
The dimensions were 560x300. But that means that each character was 7x12 instead of 8x16. And this is when I remembered that when I took the screenshots, I had taken them in a DOS window in Windows, which used whatever font that Windows used for DOS prompts. What if we just… resized the image so that each character was 8x16? I resized the image to 640x400, making sure to use “Nearest Neighbor” to try to keep the pixels correct, and it sort-of worked:
The text detection is laughably bad but it seemed to get the solid and half blocks - it was really mostly struggling with the shade blocks. There are only 3 shade blocks, 4 if you count the completely solid one.
I wrote an ANS file to output just three different shade blocks and exported an image of it using ansilove and it was clear - whatever font Windows was using for the DOS prompt had a completely different idea of what a shade block looked like than what the IBM PC font did.
Then I had an idea - what if I just taught my program what the weird Windows shade blocks looked like?
I zoomed in on the shade blocks in the screenshot and extracted one of each of the different types. From left to right, these represent light, medium, and heavy:
I loaded each one into a canvas and looped over each pixel. When I encountered red, I stored this as a 1 in a multi-dimensional array, and when I didn’t find red, I stored this as 0. Using this mapping, I again generated every permutation of background color, foreground color, and now these new shade blocks and stored their image data in a lookup table. When the image character matched it, I selected the appropriate shade block character.
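A sketch of that pixel-scanning step, assuming we have the ImageData for one extracted block; the "is it red?" thresholds here are illustrative:

```javascript
// Turn an extracted reference block into a 0/1 bitmap: 1 where the pixel is
// red (foreground), 0 everywhere else (background).
function toBitmap(blockImageData, width, height) {
  var bitmap = [];
  for (var y = 0; y < height; y++) {
    bitmap[y] = [];
    for (var x = 0; x < width; x++) {
      var p = (y * width + x) * 4; // 4 bytes per pixel: RGBA
      var r = blockImageData.data[p];
      var g = blockImageData.data[p + 1];
      var b = blockImageData.data[p + 2];
      bitmap[y][x] = (r > 200 && g < 50 && b < 50) ? 1 : 0;
    }
  }
  return bitmap;
}
```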
And that worked! Here you can see the difference between the screenshot shade blocks and the actual shade blocks is quite stark:
Apparently I hadn’t seen what this ANSI art correctly looked like since the 90s. The text was still wrong, but I figured I could easily recreate it from the screenshots. So I started processing the other screenshots, which was working great until I got to this one:
The shading on the flames was completely gone, which meant that it more closely identified the character with the solid block than the shade block. That’s weird. I zoomed in closer to the block on the original screenshot. Here it is in the middle next to two of the original shade blocks that I mapped.
It seemed to be very similar to the one on the left, but flipped. I’m not sure how it got flipped - maybe something to do with the screen capture or resizing process - but I used the same procedure to map it, storing a 1 for each red pixel, generating all the permutations, and storing them in the lookup table. Then I ran it again. And... it got the same result.
I was starting to feel like I was going crazy when I remembered the TheDraw color selection screen:
There are two foreground colors, brown and yellow. Yellow is the bold version of brown. But there is only one background color, which happens to be brown. It dawned on me that what I was looking at was not a red shade block on yellow, but a yellow shade block on red. It really makes you appreciate the constraints of the 90s ANSI artist. After retraining it to look for yellow pixels, it correctly identified the block.
The text was still a garbage fire, but when I combined the ANSI files in Pablodraw, I just retyped it.
Here’s what it looked like after I cleaned it up:
And here is the ANSI file: mp-h&c.ans.
You can view it in something like nfov or Pablodraw, or even with iconv -f 437 mp-h\&c.ans if you’ve got the correct colors and font set up in your terminal.
I don’t intend on losing it again.
It came with a floppy disk with a bunch of different utilities, but the one I used the most was DR SBAITSO.
From the manual, “DR SBAITSO is a program that seems to act intelligently by responding to your queries and pretending to solve your personal problems.”
Sound familiar?
When I first tried ChatGPT, it gave off a lot of DR SBAITSO vibes, especially the way that it slowly typed back to you.
If you’ve never played with it, DR SBAITSO is sort of a weird mash up between ELIZA and a Speak-and-Spell. Here it is embedded in this post, because we’re living in the future:
DR SBAITSO was never very convincing. If you spent any amount of time with it, you could tell it was just deflecting.
But it got me wondering, what if we replaced the internals of DR SBAITSO with ChatGPT but kept the weird synthesized voice?
Once again, we do things not because they are easy, but because we thought they were going to be easy.
I recently came across a project called 86Box which emulates entire IBM PC systems and thought it might be fun to try it out.
First I started with a 386 SX, changed my mind and upgraded it to a 486 DX2/66 and then thought, “Well, am I not made of virtual money?” and then upgraded it one final time to a Pentium 200 with 16 megs of RAM.
DOSBox-X just drops you into DOS but with 86Box you get the full bootup experience:
I added a 500 MB hard drive, installed MS-DOS v6.22 from 3 floppy disk images and it was up and running.
It was pretty easy to add a Sound Blaster card, here’s a Sound Blaster 2.0 that I configured:
90’s kids will remember that the next step is to add:
SET BLASTER=A220 I7 D1 T3
to the AUTOEXEC.BAT. This is an environment variable that tells programs the settings of the sound card, most importantly the IRQ that it’s on.
Next, I copied the contents of the Sound Blaster utilities floppy to C:\SB. DR SBAITSO actually relies upon a memory-resident module called SBTALKER, which does the actual text-to-speech synthesis. So after running SBTALK.BAT to load that into memory, I ran SBAITSO2.EXE (because it’s v2.20?) and it was working fine.
As far as I know, the source code for DR SBAITSO has never been released, but alongside the SBTALKER module there is a utility called SAY.EXE, which can say whatever you want from a text file or as a command-line parameter. So if we can recreate the frontend of DR SBAITSO, we can use that utility to output the actual synthesized voice.
At this point we have a few options - we could recreate the frontend in DOS, or we can write it on a modern computer and have it communicate to the emulated machine just for the text-to-speech synthesis. Since the frontend has to communicate with ChatGPT as well, I opted for the latter.
In order to bridge the emulated IBM PC to my host computer, I added a Novell NE2000 network card.
Next we’ll need packet drivers; you can find the driver for the NE2000 here. I copied the NE2000.COM driver to C:\DOS\DRIVERS and added this to my AUTOEXEC.BAT:
LH C:\DOS\DRIVERS\NE2000.COM 0x60 11 0x300
Make sure it matches the IRQ that you specified when configuring the card. In order to create a bridged network, we have to configure port forwarding, which can be done by adding these lines to 86box.cfg while the emulated machine is off:
[SLiRP Port Forwarding #1]
0_protocol = tcp
0_external = 2048
0_internal = 2048
1_protocol = tcp
1_external = 2049
1_internal = 2049
This sets it up so that the two machines essentially share ports 2048 and 2049 - data sent from the emulated machine can be received on the same port on the host machine and vice-versa.
Finally, we need some sort of program to transfer data between the two. I found mTCP, which is a set of programs for DOS that provide basic functionality like FTP, HTGET (essentially wget/curl for DOS), NC, and PING.
To configure mTCP, we create a copy of the example configuration at C:\MTCP\MTCP.CFG and add this to our AUTOEXEC.BAT:
SET MTCPCFG=C:\MTCP\MTCP.CFG
Finally, we test that it can leverage the packet driver by running DHCP.EXE.
Initially, I figured I could set up a web server on the emulated machine and one on the host machine and send HTTP calls from one to another, but if all the web server on the emulated machine was going to do was shell out to the text-to-speech program, it seemed like overkill. I decided instead to write a batch file LOOP.BAT that looks like this:
@ECHO OFF
del line.txt
:loop
nc -listen 2048 > line.txt
say line.txt
echo OK | nc -target 127.0.0.1 2049
goto loop
What this does is start a netcat server that waits for data. When data arrives, it writes it to line.txt and then calls the text-to-speech program to read that file. Afterwards, it starts a netcat client, signals to the host machine that it’s done talking, and starts over again waiting for netcat data.
I tested it by running nc on my host machine, like this:
echo "HELLO WORLD" | nc localhost 2048
and confirmed that the Sound Blaster said that in the DR SBAITSO voice.
Next I got to work on building the frontend. In order for the frontend to be convincing, ideally it should use an IBM PC font. Luckily, a font pack for modern computers exists - this is the font I ended up using, specifically the one labeled Mx437 IBM VGA 9x16.
I changed iTerm2 to use the font, set the columns and rows to 80x25, and started looking up the ASCII codes for the box drawing characters. I found a library that handled ANSI escape codes and managed to make a pretty convincing replica of the original DOS program.
One of the things DR SBAITSO does when it first starts up is that it asks you for your name. Each character that you type of your name is spoken by the text-to-speech synthesizer. Since the loop from the host to the emulated machine back to the host takes a while, I found it difficult to have it go fast enough to keep up with the user typing. You’ll notice this problem even on the original program, it lags a lot, but in my frontend it was even worse. In order to overcome this, I launched DR SBAITSO in DOSBox-X and started recording the audio output. Then I typed every letter (A-Z), edited the recording in Audacity and exported 26 different .WAV files, one for each letter. When you type the letters for your name, it just plays the wav files, but the actual conversation text-to-speech synthesis is handled through the emulated machine.
The next part, integrating with ChatGPT, was surprisingly easy. OpenAI hasn’t released a public API yet, but a number of folks have reverse-engineered how the API calls are made from their web frontend. Since these API calls end up costing OpenAI money, there is a bit of a cat-and-mouse game between the folks writing the libraries to interact with the ChatGPT API and OpenAI. The most recent development is a Cloudflare CAPTCHA - the library authors have in turn used puppeteer to log in to OpenAI in a browser and automatically solve the CAPTCHA using a CAPTCHA-solving service if one appears.
One quirk of the ChatGPT API library that I chose is that it logs in to OpenAI each time, increasing the probability that I’ll hit a CAPTCHA for my IP address, so I wrapped it in a web service that I can send POST requests to and it’ll proxy them to OpenAI and return the response. This way I can restart my frontend as much as I want without having to re-authenticate with OpenAI.
The sbaitso2 program displays the introduction and asks for the user’s name, buffering the characters that the user types and playing each character’s WAV file when pressed. After the user hits ENTER, it inserts the name into its initial prompt and sends the payload, line by line, to the emulated machine over netcat. After the emulated machine says the line, it sends a netcat ping back to the host machine, which then moves on to the next line. Next it takes the input from the user and sends it to chatgpt-server, which in turn sends it to the ChatGPT API and returns the response. The response is word-wrapped to 80 characters and sent, line by line, to the emulated machine to synthesize the speech.
And that’s how you smoosh together two AIs that were written 30 years apart!
Source code is available here.
I knew a few things about credit card numbers that I had learned from the book Big Secrets and some issues of 2600. American Express cards were always 15 digits and started with a 3 and Visa cards were 16 digits and started with a 4. The first couple digits are the issuing bank, and then the account number, and then they use something called the Luhn algorithm to produce the last digit, which is a checksum of the previous digits.
The algorithm works like this - take a credit card number like:
4123456741234122
Remove the last digit, the check digit, and set it aside:
412345674123412 2
Now reverse the order of the digits:
214321476543214
Then multiply every other digit by 2, starting with the first digit:
2*2 1 4*2 3 2*2 1 4*2 7 6*2 5 4*2 3 2*2 1 4*2
That produces:
4 1 8 3 4 1 8 7 12 5 8 3 4 1 8
Next, separate any double digit numbers into single digits:
4 1 8 3 4 1 8 7 1 2 5 8 3 4 1 8
Add them all together:
4+1+8+3+4+1+8+7+1+2+5+8+3+4+1+8 = 68
Divide by 10 and keep the remainder:
68 mod 10 = 8
Subtract the remainder from 10 (if the remainder is 0, the check digit is 0):
10-8 = 2
This number is the check digit.
The algorithm is used to validate credit card numbers. If someone types in the wrong digit or mishears it on the phone, the check digit won’t match up. But this means that you could also generate valid credit card numbers. If you wanted to generate a Visa credit card number, you could start with 4, add a bunch of random numbers, and then calculate the check digit and add it to the end. The number would almost certainly not match to an actual credit card, but it would look like a valid credit card number.
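The steps above can be sketched as a small JavaScript function that, given everything except the final digit, computes the Luhn check digit:

```javascript
// Compute the Luhn check digit for a partial card number (all digits
// except the last one), following the steps described above.
function luhnCheckDigit(partial) {
  var digits = partial.split("").reverse().map(Number);
  var sum = 0;
  for (var i = 0; i < digits.length; i++) {
    var d = digits[i];
    if (i % 2 === 0) { // every other digit, starting with the first
      d *= 2;
      if (d > 9) d -= 9; // same as adding the two digits of a double-digit number
    }
    sum += d;
  }
  return (10 - (sum % 10)) % 10; // the final % 10 handles a remainder of 0
}
```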
So I wrote a program to generate credit card numbers exactly as I just described, called them back and gave it to them along with a random expiration date in the future and they said, “Great, now we just need a phone number to call you back.”
Now I’m not sure what the first rule of credit card fraud is but it might be “Don’t give the person you’re defrauding your actual telephone number”. I said I had just moved and needed to look it up and I’d call them back. Then I walked to a payphone outside a convenience store, wrote down the number, called them back and gave it to them. They hung up, waited a few minutes, and called the payphone, which I answered. They asked me for a username, gave me a random password, and thanked me for my business.
The Internet was everything I had hoped it would be, for a couple weeks at least. But one day my username and password didn’t work. They obviously had tried billing the credit card and failed. So I generated a new credit card number, walked back to the convenience store and called them from the payphone and asked to set up a new account. It was going great until I gave them the payphone number, at which point they did not thank me for my business.
My payphone number had been blacklisted, but the great thing about living in a metropolitan area in the U.S. in the 90s is that there was always another payphone. I called them from a different one, set up a new account, and was back on the Internet.
For the next few months, this became a bit of a cat-and-mouse game with me and the ISP. I printed out a list of credit card numbers that I had generated and would carry them around with me. Whenever I saw a payphone, I would either call them to set up a new account or write down its location for future use. Sometimes I would get a representative that I had talked to before that would recognize my voice, other times they would insist on calling back the next day, which meant camping out at a payphone hoping it would ring. If I was with a friend, I’d have them do it for me, writing down what I wanted them to say.
I did eventually get more permanent Internet access, but the experience made me fond of payphones and what they provided for me when I needed them the most.
Last year, my daughter Aurora and I were walking around in Muir Woods and we came across this payphone.
Excitedly, I explained what it was and we took turns listening to the dial tone. I put some quarters in and called my cell phone to show her how it worked.
I recently moved into a new place and finally got to ditch Comcast for Sonic, which previously wasn’t in my coverage area. This has a few benefits. It’s a fiber line, so my upload speeds are better. For some reason, they continue to run a server that you can play door games on:
And last but not least, they include a landline with your service. At first I ignored it, but it gnawed at me. A telephone jack that worked and nothing to plug into it.
So I went on The Internet (I have permanent access now!) and I found a phone that I remembered from my childhood.
Apparently they made a lot of these - the one I found was new old stock, still in its original packaging. I plugged it in to the telephone jack and showed it to Aurora. I called it from my cell phone, it rang, and she picked it up and we talked for a bit.
But still, I wasn’t quite satisfied. Another idea emerged.
I started researching payphones. Perhaps the easiest, most obvious route would just be to buy a payphone from payphone.com:
Have you ever used a payphone and thought to yourself, “That would be a great novelty idea for the pool room, family room, or office. What a conversation piece”.
Why yes, payphone.com, yes I have.
But I didn’t like the fact that they just gave you a random payphone, so I started perusing Ebay. I found a seller that had a handful of payphones. They appear to use a potato to take photos of them, but I took a chance and ordered a Pacific Bell payphone. It arrived pretty quickly.
It didn’t smell so great. It had seen some shit (hopefully not literally) but the handset seemed to be in pretty good condition. I looked up the address printed on it and it had come out of a casino in Vegas, which probably explained why it appeared to have personally smoked a pack of cigarettes. I put it on my balcony to air out and that seemed to help.
I’ve never opened up a payphone before, but they had given me the keys and a T-key, so I turned the key and slowly lowered the front of it down. It met some resistance and I had to unplug this plug, which seemed to connect the electronics on the front to the back:
After unplugging it, I was able to take the whole front off and was able to see this:
I also opened the coin box with the T-key, hoping to find a return on my investment, but it was empty and quite difficult to get back in. I’ll admit that I ended up using a hammer to get it back in, so any quarters I deposit into it may be there for a while.
After I determined that it worked by placing a few calls, I started trying to figure out how to mount it.
I ordered a mounting backplate from payphone.com, with the “phone mounting hardware” addon, which turned out to be 20 of these screws. I was scratching my head on how this was actually supposed to work - I could attach the mounting backplate to the wall but wasn’t sure how I would hold a 50lb payphone up while screwing it in to the backplate. What I quickly discovered is that I actually needed these mounting studs, so I had to order those separately and wait. The shipping cost for four studs was pretty frustrating, but I wasn’t sure that I could find the correct ones elsewhere. Once I had those, I could screw them onto the payphone and it would rest on the mounting backplate while I screwed in the other screws. The cover has to be off for you to screw in the remaining screws.
Next, I used a stud finder to detect a stud on the wall and traced the outline of where I wanted the mounting backplate to go. This was tricky because I wanted to run the RJ-11 cable behind the wall - I didn’t want it just hanging below the payphone, but I also wanted to mount the backplate directly to a stud for at least one of the screws. Here’s a good picture that I call “Measure once, cut three times”:
I believe I used one of these hole saw drill bits to cut the hole. I cut a similar hole below and used fish wire to run a RJ-11 cable through it.
Next, I used these 1/4” x 3” hex lag bolts to attach the mounting backplate to the stud, which felt pretty secure and I wasn’t worried about stripping the screw with my drill. At this weight, I probably could have used toggle bolt drywall anchors, but I was happy to be able to use the stud. I think it looks pretty good:
For the hole below, I went the easy route and used a RJ-11 Keystone Coupler so I could just plug the RJ-11 cable into it, snap it into a 1-Port keystone wall plate, and attach the wall plate over the hole. Then I ran a cable from that jack to the telephone jack that provided the landline and tried calling it. It worked!
At this point, I suppose I could have bought a telephone line splitter and hooked up the other phone so that both phones could make and receive calls and technically talk to each other. But there are a few problems with this:
This is when I discovered a crazy piece of technology, a phone line simulator.
This basically creates a closed network between two phones. You can configure it so that if you pick up one phone, the other phone rings, and when that phone picks up, you can talk, and vice-versa. So I bought one, drilled a hole to my daughter’s room, and ran another RJ-11 cable to her phone.
I’m not sure exactly what I was expecting, but at random hours in the day I’ll be working in my office and the payphone will ring and my daughter will tell me about her dolls for a minute or so and then I’ll say, “That’s great, but I have to get back to work.” and we’ll say goodbye until the next time. Most of the time, it’s a nice break.
]]>Each day a new 5-letter word is chosen and you have 6 tries to try to guess it using 5-letter words. The colors of the tiles will change to show how close your guess was to the word.
My friends and I have been playing it for a few days and I’ve been doing pretty good, averaging around 3 or 4 guesses per word. We take screenshots of our guesses each day and compare them.
Naturally the question has come up of what the best starting word is in WORDLE? Which word is most likely to set you up for success?
In the Wheel of Fortune bonus round they start the contestants off with certain letters - RSTLNE. I always assumed this is because these were the most common letters in the English language. From those letters you can form a five-letter word with no repeats - RENTS.
But it turns out that in terms of letter frequency, these aren’t the most common, at least according to an analysis of all the words in the Concise Oxford Dictionary.
Another strategy would be to pick a 5-letter word that contains the most vowels, like ADIEU. I’ve had mixed results with this starting word, sometimes it’s great, sometimes it’s not so helpful.
I think we can do better. If you View Source on the WORDLE page (possibly a crime in some states), you’ll see that the HTML is pretty barebones and mostly just links to a single Javascript file.
We can run this through js-beautify like this:
js-beautify main.js > main-beautified.js
which results in this somewhat easier to read source code.
About halfway down the page, we find two giant arrays:
var Ls = ["cigar", "rebut", "sissy", ...]
As = ["aahed", "aalii", "aargh", ...]
Searching for how these two arrays of 5-letter words are used, it becomes clear that the first array, Ls, is a list of solutions. Each day a different solution is chosen from this array, and the way it’s chosen is simple. A starting date is specified: June 19, 2021. This corresponds to the first entry in the array, “cigar”. The game takes the current date and determines how many days exist between the starting date and today’s date. For example, today is November 24, 2021, so 158 days exist between those two dates. The entry at index 158 is “retch”, so today’s solution is “retch”. If more days have passed since the starting date than solutions exist, it will loop around and start again.
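A sketch of that index calculation (using UTC timestamps to sidestep daylight-saving drift in the day count; the solution count is a parameter rather than the actual array length):

```javascript
// Map a date to an index into the solutions array, as described above.
function solutionIndex(todayUtcMs, solutionCount) {
  var epoch = Date.UTC(2021, 5, 19); // June 19, 2021 (months are 0-based)
  var msPerDay = 24 * 60 * 60 * 1000;
  var days = Math.floor((todayUtcMs - epoch) / msPerDay);
  return days % solutionCount; // loop around when past the end
}
```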
The second array, As, is a list of 5-letter words that the game thinks are valid, but will never be the solution for the day.
We can then extract these words and put them in a JSON file with two keys, solutions and herrings.
Now that we have the data set, we can easily calculate the letter frequencies for the actual solutions:
var words = require('./words.json');
var solutions = words.solutions;

var letterExistsInWord = function(letter, word) {
  return word.indexOf(letter) > -1;
}

var letterCounts = {};

// 97 is the ASCII character code for lower case a, this simply loops over every letter in the alphabet
// https://www.asciitable.com/
for (var charCode = 97; charCode < 123; charCode++) {
  var letter = String.fromCharCode(charCode);
  letterCounts[letter] = 0;
  for (var solution of solutions) {
    if (letterExistsInWord(letter, solution)) {
      letterCounts[letter]++;
    }
  }
}

const letterFrequency = Object.fromEntries(
  Object.entries(letterCounts).sort(([, a], [, b]) => b - a)
);

console.log(letterFrequency);
Note: Github Copilot generated a disturbing amount of this code with little prodding.
This results in:
{
  e: 1056,
  a: 909,
  r: 837,
  o: 673,
  t: 667,
  l: 648,
  i: 647,
  s: 618,
  ...
}
So in this game, RSTLNE are not the most common letters. Instead it’s more like EAROTL. Is LATER a good starting word?
Let’s approach it from another angle. If we loop over every 5-letter word that the game knows about and compare it against each solution, calculating the number of “in the word, right spot” and “in the word, wrong spot” results, we can see which word, on average, produces the most successful result.
var words = require('./words.json');
var solutions = words.solutions;
var herrings = words.herrings;

String.prototype.replaceAt = function (index, replacement) {
  return this.substr(0, index) + replacement + this.substr(index + replacement.length);
}

var processGuess = function (guess, solution) {
  var correctSpot = 0;
  var wrongSpot = 0;
  // deep copy
  var usedSolution = (' ' + solution).slice(1);
  for (var i = 0; i < guess.length; i++) {
    if (guess[i] === solution[i]) {
      correctSpot++;
      usedSolution = usedSolution.replaceAt(i, '0');
    } else {
      for (var j = 0; j < solution.length; j++) {
        if (i === j) {
          continue;
        }
        if (usedSolution[j] !== '0' && guess[i] === solution[j]) {
          usedSolution = usedSolution.replaceAt(j, '0');
          wrongSpot++;
          break;
        }
      }
    }
  }
  return [correctSpot, wrongSpot];
};

var wordResults = {};

for (var solution of solutions) {
  for (var guess of solutions) {
    var results = processGuess(guess, solution);
    if (wordResults[guess]) {
      wordResults[guess][0] += results[0];
      wordResults[guess][1] += results[1];
    } else {
      wordResults[guess] = results;
    }
  }
  for (var guess of herrings) {
    var results = processGuess(guess, solution);
    if (wordResults[guess]) {
      wordResults[guess][0] += results[0];
      wordResults[guess][1] += results[1];
    } else {
      wordResults[guess] = results;
    }
  }
}
Determining how many wrong spots exist is a little tricky - let’s say that the guess is “array” and the solution is “ropes”. When checking whether each letter in the guess exists but is in the wrong spot, the matched position in the solution has to be marked as used. So when processing the second “r” in the guess, it should not count as a wrong spot match, because the “r” in “ropes” was already consumed as a wrong spot match for the first “r” in “array”. The way I maintain this is by overwriting that position in the solution with the character “0” to signify it has been used.
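To sanity-check that bookkeeping, here’s the same algorithm re-sketched in a self-contained form, with a mutable array copy standing in for the replaceAt string trick:

```javascript
// Self-contained re-sketch of processGuess to verify the
// duplicate-letter bookkeeping described above.
function processGuess(guess, solution) {
  let correctSpot = 0, wrongSpot = 0;
  const used = solution.split(''); // mutable copy of the solution
  for (let i = 0; i < guess.length; i++) {
    if (guess[i] === solution[i]) {
      correctSpot++;
      used[i] = '0'; // mark this position as consumed
    } else {
      for (let j = 0; j < solution.length; j++) {
        if (i === j) continue;
        if (used[j] !== '0' && guess[i] === solution[j]) {
          used[j] = '0'; // consume so a repeated letter can't match it again
          wrongSpot++;
          break;
        }
      }
    }
  }
  return [correctSpot, wrongSpot];
}

// "array" vs "ropes": only the first "r" counts as a wrong-spot match
console.log(processGuess('array', 'ropes')); // → [ 0, 1 ]
```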
Running this code will calculate the right spot and wrong spot results for all the possible 5-letter words and store them in wordResults, but it doesn’t sort them.
Now we have a bit of ambiguity about what we consider to be the “best” result, since there are two sets of results for each word.
If we think the most right spots is the best, we can sort by those first and, if they’re tied, sort by wrong spots:
const wordResultsFrequency = Object.fromEntries(
  Object.entries(wordResults).sort(([, a], [, b]) => (b[0] - a[0] === 0) ? b[1] - a[1] : b[0] - a[0])
);
console.log(wordResultsFrequency);
This results in:
{
  saree: [ 1575, 2366 ],
  sooey: [ 1571, 1459 ],
  soree: [ 1550, 2155 ],
  saine: [ 1542, 2238 ],
  soare: [ 1528, 2565 ],
  ...
But maybe maximizing the most right spots isn’t the most important thing. Let’s say that they’re twice as important as wrong spots:
const wordResultsFrequency = Object.fromEntries(
  Object.entries(wordResults).sort(([, a], [, b]) => (b[0]*2 + b[1]) - (a[0]*2 + a[1]))
);
console.log(wordResultsFrequency);
This instead results in:
{
  soare: [ 1528, 2565 ],
  saree: [ 1575, 2366 ],
  seare: [ 1491, 2450 ],
  stare: [ 1326, 2761 ],
  roate: [ 1254, 2888 ],
  ...
So under this arbitrary definition of best, the best starting word in WORDLE is SOARE, which is an obsolete word that means “a young hawk”.
There isn’t a clever name for this system like “Getting Things Done” - I believe this is just called “Being a Human Being That Does Their Job” - but I’ve always wondered if there was a way to make completing tasks more exciting. Yes, there should be an innate satisfaction in just checking off tasks that you do, but what if there were something more tangible?
When I was growing up, my dad had a digital clock that he displayed next to his record player that looked like this:
One day I asked him what the deal was with the crappy LED clock in the living room, and he proudly told me that he had built it himself, from a kit. What he didn’t tell me is that around the same time, probably in the 70s, LEDs had started to completely replace nixie tubes. And while LEDs are smaller, safer, and more reliable than nixie tubes, nixie tubes have the advantage of looking awesome.
There’s a growing community of nixie tube enthusiasts and they seem really intent on building clocks.
They buy a bunch of these tubes, figure out how to wire them up, build interesting bases for them to sit on, program a way for them to keep time, and sell them on Etsy to other nixie tube enthusiasts. Or at least, they try to sell them. If you want a clock with hours, minutes, and seconds, that’s 6 tubes to acquire, and large nixie tubes are kinda expensive. Clocks like these will cost you well over $1000.
Let’s say there’s a handful of people that can actually afford these things. Presumably once these clock builders have saturated that market, they’ll want to cater to regular people. Regular people can’t afford 6 large nixie tubes, but they might be able to afford one. But what can you do with one nixie tube? Well, I guess you can make a single digit clock:
How does a single digit clock work? It flashes the time, one digit at a time.
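The display logic is trivial to sketch: split the time into its digits and flash them one after another. This is my own illustration (timeToFlashSequence is a name I made up); on real hardware each digit would be shown with a pause between them.

```javascript
// A single digit clock shows the time one digit at a time:
// 10:42 becomes the flash sequence 1, 0, 4, 2.
function timeToFlashSequence(hours, minutes) {
  const pad = (n) => String(n).padStart(2, '0');
  return (pad(hours) + pad(minutes)).split('').map(Number);
}

console.log(timeToFlashSequence(10, 42)); // → [ 1, 0, 4, 2 ]
```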
This seems like a bad idea. No one wants to tell time this way. It does look pretty cool, though.
It seems that you could take a single digit nixie clock and have it display the number of tasks that you had left for the day. Every time you completed a task, the number would go down. Watching that number go down seems like it would be pretty satisfying. And at the end of the day, hopefully it would display 0, but if not you could just move the remaining tasks to your backlog and have it display 0 because the point here is not to do all the tasks but to feel like you’ve accomplished something.
Unfortunately, all of the prebuilt single digit nixie tube clocks that I could find are self-contained units - they don’t connect to the Internet and if they do, it’s only to synchronize the correct time to them.
So let’s accomplish something. Let’s build one ourselves.
Nixie tubes come in a lot of different sizes but I wanted one with a giant digit. If you sort this list by symbol height (not an exhaustive list, but the person running the site seems to have a lot of tubes!), two of the largest ones are the Z568M and the IN-18, both with 50mm digits.
The Z568M is the largest European nixie tube. It has a red coating around it to filter out the blue glow, but they also produced the Z5680M which was the same tube but clear.
The IN-18 is the largest Russian nixie tube; it’s a little skinnier and has elongated digits.
I’ve found that the Z568M sells for around $300-500 while the IN-18 runs around $70-100. These are usually new old stock and as far as I can tell there are just boxes and boxes of IN-18s sitting around in Ukraine.
Alternatively, two companies seem to produce new versions of these tubes.
Dalibor Farny sells the R|Z568M, which is a beautiful “resurrection” of the Z568M, although it’s clear like the Z5680M. It costs $155 and comes with a black or stainless steel base.
Millclock makes a similar tube called the ZIN-70 ($129) and a “reborn” version of the IN-18 called the ZIN-18 ($99).
I ended up buying Dalibor Farny’s R|Z568M tube.
Once I had the tube, I needed a way to power it and also a way to set the digits on it.
Nixie tubes require high voltages to illuminate. The tube that I purchased needs 170 V to ignite. After doing some research, I discovered that what I needed was a step-up converter that takes a low voltage and boosts it to the higher voltage.
One of the downsides of working with voltages like these is the fact that they can kill you. Almost every component I purchased having to do with Nixie tubes would remind me of this fact. Like many things, after someone designs and produces a module like this, it is immediately replicated and counterfeited, often with lower quality components. This is why when I decided on the NCH6300HV DC-DC booster, I bought it directly from the manufacturer.
This is what it looks like:
It took me a little while to figure out what I was supposed to do with it.
I took a 5V power supply and cut off the end of it, which revealed two cables, power and ground.
Then I stripped those cables, soldered the wire terminals onto the module, and stuck them into the input wire terminal.
Next I took some 20 AWG wire and stuck it in the output wire terminal, ran those wires to my multimeter, plugged in the power supply and… the multimeter reported some really low voltage.
I re-read the product page for the module and discovered that on the input wire terminal, the white connector needs to be connected to power to actually enable high voltage. So I took a breadboard jumper wire and jammed it between the white connector and the power connector.
This time the multimeter read 126 V. Closer, but still not quite right.
Re-reading the product page a second time, I noticed that one of the reviews mentioned a “multi-turn pot”, which I determined to be the potentiometer in the bottom of the module. Sure enough, taking a jeweler’s screwdriver and slowly turning it to the left increased the voltage until it reached 170 V on the multimeter, at which point I stopped turning it and celebrated brute forcing my way into high voltage.
One thing I noticed is that immediately after disconnecting the power supply, the multimeter still registered a significant amount of voltage and took about a minute to fully drop to 0. I’d been very careful not to touch the output wires or module itself while the power was on, but from that point forward, whenever I turned off the power supply, I would set a timer on my phone for 60 seconds and continue to treat the module as “live” until the timer went off.
Now that I could power the tube, I needed a way to set the digits. I found a project called the Nixie Socket Driver. It consists of two parts, the socket driver and the socket itself.
The socket driver has three control inputs (CLK, EN, DIN) which you use to specify which digit to display. You can wire up a device like an ESP8266 or ESP32 to the control inputs and power the driver board, while also supplying the high voltage which it delivers to the tube.
The socket has pin sockets for the nixie tube to go into, a resistor for limiting the high voltage, and pins to connect to the socket driver.
The two part system means that you can buy different sockets for different nixie tubes and change them out. And the socket drivers can be run in series, so if you wanted to run more than one nixie tube, you can leverage the same high voltage for all of them and drive them all from the same device.
I ordered a socket for my tube and a socket driver. I’m not sure exactly what I was expecting, but perhaps I should have paid more attention to the “self-assembly kit” or “pin headers require soldering” parts of the description.
Here’s what was in the socket box:
The assembly instructions are pretty clear. First you stick the pin sockets in, solder them, and cut the excess pins off the bottom with some wire cutters.
Then you solder the pin headers on, this is the part that will stick into the socket driver board.
I’ve found that when soldering pin headers, sticking them on a breadboard helps keep them on straight. After I finished assembling it, I had something like this:
Next, I soldered the pin headers on the socket driver board.
As you can imagine, I was getting pretty tired of soldering by this point. When I was done, my socket driver board looked like this:
The socket driver comes with example Arduino code for running the board. I took an ESP8266 D1 Mini, soldered the pin headers on, and programmed it with the example code. I set DIN, CLK, and EN to D2, D3, and D4, respectively.
Let’s take a look at the socket driver board pinout again:
You can connect the inputs on the left side (helpful if you’re running them in series) or the bottom. I decided to just use the left side.
I connected DIN, EN, and CLK, to D2, D3, and D4 on the D1 mini board. HVIN to HV on the power supply and GND to GND on the power supply. Finally I connected VCC to the 5V on the D1 mini.
Socket Driver Board Pin | Pin | Board |
---|---|---|
DIN | D2 | D1 Mini |
CLK | D3 | D1 Mini |
EN | D4 | D1 Mini |
VCC | 5V | D1 Mini |
HVIN | HV | Power Supply |
GND | GND | Power Supply |
It worked! Here’s what it looks like cycling through all the numbers:
I’d been trying to get better at using Fusion 360 for a while, but I’ve more or less given up at this point and mostly design in OpenSCAD now. If you’re struggling with CAD software as well, give it a try - it’s surprisingly intuitive to write code instead of trying to manipulate a model.
I decided to use the hex standoffs at the bottom of the nixie socket driver board as a way of mounting it onto the enclosure - I made little feet to hold each of them in. Then I cut small holes in the back for both the D1 mini USB input and the cables for the high voltage power.
module stand() {
    difference() {
        difference() {
            cube([100, 100, 48]);
            translate([4, 4, 4])
                cube([92, 92, 48]);
        }
        translate([40, 92, 4])
            cube([20, 20, 8]);
        translate([87, 92, 4])
            cube([8, 20, 20]);
    }
}

module foot() {
    m3nut = 6.6;
    difference() {
        cube([10, 10, 10]);
        translate([5, 5, 10-8])
            cylinder(d=m3nut, h=8.01, $fn=6);
    }
}

stand();
translate([(100-55)/2 + 9.19, (100-55)/2 - 2.5, 4]) foot();
translate([(100-55)/2 + 26.62 + 9.19, (100-55)/2 - 2.5, 4]) foot();
translate([(100-55)/2 + 9.19, (100-55)/2 + 49.48 - 2.5, 4]) foot();
translate([(100-55)/2 + 26.62 + 9.19, (100-55)/2 + 49.48 - 2.5, 4]) foot();
I printed it with Prusament PLA Prusa Galaxy Black filament, which embeds some glitter in it. After sticking the socket driver board into the enclosure, it looks like this:
I’d been walking around Heath Ceramics in San Francisco the other day and picked up a few tile samples for $1 each. Pro-tip: if they ask what you’re doing with them, saying something like “getting some ideas for my kitchen backsplash” will prompt fewer questions than “creating a nixie tube enclosure”.
I stuck one of these tiles to the front with Gorilla Glue, here’s what it looks like with the D1 Mini and power cables installed:
It’s kind of a tight fit in there - I had to bend the 3-pin header connectors on the left upward a bit to actually plug wires into them. Then I put the socket onto the socket driver and the tube onto the socket:
For the cover, I measured the diameter of the tube base and designed this floppy disk looking thing:
// Cover with the hole cut out for the tube
difference() {
    cube([100, 100, 1], center=true);
    translate([0, 0, -2])
        cylinder(d=54, h=4);
}

// Ridge so that the cover will stay in place
translate([0, 0, 1])
    difference() {
        cube([91.5, 91.5, 2], center=true);
        cube([88, 88, 4], center=true);
    }

// Rectangle in the front for the tile
translate([-50, -50 - 7.55, -0.5])
    cube([100, 7.55, 1]);
Next I printed it, flipped it over and snapped that into place on top. I didn’t glue it in case I need to perform maintenance:
I could have put the power supply into the nixie stand enclosure as well but decided to move it into its own enclosure to save space (perhaps a reason to use the smaller NCH8200HV module instead). I made something similar with cutouts for the input and output wires.
difference() {
    cube([45.5+4, 30+4, 20+4+4]);
    translate([2, 2, 2]) {
        cube([45.5, 30, 20+4+4]);
    };
    translate([45.5+2, 21.5+2-2, 16.5+2]) {
        cube([2, 8.5, 3]);
    };
    translate([0, 18, 16.5+2]) {
        cube([2, 12, 3]);
    };
}
It was very difficult to thread the wires through the cutouts; I’d probably make them a bit bigger if I were to do it again.
The lid was a little trickier: I wanted to add some holes for heat to dissipate, but none big enough to stick my finger in. I’m not sure where I found this fenestration code, but it seems to work:
x = 45.5+4;
y = 30+4;
z = 2;

fen_x = 14; // fenestrations on x axis
fen_y = 10; // fenestrations on y axis
fen_size = 5; // size of fenestrations as a % of total axis size

// calculate fenestration size
fen_size_x = fen_size * x / 100;
fen_size_y = fen_size * y / 100;

// calculate space remaining and then divide by number of windows needed + 1 to get the desired size of the struts
strut_x = (x - fen_x * fen_size_x) / (fen_x + 1);
strut_y = (y - fen_y * fen_size_y) / (fen_y + 1);

union() {
    // take away windows from fenestrated surface
    translate([2, 2, 0]) {
        difference() {
            cube(size=[x, y, z]); // fenestrated surface
            for (i = [0:fen_x - 1]) {
                translate([i * (fen_size_x + strut_x) + strut_x, 0, 0])
                for (j = [0:fen_y - 1]) {
                    translate([0, j * (fen_size_y + strut_y) + strut_x, -1])
                        cube([fen_size_x, fen_size_y, z+2]); // the fenestrations have to start a bit lower and be a bit taller, so that we don't get 0 sized objects
                }
            }
        }
    }
    difference() {
        cube([45.5+4+4, 30+4+4, 6]);
        translate([2, 2, 0]) {
            cube([45.5+4, 30+4, 10]);
        };
    }
}
Nixie tubes can last hundreds of thousands of hours but you probably don’t want to just leave them on all the time - you’ll want a way to remotely turn them on and off. For this kind of thing, I’ve been using these smart plugs from Cloudfree (I’m pretty sure Cloudfree is just a guy in college flashing smart plugs between classes): CloudFree Smart Plug 2. They run an open-source firmware called Tasmota which has a little web server that you can access to turn it on and off, but it’s mostly controlled with MQTT, a messaging protocol for IoT devices.
I run Home Assistant on a Raspberry Pi on my network, and it can be set up to control these devices. You can also configure schedules so that they turn on and off at certain times.
I found the initial experience of setting up Home Assistant to run a MQTT broker and detect these devices to be pretty confusing, but now that I’ve got it working I’m pretty happy with it.
The steps that I follow to provision new plugs are: plug in the smart plug, connect to its WiFi network (tasmota-xxx), and put in my WiFi credentials. Then I restart it, click Configuration, click Configure MQTT, and change the Host, User, and Password to my MQTT broker. Finally, I go to the console and type SetOption19 1, which enables auto-discovery.
There’s a newer way to integrate Tasmota and Home Assistant with the official Tasmota integration, so maybe it’ll be easier for new users.
After it’s configured as a device in Home Assistant, you can create an access token for Home Assistant by clicking on your profile in the lower left of the web interface, going to the section called Long-Lived Access Tokens, and hitting Create Token.
In Home Assistant, determine the name of your device by clicking on the device and looking for the Entity ID. Mine is switch.nixie_bulb_2.
Now we can toggle the nixie tube on and off by running:
curl -X POST -H "Authorization: Bearer ACCESS_TOKEN" -H "Content-Type: application/json" -d '{"entity_id": "switch.nixie_bulb_2"}' http://homeassistant.local:8123/api/services/switch/toggle
If you want to explicitly turn it on or off instead of toggling, replace toggle with turn_on or turn_off.
I had programmed the ESP8266 D1 mini to cycle through numbers using the example Arduino code, but once I had verified that it was working, what I really wanted was a way to set the number via an API.
Here’s what I ended up with:
#include <Arduino.h>
#include <ArduinoJson.h>
#include <FS.h>
#include <map>
#include <functional>
#include <UrlTokenBindings.h>
#include <RichHttpServer.h>

using namespace std::placeholders;

#define XQUOTE(x) #x
#define QUOTE(x) XQUOTE(x)

#ifndef WIFI_SSID
#define WIFI_SSID mywifinetwork
#endif

#ifndef WIFI_PASSWORD
#define WIFI_PASSWORD hunter2
#endif

using RichHttpConfig = RichHttp::Generics::Configs::EspressifBuiltin;
using RequestContext = RichHttpConfig::RequestContextType;

SimpleAuthProvider authProvider;
RichHttpServer<RichHttpConfig> server(80, authProvider);

#define DIN_PIN D2 // Nixie socket driver (shift register) serial data input pin
#define CLK_PIN D3 // Nixie socket driver clock input pin
#define EN_PIN D4  // Nixie socket driver enable input pin

// Bit notation of 10-segment tube digits
uint16_t digit_nixie_tube[] = {
  0b0000000000000001, // 0
  0b0000000000000010, // 1
  0b0000000000000100, // 2
  0b0000000000001000, // 3
  0b0000000000010000, // 4
  0b0000000000100000, // 5
  0b0000000001000000, // 6
  0b0000000010000000, // 7
  0b0000000100000000, // 8
  0b0000001000000000  // 9
};

int number = 0;

// Function prototype with optional parameters
void NixieDisplay(int tube1 = 255, int tube2 = 255, int tube3 = 255, int tube4 = 255, int tube5 = 255, int tube6 = 255);

// Function with optional parameters
void NixieDisplay(int tube1, int tube2, int tube3, int tube4, int tube5, int tube6) {
  StartShiftOutData();
  if (tube6 != 255)
    ShowDigit(tube6); //ShowSymbol(tube6);
  if (tube5 != 255)
    ShowDigit(tube5); //ShowSymbol(tube5);
  if (tube4 != 255)
    ShowDigit(tube4); //ShowSymbol(tube4);
  if (tube3 != 255)
    ShowDigit(tube3); //ShowSymbol(tube3);
  if (tube2 != 255)
    ShowDigit(tube2); //ShowSymbol(tube2);
  if (tube1 != 255)
    ShowDigit(tube1); //ShowSymbol(tube1);
  EndShiftOutData();
}

void ShowDigit(int digit) {
  ShiftOutData(digit_nixie_tube[digit]);
}

void StartShiftOutData() {
  // Ground EN pin and hold low for as long as you are transmitting
  digitalWrite(EN_PIN, LOW);
}

void ShiftOutData(uint16_t character) {
  uint8_t first_half = character >> 8;
  uint8_t second_half = character;
  shiftOut(DIN_PIN, CLK_PIN, MSBFIRST, first_half);
  shiftOut(DIN_PIN, CLK_PIN, MSBFIRST, second_half);
}

void EndShiftOutData() {
  // Return the latch pin high to signal chip that it
  // no longer needs to listen for information
  digitalWrite(EN_PIN, HIGH);
}

void handleStatus(RequestContext& request) {
  request.response.json["ip_address"] = WiFi.localIP().toString();
  request.response.json["free_heap"] = ESP.getFreeHeap();
  request.response.json["version"] = "builtin";
  request.response.json["number"] = number;
}

void handleSetNumber(RequestContext& request) {
  JsonObject body = request.getJsonBody().as<JsonObject>();
  if (!body["number"].isNull()) {
    if (number != body["number"]) {
      number = body["number"];
      NixieDisplay(number);
    }
    request.response.json["number"] = number;
  } else {
    request.response.setCode(400);
    request.response.json["error"] = "Must contain key `number'";
  }
}

void handlePutNumber(RequestContext& request) {
  int number_to_set = atoi(request.pathVariables.get("number"));
  Serial.println(number_to_set);
  // Set the tube to the digit given in the URL path
  if (number != number_to_set) {
    number = number_to_set;
    NixieDisplay(number);
  }
  request.response.json["number"] = number;
}

void handleGetNumber(RequestContext& request) {
  request.response.json["number"] = number;
}

void handleStaticResponse(const char* response) {
  server.send(200, "text/plain", response);
}

void setup() {
  Serial.begin(115200);
  Serial.println("Starting up...");
  WiFi.begin(QUOTE(WIFI_SSID), QUOTE(WIFI_PASSWORD));
  authProvider.disableAuthentication();

  server
    .buildHandler("/number")
    .on(HTTP_POST, handleSetNumber)
    .on(HTTP_GET, handleGetNumber);
  server
    .buildHandler("/number/:number")
    .on(HTTP_POST, handlePutNumber);
  server
    .buildHandler("/status")
    .on(HTTP_GET, handleStatus);
  server.clearBuilders();
  server.begin();

  pinMode(DIN_PIN, OUTPUT);
  digitalWrite(DIN_PIN, LOW);
  pinMode(CLK_PIN, OUTPUT);
  digitalWrite(CLK_PIN, LOW);
  pinMode(EN_PIN, OUTPUT);
  digitalWrite(EN_PIN, LOW);
  NixieDisplay(number);
}

void loop() {
  server.handleClient();
}
This uses a library called Rich HTTP Server which lets you build modern REST APIs on ESP8266 devices. It conflicted with some newer ESP8266 libraries, so I had to patch it to get my code to compile correctly.
The code creates a web server with the following API:
- POST /number - sets the number via the number key in the JSON payload
- GET /number - returns the current number
- POST /number/:number - sets the number via the number in the URL path
- GET /status - returns the device status, including the current number
Why two ways to set the number? I built the JSON payload one first and then figured I wanted a more lightweight way to call it and ended up leaving both in.
Now that we have an API for the nixie tube, we can build a simple test case: a fidget device.
I picked up the Stack Overflow The Key macropad when it was released, which probably wasn’t a great purchase, but the build quality was surprisingly good and you can configure it using QMK.
I swapped out the keycaps with some MiTo SA Laser Mitowaves and configured the keys to F13, F14, and F15.
key | action
---|---
outrun | toggle the nixie tube on or off
ramen | decrement the number
pills | increment the number
Then I configured Karabiner Elements to execute shell scripts when each of them is pressed.
The first script is the same one we saw before, it just toggles the nixie tube on and off:
curl -X POST -H "Authorization: Bearer ACCESS_TOKEN" -H "Content-Type: application/json" -d '{"entity_id": "switch.nixie_bulb_2"}' http://homeassistant.local:8123/api/services/switch/toggle
Since the nixie tube now has an API, we can write a small script called nixie
that sets the number by dispatching an HTTP request:
#!/bin/bash
curl --silent --location --request POST 'http://nixie.local/number' \
    --header 'Content-Type: application/json' \
    --data-raw "{\"number\": \"$1\"}" > /dev/null
When you type something like nixie 4, it will set the number 4 on the nixie tube.
Then we can write another script, nixie-change, that calls the nixie script:
#!/bin/bash
CURRENT=`cat /Users/bert/code/nixie-change/current`
if [[ $1 == 'inc' ]]; then
    ((CURRENT=CURRENT+1))
elif [[ $1 == 'reset' ]]; then
    CURRENT=0
else
    ((CURRENT=CURRENT-1))
fi
if [[ $CURRENT == -1 ]]; then
    CURRENT=9
elif [[ $CURRENT -gt 9 ]]; then
    CURRENT=0
fi
echo $CURRENT > /Users/bert/code/nixie-change/current
/Users/bert/bin/nixie $CURRENT
and set the other two keys to nixie-change dec and nixie-change inc.
It would make a lot of sense to add API methods to increment and decrement the current number, but I was feeling too lazy to actually reprogram the D1 mini, so I just stored the current state locally in the current file.
Here’s what it looks like working together:
We could use the fidget device to set the correct number of tasks at the beginning of the day on the nixie tube and manually decrement it each time we completed one, but ideally the nixie tube would just automatically keep in sync with the TODO list.
I currently use Todoist to manage my TODO list. After Wunderlist shut down, I tried a bunch of different apps for my TODO list like Notion, Tot, and Apple Notes, but ultimately found that Todoist met my needs best.
Todoist has a Webhooks API, which allows you to receive an HTTP POST on events that you subscribe to. There are a bunch of different events that you can subscribe to, but I decided to subscribe to these:
Event Name | Description |
---|---|
item:added | An item was added |
item:updated | An item was updated |
item:deleted | An item was deleted |
item:completed | An item was completed |
item:uncompleted | An item was uncompleted |
In the Todoist Manage App page, it lets you define a Webhook callback URL, but it doesn’t ask you to define a shared secret to help you determine if the request is actually coming from Todoist. So the callback URL itself has to contain a secret. We can generate a new one like this:
cat /dev/urandom |base64|tr -dc 'a-z0-9'|fold -w 32|head -1
Then we’ll stick that at the end of our URL, like this:
https://todoist-webhook-receiver.example.org/process/ddpltohpmme2c7vmr36c6j98fjuevfkz
And write a script to handle this incoming HTTP POST. I wrote this in Deno:
import { Application, Router, Status } from "https://deno.land/x/oak@v9.0.1/mod.ts";
import "https://deno.land/x/dot_env@0.2.0/load.ts";
import ky from "https://cdn.skypack.dev/pin/ky@v0.28.5-EK5VERfsxXvTNWFPnGlK/mode=imports/optimized/ky.js";

const config = JSON.parse(Deno.readTextFileSync('./config/default.json'));

let triggered = false;

const router = new Router();
router
  .post("/process/:secret", async (ctx) => {
    if (!(ctx.params && ctx.params.secret && ctx.params.secret === Deno.env.get("SECRET"))) {
      ctx.response.status = Status.Unauthorized;
      ctx.response.body = 'Unauthorized';
      return;
    }
    const result = ctx.request.body();
    if (result.type === "json") {
      const data = await result.value;
      const validEventNames = new Set(["item:added", "item:updated", "item:completed", "item:uncompleted", "item:deleted"]);
      if (data.event_name &&
          validEventNames.has(data.event_name) &&
          data.event_data &&
          data.event_data.project_id &&
          data.event_data.project_id == config.project_id) {
        console.log(`Received webhook ${data.event_name} for project_id ${data.event_data.project_id}`);
        triggered = true;
      }
    }
    ctx.response.body = "OK";
  });

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

app.addEventListener("listen", ({ hostname, port, secure }) => {
  console.log(
    `Listening on: ${secure ? "https://" : "http://"}${hostname ?? "localhost"}:${port}`,
  );
});

const checkAndDispatch = async () => {
  if (triggered) {
    triggered = false;
    const url = `https://api.todoist.com/rest/v1/tasks?project_id=${config.project_id}`;
    try {
      const data = await ky(url, {
        headers: {
          "Authorization": "Bearer " + Deno.env.get("TODOIST_ACCESS_TOKEN")
        }
      }).json();
      const filteredData = data.filter((task: { section_id: number; }) => (task.section_id == 0));
      const numberOfTasks = filteredData.length;
      console.log(`Number of tasks: ${numberOfTasks}`);
      console.log(`Dispatching to ${config.nixie_relay_url}`);
      await ky(config.nixie_relay_url, {
        method: 'POST',
        json: {
          number: numberOfTasks,
        }
      });
    } catch (e) {
      console.error('Failed to retrieve tasks', e);
    }
  }
  setTimeout(checkAndDispatch, 1000);
};

await checkAndDispatch();
await app.listen({ port: config.port });
The incoming HTTP POST doesn’t contain the number of tasks in the payload, so we have to make a separate call to the Todoist API to get that count. I’ve configured my Todoist project to have a section called “Backlog” that I can move tasks into if I don’t complete them. I don’t want these to count towards the total, so I filter them out.
After it calculates the task count, it dispatches an HTTP request to the nixie tube to update the number. The checkAndDispatch logic ensures that if we get a flood of task updates, it still only calculates and dispatches once per second so our API key doesn’t get rate-limited.
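That throttling pattern is simple enough to sketch on its own (makeCoalescer is my name for it, not something from the original code): events only flip a flag, and a periodic tick does the expensive work at most once per interval.

```javascript
// Minimal sketch of the trigger-flag pattern: many events may arrive
// in a burst, but the expensive work runs at most once per tick.
function makeCoalescer(work) {
  let triggered = false;
  return {
    trigger() { triggered = true; },  // called on every incoming webhook
    tick() {                          // called once per interval
      if (triggered) {
        triggered = false;
        work();
      }
    },
  };
}

let runs = 0;
const c = makeCoalescer(() => runs++);
c.trigger(); c.trigger(); c.trigger(); // a burst of webhooks...
c.tick();                             // ...causes a single dispatch
c.tick();                             // nothing pending, nothing runs
console.log(runs); // → 1
```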
While this setup will work as-is, I don’t like running open web servers on my home Internet connection. If my Internet goes down, the webhook requests would start failing and Todoist might stop sending webhooks to me until I re-enable them. I’ll also have to open a port to the server running this code, something like Shodan will likely pick up the IP, and soon I’ll be inundated with requests trying to hack the server.
Ideally I’d like to run this webhook receiver on an external server I control, but it still needs to communicate with the nixie tube to set the number, and I don’t want to open a port to my ESP8266 D1 Mini - that would be even worse.
I’ve been hearing a lot of great things about Tailscale and had been meaning to try it for a while, so I decided to deploy it for this.
What this does is create a VPN between the external server running the Webhook Receiver and the box in my home network running the relay, which happens to be a Mac Mini. The Mac Mini then exists on two networks, the home network and the VPN. The Tailscale IPs are in the 100.x.y.z range. Setting it up is as simple as downloading Tailscale and running it on both the external server and Mac Mini.
The relay code (also written in Deno) acts as a proxy:
import { Application, Router } from "https://deno.land/x/oak@v9.0.1/mod.ts";
import "https://deno.land/x/dot_env@0.2.0/load.ts";
import ky from "https://cdn.skypack.dev/pin/ky@v0.28.5-EK5VERfsxXvTNWFPnGlK/mode=imports/optimized/ky.js";

const config = JSON.parse(Deno.readTextFileSync('./config/default.json'));

const router = new Router();
router
  .post("/number", async (ctx) => {
    const result = ctx.request.body({ type: "json" });
    const data = await result.value;
    console.log(`Sending ${data.number} to Nixie Tube...`);
    await ky(config.nixie_url, {
      method: 'POST',
      json: {
        number: data.number,
      }
    }).json();
    ctx.response.body = "OK";
  });

const app = new Application();
app.use(router.routes());
app.use(router.allowedMethods());

app.addEventListener("listen", ({ hostname, port, secure }) => {
  console.log(
    `Listening on: ${secure ? "https://" : "http://"}${hostname ??
      "localhost"}:${port}`,
  );
});

await app.listen({ port: config.port });
It just receives the request to set the nixie tube number and replays it to the nixie tube. Note that there is no authentication because this server isn’t exposed to the Internet; it is only accessible via the Tailscale VPN.
Here you can see me adding a new task to Todoist and the count increasing and then slowly checking off tasks and the count decreasing. There’s a bit of a delay from when you take an action in Todoist to it updating the nixie tube. I ran some tests and the bulk of the latency is actually waiting for the webhook to get sent. The relay over Tailscale and setting the number on the nixie tube is basically instantaneous.
I hope that I never have more than 9 tasks in a day.
But if you try to scan it in with the iOS Camera, it just says “No usable data found”. iOS 15 will support reading Smart Health cards but it won’t be released until the fall, so until then we have these mostly useless QR codes. So let’s do something with them!
First of all, it’s a giant QR code. That’s because it contains a lot of information.
Basically it has everything you see on your physical COVID-19 Vaccination card, as well as some stuff to make it verifiable.
When you scan in the QR code, you’re left with a long string of numbers:
shc:/567629095243206034602924374044603122295953265460346029254077280433602870286471674522280928613331456437653141590640220306450459085643550341424541364037063665417137241236380304375622046737407532323925433443326057360106453131537170742424415029455972454462384130574231626537750944666231385374207252266370320021230732342826007357347620675225242542434443365967764574067073123636093772595838266432434143252441404569367567362665384022664500636159661038617441593755114420215361552769670906533871242972053171577335753774413320123639683322330612270052507227576577446606702204202156773729380633535507126624673205361109503236367143566269760425425534045727685543420036567429266507352272506326422069256658224033074566392260203158586562624362533276354430615956530739043522115636271143384533110575534261752367334226563521294568426754712127552677107668432172753255064552746554615739673550617728756871097033767337695966210304381130304500435475435210353973003960307652455467775511593263103465743029433823641211357534401010401077316961303852037607733907747764707339594027267441553712674409352125237729233960246853714132220729232331054233207512690023672252732528087722400556095771265527403011245744405432567511705040267037526962207633102740546134540569090872317700587438273575755455372905626630206937003536710575053558682341736729581176443953235350330326750665215966734053015903522428303161087254306800082240413276370671614523382904224405756140440576530835397175712910434009297334444500266824390007747123677140617403097326520442732041343460207258
You can take those numbers and convert them to a base64 string by splitting them into pairs, adding 45 (the ASCII code for -) to each pair, and taking the character for each resulting code. For example, the first two pairs are 56 and 76: 45 + 56 = 101, which is the character e, and 45 + 76 = 121, which is y. Keep doing that until all the pairs are done and we have another long string:
eyJ6aXAiOiJERUYiLCJhbGciOiJFUzI1NiIsImtpZCI6IjNLZmRnLVh3UC03Z1h5eXd0VWZVQUR3QnVtRE9QS01ReC1pRUxMMTFXOXMifQ.3ZLLbtswEEV_JZhuZYkSVKfWLknRx6YokLSbwAuaGlsM-BD4MOIG-vfOyApaFEFWXYXQhpyZw3sv9QQ6RuhgSGmMXVXFEVUZrQxpQGnSUCoZ-ljho7SjwVhRd8YABbjdHrp63bStEJu2LtfvPxRwVNA9QTqNCN39H-a_uHfnzYo3sC1ABezRJS3Nbd49oEpM2Q86_MQQtXekry1FWdO1fHqdXW-QewJGn4PCu_lGWArFogCUN4ZoTCiALggnkkXkbMyPYKjheb4T1PC8eQH8XSZN82xbWjxDpNWGePBJZqWpctBHdGz7yqXBuxMd3ZawncjfTpP_jzIxqt6sNyvRrhoB01S8KKZ-XcxXa7PTv-TiKyaZcpzd8hMl7OnwKJXSDm98PxOU77U7zLrjKSa0y4vT4wzmsvThUHGwVdR9pY6PBFDzJDTiEqbtVMC4JDDL2WNAx9r-DpCavFI5zCU2e6ftGdHUK8EfYUcMex8s_UGsRarkAyN7HUcjOc2r65uLz-gwSHPxxcdRJ2koKArR-PQt2x2PgqDVvpJg8yYTbDb_N0Gx3nBhovUb.h0aEIKLj5ucKq-5CUVMyR3tjZDSJ1CY2xjUY2yb5PTtxtJ7XU6JvOYZ-GqET-4wtDptUjw06vGa1WvAVOOiAug
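That pair-splitting scheme is only a few lines of Python (`decode_shc` is my own name for it):

```python
def decode_shc(shc: str) -> str:
    """Turn an shc:/ digit string back into its JWS text.

    Each pair of digits, plus 45 (the ASCII code for "-"), gives one character.
    """
    digits = shc[len("shc:/"):] if shc.startswith("shc:/") else shc
    pairs = (digits[i:i + 2] for i in range(0, len(digits), 2))
    return "".join(chr(int(pair) + 45) for pair in pairs)
```

Running it over the first three pairs of the QR string above (56, 76, 29) yields the familiar `eyJ` prefix.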
Readers that have spent too much time debugging JWTs or OpenID Connect will recognize the telltale ey prefix of a JWT. In fact, throwing this into jwt.io will spit out the header for us:
{
"zip": "DEF",
"alg": "ES256",
"kid": "3Kfdg-XwP-7gXyywtUfUADwBumDOPKMQx-iELL11W9s"
}
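If you’d rather not paste the token into jwt.io, the header can be decoded locally with the standard library; here’s a sketch (`decode_jws_header` is my own helper name):

```python
import base64
import json


def decode_jws_header(token: str) -> dict:
    """Decode the base64url header segment of a JWS/JWT compact token."""
    header_b64 = token.split(".")[0]
    # base64url strips the trailing "=" padding, so restore it before decoding.
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Feeding it the long string above gives back exactly the JSON header shown.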
The payload looks a little wacky though. That’s because it’s base64url-encoded. We can decode it to get a bunch of bytes and then decompress those bytes with the INFLATE algorithm to get a JSON string.
Here’s what that looks like in Node:
const pako = require('pako');
var payload = '3ZLLbtswEEV_JZhuZYkSVKfWLknRx6YokLSbwAuaGlsM-BD4MOIG-vfOyApaFEFWXYXQhpyZw3sv9QQ6RuhgSGmMXVXFEVUZrQxpQGnSUCoZ-ljho7SjwVhRd8YABbjdHrp63bStEJu2LtfvPxRwVNA9QTqNCN39H-a_uHfnzYo3sC1ABezRJS3Nbd49oEpM2Q86_MQQtXekry1FWdO1fHqdXW-QewJGn4PCu_lGWArFogCUN4ZoTCiALggnkkXkbMyPYKjheb4T1PC8eQH8XSZN82xbWjxDpNWGePBJZqWpctBHdGz7yqXBuxMd3ZawncjfTpP_jzIxqt6sNyvRrhoB01S8KKZ-XcxXa7PTv-TiKyaZcpzd8hMl7OnwKJXSDm98PxOU77U7zLrjKSa0y4vT4wzmsvThUHGwVdR9pY6PBFDzJDTiEqbtVMC4JDDL2WNAx9r-DpCavFI5zCU2e6ftGdHUK8EfYUcMex8s_UGsRarkAyN7HUcjOc2r65uLz-gwSHPxxcdRJ2koKArR-PQt2x2PgqDVvpJg8yYTbDb_N0Gx3nBhovUb';
var bytes = Buffer.from(payload, 'base64');
var inflatedPayload = pako.inflateRaw(bytes, { to: 'string' });
console.log(inflatedPayload);
which outputs:
{
"iss": "https://spec.smarthealth.cards/examples/issuer",
"nbf": 1624400941.658,
"vc": {
"type": [
"https://smarthealth.cards#health-card"
],
"credentialSubject": {
"fhirVersion": "4.0.1",
"fhirBundle": {
"resourceType": "Bundle",
"type": "collection",
"entry": [
{
"fullUrl": "resource:0",
"resource": {
"resourceType": "Patient",
"name": [
{
"family": "Fauci",
"given": [
"Anthony",
"S."
]
}
],
"birthDate": "1969-04-20"
}
},
{
"fullUrl": "resource:1",
"resource": {
"resourceType": "Immunization",
"status": "completed",
"vaccineCode": {
"coding": [
{
"system": "http://hl7.org/fhir/sid/cvx",
"code": "207"
}
]
},
"patient": {
"reference": "resource:0"
},
"occurrenceDateTime": "2021-01-01",
"performer": [
{
"actor": {
"display": "ABC General Hospital"
}
}
],
"lotNumber": "0000420"
}
},
{
"fullUrl": "resource:2",
"resource": {
"resourceType": "Immunization",
"status": "completed",
"vaccineCode": {
"coding": [
{
"system": "http://hl7.org/fhir/sid/cvx",
"code": "207"
}
]
},
"patient": {
"reference": "resource:0"
},
"occurrenceDateTime": "2021-01-29",
"performer": [
{
"actor": {
"display": "ABC General Hospital"
}
}
],
"lotNumber": "0000069"
}
}
]
}
}
}
}
The key iss is the issuer. In this case, the issuer is https://spec.smarthealth.cards/examples/issuer. Issuers publish their public keys at the iss value + /.well-known/jwks.json, so we should be able to go to https://spec.smarthealth.cards/examples/issuer/.well-known/jwks.json and download them:
{
"keys": [
{
"kty": "EC",
"kid": "3Kfdg-XwP-7gXyywtUfUADwBumDOPKMQx-iELL11W9s",
"use": "sig",
"alg": "ES256",
"crv": "P-256",
"x": "11XvRWy1I2S0EyJlyf_bWfw_TQ5CJJNLw78bHXNxcgw",
"y": "eZXwxvO1hvCY0KucrPfKo7yAyMT6Ajc3N7OkAB6VYy8"
},
...
]
}
Unsurprisingly, the kid matches the kid found in the header of our JWT. By using the public key, we can verify that this card came from the issuer, because only they have the private key necessary to sign it.
When you get a QR code from the Digital COVID-19 Vaccine Record portal, these are issued by https://myvaccinerecord.cdph.ca.gov/creds and thus their public keys can be found at https://myvaccinerecord.cdph.ca.gov/creds/.well-known/jwks.json. Since anyone can be an issuer by generating public/private key pairs, it’s important that you identify which issuers you actually trust.
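The issuer-allowlist-plus-kid-lookup step can be sketched like this (TRUSTED_ISSUERS and select_key are illustrative names of mine, not part of any spec):

```python
# Issuers we've decided to trust; anyone can generate keys, so this list matters.
TRUSTED_ISSUERS = {"https://myvaccinerecord.cdph.ca.gov/creds"}


def select_key(issuer: str, kid: str, jwks: dict) -> dict:
    """Return the JWK whose kid matches, but only for issuers we trust."""
    if issuer not in TRUSTED_ISSUERS:
        raise ValueError(f"untrusted issuer: {issuer}")
    for key in jwks.get("keys", []):
        if key.get("kid") == kid:
            return key
    raise KeyError(f"no key with kid {kid}")
```

The returned JWK is what you’d then hand to an ES256 verifier to check the signature.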
If we can take a QR code, extract the vaccination record, and verify that it was issued by the state, we can integrate that into almost anything! Hear me out.
What if we prevented people from accessing content based on their vaccination status? What if you couldn’t read Reddit, watch Youtube, or post on Instagram without first proving that you’ve been vaccinated? I think we’d up our numbers pretty quick.
As a proof-of-concept, I’ve built a vaccination paywall to provide an example of how this can be done. In this example, the contents of the book The Great Gatsby are protected by a vaccination paywall. In order to read it, you simply have to get your digital vaccination record from the Digital COVID-19 Vaccine Record portal, press the Scan Digital COVID-19 Vaccination Record button, and scan it in.
Here’s a quick video of how it works:
You can try it out here: https://vaxpaywall.bert.org/.
The source code is available here.
No one has asked any questions yet, so this FAQ section below stands for Frequently Anticipated Questions.
What about people with medical conditions that can’t get vaccinated?!
This is a joke.
What about people that are too old or too young to get vaccinated?
See above, this is a joke.
Do you think that a person that won’t get vaccinated even wants to read The Great Gatsby?
No.
I’m not a resident of California so I have no way of testing this vaccination paywall.
The QR code at the top of this blog post may be used to test the paywall.
I don’t want to give you my digital vaccine record!
That’s not a question. The State of California suggests that you can ask organizations that will scan the QR code in your Digital COVID-19 Vaccine Record how they will use your data or if they will keep it. Only you can decide how and when to share your record.
How will you use my data?
The data is only used to verify your vaccination status and display it, in a joking manner, above the text of The Great Gatsby.
Will you keep it?
No. You can look at the source code but there’s no way for you to verify that the website is using the same code so I don’t know what else to tell you.
This is a bad idea and you should feel bad.
I do not. If you can be vaccinated and a vaccine is available to you and you’re not getting one, you should feel bad.
Can’t someone just use someone else’s digital vaccination card?
In a physical setting, the digital vaccination card is supposed to be used in conjunction with another proof of identification, like a driver’s license. This is trickier online. One way to prevent abuse would be to only allow a vaccination record to be associated with a single account. Instead of storing the entire payload, you could store a hash of the payload and check if it’s already been used.
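The hash-the-payload idea could be sketched like this (an in-memory set stands in for whatever database an actual service would use):

```python
import hashlib

# In a real service this would be a database table, not an in-memory set.
used_hashes = set()


def try_register(record_payload: bytes) -> bool:
    """Associate a vaccination record with an account at most once."""
    fingerprint = hashlib.sha256(record_payload).hexdigest()
    if fingerprint in used_hashes:
        return False  # this record has already been claimed by another account
    used_hashes.add(fingerprint)
    return True
```

Storing only the SHA-256 fingerprint also means the service never has to retain the record itself.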
Do you really expect anyone else to do this?
Not really, but I could see it being used as a promotion, like as a discount or credit for an online service. If Dunkin’ Donuts and Krispy Kreme can have vaccine incentive programs, maybe online services could too? Frankly, I don’t really get vaccine incentive programs, or lotteries, or giving people guns for getting vaccinated. If not dying is not enough of an incentive to get the vaccine, I’m not sure what will be.
Yes, I’m on Zoom calls all day. Yes, I have a toddler that makes guest appearances on those calls. You win, Instagram ad algorithm, I want one.
But I have my reservations. This is an Instagram ad for a Kickstarter project. I don’t want to contribute to Facebook’s ad revenue by even clicking on it. There’s No Such Thing as a Free Watch by Jenny Odell sums up my perception of the quality of products in Instagram ads. And my Kickstarter Trivial Pursuit pie is pretty full.
I’ve backed a lot of projects on Kickstarter and sometimes the end result reminds me of the hamburger that Michael Douglas gets in Falling Down.
So let’s build one instead.
The first thing to consider is: What would be satisfying to press? If you’ve read my post about Yubikeys, you’ll know this is an important point for me.
I use Cherry MX switches in my keyboards. There are three types of mechanical switches: linear, tactile, and clicky. Linear is your basic switch that goes up and down without much feedback. Tactile switches have a bump in the middle of travel that let you feel that your key press happened. And clicky switches have stronger tactile feedback AND make an audible click when you press them.
Normally you’d buy a switch tester and figure out which one feels right to you and also survey your co-workers to determine what kind of sound they’d let your keyboard produce before murdering you. But we’re in the middle of COVID - you don’t have any co-workers around you! Let’s go with a Cherry MX Blue switch that has satisfying tactile feedback but is also extremely loud. Cherry MX’s website calls this switch “Clicky and Noticeable” which is quite an understatement.
Looks nice, but I think we can do even better. If a Cherry MX Blue switch is satisfying to press, wouldn’t a comically large Cherry MX Blue switch be even more satisfying to press?
This is the Novelkeys Big Switch.
It’s 4 times bigger on each side and 64 times bigger in volume than a normal switch. It even comes with a giant keycap!
Unfortunately, the Big Switch doesn’t come with a case, so we’ll need to 3D print one. I found a nice looking case on Thingiverse: NovelKeys Big Switch Case. It’s always worth looking through the remixes in case anyone has improved upon the original design. In this case, there’s a remix that adds a housing for a Pro Micro and makes a tighter fit for the switch, so I printed that one.
Now that we have the case, we’ll need a board to put into it and wire it up to the switch.
The Pro Micro has an ATmega32U4 chip that allows it to emulate a USB HID device, like a USB keyboard. It’s also tiny.
If you look at the bottom of the Big Switch, there are two metal contacts.
Inside the switch, pressing down on the key causes the circuit to be completed between these contacts.
If we look at the Pro Micro pinout:
We can connect GND to one metal contact and Pin 2 to the other. Pin 2 is a digital I/O pin; with the internal pull-up resistor enabled, it will read LOW when the key is pressed and HIGH when it’s not.
It would also be nice if we could have some sort of visual indicator of the mute status, so we can add an LED.
I ordered a 10mm LED:
And a 220 Ohm resistor:
For LEDs, the longer leg (the anode) connects to power and the shorter leg (the cathode) connects to GND. We’ll stick the resistor between the longer leg and a pin to limit the amount of current - I chose Pin 9 at the bottom of the board. The shorter leg I wired up to GND. I found this page about LEDs and resistors to be helpful.
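The resistor-sizing math is simple Ohm’s law. The numbers below are assumptions (a 5 V Pro Micro pin and a ~3 V forward drop, which is typical for a blue LED - check your LED’s datasheet):

```python
supply_v = 5.0       # voltage on the I/O pin when driven HIGH (assumed)
forward_v = 3.0      # LED forward voltage drop (assumed, typical for blue)
resistor_ohms = 220.0

# Ohm's law: the resistor sees the supply minus the LED's drop.
current_ma = (supply_v - forward_v) / resistor_ohms * 1000
print(f"{current_ma:.1f} mA")
```

That lands around 9 mA, comfortably under the ~20 mA continuous limit of a typical LED and of an ATmega32U4 I/O pin.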
I soldered this 20 AWG wire between the board and the switch:
which resulted in this mess:
that we just jam into our 3D printed case:
Now we get to write some software, the spiritual opposite of being on a Zoom call.
I started with some code that Sparkfun had written to build a giant Save Button and modified it a bit.
The basic idea behind our mute button is that when you press the key, it will emit the Zoom hotkey for muting and unmuting, which on a Mac is Cmd-Shift-A. You’ll want to change your Zoom settings so this keystroke will be recognized even when Zoom isn’t focused with the Enable Global Shortcut toggle:
We also want to toggle the LED on and off after each key press. I decided to treat the LED being on similar to an “On Air” light - when the blue LED is on, I’m unmuted and people can hear what I say.
But if we just toggle the LED on and off after each key press, how will it stay in sync with the actual mute status on Zoom?
One nice thing about the Pro Micro is that it also has serial communication. This is usually used for printing debugging information in the Arduino IDE, but we can use it to help us stay in sync with Zoom’s mute status.
Here’s the code that we upload to the Pro Micro itself:
#include "Keyboard.h"

// OS parameters
typedef enum {
  LINUX,
  WINDOWS,
  MAC
} os_types;

// Change this to your operating system
const os_types OS = MAC;

// Pins
const int btn_pin = 2;
const int led_pin = 9;

// Constants
const int debounce_delay = 50; // ms

// Globals
int btn_state = HIGH;
int btn_prev = HIGH;
unsigned long last_debounce_time = 0;
int os_ctrl;
int led_state = LOW;

void setup() {
  Serial.begin(57600); // opens serial port, sets data rate to 57600 bps

  // Set up LED and button pins
  pinMode(btn_pin, INPUT_PULLUP); // Set the button as an input
  pinMode(led_pin, OUTPUT);
  digitalWrite(led_pin, led_state);

  // Begin keyboard
  Keyboard.begin();

  // Switch to correct control/command key
  switch(OS){
    case LINUX:
    case WINDOWS:
      os_ctrl = KEY_LEFT_CTRL;
      break;
    case MAC:
      os_ctrl = KEY_LEFT_GUI;
      break;
    default:
      os_ctrl = KEY_LEFT_CTRL;
      break;
  }

  Serial.println("started");
}

void loop() {
  // Read current state of the button
  int btn_read = digitalRead(btn_pin);

  // Remember when the button changed states
  if ( btn_read != btn_prev ) {
    last_debounce_time = millis();
  }

  // Wait before checking the state of the button again
  if ( millis() > (last_debounce_time + debounce_delay) ) {
    if ( btn_read != btn_state ) {
      btn_state = btn_read;
      if ( btn_state == LOW ) {
        // Send cmd+shift+a
        Keyboard.press(KEY_LEFT_SHIFT);
        Keyboard.press(os_ctrl);
        Keyboard.press('a');
        delay(100);
        Keyboard.releaseAll();
        Serial.println("pressed");
        // Toggle the LED state on each press
        if (led_state == LOW) {
          led_state = HIGH;
        } else {
          led_state = LOW;
        }
        digitalWrite(led_pin, led_state);
      }
    }
  }

  // Remember the previous button position for next loop()
  btn_prev = btn_read;

  // Resync the LED with Zoom's actual mute state via serial
  if (Serial.available() > 0) {
    String incomingString = Serial.readStringUntil('\n');
    if (incomingString == "muted") {
      led_state = LOW;
    } else if (incomingString == "unmuted") {
      led_state = HIGH;
    }
    digitalWrite(led_pin, led_state);
  }
}
Next, we can add an Applescript that will report back what the current Zoom status is. I found a Zoom plugin for a Streamdeck device that contained the initial Applescript and modified it to only report back whether Zoom was opened and its mute status. I also changed it to output JSON.
set zoomStatus to "closed"
set muteStatus to "disabled"
tell application "System Events"
  if exists (window 1 of process "zoom.us") then
    set zoomStatus to "open"
    tell application process "zoom.us"
      if exists (menu bar item "Meeting" of menu bar 1) then
        set zoomStatus to "call"
        if exists (menu item "Mute audio" of menu 1 of menu bar item "Meeting" of menu bar 1) then
          set muteStatus to "unmuted"
        else
          set muteStatus to "muted"
        end if
      end if
    end tell
  end if
end tell
copy "{\"mute\":\"" & (muteStatus as text) & "\",\"status\":\"" & (zoomStatus as text) & "\"}" to stdout
Now when we run it while we’re on a Zoom call, we get output like this:
$ osascript get-zoom-status.scpt
{"mute":"muted","status":"call"}
Finally, I wrote a small node app that acts as a middleman between the Pro Micro and this script:
const { exec } = require('child_process');
const SerialPort = require('serialport');
const Readline = require('@serialport/parser-readline');

const port = new SerialPort('/dev/tty.usbmodemHIDPC1', {
  baudRate: 57600
});

var checkStatus = function() {
  console.log('Checking status...');
  exec('osascript get-zoom-status.scpt', (error, stdout, stderr) => {
    if (error) {
      console.error(`exec error: ${error}`);
      return;
    }
    var status = JSON.parse(stdout);
    if (status.mute == 'unmuted') {
      port.write('unmuted');
    } else {
      port.write('muted');
    }
  });
}

const parser = port.pipe(new Readline({ delimiter: '\r\n' }))
parser.on('data', function (data) {
  if (data == "pressed") {
    console.log('Button pressed.');
    checkStatus();
  }
})

checkStatus();
setInterval(checkStatus, 30000);
This script does two things. When the button is pressed, the Pro Micro sends a “pressed” command over the serial port and this calls the Applescript to determine the current Zoom mute status. Then it sends either a “muted” or “unmuted” command back to the Pro Micro, which triggers the corresponding LED state. There’s also a timer that runs this every 30 seconds in case I accidentally mute or unmute using the Zoom UI instead of the key - otherwise it would only resolve state when the key is pressed.
This is what the button looks like when used on a Zoom call:
Please back my Kickstarter - just kidding, there is no Kickstarter, but hopefully now you can build one yourself.
I don’t think I saw Terminator 2 in the theater. I probably watched it years later on LaserDisc but the scene left quite an impression on me. I had a similar reaction to the war dialer in WarGames and the black box in Sneakers.
Anyways, I was thinking about that scene from Terminator 2 recently which led me to googling, “What’s that laptop in Terminator 2?”
It’s an Atari Portfolio, the world’s first palmtop computer. It was released in June 1989.
It had a monochrome LCD with 240x64 pixels or 40 characters x 8 lines and ran off 3 AA batteries.
The next thing I wondered was, “How difficult would it be to write that program?” Not a program that actually cracks PIN numbers from debit cards, I don’t think you can actually do that with a serial cable and some aluminum foil wrapped around a debit card, but a program that can simulate the output of the palmtop in that scene.
Let’s gather some product requirements!
If we watch the video again, the first thing that happens is that it displays a banner for the program.
The image is clear enough that you can copy the banner easily.
PPPPP IIIIIII N N
P PP I NN N IDENTIFICATION
P PP I N N N
PPPPP I N N N PROGRAM
P I N NN
P IIIIIII N N
Strike a key when ready ...
At this point, John hits Enter and the numbers start scrolling. If we look a few frames in:
We can see that the first line of numbers is:
12345678901234567890123457890123456780
One might assume that this is just the digits 1 through 0 repeated four times, but upon closer look, it’s only 38 digits long. In the third set, the number 6 is omitted, and in the last set, the number 9 is omitted.
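We can double-check that observation with a couple of lines of Python:

```python
# The first scrolling line from the movie, split into its four groups.
line = "12345678901234567890123457890123456780"
groups = [line[0:10], line[10:20], line[20:29], line[29:38]]
# The first two groups are full runs of 1-0; the third is missing the 6
# and the fourth is missing the 9, for 38 digits total.
```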
The way the numbers decrease isn’t obvious either, but it seems to print about 5 lines at a certain length before decreasing the length by 1; after the next set, it decreases the length by 2, and it alternates back and forth like that until it identifies the 4-digit PIN code and dumps him back to a prompt.
Well, that seems pretty straightforward. I’ve been trying to get better at Python so here’s a script I wrote in Python 3:
#!/usr/bin/env python3
import time
import random

delay = 0.025

print("PPPPP IIIIIII N N")
time.sleep(delay)
print("P PP I NN N IDENTIFICATION")
time.sleep(delay)
print("P PP I N N N")
time.sleep(delay)
print("PPPPP I N N N PROGRAM")
time.sleep(delay)
print("P I N NN")
time.sleep(delay)
print("P IIIIIII N N")
time.sleep(delay)
print('')

input("Strike a key when ready ...")
print("\n\n12345678901234567890123457890123456780")

lines = 1
length = 38
decrease = 1

while True:
    for i in range(0, length):
        print(random.randint(0, 9), end='')
    print('')
    time.sleep(delay)
    lines += 1
    if (lines == 5):
        lines = 0
        length -= decrease
        if (decrease == 1):
            decrease = 2
        else:
            decrease = 1
    if (length <= 4):
        break

for i in range(0, 10):
    print("9003")

print("\nPIN IDENTIFICATION NUMBER: 9003")
print("\na>", end='')
The script runs really quickly so I added a delay between lines so that you can see the same progression as in the clip. I’m sure there are other optimizations that can be made, but if I were administering this as a bad coding challenge for a tech interview, I’d pass myself.
Using Google Image Search, I found a site selling OEM plastic bezels for the Atari Portfolio which had this nice graphic of the front of the screen:
After playing around with termtosvg a bit, in particular the SVG templates feature, I managed to produce this crazy SVG:
Despite running html5zombo.com for over 10 years now, I’m not sure I really appreciated what SVGs were capable of until I built this one. They can embed images? CSS? Javascript? Any site that allows users to upload arbitrary SVGs and renders them now has my utmost respect.
While I enjoyed making my little self-contained SVG, it bugged me that my Python code could never actually run on an Atari Portfolio. The Atari Portfolio runs “DIP Operating System 2.11” (DIP DOS) which is “mostly compatible” with MS-DOS.
In junior high, before anybody paid me to write software professionally, I used to write BBS software, mods, and door games in my spare time in a mix of Turbo Pascal and a scripting language called PCBoard Programming Language which was similar to BASIC. Based on my minimal research, if I could write this in Turbo Pascal and compile it, it’d probably run on an Atari Portfolio.
I haven’t written Turbo Pascal in about 25 years, but do you ever really forget?
I like a fork of DOSBox called DOSBox-X, so I downloaded and installed the most recent SDL2 variant for OS X. Then I found a copy of Borland Turbo Pascal 7.0, which I’ll put here because it was kind of a pain to find.
You’ll find 4 files in that ZIP which are images of floppy disks. If you put them in a directory like ~/tp, then after you start DOSBox-X and mount a C Drive, you can mount them to the A Drive like this:
imgmount a ~/tp/Disk01.img ~/tp/Disk02.img ~/tp/Disk03.img ~/tp/Disk04.img -floppy
and then switch over to the A: drive and run INSTALL:
A:
INSTALL
At some point in the installation, you’ll have to change floppy disks, because it’s 1992.
You can do this by selecting Drive -> A -> Swap disk in DOSBox-X. It’ll go from Disk 1 to Disk 2. Then just keep doing that and pressing enter until you’ve installed all four disks.
After the installation is done, it’ll ask you to configure your CONFIG.SYS and AUTOEXEC.BAT because, again, 1992.
Neither of these are strictly necessary. DOSBox-X already sets the FILES higher than the recommendation and adding it to the path only really lets you run TURBO from anywhere. When it’s done, you can run:
C:
cd tp\bin
TURBO
I’d spent so much time looking at this IDE when I was a kid that it made me a bit nostalgic. But then I started porting my Python script to Pascal and that nostalgia faded quickly. I’d like to say that I wrote the whole thing in here but at a certain point I had to switch to VSCode and then copy the file back into the DOS directory. To the people that still run WordPerfect for DOS, I get it, but I also don’t get it.
Here’s the script I landed on after spending a lot of time on this Pascal tutorial:
program pinid;

uses crt;

var i: byte;
var pos: byte;
var lines: byte;
var length: byte;
var decrease: byte;
var delay_amount: integer;

begin
  randomize;
  delay_amount := 25;
  clrscr;
  writeln('PPPPP IIIIIII N N');
  delay(delay_amount);
  writeln('P PP I NN N IDENTIFICATION');
  delay(delay_amount);
  writeln('P PP I N N N');
  delay(delay_amount);
  writeln('PPPPP I N N N PROGRAM');
  delay(delay_amount);
  writeln('P I N NN');
  delay(delay_amount);
  writeln('P IIIIIII N N');
  delay(delay_amount);
  writeln('');
  write('Strike a key when ready ...');
  readln;
  writeln('');
  writeln('');
  writeln('12345678901234567890123457890123456780');
  pos := 0;
  lines := 1;
  length := 38;
  decrease := 1;
  while true do
  begin
    for i := 1 to length do
      write(random(10)); { random(10) yields a digit 0..9; random(9) would never print a 9 }
    writeln('');
    delay(delay_amount);
    lines := lines + 1;
    if (lines = 5) then
    begin
      lines := 0;
      length := length - decrease;
      if (decrease = 1) then
        decrease := 2
      else
        decrease := 1;
    end;
    if (length <= 4) then
      break;
  end;
  for i := 1 to 10 do
  begin
    writeln('9003');
    delay(delay_amount);
  end;
  writeln('');
  writeln('PIN IDENTIFICATION NUMBER: 9003');
  writeln('');
end.
Some quick explanations:
- Pascal programs start with program and the name of the program, presumably because all modules share the same namespace but the filename is irrelevant
- Modules are imported with uses. crt is a module for manipulating the screen
- Blocks are delimited with begin and end instead of brackets or relying upon whitespace.
- If you don’t call randomize at the beginning of your script, hilariously it always seeds random numbers with the same seed and the output is the same.
Does it work? Here’s the program running in DOSBox-X:
In the spirit of owning less stuff, I’ve gone through the mental exercise of what it would take to make the rest of this a reality but will not follow through on any of it:
If you don’t work in tech but primarily work on your laptop, you probably should have a YubiKey. And if you work on a political campaign or as a journalist, you should definitely have one (or something similar). Talk to your IT Security department about that. This post will mostly be about something your IT Security department doesn’t want to hear about, though, so maybe don’t mention it to them.
YubiKeys act as a second factor of authentication. This means that after you log in to a system with your username and password, the system requires you to authorize in a second way as well. This way, if your login credentials are compromised, the attacker would also have to compromise the second form of authentication, which is harder.
There are different forms of two-factor authentication - a common one is that a website will ask you to scan a QR code with the Google Authenticator app (or similar) on your phone which will generate 6 digit codes. The way this works is that the server and the app both have a shared secret. The phone generates codes based on that secret and the current timestamp and the server generates the same codes and sees if they match.
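The code-generation scheme described above can be sketched with just the standard library. This is a simplified TOTP (RFC 6238-style) sketch, not any particular app’s exact implementation:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code from a shared secret."""
    counter = struct.pack(">Q", timestamp // step)             # 30-second window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()  # HMAC-SHA1 of the window
    offset = digest[-1] & 0x0F                                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The server runs the same computation with its copy of the secret and compares; implementations usually also accept a small window of adjacent timesteps to tolerate clock drift.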
Another one is SMS-based 2FA, which is pretty widely regarded as insecure. In this case, the server generates a code and sends it to your phone via SMS. The reason it’s considered insecure is that an attack exists called SIM-jacking where someone convinces a cell phone carrier to port a number to a new SIM card, effectively directing all SMS traffic to their phone instead of yours.
YubiKeys are small devices that plug in to the USB port of your computer and emulate a keyboard. When tapped, they emit a one-time password (OTP) which can be then verified by a validation server. A private key exists on the device which is used to sign information, but it can never leave the device because it is stored in a tamper-resistant environment.
The YubiKey that I use is designed to always sit in a USB port of my laptop, so whenever I would take my laptop from my desk to a conference room or to another office, it was always available. But like many new remote workers, my laptop never leaves my desk anymore. I have it hooked up to an external monitor and to save some desk space, I have it in clamshell mode sitting vertically on a stand.
This makes tapping the YubiKey difficult, especially when I store my laptop far away from my keyboard and mouse. I solved this by buying a USB-C extension cable, which brought the YubiKey closer to my keyboard.
One thing I haven’t mentioned about the YubiKey 5C Nano is that it’s kind of difficult to tap, even without the distance issues. The target area that you need to touch is extremely small:
One of the features of the YubiKey is that the little metal strip determines that it is being tapped by a human - this prevents it from being accidentally triggered by bumping your laptop into something, but if you’ve ever seen a one-time password in a Slack channel or Google Doc like tlerefhcvijlngibueiiuhkeibbcbecehvjiklltnbbl, you know it isn’t a perfect system. I would estimate that 1 in 5 times that I attempt to trigger it, it doesn’t register.
A lot of thought has gone into ensuring that the YubiKey can’t be triggered from software on the computer itself.
Before we go any further, I'd like to acknowledge the reasons for this. If a remote attacker were to compromise your laptop, being able to trigger the YubiKey from software on the computer defeats the whole point of using the YubiKey. But I think we always make tradeoffs between security and convenience. For example, you often don't have to tap your YubiKey every time you access a system; some systems will only ask you once and then not ask again on subsequent logins for a certain amount of time. When a 2FA system gives you "backup codes", do you always print those out and store them in a safe location? Everyone should figure out what level of security and convenience they are okay with.
With that being said, let’s talk about how you could trigger a YubiKey with software.
I’ve been calling this mechanism The Finger.
First, we need some way for the computer to talk to The Finger. I had a bunch of these IZOKEE D1 Mini development boards lying around; they are smaller versions of boards built around the infamous ESP8266 chip found in a lot of IoT devices.
We can connect this to the laptop and talk to it over USB serial, but since it has WiFi, we can also just run a webserver on it and send it HTTP requests.
Next, we need some way to push The Finger towards the YubiKey. After a little googling, I found that the 28BYJ-48 stepper motor interfaces well with the D1 Mini board.
Stepper motors convert electrical pulses into mechanical rotation and the D1 Mini has pins for sending electrical pulses.
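The usual way to drive a 28BYJ-48 is an 8-state half-step sequence, the same pattern encoded in the pole arrays of the sketch shown later: each state energizes one coil or two adjacent coils, and consecutive states differ by exactly one coil. A Python sketch of the idea (on the D1 Mini, the four values would be written to pins D1-D4):

```python
# Half-step sequence for a 4-coil unipolar stepper like the 28BYJ-48.
# Each tuple is the (IN1, IN2, IN3, IN4) levels for one state; consecutive
# states differ by exactly one coil, so the rotor advances half a step.
HALF_STEPS = [
    (1, 0, 0, 0),
    (1, 1, 0, 0),
    (0, 1, 0, 0),
    (0, 1, 1, 0),
    (0, 0, 1, 0),
    (0, 0, 1, 1),
    (0, 0, 0, 1),
    (1, 0, 0, 1),
]

def coil_states(step_index):
    """Return the IN1..IN4 levels for a given step counter.

    Incrementing the counter walks the sequence one way; decrementing it
    reverses the rotation, which is how CCW/CW is implemented below.
    """
    return HALF_STEPS[step_index % len(HALF_STEPS)]

# Sanity check: consecutive states (wrapping around) differ by one coil.
for i in range(len(HALF_STEPS)):
    a, b = HALF_STEPS[i], HALF_STEPS[(i + 1) % len(HALF_STEPS)]
    assert sum(x != y for x, y in zip(a, b)) == 1
```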
But stepper motors rotate, and we mostly just need to poke in a straight line. So I searched Thingiverse for "28BYJ-48" and found this: 28BYJ-48 Motor Halter.
This attaches a gear to the motor, which can guide a long rack forward and backward. But if we're going to push a long plastic thing toward the YubiKey, it might as well look like a finger. So it was back to Thingiverse, this time searching for "finger", where I found this model someone made for Halloween:
I opened up these two models in Fusion 360 and used an advanced CAD technique called “smooshing”, resulting in this:
Next, I exported the smooshed STL and 3D printed it in Prusament PLA Lipstick Red because that's what I happened to have in my printer at the time. Then I took the plastic finger and touched the YubiKey, which... didn't do anything. I picked up a metal screw on my desk and touched the YubiKey, which immediately spit out an OTP. So then I took the finger, secured it to my desk with a vise, drilled a small hole in it, screwed the metal screw into it, and touched it to the YubiKey, which again did nothing.
That's when I realized that I'm an idiot: when I had touched the metal screw to the YubiKey, it was just transmitting the electrical charge from my body through the metal screw to the capacitive touch sensor on the YubiKey. So how could I trick the capacitive touch sensor into thinking it was a real finger?
I guessed that capacitive touch sensors work by measuring your body's capacitance to ground, so if we hook the sensor up directly to ground, it'll think that it's really conductive, or at least conductive enough to pass for a human finger. So I took an insulated wire, unscrewed the metal screw slightly, wrapped the wire around the screw, and tightened it again. Then I took the other end and connected it to the GND pin on the D1 Mini board, touched the screw to the YubiKey, and it worked!
Now the driver board for the stepper motor already connects to the 5V and GND on the D1 Mini, so I thought I might have to strip the GND wire and run it to both the driver board and the screw, but on a whim I decided to just wedge the end of the wire from the metal screw between the stepper motor metal body (figuring the metal body case was grounded) and the plastic housing. This also worked!
Once I confirmed that the finger would trigger the YubiKey, I needed a way to mount the YubiKey close to the finger, so I used my digital calipers to measure the size of the USB-C extension cable and designed a holder in Fusion 360.
The USB-C extension cable would go into the hole on the left and the motor would mount on the right.
At this point, we have to wire the stepper motor driver board to the D1 Mini. This can be done by soldering some headers onto the D1 Mini and then connecting some Dupont jumper wires between them.
| D1 Mini | 28BYJ-48 Driver Board |
|---------|-----------------------|
| 5V      | 5V                    |
| GND     | GND                   |
| D1      | IN1                   |
| D2      | IN2                   |
| D3      | IN3                   |
| D4      | IN4                   |
Once we put the stepper motor into the housing and screw everything together, it should look like this:
The software is much more straightforward. The D1 Mini can be programmed using the Arduino IDE. First, we go into Preferences and add https://arduino.esp8266.com/stable/package_esp8266com_index.json
under Additional Board Manager URLs. Then when you go into the Boards Managers, you can install the esp8266
package which includes the board LOLIN(WEMOS) D1 R2 & mini, which should be selected under Tools.
At this point I’ll run a sketch for blinking the LED just to verify that it’s working:
#define LED 2 //Define blinking LED pin
void setup() {
pinMode(LED, OUTPUT); // Initialize the LED pin as an output
}
// the loop function runs over and over again forever
void loop() {
digitalWrite(LED, LOW); // Turn the LED on (the ESP8266's onboard LED is active-low)
delay(1000); // Wait for a second
digitalWrite(LED, HIGH); // Turn the LED off
delay(1000); // Wait for a second
}
I found this sketch that shows how to control the 28BYJ-48 Stepper Motor using WiFi.
Here are the parts that have to do with the motor:
int Pin1 = D1; //IN1 is connected
int Pin2 = D2; //IN2 is connected
int Pin3 = D3; //IN3 is connected
int Pin4 = D4; //IN4 is connected
int pole1[] ={0,0,0,0, 0,1,1,1, 0}; //pole1, 8 half-step values plus a final all-off state
int pole2[] ={0,0,0,1, 1,1,0,0, 0}; //pole2, 8 half-step values plus a final all-off state
int pole3[] ={0,1,1,1, 0,0,0,0, 0}; //pole3, 8 half-step values plus a final all-off state
int pole4[] ={1,1,0,0, 0,0,0,1, 0}; //pole4, 8 half-step values plus a final all-off state
int poleStep = 0;
int dirStatus = 3; // stores direction status 3= stop (do not change)
String argId[] ={"ccw", "cw"};
...
void loop(void) {
server.handleClient();
MDNS.update();
if (dirStatus == 1) {
poleStep++;
driveStepper(poleStep);
} else if (dirStatus == 2) {
poleStep--;
driveStepper(poleStep);
} else {
driveStepper(8);
}
if (poleStep>7) {
poleStep=0;
}
if (poleStep<0) {
poleStep=7;
}
delay(1);
}
/*
* motorControl()
* updates the value of the "dirStatus" variable to 1, 2 or 3
* returns nothing
*/
void motorControl() {
if (server.arg(argId[0]) == "on") {
dirStatus = 1; // CCW
} else if (server.arg(argId[0]) == "off") {
dirStatus = 3; // motor OFF
} else if (server.arg(argId[1]) == "on") {
dirStatus = 2; // CW
} else if (server.arg(argId[1]) == "off") {
dirStatus = 3; // motor OFF
}
}
/*
* @brief sends signal to the motor
* @param "c" is an integer representing the pole of the motor
* @return does not return anything
*/
void driveStepper(int c)
{
digitalWrite(Pin1, pole1[c]);
digitalWrite(Pin2, pole2[c]);
digitalWrite(Pin3, pole3[c]);
digitalWrite(Pin4, pole4[c]);
}
The way it works is that the web server displays two buttons: CCW (counterclockwise) and CW (clockwise). Toggling either button on changes dirStatus to 1 (for CCW) or 2 (for CW). Toggling the button off changes dirStatus to 3 to stop the motor.
You can see in the loop() function that it checks dirStatus to either increment poleStep (wrapping around when it passes 7), decrement it (wrapping around when it passes 0), or stop the motor completely.
What we would like instead is a single HTTP endpoint that moves the motor counterclockwise until it hits the YubiKey, then clockwise to retract it, and then stops.
So we can rewrite the motorControl()
function to simply:
void motorControl() {
dirStatus = 1; // CCW
}
which will start moving the motor counter-clockwise when the HTTP endpoint is called.
Then we modify the loop()
function to move for 400 steps, change the direction to CW and move 400 steps in that direction, and then stop.
int steps = 0;
void loop(void) {
server.handleClient();
MDNS.update();
if (dirStatus == 1) {
poleStep++;
driveStepper(poleStep);
steps++;
} else if (dirStatus == 2) {
poleStep--;
driveStepper(poleStep);
steps++;
} else {
driveStepper(8);
}
if (poleStep>7) {
poleStep=0;
}
if (poleStep<0) {
poleStep=7;
}
if (steps > 400) {
if (dirStatus == 1) {
dirStatus = 2; // CW
} else {
dirStatus = 3; // motor OFF
}
steps = 0;
}
delay(1);
}
One nice thing about the physical design of The Finger is that it’s very forgiving: the motor isn’t very strong so once it hits the YubiKey, it can continue pushing counter-clockwise without damaging anything. And when it goes clockwise, eventually the bone sticking out of the finger knocks against the edge of the base, preventing it from falling off to the right. I started with the steps
maxing out at 1000 and manually tested it, lowering it a bit each time until I found the sweet spot of 400 steps.
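For a sense of scale: the 28BYJ-48 is commonly quoted at 4096 half-steps per output revolution (the true gear ratio is roughly 63.68:1, so the real figure is slightly lower). Under that assumption, 400 half-steps is about 35 degrees of gear rotation:

```python
# Commonly quoted figure for the 28BYJ-48 in half-step mode; treat it as
# an approximation, since the actual gear ratio is not exactly 64:1.
HALF_STEPS_PER_REV = 4096

def steps_to_degrees(steps, steps_per_rev=HALF_STEPS_PER_REV):
    """Convert a number of half-steps into degrees of output-shaft rotation."""
    return steps / steps_per_rev * 360

print(round(steps_to_degrees(400), 1))  # → 35.2
```

How far that 35 degrees moves the rack depends on the printed gear's diameter, which is why tuning the step count empirically was the practical approach.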
Then I checked the MAC address of the WiFi board, made a static IP mapping on my router for it, and made a local DNS entry of finger.localdomain
to it as well.
Now if I make a call to http://finger.localdomain/press
, the motor will push the finger towards the YubiKey and retract.
If pressing a YubiKey with your finger isn’t very satisfying, pressing a key on a mechanical keyboard is on the opposite end of the spectrum. It would be cool if we could trigger the YubiKey just by hitting a key on our keyboard.
If you use a TKL layout, in the top right of your keyboard you have Print Screen, Scroll Lock, and Pause, none of which make a whole lot of sense in 2020 and all of which are ripe for remapping.
I use a program called Karabiner-Elements to help with this remapping. Much like the hardware portion of this project, it is also a bit Rube Goldberg-esque.
Using the UI, under the Simple modifications tab, you can change “From key” to “pause” and “To key” to “f14”.
Then switch to the Misc tab and click “Open config folder (~/.config/karabiner)”. Here you’ll find a file called karabiner.json
. Under your profile's complex_modifications.rules key, you can add this JSON:
{
"description": "F14 to trigger Yubikey",
"manipulators": [
{
"from": {
"key_code": "f14",
"modifiers": {
"optional": ["any"]
}
},
"to": [
{
"shell_command": "osascript /Users/YOUR_USERNAME/bin/yubikey.scpt"
}
],
"type": "basic"
}
]
},
In ~/bin/yubikey.scpt
you can write an AppleScript that calls a shell script like this:
do shell script "/bin/sh ~/bin/yubikey.sh 2>&1 &"
Why have an AppleScript call a shell script? I found that when I launched the shell script directly from Karabiner-Elements, it opened a new instance of Terminal.app and took focus away from the window that was prompting for the YubiKey. Wrapping it in AppleScript keeps everything running in the background.
Finally, in ~/bin/yubikey.sh
, we have:
#!/bin/bash
curl "http://finger.localdomain/press" --silent >> /dev/null
This is what it looks like in action:
I made a fake YubiKey prompt to demo this, but I assure you that it works on real YubiKey prompts as well. Here’s the code for the fake YubiKey prompt in case for some reason you need to make fake YubiKey prompts:
#!/bin/bash
printf "YubiKey for 'bert': "
read yubikey
green=`tput setaf 2`
reset=`tput sgr0`
echo "${green}SUCCESS${reset}"
Now that we have that shell script, we can call it from other places as well. iTerm2 has a feature called Triggers, which can execute actions based on text matching a regex in your terminal. So we could write a regex that listens for "YubiKey for" and runs the same script, eliminating the need to press buttons altogether.
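As a sketch, a pattern like the following (hypothetical; adjust it to whatever your real prompts look like) would catch the prompt printed by the fake-prompt script, and the same regex could be pasted into an iTerm2 Trigger with a "Run Command" action pointing at yubikey.sh:

```python
import re

# Hypothetical pattern for prompts of the form: YubiKey for 'bert':
PROMPT_RE = re.compile(r"YubiKey for '[^']+':")

# Matches the fake prompt, ignores unrelated prompts.
print(bool(PROMPT_RE.search("YubiKey for 'bert': ")))   # → True
print(bool(PROMPT_RE.search("Password for 'bert': "))) # → False
```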
Another option would be to make a custom Chrome extension that waits for certain URLs that request YubiKeys and makes a browser fetch to our HTTP endpoint in the background.
I showed this to someone and they said, “So.. you built a button that you press that will press a button? Why not just press the button?” which was a bit infuriating because they clearly missed the whole point. “Don’t you get it? This button BAD, but this button GOOD. Me want to press GOOD button.”
Work on the things that matter to you.
In the first scene of the TV show Devs, a character wakes up and immediately reaches for his phone, presumably to check Twitter.
I’m reluctant to admit that I did the same thing, for years, but I’ve finally managed to break the habit.
Now I wake up, reach for my Switch, and head to Nook's Cranny in Animal Crossing to see what Timmy and Tommy are selling. I'm not sure why, since I can already have any item in the game: the Animal Crossing economy is meaningless to me.
When you first arrive on the island, you're broke. Worse than broke, you're in debt. Luckily, Tom Nook lets you pay in Nook Miles, which are earned by completing tasks (e.g. chop down some trees, talk to the villagers, catch some bugs). But after you pay off your tent in Nook Miles, you'll find an ever-increasing need for more and more bells to improve your house and furnish it. And this is where it gets interesting.
At first, you shake a bunch of fruit trees on your island and get some fruit. When you sell it to Timmy and Tommy, they give you 100 bells each for the fruit. This is where your first idea comes from: What if you plant the fruit? If you plant the fruit, in 3 days, you’ll have a fruit tree. And every 3 days it’ll produce fruit! There seems to be no end in sight - you could easily plant more and more fruit trees until fruit trees cover your entire island.
But then you start talking to a friend and you ask them about their fruit and they say, “Apples? No, I don’t have apples on my island. I have oranges!” and you compare notes and discover that apples sell for 500 bells each on their island and oranges sell for 500 bells each on yours.
And this leads us to rule number 1 of the Animal Crossing economy: Animal Crossing rewards collusion.
So you strike a deal with your friend, they can come to your island and pick all of your fruit and you’ll go to their island and pick all of their fruit and you’ll each sell it for five times as much to Timmy and Tommy.
But then comes your next idea: Why do I even need my friend? You chop down all your apple trees and plant orange trees - cut out the middleman. Soon orchards of orange trees cover your entire island and every 3 days you walk around shaking all the trees and selling the fruit.
You start looking for ways to make money every day instead of every 3 days and you discover that if you find a glowing hole in the ground, digging it up and burying 10,000 bells will grow a 30,000 bell money tree in three days. And you find that hitting one of the six rocks on your island with a shovel will occasionally produce 20,000 bells. The combination of all these money making schemes gives you a small, steady income.
And then one Sunday morning you wake up before noon and a turnip seller named Daisy Mae is there, selling turnips. You pick up a few just to see how the whole thing works.
Over the course of the week, the price fluctuates but eventually you see a price that seems higher than what you paid and you unload the turnips for a decent profit. Now you’re hooked! You start saving up your money and waiting until Sunday mornings to spend literally every bell you have to buy turnips and obsessively track what the algorithm thinks the price of turnips will be. You can’t put turnips into storage so they’re littered all over your house and your island.
But wait: you’ve forgotten rule number 1: Animal Crossing rewards collusion.
The turnips are portable - you can sell them to any Timmy and Tommy, not just your own. Now you come crawling back to your friend and start asking them what their turnip price is every day. But just one friend isn’t enough, you need a lot of friends because the more friends you have, the higher likelihood that one of their turnip prices is going to spike. You can even collude at the buying time, going to the island with the cheapest prices to buy and going to the island with the highest prices to sell.
So let's say you've netted a respectable 250,000 bells from your fruit harvesting operation. You buy turnips for around 100 bells each and sell them on a friend's island for 500 bells each, and now you've got 1.25 million bells! It's easy money!
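The arithmetic behind that jump, using the numbers above:

```python
bells = 250_000     # starting bankroll from the fruit operation
buy_price = 100     # Sunday price from Daisy Mae
sell_price = 500    # spiked price on a friend's island

turnips = bells // buy_price       # how many turnips the bankroll buys
proceeds = turnips * sell_price    # what they fetch at the spiked price

print(turnips, proceeds)  # → 2500 1250000
```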
But wait, your friend’s island that spiked in price at 500 bells - you’re not the only person who wants to sell turnips. In fact, the whole time you’re trying to unload the turnips, you keep getting interrupted by folks entering and leaving the island. You can’t even move across a screen without watching someone you don’t care about walking through the airport gates.
This leads us to rule number 2 of the Animal Crossing economy: The standard unit that controls the Animal Crossing economy is the time it takes to get through a dialog or loading screen in Animal Crossing.
Don’t worry if you don’t understand that rule yet. We’ll revisit it soon.
The week after you make 1.25 million bells, you do it again, and now you have 6.25 million bells. It's a little harder because you'll hit the maximum number of turnips you can carry in one trip, so you'll have to make multiple trips, but it can be done. 6.25 million bells! You're set for life. You've broken the Animal Crossing economy. But wait, have you?
After you’ve paid off your house and built a bunch of convenient bridges and inclines for your island, you start looking around and think, “What’s next?”
And then you remember that one of the first DIY recipes you bought with Nook Miles was for a Robot Hero. The Robot Hero looks pretty cool. You've got a handful of the ingredients, but 30 rusted parts? How do you get rusted parts?
Well, a seagull named Gulliver occasionally washes up on your island. You walk over to him and he’ll ask you to help him find his communicator parts. Just kidding, this is Animal Crossing, so you’ll have to talk to him like 5 times before he even asks you to find them. After you help him get rescued, the next day a rusted part will show up in the Recycling Bin in Resident Services.
So you’ll have to do this 30 times. 30 times! How many dialog screens is that?
Remember rule number 2: The standard unit that controls the Animal Crossing economy is the time it takes to get through a dialog or loading screen in Animal Crossing.
So you give up on the Robot Hero for a while. And then one of your villagers walks up to you and starts talking about pumping weights and working out again and you think, “Actually, what this island needs is a smug, heterochromatic, hipster cat.”
You read an article or two about Raymond and discover that the way to get Raymond is travelling to islands with Nook Mile Tickets and hoping that he’s there so that you can convince him to move to your island. But the statistical probability of you finding Raymond is quite low: it’s about 0.12% or 1 in a 1000. And imagine the amount of dialog screens going through the airport in order to find him.
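Taking that quoted 0.12% at face value (which works out closer to 1 in 833 than 1 in 1,000), the chance of having found Raymond after n ticket trips is 1 - (1 - p)^n, and the expected number of trips is 1/p:

```python
p = 0.0012  # quoted per-island chance of Raymond spawning

def chance_after(n, p=p):
    """Probability of at least one Raymond encounter in n island trips."""
    return 1 - (1 - p) ** n

expected_trips = 1 / p
print(round(expected_trips))         # → 833
print(round(chance_after(1000), 2))  # → 0.7
```

So even 1,000 Nook Mile Ticket trips, each with its own stack of dialog screens, only gets you about a 70% chance.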
But don’t forget rule number 1: Animal Crossing rewards collusion.
Another way to get Raymond is for someone else to convince him to move out and then you go to their island and ask Raymond to move in. But that raises the question: "How much would you have to pay that person to let you do that?"
This is where the Animal Crossing economy gets a little complicated.
The most expensive item in the game is the Royal Crown which sells at the Able Sisters shop for 1.2 million bells. So you need 1.2 million bells and a bit of luck (the Royal Crown has to be on sale that week) to get it.
If you decide you no longer want the Royal Crown, you can sell it to Nook’s Cranny for 300,000 bells. On the other hand, if you decide to sell your Robot Hero, you can only sell it to Nook’s Cranny for 250,000 bells.
But does this make the Royal Crown more valuable than the Robot Hero? Of course not, because the Robot Hero took significantly more dialog screens to get through!
If you can multiply your bells 5x in a week just by sitting through a few airport loading screens, clearly bells are not a good unit of the Animal Crossing economy. What is a good unit?
Animal Crossing was released during the pandemic, so I thought that unit might be N95 masks, so early on in the game I started stockpiling those.
I was wrong. No one wants the hundreds of masks that I have spread out all over my island.
Consider the Nook Mile Ticket. Arguably the most desirable item in the game, Raymond, can be gained by spending Nook Mile Tickets, so that fact alone makes them desirable. But wait - they're really annoying to buy, too! You have to go through several dialog screens just to purchase one. And here's the kicker: you can't even buy them with bells. You have to buy them with Nook Miles, which you gain by completing annoying tasks, almost all of which have dialog screens associated with them.
Think about the layers of dialog screens here. You dig up a fossil and bring it to Blathers where you go through five dialog screens simply to identify and donate it. You do this a few times and you’ll complete a Nook Miles quest. Then you go through a bunch of dialog screens to exchange the Nook Miles for a Nook Miles Ticket, which you’ll spend at the airport for another set of dialog screens. If you do this 1000 times, statistically you might get Raymond.
In a way, the Nook Mile Ticket is the perfect unit of the Animal Crossing economy. But how do you get Nook Mile Tickets?
Due to some unfortunate timing on my part, Nintendo patched this bug while I was writing this blog post. If you’re still on v1.2.0b or lower, you can exploit it, but if you’ve updated to v1.2.1, it’s fixed. If you haven’t upgraded yet, I would recommend turning on Airplane Mode and switching on Bluetooth but switching off WiFi.
The method for getting Nook Mile Tickets that I'm about to describe is called the "mail duplication bug". You may have read about it before, or watched a video on it before Nintendo took them all down from YouTube, but I've spent at least 6 hours exploiting and optimizing it and have learned some things about it that I haven't seen anywhere else.
Here are the requirements: you need three characters on your island. Ideally one of those characters has upgraded their house to the point where they can move their mailbox, but this is not strictly necessary. You also need two controllers. I use a set of Joy-Cons and a Pro Controller, although I imagine you might be able to use the two halves of a single Joy-Con pair.
Generally how the bug works is: let’s say you have three characters, Alice, Bob, and Charlie. Bob mails items to Alice and then starts a Party Play with Alice. Alice walks into a building like Resident Services and walks out, triggering the mail to appear in their mailbox. Immediately after an autosave happens, Alice retrieves all the items from the Mailbox, switches control back to Bob, and then Bob selects “Pick residents again” so that they’re playing with Charlie, not Alice. Finally, Bob ends the Party Play session. After this happens, the mail will still exist in the mailbox with the items attached and also exist in the inventory of Alice.
This bug requires a lot of setup which can be quite tedious, but once the mail is in the mailbox, the steps for duplication can be completed rather quickly and repeated.
Some key concepts that you should know:
The fastest way to switch between two characters without restarting Animal Crossing is to start Party Play by using the “Call a Resident” feature of your phone and selecting that resident. After Party Play is engaged, shake the first player’s controller (left joycon if you’re using joycons) and then hit A on the controller of the second player. Then you can hit (-) and select End Session. Now you’re playing as that character!
You should practice using the Call a Resident feature before trying to exploit this bug. This means figuring out how to press the L and R buttons on both controllers one at a time in order to activate Party Play. You’re going to be doing it a lot.
Step 1: Create a third character.
Most people don’t have three characters on their island. If you’ve never added another character before, this just means adding another Profile to your Switch home screen, switching to it and starting Animal Crossing. I named mine “dupe”.
You’ll have to go through a lot of dialog screens.
If you want to delete this character later, you can do it from Settings on the title screen as your primary character.
Step 2: Gather the items that you want to duplicate and drop them somewhere on your island. If you don't have a bundle of 10 Nook Mile Tickets, this will mean purchasing them one at a time from the Nook Stop.
Step 3: Switch to your primary character. Have your primary character pick up their mailbox and move it to the left or right of the Plaza in Resident Services. This isn't strictly necessary but it will mean less walking later.
Step 4: Switch to your secondary character. Have your secondary character pick up all the items you want to duplicate.
Step 5: Go to the airport and mail them to your primary character. This requires a lot of dialog screens, almost 30 seconds per item.
Do this carefully - it is very easy to accidentally send mail without actually attaching an item and then you’ll have to skip over this piece of mail every time you want to engage the bug or clear it out.
Step 6: Switch to your primary character. Walk into Resident Services and walk out. You should see that you have new mail in your mailbox. Do not open the mail yet. Remove everything from your inventory so you have a completely empty inventory.
Step 7: Switch to your secondary character. Hit (-) and go to Call a Resident.
Select your primary character as the resident. Once Party Play has started, shake your left controller and hit A on your secondary controller. You should now be focused on your primary character.
This is where things get dicey. Let’s talk about autosaving.
Animal Crossing autosaves your state every 3 minutes. When it saves, there is a spinning icon in the top right of the screen.
Sometimes, when you see the island loading screen, like when you’re starting Party Play, you will also see the spinning icon on the top right, but - this is important - this is not the same as the autosave.
There are exceptions to this 3-minute cadence. For example, if you're in your inventory when the 3 minutes elapse, it will not autosave at that point; it will autosave after you close your inventory. Similarly, if you're in your mailbox when the 3 minutes elapse, it will autosave only after you close your mailbox.
The reason this is important is that autosave is not your friend when trying to exploit this bug. If the game autosaves after you've retrieved the packages from your mailbox but before you've managed to select "Pick Residents again" and choose your third character, the items will be permanently gone from the mailbox. They will be in your primary character's inventory, so you will not have lost them, but you will not have duplicated them, and you will have to go through the tedious process of mailing them again.
The key to this bug is that you want the autosave to happen right before you open the mailbox, to maximize the amount of time you have before it autosaves again.
I recommend having a timer so that you know how long you have. I use the timer on my iPhone. Immediately after I see the spinning icon in the top right, I start the timer.
So there are two methods for ensuring that the autosave happens right before you open the mailbox. One is to stand in front of the mailbox, wait until you see the spinning icon in the top right, start your timer, and open the mailbox.
The other is to stand in front of the mailbox, open your inventory, and wait at least 3 minutes. Then you know that the autosave will happen as soon as you close your inventory, so you can start your timer and open the mailbox.
Remember, you have 3 minutes! You have to do this quickly.
Step 8: Go through your mail and put all the presents in your pocket.
Don’t bother opening your inventory and opening the presents, just get them from the mailbox into your inventory as quickly as possible.
Switch back to your secondary character by shaking your controller and hitting A on the other controller. Then hit (-), select "Pick Residents again", and choose your third character.
After Party Play loads with your second and third characters, hit (-) and click End Session.
Steps 6-8 can be repeated over and over again, but eventually you will fill up the inventory of your primary character and need to open the presents and put them somewhere.
Most tutorials or videos I’ve read about this mail duplication bug get one of two things wrong:
If you’re starting from nothing, you may not have 40 items worth duplicating. But if you stop between each iteration of the exploit and mail yourself the duplicated items, you can quickly get from duplicating 10 Nook Mile Tickets at a time to 400 Nook Mile Tickets at a time. Ahh, the power of doubling. At that point, you’ll discover that the time consuming part of the exploit is not performing the exploit itself, but opening the presents and putting the items into storage to clear out your inventory.
As we've discussed, Nook Mile Tickets are the currency of the Animal Crossing economy, and thus can be traded on a secondary market for any desired item in the game, like a Robot Hero, Crescent-Moon Chair, Royal Crown, or even Raymond, if you duplicate enough of them.