Greetings, Earthling 🖖

I’m Shantanu, aka Shaan.

Your friendly neighborhood co-inhabitant of this tiny speck of dust, I maintain this site as a stochastic log of my calculations towards the futile aim of weeding out the anomalies from the equation that gives me my “42”.

In my Clark Kent mode, I spend my day at The Trade Desk, trying to crunch through petabytes of data and trillions of queries to understand human behavior and make the advertising technology world a little bit better.

Before that, I spent a couple of decades in the semiconductor world at Qualcomm and Google, building processors and AI accelerators, and tinkering with chips, operating systems, device drivers, human interface devices, security, et al.

When the lights go out everywhere, I like to don my maker hat and build stuff that no one wants.

I like to make and break things around me, ranging from my smart toaster/TV to my web and phone apps to my car. I also strum a bit of guitar, 3D print stuff, and, of course, shit-post on Twitter @shantanugoel.

Sometimes I post some of my travel photos and 3D prints on Instagram, because I’ve been told by my Gen-Z interns that that’s a thing to do.

Do check out some of the other subdomains that I run.

Bazel rules to auto-generate files at compile time

Auto-generated files are pretty common. A project that needs them generally falls into one of three scenarios:

  • An external pre-built tool generates the files before compilation, and the generated files get checked into the tree
  • The tool is compiled in-tree, but is again used to generate the files before compilation, which are then checked into the tree
  • The tool is built during the main build step and then generates the needed files just in time

Arguably, the last method is usually the better one: it keeps friction to a minimum during development by always generating the latest files according to any changes done locally, and it guards against the human error of someone forgetting to commit the separately generated files, or a window where the tree is out of sync because the generated files and hand-written files were committed separately. There is a con as well: such files are not available to someone going through the code statically for understanding or debugging. But one could always couple both methods if so desired.
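As a rough sketch of that third approach, a Bazel genrule can run a tool that was itself built in-tree (every target and file name below is illustrative, not from a real project):

```python
# BUILD — a minimal sketch of the "build the tool, then generate just in
# time" flow.

# The generator is an ordinary in-tree binary, built as part of the build.
cc_binary(
    name = "codegen",
    srcs = ["codegen.cc"],
)

# genrule invokes the freshly built generator to emit a header on demand.
# $(location :codegen) resolves to the built tool; $@ is the declared output.
genrule(
    name = "gen_version_header",
    outs = ["version.h"],
    cmd = "$(location :codegen) > $@",
    tools = [":codegen"],
)

# Downstream code depends on the generated header like any other file.
cc_library(
    name = "app",
    srcs = ["app.cc"],
    hdrs = [":gen_version_header"],
)
```

Because the genrule sits on the dependency graph, Bazel regenerates version.h whenever codegen.cc changes, which is exactly the “always up to date, nothing to commit” property described above.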

Java “Object” in C++ using std::variant

I’m going through this brilliant book, Crafting Interpreters, these days to learn more about how interpreters are built. My attempts so far have been amateurish, as I’ve never taken a formal CS course, and this looked like a useful way to upskill my toolkit. However, the book implements the interpreter in Java (at least the first part, which I am going through now), and I’m following along in C++ instead, since I don’t have much experience in Java, nor any inclination to learn it. In one of the chapters, the author uses Java’s “Object” class to hold the literal values that may appear in the script being parsed by the interpreter. The Java docs say that:

“Class Object is the root of the class hierarchy. Every class has Object as a superclass.”
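For the C++ port, one way to approximate that catch-all Object is a std::variant narrowed to the handful of literal types the language actually needs. A minimal sketch of the idea (the type alias and helper names are mine, not the book’s):

```cpp
#include <iostream>
#include <string>
#include <type_traits>
#include <variant>

// A stand-in for Java's Object, narrowed to the literal types the
// interpreter needs: nil, booleans, numbers, and strings.
using Literal = std::variant<std::monostate, bool, double, std::string>;

// Render whichever alternative the variant currently holds.
std::string to_string(const Literal& literal) {
    return std::visit(
        [](const auto& value) -> std::string {
            using T = std::decay_t<decltype(value)>;
            if constexpr (std::is_same_v<T, std::monostate>) {
                return "nil";
            } else if constexpr (std::is_same_v<T, bool>) {
                return value ? "true" : "false";
            } else if constexpr (std::is_same_v<T, double>) {
                return std::to_string(value);
            } else {
                return value;  // std::string
            }
        },
        literal);
}

int main() {
    for (const Literal& l :
         {Literal{}, Literal{true}, Literal{3.14}, Literal{std::string{"lox"}}}) {
        std::cout << to_string(l) << '\n';
    }
}
```

std::monostate plays the role of Java’s null here, and std::visit takes the place of the instanceof checks the Java version leans on.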

Automatic notes backup on macOS with Hammerspoon

I’m a backup nerd and like to back up everything I do. Not just that, I like to version everything too, so I can go back in time to any point. git serves me well for versioning wherever small data/text files are concerned, and I back this up with remote git servers: my own git server on a Raspberry Pi at home, a GitLab server, and a GitHub server. Prime targets for this are my dotfiles, which I edit in Spacemacs, and my notes, which I take in Joplin. The only issue, though, is that every time I change something, which is pretty often (since I’m a tinkering nerd too), I have to manually commit the changes and push them upstream. Apart from being all kinds of nerds, I’m an automation nerd too, which is a fancy way of saying that I am lazy, but I digress.
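Hammerspoon makes this kind of laziness scriptable. A rough sketch of the idea (the repo path and interval are placeholders of mine; hs.timer.doEvery and hs.task.new are standard Hammerspoon APIs):

```lua
-- ~/.hammerspoon/init.lua — periodic git backup, sketched with
-- placeholder paths and timings.
local notesRepo = os.getenv("HOME") .. "/notes"

local function backupNotes()
  -- Stage everything, commit, and push. git exits non-zero when there
  -- is nothing to commit; that is harmless here.
  local cmd = string.format(
    'cd %q && git add -A && git commit -m "auto backup" && git push',
    notesRepo)
  hs.task.new("/bin/sh", nil, { "-c", cmd }):start()
end

-- Keep a global reference so the timer is not garbage collected.
notesBackupTimer = hs.timer.doEvery(30 * 60, backupNotes)
```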

Fixing the Raspberry Pi 4 Ethernet disconnection problem

I recently added a Raspberry Pi 4B to my ever-expanding homelab. To get the best network out of the new gigabit ethernet port on the raspi, and still save power, I added a PoE hat so I could both power the raspi and feed it data through the ethernet port. Everything worked fine, except that I occasionally got into situations where the raspi suddenly stopped responding. Initially I thought it was crashing due to some issue and would just restart it manually. I tried switching from Manjaro ARM to Raspbian as well, but that showed the same symptoms, despite updating to the latest bootloader and firmware. Then I noticed that I could hear the fans on the PoE hat whirring up and settling down even when the raspi was inaccessible. This made me doubt that the raspi had actually crashed, because the fans spin up and down according to the CPU temperature, so this pattern meant the CPU was still doing something to heat it up. A trip to the kernel logs via journalctl -xe confirmed the suspicion and also turned up a few weird messages about the ethernet.
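For anyone retracing that diagnosis, the kernel-side chatter can be filtered out of the journal directly; bcmgenet is the driver for the Pi 4’s onboard NIC (the exact log lines aren’t reproduced in this excerpt):

```sh
# Show only kernel messages, filtered to the onboard ethernet driver.
journalctl -k | grep -i genet

# Or follow new kernel messages live while reproducing the hang.
journalctl -kf
```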

Migrating my hugo blog from GitLab/AWS S3 to GitHub Pages with Actions

Until Now: The State of the Union

This blog is generated using hugo, an awesome static site generator. So far, the workflow I used to deploy it was as follows (a rough sketch of the CI config follows the list):

  • Push commit to the source repository on GitLab
  • GitLab CI kicks off on receiving the push
    • CI downloads the latest version of hugo and generates the static site
    • Runs aws-cli to sync the new files to AWS S3
  • S3 serves the static site
  • Cloudflare provides:
    • DNS services (so I can use https://shantanugoel.com without having to prefix it with a www)
    • CDN/Caching services for resilience and keeping S3 bills low for data transfer
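For reference, the GitLab CI side of that pipeline boils down to a small job along these lines (the image, package names, and bucket variable are illustrative; the real config isn’t reproduced here):

```yaml
# .gitlab-ci.yml — a rough sketch of the old deploy job.
deploy:
  image: alpine:latest
  script:
    # Fetch hugo and the AWS CLI, then build the site into ./public
    - apk add --no-cache hugo aws-cli
    - hugo --minify
    # Sync the generated files to the S3 bucket serving the blog
    - aws s3 sync public/ "s3://$S3_BUCKET" --delete
  only:
    - master
```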

Why? What broke the camel’s back

I was mostly happy with this setup, with a couple of niggles at the back of my mind, viz.: