My development toolkit

2022-02-14

Over the years I’ve accumulated quite a collection of tools with which to perform my job. Most of these are CLI tools, and some of them I built myself. In this post I’ll document the tools I use on a regular basis.

Vim

I’ve been using vim for over a decade now. I started while I was in school, and though I’ve tried various other editors over the years, I always end up back with vim. My vim config can be found in my dotfiles repo.

Notable plugins:

Tmux

I like to consider the combination of vim and tmux to be my IDE. If I need to run a command, I’ll open a pane, run the command, then close the pane when it’s no longer needed. My daily workflow involves dozens and dozens of ephemeral shells opened as tmux windows and panes throughout the day.

My tmux config can be found in my dotfiles repo. It’s quite basic. The only important plugin I have is christoomey/vim-tmux-navigator, which allows me to navigate between vim and tmux panes using the same hotkeys.

My workflow is to have a window for each project (or service). Within each window I have a main pane running vim. I’ll typically have one or two panes below the main pane for running commands, with more created ad hoc as needed.
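
For example, assuming a project lives at ~/src/myproject (the path and window name here are just placeholders), setting up such a window boils down to a couple of tmux commands:

tmux new-window -n myproject -c ~/src/myproject
tmux split-window -v -c ~/src/myproject    # extra pane below the main vim pane for running commands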

Knowledge base

I maintain a public knowledge base where I cultivate thoughts, ideas, and knowledge in general.

I wrote a CLI tool called kb for managing my knowledge base. I use it to quickly retrieve information from the knowledge base or add new information to it. I think a knowledge base is most effective when accessing it is as frictionless as possible.
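
The point about friction is easiest to see with a concrete (hypothetical) example: even without a dedicated tool, something as simple as fuzzy-searching a notes directory gets most of the way there. The ~/knowledge-base path below is just a placeholder, and kb itself does more than this.

"$EDITOR" "$(find ~/knowledge-base -type f -name '*.md' | fzf)"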

fzf

fzf is indispensable for anyone who spends a lot of time in their terminal. At its core, fzf is a fuzzy filter that can be applied to any list. Pipe some lines into fzf and use it to quickly filter through them.

I like to think of fzf as a force multiplier; it takes an existing workflow and makes it so much more efficient. You’ll notice a common theme in this post of fzf-powered commands. I use it everywhere.
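
For example, to interactively pick a git branch to check out (just one of many ways to wire fzf into an existing workflow):

git branch --format='%(refname:short)' | fzf | xargs git checkout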

SSH session multiplexer

I have a couple of scripts I use to quickly shell into different remote machines and synchronize my inputs across all of the shells. The scripts require tmux. Typically I use this for checking logs and investigating incidents.

This capability is kind of a superpower. It turns a five-minute job of checking a couple dozen servers into a five-second one. If you don’t use tmux, search for “cluster SSH” for your particular OS and terminal app; there should be something available.
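
The core of the idea is just a handful of tmux commands. A minimal sketch of an mssh-style script (not my actual script), run from inside an existing tmux session with hostnames passed as arguments, might look like this:

#!/bin/sh
# open a window with one pane per host, then mirror keystrokes across all panes
tmux new-window -n mssh "ssh $1"
shift
for host in "$@"; do
  tmux split-window -t mssh "ssh $host"
  tmux select-layout -t mssh tiled
done
tmux set-window-option -t mssh synchronize-panes on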

K8s tooling

I use these commands multiple times daily. At Carousell, our staging environment consists of isolated k8s deployments where we can test our features in isolation from other features. I’m often switching between different k8s contexts and shelling into different pods. These commands all use fzf for quick filtering.
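
These are not my exact scripts, but the underlying commands are just standard kubectl wired up to fzf:

# switch to another k8s context
kubectl config use-context "$(kubectl config get-contexts -o name | fzf)"

# shell into a pod in the current namespace
kubectl exec -it "$(kubectl get pods -o name | fzf)" -- sh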

/etc/hosts management

I have a couple scripts to update my /etc/hosts file with hostnames of machines in my company. At Carousell we have a combination of cloud-managed VMs and K8s pods, so I also frequently find myself needing to SSH into various VMs. Having records in /etc/hosts enables host completion. I only generate the more relevant hosts into my /etc/hosts; the rest go into a separate file I access with the allhosts command.

I can use the hosts command in conjunction with my mssh script to quickly SSH into multiple hosts. For example, to SSH into all foo-db-slave-* hosts:

hosts | grep foo-db-slave- | xargs mssh

jq

jq is another indispensable tool that should be in every engineer’s toolkit. Apart from the day-to-day exploration and visualization use cases, I’ve also used jq for analysis-heavy jobs involving large blobs of JSON; entire Python scripts have been replaced with simple jq pipelines.
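
As a made-up example of the kind of pipeline I mean (the file and field names here are hypothetical), counting the distinct error messages in a large JSON array:

cat events.json | jq -r '.[] | select(.level == "error") | .message' | sort | uniq -c | sort -rn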

I have a wrapper around jq and fzf called jqfzf that allows me to interactively construct jq queries on some JSON input.
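
The underlying idea can be sketched with a well-known fzf trick: use the query line as the jq program and render the result in the preview window. A stripped-down sketch (not the actual wrapper):

echo '' | fzf --print-query --preview 'jq --color-output {q} input.json'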

gron

gron transforms JSON into discrete assignments to make it easier to grep.
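
For example:

$ echo '{"user": {"name": "alice", "tags": ["a", "b"]}}' | gron
json = {};
json.user = {};
json.user.name = "alice";
json.user.tags = [];
json.user.tags[0] = "a";
json.user.tags[1] = "b";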

A side effect of this is that it also makes JSON easily diffable. I have a script called jsondiff that diffs two JSON files. An example use case is to compare two different API responses. Most of the time I use it for ad-hoc operations, since I frequently work with JSON.
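
A minimal sketch of the same idea, which may or may not match what jsondiff actually does:

diff <(gron a.json) <(gron b.json)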

I sometimes use gron as an alternative to jq for JSON exploration. Piping data into gron is a quick way to visualize its JSON structure.

Using gron and grep is also a quick way to get a jq query. Consider a large document where you have a value of interest, but are not sure what the JSON path is to that value. You can easily find out with gron and grep:

$ cat input.json | gron | grep "foo"
.[0].some.very.nested[3].path.to[2].your.value = "foo"

One scenario where I use this pattern is when investigating Sentry events. The Sentry UI is kind of slow and doesn’t support the kinds of analysis I want to do when investigating an issue. Instead I download the JSON blob of Sentry events, then use jq to parse it. The JSON is rather heavily nested, so instead of manually figuring out how to reach the field I’m interested in, I just do cat events.json | gron | grep "SomeError" to immediately find its JSON path.

diff-so-fancy

diff-so-fancy makes diff output more readable. You may also be interested in delta, which does more than diff-so-fancy. I tried delta for a little while but found its output too busy: too many colors, too many Unicode symbols, and the syntax highlighting was distracting.

I recommend using either of the above as your git pager:

[pager]
	diff = diff-so-fancy | less

VisiData

VisiData is a tabular visualization tool for different data formats. I’ve only started using it recently, and have added a reference page for it on my knowledge base.
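
Basic usage is just pointing vd at a file, or piping data into it, for example:

vd data.csv
cat response.json | vd -f json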

Git aliases

I have a number of git aliases that enhance common git commands with fzf. You can find the full list of aliases in my knowledge base.

An example is git fixup, which allows me to quickly add fixup commits that I later squash with an interactive rebase before opening a PR.

[alias]
fixup = !git commit --fixup $(git log --oneline --color=always | fzf --ansi | awk '{print $1}')
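
The fixup commits then get squashed automatically during an interactive rebase with --autosquash (assuming main is the base branch here):

git rebase -i --autosquash main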

Plantuml and Graphviz

I often work on PlantUML or Graphviz diagrams, usually for documentation and RFCs. I like being able to version control my diagrams. I use Graphviz for simpler diagrams like state machines and PlantUML for the more complex diagrams.

Sequence diagrams in particular are where PlantUML shines. It’s so much easier to build a sequence diagram with PlantUML than with visual tools like draw.io.
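
Both render plain text files from the command line, which is what makes version-controlling diagrams painless (the file names below are placeholders):

plantuml -tsvg sequence.puml
dot -Tsvg states.dot -o states.svg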

md-code-renderer – Render diagrams in markdown

I wrote a program called md-code-renderer that renders PlantUML or Graphviz code blocks in markdown files into diagrams.

You can see the program being used in these knowledge base pages:

setop – Perform set operations on files

Occasionally I’ll need to perform set operations on lines in two files. For instance, to find the lines that exist in one file but not another. To do this I wrote a small program called setop. I rarely need to use it, but whenever I do it always feels good to be able to do so without a second thought.
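
For the common cases, the standard comm utility does the same job, assuming both inputs are sorted:

comm -23 <(sort a.txt) <(sort b.txt)    # lines in a.txt that are not in b.txt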

Go test helpers

These are some commands I use occasionally when working with Go tests. go-test-runner allows me to run a single Go subtest. It uses fzf for filtering. go-test-example-diff enhances the output of failed ExampleXXX Go tests by providing a diff.
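
go-test-runner is roughly this idea (a sketch covering top-level tests only, not the actual tool; subtest names would need to be appended to the -run pattern):

t=$(go test -list '.*' . | grep '^Test' | fzf)
go test -run "^${t}$" -v .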

Work-specific tooling

I’ve built many other programs for use in my work. Usually these programs are interfaces into various services that my team manages. I have some examples of such tools in this blog post: Build CLI tools for common tasks.