I frequently build command-line (CLI) tools to perform common tasks. These tools not only help me do my job more effectively, but also have other compounding effects on productivity.
Here’s an example. I find myself often needing to fetch information about particular entities during my day-to-day work. Sometimes it’s to investigate an issue, or simply to test some feature I’m developing. Simple fetches are often the first thing I write tooling for:
$ listingclient get <ID>
{"title": "Macbook Pro", "description": "", "price": 1200.00, ...}
Why build tooling?
- Minimized effort for repeated tasks - By implementing your common tasks as commands, you make them easily repeatable with minimal effort. It’s easier than `curl`ing an API, querying a database, or clicking around a web interface.
- Consistency - CLI tools provide consistency as well. I know many engineers who keep a Postman instance filled with queries that they edit on an ad-hoc basis every time they need to perform some task.
- Productivity multiplier for your team - Chances are if you find yourself needing to perform a task often, your teammates will have that same need as well. Why not build something once, and share it amongst your team? As your and your team’s needs evolve, features can be incrementally added. Eventually you end up with a set of feature-rich tooling, battle-tested over years of use.
- Your tools can act as a public interface for other teams to use - The tooling owned by your team can be shared with other teams, effectively providing them with an interface into your team’s services. For example, my team owns the Listing entity at Carousell. Other teams can simply use our client to get listing information, without having to worry about which internal API they should call or what its API contract is.
How about a web interface?
At Carousell we have an internal web platform. However, it’s not built in a way that makes it easy for backend engineers to contribute to, which means its feature set often lags behind what engineers need. The platform generally exposes only high-level information, since its primary user demographic is non-tech folks. Unless you’re fortunate enough to work somewhere that has a robust internal platform you can easily contribute to, it’s far easier to spin up some CLI tooling.
In addition, here are some reasons why I generally prefer working with CLI tools:
- Integrates with terminal-based workflows - For engineers who work exclusively in the terminal, CLI tooling integrates more effectively. When I need to use one of my commands, I just open a tmux split, run that command, then close the split once I’m done. It’s much more efficient than tabbing into a browser and clicking around some UI.
- IDE integration - For example, we have a few JSON configs that have rather complex schemas. Whenever I need to make a change to one of them, I want reassurance that the change I’m making is valid. To aid this I have some simple programs that validate these configs, which I can then invoke from my editor (vim makes this a trivial operation).
- Easier to consume - The programs I write generally output in JSON. I’ll typically either use jq or gron+grep to filter through output. It’s a lot easier to consume output in this manner compared to a web page.
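As a minimal sketch of this kind of filtering, here `echo` stands in for a real client; the fields mirror the listingclient output shown earlier:

```shell
# echo stands in for a CLI tool that emits unindented JSON on stdout
echo '{"title": "Macbook Pro", "description": "", "price": 1200.00}' |
  jq -r '.title'   # extract a single field as raw text; prints: Macbook Pro
```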
- Scriptability - The programs you write can be composed together into some larger script. It’s always nice when you realize that the task you’re working on can be easily done by chaining a few of the commands you’ve already written, instead of having to write something bespoke.
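To sketch what that composition looks like, here a small stub function stands in for a hypothetical client (so the pipeline is runnable as-is); the point is that each command emits plain text that the next tool can consume:

```shell
# Stub standing in for a hypothetical client such as listingclient;
# a real client would call an API instead of printing canned JSON.
listingclient() {
  printf '{"id": "%s", "title": "Macbook Pro", "price": 1200.00}\n' "$2"
}

# Compose the "client" with standard text tools to pull out one field
listingclient get 42 | sed 's/.*"price": \([0-9.]*\).*/\1/'   # prints 1200.00
```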
Tips
This is a good reference for CLI guidelines.
Here are some of my personal ones:
- Follow the Unix philosophy - Have your commands do one thing and one thing only. Let your users compose commands together if they need to perform a sequence of steps with your tool.
- Output in unindented JSON - If someone wants colorized or indented output, they can pipe it through `jq`.
- Use a subcommand structure - Subcommands help you group related commands together, as well as make it easier for you to extend your program with more features. A common framework for Go is spf13/cobra, but the standard library works fine as well. A good real-world example of subcommands using the stdlib is golang-migrate/migrate.
- Return a zero exit code on success, non-zero on failure - Scripts use exit codes to determine if your program is successful, so you should return these correctly.
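To illustrate the exit-code point, a minimal runnable sketch; the `check` function here is a stand-in for any CLI command:

```shell
# check stands in for a CLI command; it returns the exit code it's given
check() {
  return "$1"
}

check 0 && echo "succeeded"         # && runs only after a zero exit code
check 1 || echo "failed, code $?"   # || runs only after a non-zero exit code
```

This is exactly how scripts (and tools like `watch`, `make`, or CI runners) decide whether a step worked, which is why returning the right code matters.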
Examples
My most common use-case is to get information about various entities.
listingclient get <ID>
categoryclient get <ID>
We also have commands that implement CRUD operations. Here’s a set of commands (tweaked for brevity) we have for performing bulk jobs.
listingclient bulkjob get --limit 5                # get the 5 most recent jobs
listingclient bulkjob create <...args>             # create a new job
listingclient bulkjob [pause|unpause|cancel] <ID>  # manage the job state
As mentioned before, I subscribe to the Unix philosophy of building tools that do one thing and do it well. I like to build in such a way that workflows can be composed on top of the commands. For instance, if I wanted to monitor the progress of a particular job, I could simply do the following.
watch 'listingclient bulkjob get <ID> | jq'
Here’s another example of composing a workflow. Let’s say I have some entity that I’d like to edit. I could very easily compose a script that (1) gets the entity, (2) opens it in an editor for editing, and (3) updates the entity.
#!/bin/bash
ID=$1  # take the entity ID as the first argument
fooclient foo get "$ID" | jq '.' > "/tmp/foo_$ID.json"
vim "/tmp/foo_$ID.json"
fooclient foo update --file "/tmp/foo_$ID.json" --id "$ID"
Hopefully this post has given you some inspiration!