As I mentioned before, I run almost all of my infrastructure on containers. I build every image myself, from scratch. They’re minimal, and they do what I need.
Most of this pipeline looks after itself: update checking, image building, pushing to the registry. The bit I was still doing by hand was deploying, mostly because I wasn’t happy with the existing options:
Polling
The most common approach is polling: a tool runs on a schedule, checks the registry for new images, and updates containers if it finds any. Polling forces a trade-off: either you poll infrequently and accept deployment latency, or you poll constantly and make wasteful requests to the registry.
The less obvious problem is that polling doesn’t know what actually changed. It has to check every image, compare tags or digests, and decide what needs updating. With many images across multiple servers, this is wasteful - constant work for something that only matters when you push a change.
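To make that cost concrete, here's a minimal sketch of what a polling updater has to do on every tick. The registry lookup is stubbed out, and all the names are illustrative rather than taken from any real tool — the point is that every image costs a registry round trip per tick, whether or not anything changed:

```go
package main

import "fmt"

// deployed maps image references to the digest currently running.
var deployed = map[string]string{
	"registry.example.com/app:latest": "sha256:aaa",
	"registry.example.com/db:latest":  "sha256:bbb",
}

// fetchDigest stands in for a registry request per image -- in a real
// poller this network round trip happens every tick, changed or not.
func fetchDigest(image string) string {
	// Stubbed: pretend only "app" was rebuilt since the last tick.
	if image == "registry.example.com/app:latest" {
		return "sha256:ccc"
	}
	return deployed[image]
}

// pollOnce checks every image and returns the ones needing an update.
func pollOnce() []string {
	var stale []string
	for image, current := range deployed {
		if fetchDigest(image) != current {
			stale = append(stale, image)
		}
	}
	return stale
}

func main() {
	fmt.Println(pollOnce())
}
```

Multiply that loop by every server and every schedule interval, and almost all of the work is confirming that nothing happened.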
Many of these polling tools also try to reimplement how Compose works, which leads to weird, subtle bugs on restarts.
Generic Webhook Receivers
You can use a generic webhook tool like webhook or an automation platform like n8n to receive the registry webhook and then trigger an update.
This works, but you end up writing the Docker interaction logic yourself anyway: finding the right containers, matching images, pulling updates, running Compose. They also break easily when you push lots of images at once — concurrent runs step on each other, and file locks are fragile.
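Even the first step of that "write it yourself" work is fiddly. A rough sketch of just the payload handling, assuming the Distribution registry's event-notification format (an events array with action and target fields); everything after this — matching containers, pulling, restarting — is the genuinely hard part:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pushEvent models the minimal slice of the Distribution registry's
// event notification payload that a deployer actually needs.
type pushEvent struct {
	Events []struct {
		Action string `json:"action"`
		Target struct {
			Repository string `json:"repository"`
			Tag        string `json:"tag"`
		} `json:"target"`
	} `json:"events"`
}

// pushedImages extracts "repo:tag" for each push event in the payload.
func pushedImages(payload []byte) ([]string, error) {
	var ev pushEvent
	if err := json.Unmarshal(payload, &ev); err != nil {
		return nil, err
	}
	var images []string
	for _, e := range ev.Events {
		if e.Action == "push" && e.Target.Tag != "" {
			images = append(images, e.Target.Repository+":"+e.Target.Tag)
		}
	}
	return images, nil
}

func main() {
	body := []byte(`{"events":[{"action":"push","target":{"repository":"myapp","tag":"latest"}}]}`)
	imgs, _ := pushedImages(body)
	fmt.Println(imgs) // [myapp:latest]
}
```

And this is before any concurrency handling: two of these firing at once for different images is exactly where the ad-hoc versions fall over.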
CI/CD Deployment
Another approach is to have your CI/CD pipeline push updates directly: your GitHub Actions workflow builds the image, pushes it to the registry, then SSHs into the server and runs docker compose up.
This works, but every CI job has the potential to break production. Your build pipeline is now tightly coupled to your infrastructure setup. Move a service to a different server? Update your CI configuration. Add a new server? Update every workflow that might deploy to it. CI gets compromised? Your infrastructure is compromised.
Orchestration
Then there are the full orchestration platforms: Portainer, Kubernetes, and similar. They solve problems you don’t have and create ones you didn’t. K8s adds a control plane, schedulers, service meshes, and so much YAML. Portainer adds its own state and a slightly weird way of managing stacks, and forces you into a web UI.
I’m fairly happy with Docker Compose and don’t need all that overhead. I just want my compose files to be the source of truth.
My solution
I wanted something with the same philosophy as the images: minimal, purpose-built, doing one thing well.
I’d tried building something like this before, but Docker Compose wasn’t available as a library back then — I had to shell out to the CLI, which ran into the same problems described above. Recent versions changed that, exposing Compose as a Go library with the CLI as a thin wrapper on top.
So I built Adze. It does one thing: listens for registry pushes, then finds and redeploys any Compose or Swarm services using that image. No polling, no CI access to production, no orchestration. Updates queue and process one at a time, because I’ve learned the hard way what happens otherwise.
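That "queue and process one at a time" behaviour is the classic single-worker pattern in Go: a buffered channel feeding one goroutine, so concurrent webhook deliveries can never trigger overlapping deploys. This isn't Adze's actual code — the names are illustrative and the deploy step is injected — just a sketch of the shape:

```go
package main

import (
	"fmt"
	"sync"
)

// updater serializes deploys: any number of goroutines may enqueue,
// but a single worker drains the queue, so updates never overlap.
type updater struct {
	queue chan string
	done  sync.WaitGroup
}

func newUpdater(deploy func(image string)) *updater {
	u := &updater{queue: make(chan string, 64)}
	u.done.Add(1)
	go func() {
		defer u.done.Done()
		for image := range u.queue { // one at a time, in arrival order
			deploy(image)
		}
	}()
	return u
}

func (u *updater) enqueue(image string) { u.queue <- image }

// close stops accepting work and waits for the queue to drain.
func (u *updater) close() {
	close(u.queue)
	u.done.Wait()
}

func main() {
	var order []string
	u := newUpdater(func(image string) { order = append(order, image) })
	// Simulate a burst of registry webhooks landing at once.
	for _, img := range []string{"app:latest", "db:latest", "proxy:latest"} {
		u.enqueue(img)
	}
	u.close()
	fmt.Println(order) // [app:latest db:latest proxy:latest]
}
```

The buffered channel is what makes a burst of pushes safe: webhook handlers return immediately, and the deploys still happen strictly in sequence.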
I push an image, and within seconds the relevant services are updating. It’s the last piece of a pipeline that now runs end to end, without me touching anything in between.
I’ve been criticised for my… blunt naming scheme, so this time I decided to break my tradition and pick a name for the project. An adze is an old woodworking tool, traditionally used for shaping and maintaining wooden boats. Handcrafted vessels need maintaining, and this felt like the right name for a tool that keeps handcrafted infrastructure in shape.