A small, boring deploy pipeline

GitHub Actions, a single shell script, and a Caddy server. What our deploys actually look like after the shiny wears off.

We deploy most of our services with one shell script, one GitHub Actions job, and a Caddy reverse proxy on a single VPS. No Kubernetes, no Helm, no service mesh. After the shiny wears off, this is what our deploys actually look like — and it covers more services than you'd think before it stops scaling.

Quick answer: a small deploy pipeline. GitHub Actions builds a binary or container on every push to main, copies it to a server via SSH, and runs a deploy script that swaps the running process behind Caddy. Logs to journald. Health check in the deploy script; systemd restarts on failure. Rollback by re-running the previous tag. Total moving parts: under five.

The shape of the pipeline

The whole thing in one diagram, without the diagram:

git push main
  → GitHub Actions: test, build binary, tag, upload artifact
  → SSH to server, scp the artifact
  → ./deploy.sh: stop service, swap binary, start service
  → Caddy reverse-proxies the service on :443 with auto-TLS
  → systemd-journald keeps logs; systemd restarts on failure

One shell script

The deploy script lives on the server. It's under 40 lines. It keeps a copy of the previous binary, swaps in the new one, runs migrations, restarts the systemd unit, and waits for the health check; if the check fails, it swaps the old binary back and restarts again.

#!/usr/bin/env bash
# /opt/app/deploy.sh — runs on the server.
set -euo pipefail

ARTIFACT="$1"  # path to the new binary
SERVICE="app"

# Keep the previous binary for one-command rollback.
cp /opt/app/bin/app /opt/app/bin/app.previous || true
mv "$ARTIFACT" /opt/app/bin/app
chmod +x /opt/app/bin/app

# Run migrations with the new binary before restarting the running
# process. The old binary keeps serving during this step, so migrations
# need to be backward-compatible with it.
/opt/app/bin/app migrate up

systemctl restart "$SERVICE"

# Wait for /healthz to come back 200 — fail the deploy if it doesn't.
for i in {1..30}; do
  if curl -fsS http://localhost:8080/healthz >/dev/null; then
    echo "deploy ok"; exit 0
  fi
  sleep 1
done
echo "health check failed; rolling back"
mv /opt/app/bin/app.previous /opt/app/bin/app
systemctl restart "$SERVICE"
exit 1
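
Running it by hand looks exactly like what CI does, which is handy for the first deploy to a fresh server or for debugging when the pipeline is misbehaving. A sketch, using the deploy user and the api.example.com host that appear later in this post:

# Same two steps the workflow runs, from a workstation.
# Hostname and user are the ones used elsewhere in this post.
scp ./app deploy@api.example.com:/tmp/app
ssh deploy@api.example.com /opt/app/deploy.sh /tmp/app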

One GitHub Actions job

The CI side is a single workflow. Test, build a static Go binary, copy it over, run the script.

# .github/workflows/deploy.yml
on: { push: { branches: [main] } }

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with: { go-version: "1.23" }
      - run: go test ./...
      - run: CGO_ENABLED=0 go build -o app ./cmd/app
      - name: Copy to server
        uses: appleboy/scp-action@v0
        with:
          host: ${{ secrets.HOST }}
          username: deploy
          key: ${{ secrets.SSH_KEY }}
          source: "app"
          target: "/tmp/"
      - name: Run deploy script
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.HOST }}
          username: deploy
          key: ${{ secrets.SSH_KEY }}
          script: /opt/app/deploy.sh /tmp/app
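
The workflow expects two repository secrets: the server address and the private key for the deploy user. With the GitHub CLI, setting them once looks roughly like this (the address and key path are placeholders):

# One-time setup for the secrets the workflow reads.
# 203.0.113.10 and ~/.ssh/deploy_key are placeholders.
gh secret set HOST --body "203.0.113.10"
gh secret set SSH_KEY < ~/.ssh/deploy_key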

Caddy on a single VPS

Caddy gives you HTTPS for free. The whole config is one file:

# /etc/caddy/Caddyfile
api.example.com {
  reverse_proxy localhost:8080
  encode gzip
}

That's it. Caddy provisions a Let's Encrypt certificate on first start and renews it automatically. The systemd unit for the app is another ten lines, and journald handles all log collection.
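
For completeness, a unit in that spirit, using the same paths and user as the rest of this post; treat it as a sketch rather than our exact file:

# /etc/systemd/system/app.service — sketch; adjust paths to your layout.
[Unit]
Description=app
After=network.target

[Service]
ExecStart=/opt/app/bin/app
WorkingDirectory=/opt/app
User=deploy
Restart=on-failure
RestartSec=2

[Install]
WantedBy=multi-user.target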

What this isn't

This pipeline is deliberately small. It does not give you:

  • Blue-green deploys. There's a brief restart window — usually under a second.
  • Multi-region. One server, one region. Add a second VPS and Caddy load balancing if you outgrow it.
  • Container orchestration. The binary runs as a systemd unit. If you need containers, swap the binary swap for a Docker pull (a sketch of that variant follows this list).
  • A staging environment. We deploy to a separate server with the same script and a different secret set.
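
For the container case, only the binary-swap portion of deploy.sh changes; the health-check loop at the end stays the same. A sketch, with an illustrative image name:

# Containerized variant of the swap-and-restart step.
docker pull ghcr.io/example/app:latest   # image name is illustrative
docker stop app 2>/dev/null || true
docker rm app 2>/dev/null || true
docker run -d --name app --restart unless-stopped \
  -p 127.0.0.1:8080:8080 ghcr.io/example/app:latest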

The cost of those features is real — operational complexity, cognitive overhead, the occasional incident caused by the deploy infrastructure itself. For a service doing a few hundred requests per second, none of them earn their keep.

When to evolve it

The signals that this pipeline is no longer enough:

  • You have multiple services that all need the same deploy machinery, and the script is forking.
  • The restart window during deploys is causing user-visible failures.
  • You need to deploy to multiple regions or run multiple replicas of the same service for redundancy.
  • You're hitting the limits of a single VPS — usually database, not application, and that's a different conversation.

When those signals appear, the next step is usually a small Nomad or Docker Swarm cluster, or a managed platform like Fly.io. Skip Kubernetes until you have at least 10 services and a dedicated infra person. Kubernetes is a great answer to a problem most teams don't have.

If you're standing up a new service and want a sanity check on the deploy story, we do platform reviews. The shape of the pipeline is one of those decisions that's easy to over-engineer and hard to walk back from.

