We've shipped Go backends with golang-migrate, Atlas, Flyway, and raw SQL scripts checked into git. After enough of those, we built gomigrate — a focused migration tool for Go and Postgres that does one thing well: runs plain SQL migrations in the right order, atomically, with a clean CLI and no configuration overhead.
The problem with migration tools
Most migration tools have one of two failure modes. The first is complexity: YAML config, driver wrappers, source types, a ten-flag CLI. golang-migrate is powerful but the surface area is large and the error messages are terse. The second is coupling: tools that generate Go code from your schema, then require the generated code to run the migrations. If the migration and the code diverge, you have two sources of truth.
gomigrate has neither problem. It reads plain SQL files, runs them in a transaction, and records what ran. The binary knows where to find the files because you embed them.
How gomigrate is designed
Three principles drove the design:
- Plain SQL, always. Migrations are .sql files. Any Postgres client can read them. No DSL, no Go structs, no YAML.
- Embedded into the binary. Migrations travel with the binary that needs them. No separate deployment artifact, no S3 bucket, no mounted volume.
- Atomic by default. Each migration file runs in a transaction. If the file fails partway through, nothing is applied. The version table is updated inside the same transaction.
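The atomic-by-default control flow can be sketched without a live database. `applyOne`, the `tx` interface, and the `schema_migrations` table name below are all illustrative assumptions, not gomigrate internals; in real code the interface would be a `*sql.Tx` and the insert would use placeholders rather than string concatenation.

```go
package main

import (
	"errors"
	"fmt"
)

// tx abstracts the slice of *sql.Tx that an atomic apply needs, so the
// control flow can be shown without a live Postgres.
type tx interface {
	Exec(query string) error
	Commit() error
	Rollback() error
}

// applyOne runs a single migration and records its version inside the
// same transaction, so a partial failure applies nothing.
func applyOne(t tx, version, body string) (err error) {
	defer func() {
		if err != nil {
			t.Rollback() // discard both the DDL and the version row
		}
	}()
	if err = t.Exec(body); err != nil {
		return fmt.Errorf("migration %s: %w", version, err)
	}
	// Illustrative only; real code would use a parameterized query.
	if err = t.Exec("INSERT INTO schema_migrations (version) VALUES (" + version + ")"); err != nil {
		return err
	}
	return t.Commit()
}

// fakeTx records what happened so the rollback behavior is observable.
type fakeTx struct {
	fail       bool
	committed  bool
	rolledBack bool
}

func (f *fakeTx) Exec(q string) error {
	if f.fail {
		return errors.New("syntax error")
	}
	return nil
}
func (f *fakeTx) Commit() error   { f.committed = true; return nil }
func (f *fakeTx) Rollback() error { f.rolledBack = true; return nil }

func main() {
	bad := &fakeTx{fail: true}
	err := applyOne(bad, "2", "ALTER TABLE users ADD COLUMN name TEXT")
	fmt.Println(err != nil, bad.rolledBack, bad.committed) // true true false
}
```

The key property is that the version row and the schema change commit or roll back together, so the version table can never claim a migration ran when it only half-ran.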
Your first migration
Create a migrations/ directory and embed it with Go's embed package:
```sql
-- migrations/0001_create_users.sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL UNIQUE,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```

```go
// db/migrations.go
package db

import "embed"

//go:embed migrations/*.sql
var Migrations embed.FS
```

```go
// main.go — run migrations on startup
m, err := gomigrate.New(db.Migrations, "migrations", connStr)
if err != nil {
	log.Fatal(err)
}
if err := m.Up(ctx); err != nil {
	log.Fatal(err)
}
```

CI and deploy integration
gomigrate can run as a separate pre-deploy step or inline at startup. For most teams, inline at startup is correct: the service won't start if the migration fails, which means a broken migration fails the deploy before any traffic reaches the new version.
For zero-downtime deploys where the old and new versions must coexist, see the expand/contract post.
The CLI
gomigrate ships a CLI for the cases where you want explicit control:
```shell
# Apply all pending migrations
gomigrate up --dsn $DATABASE_URL --dir ./migrations

# Roll back the last migration
gomigrate down --dsn $DATABASE_URL --dir ./migrations

# Show migration status
gomigrate status --dsn $DATABASE_URL --dir ./migrations

# Apply exactly one migration
gomigrate up --steps 1 --dsn $DATABASE_URL --dir ./migrations
```

No config file required. The DSN comes from an environment variable; the directory is the path to your SQL files. That's the entire surface area. Full documentation and source at github.com/taqnihub/gomigrate.