SaaS architecture for early-stage products

Early-stage SaaS doesn't need microservices. It doesn't need Kubernetes. It doesn't need a service mesh, an event bus, or a multi-region deploy. It needs a boring monolith and a database you understand.

Be aggressively boring

The two most expensive mistakes we see in pre-revenue SaaS are premature distribution (microservices before you have one user) and premature abstraction (interfaces and ports for things that have one implementation). Both are products of engineers building for an imagined future instead of an actual one.

The architecture you want at $0 ARR is the same one that ran Basecamp to $50M ARR: a single application, a single database, deployed in one place.

The default stack

Our default for early-stage SaaS: a monolith in your team's strongest language (often TypeScript or Go), Postgres for data, Redis if and only if you need a queue or cache, S3-compatible storage for files, and a single VM or container per environment. That covers 90% of products through product-market fit.

What that doesn't include, on purpose: a separate service for auth, a separate service for billing, a separate service for notifications. They're modules in the monolith until they have a reason to be services.
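To make "modules, not services" concrete, here is a minimal sketch in TypeScript. The module names and functions (`billing`, `notifications`, `createInvoice`) are hypothetical, and the modules are inlined as object literals for a single runnable file; in a real app each would be its own file behind a plain import.

```typescript
// "Modules, not services": billing and notifications are plain modules.
// Callers compose them with direct function calls — no network hop,
// no serialization, no retry logic, and one stack trace when it breaks.

// billing module (in a real app: ./billing.ts)
const billing = {
  createInvoice(tenantId: string, cents: number) {
    // Persistence to Postgres is elided; return the shape callers need.
    return { tenantId, cents, status: "open" as const };
  },
};

// notifications module (in a real app: ./notifications.ts)
const notifications = {
  invoiceCreated(invoice: { tenantId: string }): string {
    return `notify tenant ${invoice.tenantId}`;
  },
};

// A caller wires them together with ordinary imports and calls.
const invoice = billing.createInvoice("t_123", 4900);
const message = notifications.invoiceCreated(invoice);
```

If billing ever earns service status, the import swaps for an RPC client and the call sites barely change.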

Multi-tenancy without microservices

Multi-tenancy in early SaaS usually means one of three patterns:

  • Shared schema, tenant_id column. Simplest. Use Postgres row-level security to enforce isolation — we wrote about that.
  • Schema per tenant. Useful for compliance-heavy verticals. More operational overhead.
  • Database per tenant. Almost never the right choice early. Reserve it for enterprise contracts that demand it.

Pick the simplest one that satisfies your real constraints. The "real" is doing a lot of work in that sentence — most compliance constraints people invoke aren't actual requirements, they're assumptions.
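For the shared-schema pattern, the enforcement point should be the database, not application code. Here is a minimal sketch, assuming Postgres row-level security; the `withTenant` helper and the `invoices` table are our own invention for illustration, not a real library API. Every query runs inside a transaction that pins the tenant, and the policy filters rows by it.

```typescript
// Shared-schema tenancy with Postgres row-level security (sketch).
// One-time setup, run as a migration: the policy makes Postgres itself
// filter every statement on invoices by the current tenant.
const setupSql = `
  ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
  CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
`;

type Query = { text: string; values: unknown[] };

// Wrap a parameterized query in a transaction that pins the tenant.
// set_config(..., true) scopes the setting to this transaction only,
// so pooled connections can't leak a tenant across requests.
function withTenant(tenantId: string, query: Query): Query[] {
  return [
    { text: "BEGIN", values: [] },
    { text: "SELECT set_config('app.tenant_id', $1, true)", values: [tenantId] },
    query,
    { text: "COMMIT", values: [] },
  ];
}

const stmts = withTenant("00000000-0000-0000-0000-000000000001", {
  text: "SELECT * FROM invoices WHERE status = $1",
  values: ["open"],
});
// Note what's absent: the app never writes "WHERE tenant_id = ...".
```

The payoff is that a forgotten `WHERE tenant_id` clause in application code fails closed instead of leaking another tenant's rows.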

When to break the boring rule

Boring scales further than people think. The signals that it's time to break it:

  • A specific component is genuinely scaling differently from the rest (e.g. a video transcoding service inside a CRUD app).
  • Different parts of the team are blocking each other on deploys.
  • Compliance requires hard isolation between subsystems.

Until then: one app, one database, one deploy. Boring is a feature. The deploy pipeline post covers what that looks like in practice.
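When one of those signals does arrive — say, transcoding starts scaling differently — the cheapest extraction path is a seam that already exists. A minimal sketch, assuming TypeScript; `JobQueue`, `InProcessQueue`, and the job shape are invented names. This is deliberately the one interface with a single implementation that earns its keep, because the second implementation is what "breaking the boring rule" means.

```typescript
// A seam for later extraction: the queue hides behind a tiny interface.
// Day one it's an array in the monolith's process; when a component
// genuinely needs its own process, the same interface gets backed by
// Redis (LPUSH/BRPOP) and the drain loop moves to a worker — with no
// changes at the call sites.

type Job = { kind: string; payload: unknown };

interface JobQueue {
  enqueue(job: Job): void;
  drain(handler: (job: Job) => void): void;
}

class InProcessQueue implements JobQueue {
  private jobs: Job[] = [];
  enqueue(job: Job): void {
    this.jobs.push(job);
  }
  drain(handler: (job: Job) => void): void {
    // Process jobs in FIFO order until the queue is empty.
    while (this.jobs.length > 0) handler(this.jobs.shift()!);
  }
}

const queue: JobQueue = new InProcessQueue();
queue.enqueue({ kind: "transcode", payload: { videoId: "v_1" } });

const processed: Job[] = [];
queue.drain((job) => processed.push(job));
```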

★ ★ ★
