I still like serverless for small teams.
That is not because it is perfect. It is not. Debugging can be annoying, visibility is often worse than it should be, local development can drift from production, and the architecture can turn into a pile of invisible coupling if you are careless. I have run into all of that.
I still think it is a good default when the team is small and the product is still changing quickly.
The main reason is simple
Small teams usually do not lose because they picked the wrong abstraction layer. They lose because they spend too much time keeping the system alive.
Serverless helps with that. You do not have to manage servers, patch hosts, think about capacity in the same way, or spend as much time building a platform before you have a product. The early trade is usually worth it.
For me, that matters more than the abstract debate about whether serverless is elegant enough.
It buys you speed where small teams need it
The useful part of serverless is not that it is trendy. It is that it removes a lot of obvious work.
If you are a small team, you usually want:
- a short path from idea to deployed code
- fewer moving parts to own
- less infrastructure to explain to new people
- fewer reasons to keep a person awake for routine operations
Serverless is good at that when the workload fits. A lot of CRUD-style APIs, event-driven jobs, webhook handlers, scheduled tasks, and lightweight background processing fit it just fine.
That is why I keep using it. It gets the unglamorous stuff out of the way.
The problems are real, though
The main complaint I hear is usually some version of “it is harder to debug.”
That is fair. It often is.
Distributed systems are harder to reason about than a single process, and serverless makes it easy to spread logic across functions, queues, triggers, and managed services without noticing how much you have done it. When that happens, tracing a failure can become a scavenger hunt.
The fix is not to pretend that is not happening. The fix is to stay disciplined:
- keep the boundaries obvious
- log enough to answer the obvious questions
- use consistent correlation IDs
- avoid making one request fan out into six hidden systems unless you really need to
- be honest about what you can and cannot replay safely
If you do not do that, serverless can become a maze.
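The correlation-ID point is the one teams skip most often, so here is a minimal sketch of what it looks like in a function handler. This is illustrative, not a prescription: the handler shape, the `x-correlation-id` header name, and the field names in the log lines are all assumptions, and a real setup would likely use your platform's structured-logging support instead of hand-rolled JSON.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")


def handler(event, context=None):
    # Reuse the caller's correlation ID if one was propagated;
    # mint a fresh one otherwise, so every log line for this
    # request can be joined across functions and queues later.
    correlation_id = (
        event.get("headers", {}).get("x-correlation-id") or str(uuid.uuid4())
    )

    def log_event(message, **fields):
        # One JSON object per line, always carrying the same ID.
        log.info(json.dumps(
            {"correlation_id": correlation_id, "message": message, **fields}
        ))

    log_event("request received", path=event.get("path"))
    # ... business logic would go here ...
    log_event("request handled", status=200)

    # Echo the ID back so the next hop in the chain logs the same value.
    return {
        "statusCode": 200,
        "headers": {"x-correlation-id": correlation_id},
        "body": "ok",
    }
```

The detail that matters is the last line: the ID has to leave the function with the response (or the outgoing message), or the trail goes cold at the first hop.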
Hidden complexity is the real risk
The other problem is that serverless can look simpler than it is.
A diagram with three boxes and a trigger can hide a lot of real complexity: IAM permissions, deployment order, retries, idempotency, dead-letter handling, cold starts, config drift, and weird edge cases around third-party APIs. None of that is specific to serverless, but serverless makes it easier to accumulate quietly.
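Of that list, idempotency is the one that bites earliest, because managed queues and webhook providers retry by design. A sketch of the basic shape, under loud assumptions: the in-memory set stands in for a real store (in practice, something like a conditional write keyed on the event ID), and `handle_event` and the `"id"` field are hypothetical names for illustration.

```python
# Seen-event IDs. In a real system this would be durable storage
# shared across function instances, not process memory.
_processed: set[str] = set()


def handle_event(event: dict) -> str:
    """Make 'delivered twice' behave the same as 'delivered once'."""
    event_id = event["id"]
    if event_id in _processed:
        # A retry of something already handled: acknowledge and skip
        # the side effects, so the retry is harmless.
        return "duplicate"
    # ... side effects go here (charge the card, send the email) ...
    _processed.add(event_id)
    return "processed"
```

The point is not this particular code; it is that the retry path has to be designed on purpose, because the platform will exercise it whether you planned for it or not.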
That is why I do not think serverless is a substitute for engineering judgment. It is just a different shape of tradeoff.
If the team is small, focused, and still learning what the product needs, that tradeoff is usually fine. In some cases it is better than fine. It is the difference between shipping and getting stuck building infrastructure that nobody asked for.
Where I would still choose it
I like serverless most when the team needs to move quickly, the traffic pattern is uneven, and the system does not need a dedicated always-on service for everything.
I am less enthusiastic when the workload is highly stateful, has tight latency requirements, or is so interconnected that the debugging cost dominates everything else. At that point, the convenience starts to disappear.
But for small teams, I still think serverless is one of the better defaults.
It is not the cleanest architecture on paper. It is usually just a practical one.
