"So you think you cannot use Kubernetes" might be how you feel after reading about control planes, nodes, operators, and multi-cloud clusters. Maybe you've heard that K8s is overkill for small apps, that it eats resources and budget, or that you need a team of specialists just to run it. Some of that is true some of the time — but it doesn't mean you have to write off K8s entirely. This post looks at when Kubernetes is worth it, when it isn't, and how you can still get hands-on with K8s basics without betting the farm on a full production cluster.
When Kubernetes Shines
Kubernetes has become the default choice for container orchestration for good reasons. It fits certain problems very well.
Scalability and resilience. K8s can scale workloads up and down with demand and keeps applications available through self-healing, load balancing, and rolling updates. For systems that need to handle variable load or stay up under failure, that automation is a major benefit.
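To make that concrete, here is a minimal Deployment sketch showing replicas, a rolling-update strategy, and a liveness probe. The app name, image, and health endpoint are hypothetical placeholders, not a reference implementation:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                       # hypothetical app name
spec:
  replicas: 3                     # K8s keeps three pods running, rescheduling on failure
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # replace one pod at a time so the app stays available
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.2.3      # hypothetical image
          livenessProbe:                # self-healing: restart unresponsive containers
            httpGet:
              path: /healthz            # assumes the app exposes a health endpoint
              port: 8080
```

Scaling this workload is then a one-line change to `replicas` (or an autoscaler's job), rather than a manual provisioning exercise.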
Portability and multi-cloud. The same deployment model can run on-premises or on any major cloud (e.g. GKE, EKS, AKS). You get a consistent way to describe and run applications without locking into one vendor's proprietary runtime.
Automation that supports delivery. Deployments, scaling, and many operational tasks are expressed as declarative config and handled by the system. That fits well with CI/CD and reduces manual, error-prone steps.
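As a sketch of how that fits into CI/CD, a pipeline step can simply apply the declarative manifests and let Kubernetes reconcile the cluster toward them. This is a hypothetical GitHub Actions job; the `k8s/` directory and cluster authentication are assumptions and would vary per setup:

```yaml
# Hypothetical CI workflow: on every push to main, apply the desired state.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply declarative manifests
        # Cluster authentication setup is omitted here for brevity.
        run: kubectl apply -f k8s/   # K8s converges the cluster toward this state
```

The point is that "deploy" becomes "submit the desired state", and the system, not a runbook, handles the rollout.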
Efficient use of resources. Kubernetes is good at placing containers onto nodes and packing workloads efficiently, so you can get more out of your CPU and memory than with ad-hoc placement.
Ecosystem and maturity. The tooling, integrations, and community around K8s are large and growing. Monitoring, security, GitOps, and platform tooling are widely available and well understood.
So for large, complex, or high-demand applications — or teams that already need orchestration, multi-environment, and automation — K8s is often a sensible, strategic choice.
When Kubernetes Is a Poor Fit
For many use cases, the cost and complexity of K8s outweigh the benefits. It's worth being straightforward about that.
Complexity and learning curve. K8s introduces many concepts and abstractions. Operating and maintaining a cluster usually requires skilled (and often costly) engineering. For a small team or a simple product, that overhead can dominate.
Overkill for simple apps. For a single monolith, a small internal tool, or a business with modest traffic, a full K8s cluster can be unnecessary — like using a jackhammer to hang a picture. Simpler options (e.g. a single server, Docker Compose, or a small set of VMs) may be easier and cheaper.
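For comparison, the Docker Compose equivalent of a small monolith plus its database fits in one short file and runs on a single server with `docker compose up -d`. The app image and password here are hypothetical placeholders:

```yaml
# docker-compose.yml: a hypothetical monolith plus its database on one host.
services:
  app:
    image: example/app:1.0          # hypothetical image
    ports:
      - "80:8080"
    restart: unless-stopped         # basic restart-on-crash without an orchestrator
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me  # placeholder; use a real secret in practice
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

No control plane, no node pools, no cluster upgrades; for many small products that is the whole operational story.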
Resource cost at small scale. At low scale, the control plane and system components can consume a noticeable share of the hardware. That capacity might be better used by your application if you choose a lighter-weight setup.
Friction in development and testing. Developing and testing against "real" K8s usually means running a full cluster or a single-node approximation, such as minikube or k3s on your machine or in a VM. That extra machinery can slow down day-to-day development compared to simpler runtimes.
Security and operational surface. More moving parts mean more to harden and maintain. Doing K8s securely often implies more expertise and potentially dedicated focus on platform and security.
Expectation of stateless workloads. Unless you accept significant risk or invest in dedicated storage and data integrations, workloads on K8s should typically be stateless. Persisted data belongs in backing services — databases, object storage, shared filesystems — reachable over the network, and those services usually run better on managed or dedicated infrastructure than on the cluster itself. If your design assumes running stateful data stores on K8s and you lack the tooling or appetite for that complexity, K8s may be a poor fit, or you should plan for stateless apps on K8s talking to external data services.
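The stateless pattern above can be sketched as a Deployment whose only link to its data is a connection string injected from a Secret. The names and the Secret contents are hypothetical; the database itself lives outside the cluster:

```yaml
# Sketch: a stateless app on K8s whose state lives in an external, managed database.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                        # hypothetical app name
spec:
  replicas: 2                      # safe to scale: pods hold no persistent state
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:2.0   # hypothetical image
          env:
            - name: DATABASE_URL   # points at a service outside the cluster
              valueFrom:
                secretKeyRef:
                  name: api-secrets      # hypothetical Secret holding the DSN
                  key: database-url
```

Because no pod owns data, any replica can be killed, rescheduled, or rolled at will, which is exactly the behavior K8s is built around.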
So the decision to use Kubernetes should be deliberate: aligned with your application's shape, expected growth, and team capacity — not adopted just because "everyone uses K8s."
You Can Still Learn, Experiment, and Grow into K8s Without a Full Cluster
Here's the part that often gets missed: you don't need a production-grade, multi-node cluster to start learning and experimenting. If your goal is to understand pods, deployments, services, and basic operations, you have options that are much cheaper and simpler.
Lightweight distributions. Projects like k3s (a lightweight K8s distribution) and minikube (a tool for running a local single-node cluster) are designed to be small and easy to run. You can run a usable K8s API and scheduler on a single machine with minimal resource use, and the concepts you learn transfer directly to "big" Kubernetes.
A single VM in the cloud. You can run k3s or minikube (or another minimal K8s setup) on one VM at a cloud provider such as AWS. Pick a small instance type, install k3s or minikube, and you have a real cluster to experiment with — no need for multiple servers or a managed control plane on day one.
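As a sketch of how little setup that takes, here is hypothetical cloud-init user data for such a VM that installs single-node k3s on first boot via the official install script (k3s's documented one-liner):

```yaml
#cloud-config
# Hypothetical user data for a small cloud VM (e.g. an AWS EC2 instance):
# installs single-node k3s on first boot.
package_update: true
runcmd:
  - curl -sfL https://get.k3s.io | sh -
```

Once the instance is up, `sudo k3s kubectl get nodes` on the VM should list the single node, and you have a real API server to practice against.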
Pay only when you use it. If you stop the VM when you're not studying or experimenting, you pay for compute only for the hours it's running (attached storage usually keeps billing at a small rate). That makes it possible to learn and experiment with K8s basics at very low cost compared to leaving a multi-node or managed cluster running 24/7.
Progressive steps. You can start with that single VM and k3s or minikube, get comfortable with workloads, networking, and maybe a simple GitOps or deployment flow. When and if you outgrow it, you'll have a clearer picture of whether a larger cluster or a managed service makes sense — and you'll have learned and experimented without a big upfront commitment.
So even if you decide that "we cannot use Kubernetes" in production right now, you don't have to avoid it entirely. You can still learn and experiment with the fundamentals in a low-cost, low-risk way and then make a better-informed decision later.
The Bottom Line
Using Kubernetes is a strategic choice: it depends on your application architecture, growth expectations, and team skills. For many workloads, K8s is the right fit; for others, simpler alternatives are better. The important thing is to decide based on your context, not on hype or fear.
And if you're learning and experimenting — or just curious — you don't have to dive straight into a traditional multi-server cluster. Options like k3s or minikube on a single cloud VM that you stop when you're done can give you a real, inexpensive environment to learn and experiment with K8s. When you're ready to scale up or adopt K8s for real, you'll be better prepared to do it wisely.