How to make a $200k engineer sweat 2 cents of memory: Kubernetes

The truth about Kubernetes is that it is a very heavy complexity layer that burns resources like a drunken sailor and has all kinds of limits that force application-layer changes. It is not the run-anything-anywhere future we were promised. Just try fitting a single Windows server into a Kubernetes environment. You can't.

A new tool from a new consulting firm aims to gain insight into container health and readiness. The problem is that there are many different ways to define both, and wrong definitions produce wrong behavior. So this tool invites a team of people, all making north of $100/hr, to deep-dive into these definitions at the level of individual processes, and it also encourages per-process performance tuning to allocate just enough, but not too much, memory in an age when memory is asymptoting to free.
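For concreteness, here is a minimal sketch of what that per-container tuning looks like in a pod spec: readiness and liveness probes plus memory requests and limits. The workload name, image, endpoint paths, and every number are illustrative assumptions, not values from any real system.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-api                    # hypothetical workload
spec:
  containers:
    - name: app
      image: example.com/billing-api:1.0   # placeholder image
      ports:
        - containerPort: 8080
      # Someone has to decide, per process, what "alive" and "ready" mean.
      livenessProbe:
        httpGet:
          path: /healthz               # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:
        httpGet:
          path: /ready                 # assumed readiness endpoint
          port: 8080
        periodSeconds: 5
      # ...and hand-tune memory: set the limit too low and the container
      # gets OOM-killed, too high and the scheduler strands node capacity.
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
```

Every one of those numbers is a decision a human has to make, per container, and revisit every time the workload changes.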

When humans are manually managing per-process memory at the container-orchestration layer, the fractal pattern has looped back around to low-level programming.

Docker and containers stood on the verge of making everything stupid easy. Kubernetes came to the rescue of consultants and public cloud revenue streams.

A Kubernetes node needs 32 GB of memory, which means a roughly $600/mo c5.4xlarge instance.
3x that to make a cluster
2x the result for dev and prod
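At those quoted figures, the back-of-the-envelope math is $600/mo × 3 nodes × 2 environments ≈ $3,600/mo, before a single line of application code is running.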