Is there a way to allocate cost to every pod on a node when the node cost is given without a breakdown by resource type, and pod resources are not in the same ratio as the node's resources?
Let's say a node has 8 CPUs and 32 GB RAM (a 1:4 ratio). If every pod uses the same CPU:MEM ratio, the math is simple: the node cost is split across all pods in proportion to their resource allocation.
How do you make a fair calculation if the pod resource ratios differ? The extreme case is still simple: say there is a pod with 8 CPU and 2 GB RAM; since no other pod can fit on the node, the whole node cost is allocated to that pod.
What if the running pod is 6 CPU and 16 GB RAM and another pod with 2 CPU and 16 GB RAM is squeezed in? How do you allocate the node cost to each? It can't be just node cost / # of pods, because intuitively beefier pods should receive a larger share of the node cost, as they prevent more small pods from fitting in. But how exactly do you calculate it? The "weight" of a pod on the CPU dimension is different from its weight on the MEM dimension.
Red Hat Insights Cost Management does this calculation: it works out exactly how much each pod is costing, no matter what ratios, node sizes, or discounts you may have.
It looks at which nodes are running in each cluster and how much each node costs (it reads the actual cost from your cloud bill, including any discounts you may have), then it looks at which node(s) each pod is running on, and calculates how much each pod on each node is costing.
It's free for Red Hat customers, both for cloud costing (AWS, Azure, GCP, OCI) and OpenShift costing. No support for EKS, AKS or other third-party Kubernetes, though.
So in your example, 6 CPU + 16 GiB is roughly 2x the size of 2 CPU + 16 GiB, so if that node costs, say, $6/hr, you'd expect roughly $4 to be allocated to the first pod and $2 to the second.
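The exact formula isn't spelled out above, so here is a minimal sketch of one common convention, averaging each pod's CPU and memory fractions of the node (an assumption on my part, not the product's documented method):

```python
def allocate_node_cost(node_cost, node_cpu, node_mem, pods):
    """Split node_cost across pods by averaging each pod's CPU and memory
    fractions of the node (an assumed convention, not any product's formula)."""
    weights = [((cpu / node_cpu) + (mem / node_mem)) / 2 for cpu, mem in pods]
    total = sum(weights)  # normalize so the shares always sum to node_cost
    return [node_cost * w / total for w in weights]

# 8 CPU / 32 GiB node at $6/hr, pods of (6 CPU, 16 GiB) and (2 CPU, 16 GiB)
print(allocate_node_cost(6.0, 8, 32, [(6, 16), (2, 16)]))
# -> [3.75, 2.25], close to the roughly 2:1 split described above
```

Because the weights are normalized, the shares always sum to exactly the node cost, however many pods are packed on.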
I thought about it, but then 2 pods each almost maxing out one dimension, for instance 7.5 CPU / 0.5 GB and 0.5 CPU / 31.5 GB, would together account for more than the node cost.
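For what it's worth, that overshoot shows up when each pod is charged its dominant (max) resource fraction of the node without normalization; a quick check of the numbers above (my own sketch):

```python
node_cpu, node_mem = 8, 32
pods = [(7.5, 0.5), (0.5, 31.5)]  # (CPU, GiB) pairs from the example above

# Charge each pod its dominant resource fraction of the node.
raw = [max(cpu / node_cpu, mem / node_mem) for cpu, mem in pods]
print(sum(raw))  # ~1.92, i.e. the shares claim 192% of the node cost

# Rescaling the shares so they sum to 1 recovers exactly the node cost.
shares = [r / sum(raw) for r in raw]
print(shares)  # ~[0.488, 0.512]
```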
The Cost and Usage Report (CUR) from AWS is just a fine-grained listing of all the resources in your account and their cost. It can be dumped out on different schedules (hourly, daily, monthly) and in different formats (CSV, Parquet).
It is pretty common to configure the CUR files to be dumped into an S3 bucket in your account and query them via Athena. Athena is billed per TB scanned ($5 last time I looked), so the cost depends on how often the data is being queried. The downside is that each query can take quite a while to execute, depending on data size.
The other common option is to ingest the CUR data into Redshift, which gives you better control and options for performance, manipulation, etc., but requires that you set up and manage Redshift.
It's hard to tell exactly what the Athena cost here would be, as it depends on the number of assets in the account and the frequency with which you query the CUR. However, you can issue quite a few Athena queries on CUR data for most AWS use cases without incurring too much cost. Unless you have a rapidly changing environment (e.g. hundreds of thousands of assets turning over daily) or just tons of standing assets, you should be safe to assume hundreds a day at the most? Probably much less for most use cases. This is assuming they are querying once and storing the results rather than querying in real time all the time, normal usage patterns, etc.
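For anyone trying this, here is a minimal sketch of kicking off one such query with boto3; the database, table, S3 output location, and column names are my assumptions based on the standard CUR-to-Athena integration, so adjust them to your own setup:

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Cost per AWS service since a given date; column names follow the standard
# CUR/Athena integration, and "cur_database"/"cur_table" are placeholders.
query = """
SELECT line_item_product_code,
       SUM(line_item_unblended_cost) AS cost
FROM cur_database.cur_table
WHERE line_item_usage_start_date >= date '2024-01-01'
GROUP BY line_item_product_code
ORDER BY cost DESC
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)
print(resp["QueryExecutionId"])  # then poll get_query_execution / get_query_results
```

Running a query like this once and storing the results, rather than re-scanning the CUR on every dashboard refresh, is what keeps the per-TB-scanned cost down.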
Is the cost shown only for charges incurred after the plugin is integrated, or is there a way to show retroactive costs, for example by comparing k8s object creation dates?
We, the Headlamp project, don't make any claims about being state-of-the-art, as that's hard to define. But we do think Headlamp ranks among the best for user experience, and we believe that being a 100% open-source project is a huge plus compared to some other projects in the space.
I think one area where we are rather different from other projects is that Headlamp is focused not only on end users but also on teams looking to build their own Kubernetes UX by leveraging the Headlamp plugin system. Our thinking is that this will foster broader community participation and make Headlamp the most viable project in the space.
I'm usually a fan of TUIs and think they can be incredibly powerful, but with k9s I couldn't get comfortable in the day I spent trying it out. I think the problem is that I'm not intimately familiar with Kubernetes, being more on the dev than the ops side, and all that power of the TUI comes at the cost of some discoverability, which I desperately need as I fuck around and find out.
I didn't try Headlamp, but I moved from Lens to AptKube and have been happy since. It might not be best in class, but it is snappy and doesn't require any cloud accounts.
There is no free version, and compared to something like JetBrains IDEs the price is a bit high for such a small tool. Then again, it is made by a single dev in a market without that many paying companies, so the higher price is understandable.