Only get the AWS cost data you need

The first FinOps Toolkit tool, cost_and_usage.py, was built for simplicity: one command, one CSV, two columns (date and value). It pulls AWS cost data with minimal permissions and zero dashboards, proving that FinOps clarity starts with small, local, human-readable data.


The context

The starting point for the FinOps Toolkit had to be something simple.
I wanted a tool that could prove a point: that FinOps can be hands-on, lightweight, and human-readable.

The context for cost_and_usage.py was precisely that — I needed a way to get AWS cost data out of the cloud without friction.

No need to learn the Cost Explorer API.
No need to navigate the console and download messy CSVs.
Just a small command that gives you what matters most: a date and a number.

The idea was to create the smallest possible working representation of FinOps — a tool that could extract data with minimal permissions and output it in the simplest possible format.
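In IAM terms, "minimal permissions" comes down to a single read-only action. The policy below is a sketch, assuming Cost Explorer's GetCostAndUsage is the only API call the script needs:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "ce:GetCostAndUsage",
          "Resource": "*"
        }
      ]
    }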

A file with two columns — date and value.
That’s it.
No layers of formatting, no dashboards, no noise.
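For illustration only (these figures are invented, not real billing data), the file is nothing more than lines like:

    2025-01-01,42.17
    2025-01-02,39.05
    2025-01-03,40.88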

From there, the simplicity becomes power: you can graph it in Excel, combine it with another script, or pipe it directly into the next tool.
It’s how context becomes concrete — one small, clear dataset at a time.
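As a sketch of that chaining, assuming each row is a date string followed by a numeric cost (and skipping any header or malformed line), a follow-on script could keep a running total straight from the pipe:

    import csv
    import sys

    # Read date,value rows from stdin (for example, piped from cost_and_usage.py)
    # and print a cumulative total per date.
    total = 0.0
    for row in csv.reader(sys.stdin):
        if len(row) != 2:
            continue
        date, value = row
        try:
            amount = float(value)
        except ValueError:
            continue  # skip a header or malformed line
        total += amount
        print(f"{date},{total:.2f}")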


Why it exists

Most people start with AWS Cost Explorer. It’s powerful, but also overcomplicated.
You click, filter, wait, export, clean, and repeat. It’s a workflow that breaks focus.

I wanted the opposite — a way to stay in the flow.
One command. One output.
From there, you decide what to do with it.

So cost_and_usage.py became the first FinOps Toolkit script.
It pulls cost and usage data directly via the AWS CLI, groups it by your chosen dimension, and outputs clean CSV lines to standard out.
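To give a feel for what that involves, here is a minimal sketch of the same idea rather than the toolkit's actual source; the date range is illustrative, and it assumes the AWS CLI is installed with credentials that allow ce:GetCostAndUsage:

    import csv
    import json
    import subprocess
    import sys

    # Ask the AWS CLI for daily unblended cost and flatten the JSON reply
    # into date,value lines on standard output. Adding a --group-by argument
    # (e.g. Type=DIMENSION,Key=SERVICE) would split each day's number by
    # that dimension instead of reporting the daily total.
    cmd = [
        "aws", "ce", "get-cost-and-usage",
        "--time-period", "Start=2024-01-01,End=2024-02-01",
        "--granularity", "DAILY",
        "--metrics", "UnblendedCost",
    ]
    response = json.loads(
        subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    )

    writer = csv.writer(sys.stdout)
    for period in response["ResultsByTime"]:
        writer.writerow([
            period["TimePeriod"]["Start"],
            period["Total"]["UnblendedCost"]["Amount"],
        ])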

It’s small, transparent, and practical — the way command-line tools should be.