NAT Gateway is eating your AWS bill
Posted in AWS, Infrastructure, Startups
By Dušan Dželebdžić

NAT Gateway costs $0.045 an hour just for existing. That's about $33 a month per gateway before a single byte passes through it, plus another $0.045 per gigabyte for every byte that does. Two of them across two availability zones, like every reference architecture suggests, and you're already at $66 a month before your app does anything.
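The arithmetic is simple enough to sketch. A minimal back-of-the-envelope calculator, assuming us-east-1 pricing and AWS's usual 730-hour billing month (the function name and traffic figures are illustrative, not from any AWS tool):

```python
HOURS_PER_MONTH = 730    # AWS's standard billing approximation

NAT_HOURLY = 0.045       # per-gateway hourly charge, us-east-1
NAT_PER_GB = 0.045       # per-GB data processing charge

def monthly_nat_cost(gateways: int, gb_processed: float) -> float:
    """Fixed hourly charge plus data processing, before any data transfer fees."""
    return gateways * NAT_HOURLY * HOURS_PER_MONTH + gb_processed * NAT_PER_GB

# Two gateways, zero traffic: roughly $65.70/month before the app does anything.
print(round(monthly_nat_cost(2, 0), 2))
```

Note what's missing from that formula: it's the floor, not the ceiling. Data transfer charges stack on top.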
This is the part of AWS that quietly eats bootstrapped startups. You read a few "production-ready" tutorials, you copy a sensible-looking diagram, and before long you have private subnets, ECS on Fargate, RDS in two availability zones, an Application Load Balancer, NAT Gateways, and CloudWatch shoveling logs at every layer. It feels solid. It feels like the grown-up version of what you used to do on a $5 VPS.
Then the first real bill lands. Twelve users, almost no traffic, barely any revenue, and AWS already wants several hundred dollars a month for the privilege.
Most of that bill isn't your app. It's the plumbing around your app, and NAT Gateway is the biggest piece of it.
The architecture everyone copies
The pattern the tutorials hand you usually looks like this: public subnets for the load balancer, private subnets for the application containers, private subnets for the database, outbound internet routed through a NAT Gateway.
From a security angle it's fine. The application servers aren't directly exposed. The database is isolated. The diagram looks clean enough to put in a deck.
The issue is who that pattern was actually written for. AWS reference architectures are aimed at companies with compliance teams, funded startups with platform engineers, and internal systems where six-figure infra bills don't show up in anyone's personal life. They aren't written for one person trying to ship version one without burning rent money.
Copying the enterprise pattern at the prototype stage isn't being responsible. It's just expensive.
Private subnets aren't free internet
Here's the part the tutorials don't dwell on: a container in a private subnet can't reach the internet on its own. Every time it pulls a Docker image, hits Stripe, calls SendGrid, fetches an OAuth token, ships logs to a third party, or talks to S3 over the public endpoint, that traffic has to go somewhere. AWS routes it through the NAT Gateway, and now you're paying $0.045 a gigabyte for the privilege of letting your container reach Stripe.
Most of this is invisible while you're building. Your app might be quiet, but the infrastructure around it is chatty all day. Logs, metrics, package updates, container pulls, health checks, background workers, third-party API calls, all of it flowing through the same expensive pipe.
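To make that concrete, here's a rough sketch of what a quiet app's background traffic can cost in NAT processing alone. Every volume below is an assumption for illustration; your actual numbers will differ, and none of this comes from an AWS calculator:

```python
NAT_PER_GB = 0.045  # NAT Gateway data processing, us-east-1 (check your region)

# Illustrative daily outbound volumes for an app with almost no users.
daily_gb = {
    "log shipping":     1.5,
    "metrics":          0.5,
    "container pulls":  0.8,
    "package updates":  0.2,
    "third-party APIs": 0.3,
}

monthly_gb = sum(daily_gb.values()) * 30
print(f"{monthly_gb:.0f} GB/month -> ${monthly_gb * NAT_PER_GB:.2f} in NAT processing")
```

A few dollars a month of processing fees sounds harmless until you remember it sits on top of the ~$66 fixed charge, and that every category grows as you add services.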
Why this hits bootstrapped people the hardest
NAT Gateway isn't bad. The problem is what it does to the early-stage P&L.
It introduces a fixed monthly infrastructure tax that exists before the product earns a single dollar. If you raised funding, that's annoying but invisible. If you're a solo founder paying out of personal savings while validating an idea, it's the difference between "I can run this for a year" and "I have to pull the plug in three months."
The shape of the bill ends up bizarre. Tiny database, minimal compute, almost no users, and yet the AWS invoice looks like you're running a medium-sized SaaS. I've seen setups where NAT Gateway alone cost more than the entire application compute it was supposedly serving.
That's a strange place to be when you're still trying to find out whether anyone wants the product.
The cross-AZ surprise
This is the part that catches people who thought they were being careful.
Common shape: ECS tasks in one AZ, NAT Gateway in another, usually because somebody set up the VPC with a single NAT Gateway "for cost reasons" and didn't realize the tasks would land elsewhere. Now every outbound request from those tasks pays cross-AZ data transfer on top of NAT processing.
AWS doesn't pop up a banner saying "congratulations, your tiny side project is now paying inter-AZ networking fees." You discover it later, staring at Cost Explorer and trying to work out why "outbound HTTPS to api.stripe.com" turned into a four-figure annual line item.
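The math behind that line item, as a hedged sketch: assuming roughly $0.01/GB in each direction for inter-AZ transfer (so ~$0.02/GB round numbers) stacked on the $0.045/GB NAT processing charge. The 50 GB/day figure is invented for illustration:

```python
NAT_PER_GB = 0.045      # NAT data processing, us-east-1
CROSS_AZ_PER_GB = 0.02  # ~$0.01/GB each direction for inter-AZ transfer

def annual_egress_cost(gb_per_day: float) -> float:
    """Outbound traffic from tasks in a different AZ than their NAT Gateway."""
    return gb_per_day * 365 * (NAT_PER_GB + CROSS_AZ_PER_GB)

# ~50 GB/day of outbound API traffic quietly becomes a four-figure annual cost.
print(round(annual_egress_cost(50)))
```

The fix is cheap once you see it: either run a NAT Gateway per AZ you actually use, or pin tasks to the gateway's AZ while you're small.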
Most early products don't need private subnets yet
This is the part that makes people uncomfortable.
For a lot of early-stage products, the right answer is genuinely a single EC2 instance. Or App Runner. Or a small ECS service in a public subnet with a security group doing the actual exposure control. Or a Hetzner box that costs less per month than the NAT Gateway alone. The internet will not collapse because your MVP isn't wired up like a Fortune 500 banking platform.
What early infrastructure should optimize for is simplicity, cost, and how fast you can change things. Not how impressive the diagram looks on LinkedIn.
There are also middle paths nobody talks about. Gateway VPC endpoints for S3 and DynamoDB skip NAT entirely for that traffic, and they carry no charge. Interface endpoints handle a lot of other AWS service calls for a small hourly fee. Egress-only internet gateways give you outbound IPv6 without NAT pricing. Each of these is a small piece of work and a real line off your bill.
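The S3 case is the easiest win to quantify. A sketch of the before/after, assuming us-east-1 NAT pricing and an invented 200 GB/month of S3 traffic (backups, asset uploads, log archival):

```python
NAT_PER_GB = 0.045      # NAT data processing, us-east-1
s3_gb_per_month = 200   # illustrative: backups, asset uploads, log archival

# Gateway endpoints for S3 carry no hourly or per-GB charge, so S3 traffic
# routed through one drops off the NAT bill entirely.
via_nat = s3_gb_per_month * NAT_PER_GB
via_gateway_endpoint = 0.0

print(f"S3 via NAT: ${via_nat:.2f}/mo, via gateway endpoint: ${via_gateway_endpoint:.2f}/mo")
```

It's one route-table entry, and it also keeps S3 traffic off the public internet, which is the kind of change security reviewers like anyway.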
AWS isn't the villain
NAT Gateway exists for good reasons. At scale it simplifies networking, centralizes outbound access, and gives security teams a chokepoint they can audit. In a mature business the cost is rounding noise.
The mismatch is what kills you. Applying enterprise architecture to a product that's still searching for product-market fit is the same mistake as hiring a CFO before you have revenue. It looks like seriousness. It's actually just overhead.
Takeaway
There's a difference between architecture that scales technically and architecture that scales economically. Most early-stage AWS bills are paying for the first one before the business needs either.
Good engineering at the prototype stage isn't picking the most sophisticated infrastructure on offer. It's picking the smallest thing that does the job, and only adding the next layer once the business actually asks for it. Sometimes the most senior decision in the room is admitting you don't need the fancy setup yet.
If your AWS bill is way bigger than your product, send me the details and I'll take a look.