How to Avoid Unexpected AWS Lambda Costs

Lambda is pay-per-use, so it should be cheap — until it is not. A recursive invocation bug, an oversized memory setting, or a missing timeout can turn a $2/month function into a $200 surprise. This post covers the most common causes of unexpected AWS Lambda costs and what to do about each one.

The AWS CLI examples in this guide were run on Ubuntu under WSL2 on Windows, but they work the same on any system with the AWS CLI installed.

How Lambda Pricing Works

Lambda charges for two things:

  1. Requests — $0.20 per 1 million invocations
  2. Duration — billed per millisecond based on allocated memory

The free tier covers 1 million requests and 400,000 GB-seconds per month. Most small workloads fit within this. Costs spike when duration or invocation count goes higher than expected — and that usually happens because of one of the issues below.
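The arithmetic is worth sketching once, because the duration component dominates for most functions. This is a rough model using the public us-east-1 on-demand rates above (prices vary slightly by region and exclude the free tier):

```python
# Rough Lambda cost model: requests + GB-seconds of duration.
# Rates are the public us-east-1 on-demand prices; check your region.
REQUEST_PRICE = 0.20 / 1_000_000   # $ per request
GB_SECOND_PRICE = 0.0000166667     # $ per GB-second of duration

def monthly_cost(invocations, avg_ms, memory_mb):
    """Estimate monthly cost before any free-tier discount."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# 5M invocations/month, 120 ms average, 512 MB allocated:
print(round(monthly_cost(5_000_000, 120, 512), 2))  # -> 6.0
```

Note that the request charge here is only $1 of the $6; the rest is duration, which is why the memory and timeout settings below matter so much.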

Common AWS Lambda Cost Traps

1. Recursive or Infinite Loop Invocations

This is the most expensive mistake. It happens when a Lambda function triggers itself — directly or through a chain of services. Classic example: a function writes to an S3 bucket that has an event notification triggering the same function.

AWS added recursive loop detection in 2023 for loops involving SQS, SNS, and Lambda itself. When Lambda detects a loop (after about 16 recursive invocations), it stops the next invocation; if the function has a dead-letter queue or on-failure destination configured, the event is sent there. Detection is on by default for supported triggers, but it is a backstop, not a substitute for avoiding the loop in the first place.

For S3-triggered Lambdas, the safest pattern is to write output to a different bucket or prefix than the one triggering the function. If you must write to the same bucket, use a prefix filter on the trigger so the output does not match.
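A cheap in-code safety net is to make the handler refuse its own output. The sketch below assumes a hypothetical function that reads keys under an `incoming/` prefix and writes under `processed/` in the same bucket; both prefixes are illustrative:

```python
# Guard against S3 -> Lambda recursion when reading and writing the
# same bucket. The prefixes are example values, not AWS defaults.
TRIGGER_PREFIX = "incoming/"
OUTPUT_PREFIX = "processed/"

def should_process(key):
    """Process only keys under the trigger prefix, never our own output."""
    return key.startswith(TRIGGER_PREFIX) and not key.startswith(OUTPUT_PREFIX)

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        if not should_process(key):
            continue  # our own output, or outside the trigger prefix
        # ... transform the object, then write it under OUTPUT_PREFIX ...
        processed.append(key)
    return processed
```

This guard costs nothing and survives a misconfigured trigger filter, but the prefix filter on the S3 event notification should still be your first line of defense.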

2. Timeout Set Too High

The default Lambda timeout is 3 seconds, but many developers bump it to the maximum (15 minutes) “just in case.” The problem: if something goes wrong — a downstream API hangs, a database connection stalls — the function sits there burning duration charges until it times out.

Set the timeout to just above the expected execution time. If your function normally finishes in 8 seconds, a 15-second timeout is reasonable. A 900-second timeout is not. You can check actual durations in CloudWatch metrics and size the timeout based on real data.
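That sizing rule is easy to mechanize: take real duration samples (from CloudWatch metrics, logs, or a load test) and set the timeout a little above the worst case. The 1.5x headroom factor below is a convention of this sketch, not an AWS recommendation:

```python
import math

def suggest_timeout_seconds(duration_ms_samples, headroom=1.5):
    """Suggest a Lambda timeout: worst observed duration plus headroom,
    clamped to Lambda's configurable range of 1 to 900 seconds."""
    worst_s = max(duration_ms_samples) / 1000
    return min(900, max(1, math.ceil(worst_s * headroom)))

# Durations mostly around 8 s, worst case 9.4 s -> 15 s, not 900 s:
print(suggest_timeout_seconds([7800, 8100, 8400, 9400]))  # -> 15
```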

3. Over-Provisioned Memory

Lambda bills per GB-second. A function configured with 3,008 MB that only uses 200 MB is paying 15x more per millisecond than necessary. But there is a tradeoff — more memory also means more CPU. Sometimes bumping memory from 128 MB to 256 MB makes a function run 3x faster, which actually costs less overall.

The right approach is to test different memory settings and find the lowest cost. The open-source AWS Lambda Power Tuning tool (by Alex Casalboni) does this automatically — it runs your function at multiple memory configurations and shows you the best price-performance point. You can find it at alexcasalboni/aws-lambda-power-tuning on GitHub.
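The counterintuitive tradeoff is easy to show with numbers. Using the same duration rate as in the pricing section, compare a hypothetical CPU-bound function at 128 MB taking 900 ms against the same function at 256 MB finishing in 300 ms (the 3x speedup is an assumed measurement, the kind Power Tuning would surface):

```python
GB_SECOND_PRICE = 0.0000166667  # us-east-1 on-demand duration rate

def duration_cost(invocations, avg_ms, memory_mb):
    """Duration charge only (requests cost the same either way)."""
    return invocations * (avg_ms / 1000) * (memory_mb / 1024) * GB_SECOND_PRICE

slow = duration_cost(1_000_000, 900, 128)  # 128 MB, 900 ms each
fast = duration_cost(1_000_000, 300, 256)  # 256 MB, 300 ms each
print(round(slow, 2), round(fast, 2))      # -> 1.88 1.25
```

Double the memory, two-thirds the cost. The effect only holds while the workload is CPU-bound; past that point more memory just costs more, which is exactly the curve Power Tuning plots for you.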

4. CloudWatch Logs Retention

Every Lambda invocation writes logs to CloudWatch. By default, these logs are kept forever. Over months, log storage adds up — especially for high-traffic functions or functions that log verbose output.

Set a retention policy on every Lambda log group. For most functions, 7–30 days is enough. You can do this from the console or with the CLI:

aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 14

If you have dozens of Lambda functions with bloated logs, you can bulk-delete old CloudWatch logs with a Python script.
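Setting retention one group at a time also gets tedious. A minimal boto3 sketch for enforcing a policy everywhere (assumes AWS credentials are configured; the 14-day value and the region are choices, not defaults):

```python
def groups_without_retention(groups):
    """Pick log groups whose retention is 'Never expire' (the field
    retentionInDays is simply absent for those)."""
    return [g["logGroupName"] for g in groups if "retentionInDays" not in g]

def set_lambda_log_retention(days=14, region="us-east-1"):
    """Apply a retention policy to every /aws/lambda/ log group
    that currently keeps logs forever."""
    import boto3  # imported here so the pure helper above has no dependency
    logs = boto3.client("logs", region_name=region)
    paginator = logs.get_paginator("describe_log_groups")
    for page in paginator.paginate(logGroupNamePrefix="/aws/lambda/"):
        for name in groups_without_retention(page["logGroups"]):
            logs.put_retention_policy(logGroupName=name, retentionInDays=days)
```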

5. SQS Polling on Empty Queues

When Lambda is configured with an SQS event source, it continuously polls the queue. If the queue is empty most of the time, those polling requests still count as SQS API calls — and SQS charges $0.40 per million requests. For rarely-used queues, this background polling can cost more than the actual processing.

To reduce this, disable the event source mapping when the queue is not in use (polling stops entirely), and keep batch sizes large so fewer invocations are needed; note that MaximumBatchingWindowInSeconds mainly controls how often your function is invoked, not how often Lambda polls SQS. Also, make sure your SQS visibility timeout is set correctly — AWS recommends at least six times the function timeout — because if messages become visible again before Lambda finishes processing, they get reprocessed, doubling invocations.
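The idle cost can be sketched with a rough model. The poller count and long-poll interval below are assumptions about Lambda's internal poller fleet (historically around five pollers doing 20-second long polls), so treat the result as an order-of-magnitude estimate only:

```python
SQS_REQUEST_PRICE = 0.40 / 1_000_000  # $ per SQS request (standard queue)

def idle_polling_cost(pollers=5, wait_seconds=20, days=30):
    """Monthly SQS charge for an event source mapping polling an empty
    queue, assuming one long-poll ReceiveMessage per poller per window."""
    requests = pollers * (days * 24 * 3600) / wait_seconds
    return requests * SQS_REQUEST_PRICE

print(round(idle_polling_cost(), 2))  # -> 0.26
```

About a quarter per month sounds trivial for one queue, but it applies to every mapping in every account, including dev and test queues nobody has touched in months.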

6. Provisioned Concurrency Left On

Provisioned Concurrency keeps a set number of Lambda instances warm to avoid cold starts. It charges you for every second those instances are kept warm — even if no requests come in. If you enabled it for a demo or a load test and forgot to turn it off, you are paying 24/7 for idle capacity.
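The idle charge adds up faster than people expect. A sketch using the us-east-1 provisioned concurrency rate (about $0.0000041667 per GB-second at the time of writing; verify the current price for your region):

```python
PC_PRICE = 0.0000041667  # $ per GB-second of provisioned concurrency

def provisioned_cost(instances, memory_mb, hours):
    """Cost of keeping instances warm, whether or not they serve traffic."""
    return instances * (memory_mb / 1024) * hours * 3600 * PC_PRICE

# 10 warm instances at 1024 MB, forgotten for a 30-day month:
print(round(provisioned_cost(10, 1024, 30 * 24), 2))  # -> 108.0
```

Over $100 a month for a forgotten load-test setting, before a single request is served.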

Check for provisioned concurrency across all functions:

aws lambda list-provisioned-concurrency-configs \
  --function-name my-function

To remove it:

aws lambda delete-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier my-alias

7. No Concurrency Limit

Lambda can scale to 1,000 concurrent executions by default (higher in some regions). If a spike hits — a sudden burst of S3 events, an SQS backlog, or a DDoS on an API Gateway — Lambda scales out to handle it, and you pay for every concurrent execution.

Set a reserved concurrency limit on functions to cap how many run at once:

aws lambda put-function-concurrency \
  --function-name my-function \
  --reserved-concurrent-executions 50

This limits the function to 50 concurrent executions. Excess synchronous invocations are throttled (HTTP 429) instead of running; asynchronous invocations are retried by Lambda rather than dropped. Either way, the cap protects both your costs and downstream services from being overwhelmed.

Set Up a Billing Alarm

A billing alarm catches cost spikes early. This creates a CloudWatch alarm that emails you when estimated charges exceed $10:

aws cloudwatch put-metric-alarm \
  --alarm-name "billing-alarm-10usd" \
  --metric-name EstimatedCharges \
  --namespace AWS/Billing \
  --statistic Maximum \
  --period 21600 \
  --threshold 10 \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=Currency,Value=USD \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
  • --threshold 10 — triggers when charges exceed $10 (adjust to your budget)
  • --period 21600 — checks every 6 hours
  • --alarm-actions — the SNS topic ARN that sends the email notification

You need to enable billing alerts first in the AWS Billing console under Billing Preferences. The alarm must be created in us-east-1 — billing metrics are only available in that region.

Monitor Lambda Costs with Cost Explorer

AWS Cost Explorer lets you filter costs by service and see exactly which Lambda functions are driving spend. Open Cost Explorer, filter by “AWS Lambda”, and group by “API Operation” to see the breakdown between invocations, duration, and provisioned concurrency.

For function-level cost visibility, enable cost allocation tags. Tag your Lambda functions with a project or team name, activate those tags in the Billing console, and Cost Explorer will show costs per tag within 24 hours.

Quick Checklist

| Action | Why It Matters |
| --- | --- |
| Set timeouts to realistic values | Prevents hung functions from burning duration charges |
| Right-size memory with Power Tuning | Finds the cheapest memory/speed combination |
| Set CloudWatch log retention to 7-30 days | Stops log storage from growing forever |
| Add reserved concurrency limits | Caps runaway scaling during traffic spikes |
| Check for provisioned concurrency left on | Avoids paying for idle warm instances |
| Avoid recursive invocation patterns | Prevents infinite loops that rack up millions of calls |
| Set SQS visibility timeout correctly | Prevents duplicate processing and double charges |
| Create a billing alarm | Alerts you before a small issue becomes a big bill |
| Tag functions and use Cost Explorer | Shows exactly which functions cost what |

Conclusion

Most unexpected Lambda costs come from a handful of misconfigurations — high timeouts, missing concurrency limits, logs retained forever, or recursive triggers. Setting realistic limits and a billing alarm catches problems early, before they become expensive.

If your Lambda functions process data from queues, make sure to also read Setting Up Cross-Account SNS to SQS Subscription in AWS to structure your event-driven architecture correctly. For monitoring Lambda performance over time, consider setting up Datadog monitoring for your Lambda functions.