How to Set Up Datadog Monitoring for Containerized AWS Lambda Functions


When you deploy Lambda functions as Docker container images, adding Datadog monitoring works differently than the standard Lambda layer approach. You need to bake the Datadog Lambda extension and library directly into your Dockerfile. This guide shows you how to set that up so you get traces, metrics, and logs from your containerized Lambda functions in Datadog.

How Datadog Monitoring Works with Lambda Containers

With zip-based Lambda deployments, you’d add Datadog as a Lambda layer. With container images, layers aren’t an option — so instead, you copy the Datadog Lambda extension into the image during the Docker build. Datadog provides a public ECR image for this.

The other piece is the handler wrapper. Instead of pointing Lambda directly at your handler function, you point it at Datadog’s handler (datadog_lambda.handler.handler). Datadog’s handler wraps your actual function, captures traces and metrics, then calls your code. You tell it which function to call using the DD_LAMBDA_HANDLER environment variable.
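Conceptually, the wrapper works like this hypothetical sketch — this is not Datadog's actual implementation, just an illustration of the pattern: read DD_LAMBDA_HANDLER, import the function it names, and call it after setting up instrumentation.

```python
# Hypothetical sketch of the wrapper pattern -- not Datadog's actual code.
import importlib
import os

def handler(event, context):
    # DD_LAMBDA_HANDLER holds "module.function", e.g. "lambda_function.lambda_handler"
    module_name, func_name = os.environ["DD_LAMBDA_HANDLER"].rsplit(".", 1)
    user_handler = getattr(importlib.import_module(module_name), func_name)
    # ... the real wrapper starts a trace span and captures metrics here ...
    return user_handler(event, context)
```

Because the wrapper resolves your handler at runtime from an environment variable, you can instrument any function without changing its code.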

Prerequisites

  • An AWS account with permissions to manage Lambda functions and ECR
  • A Datadog account with an API key
  • Docker installed locally
  • An existing Lambda function deployed as a container image (or a new one you’re building)

Step 1: Update Your Dockerfile

Here’s a minimal Dockerfile that shows the Datadog-specific parts. Your actual Dockerfile will have more project-specific steps, but these are the lines that matter for monitoring:

FROM public.ecr.aws/lambda/python:3.11

# Copy the Datadog Lambda extension into the image
COPY --from=public.ecr.aws/datadog/lambda-extension:latest /opt/. /opt/

WORKDIR /var/task

# Install your Python dependencies (include datadog-lambda)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy your function code
COPY src/*.py .

# Datadog environment variables
ENV DD_SERVICE="my-lambda-function"
ENV DD_ENV="production"
ENV DD_VERSION="1.0.0"
ENV DD_TRACE_ENABLED="true"
ENV DD_LOGS_INJECTION="true"
ENV DD_LOG_LEVEL="INFO"
ENV DD_SITE="datadoghq.com"
ENV DD_LAMBDA_HANDLER="lambda_function.lambda_handler"

# Point Lambda at Datadog's wrapper handler, not your handler directly
CMD ["datadog_lambda.handler.handler"]

The key lines explained:

  • COPY --from=public.ecr.aws/datadog/lambda-extension:latest /opt/. /opt/ — pulls the Datadog Lambda extension from Datadog’s public ECR image and copies it into your image. This is a multi-stage copy, so it doesn’t add the full base image to your build.
  • DD_LAMBDA_HANDLER="lambda_function.lambda_handler" — tells Datadog where your actual handler function is. Replace this with your real handler path.
  • CMD ["datadog_lambda.handler.handler"] — sets the Lambda entry point to Datadog’s wrapper. Datadog intercepts the invocation, captures monitoring data, then calls your handler via DD_LAMBDA_HANDLER.
  • DD_SITE — the Datadog site to send data to. Use datadoghq.com for US1, datadoghq.eu for EU, or your specific site URL.
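For reference, here is a minimal handler file matching the DD_LAMBDA_HANDLER value in the Dockerfile above — a lambda_function.py exposing lambda_handler. The file and function names are from the example; substitute your own.

```python
# lambda_function.py -- the function that DD_LAMBDA_HANDLER points at.
# Datadog's wrapper calls this after setting up tracing; no Datadog-specific
# code is required in the handler itself.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```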

Step 2: Add datadog-lambda to Your Requirements

Add the datadog-lambda package to your requirements.txt:

datadog-lambda
boto3

The datadog-lambda package includes the Python tracing library and the handler wrapper. Add your other project dependencies as normal.
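With the package installed, your handler can also submit custom metrics through the extension. A minimal sketch — the metric name, tags, and event shape are illustrative; lambda_metric is the submission helper from the datadog-lambda package:

```python
# Sketch of emitting a custom metric; metric name and tags are illustrative.
def count_records(event):
    # Pure helper: number of records in an SQS/SNS/Kinesis-style event
    return len(event.get("Records", []))

def lambda_handler(event, context):
    # Imported inside the function so the sketch runs outside Lambda too;
    # in real code a top-level import is fine.
    from datadog_lambda.metric import lambda_metric

    n = count_records(event)
    # Submitted as a distribution metric via the Datadog extension
    lambda_metric("myapp.records_processed", n, tags=["env:production"])
    return {"processed": n}
```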

Step 3: Handle the Datadog API Key Securely

Datadog needs your API key to send data. You have a few options, and they’re not all equal.

Don’t hardcode the API key in the Dockerfile. If you set ENV DD_API_KEY="your-key" directly, it gets baked into the image and is visible to anyone who can pull or inspect it.

Better options:

  • Lambda environment variable — set DD_API_KEY in the Lambda function configuration (not the Dockerfile). This keeps it out of the image.
  • AWS Secrets Manager — store the key in Secrets Manager and set DD_API_KEY_SECRET_ARN as the environment variable instead. The Datadog extension retrieves the key at runtime. This is the most secure option.
  • Docker build argument — pass it as --build-arg DD_API_KEY=your-key during the build. This avoids hardcoding it in the Dockerfile, but the value can still be recovered from the image metadata (docker history) unless it's confined to an earlier stage of a multi-stage build. Avoid this approach for secrets.

The recommended approach is the Lambda environment variable or Secrets Manager. If you’re already using Secrets Manager for other credentials, check out How to Access AWS Secrets Manager from Another Account for cross-account setups.
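If you go the Secrets Manager route, the setup can be scripted with boto3. A sketch, assuming placeholder names (myapp/datadog-api-key, my-lambda-function, us-east-1) that you'd replace with your own:

```python
# Sketch: store the Datadog API key in Secrets Manager and point the
# function at it via DD_API_KEY_SECRET_ARN. All names are placeholders.
def dd_secret_env(secret_arn):
    """Environment variables block for update_function_configuration."""
    return {"Variables": {"DD_API_KEY_SECRET_ARN": secret_arn}}

if __name__ == "__main__":
    import boto3

    secrets = boto3.client("secretsmanager", region_name="us-east-1")
    secret = secrets.create_secret(
        Name="myapp/datadog-api-key",
        SecretString="your-datadog-api-key",
    )

    lambda_client = boto3.client("lambda", region_name="us-east-1")
    lambda_client.update_function_configuration(
        FunctionName="my-lambda-function",
        Environment=dd_secret_env(secret["ARN"]),
    )
```

The function's execution role needs secretsmanager:GetSecretValue on that secret so the extension can read it at runtime.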

Step 4: Build and Push the Docker Image

Build the image for the linux/amd64 platform (required for Lambda):

docker buildx build --platform linux/amd64 --provenance=false -t your-ecr-repo:latest .

  • --platform linux/amd64 — this example targets an x86_64 Lambda function. If you build on an ARM Mac (M1/M2/M3) without this flag, Docker produces an arm64 image that won't start on an x86_64 function. (Lambda also supports arm64; match the platform to your function's architecture.)
  • --provenance=false — disables BuildKit's provenance attestation. With provenance enabled, buildx pushes an OCI image index rather than a single-platform manifest, which Lambda can't load.

Then tag and push to ECR:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

docker tag your-ecr-repo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/your-ecr-repo:latest

docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/your-ecr-repo:latest

Replace 123456789012 with your AWS account ID and us-east-1 with your region.

Step 5: Deploy the Lambda Function

In the Lambda console, create a new function or update an existing one. Choose “Container image” as the deployment type and point it to the ECR image you just pushed.

If you’re setting the Datadog API key as a Lambda environment variable (instead of Secrets Manager), add it here under Configuration > Environment variables:

DD_API_KEY = your-datadog-api-key

Make sure the Lambda function’s IAM role has the permissions your function needs. The Datadog extension itself sends data outbound over HTTPS and needs no extra IAM permissions, with one exception: if you use DD_API_KEY_SECRET_ARN, the execution role must be allowed secretsmanager:GetSecretValue on that secret (plus kms:Decrypt if the secret is encrypted with a customer-managed KMS key).

Verifying the Setup

Invoke the Lambda function with a test event. Then check Datadog:

  1. Go to Infrastructure > Serverless — your function should appear here within a few minutes.
  2. Check APM > Traces — you should see traces for each invocation if DD_TRACE_ENABLED is set to true.
  3. Check Logs — if DD_LOGS_INJECTION is enabled, your function’s log output should show up with trace correlation IDs.

If nothing appears, check the Lambda function’s CloudWatch logs. The Datadog extension logs its startup and any connection issues there. Common problems are wrong API key, incorrect DD_SITE value, or the function not having outbound internet access.
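The test invocation can be scripted with boto3 as well. A sketch, reusing the placeholder function name and region from the earlier steps:

```python
# Sketch: invoke the function with a test event and decode the response.
# Function name and region are placeholders from the earlier examples.
import json

def decode_payload(payload_bytes):
    """Decode the JSON payload returned by lambda.invoke."""
    return json.loads(payload_bytes.decode("utf-8"))

if __name__ == "__main__":
    import boto3

    client = boto3.client("lambda", region_name="us-east-1")
    resp = client.invoke(
        FunctionName="my-lambda-function",
        Payload=json.dumps({"test": True}),
    )
    print(resp["StatusCode"], decode_payload(resp["Payload"].read()))
```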

Datadog Environment Variables Reference

Here’s a quick reference for the environment variables used in this setup:

Variable            Purpose                                                    Example value
DD_SERVICE          Service name shown in Datadog APM                          my-lambda-function
DD_ENV              Environment tag (dev, staging, production)                 production
DD_VERSION          Version tag for tracking deployments                       1.0.0
DD_TRACE_ENABLED    Enable APM tracing                                         true
DD_LOGS_INJECTION   Inject trace IDs into logs for correlation                 true
DD_LOG_LEVEL        Log level for the Datadog extension itself                 INFO
DD_SITE             Datadog intake site                                        datadoghq.com
DD_LAMBDA_HANDLER   Path to your actual handler function                       lambda_function.lambda_handler
DD_API_KEY          Datadog API key (set via Lambda config, not Dockerfile)    set at runtime

A Note on Lambda Base Images

This guide uses the public.ecr.aws/lambda/python:3.11 base image, which runs on Amazon Linux 2 and uses yum for system packages. If you’re using Python 3.12 or later, the base image switches to Amazon Linux 2023, which uses dnf/microdnf instead. The Datadog setup itself stays the same — just the system package manager changes. See Solve ‘yum: command not found’ in AWS Lambda Python 3.12 base image for more on that.

Conclusion

That’s the full setup for adding Datadog monitoring to a containerized Lambda function. The two core pieces are the Datadog extension (copied from their public ECR image) and the handler wrapper that intercepts invocations to capture traces and metrics.

If you’re also running Datadog on EC2 instances, check out How to Install Datadog Agent with Apache2 on EC2 Ubuntu 22.04. For parsing custom log formats in Datadog, see How to Parse Custom Logs in Datadog Using Grok Rules.