Every AWS Lambda function in Python starts with the same two parameters: event and context. The event carries the data that triggered the function, and context gives you runtime information about the execution itself. This guide covers what each one contains, how their structure changes depending on the trigger source, and how to use them in practice.
## The Lambda Handler Signature
Every Python Lambda function uses this handler signature:
```python
def lambda_handler(event, context):
    # your code here
    return {"statusCode": 200, "body": "OK"}
```
Lambda will raise an error if your handler only accepts one argument — both event and context are required even if you don’t use them.
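You can see the two-argument requirement by calling a handler directly, the way the runtime does. A quick sketch (`bad_handler` is a deliberately broken example):

```python
def bad_handler(event):  # missing the context parameter
    return "ok"

# The Python runtime always invokes handler(event, context),
# so a one-argument handler fails with a TypeError:
try:
    bad_handler({}, None)
except TypeError as err:
    print(f"Invocation failed: {err}")
```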
## The Event Parameter
The event parameter is a dictionary containing the data that triggered the function. Its structure is different for every trigger source. You’ll reference the event to extract request data, file info, message payloads, or whatever the source sends.
### API Gateway Event
When Lambda sits behind API Gateway, the event contains the full HTTP request — method, path, headers, query string, and body.
```python
import json

def lambda_handler(event, context):
    http_method = event["httpMethod"]
    path = event["path"]
    query_params = event.get("queryStringParameters") or {}
    headers = event.get("headers") or {}
    body = json.loads(event["body"]) if event.get("body") else {}

    return {
        "statusCode": 200,
        "body": json.dumps({
            "method": http_method,
            "path": path,
            "query": query_params
        })
    }
```
Use .get() for optional fields like queryStringParameters and body — they can be null when no query string or body is sent.
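The distinction matters because a request with no query string delivers `queryStringParameters` as an explicit `None`, not a missing key, so a plain `.get()` default never kicks in. A minimal sketch with a simulated event:

```python
# Simulated API Gateway event for a GET with no query string
event = {"httpMethod": "GET", "path": "/items",
         "queryStringParameters": None, "body": None}

# event.get("queryStringParameters", {}) would still return None here,
# because the key exists; "or {}" handles the None value itself
query_params = event.get("queryStringParameters") or {}
print(query_params)  # {}
```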
### S3 Event
When S3 triggers Lambda (for example, on file upload), the event contains bucket and object details inside a Records array.
```python
from urllib.parse import unquote_plus

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys are URL-encoded in the event (spaces arrive as "+")
        key = unquote_plus(record["s3"]["object"]["key"])
        size = record["s3"]["object"]["size"]
        print(f"New file: s3://{bucket}/{key} ({size} bytes)")
```
Always loop through Records instead of hardcoding [0] — S3 can batch multiple events into a single invocation.
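Another gotcha: object keys arrive URL-encoded in the event, so names with spaces or special characters must be decoded before you pass them to boto3. A quick sketch using `urllib.parse.unquote_plus`:

```python
from urllib.parse import unquote_plus

# The event encodes spaces as "+" and special characters as %XX escapes
raw_key = "reports/my+file%281%29.txt"
key = unquote_plus(raw_key)
print(key)  # reports/my file(1).txt
```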
### SQS Event
SQS delivers messages in batches. Each record has a body (the message content) and a messageId.
```python
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        message_id = record["messageId"]
        body = json.loads(record["body"])
        print(f"Processing message {message_id}: {body}")
```
If you’re running into issues where SQS messages get processed more than once, check out How to Fix SQS Visibility Timeout in AWS Lambda for the fix.
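A related pattern worth knowing: if `ReportBatchItemFailures` is enabled on the event source mapping, the handler can tell SQS to retry only the messages that failed instead of the whole batch. A sketch, where `process` stands in for your per-message logic:

```python
import json

def process(message):
    # Placeholder business logic: fail on messages flagged as bad
    if message.get("bad"):
        raise ValueError("cannot handle this message")

def lambda_handler(event, context):
    failures = []
    for record in event["Records"]:
        try:
            process(json.loads(record["body"]))
        except Exception:
            # Report this message ID back so only it is retried
            failures.append({"itemIdentifier": record["messageId"]})
    # Messages not listed here are deleted from the queue as successes
    return {"batchItemFailures": failures}
```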
### SNS Event
SNS events also arrive in a Records array. The actual message is nested under Sns.Message.
```python
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        # "Subject" can be present but None, so "or" beats a .get() default
        subject = record["Sns"].get("Subject") or "No subject"
        message = json.loads(record["Sns"]["Message"])
        topic_arn = record["Sns"]["TopicArn"]
        print(f"SNS [{subject}] from {topic_arn}: {message}")
```
### CloudWatch Scheduled Event (EventBridge)
Scheduled triggers send a simple event with the rule ARN and scheduled time. Useful for cron-like tasks.
```python
def lambda_handler(event, context):
    rule_arn = event["resources"][0]
    scheduled_time = event["time"]
    print(f"Triggered by rule: {rule_arn} at {scheduled_time}")
    # Run your scheduled task here
```
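If the task needs the trigger time as a real timestamp rather than a string, it can be parsed from its ISO 8601 form (the `"2024-05-01T12:00:00Z"` date here is an illustrative value):

```python
from datetime import datetime, timezone

def parse_schedule_time(event):
    # EventBridge sends "time" as an ISO 8601 UTC string ending in "Z"
    naive = datetime.strptime(event["time"], "%Y-%m-%dT%H:%M:%SZ")
    return naive.replace(tzinfo=timezone.utc)

print(parse_schedule_time({"time": "2024-05-01T12:00:00Z"}))
# 2024-05-01 12:00:00+00:00
```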
## Quick Reference: Event Structure by Source
| Trigger Source | Key Fields | Notes |
|---|---|---|
| API Gateway | `httpMethod`, `path`, `body`, `headers`, `queryStringParameters` | REST API format; HTTP API uses a different v2 payload |
| S3 | `Records[].s3.bucket.name`, `Records[].s3.object.key` | Can batch multiple object events |
| SQS | `Records[].body`, `Records[].messageId` | Messages arrive in configurable batch sizes |
| SNS | `Records[].Sns.Message`, `Records[].Sns.Subject` | Message is a JSON string; parse it |
| EventBridge | `source`, `detail-type`, `detail`, `resources` | Also used for CloudWatch scheduled rules |
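The REST vs HTTP API distinction in the first row trips people up: an HTTP API (payload format 2.0) nests the method under `requestContext.http` and sets a top-level `version` field. A sketch of a helper that accepts both shapes:

```python
def get_http_method(event):
    # HTTP APIs (payload v2.0) mark themselves with a top-level "version"
    if event.get("version") == "2.0":
        return event["requestContext"]["http"]["method"]
    # REST APIs (v1.0) keep the method at the top level
    return event["httpMethod"]

rest_event = {"httpMethod": "GET", "path": "/items"}
http_event = {"version": "2.0",
              "requestContext": {"http": {"method": "GET", "path": "/items"}}}
print(get_http_method(rest_event), get_http_method(http_event))  # GET GET
```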
## The Context Parameter
The context object gives you runtime information about the current invocation. You don’t parse it like a dictionary — it’s an object with properties and one method.
### Context Properties
| Property | What It Returns |
|---|---|
| `function_name` | Name of the Lambda function |
| `function_version` | Version of the function being executed |
| `invoked_function_arn` | Full ARN used to invoke the function (includes alias or version if specified) |
| `memory_limit_in_mb` | Memory allocated to the function |
| `aws_request_id` | Unique ID for this invocation; useful for tracing in CloudWatch Logs |
| `log_group_name` | CloudWatch Log group for this function |
| `log_stream_name` | CloudWatch Log stream for this invocation |
### The `get_remaining_time_in_millis()` Method
This is the most practical part of context. It returns how many milliseconds are left before the function times out. You’ll want this when processing batches of records — it lets you stop gracefully before Lambda kills the execution.
```python
def lambda_handler(event, context):
    for record in event["Records"]:
        # Stop if less than 10 seconds remain
        if context.get_remaining_time_in_millis() < 10000:
            print("Running low on time, stopping early")
            break
        process_record(record)  # your per-record logic, defined elsewhere
    return {"statusCode": 200, "body": "Done"}
```
This pattern is especially useful when your Lambda processes SQS messages or paginates through API responses. For a real-world example, see How to Avoid AWS Lambda Timeout When Processing HubSpot Records.
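To unit-test this pattern locally, you can stand in for the real context object with a small fake that implements the same method. `FakeContext` here is a hypothetical test helper, not part of any AWS SDK:

```python
import time

class FakeContext:
    """Minimal stand-in for the Lambda context object in local tests."""

    def __init__(self, timeout_seconds=30):
        self._deadline = time.monotonic() + timeout_seconds
        self.aws_request_id = "local-test-request"

    def get_remaining_time_in_millis(self):
        # Mirrors the real method: milliseconds until the simulated timeout
        return max(0, int((self._deadline - time.monotonic()) * 1000))

ctx = FakeContext(timeout_seconds=5)
print(ctx.get_remaining_time_in_millis())  # just under 5000
```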
### Logging Context for Debugging
When debugging a production issue, log the context properties so you can trace the exact invocation in CloudWatch.
```python
def lambda_handler(event, context):
    print(f"Request ID: {context.aws_request_id}")
    print(f"Function: {context.function_name} v{context.function_version}")
    print(f"ARN: {context.invoked_function_arn}")
    print(f"Memory: {context.memory_limit_in_mb} MB")
    print(f"Log Group: {context.log_group_name}")
    print(f"Log Stream: {context.log_stream_name}")
    print(f"Time Remaining: {context.get_remaining_time_in_millis()} ms")

    # Your function logic here
    return {"statusCode": 200, "body": "OK"}
```
The aws_request_id is the most useful one here — you can search for it directly in CloudWatch to find logs from a specific invocation.
## Putting It Together: A Practical Example
Here’s a function that processes S3 uploads, uses context for timeout protection, and logs the request ID for traceability.
```python
import boto3
from urllib.parse import unquote_plus

s3_client = boto3.client("s3")

def lambda_handler(event, context):
    print(f"Request ID: {context.aws_request_id}")
    print(f"Limits: {context.memory_limit_in_mb} MB, "
          f"{context.get_remaining_time_in_millis()} ms remaining")

    for record in event["Records"]:
        if context.get_remaining_time_in_millis() < 5000:
            print("Less than 5 seconds left, stopping")
            break

        bucket = record["s3"]["bucket"]["name"]
        # Decode the URL-encoded object key before passing it to boto3
        key = unquote_plus(record["s3"]["object"]["key"])

        response = s3_client.get_object(Bucket=bucket, Key=key)
        content = response["Body"].read().decode("utf-8")
        print(f"Processed s3://{bucket}/{key} ({len(content)} chars)")

    return {"statusCode": 200, "body": "Processing complete"}
```
## Conclusion
The event parameter gives you the trigger data and its structure depends entirely on the source — API Gateway, S3, SQS, SNS, or EventBridge each send a different payload. The context object gives you runtime metadata, with get_remaining_time_in_millis() being the most practically useful for avoiding timeouts. Together, they’re everything your handler needs to know about why it was invoked and the environment it’s running in.
If you’re building Lambda functions that need external Python packages, check out How to Build and Deploy Python Libraries for AWS Lambda Layers. For organizing your Lambda code as projects grow, see How to Structure Your Python Projects for AWS Lambda, APIs, and CLI Tools.


