If you have a Lambda function in one AWS account that needs to upload files to an S3 bucket in a different account, you need to set up permissions on both sides. This guide covers the exact IAM policies, S3 bucket policy, and Python Lambda code for a working cross-account S3 upload — no sts:AssumeRole needed for this pattern.
We’ll call them Account A (where the Lambda runs) and Account B (where the S3 bucket lives).
How It Works
There are two common ways to do cross-account S3 access: using sts:AssumeRole to assume a role in the target account, or using a resource-based policy on the S3 bucket to grant access directly to the Lambda execution role. This guide uses the resource-based approach — it’s simpler when all you need is PutObject access.
The setup requires two things:
- Account B — S3 bucket policy that allows Account A’s Lambda role to upload objects
- Account A — IAM policy on the Lambda execution role that allows writing to Account B’s bucket
Both sides need to grant permission. If either one is missing, you’ll get an Access Denied error. For the AssumeRole approach, check out How to Set Up Cross-Account Access in AWS with AssumeRole.
Prerequisites
- Two AWS accounts (Account A and Account B)
- An S3 bucket in Account B (e.g., my-target-bucket)
- A Lambda function in Account A with a Python runtime
- The Lambda execution role ARN from Account A (you’ll need this for the bucket policy)
Step 1: Get the Lambda Execution Role ARN
Before configuring anything, grab the ARN of your Lambda function’s execution role. Go to the Lambda console in Account A, open your function, and check the Configuration > Permissions tab. The execution role ARN looks like this:
arn:aws:iam::123456789012:role/my-lambda-execution-role
Copy this — you’ll need it for the bucket policy in Account B.
Step 2: Configure the S3 Bucket Policy in Account B
In Account B, go to S3 > your bucket > Permissions tab > Bucket policy, and add this policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountUpload",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/my-lambda-execution-role"
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-target-bucket/*"
    }
  ]
}
Replace the role ARN with your actual Lambda execution role from Step 1, and my-target-bucket with your bucket name.
A few things to note about this policy:
- Principal — points to the Lambda execution role in Account A, not the Lambda function ARN or the account root
- s3:PutObject — allows uploading objects
- s3:PutObjectAcl — allows setting object ACLs (needed if you want Account B to own the uploaded objects, more on that below)
- Resource — uses /* to allow uploads to any key prefix. If you want to restrict to a specific folder, use something like arn:aws:s3:::my-target-bucket/uploads/*
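To cut down on ARN copy-paste mistakes, the bucket policy can also be generated in code. A minimal sketch in Python (the build_upload_policy helper is illustrative, not an AWS API; the output is a JSON string you can paste into the bucket policy editor or pass to put_bucket_policy):

```python
import json

def build_upload_policy(role_arn, bucket, prefix=""):
    """Build the cross-account upload bucket policy as a JSON string.

    `prefix` optionally restricts uploads to a key prefix, e.g. "uploads/".
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountUpload",
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["s3:PutObject", "s3:PutObjectAcl"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(build_upload_policy(
    "arn:aws:iam::123456789012:role/my-lambda-execution-role",
    "my-target-bucket",
    "uploads/",
))
```

Generating the document this way keeps the role ARN and bucket name in one place, so a typo shows up once instead of silently breaking the Principal.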
Object Ownership (Important)
Newer S3 buckets default to the Bucket owner enforced object ownership setting (the default for all buckets created since April 2023). This means the bucket owner (Account B) automatically owns all uploaded objects regardless of who uploaded them. If your bucket uses this setting, you don’t need to worry about ACLs.
But if your bucket has Object writer as the ownership setting, the uploader (Account A) owns the objects. Account B can’t read or manage them unless the uploader sets the ACL to bucket-owner-full-control. Check your bucket’s ownership setting under Permissions > Object Ownership.
Step 3: Add IAM Policy to the Lambda Execution Role in Account A
Now in Account A, go to IAM > Roles > find your Lambda execution role, and attach this policy (inline or managed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUploadToAccountBBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::my-target-bucket/*"
    }
  ]
}
This grants the Lambda function permission to upload objects to Account B’s bucket. The bucket policy on the other side is what actually allows cross-account access — both policies work together.
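Since a misaligned action or resource between the two documents is the usual cause of Access Denied, a quick sanity check is to confirm the same action/resource pair is allowed on both sides. A deliberately simplified sketch (the allows helper is hypothetical and ignores Deny statements, conditions, and most of real IAM evaluation; it only catches obvious mismatches like a wrong bucket name):

```python
from fnmatch import fnmatch

def allows(policy, action, resource):
    """Simplified check: does any Allow statement cover this action and resource?"""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        resources = stmt["Resource"] if isinstance(stmt["Resource"], list) else [stmt["Resource"]]
        if any(fnmatch(action, pat) for pat in actions) and \
           any(fnmatch(resource, pat) for pat in resources):
            return True
    return False

iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUploadToAccountBBucket",
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::my-target-bucket/*",
    }],
}

# Check the exact call the Lambda will make
print(allows(iam_policy, "s3:PutObject",
             "arn:aws:s3:::my-target-bucket/uploads/data.json"))  # True
```

Run the same check against the bucket policy document too; if either side returns False for the key your Lambda writes, that side is the one to fix.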
Also verify that the Lambda execution role has the standard trust policy for Lambda:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
This trust policy should already be there if you created the role through the Lambda console. It just means Lambda can use this role — it has nothing to do with cross-account access.
Step 4: Lambda Function Code
Here’s a Python Lambda function that uploads a file to Account B’s S3 bucket:
import json
import boto3
import os
from datetime import datetime

def lambda_handler(event, context):
    bucket_name = os.environ.get("TARGET_BUCKET", "my-target-bucket")

    # Generate a unique key with timestamp
    timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    object_key = f"uploads/data-{timestamp}.json"

    # Data to upload
    payload = {
        "source": "account-a-lambda",
        "timestamp": timestamp,
        "data": event.get("data", {})
    }

    s3_client = boto3.client("s3")

    try:
        s3_client.put_object(
            Bucket=bucket_name,
            Key=object_key,
            Body=json.dumps(payload),
            ContentType="application/json"
        )
        print(f"Uploaded {object_key} to {bucket_name}")
        return {
            "statusCode": 200,
            "body": json.dumps({
                "message": "Upload successful",
                "bucket": bucket_name,
                "key": object_key
            })
        }
    except Exception as e:
        print(f"Upload failed: {e}")
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }
A few things about this code:
- The bucket name comes from an environment variable — set TARGET_BUCKET in your Lambda configuration so you don’t hardcode it
- No need to specify credentials or call sts:AssumeRole — the Lambda execution role already has the permissions, and boto3.client("s3") uses the role automatically
- ContentType is optional but good practice, especially if Account B serves these files or processes them downstream
If you’re new to Lambda’s event and context parameters, Understanding Lambda’s Event and Context Parameters explains how they work.
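The timestamped key scheme in the handler is easy to factor out and unit-test on its own. A small sketch (make_object_key is an illustrative name, not part of the handler above; it accepts an optional fixed time so tests are deterministic):

```python
from datetime import datetime, timezone

def make_object_key(prefix="uploads", now=None):
    """Build a timestamped S3 key like uploads/data-20240101-120000.json."""
    now = now or datetime.now(timezone.utc)
    return f"{prefix}/data-{now.strftime('%Y%m%d-%H%M%S')}.json"

print(make_object_key(now=datetime(2024, 1, 1, 12, 0, 0)))
# uploads/data-20240101-120000.json
```

Passing the clock in as a parameter also sidesteps timezone surprises: Lambda runs in UTC, but a helper pinned to timezone.utc behaves the same locally.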
If You Need bucket-owner-full-control
If the bucket’s Object Ownership is set to Object writer (not the default on newer buckets), add the ACL parameter so Account B can manage the uploaded objects:
s3_client.put_object(
    Bucket=bucket_name,
    Key=object_key,
    Body=json.dumps(payload),
    ContentType="application/json",
    ACL="bucket-owner-full-control"
)
Without this, Account B wouldn’t be able to read or delete the objects that Account A uploaded — one of those things you discover during testing when you can upload fine but the other team says they can’t see the files.
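One way to keep the ACL decision in a single place is to wrap put_object in a helper that adds the ACL only when asked. A sketch (upload_json and want_bucket_owner_acl are illustrative names; the S3 client is passed in so it can be stubbed in tests without AWS credentials):

```python
import json

def upload_json(s3_client, bucket, key, payload, want_bucket_owner_acl=False):
    """Upload a dict as JSON, optionally granting the bucket owner full control."""
    kwargs = {
        "Bucket": bucket,
        "Key": key,
        "Body": json.dumps(payload),
        "ContentType": "application/json",
    }
    if want_bucket_owner_acl:
        # Only needed when the bucket's Object Ownership is "Object writer"
        kwargs["ACL"] = "bucket-owner-full-control"
    s3_client.put_object(**kwargs)
    return kwargs
```

In the handler you would call it as upload_json(boto3.client("s3"), bucket_name, object_key, payload), flipping the flag per target bucket; in a unit test, a stub object with a put_object method is enough to verify which kwargs were sent.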
Step 5: Test the Setup
Test from Lambda Console
Create a test event in the Lambda console:
{
  "data": {
    "test": true,
    "message": "cross-account upload test"
  }
}
Run it and check the response. A 200 status means it worked. Then go to Account B’s S3 console and verify the file exists under the uploads/ prefix.
Test with AWS CLI First
If you want to test the IAM and bucket policies before writing Lambda code, you can use the AWS CLI with the Lambda role’s credentials. If you have a way to assume the Lambda role locally:
aws s3 cp test.txt s3://my-target-bucket/uploads/test.txt
If this works, you know the policies are correct and the Lambda code just needs to do the same thing with boto3.
Troubleshooting
Access Denied on PutObject
This is the most common error. Check these in order:
- Bucket policy in Account B — does the Principal match your Lambda execution role ARN exactly? A typo in the ARN will fail silently with Access Denied
- IAM policy in Account A — does the Resource ARN match the bucket name? Make sure it ends with /*
- Bucket name in code — is it the correct bucket? Sounds obvious, but wrong bucket name gives the same Access Denied error
- S3 Block Public Access — this setting doesn’t block cross-account access via IAM policies. But if you’re using ACLs and the bucket has “Block public access” enabled for ACLs, the PutObjectAcl call might fail
Files Upload But Account B Can’t Read Them
This happens when the bucket’s Object Ownership is set to Object writer. The objects are owned by Account A’s role, and Account B doesn’t have access by default. Fix this by either:
- Changing Object Ownership to Bucket owner enforced (recommended for new setups)
- Adding ACL="bucket-owner-full-control" to your put_object call
Lambda Times Out
If the Lambda function times out during the S3 upload, check whether the Lambda has internet access. If it’s running in a VPC, it needs a NAT Gateway or VPC endpoint for S3 to reach the S3 API. Lambda functions outside a VPC have internet access by default.
Uploading Larger Files
The put_object method works for files up to 5 GB. For larger files, use multipart upload. But for most Lambda use cases (JSON payloads, CSV exports, log files), put_object is fine.
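For context on when multipart matters: S3 requires every part except the last to be at least 5 MiB and caps an upload at 10,000 parts, which bounds the part size for very large files. A sketch of the arithmetic (plan_multipart is an illustrative helper; in practice boto3's upload_file handles this for you via TransferConfig):

```python
import math

MIN_PART = 5 * 1024 * 1024   # 5 MiB minimum part size (all parts except the last)
MAX_PARTS = 10_000           # S3 multipart upload part-count limit

def plan_multipart(total_bytes, part_size=MIN_PART):
    """Pick a part size and count that satisfy S3's multipart limits."""
    # Grow the part size if the file would otherwise need more than 10,000 parts
    part_size = max(part_size, math.ceil(total_bytes / MAX_PARTS))
    parts = max(1, math.ceil(total_bytes / part_size))
    return part_size, parts

print(plan_multipart(6 * 1024**3))  # 6 GiB file -> (5242880, 1229)
```

Anything under 5 GB fits in a single put_object call, so for typical Lambda payloads this planning never comes into play.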
If you’re uploading files from local disk in Lambda’s /tmp directory (512 MB by default, configurable up to 10 GB with ephemeral storage), use upload_file instead:
# Write to /tmp first, then upload
file_path = "/tmp/export.csv"
with open(file_path, "w") as f:
    f.write("col1,col2\nval1,val2\n")

s3_client.upload_file(file_path, bucket_name, "exports/export.csv")
Summary
| Where | What to Configure |
|---|---|
| Account B — S3 bucket policy | Allow Account A’s Lambda role ARN to s3:PutObject |
| Account A — Lambda IAM policy | Allow s3:PutObject on Account B’s bucket ARN |
| Account A — Lambda code | Use boto3 with put_object — no AssumeRole needed |
| Account B — Object Ownership | Use “Bucket owner enforced” or set ACL in code |
Conclusion
The resource-based policy approach is the simplest way to do cross-account S3 uploads from Lambda when you only need write access. Set the bucket policy in Account B, add the IAM policy in Account A, and the Lambda code doesn’t need anything special — just a regular boto3 S3 client.
For copying existing objects between accounts (not from Lambda), see How to Copy S3 Bucket Objects Across AWS Accounts. If your Lambda also needs to read secrets from Account B, How to Access AWS Secrets Manager from Another Account covers that cross-account pattern with AssumeRole.


