ECS Exec gives you shell access to running Fargate containers — similar to docker exec but for AWS. This guide shows you how to enable ECS Exec on Fargate by configuring the required IAM roles, updating your service, and connecting to a container from the CLI.
You’d use this when you need to debug a running container: check environment variables, inspect files, test network connectivity, or troubleshoot an app that’s behaving differently in ECS than it does locally.
## Prerequisites
- An ECS Fargate service already running (platform version 1.4.0 or higher)
- AWS CLI v2 installed
- The Session Manager plugin for AWS CLI (install below)
- IAM permissions to modify ECS task roles and services
- The container’s security group must allow outbound traffic on port 443 — ECS Exec uses SSM, which communicates over HTTPS
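To confirm the last point, you can read the security group's egress rules from the CLI. This is a sketch; the group ID is a placeholder for the one attached to your tasks:

```shell
# Hypothetical security group ID; substitute the one your service uses.
aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissionsEgress'
# Look for a rule allowing TCP 443 (or all traffic) outbound.
```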
## Install the Session Manager plugin

ECS Exec uses AWS Systems Manager (SSM) under the hood, and the CLI plugin is required to establish the session. The examples here were run on WSL2 Ubuntu on Windows, but the same steps work on any Ubuntu system.
```shell
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o "session-manager-plugin.deb"
sudo dpkg -i session-manager-plugin.deb
rm session-manager-plugin.deb
```
Verify it installed:

```shell
session-manager-plugin --version
```
## Step 1: Create the ECS Task IAM Role
ECS Exec runs an SSM agent inside your container. The task role (not the task execution role) needs permission to communicate with the SSM service.
Create a new IAM role with the ECS Tasks trust policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Attach an inline policy with the SSM messaging permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    }
  ]
}
```
Name the role something clear like `ecsExecTaskRole`. These four `ssmmessages` actions are the minimum required; they allow the in-container SSM agent to establish a session back to the SSM service.
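If you prefer the CLI to the console, the same role can be created in two calls. This is a sketch; it assumes the two JSON documents above have been saved as `trust-policy.json` and `ssm-messages-policy.json`:

```shell
# Create the role with the ECS Tasks trust policy
aws iam create-role \
  --role-name ecsExecTaskRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the ssmmessages permissions as an inline policy
aws iam put-role-policy \
  --role-name ecsExecTaskRole \
  --policy-name ssm-messages \
  --policy-document file://ssm-messages-policy.json
```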
## Step 2: Grant Your IAM User the ExecuteCommand Permission
The IAM user or role you use to run `aws ecs execute-command` also needs permission. Attach this policy to your IAM user, group, or the role you assume:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecs:ExecuteCommand",
      "Resource": "*"
    }
  ]
}
```
For production, scope the `Resource` to your cluster and task ARNs instead of `*`.
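A scoped version might look like this. The account ID, region, and cluster name are placeholders; the task wildcard restricts the permission to tasks in that one cluster:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecs:ExecuteCommand",
      "Resource": "arn:aws:ecs:ap-southeast-1:111122223333:task/your-cluster-name/*"
    }
  ]
}
```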
## Step 3: Assign the Task Role to Your Task Definition
Open your task definition in the ECS console, create a new revision, and set the Task Role to the role you created in Step 1.
Make sure you’re setting the Task Role — not the Task Execution Role. These are different:
| Role | Purpose |
|---|---|
| Task Role | Permissions for your running container (app code, SSM agent) |
| Task Execution Role | Permissions for ECS itself (pulling images, writing logs) |
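To double-check which role ended up where, you can read both ARNs back from the new revision. The task definition name and revision here are placeholders:

```shell
aws ecs describe-task-definition \
  --task-definition your-task-def:2 \
  --query '{taskRole: taskDefinition.taskRoleArn, executionRole: taskDefinition.executionRoleArn}'
```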
## Step 4: Enable ECS Exec on the Service
Update your ECS service to enable the execute-command flag and deploy the new task definition revision:
```shell
aws ecs update-service \
  --cluster your-cluster-name \
  --service your-service-name \
  --region ap-southeast-1 \
  --enable-execute-command \
  --force-new-deployment
```
- `--enable-execute-command`: turns on ECS Exec for the service
- `--force-new-deployment`: forces ECS to launch new tasks with the updated config (existing tasks won't have ECS Exec enabled)
Wait for the new tasks to reach the `RUNNING` state before proceeding.
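You can also confirm that the flag took effect on the service itself (cluster and service names are the placeholders used above):

```shell
aws ecs describe-services \
  --cluster your-cluster-name \
  --services your-service-name \
  --region ap-southeast-1 \
  --query 'services[0].enableExecuteCommand'
# Should print: true
```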
## Step 5: Validate the Configuration
Before trying to connect, verify that ECS Exec is properly configured using the Amazon ECS Exec Checker — an open-source tool created by the AWS Containers team.
```shell
bash <(curl -Ls https://raw.githubusercontent.com/aws-containers/amazon-ecs-exec-checker/main/check-ecs-exec.sh) your-cluster-name your-task-id
```
This script checks your IAM roles, task definition, SSM agent status, and VPC configuration. Fix any items it flags before continuing.
## Step 6: Connect to the Container
Get the task ARN for a running task:
```shell
TASK_ARN=$(aws ecs list-tasks \
  --cluster your-cluster-name \
  --service your-service-name \
  --region ap-southeast-1 \
  --output text \
  --query 'taskArns[0]')
```
Open an interactive shell:
```shell
aws ecs execute-command \
  --region ap-southeast-1 \
  --cluster your-cluster-name \
  --task $TASK_ARN \
  --container your-container-name \
  --command "/bin/sh" \
  --interactive
```
If your container has bash installed, you can use `/bin/bash` instead of `/bin/sh`.
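You don't have to open a full shell; a single command works too, useful for quick checks. Note that the CLI still requires the `--interactive` flag even for a one-off command:

```shell
# Print the container's environment variables and exit
aws ecs execute-command \
  --region ap-southeast-1 \
  --cluster your-cluster-name \
  --task $TASK_ARN \
  --container your-container-name \
  --command "env" \
  --interactive
```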
## Troubleshooting
Common issues when ECS Exec doesn’t work:
| Problem | Fix |
|---|---|
| “The execute command failed” with no further detail | Check that the task role (not the execution role) has the `ssmmessages` permissions |
| SSM agent not running in the task | The task was launched before `--enable-execute-command` was set; force a new deployment |
| Session Manager plugin not found | Install the Session Manager plugin (see Prerequisites) |
| Timeout or connection hanging | The security group doesn’t allow outbound HTTPS (port 443), which SSM needs |
| Platform version error | ECS Exec requires Fargate platform version 1.4.0+; update your service or task definition |
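For the SSM agent case specifically, ECS reports the agent's status as a managed agent on the task, so you can inspect it directly. Cluster and task names here are the same placeholders used earlier:

```shell
# Check the status of the in-task ExecuteCommandAgent
aws ecs describe-tasks \
  --cluster your-cluster-name \
  --tasks your-task-id \
  --region ap-southeast-1 \
  --query 'tasks[0].containers[].managedAgents[?name==`ExecuteCommandAgent`].lastStatus'
# A healthy agent reports RUNNING
```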
## Conclusion
You now have ECS Exec enabled on your Fargate service and can open a shell into any running container for debugging. The key pieces are the task role with SSM permissions, the `--enable-execute-command` flag on the service, and the Session Manager plugin on your local machine.
If you’re managing multiple AWS accounts, the same IAM pattern works with cross-account AssumeRole. For monitoring your containerized workloads, check out Setting Up Datadog Monitoring for Containerized AWS Lambda Functions.


