If you’re running MySQL on EC2, you need automated backups. A single disk failure or accidental DROP TABLE can wipe out your data. This guide shows you how to automate MySQL database backups on EC2 to Amazon S3 using mysqldump, gzip compression, and a cron job. Backups upload directly to S3 using the AWS CLI — no s3fs mounts or access keys required.
Prerequisites
- An EC2 instance running Ubuntu 22.04 — follow How to deploy EC2 Ubuntu 22.04 LTS on AWS if you need one
- MySQL installed — see How to Install MySQL Database on EC2 Ubuntu 22.04 LTS
- AWS CLI v2 installed (you can verify the version with the command below)
- An S3 bucket for storing backups
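A quick way to confirm the CLI is installed and is in fact v2:
aws --version
Output beginning with aws-cli/2 confirms you're on v2.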
Step 1: Create an IAM Role for EC2
Instead of using access keys, attach an IAM instance profile to your EC2 instance. This gives the instance permission to upload to S3 without storing any credentials on disk.
Create a policy that lets the instance upload, download, and list objects in your backup bucket. Save this as s3-backup-policy.json:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-db-backups",
        "arn:aws:s3:::my-db-backups/*"
      ]
    }
  ]
}
Create the IAM role and attach the policy:
aws iam create-role \
  --role-name EC2-S3-Backup \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

aws iam put-role-policy \
  --role-name EC2-S3-Backup \
  --policy-name S3BackupAccess \
  --policy-document file://s3-backup-policy.json

aws iam create-instance-profile \
  --instance-profile-name EC2-S3-Backup

aws iam add-role-to-instance-profile \
  --instance-profile-name EC2-S3-Backup \
  --role-name EC2-S3-Backup
Attach the instance profile to your EC2 instance (you can also do this in the EC2 console under Actions > Security > Modify IAM role):
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=EC2-S3-Backup
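Before verifying on the instance itself, you can confirm the association from your workstation (same placeholder instance ID as above):
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0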
Verify it works from the EC2 instance:
aws s3 ls s3://my-db-backups
If this runs without errors, the instance profile is working.
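If it fails instead, check whether the instance can see the role at all through instance metadata (IMDSv2 shown; the role name should be echoed back):
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/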
Step 2: Store MySQL Credentials Securely
Never put MySQL passwords directly in a backup script. Use a MySQL option file instead. Create /root/.my.cnf:
sudo nano /root/.my.cnf
Add your MySQL credentials:
[mysqldump]
user=backup_user
password=YOUR_SECURE_PASSWORD
Lock down the permissions:
sudo chmod 600 /root/.my.cnf
With this in place, mysqldump picks up the credentials automatically when run as root — no -u or -p flags needed in the script.
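If backup_user doesn't exist yet, here's a minimal sketch for creating it with just enough privileges for the dump (the user name and password are the placeholders from the option file above):
sudo mysql <<'SQL'
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'YOUR_SECURE_PASSWORD';
-- SELECT, SHOW VIEW, TRIGGER and EVENT cover what mysqldump reads;
-- LOCK TABLES is a fallback for non-InnoDB tables.
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup_user'@'localhost';
-- On MySQL 8.0.21+ mysqldump also wants PROCESS for tablespace info;
-- either grant that too or add --no-tablespaces to the dump command.
SQL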
Step 3: Create the Backup Script
Create the script at /usr/local/bin/mysql-backup-s3.sh:
sudo nano /usr/local/bin/mysql-backup-s3.sh
Add the following:
#!/bin/bash
set -euo pipefail
# Configuration
DATABASE="myapp"
S3_BUCKET="s3://my-db-backups"
TIMESTAMP=$(date +%Y-%m-%d_%H-%M-%S)
BACKUP_FILE="/tmp/${DATABASE}-${TIMESTAMP}.sql.gz"
# Dump and compress
mysqldump --single-transaction --routines --triggers "$DATABASE" | gzip > "$BACKUP_FILE"
# Upload to S3
aws s3 cp "$BACKUP_FILE" "${S3_BUCKET}/${DATABASE}/${TIMESTAMP}.sql.gz"
# Clean up local file
rm -f "$BACKUP_FILE"
echo "[$(date)] Backup completed: ${DATABASE}/${TIMESTAMP}.sql.gz"
- set -euo pipefail — stops the script on any error, including pipe failures
- --single-transaction — takes a consistent snapshot without locking tables (InnoDB)
- --routines --triggers — includes stored procedures and triggers in the dump
- gzip — compresses the dump (typically 5–10x smaller)
Make it executable:
sudo chmod +x /usr/local/bin/mysql-backup-s3.sh
Test it manually:
sudo /usr/local/bin/mysql-backup-s3.sh
Check that the file appeared in S3:
aws s3 ls s3://my-db-backups/myapp/ --human-readable
Step 4: Back Up All Databases (Optional)
To back up every database on the server, replace the single-database dump with --all-databases (and update the DATABASE variable, e.g. to all-databases, so the file name still makes sense):
mysqldump --single-transaction --routines --triggers --all-databases | gzip > "$BACKUP_FILE"
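If you'd rather keep one object per database, which makes selective restores easier, here's a sketch of a loop that could replace the dump-and-upload lines in the script. It assumes the plain mysql client can also read credentials, e.g. from a [client] section in /root/.my.cnf:
# Dump each user database separately, skipping MySQL's system schemas
for DB in $(mysql -N -B -e "SHOW DATABASES" | grep -Ev '^(information_schema|performance_schema|mysql|sys)$'); do
  FILE="/tmp/${DB}-${TIMESTAMP}.sql.gz"
  mysqldump --single-transaction --routines --triggers "$DB" | gzip > "$FILE"
  aws s3 cp "$FILE" "${S3_BUCKET}/${DB}/${TIMESTAMP}.sql.gz"
  rm -f "$FILE"
done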
Step 5: Schedule with Cron
Open the root crontab:
sudo crontab -e
Add a daily backup at 2:00 AM:
0 2 * * * /usr/local/bin/mysql-backup-s3.sh >> /var/log/mysql-backup.log 2>&1
Logs go to /var/log/mysql-backup.log so you can check if backups are running and catch errors (see the logrotate sketch after the table below to keep that file from growing forever).
| Schedule | Cron Expression |
|---|---|
| Every day at 2 AM | 0 2 * * * |
| Every 6 hours | 0 */6 * * * |
| Every hour | 0 * * * * |
| Twice a day (2 AM and 2 PM) | 0 2,14 * * * |
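One housekeeping note on that log file: cron appends to it indefinitely, so a small logrotate rule keeps it in check. A minimal sketch (logrotate ships with Ubuntu by default):
sudo tee /etc/logrotate.d/mysql-backup <<'EOF'
/var/log/mysql-backup.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
EOF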
Step 6: Set Up S3 Lifecycle Policy for Retention
Instead of deleting old backups with a script, let S3 handle retention automatically with a lifecycle policy. This moves old backups to cheaper storage and eventually deletes them:
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-db-backups \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "BackupRetention",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [
          {
            "Days": 30,
            "StorageClass": "GLACIER"
          }
        ],
        "Expiration": {
          "Days": 90
        }
      }
    ]
  }'
- After 30 days, backups move to Glacier (much cheaper storage)
- After 90 days, backups are deleted automatically
Adjust the days based on your retention requirements.
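You can confirm the rule is in place with:
aws s3api get-bucket-lifecycle-configuration --bucket my-db-backups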
Restoring from a Backup
Download the backup from S3, decompress it, and load it into MySQL. Note that the option file from Step 2 only covers mysqldump, so pass credentials to the mysql client explicitly, and create the target database first if it doesn't exist:
aws s3 cp s3://my-db-backups/myapp/2025-06-15_02-00-01.sql.gz /tmp/restore.sql.gz
gunzip /tmp/restore.sql.gz
mysql -u root -p myapp < /tmp/restore.sql
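For large dumps you can skip the intermediate file entirely and stream straight from S3 into MySQL (aws s3 cp accepts - to write the object to stdout):
aws s3 cp s3://my-db-backups/myapp/2025-06-15_02-00-01.sql.gz - | gunzip | mysql -u root -p myapp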
Test your restores periodically — a backup that can’t be restored is worthless.
Conclusion
The setup is: IAM instance profile for S3 access, a bash script that runs mysqldump with gzip and uploads to S3, a cron job for scheduling, and a lifecycle policy for automatic retention. No access keys, no s3fs mounts, no manual cleanup scripts.
If you need to copy these backups to another AWS account for disaster recovery, see How to Copy S3 Bucket Objects Across AWS Accounts. For mounting S3 as a filesystem for other use cases, check out How to Mount S3 Bucket on Ubuntu 22.04 with S3FS Fuse.