In this tutorial, you will learn how to install S3fs and mount an S3 bucket on an Ubuntu 18.04 server running as an Amazon EC2 instance.
If you have a WordPress website and you are planning to back up your data, this guide walks you step by step through installing S3fs and mounting an S3 bucket directly on your Ubuntu server. Once the bucket is mounted, you can easily replicate your files to Amazon S3.
S3fs is a FUSE-based file system backed by Amazon S3. It lets you mount your S3 buckets into the Ubuntu filesystem and use them like a network drive.
What You Will Do
- Installing S3fs on EC2 Ubuntu
- Set up an IAM user with access to the S3 bucket
- Creating S3fs Credentials file
- Mounting S3 Bucket on Ubuntu Filesystem
- Test uploading files
- Verify the files on S3 bucket
- Automount S3fs using the fstab file
- Unmount S3 bucket
Requirements
- AWS Account. Create your own AWS Account
- Ubuntu 18.04. Learn to deploy EC2 Ubuntu instance on AWS Console
- A user with sudo privileges.
To get started, follow the steps below to install S3fs and mount an S3 bucket on your EC2 instance.
Step 1. Installing S3fs on EC2 Ubuntu
Open a terminal on your system and SSH into your EC2 Ubuntu server. Then update your system's package repository:
sudo apt-get update
Once the update completes, install S3fs (and the AWS CLI) with:
sudo apt install s3fs awscli -y
Then verify that S3fs installed properly:
which s3fs
Output:
/usr/bin/s3fs
If everything is fine, continue to the second step.
Step 2. Set Up an IAM User with Access to the S3 Bucket
Sign in to your AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/home.
To allow your EC2 instance to access the S3 bucket, you need to create an IAM policy with permission to execute PUT, GET, and DELETE operations on the bucket. Check out this guide on how to create IAM users and policies.
You can use the default AmazonS3FullAccess policy, or create your own custom policy:
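If you prefer a narrower policy than AmazonS3FullAccess, a minimal sketch might look like the following (mys3_bucket_name is a placeholder; s3fs needs ListBucket on the bucket itself plus object-level read, write, and delete). Here the JSON is written to a local file that you could then attach via the IAM console or the AWS CLI:

```shell
# Minimal S3 policy for s3fs (bucket name is a placeholder) -- written to
# a local file for later upload via the IAM console or the AWS CLI.
cat > /tmp/s3fs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mys3_bucket_name"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::mys3_bucket_name/*"
    }
  ]
}
EOF
echo "policy written to /tmp/s3fs-policy.json"
```

The two statements are separate on purpose: ListBucket applies to the bucket ARN, while the object actions apply to the objects inside it (the /* suffix).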
Step 3. Creating S3fs Credentials file
Switch to your terminal on the EC2 Ubuntu server. Create a file to hold your IAM user's access key ID and secret key (replace the placeholders with your own values):
echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /home/ubuntu/.s3fs-creds
Then secure the credentials file by setting restrictive permissions:
chmod 600 /home/ubuntu/.s3fs-creds
And verify the file permissions (the file is hidden, so use ls -la):
ls -la /home/ubuntu
Output:
-rw------- 1 ubuntu ubuntu 10 Jul 19 14:12 .s3fs-creds
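S3fs is picky about this file: it must contain the keys in ACCESS_KEY_ID:SECRET_ACCESS_KEY form and have mode 600, or s3fs will refuse to use it. As a sketch, the two requirements can be checked like this (using a throwaway copy in /tmp with placeholder keys):

```shell
# Recreate the credentials file format in /tmp and verify the two things
# s3fs requires: a colon-separated key pair and 600 permissions.
creds=/tmp/.s3fs-creds
echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > "$creds"
chmod 600 "$creds"
mode=$(stat -c '%a' "$creds")              # permission bits, e.g. 600
fields=$(awk -F: '{ print NF }' "$creds")  # colon-separated fields, should be 2
echo "mode=$mode fields=$fields"           # prints: mode=600 fields=2
```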
Step 4. Mounting S3 Bucket on Ubuntu Filesystem
Create a directory to use as the mount point for the S3 bucket.
mkdir /home/ubuntu/s3_uploads
To mount the S3 bucket on the Ubuntu server, type the command (replace mys3_bucket_name with your own bucket name):
sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds
Optional:
If you want to mount a subdirectory (prefix) of your S3 bucket, use the following command:
sudo s3fs mys3_bucket_name:/data/www/public /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds,nonempty
Note: If the local mount point directory is not empty, add the nonempty option.
Now verify in the filesystem table that S3fs mounted properly:
df -h
The output should show an s3fs (or fuse.s3fs) filesystem mounted on /home/ubuntu/s3_uploads.
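S3fs can fail without printing anything (a common complaint), so it is worth checking the kernel mount table directly rather than trusting the exit status. A small sketch, assuming the mount point from this tutorial:

```shell
# Look for a fuse.s3fs entry in /proc/mounts; if none is found, re-run
# s3fs in the foreground with debugging to see the real error:
#   sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads \
#        -o passwd_file=/home/ubuntu/.s3fs-creds -f -o dbglevel=info
if grep -qs 'fuse.s3fs' /proc/mounts; then
    status=mounted
else
    status=unmounted
fi
echo "s3fs status: $status"
```

The -f flag keeps s3fs in the foreground and dbglevel=info raises its log verbosity, which usually surfaces credential or bucket-name problems immediately.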
Step 5. Test uploading files
Change into the mount point directory:
cd /home/ubuntu/s3_uploads
And create some dummy files using the touch command.
touch file{1..10}.txt
Verify the created files using:
ls -lt
Output:
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file10.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file9.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file8.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file7.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file6.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file5.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file4.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file3.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file2.txt
-rw-r--r-- 1 ubuntu ubuntu 0 May 10 22:30 file1.txt
Step 6. Verify the files on S3 bucket
While signed in to the AWS Management Console, go to the S3 service and open the bucket you created. You will see that the files have been automatically replicated to the bucket.
You are almost done. Next, continue to step 7.
Step 7. Automount S3fs Using the fstab File
When you restart the server, the mounted directory will disappear. To prevent this from happening, you need to set up an automatic mount using the fstab file.
Open the fstab file:
sudo vim /etc/fstab
Then add the following entry at the bottom of the file:
s3fs#mys3_bucket_name /home/ubuntu/s3_uploads fuse _netdev,allow_other,passwd_file=/home/ubuntu/.s3fs-creds 0 0
Note: The # sign after s3fs is required; it is part of the device field, not a comment.
When the server reboots, this fstab entry is executed automatically to remount S3fs during system startup.
Note: Be careful with the fstab file. If you make a mistake in it, your server may fail to boot.
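An fstab entry is six whitespace-separated fields (device, mount point, type, options, dump, fsck pass), and a malformed line is one of the ways fstab can break boot. As a sketch, you can count the fields of your entry locally before rebooting; this version points s3fs at the credentials file from step 3 via the passwd_file option:

```shell
# Six fields: device, mount point, fs type, options, dump, fsck order.
line='s3fs#mys3_bucket_name /home/ubuntu/s3_uploads fuse _netdev,allow_other,passwd_file=/home/ubuntu/.s3fs-creds 0 0'
fields=$(echo "$line" | awk '{ print NF }')
echo "fstab entry has $fields fields"   # prints: fstab entry has 6 fields
```

Note that the options field must contain no spaces; a stray space after a comma would split it into an extra field and break the entry.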
Autoremount Using a Custom Bash Script
I have created a simple Bash script to remount your S3 bucket automatically. The step-by-step process below is a good way to avoid failures on your end.
Step 1. Create a bash file using:
sudo vim /home/ubuntu/mount.sh
Step 2. Add the following lines:
#!/bin/bash
sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds
Save and close the file.
Step 3. Make the file executable, run the command:
sudo chmod +x mount.sh
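Before wiring the script into cron, it is worth a quick syntax check: bash -n parses a script without executing anything in it. A sketch using a throwaway copy in /tmp (the bucket name and paths mirror the ones in this tutorial):

```shell
# Write a copy of the mount script and parse it with bash -n; nothing is
# executed, so this is safe to run anywhere.
cat > /tmp/mount.sh <<'EOF'
#!/bin/bash
sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds
EOF
chmod +x /tmp/mount.sh
bash -n /tmp/mount.sh && echo "syntax OK"   # prints: syntax OK
```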
Step 4. Add the script to your cron jobs, type the command:
crontab -e
Step 5. To automatically remount your S3 bucket at boot, add the following line:
@reboot /bin/bash -c /home/ubuntu/mount.sh
Save and exit the file.
Now reload your Cron job service, run command:
sudo service cron reload
Output:
Reloading configuration files for periodic command scheduler cron [ OK ]
If everything is fine, you can now reboot your instance from the AWS console or with the command:
sudo reboot now
Step 8. Unmount S3 bucket
If you want to unmount the S3 bucket from the filesystem, type the command:
sudo umount /home/ubuntu/s3_uploads
Finally, you can copy your backups into the S3fs mount point directory locally, and the copied files are automatically replicated to the S3 bucket within seconds.
Also, you can learn how to set up your MySQL backups to be moved to an S3 bucket automatically.
That’s all.
I hope this tutorial helped you. Feel free to use the comment section below for suggestions.
Tags: aws, ec2, linux, mount, nfs, s3, s3fs, ubuntu, ubuntu 18.04
Thanks mate! This documentation was really helpful and it worked fine. Thanks again, you are great!
How can I access it without the root user?
Hi Vijay,
Which part of the steps do you mean?
I wanted to remount after reboot,
but I am unable to do that with the above commands.
Have you tried running `sudo apt install awscli -y` on your machine?
Excellent tutorial, congratulations Gerald.
I think I did everything right, but the s3fs does not mount in step 4. There is no error message, as if the mount succeeded, yet it does not copy the files to the S3 bucket.
Can you help me?
Thank you
Try following:
Step 1. Make sure the IAM user's keys have sufficient privileges
(e.g. using the "AmazonS3FullAccess" policy).
Step 2. Try to mount using "sudo", e.g.: sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds
Let me know if it's not working, and let's chat on Skype directly for free assistance. 🙂
This happens with me too. It does not mount at all. Can you help?
Hey Joel, have you tried using “sudo”?
“`
sudo s3fs mys3_bucket_name /home/ubuntu/s3_uploads -o passwd_file=/home/ubuntu/.s3fs-creds
“`
Thank you for the tutorial. I’ve followed the steps but am getting a 403 error
sudo s3fs cdn-mybucket-com /s3/bucket-test -o passwd_file=/home/s3fs/.s3fs-creds -o dbglevel=info -f -o curldbg -o nonempty
Results…
[INF] curl.cpp:RequestPerform(1957): HTTP response code 403 was returned, returning EPERM
[ERR] curl.cpp:CheckBucket(2953): Check bucket failed, S3 response:
I've verified the IAM credentials and that the user has full S3 privileges.
Any tips?
Hi Steve, to mount s3fs via fstab you need to include the # sign. Here's an example: s3fs# /path/to/mountpoint fuse allow_other,use_cache=/tmp/cache,uid=userid,gid=groupid 0 0
Good Doc mate!!!
Mounting the S3 bucket with S3fs is successful, but the files I created in the mount point are not visible, and the files I moved to the S3 bucket using the aws s3 mv command are also not visible in my Linux terminal.
Hi Suraj, correct me if I'm wrong: you mention that you have successfully mounted the S3 bucket to your local folder using S3fs. That's cool. Now, if you have existing files in the S3 bucket and want to verify them from your local mounted directory, run the list command `ls -lah`, and make sure your user has root privileges, or switch to root with `sudo su`.
Hi Gerald Alinio!
I can mount the S3 bucket to wp-content, but wp-content is owned by root:root and my server runs the webserver as nobody:nogroup.
How can I mount the S3 bucket with owner nobody:nogroup, please?
When you create the cron job, don't use "sudo crontab -e"; use "crontab -e". If you already used sudo, delete the "@reboot…" row from the root crontab.
Hi, thanks for the great tutorial.
I executed this command:
sudo s3fs elearningdataun48 /home/ubuntu/s3_elearning -o passwd_file=/home/ubuntu/.s3fs-creds,nonempty
but when I execute the command df -h, no s3fs filesystem appears. And when I upload a file to s3_elearning, no file appears in the S3 bucket. Can you help me?
Thank you
Great article!!
How can we change permissions on the mount point? It gets mounted as root:root. How can I give 777 permissions on this mount point so that others besides root can access it as well? chmod doesn't work while it is mounted, and when you unmount and then mount it back, root is the owner and group.