Backing Up a MySQL Database to Amazon S3 Using a Bash Script Is Not Rocket Science! Learn It Now!


This is an easy way to back up your MySQL database to Amazon S3 at regular intervals – everything here is done on your server.

First, run through this checklist – it covers everything you need before starting:

  • s3cmd installed and configured on the server from the command line
  • An S3 bucket to store the dump files
  • A Bash script containing the MySQL credentials (hostname, username, database name), the location on your server where the dump should be stored (PATH), and the log path
  • chmod +x on the backup script (mysqlbackupS3.sh)
  • Test it and check the S3 bucket
  • Schedule it with cron as per your requirement

1. Install S3cmd

On Ubuntu, run from the command prompt:

$ sudo apt-get update

$ sudo apt-get install s3cmd

$ s3cmd --configure

On Red Hat / CentOS:

# cd /etc/yum.repos.d/
# wget http://s3tools.org/repo/CentOS_5/s3tools.repo
# yum install s3cmd

Set up s3cmd:
# s3cmd --configure

The s3cmd configuration file is named .s3cfg and it is located in the user’s home directory, e.g. /home/username/ ($HOME).

You will be asked for two keys (the Access Key and Secret Key are your identifiers for Amazon S3) – copy and paste them from your confirmation email or from your Amazon account page.

They are case sensitive and must be entered accurately, or you will keep getting errors about invalid signatures or similar.

You can optionally enter a GPG encryption key that will be used for encrypting your files before sending them to Amazon. Using GPG encryption will protect your data against reading by Amazon staff or anyone who may get access to them while they are stored at Amazon S3.

Other advanced settings can be changed (if needed) by editing the config file manually. Some of the settings contain the default values for s3cmd to use.

The following is an example of an s3cmd config file (.s3cfg):

[default]
access_key = <your key>
access_token = 
add_encoding_exts = 
add_headers = 
bucket_location = ap-south-1
ca_certs_file = 
cache_file = 
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date = 
expiry_days = 
expiry_prefix = 
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase = 
guess_mime_type = True
host_base = s3.amazonaws.com
host_bucket = %(bucket)s.s3.amazonaws.com
human_readable_sizes = False
... (remaining settings omitted)
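Once s3cmd is configured, it is worth creating the bucket from the checklist and confirming access before writing the script. A quick sanity check (the bucket name here matches the one assumed in the script below):

$ s3cmd mb s3://bucket-name-dump    # create the bucket
$ s3cmd ls                          # list your buckets to confirm it exists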

2. MySQL-to-S3 Backup Script

# vim mysqlbackupS3.sh

#!/bin/bash

# This script backs up the database, deletes local backups older than
# 15 days, and uploads the dump file to the S3 bucket.

# Database credentials
USER="DB-USER"
PASSWORD="PASSWORD"
HOST="DB-host-name"
DB_NAME="Database-name"

# Backup directory locations
BACKUPROOT="/backup/mysql_dump"
TSTAMP=$(date +"%d-%b-%Y-%H-%M-%S")
S3BUCKET="s3://bucket-name-dump"

# Logging
LOG_ROOT="/backup/mysql_dump/logs/dump.log"

# Dump the MySQL database
echo "$(tput setaf 2)Creating backup of database, started at $TSTAMP" >> "$LOG_ROOT"

mysqldump -h "$HOST" -u "$USER" -p"$PASSWORD" --databases "$DB_NAME" > "$BACKUPROOT/$DB_NAME-$TSTAMP.sql"

# Equivalent long-option form:
# mysqldump --host="$HOST" --user="$USER" --password="$PASSWORD" --databases "$DB_NAME" > "$BACKUPROOT/$DB_NAME-$TSTAMP.sql"

echo "$(tput setaf 3)Finished backup of database, sending it to the S3 bucket at $TSTAMP" >> "$LOG_ROOT"

# Delete local dump files older than 15 days
find "$BACKUPROOT" -name "*.sql" -mtime +15 -exec rm {} \;

# Upload the backup directory to S3
s3cmd put --recursive "$BACKUPROOT" "$S3BUCKET"

echo "$(tput setaf 2)Moved the backup file from local to the S3 bucket at $TSTAMP" >> "$LOG_ROOT"

echo "$(tput setaf 3)Cool! Script executed successfully at $TSTAMP" >> "$LOG_ROOT"


###Save it and exit (:wq)
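One hardening note: passing the password on the command line can expose it to other users via ps, and raw .sql dumps can get large. A minimal variant sketch (the ~/.my.cnf option file and the gzip step are my additions, not part of the original script; if you compress, widen the find pattern to "*.sql*"):

# ~/.my.cnf (chmod 600) is read automatically by mysqldump:
# [mysqldump]
# user=DB-USER
# password=PASSWORD
# host=DB-host-name

# The dump line then needs no credentials on the command line
# and compresses on the fly:
mysqldump --databases "$DB_NAME" | gzip > "$BACKUPROOT/$DB_NAME-$TSTAMP.sql.gz"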
3. Make it Executable and Test it

I assume you are logged in as the root user on your server; if you are a local user, make sure to run the following with sudo.


# chmod +x mysqlbackupS3.sh
# Run the script to make sure it's all good
# ./mysqlbackupS3.sh

If everything runs without any issues, move on to the next step.
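Before scheduling it, confirm the dump actually landed in the bucket and that the log was written (bucket name and paths as defined in the script above):

$ s3cmd ls --recursive s3://bucket-name-dump
$ tail /backup/mysql_dump/logs/dump.log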

4. Schedule it with Cron

Assuming the backup script is stored in the /opt/scripts directory, we need to add a cron task to run it automatically on a weekly basis:

So we have to edit the crontab file:
 # vim /etc/crontab
    # Add the following line (note that /etc/crontab requires a user field):
     # Run the database backup script every Sunday at 00:00 (midnight)
 0 0 * * 0  root  bash /opt/scripts/mysqlbackupS3.sh >/dev/null 2>&1

###save and exit (:wq)
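After the first scheduled run, you can confirm cron actually fired the job. On Ubuntu, cron entries go to syslog (on Red Hat the log is typically /var/log/cron):

$ grep CRON /var/log/syslog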

Now you have your database backup (dump) in your S3 bucket.

I would love to hear your own recommendations and experiences in the comments below. Please share your valuable feedback.
