Live MySQL Backup without Locking Tables

The mysqldump client utility is the easiest and most popular tool for DBAs to back up a MySQL database. It supports numerous options to configure the backup, such as the backup format, what to back up, how to handle key constraints, how to deal with transactions, and so on. The output of a mysqldump backup operation is called a dump file, and it can be restored to recreate the backed-up database. Dump files can be produced in formats such as SQL, CSV, tab-delimited, and XML. A dump file can be further compressed with a suitable compression utility such as gzip, xz, or bzip2 to reduce its size and make it easier to store and transfer.
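For example, a dump can be piped straight into gzip so the compressed file is written directly, and decompressed on the fly during a restore (db_name and the file names below are placeholders):

shell> mysqldump db_name | gzip > backup-file.sql.gz

shell> gunzip < backup-file.sql.gz | mysql db_name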

Following are the typical scenarios where a database backup is necessary:

  • To recover data after corruption or loss.
  • To create dumps of existing databases for setting up replication slaves.
  • To make a copy of the database for use cases such as database upgrades and data processing.

The basic mysqldump invocations are:

shell> mysqldump [options] db_name [tbl_name ...]

shell> mysqldump [options] --databases db_name ...

shell> mysqldump [options] --all-databases

Some basic options that can be provided with the mysqldump command cover:

  1. Connection settings: protocol, host, user, and password.
  2. DDL options: whether to add/drop existing databases or triggers during restore, use REPLACE instead of INSERT, create a table only if it does not exist, etc.
  3. Diagnostics: log errors to a log file, skip comments in the dump, print verbose output during mysqldump operations.
  4. The character set to be used.
  5. Output format type and file name.
  6. Database selection: one or more databases, and whether to include tables, events, routines, and triggers.
  7. Locking and consistency: add table locks before dumping, flush logs, etc.
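Several of these option categories can be combined in a single invocation. As a sketch (the host name, user name, and file names here are placeholders):

shell> mysqldump -h db.example.com -u backup_user -p --default-character-set=utf8mb4 --routines --events --log-error=dump-errors.log db_name > backup-file.sql

This prompts for the password, dumps db_name including its stored routines and events, and writes any warnings or errors to dump-errors.log.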

Example mysqldump commands:

To back up an entire database to an SQL file:
shell> mysqldump db_name > backup-file.sql

To restore the database from the dump file:
shell> mysql db_name < backup-file.sql

Running mysqldump on live databases

Running mysqldump with the default options on a small live database is usually fine. But when dealing with a medium-to-large live database, the default options can make the application temporarily unavailable to its users. This is because mysqldump locks the tables during the backup (also known as the export) process. When the application tries to perform an INSERT on a locked table during the export, the statement blocks until the lock is released, which appears to users as downtime.
To overcome this issue there is a mysqldump option that can be applied to tables using a transactional storage engine such as InnoDB.

This option relies on the REPEATABLE READ transaction isolation level, the default for the InnoDB storage engine, and InnoDB's consistent (multi-versioned) reads. Under this isolation level, SELECT statements see a consistent snapshot of the data while other sessions continue reading and writing newer versions of the rows. This is highly beneficial when multiple processes are reading from and writing to the same tables. The --single-transaction option sets the transaction isolation level to REPEATABLE READ and issues a START TRANSACTION statement to the server before the dump begins, so mysqldump performs the entire dump inside a single transaction. Tables are not locked, allowing writes during the backup process. The rows read by mysqldump are consistent with the state of the database at the moment the transaction started, so the backup file represents a single consistent point in time, preserving data integrity without downtime.

shell> mysqldump -uUserName -pPassword -hHost --single-transaction db_name > latest_backup.sql

Note that this option does not provide a consistent backup for non-transactional engines such as MyISAM. For this kind of backup process to work correctly, no DDL or table-altering statements such as ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, or TRUNCATE TABLE should run concurrently with the dump.
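To check which tables in a schema use a transactional engine before relying on --single-transaction, the information_schema tables can be queried (db_name is a placeholder):

shell> mysql -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'db_name';"

Any table reporting MyISAM here will not be dumped consistently by --single-transaction.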

For dumping very large tables, the --single-transaction option can be combined with the --quick option. This tells mysqldump to retrieve rows from the server one at a time, rather than buffering the entire result set in memory before writing it to the dump file.
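Combining the two options might look like this (connection options omitted; db_name and the output file name are placeholders):

shell> mysqldump --single-transaction --quick db_name > large_db_backup.sql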
