Monitoring Your MySQL Database Using mysqladmin


As with any other long-running process, a MySQL database needs sustained management and administration, and monitoring is an important part of that. Database monitoring lets you uncover misconfigurations, under- or over-allocation of resources, inefficient schema design, security issues, and poor performance (slowness, aborted connections, hanging processes, partial result sets and so on). It gives you the opportunity to fine-tune the platform server, the database server, its configuration and other performance factors. This article provides insight into monitoring a MySQL database by tracking and analyzing important database metrics, variables and logs using mysqladmin.

Is your MySQL server running?
Obviously, availability or uptime is the primary concern of any production deployment. A running server, however, is no guarantee that it will accept connections or that it is not running out of space. Still, knowing the server status is worthwhile. There are several ways to detect a running MySQL server:
1. Ping the server
mysqladmin -u root -p ping
Returns 0 if the server is running and 1 if not. Even when access is denied, 0 is returned, since the server is still running. A remote server can be pinged by specifying the host name (IP, or domain if DNS resolution is available).
Eg: mysqladmin -h -u root -p ping
A running server will return:
mysqld is alive
2. Get the database server status
Eg: mysqladmin -h -u root -p status
service mysqld status
3. Get server version
mysqladmin -u root -p version
The results will include version, uptime, threads, slow queries etc.
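The ping check above lends itself to scripting. Below is a minimal availability-check sketch in shell; it assumes mysqladmin is installed and credentials are supplied some other way (for example in ~/.my.cnf), and the PING_CMD variable is a hypothetical parameter added here so the exit-code handling can be exercised without a live server.

```shell
#!/bin/sh
# PING_CMD defaults to the article's ping invocation; override it for testing.
PING_CMD="${PING_CMD:-mysqladmin -u root ping}"

# Map the ping command's exit status (0 = server answered) to a message.
check_alive() {
    if $PING_CMD >/dev/null 2>&1; then
        echo "mysqld is alive"
    else
        echo "mysqld is not responding"
        return 1
    fi
}
```

Running check_alive from cron and alerting on a non-zero return is one simple way to track uptime.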

Get the MySQL Server Metrics and Optimize Database
The important metrics (server status variables) that reflect the health and status of a MySQL server are:

Aborted_clients: the number of connections aborted because the client did not close the connection properly. Possible reasons: the client did not close the connection with mysql_close() before exiting, the client slept for longer (in seconds) than the wait_timeout or interactive_timeout setting without notifying the server, or the client program ended abruptly in the middle of a data transfer. After aborting a connection, the server logs the details in the error log:
[Note] Aborted connection 854 to db: 'customers' user: 'john'
Aborted_connects: the number of failed connection attempts. Possible reasons: a client without the required privileges tried to access a database, a client used a wrong password, the connection packet was missing required information, or it took more than connect_timeout seconds to receive a packet from the client. Check the error log to find the affected user and grant the needed privileges. It can also be caused by an attack such as spamming, DoS or brute force.
A common reason for the above scenarios is that the max_allowed_packet variable, which holds the maximum packet size in bytes, is too small, or queries needed more memory than is allocated to mysqld. You can increase this limit through either of the following methods:

shell> mysqld --max_allowed_packet=32M
shell> mysql --max_allowed_packet=32M

Other reasons could be related to network protocols, faulty hardware, OS thread level issues etc.
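To keep an eye on these counters, the output of mysqladmin extended-status can be filtered with standard tools. The sketch below parses the two aborted-connection counters from that table format; the sample input is hard-coded here, and in practice you would pipe the real command output (an assumption, not tested against a live server) into the function.

```shell
#!/bin/sh
# Extract Aborted_clients / Aborted_connects from "| name | value |" rows.
aborted_counters() {
    awk -F'|' '/Aborted_(clients|connects)/ {
        gsub(/ /, "", $2); gsub(/ /, "", $3)
        print $2 "=" $3
    }'
}

# Hard-coded sample standing in for: mysqladmin -u root -p extended-status
sample='| Aborted_clients | 12 |
| Aborted_connects | 3 |'
printf '%s\n' "$sample" | aborted_counters
```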

Connections: number of connection attempts (both successful and failed)
Uptime: number of seconds the server has been up. A small value may point to intermittent shutdowns or server crashes.
Threads: number of active threads (client connections). This should reflect the average number of concurrent connections; otherwise, examine Aborted_clients and Aborted_connects for high values.
Threads_connected: number of threads currently connected to clients. Both zero and a very high value can indicate problems. A very high value can be due to a small thread_cache_size or too many concurrent client connections, and should be investigated as a possible database attack, especially DoS.
Threads_created: number of threads created to handle connections. An optimal setup ensures threads are reused through the thread cache; a very high value here means thread_cache_size is too small and should be increased.
Questions: number of queries received from clients since the last server start.
Slow_queries: number of queries that have taken more than long_query_time seconds. Identify the slow queries and optimize them to improve performance. Check whether the database has proper indexes and whether the queries use them.
Open_tables: number of tables that are currently open. A low value here combined with a high table_cache setting means memory is free and table_cache can be reduced; the reverse means it should be increased.
Bytes_received: bytes of data received from clients.
Bytes_sent: bytes sent to clients.
Max_connections: maximum permitted number of simultaneous client connections.
Max_used_connections: the maximum number of concurrent connections that has occurred at any time. If max_used_connections/max_connections is high, say above 75%, the server risks running out of connection slots and denying further connections.
Max_execution_time_exceeded: the number of SELECT statements that exceeded their execution timeout. If this value is high, check the slow query log and error log to find and optimize the slow queries.

Check the traffic counters above (Bytes_received, Bytes_sent) for abnormal spikes in traffic.
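The 75% rule of thumb for Max_used_connections against max_connections can be checked with a few lines of shell. In this sketch the two values are passed as arguments (in practice they would come from SHOW GLOBAL STATUS and SHOW VARIABLES); the function name and threshold are illustrative assumptions, not MySQL built-ins.

```shell
#!/bin/sh
# Warn when used connection slots exceed 75% of max_connections.
conn_usage() {
    used="$1" max="$2"
    pct=$((100 * used / max))          # integer percentage
    if [ "$pct" -gt 75 ]; then
        echo "WARNING: ${pct}% of connection slots used"
    else
        echo "OK: ${pct}% of connection slots used"
    fi
}

conn_usage 120 151   # e.g. Max_used_connections=120, max_connections=151
```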

The server status variables can be displayed by issuing the SHOW STATUS command:
| Variable_name | Value |
| Aborted_clients | 0 |
| Aborted_connects | 0 |
| Bytes_received | 155372598 |
| Bytes_sent | 1176560426 |
| Connections | 30023 |
| Created_tmp_disk_tables | 0 |
| Created_tmp_tables | 8340 |
| Created_tmp_files | 60 |
| Open_tables | 1 |
| Open_files | 2 |
| Open_streams | 0 |
| Opened_tables | 44600 |
| Questions | 2026873 |
| Table_locks_immediate | 1920382 |
| Table_locks_waited | 0 |
| Threads_cached | 0 |
| Threads_created | 30022 |
| Threads_connected | 1 |
| Threads_running | 1 |
| Uptime | 80380 |
shell> mysqladmin -u root -p extended-status processlist
| Id | User | Host | db | Command | Time | State | Info |
| 51 | monty | localhost | | Query | 0 | | show processlist |
Uptime: 1473624 Threads: 1 Questions: 39487
Slow queries: 0 Opens: 541 Flush tables: 1
Open tables: 19 Queries per second avg: 0.0268

To see the values of specific variables, use the LIKE clause, with or without patterns:
$ mysql -u root -p
mysql> SHOW GLOBAL STATUS LIKE 'aborted_connects';
mysql> SHOW STATUS LIKE 'Key%';
| Variable_name | Value |
| Key_blocks_used | 14955 |
| Key_read_requests | 96854827 |
| Key_reads | 162040 |
| Key_write_requests | 7589728 |
| Key_writes | 3813196 |
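The Key_% counters above can be turned into a key cache miss ratio, Key_reads divided by Key_read_requests; a ratio well below 1% is commonly taken to mean the key buffer is adequately sized, though that threshold is a rule of thumb rather than an official limit. The sketch below takes the two counters as arguments rather than querying a live server.

```shell
#!/bin/sh
# Key cache miss ratio = Key_reads / Key_read_requests (awk for float math).
key_miss_ratio() {
    awk -v reads="$1" -v requests="$2" \
        'BEGIN { printf "%.4f\n", reads / requests }'
}

key_miss_ratio 162040 96854827   # counters from the sample output above
```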

Check Logs Frequently
The slow query log and the error log need to be checked regularly to detect issues. An effective method is to monitor and investigate both logs together with the server status variables, to identify issues and room for further optimization.

InnoDB Table Monitoring
The SHOW ENGINE INNODB STATUS command can be used to detect aborted or rolled-back transactions, which indicate deadlocks; both the InnoDB configuration and the client application's deadlock handling should then be improved. Below are some InnoDB server status variables that need to be monitored:

Innodb_row_lock_waits: number of times operations on InnoDB tables had to wait for a row lock.
Innodb_buffer_pool_wait_free: number of times InnoDB has waited for memory pages to be flushed. This happens when there are no free pages available for writing, so InnoDB flushes some pages first and waits for that operation to finish. A high value indicates that innodb_buffer_pool_size is too small and needs to be adjusted to the workload.
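As a sketch of how Innodb_buffer_pool_wait_free might be watched, the function below flags any free-page waits relative to Innodb_buffer_pool_write_requests. The counter names are real status variables, but the check itself (threshold, message, function name) is an illustrative assumption.

```shell
#!/bin/sh
# Flag buffer pool pressure: any waits for free pages are worth a look.
bp_wait_check() {
    waits="$1" writes="$2"
    if [ "$waits" -eq 0 ]; then
        echo "OK: no free-page waits"
    else
        awk -v w="$waits" -v r="$writes" 'BEGIN {
            printf "WARNING: %d waits (%.2f%% of writes) - consider raising innodb_buffer_pool_size\n", w, 100 * w / r
        }'
    fi
}

bp_wait_check 0 1000000
```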

The mysqladmin tool, along with the other MySQL administrative tools, is sufficient for properly monitoring a MySQL database. Regular monitoring, investigating issues and performance degradation, and taking corrective measures daily will keep your database robust and well optimized as it evolves.


Securing MySQL NDB Cluster

As discussed in a previous article about MySQL NDB Cluster, it is a database cluster setup based on the shared-nothing clustering and database sharding architecture, built on the NDB (NDBCLUSTER) storage engine for MySQL. A second article, about MySQL NDB Cluster installation, outlined installing the cluster by setting up its nodes: the SQL node, data nodes and the management node. This article discusses how to secure your MySQL NDB Cluster from the below perspectives:
1. Network security
2. Privilege system
3. Standard security procedures for the NDB cluster.
Detailed information is available at MySQL Development Documentation about NDB Cluster Security.
NDB Cluster Network Security
Cluster communication can be divided into two categories: client to cluster, and cluster node to node. Node-to-node communication (between SQL, data and management nodes) is not recommended to run over SSL or any other encryption protocol, because the extra overhead from network lag and latency can affect cluster performance. A network-related security concern that needs to be addressed for the NDB Cluster is described below.
In the config.ini file of an NDB Cluster, if the HostName parameter under a [mysqld] section is blank or absent, or if there are empty slots under the [mysqld] section, the management node will not check which host an SQL/API node or MySQL Server is attempting to connect from. In such a case any SQL/API node can connect to the management node and access its data. Additionally, any management client (ndb_mgm) that knows the management node's host address can connect to it and execute administrative commands to control the cluster. To prevent this, specify a HostName parameter for all [mysqld] and [api] sections in the config.ini file.
The above issue can be addressed in several ways. One is to isolate the cluster node network from the external public network from which connections to the cluster are initiated, by placing a hardware or network firewall between the public network and the SQL nodes.
Though costly, this setup is dependable and robust. It does not allow any outside access to the data and management nodes, or any communication from those nodes to the public network. Obviously, the SQL nodes themselves still need to be secured against hacking.
Another option is to use software firewalls on all host nodes and configure them to block data packets that need not come or go outside the cluster.
The firewalls on the data and management node hosts allow connections from SQL nodes only, so outside applications or clients can communicate with the cluster only through the SQL nodes. Each host machine allows and denies traffic as follows:
SQL node: allows traffic from the management and data nodes within the cluster network (by allowed IP address), allows traffic from within the network on the assigned port, and denies all other traffic.
Management and data nodes: allow traffic from the other management and data nodes, allow traffic from SQL nodes by allowed IP address, and deny all other traffic.
A combination of hardware and software firewalls (network- and host-based) can also be used to secure the cluster.
This setup keeps the cluster behind the network firewall, but allows traffic between the management/data nodes and SQL nodes.
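For the software firewall option, the host-based rules can be generated per node. The helper below only prints iptables commands for review rather than executing them; the addresses are examples and the rule set is a simplified sketch of the allow-SQL-nodes/deny-everything-else policy described above, not a complete production firewall.

```shell
#!/bin/sh
# Print iptables rules for a data/management node host: accept traffic
# from the listed SQL node addresses, drop everything else.
ndb_node_rules() {
    for ip in "$@"; do
        echo "iptables -A INPUT -p tcp -s $ip -j ACCEPT"
    done
    echo "iptables -A INPUT -j DROP"
}

ndb_node_rules 192.168.0.20 192.168.0.21   # example SQL node addresses
```

Piping the output into a review step, and only then into a root shell, keeps the rules auditable.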
MySQL Privileges and NDB Cluster Security
As with any other MySQL storage engine, user privilege information for NDB tables is stored in the mysql system database. Since the MySQL grant tables use the MyISAM storage engine, they are not duplicated or shared across the SQL nodes in the NDB cluster, so changes to users and privileges made on one SQL node are not propagated to other SQL nodes by default. To enable this propagation, known as distributed privileges, across all SQL nodes, the following steps need to be implemented:
On each SQL node's MySQL server, import the script share/ndb_dist_priv.sql found inside the MySQL installation directory.
shell> mysql options -uroot < share/ndb_dist_priv.sql
This script will create a number of stored routines on the SQL node. To verify this, query the INFORMATION_SCHEMA as below and check for output like the following:
mysql> SELECT ROUTINE_NAME, ROUTINE_SCHEMA, ROUTINE_TYPE
    -> FROM INFORMATION_SCHEMA.ROUTINES
    -> WHERE ROUTINE_NAME LIKE 'mysql_cluster%'
    -> ORDER BY ROUTINE_TYPE;
| mysql_cluster_privileges_are_distributed | mysql | FUNCTION |
| mysql_cluster_backup_privileges | mysql | PROCEDURE |
| mysql_cluster_move_grant_tables | mysql | PROCEDURE |
| mysql_cluster_move_privileges | mysql | PROCEDURE |
| mysql_cluster_restore_local_privileges | mysql | PROCEDURE |
| mysql_cluster_restore_privileges | mysql | PROCEDURE |
| mysql_cluster_restore_privileges_from_local | mysql | PROCEDURE |
7 rows in set (0.01 sec)
The stored procedure mysql_cluster_move_privileges creates two copies of the mysql database's grant tables, one using the MyISAM and the other the NDBCLUSTER storage engine. It then invokes mysql_cluster_move_grant_tables, which contains the ALTER TABLE statements that convert the mysql system tables to the NDB storage engine.
Once this conversion to distributed privileges is made at each SQL nodes, any change in users or privileges at any one SQL node will be distributed to all SQL nodes in the cluster.
As stated above under Cluster Network Security, it is important to secure the users on every SQL node, so that a compromised node cannot be used to access and corrupt data on other SQL nodes. So ensure that config.ini does not contain an empty [mysqld] block, and use the HostName parameter. A firewall-based security system is also necessary to segregate the cluster from the public network.
Standard Security Procedures for NDB Cluster
The security procedures applicable to a standard MySQL Server are also valid for a MySQL NDB Cluster.
1. Always run the MySQL server (mysqld) as the mysql system user, which is part of the mysql user group.
2. Ensure that the system’s MySQL data directory is owned by the mysql user.
3. Ensure that the my.cnf of each SQL node contains user=mysql in the [mysqld] section.
4. Never run mysqld as the system root user.
5. Delete the anonymous user account created during installation.
6. Always keep in mind that the nodes inside the cluster trust each other implicitly, so it is highly recommended to secure the cluster through tight user privilege management and network security.


Why Your Business Needs Database Versioning

How does database versioning boost your business growth?
Applications are neither ideal nor perfect. With dynamic deployment environments and rapid technology changes, some features become obsolete and new ones are needed; workflows and processes in the application have to change; bugs and vulnerabilities can be reported at any time; and competitors may release a feature that demands a matching or better response. The chances are high that you will need to initiate, build and deliver a stable minor or major version, and delay or failure here can hurt your reputation and financial metrics such as market performance and profit. In these circumstances it is essential to give your teams tools that automate and simplify processes as much as possible. Applications and services are built from code, and databases are an integral part of any product. A properly version-controlled database boosts and simplifies development and production deployment processes. It also helps in critical analysis, comparison and cross-review of your application sets; the results give you new insights and directions for the next release, and you can focus on business growth, since everything is automated.
Advantages of general version control
Version Control (VC) is important to organizations due to the following factors:
1. It allows tracking changes and modifications.
2. It allows comparison of any two versions, and rolling back to a previous version if something goes wrong in the current one.
3. Multiple teams can share a single repository with their own branches (that represent a different functionality or feature) and can pull from the main branch or root and can push into the main branch to merge their work with that of other teams’ branches.
4. Interim releases or minor versions and major version releases can be effectively performed.
5. It provides an automated and robust workflow for fixing errors or bugs in the form of patches (a new branch that fixes the bug) and hotfixes that address customers' revised needs.
6. It enables organizations to adopt agile development and delivery, in which new features can be added efficiently as market needs dictate.
Why businesses need database versioning: the technology and process perspective
While the above advantages are clearly necessary for an application codebase, version control becomes even more significant for databases, where applications work with real data and data structures or schemas. This can be attributed to several reasons, including:
1. Changes in technology, the business environment and customer requirements push businesses to alter and adapt their databases to best suit current needs. If this is ignored, businesses suffer setbacks such as loss of market share, decreased product popularity and reduced relevance of the product due to missing features.
2. Even if the application codebase uses version control to implement agile delivery, keeping the product in sync with market needs through minor and major releases and patches, developers at some point get stuck due to the complexity of database modifications that occurred in tandem with feature or code changes. At certain levels of branch merges, a mismatch in database-related code or in the database itself can break the entire merging process.
3. Database changes (structure, metadata and data) made by one team or branch need to be fully managed and tracked to allow error-free merging for minor and major releases. If database versioning is not implemented alongside code versioning, some branches will not merge properly, resulting in bugs and errors.
4. You may need to roll back to a previous version of the code at some point. This rollback requires the database as it was at that time, so that the rolled-back version is bug-free. There are also occasions when you need to audit your database versions to identify the best-performing one. Database versions stored alongside the corresponding code versions enable auditing and continuous delivery patterns.
What should be brought under database versioning?
The database schema, metadata and the data itself are the main elements that need to be version controlled. While it is not logical or recommended to version the database server's source code or binaries, the elements above are perfect candidates for version control in an agile, continuous integration and delivery scenario. All three can be represented as SQL files, so it becomes as simple as file versioning, with the SQL files and any needed log files put under version control. Another prominent element that needs versioning is backups: compressed SQL files representing physical or logical backups. They can also be incremental backups, where the binary log implements a kind of version control without much automation.
Implementing database versioning
A best practice for making your database fit for version control is to use scripts, preferably SQL or otherwise any programming language of your choice, to create and modify your database schema. An initial script creates the first version, and all further changes to schema, metadata and data are made through scripts only. Multiple teams working on different branches of the application can maintain their own database instances, and the versions of the scripts representing the various branch points (pulls and pushes) are stored along with the corresponding code. Some automation then runs the current version of the database scripts against the team's database instance to bring the database to the state of the code at that level. This ensures that during a merge the database is updated automatically to reflect the current state of the application.
Incremental versioning of the database scripts is a simple and straightforward way to track changes, using incremental versions of SQL scripts and logs. During a merge, all increments are applied to the database to bring it to the current state. The overhead here is the manual action needed from developers or DBAs to run the incremental scripts during merges.
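The incremental-script approach can be sketched as a small runner that applies version-numbered SQL files in order and records what has already been applied, making re-runs harmless. Everything here is an assumption for illustration: the V<number>__name.sql naming scheme, the state file, and APPLY_CMD, which in real use might be something like mysql -u user -p dbname.

```shell
#!/bin/sh
# Apply V*.sql scripts in lexicographic order (zero-pad versions: V001, V002...).
# Already-applied script names are tracked in a state file, one per line.
apply_migrations() {
    dir="$1" state="$2"
    : >> "$state"                                # ensure state file exists
    for script in "$dir"/V*.sql; do
        [ -e "$script" ] || continue             # no scripts at all
        name=$(basename "$script")
        grep -qx "$name" "$state" && continue    # skip applied ones
        ${APPLY_CMD:-cat} "$script" > /dev/null || return 1
        echo "$name" >> "$state"
        echo "applied $name"
    done
}
```

A second invocation against the same state file applies nothing, which is what makes the runner safe to call on every merge.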
Who is responsible – DBAs or Developers?
Once an application is deployed, the DBA takes control of database administration. While developers continue to work on the application repository, modifying code and database and finally merging for a release, the DBA receives the new code with installation and upgrade scripts for the database. In this model of continuous delivery, database version control falls within the developers' scope, not the DBA's. The exception is that the DBA can monitor and audit database versions by taking part in database versioning. For this purpose, it is recommended to give DBAs their own test and QA database version repository, where they can comfortably perform activities such as benchmarking, analysis, optimization and auditing, and find the best possible schema, structure and configuration. These findings can be conveyed to the developers as SQL scripts, metadata or plain text, and developers can use them as guidelines for the next release. In this way, DBAs become active contributors to the growth and development of the entire application, including the database.


How to Secure your MariaDB Database Server

At the time of writing, the current MariaDB 10.2 offers the MariaDB TX subscription, which provides access to MariaDB MaxScale. The MariaDB TX subscription offers high-end functionality such as business-critical production, test and QA deployments; management, monitoring and recovery tools; mission-critical patching; and, above all, access to the MariaDB database proxy, MaxScale.
Following are the security related features of MariaDB MaxScale:
1. Database firewall filter.
2. Denial of Service protection.
3. Data Masking to protect sensitive data.
This article discusses steps to secure your MariaDB Server installation from the installation phase onwards.
Install MariaDB with mysql_secure_installation
mysql_secure_installation is an executable shell script shipped with MariaDB that applies added security measures during installation: setting a password for the root accounts, removing root accounts that are accessible from outside localhost, removing anonymous user accounts, and removing the test database (along with its privileges) that can be accessed by anonymous users.
shell> mysql_secure_installation
Below are the prompts asked by the script:
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Change the root password? [Y/n] n
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] y
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
Avoid running mysqld as Linux root user
If the user running mysqld has Linux root access and the MariaDB FILE privilege, they can modify any file on the system. To prevent this, always create or use the included mysql user to run the mysqld process, starting mysqld with mysql as the user:
shell> mysqld_safe --user=mysql
Below is a list of user management guidelines for a secure MariaDB implementation:
1. Grant root access only to local clients (for example, an Apache server installed on the same machine), or restrict it to a specific host: 'root'@'' specifies access for the root user from the given host only.
2. Use strong passwords for users.
3. Have separate users for each database.
4. Restrict the hosts or IPs that can access the database server.
5. Grant only required privileges to client users. Administrative privileges like GRANT etc. are not needed for client users.
Password Check and Validation
The Password Validation Plugin API in MariaDB ensures that user passwords meet at least minimum security requirements. Two plugins use this API: simple_password_check and cracklib_password_check. Since they are not enabled by default, install them manually. At least one plugin must be enabled to implement password validation checks; if multiple plugins are loaded, all validation checks must pass when setting or changing passwords.
To install simple_password_check plugin:
INSTALL SONAME 'simple_password_check';
-- or, equivalently:
INSTALL PLUGIN simple_password_check SONAME 'simple_password_check';
The default values for this plugin's options enforce the below password validation rules. A password must have:
1. At least 8 characters in length.
2. Requires at least one digit.
3. Requires at least one uppercase and one lowercase letter.
4. Requires at least one symbol or special character.
The validation requirements can be changed through configuration options, set either with SQL SET GLOBAL statements or in my.cnf/my.ini. The options are:
--simple-password-check-digits=# minimum number of digits needed in password (range: 0 to 1000, default = 1)
--simple-password-check-letters-same-case=# needs this many upper and lower case letters (range: 0 to 1000, default = 1)
--simple-password-check-minimal-length=# minimum password length (range: 0 to 1000, default = 8)
--simple-password-check-other-characters=# needs this many special characters or symbols (range: 0 to 1000, default = 1)
SET GLOBAL simple_password_check_minimal_length = 10;
simple-password-check-minimal-length = 10
If a password is set directly as a hash, these validations are bypassed. To avoid this, enable the strict_password_validation variable (along with a validation plugin), which rejects passwords that cannot be validated (i.e., hashes).
SET GLOBAL strict_password_validation = 1;
strict-password-validation = 1
The SET DEFAULT ROLE statement uses MariaDB's user roles feature to assign a default set of packaged privileges, as a role, to new users.
To remove the default role from the current user:
SET DEFAULT ROLE NONE;
Use Secure Connections to transfer data
Transport Layer Security (TLS) can be used to encrypt data in transit. In MariaDB, TLS support is compiled in but disabled by default. To enable it, start the server with the --ssl option, and configure the server for encrypted connections by adding the Certificate Authority (CA) certificate, the server public key certificate and the server private key in my.cnf.
Use the file names as applicable.
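Using MariaDB's standard TLS option names, a server-side my.cnf fragment could look like the following; the paths and file names are placeholders for your own CA certificate, server certificate and server key.

```ini
[mysqld]
ssl-ca   = /etc/mysql/certs/ca.pem
ssl-cert = /etc/mysql/certs/server-cert.pem
ssl-key  = /etc/mysql/certs/server-key.pem
```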
For client side configuration, use CA certificate, Client Public Key certificate and Client Private Key. Invoke the client like:
mysql --ssl-ca=ca.pem
Encrypt Tables and Data
Table encryption ensures that even if someone breaks into the database or accesses the data files, the data is protected and immune to tampering. In MariaDB, the XtraDB, InnoDB and Aria storage engines support encryption, with Aria requiring tables to be created with the default option ROW_FORMAT=PAGE. Whole tablespaces, individual tables and log files can all be encrypted. Only data at rest in the database is encrypted; data sent to the client is not. For that, secure connections using TLS need to be used.
The limitations are:
1. Disk based Galera gcache is not encrypted.
2. The audit plugin cannot create encrypted output; instead, send the output to syslog and configure protection there.
3. File based general query and slow query logs cannot be encrypted.
4. Aria log is not encrypted.
5. The error log is not encrypted, even though it can contain query text, data, and crash or assertion failure details. It too can be protected through syslog.
For table encryption, use an encryption plugin such as the file_key_management plugin and configure your storage engines to use it. The plugin has the following components to be configured:
1. file_key_management_filename: the file that stores the encryption keys; the plugin reads the keys from this file.
2. file_key_management_filekey: an optional key to decrypt the key file. It can also be a file name containing the key, preceded by the string FILE (e.g. FILE:/path/to/filekey).
3. file_key_management_encryption_algorithm: the encryption algorithm.
An example configuration in my.cnf or my.ini:
file_key_management_filename = /home/mdb/keys.enc
file_key_management_filekey = secret
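The key file itself holds lines of the form <key id>;<hex-encoded key>. Assuming openssl is available, a 256-bit key with id 1 can be generated like this (the file name and key id are examples):

```shell
#!/bin/sh
# Write one AES key in the "<id>;<hex key>" format file_key_management reads.
keyfile="${1:-keys.txt}"
printf '1;%s\n' "$(openssl rand -hex 32)" > "$keyfile"
chmod 600 "$keyfile"       # the key file must be protected
cat "$keyfile"
```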
To encrypt XtraDB and InnoDB tables, use the below configuration (values are examples) in my.cnf or my.ini:
file-key-management-filename = /mount/usb1/keys.txt
Then use CREATE TABLE or ALTER TABLE queries with the below options:
ENCRYPTION_KEY_ID : Positive Integer
CREATE TABLE T (id int, value varchar(255)) ENCRYPTED=YES ENCRYPTION_KEY_ID=17;
To disable encryption and decrypt the table:
ALTER TABLE T encrypted=NO;
Aria tables can be encrypted with the below configuration in my.cnf/my.ini:
aria-encrypt-tables = 1
To encrypt temporary tables created on disk during query execution, set the below parameter in my.cnf/my.ini:
encrypt-tmp-disk-tables = 1
To encrypt temporary files created on disk during binary log transaction caching, filesorts etc.:
encrypt-tmp-files = 1
To encrypt binary logs and relay logs:
encrypt-binlog = 1
Setup Auditing and Logging
The MariaDB audit plugin logs a lot of security- and performance-related information, such as incoming connections, executed queries, accesses to individual tables, and which users read or wrote what and when in the database. The log is written to a file or to syslog. This information helps detect suspicious activities and investigate security breaches.
Enable logging by setting server_audit_logging to ON.
SET GLOBAL server_audit_logging=on
The events that can be logged are grouped as CONNECT, QUERY and TABLE events. To log these events, set the server_audit_events variable:
SET GLOBAL server_audit_events = 'CONNECT,QUERY,TABLE';
User activities are also logged, since every activity is a connect, query or table event. You can include or exclude users whose events are to be logged using the server_audit_incl_users and server_audit_excl_users variables.
To add a user without removing previous usernames:
SET GLOBAL server_audit_incl_users = CONCAT(@@global.server_audit_incl_users, ',maria');
To specifically include or exclude users, set the variables directly (user names below are examples):
SET GLOBAL server_audit_incl_users = 'john';
SET GLOBAL server_audit_excl_users = 'monitor';


How to install MySQL NDB Cluster on Linux

As discussed in the previous article giving an overview of MySQL NDB Cluster, it is built on the NDBCLUSTER storage engine for MySQL and provides high availability and scalability. This article explains the installation of MySQL NDB Cluster in a Linux environment. Detailed documentation is available on the official MySQL developer documentation pages.
To avoid a single point of failure, the example cluster is installed on separate hosts, one for each of the four nodes needed to run the cluster, so this is a four-node, four-host cluster. Virtual hosts can also be used instead of separate physical hosts. Each node/host is assigned its own IP address on the Ethernet network as follows:
Management node (mgmd) -
SQL node (mysqld) –
Data node “A” (ndbd) –
Data node “B” (ndbd) –
NDB Cluster setup on multi-computer Ethernet network
Installing the NDB Cluster Binary Release
Install the NDB Cluster binary from the MySQL NDB Cluster download page. At the time of writing, the NDB Cluster 7.5 binary archive is available as mysql-cluster-gpl-7.5.8-linux-i686-glibc23.tar.gz. Download it to /var/tmp.
SQL Node setup
A user named mysql in a group named mysql is needed on the system. Log in as the root user and verify that they are present; if not, create them manually.
Unpack the archive to /usr/local and create a symbolic link named mysql to the mysql directory found inside the package.
shell> cd /var/tmp
shell> tar -C /usr/local -xzvf mysql-cluster-gpl-7.5.8-linux-i686-glibc23.tar.gz
shell> ln -s /usr/local/mysql-cluster-gpl-7.5.8-linux-i686-glibc23 /usr/local/mysql
Now the symbolic link mysql points to the NDB Cluster directory mysql-cluster-gpl-7.5.8-linux-i686-glibc23 inside /usr/local.
After changing to the mysql directory, initialize the MySQL installation (mysqld) by creating the data directory and populating the mysql system table.
With the --initialize option, a random password is generated for the MySQL root user. If no password should be generated at this stage, use the --initialize-insecure option instead and set a password later. The database base directory and data directory can also be specified during initialization:
shell> mysqld --initialize --user=mysql
Set ownership for the root and mysql users over the database directories and assign the base directory to the group mysql:
shell> chown -R root .
shell> chown -R mysql data
shell> chgrp -R mysql .
In order for mysqld to start during system startup, add the startup script to the /etc/rc.d/init.d scripts, make it executable and add it as a startup service.
shell> cp support-files/mysql.server /etc/rc.d/init.d/
shell> chmod +x /etc/rc.d/init.d/mysql.server
shell> chkconfig --add mysql.server
Thus mysql.server starts the SQL node. Repeat the above steps for each host machine that runs an SQL node.
Data node setup
Data nodes do not need the mysqld binary. To set up a data node, either the single-threaded NDB Cluster data node executable ndbd or the multi-threaded executable ndbmtd is required. Both are included in the NDB Cluster binary downloaded above. Download and unpack the archive on the data node host; the data node binaries are inside its bin directory. From the download location (assumed to be /var/tmp), copy the data node binaries to /usr/local/bin, the standard location for binaries. Perform the operations below as the root user.
shell> cd /var/tmp
shell> tar -zxvf mysql-5.7.19-ndb-7.5.8-linux-i686-glibc23.tar.gz
shell> cd mysql-5.7.19-ndb-7.5.8-linux-i686-glibc23
shell> cp bin/ndbd /usr/local/bin/ndbd
shell> cp bin/ndbmtd /usr/local/bin/ndbmtd
Delete the directory that contains the unpacked NDB cluster archive.
Now make both the data node binaries (ndbd and ndbmtd) executable:
shell> cd /usr/local/bin
shell> chmod +x ndb*
Repeat the above steps for each data node host. Note that the MySQL data directory is /usr/local/mysql/data.
Management node setup
The management node does not need the mysqld binary. It needs only the NDB Cluster management server, ndb_mgmd. The management client, ndb_mgm, is highly desirable for working with the management node. Both binaries are located in the bin folder inside the NDB Cluster binary downloaded (to /var/tmp) as mentioned above. As the root user, unpack the cluster binary, copy the management node and client binaries to /usr/local/bin and make them executable.
shell> cd /var/tmp
shell> tar -zxvf mysql-5.7.19-ndb-7.5.8-linux2.6-i686.tar.gz
shell> cd mysql-5.7.19-ndb-7.5.8-linux2.6-i686
shell> cp bin/ndb_mgm* /usr/local/bin
Delete the directory that contains the unpacked NDB cluster archive.
Now make both the management node and client binaries executable:
shell> cd /usr/local/bin
shell> chmod +x ndb_mgm*
NDB Cluster Initial Configuration
After downloading and installing the SQL, Data and Management nodes, now it is time to configure the NDB Cluster.
Data and SQL Nodes Configuration
Each of the data and SQL nodes needs the configuration file my.cnf to specify the connection string for connecting to the management node, and another parameter that enables the NDBCLUSTER storage engine on that MySQL Server. Create the my.cnf file if it does not already exist, or open it with the vi editor, and add the configuration parameters.
shell> vi /etc/my.cnf
Below is the part of the my.cnf file that holds the NDB Cluster related parameters:
[mysqld]
# Options for mysqld process:
ndbcluster # run NDB storage engine
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring= # location of management server
Note: the IP address of management node above is as per this example setup only and needs to be replaced by the actual IP address in your case.
In our example, these my.cnf settings are to be added to Data Node A, Data Node B and SQL Node hosts.
Management Node Configuration
The Management Node needs a config.ini file that specifies the following information:
1. How many replicas (copies of the data) are to be maintained in the cluster.
2. How much memory is to be allocated for data and indexes on each data node.
3. IP address or Domain Names (with DNS Resolution) of Data and SQL Nodes in the cluster.
4. Data directories of each Data Node to save database data.
Create the configuration directory for the Management Node and create config.ini file inside it, running as root user.
shell> mkdir /var/lib/mysql-cluster
shell> cd /var/lib/mysql-cluster
shell> vi config.ini
For our example setup, the config.ini of the Management Node should read as shown below:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2 # Number of replicas
DataMemory=80M # How much memory to allocate for data storage
IndexMemory=18M # How much memory to allocate for index storage
                  # For DataMemory and IndexMemory, we have used the
                  # default values. Since the "world" database takes up
                  # only about 500KB, this should be more than enough for
                  # this example NDB Cluster setup.
ServerPort=2202 # This is the default value; however, you can use any
                  # port that is free for all the hosts in the cluster
                  # Note1: It is recommended that you do not specify the port
                  # number at all and simply allow the default value to be used
                  # instead
                  # Note2: The port was formerly specified using the PortNumber
                  # TCP parameter; this parameter is no longer available in NDB
                  # Cluster 7.5.
[ndb_mgmd]
# Management process options:
HostName= # Hostname or IP address of MGM node
DataDir=/var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
# Options for data node "A":
                                # (one [ndbd] section per data node)
HostName= # Hostname or IP address
NodeId=2 # Node ID for this data node
DataDir=/usr/local/mysql/data # Directory for this data node's data files
[ndbd]
# Options for data node "B":
HostName= # Hostname or IP address
NodeId=3 # Node ID for this data node
DataDir=/usr/local/mysql/data # Directory for this data node's data files
[mysqld]
# SQL node options:
HostName= # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
Note: the IP addresses are for this example setup only and need to be replaced by your actual IP addresses or Domain names.
The above are the minimal settings for the my.cnf and config.ini files.
Initial Startup of the NDB Cluster
The nodes must be started in the following order:
1. Management Nodes
2. Data Nodes
3. SQL Nodes.
Start the management node process ndb_mgmd on the management node host. During the initial startup, the node must be given the location of its configuration file, config.ini, through the -f or --config-file option:
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
Next start the data nodes, ndbd processes, at the data node hosts:
shell> ndbd
Now start the SQL node process, mysqld, on the SQL node host:
shell> cd mysql_installation_directory
shell> bin/mysqld_safe &
Or use the startup script installed earlier:
shell> mysql.server start
In the example, mysql_installation_directory is /usr/local/mysql
If the setup is correct and everything has gone well, the cluster will be operational now. Use the ndb_mgm client to test the cluster. A running cluster should produce the below output for this example:
shell> ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
[ndbd(NDB)] 2 node(s)
id=2 @ (Version: 5.7.19-ndb-7.5.8, Nodegroup: 0, *)
id=3 @ (Version: 5.7.19-ndb-7.5.8, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @ (Version: 5.7.19-ndb-7.5.8)
[mysqld(API)] 1 node(s)
id=4 @ (Version: 5.7.19-ndb-7.5.8)
Here the [mysqld(API)] entry refers to the SQL node process mysqld, which acts as the NDB Cluster API node accepting data access requests from clients.
Now you can create tables using the NDBCLUSTER storage engine:
CREATE TABLE tbl_name (col_name column_definitions) ENGINE=NDBCLUSTER;
Alter existing tables, if any, to the NDBCLUSTER storage engine:
ALTER TABLE tbl_name ENGINE=NDBCLUSTER;
To shut down the cluster, issue the below command on the management node host:
shell> ndb_mgm -e shutdown
To restart the cluster, issue the below command on the management node host:
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini
On the data node hosts:
shell> ndbd
On the SQL node hosts, from the mysql installation directory:
shell> bin/mysqld_safe &
Or use the startup script:
shell> mysql.server start


Symptoms of a bad IT Architecture

IT Architecture
How significant is the IT Architecture for your business
Every organization, from SMBs to enterprises, relies upon its IT architecture for running the business: its workflows, planning, processes, policies, principles, operations, services, projects and value proposition. It is no longer merely a data storage and management system, as it was until the late 80s, but now encompasses the complete domain of business management. Right from the aims, plans, policies and principles, the IT architecture plays a significant and profound role in the organization's existence, operations and development.
Your IT architecture coordinates and interlinks the organization and its management functions by providing the platform, tools and interface for organizational foundation, functioning, administration, and development. It enforces the standards and regulations upon the processes and workflows to produce the desired outcome. It controls and regulates the functioning by providing tools and interface for monitoring, analyzing and decision making. It provides historical and statistical data with analytical tools for planning and implementation and at the same time providing resources for the expansion and development of the organization.
What is needed in the IT Architecture?
The below elements or components are crucial for the IT architecture:
1. Back office
2. Operations
3. Project execution
4. Public interface
5. Data management
6. Infrastructure
There are many architecture implementation models adopted by various organizations. Some select a particular architecture after detailed studies and research, while other architectures evolve with time and experience. In any case, since the IT architecture is the backbone and powering factor of the organizational realm, any discrepancies in its selection, configuration, implementation or management will produce warning signs and side effects that hamper the smooth functioning of the organization.
An IT architecture can be bad in many of the senses applicable to the domain: non-scalable, outdated, redundant, inconsistent, unreliable, etc., depending upon the demands of its environment. Whatever the technology, if your IT architecture is just not suitable for your needs, it will start producing warning signs for your business.
What are the symptoms of a bad IT Architecture?
Frequent Malfunctioning and Increased Downtime
Is your IT staff busy running around departments responding to case reports and troubleshooting? Are there frequent system crashes that leave your staff stuck with an abrupt shutdown? Or are they not getting the data and functionality they need for their duties? This is a clear indication of an outdated or misconfigured IT architecture.
Redundant Interfaces
If the number of interfaces is not proportionate to your data view and management requirements, you are forced to repeat the same action on the same data across different screens, wasting operational time as well as computing resources. A poorly designed CRM or finance application can create this issue, and it may have gone unnoticed until a certain point in time. It indicates either a lack of horizontal scalability or poor design patterns.
Data Redundancy
The data and entity-relationship design of your IT architecture decides how much data redundancy you experience, visible or invisible. Copying databases or tables across departmental databases creates an extreme amount of duplication and redundancy. If your HR staff cannot review the salary hike pattern of an employee through the synchronized but logically separated data views of the HR and Finance departments, it indicates that a data copy mechanism is being used to provide this data to HR. This is a clear case of data redundancy, with the associated overheads in data synchronization, storage, etc.
Data Inconsistency
Data inconsistency is related to data synchronization across organizational departments. Many cases are reported where organizational data is either corrupted or not vertically scalable due to missing data entities or units. For organizations that depend heavily upon traditional, in-house data management systems, this risk is high due to the lack of adequate human, hardware and software resources.
Lack of features and tools
Is your CRM not integrating with your email marketing software to send marketing and other types of emails to your customers? Are you unable to export data in bulk? Is your DBA struggling with a growing slow query log and an underperforming database? Are you lacking a security system to filter your incoming and outgoing emails? Is your disk storage congested with incremental backups? All of these indicate that your IT architecture is hungry for additional functionality and capabilities. Ignoring this scarcity will put a huge roadblock between your business and its growth.
Complex Workflow
Organizations now depend heavily on networked (web based or LAN) applications for managing their customer relations, marketing, finance and accounts, communication, commerce, stock management, data management, etc. When the integration of these applications is not done properly, or they are not compatible with each other, the result is complex workflows. This complexity is experienced in the form of multiple redundant data processing tasks, a lack of tools to process data, and rigid, tedious procedures for combining data from various departments. Employees start feeling the overhead, and unnecessary extra effort is spent on otherwise easy processes. The main culprit is an architecture that lacked proper planning and selection of compatible applications, or a failure to select a business software suite matching the organization's requirements. Failure to foresee business and data growth also contributes to this complexity.
Communication and Collaboration bottlenecks
Communication is not only IMs, telephony and voice/video conferences; in the current world it extends to collaborative sharing of resources like files, media and even a code base. If you are facing issues with concurrent file editing, version control, task management, multiplexed resource sharing, etc., these are signs that your IT architecture lacks the critical feature of collaborative sharing. Managers need multiple channels of inward and outward communication and broadcasting facilities to ensure processes and workflows are executed flawlessly.
High Costs
When you are forced to purchase costly hardware or software to add functionality to your IT architecture, it points to a weakness in scaling with cost-affordable commodity hardware. This is a prominent sign that your IT architecture needs a revamp. Additionally, the consultation and human resource expenses, from recruitment to salaries and perks, of managing a highly proprietary application suite further complicate the issue.
Limited Customizability
If you are forced to be satisfied with what your IT architecture offers, and your requirement for a flexible workflow is disregarded due to the complexity or non-feasibility of customizing its components, then it is a good time to initiate a re-think. Customization is one of the promising features any IT architecture should offer; it is your right rather than a privilege.
Customer Dissatisfaction
The million, if not billion, dollar concern: how is your IT architecture affecting your deliverability and customer satisfaction? Are customers forced to wait for manual approval of an order or transaction they placed online? Are your embedded business rules outdated, and are you unable to detect this, or, having detected it, unable to enhance them? Then customer dissatisfaction is obvious. Other must-have features to look for in your IT architecture include continuous delivery and agility, improved customer participation and transparency in processes, easy and minimal levels of workflows, and smart, proactive customer support: not only through conventional means but also live chat, document sharing, transfer of issues to the appropriate departments, trackable support tickets, etc.
Where are we now?
In today's enterprise scenario, not only SMBs with resource and budget constraints but even large corporations are forced to save every penny in the business model. It is not the actual budget that matters, but whether you get the ROI and QoS you are looking for from your investment. Your IT architecture today has a horizon full of options to support your business with tools and suites that change your business environment into a highly productive, collaborative, controlled and dynamic one.
Data management and computing are domains where technology has grown enormously. Enterprise service architecture based technologies are making life simpler and richer. If you need a scalable, fully managed database management system to store all your data, your biggest asset, there are DBaaS offerings and highly scalable cluster systems like MySQL Cluster. Managed microservices and cloud based SaaS technologies for communication, collaboration and sharing offer enormous availability and scalability. Virtualization and containerization have emerged as powerful alternatives to costly hardware changes. You need not house any hardware in-house, which adds to the cost of server maintenance and security; instead you can purchase highly scalable, rapidly provisioned computing and data nodes deployed through the cloud and placed in data centers across the world. There is no longer a one-time investment followed by periodic investments for upgrades; you pay only for what you need at the current moment and in the foreseeable future.
It is a matter of choice, and of the drive to embrace technology to nourish your business and productivity through wise decisions. Available to you is a broad spectrum of options to be picked up and added to your IT architecture shopping cart. What you are supposed to get from this modern, sophisticated infrastructure is efficiency and growth with the least effort and cost. And you will start the process of continuous evolution again, only to become better than the previous you.


An Overview of MySQL NDB Cluster

mysql ndb cluster
The “Shared Nothing” Architecture and Database Sharding
The shared-nothing (SN) architecture is a distributed computing architecture with interconnected but independent computing nodes, each with its own memory and disk storage, which are not shared. Compared to a distributed architecture based on a centralized controller, the SN architecture eliminates the presence of a single point of failure. An SN architecture offers great scalability: add as many nodes as wanted. The data is distributed, or partitioned, among the different nodes, which run on separate machines, and some kind of coordination protocol is used to respond to user queries. This is also called database sharding, where the data is horizontally partitioned. A database shard that contains a partition is held on a separate database server instance to spread load and to eliminate failures. In horizontal partitioning, table rows are separated, in contrast to vertical partitioning (normalization), which separates columns. Each data set is part of a shard distributed across different database servers and, if needed, different physical locations.
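The horizontal partitioning described above can be sketched in a few lines of Python. The hash-based routing below is an illustration of the general idea, not NDB's actual partitioning function; the shard count and sample rows are made up for the example.

```python
# Minimal sketch of horizontal partitioning (sharding): each row is
# routed to exactly one shard by hashing its partition key, so the
# rows of one table end up spread across independent nodes.
import zlib

NUM_SHARDS = 4  # illustrative; one shard per database server instance

def shard_for(key: str) -> int:
    """Pick a shard by hashing the row's partition key (stable hash)."""
    return zlib.crc32(key.encode()) % NUM_SHARDS

shards = {i: [] for i in range(NUM_SHARDS)}

rows = [("alice", 30), ("bob", 25), ("carol", 41), ("dave", 35)]
for name, age in rows:
    shards[shard_for(name)].append((name, age))

# Every row landed on exactly one shard.
total = sum(len(r) for r in shards.values())
print(total)  # 4
```

A query for a single key can then be answered by one shard alone, while a full-table scan requires the coordination protocol to fan out across all shards.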
MySQL Cluster and NDB Cluster
MySQL Cluster is based on the shared-nothing clustering and database sharding architecture. It is built on the NDB or NDBCLUSTER storage engine for the MySQL DBMS. MySQL Cluster provides high availability and scalability, and high throughput with low latency. Its architecture is real-time, distributed, multi-master, ACID compliant and transactional, with no single point of failure. It has adopted horizontally scalable auto-sharding on inexpensive commodity hardware and serves read/write-intensive workloads through both SQL and NoSQL queries.
MySQL Cluster is built on top of NDB Cluster while NDB Cluster is an independent MySQL storage engine named NDBCLUSTER (In-Memory type) that stores tables of rows as in database sharding. It can be accessed using the C++ based NDB API by any application. For the NDB Cluster, MySQL Server Instance is an API process that accesses the NDB Cluster using its API, along with other API processes that can access it, using API formats like Memcached, Node.JS, Java, HTTP/REST, etc. MySQL Cluster adds additional capabilities to the NDB Cluster, like: SQL processing, application connectors like JDBC/ODBC, Table Joins Mechanism, User authentication and authorization and Asynchronous data replication.
The NDB Cluster eliminates the single point of failure using the shared-nothing design, in which each node has its own memory and disk. Network shares and file systems are not used by NDB Cluster. In technology documentation, the term NDB refers to the part of the setup related to the storage engine, and MySQL NDB Cluster refers to the integration of MySQL Server(s) with the NDB storage engine. In an NDB Cluster there are multiple hosts that run their own node processes. These processes include MySQL servers to access NDB data, data nodes for storing data, management servers and other co-ordination processes.

NDB Cluster Components
Image courtesy of MySQL Cluster Documentation - 
MySQL Cluster Manager
MySQL Cluster Manager is responsible for creating, administering, monitoring and managing a MySQL Cluster setup. It is used to automate common management tasks like scaling through node allocation, upgrades, configuration management, and backup and restore. It manages both the MySQL server nodes and the data nodes.
In MySQL Cluster, tables are stored in the data nodes instead of the MySQL server nodes. This data can be accessed using SQL through the MySQL Server and also through NoSQL APIs: MySQL Cluster client libraries are available that any application can use to access the cluster data. The client libraries are available in the following formats:
1. Memcached
2. Node.js / JavaScript
3. Java and JPA
4. HTTP/REST
5. NDB API (C++)
MySQL Cluster Nodes
There are 3 types of nodes or processes in a MySQL Cluster:
1. Data node (ndbd/ndbmtd process) – The database tables are automatically sharded across the data nodes. Data nodes are programmed to handle replication, load balancing and failover.
2. Cluster manager node (ndb_mgmd process) – Configures, administers, monitors and perform other management activities like start/restart, arbitration etc.
3. Application node or SQL node (mysqld process) – This MySQL Server process connects to all data nodes for data access, reads and writes. This node is optional and other applications can access the cluster data directly through the NDB API.
Each node will run on separate hosts, either physical or VM or cloud instance.
Working of the MySQL Cluster
MySQL Cluster uses synchronous replication to commit data transactions across the multi-masters in the cluster, in contrast to the usual asynchronous MySQL replication. To ensure availability, replicas of the data, usually two, are created and synchronously replicated to the nodes. This ensures fast failover and protection against data loss. Sometimes nodes/clusters are hosted in geographically distant data centers and asynchronous data replication is used instead; this ensures data security and disaster recovery, and avoids latency by serving from the data center closest to the user.
The individual nodes can be independently started, stopped and restarted, and can re-join the system after a failover. For making configuration changes and upgrades that are common to all nodes, a mechanism called rolling restarts is used, in which all nodes are restarted in turn. Backup and restore of the MySQL Cluster are managed by the NDB Cluster management client and the ndb_restore program included in the NDB Cluster software. NDB Cluster nodes use TCP/IP over standard 100 Mbps or faster hardware for inter-node communication.
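The rolling-restart mechanism can be sketched as follows. The Node class and node names are illustrative only, not part of any NDB management API; the point is simply that nodes are cycled one at a time so the cluster as a whole stays available.

```python
# Sketch of a rolling restart: nodes are restarted one at a time,
# so at most one node is down at any moment and the cluster keeps
# serving requests throughout the upgrade.

class Node:
    def __init__(self, name):
        self.name = name
        self.running = True

    def stop(self):
        self.running = False

    def start(self):
        self.running = True

def rolling_restart(nodes):
    """Restart every node in turn; the rest of the cluster stays up."""
    for node in nodes:
        node.stop()
        # ... apply the configuration change or upgrade here ...
        node.start()
        # after each step the whole cluster is fully up again
        assert all(n.running for n in nodes)

cluster = [Node("ndbd-A"), Node("ndbd-B"), Node("mysqld")]
rolling_restart(cluster)
print(all(n.running for n in cluster))  # True
```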
Logs and Check Points
The NDB Cluster classifies events based upon category (startup, shutdown, etc.), priority and severity. There are two types of event logs used by the cluster:
1. Cluster log - to log events for the cluster as a whole.
2. Node log – separate logs maintained by each node.
Nodes save data to the disk, and the transaction is committed by adding a date/time checkpoint to identify the last modified point. There are two types of checkpoints by which the consistency of cluster data is maintained:
1. Local Checkpoint (LCP) – checkpoint specific to each node’s data saving activity. These depend upon the data activity involved with that node as well as on the cluster.
2. Global Checkpoint (GCP) – occurs every few seconds, after transactions are synchronized across all nodes and the redo log (which records data changes) is flushed to disk.
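The relationship between the per-node redo log and the global checkpoint can be sketched like this. The classes are an illustration of the idea only, not NDB internals; node names and change strings are made up.

```python
# Sketch of checkpointing: each data node buffers redo entries
# (data changes) in memory; a global checkpoint (GCP) synchronizes
# the nodes and flushes every redo log to durable storage.

class DataNode:
    def __init__(self, name):
        self.name = name
        self.redo_log = []   # in-memory redo entries (data changes)
        self.flushed = []    # redo entries made durable on disk

    def write(self, change):
        """Apply a change; it is only buffered until the next GCP."""
        self.redo_log.append(change)

def global_checkpoint(nodes):
    """GCP: synchronize all nodes, then flush each redo log to disk."""
    for node in nodes:
        node.flushed.extend(node.redo_log)
        node.redo_log.clear()

nodes = [DataNode("A"), DataNode("B")]
nodes[0].write("INSERT row 1")
nodes[1].write("INSERT row 2")
global_checkpoint(nodes)

print(sum(len(n.flushed) for n in nodes))  # 2
```

After the global checkpoint, every buffered change is durable and the in-memory redo logs are empty; a local checkpoint (LCP) would additionally write each node's own data files.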


MariaDB Backup and Restore with mysqldump

database backup
In MariaDB both logical and physical backups are possible. Logical backups are made up of the SQL statements, such as CREATE, SELECT, UPDATE and INSERT, that reproduce the database state, while physical backups copy the data and index files that store the table information, including structure and data. While logical backups are highly portable, physical backups are less portable: they are often incompatible not only across different hardware, operating systems, etc., but also across different storage engines of the same DBMS, due to differences in the file structures used by different storage engines, operating systems, hardware and drivers.
Performing logical backup using mysqldump tool

mysqldump is the favorite DBA tool for creating quick backups in different output formats such as SQL, CSV, XML, etc. Triggers are dumped as part of the table structure while stored procedures, events and views can be included in the dump file (output backup file) using mysqldump options like --routines and --events. 
Typically the username and password are specified as options along with mysqldump:
shell> mysqldump -u user_name -p[password] db_name [tbl_name ...] > backup-file.sql
To compress the dump file using pipe:
shell> mysqldump -u user_name -p[password] dbname | gzip -9 > backupfile.sql.gz
To decompress before restore, use gunzip backupfile.sql.gz.
Below are the basic usages of mysqldump to backup tables, single database and all databases:

shell> mysqldump [options] db_name [tbl_name ...] > backup-file.sql
shell> mysqldump [options] --databases db_name ... > backup-file.sql
shell> mysqldump [options] --all-databases > backup-file.sql
To restore a dump file into a database:
shell> mysql -u user_name -p[password] db_name < backup-file.sql
Note: a CREATE DATABASE statement will be included only if the --databases option is used.
Below are some default options that are needed to perform a valid backup.

--add-drop-table: add a DROP TABLE IF EXISTS statement before each CREATE TABLE statement.
--add-locks: surround each table dump with table lock statements so that the INSERTs run fast during restore.
--create-options: to include all options related to table creation with CREATE TABLE statements.
--disable-keys: for each table, surround the INSERT statements with statements that disable and re-enable keys, so that non-unique indexes are built only after all rows are inserted, speeding up the INSERTs.
--extended-insert: to use multiple rows in one INSERT statement to speed up INSERTs and to reduce dump file size.
--lock-tables: to lock all tables before creating the INSERT statements to create the table dumps.
--quick: to speed up dumping of large tables. Tells mysqldump to read and dump one row at a time rather than retrieving the entire row set into memory and buffering it before writing.
--set-charset: to use the default charset for the tables.
The above options are grouped in the --opt option and this group is the default option set with a mysqldump command if no other options are specified. These options and the option group are set as default to make the dumping faster and to create a dump file that can be restored quickly.

These default options can be negated or skipped by specifying corresponding skip options.
--skip-add-locks disables the --add-locks option.
--skip-create-options disables the --create-options option.
--skip-disable-keys disables the --disable-keys option.
To apply the default option group --opt with some individual group options skipped, use appropriate skip options along with --opt.
Example: to skip extended inserts and quick options.

shell> mysqldump --opt --skip-extended-insert --skip-quick db_name > backup-file.sql
Since --opt is ON by default, the above statement can be replaced by:
shell> mysqldump --skip-extended-insert --skip-quick db_name > backup-file.sql
To skip the entire --opt:
shell> mysqldump --skip-opt db_name > backup-file.sql
To skip --opt, but at the same time use some of the default options, for example:
--add-drop-table and --disable-keys, use:
shell> mysqldump --skip-opt --add-drop-table --disable-keys db_name > backup-file.sql
The order of parameters is important when selectively enabling and disabling options. In the above statement, if --skip-opt were placed at the end instead of the beginning, it would skip all --opt group options, including the --add-drop-table and --disable-keys that precede it.
Another option group is --compact that includes the below options:

--skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys and --skip-set-charset
This option group disables structure comments and header/footer statements like DROP TABLE IF EXISTS, re-enables key building and key checks, removes the table lock statements, etc. It is intended for debugging, to analyze and troubleshoot a database.
Below are some options that have importance in specific scenarios:

--debug-check: before exiting, mysqldump checks memory and prints file usage statistics.
--debug-info: before exiting, mysqldump prints debug information.
--log-error=file_name: to log all errors and warnings encountered during the dumping operation, in the specified file name.
--log-queries: if logging is turned on, the slow queries will be recorded in the slow query log and general queries in query log.
Master Slave Replication

--master-data[=#]: this option is used to create a dump file from a master server, either to set up a new slave of that master or to update an existing one. The binary log file name and position (the binary log coordinates) needed for replication slave setup are appended to the output dump file. Setting this option to 1 adds a CHANGE MASTER statement to the dump file, so that a slave receiving it will use those parameters to connect to and replicate from the master. A value of 2 writes the same statement commented out. See the master-slave replication implementation article for details.
shell> mysqldump --master-data=1 --all-databases > dumpfile.sql
--delete-master-logs: this option is used on a master server that has binary logging enabled. It appends a PURGE BINARY LOGS statement to the dump to delete the older binary log files:
    PURGE BINARY LOGS { TO 'log_name' | BEFORE datetime_expr }
This deletes all log files older than the file 'log_name' or the datetime specified. In effect, the --delete-master-logs option enables --master-data to create a dump file for a slave server.
--dump-slave[=value]: this option is used to create a dump file from an existing slave server to create another slave server for the pre-existing master. Value 1 is used to obtain binary log coordinates.
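A sketch of creating such a dump on an existing slave (the file name is a placeholder; binary logging and a configured master are assumed):

```shell
# Run on an existing slave; the dump records the coordinates of the
# slave's master so a new slave can be pointed at the same master
mysqldump --dump-slave=1 --all-databases > new_slave.sql
```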
To take a backup that records the binary log coordinates as a comment, so that changes made since the backup can later be replayed from the master:
shell> mysqldump --all-databases --master-data=2 > all_databases.sql
Creating mysqldump backups for text file importing
--tab=directory_name: this option creates, in the specified directory, a tab-separated text-format data file (.txt) for each table, along with an SQL file (.sql) that holds the table's CREATE TABLE statement. So the output is an SQL file and a TXT file. The text file can then be used by the LOAD DATA INFILE statement to load the data into a database table. You must have the FILE privilege, mysqldump should run on the same server as mysqld, and the server must have write permission on the specified directory. By default, this option places tabs between field values and a newline (\n) at the end of each line in the .txt file.
The LOAD DATA INFILE statement imports rows from a text file in to the database at high speed. This text file can be created using mysqldump by adding delimiters and line separators to the rows and fields in the dump file.
    LOAD DATA [LOCAL] INFILE 'file_name'
    INTO TABLE tbl_name
    [CHARACTER SET charset_name]
    [FIELDS
        [TERMINATED BY 'string']
        [[OPTIONALLY] ENCLOSED BY 'char']
        [ESCAPED BY 'char']
    ]
    [LINES
        [STARTING BY 'string']
        [TERMINATED BY 'string']
    ]
    [IGNORE number LINES]
    [SET col_name = expr,...]
When the LOAD DATA INFILE statement is used with a text file that contains the rows, MariaDB identifies fields by:
1. A string that terminates a field, usually a tab.
2. (Optionally) a character that encloses each field, such as single or double quotes.
3. An escape character, usually backslash (\), that escapes delimiter characters occurring inside field values.
Individual rows or text file lines are identified by:
1. An optional string that starts each line (often empty).
2. A terminator string, generally newline (\n) or carriage return and newline (\r\n).
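As a concrete illustration of these delimiters, the file below uses a tab as the field terminator and a newline as the line terminator; the matching import statement is shown as a comment (the users table and its columns are hypothetical examples):

```shell
# Create a two-row data file in the layout described above
printf '1\tRobert\n2\tSam\n' > /tmp/users.txt

# Matching import statement, run from the mysql client against a
# hypothetical table users(id INT, name VARCHAR(50)):
#
#   LOAD DATA INFILE '/tmp/users.txt' INTO TABLE users
#       FIELDS TERMINATED BY '\t'
#       LINES TERMINATED BY '\n';

# Sanity check: two lines, each containing a tab
wc -l < /tmp/users.txt
grep -c "$(printf '\t')" /tmp/users.txt
```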
To produce such a file, the following options are used along with the --tab option described above; they control the format of the .txt data file.
--fields-terminated-by=name : used with the --tab option to specify the field terminator, usually a tab.
--fields-enclosed-by=name : used with the --tab option to specify the field enclosing character, usually a quote.
--fields-optionally-enclosed-by=name : same as above, but fields are enclosed only when needed, for example string columns.
--fields-escaped-by=name : used with the --tab option to specify an escape character (usually \) for the fields, in case instances of tab, newline, or \ occur within field values.
--lines-terminated-by=name : used with the --tab option to specify a string as the end of line (row), generally \r\n.
Hexadecimal codes can also be used to specify the delimiters.
Example: using the hexadecimal value 0x0d0a for the \r\n (EOL) characters as the line termination marker:
shell> mysqldump --tab=/tmp --fields-terminated-by='\t' \
         --fields-enclosed-by='"' --lines-terminated-by=0x0d0a database1
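The hexadecimal form can be verified locally: 0x0d0a is the two-byte sequence for \r\n.

```shell
# Dump the bytes of \r\n; od shows them as hex 0d 0a
printf '\r\n' | od -An -tx1
```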
Transactional Table Backups
When taking backups of transactional tables such as InnoDB, the --single-transaction option wraps the dump in a single transaction so that a consistent snapshot of the database is captured.
shell> mysqldump --all-databases --single-transaction > all_databases.sql

Read more

Basic MariaDB Database Administration

MariaDB is an open source RDBMS released under the GPL License. Currently there are millions of MariaDB installations across the world. MariaDB's popularity is due to its speed, security, scalability and its capability to handle massive data sets. This article discusses basic to intermediate MariaDB administration tasks. More details are available at the MariaDB official knowledgebase.
MariaDB is available for installation in the form of tarballs and binaries for Linux, Solaris and Windows. Packages for various Linux distributions are also available at the MariaDB downloads website. MariaDB can also be upgraded from an existing MySQL installation. Various clients and utilities are available for MariaDB configuration, optimization and administration. Some important categories of these clients and utilities are:
1. mysql command line client (SQL shell)
2. Aria table clients and utilities
3. Backup, restore and import clients
4. MyISAM clients and utilities
5. EXPLAIN Analyzer
6. innochecksum
7. mysqladmin
8. mysqlaccess
9. mysqlbinlog
10. mysqlcheck
11. mysqldumpslow
12. mysql_install_db
13. mysql_plugin
14. mysqlreport
15. mysql_secure_installation
16. mysql_setpermission
17. mysqlshow
18. mysqlslap
19. mysql-test
20. mysql_upgrade
Additionally Graphical and Enhanced clients are also available. A few of them are:
Database Workbench, dbForge Studio for MySQL, HeidiSQL, JPDB Admin for MariaDB, MONyog and SQLyog, mycli, phpMyAdmin, Sequel Pro, DBeaver, TOAD for MySQL, etc.
Starting and Stopping MariaDB
The MariaDB Server Binary is called mysqld. There are many ways to start it.
1. Starting from the source directory:
shell> ./mysqld &
2. Use the startup script mysqld_safe, from the MariaDB installation directory:
shell> mysqld_safe &
mysqld_safe starts mysqld.
3. Use the mysql.server startup script located in the /etc/init.d directory to start/stop mysqld:
mysql.server start
mysql.server stop
Once the server is started, the client utility mysql can be used to perform various tasks. To start the mysql client and connect to a database:
shell> mysql --user=user_name --password=your_password db_name
Once the mysql client is running, the command-line prompt changes to mysql>, where you can execute SQL queries:
mysql> SELECT * FROM users WHERE user_name = 'Robert';
MariaDB User Account Management
1. Create a user for a host if it does not already exist:
CREATE USER IF NOT EXISTS 'sam'@'localhost' IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)
Valid host names include:
'sam'@'localhost' – local server only
'sam'@'%' – any host
'sam'@'190.190.190.%' – any host from the matching network
'sam'@'ip_addr' – from the specified IP address
'sam'@'host1' – from the host host1
2. Create a user, replace if already exists:
CREATE OR REPLACE USER ‘sam’@ ‘localhost’ IDENTIFIED BY 'password';
Query OK, 0 rows affected (0.00 sec)
3. Change or alter a user
To force a user to use SSL connections:
ALTER USER 'sam'@'localhost' REQUIRE SSL;
To set or restrict connection resource limits for a user:
ALTER USER 'sam'@'localhost' WITH MAX_USER_CONNECTIONS 10;
4. Change password
SET PASSWORD FOR 'sam'@'localhost' = PASSWORD('newpassword');
5. Rename a user
RENAME USER 'donald' TO 'tom'@'localhost';
6. Grant/Revoke privileges for the user
Granting all privileges:
CREATE USER 'bob'@'localhost';
GRANT ALL PRIVILEGES ON *.* TO 'bob'@'localhost';
There are different levels of privileges from Global to Procedure privileges.
1. Global privileges
2. Database privileges
3. Table privileges
4. Column privileges
5. Function privileges
6. Procedure privileges
The ALL PRIVILEGES privilege grants all privileges. The WITH GRANT OPTION clause gives the user the ability to grant privileges to other users.
Giving all privileges on a specific database db1 only:
GRANT ALL PRIVILEGES ON db1.* TO 'bob'@'localhost';
7. Revoke privileges from a user
REVOKE ALL PRIVILEGES ON db1.* FROM 'bob'@'localhost';
8. Remove a user
DROP USER 'bob'@'localhost';
Query OK, 0 rows affected (0.00 sec)
After all user management operations, the privilege tables need to be reloaded:
FLUSH PRIVILEGES;
MariaDB Backup and Restore
The mysqldump client utility can be used for backup and restore.
Backup to a sql file:
shell> mysqldump db_name > backup-file.sql
Restore the backup
shell> mysql db_name < backup-file.sql
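Restoring assumes the target database already exists; a minimal sketch (mydb_copy is a hypothetical database name, and a reachable server is assumed):

```shell
# Create the target database if needed, then replay the dump into it
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS mydb_copy"
mysql -u root -p mydb_copy < backup-file.sql
```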
Creating/Selecting/Dropping Databases
CREATE DATABASE mydb;
USE mydb;
DROP DATABASE mydb;

Read more

Galera Cluster for MySQL and MariaDB

Galera Cluster for MySQL is a MySQL multi-master cluster setup that uses the Galera Replication Plugin. The replication is synchronous, so any change made at one master node is immediately replicated to the other master nodes as broadcast transaction commits. This synchronization provides high availability, high up-time and scalability. All master nodes in the cluster are available for both reads and writes. The Galera Replication Plugin provides automatic node control, dropping failed nodes from the cluster and rejoining recovered nodes back to it. This prevents data loss, and clients can connect to any node, as decided by the replication load balancer. Since changes are synchronized between all nodes, unlike conventional replication, there is no slave lag, no lost transactions, and client latencies are kept at a minimum.

The Galera Replication Plugin
Galera Cluster is implemented by activating the Galera Replication Plugin on the MySQL or MariaDB database server. This transactional replication plugin extends the MySQL replication plugin API and is called the Write-Set Replication API or wsrep API. This API creates the interface between the Galera Replication and the MySQL/MariaDB DBMS Engine. It is responsible for producing the multi-master synchronous replication that is also a certification-based replication. In certification-based replication, the Galera Plugin prepares WRITE transactions also called write-sets in each node, that include database row changes and all locks applied to the database at the time of transaction. This write-set is added to the transaction queue and each node certifies this write-set against other write-sets in the applier queue. This certification causes transaction commit and the WRITES are applied to the nodes’ tablespace. Even though this is a “logical synchronization”, since each node needs to certify the write-set independently, the actual writing and committing to the node’s tablespace is also independent and hence asynchronous.
Synchronous Vs. Asynchronous Replication
1. In synchronous replication, any change made in one node is applied to the other nodes in the cluster with a guarantee, as certified transactions. In asynchronous replication, changes on the master are replicated to slave nodes only upon request from the slave, so there is no guarantee that a transaction will be applied within a certain time limit. If the master crashes, or if there is network or propagation delay, data replication is negatively affected.
2. Synchronous replication provides high-availability, since any node crash will not cause data loss or affect data integrity, since other nodes maintain a consistent replica of data and state.
3. Since the replication is transaction based, transactions are applied in parallel across all nodes.
However, the extra overhead of complexity and distributed locking in synchronous replication causes delays in the cluster compared to asynchronous replication.
Replication in the Galera Cluster
The wsrep API that powers the Galera Replication Plugin uses the database-state model for implementing replication. The data and the locks (READ/WRITE/COMMIT etc.) held at a given point in time constitute the state of the database. When clients perform reads and writes, the database state changes. These state changes are translated into write-sets and recorded by the wsrep API as transactions. The nodes synchronize their states by applying these write-set transactions, broadcast through a transaction queue, in the same serial order. It is the Galera Replication Plugin that manages the write-set certification and its replication across the cluster.
The different components of the Galera Replication Plugin are:
  • Certification Layer: creates write-sets from database state changes, performs certification checks on these write-sets and ensures that they can be applied to the nodes in the cluster.
  • Replication Layer: Manages the entire replication protocol implementation and controls the cluster. 
  • Group Communication Framework: Provides a plugin architecture based communication system for the various component groups that connect to the Galera Cluster. 
Advantages of Galera Cluster

  1. High availability due to multiple masters synchronized together.
  2. True multi-master – read/write to any node.
  3. Tightly coupled by synchronization – all nodes maintain same state.
  4. No data loss – changes are transaction committed as and when they occur without delay. So node failures will not affect data integrity.
  5. Automatic node provisioning.
  6. Hot standby – no downtime, since failed nodes are compensated by other nodes.

Read more