Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. ProxySQL Series : Percona Cluster/MariaDB Cluster (Galera) Read-write Split

    ProxySQL is the most preferred and widely used load balancer for MySQL workloads. Thanks to René Cannaò and team for the great tool, and kudos on the recent release of ProxySQL 1.4.10, which ships a lot of bug fixes. ProxySQL is simple in design, lightweight, highly efficient and feature-rich. We have been working with ProxySQL in production for our clients for quite some time, and we have shared some of our experiences and use cases in the blogs below:

ProxySQL for replication – Read/Write split
ProxySQL for handling bad queries
ProxySQL Mirroring
ProxySQL and replication switchover using MHA
ProxySQL HA using Keepalived

In this blog, we will see how to set up ProxySQL for Percona XtraDB Cluster with a read-write split and failover handling for the various Galera node states. PXC/MariaDB clusters really work better with writes on a single node than with multi-node writes, which makes the read/write split ideal for PXC. Applications benefit a lot from PXC/MariaDB Cluster, as they get synchronous reads and high availability with these clusters.

For the purpose of this blog I have set up a three-node cluster as below:

Operating system: CentOS 7.4
Cluster version: Percona XtraDB Cluster 5.7.21
ProxySQL version: proxysql-1.4.8-1.1.el7.x86_64
ProxySQL node: 192.168.33.11
Node 1: 192.168.33.12
Node 2: 192.168.33.13
Node 3: 192.168.33.14

Setting up the cluster itself is beyond the scope of this post; I will move directly to the ProxySQL setup for a cluster in single-writer mode, which is the recommended configuration for a cluster, to avoid write conflicts and split-brain scenarios.
Architecture

Installing ProxySQL:

The easiest and most robust way to install ProxySQL is from the Percona repo, because it comes with tools/scripts such as proxysql-admin and proxysql_galera_checker, which help with easy configuration, cluster monitoring and failover.

# yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
# yum install proxysql-1.4.8-1.1.el7.x86_64 -y
# chkconfig proxysql on     (enabling the service at startup)
# service proxysql start

ProxySQL is now up and running on the node 192.168.33.11, listening by default on port 6032 for the ProxySQL admin interface and 6033 for the MySQL interface; both can be changed if needed.

Adding cluster nodes to ProxySQL:

Connect to ProxySQL as below:

$ mysql -u admin -padmin -h 127.0.0.1 -P6032

In this setup we will be using two hostgroups:

Hostgroup 10 (writer group)
Hostgroup 11 (reader group)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port, weight, comment) VALUES (10,'192.168.33.12',3306,1000000,'WRITE');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port, weight, comment) VALUES (11,'192.168.33.13',3306,1000,'READ');
Query OK, 1 row affected (0.00 sec)

mysql> INSERT INTO mysql_servers(hostgroup_id, hostname, port, weight, comment) VALUES (11,'192.168.33.14',3306,1000,'READ');
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.03 sec)

Which looks as below:

mysql> SELECT * FROM mysql_servers;
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname      | port | status | weight  | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+
| 10           | 192.168.33.12 | 3306 | ONLINE | 1000000 | 0           | 1000            | 0                   | 0       | 0              | WRITE   |
| 11           | 192.168.33.13 | 3306 | ONLINE | 1000    | 0           | 1000            | 0                   | 0       | 0              | READ    |
| 11           | 192.168.33.14 | 3306 | ONLINE | 1000    | 0           | 1000            | 0                   | 0       | 0              | READ    |
+--------------+---------------+------+--------+---------+-------------+-----------------+---------------------+---------+----------------+---------+

Query rules:

Query rules are important for query routing with ProxySQL: each incoming query is matched against the rule patterns and routed to the corresponding hostgroup. Here I will use the default, basic query rules for read-write splitting:

mysql> INSERT INTO mysql_query_rules (rule_id,active,match_pattern,destination_hostgroup,apply) VALUES (1,1,'^SELECT.*FOR UPDATE$',10,1), (2,1,'^SELECT',11,1);
Query OK, 2 rows affected (0.00 sec)

mysql> LOAD MYSQL QUERY RULES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL QUERY RULES TO DISK;
Query OK, 0 rows affected (0.01 sec)

Now the query rules are added.

Integrating the application user:

It's time to add the application user, through which the application connects via ProxySQL. Here I have assigned hostgroup 10 (the writer group) as the user's default hostgroup: when an incoming query does not match any query rule pattern, it is routed automatically to the default hostgroup, i.e. 10.

mysql> INSERT INTO mysql_users (username,password,default_hostgroup) VALUES ('app_user','application_user',10);
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.01 sec)

The application user is now loaded and ready.
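The routing behaviour these two rules plus the default hostgroup produce can be sketched in a few lines of Python. Rules are tried in rule_id order; the first matching pattern wins, and anything unmatched falls through to the user's default hostgroup (10). This is only an illustration of the matching logic, not ProxySQL's actual code, and I treat the patterns as case-insensitive for simplicity:

```python
import re

# (match_pattern, destination_hostgroup), in rule_id order, as inserted above
rules = [(r"^SELECT.*FOR UPDATE$", 10), (r"^SELECT", 11)]
DEFAULT_HOSTGROUP = 10  # app_user's default_hostgroup

def route(query):
    """Return the hostgroup an incoming query would be routed to."""
    for pattern, hostgroup in rules:
        if re.search(pattern, query, re.IGNORECASE):
            return hostgroup
    return DEFAULT_HOSTGROUP

assert route("SELECT * FROM t") == 11                        # plain read -> readers
assert route("SELECT * FROM t WHERE id=1 FOR UPDATE") == 10  # locking read -> writer
assert route("UPDATE t SET c=1 WHERE id=2") == 10            # unmatched -> default
```

This also shows why the default hostgroup matters: any statement the rules do not recognise (DDL, INSERT, UPDATE, DELETE) safely lands on the writer.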
ProxySQL monitoring:

ProxySQL monitors the nodes added under it by making checks at regular intervals. You can enable monitoring as below:

mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_username";
Query OK, 1 row affected (0.00 sec)

mysql> UPDATE global_variables SET variable_value="monitor" WHERE variable_name="mysql-monitor_password";
Query OK, 1 row affected (0.00 sec)

mysql> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

mysql> SAVE MYSQL VARIABLES TO DISK;
Query OK, 94 rows affected (0.00 sec)

Monitor check:

mysql> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 5;
+---------------+------+------------------+-------------------------+---------------+
| hostname      | port | time_start_us    | connect_success_time_us | connect_error |
+---------------+------+------------------+-------------------------+---------------+
| 192.168.33.14 | 3306 | 1534068625859885 | 1452                    | NULL          |
| 192.168.33.13 | 3306 | 1534068625849598 | 1392                    | NULL          |
| 192.168.33.12 | 3306 | 1534068625838742 | 1505                    | NULL          |
| 192.168.33.14 | 3306 | 1534068565850089 | 1389                    | NULL          |
| 192.168.33.13 | 3306 | 1534068565839890 | 1713                    | NULL          |
+---------------+------+------------------+-------------------------+---------------+

This shows that ProxySQL is able to connect to all the nodes.

Scheduler:

Here we come to an important aspect of the configuration. With Galera, nodes move through different states (wsrep_local_state 1-4), and any state other than SYNCED needs special handling. These node state changes are tracked by ProxySQL's health checks (monitoring Galera), and traffic is routed accordingly with the help of scheduler scripts. The scheduler, via the proxysql_galera_checker script, checks the node states at a configurable interval (in ms). When a node state change is detected, it updates the hostgroups and takes the affected nodes out of traffic accordingly.
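The decision the checker script makes per node can be sketched as follows. Galera reports wsrep_local_state values 1-4 (Joining, Donor/Desynced, Joined, Synced); normally only a Synced node (4) should receive traffic, with a Donor optionally allowed, e.g. when it is the last node left. The state names are standard Galera values, but the function itself is my own illustration, not the actual proxysql_galera_checker code:

```python
# wsrep_local_state values as reported by Galera
JOINING, DONOR_DESYNCED, JOINED, SYNCED = 1, 2, 3, 4

def node_status(wsrep_local_state, available_when_donor=False):
    """Decide whether a node should stay ONLINE in its hostgroup."""
    if wsrep_local_state == SYNCED:
        return "ONLINE"
    if wsrep_local_state == DONOR_DESYNCED and available_when_donor:
        return "ONLINE"  # e.g. the only remaining node while serving as SST donor
    return "OFFLINE_SOFT"  # drain: no new connections, existing ones finish

assert node_status(4) == "ONLINE"                            # Synced, healthy
assert node_status(2) == "OFFLINE_SOFT"                      # Donor taken out
assert node_status(2, available_when_donor=True) == "ONLINE"
```

The log lines below ("wsrep_local_state 4 ... is already ONLINE") are exactly this check succeeding on every node.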
An advanced scheduler written by Marco Tusa is available on his GitHub (proxy_sql_tools). For instance, if the writer node goes down, the scheduler script promotes a node from the reader group to accept writes, and adds the failed node back to its hostgroup when it comes back online. The change is quick enough that the application does not notice any interruption.

mysql> INSERT INTO scheduler(id,active,interval_ms,filename,arg1,arg2,arg3,arg4,arg5) VALUES (1,'1','500','/usr/bin/proxysql_galera_checker','10','11','0','1','/var/lib/proxysql/proxysql_galera_checker.log');
Query OK, 1 row affected (0.00 sec)

mysql> LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from scheduler\G
*************************** 1. row ***************************
         id: 1
     active: 1
interval_ms: 3000                                                   # check interval in ms
   filename: /bin/proxysql_galera_checker                           # check script
       arg1: 10                                                     # writer group
       arg2: 11                                                     # reader group
       arg3: 0                                                      # number of writers
       arg4: 1                                                      # writers are readers
       arg5: /var/lib/proxysql/mycluster_proxysql_galera_check.log  # log file
    comment: mycluster
1 row in set (0.00 sec)

The scheduler starts checking immediately and writes the status to the log as below.

Logs:

[2018-07-15 08:24:11] log file /var/lib/proxysql/mycluster_proxysql_galera_check.log
[2018-07-15 08:24:11] ###### HANDLE WRITER NODES ######
[2018-07-15 08:24:12] --> Checking WRITE server 10:192.168.33.12:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] ###### HANDLE READER NODES ######
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.13:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.13:3306 is already ONLINE
[2018-07-15 08:24:12] --> Checking READ server 11:192.168.33.14:3306, current status ONLINE, wsrep_local_state 4
[2018-07-15 08:24:12] server 11:192.168.33.14:3306 is already ONLINE
[2018-07-15 08:24:12] ###### SUMMARY ######
[2018-07-15 08:24:12] --> Number of writers that are 'ONLINE': 1 : hostgroup: 10
[2018-07-15 08:24:12] --> Number of readers that are 'ONLINE': 2 : hostgroup: 11

The setup is now complete, with read/write split and failover. We will now proceed to test it.

Read-write split:

I will use the command snippet below to watch the connections and the failover scenarios:

# watch -n 1 'mysql -h 127.0.0.1 -P 6032 -uadmin -padmin -t -e "select * from stats_mysql_connection_pool where hostgroup in (10,11) order by hostgroup,srv_host;" -e "select hostgroup_id,hostname,status,weight,comment from mysql_servers where hostgroup_id in (10,11) order by hostgroup_id,hostname;"'

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 604        |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 509        |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 601        |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
+--------------+---------------+--------+---------+---------+
| hostgroup_id | hostname      | status | weight  | comment |
+--------------+---------------+--------+---------+---------+
| 10           | 192.168.33.12 | ONLINE | 1000000 | WRITE   |
| 11           | 192.168.33.13 | ONLINE | 1000    | READ    |
| 11           | 192.168.33.14 | ONLINE | 1000    | READ    |
+--------------+---------------+--------+---------+---------+

Read/write testing:

A write-only test with sysbench.
sysbench --test='/usr/share/sysbench/oltp_update_index.lua' --table-size=1000000 --mysql-host=192.168.33.11 --mysql-db='sbtest' --mysql-user=app_user --mysql-password='application_user' --mysql-port=6033 --time=60 --threads=4 --db-driver=mysql run

ProxySQL stats:

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 4        | 4      | 0       | 29326   | 703876          | 0               | 0          |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0               | 0          |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

As you can see, the query stats change only for the writer group; the readers do not receive any writes.

Read (SELECT) test:

Simulating read-only, primary-key-based SELECTs on the cluster with sysbench.
# sysbench --test='/usr/share/sysbench/oltp_point_select.lua' --table-size=1000000 --mysql-host=192.168.33.11 --mysql-db='sbtest' --mysql-user=app_user --mysql-password='application_user' --mysql-port=6033 --time=60 --threads=4 --db-driver=mysql run

ProxySQL stats:

+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| hostgroup | srv_host      | srv_port | status | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 10        | 192.168.33.12 | 3306     | ONLINE | 0        | 4        | 4      | 0       | 29326   | 703876          | 0               | 0          |
| 11        | 192.168.33.13 | 3306     | ONLINE | 0        | 3        | 3      | 0       | 95848   | 2300380         | 5731992         | 0          |
| 11        | 192.168.33.14 | 3306     | ONLINE | 0        | 3        | 3      | 0       | 96929   | 2326324         | 5818176         | 0          |
+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

Now you can see the query count changing in our reader hostgroup, compared with the previous result. In the next blog we will look at switchover and failover with the cluster using ProxySQL.

Image courtesy: Photo by Tim Gouw on Unsplash
  2. Database Security Monitoring for MySQL and MariaDB

    Data protection is one of the most significant aspects of administering a database. Depending on the organizational structure, whether you are a developer, sysadmin or DBA, if you are managing the production database, you must monitor data for unauthorized access and usage.

The purpose of security monitoring is twofold. One, to identify unauthorised activity on the database. And two, to check whether databases and their configurations, on a company-wide basis, are compliant with security policies and standards.

In this article, we will divide security monitoring into two categories. The first is related to auditing MySQL and MariaDB database activity. The second is about monitoring your instances for potential security gaps.

Query and connection policy-based monitoring

Continuous auditing is an imperative task for monitoring your database environment. By auditing your database, you can achieve accountability for actions taken or content accessed. Moreover, the audit may cover critical system components, such as those associated with financial data, to support a precise set of regulations like SOX or the EU's GDPR. Usually, it is achieved by logging information about database operations to an external log file.

By default, auditing in MySQL and MariaDB is disabled. You can achieve it by installing additional plugins or by capturing all queries with the general query log (the general_log parameter). The general query log file is a general record of what MySQL is doing: the server records information to this log when clients connect or disconnect, and it logs each SQL statement received from clients. Due to performance issues and a lack of configuration options, the general_log is not a good solution for security audit purposes. If you use MySQL Enterprise, you can use the MySQL Enterprise Audit plugin, which is an extension to the proprietary MySQL version.
The MySQL Enterprise Audit Plugin is only available with MySQL Enterprise, a commercial offering from Oracle. Percona and MariaDB have created their own open source versions of the audit plugin. Lastly, McAfee's plugin for MySQL can also be used with various versions of MySQL. In this article, we will focus on the open source plugins, although the Enterprise version from Oracle seems to be the most robust and stable.

Characteristics of MySQL open source audit plugins

While the open source audit plugins do the same job as the Enterprise plugin from Oracle - they produce output with database queries and connections - there are some major architectural differences.

MariaDB Audit Plugin – works with MariaDB, MySQL (as of versions 5.5.34 and 10.0.7) and Percona Server. MariaDB started including the Audit Plugin by default from versions 10.0.10 and 5.5.37, and it can be installed in any version from MariaDB 5.5.20. It is the only plugin that supports Oracle MySQL, Percona Server, and MariaDB, and it is available on both Windows and Linux. Versions from 1.2 onward are the most stable; it may be risky to use earlier versions in a production environment.

McAfee MySQL Audit Plugin – this plugin does not use the MySQL audit API. It was recently updated to support MySQL 5.7. Some tests show that API-based plugins may provide better performance, but you need to check this in your own environment.

Percona Audit Log Plugin – Percona provides an open source auditing solution that installs with Percona Server 5.5.37+ and 5.6.17+ as part of the installation process. Compared to the other open source plugins, this plugin has richer output features, as it can write XML, JSON and syslog output. As it has internal hooks into the server to be feature-compatible with Oracle's plugin, it is not available as a standalone plugin for other versions of MySQL.
Plugin installation based on the MariaDB audit extension

The installation of the open source MySQL plugins is quite similar for the MariaDB, Percona, and McAfee versions. Percona and MariaDB ship their plugins as part of the default server binaries, so there is no need to download them separately. The Percona version only officially supports its own fork of MySQL, so there is no direct download from the vendor's website (if you want to use this plugin with MySQL, you will have to obtain it from a Percona Server package). If you would like to use the MariaDB plugin with other forks of MySQL, you can find it at https://downloads.mariadb.com/Audit-Plugin/MariaDB-Audit-Plugin/. The McAfee plugin is available at https://github.com/mcafee/mysql-audit/wiki/Installation.

Before you start the plugin installation, you can check if the plugin is present on the system. The dynamic plugin location (which doesn't require an instance restart) can be checked with:

SHOW GLOBAL VARIABLES LIKE 'plugin_dir';
+---------------+--------------------------+
| Variable_name | Value                    |
+---------------+--------------------------+
| plugin_dir    | /usr/lib64/mysql/plugin/ |
+---------------+--------------------------+

Check the directory returned at the filesystem level to make sure you have a copy of the plugin library. If you do not have server_audit.so or server_audit.dll inside /usr/lib64/mysql/plugin/, then most likely your MariaDB version is not supported and you should upgrade it to the latest version.
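The filesystem check described above is easy to script. A minimal sketch that looks for the plugin library in the reported plugin_dir (the function name is my own; the path comes from the example output above, so adjust it for your system):

```python
import os

def find_audit_plugin(plugin_dir, names=("server_audit.so", "server_audit.dll")):
    """Return the path of the first audit plugin library found in plugin_dir, or None."""
    for name in names:
        path = os.path.join(plugin_dir, name)
        if os.path.isfile(path):
            return path
    return None

# Example with the plugin_dir from the SHOW GLOBAL VARIABLES output above:
library = find_audit_plugin("/usr/lib64/mysql/plugin/")
# None here means the library is absent and an upgrade may be needed.
```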
The syntax to install the MariaDB plugin is:

INSTALL SONAME 'server_audit';

To check the installed plugins, run:

SHOW PLUGINS;

MariaDB [(none)]> show plugins;
+-------------------------------+----------+--------------------+--------------------+---------+
| Name                          | Status   | Type               | Library            | License |
+-------------------------------+----------+--------------------+--------------------+---------+
| binlog                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| mysql_native_password         | ACTIVE   | AUTHENTICATION     | NULL               | GPL     |
| mysql_old_password            | ACTIVE   | AUTHENTICATION     | NULL               | GPL     |
| wsrep                         | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MRG_MyISAM                    | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MEMORY                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| CSV                           | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| MyISAM                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| CLIENT_STATISTICS             | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INDEX_STATISTICS              | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| TABLE_STATISTICS              | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| USER_STATISTICS               | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| PERFORMANCE_SCHEMA            | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| InnoDB                        | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| INNODB_TRX                    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_LOCKS                  | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_LOCK_WAITS             | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_CMP                    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
...
| INNODB_MUTEXES                | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_SYS_SEMAPHORE_WAITS    | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| INNODB_TABLESPACES_ENCRYPTION | ACTIVE   | INFORMATION SCHEMA | NULL               | BSD     |
| INNODB_TABLESPACES_SCRUBBING  | ACTIVE   | INFORMATION SCHEMA | NULL               | BSD     |
| Aria                          | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| SEQUENCE                      | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| user_variables                | ACTIVE   | INFORMATION SCHEMA | NULL               | GPL     |
| FEEDBACK                      | DISABLED | INFORMATION SCHEMA | NULL               | GPL     |
| partition                     | ACTIVE   | STORAGE ENGINE     | NULL               | GPL     |
| rpl_semi_sync_master          | ACTIVE   | REPLICATION        | semisync_master.so | GPL     |
| rpl_semi_sync_slave           | ACTIVE   | REPLICATION        | semisync_slave.so  | GPL     |
| SERVER_AUDIT                  | ACTIVE   | AUDIT              | server_audit.so    | GPL     |
+-------------------------------+----------+--------------------+--------------------+---------+

If you need additional information, check the PLUGINS table in the information_schema database, which contains more detailed information. Another way to install the plugin is to enable it in my.cnf and restart the instance. An example of a basic audit plugin configuration for MariaDB could be:

server_audit_events=CONNECT
server_audit_file_path=/var/log/mysql/audit.log
server_audit_file_rotate_size=1073741824
server_audit_file_rotations=8
server_audit_logging=ON
server_audit_incl_users=
server_audit_excl_users=
server_audit_output_type=FILE
server_audit_query_log_limit=1024

The above settings should be placed in my.cnf. The audit plugin will create the file /var/log/mysql/audit.log, which will be rotated at 1GB, with eight rotations kept before the oldest file is overwritten. The file will contain only information about connections. Currently, there are sixteen settings you can use to adjust the MariaDB audit plugin.
server_audit_events
server_audit_excl_users
server_audit_file_path
server_audit_file_rotate_now
server_audit_file_rotate_size
server_audit_file_rotations
server_audit_incl_users
server_audit_loc_info
server_audit_logging
server_audit_mode
server_audit_output_type
server_audit_query_log_limit
server_audit_syslog_facility
server_audit_syslog_ident
server_audit_syslog_info
server_audit_syslog_priority

Among them, you can find options to include or exclude users, set different logging events (CONNECT or QUERY), and switch between file and syslog output. To make sure the plugin is enabled upon server startup, you have to set plugin_load=server_audit=server_audit.so in your my.cnf. Such a configuration can additionally be protected with server_audit=FORCE_PLUS_PERMANENT, which disables the plugin uninstall option:

UNINSTALL PLUGIN server_audit;
ERROR 1702 (HY000): Plugin 'server_audit' is force_plus_permanent and can not be unloaded

Here are some sample entries produced by the MariaDB audit plugin:

20180817 20:00:01,slave,cmon,cmon,31,0,DISCONNECT,information_schema,,0
20180817 20:47:01,slave,cmon,cmon,17,0,DISCONNECT,information_schema,,0
20180817 20:47:02,slave,cmon,cmon,19,0,DISCONNECT,information_schema,,0
20180817 20:47:02,slave,cmon,cmon,18,0,DISCONNECT,information_schema,,0
20180819 17:19:19,slave,cmon,cmon,12,0,CONNECT,information_schema,,0
20180819 17:19:19,slave,root,localhost,13,0,FAILED_CONNECT,,,1045
20180819 17:19:19,slave,root,localhost,13,0,DISCONNECT,,,0
20180819 17:19:20,slave,cmon,cmon,14,0,CONNECT,mysql,,0
20180819 17:19:20,slave,cmon,cmon,14,0,DISCONNECT,mysql,,0
20180819 17:19:21,slave,cmon,cmon,15,0,CONNECT,information_schema,,0
20180819 17:19:21,slave,cmon,cmon,16,0,CONNECT,information_schema,,0
20180819 19:00:01,slave,cmon,cmon,17,0,CONNECT,information_schema,,0
20180819 19:00:01,slave,cmon,cmon,17,0,DISCONNECT,information_schema,,0

Schema changes report

If you need to track only DDL changes, you can use the ClusterControl Operational Report on
Schema Change. The Schema Change Detection Report shows any DDL changes on your database. This functionality requires an additional parameter in the ClusterControl configuration file; if it is not set, you will see the following information:

schema_change_detection_address is not set in /etc/cmon.d/cmon_1.cnf.

Once that is in place, an example output may look like the one below. The report can be set up on a schedule, and emailed to recipients.

ClusterControl: Schedule Operational Report

MySQL Database Security Assessment

Package upgrade check

First, we will start with security checks. Being up to date with MySQL patches will help reduce the risks associated with known vulnerabilities in the MySQL server. You can keep your environment up to date by using the vendors' package repositories. Based on this information you can build your own reports, or use tools like ClusterControl to verify your environment and alert you on possible updates. The ClusterControl Upgrade Report gathers information from the operating system and compares it to the packages available in the repository. The report is divided into four sections: upgrade summary, database packages, security packages, and other packages. You can quickly compare what you have installed on your system and find a recommended upgrade or patch.
ClusterControl: Upgrade Report
ClusterControl: Upgrade Report details

To compare versions manually you can run:

SHOW VARIABLES WHERE variable_name LIKE "version";

against security bulletins like:

https://www.oracle.com/technetwork/topics/security/alerts-086861.html
https://nvd.nist.gov/view/vuln/search-results?adv_search=true&cves=on&cpe_vendor=cpe%3a%2f%3aoracle&cpe_produ
https://www.percona.com/doc/percona-server/LATEST/release-notes/release-notes_index.html
https://downloads.mariadb.org/mariadb/+releases/
https://www.cvedetails.com/vulnerability-list/vendor_id-12010/Mariadb.html
https://www.cvedetails.com/vulnerability-list/vendor_id-13000/Percona.html

or the vendor repositories:

On Debian:
sudo apt list mysql-server

On RHEL/CentOS:
yum list | grep -i mariadb-server

Accounts without passwords

Blank passwords allow a user to log in without a password. MySQL used to come with a set of pre-created users, some of which could connect to the database without a password or, even worse, as anonymous users. Fortunately, this has changed in MySQL 5.7, which finally comes with only a root account that uses the password you choose at installation time. For each row returned from the audit query below, set a password:

SELECT User,host FROM mysql.user WHERE authentication_string='';

Additionally, you can install a password validation plugin and implement a more secure policy:

INSTALL PLUGIN validate_password SONAME 'validate_password.so';
SHOW VARIABLES LIKE 'default_password_lifetime';
SHOW VARIABLES LIKE 'validate_password%';

A good start can be:

plugin-load=validate_password.so
validate-password=FORCE_PLUS_PERMANENT
validate_password_length=14
validate_password_mixed_case_count=1
validate_password_number_count=1
validate_password_special_char_count=1
validate_password_policy=MEDIUM

Of course, these settings will depend on your business needs.
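As a rough illustration of what the length and character-count settings above enforce, here is a hand-rolled checker in Python. It mimics the length / mixed-case / number / special-character counts configured above; it is my own sketch, not the validate_password plugin's actual logic:

```python
import string

def meets_policy(password, min_length=14, min_mixed_case=1,
                 min_numbers=1, min_special=1):
    """Check a password against counts like the validate_password settings above."""
    lowers = sum(c.islower() for c in password)
    uppers = sum(c.isupper() for c in password)
    digits = sum(c.isdigit() for c in password)
    special = sum(c in string.punctuation for c in password)
    return (len(password) >= min_length
            and lowers >= min_mixed_case
            and uppers >= min_mixed_case
            and digits >= min_numbers
            and special >= min_special)

assert meets_policy("Str0ng&LongPassw0rd")
assert not meets_policy("short1!A")            # fails the length requirement
assert not meets_policy("alllowercase1234!")   # no upper-case characters
```

Running candidate passwords through a checker like this before deployment avoids surprises when the plugin starts rejecting them.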
Remote access monitoring

Avoiding the use of wildcards within hostnames helps control the specific locations from which a given user may connect to and interact with the database. You should make sure that every user can connect to MySQL only from specific hosts. You can always define several entries for the same user; this should help to reduce the need for wildcards. Execute the following SQL statement to assess this recommendation (make sure no rows are returned):

SELECT user, host FROM mysql.user WHERE host = '%';

Test database

The default MySQL installation comes with an unused database called test, and the test database is available to every user, especially to anonymous users. Such users can create tables and write to them. This can potentially become a problem on its own, and the writes would add overhead and reduce database performance. It is recommended that the test database is dropped. To determine if it is present, run:

SHOW DATABASES LIKE 'test';

If the test database is present, it could be that the mysql_secure_installation script, which drops the test database (and performs other security-related activities), was not executed.

LOAD DATA INFILE

If both the server and the client have the ability to run LOAD DATA LOCAL INFILE, a client will be able to load data from a local file to a remote MySQL server. The local_infile parameter dictates whether files located on the MySQL client's machine can be loaded or selected via LOAD DATA INFILE or SELECT local_file. This can potentially be used to read files the client has access to - for example, on an application server, one could access any data that the HTTP server has access to. To avoid this, you need to set local-infile=0 in my.cnf.
Execute the following SQL statement and ensure the Value field is set to OFF:

SHOW VARIABLES WHERE Variable_name = 'local_infile';

Monitor for non-encrypted tablespaces

Starting from MySQL 5.7.11, InnoDB supports data encryption for tables stored in file-per-table tablespaces. This feature provides at-rest encryption for physical tablespace data files. To examine whether your tables have been encrypted, run:

mysql> SELECT TABLE_SCHEMA, TABLE_NAME, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%ENCRYPTION="Y"%';
+--------------+------------+----------------+
| TABLE_SCHEMA | TABLE_NAME | CREATE_OPTIONS |
+--------------+------------+----------------+
| test         | t1         | ENCRYPTION="Y" |
+--------------+------------+----------------+

As part of encryption, you should also consider encrypting the binary log, since the MySQL server writes plenty of information to binary logs.

Encrypted connection validation

In some setups, the database should not be accessible through the network at all, with every connection managed locally through the Unix socket. In such cases, you can add the skip-networking variable to my.cnf; it prevents MySQL from accepting any TCP/IP connections, so on Linux only the Unix socket is possible. However, this is a rather rare situation, as it is common to access MySQL over the network. You then need to monitor that your connections are encrypted. MySQL supports SSL as a means of encrypting traffic both between MySQL servers (replication) and between MySQL servers and clients. If you use a Galera cluster, similar features are available: both intra-cluster communication and client connections can be encrypted using SSL. To check if you use SSL encryption, run the following queries:

SHOW VARIABLES WHERE variable_name = 'have_ssl';
SELECT ssl_verify_server_cert FROM mysql.slave_master_info;

That's it for now.
This is not a complete list; do let us know if there are any other checks that you are doing today on your production databases.
  3. How to set up a Replication User

A replication user is necessary to set up the Primary/Replica relationship. This is a short step, but it needs a bit more attention. From the MySQL 5.7 documentation (highlights are my own):

Although you do not have to create an account specifically for replication, you should be aware that the replication user name and password are stored in plain text in the master info repository file or table (see Section 16.2.4.2, “Slave Status Logs”). Therefore, you may want to create a separate account that has privileges only for the replication process, to minimize the possibility of compromise to other accounts.

The following commands will allow replication of all databases and tables, connecting from any host. For security reasons you may want to limit replication access to only the IP address of the server doing the replicating. Log into the MySQL console using a user with GRANT privileges on the primary server and execute the following:

CREATE USER 'replication'@'%' IDENTIFIED BY 'mysupersecretpassword';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'%';

My advice: instead of using the % wildcard, set up the IP address of your replica. This user will be used in the primary’s MASTER_USER option, and in theory it could be any user as long as it also has the REPLICATION SLAVE privilege. After that, the replica will connect to the primary and perform a handshake with those credentials; if they match, the primary will allow replication to occur.

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.
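Following the advice above, a more locked-down variant of the same account might look like this ('192.0.2.10' is a placeholder; substitute your replica's actual IP address):

```sql
-- Sketch: restrict the replication account to a single replica host
-- (the IP address below is a placeholder, not from the original tutorial)
CREATE USER 'replication'@'192.0.2.10' IDENTIFIED BY 'mysupersecretpassword';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.0.2.10';
```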
  4. Using AWS EC2 instance store vs EBS for MySQL: how to increase performance and decrease cost

If you are using large EBS GP2 volumes for MySQL (i.e. 10TB+) on AWS EC2, you can increase performance and save a significant amount of money by moving to local SSD (NVMe) instance storage. Interested? Then read on for a more detailed examination of how to achieve cost benefits and increase performance from this implementation.

EBS vs local instance store

We have heard from customers that large EBS GP2 volumes can be affected by short-term outages: IO “stalls” where no IO goes in or out for a couple of minutes. This can happen especially in the largest AWS region, us-east-1. Statistically, with so many disks in the disk arrays that back EBS volumes, we can expect frequent disk failures. If we allocate a very large EBS GP2 volume, i.e. 10TB+, hitting such failure events can be common.

In the case of MySQL/InnoDB, such an IO “stall” will be obvious, particularly on a highly loaded system where MySQL needs to do physical IO. During the stall, you will see all write queries waiting, or “hanging”. Some of the writes may error out with “Error 1030” (MySQL error code 1030 (ER_GET_ERRNO): Got error %d from storage engine). There is nothing MySQL can do here: if the IO subsystem is not available, it has to wait for it.

The good news is: many of the newer EC2 instances (e.g. i3, m5d) have local SSD (NVMe) disks attached. Those disks are local to the physical server and do not suffer from the EBS issues described above. Using local disks can be a very good solution:

- They are faster, as they are local to the server, and do not suffer from the EBS issues
- They are much cheaper compared to large EBS volumes

Please note, however, that local storage does not guarantee persistence. More about this below.

Another potential option is to use IO1 volumes with provisioned IOPS. However, that will be significantly more expensive for large volumes and high traffic.

A look at costs

To estimate the costs, I’ve used the AWS simple monthly calculator.
Estimated costs are based on 1-year reserved instances. Let’s imagine we need a 14TB volume (to store ~10TB of MySQL data including binary logs). The pricing estimates look like this:

r4.4xlarge, 122GB RAM, 16 vCPUs + EBS, 14TB volume (this is what we are presumably using now):
Amazon EC2 Service (US East (N. Virginia)): $1890.56 / month
  Compute: $490.56
  EBS Volumes: $1400.00

Local storage price estimates:

i3.4xlarge, 122GB RAM, 16 vCPUs, 3800 GiB disk (2 x 1900 NVMe SSD):
Amazon EC2 Service (US East (N. Virginia)): $627.21 / month
  Compute: $625.61

i3.8xlarge, 244GB RAM, 32 vCPUs, 7600 GiB disk (4 x 1900 NVMe SSD):
Amazon EC2 Service (US East (N. Virginia)): $1252.82 / month
  Compute: $1251.22

As we can see, even if we switch to i3.8xlarge and get 2x more RAM, 2x more virtual CPUs, faster storage and a 10 gigabit network, we still pay about 1.5x less per box than what we are presumably paying now. Include replication, and that is 1.5x less for each of the replication servers. But wait … there is a catch.

How to migrate to local storage from EBS

We have some challenges in migrating from EBS to local instance NVMe storage:

- Wait, we are storing ~10TB and i3.8xlarge has only 7600 GiB of disk. The answer is simple: compression (see below).
- Wait, but the local storage is ephemeral; if we lose the box we lose our data, and that is unacceptable. The answer is also simple: replication (see below).
- Wait, but we use EBS snapshots for backups. That answer is simple too: we can still use EBS (and snapshots) on one of the replication slaves (see below).

Compression

To fit on i3.8xlarge we only need 2x compression. This can be done with InnoDB row compression (row_format=compressed) or InnoDB page compression, which requires sparse file and hole punching support. However, InnoDB compression may be slower, and it will only compress ibd files; it does not compress binary logs, frm files, etc.

ZFS

Another option: use the ZFS filesystem.
ZFS will compress all files, including binary logs and frm files. That can be very helpful if we use a “schema per customer” or “table per customer” approach and need to store 100K–200K tables in a single MySQL instance. If the data is compressible, or new tables were provisioned without much data in them, ZFS can give significant disk savings.

I’ve used ZFS (following Yves’ blog post, Hands-On Look at ZFS with MySQL). Here are the results of data compression with ZFS (this is real data, not generated data):

# du -sh --apparent-size /mysqldata/mysql/data
8.6T /mysqldata/mysql/data
# du -sh /mysqldata/mysql/data
3.2T /mysqldata/mysql/data

Compression ratio:

# zfs get all | grep -i compress
...
mysqldata/mysql/data  compressratio     2.42x  -
mysqldata/mysql/data  compression       gzip   inherited from mysqldata/mysql
mysqldata/mysql/data  refcompressratio  2.42x  -
mysqldata/mysql/log   compressratio     3.75x  -
mysqldata/mysql/log   compression       gzip   inherited from mysqldata/mysql
mysqldata/mysql/log   refcompressratio  3.75x  -

As we can see, the original 8.6TB of data was compressed to 3.2TB; the compression ratio for MySQL tables is 2.42x, and for binary logs 3.75x. That will definitely fit on i3.8xlarge. (For another test, I generated 40 million tables spread across multiple schemas (databases), added data to only one schema and left the others blank. For that test I achieved a ~10x compression ratio.)

Conclusion: ZFS can give you a very good compression ratio, allow you to use different EC2 instances on AWS, and save you a substantial amount of money. Compression is not free performance-wise, though, and ZFS can be slower for some workloads, but using local NVMe storage can compensate. You can find some performance testing for ZFS on Linux in this blog post: About ZFS Performance.
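For reference, a dataset layout like the one shown above could be provisioned roughly as follows. This is only a sketch: the device names, pool layout and the recordsize setting (matching InnoDB’s 16KB page size, as in Yves’ post) are assumptions, not something prescribed here.

```shell
# Sketch: build a compressed ZFS pool on the instance's local NVMe devices
# (device and dataset names are assumptions; adjust for your instance type)
zpool create mysqldata /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
zfs create mysqldata/mysql
zfs set compression=gzip mysqldata/mysql          # inherited by child datasets
zfs create -o recordsize=16k mysqldata/mysql/data # match InnoDB's 16KB pages
zfs create mysqldata/mysql/log
zfs get compressratio mysqldata/mysql/data        # check the savings later
```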
Some benchmarks comparing EBS and local NVMe SSD storage (i3 instances) can be found in this blog post: Percona XtraDB Cluster on Amazon GP2 Volumes.

MyRocks

Another option for compression would be the MyRocks storage engine in Percona Server for MySQL, which provides compression.

Replication and using local volumes

As the local instance storage is ephemeral, we need redundancy: we can use MySQL replication or Percona XtraDB Cluster (PXC). In addition, we can use one replication slave (or attach a replication slave to PXC) and have it use EBS storage.

Local storage is not durable. If you stop the instance and then start it again, the local storage will probably disappear. (Reboot is an exception: you can reboot the instance and the local storage will be fine.) In addition, if the local storage disappears, we will have to recreate the MySQL local storage partition (for ZFS, i.e. zpool create, or for EXT4/XFS, i.e. mkfs).

For example, using MySQL replication:

master - local storage (AZ 1, i.e. 1a)
-> slave1 - local storage (AZ 2, i.e. 1b)
-> slave2 - EBS storage (AZ 3, i.e. 1c)
(other replication slaves with local storage if needed - optional)

Then we can use slave2 for EBS snapshots (if needed). This slave will be more expensive (as it uses EBS), but it can also serve production traffic (i.e. we can place a smaller amount of traffic on it) or be used for other purposes (for example, analytical queries).

For Percona XtraDB Cluster (PXC) we can just use 3 nodes, one in each AZ. PXC uses auto-provisioning with SST if a new node comes back blank. For MySQL replication we need some additional things:

- Failover from master to a slave if the master goes down. This can be done with MHA or Orchestrator.
- The ability to clone a slave. This can be done with Xtrabackup or ZFS snapshots (if using ZFS).
- The ability to set up a new MySQL local storage partition (for ZFS, i.e. zpool create, or for EXT4/XFS, i.e. mkfs).

Other options

Here are some totally different options we could consider:

- Use IO1 volumes (as discussed). That can be way more expensive.
- Use local storage and the MyRocks storage engine. However, switching to another storage engine is a bigger project in itself and requires lots of testing.
- Switch to AWS Aurora. That can be even more expensive for this particular case, and switching to Aurora can be another big project by itself.

Conclusions

Using EC2 i3 instances with local NVMe storage can increase performance and save money. There are some limitations: local storage is ephemeral and will disappear if the node is stopped (reboot is fine). The ZFS filesystem with compression enabled can decrease the storage requirements so that a MySQL instance will fit into local storage. Another option for compression could be InnoDB compression (row_format=compressed).

This may not work for everyone, as it requires additional changes to the existing server provisioning: failover from master to a slave, the ability to clone replication slaves (or use PXC), the ability to set up a new MySQL local storage partition, and compression.

The post Using AWS EC2 instance store vs EBS for MySQL: how to increase performance and decrease cost appeared first on Percona Database Performance Blog.
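As a quick sanity check of the “1.5x less per box” cost claim, using the monthly figures quoted from the AWS calculator earlier in the post:

```shell
# Back-of-envelope check: EBS-based setup vs i3.8xlarge with local NVMe
awk 'BEGIN {
  ebs = 490.56 + 1400.00   # r4.4xlarge compute + 14TB EBS GP2, per month
  i3  = 1252.82            # i3.8xlarge (local NVMe included), per month
  printf "%.2fx\n", ebs / i3
}'
# prints "1.51x"
```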
  5. Use cases for MySQL Cluster

Third chapter of "MySQL Cluster 7.5 inside and out".

NDB was designed for a number of networking applications. In particular, the original design focused a lot on telecom applications, which have extreme requirements on availability, as do many networking applications. This chapter goes through a number of application types where NDB has been used, and also covers a number of public examples of use cases.

These application types include DNS servers, DHCP servers, Intelligent Network (IN) applications, 3G/4G/5G applications, RADIUS servers, number portability, DIAMETER servers, video-on-demand applications, payment gateways, Software Defined Networking (SDN), Network Function Virtualization (NFV) and Voice over IP. They also include many internet applications such as gaming servers, financial applications such as stock quote servers, whitelist/blacklist handling, eCommerce applications, payment services, web applications, fraud detection, online banking and session databases.

In addition, a number of frameworks have used NDB and presented it publicly, such as a GE presentation, HopsFS (a Hadoop distribution), HopsYARN, Oracle OpenStack and OpenLDAP.

With the possibility to handle more data in MySQL Cluster 7.6, it is likely that this list of application types will grow even more.