Planet MySQL

  1. Shinguz: MySQL sys Schema in MariaDB 10.2

    MySQL introduced the PERFORMANCE_SCHEMA (P_S) in MySQL 5.5, made it really usable in MySQL 5.6, and added further enhancements in MySQL 5.7 and 8.0. Unfortunately the PERFORMANCE_SCHEMA is not really intuitive for the broader audience. Thus Mark Leith created the sys Schema to give normal DBAs and DevOps engineers easier access, and Daniel Fischer has enhanced it further. Fortunately the sys Schema up to version 1.5.1 is available on GitHub, so we can adapt and use it for MariaDB as well. The version of the sys Schema in MySQL 8.0 is 1.6.0 and does not seem to be on GitHub yet, but you can extract it from the MySQL 8.0 directory structure: mysql-8.0/share/mysql_sys_schema.sql. According to a well-informed source the project on GitHub is not dead; the developers have just been working on other priorities, and the source announced another release soon (they are working on it at the moment).

    MariaDB has integrated the PERFORMANCE_SCHEMA based on MySQL 5.6 into its own MariaDB 10.2 server but unfortunately did not integrate the sys Schema. Which PERFORMANCE_SCHEMA version is integrated in MariaDB can be found here.

    To install the sys Schema into MariaDB we first have to check whether the PERFORMANCE_SCHEMA is activated in the MariaDB server:

        mariadb> SHOW GLOBAL VARIABLES LIKE 'performance_schema';
        +--------------------+-------+
        | Variable_name      | Value |
        +--------------------+-------+
        | performance_schema | OFF   |
        +--------------------+-------+

    To enable the PERFORMANCE_SCHEMA, just add the following lines to your my.cnf:

        [mysqld]
        performance_schema = 1

    and restart the instance. MariaDB 10.2 integrates the MySQL 5.6 PERFORMANCE_SCHEMA, so we have to run the sys_56.sql installation script. If you try to run the sys_57.sql script instead, you will get a lot of errors...
    But the sys_56.sql installation script will also cause you some little troubles, which are easy to fix:

        unzip
        mysql -uroot < sys_56.sql
        ERROR 1193 (HY000) at line 20 in file: './procedures/diagnostics.sql': Unknown system variable 'server_uuid'
        ERROR 1193 (HY000) at line 20 in file: './procedures/diagnostics.sql': Unknown system variable 'master_info_repository'
        ERROR 1193 (HY000) at line 20 in file: './procedures/diagnostics.sql': Unknown system variable 'relay_log_info_repository'

    For a quick hack to make the sys Schema work I changed the following:

        server_uuid to server_id
        @@master_info_repository to NULL (3 times)
        @@relay_log_info_repository to NULL (3 times)

    For the future, the community has to decide whether the sys Schema should be aware of the two branches, MariaDB and MySQL, and act accordingly, or whether the sys Schema has to be forked to work properly for MariaDB and implement MariaDB-specific functionality.

    When the sys Schema is finally installed, you have the following tables to get your performance metrics:

        mariadb> use sys
        mariadb> SHOW TABLES;
        +-----------------------------------------------+
        | Tables_in_sys                                 |
        +-----------------------------------------------+
        | host_summary                                  |
        | host_summary_by_file_io                       |
        | host_summary_by_file_io_type                  |
        | host_summary_by_stages                        |
        | host_summary_by_statement_latency             |
        | host_summary_by_statement_type                |
        | innodb_buffer_stats_by_schema                 |
        | innodb_buffer_stats_by_table                  |
        | innodb_lock_waits                             |
        | io_by_thread_by_latency                       |
        | io_global_by_file_by_bytes                    |
        | io_global_by_file_by_latency                  |
        | io_global_by_wait_by_bytes                    |
        | io_global_by_wait_by_latency                  |
        | latest_file_io                                |
        | metrics                                       |
        | processlist                                   |
        | ps_check_lost_instrumentation                 |
        | schema_auto_increment_columns                 |
        | schema_index_statistics                       |
        | schema_object_overview                        |
        | schema_redundant_indexes                      |
        | schema_table_statistics                       |
        | schema_table_statistics_with_buffer           |
        | schema_tables_with_full_table_scans           |
        | schema_unused_indexes                         |
        | session                                       |
        | statement_analysis                            |
        | statements_with_errors_or_warnings            |
        | statements_with_full_table_scans              |
        | statements_with_runtimes_in_95th_percentile   |
        | statements_with_sorting                       |
        | statements_with_temp_tables                   |
        | sys_config                                    |
        | user_summary                                  |
        | user_summary_by_file_io                       |
        | user_summary_by_file_io_type                  |
        | user_summary_by_stages                        |
        | user_summary_by_statement_latency             |
        | user_summary_by_statement_type                |
        | version                                       |
        | wait_classes_global_by_avg_latency            |
        | wait_classes_global_by_latency                |
        | waits_by_host_by_latency                      |
        | waits_by_user_by_latency                      |
        | waits_global_by_latency                       |
        +-----------------------------------------------+

    One query as an example, the top 10 MariaDB global I/O latency files on my system:

        mariadb> SELECT * FROM sys.waits_global_by_latency LIMIT 10;
        +--------------------------------------+-------+---------------+-------------+-------------+
        | events                               | total | total_latency | avg_latency | max_latency |
        +--------------------------------------+-------+---------------+-------------+-------------+
        | wait/io/file/innodb/innodb_log_file  |   112 | 674.18 ms     | 6.02 ms     | 23.75 ms    |
        | wait/io/file/innodb/innodb_data_file |   892 | 394.60 ms     | 442.38 us   | 29.74 ms    |
        | wait/io/file/sql/FRM                 |   668 | 72.85 ms      | 109.05 us   | 20.17 ms    |
        | wait/io/file/sql/binlog_index        |    10 | 21.25 ms      | 2.13 ms     | 15.74 ms    |
        | wait/io/file/sql/binlog              |    19 | 11.18 ms      | 588.56 us   | 10.38 ms    |
        | wait/io/file/myisam/dfile            |    79 | 10.48 ms      | 132.66 us   | 3.78 ms     |
        | wait/io/file/myisam/kfile            |    86 | 7.23 ms       | 84.01 us    | 789.44 us   |
        | wait/io/file/sql/dbopt               |    35 | 1.95 ms       | 55.61 us    | 821.68 us   |
        | wait/io/file/aria/MAI                |   269 | 1.18 ms       | 4.40 us     | 91.20 us    |
        | wait/io/table/sql/handler            |    36 | 710.89 us     | 19.75 us    | 125.37 us   |
        +--------------------------------------+-------+---------------+-------------+-------------+
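    The quick hack described above can also be scripted. A minimal sketch, assuming a GNU sed environment; the replacements are the three from the hack above, demonstrated on a stand-in file (on a real sys Schema checkout you would point the sed commands at ./procedures/diagnostics.sql):

```shell
# Sketch: apply the MariaDB quick-hack replacements to a sys Schema source file.
# diagnostics_demo.sql stands in for ./procedures/diagnostics.sql here.
cat > diagnostics_demo.sql <<'EOF'
SELECT @@server_uuid;
SELECT @@master_info_repository;
SELECT @@relay_log_info_repository;
EOF

# server_uuid does not exist in MariaDB 10.2 -> use server_id instead
sed -i 's/server_uuid/server_id/g' diagnostics_demo.sql
# master_info_repository / relay_log_info_repository do not exist -> NULL
sed -i 's/@@master_info_repository/NULL/g' diagnostics_demo.sql
sed -i 's/@@relay_log_info_repository/NULL/g' diagnostics_demo.sql

cat diagnostics_demo.sql
```

    Note that `sed -i` edits in place, so keep a pristine copy of the sys Schema sources around before applying the hack.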
  2. MySQL Log Rotation

    Overview

    I find far too often that MySQL error and slow query logs are unaccounted for. Setting up log rotation helps keep the logs manageable in the event that they start to fill up, and can make your troubleshooting of issues more efficient.

    Setup

    All steps in the examples below are run as the root user. The first step is to set up a user that will perform the log rotation. It is recommended to give the MySQL user only enough access for the task it is performing.

    Create the log rotate MySQL user:

        mysql> CREATE USER 'log_rotate'@'localhost' IDENTIFIED BY '<ENTER PASSWORD HERE>';
        mysql> GRANT RELOAD, SUPER ON *.* TO 'log_rotate'@'localhost';
        mysql> FLUSH PRIVILEGES;

    The next step is to set up the MySQL authentication config as root. Here are two methods. The first method is the more secure way of storing your MySQL credentials, using mysql_config_editor, as the credentials are stored encrypted. But this first method is only available with an Oracle MySQL or Percona MySQL client newer than 5.5; it is not available with the MariaDB client. Method 2 can be used with pretty much any setup but is less secure, as the password is stored in plain text.
    Method 1

        bash # mysql_config_editor set \
                 --login-path=logrotate \
                 --host=localhost \
                 --user=log_rotate \
                 --port 3306 \
                 --password

    Method 2

        bash # vi /root/.my.cnf

            [client]
            user=log_rotate
            password='<ENTER PASSWORD HERE>'

        bash # chmod 600 /root/.my.cnf

    Now we will test to make sure this is working as expected:

    Method 1

        bash # mysqladmin --login-path=logrotate ping

    Method 2

        bash # mysqladmin ping

    The paths for the error and slow query logs will need to be gathered in order to place them in the logrotate config file:

    Method 1

        bash # mysql --login-path=logrotate -e "show global variables like 'slow_query_log_file'"
        bash # mysql --login-path=logrotate -e "show global variables like 'log_error'"

    Method 2

        bash # mysql -e "show global variables like 'slow_query_log_file'"
        bash # mysql -e "show global variables like 'log_error'"

    Finally, we will create the log rotation file with the following content. Make sure to update the error and slow query log paths to match the paths gathered in the previous steps. Start by opening up the editor for a new mysql logrotate file in the /etc/logrotate.d directory.
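    Extracting just the file path from those variable queries is easy to script. A minimal sketch, assuming the client's batch flags (-B -N), which print tab-separated "variable value" pairs with no header; the sample string below stands in for live server output:

```shell
# In batch mode (-B -N) the mysql client prints "variable<TAB>value",
# so the log path is simply the second tab-separated field.
# Stand-in for: mysql --login-path=logrotate -B -N -e "show global variables like 'log_error'"
sample_output="$(printf 'log_error\t/var/lib/mysql/error.log')"
log_error_path=$(printf '%s\n' "$sample_output" | cut -f2)
echo "$log_error_path"
```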
    Method 1 Content

        bash # vi /etc/logrotate.d/mysql

            /var/lib/mysql/error.log /var/lib/mysql/slow.queries.log {
                create 600 mysql mysql
                daily
                rotate 30
                missingok
                compress
                sharedscripts
                postrotate
                    if test -x /usr/bin/mysqladmin && env HOME=/root /usr/bin/mysqladmin --login-path=logrotate ping > /dev/null 2>&1
                    then
                        env HOME=/root/ /usr/bin/mysql --login-path=logrotate -e 'select @@global.long_query_time into @lqt_save; set global long_query_time=2000; set global slow_query_log=0; select sleep(2); FLUSH ERROR LOGS; FLUSH SLOW LOGS; select sleep(2); set global long_query_time=@lqt_save; set global slow_query_log=1;' > /var/log/mysqladmin.flush-logs 2>&1
                    fi
                endscript
            }

    Method 2 Content

        bash # vi /etc/logrotate.d/mysql

            /var/lib/mysql/error.log /var/lib/mysql/slow.queries.log {
                create 600 mysql mysql
                daily
                rotate 30
                missingok
                compress
                sharedscripts
                postrotate
                    if test -x /usr/bin/mysqladmin && env HOME=/root /usr/bin/mysqladmin ping > /dev/null 2>&1
                    then
                        env HOME=/root/ /usr/bin/mysql -e 'select @@global.long_query_time into @lqt_save; set global long_query_time=2000; set global slow_query_log=0; select sleep(2); FLUSH ERROR LOGS; FLUSH SLOW LOGS; select sleep(2); set global long_query_time=@lqt_save; set global slow_query_log=1;' > /var/log/mysqladmin.flush-logs 2>&1
                    fi
                endscript
            }

    Validation

    For final validation, force a log rotation. Update the path in the ls command to match the path of the logs gathered earlier.

        bash # logrotate --force /etc/logrotate.d/mysql
        bash # ls -al /var/lib/mysql
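    The logrotate config can also be written from a script instead of edited by hand. A minimal sketch of the Method 1 variant, with the postrotate SQL abbreviated to the two FLUSH statements for readability; a temporary file is used here so the sketch is safe to run, whereas in production you would write to /etc/logrotate.d/mysql:

```shell
# Write an abbreviated Method 1 logrotate config from a heredoc.
# Production target: /etc/logrotate.d/mysql (temp file used here for safety).
conf=$(mktemp)
cat > "$conf" <<'EOF'
/var/lib/mysql/error.log /var/lib/mysql/slow.queries.log {
    create 600 mysql mysql
    daily
    rotate 30
    missingok
    compress
    sharedscripts
    postrotate
        if test -x /usr/bin/mysqladmin && env HOME=/root /usr/bin/mysqladmin --login-path=logrotate ping > /dev/null 2>&1
        then
            env HOME=/root/ /usr/bin/mysql --login-path=logrotate -e 'FLUSH ERROR LOGS; FLUSH SLOW LOGS;' > /var/log/mysqladmin.flush-logs 2>&1
        fi
    endscript
}
EOF
echo "wrote $conf"
```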
  3. Docker Compose Setup for InnoDB Cluster

    In the following we show how InnoDB Cluster can be deployed in a container context. In the official documentation (Introducing InnoDB Cluster), it is described as: "MySQL InnoDB cluster provides a complete high availability solution for MySQL. MySQL Shell includes AdminAPI which enables you to easily configure and administer a group of at least three […]"
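    As a rough illustration of that container context, a hypothetical docker-compose.yml for three MySQL server containers might look like the sketch below. The service names, network name, and root password here are invented for illustration and are not from the article; the cluster itself would still have to be created afterwards with MySQL Shell's AdminAPI (e.g. dba.createCluster()):

```yaml
# Hypothetical sketch: three MySQL 8.0 containers as raw material for an
# InnoDB cluster; group membership is configured afterwards via MySQL Shell.
version: "3"
services:
  mysql1:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder value
    networks: [dbnet]
  mysql2:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks: [dbnet]
  mysql3:
    image: mysql/mysql-server:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks: [dbnet]
networks:
  dbnet: {}
```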
  4. How to install XMB forum on Ubuntu 16.04 LTS

    XMB forum, also known as eXtreme Message Board, is free and open source forum software written in PHP that uses a MySQL database backend. XMB is simple, lightweight, easy to use, powerful, and highly customizable. In this tutorial, we will learn how to install XMB forum on Ubuntu 16.04.
  5. FLUSH and LOCK Handling in Percona XtraDB Cluster

    In this blog post, we'll look at how Percona XtraDB Cluster (PXC) executes FLUSH and LOCK handling.

    Introduction

    Percona XtraDB Cluster is a multi-master solution that allows parallel execution of transactions on multiple nodes at the same point in time. Given these semantics, it is important to understand how Percona XtraDB Cluster executes FLUSH and LOCK statements (which operate at node level). The sections below list the different flavors of these statements and their PXC semantics.

    FLUSH TABLE WITH READ LOCK

    FTWRL is normally used for backup purposes. Execution of this command establishes a global-level read lock. This read lock is non-preemptable by the background applier thread. PXC causes the node to move to the DESYNC state (thereby blocking emission of flow control) and also pauses the node.

        2018-03-08T05:09:54.293991Z 0 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 1777)
        2018-03-08T05:09:58.040809Z 5 [Note] WSREP: Provider paused at c7daf065-2285-11e8-a848-af3e3329ab8f:2002 (2047)
        2018-03-08T05:14:20.508317Z 5 [Note] WSREP: resuming provider at 2047
        2018-03-08T05:14:20.508350Z 5 [Note] WSREP: Provider resumed.
        2018-03-08T05:14:20.508887Z 0 [Note] WSREP: Member 1.0 (n2) resyncs itself to group
        2018-03-08T05:14:20.508900Z 0 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 29145)
        2018-03-08T05:15:16.932759Z 0 [Note] WSREP: Member 1.0 (n2) synced with group.
        2018-03-08T05:15:16.932782Z 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 29145)
        2018-03-08T05:15:16.988029Z 2 [Note] WSREP: Synchronized with group, ready for connections
        2018-03-08T05:15:16.988054Z 2 [Note] WSREP: Setting wsrep_ready to true

    Other nodes of the cluster continue to process the workload. The DESYNCed and paused node continues to see the replication traffic; though it doesn't process the write-sets, they are appended to the Galera cache for future processing.
    Fallback: When FTWRL is released (through UNLOCK TABLES), and if the workload is active on other nodes of the cluster, the node that executed FTWRL may start emitting flow control to cover the backlog. Check details here.

    FLUSH TABLE <tablename> (WITH READ LOCK|FOR EXPORT)

    This is meant to take a global-level read lock on the said table only. This lock command is not replicated, so pxc_strict_mode = ENFORCING blocks execution of this command. The read lock is non-preemptable by the background applier thread. Execution of this command will cause the node to pause. If the node executing the flush command is the same as the workload-processing node, the node pauses immediately. If the node executing the flush command is different from the workload-processing node, the write-sets are queued to the incoming queue and flow control causes the pause. The end result is that the cluster will stall in both cases.

        2018-03-07T06:40:00.143783Z 5 [Note] WSREP: Provider paused at 40de14ba-21be-11e8-8e3d-0ee226700bda:147682 (149032)
        2018-03-07T06:40:00.144347Z 5 [Note] InnoDB: Sync to disk of `test`.`t` started.
        2018-03-07T06:40:00.144365Z 5 [Note] InnoDB: Stopping purge
        2018-03-07T06:40:00.144468Z 5 [Note] InnoDB: Writing table metadata to './test/t.cfg'
        2018-03-07T06:40:00.144537Z 5 [Note] InnoDB: Table `test`.`t` flushed to disk
        2018-03-07T06:40:01.855847Z 5 [Note] InnoDB: Deleting the meta-data file './test/t.cfg'
        2018-03-07T06:40:01.855874Z 5 [Note] InnoDB: Resuming purge
        2018-03-07T06:40:01.855955Z 5 [Note] WSREP: resuming provider at 149032
        2018-03-07T06:40:01.855970Z 5 [Note] WSREP: Provider resumed.

    Once the lock is released (through UNLOCK TABLES), the node resumes applying write-sets.

    LOCK TABLE <tablename> READ/WRITE

    The LOCK TABLE command is meant to lock the said table in the said mode. Again, the lock established by this command is non-preemptable. The lock is taken at node level (the command is not replicated), so pxc_strict_mode = ENFORCING blocks this command.
    There is no state change in PXC on the execution of this command. If the lock is taken on a table that is not being touched by the active workload, the workload can continue to progress. If the lock is taken on a table that is part of the workload, the transactions in the workload will wait for the lock to be released, which in turn causes the complete workload to halt.

    GET_LOCK

    This is a named lock and follows the same semantics as LOCK TABLE for PXC. (The base semantics of MySQL are slightly different; you can check them here.)

    LOCK TABLES FOR BACKUP

    As the name suggests, this lock is specially meant for backup and blocks non-transactional changes (like updates to non-transactional engines such as MyISAM, and DDL changes). PXC doesn't have any special add-on semantics for this command.

    LOCK BINLOG FOR BACKUP

    This statement blocks writes to the binlog. PXC always generates a binlog (persisting it to disk is controlled by the log-bin setting). If you disable log-bin, PXC enables emulation-based binlogging. This effectively means this command can cause the cluster to stall.

    Tracking active lock/flush

    If you have executed a flush or lock command and want to find out, it is possible using the Com_% counters. These counters are connection-specific, so execute these commands from the same client connection. Also, these counters are aggregate counters and incremental only.

        mysql> show status like 'Com%lock%';
        +----------------------------+-------+
        | Variable_name              | Value |
        +----------------------------+-------+
        | Com_lock_tables            | 2     |
        | Com_lock_tables_for_backup | 1     |
        | Com_lock_binlog_for_backup | 1     |
        | Com_unlock_binlog          | 1     |
        | Com_unlock_tables          | 5     |
        +----------------------------+-------+
        5 rows in set (0.01 sec)

        mysql> show status like '%flush%';
        +--------------------------------------+---------+
        | Variable_name                        | Value   |
        +--------------------------------------+---------+
        | Com_flush                            | 4       |
        | Flush_commands                       | 3       |
        +--------------------------------------+---------+

    * Flush_commands is a global counter.
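    Because these counters are aggregate and incremental only, spotting a freshly executed lock command means diffing two snapshots. A minimal sketch of that bookkeeping, using canned snapshot files in place of live output (on a real server, `mysql -B -N -e "show status like 'Com%lock%'"` produces this tab-separated form; the counter names are the ones from the output above):

```shell
# Diff two tab-separated "show status" snapshots and print counters that moved.
# before.txt / after.txt stand in for two successive live snapshots.
printf 'Com_lock_tables\t2\nCom_unlock_tables\t5\n' > before.txt
printf 'Com_lock_tables\t3\nCom_unlock_tables\t5\n' > after.txt

# First pass stores the old values; second pass prints names whose value changed.
changed=$(awk -F'\t' 'NR==FNR { prev[$1]=$2; next } $2 != prev[$1] { print $1 }' before.txt after.txt)
echo "$changed"
```

    Here only Com_lock_tables moved between the snapshots, so a LOCK TABLES statement was executed in between.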
Check MySQL documentation for more details. Conclusion By now, we can conclude that the user should be a bit more careful when executing local lock commands (understanding the semantics and the effect). Careful execution of these commands can help serve your purpose. The post FLUSH and LOCK Handling in Percona XtraDB Cluster appeared first on Percona Database Performance Blog.