Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. The state of Orchestrator, 2020 (spoiler: healthy)

    This post serves as a pointer to my previous announcement, The state of Orchestrator, 2020. Thank you to Tom Krouper, who applied his operational engineering expertise to content publishing problems.
  2. ProxySQL Config file creation | Backup solution

    We are well aware that ProxySQL is one of the most powerful SQL-aware proxies for MySQL. The ProxySQL configuration is flexible, and most of the configuration can be done from the ProxySQL client itself. The latest ProxySQL release (2.0.9) has a few impressive features such as the SQL injection engine, firewall whitelist, and config file generation. In this blog I am going to explain how to generate the ProxySQL config file using the ProxySQL client.

    Why a configuration file?

    - Backup solution
    - Helpful for Ansible deployments in multiple environments

    There are two important commands involved in ProxySQL config file generation:

    - Print the config file text in the ProxySQL client itself (like query output):
      cmd : SELECT CONFIG FILE ;
    - Export the configuration to a separate file:
      cmd : SELECT CONFIG INTO OUTFILE /path/config

    Below is a bash script that helps back up the ProxySQL configuration. It can be scheduled in cron at a convenient time (a crontab example follows at the end of this post).

    Script:

    [root@ip-172-31-8-156 ProxySQL]# cat backup.sh
    #!/bin/bash

    # Variables
    backup_path="/var/lib/ProxySQL_backup/data"
    user=admin
    pass=admin
    port=6032
    host=127.0.0.1
    back_name=ProxySQL_backup_$(date -u +%Y-%m-%dT%H-%M-%S)
    log_path="/var/lib/ProxySQL_backup/log"

    # Live check: only take a backup if the proxysql process is running
    ProxySQL_livecheck() {
      if [[ $(pgrep proxysql) ]]; then
        ProxySQL_Backup
      else
        echo "Backup ( $back_name ) failed" >> "$log_path/backup.err"
        exit 1
      fi
    }

    # Backup: export the running configuration through the admin interface
    ProxySQL_Backup() {
      mysql -u$user -p$pass -P$port -h$host -e "select config into outfile $backup_path/$back_name"
      echo "Backup ( $back_name ) completed" >> "$log_path/backup.log"
    }

    # Call
    ProxySQL_livecheck

    Thanks !!!
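    As a minimal sketch of the cron scheduling mentioned above (the script path /var/lib/ProxySQL_backup/backup.sh and the 02:00 schedule are illustrative assumptions, not from the original post):

    # make the script executable, then add a crontab entry via crontab -e
    chmod +x /var/lib/ProxySQL_backup/backup.sh

    # run the ProxySQL config backup every day at 02:00
    0 2 * * * /bin/bash /var/lib/ProxySQL_backup/backup.sh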
  3. How to Deploy and Use MySQL InnoDB ReplicaSet in Production?

    How to Deploy MySQL InnoDB ReplicaSet in Production?

    Before I talk about the deployment process of MySQL InnoDB ReplicaSet, it is important to know the following:

    - What is MySQL InnoDB ReplicaSet?
    - What are the prerequisites and limitations of using MySQL ReplicaSet?
    - In what kind of scenarios is MySQL ReplicaSet recommended?
    - How to configure and deploy MySQL ReplicaSet (step-by-step guide)
    - How to use InnoDB ReplicaSet?
    - What if the Primary goes down? Are SELECT queries re-routed to another server?
    - What if a Secondary goes down while executing SELECT queries?

    I will answer all of these questions in this blog.

    What is a ReplicaSet?

    MySQL InnoDB ReplicaSet is a quick and easy way to get MySQL replication (master-slave), making it well suited to scaling out reads, and it provides manual failover capabilities in use cases that do not require the high availability offered by MySQL InnoDB Cluster.

    Suppose you have one server running your workload and you have to bring high availability in place for the application. To achieve high availability in MySQL you need a minimum of two MySQL servers running on two different hosts. Until now, setting up the link between those two hosts meant preparing and qualifying each server to be part of the HA setup, which required knowing the basics of MySQL replication. From MySQL 8.0.19 you no longer have to spend time on preparation, qualification and configuration-level changes: MySQL InnoDB ReplicaSet automates the job.

    MySQL ReplicaSet is a set of three components:

    - MySQL Shell
    - MySQL Router
    - A set of MySQL Servers (minimum number of servers: 2)

    It works only with a single primary and multiple secondary servers, replicating in ASYNC mode.

    MySQL Shell includes AdminAPI, which enables you to easily configure, administer, and deploy a group of MySQL Servers.
    More info: https://dev.mysql.com/doc/refman/8.0/en/mysql-shell-userguide.html

    MySQL Router, which is part of ReplicaSet, is a lightweight middleware that provides transparent routing between your application and the back-end MySQL Servers. Its purpose is to serve R/W requests to the primary instance through port 6446 and R/O requests to the secondary instances through port 6447. It is always recommended to install MySQL Router on the app server, because:

    - The application is the one sending requests: Application -------> Router -------> List of MySQL Servers
    - It decreases network latency

    More info: https://dev.mysql.com/doc/mysql-router/8.0/en/mysql-router-general.html

    What are the prerequisites and limitations of using MySQL ReplicaSet?

    - Manual failover only.
    - No multi-primary topology.
    - All secondary members replicate from the primary.
    - GTID based.
    - All MySQL Servers must be version 8.0.19 or higher.
    - Only row-based replication is supported.
    - Replication filters are not supported.
    - The ReplicaSet must be managed by MySQL Shell.
    - Always prefer MySQL Clone over incremental recovery as the recovery method.

    More limitations: https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-replicasets-introduction.html

    In what kind of scenarios is MySQL ReplicaSet recommended?

    Below are the top features which make the life of a DBA simple:

    - Scaling read workloads.
    - Manual failover in the event the primary node goes down.
    - Useful where RPO/RTO time can be compromised.
    - MySQL Shell automatically configures users and replication.
    - Easy to deploy without editing the my.cnf/my.ini file.
    - No need to spend time on backup and restore to provision a new node: the built-in MySQL CLONE feature saves a lot of time when bringing up another server for replication. More on cloning: https://mysqlserverteam.com/clone-create-mysql-instance-replica/
    - Integrated MySQL Router load balancing.
    - Easy to get started with MySQL high availability for all tiers of applications.

    How to configure and deploy MySQL ReplicaSet

    Step-by-step guide to deploy MySQL ReplicaSet in production. In this tutorial I will use two machines where MySQL is running:

    Machine 01: 10.0.10.33
    Machine 02: 10.0.10.38

    Make sure the following software is installed:

    1. MySQL Server 8.0.19
    2. MySQL Shell
    3. MySQL Router (it can be installed on either the MySQL Server or, as recommended, the Application Server)

    Step 1: Configure each machine to participate in the InnoDB ReplicaSet

    ## On Machine 01
    mysqlsh
    shell.connect("root@10.0.10.33:3306");
    Creating a session to 'root@10.0.10.33:3306'
    Please provide the password for 'root@10.0.10.33:3306': ********
    Save password for 'root@10.0.10.33:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
    Fetching schema names for autocompletion... Press ^C to stop.
    Your MySQL connection id is 13
    Server version: 8.0.19-commercial MySQL Enterprise Server - Commercial
    No default schema selected; type \use <schema> to set one.
    <ClassicSession:root@10.0.10.33:3306>

     MySQL  10.0.10.33:3306 ssl  JS > dba.configureReplicaSetInstance("root@10.0.10.33:3306",{clusterAdmin: "'rsadmin'@'10.0.10.33%'"});
    Configuring local MySQL instance listening at port 3306 for use in an InnoDB ReplicaSet...
    This instance reports its own address as Workshop-33:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    Password for new account: ********
    Confirm password: ********
    NOTE: Some configuration options need to be fixed:
    +--------------------------+---------------+----------------+--------------------------------------------------+
    | Variable                 | Current Value | Required Value | Note                                             |
    +--------------------------+---------------+----------------+--------------------------------------------------+
    | enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
    | gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
    | server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
    +--------------------------+---------------+----------------+--------------------------------------------------+
    Some variables need to be changed, but cannot be done dynamically on the server.
    Do you want to perform the required configuration changes? [y/n]: y
    Do you want to restart the instance after configuring it? [y/n]: y
    Cluster admin user 'rsadmin'@'10.0.10.33%' created.
    Configuring instance...
    The instance 'Workshop-33:3306' was configured to be used in an InnoDB ReplicaSet.
    Restarting MySQL...
    NOTE: MySQL server at Workshop-33:3306 was restarted.

    ## On Machine 02
    mysqlsh
    shell.connect("root@10.0.10.38:3306");
    Creating a session to 'root@10.0.10.38:3306'
    Please provide the password for 'root@10.0.10.38:3306': ********
    Save password for 'root@10.0.10.38:3306'?
    [Y]es/[N]o/Ne[v]er (default No): Y
    Fetching schema names for autocompletion... Press ^C to stop.
    Your MySQL connection id is 10
    Server version: 8.0.19-commercial MySQL Enterprise Server - Commercial
    No default schema selected; type \use <schema> to set one.
    <ClassicSession:root@10.0.10.38:3306>

    dba.configureReplicaSetInstance("root@10.0.10.38:3306",{clusterAdmin: "'rsadmin'@'10.0.10.38%'"});
    Configuring local MySQL instance listening at port 3306 for use in an InnoDB ReplicaSet...
    This instance reports its own address as Workshop-38:3306
    Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
    Password for new account: ********
    Confirm password: ********
    NOTE: Some configuration options need to be fixed:
    +--------------------------+---------------+----------------+--------------------------------------------------+
    | Variable                 | Current Value | Required Value | Note                                             |
    +--------------------------+---------------+----------------+--------------------------------------------------+
    | enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
    | gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
    | server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
    +--------------------------+---------------+----------------+--------------------------------------------------+
    Some variables need to be changed, but cannot be done dynamically on the server.
    Do you want to perform the required configuration changes? [y/n]: y
    Do you want to restart the instance after configuring it? [y/n]: y
    Cluster admin user 'rsadmin'@'10.0.10.38%' created.
    Configuring instance...
    The instance 'Workshop-38:3306' was configured to be used in an InnoDB ReplicaSet.
    Restarting MySQL...
    NOTE: MySQL server at Workshop-38:3306 was restarted.
     MySQL  10.0.10.38:3306 ssl  JS >

    Step 2: Create the ReplicaSet and add a database node to it

    ## Connect to Machine 01
    mysqlsh
    shell.connect("root@10.0.10.33:3306");
    var rs = dba.createReplicaSet("MyReplicatSet")
    A new replicaset with instance 'Workshop-33:3306' will be created.

    * Checking MySQL instance at Workshop-33:3306
    This instance reports its own address as Workshop-33:3306
    Workshop-33:3306: Instance configuration is suitable.
    * Updating metadata...

    ReplicaSet object successfully created for Workshop-33:3306.
    Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

     MySQL  10.0.10.33:3306 ssl  JS > rs.addInstance("10.0.10.38:3306");
    Adding instance to the replicaset...

    * Performing validation checks
    This instance reports its own address as Workshop-38:3306
    Workshop-38:3306: Instance configuration is suitable.
    * Checking async replication topology...
    * Checking transaction state of the instance...

    The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'Workshop-38:3306' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

    WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

    Incremental state recovery was selected because it seems to be safely usable.

    * Updating topology
    ** Configuring Workshop-38:3306 to replicate from Workshop-33:3306
    ** Waiting for new instance to synchronize with PRIMARY...

    The instance 'Workshop-38:3306' was added to the replicaset and is replicating from Workshop-33:3306.

     MySQL  10.0.10.33:3306 ssl  JS > rs.status();
    {
        "replicaSet": {
            "name": "ReplicatSet",
            "primary": "Workshop-38:3306",
            "status": "AVAILABLE",
            "statusText": "All instances available.",
            "topology": {
                "10.0.10.39:3306": {
                    "address": "10.0.10.39:3306",
                    "instanceRole": "SECONDARY",
                    "mode": "R/O",
                    "replication": {
                        "applierStatus": "APPLIED_ALL",
                        "applierThreadState": "Slave has read all relay log; waiting for more updates",
                        "receiverStatus": "ON",
                        "receiverThreadState": "Waiting for master to send event",
                        "replicationLag": null
                    },
                    "status": "ONLINE"
                },
                "Workshop-38:3306": {
                    "address": "Workshop-38:3306",
                    "instanceRole": "PRIMARY",
                    "mode": "R/W",
                    "status": "ONLINE"
                }
            },
            "type": "ASYNC"
        }
    }

    Step 3: Configure the Router so the app can talk to the ReplicaSet

    mysqlrouter --force --user=root --bootstrap root@10.0.10.38:3306 --directory myrouter

    # In case the Router runs on a remote machine (cluster on 10.0.10.14):
    mysqlrouter --bootstrap root@10.0.10.14:3310 --directory myrouter

    Step 4: Start the Router

    myrouter/start.sh

    Step 5: Using the ReplicaSet

    mysqlsh
    MySQL JS> shell.connect("root@127.0.0.1:6446");
    \sql
    SQL> SELECT * FROM performance_schema.replication_group_members;
    CREATE DATABASE sales;
    USE sales;
    CREATE TABLE if not exists sales.employee(empid int primary key auto_increment,empname varchar(100),salary int,deptid int);
    INSERT sales.employee(empname,salary,deptid) values('Ram',1000,10);
    INSERT sales.employee(empname,salary,deptid) values('Raja',2000,10);
    INSERT sales.employee(empname,salary,deptid) values('Sita',3000,20);
    SELECT * FROM sales.employee;

    Connect through the Router to the other machine to verify the changes.
    mysqlsh
    JS> shell.connect("root@127.0.0.1:6447");
    \sql
    SQL> SELECT * FROM sales.employee;
    SQL> INSERT sales.employee values(100,'Ram',1000,10);
    <Error, because this machine is not allowed to execute DML/DDL statements.>

    ## Create a disaster
    # service mysqld stop

    RS1 = dba.getReplicaSet()
    RS1.status();
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    ReplicaSet.status: Failed to execute query on Metadata server 10.0.10.38:3306: Lost connection to MySQL server during query (MySQL Error 2013)
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    ReplicaSet.status: The Metadata is inaccessible (MetadataError)
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    ReplicaSet.status: The Metadata is inaccessible (MetadataError)
     MySQL  10.0.10.38:3306 ssl  JS >

    MySQL-JS> shell.connect("root@localhost:6446");
    Creating a session to 'root@10.0.10.38:6446'
    Please provide the password for 'root@10.0.10.38:6446': ********
    Shell.connect: Can't connect to remote MySQL server for client connected to '0.0.0.0:6446' (MySQL Error 2003)

    # service mysqld start
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    ReplicaSet.status: The Metadata is inaccessible (MetadataError)
     MySQL  10.0.10.38:3306 ssl  JS > RS1=dba.getReplicaSet()
    You are connected to a member of replicaset 'ReplicatSet'.
    <ReplicaSet:ReplicatSet>
    RS1=dba.getReplicaSet()
    RS1.status()

    ## Connect to the Router again to send traffic
    mysqlsh
    shell.connect("root@localhost:6447");
    \sql
    SQL> SELECT * FROM sales.employee;

    Scenario #01: Assume the primary goes down, and you run:
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    Error: ReplicaSet.status: The Metadata is inaccessible (MetadataError)
     MySQL  10.0.10.38:3306 ssl  JS >
    Now the primary machine is back up, and you run:
     MySQL  10.0.10.38:3306 ssl  JS > RS1.status()
    ReplicaSet.status: The Metadata is inaccessible (MetadataError)
    >> It does not get refreshed. Fix: re-fetch the ReplicaSet object:
    RS1 = dba.getReplicaSet()
    RS1.status();

    Scenario #02: Create a disaster. What if the primary node fails while the application is executing the query below?
    while [ 1 ]; do sleep 1; mysql -h127.0.0.1 -uroot -p123456 -P6446 -e "INSERT sales.employee(empname,salary,deptid) values('Ram',1000,10); select count(*) from sales.employee"; done
    \JS
    # Stop the primary MySQL instance
    service mysqld stop

    You can see the INSERT query stops working and ends with an error. Now let's execute only the SELECT query and see what happens. Since the primary node is down, MySQL Router stops sending any query to port 6446, BUT the Router has another port open for sending ONLY SELECT queries: it will use port 6447 for them. Let's re-execute the same loop with only the SELECT query, connecting to the R/O port 6447:
    while [ 1 ]; do sleep 1; mysql -h127.0.0.1 -uroot -p123456 -P6447 -e "select count(*) from sales.employee"; done

    You are able to access the other machine, which is the replica (10.0.10.38). Now, let's try re-connecting to the primary node (10.0.10.33): will it work or not? The result shows that even if the primary node is down, as long as a replica is alive, SELECT queries keep working:
    select @@hostname; --> 10.0.10.38

    Scenario #03: Create a disaster. What if a secondary node fails?
    # Stop the MySQL instance on the secondary
    10.0.10.38$ service mysqld stop

    The primary still works even though the secondary node is down; that is by design of MySQL replication. Now, since the secondary node is down, let's connect to port 6447 and send only SELECT queries:
    while [ 1 ]; do sleep 1; mysql -h127.0.0.1 -uroot -p123456 -P6447 -e "select count(*) from sales.employee"; done

    What will happen? Even though the secondary node is down, MySQL Router re-routes the connection to the primary server and returns results. Re-confirm: can you spot one important observation? Why is port 6447 executing queries, even R/W ones? When we send statements to port 6447 and the secondary is unavailable, the Router routes them to the primary node. This is because, as per the documentation, when you use MySQL Router with a replica set:

    - The read-write port of MySQL Router directs client connections to the primary instance of the replica set.
    - The read-only port of MySQL Router directs client connections to a secondary instance of the replica set, although it could also direct them to the primary.

    Please try this brand new feature to set up MySQL replication with the help of MySQL Shell (a sketch of the manual switchover/failover commands follows below).

    Want to know more?
    https://dev.mysql.com/doc/refman/8.0/en/working-with-replicasets.html
    https://mysqlserverteam.com/introducing-mysql-innodb-replicaset/
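    The post lists manual failover as a key capability but does not show the commands. As a minimal sketch, not from the original post, and assuming the rsadmin account and the 10.0.10.33/10.0.10.38 hosts from the tutorial above, a planned switchover or an emergency failover could be run from MySQL Shell with the AdminAPI ReplicaSet methods:

    mysqlsh
    // Connect to any reachable member of the ReplicaSet
    shell.connect("rsadmin@10.0.10.38:3306");
    var rs = dba.getReplicaSet();

    // Planned switchover (all members reachable): promote a chosen secondary
    rs.setPrimaryInstance("10.0.10.38:3306");

    // Emergency failover (current primary unreachable): force-promote a secondary;
    // with asynchronous replication, transactions not yet replicated may be lost
    rs.forcePrimaryInstance("10.0.10.38:3306");

    rs.status();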
  4. MySQL InnoDB Cluster Replication via Router to Slave Single Node using SSL

    This article is written to share how to set up SSL replication between a MySQL InnoDB Cluster and a single-node slave via MySQL Router. It is for demo purposes ONLY. The video below shows replication continuing to work across a Primary Node failover in the MySQL InnoDB Cluster; the replication switches to the new Primary Node via the MySQL Router.
    https://youtu.be/R0jOMfZlF8c

    The general steps are as follows.

    Setup (demo only):

    Virtual Machine 1
    1. A working MySQL InnoDB Cluster

    Virtual Machine 2
    2. A working MySQL node as slave
    3. A working MySQL Router set up on the slave node, pointing to the MySQL InnoDB Cluster on VM1

    The key part is to ensure the "key files" are the same on each node of the MySQL InnoDB Cluster.

    For the InnoDB Cluster on VM1, for example with the demo setup:
      Node1 DataDir : /home/mysql/data/3310
      Node2 DataDir : /home/mysql/data/3320
      Node3 DataDir : /home/mysql/data/3330

    Copy all the *.pem files from the datadir of Node1 to the Node2 and Node3 datadirs. Thereafter, restarting all the servers and bringing the InnoDB Cluster ONLINE ensures the keys are the same for ALL nodes.

    For the SINGLE node slave in DR (VM2), the setup is as follows:
    - Set up MySQL Router to connect to the MySQL InnoDB Cluster on VM1, so that port 6446, the primary connection, always goes to the RW node.
    - Set up the MySQL Server as a standalone server on DR (empty, or restored from a backup of a MySQL Server in the InnoDB Cluster).
    - Copy the public key file (public_key.pem) from VM1 to VM2 (e.g. to /opt/download/lab/keys/public_key.pem).

    The MySQL replication channel can then be set up as follows. Because the Router is sitting on the server VM2, the master port is 6446. The replication is set up to connect to the ROUTER, with the master public key defined by the copied "public_key.pem" from VM1.

    mysql -uroot -h127.0.0.1 -P<the port of the slave> << EOL1
    change master to
      master_host='127.0.0.1',
      master_user='repl',
      master_password='repl',
      master_port=6446,
      master_public_key_path='/opt/download/lab/keys/public_key.pem',
      get_master_public_key=1,
      master_auto_position=1
    for channel 'channel1';
    start slave for channel 'channel1';
    EOL1

    Once the server is set up properly, the replication channel is started. A quick way to verify the channel is shown below. Thanks for reading.
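    As a quick sanity check (not part of the original post; it assumes the channel name 'channel1' and the repl user configured above), the channel status can be inspected on the slave with SHOW SLAVE STATUS:

    # Expect Slave_IO_Running and Slave_SQL_Running to be "Yes",
    # with Master_Host 127.0.0.1 and Master_Port 6446 (the Router)
    mysql -uroot -h127.0.0.1 -P<the port of the slave> \
      -e "SHOW SLAVE STATUS FOR CHANNEL 'channel1'\G" | grep -E 'Running:|Master_Host|Master_Port'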
  5. Important Health Checks for your MySQL Master-Slave Servers

    In a MySQL master-slave high availability (HA) setup, it is important to continuously monitor the health of the master and slave servers so you can detect potential issues and take corrective actions. In this blog post, we explain some basic health checks you can do on your MySQL master and slave nodes to ensure your setup is healthy. The monitoring program or script must alert the high availability framework if any of the health checks fails, enabling the high availability framework to take corrective actions to ensure service availability.

    MySQL Master Server Health Checks

    We recommend that your MySQL master monitoring program or script runs at frequent intervals. Assuming that the monitoring script is running on the same server as your MySQL server, you can check for the following.

    Ensure the MySQL service is running

    This can be done using a simple command like:

    > pgrep mysqld
    OR
    > service mysqld status

    Ensure you can connect to MySQL and do a simple query

    We recommend having a short timeout for these commands so you can quickly detect if MySQL is unresponsive. This can be achieved with a call like:

    /usr/bin/timeout 5 mysql -u testuser -ptestpswd -e 'select * from mysql.test'

    Be sure to examine the exit value of the above command:

    - Exit value = 0 ⇒ Success
    - Exit value = 1 ⇒ Failure
    - Exit value = 124 ⇒ Timeout

    If the command times out, it means that the MySQL service is not responsive enough. We advise you to retry after some time to avoid false negative results. If the exit code indicates a failure, the return code from MySQL will tell us the failure reason. One example of a failure is the 'Too many connections' error from MySQL, which happens if the number of connections to the server exceeds your 'max_connections' configuration value.

    Ensure the MySQL master is running in read-write mode

    You can use the following command to ensure the MySQL master is running in read-write mode:

    /usr/bin/timeout 5 mysql -u testuser -ptestpswd -e "SELECT @@global.read_only"

    The master is expected to always be running in read-write mode, and hence the value of read_only should be 'OFF'. It is also possible to combine this step with step 2: instead of running the test query 'select * from mysql.test', we can just run the query that fetches the read_only value.

    MySQL Slave Server Health Checks

    You can run the monitoring for your MySQL slaves at a lower frequency compared to the master, as they are not handling data writes. The first 3 steps of the slave health check can be the same as those of the master, except that we need to ensure the slave is running in read-only mode - the value of the variable read_only should be 'ON' in step 3. In addition, we can do more checks on the slave to ensure its replication status is healthy, such as:

    - The slave is configured to replicate from the right master.
    - The slave's connection to the master is healthy.
    - The slave is able to apply the master events it has received.

    It is possible to check all of the above using the 'show slave status' command. For example:

    mysql> show slave status \G
    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: 172.31.17.43
                      Master_User: repl_user
                      Master_Port: 3306
                    Connect_Retry: 10
                  Master_Log_File: mysql-bin.000001
              Read_Master_Log_Pos: 7510
                   Relay_Log_File: relay-log.000006
                    Relay_Log_Pos: 414
            Relay_Master_Log_File: mysql-bin.000001
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
    ******************Truncated*********************************

    The Master_Host value indicates which master server is configured for replication. For the Slave_IO_Running value, "Yes" indicates that the slave has connected to the master and is receiving the replication stream. For the Slave_SQL_Running value, "Yes" indicates that the slave's applier is running and able to apply all the events received from the master. A minimal script combining these checks is sketched at the end of this post.

    In this blog post, we discussed some simple checks that can detect basic issues in your MySQL master and slave servers. In general, the failure detection mechanism in a high availability setup is a complex subject and needs a robust high availability framework through which health check monitoring should be implemented. You can learn more about the details of our high availability framework in our MySQL High Availability Framework Explained – Part I: Introduction blog post.
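    As a minimal sketch pulling the above checks together (it assumes the testuser/testpswd monitoring credentials and the mysql.test table from the examples, and leaves alerting as a simple echo; adapt the role, credentials and schedule to your setup):

    #!/bin/bash
    # Minimal MySQL health-check sketch based on the checks described above.
    # ROLE is "master" or "slave"; SELECT @@global.read_only returns 0 (OFF) or 1 (ON).
    ROLE="master"
    MYSQL="/usr/bin/timeout 5 mysql -u testuser -ptestpswd -N -B"

    # 1. Is the mysqld process running?
    pgrep mysqld > /dev/null || { echo "ALERT: mysqld not running"; exit 1; }

    # 2. Can we connect and run a simple query within 5 seconds?
    $MYSQL -e "select * from mysql.test" > /dev/null
    rc=$?
    [ $rc -eq 124 ] && { echo "ALERT: MySQL unresponsive (timeout)"; exit 1; }
    [ $rc -ne 0 ]   && { echo "ALERT: query failed (exit $rc)"; exit 1; }

    # 3. Check read_only matches the server's role (OFF on master, ON on slave)
    read_only=$($MYSQL -e "SELECT @@global.read_only")
    if [ "$ROLE" = "master" ] && [ "$read_only" != "0" ]; then
        echo "ALERT: master is not read-write"; exit 1
    fi
    if [ "$ROLE" = "slave" ] && [ "$read_only" != "1" ]; then
        echo "ALERT: slave is not read-only"; exit 1
    fi

    # 4. On a slave, check the replication threads reported by SHOW SLAVE STATUS
    if [ "$ROLE" = "slave" ]; then
        status=$($MYSQL -e "SHOW SLAVE STATUS\G")
        echo "$status" | grep -q "Slave_IO_Running: Yes"  || { echo "ALERT: IO thread down"; exit 1; }
        echo "$status" | grep -q "Slave_SQL_Running: Yes" || { echo "ALERT: SQL thread down"; exit 1; }
    fi

    echo "OK"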
