Planet MySQL

  1. MySQL File Privilege

    While preparing a set of student instructions for a MySQL 8 (8.0.21) Windows 10 instance, I found an error with the LOAD DATA INFILE command and the secure_file_priv variable set in the my.ini file. After granting the global FILE privilege to the previously provisioned student user:

      GRANT FILE ON *.* TO 'student'@'localhost';

    any attempt to run the following command failed:

      LOAD DATA INFILE 'C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\avenger.csv'
      INTO TABLE avenger
      FIELDS TERMINATED BY ','
      ENCLOSED BY '"'
      ESCAPED BY '\\'
      LINES TERMINATED BY '\r\n';

    and raised this error message:

      ERROR: 1290: The MySQL server is running with the --secure-file-priv option so it cannot execute this statement

    The following covers my exploration to try to fix this error without removing the designated directory for secure file uploads. While the MySQL 5.7 documentation describes the secure_file_priv variable, there is nothing in the MySQL Server System Variables document on how to troubleshoot the variable when it is set. Somehow, I think there should be some mention of how to resolve this error without unsetting the secure_file_priv variable.

    I checked and fixed all Windows 10 sharing and read-write privileges on the secure_file_priv designated directory. The Windows 10 settings allowed global sharing and both read and write privileges, but the LOAD DATA INFILE command still failed to load the file contents from the authorized upload directory.

    The MySQL FILE privilege is a global privilege to read and write files on the local server. The MySQL 8 installation on Windows 10 sets the following directory as the target for uploading files in the my.ini file:

      # Secure File Priv.
      secure-file-priv="C:/ProgramData/MySQL/MySQL Server 8.0/Uploads"

    You can find the setting in the C:\ProgramData\MySQL\MySQL Server 8.0\my.ini file.
    You can find this value without referencing the my.ini file by querying it:

      show variables like 'secure_file_priv';

    A new installation should return:

      +------------------+------------------------------------------------+
      | Variable_name    | Value                                          |
      +------------------+------------------------------------------------+
      | secure_file_priv | C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\ |
      +------------------+------------------------------------------------+
      1 row in set (0.2253 sec)

    You can find the my.ini file in the C:\ProgramData\MySQL\MySQL Server 8.0 directory of a standard Windows 10 file system. Using Windows File Explorer, I checked the Uploads directory's privileges by right-clicking the directory and opening Properties. The General tab indicates that the files are Read-only. The Sharing tab indicates that the files are Shared, and clicking the Share button shows that the files in this directory are shared with Read/Write permissions to Everyone. The Security tab indicates that Everyone has Full control of the files in this directory.

    Unfortunately, with all of these set appropriately, the secure_file_priv variable still appears to block reading files from the designated secure directory. It appears that I may have to simply remove the secure-file-priv setting from the my.ini file and restart the server.
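    When every permission checks out, it is also worth confirming that the path in the LOAD DATA INFILE statement really resolves to a location under secure_file_priv; mixed separators and letter case make this easy to get wrong on Windows. A minimal Python sketch of such a containment check (the helper name is mine, and this approximates rather than reproduces MySQL's internal check):

```python
from pathlib import PureWindowsPath

def inside_secure_dir(file_path, secure_dir):
    # Case-insensitive containment check, tolerant of mixed / and \
    # separators, approximating the directory check MySQL performs
    # when secure_file_priv is set (helper name is mine, not MySQL's).
    f = PureWindowsPath(file_path.lower())
    d = PureWindowsPath(secure_dir.lower())
    try:
        f.relative_to(d)
        return True
    except ValueError:
        return False

# The path from the failing LOAD DATA INFILE statement really is
# inside the configured Uploads directory:
print(inside_secure_dir(
    r"C:\ProgramData\MySQL\MySQL Server 8.0\Uploads\avenger.csv",
    "C:/ProgramData/MySQL/MySQL Server 8.0/Uploads"))
```

    If this returns False for your statement, the 1290 error is expected; if it returns True, as it does here, the problem lies elsewhere.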
  2. Diagnosing and Fixing “MySQL Server Has Gone Away” Messages

    We all like it when error messages are descriptive and give a clear idea of what is happening; however, there are cases where several possible causes hide behind one error message. "MySQL server has gone away" is one of them. Most of the cases when the error occurs are described in the MySQL documentation, but it can get tricky. And here, I'd like to talk about "tricky". There are only a few major cases when this happens:

    1. MySQL Thread Was Killed by an Administrator or a Utility Such as pt-kill

    Manual intervention is likely to be intermittent and, as it is a one-time thing in certain situations (e.g., a bad long-running query), probably would be known to a DBA. pt-kill might be less noticeable, as it is often left running as a workaround to prevent those bad long queries from taxing system resources. Checking the system process list should bring the commands to the surface:

      $ ps xauf | grep pt-kill
      taras     6069  0.1  0.1 111416 29452 pts/29   S+   10:57   0:00  |   |   \_ perl /usr/bin/pt-kill --interval 1s --busy-time 5s --match-info (SELECT) h= --print --kill
      taras     6913  0.0  0.0  21532  1112 pts/30   S+   11:00   0:00  |   \_ grep --color=auto pt-kill

    A very convenient way is to use the audit plugin available for Percona Server for MySQL to determine where the KILL command came from:

      <AUDIT_RECORD>
        <NAME>Query</NAME>
        <RECORD>624484743_2020-06-30T17:38:14</RECORD>
        <TIMESTAMP>2020-06-30T17:57:35 UTC</TIMESTAMP>
        <COMMAND_CLASS>kill</COMMAND_CLASS>
        <CONNECTION_ID>17</CONNECTION_ID>
        <STATUS>0</STATUS>
        <SQLTEXT>KILL QUERY 16</SQLTEXT>
        <USER>taras[taras] @ localhost []</USER>
        <HOST>localhost</HOST>
        <OS_USER></OS_USER>
        <IP></IP>
        <DB></DB>
      </AUDIT_RECORD>

    It shows the hostname, user, and time when the connection got killed.

    2. Big Data Chunk Transfer

    For example, when using BLOB fields to store binary data in a table, or when an INSERT statement contains a lot of rows.
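    Returning to case 1 for a moment: since the audit plugin output is plain XML, a record like the one shown above can be post-processed mechanically instead of eyeballed. A small sketch (the function name and summary fields are my own, not part of the plugin):

```python
import xml.etree.ElementTree as ET

# Sample record, abbreviated from the audit log excerpt above.
AUDIT_RECORD = """<AUDIT_RECORD>
  <NAME>Query</NAME>
  <TIMESTAMP>2020-06-30T17:57:35 UTC</TIMESTAMP>
  <COMMAND_CLASS>kill</COMMAND_CLASS>
  <CONNECTION_ID>17</CONNECTION_ID>
  <SQLTEXT>KILL QUERY 16</SQLTEXT>
  <USER>taras[taras] @ localhost []</USER>
  <HOST>localhost</HOST>
</AUDIT_RECORD>"""

def summarize_kill(record_xml):
    """Extract who killed what, and when, from one audit record.

    Returns None for records that are not KILL commands.
    """
    rec = ET.fromstring(record_xml)
    if rec.findtext("COMMAND_CLASS") != "kill":
        return None
    return {
        "user": rec.findtext("USER"),
        "host": rec.findtext("HOST"),
        "time": rec.findtext("TIMESTAMP"),
        "statement": rec.findtext("SQLTEXT"),
    }

print(summarize_kill(AUDIT_RECORD))
```

    Running this over every <AUDIT_RECORD> element in the log gives a quick timeline of manual and automated kills.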
    It may happen when using the MySQL CLI client (one of the cases being loading an SQL dump), or it can happen within an application when it tries to store BLOB data (for example, from a file upload). There is a limit MySQL imposes on the amount of data that can be transmitted per query, and the max_allowed_packet variable defines it. So, in both cases, we need to determine which table the data is being written to, for instance by grepping the SQL file for INSERT INTO statements or by implementing logging on the application end. This way, the statement will be stored along with the error that prevented it from completing. A partial statement can be captured (as BLOBs could be a burden to log), but as long as there is a table name, it is possible to check the table structure and see if it contains binary data. Example of an INSERT statement with binary data (truncated):

      INSERT INTO t1 VALUES (1, x'89504....82')

    To allow a larger query to execute, the variable needs to be adjusted (note that SET does not accept the 128M shorthand; that notation works only in configuration files and on the command line):

      SET GLOBAL max_allowed_packet = 128 * 1024 * 1024;

    The variable can be set per session or globally, depending on the use case.

    3. The Connection Was Closed by Timeout

    It is trivial, but applications can reuse already-established connections. During a time of inactivity or lower traffic, it is possible that some connections will not be used for a while and will be closed on the MySQL end. It is traced best with application logging; if there is an event that happened in the evening, followed by a period of inactivity and then the error in the morning, it is very likely that MySQL closed the connection.

      mysql> SET SESSION wait_timeout = 5;
      Query OK, 0 rows affected (0.00 sec)

    Wait for 5 seconds:

      mysql> select 1;
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      Connection id:    16
      Current database: *** NONE ***

      +---+
      | 1 |
      +---+
      | 1 |
      +---+
      1 row in set (0.01 sec)

    Usually, the connection is re-established and the application continues normal operations; yet it is always possible to extend the timeout in the MySQL configuration:

      SET GLOBAL wait_timeout = 57600;

    The default value for the variable is 28800 seconds (8 hours), which is enough in most cases. Also, closing connections cleanly from the application end after a period of inactivity eliminates this problem.

    4. MySQL Server Has Actually Gone Away

    This one is probably the worst possible scenario: MySQL crashes on a query or goes down for some other reason, e.g., the OOM killer killed the process. However, it can be caused by a clean restart, too. In this case, one should check MySQL uptime and the logs: the MySQL error log and syslog. Those should indicate whether a server restart occurred and whether there was an error leading to it. In case the server did crash, it is time to find the actual cause. Check the bug tracker, as the issue might have been reported and possibly fixed already; upgrade MySQL if needed. In case it was a clean restart, check if auto-updates are enabled or if someone else restarted the service interactively (yes, lack of communication is a thing too).
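    For case 2 above, one practical workaround besides raising max_allowed_packet is to split a huge multi-row INSERT into batches that each fit under the limit. A rough sketch of that batching (the helper is hypothetical and ignores the few bytes of protocol overhead per packet):

```python
def chunk_rows(rows, max_allowed_packet, base_len):
    """Group already-encoded row tuples into batches whose combined
    SQL text stays under max_allowed_packet bytes (approximate: a
    few bytes of protocol overhead per statement are ignored)."""
    batches, current, size = [], [], base_len
    for row in rows:
        row_len = len(row) + 1  # +1 for the comma separator
        if current and size + row_len > max_allowed_packet:
            batches.append(current)
            current, size = [], base_len
        current.append(row)
        size += row_len
    if current:
        batches.append(current)
    return batches

# Split 1000 small rows so each INSERT stays under a 4 KiB packet.
base = len("INSERT INTO t1 VALUES ")
rows = ["(%d, 'payload')" % i for i in range(1000)]
batches = chunk_rows(rows, max_allowed_packet=4096, base_len=base)
print(len(batches), "statements needed")
```

    Each batch then becomes one `INSERT INTO t1 VALUES ...` statement, so no single statement can trip the packet limit.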
  3. Verify MySQL Backups With TwinDB Backup Tool

    By Maksym Kryva. If you don't verify backups, you may safely assume you don't have them. It often happens that MySQL backups are invalid or broken due to a software bug or some hidden corruption. If you are lucky, you will need hours or days to resurrect a database from a bad backup copy. If you have run out of luck quota, you may lose a lot of data. Hence the importance of backup verification. Not many companies take backups, and even fewer verify them.

    To make the verification problem easier, we have added a verify command to the TwinDB Backup Tool. The command takes a backup copy, restores it, prepares it (applies redo logs, fixes permissions, and so on), and runs a MySQL instance on it. Then it checks whether the recovered database is healthy. You can verify the backups either on the same server where the backup was taken or on a dedicated "verification" machine.

    Usage:

      # twindb-backup verify mysql --help
      Usage: twindb-backup verify mysql [OPTIONS] [BACKUP_COPY]

        Verify backup

      Options:
        --dst TEXT       Directory where to restore the backup copy  [default: /tmp/]
        --hostname TEXT  If backup_copy is latest this option specifies hostname
                         where the backup copy was taken.
        --help           Show this message and exit.

    backup_copy is a backup copy name. You can get it from the twindb-backup ls output, or you can pass latest to verify the most recent MySQL backup.

    hostname – if you verify the backup on another machine, you have to specify what host the backup was taken from. If you run it without specifying the hostname, it will use the hostname of the local machine.

    dst is a directory for the restored MySQL backup. By default it's /tmp. For example:

      twindb-backup verify mysql \
        /path/to/twindb-server-backups/master1/hourly/files/_home-2017-11-13_16_43_17.tar.gz \
        --dst /var/lib/mysql

    To verify a backup, twindb-backup gets it from destinations such as S3, SSH, or local storage.
    After this, twindb-backup runs innobackupex to restore the backup from the archive. This feature works transparently with both full and incremental backups. Besides, if you configure the twindb-backup tool to export data to DataDog (watch out for our next post :), you can monitor restore time and alert your team about invalid backups, or about restore times that break your SLA. The TwinDB Backup Tool supports verification starting from version 2.15.0, which you can install from the source code.
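    Assuming the restore timings are exported somewhere like DataDog, the alerting decision described above reduces to a simple rule. A sketch with hypothetical field names and thresholds (the actual twindb-backup/DataDog integration may expose different data):

```python
def verification_alerts(results, restore_sla_seconds):
    """Given verification results, return alert messages for failed
    restores and for restores slower than the SLA.

    `results` is a list of dicts with hypothetical fields:
    backup_copy, success, restore_time (seconds).
    """
    alerts = []
    for r in results:
        if not r["success"]:
            alerts.append("INVALID BACKUP: %s" % r["backup_copy"])
        elif r["restore_time"] > restore_sla_seconds:
            alerts.append("SLA BREACH (%ds): %s"
                          % (r["restore_time"], r["backup_copy"]))
    return alerts

# Made-up verification run: one healthy, one broken, one too slow.
results = [
    {"backup_copy": "hourly/a.tar.gz", "success": True,  "restore_time": 120},
    {"backup_copy": "hourly/b.tar.gz", "success": False, "restore_time": 0},
    {"backup_copy": "daily/c.tar.gz",  "success": True,  "restore_time": 4000},
]
print(verification_alerts(results, restore_sla_seconds=3600))
```

    Wiring such a check to a pager turns silent backup rot into an actionable page.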
  4. Setting up MySQL Monitoring With New Relic Infrastructure Pro

    If you have a New Relic Infrastructure Pro license and unmonitored MySQL servers, there's now an easy solution at your fingertips. With the New Relic MySQL integration you can monitor and graph almost any detailed metric you could possibly want. New Relic recently unified its analytics tools with New Relic One, a dashboard that provides quick access to all the New Relic tools. With an Infrastructure Pro subscription, you get access to:

    - New Relic Infrastructure: flexible, dynamic monitoring of your entire infrastructure, from services running in the cloud or on dedicated hosts to containers running in orchestrated environments.
    - New Relic Alerts: a flexible, centralized notification system that unlocks the operational potential of New Relic. Alerts is a single tool to manage alert policies and alert conditions for all of your New Relic data.
    - New Relic Logs: a fast, scalable log management platform that allows you to connect your log data with the rest of your telemetry data.
    - New Relic Insights: allows you to query and chart your New Relic data. Insights is a software analytics resource for gathering and visualizing data about your software and understanding what that data says about your business.

    New Relic infrastructure agent install

    New Relic provides a nice wizard page that generates the commands needed to install the infrastructure agent and configure it for your account.
    For our target OS (CentOS 7) the wizard generated the following commands for installation:

      # Create a configuration file and add your license key
      echo "license_key: xxxxxxxxxxxxxxxxxxxxxx" | sudo tee -a /etc/newrelic-infra.yml &&
      # Create the agent's yum repository
      sudo curl -o /etc/yum.repos.d/newrelic-infra.repo &&
      # Update your yum cache
      sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra' &&
      # Run the installation script
      sudo yum install newrelic-infra -y

    Once you run the above commands, the infrastructure agent will begin sending OS metric data to New Relic. (Note: we are making the assumption that you've already set up any firewall rules needed for the New Relic infrastructure agent.) If you want a custom name for your host (the default is the hostname), you can add "display_name: Name" to the /etc/newrelic-infra.yml file. You can add tags as custom_attributes and use them to describe the host's function. All of the configuration options can be found in the documentation; below is a sample /etc/newrelic-infra.yml file using the display_name and custom_attributes options:

      license_key: 1db211983ab4871600faef016f5235066f208fd5
      display_name: mysql-qa-usw-a-1
      verbose: 0
      custom_attributes:
        env: QA
        app: MySQL
        team: Pythian

    This takes care of getting the host-level metrics into New Relic, but we really want to collect MySQL metrics so we can be alerted about specific database problems. To collect MySQL metrics, we need to install the MySQL integration and have an Infrastructure Pro account. The MySQL integration documentation provides these simple instructions for installation:

      sudo yum -q makecache -y --disablerepo='*' --enablerepo='newrelic-infra'
      sudo yum install nri-mysql

    The above commands install the MySQL integration on our CentOS 7 hosts, but we still need to set up a MySQL user and customize the MySQL integration's configuration file before we start collecting MySQL data.
    Using the MySQL command line, create a user with replication privileges:

      mysql> CREATE USER 'newrelic'@'localhost' IDENTIFIED BY 'YOUR_SELECTED_PASSWORD';
      mysql> GRANT REPLICATION CLIENT ON *.* TO 'newrelic'@'localhost' WITH MAX_USER_CONNECTIONS 5;

    Change to the integration's directory and copy the sample configuration file:

      # Change the directory to the integration's folder.
      cd /etc/newrelic-infra/integrations.d
      # Copy the sample configuration file:
      sudo cp mysql-config.yml.sample mysql-config.yml
      # Edit mysql-config.yml as shown below, substituting the hostname, user, and password:
      vi /etc/newrelic-infra/integrations.d/mysql-config.yml

      integration_name: com.newrelic.mysql
      instances:
        - name: mysql-server
          command: status
          arguments:
            hostname: localhost
            port: 3306
            username: newrelic
            password: xxxxxxxxxxx
            extended_metrics: 1
            extended_innodb_metrics: 1
            # New users should leave this property as `true`, to identify the
            # monitored entities as `remote`. Setting this property to `false` (the
            # default value) is deprecated and will be removed soon, disallowing
            # entities that are identified as `local`.
            # Please check the documentation to get more information about local
            # versus remote entities:
            # remote_monitoring: true
          labels:
            env: production
            role: Master

    Note that in the above configuration file we have enabled both extended_metrics and extended_innodb_metrics for MySQL. The complete list of MySQL metrics is quite extensive, and custom graphs and alerts can be created for any of them. The next step is to restart the infrastructure agent:

      systemctl restart newrelic-infra.service

    Now that we have New Relic Infrastructure collecting data about our MySQL server(s), we can start creating custom graphs along with alert policies and notification actions. After signing in, we can see the MySQL hosts we added to New Relic Infrastructure. Now we're ready to create alerts on any conditions / data New Relic collects.
    With the MySQL integration we have a large choice of metrics. For our setup we are going to create a simple alert that will notify us whenever replication is stopped. In order to set up alerting, you need to create a new notification channel: click on the Alerts menu item, then pick the "Notification channels" tab on the alerts page. New Relic Alerts provides a nice selection of notification channels / integrations for alerting, including Slack, PagerDuty, OpsGenie, xMatters and, of course, email. Once you have a notification channel set up, you can create a channel policy for receiving notifications, and once you have a notification channel and policy defined, you can start creating alert conditions.

    I picked the "Integrations" alert type to get the list of MySQL metrics, then chose "MySQLSample" and narrowed the scope to MySQL servers with the "cluster – node type" of slave. This enables the alert on all New Relic hosts set up with the MySQL integration that are slave members using replication. We define the measurement threshold for the metric; in this case it's always "1" when replication is running and "0" when it's stopped. We also choose the alert policy / notification channel, then click "Create." And we have created our first alert.

    It's important to make sure you're targeting the proper servers for each alert. In our example above, for the "narrow down entities" option, we indirectly targeted only MySQL database servers by choosing a metric that only MySQL instances will send to our New Relic account. That's great for this metric, but as the DBA (database administrator) we don't want to get paged for anything other than database hosts and database server issues. One way we can narrow down entities is via tags or other attributes.
    To ensure you are only monitoring the hosts you are targeting, it's a best practice to use tags generously in your infrastructure deployment code and leverage them in "narrow down entities" for precise monitor targeting. For monitoring the "MySQL Server Down" condition, we'll use the general host-level metrics and leverage the process information to alert only when we see a count of zero /usr/sbin/mysqld processes. With a few more clicks I have now added MySQL database server alerts for excessive CPU conditions, max_connections, and MySQL server failures. Another way to explore these metrics is to use the Data Explorer, which is part of New Relic Insights and can be found as a tab in New Relic Infrastructure. Data Explorer lets you list / graph individual metrics and build alerts if desired; for example, I examined the metric "Avg Cluster.Seconds Behind Master". With Insights and its tools you can explore your data and create custom dashboards. Hopefully this post has provided you with a helpful overview of the type of MySQL alerting and monitoring you can set up using New Relic Infrastructure.
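    The "MySQL Server Down" condition above boils down to counting mysqld processes in the agent's process sample. A toy sketch of that decision logic (the process lists here are made up; New Relic evaluates the real condition server-side):

```python
def mysqld_down(process_list, binary="/usr/sbin/mysqld"):
    """Return True when no mysqld process is present - the condition
    the 'MySQL Server Down' alert fires on."""
    return sum(1 for p in process_list if p == binary) == 0

# Sample process snapshots, like what the infrastructure agent reports:
healthy = ["/usr/sbin/sshd", "/usr/sbin/mysqld", "/usr/bin/python"]
down    = ["/usr/sbin/sshd", "/usr/bin/python"]
print(mysqld_down(healthy), mysqld_down(down))
```

    Expressing the alert as "process count == 0" rather than "host unreachable" keeps it from firing on transient network blips.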
  5. MySQL: automatic partitions surely would be nice

    In Deleting data we have been looking at a process that loads data into MySQL, leveraging partitions to make it easier and faster to later get rid of the data again. For this, we created three processes: a data loader process and two observers – one for creating partitions, and one for deleting them. The observer processes have been running ANALYZE TABLE and then polling INFORMATION_SCHEMA.PARTITIONS every 1/10th of a second to check if intervention is needed. They then have been dynamically generating the necessary ALTER TABLE statements, maintaining the proper partitioning of the table by adding and dropping partitions. That is cumbersome and should not be necessary.

    Using SQL to maintain partitions

    It is possible to prepare and execute dynamic DDL in MySQL, using PREPARE, EXECUTE and DEALLOCATE PREPARE. So I can do the following, if I numb myself sufficiently to actually write and generate code in procedural SQL:

      kris@localhost [kris]> set @next_name := "p3";
      Query OK, 0 rows affected (0.00 sec)

      kris@localhost [kris]> set @next_limit := "30000";
      Query OK, 0 rows affected (0.00 sec)

      kris@localhost [kris]> select
          ->   concat("alter table data add partition ( partition ",
          ->          @next_name,
          ->          " values less than (",
          ->          @next_limit,
          ->          "))") as cmd into @cmd;
      Query OK, 1 row affected (0.00 sec)

      kris@localhost [kris]> select @cmd;
      +-------------------------------------------------------------------------+
      | @cmd                                                                    |
      +-------------------------------------------------------------------------+
      | alter table data add partition ( partition p3 values less than (30000)) |
      +-------------------------------------------------------------------------+
      1 row in set (0.00 sec)

      kris@localhost [kris]> prepare s from @cmd;
      Query OK, 0 rows affected (0.00 sec)
      Statement prepared

      kris@localhost [kris]> execute s;
      Query OK, 0 rows affected (0.10 sec)
      Records: 0  Duplicates: 0  Warnings: 0

      kris@localhost [kris]> deallocate prepare s;
      Query OK, 0 rows affected (0.00 sec)
      kris@localhost [kris]> show create table data\G
             Table: data
      Create Table: CREATE TABLE `data` (
        `id` int NOT NULL AUTO_INCREMENT,
        `d` varchar(64) NOT NULL,
        `e` varchar(64) NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
      PARTITION BY RANGE (`id`)
      (PARTITION p1 VALUES LESS THAN (10000) ENGINE = InnoDB,
       PARTITION p2 VALUES LESS THAN (20000) ENGINE = InnoDB,
       PARTITION p3 VALUES LESS THAN (30000) ENGINE = InnoDB)
      1 row in set (0.00 sec)

    I could put the logic of the partitioner and dropper into stored procedures and use the MySQL Event Scheduler to have this running in the background at all times to maintain the partitions on the data table. Except that would still fail if I insert too many rows in a single transaction. I may be able to run the calls to my stored procedures in an insert trigger, but that is overhead I'd rather not have for each row inserted. Also, I am pretty confident that the trigger would also trip a number of bugs in rarely used code paths. :-)

    Wishful thinking

    An actual solution would be a less cumbersome notation for partitions, specifically range partitioning by an auto_increment primary key, and range partitioning along a time dimension. I should be able to write how many partitions of which bucket size I want instead of maintaining a giant if-then-else of VALUES LESS THAN clauses. SQL is supposed to be a declarative language, after all. For the time dimension, I should be able to specify the same, in retention time and interval size for the buckets. MySQL would then create and drop partitions automatically, and generate the names for them automatically, too. So something like the following made-up syntax:

      -- maintain 10 buckets,
      -- equivalent to
      -- VALUES LESS THAN (<previous value> + 10000)
      --
      -- When new buckets are autogenerated at the top, and the number is larger than 10,
      -- drop the lowest one.
      --
      create table data (
        `id` int NOT NULL AUTO_INCREMENT,
        `d` varchar(64) NOT NULL,
        `e` varchar(64) NOT NULL,
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
      PARTITION BY AUTOMATIC RANGE (`id`) (
        PARTITIONS 10 VALUES (10000)
      )

      -- maintain 10 buckets
      -- equivalent to
      -- VALUES LESS THAN (UNIX_TIMESTAMP(<previous value> + INTERVAL 1 DAY))
      --
      -- When new buckets are autogenerated at the end, and the number is larger than 10,
      -- drop the oldest one.
      --
      create table data (
        `created` DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
        `d` varchar(64) NOT NULL,
        `e` varchar(64) NOT NULL,
        PRIMARY KEY (`created`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
      PARTITION BY AUTOMATIC TIME RANGE (`created`) (
        PARTITIONS 10 VALUES (INTERVAL 1 DAY)
      )

    This would get rid of any manually maintained procedures, events, triggers and, most importantly, implementations, and would specify declaratively how and when partitions are created and how long they are kept. Not quite as automatic as Cassandra TTL fields, but a large step forward.
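    Until such syntax exists, the rolling-window behaviour described by the made-up syntax can be scripted outside the server. A minimal Python sketch that generates the required ALTER TABLE statements (the p<n> naming convention and the helper itself are mine, not MySQL's):

```python
def maintain_partitions(partitions, bucket_size, max_id, keep=10,
                        table="data"):
    """Generate ALTER TABLE statements that keep a rolling window of
    `keep` range partitions of `bucket_size` rows each, covering max_id.

    `partitions` maps existing partition name -> upper bound
    (the VALUES LESS THAN value); it must be non-empty.
    """
    stmts = []
    top = max(partitions.values())
    # Add partitions at the top until max_id is covered.
    while top <= max_id:
        top += bucket_size
        name = "p%d" % (top // bucket_size)
        stmts.append(
            "ALTER TABLE %s ADD PARTITION "
            "(PARTITION %s VALUES LESS THAN (%d))" % (table, name, top))
        partitions[name] = top
    # Drop the lowest partitions that fall out of the window.
    for name, _ in sorted(partitions.items(), key=lambda kv: kv[1])[:-keep]:
        stmts.append("ALTER TABLE %s DROP PARTITION %s" % (table, name))
    return stmts

# The table from the article: p1..p3 exist, rows up to id 31000 arrived.
parts = {"p1": 10000, "p2": 20000, "p3": 30000}
print(maintain_partitions(parts, bucket_size=10000, max_id=31000, keep=3))
```

    Fed through PREPARE/EXECUTE as shown above, or run by a cron job, this replaces the hand-maintained observer processes.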
