Planet MySQL

Planet MySQL - https://planet.mysql.com
  1. FOSDEM MySQL Community Dinner – Friday 2 Feb 2018 – Tickets Now Available!

FOSDEM is back in town, folks, and as usual there will be a MySQL and Friends Devroom. At this point, we can't really remember what came first: the annual dinner or the devroom? Our memories of the past few years are particularly hazy on that… Can't imagine why. Following a great tradition of dining together in style with members from all over the community, we have rented a by now familiar private space at ICAB; more detailed directions are below. A couple of things have changed compared to the last couple of years: the community dinner will take place on Friday night, 2 February, following the Pre-FOSDEM MySQL Day, both in the same location. Book your tickets now!!!

The listed ticket price includes food and a selection of Belgian speciality beers; the food will be typical Belgian style (no 'Italian' pizza!). If you have any dietary requirements (vegetarian, vegan, gluten-free, …) please let us know ahead of time when buying the ticket, as we need to inform the caterer, who will make the necessary accommodations for you. We're looking forward to meeting you all again at FOSDEM and the Community Dinner. See you then! Party-Squad – Dimitri Vanoverbeke, Tom De Cooman, Liz van Dijk, (Kenny Gryp)

Sponsors: Once again, we want to thank our generous sponsors, whose help makes this affordable at such a great price.

ICAB Brussels – Business and Technology Incubator: Wondering how to get there? The venue itself is located very close to the VUB. The trip from the heart of Brussels (Grand Place) takes about 20 minutes by public transport, along the route of metro line 5 (closest stop: Petillon). Be sure to use Google/Apple/your preferred maps app to determine the best way to get there, though, as both tram and bus lines are also an option.
  2. Shinguz: Advanced MySQL and MariaDB training in Cologne 2018

At the end of February, from February 26 to March 2 (5 days), FromDual offers an additional training course for DBAs and DevOps: our most visited Advanced MySQL and MariaDB training. This training is held in the training facilities of the FromDual training partner GFU Cyrus GmbH in Cologne-Deutz (Germany). There are already enough registrations, so it is certain the training will take place, but there are still free places for at least 3 additional participants. The training is in German. You can find the details of this 5-day MySQL/MariaDB training here. If you have any questions, please do not hesitate to contact us.
  3. Shinguz: Oracle releases MySQL security vulnerability fixes 2018-01

As in every quarter of the year, Oracle yesterday released its recommendations for MySQL security updates. In Oracle terminology this is called a Critical Patch Update (CPU) Advisory. The CPU is published for all Oracle products, but FromDual is only interested in MySQL-related topics, so let us concentrate on those. This time 25 fixes with a maximum score of 8.1 (out of 10.0) were published. 6 of these 25 vulnerabilities are exploitable remotely over the network without authentication (no user credentials required)!

The following MySQL products are affected:

MySQL Enterprise Monitor (3.3.6.3293 and before, 3.4.4.4226 and before, 4.0.0.5135 and before)
MySQL Connector/Net (6.9.9 and before, 6.10.4 and before)
MySQL Connector/ODBC (5.3.9 and before)
MySQL Server (5.5.58 and before, 5.6.38 and before, 5.7.19 and before)

It is recommended to upgrade your MySQL products to close the security vulnerabilities.

FromDual upgrade decision aid

Because such security updates are published quarterly, and some of our customers have dozens to hundreds of MySQL installations, this would end up in a never-ending story where you are continuously upgrading MySQL database servers and other products. This led to the idea of creating an upgrade decision aid to decide whether you have to upgrade to this CPU or not. The following questions can be asked:

How exposed is your database? Databases can be located in various network segments. It is not recommended to expose databases directly to the internet. Databases are installed either in demilitarized zones (DMZ) with no direct access from the internet, in the company's private network (only company employees should be able to access the database), or even in specialized secure networks (only a limited number of specific employees can access this network).

How critical are your data? Some data are more interesting or critical, some data are less so. Interesting data are: user data (user names and passwords), customer data (profiles, preferences, etc.), financial data (credit cards) and health care data (medical data). Systems containing such data are more critical than others. You can also ask: how severe is it if such data leak?

How broad is the user base able to access the database? How many employees do you have in your company? How many contractors do you have in your company? How many employees have physical access to the database server? How good is the mood of those people?

How good are the user credentials protecting your database? Do you have shared passwords or no passwords at all? Do you have account management (expiring old accounts, rotating passwords from time to time)?

How much do you trust your users? Do you trust all your employees? Do you trust only admins? Or do you not even trust your admins?

How severe are the security vulnerabilities? You can define a severity threshold above which you want to take action. According to your criticality you can take action, for example, as follows: greater than or equal to 7.5 if you have less critical data, greater than or equal to 6.0 if you have critical data.

Can the vulnerability be used remotely (over the network), and does it need user authentication to be exploited?

What products (MySQL Enterprise Monitor, MySQL Server, MySQL Connectors) and what modules (Apache/Tomcat, .Net Connector, Partitioning, Stored Procedures, InnoDB, DDL, GIS, Optimizer, ODBC, Replication, DML, Performance Schema) are affected?
Depending on your readiness to take risks, these answers let you decide whether you have to take action or not. Some examples (a minimal sketch of such a decision aid follows the examples):

Situation: Your database is exposed directly to the internet, or you forgot to install firewall rules to protect your MySQL port.
Analysis: You are probably affected by CVE-2018-2696 and CVE-2017-3737 (scores 5.9 and 7.5). So you reached the threshold for non-critical data (7.5) and nearly reached the threshold for critical data (6.0). These vulnerabilities allow attacks over the network without user authentication.
Action: An immediate upgrade is recommended. Mid-term action: install firewall rules to protect your MySQL port against remote access and/or do not expose databases directly to the internet.

Situation: Your database is located in the intranet zone. You have slack user/password policies, and you have many employees and also many contractors from foreign countries working on various projects. And you have very sensitive/interesting financial data stored in your database.
Analysis: Many people, not all of them really trusted, have network access to the database. It is quite possible that passwords have been shared, or that people still have passwords for projects they are no longer working on. You are affected by nearly all of the vulnerabilities (network).
Action: You should plan an upgrade soon. Mid-term action: try to restrict access to the databases and implement some password policy rules (no shared passwords, password expiration, account locking, etc.).

Situation: Your highly critical databases are located in a specially secured network, and only applications, Linux admins and DBAs have access to this network. And you completely trust those people.
Analysis: Your threshold is 6.0 and an (unauthenticated) attack over the network is not possible. There are some vulnerabilities by which you are affected, but the database is only accessed by an application, so those vulnerabilities cannot be exploited easily.
Action: You can possibly ignore this CPU for the MySQL database this time. But you have a vulnerability in the .Net Connector (Connector/Net). If an attacker exploits the vulnerability in the Connector, he can possibly get access to the data. So you have to upgrade the Connector of the application accessing the database.

If you follow the ideas of this aid you will probably have one or two upgrades a year. And those you should do anyway, just to stay up to date... See also the Common Vulnerability Scoring System Version 3.0 Calculator.
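For illustration, the threshold logic described above could be encoded roughly as follows. This is only a minimal sketch, not FromDual's actual decision aid; the function name and parameters are made up for the example, and only the two thresholds come from the article.

# Minimal sketch of the upgrade decision aid described above (illustrative only).
# Thresholds follow the article: >= 7.5 for less critical data, >= 6.0 for
# critical data. Everything else here is an assumption for the example.

def needs_upgrade(cvss_score, critical_data, remote, needs_auth):
    """Decide whether a CPU vulnerability warrants an upgrade for one installation."""
    threshold = 6.0 if critical_data else 7.5
    # A vulnerability reachable over the network without credentials is treated
    # as actionable regardless of the raw score (as in the first example above).
    if remote and not needs_auth:
        return True
    return cvss_score >= threshold

# Example: a 7.5-score vulnerability, non-critical data, remotely exploitable
# without authentication -- the first situation above.
print(needs_upgrade(7.5, critical_data=False, remote=True, needs_auth=False))  # True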
  4. Updating/Deleting Rows From ClickHouse (Part 2)

In this post, we'll look at updating and deleting rows with ClickHouse. It's the second of two parts. In the first part of this post, we described the high-level overview of implementing incremental refresh on a ClickHouse table as an alternative to native UPDATE/DELETE support. In this part, we will show you the actual steps and sample code.

Prepare Changelog Table

First, we create the changelog table below. It can be stored on any MySQL instance separate from the source of our analytics table. When we run the change capture script, it will record data in this table that we can consume later with the incremental refresh script:

CREATE TABLE `clickhouse_changelog` (
  `db` varchar(255) NOT NULL DEFAULT '',
  `tbl` varchar(255) NOT NULL DEFAULT '',
  `created_at` date NOT NULL,
  `updated_at` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `log_file` varchar(255) NOT NULL,
  `log_pos` int(10) unsigned NOT NULL,
  PRIMARY KEY (`db`,`tbl`,`created_at`),
  KEY `log_file` (`log_file`,`log_pos`)
) ENGINE=InnoDB;

Create ClickHouse Table

Next, let's create the target ClickHouse table. Remember that the corresponding MySQL table is below:

CREATE TABLE `hits` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `type` varchar(100) DEFAULT NULL,
  `user_id` int(11) NOT NULL,
  `location_id` int(11) NOT NULL,
  `created_at` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `created_at` (`created_at`)
) ENGINE=InnoDB;

Converting this table to ClickHouse looks like below, with the addition of a "created_day" column that serves as the partitioning key:

CREATE TABLE hits (
  id Int32,
  created_day Date,
  type String,
  user_id Int32,
  location_id Int32,
  created_at Int32
) ENGINE = MergeTree
PARTITION BY toMonday(created_day)
ORDER BY (created_at, id)
SETTINGS index_granularity = 8192;

Run Changelog Capture

Once the tables are ready, run the change capture script. An example script can be found in this gist; it is written in Python and uses the python-mysql-replication library. This library acts as a replication client, continuously downloading the binary logs from the source and sifting through them to find any UPDATE/DELETE executed against our source table. A few configuration options need to be customized in the script (a rough sketch of the capture loop follows the list):

LOG_DB_HOST: The MySQL host where we created the clickhouse_changelog table.
LOG_DB_NAME: The database name where the clickhouse_changelog table is created.
SRC_DB_HOST: The MySQL host we will be downloading binary logs from. This can be either a primary or a secondary/replica, as long as it's the same server where our raw table is also located.
MYSQL_USER: MySQL username.
MYSQL_PASS: MySQL password.
TABLE: The table we want to watch for changes.
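To give a feel for what the capture script does, here is a rough sketch built on the python-mysql-replication library. It is not the script from the gist: connection details and the changelog write are simplified, and the variable values simply mirror the options listed above.

from datetime import timedelta
from pymysqlreplication import BinLogStreamReader
from pymysqlreplication.row_event import DeleteRowsEvent, UpdateRowsEvent

# Illustrative values mirroring the configuration options above.
SRC_DB_HOST = "source_db_host"
MYSQL_USER, MYSQL_PASS = "repl_user", "repl_pass"
TABLE = "hits"

stream = BinLogStreamReader(
    connection_settings={"host": SRC_DB_HOST, "port": 3306,
                         "user": MYSQL_USER, "passwd": MYSQL_PASS},
    server_id=4177,                     # must be unique among replication clients
    only_events=[UpdateRowsEvent, DeleteRowsEvent],
    only_tables=[TABLE],
    resume_stream=True,
    blocking=True)                      # keep tailing the binary log

for event in stream:
    for row in event.rows:
        values = row["after_values"] if isinstance(event, UpdateRowsEvent) else row["values"]
        # Week (Monday) the modified row belongs to, matching toMonday(created_day).
        created = values["created_at"]
        week = created.date() - timedelta(days=created.weekday())
        # The real script upserts (db, tbl, week, log_file, log_pos) into
        # clickhouse_changelog at this point; here we just print what it would record.
        print("mydb", TABLE, week, stream.log_file, stream.log_pos)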
When the script is successfully configured and running, the clickhouse_changelog table should start populating with data like below:

mysql> select * from mydb.clickhouse_changelog;
+------+------+------------+---------------------+------------------+----------+
| db   | tbl  | created_at | updated_at          | log_file         | log_pos  |
+------+------+------------+---------------------+------------------+----------+
| mydb | hits | 2014-06-02 | 2017-12-23 17:19:33 | mysql-bin.016353 | 18876747 |
| mydb | hits | 2014-06-09 | 2017-12-23 22:10:29 | mysql-bin.016414 |  1595314 |
| mydb | hits | 2014-06-16 | 2017-12-23 02:59:37 | mysql-bin.016166 | 33999981 |
| mydb | hits | 2014-06-23 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84498477 |
| mydb | hits | 2014-06-30 | 2017-12-23 06:08:59 | mysql-bin.016204 | 23792406 |
| mydb | hits | 2014-08-04 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84499495 |
| mydb | hits | 2014-08-18 | 2017-12-23 18:09:33 | mysql-bin.016363 | 84500523 |
| mydb | hits | 2014-09-01 | 2017-12-23 06:09:19 | mysql-bin.016204 | 27120145 |
+------+------+------------+---------------------+------------------+----------+

Full Table Import

Now that we have changelog capture in place, the next step is to initially populate the ClickHouse table from MySQL. Normally, we can easily do this with a mysqldump into a tab-separated format, but remember that we have to transform the created_at column from MySQL into ClickHouse's Date format to be used as the partitioning key. A simple way to do this is with a set of shell commands like below:

SQL=$(cat <<EOF
SELECT id, DATE_FORMAT(created_at, "%Y-%m-%d"), type, user_id, location_id, UNIX_TIMESTAMP(created_at)
FROM hits
EOF
)
mysql -h source_db_host mydb -BNe "$SQL" > hist.txt
cat hist.txt | clickhouse-client -d mydb --query="INSERT INTO hits FORMAT TabSeparated"

One thing to note about this process is that the MySQL client buffers the results for the whole query, which could eat up all the memory on the server you run this from if the table is really large. To avoid this, chunk the export into several million rows at a time. Since the changelog capture is already running from the previous step, you do not need to worry about any changes between chunks: we will consolidate those changes during the incremental refreshes.

Incremental Refresh

After initially populating the ClickHouse table, we then set up our continuous incremental refresh using a separate script. A template script we use for the table in our example can be found in this gist. What this script does is twofold:

Determines the list of weeks recently modified based on clickhouse_changelog, dumps rows for those weeks and re-imports them to ClickHouse (a rough sketch of this per-week re-import follows below).
If the current week is not on the list of those with modifications, it also checks for new rows based on the auto-incrementing primary key and appends them to the ClickHouse table.
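As a concrete illustration of the first step, here is a rough sketch of how one modified week could be re-imported, assuming the approach is to drop that week's ClickHouse partition and reload it from MySQL, shelling out to the same mysql and clickhouse-client commands used above. The actual logic lives in the linked gist; the host and database names here are just the illustrative ones from this post.

import subprocess

def refresh_week(week):
    """Re-import one modified week; `week` is the Monday used by toMonday(created_day)."""
    # Drop that week's partition so it can be rebuilt from scratch.
    subprocess.run(["clickhouse-client", "-d", "mydb", "--query",
                    f"ALTER TABLE hits DROP PARTITION '{week}'"], check=True)
    # Re-export the week's rows from MySQL in the same column order as the import above...
    sql = ("SELECT id, DATE_FORMAT(created_at, '%Y-%m-%d'), type, user_id, "
           "location_id, UNIX_TIMESTAMP(created_at) FROM hits "
           f"WHERE created_at >= '{week}' AND created_at < '{week}' + INTERVAL 7 DAY")
    dump = subprocess.run(["mysql", "-h", "source_db_host", "mydb", "-BNe", sql],
                          check=True, capture_output=True)
    # ...and pipe them straight back into ClickHouse.
    subprocess.run(["clickhouse-client", "-d", "mydb", "--query",
                    "INSERT INTO hits FORMAT TabSeparated"],
                   input=dump.stdout, check=True)

# Example: weeks flagged as modified in clickhouse_changelog.
for week in ("2014-06-02", "2014-06-09"):
    refresh_week(week)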
An example output of this script would be:

ubuntu@mysql~/clickhouse$ bash clickhouse-incr-refresh.sh hits
2017-12-24_00_20_19 incr-refresh Starting changelog processing for hits
2017-12-24_00_20_19 incr-refresh Current week is: 2017-12-18
2017-12-24_00_20_19 incr-refresh Processing week: 2017-12-18
2017-12-24_00_20_20 incr-refresh Changelog import for hits complete
ubuntu@mysql~/clickhouse$ bash clickhouse-incr-refresh.sh hits
2017-12-24_00_20_33 incr-refresh Starting changelog processing for hits
2017-12-24_00_20_33 incr-refresh Current week is: 2017-12-18
2017-12-24_00_20_33 incr-refresh Weeks is empty, nothing to do
2017-12-24_00_20_33 incr-refresh Changelog import for hits complete
2017-12-24_00_20_33 incr-refresh Inserting new records for hits > id: 5213438
2017-12-24_00_20_33 incr-refresh No new rows found
2017-12-24_00_20_33 incr-refresh Incremental import for hits complete
ubuntu@mysql~/clickhouse$

Note that, on step 4, if you imported a really large table and the changelog had accumulated a large number of changes to refresh, the initial incremental execution might take some time. After that, though, it should be faster. This script can be run every minute, or at longer or shorter intervals, depending on how often you want the ClickHouse table to be refreshed.

To wrap up, here is a query from MySQL on the same table, versus ClickHouse:

mysql> SELECT COUNT(DISTINCT user_id) FROM hits WHERE created_at
    -> BETWEEN '2016-01-01 00:00:00' AND '2017-01-01 00:00:00';
+-------------------------+
| COUNT(DISTINCT user_id) |
+-------------------------+
|                 3023028 |
+-------------------------+
1 row in set (25.89 sec)

:) SELECT COUNT(DISTINCT user_id) FROM hits WHERE created_at BETWEEN 1451606400 AND 1483228800;

SELECT COUNTDistinct(user_id) FROM hits WHERE (created_at >= 1451606400) AND (created_at <= 1483228800)

┌─uniqExact(user_id)─┐
│            3023028 │
└────────────────────┘

1 rows in set. Elapsed: 0.491 sec. Processed 35.82 million rows, 286.59 MB (73.01 million rows/s., 584.06 MB/s.)

Enjoy!
  5. Fun with Bugs #60 - On Some Memory Leaks, Replication and Other Bugs Fixed in MySQL 5.7.21

Oracle formally released MySQL 5.7.21 yesterday. I do not bother any more to study MySQL release notes carefully and completely, but during a quick review today I've noted several interesting items I'd like you to pay attention to.

I am historically interested in InnoDB implementation details, so I could not miss Bug #87619 - "InnoDB partition table will lock into the near record as a condition in the use". This was a regression bug in 5.7+, probably caused by the new implementation of partitioning in InnoDB.

Another interesting bug is Bug #86927 - "Renaming a partitioned table does not update mysql.innodb_table_stats.", by Jean-François Gagné. It was yet another bug in InnoDB's persistent statistics (which I truly hate). What makes it especially interesting to me, though, is that it's the first public bug report I noted that mentioned the MySQL 9.0.0 release as a target for the fix: "Fixed as of the upcoming 5.7.21, 8.0.4, 9.0.0 release". So, it's clear that back in October 2017 Oracle already had a separate branch for the upcoming MySQL 9.0.x! It also probably means that MySQL 8.0.x GA is coming really soon.

There are bug reports that are worth reading for technical reasons, and others only if you want to have some fun. Bug #86607 - "InnoDB crashed when master thread evict dict_table_t object" is a good example that covers both cases. Good to know the crash is fixed, but, please, make sure to read all the comments there.

In this release I've noted fixes to several public bugs reported by Shane Bester. The first of them is Bug #86573 - "foreign key cascades use excessive memory". Check how he used memory instrumentation in Performance Schema to demonstrate the problem! In Bug #86482 - "innodb leaks memory, performance_schema file_instances #sql-ib3129987-252773.ibd", he used a similar approach to show a potential memory leak in the Performance Schema itself! Yet another bug that mentions 9.0.0 as a target version for the fix, among others... Bug #78048 - "INNODB Full text Case sensitive not working" is here both because I recently started to notice problems related to InnoDB FULLTEXT indexing again (the first time was soon after it was introduced), and because it has an MTR test case contributed by Sveta Smirnova.

XA transactions support has always been problematic in MySQL (the still "Verified" Bug #87526 by Sveta Smirnova is one of the recent examples of how incomplete or useless it can be; see also MDEV-14593). Check the following bugs fixed in MySQL 5.7.21 if you use XA transactions: Bug #87393 - "xa rollback with wrong xid will be recorded into the binlog", reported by HongXiang Jiang, who had also contributed a patch, and Bug #83295 - "replication error occurs, use xa transaction(one phase)", yet another XA transactions problem reported by Hiroyuki Itoh and then confirmed by many affected users. Nice to see them fixed.

There are many fixes in MySQL 5.7.21 related to memory leaks. Two bug reports of this kind came from Przemyslaw Malkowski: Bug #85371 - "Memory leak in multi-source replication when binlog_rows_query_log_events=1", where memory instrumentation of Performance Schema was again used to demonstrate the problem and Vlad Lesin, also from Percona, contributed the patch; and Bug #85251 - "Memory leak in master-master GTID replication with sync_relay_log_info", where Vlad Lesin, who had contributed the patch, also used Massif for the detailed analysis. To summarize, I am starting to miss memory instrumentation in Performance Schema in MariaDB 10.x... This is a really useful feature.

I usually care about optimizer bugs, and these two attracted my attention: Bug #87207 - "select distinct with secondary key for 'Using index for group-by' bad results". This nice optimizer regression bug was found by Shane Bester. As a workaround, while you are not yet on 5.7.21, you can try to set optimizer_switch='use_index_extensions=off'. I'd keep it that way by default... Bug #72854 - "Extremely slow performance with outer joins and join buffer". I am happy to see this old optimizer bug, reported by Sergey Petrunya from MariaDB, finally fixed.

You can find a lot more details, including the usual references to MySQL bug reports that are still private, in the Release Notes. Keep reading and consider upgrading :)