Planet MySQL - https://planet.mysql.com
  1. Setting up an NDB Cluster in the Oracle Cloud using Auto Installer

    In MySQL Cluster 8.0.18 we have developed the MySQL Cluster Auto Installer to also support installing NDB :) We have made it very easy to set up an NDB Cluster in the Oracle Cloud. The Auto Installer will take care of installing the proper software, setting up firewalls, and installing some supportive software. Most of the testing of this software has been done against the Oracle Cloud using instances with Oracle Linux 7.

    I prepared two YouTube videos to show how it works. The first one gives some insights into setting up the Compute Instances required in the Oracle Cloud: Setup compute instances in Oracle Cloud for MySQL Cluster Auto Installer. The second video uses these compute instances to set up an NDB Cluster: Setting up an NDB Cluster in the Oracle Cloud using Auto Installer.
  2. InnoDB Page Compression – MySQL 8 Compression

    InnoDB Page Compression Explained

    We have several customers running multi-terabyte database infrastructure on SSDs. SSDs are a great investment for performance, but they are also expensive and have shorter lifespans, so storage efficiency is something we are very cautious about on SSDs. At MinervaDB Labs we spend a considerable amount of time researching the benefits of InnoDB page compression and the most common mistakes. Compressed tables were first introduced in 2008 with the InnoDB plugin for MySQL 5.1. Facebook has been a major committer to this project, and most of that work was later implemented in upstream MySQL as well. We can implement compression in InnoDB in two ways: with table compression (the Barracuda file format and ROW_FORMAT=COMPRESSED), or with transparent page compression, which this post covers.

    How does InnoDB page compression work? When a page is written, it is compressed using the specified compression algorithm and written to disk, where the hole punching mechanism releases empty blocks from the end of the page. (Several popular Linux file systems already support hole punching: XFS since Linux 2.6.38, ext4 since Linux 3.0, tmpfs (/dev/shm) since Linux 3.5, and Btrfs since Linux 3.7.) If compression fails, the data is written out as-is. MySQL implements compression with the help of the well-known zlib library, which implements the LZ77 compression algorithm.

    Some operating systems implement compression at the file system level. Files are typically divided into fixed-size blocks that are compressed into variable-size blocks, which easily leads to fragmentation. If innodb_page_size=16K and the file system block size is 4K, page data must compress to 12K or less to make hole punching possible. And every time something inside a block is modified, the whole block is recompressed before it is written to disk. These properties make file-system-level compression unsuitable for an update-intensive database system.
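The 16K-page / 4K-block arithmetic above can be sketched with the same zlib library in Python. This is a toy, highly compressible page buffer; real InnoDB pages carry headers and checksums, so this illustrates only the alignment math, not the on-disk format:

```python
import zlib

FS_BLOCK = 4096     # file system block size (4K)
PAGE_SIZE = 16384   # innodb_page_size = 16K

# A toy, highly compressible "page"; real page data compresses far less.
page = b"A" * PAGE_SIZE
compressed_len = len(zlib.compress(page))

# The compressed payload still occupies whole file system blocks;
# hole punching can only release the complete trailing blocks.
blocks_used = -(-compressed_len // FS_BLOCK)   # ceiling division
blocks_freed = PAGE_SIZE // FS_BLOCK - blocks_used

print(f"compressed to {compressed_len} bytes; "
      f"{blocks_freed} of {PAGE_SIZE // FS_BLOCK} blocks can be punched out")
# → 3 of 4 blocks can be punched out. The page must compress to <= 12K
#   (three 4K blocks) before even one block can be freed.
```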
InnoDB Page Compression Supported Platforms

    InnoDB page compression requires sparse file and hole punching support. You can implement page compression on Windows with NTFS, and the following Linux platforms support hole punching:

    RHEL 7 and derived distributions that use kernel version 3.10.0-123 or higher
    OEL 5.10 (UEK2) kernel version 2.6.39 or higher
    OEL 6.5 (UEK3) kernel version 3.8.13 or higher
    OEL 7.0 kernel version 3.8.13 or higher
    SLE11 kernel version 3.0-x
    SLE12 kernel version 3.12-x
    OES11 kernel version 3.0-x
    Ubuntu 14.04 LTS kernel version 3.13 or higher
    Ubuntu 12.04 LTS kernel version 3.2 or higher
    Debian 7 kernel version 3.2 or higher

InnoDB compression on Windows

    The way NTFS clustering is designed, we hardly get any benefit from InnoDB page compression. The hole punch is done on a "compression unit", and this compression unit is derived from the cluster size (see the table below). This means that by default you cannot punch a hole if the cluster size >= 8K. Here's a breakdown for smaller cluster sizes:

    Cluster Size    Compression Unit
    512 Bytes       8 KB
    1 KB            16 KB
    2 KB            32 KB
    4 KB            64 KB

    In other words, for page compression to work on Windows, the file system must be created with a cluster size smaller than 4K, and innodb_page_size must be at least twice the size of the compression unit. For example, you could build the file system with a cluster size of 512 Bytes (which gives a compression unit of 8 KB) and initialize InnoDB with an innodb_page_size value of 16K or greater.

How to enable InnoDB page compression?

    You enable page compression by specifying the COMPRESSION attribute in the CREATE TABLE statement, as in this example:

    CREATE TABLE tab11 (col101 INT) COMPRESSION="zlib";

    You can also enable page compression with an ALTER TABLE statement. However, ALTER TABLE ... COMPRESSION updates only the tablespace compression attribute.
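The Windows cluster-size rule above can be restated as a small Python check. The 16x cluster-to-compression-unit ratio comes from the table above; the helper name is just illustrative:

```python
def ntfs_page_compression_possible(cluster_size, innodb_page_size=16 * 1024):
    """True if hole punching can help on NTFS, per the rule above:
    the compression unit is 16x the cluster size, and innodb_page_size
    must be at least twice the compression unit."""
    compression_unit = 16 * cluster_size
    return innodb_page_size >= 2 * compression_unit

for cluster in (512, 1024, 2048, 4096):
    print(cluster, ntfs_page_compression_possible(cluster))
# Only the 512-byte cluster size (8 KB compression unit) qualifies
# with the default 16K InnoDB page size.
```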
    Writes to the tablespace that occur after setting the new compression algorithm use the new setting. To apply the new compression algorithm to existing pages, you must rebuild the table using OPTIMIZE TABLE.

Disabling InnoDB Page Compression

    You can disable InnoDB page compression by setting COMPRESSION="None" in an ALTER TABLE statement. Writes to the tablespace that occur afterwards no longer use page compression. To uncompress existing pages, you must rebuild the table using OPTIMIZE TABLE after setting COMPRESSION="None".

Monitoring tables using Page Compression

    You can identify tables with page compression enabled from the CREATE_OPTIONS column of INFORMATION_SCHEMA.TABLES:

    mysql> SELECT TABLE_NAME, TABLE_SCHEMA, CREATE_OPTIONS FROM INFORMATION_SCHEMA.TABLES WHERE CREATE_OPTIONS LIKE '%COMPRESSION=%';
    +------------+--------------+--------------------+
    | TABLE_NAME | TABLE_SCHEMA | CREATE_OPTIONS     |
    +------------+--------------+--------------------+
    | Ad_Clicks  | ADSE         | COMPRESSION="zlib" |
    +------------+--------------+--------------------+

InnoDB Page Compression Limitations

    Cross-platform dependency issues – page-compressed tablespaces can be copied between Linux and Windows servers only if the compression algorithm is available on both servers.

    InnoDB page compression doesn't always work as expected:

    Page compression is not supported for tables that reside in shared tablespaces, which include the system tablespace, temporary tablespaces, and general tablespaces.
    Page compression is not supported for undo log tablespaces.
    Page compression is not supported for redo log pages.
    R-tree pages, which are used for spatial indexes, are not compressed.
    Pages that belong to compressed tables (ROW_FORMAT=COMPRESSED) are left as-is.
    During recovery, updated pages are written out in an uncompressed form.
    Fragmentation issues – since hole punching releases blocks back to the file system free list, you can expect severe fragmentation in I/O-intensive applications.

References

    Percona XtraDB: Compressed Columns with Dictionaries – An Alternative to InnoDB Table Compression by Yura Sorokin – https://www.percona.com/live/e17/sessions/percona-xtradb-compressed-columns-with-dictionaries-an-alternative-to-innodb-table-compression
    InnoDB page compression explained in the MySQL documentation – https://dev.mysql.com/doc/refman/8.0/en/innodb-page-compression.html
    InnoDB Page Compression: the Good, the Bad and the Ugly by Vadim Tkachenko – https://www.percona.com/blog/2017/11/20/innodb-page-compression/
    On InnoDB Data Compression in MySQL by Valerii Kravchuk – http://mysqlentomologist.blogspot.com/2018/06/on-innodb-data-compression-in-mysql.html
    InnoDB Transparent Page Compression by Sunny Bains – https://mysqlserverteam.com/innodb-transparent-page-compression/
    MyRocks and InnoDB compression explained by Mark Callaghan – http://smalldatum.blogspot.com/2017/12/myrocks-innodb-and-tokudb-summary.html
    InnoDB compressed columns introduced by Yura Sorokin – https://www.percona.com/doc/percona-server/LATEST/flexibility/compressed_columns.html

    The post InnoDB Page Compression – MySQL 8 Compression appeared first on MySQL Consulting, Support and Remote DBA Services.
  3. Setting up MySQL Cluster on local machine using AutoInstaller

    We have a new version of the MySQL Cluster Auto Installer. I have prepared three YouTube videos that show how to make use of the Auto Installer to install and set up a cluster on your local machine. This is my first attempt at making YouTube videos to explain things around MySQL Cluster.

    The Auto Installer is intended as a tool to make it easy to get a cluster up and running. It is NOT intended for managing a cluster.

    The first YouTube video, Install MySQL Cluster 8.0.18 on Mac OS X, shows how to install MySQL Cluster 8.0.18 on Mac OS X. This is obviously trivial, so it should be straightforward to do even without the video. The second YouTube video, Starting a local MySQL Cluster using Auto Installer, sets up a small cluster with 1 management server, 2 data nodes and 1 MySQL Server, and explains some details around this. Using MySQL Cluster AutoInstaller to start development with NDB goes a bit deeper and also shows how to make use of the cluster and do some trivial operations through a MySQL client and the NDB management client. It also shows how one can extend the possible configurations supported by directly manipulating the configuration files deployed by the Auto Installer.

    A little note for Mac OS X users with high-resolution displays: full 4K resolution is available through Google Chrome, but not through Safari.
  4. NoSQL + SQL = MySQL 8; Keynote OSI2019

    NoSQL + SQL = MySQL 8: Open Source India 2019 keynote from Sanjay Manwani. Slides from the keynote presented at Open Source India 2019 at the NIMHANS convention center, Bangalore. As usual, lots of interesting folks and lots of focus on Open Source. Met people from the SODA Foundation, who are trying to standardize the IO layer across all cloud implementations. All the best, guys. Also met folks from MOSIP, who have an ongoing effort to help countries create their own UID; it seems they already have some traction. Also met an interesting person thinking about Indian design and creativity in software. After Chumbak, it does make sense to think about how not only the UI but the whole software development process needs more creativity. Thank you EFY for the opportunity. Great job and all the best for the future.
  5. MySQL & InnoDB Disk Space

    Yesterday, Bhuvanesh published an article about how to verify the difference between the allocated disk space for a tablespace and the data in it. I commented with an old post explaining how to get similar info using only SQL, in case you don't have filesystem access. And finally, my friend Bill Karwin commented that this info is not always accurate, which, of course, I agree with. This is why I checked what info we have available and tried to find a better answer.

    So first, please remember that information_schema statistics are cached by default:

mysql> show global variables like 'information_schema_stats_expiry';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| information_schema_stats_expiry | 86400 |
+---------------------------------+-------+

    and that for better results, it's always advised to run ANALYZE TABLE. For the following examples, I set information_schema_stats_expiry to 0.

The New Query

    The new query takes advantage of the FILE_SIZE column in information_schema.INNODB_TABLESPACES:

> SELECT NAME, TABLE_ROWS,
         format_bytes(data_length) DATA_SIZE,
         format_bytes(index_length) INDEX_SIZE,
         format_bytes(data_length+index_length) TOTAL_SIZE,
         format_bytes(data_free) DATA_FREE,
         format_bytes(FILE_SIZE) FILE_SIZE,
         format_bytes((FILE_SIZE/10 - (data_length/10 + index_length/10))*10) WASTED_SIZE
    FROM information_schema.TABLES as t
    JOIN information_schema.INNODB_TABLESPACES as it
      ON it.name = concat(table_schema,"/",table_name)
   ORDER BY (data_length + index_length) desc limit 5;
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| big/testing       |   10241204 | 647.98 MiB | 0 bytes    | 647.98 MiB | 2.00 MiB  | 660.00 MiB | 12.02 MiB   |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes   | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes   | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes   | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes   | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+

    We can see that MySQL estimates the data size for my biggest table at 648MB and that 660MB are used on disk. The latter is very easy to verify:

$ sudo ls -lh /var/lib/mysql/big/testing.ibd
-rw-r----- 1 mysql mysql 660M Oct 22 00:19 /var/lib/mysql/big/testing.ibd

    As I recommended, it's always good to do an ANALYZE TABLE:

> analyze table big.testing;
+-------------+---------+----------+----------+
| Table       | Op      | Msg_type | Msg_text |
+-------------+---------+----------+----------+
| big.testing | analyze | status   | OK       |
+-------------+---------+----------+----------+

    And we can run our query again:

+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| big/testing       |    9045529 | 582.42 MiB | 0 bytes    | 582.42 MiB | 67.00 MiB | 660.00 MiB | 77.58 MiB   |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes   | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes   | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes   | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes   | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+

    We can see now that the statistics have been updated: according to my previous post (based on DATA_FREE), we are losing 67MB, but comparing with the actual file size on disk, it seems we are wasting 77.58MB.

Recovering the disk space

    Let's see if we can recover some disk space:

> OPTIMIZE TABLE big.testing;
+-------------+----------+----------+-------------------------------------------------------------------+
| Table       | Op       | Msg_type | Msg_text                                                          |
+-------------+----------+----------+-------------------------------------------------------------------+
| big.testing | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| big.testing | optimize | status   | OK                                                                |
+-------------+----------+----------+-------------------------------------------------------------------+
2 rows in set (1 min 4.8855 sec)

    And we can check again:

+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| NAME              | TABLE_ROWS | DATA_SIZE  | INDEX_SIZE | TOTAL_SIZE | DATA_FREE | FILE_SIZE  | WASTED_SIZE |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+
| big/testing       |    9045529 | 582.42 MiB | 0 bytes    | 582.42 MiB | 67.00 MiB | 584.00 MiB | 1.58 MiB    |
| docstore/all_recs |      24353 | 17.56 MiB  | 0 bytes    | 17.56 MiB  | 0 bytes   | 25.00 MiB  | 7.44 MiB    |
| big/pktest        |     111649 | 11.55 MiB  | 0 bytes    | 11.55 MiB  | 0 bytes   | 19.00 MiB  | 7.45 MiB    |
| big/pktest_seq    |      81880 | 6.52 MiB   | 0 bytes    | 6.52 MiB   | 0 bytes   | 14.00 MiB  | 7.48 MiB    |
| library/books     |         39 | 384.00 KiB | 16.00 KiB  | 400.00 KiB | 0 bytes   | 464.00 KiB | 64.00 KiB   |
+-------------------+------------+------------+------------+------------+-----------+------------+-------------+

    We can see that we have now regained some disk space!
    So even if this is not always 100% accurate, this method already gives you a view very close to reality of how your InnoDB tablespaces are using the disk, and of when you would benefit from rebuilding a tablespace.

MySQL Shell Plugin

    I've updated the innodb/fragmented MySQL Shell Plugin on my GitHub with a new method:
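The WASTED_SIZE column from the query is just the file size minus what the statistics account for. Using the big/testing figures shown above, the arithmetic can be checked in a few lines of Python (a sketch of the computation only, with the sizes hard-coded from the example output):

```python
MiB = 1024 * 1024

def wasted_bytes(file_size, data_length, index_length):
    """Space allocated in the .ibd file that the InnoDB statistics do not
    account for, i.e. roughly what a table rebuild could reclaim."""
    return file_size - (data_length + index_length)

# Figures for big/testing after ANALYZE TABLE ...
before = wasted_bytes(file_size=660 * MiB,
                      data_length=int(582.42 * MiB), index_length=0)

# ... and after OPTIMIZE TABLE rebuilt the tablespace.
after = wasted_bytes(file_size=584 * MiB,
                     data_length=int(582.42 * MiB), index_length=0)

print(f"wasted before rebuild: {before / MiB:.2f} MiB")  # 77.58 MiB
print(f"wasted after rebuild:  {after / MiB:.2f} MiB")   # 1.58 MiB
```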
