MySQL - nimrody/knowledgebase GitHub Wiki
-
Show named locks
Enable monitoring of named locks (see show all current locks)
UPDATE performance_schema.setup_instruments SET enabled = 'YES' WHERE name = 'wait/lock/metadata/sql/mdl';
and then
SELECT * FROM performance_schema.metadata_locks WHERE OBJECT_TYPE='USER LEVEL LOCK'
Or use SELECT IS_USED_LOCK('lockname') to get the state of a specific lock.
-
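For context, user-level named locks are taken with GET_LOCK and released with RELEASE_LOCK; a minimal sketch (lock name and timeout are arbitrary):

```sql
-- session 1: take a named lock, waiting up to 10 seconds
SELECT GET_LOCK('lockname', 10);   -- 1 on success, 0 on timeout

-- session 2: inspect the lock without taking it
SELECT IS_USED_LOCK('lockname');   -- connection id of the holder, or NULL if free
SELECT IS_FREE_LOCK('lockname');   -- 1 if free, 0 if held

-- session 1: release it
SELECT RELEASE_LOCK('lockname');
```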
See top queries
select left(digest_text, 50),
sum(sum_timer_wait)/1000000000000 as sum,
(sum(sum_timer_wait)/sum(count_star))/1000000000000 as average,
sum(count_star) as count,
round((sum(sum_timer_wait)/tot)*100, 2) as percent
from (
select sum(sum_timer_wait) as tot
from performance_schema.events_statements_summary_by_digest
) as x, performance_schema.events_statements_summary_by_digest
group by digest order by sum(sum_timer_wait) desc limit 10;
-
Running in a docker container with tmpfs (ramdisk, 400MB or so required)
docker run --rm -p 3306:3306 -e MYSQL_ROOT_PASSWORD=finger --tmpfs=/var/lib/mysql/:rw,noexec,nosuid,size=600m --tmpfs=/tmp/:rw,noexec,nosuid,size=50m mysql:5.7
-
Concurrent ALTER TABLE (Summary: add LOCK=NONE to ensure that the ALTER TABLE does not lock the table. If MySQL cannot honor this request it will report an error rather than locking the table.) -
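A concrete example (hypothetical table and column; adding ALGORITHM=INPLACE also makes MySQL fail fast if a table copy would be needed):

```sql
-- errors out instead of blocking if the lock cannot be avoided
ALTER TABLE user_activity
    ADD COLUMN note VARCHAR(64),
    ALGORITHM=INPLACE, LOCK=NONE;
```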
Locking and Concurrency Control Video - Explains isolation levels, MVCC, row write locks and deadlocks.
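The isolation levels and row locks the video covers can be poked at directly from SQL (table name here is a placeholder; MySQL 8.0 renames tx_isolation to transaction_isolation):

```sql
-- MySQL 5.7: current isolation levels
SELECT @@global.tx_isolation, @@session.tx_isolation;

-- change for subsequent transactions in this session
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- take an explicit row write lock inside a transaction
SELECT * FROM mytable WHERE id = 1 FOR UPDATE;
```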
-
- Also see Rick James' MySQL site
-
Increasing group_concat maximum length
SET @@group_concat_max_len = value_numeric;
-
Dumping data into CSV file on a locally running MySQL server (does not work on RDS where we have no admin user)
SHOW VARIABLES LIKE "secure_file_priv";
SELECT * from mytable INTO OUTFILE '';
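A bare INTO OUTFILE writes tab-separated text; for actual CSV, add the FIELDS/LINES clauses (table and path here are placeholders, and the path must live under secure_file_priv):

```sql
SELECT * FROM mytable
INTO OUTFILE '/var/lib/mysql-files/mytable.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
```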
-
CRC32 column as a hash index (use the crc= condition together with user_id= and package_name= in the where clause)
alter table user_activity add column crc int unsigned;
update user_activity set crc = crc32(concat(user_id, package_name));
create index crc32_user_package on user_activity(crc);
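A lookup through the hash index then looks like this (example values): the crc= condition lets MySQL use the narrow index, while repeating the original columns guards against CRC32 collisions:

```sql
SELECT *
FROM user_activity
WHERE crc = crc32(concat('12345', 'com.example.app'))
  AND user_id = '12345'
  AND package_name = 'com.example.app';
```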
-
Table sizes
SELECT table_name "Table name",
  lpad(truncate((data_length + index_length) / 1024 / 1024, 2), 5, ' ') "Table size in MB",
  lpad(format(data_free / 1024 / 1024, 2), 5, ' ') "Free space in MB"
FROM information_schema.TABLES
WHERE table_schema = 'activitydb'
ORDER BY (data_length + index_length) DESC;
-
Database sizes and free space
SELECT table_schema "Data Base Name",
  sum(data_length + index_length) / 1024 / 1024 "Data Base Size in MB",
  sum(data_free) / 1024 / 1024 "Free Space in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
-
Selecting into variable
SELECT a, b INTO @a, @b FROM mytable LIMIT 1;
or
SET @a := (SELECT ... )
for a single variable. -
Handle GROUP BY in legacy mode
SELECT list is not in GROUP BY clause and contains nonaggregated column
set global sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
Or
SET GLOBAL sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
-
Backup mysql database to s3 using s3cmd
mysqldump -C --default-character-set=utf8 --single-transaction --quick -h databasehost --ssl-ca=./rds-combined-ca-bundle.pem -u inspector --password='xxxxx' activitydb |gzip | $HOME/bin/s3cmd put - s3://xxxx/db/full_$datestamp.sql.gz
-
mysqldump part of a table:
mysqldump -u -p activitydb --skip-extended-insert --tables url_replace_rules --where "publisher_id=46"
-
Efficiently moving rows from one table to another - archiving
-
Adding remote user
CREATE USER username@'%' IDENTIFIED BY 'password';
GRANT ALL ON *.* TO username@'%'; -- or just SELECT for readonly
-
Show innodb locks:
show global status like 'Innodb_row%';
and
show engine innodb status;
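On MySQL 5.7, lock waits can also be joined up from information_schema (these tables were replaced by performance_schema.data_locks in 8.0); a sketch:

```sql
-- who is waiting on whom (MySQL 5.7 / InnoDB)
SELECT r.trx_id              AS waiting_trx,
       r.trx_mysql_thread_id AS waiting_thread,
       b.trx_id              AS blocking_trx,
       b.trx_mysql_thread_id AS blocking_thread
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;
```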
Unsafe settings to speed up MySQL for testing (throwaway data only):
sync_frm=0
innodb-flush-log-at-trx-commit=0
innodb-doublewrite=0
innodb-checksums=0
innodb_support_xa=0
Keep the binary log disabled (i.e. comment out any log-bin lines in configuration). For testing, you don’t need to be able to point-in-time restore, or seed replicas with changes.
Set sync_frm=0
. This makes any DDL (schema changes) unsafe, but removes an fsync that is required to make sure the changes are safely on disk.
Set innodb-flush-log-at-trx-commit=0
. This reduces redo log file pressure as data is committed. I would rate this as one of the most important settings to change. The particular case when it will help the most is when you are running on hardware without a battery backed write cache.
Set innodb-doublewrite=0
. This makes recovery unsafe, as InnoDB will not be able to recover from a partially written page. However, as with all these steps, we are working with the assumption that you will just throw out the data when that happens.
Set innodb-checksums=0
. Page checksums are InnoDB’s mechanism to make sure the underlying hardware isn’t failing and corrupting data. It’s usually only a small hit (although more measurable with SSDs), and is only used on loading/unloading pages from disk.
Set innodb_support_xa=0
. This keeps changes between the binary log and InnoDB synchronized. Since we don't care about recoverability, we can disable it.
Set innodb-log-file-size
and innodb-buffer-pool-size
appropriately large. I have a guide here. You can typically go much larger on the log files when you don’t care about recoverability - 4G or 8G.
In your session set unique_checks=0
and foreign_key_checks=0
. These two options are optional, since they are the only recommendation listed here which change behaviour. When set they remove internal constraint checking, which means that a unique key may not actually be unique.
You may notice these options in files created by mysqldump.
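Collected into a my.cnf fragment, the server-side settings above might look like this (MySQL 5.6/5.7 option names; sync_frm and innodb_support_xa no longer exist in 8.0, and the log/buffer sizes are example values):

```ini
# UNSAFE: testing only -- data is disposable
[mysqld]
# log-bin=mysql-bin            # keep binary logging commented out
sync_frm=0
innodb-flush-log-at-trx-commit=0
innodb-doublewrite=0
innodb-checksums=0
innodb_support_xa=0
innodb-log-file-size=4G
innodb-buffer-pool-size=4G
```

The per-session pieces stay in SQL: SET SESSION unique_checks=0, foreign_key_checks=0;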
mysqlbinlog --read-from-remote-server -h databasehost -u admin --password='cleartext' --result-file prod.log mysql-bin-changelog.134240
Add "-v" to generate SQL statements
Execute (as admin)
show binary logs
to get the list of log files, and add --stop-never to stream.
To compute diff of two consecutive rows:
set @a = 0;
select (cast(crawl_time as signed) - cast(@a as signed)) as diff,
       @a := crawl_time as last
from policy_versions
where publisher_id = 9
order by crawl_time desc
limit 100;
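On MySQL 8.0+ the same row-to-row diff can be written with a window function, avoiding the user-variable trick (which 8.0 deprecates); equivalent in spirit to the query above:

```sql
SELECT crawl_time,
       cast(crawl_time as signed)
         - cast(lag(crawl_time) OVER (ORDER BY crawl_time) as signed) AS diff
FROM policy_versions
WHERE publisher_id = 9
ORDER BY crawl_time DESC
LIMIT 100;
```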
-
Alter a table without copying and with no r/w lock. Returns an error if not possible:
ALTER TABLE ... , ALGORITHM=INPLACE, LOCK=NONE;
-
Kill query
CALL mysql.rds_kill(thread-ID);
CALL mysql.rds_kill_query(thread-ID);
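To find the thread-ID to pass in, list running statements first, longest-running at the top:

```sql
SELECT id, user, time, state, info
FROM information_schema.processlist
ORDER BY time DESC;
```

Then pass the chosen id to mysql.rds_kill (kills the connection) or mysql.rds_kill_query (kills only the statement).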