
MySQL 5.7 performance issue with 300 GB InnoDB

We have a database running on MySQL 5.7, around 300 GB in size. The database runs on a dedicated Linux server (RHEL 6) with 144 GB of RAM, 16 CPUs, and 15 GB of swap.

The server is busy throughout the day, with a minimum of 50 connections. Most of the queries use indexes, and the tables are optimized once a week, but we are still facing performance issues. Could you please review the my.cnf configuration below and suggest changes?

my.cnf is as below:

[mysqld]
basedir=/usr
datadir=/sql/mysql/data57
socket=/sql/mysql/data/mysql.sock
skip-external-locking
key_buffer_size = 4000M
max_allowed_packet = 5120M
table_open_cache = 4000
sort_buffer_size = 128M
read_buffer_size =8M
join_buffer_size=128M
read_rnd_buffer_size = 16M
myisam_sort_buffer_size = 128M
thread_cache_size = 100
query_cache_size=0
query_cache_limit=4M
query_cache_type="ON"
query_cache_min_res_unit=20K
query_prealloc_size=40K
query_alloc_block_size=40K
max_connections=300
sql_mode = ""
interactive_timeout = 28800
wait_timeout = 7200
connect_timeout = 60
default_password_lifetime=0
old_passwords=2
lower_case_table_names=1
tmpdir=/tmpfs
tmp_table_size=20G
max_heap_table_size=170M
innodb_buffer_pool_size=100G
innodb_buffer_pool_instances=16
innodb_buffer_pool_chunk_size=6562M
innodb_read_io_threads=64
innodb_write_io_threads=64
innodb_log_file_size=2G
innodb_lock_wait_timeout=180
net_buffer_length=8K
transaction-isolation=READ-COMMITTED
federated


log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[client]
port=3306
socket=/sql/mysql/data/mysql.sock

[mysql.server]
user=mysql
basedir=/usr
log-bin=mysql-bin
binlog_format=mixed
server-id=1

[mysqldump]
quick
max_allowed_packet=1G

[mysql]
no-auto-rehash

[myisamchk]
key_buffer_size=1024M
sort_buffer_size=256K
read_buffer=8M
write_buffer=8M

[mysqlhotcopy]
interactive-timeout

I would suggest a different approach to performance optimization, one not related to MySQL server reconfiguration.

First, analyze your table structures to see whether the column types are optimal; for a large database like yours, every bit counts:

 SELECT * FROM table PROCEDURE ANALYSE();

PROCEDURE ANALYSE() will suggest, based on the data actually stored in the table, an optimal type for each column. This should help increase efficiency when reading from and writing to the tables. (Note that PROCEDURE ANALYSE() is deprecated as of MySQL 5.7.18 and removed in MySQL 8.0, so it still works on your version but is best treated as a one-off analysis tool.)
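
For example, here is a minimal sketch (the table and column names are hypothetical) that restrains how aggressively ANALYSE() recommends ENUM types; the two optional arguments cap the number of distinct values and the memory considered per column:

-- Hypothetical table; ANALYSE(16, 256) suppresses ENUM suggestions for
-- columns with more than 16 distinct values or more than 256 bytes of them.
SELECT customer_id, status, created_at
FROM orders
PROCEDURE ANALYSE(16, 256);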

Second, enable the so-called "slow query log" and study the queries that show up in it. This will help you catch poorly written queries that could otherwise cause catastrophic slowdowns, and eventually optimize them:

SET GLOBAL slow_query_log = 'ON'; 

and

FLUSH LOGS;
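
Out of the box the threshold is high (10 seconds), and SET GLOBAL does not survive a restart. A minimal sketch of a tighter setup, assuming a log path of /var/log/mysql-slow.log (adjust to your layout):

SET GLOBAL long_query_time = 1;                   -- log anything slower than 1 second
SET GLOBAL slow_query_log_file = '/var/log/mysql-slow.log';
SET GLOBAL log_queries_not_using_indexes = 'ON';  -- optional, can be noisy

To keep these settings across restarts, add the equivalent lines (slow_query_log=1, long_query_time=1, slow_query_log_file=...) under [mysqld] in my.cnf. The bundled mysqldumpslow tool can then summarize the log, e.g. mysqldumpslow -s t /var/log/mysql-slow.log to sort by total query time.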

In my view, it is important to nail down the specific performance problem first, so you know you are fixing the right thing. Consider using a profiler to find the bottleneck, and then work on that specific problem.
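
On MySQL 5.7 the Performance Schema is enabled by default, so you can get a first profile without external tooling by ranking statement digests by total time. A sketch, assuming performance_schema has been collecting since the workload started:

-- Top 10 statement patterns by total execution time (timers are in picoseconds).
SELECT DIGEST_TEXT                    AS query_pattern,
       COUNT_STAR                     AS executions,
       ROUND(SUM_TIMER_WAIT/1e12, 2)  AS total_seconds,
       ROUND(AVG_TIMER_WAIT/1e9, 2)   AS avg_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

The patterns at the top of this list are where optimization effort pays off first.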
