MariaDB with XtraDB


sobrenome


With the current settings, the first-view TTFB is around 400ms (nice), but the repeat view is around 1100ms... (much worse than before)

:cry:

What site do you use to check it?
Almost all the time I get 250ms, and sometimes it rises to 300 or even 400ms.
 


 


After I tried the new isolation mode, by adding transaction-isolation = READ-COMMITTED to my.cnf, I got huge improvements. I never expected the improvements to be so big.

 

New Relic Graph:

[Screenshot: New Relic response time graph]

 

Comparing Today with Yesterday and a Week ago:

[Screenshot: New Relic comparison graph]

 

All this while handling more requests per second than yesterday.
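For anyone who wants to try the same thing, the change is a single line in the [mysqld] section of my.cnf (restart MySQL afterwards; note that READ COMMITTED has different locking semantics than the default, so test it against your own application first):

```ini
[mysqld]
# Default is REPEATABLE-READ; READ-COMMITTED takes fewer gap locks
# and can noticeably reduce contention on busy web workloads.
transaction-isolation = READ-COMMITTED
```

You can verify it took effect with SELECT @@tx_isolation; from the mysql client.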


What site do you use to check it? Almost all the time I get 250ms, and sometimes it rises to 300 or even 400ms.

Everything seems to be working fine for me so far with some of the changes you recommended on the last page. I'm seeing the same as you, around 250ms with peaks of 300-450ms. I'm running Server version: 10.0.16-MariaDB.

Seems interesting that @sobrenome is seeing peaks of 1000ms for TTFB.


Test with: http://www.bytecheck.com/

Mine:

Your Time To First Byte is: 0.260 seconds

Extended Information:

Field                 Data
Url Effective         ********
Http Code             301
Connect Time          000
Total Time            0.260
Time Namelookup       0.002
Time Connect          0.091
Time Pretransfer      0.091
Time Redirect         0.000
Time To First Byte    0.260
Size Header           445
Size Request          81
Speed Download        0.000
Speed Upload          0.000
Content Type          text/html;charset=UTF-8
Num Connects          1
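Those fields map directly onto curl's --write-out variables, so you can reproduce a bytecheck-style measurement yourself from any machine. A minimal sketch (the variable names are real curl -w fields; the URL you pass is up to you):

```shell
#!/bin/sh
# ttfb: print curl's timing breakdown for a URL, using the same
# fields that appear in the bytecheck report above.
ttfb() {
    curl -s -o /dev/null -w 'namelookup=%{time_namelookup} connect=%{time_connect} pretransfer=%{time_pretransfer} starttransfer=%{time_starttransfer} total=%{time_total}\n' "$1"
}
```

Run it as ttfb https://yoursite.example/ — time_starttransfer is what bytecheck reports as Time To First Byte.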

Measuring TTFB with third-party tools like WebPageTest isn't very accurate or reliable. I recommend installing and using something like New Relic, as @RevengeFNF is doing in his screenshot above; that will give you real, accurate performance metrics for the application itself, which is what you want here.

By the way, I don't think you ever mentioned: this is an actual dedicated server you are renting/leasing, right, not a VPS? I'm assuming it's a dedicated server given that you have 16GB of memory available to you.

And if it is a dedicated server, what is your current hardware configuration? (Specifically, what is your database server running on: a single spinning disk, an SSD, a RAID configuration?)


After I tried the new isolation mode, by adding transaction-isolation = READ-COMMITTED to my.cnf, I got huge improvements. I never expected the improvements to be so big. [...] All this while handling more requests per second than yesterday.

Could you please post your my.cnf settings?


Yes, it's a dedicated server with a single HDD.

I will install New Relic and post the test results here.

If it's an option for you and within your budget, I would seriously consider building a server with an SSD as the primary drive and an extra HDD for storage if you need it.

Spinning disks are simply slow and one of the primary bottlenecks for a database server; that's why so many of the major performance optimizations center on minimizing disk I/O. Upgrading to a solid-state drive will give you an immediate, substantial, all-around performance increase. To offer a bit of perspective, this is one of my main production forums running off a RAID 1 SSD configuration for the operating system and database server (and a hardware RAID 5 configuration for storage):

[Screenshot: New Relic overview of a production forum]

This is also running on a customized Nginx + PHP-FPM setup, among many other server optimizations, but getting scalable database performance like this on anything but a solid-state drive is going to be pretty much impossible. MariaDB 10 and InnoDB/XtraDB 5.6 have introduced a lot of improvements in this area, but an HDD will always be a large bottleneck no matter what you do. Upgrading your server to an SSD is probably the single best thing you can do to improve overall performance. So, again, if it's an option for you and within your budget, it's something to seriously consider.


Could you please post your my.cnf settings?

[mysqld]
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
user = mysql
symbolic-links = 0
max_connections = 120
max_user_connections = 70
wait_timeout = 30
interactive_timeout = 50
max_allowed_packet = 64M
tmpdir = /var/mysqltmp

# Slow queries
slow_query_log = 1
slow_query_log_file = /var/log/mysql-slow.log
long_query_time = 5
log_slow_verbosity = Query_plan,explain,Innodb

# InnoDB
innodb_file_per_table
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 0
innodb_buffer_pool_size = 2800M
innodb_log_file_size = 256M
innodb_read_io_threads = 8
innodb_write_io_threads = 8
innodb_buffer_pool_instances = 2
innodb_buffer_pool_dump_at_shutdown = 1
innodb_buffer_pool_load_at_startup = 1
transaction-isolation = READ-COMMITTED

# MyISAM
key_buffer_size = 40M
aria_pagecache_buffer_size = 128K

# Thread cache
thread_cache_size = 8
thread_handling = pool-of-threads

# Table cache
table_open_cache = 1000
open_files_limit = 2000

# Buffers
join_buffer_size = 2M
read_buffer_size = 128K
sort_buffer_size = 256K

# Query cache
query_cache_size = 64M
query_cache_limit = 6M
query_cache_min_res_unit = 1K

# Temporary tables
tmp_table_size = 256M
max_heap_table_size = 256M

[mysqld_safe]
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

 


If it's an option for you and within your budget, I would seriously consider building a server with an SSD as the primary drive and an extra HDD for storage if you need it. [...] Upgrading your server to an SSD is probably the single best thing you can do to improve overall performance.

But InnoDB relies on RAM, doesn't it?


But InnoDB relies on RAM, doesn't it?

There's still disk overhead in other areas, but ideally, yes. I know it may sound a bit contradictory, and I used to frequently say SSDs shouldn't be necessary on a properly tuned database server with plenty of memory available, but I don't really bother preaching that ideal-world scenario anymore. It's such an easy upgrade, and it makes such a significant improvement to overall server performance, that it's very hard to argue against.

With enough memory available and a large enough buffer pool, MySQL should be reading and working almost entirely from memory. Even so, there has always been an undeniable, substantial improvement to database (and overall) performance on every server I've worked on after upgrading the primary OS drive to an SSD, no matter how thoroughly optimized it was.

So I really shouldn't have painted this as something specific to the database server, because it's not; it's a general server upgrade. You're probably running off a standard 7200RPM disk at the moment. You will likely be amazed at the difference in performance after upgrading your primary OS drive to a good solid-state drive. It's really not that different from the feeling of upgrading your desktop to an SSD for the first time.

(Which, by the way, you should do if you haven't already :P)

I'm not saying it's an alternative to properly tuning your database server or anything of the sort (it's not), but if you're trying to improve overall server performance and have enough room in your budget to upgrade, it's probably the easiest and most significant performance upgrade available to you right now.


I might recommend something like this as a base configuration:

# MariaDB database server configuration file.

[mysqld]
###############################
# Basic Settings              #
###############################
skip-external-locking
performance_schema	= off
bind-address		= 127.0.0.1

###############################
# Fine Tuning                 #
###############################
max_connections		= 192
connect_timeout		= 30
wait_timeout		= 600
max_allowed_packet	= 256M
thread_cache_size	= 128
tmp_table_size		= 64M
max_heap_table_size	= 64M
thread_handling		= pool-of-threads

###############################
# MyISAM                      #
###############################
myisam_recover		= BACKUP
concurrent_insert	= 2

###############################
# Query Cache Configuration   #
###############################
query_cache_limit	= 128K
query_cache_size	= 0M
query_cache_type	= OFF

###############################
# Logging and Replication     #
###############################
#general_log_file	= /var/log/mysql/mysql.log
#general_log		= 1

###############################
# InnoDB                      #
###############################
default_storage_engine	= InnoDB
innodb_log_file_size	= 2G
innodb_buffer_pool_size	= 8G
innodb_log_buffer_size	= 32M
innodb_file_per_table	= 1
innodb_open_files		= 800
innodb_io_capacity		= 800
innodb_flush_method		= O_DIRECT
innodb_flush_log_at_trx_commit = 0

I've lowered your maximum connections a bit; since your peak connection usage is only 52 at the moment, it would probably be safe to lower this even further, but you're not using that much memory per thread anyway.

I keep the query cache disabled, as I don't believe it provides any real benefit for complex applications such as IP.Board (it can even degrade performance). Specifically, notice how many query cache prunes you have per day? That's because complex applications like IP.Board issue a lot of complex queries that can't be cached effectively and end up being frequently flushed.

I've increased the InnoDB buffer pool to 8GB, which still gives you a little breathing room for growth over your current 6.7GB of usage.

This includes various general-purpose performance tweaks for InnoDB and MariaDB: O_DIRECT flushing, flushing the log once per second instead of at every transaction commit, thread pooling, and making sure performance_schema is disabled (I can't remember whether it's still enabled by default in the latest stable MariaDB release, but it degrades performance in production and should only be enabled while you're actually using it).

Going off your previous MySQLTuner output, I'm assuming you have 16GB of total memory. With this configuration, you'll be dedicating a little over half of your available memory to MySQL, which should still leave plenty of room for other web services to run as long as everything else is properly configured.
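As a sanity check, MySQLTuner's "maximum possible memory usage" figure can be roughly reproduced from a configuration by hand. This is a worst-case sketch using MySQLTuner's approximation (global buffers plus per-thread buffers times max_connections), with the values from the configuration above plugged in:

```shell
#!/bin/sh
# Worst-case MySQL memory: global buffers + per-thread buffers * max_connections.
# Values from the config above: 8G buffer pool + 32M log buffer + 0 query cache;
# 128K read + 256K sort + 2M join buffer per thread; 192 max connections.
global_mb=$((8192 + 32 + 0))
per_thread_kb=$((128 + 256 + 2048))
max_connections=192
total_mb=$((global_mb + per_thread_kb * max_connections / 1024))
# Close to MySQLTuner's reported figure; MySQLTuner additionally counts
# the key buffer and a few smaller per-thread buffers.
echo "${total_mb} MB"
```

This is why lowering max_connections matters less here than it might seem: the per-thread buffers are small relative to the fixed global pool.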

I can't remember whether you still need to do this with MariaDB 10, but just to be safe, after updating your configuration, run the following commands to restart MySQL. You generally need to do this any time you change innodb_log_file_size. (I think MariaDB 10 may have removed this requirement, but I honestly can't remember.)

mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0"  # force a full, clean shutdown so the logs can be discarded safely
service mysql stop
mv /var/lib/mysql/ib_logfile[01] /tmp  # the log files are recreated at the new size on startup
service mysql start

This suggestion seems to be the best I have tried so far. But I really can't understand why TTFB is lower for a first-time visitor and higher for a visitor who already has the files cached.

There is only one problem:
With "bind-address = 127.0.0.1", MySQL goes down and doesn't restart. Is it important? How should I set it?

What about putting the tmp directory in RAM? Is it effective?


Please be sure you're referencing data from a server-side utility like New Relic and not WebPageTest.org. These third-party utilities are affected by a wide variety of factors outside your control and are really not reliable for testing server/application performance.

Don't worry about bind-address for now; it's a security directive that prevents remote connections, but I don't know enough about your server configuration to suggest changing it.

Yes, mounting tmp as tmpfs can help, as long as you have memory to spare.
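If you do try it, a tmpfs mount for MySQL's tmpdir is a one-line /etc/fstab entry. The path and size below are assumptions: the path must match the tmpdir directive in your my.cnf, and the size should comfortably exceed your largest on-disk temporary table.

```
# /etc/fstab entry (path and size are assumptions; adjust to your setup)
tmpfs  /var/mysqltmp  tmpfs  rw,size=1G,uid=mysql,gid=mysql,mode=0750  0  0
```

After adding it, mount /var/mysqltmp activates it without a reboot.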


An SSD is one of the best things you can get for a server.

I'm also using an HDD, so we need to try to use RAM instead of the disk, and we should also try to reduce I/O operations.

Another thing that can speed up write operations by up to 60% and greatly reduce I/O is disabling innodb_doublewrite.
But you should only do this if your filesystem is ext4 with journaling. If not, don't disable the doublewrite buffer or you will likely end up with corrupted database files after a crash.

 

Results (average NOPTM over 1 hour):

Test                Doublewrite buffer   Filesystem options    Avg NOPTM over 1h
EXT4_DW             Yes                  rw                    690
EXT4_DIONOLOCK_DW   Yes                  rw,dioread_nolock     668
EXT4_NODW           No                   rw                    1107
EXT4TRX_NODW        No                   rw,data=journal       1066
XFS_DW              Yes                  xfs rw,noatime        754
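For reference, the change discussed above is a single my.cnf directive. Treat it with care; as noted, it is only reasonably safe on ext4 with journaling, since the doublewrite buffer is what protects InnoDB from torn pages:

```ini
[mysqld]
# Skips InnoDB's doublewrite buffer: fewer writes and less I/O,
# but torn-page protection is lost. Only consider on journaling ext4.
innodb_doublewrite = 0
```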

I installed New Relic. Waiting for data.

Latest mysqltuner recommendations:

 >>  MySQLTuner 1.4.0 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering
[!!] Currently running unsupported MySQL version 10.0.16-MariaDB
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +ARCHIVE +Aria +BLACKHOLE +CSV +FEDERATED +InnoDB +MRG_MyISAM 
[--] Data in MyISAM tables: 3G (Tables: 1034)
[--] Data in InnoDB tables: 5G (Tables: 674)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 52)
[--] Data in MEMORY tables: 0B (Tables: 2)
[!!] Total fragmented tables: 191

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 11h 36m 24s (953K q [22.827 qps], 48K conn, TX: 17B, RX: 274M)
[--] Reads / Writes: 67% / 33%
[--] Total buffers: 8.2G global + 2.8M per thread (192 max threads)
[OK] Maximum possible memory usage: 8.7G (56% of installed RAM)
[OK] Slow queries: 0% (6/953K)
[OK] Highest usage of available connections: 6% (12/192)
[OK] Key buffer size / total MyISAM indexes: 128.0M/2.3G
[OK] Key buffer hit rate: 95.7% (205K cached / 8K reads)
[!!] Query cache is disabled
[OK] Sorts requiring temporary tables: 0% (21 temp sorts / 33K sorts)
[OK] Temporary tables created on disk: 22% (5K on disk / 24K total)
[OK] Thread cache hit rate: 98% (721 created / 48K connections)
[!!] Table cache hit rate: 1% (400 open / 20K opened)
[OK] Open file limit used: 5% (216/4K)
[OK] Table locks acquired immediately: 99% (1M immediate / 1M locks)
[OK] InnoDB buffer pool / data size: 8.0G/5.9G
[OK] InnoDB log waits: 0
-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Increase table_open_cache gradually to avoid file descriptor limits
    Read this before increasing table_open_cache over 64: http://bit.ly/1mi7c4C
Variables to adjust:
    query_cache_size (>= 8M)
    table_open_cache (> 400)

 


If it's an option for you and within your budget, I would seriously consider building a server with an SSD as the primary drive and an extra HDD for storage if you need it. [...] Upgrading your server to an SSD is probably the single best thing you can do to improve overall performance.

I always got more than 100ms, but in the last two days I've had less than 50ms. It almost feels as if I'd swapped my HDD for an SSD.

I accomplished this with three alterations to my.cnf. Two of them came from you, thanks.

innodb_flush_log_at_trx_commit=0
thread_handling = pool-of-threads
transaction-isolation = READ-COMMITTED

This proves that we might not have the strongest server, but a good configuration can work miracles in some cases.

[Screenshot: New Relic response time graph]

 


I always got more than 100ms, but in the last two days I've had less than 50ms. [...] I accomplished this with three alterations to my.cnf: innodb_flush_log_at_trx_commit = 0, thread_handling = pool-of-threads, and transaction-isolation = READ-COMMITTED.

I know I've read up on READ COMMITTED transaction isolation before, once a long time ago, but I guess I never gave it much serious thought, and I don't remember why. I've read into its implications a bit more now, and it is very interesting. It seems like a perfectly safe default for most web applications, and if it really makes such a large performance impact on IP.Board, that's awesome. I'll test it on my production websites soon and see if it makes any noticeable difference for me. (Though my average database response time is only 6-10ms anyway :P)

Disabling flush-log-at-transaction-commit is one of the best tweaks I know of. Without this setting, InnoDB has to flush its log to disk at the end of every transaction, which is naturally detrimental to performance. Setting it to 0 causes the flush to happen in the background once per second instead of at each commit. It's a simple tweak that's great for performance, as long as you don't mind the risk of losing roughly one second's worth of data in the event of a sudden server crash or power loss (and it's very unlikely that you do).
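The three possible values are worth writing down side by side, since the whole durability trade-off lives in this one directive:

```ini
[mysqld]
# innodb_flush_log_at_trx_commit:
#   1 = write and flush the log at every commit (default, fully durable, slowest)
#   2 = write at every commit, flush to disk about once per second
#   0 = write and flush about once per second (fastest; a crash can lose ~1s of commits)
innodb_flush_log_at_trx_commit = 0
```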


That's actually not bad, all things considered. A 171ms response time is more or less reasonable, and your database response times in particular look reasonably good (and that's what we're optimizing here). PHP is now the major limiting factor. What version of PHP are you running? You should definitely look into enabling OPcache if you haven't already; it ships natively with PHP 5.5, and before that (5.2-5.4) you can install it as a PECL extension.

Statistics from just the last 30 minutes also only say so much; it would be good to see how your application performs over a 24-hour period. Now that you have this set up, you'll also be able to see the real impact of any server configuration changes you make.
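If OPcache isn't enabled yet, a typical php.ini starting point looks like this. The numbers are assumptions to tune against the size of your codebase, not universal recommendations:

```ini
; php.ini (PHP 5.5+, where OPcache is bundled)
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128       ; MB of shared memory for cached bytecode
opcache.interned_strings_buffer=8    ; MB reserved for interned strings
opcache.max_accelerated_files=10000  ; should exceed your PHP file count
opcache.validate_timestamps=1        ; recheck files for changes (safe default)
```

You can confirm it's active by checking for the "Zend OPcache" section in phpinfo() output.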


That's actually not bad, all things considered. [...] You should definitely look into enabling OPcache if you haven't already.

PHP 5.5 with FastCGI and OPcache. I'm stuck with cPanel, and they've promised to add PHP-FPM within the next two months.

Is there anything I can do to optimize FastCGI besides OPcache?


Archived

This topic is now archived and is closed to further replies.
