Oracle and Linux I/O Schedulers (Part 2)


This paper starts from what was left behind in the previous one.
Its purpose is to further test the capabilities and performance of Oracle against the different Linux I/O schedulers.
The workload of choice is transactional: Hammerora is going to simulate a TPC-C workload while the scheduler is switched dynamically.

The main difference compared to the first test is that the redo logs have been relocated to another disk with a different I/O scheduler setting.
Since deadline was the scheduler with the fewest wait events on the redo log, it is the scheduler of choice for the redo log device.

The device sdc (a RAID 1) hosts the redo logs and is set to deadline with default parameters.
The device sdb, hosting the datafiles, will have its scheduler changed without interrupting the TPC-C workload.
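The switch itself goes through sysfs; a minimal sketch, assuming the device names above (writing the file needs root and takes effect immediately):

```shell
# Reading the scheduler file lists every available scheduler, with the
# active one in brackets, e.g. "noop anticipatory [deadline] cfq":
#   cat /sys/block/sdb/queue/scheduler
#   echo deadline > /sys/block/sdc/queue/scheduler   # requires root

# Extracting the active scheduler from that output:
line="noop anticipatory [deadline] cfq"
active=$(echo "$line" | sed 's/.*\[\(.*\)\].*/\1/')
echo "$active"
```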

Of course, the hardware and the database remained the same.
No Oracle or SUSE parameters were modified.

Results:

For each scheduler you can see an AWR report by following the links below:

scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
noop                   151.74              8.6      705.11          310.14           214.27
anticipatory           126.32              6.9      586.14          260.42           179.45
deadline               154.94              8.9      717.67          317.77           216.99
cfq                     51.33              0.5      238.90          107.69            82.31

Again the winner is deadline, and again noop is close behind.
The worst performance comes from cfq (the default scheduler, *sigh*).

I retested the deadline scheduler after tuning some of its available parameters, but I couldn't measure any significant variation.
Probably the workload relies mainly on request merging, while only a marginal part can be improved by the I/O sorting algorithm.
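For reference, the deadline tunables live under sysfs on the kernels of the time; a sketch, assuming device sdb (the datafile disk) and quoting the stock defaults from the kernel's own documentation:

```shell
# Deadline tunables sit in /sys/block/<dev>/queue/iosched/; e.g., as root:
#   echo 250 > /sys/block/sdb/queue/iosched/read_expire   # tighten the read deadline (ms)
# Stock defaults, per Documentation/block/deadline-iosched.txt:
defaults=$(cat <<'EOF'
read_expire=500
write_expire=5000
fifo_batch=16
writes_starved=2
front_merges=1
EOF
)
echo "$defaults"
```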

A second round of tests has been performed.
The different schedulers are tested with and without direct I/O and asynchronous I/O.
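Both behaviours are controlled by the filesystemio_options initialization parameter; as a reminder, its possible values and a sketch of how to change it (an instance restart is required):

```shell
# filesystemio_options values:
#   none     -> no asynchronous I/O, no direct I/O on filesystem files
#   asynch   -> asynchronous I/O only
#   directio -> direct I/O only (bypasses the filesystem cache)
#   setall   -> both asynchronous and direct I/O
#
# Changing it (run as a DBA, then bounce the instance):
# sqlplus / as sysdba <<'SQL'
# ALTER SYSTEM SET filesystemio_options=setall SCOPE=SPFILE;
# SQL
```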

Results setting filesystemio_options=none:

scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
noop                   152.36              8.6      705.49          308.85           213.97
anticipatory           125.63              7.1      580.85          255.18           179.27
deadline               150.80              8.7      698.20          310.96           212.79
cfq                     56.81              0.7      264.32          117.92            87.84

The winner this time is noop, while the worst scheduler for this workload remains cfq (the test has been repeated several times to confirm the performance drop).
The absence of asynchronous I/O doesn't seem to make much difference in transactions per second (only deadline performs worse).

What happens if direct I/O is enabled?
Request merging should be disabled, as well as filesystem caching.
Below are the results of the test with filesystemio_options=setall.

scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
noop                   138.76              7.5      645.24          288.37           193.82
anticipatory           123.20              6.8      580.85          253.20           178.53
deadline               139.58              7.8      648.43          291.79           199.89
cfq                     51.22              n/a      237.86          108.19            81.52

Performance surely dropped, but the choice of scheduler still seems to matter.
As always, cfq shows the worst behaviour.

To be noticed: the deadline and noop schedulers, which showed the best performance in the previous tests, had the highest performance drop.
This is probably due to direct I/O making request merging impossible (as shown by iostat and the kernel statistics).
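The merge counters in extended iostat output make this visible: rrqm/s and wrqm/s are the read and write requests merged per second, and under direct I/O they sit at zero. A sketch with an illustrative sample line (the values are made up, not measured):

```shell
# `iostat -x` device lines begin with: Device rrqm/s wrqm/s r/s w/s ...
# Illustrative sample for the datafile disk under direct I/O:
sample="sdb 0.00 0.00 300.00 200.00"
merges=$(echo "$sample" | awk '{print $2 + $3}')
echo "merged requests/s: $merges"
```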


Contact information:
fabrizio.magni _at_ gmail.com

Copyright © 2010-2015 - Fabrizio Magni