Oracle and Linux I/O schedulers (part 2)


This paper starts from where the previous one left off.
Its purpose is to further test the capabilities and performance of Oracle against the different Linux I/O schedulers.
The workload of choice is transactional: Hammerora is going to simulate a TPC-C workload while the scheduler is switched dynamically.

The main difference compared to the first test is that the redo logs have been relocated to another disk with its own I/O scheduler setting.
Since deadline showed the fewest wait events on the redo logs, it was the scheduler of choice for the redo log device.

The device sdc (a RAID 1) hosts the redo logs and is set to deadline with default parameters.
The device sdb, hosting the datafiles, will have its scheduler switched without interrupting the TPC-C workload.
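For reference, this is roughly how the schedulers were assigned per device. A minimal sketch, assuming the sysfs layout of a 2.6 kernel and the device names above (run as root):

    # show the available schedulers for a device; the active one is in brackets
    cat /sys/block/sdb/queue/scheduler
    # -> noop anticipatory [deadline] cfq

    # pin the redo log device (sdc) to deadline
    echo deadline > /sys/block/sdc/queue/scheduler

    # switch the datafile device (sdb) on the fly, without stopping the workload
    echo noop > /sys/block/sdb/queue/scheduler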

Of course, the hardware and the database remained the same.
No Oracle or SUSE parameters were modified.

Results:

For each scheduler an AWR report is available through the following links:
noop, anticipatory, deadline, cfq

    scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
    noop           151.74          8.6              705.11      310.14          214.27
    anticipatory   126.32          6.9              586.14      260.42          179.45
    deadline       154.94          8.9              717.67      317.77          216.99
    cfq            51.33           0.5              238.90      107.69          82.31
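These figures come from the AWR load profile. If you want to cross-check them outside AWR, one option is to sample the cumulative counters in v$sysstat twice and divide the deltas by the elapsed seconds; the statistic names below are standard, but the sampling approach is my suggestion, not part of the original test:

    sqlplus -s / as sysdba <<'EOF'
    -- cumulative values since instance startup: take two samples
    -- N seconds apart and compute (delta / N) for per-second rates
    SELECT name, value
      FROM v$sysstat
     WHERE name IN ('user commits', 'user calls',
                    'physical reads', 'physical writes');
    EOF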

Again the winner is deadline, and again noop is close behind.
The worst performance comes from cfq (the default scheduler, *sigh*).

I tried retesting the deadline scheduler after tuning some of its available parameters, but I couldn't measure any significant variation.
Probably the workload relies mainly on I/O merging, while only a marginal part can be improved by the sorting algorithm of the scheduler.
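For completeness, these are the deadline tunables I mean. A sketch assuming sdb and the stock 2.6 sysfs names; the values below are illustrative, not a recommendation:

    # list the deadline tunables for the datafile device
    ls /sys/block/sdb/queue/iosched/
    # -> fifo_batch  front_merges  read_expire  write_expire  writes_starved

    # example tweaks: the *_expire values are deadlines in milliseconds,
    # fifo_batch is the number of requests moved per batch
    echo 250  > /sys/block/sdb/queue/iosched/read_expire
    echo 2000 > /sys/block/sdb/queue/iosched/write_expire
    echo 32   > /sys/block/sdb/queue/iosched/fifo_batch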

A second test has been performed: the different schedulers are compared with and without direct I/O and asynchronous I/O.
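Note that filesystemio_options is a static parameter, so every variant requires an instance restart. A minimal sketch of the switch (assuming an spfile is in use):

    sqlplus -s / as sysdba <<'EOF'
    -- none   = neither direct nor asynchronous I/O
    -- setall = both direct and asynchronous I/O
    -- (asynch and directio select one or the other)
    ALTER SYSTEM SET filesystemio_options = none SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP
    EOF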

Results with filesystemio_options=none:

    scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
    noop           152.36          8.6              705.49      308.85          213.97
    anticipatory   125.63          7.1              580.85      255.18          179.27
    deadline       150.80          8.7              698.20      310.96          212.79
    cfq            56.81           0.7              264.32      117.92          87.84

The winner this time is noop, while the worst scheduler for this workload remains cfq (the test was repeated several times to confirm the performance drop).
The absence of asynchronous I/O doesn't seem to make much difference in transactions per second (only deadline performs worse).

What happens if direct I/O is enabled?
Direct I/O should prevent I/O merging and bypass the filesystem cache.
Below are the results with filesystemio_options=setall.

    scheduler      transactions/s  log file sync %  user calls  physical reads  physical writes
    noop           138.76          7.5              645.24      288.37          193.82
    anticipatory   123.20          6.8              580.85      253.20          178.53
    deadline       139.58          7.8              648.43      291.79          199.89
    cfq            51.22           /                237.86      108.19          81.52

Performance surely dropped, but the scheduler still seems to matter.
As always, cfq shows the worst behaviour.

Worth noting: the deadline and noop schedulers, which showed the best performance in the previous tests, suffered the biggest drop.
This is probably due to the impossibility of I/O merging (as shown by iostat and the kernel statistics) under direct I/O.
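The merge statistics in question are the rrqm/s and wrqm/s columns of iostat. A quick way to watch them while the workload runs (assuming the sysstat package is installed):

    # extended device statistics every 5 seconds;
    # rrqm/s and wrqm/s count read/write requests merged by the block
    # layer -- under direct I/O they dropped close to zero in this test
    iostat -x /dev/sdb /dev/sdc 5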
