Dion Cho – Oracle Performance Storyteller

We are natural born scientists

Interesting combination of RAC and serial direct path read


For those who are not aware, Oracle 11g introduced the serial direct path read, which enables efficient direct path reads for serial table scans.

Disabling direct path read for the serial full table scan – 11g

A couple of days ago, a weird performance bias between the 2 nodes of an 11g RAC was reported to me.

The query has the same execution plan on both nodes.

---------------------------------------------------------
| Id  | Operation                       | Name         |
---------------------------------------------------------
|   0 | SELECT STATEMENT                |              |
|   1 |  NESTED LOOPS                   |              |
|*  2 |   TABLE ACCESS FULL             |              |
|*  3 |   VIEW                          |              |
|   4 |    UNION ALL PUSHED PREDICATE   |              |
|*  5 |     FILTER                      |              |
|   6 |      TABLE ACCESS BY INDEX ROWID|              |
|*  7 |       INDEX RANGE SCAN          |              |
|*  8 |     FILTER                      |              |
|   9 |      TABLE ACCESS BY INDEX ROWID|              |
|* 10 |       INDEX RANGE SCAN          |              |
---------------------------------------------------------

But on node A it takes 12 seconds to complete, while it takes only 2 seconds on node B.

-- node A
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.06          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      1.31      10.48      90421      90437          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      1.37      10.55      90421      90437          0           2

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              1        0.00          0.00
  library cache pin                               1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  enq: KO - fast object checkpoint                2        0.00          0.00
  reliable message                                1        0.00          0.00
  direct path read                             1579        0.00          0.01
  SQL*Net message from client                     2       23.81         23.81

-- node B
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.03          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      2.01       2.01          0      90450          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.04       2.05          0      90450          0           2

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  gc cr block 2-way                               1        0.00          0.00
  SQL*Net message from client                     2       23.53         23.53
  library cache lock                              1        0.00          0.00
  library cache pin                               1        0.00          0.00

This performance bias was always reproducible: the query always took 12 seconds on node A and 2 seconds on node B.

The hint was the direct path read wait event, which means that a direct path read was being performed.

  • Because this is not a parallel execution, there is a strong chance that this is the serial direct path read.
  • A nocache LOB can’t be the reason here. If a nocache LOB were the issue, both nodes would show the same performance characteristics.
  • But we should be careful when interpreting the wait count and wait time of the direct path read event. We can’t rely on these values due to the mechanism of the direct path read.
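One way to compare the two nodes’ decision inputs is to dump the hidden parameters that influence the serial direct path read decision on each instance. A minimal sketch (run as SYS since it touches X$ tables; parameter names are version-dependent, and some, like _serial_direct_read, exist only from 11gR2):

```sql
-- Sketch: show hidden parameters relevant to the direct read decision.
-- Requires SYS; available parameter names vary by Oracle version.
SELECT i.ksppinm  AS parameter_name,
       v.ksppstvl AS parameter_value
FROM   x$ksppi  i
JOIN   x$ksppcv v ON i.indx = v.indx
WHERE  i.ksppinm IN ('_small_table_threshold',
                     '_serial_direct_read',
                     '_very_large_object_threshold');
```

Run it on both nodes and compare the values.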

The 10949 diagnostics event was suggested to the customer as a workaround, and the problem was gone. (I know that this event is undocumented and unsupported, but…)
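For reference, setting the event looks like this (again, undocumented and unsupported, so use with caution):

```sql
-- Disable serial direct path read for the current session (11g).
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1';

-- Or instance-wide:
ALTER SYSTEM SET EVENTS '10949 trace name context forever, level 1';
```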

It seems that node A and node B had different environments (different cache sizes and/or different numbers of preloaded blocks…), and as a result Oracle chose different ways of executing the FTS: direct vs. conventional reads.
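If different cache sizes are the suspicion, they can be compared across the cluster in one query. A minimal sketch using the GV$ views:

```sql
-- Compare the buffer cache size of every RAC instance.
SELECT inst_id,
       ROUND(bytes / 1024 / 1024) AS buffer_cache_mb
FROM   gv$sgainfo
WHERE  name = 'Buffer Cache Size'
ORDER  BY inst_id;
```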

Can this be classified as another dark side (side effect) of automation by Oracle? I’m not sure, but an unexpected bias between multiple nodes is always likely to embarrass the DBAs.

Written by Dion Cho

June 9, 2010 at 4:31 am

Posted in I/O


7 Responses


  1. I think direct path read has now become so popular in 11g…

    maclean

    June 9, 2010 at 4:31 pm

  2. […] always faster than the normal reads that go through the buffer cache. Dion Cho has a good post on a performance problem due to serial direct path reads kicking in on one node of an 11g RAC environment (not Exadata). The node doing the direct path […]

  3. Dion, I have the same issue: very different execution times of the same query on the first and second 11g RAC node. By ‘10949 diagnostics event as a workaround’, do you mean what is described in your post ‘Disabling direct path read for the serial full table scan – 11g’?

    Tomas


    November 26, 2010 at 1:40 pm

  4. I did not say that but, of course, the main wait event on the problematic node is direct path read. I have disabled direct path read by

    alter system set events ‘10949 trace name context forever, level 1’;

    but it does not solve the issue.

    Tomas

    November 26, 2010 at 1:43 pm

    • Hi, Tomas.

      Could you let me know the following information?
      – What is the size of the segment in megabytes?
      – What is the value of the “_very_large_object_threshold” parameter?

      Dion Cho

      November 28, 2010 at 11:23 am

  5. Thanks for the reply, Dion.

    I use Oracle 11.1.0.7.0 and the only parameter containing very_large is _very_large_partitioned_table

    (using the script http://www.tanelpoder.com/files/scripts/pd.sql)

    My tablespace uses automatic segment space management with a block size of 8192 bytes. How can I find the size of the segment?

    Tomas

    December 6, 2010 at 11:21 am

    • It seems that the _very_large_object_threshold parameter is valid only from 11gR2.

      Oracle does a direct read on a table whose size is bigger than roughly the value of that parameter. It seems that Oracle 11gR1 does not have such a tweak.

      By the way, you can get the segment size from dictionary views like dba_segments.
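      For example (the owner and segment names below are placeholders, to be replaced with your own):

```sql
-- Segment size in MB from the dictionary (replace owner/segment_name).
SELECT segment_name,
       bytes / 1024 / 1024 AS size_mb
FROM   dba_segments
WHERE  owner        = 'YOUR_SCHEMA'
AND    segment_name = 'YOUR_TABLE';
```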

      Dion Cho

      December 10, 2010 at 2:03 am

