2016-11-20

cost vs. gets

Last week I hit an interesting performance issue:
A table (T) has two indexes of interest: one (IX_1) only on column S, the other (IX_2) on (C, S).
The optimizer costs the plan using IX_1 higher than the plan using IX_2, as the latter should be more selective.
But at runtime the plan with IX_1 needed fewer consistent gets than the plan the optimizer preferred.

Here is the information about the statement, the indexes and the plans.
As it's a real-life example (only the object names are obfuscated), please don't wonder about the partition operators: the statement always works on the same subset of partitions, based on record_timestamp.


SELECT <columns>
FROM   T
WHERE  S = :BIND0
AND    C = :BIND1
AND    record_timestamp > :BIND2
--     AND some more filters

INDEXES:

IX_1  S
IX_2  C, S
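
In DDL terms the two indexes look roughly like this - a sketch using the obfuscated names; the partitioning clauses are omitted, but the plans below show both are local indexes:

CREATE INDEX ix_1 ON t (s) LOCAL;
CREATE INDEX ix_2 ON t (c, s) LOCAL;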


Plan1:
Plan hash value: 1749230273
------------------------------------------------------------------------------------------------------
| Id  | Operation                                     | Name            | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                              |                 |   218 |   352K|   517   (1)|
|   1 |  SORT ORDER BY                                |                 |   218 |   352K|   517   (1)|
|*  2 |   FILTER                                      |                 |       |       |            |
|   3 |    PARTITION RANGE ITERATOR                   |                 |   218 |   352K|   516   (0)|
|   4 |     PARTITION LIST SINGLE                     |                 |   218 |   352K|   516   (0)|
|*  5 |      TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T               |   218 |   352K|   516   (0)|
|*  6 |       INDEX RANGE SCAN                        | IX_1            |   473 |       |    63   (0)|
------------------------------------------------------------------------------------------------------
consistent gets 27647

Plan2:
Plan hash value: 4278735161
------------------------------------------------------------------------------------------------
| Id  | Operation                                     | Name      | Rows  | Bytes | Cost (%CPU)|
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                              |           |   218 |   352K|   505   (1)|
|   1 |  SORT ORDER BY                                |           |   218 |   352K|   505   (1)|
|*  2 |   FILTER                                      |           |       |       |            |
|   3 |    PARTITION RANGE ITERATOR                   |           |   218 |   352K|   504   (0)|
|   4 |     PARTITION LIST SINGLE                     |           |   218 |   352K|   504   (0)|
|*  5 |      TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T         |   218 |   352K|   504   (0)|
|*  6 |       INDEX RANGE SCAN                        | IX_2      |   461 |       |    63   (0)|
------------------------------------------------------------------------------------------------
consistent gets 33807
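
For reference, one way to reproduce such numbers with standard means - a minimal sketch, assuming the statement was just run in the same session with representative binds:

ALTER SESSION SET statistics_level = ALL;

-- ... execute the SELECT from above ...

-- the "Buffers" column shows the consistent gets per plan line of the last execution
SELECT * FROM TABLE(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));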


How could the optimizer be wrong, even though all table and column statistics are correct?

It's simply the relation between columns C and S: there can be many S for a given C, but each S always belongs to exactly one C. Both indexes will therefore provide the same ROWIDs for any given (C, S) - but as IX_1 is smaller than IX_2 (one column instead of two), its range scans touch fewer blocks, so fewer gets are required.
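
This dependency is easy to verify in the data - a quick sketch with the obfuscated names; no rows returned means every S indeed belongs to exactly one C:

SELECT s, COUNT(DISTINCT c) AS distinct_c
FROM   t
GROUP  BY s
HAVING COUNT(DISTINCT c) > 1;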

But before we simply suggest hinting (or outlining, profiling, patching, ...) the statement to use IX_1, here is some more information about the ongoing discussion:
As the application loops through all Cs, and for every C through all its S, it might be more efficient to have all S per C grouped together (as in IX_2) in the buffer cache for the time a given C is of interest - and afterwards not to care about those blocks at all. IX_1 would hit the same blocks over and over, as the S values there are in no way grouped or ordered by C. I'm not sure we can ever build a reasonable test case, as it's hard to bring the environment into a state free of disturbing effects so that such cache effects could be measured.
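
For completeness: if we decided to force IX_1 after all, the simplest variant would be an index hint - a sketch reusing the obfuscated statement from above:

SELECT /*+ INDEX(T IX_1) */ <columns>
FROM   T
WHERE  S = :BIND0
AND    C = :BIND1
AND    record_timestamp > :BIND2
--     AND some more filters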

This shows how easily the optimizer can be misguided, and that even with reasonable knowledge about the data and the application, a decision for the "best" solution can be hard or even impossible.
