2008-12-31

Windows NTP client - allow large time offsets

During the Christmas holidays, I'm doing my own Christmas rally to visit a rather large family. During these visits I'm often asked to solve some small or great PC problems. Even though I'm no PC pro, I'm something like the one-eyed among the blind. And of course it makes more sense to solve problems on-site than remotely, using just the telephone.
One of the problems was an old laptop with a decrepit BIOS battery. This led to a totally insane system date at every startup. Even with a proper system time and NTP server setup, it never set the time correctly, as the time offset was too large.
After some searching within the registry, I changed the parameters MaxNegPhaseCorrection and MaxPosPhaseCorrection under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\Config to FFFFFFFF, which is the maximum value (approx. 136 years). Now at every connection, or at least at every manual time adjustment, the time is really set (and no boring error message is shown).
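For reference, the same change can be scripted with the stock reg.exe and w32tm tools; a minimal sketch (untested on that particular laptop; the 0xFFFFFFFF value is simply the maximum, and the service restart just makes sure the new limits are picked up):

reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection /t REG_DWORD /d 0xFFFFFFFF /f
net stop w32time
net start w32time
w32tm /resync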

2008-12-28

no row information in data dictionary

It all began with a simple question: how to get all ROWIDs from a table (without selecting from it)? I posted it on oracle-l. After some private forks of the conversation I got an email from Jared Still where he mentioned:
"If you constructed a rowid, and did a select on the row, it probably doesn't
matter if the row exists or not.

Oracle has to read the block from disk to read the row directory.
The block would get cached whether or not the row exists.

I haven't tested this, but I imagine you will.  :)"
Of course, he was right. My testcase was very simple:

  1. preparation
     create table jared as select * from dual;
  2. testcase
    ALTER SESSION SET events '10046 trace name context forever, level 12';
    
    Session altered.
    
    select * from jared where rowid='AAAPa5AABAAATHhAAB';
    
    no rows selected
    
    /
    
    no rows selected
    
    select * from jared where rowid='AAAPa5AABAAATHhAAA';
    
    D
    -
    X
    
    This created a tracefile.
  3. summary
    The first statement has one related wait, 'db file sequential read'; the second and the third have no such waits!
    I crosschecked the situation by querying the buffer cache:


    select tch from x$bh where obj=63161 and dbablk=78305 and dbarfil=1;
    and, as expected, the touch count just counted up from no rows selected (before the first run) to 1, 2, 3.
  4. conclusion
    Disclaimer: even though I hope it's well covered, I can never be 100% sure. A modern RDBMS has so many ways to do its job that I would have to ask the developers for proof (which I can't).

    • the buffer cache caches blocks, not rows (obvious?)
    • even if a row does not exist, its enclosing block is cached, if it is available.
    • the smallest granularity the RDBMS handles at I/O time is a block, and it even caches blocks if the row asked for does not exist.
    This brings me to my own answer to my initial question:
    There is no ROWID information for a given table stored in the data dictionary.
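As a footnote to the original question: if you really want to probe arbitrary rows, a ROWID can be constructed from its components with DBMS_ROWID. A minimal sketch, where the object, file and block numbers are the ones from my testcase and the row numbers are just examples:

select dbms_rowid.rowid_create(1, 63161, 1, 78305, 0) from dual;  -- 1 = extended rowid

select * from jared
where rowid = dbms_rowid.rowid_create(1, 63161, 1, 78305, 1);  -- reads (and caches) the block even if slot 1 is empty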

2008-12-24

Merry Christmas!

I wish everyone a merry Christmas!
May your life be so peaceful and content that every day feels like Christmas.

2008-12-18

Doing it better ... (Part 2)

Following up on Doing it better ... (Part 1): several days after the first approach, and after a lot of testcases (for other parts of the application), the performance of our so-well-tuned statement decreased dramatically.
What happened? The index on RESOURCE_POOL.FREED_TIME became somewhat unbalanced. Not the kind where you might argue 'B* trees cannot be unbalanced by definition', but more of the kind where, based on the code, some blocks often remain nearly unused, which makes the selectivity of the index appear less and less usable. In fact, the statement is not really nice to the CBO, as an AND ROWNUM=1 is hard to estimate.

This brings me to one of the highlights of this project: based on good arguments and a testcase we could argue for a slight change in the code and the logic. To understand the change, some words must be said about the logic: the predicate v_time>FREED_TIME was used to introduce a kind of 'cool-down period' during which a resource was not used but nevertheless NOT reused. This period was always planned to be several minutes (about 5 was the assumption for the project, but it could also be 2 or 15; it was agreed to NEVER be less than 1 or more than 20 minutes).

This led us to these changes:
We changed FREED_TIME to type NUMBER and changed the statement to:

UPDATE RESOURCE_POOL SET
STATUS='INUSE',
FREED_TIME=NULL,
SESSION_START_TIME=:B7
WHERE
FREED_TIME NOT IN
(select (to_char(sysdate, 'MI')-(rownum - 1)) - 60 * FLOOR((to_char(sysdate, 'MI')-(rownum - 1))/60) safe_minutes
from dual
connect by level <= :MINUTES_GRACE)
AND STATUS='FREE'
AND ADDRESS_POOL=:B1
AND ROWNUM=1
RETURNING IP_ADDR INTO :O0

and the statement which 'frees' the resource:


UPDATE RESOURCE_POOL SET
...
STATUS='FREE', FREED_TIME=to_char(sysdate, 'MI')
...
This leads to a logic where, every time the resource is freed, the current minute is stored in FREED_TIME. There are only 60 different values, which come around again and again (so the selectivity is quite stable).
The select ... from dual ... connect by ... is a simple way to have an easily changeable list of 'the last x minutes' which is aware of the wrap at every full hour; the more or less strange formula is a kind of MOD().
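To see what that subquery produces, it can be run standalone; a quick sketch with :MINUTES_GRACE replaced by a literal 5:

select (to_char(sysdate, 'MI')-(rownum - 1)) - 60 * FLOOR((to_char(sysdate, 'MI')-(rownum - 1))/60) safe_minutes
from dual
connect by level <= 5;

Run at 14:02, this returns 2, 1, 0, 59, 58. The FLOOR() term spells out MOD(current_minute - (rownum - 1), 60) by hand, because Oracle's MOD(-1, 60) would return -1 rather than the needed 59.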

2008-12-17

Doing it better ... (Part 1)

I never thought something like this could be worth a blog entry, but if Tom Kyte can do it, so can I! ;-)

I had the pleasure to assist in a little project as a DBA.
The pleasure was mine, as the target (in response time) was outstanding (in my current working circumstances) and the setup to achieve it was done in a more or less professional manner (which is much more outstanding ...): we had testcases, a kind of change management, real load tests, and totally cooperative designers/developers. Great!

The goal of the whole thing was to 'lock' and 'free' available resources out of a pool (according to some additional 'quality'). The example I'm writing about is the process 'provide the next free resource out of the given pool'. It should come back in about 10ms.

Here are the iterations we went through:

1) primary code:


SELECT resource into v_resource
FROM RESOURCE_POOL
WHERE v_time>FREED_TIME
AND STATUS='FREE'
AND POOL=v_pool
AND ROWNUM=1
FOR UPDATE;
UPDATE RESOURCE_POOL set
STATUS='INUSE',
FREED_TIME=null,
START_TIME=current_timestamp
WHERE RESOURCE=v_resource;

with this comment beside these statements:
-- FOR UPDATE statement ensures that the selected row is locked until the transition is completed --
-- open 2 different sqlplus commands, issue the command and watch the last command. it waits until update is completed (or commit) on the first session
-- FOR UPDATE NOWAIT would report an error if the row cant be locked immediately --
-- FOR UPDATE SKIP LOCKED pauses other db queries - this could be the perf bottleneck ...


BRRR! Even with an index on RESOURCE_POOL.RESOURCE it's not really optimal. And, the hard fact is, it's too slow.

2) after some iterations we came to:

UPDATE RESOURCE_POOL set
STATUS='INUSE',
FREED_TIME=null,
SESSION_START_TIME=current_timestamp
WHERE v_time>FREED_TIME
AND STATUS='FREE'
AND RESOURCE_POOL=v_pool
AND ROWNUM=1
RETURNING RESOURCE INTO v_resource;


As we are counting milliseconds, this WAS an advantage. And for me personally, it's more beautiful, even though that is not really a tuning goal.

Besides some tests on the physical layout (we came to the conclusion to use partitions by resource pool and local indexes on them), this saved us enough time that we could stop tuning.
Maybe I should also mention that all the data is small enough to fit into the buffer cache.
And for the record, it's 10.2.0.4 EE on HP-UX (Itanium), with ServiceGuard failover.
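For completeness, a minimal sketch of how the single-statement version might be embedded in PL/SQL. The SQL%ROWCOUNT check is my addition (UPDATE ... RETURNING simply updates nothing when no free resource matches), and I renamed the RESOURCE column to RESOURCE_NAME, since RESOURCE is an Oracle reserved word; the pool name and cool-down handling are pure placeholders:

DECLARE
  v_resource VARCHAR2(30);
  v_pool     VARCHAR2(30) := 'POOL_1';              -- hypothetical pool name
  v_time     DATE := sysdate - interval '5' minute; -- assumed cool-down of 5 minutes
BEGIN
  UPDATE RESOURCE_POOL SET
    STATUS='INUSE',
    FREED_TIME=null,
    SESSION_START_TIME=current_timestamp
  WHERE v_time>FREED_TIME
  AND STATUS='FREE'
  AND RESOURCE_POOL=v_pool
  AND ROWNUM=1
  RETURNING RESOURCE_NAME INTO v_resource;

  IF SQL%ROWCOUNT = 0 THEN
    raise_application_error(-20001, 'no free resource in pool ' || v_pool);
  END IF;
END;
/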

2008-12-16

Detuning soft parses

During our seminar Oracle 10g Performance Analysis with Doug Burns the question arose how to force the optimizer to do a hard parse all the time, even when prepared statements and bind variables are used. The question was about a 10.2.0.4 DWH environment.

For 11g and some backports I found DBMS_SHARED_POOL.PURGE, which just purges the cursor (of course, you have to know the cursor first; but that's no problem for someone who complains about bind peeking in 10g).
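A minimal sketch of the purge itself, assuming the DBMS_SHARED_POOL package is installed: address and hash_value identify the cursor and come from v$sqlarea, and the values below are of course just placeholders:

select address, hash_value from v$sqlarea where sql_id = 'your_sql_id';

begin
  dbms_shared_pool.purge('000000007A3C5E18,1234567890', 'C');  -- '<address>,<hash_value>', 'C' = cursor
end;
/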
And just for the record, there are some Notes:
Note:457309.1 - How To Flush an Object out the Library Cache [SGA]
Note:751876.1 - DBMS_SHARED_POOL.PURGE Is Not Working On 10.2.0.4
and other blogs also: Flushing a single cursor

But my initial idea was the classic art of de-tuning:
Sometimes we complain about situations where the statement is literally the same, but the optimizer creates several child cursors by hard-parsing. So let's use this behaviour this time: just change one of the many related optimizer environment settings.
10g Optimizer Environment Views will be a good start, but I assume (not tested!) even an NLS_SORT might help for a start.
Just grab one environment parameter which provides a wide range of values with little to no effect. At first bitmap_merge_area_size came to my mind, but I'm not sure it would have no side effects, at least in a DWH environment.
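A sketch of that idea (untested, and the table and bind are pure placeholders): toggling NLS_SORT between executions changes the session's cursor-sharing environment, so the literally identical statement should get a new child cursor and therefore a fresh hard parse:

alter session set nls_sort = binary;
select /* detune_me */ count(*) from sales where cust_id = :b1;

alter session set nls_sort = german;
select /* detune_me */ count(*) from sales where cust_id = :b1;  -- new child cursor => hard parse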

At the end, the common disclaimer: I have not tested any of the methods described here, so be warned ;-)

Oracle 10g Performance Analysis with Doug Burns in Vienna

Doug Burns held a 2-day class in Vienna. The topic was "Oracle 10g Performance Analysis with Doug Burns".
I had the pleasure to attend this class and to see (and of course listen to) Doug.
The course had 3 major parts (from my point of view):
  1. General introduction to tuning, like:
    • It's all about time (not costs, not ratios, not beauty, ...)
    • the customer is the only one who can tell what's good (and must do so before tuning begins)
    • beware of Compulsive Tuning Disorder
  2. the history of the tools:
    • bstat/estat
    • wait interface
    • statspack
  3. current tools (10g/11g) and new features (11g) - (or why it's all the same, just better)
    • ASH
    • AWR
    • ADDM
    • improvements in 11g

That's all I remember (at a high level) - so if I missed anything substantial, sorry Doug, I must have drifted off (which I attribute to my little daughter's growing teeth much more than to Doug).

It was the first time I saw Doug, and it was great. The class was full of good examples (many of them known if you follow his blog or others, but that's good!), yet he never lost the thread through all his slides.
As he started with some history, he picked me up in a well-known and familiar area and guided me to the brave new world with its strengths and pitfalls.

During all the coffee breaks (and there were enough, even for the smokers) quite refreshing debates arose. Very pleasant.

At the end, Doug finished just before time, but I didn't have the feeling he had missed anything during the past 2 days. So I was happy to use the time for some more general, not-so-technical chat.

All in all, these 2 days were the perfect recreation from the ordinary labour of a DBA for a geek like me!