Tuesday, October 7, 2014

enhanced SQL trace

As a huge fan of Oracle's SQL trace, I also see its limitations. One of these limitations lies in its underlying infrastructure: the wait interface. It's only about waits.
There is another source of information as well; you could call it the DO interface, but the common term is session and system statistics. There, Oracle counts what it is doing.
Unfortunately I do not know a simple way to show both realities in a common view.

At least for consistent gets, I can now show (most of) them together with wait events.

The magic is done by event 10200 - Dump Consistent Reads.

I create a trace file with both the 10046 and the 10200 event:


SET AUTOTRACE ON STATISTICS

ALTER SESSION SET EVENTS '10200 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';

select * from berx.t1;

Statistics
----------------------------------------------------------
         80  recursive calls
          0  db block gets
        118  consistent gets
          0  physical reads
          0  redo size
        654  bytes sent via SQL*Net to client
        524  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         15  sorts (memory)
          0  sorts (disk)
          3  rows processed

Of course the additional event produces additional lines which will not be processed by the common SQL trace parsing tools. I will not go into detail about those additional lines, especially as I don't know too much about them. If you are interested, I'd recommend Charles Hooper's blog.

I just grab lines which contain the string completed for block and rework their values so they look similar to something the SQL trace parsers know. This is a very inexperienced piece of code!

#!/usr/bin/perl -w
#10200_preproc.pl
use strict;

my $cursor       = 0 ;
my $timestamp    = 0 ;
my $is_processed = 0 ;
my $tablespace;
my $block;
my $object_id;

while (<>) {
  my $line = $_;
  if ($line =~ m/.*#(\d+)[: ].*/) {
    ($cursor = $line) =~ s/.*#(\d+)[: ].*/$1/ ;
  }
  $cursor =~ tr/\n//d;

  if ($line =~ m/.*tim=(\d+).*/) {
    ($timestamp = $line) =~ s/.*tim=(\d+).*/$1/ ;
  }
  $timestamp =~ tr/\n//d;

  if ($line =~ m/completed for block/) {
    ($tablespace = $line) =~ s/.*\<([abcdefx\d]+) : .*/$1/ ;
    ($block      = $line) =~ s/.* : ([abcdefx\d]+).*/$1/;
    ($object_id  = $line) =~ s/.*objd: ([abcdefx\d]+).*/$1/;
    $tablespace =~ tr/\n//d;
    $block      =~ tr/\n//d;
    $object_id  =~ tr/\n//d;
    $tablespace = hex($tablespace) +1;
    $block      = hex($block);
    $object_id  = hex($object_id);
# WAIT #391723720: nam='db file scattered read' ela= 2876 file#=5 block#=129 blocks=127 obj#=71123 tim=90982740602
    print "WAIT #"; # as the hash is something special ...
    print "$cursor: nam='LIO - completed for block' ela=0 tablespace=$tablespace block#=$block obj=$object_id obj#=$object_id tim=$timestamp\n";
    $is_processed = 1;

  }
  if ($is_processed == 0) {
    print $line;
  }
  else {
    $is_processed = 0;
  }

}

This code should really be seen as an example, not a solution!
Out of a line like
ktrgcm(): completed for block  <0x0000 : 0x0041560c> objd: 0x0000003e
it creates
WAIT #47572859450312: nam='LIO - completed for block' ela=0 tablespace=1 block#=4281868 obj=62 obj#=62 tim=47572859450312

Probably the only magic is the cursor and timing info, which I just copy from the last SQL trace line above. I would also prefer not to use the keyword WAIT, as the process is certainly not waiting in this case, but I still have to trick the tools.
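For readers who prefer not to decode the Perl, the rework of a single "completed for block" line can be sketched in Python. This is an illustrative re-implementation, not the script itself; cursor and tim are assumed to have been carried over from the preceding SQL trace lines:

```python
import re

def rework_10200_line(line, cursor, tim):
    """Convert a 10200 'completed for block' dump line into a pseudo-WAIT line.
    Illustrative sketch of the Perl preprocessor above."""
    m = re.search(r'<(0x[0-9a-f]+) : (0x[0-9a-f]+)> objd: (0x[0-9a-f]+)', line)
    if not m:
        return None
    tablespace = int(m.group(1), 16) + 1   # +1, mirroring the Perl script
    block      = int(m.group(2), 16)       # hex block address -> decimal
    objd       = int(m.group(3), 16)       # hex object id -> decimal
    return (f"WAIT #{cursor}: nam='LIO - completed for block' ela=0 "
            f"tablespace={tablespace} block#={block} obj={objd} obj#={objd} tim={tim}")

line = 'ktrgcm(): completed for block  <0x0000 : 0x0041560c> objd: 0x0000003e'
print(rework_10200_line(line, 47572859450312, 47572859450312))
```

This reproduces the example transformation shown below, with all hex values converted to the decimal form the trace parsers expect.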

With the modified trace file from the statement above I create this report:

CALL-NAME                    DURATION       %  CALLS      MEAN       MIN       MAX
---------------------------  --------  ------  -----  --------  --------  --------
SQL*Net message from client  0.671976   95.8%      3  0.223992  0.000165  0.671529
PARSE                        0.025995    3.7%      1  0.025995  0.025995  0.025995
row cache lock               0.001640    0.2%     12  0.000137  0.000079  0.000497
FETCH                        0.001000    0.1%      2  0.000500  0.000000  0.001000
library cache pin            0.000471    0.1%      2  0.000236  0.000214  0.000257
library cache lock           0.000443    0.1%      2  0.000222  0.000211  0.000232
Disk file operations I/O     0.000047    0.0%      1  0.000047  0.000047  0.000047
SQL*Net message to client    0.000002    0.0%      3  0.000001  0.000000  0.000001
LIO - completed for block    0.000000    0.0%    117  0.000000  0.000000  0.000000
CLOSE                        0.000000    0.0%      3  0.000000  0.000000  0.000000
EXEC                         0.000000    0.0%      1  0.000000  0.000000  0.000000
XCTEND                       0.000000    0.0%      1  0.000000  0.000000  0.000000
---------------------------  --------  ------  -----  --------  --------  --------
TOTAL (12)                   0.701574  100.0%    148  0.004740  0.000000  0.671529

You see 117 LIO - completed for block events here, which is quite close to the 118 consistent gets.

There are probably a lot more things to do; still, it helps me to show what a statement is doing (in terms of LIO) besides the time it spends waiting.

Monday, December 16, 2013

Instrumentation still needed

Recently I received an email with simple content:
"It" is slow again :-)
The email referred to a package that was slow.

Ok, I started with my "not enough information" reply and an email conversation began.

We have still not identified the problem, but as there is no code instrumentation in place, I tried to find out what's possible with v$active_session_history (ASH).
To do so, I created 2 packages: P1 and P2. P1 has a procedure DO_IT and a function SLEEP, which burns CPU for approximately x seconds. P2 only has a function SLEEP, which also burns CPU for approximately x seconds and then calls P1.SLEEP again. P1.DO_IT runs a simple SQL statement which calls both P1.SLEEP and P2.SLEEP. My code is properly instrumented using DBMS_APPLICATION_INFO.set_module in the code and dbms_session.set_identifier to mark one specific execution, so I can compare its value to the information already available in v$active_session_history.
Here are the packages:
CREATE OR REPLACE
PACKAGE P1
IS
  PROCEDURE DO_IT(SEC IN NUMBER);
  FUNCTION SLEEP(SEC IN NUMBER)
    RETURN NUMBER;
END; 
/

CREATE OR REPLACE
PACKAGE P2
IS
  FUNCTION SLEEP(SEC IN NUMBER)
    RETURN NUMBER;
END; 
/
 
CREATE OR REPLACE PACKAGE BODY P1
IS
  PROCEDURE DO_IT(SEC IN NUMBER)
  IS
    r NUMBER;
  BEGIN
    DBMS_APPLICATION_INFO.set_module(module_name => 'P1', 
                                     action_name => 'DO_IT');
    dbms_output.put_line('++P1.DO_IT++');
    SELECT P2.SLEEP(SEC) into r 
    FROM DUAL
    WHERE P1.SLEEP(SEC) > 1;
    dbms_output.put_line('--P1.DO_IT--');
    DBMS_APPLICATION_INFO.set_module(module_name => 'blaaa',
                                     action_name => 'blaaa');
  END DO_IT;
  
  FUNCTION SLEEP(SEC IN NUMBER)
    RETURN NUMBER
  IS
    C NUMBER := 0;
    now DATE := SYSDATE;
  BEGIN
    DBMS_APPLICATION_INFO.set_module(module_name => 'P1',
                                     action_name => 'SLEEP');
    dbms_output.put_line('++P1.SLEEP++');
    LOOP
      C := C + 1;
      EXIT WHEN now + (SEC * (1/86400)) <= SYSDATE;
    END LOOP;
    
    dbms_output.put_line('--P1.SLEEP--');
    DBMS_APPLICATION_INFO.set_module(module_name => 'blaaa',
                                    action_name => 'blaaa');
    RETURN C;
    END SLEEP;
END;
/

CREATE OR REPLACE
PACKAGE BODY P2
IS
  FUNCTION SLEEP(SEC IN NUMBER)
    RETURN NUMBER
  IS 
    C NUMBER := 0; 
    D NUMBER := 0; 
    now DATE := SYSDATE;
  BEGIN
    DBMS_APPLICATION_INFO.set_module(module_name => 'P2',
                                    action_name => 'SLEEP');
        dbms_output.put_line('++P2.SLEEP++');                                
    LOOP
      C := C + 1;
      EXIT WHEN now + (SEC * (1/86400)) <= SYSDATE;
    END LOOP;  
    SELECT P1.SLEEP(SEC) into D
    FROM DUAL;
    C := C + D; 
    dbms_output.put_line('--P2.SLEEP--');    
    DBMS_APPLICATION_INFO.set_module(module_name => 'blaaa',
                                     action_name => 'blaaa');
    RETURN C;
  END SLEEP;
END;
/
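The CPU-burning loop inside both SLEEP functions can be sketched in Python. This is only an illustrative equivalent of the technique, not the original PL/SQL (which uses SYSDATE and therefore has one-second granularity):

```python
import time

def busy_sleep(sec):
    """Burn CPU for roughly `sec` seconds by counting in a tight loop,
    like the SLEEP functions above: the session stays on CPU instead of idling."""
    c = 0
    end = time.time() + sec
    while time.time() < end:
        c += 1          # the counting is the "work" that keeps the CPU busy
    return c

c = busy_sleep(0.2)     # c is some large, machine-dependent count
```

The point of burning CPU rather than calling a real sleep is that the session shows up as ON CPU in ASH samples, which is exactly what this testcase wants to observe.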

I started the testcase with a simple
set serveroutput on
exec dbms_session.set_identifier(client_id => 'berx testcase ' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') );
exec p1.do_it(5);
exec dbms_session.set_identifier(client_id => 'berx testcase ' || to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS') );
exec p1.do_it(5);
exec dbms_session.set_identifier(client_id => 'blaaa');

And immediately afterwards I checked for sql_id, top_level_sql_id, action, module, client_id, plsql_entry_object_id, plsql_entry_subprogram_id, plsql_object_id and plsql_subprogram_id:

select SESSION_ID,
       sql_id,
       ( SELECT max( substr( sql_text , 1, 40 )) FROM v$sql sq WHERE sq.sql_id = ash.sql_id ) AS sql_text,
       TOP_LEVEL_SQL_ID,
       ( SELECT max( substr( sql_text , 1, 40 )) FROM v$sql sq WHERE sq.sql_id = ash.top_level_sql_id ) AS top_level_sql_text,
       action,
       module,
       client_id
,      ( SELECT object_name    FROM dba_procedures WHERE object_id = plsql_entry_object_id AND subprogram_id = 0) AS plsql_entry_object
,      ( SELECT procedure_name FROM dba_procedures WHERE object_id = plsql_entry_object_id AND subprogram_id = plsql_entry_subprogram_id) AS plsql_entry_subprogram
,      ( SELECT object_name    FROM dba_procedures WHERE object_id = plsql_object_id       AND subprogram_id = 0) AS plsql_object
,      ( SELECT procedure_name FROM dba_procedures WHERE object_id = plsql_object_id       AND subprogram_id = PLSQL_SUBPROGRAM_ID) AS plsql_subprogram 
from v$active_session_history ash 
where SAMPLE_TIME > sysdate - 1/(24*60)
      and session_id = SYS_CONTEXT('USERENV','SID')
order by SAMPLE_TIME asc;

Here is the result (please excuse the wide list).

ST                  SESSION_ID  SQL_ID        SQL_TEXT                                 TOP_LEVEL_SQL_ID TOP_LEVEL_SQL_TEXT                       ACTION     MODULE     CLIENT_ID                           PLSQL_ENTRY_OBJECT             PLSQL_ENTRY_SUBPROGRAM         PLSQL_OBJECT    PLSQL_SUBPROGRAM
------------------- ----------  ------------- ---------------------------------------- ---------------- ---------------------------------------- ---------- ---------- ----------------------------------- ------------------------------ ------------------------------ --------------- ----------------
2013-12-14 22:24:00        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:01        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:02        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:03        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:04        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:05        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:06        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:07        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:08        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:09        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:10        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:11        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:12        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:13        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:14        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:00   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:15        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:16        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:17        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:18        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:19        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:20        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:21        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:22        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          STANDARD        SYSDATE          
2013-12-14 22:24:23        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:24        160  cqd77zbnpa79n SELECT P2.SLEEP(:B1 ) FROM DUAL WHERE P1 1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P2         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P2              SLEEP            
2013-12-14 22:24:25        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:26        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:27        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:28        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            
2013-12-14 22:24:29        160  cc9hp65m5aq6s SELECT P1.SLEEP(:B1 ) FROM DUAL          1ahw2g5n38js5    BEGIN p1.do_it(5); END;                  SLEEP      P1         berx testcase 2013-12-14 22:24:15   P1                             DO_IT                          P1              SLEEP            

 30 rows selected 

So my findings here:

  • TOP_LEVEL_SQL_ID is the same all the time. That's good, as I have a valid anchor on the topmost layer. 
  • There is nothing similar to SQL_EXEC_ID for the TOP_LEVEL_SQL_ID, so I cannot distinguish the 2 runs of P1.DO_IT.
  • PLSQL_OBJECT.PLSQL_SUBPROGRAM does not always reflect the current application module, as internal code (such as SYSDATE) also shows up there. ACTION and MODULE are more reliable. 
  • With ASH alone I don't see any general way to identify the runtime of sub-TOP_LEVEL SQLs or PLSQL_SUBPROGRAMs.
  • The most complete picture from ASH is only available with proper instrumentation.
  • If you need full insight into the stack of SQL and PL/SQL calls, you need to trace the session. This again works like a charm with ACTION and MODULE set and DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE.

Sunday, November 24, 2013

No Adaptive Cursor Sharing for collections

Recently I was hunting a performance problem at work.

A developer read about adaptive cursor sharing and therefore assumed the optimizer would know about the number of rows when he passes a collection to a table(:bind) function. I can totally understand this, as there were no limitations mentioned in the statement (except the 14 binds):
The adaptive cursor sharing feature enables a single statement that contains bind variables to use multiple execution plans. Cursor sharing is "adaptive" because the cursor adapts its behavior so that the database does not always use the same plan for each execution or bind variable value.
For appropriate queries, the database monitors data accessed over time for different bind values, ensuring the optimal choice of cursor for a specific bind value. For example, the optimizer might choose one plan for bind value 9 and a different plan for bind value 10. Cursor sharing is "adaptive" because the cursor adapts its behavior so that the same plan is not always used for each execution or bind variable value.
Adaptive cursor sharing is enabled for the database by default and cannot be disabled. Note that adaptive cursor sharing does not apply to SQL statements containing more than 14 bind variables.
There is no limitation mentioned about where the bind peeking is done (and where not). Based on my findings I'd limit it to "only in filtration". But I accept any better wording.

Now let's go down to the testcase (all tested in 11.2.0.3):
To prepare a nice environment I did:

DROP TYPE T_COLLECTION_TEST;

DROP TYPE O_COLLECTION_TEST;

DROP PACKAGE COLLECTION_TEST;

CREATE TYPE O_COLLECTION_TEST
AS
  OBJECT
  (
    RN             NUMBER,
    OWNER          VARCHAR2(30),
    OBJECT_NAME    VARCHAR2(30),
    SUBOBJECT_NAME VARCHAR2(30),
    OBJECT_ID      NUMBER,
    DATA_OBJECT_ID NUMBER,
    OBJECT_TYPE    VARCHAR2(19),
    CREATED        DATE,
    LAST_DDL_TIME  DATE,
    TIMESTAMP      VARCHAR2(19),
    STATUS         VARCHAR2(7),
    TEMPORARY      VARCHAR2(1),
    GENERATED      VARCHAR2(1),
    SECONDARY      VARCHAR2(1),
    NAMESPACE      NUMBER,
    EDITION_NAME   VARCHAR2(30));
  /
CREATE TYPE T_COLLECTION_TEST
IS
  TABLE OF O_COLLECTION_TEST;
  /

DROP TABLE AOT;
CREATE TABLE AOT AS
      select rownum rn, v.*
      from (SELECT * FROM ALL_OBJECTS
            UNION ALL
            SELECT * FROM ALL_OBJECTS
            UNION ALL
            SELECT * FROM ALL_OBJECTS
            UNION ALL
            SELECT * FROM ALL_OBJECTS
            UNION ALL
            SELECT * FROM ALL_OBJECTS) v;

create unique index iAOT on aot(rn);

exec dbms_stats.gather_table_stats(null,'AOT');



CREATE OR REPLACE
PACKAGE COLLECTION_TEST
IS
  G_COLLECTION_SIZE NUMBER := 0;
  R_CT T_COLLECTION_TEST := T_COLLECTION_TEST();
  FUNCTION RUN(
      P_COLLECTION_SIZE IN NUMBER)
    RETURN T_COLLECTION_TEST;
END; 
/

CREATE OR REPLACE
PACKAGE BODY COLLECTION_TEST
IS
FUNCTION RUN(
    P_COLLECTION_SIZE IN NUMBER)
  RETURN T_COLLECTION_TEST
IS
BEGIN
  IF G_COLLECTION_SIZE <> P_COLLECTION_SIZE THEN
    G_COLLECTION_SIZE  := P_COLLECTION_SIZE;
    SELECT O_COLLECTION_TEST(RN, OWNER , OBJECT_NAME , SUBOBJECT_NAME , OBJECT_ID , DATA_OBJECT_ID , OBJECT_TYPE , CREATED , LAST_DDL_TIME , TIMESTAMP , STATUS , TEMPORARY , GENERATED , SECONDARY , NAMESPACE , EDITION_NAME ) BULK COLLECT
    INTO R_CT
    FROM AOT WHERE rn <= G_COLLECTION_SIZE;
  END IF;
  RETURN R_CT;
END;
END;
/


That's enough for a small testcase. In fact I merged 2 tests into one run:
first: does the optimizer care about a table(:bind) function?
second: does the cardinality hint help?

So the actual test run is:

DECLARE
  n NUMBER := 1;
  i NUMBER := 0;
  c T_COLLECTION_TEST;
TYPE tab_ao_tab
IS
  TABLE OF VARCHAR2(100);
  ao_tab tab_ao_tab;
  idx NUMBER;
  x   NUMBER := 0;
BEGIN
  EXECUTE immediate 'ALTER SYSTEM FLUSH SHARED_POOL';
  DBMS_MONITOR.session_trace_enable(waits=>TRUE, binds=>TRUE);
  FOR i IN 1..6
  LOOP
    dbms_output.put_line(i);
    n := power(10,i);
    c := collection_test.run(n);
    dbms_output.put_line(n);
    DBMS_APPLICATION_INFO.set_client_info(client_info => 'Number: '|| n);
    dbms_output.put_line('test1 ');
    EXECUTE immediate 'select /*+ gather_plan_statistics dynamic_sampling(t 10) */                      
t.owner || ''.'' || ao.object_name                     
from table(:c) t,   AOT ao                      
where t.rn = ao.rn' bulk collect INTO ao_tab USING c;
    FOR indx IN ao_tab.FIRST..ao_tab.LAST
    LOOP
      x := x + 1;
    END LOOP;
    dbms_output.put_line('x: ' || x);
    dbms_output.put_line('test2 ');
    dbms_output.put_line('cardinality1 ');
    EXECUTE immediate 'select /*+ gather_plan_statistics cardinality(t ' || n || ') */                      
t.owner || ''.'' || ao.object_name                     
from table(:c) t,   AOT ao                      
where t.rn = ao.rn' bulk collect INTO ao_tab USING c;
    FOR indx IN ao_tab.FIRST..ao_tab.LAST
    LOOP
      x := x + 1;
    END LOOP;
    dbms_output.put_line('x: ' || x);
  END LOOP;
  DBMS_MONITOR.session_trace_disable;
END;
/

If you want to try this at home, there is no need to create a SQL trace file; I'm just so used to it.
Now my big question is how many cursors this testcase creates and what their execution plans are.

select sql_id, child_number, executions, plan_hash_value, substr(sql_text,1,56) text
from v$sql 
where lower(sql_text) like 'select /*+ gather_plan_statistics%'
order by executions desc, plan_hash_value asc, text asc;

With a result (on my system):

SQL_ID         CN  EX PLAN_HASH_VALUE TEXT                                                   
------------- --- --- --------------- --------------------------------------------------------
9741g8bfng454   0   6      1690681298 select /*+ gather_plan_statistics dynamic_sampling(t 10) 
dr4yxf89n0g2k   0   1      1690681298 select /*+ gather_plan_statistics cardinality(t 10) */   
14yffxq80k21d   0   1      1690681298 select /*+ gather_plan_statistics cardinality(t 100) */  
52vt0cr9ghnh4   0   1      1690681298 select /*+ gather_plan_statistics cardinality(t 1000) */ 
2d6328126t9gz   0   1      3714863083 select /*+ gather_plan_statistics cardinality(t 10000) * 
du8ds9qupzzcn   0   1      3714863083 select /*+ gather_plan_statistics cardinality(t 100000)  
9t4zm8nkpr8dj   0   1      3714863083 select /*+ gather_plan_statistics cardinality(t 1000000)

The statements with the cardinality hint show a proper flip in the execution plan between 1000 and 10000 rows.
Unfortunately the statement with dynamic_sampling does not follow this pattern. The first execution plan sticks:
------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |      |      1 |        |     10 |00:00:00.01 |      19 |
|   1 |  NESTED LOOPS                       |      |      1 |        |     10 |00:00:00.01 |      19 |
|   2 |   NESTED LOOPS                      |      |      1 |     10 |     10 |00:00:00.01 |       9 |
|   3 |    COLLECTION ITERATOR PICKLER FETCH|      |      1 |     10 |     10 |00:00:00.01 |       0 |
|*  4 |    INDEX UNIQUE SCAN                | IAOT |     10 |      1 |     10 |00:00:00.01 |       9 |
|   5 |   TABLE ACCESS BY INDEX ROWID       | AOT  |     10 |      1 |     10 |00:00:00.01 |      10 |
------------------------------------------------------------------------------------------------------
All the following executions obey this execution plan, without thinking or checking.
But the plan can change: doing the same testcase with the loop counting down from 6 to 1, the plan is now stable at


------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |      |      1 |        |    340K|00:00:01.88 |    5101 |   6810 |   6810 |       |       |          |         |
|*  1 |  HASH JOIN                         |      |      1 |    340K|    340K|00:00:01.88 |    5101 |   6810 |   6810 |    47M|  4032K| 3769K (1)|   58368 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH|      |      1 |    340K|    340K|00:00:00.13 |       0 |      0 |      0 |       |       |          |         |
|   3 |   TABLE ACCESS FULL                | AOT  |      1 |    340K|    340K|00:00:00.09 |    5101 |      0 |      0 |       |       |          |         |
------------------------------------------------------------------------------------------------------------------------------------------------------------

I hope you can see: even in 11.2.0.3 it's like rain on a Monday morning in Washington DC. Dynamic Sampling jumps in, but it's not adaptive.
What's the conclusion here? 
If you use a collection in a table function, and the expected number of rows in that collection varies by dimensions, take care! My solution is a cardinality hint, but that might be complicated if you are using a framework.
I don't know any 'cheap' solution. 
Maybe someone wants to try this in 12c. I have not focused on it yet. It's still 2013 :-) 

Freitag, 15. Februar 2013

Resetting Your Oracle User Password with SQL Developer on OSX

This post started as a shameless copy of Jeff Smith's Resetting Your Oracle User Password with SQL Developer - but it ended in a trilogy about OSX, Oracle and SQLDeveloper.
In my first two posts I made myself comfortable with Oracle instant client on my MacBook. Now I want to use it for something more serious, like changing passwords.

Even with all the modifications from the previous post, SQLDeveloper still does not show the Reset Password... option. The reason: it simply does not know about the library correctly.

First, the PATH as shown in the previous post is not set in an application started via the launcher - it does not contain the ORACLE_HOME at all. But even after I forced that with a temporary hack it did not help; so I can only say: with OSX it's more complicated to make SQLDeveloper use the proper libraries.

As the PATH is not that important on OSX, the real issue is DYLD_LIBRARY_PATH. Regardless of whether it's set in /etc/profile or ~/.profile - it never reaches an application started via the launcher, probably for reasons similar to the PATH issue shown above.

But there is hope: ORACLE_HOME as set in /etc/launchd.conf does show up in the environment of SQLDeveloper. In the end, that's all we need; now let's check where to put this little flame of hope so it can grow into a bonfire.

From now on I'm talking about SQLDeveloper 3.2.20.09! Future versions might behave differently!

The script to check is
/Applications/SQLDeveloper.app/Contents/Resources/sqldeveloper/ide/bin/launcher.sh. There is a function called CheckLibraryPath() which is supposed to check for a proper LD_LIBRARY_PATH (or its equivalents) and, if it is not set, create a proper setting. Oracle just forgot to check for OSX! I write forgot on purpose, as HP-UX is checked with
if [ `uname -s` = 'HP-UX' ] and SHLIB_PATH is set instead of LD_LIBRARY_PATH.

So my solution is simple: I just enhance this function with this little patch:
524a525,541
>  elif [ `uname -s` = 'Darwin' ]
>  then
>   echo "OSX!"
>   if [ "X$DYLD_LIBRARY_PATH" = "X" ]
>   then
>    DYLD_LIBRARY_PATH=$ORACLE_HOME
>    # only instant client exists on OSX
>   else
>    echo $DYLD_LIBRARY_PATH | egrep -e "(^|\:)$ORACLE_HOME($|\:)" > /dev/null
>    if [ $? != 0 ]
>    then
>     DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$ORACLE_HOME
>    fi
> 
>   fi
>   export DYLD_LIBRARY_PATH
>   #echo "$DYLD_LIBRARY_PATH"
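The core of the patch is the path-membership test. As a standalone sketch (with hypothetical paths; /usr/lib stands in for a pre-existing DYLD_LIBRARY_PATH), the idea looks like this:

```shell
#!/bin/sh
# Sketch of the membership test used in the patch: append ORACLE_HOME to
# DYLD_LIBRARY_PATH only if it is not already an element of that list.
# Paths are hypothetical examples.
ORACLE_HOME=/Users/berx/instantclient_11_2
DYLD_LIBRARY_PATH=/usr/lib

echo "$DYLD_LIBRARY_PATH" | egrep -e "(^|:)$ORACLE_HOME($|:)" > /dev/null
if [ $? != 0 ]
then
  DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:$ORACLE_HOME
fi
echo "$DYLD_LIBRARY_PATH"
```

Running it a second time would leave the variable unchanged, because the egrep anchors match ORACLE_HOME only as a complete colon-separated element.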


With this little patch I can now change the password of my users with SQLDeveloper on OSX!

setting environment variables for sqlplus on OSX

In my previous post I showed how to create a universal binary for the Oracle instant client so I don't need to care about 32/64 bit anymore.

Now I try to show how to set some environment variables so I only need to enter sqlplus and can start my work without preparing anything first.

I know I could do this in my local .profile, but I was looking for a more centralized way to provide them, maybe on a shared desktop where basic applications are installed just once.

This is all about OSX Mountain Lion - I'm working on 10.8.2 right now.

The variables I want to set/enhance are

  • PATH
    That is quite simple: I just added (as root) a new line in /etc/paths:
    /Users/berx/instantclient_11_2

  • ORACLE_HOME
    That was a little bit more tricky, as I had to create the file first (as root). /etc/launchd.conf
    now contains one line:
    setenv ORACLE_HOME /Users/berx/instantclient_11_2

  • DYLD_LIBRARY_PATH
    If you are curious what this is used for: it's the equivalent of Linux's LD_LIBRARY_PATH. It is needed for all applications which are linked dynamically - even sqlplus is linked that way.
    Unfortunately it cannot be set in /etc/launchd.conf. To be more precise, it CAN be set, but it will not show up in the terminal.
    Instead, this line appears at the top of each terminal window directly after it opens:
    dyld: DYLD_ environment variables being ignored because main executable (/usr/bin/login) is setuid or setgid
    That's due to a security setting in Mountain Lion.
    I decided to use the profile for this work. In my case ~/.profile, but /etc/profile also works:
    # ORACLE_HOME is set in /etc/launchd.conf !!
    export DYLD_LIBRARY_PATH=$ORACLE_HOME:$DYLD_LIBRARY_PATH
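The three settings above can be sketched as a quick sanity check. This is a minimal sketch: the assignments stand in for what /etc/paths, /etc/launchd.conf and ~/.profile would provide, using the instant client location from this post.

```shell
#!/bin/sh
# Sanity check for the three settings above. In a real terminal session the
# assignments would already have been made by /etc/paths, /etc/launchd.conf
# and ~/.profile; here they are repeated so the sketch is self-contained.
ORACLE_HOME=/Users/berx/instantclient_11_2      # from /etc/launchd.conf
PATH=$PATH:$ORACLE_HOME                         # from /etc/paths
DYLD_LIBRARY_PATH=$ORACLE_HOME                  # from ~/.profile

for v in ORACLE_HOME PATH DYLD_LIBRARY_PATH
do
  eval val=\"\$$v\"
  # check that ORACLE_HOME appears as a complete colon-separated element
  case ":$val:" in
    *:"$ORACLE_HOME":*) echo "$v OK" ;;
    *)                  echo "$v is missing $ORACLE_HOME" ;;
  esac
done
```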

With these three little steps I can now start sqlplus directly from my terminal without caring about the environment anymore.

There is just one little drawback: all commands with the setuid/setgid bit, like ps or sudo, now throw a warning like this:
dyld: DYLD_ environment variables being ignored because main executable (/bin/ps) is setuid or setgid
But for me this is acceptable.

The only workaround I can imagine is some kind of wrapper for sqlplus.
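Such a wrapper could look roughly like this. It is a hypothetical sketch: env scopes DYLD_LIBRARY_PATH to the one child process it starts, so setuid binaries launched elsewhere never inherit it. For illustration the child just prints the variable instead of starting the real sqlplus binary.

```shell
#!/bin/sh
# Hypothetical sqlplus wrapper sketch: DYLD_LIBRARY_PATH is set only for the
# single child process started by 'env', so setuid tools like ps or sudo
# started from other shells stay silent.
ORACLE_HOME=/Users/berx/instantclient_11_2
# In a real wrapper the command after 'env' would be: "$ORACLE_HOME/sqlplus" "$@"
env DYLD_LIBRARY_PATH="$ORACLE_HOME" sh -c 'echo "child sees: $DYLD_LIBRARY_PATH"'
```

Saved as ~/bin/sqlplus (earlier in the PATH than the instant client directory), this would keep the warning-free environment for everything else.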

Oracle client universal binaries for OSX

Recently Oracle announced a new Oracle Instant Client for OSX. As always, there is a 32-bit and a 64-bit version available.
I was curious whether they could be merged together into a universal binary. The short answer: yes, they can.

It all starts with downloading the instant client zips for the 32- and 64-bit versions. I downloaded basic and sqlplus and unzipped them into two directories, instantclient_11_2_32 and instantclient_11_2_64. I also created a target directory instantclient_11_2.

Then this simple script
for i in `ls instantclient_11_2_32`
do
  # merge each pair of files; non-binary files (READMEs, jars) make lipo fail,
  # so the 64-bit copy is taken instead
  lipo -create ~/instantclient_11_2_{32,64}/$i -output ~/instantclient_11_2/$i || cp ~/instantclient_11_2_64/$i ~/instantclient_11_2/$i
done


did the work with this output:
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/BASIC_README
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/SQLPLUS_README
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/glogin.sql
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/ojdbc5.jar
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/ojdbc6.jar
lipo: can't figure out the architecture type of: /Users/berx/instantclient_11_2_32/xstreams.jar


Here is what a simple file * shows:

BASIC_README:         ASCII text
SQLPLUS_README:       ASCII text
adrci:                Mach-O universal binary with 2 architectures
adrci (for architecture i386): Mach-O executable i386
adrci (for architecture x86_64): Mach-O 64-bit executable x86_64
genezi:               Mach-O universal binary with 2 architectures
genezi (for architecture i386): Mach-O executable i386
genezi (for architecture x86_64): Mach-O 64-bit executable x86_64
glogin.sql:           ASCII English text
libclntsh.dylib.11.1: Mach-O universal binary with 2 architectures
libclntsh.dylib.11.1 (for architecture i386): Mach-O dynamically linked shared library i386
libclntsh.dylib.11.1 (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
libnnz11.dylib:       Mach-O universal binary with 2 architectures
libnnz11.dylib (for architecture i386): Mach-O dynamically linked shared library i386
libnnz11.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
libocci.dylib.11.1:   Mach-O universal binary with 2 architectures
libocci.dylib.11.1 (for architecture i386): Mach-O dynamically linked shared library i386
libocci.dylib.11.1 (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
libociei.dylib:       Mach-O universal binary with 2 architectures
libociei.dylib (for architecture i386): Mach-O bundle i386
libociei.dylib (for architecture x86_64): Mach-O 64-bit bundle x86_64
libocijdbc11.dylib:   Mach-O universal binary with 2 architectures
libocijdbc11.dylib (for architecture i386): Mach-O bundle i386
libocijdbc11.dylib (for architecture x86_64): Mach-O 64-bit bundle x86_64
libsqlplus.dylib:     Mach-O universal binary with 2 architectures
libsqlplus.dylib (for architecture i386): Mach-O dynamically linked shared library i386
libsqlplus.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
libsqlplusic.dylib:   Mach-O universal binary with 2 architectures
libsqlplusic.dylib (for architecture i386): Mach-O bundle i386
libsqlplusic.dylib (for architecture x86_64): Mach-O 64-bit bundle x86_64
ojdbc5.jar:           Zip archive data, at least v1.0 to extract
ojdbc6.jar:           Zip archive data, at least v1.0 to extract
sqlplus:              Mach-O universal binary with 2 architectures
sqlplus (for architecture i386): Mach-O executable i386
sqlplus (for architecture x86_64): Mach-O 64-bit executable x86_64
uidrvci:              Mach-O universal binary with 2 architectures
uidrvci (for architecture i386): Mach-O executable i386
uidrvci (for architecture x86_64): Mach-O 64-bit executable x86_64
xstreams.jar:         Zip archive data, at least v1.0 to extract

Ok, now all the files are merged (or just copied) together.
Now let's check if both 32 and 64 bit work.

I need this environment set:

export DYLD_LIBRARY_PATH=/Users/berx/instantclient_11_2
export PATH=$PATH:/Users/berx/instantclient_11_2

So let's try the 32 bit version:

arch -arch i386 ./sqlplus 

SQL*Plus: Release 11.2.0.3.0 Production on Fri Feb 15 11:22:46 2013

Copyright (c) 1982, 2012, Oracle.  All rights reserved.

Enter user-name: 

It looks good, but let's check if it's really running in 32 bit. Activity Monitor helps:


And the same thing for 64 bit:

arch -arch x86_64 sqlplus 

SQL*Plus: Release 11.2.0.3.0 Production on Fri Feb 15 11:38:23 2013

Copyright (c) 1982, 2012, Oracle.  All rights reserved.

Enter user-name: 





So this can be seen as a short example of how to use only one ORACLE_HOME for 32 and 64 bit binaries on OSX.
But as always there is the usual disclaimer: it's not supported by Oracle; never do it on a production system (who is running anything Oracle-related on OSX Server at all?); it's not tested with all the different applications which use the Oracle client.

Update 1 (2013-02-17 21:50):

If you want the binary installation to be more like a real ORACLE_HOME, check Ronald Rood's blog!

Montag, 21. Januar 2013

about traveling

At the moment I am trying to travel to Edinburgh for a week of Exadata Proof of Concept.
The basic idea was to leave Vienna on Sunday evening and arrive in Edinburgh on Sunday night.
As I tried to do an online check-in on Saturday evening, I discovered the flights had been shifted to Monday evening. That was not my initial plan, as one day missing from a 5-day PoC is not so nice.
After some research I discovered my initial flight from Vienna to Amsterdam had been cancelled. But there was still an earlier flight on Sunday at lunchtime. As the flights were booked with KLM, I checked their website for some way to get support (note, it was Saturday, late evening).
To my concern there were no 24 hr telephone numbers or web interfaces - but they mentioned their twitter and facebook accounts as contacts. So I just gave @KLM a try, even though I was skeptical about support via twitter. But they really responded, asked for some details and booked me on the earlier flight on Sunday.
I was very happy about that and made my way to Schiphol. There I had to wait about 6 hours for my flight to Edinburgh. Waiting at an airport is never fun, but Schiphol was much better than many other airports I've seen so far!
But while I enjoyed the airport, it began to snow. Some flights got cancelled, more flights got cancelled, my flight got cancelled! Of course that's not nice, but no one is doing it on purpose. I am traveling alone, with no other people to take care of, and even without my luggage I am well equipped for such weather, so no big deal at all.
First I needed a place to stay, so I headed for the proper counter. I was not the only one whose flight was cancelled, so this led to some queueing. In this case, serious queueing! If you have ever seen a hard disk's response time go up from 10ms to 50ms due to queueing issues, and call that hard, you just don't know.
Well, after some hours I got my hotel voucher - off to the next queue. This time transfer buses to the hotels. In my case it was a hotel with about 400 rooms, a round-trip time of 20-30 min and 2 small buses which could each handle about 12 passengers. Please do the math!
Lucky as I am I got a nice room in Ramada Apollo Amsterdam.
After 2 beers at the bar I decided to head to my room. Midnight was already over.
But before I went to bed there was one more tweet to send - to ask for another flight to Edinburgh.

The next morning I had my direct message - the new flight was arranged by KLM twitter support: 15:35.
There was still plenty of time, so I decided to stay at the hotel a while longer, started up my laptop and did some work.
The transfer to Schiphol and check-in were flawless, so I'm sitting here waiting for boarding of flight KL1285. As there is always enough work I can do remotely, the waiting time is at least of some use.

There is just one question I'd like answered: why do I need to travel to a small town in Scotland for a proof of concept where I will use the same laptop and the same terminal application, do not need any "high speed low latency" connection and do not need to get in touch with the physical hardware at all?