Thursday, November 26, 2015

handling disks for ASM - when DB, Linux and Storage admins work together

A proper management of ASM Disks can be a complicated task.

At DOAG2015 I discussed with Martin Bach the concept used in my current company, where we implemented a setup which is consistent, robust and enables Storage, Linux and DB admins to work together easily.

As we started to think about ASM when 10.1 was out, we tried to evaluate our options. asmlib was discarded quite early as it only increased complexity without additional value: we have a SAN (fibre channel) infrastructure with 2 separate fabrics, so a multipath configuration is needed. ASM (or asmlib) can not handle this, so a proper multipath configuration is needed anyway. Also, asmlib hides storage details from DBAs/ASM admins, whereas we wanted to enable every person involved to know as many details as possible, easily.

We also saw that ASM sometimes takes a long time to scan for new disks if there are many "files" (devices) matched by asm_diskstring. This happens every time you access v$asm_disk - so use v$asm_disk_stat instead, as this view does not rescan but only shows information about devices already known in the SGA.


We set asm_diskstring to a dedicated directory, in our case /appl/oracle/asm_disks/*. This speeds up a rescan of all "disks", and it's also a clear indicator of all disks ASM uses. There we have symlinks to devices in /dev/mapper/.


The symlink has this format:
/appl/oracle/asm_disks/360060e80167bd70000017bd700000007p1_p9500_b52_MONIQP01_000 -> /dev/mapper/360060e80167bd70000017bd700000007p1
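For reference, a minimal sketch of the matching ASM instance setting and a rescan-free check (the path is the example directory from above):

ALTER SYSTEM SET asm_diskstring = '/appl/oracle/asm_disks/*' SCOPE=BOTH;

SELECT path, name, header_status, mount_status
FROM   v$asm_disk_stat;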

Some information about the values we store there:
360060e80167bd70000017bd700000007p1 is the WWN of the disk, together with its partition (p1). The WWN is very useful in every discussion with Storage admins, as it identifies the LUN from their perspective. We decided to partition the disks: our records show that Linux admins touch un-partitioned devices more often than devices which already carry a partition table. There were also some cases in early tests where the first block of a disk was cached by the kernel. Both issues are addressed when we partition every disk. If required, partitioning can also help to adapt alignment.
p9500 is a short name which identifies the storage box during discussions. It's contained within the WWN as well, so it's pure redundancy. But it makes discussions much easier.
b52 is a short name to identify the datacenter. As our fabrics span several datacenters, it's sometimes nice to know the topology at a glance.
MONIQP01_000 is the label used in some storage boxes. It contains the diskgroup name and a running number. At the moment it's NOT the NAME of an ASM disk, but this could be introduced easily.

As the name of a diskgroup is coded into our naming schema, it's not accepted to reuse a disk for another diskgroup. (Technically it's still possible, we just agreed not to do so.) Even if this seems to limit the DBAs' flexibility, there are good reasons for it: disks are sometimes created with dedicated settings/parameters for a special purpose, and reusing such disks in other DGs would cause strange and hard-to-find performance symptoms. So if disks are not needed anymore, we always "destroy" them and create new ones when needed.

udev rules

Our udev ruleset on RedHat 6 is quite simple:
the file /etc/udev/rules.d/41-multipath.rules contains lines like this:
ACTION=="add|change", ENV{DM_NAME}=="360060e80167bd70000017bd700000007p1", OWNER:="oracle", MODE:="0660", GROUP:="asmadmin"
We do not do any mapping of names here - it's only there to set permissions.


The config in /etc/multipath.conf is quite simple as well; it only contains the parameters required for the specific storage vendor / product.

I can not say a lot about the configuration outside of the Linux server; both SAN fabrics and the storage system are "just working".

Saturday, November 21, 2015


In the Oracle database the view DBA_FEATURE_USAGE_STATISTICS exists to
"display information about database feature usage statistics".
That's all I can find in the (12.1) documentation.

Just in case anyone asks - DBA_FEATURE_USAGE_STATISTICS is not mentioned in the license guide at all:

Still, for me it's interesting what's behind the view, and how its content is populated.

So I started to analyze its anatomy.
Starting with the view definition (slightly rewritten for better readability):
SELECT samp.dbid,, samp.version, 
       fu.detected_usages, samp.total_samples, 
       Decode(To_char(fu.last_usage_date,  'MM/DD/YYYY, HH:MI:SS'), 
              NULL, 'FALSE', 
              To_char(samp.last_sample_date, 'MM/DD/YYYY, HH:MI:SS'), 
              'TRUE', 'FALSE') currently_used, 
       fu.first_usage_date, fu.last_usage_date, mt.description 
FROM   wri$_dbu_usage_sample samp, 
       wri$_dbu_feature_usage fu, 
       wri$_dbu_feature_metadata mt 
WHERE  samp.dbid = fu.dbid 
       AND samp.version = fu.version 
       AND = 
       AND NOT LIKE '_DBFUS_TEST%'    /* filter out test features */ 
       AND Bitand(mt.usg_det_method, 4) != 4  /* filter out disabled features */

This view is quite simple, and all important columns of wri$_dbu_usage_sample and wri$_dbu_feature_usage are shown in the projection.
The filters are only to hide some rows used for internal testing.

So the interesting part are those columns of wri$_dbu_feature_metadata which are not shown.
In this table there are 2 groups of columns:
The first group are the columns INST_CHK_METHOD and INST_CHK_LOGIC.
They describe if and how to check whether a detailed detection is needed at all.
INST_CHK_METHOD has 3 possible values right now:
DBU_INST_ALWAYS_INSTALLED (1)  /* no test required */
DBU_INST_OBJECT                /* object name in LOGIC */
DBU_INST_TEST (4)              /* only test */
DBU_INST_ALWAYS_INSTALLED means the detection is always run. With DBU_INST_OBJECT it is only run if the object named in INST_CHK_LOGIC exists.
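A hedged example (run as SYS) of how to look at these install checks yourself:

select name, inst_chk_method, inst_chk_logic
from   sys.wri$_dbu_feature_metadata
where  name not like '_DBFUS_TEST%'
order  by name;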

The detection itself is described in the columns USG_DET_METHOD and USG_DET_LOGIC.
USG_DET_METHOD has 3 possible values:
DBU_DETECT_BY_SQL        /* SQL specified in USG_DET_LOGIC */
DBU_DETECT_BY_PROCEDURE  /* Procedure specified in USG_DET_LOGIC */
DBU_DETECT_NULL (4)      /* only test for _DBFUS_TEST_SQL_7 */

If it's set to DBU_DETECT_BY_SQL, the SQL stored in USG_DET_LOGIC is executed and its result stored. In case it's DBU_DETECT_BY_PROCEDURE, the procedure is called. It has 3 parameters (with different names, but the types are always the same):
Argument Name Type   In/Out Default 
------------- ------ ------ ------- 
IS_USED       NUMBER OUT            
DATA_RATIO    NUMBER OUT            
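The detection logic of a single feature can also be looked up directly in the metadata table; a hedged example (the feature name is just an illustration):

select name, usg_det_method, usg_det_logic
from   sys.wri$_dbu_feature_metadata
where  name = 'Result Cache';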


This information should be sufficient to analyze everything inside and related to DBA_FEATURE_USAGE_STATISTICS.

During the research I found 2 ways to do an immediate run of all tests instead of the default weekly run. Both update wri$_dbu_feature_usage, so the results are stored.

The first one is to kick mmon to run the usage sampling out of schedule:
alter session set events 'immediate trace name mmon_test level 6';

The second method is to run the sampling in the current session. This is quite nice for any kind of tracing:
alter session set "_SWRF_TEST_ACTION"=53;

But be aware that both methods can produce a DETECTED_USAGES value higher than the number of weeks between FIRST_USAGE_DATE and LAST_USAGE_DATE.
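A hedged example of how to spot entries where such extra runs inflated the counter:

select name, detected_usages, first_usage_date, last_usage_date
from   dba_feature_usage_statistics
where  detected_usages > (last_usage_date - first_usage_date)/7 + 1;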

For a proper mapping between entries in DBA_FEATURE_USAGE_STATISTICS and options/packs there is the script options_packs_usage_statistics.sql provided by My Oracle Support in Database Options/Management Packs Usage Reporting for Oracle Databases 11gR2 and 12c (Doc ID 1317265.1).

Saturday, October 10, 2015

12c datapatch can fail with manual out of place patching

datapatch can fail when doing manual out-of-place patching, if the new ORACLE_HOME is not cloned from the previous one but is a fresh installation. The problem is a one-off patch in the old ORACLE_HOME which is included in something else (like a PSU) in the new ORACLE_HOME.
In my specific situation the old O_H had PSU + some one-offs + Patch:21355879.
This patch is not needed anymore in my new O_H ( + some other one-offs) - I checked on My Oracle Support - Patches & Updates:

So there is no need (and no simple way) to add this patch to the new O_H at all.

But when running datapatch within the new O_H, it throws this error and refuses to continue:
Error: prereq checks failed!
  patch 21355879: rollback script /appl/oracle/product/rdbms_121024Jc/sqlpatch/21355879/19095143/21355879_rollback.sql does not exist
Prereq check failed, exiting without installing any patches.

It's obvious that patch 21355879 is not installed in the new O_H. But somehow (I did not check the exact internal function) datapatch checks the view DBA_REGISTRY_SQLPATCH and finds PATCH_ID 21355879. With a statement similar to
SELECT XMLSerialize(CONTENT dbms_sqlpatch.opatch_registry_state INDENT) from dual;
it finds the patch UID 19095143. The base table seems to be OPATCH_XINV_TAB. (The other patches within 21355879 are not in this XML and don't require a rollback script.)
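The registry entry datapatch stumbles over can be checked manually; a hedged example:

select patch_id, patch_uid, version, action, status, action_time
from   dba_registry_sqlpatch
where  patch_id = 21355879;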

So the database repository says a rollback script should be present, but it is not there in the new O_H, as Oracle considers the patch not required anymore.

The clean solution would be to patch every new O_H in the exact sequence of all its predecessors - this can be time consuming and also challenging when different branches with conflicting one-offs need to be merged - even though all these conflicting one-offs are merged into a later PSU and not needed anymore.

We decided to take the easier way and just copied all patch directories from the old to the new O_H.

12c datapatch - take care of parallel patching

datapatch is a nice new feature in recent Oracle database installations. It helps to ensure the database objects match the binaries after any kind of patching, and so avoids situations which can be an operational and support nightmare - and very hard to identify.


Unfortunately it has some drawbacks as well.
One of those I hit recently when running datapatch on 2 instances which use the same ORACLE_HOME.
At some point in its progress datapatch uses dbms_qopatch.get_opatch_lsinventory, which uses the external table SYS.OPATCH_XML_INV. This table has the preprocessor $ORACLE_HOME/QOpatch/qopiprep.bat. This script executes
$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt
Maybe you already see what will happen when 2 different instances execute this step at the same time:
2 different opatch lsinventory -xml runs write into the same file $ORACLE_HOME/QOpatch/xml_file.xml - a very efficient way to corrupt the XML file.

In one DB I got this error:
Determining current state...DBD::Oracle::st execute failed: ORA-20001: Latest xml inventory is not loaded into table
ORA-06512: at "SYS.DBMS_QOPATCH", line 1937
ORA-06512: at "SYS.DBMS_QOPATCH", line 1259 (DBD ERROR: error possibly near <*> indicator at char 143 in 'SELECT description, startup_mode
               FROM XMLTable('/InventoryInstance/patches/patch[patchID=21573304]'
                    PASSING <*>dbms_qopatch.get_opatch_lsinventory
                    COLUMNS description VARCHAR2(100) PATH 'patchDescription',
                            startup_mode VARCHAR2(7) PATH 'sqlPatchDatabaseStartupMode')') [for Statement "SELECT description, startup_mode
               FROM XMLTable('/InventoryInstance/patches/patch[patchID=21573304]'
                    PASSING dbms_qopatch.get_opatch_lsinventory
                    COLUMNS description VARCHAR2(100) PATH 'patchDescription',
                            startup_mode VARCHAR2(7) PATH 'sqlPatchDatabaseStartupMode')"] at /appl/oracle/product/rdbms_121024Jc/sqlpatch/ line 1368, <LOGFILE> line 73.

In the other DB:
verify_queryable_inventory returned ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00229: input source is empty
ORA-06512: at "SYS.XMLTYPE", line 272
ORA-06512: at line 1

Queryable inventory could not determine the current opatch status.
Execute 'select dbms_sqlpatch.verify_queryable_inventory from dual'

But other errors are possible as well - all based on this concurrency issue.
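After such a collision you can verify manually whether the queryable inventory works again, using the call mentioned in the error message above (a hedged check; the second statement triggers the preprocessor script once more):

select dbms_sqlpatch.verify_queryable_inventory from dual;
select XMLSerialize(CONTENT dbms_qopatch.get_opatch_lsinventory INDENT) from dual;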


In our environment it's quite simple to work around such issues: we have scripts which do all the steps during patching, so the "apply datapatch" module just needs a small extension which checks for a "latch" (the existence of a file in our case) and only continues if it can grab this file. Otherwise it sleeps for a short time.


Oracle could easily use a filename like $ORACLE_HOME/QOpatch/xml_file.$$.xml instead. I'm not sure if it's worth the effort to fight this through MOS.

Tuesday, June 23, 2015

SQL Plan Directives and result cache affect performance

In my current company we are preparing a migration of our billing application to a new version. During this migration there are the usual infrastructure changes as well: application servers from HPUX to Linux, database servers from RH5 to RH6, a different storage subsystem, Oracle from 10.2 to 12c, different application partitioning, and so on ...
At least from the management perspective the expectation is clear: everything is shiny new and cost a lot of money, so it must be faster than before.

In this case the first test of a billrun was not faster. Our application mate contacted the DBA team as it seemed to be significantly slower than on the [old env].

As you can see in the graph, most sessions spent their time in wait class Other. The wait is latch free - so it's a serialization issue. But the latch was Result Cache: RC Latch, and it was (at least for some sessions) requested in exclusive mode, as shared mode alone would not cause the sessions to wait.
The DB has RESULT_CACHE_MODE = MANUAL set, and the statements affected were like (slightly simplified)
select distinct ID 
from instance_history 
where base__id = 123456 
  and status_code = 3 
  and start_date >= to_date('20150515', 'yyyymmdd')     
  and start_date <= to_date('20150614', 'yyyymmdd')
As there is no RESULT_CACHE hint, MODE is MANUAL and nothing is set in the sessions either, it was not obvious why and where the latch is required at all. A quick check in ASH showed all sessions with this wait had IN_PARSE=Y. It seemed strange to have the result cache involved in exclusive mode during the parse of a statement, as exclusive mode is only required to enter values into the result cache, and the statement is not executed yet.
But sometimes a statement is executed during PARSE: there are recursive SQLs fired by the optimizer to get more information and deliver better plans. In this case it's dynamic sampling.
In the DB the parameter optimizer_dynamic_sampling is 2, and the table had statistics. Based on this parameter the optimizer should not do any dynamic sampling.
But a new 12c feature initiates dynamic sampling regardless of the DB settings: SQL Plan Directives. Here the SPD told the optimizer that the stats on table instance_history might not be sufficient and it should do some dynamic sampling there as well.
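The directives involved can be listed from the dictionary; a hedged example (no owner filter shown here):

select d.directive_id, d.type, d.state, d.reason, o.owner, o.object_name
from   dba_sql_plan_directives d
join   dba_sql_plan_dir_objects o on o.directive_id = d.directive_id
where  o.object_name = 'INSTANCE_HISTORY';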
By some luck I had attended a presentation by Christian Antognini about Adaptive Dynamic Sampling some days before. There he mentioned that in this case the optimizer uses optimizer_dynamic_sampling level 11 regardless of the spfile setting. Even worse, it lies about this in the Note section of the explain plan.

With all this information at hand, the quick summary at that time was:
A lot of parsing creates a huge amount of (identical) dynamic sampling statements. Those try to utilize the result cache via the hint RESULT_CACHE(SNAPSHOT=3600). As the result cache was configured very small, there were few hits but many misses - and all those misses tried to add their results to the cache. Other sampling statements had to wait for that. And as totally different statements ran at the same time, they wiped out the cache again and again.
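The relevant knobs, as a hedged sketch (the size below is just an example value):

exec DBMS_RESULT_CACHE.MEMORY_REPORT(detailed => TRUE);
alter system set result_cache_max_size = 256M scope=both;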

As a first reaction we increased the result cache in the database. It looks much better now: latch free is down to 16% (CPU: 50%, db file sequential read: 25%). But still the throughput sometimes drops due to serialization. Of course we could increase the result cache again, but we decided on another approach.

Back to the first issue: we have many statements in PARSE. As all of these statements had the same execution plan, there is no need to parse the statement again and again. But it uses literals instead of bind variables.
One suggestion was to set cursor_sharing=force. We could do this on instance level or via logon trigger for a specific user/host/program. But I still prefer the most specific solution with the least side effects.
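A hedged sketch of what such a logon trigger could look like (the user name is purely hypothetical):

create or replace trigger set_cursor_sharing_trg
after logon on database
begin
  if sys_context('USERENV', 'SESSION_USER') = 'BILLING_USER' then  -- hypothetical application user
    execute immediate 'alter session set cursor_sharing = force';
  end if;
end;
/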
So we asked the colleague responsible for the application tests if the statement could be changed to use bind variables. And to my big surprise it was possible without a big issue!

Now all the latch free waits are gone. The topmost "latch" event is latch: cache buffers chains.

During these investigations I learned a lot about SQL Plan Directives, dynamic sampling and even result cache latches. It shows that avoiding unnecessary parses is still good advice in 12c - now for one more reason.

I have to thank Chris Antognini, Franck Pachot, Martin Klier and Stefan Koehler for their blogs, presentations and fast support via twitter.

Monday, May 11, 2015

Poor man's getTracefile

As you might know already, I'm a big fan of Oracle traces. Consequently I'm also a fan of the Method R tools to work with these files.
A very important part in the chain of tools is Method R Trace - an extension for SQL Developer - as it speeds up developers a lot: when they generate a tracefile, they can access it immediately and do not need to ask and wait for a DBA to get and provide the file. On the other side, I as a lazy DBA want developers to serve themselves.
That is exactly what Method R Trace specializes in.
Besides the functionality to trace a current statement/script, the previous version of Method R Trace had another functionality (called feature 2) to list all tracefiles and download them.

Unfortunately this functionality disappeared in the current version, as I mentioned earlier.

As I really like that feature, I implemented a poor man's getTracefile functionality in SQL Developer 4.1 with a mixture of the Method R Trace installation scripts from both versions and some additions of my own.

The outcome is not as nice looking or integrated, but it provides the general functionality.
You can list all the tracefiles in a User Defined Report:

When you click a Tracefile the child report gets populated with a bfile locator associated with the specific physical LOB file.

Here SQL Developer is very kind and provides a context menu for this bfile locator.

And you can decide to store the file locally or view it directly. There is no fancy integration into Method R Trace like tagging, filtering or whatever, so you must process the file manually.

To get this functionality an extension to the MRTRACE schema is required, and of course the user defined report must be installed in SQL Developer.

For a proper MRTRACE schema you need the installation scripts from both Method R Trace versions. First install the current script (+patch). Afterwards install only the level two feature set objects from the previous version.

Additional grants are required as well:
grant select on dba_directories to mrtrace;

Additionally, these objects, grants and synonyms need to be created. I create them for user MRTRACE:

/* 0.3 */
/* 20150511 - bergerma - add full schema reference (MRTRACE)                */
/*                       as MRTRACE lacks create session and has NOPASSWORD */
/*                       add JAVA_GRANTS as trace directories changed       */
/* requires MrTrace objects! */
/* for details */

connect SYS
grant select on dba_directories to mrtrace; -- <- make sure this user is correct

connect SYSTEM/:system_pwd

create or replace type MRTRACE.TRACEFILE as object (
  trace_path varchar2(200),
  trace_name varchar2(200),
  filesize   NUMBER(15,0),
  filedate   date
);
/

create or replace type MRTRACE.TRACEFILES as table of MRTRACE.TRACEFILE;
/



create or replace package MRTRACE.MRTRACE_BX as
/* ***************************************************************************
* berx - 20150507 - initial release - for testers only
* see 
*   for details
*************************************************************************** */

function get_version return varchar2;

function get_tracefiles_java return TRACEFILES pipelined;

function get_bfile(file_name IN varchar2, directory_name IN varchar2)
  return bfile;

end MRTRACE_BX;
/


create or replace package body MRTRACE.MRTRACE_BX as
/* ***************************************************************************
* berx - 20150507 - initial release - for testers only
* berx - 20150508 - 0.2 - minor adaptions
* berx - 20150511 - 0.3 - add full schema reference
* see 
*   for details
* requirements
*************************************************************************** */

FUNCTION get_version RETURN varchar2
IS
  v_myversion         varchar2(10) := '0.3';
  v_required_mrtrace  varchar2(20) := '';
  v_actual_mrtrace    varchar2(20);
  v_warning_string1   varchar2(60) := ' - MRTrace_bx requires MRTRACE level two feature set ';
  v_warning_string2   varchar2(30) := ' but the version is: ';
BEGIN
  v_actual_mrtrace := MRTRACE.get_version;
  if v_actual_mrtrace = v_required_mrtrace then
    RETURN v_myversion;
  else
    RETURN v_myversion || v_warning_string1 || v_required_mrtrace ||
           v_warning_string2 || v_actual_mrtrace;
  end if;
END get_version;

FUNCTION get_tracefiles_java RETURN TRACEFILES PIPELINED
IS
  VRETURNARRAY SimpleStringArrayType;
  v_cnt NUMBER := 0;
BEGIN
  -- VRETURNARRAY is filled by a call into the MrTrace level two feature set (not shown in this excerpt)
  FOR i IN 1 .. VRETURNARRAY.count LOOP
    pipe row( TRACEFILE( SUBSTR(VRETURNARRAY(i),1,instr(VRETURNARRAY(i), '|', 1, 1)-1)  -- PATH
            , SUBSTR(VRETURNARRAY(i),
                     instr(VRETURNARRAY(i), '|', 1, 1) + 1, 
                     instr(VRETURNARRAY(i), '|', 1, 2)-instr(VRETURNARRAY(i), '|', 1, 1)-1) -- FILENAME
            , to_number( SUBSTR(VRETURNARRAY(i), 
                                instr(VRETURNARRAY(i), '|', 1, 2) +1, 
                                instr(VRETURNARRAY(i), '|', 1, 3)-instr(VRETURNARRAY(i), '|', 1, 2)-1) 
                       ) -- SIZE
            , (TO_DATE('19700101000000', 'YYYYMMDDHH24MISS') + 
               to_number( SUBSTR(VRETURNARRAY(i), 
                                 instr(VRETURNARRAY(i), '|', 1, 3)+1, 
                                 LENGTH(VRETURNARRAY(i))-instr(VRETURNARRAY(i), '|', 1, 3))) / 86400000
              ) -- FILEDATE: milliseconds since 1970-01-01 converted to DATE
          ) ); -- pipe row ... 
  END LOOP;
  RETURN;
END get_tracefiles_java;

FUNCTION get_bfile(file_name IN varchar2, directory_name IN varchar2)
  RETURN bfile
IS
  v_path     VARCHAR2(200);
  v_dirname  VARCHAR2(100) := 'METHODR_UDUMP_1';
  v_bfile    BFILE;
BEGIN
  v_path := directory_name;
  begin
    select min(dir.DIRECTORY_NAME) into v_dirname 
    from dba_directories dir 
    where dir.directory_path = v_path 
      and dir.directory_name like 'METHODR_UDUMP%';
  exception
    when others then
      DBMS_OUTPUT.put_line (
         'Error in get_bfile where raised:');
      DBMS_OUTPUT.put_line (SQLERRM);
  end;
  v_bfile := bfilename(v_dirname,file_name);
  RETURN v_bfile;
END get_bfile;

END MRTRACE_BX;
/


declare
  v_path   varchar2(200); 
  c_path   sys_refcursor; 
begin
  open c_path for 'select DIRECTORY_PATH from dba_directories where directory_name like ''METHODR_%''';
  loop
    fetch c_path into v_path;
    exit when c_path%notfound;
    if ( v_path is null ) then
      raise_application_error(-20009, 'Could not get the value of the "METHODR_" directory from dba_directories.', false);
    end if;
    execute immediate 'BEGIN DBMS_JAVA.GRANT_PERMISSION(''MRTRACE'',' || ''''',''' || v_path || ''', ''read'' ); END;';
    execute immediate 'BEGIN DBMS_JAVA.GRANT_PERMISSION(''MRTRACE'',' || ''''',''' || v_path || '/-'', ''read'' ); END;';
  end loop;
  close c_path;
end;
/

grant execute on MRTRACE.MRTRACE_BX to PUBLIC;
create public synonym mrtrace_bx for mrtrace.mrtrace_bx;
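Before wiring the package into a report, a quick sanity check is possible (a hedged example, assuming the MrTrace level two feature set objects are in place):

select mrtrace_bx.get_version from dual;

select trace_name, filesize, filedate
from   table(mrtrace_bx.get_tracefiles_java);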

Finally, this report is required in SQL Developer:

<?xml version="1.0" encoding="UTF-8" ?>
<display id="04e5e018-014a-1000-8001-ac193119805b" type="" style="Table" enable="true">
<name><![CDATA[get tracefiles]]></name>
<description><![CDATA[create a list of tracefiles in Instances UDUMP directory and makes specific files available
it's based on Method-R Method R Trace functionality (version needed) with an extension MrTrace_bx ]]></description>
<tooltip><![CDATA[create a list of tracefiles in Instances UDUMP directory and makes specific files available ]]></tooltip>
<sql><![CDATA[select TRACE_NAME, TRACE_PATH
from table(mrtrace_bx.get_tracefiles_java)]]></sql>
<pdf version="VERSION_1_7" compression="CONTENT">
<docproperty title="-" author="-" subject="-" keywords="-" />
<cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
<heading font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
<footing font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
<blob blob="EMBED" zip="false" />
<table font="Agency FB" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
<header enable="false" generatedate="false">
<data>null</data>
<footer enable="false" generatedate="false">
<data value="null" />
<security enable="false" useopenpassword="false" openpassword="null" encryption="EXCLUDE_METADATA">
<permission enable="false" permissionpassword="null" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" />
<pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />
<display id="null" type="" style="Table" enable="true">
<name><![CDATA[get tracefile]]></name>
<description><![CDATA[access the specific tracefile ]]></description>
<tooltip><![CDATA[Specific tracefile from list above ]]></tooltip>
<sql><![CDATA[select mrtrace_bx.get_bfile(:TRACE_NAME, :TRACE_PATH) as " Tracefile "
from dual]]></sql>
<bind id="TRACE_NAME">
<bind id="TRACE_PATH">
<pdf version="VERSION_1_7" compression="CONTENT">
<docproperty title="-" author="-" subject="-" keywords="-" />
<cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" />
<heading font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" />
<footing font="null" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" />
<blob blob="EMBED" zip="false" />
<table font="null" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" />
<header enable="false" generatedate="false">
<data>null</data>
<footer enable="false" generatedate="false">
<data value="null" />
<security enable="false" useopenpassword="false" openpassword="null" encryption="EXCLUDE_METADATA">
<permission enable="false" permissionpassword="null" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="false" />
<pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" />

If you see any issues with these scripts and the report, please tell me - I only had limited possibilities to test them.

Wednesday, May 6, 2015

SQL Developer 4.1 with Method R Trace 3

These days SQL Developer 4.1 GA was released. If you use SQL Developer, I'd recommend an update.

Nearly at the same time the Method R Trace extension for SQL Developer was released as well. Especially for developers this is very important news - at least if they care about performance.
A new version of this extension is needed as SQL Developer changed its extension framework in the upgrade from version 3 to 4. So I'm very glad Method R took the effort to upgrade Method R Trace. Because of this upgrade, the feature set of the current release is limited, as described in the announcement.
I follow their argumentation that it's better to release Method R Trace with feature 1 right now and not wait some more months until both feature 1 and 2 are ready.

But this decision has some consequences: feature 1 and 2 need different objects installed in the database. And as the current version only delivers feature 1, all the objects for feature 2 (called level two feature set objects in the installation script) are not included. As I could not find the old extension available at the Method R website anymore, I can only advise to preserve a copy in case you need some of the level two feature set objects for any purpose.

One more little thing in case you use Method R Trace on a RAC: in the installation script replace
open c_gvparameter for 'select i.inst_id, v.value 
from gv$instance i, v$diag_info v 
where = ''Diag Trace''';
with
open c_gvparameter for 'select i.inst_id, v.value 
from gv$instance i, gv$diag_info v 
where i.inst_id = v.inst_id 
  and = ''Diag Trace''';

to ensure you can access the local tracefiles on all instances.