2012-09-18

side channel attack on ORA-00942

Oracle Database has a powerful set of grants and permissions. One of the simple philosophies behind it is to just hide anything a user is not allowed to see. Technically this leads to the error message
ORA-00942: table or view does not exist.
More precisely, the message should read something like table or view does not exist or you are not allowed to access it.
For an ordinary user/schema separation there might be no big difference: if user A cannot read table B.TAB, it is of no value to user A whether the object does not exist or is just not accessible. But from a security point of view it can be quite interesting whether a particular object exists within the database. Even if it's not exploitable yet, it might be worth trying to reach it, or there may be a combined attack vector together with other objects.
A well-known method to gain information about a system which should be kept hidden is the side channel attack, known from physical cryptosystems. In such an attack, extra information about the system is gained by measuring external channels like timings, power consumption and so on.
I am showing a similar method to get a lot of information about objects within a database that a user, in general, should not know about.

The described attack combines two well-known concepts of every Oracle database: the row cache and session statistics.

The row cache (or data dictionary cache) "... is a collection of database tables and views containing reference information about the database, its structures, and its users." It's important to know that this cache is filled by SQL statements against the data dictionary tables.
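If you want to look at this cache directly, it is exposed through v$rowcache (assuming you are allowed to query the v$ views):

-- the row cache / data dictionary cache, broken down by parameter
SELECT parameter, gets, getmisses
  FROM v$rowcache
 ORDER BY gets DESC;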

Session statistics simply increase a counter in every session each time an action is executed.

Now let's check if we can get some hidden information out of my test system.
I did an alter system flush shared_pool; to make the testcase slightly easier, but even with a pre-filled row cache I am sure this information can be collected with only a little more effort.

My measurement query is
select ms.sid, ms.statistic#, sn.name, ms.value 
from v$mystat ms, V$STATNAME sn 
where sn.name in ('recursive calls', 'execute count') and ms.statistic# = sn.statistic# order by 2;
with a result like

       SID STATISTIC# NAME                      VALUE
---------- ---------- -------------------- ----------
       221          9 recursive calls            3108
       221        588 execute count               947

Now let's see where we can go. Something easy for the beginning: a select from a table which does not exist:

SQL> select * from gfdsaqwert;
select * from gfdsaqwert
              *
ERROR at line 1:
ORA-00942: table or view does not exist
with the measurement

       SID STATISTIC# NAME                      VALUE
---------- ---------- -------------------- ----------
       221          9 recursive calls            3111
       221        588 execute count               950
This means there were 3 recursive calls and 3 execute counts.

Now for something more interesting - let's check if a user exists:

SQL> select * from zzz.uztrfghj;
select * from zzz.uztrfghj
                  *
ERROR at line 1:
ORA-00942: table or view does not exist
20 recursive calls
 3 execute count


SQL> select * from a.uztrfghj;
select * from a.uztrfghj
                *
ERROR at line 1:
ORA-00942: table or view does not exist
21 recursive calls
 3 execute count

In my test DB no user ZZZ exists, but a user A does exist (though no table A.UZTRFGHJ). There is one more recursive call if the user exists. Not much, but a difference.


SQL> select * from a.m ;
select * from a.m
                *
ERROR at line 1:
ORA-00942: table or view does not exist
26 recursive calls
 7 execute count

Now there are even more recursive calls and execute counts. The table A.M exists in my system.

I'd say this shows the concept. The official answer from the database was ORA-00942: table or view does not exist all the time, but v$mystat gave us some more information we did not get officially.
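A small sketch of how this could be scripted in SQL*Plus (the schema and table names are made up; the delta also contains the recursive calls of the measurement query itself, but that offset is the same for every probe, so the values stay comparable):

COLUMN value NEW_VALUE before_calls
SELECT ms.value
  FROM v$mystat ms, v$statname sn
 WHERE sn.name = 'recursive calls'
   AND ms.statistic# = sn.statistic#;

-- probe a (hopefully non-existent) table in the schema of interest
SELECT * FROM zzz.uztrfghj;

COLUMN value NEW_VALUE after_calls
SELECT ms.value
  FROM v$mystat ms, v$statname sn
 WHERE sn.name = 'recursive calls'
   AND ms.statistic# = sn.statistic#;

SELECT &after_calls - &before_calls AS probe_delta FROM dual;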

2012-05-05

who cares if a listener is dying


In this post I try to show what's going on when a local listener dies in an 11gR2 RAC environment. My basic question is: when does a SCAN listener know the local listener has disappeared?
My testcase (a sandbox):

  • A 2-node RAC - all actions are run on node 1, unless stated otherwise.
  • My test-DB is called TTT04 (Test, you know?)
  • I have 3 SCAN listeners there, but I want to keep the test case simple, so I pin down my connection string to only one SCAN listener (it's SCAN2 in my case):
    TTT04_bx =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = 172.24.32.117)
                                   (PORT = 1521)) # SCAN2
        (CONNECT_DATA =
          (SERVICE_NAME = TTT04_SITE1)
        )
      )
  • start tracing pmon:
    ps -ef | grep pmon | grep TTT04
    SQL> oradebug setospid <pid_of_pmon>
    Oracle pid: 2, Unix process pid: <pid_of_pmon>, image: oracle@<node1> (PMON)
    SQL> oradebug Event 10257 trace name context forever, level 16
    Statement processed.

  • just to make sure server side load balancing will lead me to node1:
    on node2: several
    bzip2 -z -c /dev/urandom > /dev/null &
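    For "several" I just started a handful of them in the background, e.g. (the count of four is my arbitrary pick):
    for i in 1 2 3 4; do
      bzip2 -z -c /dev/urandom > /dev/null &
    done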
And now the real test. My 2 test scripts:
/tmp/bx1.sql

connect berx/berx123#@TTT04_bx
spool /tmp/bx1.txt
select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), HOST_NAME from v$instance;
exit


/tmp/bx2.sql

connect berx/berx123#@TTT04_bx
spool /tmp/bx2.txt
select to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), HOST_NAME from v$instance;
exit


My command is
kill -9 `pgrep -f "tnslsnr LISTENER "` ; lsnrctl services LISTENER_SCAN2 > /tmp/lsnr1.txt ; sqlplus /nolog @/tmp/bx1.sql & sleep 5 ; lsnrctl services LISTENER_SCAN2 > /tmp/lsnr2.txt; sqlplus /nolog @/tmp/bx2.sql


and the result on the Terminal:

SQL*Plus: Release 11.2.0.3.0 Production on Sat May 5 23:00:50 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

ERROR:
ORA-12541: TNS:no listener


SP2-0640: Not connected
[1]+ Done sqlplus /nolog @/tmp/bx1.sql 2> /tmp/bx1.err

SQL*Plus: Release 11.2.0.3.0 Production on Sat May 5 23:00:55 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected.

TO_CHAR(SYSDATE,'YY HOST_NAME
------------------- ---------
2012-05-05 23:00:55 <node2>



2012-05-04

how to secure CMAN against CVE-2012-1675 - or an easier method than ASO

In the Oracle DBA world, CVE-2012-1675 is a hot topic at the moment. Oracle announced some methods to secure existing systems, but these are sometimes not that easy, and there is no backport for older systems.
While investigating how to secure a connection manager, I was pointed to Note:1455068.1.
The solution is fairly easy: only allow incoming connections to your own systems, e.g.
    (rule=(src=*)(dst=10.220.8.114)(srv=*)(act=accept))
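Embedded in a cman.ora this could look roughly like the following stripped-down sketch (host name, port and IP are placeholders from my sandbox):

cman_cmhost1 =
  (CONFIGURATION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = cmhost1)(PORT = 1521))
    (RULE_LIST =
      # only connections destined for the DB server are passed on
      (RULE = (SRC = *)(DST = 10.220.8.114)(SRV = *)(ACT = accept))
      # allow local administration of the cman itself
      (RULE = (SRC = cmhost1)(DST = 127.0.0.1)(SRV = cmon)(ACT = accept))
    )
  )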

In a well designed environment where you can separate your DB servers from others at low network layers, a set of CMANs might be enough to secure your DBs against CVE-2012-1675.
At least it might be a simple and fast way to shield your systems from untrusted areas until you know how to secure the DB servers themselves. Especially for legacy systems it might be the only solution.

2012-03-26

do not touch if you do not know for sure


Oracle provides and documents a huge number of features and functions for nearly every purpose. For me it is impossible to know all of them; even knowing such an area exists is hard.
But sometimes the functions Oracle does not document for customer use seem more attractive than those officially available.
One of these attractive packages is DBMS_SYSTEM. You will not find any description of this package in the official Oracle documentation. There are some small traces available, but nothing really useful.
Oracle also has quite clear words about using such unofficial, hidden packages:
In How To Edit, Read, and Query the Alert.Log [ID 1072547.1] you can read:

NOTE about DBMS_SYSTEM:
This package should in fact only be installed when requested by Oracle Support.
It is not documented in the server documentation.
As such the package functionality may change without notice.
It is to be used only as directed by Oracle Support and its use otherwise is not supported.

Per internal Note 153324.1:
Generally, if a package is not in the Oracle documentation at all, it is intentional, as it is not for end user use. Non-documented packages should be avoided by customers unless specifically instructed to use them by either Oracle Support or Oracle Development.

For some reason I'm one of those who like to play with forbidden toys like these. I found a procedure in DBMS_SYSTEM whose behavior changed slightly in 11gR2 (I tested with the 11.2.0.3 patchset - other patchsets may well behave differently!).

I'm talking about DBMS_SYSTEM.READ_EV. This procedure more or less directly calls the internal C routine READ_EV_ICD. The common understanding is that it returns the level of a given event. This is largely true, with just one exception: probably the best-known event in the Oracle world, 10046 - or sql_trace.

My test script:
VARIABLE lev number
SET AUTOPRINT ON
EXECUTE sys.dbms_system.read_ev(10046, :lev)

ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

SELECT sql_trace, sql_trace_waits, sql_trace_binds FROM v$session WHERE sid=userenv('sid');


EXECUTE sys.dbms_system.read_ev(10046,:lev)

oradebug setmypid
oradebug eventdump session

gives the expected result in one of my 10g test DBs:
@test_read_ev.sql

PL/SQL procedure successfully completed.

       LEV
----------
         0

Session altered.

PL/SQL procedure successfully completed.

       LEV
----------
         8

Statement processed.
10046 trace name CONTEXT level 8, forever

but an unexpected result in my 11.2.0.3 test DB:

@test_read_ev.sql

PL/SQL procedure successfully completed.

       LEV
----------
         0

Session altered.

PL/SQL procedure successfully completed.

       LEV
----------
         0

Statement processed.
sql_trace level=8

I guessed events with an alias might be excluded somehow, but further tests with DEADLOCK (60) and DB_FILES (10222) showed this special behavior only with sql_trace.
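For reference, the counter-example probes followed the same pattern, here sketched for DEADLOCK (60) with an arbitrary level; READ_EV reported the level as set, even on 11.2.0.3:

VARIABLE lev NUMBER
SET AUTOPRINT ON
-- set the deadlock event to some level, then ask READ_EV about it
ALTER SESSION SET EVENTS '60 trace name context forever, level 4';
EXECUTE sys.dbms_system.read_ev(60, :lev)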

My conclusion today is simple:
If it's not documented for you, don't assume you can play with it without consequences.

2012-03-19

looking close at TAF


At the moment I'm trying to collect and sort some information about Oracle's Transparent Application Failover (TAF). There is a lot of general information available in the wild, but few deeper details. Here I try to show my findings.

Testcase

For my test-database with DB_UNIQUE_NAME: TTT06_SITE1 I created the service
srvctl add service -d TTT06_SITE1 -s TTT06_TAF -P BASIC -e SELECT -r TTT061,TTT062 .
The tnsnames.ora entry is
TTT06_TAF =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = OFF)
      (ADDRESS = (PROTOCOL = TCP)(HOST = crs908.my.domain)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = TTT06_TAF)(SERVER=DEDICATED)
    )
  )
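Just to double-check that the service carries the intended failover attributes before going on (a quick verification, not part of the test itself):

srvctl config service -d TTT06_SITE1 -s TTT06_TAF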

tracing

Just
strace -f -t -s 128 -o sqlplus_taf.strace sqlplus "berx/berx123#@TTT06_TAF"
I will look closely at sqlplus_taf.strace soon; first, the test case can be completed quickly:

current instance


SELECT (SELECT instance_number
        FROM   v$instance) inst,
       s.sid,
       s.service_name,
       s.failover_type,
       s.failover_method,
       s.failed_over,
       p.spid
FROM   v$process p,
       v$session s
WHERE  s.paddr = p.addr
       AND addr IN (SELECT paddr
                    FROM   v$session
                    WHERE  audsid = Sys_context('USERENV', 'SESSIONID'));


   INST      SID SERVICE_NAME FAILOVER_TYPE FAILOVER_M FAI   SPID
------- -------- ------------ ------------- ---------- --- ------
      1      144    TTT06_TAF        SELECT      BASIC  NO  23440

and after a startup force of instance 1, issued from a 2nd session

new instance


/

   INST      SID SERVICE_NAME FAILOVER_TYPE FAILOVER_M FAI   SPID
------- -------- ------------ ------------- ---------- --- ------
      2      146    TTT06_TAF        SELECT      BASIC YES  14927

what's going on

A short excerpt of sqlplus_taf.strace:
First sqlplus tries to access ~/.tnsnames.ora, fails, and then opens $TNS_ADMIN/tnsnames.ora. There, of course, it reads the connection string shown above.
Next it tries to resolve the HOST entry:
connect(9, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("<my_dns>")}, 28) = 0
and gets all IPs for my SCAN-DNS.
sqlplus asks one of the SCAN listeners:
connect(9, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("<any SCAN IP>")}, 16) = -1 EINPROGRESS (Operation now in progress)
for the SERVICE and gets a kind of redirect:
read(9, "\1\10\0\0\6\0\0\0\0@(ADDRESS=(PROTOCOL=TCP)(HOST=<NODE1-vip>)(PORT=1521))\0(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<SCAN IP>)(PORT=1"..., 8208) = 264
The SCAN listener is of no further use: close(9). sqlplus looks up the name of <NODE1-vip> in /etc/hosts and takes its next step with the <NODE1-vip> listener:
connect(9, {sa_family=AF_INET, sin_port=htons(1521), sin_addr=inet_addr("<NODE1-vip IP>")}, 16) = -1 EINPROGRESS (Operation now in progress)

The listener creates a server process for sqlplus - and lets them do their private business.


The startup force killed the server process for sqlplus. But sqlplus doesn't know anything about it until it gets the <newline> from the terminal.

Of course filehandle 9 is now dead and gets a close(9). Then essentially the same steps as above follow (only tnsnames.ora is not re-read!): SCAN IP lookup, redirect to a NODE-vip, etc.

So only tnsnames.ora is cached; all other lookups and connections are run again.
Take this into account if you try to change your setup (IPs, lookups, DNS) while connections are active.

2012-03-05

creating my mobile toolbox (for windows) I


I am somewhat tired of re-installing the same set of software again and again, every time I (have to) switch to a new PC. Probably it's me, not the PCs, but it takes some time to have the system set up, and me productive again.
Somehow it's like a craftsman who has to set up a new workspace with new tools in every house they visit. But craftsmen are clever: they bring their tools with them - and take them away when not needed anymore. In the best case they do not leave any traces (except the work done).
I try to mimic this approach: I'm creating my own toolbox. Mine is not made of leather or plastic; it's made of a USB stick, portableapps.com and some additional modifications.

First I chose the programs available in the portableapps app directory: Notepad++, Google Chrome, KeePass, PuTTY and WinSCP. I tried to keep the list small, but you can make your own decisions, of course.

Unfortunately I need some more tools: as a DBA, I sometimes like to access not only the database servers, but the databases directly. I did not find any proper tool in the app directory, so I decided to include Oracle's SQL Developer in my toolbox and followed the Portable Apps Format Specification. It's not as complex as it looks at first sight. Here are my steps:

  1. create the proper directory structure:
    SQLDeveloperPortable
    + App
      + AppInfo
      + DefaultData
    + Data
    + Other
      + Help
        + Images
      + Source

  2. download SQL Developer and unzip it into the App folder

  3. in AppInfo create the file appinfo.ini:
    [Format]
    Type=PortableApps.comFormat
    Version=2.0
    
    [Details]
    Name=SQLDeveloper Portable
    AppID=SQLDeveloperPortable
    Publisher=^/\x
    Homepage=berxblog.blogspot.com/2012/03/creating-my-mobile-toolbox-for-windows.html
    Category=Utilities
    Description=Oracle SQL Developer is a graphical version of SQL*Plus that gives database developers a convenient way to perform basic tasks
    Language=Multilingual
    Trademarks=Oracle
    InstallType=
    
    [License]
    Shareable=false
    OpenSource=false
    Freeware=false
    CommercialUse=true
    EULAVersion=1
    
    [Version]
    PackageVersion=3.1.07.42
    DisplayVersion=3.1
    
    [SpecialPaths]
    Plugins=NONE
    
    [Dependencies]
    UsesJava=no
    UsesDotNetVersion=
    
    [Control]
    Icons=1
    Start=sqldeveloper.bat
    ExtractIcon=App\sqldeveloper\icon.png
    

  4. In SQLDeveloperPortable create the file sqldeveloper.bat:
    REM ^/\x
    SET IDE_USER_DIR=%~d0\PortableApps\SQLDeveloperPortable\Data
    start /b %~d0\PortableApps\SQLDeveloperPortable\App\sqldeveloper\sqldeveloper.exe
    By setting IDE_USER_DIR all configurations will be stored on the USB-stick, not on (changing) PCs.

  5. creating a proper icon for App\sqldeveloper\icon.png

That's it - works like a charm!



Next I prepared Xming for the portable world. X11 is still needed in the world of an Oracle DBA.
The steps were similar to those for SQL Developer, therefore I only describe the differences here:
As I don't want to extract the Xming installer, I just let it install onto my PC into C:\Program Files. Then I copied the full structure C:\Program Files\Xming into XmingPortable\App.
In this case, too, a bat file is needed as a wrapper, as Xming needs some parameters to go into the tray without a window:
start /b %~d0\PortableApps\XmingPortable\App\Xming\Xming.exe :0 -clipboard -multiwindow



These 2 examples show that it's nice and easy to have your private toolbox at hand all the time.
I do not provide packages for any of these programs. First, I don't want to deal with any legal implications. Second, I have no interest in this kind of work, and I have no skills in doing so. Period.

2012-02-27

total abuse of technology



I had a (for my environment) unusual request:
After the migration of a Repository Database from 9i to latest 10g I was asked to keep a backup of the old DB for at least 3 years.
This does not sound very unusual, but it's not that simple in our environment. We only keep backups for weeks to a few months, at most. I also cannot simply back up the datafiles: the old database ran on Solaris, but we are switching to Linux right now. With just a bit of bad luck I would not have any system left to restore (or open) this database backup on at all.
This brought me to another solution; from my point of view it was not worth writing a blog post about, but Leighton L. Nelson asked me to, so here it is:

  1. export of the full database
    I ran a simple export of the database. There is no expdp in 9i, so the choice was easy.

  2. compress the files
    the dump (and the logfile!) were tarred together and compressed, just to save space.

  3. prepare a proper store
    As mentioned above, there is no dedicated system for this purpose, so I had to prepare a place where the dump is safe. As a DBA, of course I know a good place to store data: a database!
    First a DBFS came to my mind, but the DB is version 10.2 - no DBFS.
    But it's quite simple to do the important steps manually:
    create tablespace old_dump datafile '+<DG>' autoextend on;
    create user old_dump identified by dump_old1 default tablespace old_dump;
    GRANT CONNECT, CREATE SESSION,  CREATE TABLE to old_dump;
    alter user old_dump quota unlimited on old_dump;
    
    connect old_dump/dump_old1
    
    create table old_dump_store 
    (id integer primary key, description VARCHAR(2000), file_store BLOB) 
    LOB (file_store) STORE AS  SECUREFILE 
    (TABLESPACE old_dump DISABLE STORAGE IN ROW   NOCACHE LOGGING);
    

  4. insert the dump (and some metadata)

    There is a nice way in SQL Developer to load a file into a BLOB - it's just so simple (an alternative without SQL Developer is sketched right after this list).
    Finally, a few words in the description field are worthwhile, so everyone knows what's inside the BLOB.
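For the record, roughly the same load can be done without SQL Developer, e.g. with DBMS_LOB - a sketch, where the directory object, file name and id are made up:

create directory dump_dir as '/some/path/to/the/dump';   -- as a privileged user; path is hypothetical

DECLARE
  l_bfile BFILE := BFILENAME('DUMP_DIR', 'old_db_full_exp.tar.bz2');  -- hypothetical file name
  l_blob  BLOB;
BEGIN
  INSERT INTO old_dump_store (id, description, file_store)
  VALUES (1, 'full export (exp) of the old 9i repository DB, incl. logfile', EMPTY_BLOB())
  RETURNING file_store INTO l_blob;
  DBMS_LOB.OPEN(l_bfile, DBMS_LOB.LOB_READONLY);
  DBMS_LOB.LOADFROMFILE(l_blob, l_bfile, DBMS_LOB.GETLENGTH(l_bfile));
  DBMS_LOB.CLOSE(l_bfile);
  COMMIT;
END;
/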
It still might sound strange to save the dump of an old database into its descendant. But in the end: do you know a better place to store data than a database?

2012-02-16

restore DBMS_SCHEDULER.CREATE_CREDENTIAL cleartext password


If you want to use the Oracle file watcher, you need to Create a Credential. As a password needs to be stored in the database for this, Oracle tries to save it in a secure way. But since the password must be decrypted in order to log in on the file watcher's agent side, it is not safe at all:
The credentials are stored with DBMS_SCHEDULER.CREATE_CREDENTIAL. Here is an example:

exec DBMS_SCHEDULER.CREATE_CREDENTIAL(
  credential_name => 'local_credential',
  username => 'oracle',  password => 'welcome1');
exec DBMS_SCHEDULER.CREATE_CREDENTIAL(
  credential_name => 'local_credential2',
  username => 'oracle2', password => 'welcome1');


It's quite easy to see the values again:

select o.object_name credential_name, username, password
 FROM SYS.SCHEDULER$_CREDENTIAL c, DBA_OBJECTS o
 WHERE c.obj# = o.object_id;

CREDENTIAL_NAME    USERNAME PASSWORD
------------------ -------- ------------------------------------
LOCAL_CREDENTIAL   oracle   BWVYxxK0fiEGAmtiKXULyfXXgjULdvHNLg==
LOCAL_CREDENTIAL2  oracle2  BWyCCRtd8F0zAVYl44IhvVcJ2i8wNUniDQ==


At least the password is somehow encrypted, and even though the password was welcome1 for both credentials, the encrypted strings are not identical.

Nothing to blame so far, but as I mentioned, the password can be decrypted. So let's do that:

SELECT u.name CREDENTIAL_OWNER, O.NAME CREDENTIAL_NAME, C.USERNAME,
  DBMS_ISCHED.GET_CREDENTIAL_PASSWORD(O.NAME, u.name) pwd
FROM SYS.SCHEDULER$_CREDENTIAL C, SYS.OBJ$ O, SYS.USER$ U
WHERE U.USER# = O.OWNER#
  AND C.OBJ#  = O.OBJ# ;

CREDENTIAL_OWNER CREDENTIAL_NAME      USERNAME PWD
---------------- -------------------- -------- --------
SYS              LOCAL_CREDENTIAL     oracle   welcome1
SYS              LOCAL_CREDENTIAL2    oracle2  welcome1


Can you see it? It's there. Try it at home!
I don't blame anyone here. It's hard to store anything really securely when it also needs to be decrypted again.
But don't expect your password to be safe if you store it with DBMS_SCHEDULER.CREATE_CREDENTIAL.
Maybe it's slightly too easy to use DBMS_ISCHED.GET_CREDENTIAL_PASSWORD (ok, only SYS can do so), but even if it becomes slightly more difficult in the future, the basic problem will still exist.
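To clean up the two demo credentials afterwards (just housekeeping):

exec DBMS_SCHEDULER.DROP_CREDENTIAL('local_credential');
exec DBMS_SCHEDULER.DROP_CREDENTIAL('local_credential2');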

some tracing events in DBMS_SCHEDULER



I currently have the fun of reviewing DBMS_SCHEDULER. As I'm always interested in ways to trace anything, to dig deeper in case of problems, I searched for ways to trace it.
As I did not find a collected list of events anywhere, I'll start one here. It is by far not a complete list, so feel free to discuss and contribute, if you want!

event 10862


resolve default queue owner to current user in enqueue/dequeue
Cause: resolve default queue owner to current user in enqueue/dequeue.
Action: turn on if client wish to resolve the default queue owner to the current user. If not turned on, the default queue owner will be resolved to the login user.
This event is not checked the way you might imagine: only in the area of REMOVE_JOB_EMAIL_NOTIFICATION, if it's 0, it is set to 1 for a call to DBMS_AQADM.REMOVE_SUBSCRIBER and set back to 0 afterwards.

27401

scheduler tracing event

  • bit 0x10000 - Logging e-mail to table and returning
    bitand( ,65536)
    logs information about sending emails into the table sys.scheduler$_sent_emails
  • bit 0x20000 - start DBMS_DEBUG_JDWP.CONNECT_TCP in file watcher
    bitand( ,131072)
    starts DBMS_DEBUG_JDWP.CONNECT_TCP on localhost, port 4444
    I'm not sure if I like this event. In general I don't want any software opening connections without my knowledge, and I could not find this documented anywhere.
    Is it fair to call this a backdoor?
  • bit 0x40000 - starts tracing in file watcher
    bitand( ,262144)
    logs information about the file watcher into the trace file

27402

scheduler tracing event

  • bit 0x40 - starts tracing about emails
    bitand( ,64)
    similar to event 27401 bit 0x10000, but writes to the trace file instead of a table
  • bit 0x80 - starts tracing about emails
    bitand( ,128)
    logs information about email jobs into trace file
  • bit 0x100 - starts tracing in chains
    bitand( ,256)
    logs information about chains into trace file

I guess there is at least also a bit 0x200, but I could not prove it yet.
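For completeness: these are ordinary numeric events, so the level is simply the sum of the bits you want to switch on. A sketch for the e-mail logging bit (0x10000 = 65536) of 27401:

-- switch the e-mail logging bit on for this session ...
ALTER SESSION SET EVENTS '27401 trace name context forever, level 65536';
-- ... and off again
ALTER SESSION SET EVENTS '27401 trace name context off';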

27403

scheduler stop job event


I did not find anything about it yet. Comments are most welcome!

2012-02-06

get your traces - yourself



I'd like to mention a small piece of software called MrTrace, currently available in version 2.0.0.43. For me it's a tool that saves time. So what does it do?
MrTrace is a plugin for Oracle's SQL Developer to access trace files via SQL Developer. Its previous version could only access the trace file for the statement you just executed, but since version 2 anyone with the right permissions can access any trace file in the trace directory.

For a DBA it does not sound spectacular to access trace files, but it can be quite annoying to fetch and distribute trace files for developers. In my current company there is no OS access to database servers for anyone except DBAs and OS admins. This means someone must copy the traces over to make them accessible to others. It's not a complex task, but it's disturbing.
With MrTrace I can grant anyone who knows how to use SQL Developer permission to access trace files. So it saves a lot of time, for me and the developer. At a price of less than US$50 it should amortize in no time if you have a diligent developer.

A list of my very personal findings:

PROs:

  • it's easy to install (on the client side) 
  • it's not OS-dependent - you can use it on any client OS where you can start SQL Developer
  • the installation script for the database user and objects is not wrapped, so you can review and even change this part of the software. (In my case we have a PASSWORD_VERIFY_FUNCTION enforced in my company; I needed to edit the installation script to create the user MRTRACE.)
  • The support from Method R is great! I had the joy to participate in their beta program for version 2 of MrTrace, and we had some nice conversations.

CONs:

  • You need SQL Developer on the client side. No big deal for me, but in some companies that might be a problem.
  • MrTrace needs Java for some tasks. Unfortunately there is no built-in method to list the contents of a directory, so Java is needed.
  • OS commands like ls, find and xargs are used. There is nothing bad about these commands, but I don't see anything they do that could not be done in Java directly. So for me it increases complexity without need.

And no, I am not an employee of Method R, the only relation is the software license I bought myself. 

2012-01-31

Setting Up Oracle Connection Manager (without SOURCE_ROUTE)


This post must be seen as a direct follow-up to Arup Nanda's Setting Up Oracle Connection Manager.
As there are many references to that post, please read it first. Problem and solution are quite similar; only the architecture is a little bit different:

The Architecture

 The network diagram of the three machines is slightly different:


There is one additional required connection: from the instance on dbhost1 to the connection manager on cmhost1.

After changing the setup, you will need to rewrite the TNSNAMES.ORA in the following way:

TNS_CM = 
  (DESCRIPTION = 
    (ADDRESS = 
      (PROTOCOL = TCP)(HOST = cmhost1)(PORT = 1950)
    )
    (CONNECT_DATA = 
      (SERVICE_NAME=srv1)
    )
  )

You see, the (SOURCE_ROUTE = YES) disappeared as well as the ADDRESS of the listener on dbhost1.

How it Works


Note that all the special parameters and settings in the client's TNSNAMES.ORA disappeared. But the cman must know about the SERVICE_NAME it has to serve. As the cman can be seen as a special kind of listener, there is a common way a listener gets informed about a SERVICE_NAME: the instance has to register its services with the listener. In general this is done by PMON registering with local_listener and remote_listener. In this case, remote_listener is the magic parameter.

Setting Up


You can follow steps (1) to (9) as in Arup's blog.
But before (10) an additional step is required:

(x) on the instance add the cman to remote_listener:

Alter System Set remote_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=cmhost1)(PORT=1950))))' scope=both;

If there is already an entry in remote_listener, e.g. in a RAC, you can separate the different connection strings by commas. An example might be

Alter System Set remote_listener='SCAN-IP:1521,(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=cmhost1)(PORT=1950))))' scope=both;

(For more details about SCAN I'd recommend this PDF)

CMCTL Primer

As the services are now also registered with the cman, we can see them there. The SHOW command has a second parameter, services. Here is an example:

Services Summary...
Proxy service "cmgw" has 1 instance(s).
  Instance "cman", status READY, has 2 handler(s) for this service...
    Handler(s):
      "cmgw001" established:1 refused:0 current:0 max:256 state:ready
         <machine: 127.0.0.1, pid: 16786 >
         (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=44391))
      "cmgw000" established:1 refused:0 current:0 max:256 state:ready
         <machine: 127.0.0.1,pid: 16784>
         (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=44390))
Service "INSTANCE1" has 1 instance(s).
  Instance "INSTANCE1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         REMOTE SERVER
         (ADDRESS=(PROTOCOL=TCP)(HOST=dbhost1)(PORT=1521))
Service "cmon" has 1 instance(s).
  Instance "cman", status READY, has 1 handler(s) for this service...
    Handler(s):
      "cmon" established:3 refused:0 current:1 max:4 state:ready
         <machine: 127.0.0.1, pid: 16759>
         (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=44374))
The command completed successfully.

Fine Tuning

I try to create a dedicated service for all (or a well-known set of) connections via the connection manager. This sometimes makes it easier to separate or identify different kinds of sessions.
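A sketch of such a dedicated service (the names are mine; on a RAC, srvctl add service would be the tool of choice instead):

BEGIN
  DBMS_SERVICE.CREATE_SERVICE(service_name => 'SRV1_CMAN',
                              network_name => 'SRV1_CMAN');
  DBMS_SERVICE.START_SERVICE('SRV1_CMAN');
END;
/

Clients coming in via the connection manager then use SERVICE_NAME=SRV1_CMAN in their connect string, while everyone else keeps using the default service.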