2023-07-16

SQLcl stores connections - how and where?

Oracle's SQLcl has a lot of nice features. One of them is the ability to store connections (connect string, username and - if you choose to - the password). But, as usual, there is little information about how and where these are stored.

First, I create a new connection in my brand-new SQLcl 23.2:


sql -nolog


SQLcl: Release 23.2 Production on Sun Jul 16 17:53:37 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

No Connection
SQL> connect berx/berx@//127.0.0.1/freepdb1
Connected.
SQL> conn -save berxfree -savepwd
Name: berxfree
Connect String: //127.0.0.1/freepdb1
User: berx
Password: ******
SQL> connmgr list
berxfree
SQL> connmgr show berxfree
Name: berxfree
Connect String: //127.0.0.1/freepdb1
User: berx
Password: ******
SQL>

I found the connection definition in a subdirectory of ~/.dbtools/connections/:

[oracle@localhost connections]$ pwd
/home/oracle/.dbtools/connections
[oracle@localhost connections]$ ls -la
total 0
drwx------. 3 oracle oinstall 22 Jul 16 17:54 .
drwx------. 3 oracle oinstall 25 Jul 15 16:34 ..
drwx------. 2 oracle oinstall 51 Jul 16 17:54 berxfree
[oracle@localhost connections]$
And in this directory there are two files: cwallet.sso and dbtools.properties.

First, let's check dbtools.properties:

#
#Sun Jul 16 17:54:23 UTC 2023
connectionString=//127.0.0.1/freepdb1
name=berxfree
type=ORACLE_DATABASE
userName=berx


Also, cwallet.sso has some valuable content:

[oracle@localhost berxfree]$ mkstore -wrl . -list
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

Oracle Secret Store entries:
oracle.security.client.connect_string1
oracle.security.client.password1
oracle.security.client.username1
[oracle@localhost berxfree]$ mkstore -wrl . -viewEntry oracle.security.client.connect_string1
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

oracle.security.client.connect_string1 = //127.0.0.1/freepdb1
[oracle@localhost berxfree]$ mkstore -wrl . -viewEntry oracle.security.client.password1
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

oracle.security.client.password1 = berx

It should be obvious by now that the wallet is an auto-login (SSO) wallet - so no wallet password is needed to read it (and the password stored in my credentials isn't very secure either).

Btw: if you omit the -savepwd parameter, there will still be a cwallet.sso - but it's simply empty. All the other connection parameters are already in dbtools.properties anyway.


Of course, sometimes I have to use Windows. There is nothing like ~/.dbtools there; the directory is more visible at %HOMEDRIVE%%HOMEPATH%\DBTools. This is especially valuable as I can copy the berxfree subdirectory from my Linux sandbox to the Windows laptop into %HOMEDRIVE%%HOMEPATH%\DBTools\connections and just use it - as simple as that.
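A minimal sketch of such a copy, run from the Windows side (assuming an OpenSSH client is available there; linux-sandbox is a hypothetical host name for my Linux box):

REM copy the stored connection from the Linux sandbox into the Windows SQLcl connection store
scp -r oracle@linux-sandbox:.dbtools/connections/berxfree "%HOMEDRIVE%%HOMEPATH%\DBTools\connections"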
 
That's nearly all there is to share here. Just in case you want to remove a stored connection: delete its directory and all is gone!
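For example, on the Linux sandbox above:

# remove the stored connection "berxfree" - this deletes its dbtools.properties and cwallet.sso
rm -rf ~/.dbtools/connections/berxfree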

2023-07-13

ASM directories size and age

Automatic Storage Management (ASM) is a very powerful method of managing Oracle's datafiles (and other files such as spfiles, controlfiles, archivelogs, flashback logs, ...).
Unfortunately (or for good reasons) Oracle never made ASM diskgroups directly mountable in an OS; instead, the tool asmcmd is provided, which can be used to navigate the structure and manipulate files.
Normally there is little need to fiddle with files there, as all files are Oracle-managed by default.
Still, sometimes I have to do a cleanup. Most of the time someone suspects there are leftovers which consume significant space but are not required anymore. Unfortunately, asmcmd does not help me much in finding directories which contain files of significant size but at the same time were not touched for some time. So I created a little report to help me.
The report has about 70 lines, so I put it into my git as ASM_agg_dirs.sql. It contains 3 sub-selects which do the work (a simplified sketch follows the list below):
all_directories to get the path of the directories, 
byte_to_every_dir to get the size and dates for a sub-path down from the leaves and 
dir_sizes_dates to aggregate the byte_to_every_dir.
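The actual script is in git; to show how the pieces play together, here is a stripped-down sketch of the approach (not the script itself - different names, no formatting, and it has to run in the ASM instance, as V$ASM_FILE shows no rows in a database instance):

-- stripped-down sketch only, NOT the full ASM_agg_dirs.sql:
-- build the full path of every alias with a hierarchical query on V$ASM_ALIAS,
-- attach size and modification date from V$ASM_FILE to the file aliases,
-- then aggregate everything below each directory.
with alias_tree as (
  select '+' || g.name || sys_connect_by_path(a.name, '/') as full_path,
         a.group_number, a.file_number, a.file_incarnation,
         a.alias_directory, level as lvl
    from v$asm_alias a, v$asm_diskgroup g
   where g.group_number = a.group_number
   start with a.parent_index = a.group_number * power(2, 24)  -- entries directly under the diskgroup root
  connect by prior a.reference_index = a.parent_index
),
file_bytes as (
  select t.full_path, f.bytes, f.modification_date
    from alias_tree t
    join v$asm_file f
      on f.group_number = t.group_number
     and f.file_number  = t.file_number
     and f.incarnation  = t.file_incarnation
   where t.alias_directory = 'N'
)
select d.full_path                              as full_alias_path,
       sum(f.bytes)                             as sum_bytes,
       round(sum(f.bytes) / power(1024, 3), 2)  as gb,
       d.lvl,
       min(f.modification_date)                 as earliest_modify,
       max(f.modification_date)                 as latest_modify
  from alias_tree d
  left join file_bytes f
    on f.full_path like d.full_path || '/%'
 where d.alias_directory = 'Y'
 group by d.full_path, d.lvl
 order by d.full_path;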

Altogether it shows, for every directory, the size of all the files it contains, including those in sub-directories. For the age of the files, the earliest and latest modification dates are aggregated and propagated up. The result can look something like this:

FULL_ALIAS_PATH                                                 SUM_BYTES        GB LVL EARLIEST_MODIFY_ LATEST_MODIFY_DA
------------------------------------------------------------- ----------- --------- --- ---------------- ----------------
+DATA_DG/ASM                                                          6656       .00   1 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/ASM/PASSWORD                                                 6656       .00   2 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/DB_UNKNOWN                                                  36864       .00   1 2022-06-08 09:00 2022-06-08 09:00
+DATA_DG/DB_UNKNOWN/PASSWORD                                         36864       .00   2 2022-06-08 09:00 2022-06-08 09:00
+DATA_DG/my_cluster                                              354004480       .33   1 2020-12-10 16:00 2023-07-13 11:00
+DATA_DG/my_cluster/ASMPARAMETERFILE                                  3584       .00   2 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/my_cluster/OCRBACKUP                                    188026880       .18   2 2021-02-23 18:00 2023-07-13 11:00
+DATA_DG/my_cluster/OCRFILE                                      165974016       .15   2 2023-06-11 11:00 2023-06-11 11:00
+DATA_DG/DB_UNIQ_1                                            610545300480    568.61   1 2022-04-13 08:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60           492219031552    458.41   2 2023-06-11 10:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60/DATAFILE  491145281536    457.41   3 2023-07-04 21:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60/TEMPFILE    1073750016      1.00   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69             7231012864      6.73   2 2023-06-11 10:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69/DATAFILE    6182428672      5.76   3 2023-07-04 22:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69/TEMPFILE    1048584192       .98   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_1/CONTROLFILE                                   676773888       .63   2 2023-06-11 10:00 2023-07-12 23:00
+DATA_DG/DB_UNIQ_1/DATAFILE                                    31809601536     29.63   2 2023-07-04 22:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/DATAGUARDCONFIG                                   32768       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/DG                                                32768       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/ONLINELOG                                   77309429760     72.00   2 2022-12-10 08:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/PARAMETERFILE                                     35840       .00   2 2023-07-13 12:00 2023-07-13 12:00
+DATA_DG/DB_UNIQ_1/PASSWORD                                           5120       .00   2 2022-04-13 08:00 2022-04-13 08:00
+DATA_DG/DB_UNIQ_1/pdbseed                                                             2
+DATA_DG/DB_UNIQ_1/pdbseed/DATAFILE                                                    3
+DATA_DG/DB_UNIQ_1/pdb1                                                                2
+DATA_DG/DB_UNIQ_1/pdb1/DATAFILE                                                       3
+DATA_DG/DB_UNIQ_1/TEMPFILE                                     1073750016      1.00   2 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2                                            780968243712    727.33   1 2021-09-21 15:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/CONTROLFILE                                   627933184       .58   2 2023-06-11 10:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165             8196775936      7.63   2 2021-09-21 15:00 2023-07-04 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/DATAFILE    8069890048      7.52   3 2021-09-21 15:00 2023-07-04 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/TEMPFILE     126885888       .12   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9           705188782080    656.76   2 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/DATAFILE  686935162880    639.76   3 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/TEMPFILE   18253619200     17.00   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2/DATAFILE                                    55903256576     52.06   2 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/DATAGUARDCONFIG                                   24576       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/DG                                                24576       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/ONLINELOG                                    9663694848      9.00   2 2022-12-10 10:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/PARAMETERFILE                                     27648       .00   2 2023-07-13 12:00 2023-07-13 12:00
+DATA_DG/DB_UNIQ_2/PASSWORD                                           5120       .00   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/pdbseed                                                             2
+DATA_DG/DB_UNIQ_2/pdbseed/DATAFILE                                                    3
+DATA_DG/DB_UNIQ_2/pdb1                                                                2
+DATA_DG/DB_UNIQ_2/pdb1/DATAFILE                                                       3
+DATA_DG/DB_UNIQ_2/TEMPFILE                                     1073750016      1.00   2 2023-06-11 10:00 2023-06-11 10:00

 
If you have a close look, there are some directories with an EARLIEST_MODIFY_DATE from 2021 or 2022 - so they might be good candidates to check and eliminate some files. To see how much data could possibly be released, a small modification can be made to the SQL:

In lines 51 & 52 of the script, the projection should be commented out and instead the block between lines 55 and 59 should be activated. There I only aggregate data for files last modified 90 or more days ago - conceptually along the lines of the variant shown below.
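Again, this is not the literal content of lines 51-59, just a sketch of the idea applied to the final SELECT of the sketch above (same CTEs alias_tree / file_bytes):

-- aggregate only files whose last modification is at least 90 days in the past
select d.full_path                                           as full_alias_path,
       sum(case when f.modification_date <= sysdate - 90
                then f.bytes end)                            as sum_bytes,
       round(sum(case when f.modification_date <= sysdate - 90
                      then f.bytes end) / power(1024, 3), 2) as gb,
       d.lvl,
       min(case when f.modification_date <= sysdate - 90
                then f.modification_date end)                as earliest_modify,
       max(case when f.modification_date <= sysdate - 90
                then f.modification_date end)                as latest_modify
  from alias_tree d
  left join file_bytes f
    on f.full_path like d.full_path || '/%'
 where d.alias_directory = 'Y'
 group by d.full_path, d.lvl
 order by d.full_path;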
The result provides a much clearer picture of possible savings:


+DATA_DG/DB_UNIQ_2                                             408085720576    380.06   1 2021-09-21 15:00 2022-12-10 10:00
+DATA_DG/DB_UNIQ_2/CONTROLFILE                                                          2
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165              1887461376      1.76   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/DATAFILE     1887461376      1.76   3 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/TEMPFILE                            3
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9            378750590976    352.74   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/DATAFILE   378750590976    352.74   3 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/TEMPFILE                            3
+DATA_DG/DB_UNIQ_2/DATAFILE                                     25300172800     23.56   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/DATAGUARDCONFIG                                                      2
+DATA_DG/DB_UNIQ_2/DG                                                                   2
+DATA_DG/DB_UNIQ_2/ONLINELOG                                     2147487744      2.00   2 2022-12-10 10:00 2022-12-10 10:00
+DATA_DG/DB_UNIQ_2/PARAMETERFILE                                                        2
+DATA_DG/DB_UNIQ_2/PASSWORD                                            5120       .00   2 2021-09-21 15:00 2021-09-21 15:00
Somewhere in the PDB with GUID C7CAF2050A22AC66E0532601210A9FF9 about 352 GB of "old" files may be hidden. 

Checking the files, the situation is obvious:

 ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    ABC_DATA.2828.1083856881
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    ABC_DATA.2850.1083857223
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    XYZ.2829.1083856889
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    XYZ.2851.1083857229
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    DEF.2827.1083856881
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    DEF.2849.1083857221
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    GHI.2826.1083856879
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    GHI.2848.1083857221
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    SYSAUX.2819.1083856867
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    SYSAUX.2841.1083857211
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    SYSTEM.2818.1083856865
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    SYSTEM.2840.1083857209
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDOTBS1.2820.1083856871
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDOTBS1.2842.1083857213
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_4.2821.1083856873
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_4.2843.1083857215
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_5.2822.1083856873
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_5.2844.1132579919
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_6.2823.1083856875
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_6.2845.1132579917
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UVW.2824.1083856877
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UVW.2846.1083857217

It seems that in September 2021 all the files were somehow copied into ASM, but the step was done twice for whatever reason. Unfortunately the old files were never cleaned up, but now I can release some space in the diskgroup (of course, first crosscheck with the CDB whether the datafiles are really not needed anymore).
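For that crosscheck, something along these lines can be run in the CDB (a quick sanity check only, not a full procedure; keep in mind the CDB_% views only show containers that are open):

-- is one of the "old" candidate files (name taken from the ls -l output above) still referenced?
select con_id, tablespace_name, file_name
  from cdb_data_files
 where file_name like '%SYSAUX.2819.1083856867'
union all
select con_id, tablespace_name, file_name
  from cdb_temp_files
 where file_name like '%SYSAUX.2819.1083856867';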




2023-07-09

Small insights into how the SQL Translation Framework does its work

Oracle's SQL Translation Framework (STF) is designed to simplify some aspects of migrating an application that was written for another relational database to Oracle. The idea is to translate each SQL statement which does not conform to Oracle's SQL dialect and replace it with a corresponding Oracle statement.

Besides this main feature, it can also be used to "replace" a badly written SQL statement with one that generates the same results but with better performance. But these activities can only be done within the limitations of the STF.

One of these limitations is the restriction to SQL statements which come directly from the application. It should be kind of obvious that SQL from within PL/SQL must be correct Oracle SQL by definition, so there is no reason to put those statements through the STF and waste CPU cycles. This is somewhat explained in this Ask TOM article:

I would *suspect* the reason is similar to what I've said above. Anything being run in PLSQL must *by definition* be valid Oracle SQL, because it would not have compiled otherwise. So my hypothesis is that the translation facility is not even invoked in such cases. I'm seeking a confirmation for this.

But still, it might be interesting to observe the STF in its activities. My first choice is to check the "new" (since 11g) kernel diagnostic and tracing infrastructure. Unfortunately, there is nothing mentioned for the STF.

The next idea is to check the Oracle optimizer. As it's all about SQL statements, that might be a good fit:

My simple translation is:

  exec dbms_sql_translator.register_sql_translation('FOO','select 1','select 2');
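The profile itself has to exist before a translation can be registered in it; a minimal setup could look like this (it needs the CREATE SQL TRANSLATION PROFILE privilege):

-- prerequisite: create the (still empty) translation profile
exec dbms_sql_translator.create_profile('FOO')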

The test is quite simple:
select 1; 

alter system flush shared_pool;
alter session set sql_translation_profile = sys.FOO; 

alter session set tracefile_identifier='TEST1'; 

alter session set events = '10601 trace name context forever, level 32';
alter session set events '10053 trace name context forever'; 
select 1;
alter session set events '10053 trace name context off'; 

The result, as expected:
         1
----------
         1

SQL> SQL>
System altered.

SQL>
Session altered.

SQL> SQL>
Session altered.

SQL> SQL>
Session altered.

SQL>
Session altered.

SQL>
         2
----------
         2

SQL>

And the tracefile starts with:

Trace file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_15105_TEST1.trc
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
Build label:    RDBMS_23.2.0.0.0_LINUX.X64_230325
ORACLE_HOME:    /opt/oracle/product/23c/dbhomeFree
System name:    Linux
Node name:      localhost.localdomain
Release:        5.4.17-2136.312.3.4.el8uek.x86_64
Version:        #2 SMP Wed Oct 19 17:42:16 PDT 2022
Machine:        x86_64
VM name:        KVM CPUID feature flags: 0x01000089
CLID:   P
Instance name: FREE
Redo thread mounted by this instance: 1
Oracle process number: 70
Unix process pid: 15105, NID: 4026531836, image: oracle@localhost.localdomain


*** 2023-07-09T07:38:57.883180+00:00 (FREEPDB1(3))
*** SESSION ID:(45.5733) 2023-07-09T07:38:57.883205+00:00
*** CLIENT ID:() 2023-07-09T07:38:57.883215+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T07:38:57.883223+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T07:38:57.883231+00:00
*** ACTION NAME:() 2023-07-09T07:38:57.883239+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T07:38:57.883247+00:00
*** CONTAINER ID:(3) 2023-07-09T07:38:57.883255+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T07:38:57.883263+00:00
*** CONNECTION ID:(AAn1ABgrOv/gZQAAAAAAAQ==) 2023-07-09T07:38:57.883273+00:00

Registered qb: SEL$1 0x7fa657242d38 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=147 hint_alias="DUAL"@"SEL$1"

SPM: Processing SQL: select 2
SPM: statement not found in SMB
SPM: capture of plan baseline is OFF
kkoctx(): choosing CBO since optimizer mode set to all/first_rows
...
...
I see only the new SQL: select 2 - but nothing regarding the STF.

The reason for this is simple - I should just RTFM:

It translates the SQL statements of a client program before they are processed by the Oracle Database SQL compiler.

OK, the SQL_Optimizer trace cannot show me anything about the STF.
But with just some more RTFM, there is a trace flag in the STF:

      
exec dbms_sql_translator.set_attribute('FOO', dbms_sql_translator.attr_trace_translation, dbms_sql_translator.attr_value_true);

With this set, a new attempt to trace the STF works fine now: 

 
...
*** 2023-07-09T07:59:10.127763+00:00 (FREEPDB1(3))
*** SESSION ID:(279.57941) 2023-07-09T07:59:10.127792+00:00
*** CLIENT ID:() 2023-07-09T07:59:10.127802+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T07:59:10.127810+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T07:59:10.127818+00:00
*** ACTION NAME:() 2023-07-09T07:59:10.127826+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T07:59:10.127833+00:00
*** CONTAINER ID:(3) 2023-07-09T07:59:10.127841+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T07:59:10.127850+00:00
*** CONNECTION ID:(AAo8a2DBVZDgZQAAAAAAAQ==) 2023-07-09T07:59:10.127860+00:00

SQL Translation Profile "SYS"."FOO": original SQL text "select 1"
SQL Translation Profile "SYS"."FOO": translated SQL text "select 2"
And when checking with the SQL_Optimizer trace enabled in addition, I see the SQL there:
 
----- Current SQL Statement for this session (sql_id=3q69sdmxzv0t6) -----
select 2
----- Parser State -----
With all this knowledge, I'm tempted to do a test case with SQL within PL/SQL and see how the STF and the optimizer cooperate:
 
set serveroutput on 
alter system flush shared_pool;
alter session set sql_translation_profile = sys.FOO; 

alter session set tracefile_identifier='TEST5'; 

alter session set events = '10601 trace name context forever, level 32';
alter session set events '10053 trace name context forever'; 

declare 
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
/
 
SQL>
System altered.

SQL>
Session altered.

SQL> SQL>
Session altered.

SQL> SQL>
Session altered.

SQL>
Session altered.

  nn => 1

PL/SQL procedure successfully completed.
As expected, there was no STF translation for the select within my PL/SQL block, and the trace confirms it:
 
*** 2023-07-09T08:13:04.202341+00:00 (FREEPDB1(3))
*** SESSION ID:(285.30162) 2023-07-09T08:13:04.202367+00:00
*** CLIENT ID:() 2023-07-09T08:13:04.202377+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T08:13:04.202386+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T08:13:04.202394+00:00
*** ACTION NAME:() 2023-07-09T08:13:04.202406+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T08:13:04.202425+00:00
*** CONTAINER ID:(3) 2023-07-09T08:13:04.202443+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T08:13:04.202458+00:00
*** CONNECTION ID:(AApvQtb1aJngZQAAAAAAAQ==) 2023-07-09T08:13:04.202474+00:00

SQL Translation Profile "SYS"."FOO": original SQL text "BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
SQL Translation Profile "SYS"."FOO": original SQL text "alter session set events '10053 trace name context forever'"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
SQL Translation Profile "SYS"."FOO": original SQL text "BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
AP: qesdpDPCSetupNextExec(enter): SQL_ID=9babjv8yq8ru3
AP: qesdpDPCSetupNextExec: Not an adaptive plan
AP: qesdpInitExecMutable(enter): SQL_ID=9babjv8yq8ru3
AP: qesdpInitExecMut (exit)
AP: qesdpSetupMutable(enter): SQL_ID=9babjv8yq8ru3
SQL Translation Profile "SYS"."FOO": original SQL text "declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
Registered qb: MISC$1 0x7f059f8e4d38 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
SPM: Processing SQL: declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
SPM: disallowed: statement type not supported
kkoctx(): choosing CBO since optimizer mode set to all/first_rows

**************************
Automatic degree of parallelism (AUTODOP)
**************************
Automatic degree of parallelism is disabled: instance state restriction (parallelism not supported).

PM: Considering predicate move-around in query block MISC$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=002rtvjv7wxc9) -----
declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
----- Parser State -----

...
...
...
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=147 hint_alias="DUAL"@"SEL$1"

SPM: Processing SQL: select 1
SPM: statement not found in SMB
SPM: capture of plan baseline is OFF
kkoctx(): choosing CBO since optimizer mode set to all/first_rows

**************************
Automatic degree of parallelism (AUTODOP)
**************************
Automatic degree of parallelism is disabled: instance state restriction (parallelism not supported).

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=0y30pf6xwqt3x) -----
select 1
----- Parser State -----
...

Of course, there was a chance that my SQL within the PL/SQL block is not literally identical to the one defined in my STF profile, but that's not the case, as a quick check in DBA_SQL_TRANSLATIONS shows:
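Something along these lines does the check (output not reproduced here; assuming the documented columns of DBA_SQL_TRANSLATIONS):

-- the registered translations of profile FOO, including the SQL_ID of the original statement
select owner, profile_name, sql_id, sql_text, translated_text
  from dba_sql_translations
 where profile_name = 'FOO';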


The SQL_ID for the select 1 to be translated is 0y30pf6xwqt3x - which is also visible in the last SQL_Optimizer trace.


This should provide some insights into how the STF works - and where it just can't be used at all.

I also did some tests with the parameters 
_STFForceTranslateOracleSQL     if TRUE translation profile will translate Oracle SQL statements
_STFTranslateDynamicSQL         if TRUE translation profile will translate dynamic SQL statements
(mentioned here) - but could not make any sense of them. Maybe some time I will find another angle of approach to bring them into the whole picture.


It might be worth mentioning that there is another framework which can replace SQL statements: Advanced Query Rewrite - but it has its own limitations.
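Just to give an impression of what that looks like, a minimal sketch (hypothetical equivalence name; it needs execute privilege on DBMS_ADVANCED_REWRITE, and at run time query_rewrite_integrity must be trusted or stale_tolerated for such a non-validated equivalence to be used):

begin
  -- declare that 'select 1 from dual' may be rewritten to 'select 2 from dual'
  -- (validate => false, because the two statements obviously do not return the same result)
  sys.dbms_advanced_rewrite.declare_rewrite_equivalence(
    name             => 'DEMO_EQUIV',        -- hypothetical name, just for illustration
    source_stmt      => 'select 1 from dual',
    destination_stmt => 'select 2 from dual',
    validate         => false,
    rewrite_mode     => 'TEXT_MATCH');
end;
/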