2023-09-23

changes in SQL real time monitor report in 23c free

I like Oracle SQL real time monitor reports as they contain a huge amount of valuable information and can be viewed and transferred very simply.
Their content has increased over time - and as Oracle updates the JavaScript libraries that generate the nice view, sometimes the same old report reveals even more details when you open it again later! So it was kind of a downer when I saw the amount of content decreasing since 19c. Here are some screenshots of what changed:

19c:


21c:

Here the report is missing Activity and Metrics.

23c free: 

This report is missing Activity, Metrics and Outlines - compared to 19c.


That's quite sad! 
And this information is really missing from the HTML file - so we can't blame the JavaScript library for it.

The data is stored as an XML structure in the HTML report. But to make it human-readable, we need to decode it first - at least I can't read this directly:



  <script id="fxtmodel" type="text/xml">
   <!--FXTMODEL-->
   <report db_version="19.0.0.0.0" elapsed_time="3.23" cpu_time="3.21" cpu_cores="4" hyperthread="N" con_id="3" con_name="PDB1" timezone_offset="0" packs="2" service_type="0" encode="base64" compress="zlib">
    <report_id><![CDATA[/orarep/sqlmonitor/main%3fsql_exec_id%3d16777216%26sql_exec_start%3d09%3a23%3a2023%2012%3a42%3a10%26sql_id%3d2u4jc5hhwhc7n]]></report_id>
                eAHtffl348aR8O/+K/AxTnZmP9siDl6OxLc6OI4cHWMdSbz79vFRFCTRpkiZBOfI
X79VfaAPNIAGCFIaDbz7MiLQ6Kvu6qrq3eUf0+HjfDaJ5ovhInyaLyLnQ7hYTuaz
vUbwQ7PhLD8vb0dRuNdo9nY8f8drer7jej8G3o9ut9H/xnF26WfDp9Fi9BhG8DE+
hefY9eS2762C38ath4ePD+PObHeHPRVNwk/hGNu57U6n47lt2oQ/1doto9Ei6ien
...

Luckily it declares what it is doing: encode="base64" compress="zlib". And Franck Pachot already came up with a way to decode it.

I slightly adapted it to my laptop's needs:

xmllint --xpath '/html/body/script/report' 19c.html  | grep -v report | base64 -id | \
python3 -c "import zlib,sys;sys.stdout.buffer.write(zlib.decompress(sys.stdin.buffer.read()))"  > 19c.xml
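For environments without xmllint, the same decoding can be sketched in pure Python. This is a minimal sketch, not part of the original pipeline: the function name decode_fxtmodel is mine, and it simply assumes the payload sits inside the <script id="fxtmodel"> element as shown above, with the base64 lines being the only lines that are not XML markup.

```python
import base64
import re
import sys
import zlib

def decode_fxtmodel(html: str) -> bytes:
    """Extract and inflate the base64/zlib payload of a SQL Monitor HTML report."""
    # Isolate the <script id="fxtmodel"> element that carries the payload.
    script = re.search(r'<script id="fxtmodel"[^>]*>(.*?)</script>',
                       html, re.S).group(1)
    # Keep only the bare base64 lines: everything that is not XML markup.
    b64 = "".join(line.strip() for line in script.splitlines()
                  if line.strip() and not line.lstrip().startswith("<"))
    return zlib.decompress(base64.b64decode(b64))

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        sys.stdout.buffer.write(decode_fxtmodel(f.read()))
```

Run as e.g. `python3 decode_fxtmodel.py 19c.html > 19c.xml`.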


The missing sections in 21c and 23c Free are:


Activity:

   <activity_sampled>
    <activity class="Cpu">1</activity>
  </activity_sampled>
  <activity_detail start_time="09/23/2023 12:42:10" end_time="09/23/2023 12:42:19" first_sample_time="09/23/2023 12:42:10" last_sample_time="09/23/2023 12:42:10" duration="1" sample_interval="1" bucket_interval="1" bucket_count="10" bucket_duration="10" cpu_cores="4" total_cpu_cores="4" hyperthread="N">
    <bucket number="1">
      <activity class="Cpu" line="77">1</activity>
    </bucket>
  </activity_detail>


Metrics:

  <stattype name="metrics" cpu_cores="4" hyperthread="N">
    <stat_info>
      <stat id="1" name="nb_cpu"/>
      <stat id="2" name="nb_sess"/>
      <stat id="3" name="reads" unit="per_sec"/>
      <stat id="4" name="writes" unit="per_sec"/>
      <stat id="5" name="read_kb" unit="bytes_per_sec" factor="1024"/>
      <stat id="6" name="write_kb" unit="bytes_per_sec" factor="1024"/>
      <stat id="7" name="interc_kb" unit="bytes_per_sec" factor="1024"/>
      <stat id="8" name="cache_kb" unit="bytes_per_sec" factor="1024"/>
      <stat id="9" name="pga_kb" unit="bytes" factor="1024"/>
      <stat id="10" name="tmp_kb" unit="bytes" factor="1024"/>
    </stat_info>
    <buckets bucket_interval="1" bucket_count="10" start_time="09/23/2023 12:42:10" end_time="09/23/2023 12:42:19" duration="10">
      <bucket bucket_id="1">
        <stat id="1" value=".26"/>
        <stat id="3" value="494"/>
        <stat id="5" value="4008"/>
        <stat id="7" value="4008"/>
        <stat id="8" value="154224"/>
        <stat id="9" value="12586"/>
      </bucket>
    </buckets>
  </stattype>

Outlines:

        <outline_data>
          <hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint>
          <hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('19.1.0')]]></hint>
          <hint><![CDATA[DB_VERSION('19.1.0')]]></hint>
          <hint><![CDATA[ALL_ROWS]]></hint>
          <hint><![CDATA[OUTLINE_LEAF(@"SEL$3")]]></hint>
          <hint><![CDATA[OUTLINE_LEAF(@"SEL$4")]]></hint>
          <hint><![CDATA[OUTLINE_LEAF(@"SEL$11")]]></hint>
...
  



It would be a quite depressing post if the story ended here.
But there is hope:

In 23c Free, Oracle introduced new parameters for _fix_control.
This one is really helpful:

35069142   1   QKSFM_CBO_35069142   use ADVANCED format for plan display in SQL Monitor

With a simple 

alter session set "_fix_control"='35069142:1';

my report shows much more content now:


Activity, Metrics and Outlines are back, and new content is visible as well: SQL Analysis.

It consists of 2 sections:

SQL Analysis

Line ID  Query Block                  Message
      1  SEL$43                       The query block has 2 cartesian products which may be expensive. Consider adding join conditions or removing the disconnected tables or views.
     13  SEL$E029B2FF / "O"@"SEL$5"   The following columns have predicates which preclude their use as keys in index range scan. Consider rewriting the predicates: "NAME", "TYPE#"
     45  SEL$D0BBB93D / "O"@"SEL$19"  The following columns have predicates which preclude their use as keys in index range scan. Consider rewriting the predicates: "NAME", "TYPE#"
     77  SEL$B2BBC534 / "O"@"SEL$33"  The following columns have predicates which preclude their use as keys in index range scan. Consider rewriting the predicates: "NAME", "TYPE#"


which comes from the XML section 
      <sqla>
        <sec id="1">
          <n><![CDATA[SEL$43]]></n>
          <msg>
            <m li="1"><![CDATA[The query block has 2 cartesian products which may be ]]></m>
            <m li="2"><![CDATA[expensive. Consider adding join conditions or removing the ]]></m>
            <m li="3"><![CDATA[disconnected tables or views.]]></m>
          </msg>
        </sec>
        <sec id="13">
          <n><![CDATA[SEL$E029B2FF / "O"@"SEL$5"]]></n>
          <msg>
            <m li="1"><![CDATA[The following columns have predicates which preclude their ]]></m>
            <m li="2"><![CDATA[use as keys in index range scan. Consider rewriting the ]]></m>
            <m li="3"><![CDATA[predicates.]]></m>
            <m li="4" list="Y"><![CDATA["NAME"]]></m>
            <m li="5" list="Y"><![CDATA["TYPE#"]]></m>
          </msg>
        </sec>
        <sec id="45">
          <n><![CDATA[SEL$D0BBB93D / "O"@"SEL$19"]]></n>
          <msg>
            <m li="1"><![CDATA[The following columns have predicates which preclude their ]]></m>
            <m li="2"><![CDATA[use as keys in index range scan. Consider rewriting the ]]></m>
            <m li="3"><![CDATA[predicates.]]></m>
            <m li="4" list="Y"><![CDATA["NAME"]]></m>
            <m li="5" list="Y"><![CDATA["TYPE#"]]></m>
          </msg>
        </sec>
        <sec id="77">
          <n><![CDATA[SEL$B2BBC534 / "O"@"SEL$33"]]></n>
          <msg>
            <m li="1"><![CDATA[The following columns have predicates which preclude their ]]></m>
            <m li="2"><![CDATA[use as keys in index range scan. Consider rewriting the ]]></m>
            <m li="3"><![CDATA[predicates.]]></m>
            <m li="4" list="Y"><![CDATA["NAME"]]></m>
            <m li="5" list="Y"><![CDATA["TYPE#"]]></m>
          </msg>
        </sec>
      </sqla>
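To read such findings without opening the report in a browser, the <sqla> block can be parsed with Python's standard library. A sketch under the structure shown above; sqla_findings is my own helper name, not an Oracle API:

```python
import xml.etree.ElementTree as ET

def sqla_findings(sqla_xml: str):
    """Turn each <sec> of a <sqla> block into a readable finding:
    (plan line id, query block, message text, list of affected columns)."""
    findings = []
    for sec in ET.fromstring(sqla_xml).iter("sec"):
        msg = sec.find("msg")
        # <m> elements carry the message split into display lines;
        # list="Y" marks column names belonging to a trailing list.
        text = "".join(m.text for m in msg.findall("m")
                       if m.get("list") != "Y").strip()
        cols = [m.text for m in msg.findall("m") if m.get("list") == "Y"]
        findings.append((sec.get("id"), sec.findtext("n"), text, cols))
    return findings
```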



Hint Usage Issues

Line ID  Query Block  Hint         Hint Status
      2  SEL$1        materialize  Applied Successfully
     34  SEL$15       materialize  Applied Successfully
     66  SEL$29       materialize  Applied Successfully

with the XML  
      <hreport>
        <leg>
          <t>3</t>
        </leg>
        <sec id="2">
          <n><![CDATA[SEL$1]]></n>
          <h><![CDATA[materialize]]></h>
        </sec>
        <sec id="34">
          <n><![CDATA[SEL$15]]></n>
          <h><![CDATA[materialize]]></h>
        </sec>
        <sec id="66">
          <n><![CDATA[SEL$29]]></n>
          <h><![CDATA[materialize]]></h>
        </sec>
      </hreport>
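Similarly, the <hreport> block can be flattened into the Hint Usage rows. Again a sketch against the structure shown above, with hint_usage being my own helper name:

```python
import xml.etree.ElementTree as ET

def hint_usage(hreport_xml: str):
    """(plan line id, query block, hint) per <sec> of an <hreport> block."""
    root = ET.fromstring(hreport_xml)
    return [(sec.get("id"), sec.findtext("n"), sec.findtext("h"))
            for sec in root.iter("sec")]
```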

We will see how fix control 35069142 evolves over time.




2023-09-10

ExaWatcher custom sampler

Oracle's ExaWatcher is a nice tool that samples some OS-related measurements and keeps them for a short period (normally some days) for later analysis. As its name indicates, it exists only on Oracle's Engineered Systems. A more general tool for non-Engineered Systems is OSWatcher - similar in many areas, but not in the content of this post.

ExaWatcher collects a lot of default information about the system. But sometimes (or somewhere) it would be nice to collect additional details. Of course I could always write my own script, schedule it in cron and then handle the results myself. But I am lucky: ExaWatcher provides an interface to extend its collection - custom samplers can be implemented easily.

Speaking of easily: the documentation is correct, but somewhat sparse (or I'm just not capable of reading it, which is also likely).

Based on my previous post I created a script ProcSwap.sh, stored in /opt/oracle.ExaWatcher/.

To use it, the documentation provides the parameter --customcmd.

-u | --customcmd 'sample_name ;; "custom_command;... " '



To include a custom collection module in the current group.

Example: --customcmd 'Lsl; "/bin/ls -l"'


My first attempt failed with a quite useless error message: 
  
/opt/oracle.ExaWatcher/ExaWatcher.sh --group --start "now" --end "never" --interval 5 --count 360 \
    --command_mode "SELECTED"   --customcmd 'ProcSwap;;"ProcSwap.sh"'

Can't exec "ProcSwap.sh": No such file or directory at /opt/oracle.ExaWatcher//ExaWatcherParserElements.pm line 2086, <gen3> line 24.
[1693909750][2023-09-05 12:29:10][WARNING][/opt/oracle.ExaWatcher/ParserExaWatcher.pl][ExaWatcherParserElements::format_custom_CMDs][] The custom command "ProcSwap.sh" is not supported by your system. It will be skipped.
It's fine to know it cannot be executed, but no reason is given.
At least I know the file and location, so after changing the code to be slightly more verbose the reason was obvious: ExaWatcher needs the scripts/binaries either in $PATH (which does not contain /opt/oracle.ExaWatcher) or referenced with their full path. Quite obvious once analysed.
 
In the next iteration, which now accepted my ProcSwap sampler, I created a sample config file, bx. By analysing this sample file, I wanted to learn how to include ProcSwap in the default ExaWatcher.conf permanently:

/opt/oracle.ExaWatcher/ExaWatcher.sh --group --start "now" --end "never" --interval 5 --count 360 \
    --command_mode "SELECTED"   --customcmd 'ProcSwap;;"/opt/oracle.ExaWatcher/ProcSwap.sh"' --createconf bx
The result looks promising:

...
[1693909952][2023-09-05 12:32:32][INFO][/opt/oracle.ExaWatcher/ParserExaWatcher.pl][ExaWatcherParserElements::format_custom_CMDs][] CCMDInfo: ProcSwap "/opt/oracle.ExaWatcher/ProcSwap.sh". - scalar: 2

[1693909952][2023-09-05 12:32:38][INFO][/opt/oracle.ExaWatcher/ParserExaWatcher.pl][ExaWatcherParserElements::format_custom_CMDs][] ExaWatcher will automatically generate a name for custom commmand "/opt/oracle.ExaWatcher/ProcSwap.sh".
The previous custom command name "ProcSwap" will be replaced by a new name "CustomCMD0_ProcSwap".  
The config file also contains the expected lines:

...
<Group>
<Start> now
<End> never
<Interval:s> 5
<Count> 360
<CommandMode> SELECTED
<CustomCMD> CustomCMD0_ProcSwap;;"/opt/oracle.ExaWatcher/ProcSwap.sh"

<RunEnd>

The last steps are easy: add the line

<CustomCMD> ProcSwap;;"/opt/oracle.ExaWatcher/ProcSwap.sh"

to ExaWatcher.conf and restart ExaWatcher. After ProcSwap.sh has been executed once, there is a new directory /opt/oracle.ExaWatcher/archive/CustomCMD.ExaWatcher/CustomCMD0_ProcSwap containing a file 2023_09_10_10_59_59_CustomCMD0_ProcSwap_<hostname>.dat. The header of this file looks promising, as it's identical to all the other data collections by ExaWatcher:

############################################################
# Starting Time:        09/10/2023 10:59:59
# Sample Interval(s):   5
# Archive Count:        360
# Collection Module:    CustomCMD0_ProcSwap
# Collection Command:   /opt/oracle.ExaWatcher/ProcSwap.sh
# Misc Info: ############################################################
zzz <09/10/2023 10:59:59> Count:0
         112 kB       1       root systemd
          12 kB     815       root lvmetad  

As always: once the method is clear, it's quite easy!

2023-09-05

Identifying programs which get swapped

Swapping is seen as a nasty effect on modern systems. When a system's performance goes down, there is a bunch of scenarios in which significant swapping is observed. Unfortunately the root cause is hard to identify: the processes which get part of their memory paged out are not the ones creating the memory pressure. Also, the memory demand can be quite short-lived and therefore hard to observe in sample-based monitoring.

Still, sometimes I'd like to see at least which processes are affected by paged-out memory. If they don't need to access this memory, for whatever reason, it isn't a problem at all. But of course, we mostly have to look at those systems which show some kind of issue.

Unfortunately I did not find any tool which shows me a useful number for memory paged out per process, so I had to come up with my own.
My simple script checks every process's /proc/<pid>/status "file". There the line named VmSwap should help out:

VmSwap Swapped-out virtual memory size by anonymous
       private pages; shmem swap usage is not included
       (since Linux 2.6.34).  This value is inaccurate;
       see /proc/pid/statm above.

The warning doesn't sound promising, but the explanation given for statm is somewhat acceptable (for me):

Some of these values are inaccurate because of a kernel- internal scalability optimization. If accurate values are required, use /proc/pid/smaps or /proc/pid/smaps_rollup instead, which are much slower but provide accurate, detailed information.

For my purpose that's good enough! I need an overview and don't want to put even more load on the system by parsing all smaps entries.

So my script is quite short:

(echo "        SWAP        PID       USER PROGRAM"
echo
ps -ewo pid,user,comm \
  | while read -r pid user cmd ; do
  VmSwap=$(awk ' /VmSwap/ { print $2 } '  /proc/${pid}/status 2>/dev/null)
  if [ ${VmSwap:-0} -gt 0 ]; then 
    printf "%12d kB %7d %10s %s \n" "${VmSwap:-0}" "$pid" "$user" "$cmd"
  fi
done | sort -n
echo
echo "        SWAP        PID       USER PROGRAM"
echo )
and the result is simple, but useful:

        SWAP        PID       USER PROGRAM
 
       55640 kB   51304       root crsd.bin
       59068 kB  385709     oracle ora_lgwr_zzz1
       60672 kB   94092     oracle ora_lg04_zzz3
       97668 kB  396323       root java
      115136 kB  385731     oracle ora_lg00_zzz1
      119728 kB  385879     oracle ora_mmon_zzz1
      121484 kB  385841     oracle ora_lg04_zzz1
      185296 kB   14656       root ohasd.bin
      431344 kB   53850       grid java
      564416 kB   19273     oracle java

        SWAP        PID       USER PROGRAM
Again, those processes are not responsible for swapping; they are just victims. But I try to see the symptoms bright and clear, and this is one possibility.
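The same per-process overview can be sketched in Python, reading VmSwap straight from /proc/<pid>/status. Linux-only; parse_status and swapped_processes are my own helper names, and like the shell script it silently skips processes that vanish mid-scan:

```python
import os
import pwd

def parse_status(text: str):
    """Extract (comm, uid, VmSwap kB) from /proc/<pid>/status content.
    VmSwap is absent for kernel threads, so it defaults to 0."""
    fields = dict(line.split(":", 1) for line in text.splitlines() if ":" in line)
    name = fields["Name"].strip()
    uid = int(fields["Uid"].split()[0])          # real uid is the first value
    swap_kb = int(fields.get("VmSwap", "0 kB").split()[0])
    return name, uid, swap_kb

def swapped_processes():
    """(swap_kB, pid, user, comm) for every process with VmSwap > 0,
    smallest first - mirroring the sort -n of the shell version."""
    result = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                name, uid, swap_kb = parse_status(f.read())
        except OSError:
            continue  # process vanished while scanning
        if swap_kb > 0:
            result.append((swap_kb, int(pid), pwd.getpwuid(uid).pw_name, name))
    return sorted(result)
```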

2023-07-16

SQLcl stores connections - how and where?

Oracle's SQLcl has a lot of nice features. One is storing connections (connect string, username and - if you choose to - password). But, as usual, there is little information about how and where they are stored.

First I create a new connection in my brand new 23.2 SQLcl: 


sql -nolog


SQLcl: Release 23.2 Production on Sun Jul 16 17:53:37 2023

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

No Connection
SQL> connect berx/berx@//127.0.0.1/freepdb1
Connected.
SQL> conn -save berxfree -savepwd
Name: berxfree
Connect String: //127.0.0.1/freepdb1
User: berx
Password: ******
SQL> connmgr list
berxfree
SQL> connmgr show berxfree
Name: berxfree
Connect String: //127.0.0.1/freepdb1
User: berx
Password: ******
SQL>

I found the connection definition in a subdirectory of  ~/.dbtools/connections/

[oracle@localhost connections]$ pwd
/home/oracle/.dbtools/connections
[oracle@localhost connections]$ ls -la
total 0
drwx------. 3 oracle oinstall 22 Jul 16 17:54 .
drwx------. 3 oracle oinstall 25 Jul 15 16:34 ..
drwx------. 2 oracle oinstall 51 Jul 16 17:54 berxfree
[oracle@localhost connections]$
And in this directory I have 2 files: cwallet.sso and dbtools.properties.

First let's check the dbtools.properties:

#
#Sun Jul 16 17:54:23 UTC 2023
connectionString=//127.0.0.1/freepdb1
name=berxfree
type=ORACLE_DATABASE
userName=berx


The cwallet.sso also has some valuable content:

[oracle@localhost berxfree]$ mkstore -wrl . -list
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

Oracle Secret Store entries:
oracle.security.client.connect_string1
oracle.security.client.password1
oracle.security.client.username1
[oracle@localhost berxfree]$ mkstore -wrl . -viewEntry oracle.security.client.connect_string1
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

oracle.security.client.connect_string1 = //127.0.0.1/freepdb1
[oracle@localhost berxfree]$ mkstore -wrl . -viewEntry oracle.security.client.password1
Oracle Secret Store Tool Release 23.2.0.0.0 - Production
Version 23.2.0.0.0
Copyright (c) 2004, 2023, Oracle and/or its affiliates. All rights reserved.

oracle.security.client.password1 = berx

It should be obvious by now that the wallet is an auto-login wallet - so no wallet password is needed (and my stored credential's password isn't very secure either).

Btw: if you omit the -savepwd parameter, there will still be a cwallet.sso - but it's simply empty. All the other connection parameters are in dbtools.properties anyway.


Of course, sometimes I have to use Windows. There is nothing like ~/.dbtools there; instead the directory is %HOMEDRIVE%%HOMEPATH%\DBTools. This is especially valuable as I can copy the berxfree subdirectory from my Linux sandbox to the Windows laptop into %HOMEDRIVE%%HOMEPATH%\DBTools\connections and just use it - as simple as that.
 
That's nearly all to share here - just in case you want to remove a stored connection: delete its directory and all is gone!

2023-07-13

ASM directories size and age

Automatic Storage Management is a very powerful method of managing Oracle's datafiles (and other files such as spfiles, controlfiles, archivelogs, flashback logs, ...).
Unfortunately (or for good reasons) Oracle never made ASM diskgroups directly mountable into an OS - instead, the tool asmcmd is provided to navigate the structure and manipulate files.
Normally there is little need to tinker with files there, as all files are Oracle-managed by default.
Still, sometimes I have to do a cleanup. Most of the time someone suspects there are leftovers which consume significant space but are not required anymore. Unfortunately asmcmd does not help me much in finding directories which contain files of significant size but at the same time hold files which were not touched for some time. So I created a little report to help myself.
The report has about 70 lines, so I put it into my git as ASM_agg_dirs.sql. It contains 3 sub-selects which do the work:
all_directories to get the path of the directories, 
byte_to_every_dir to get the size and dates for a sub-path down from the leaves and 
dir_sizes_dates to aggregate the byte_to_every_dir.

Altogether it shows, for every directory, the size of all files it contains, including sub-directories. For the age of the files, the earliest and latest modify dates are aggregated and propagated. Its result can look something like this:

FULL_ALIAS_PATH                                                 SUM_BYTES        GB LVL EARLIEST_MODIFY_ LATEST_MODIFY_DA
------------------------------------------------------------- ----------- --------- --- ---------------- ----------------
+DATA_DG/ASM                                                          6656       .00   1 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/ASM/PASSWORD                                                 6656       .00   2 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/DB_UNKNOWN                                                  36864       .00   1 2022-06-08 09:00 2022-06-08 09:00
+DATA_DG/DB_UNKNOWN/PASSWORD                                         36864       .00   2 2022-06-08 09:00 2022-06-08 09:00
+DATA_DG/my_cluster                                              354004480       .33   1 2020-12-10 16:00 2023-07-13 11:00
+DATA_DG/my_cluster/ASMPARAMETERFILE                                  3584       .00   2 2020-12-10 16:00 2020-12-10 16:00
+DATA_DG/my_cluster/OCRBACKUP                                    188026880       .18   2 2021-02-23 18:00 2023-07-13 11:00
+DATA_DG/my_cluster/OCRFILE                                      165974016       .15   2 2023-06-11 11:00 2023-06-11 11:00
+DATA_DG/DB_UNIQ_1                                            610545300480    568.61   1 2022-04-13 08:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60           492219031552    458.41   2 2023-06-11 10:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60/DATAFILE  491145281536    457.41   3 2023-07-04 21:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/BC3B5B02E2AAAD7BE0532401210A1E60/TEMPFILE    1073750016      1.00   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69             7231012864      6.73   2 2023-06-11 10:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69/DATAFILE    6182428672      5.76   3 2023-07-04 22:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/B50214B4214037B1E053C40DD10A4E69/TEMPFILE    1048584192       .98   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_1/CONTROLFILE                                   676773888       .63   2 2023-06-11 10:00 2023-07-12 23:00
+DATA_DG/DB_UNIQ_1/DATAFILE                                    31809601536     29.63   2 2023-07-04 22:00 2023-07-04 22:00
+DATA_DG/DB_UNIQ_1/DATAGUARDCONFIG                                   32768       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/DG                                                32768       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/ONLINELOG                                   77309429760     72.00   2 2022-12-10 08:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_1/PARAMETERFILE                                     35840       .00   2 2023-07-13 12:00 2023-07-13 12:00
+DATA_DG/DB_UNIQ_1/PASSWORD                                           5120       .00   2 2022-04-13 08:00 2022-04-13 08:00
+DATA_DG/DB_UNIQ_1/pdbseed                                                             2
+DATA_DG/DB_UNIQ_1/pdbseed/DATAFILE                                                    3
+DATA_DG/DB_UNIQ_1/pdb1                                                                2
+DATA_DG/DB_UNIQ_1/pdb1/DATAFILE                                                       3
+DATA_DG/DB_UNIQ_1/TEMPFILE                                     1073750016      1.00   2 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2                                            780968243712    727.33   1 2021-09-21 15:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/CONTROLFILE                                   627933184       .58   2 2023-06-11 10:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165             8196775936      7.63   2 2021-09-21 15:00 2023-07-04 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/DATAFILE    8069890048      7.52   3 2021-09-21 15:00 2023-07-04 02:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/TEMPFILE     126885888       .12   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9           705188782080    656.76   2 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/DATAFILE  686935162880    639.76   3 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/TEMPFILE   18253619200     17.00   3 2023-06-11 10:00 2023-06-11 10:00
+DATA_DG/DB_UNIQ_2/DATAFILE                                    55903256576     52.06   2 2021-09-21 15:00 2023-07-13 02:00
+DATA_DG/DB_UNIQ_2/DATAGUARDCONFIG                                   24576       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/DG                                                24576       .00   2 2023-07-13 14:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/ONLINELOG                                    9663694848      9.00   2 2022-12-10 10:00 2023-07-13 14:00
+DATA_DG/DB_UNIQ_2/PARAMETERFILE                                     27648       .00   2 2023-07-13 12:00 2023-07-13 12:00
+DATA_DG/DB_UNIQ_2/PASSWORD                                           5120       .00   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/pdbseed                                                             2
+DATA_DG/DB_UNIQ_2/pdbseed/DATAFILE                                                    3
+DATA_DG/DB_UNIQ_2/pdb1                                                                2
+DATA_DG/DB_UNIQ_2/pdb1/DATAFILE                                                       3
+DATA_DG/DB_UNIQ_2/TEMPFILE                                     1073750016      1.00   2 2023-06-11 10:00 2023-06-11 10:00

 
If you have a close look, there are some directories with an EARLIEST_MODIFY_DATE from 2021 or 2022 - so they might be good candidates for a closer look to eliminate some files. To see how much space might be released, a small modification can be made to the SQL:

In lines 51 & 52 of the script the projection should be commented out and the block between lines 55 and 59 activated instead. There I only aggregate data for files last modified 90 days ago or more.
The result provides a much clearer picture of possible savings:


+DATA_DG/DB_UNIQ_2                                             408085720576    380.06   1 2021-09-21 15:00 2022-12-10 10:00
+DATA_DG/DB_UNIQ_2/CONTROLFILE                                                          2
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165              1887461376      1.76   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/DATAFILE     1887461376      1.76   3 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAD0D3231198B1E0532401210A7165/TEMPFILE                            3
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9            378750590976    352.74   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/DATAFILE   378750590976    352.74   3 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/C7CAF2050A22AC66E0532601210A9FF9/TEMPFILE                            3
+DATA_DG/DB_UNIQ_2/DATAFILE                                     25300172800     23.56   2 2021-09-21 15:00 2021-09-21 15:00
+DATA_DG/DB_UNIQ_2/DATAGUARDCONFIG                                                      2
+DATA_DG/DB_UNIQ_2/DG                                                                   2
+DATA_DG/DB_UNIQ_2/ONLINELOG                                     2147487744      2.00   2 2022-12-10 10:00 2022-12-10 10:00
+DATA_DG/DB_UNIQ_2/PARAMETERFILE                                                        2
+DATA_DG/DB_UNIQ_2/PASSWORD                                            5120       .00   2 2021-09-21 15:00 2021-09-21 15:00
Somewhere in the PDB with GUID C7CAF2050A22AC66E0532601210A9FF9, about 352 GB of "old" files may be hiding.

Checking the files, the situation is obvious:

 ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    ABC_DATA.2828.1083856881
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    ABC_DATA.2850.1083857223
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    XYZ.2829.1083856889
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    XYZ.2851.1083857229
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    DEF.2827.1083856881
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    DEF.2849.1083857221
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    GHI.2826.1083856879
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    GHI.2848.1083857221
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    SYSAUX.2819.1083856867
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    SYSAUX.2841.1083857211
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    SYSTEM.2818.1083856865
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    SYSTEM.2840.1083857209
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDOTBS1.2820.1083856871
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDOTBS1.2842.1083857213
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_4.2821.1083856873
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_4.2843.1083857215
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_5.2822.1083856873
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_5.2844.1132579919
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UNDO_6.2823.1083856875
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UNDO_6.2845.1132579917
DATAFILE  HIGH    COARSE   SEP 21 2021      Y    UVW.2824.1083856877
DATAFILE  HIGH    COARSE   JUL 13 02:00:00  Y    UVW.2846.1083857217

It seems that in September 2021 all the files were somehow copied into ASM, but the step was done twice for whatever reason. Unfortunately the old files were never cleaned up, but now I can release some space in the diskgroup (of course, first crosscheck with the CDB that the datafiles are really not needed anymore).
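The roll-up logic described above - sum the bytes and propagate the earliest/latest modify date to every ancestor directory - can be illustrated in a few lines of Python. This is a sketch with hypothetical data, not the SQL from ASM_agg_dirs.sql itself:

```python
from collections import defaultdict

def aggregate_dirs(files):
    """files: iterable of (alias_path, bytes, modify_date) per ASM file.
    Returns {directory: [sum_bytes, earliest_modify, latest_modify]},
    propagating each file's size and dates to all ancestor directories -
    the same roll-up the dir_sizes_dates sub-select performs."""
    agg = defaultdict(lambda: [0, None, None])
    for path, size, mtime in files:
        parts = path.split("/")
        for depth in range(1, len(parts)):        # every ancestor directory
            entry = agg["/".join(parts[:depth])]
            entry[0] += size
            entry[1] = mtime if entry[1] is None else min(entry[1], mtime)
            entry[2] = mtime if entry[2] is None else max(entry[2], mtime)
    return dict(agg)
```

A 90-day filter like the one in the modified script would simply skip files with a recent modify date before the loop.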




2023-07-09

small insights into how the SQL Translation Framework does its work

Oracle's SQL Translation Framework (STF) is designed to simplify some aspects of migrating an application written for another relational database to Oracle. The idea is to take each SQL statement that does not conform to Oracle's SQL dialect and replace it with a corresponding Oracle statement.

Besides this main feature, it can also be used to "replace" a badly written SQL with one that generates the same results but with better performance. But these activities can only be done within the limitations of the STF.

One of these limitations is the restriction to SQLs which come directly from the application. It should be fairly obvious that SQLs from within PL/SQL must be correct Oracle SQL by definition, so there is no reason to put those SQLs through the STF and waste CPU cycles. This is somewhat explained in this Ask TOM article:

I would *suspect* the reason is similar to what I've said above. Anything being run in PLSQL must *by definition* be valid Oracle SQL, because it would not have compiled otherwise. So my hypothesis is that the translation facility is not even invoked in such cases. I'm seeking a confirmation for this.

But it might still be interesting to observe the STF in its activities. My first choice is to check the "new" (since 11g) kernel diagnostic and tracing infrastructure. Unfortunately there is nothing mentioned for the STF.

The next idea is to check the Oracle Optimizer. As it's all about SQL statements, that might be a good idea.

My simple translation is:

  exec dbms_sql_translator.register_sql_translation('FOO','select 1','select 2');

The test is quite simple:
select 1; 

alter system flush shared_pool;
alter session set sql_translation_profile = sys.FOO; 

alter session set tracefile_identifier='TEST1'; 

alter session set events = '10601 trace name context forever, level 32';
alter session set events '10053 trace name context forever'; 
select 1;
alter session set events '10053 trace name context off'; 

The result is as expected:
         1
----------
         1

SQL> SQL>
System altered.

SQL>
Session altered.

SQL> SQL>
Session altered.

SQL> SQL>
Session altered.

SQL>
Session altered.

SQL>
         2
----------
         2

SQL>

and the tracefile starts with

Trace file /opt/oracle/diag/rdbms/free/FREE/trace/FREE_ora_15105_TEST1.trc
Oracle Database 23c Free, Release 23.0.0.0.0 - Developer-Release
Version 23.2.0.0.0
Build label:    RDBMS_23.2.0.0.0_LINUX.X64_230325
ORACLE_HOME:    /opt/oracle/product/23c/dbhomeFree
System name:    Linux
Node name:      localhost.localdomain
Release:        5.4.17-2136.312.3.4.el8uek.x86_64
Version:        #2 SMP Wed Oct 19 17:42:16 PDT 2022
Machine:        x86_64
VM name:        KVM CPUID feature flags: 0x01000089
CLID:   P
Instance name: FREE
Redo thread mounted by this instance: 1
Oracle process number: 70
Unix process pid: 15105, NID: 4026531836, image: oracle@localhost.localdomain


*** 2023-07-09T07:38:57.883180+00:00 (FREEPDB1(3))
*** SESSION ID:(45.5733) 2023-07-09T07:38:57.883205+00:00
*** CLIENT ID:() 2023-07-09T07:38:57.883215+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T07:38:57.883223+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T07:38:57.883231+00:00
*** ACTION NAME:() 2023-07-09T07:38:57.883239+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T07:38:57.883247+00:00
*** CONTAINER ID:(3) 2023-07-09T07:38:57.883255+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T07:38:57.883263+00:00
*** CONNECTION ID:(AAn1ABgrOv/gZQAAAAAAAQ==) 2023-07-09T07:38:57.883273+00:00

Registered qb: SEL$1 0x7fa657242d38 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=147 hint_alias="DUAL"@"SEL$1"

SPM: Processing SQL: select 2
SPM: statement not found in SMB
SPM: capture of plan baseline is OFF
kkoctx(): choosing CBO since optimizer mode set to all/first_rows
...
...
I see only the new SQL (select 2), but nothing regarding the STF.

The reason for this is simple, I should just RTFM:

It translates the SQL statements of a client program before they are processed by the Oracle Database SQL compiler.

OK, the SQL optimizer trace cannot show me anything about the STF. 
But with just some more RTFM, there is a trace flag in the STF: 

  exec dbms_sql_translator.set_attribute('FOO', dbms_sql_translator.attr_trace_translation, dbms_sql_translator.attr_value_true);

With this set, a new attempt to trace the STF now works fine: 

 
...
*** 2023-07-09T07:59:10.127763+00:00 (FREEPDB1(3))
*** SESSION ID:(279.57941) 2023-07-09T07:59:10.127792+00:00
*** CLIENT ID:() 2023-07-09T07:59:10.127802+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T07:59:10.127810+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T07:59:10.127818+00:00
*** ACTION NAME:() 2023-07-09T07:59:10.127826+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T07:59:10.127833+00:00
*** CONTAINER ID:(3) 2023-07-09T07:59:10.127841+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T07:59:10.127850+00:00
*** CONNECTION ID:(AAo8a2DBVZDgZQAAAAAAAQ==) 2023-07-09T07:59:10.127860+00:00

SQL Translation Profile "SYS"."FOO": original SQL text "select 1"
SQL Translation Profile "SYS"."FOO": translated SQL text "select 2"
And checking additionally with the SQL optimizer trace enabled, there I see the SQL:
 
----- Current SQL Statement for this session (sql_id=3q69sdmxzv0t6) -----
select 2
----- Parser State -----
With all this knowledge, I'm tempted to build a testcase with SQL within PL/SQL and see how the STF and the optimizer cooperate:
 
set serveroutput on 
alter system flush shared_pool;
alter session set sql_translation_profile = sys.FOO; 

alter session set tracefile_identifier='TEST5'; 

alter session set events = '10601 trace name context forever, level 32';
alter session set events '10053 trace name context forever'; 

declare 
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
/
 
SQL>
System altered.

SQL>
Session altered.

SQL> SQL>
Session altered.

SQL> SQL>
Session altered.

SQL>
Session altered.

  nn => 1

PL/SQL procedure successfully completed.
As expected, there was no STF for the select within my PL/SQL block, and the trace confirms that:
 
*** 2023-07-09T08:13:04.202341+00:00 (FREEPDB1(3))
*** SESSION ID:(285.30162) 2023-07-09T08:13:04.202367+00:00
*** CLIENT ID:() 2023-07-09T08:13:04.202377+00:00
*** SERVICE NAME:(freepdb1) 2023-07-09T08:13:04.202386+00:00
*** MODULE NAME:(SQL*Plus) 2023-07-09T08:13:04.202394+00:00
*** ACTION NAME:() 2023-07-09T08:13:04.202406+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2023-07-09T08:13:04.202425+00:00
*** CONTAINER ID:(3) 2023-07-09T08:13:04.202443+00:00
*** CLIENT IP:(127.0.0.1) 2023-07-09T08:13:04.202458+00:00
*** CONNECTION ID:(AApvQtb1aJngZQAAAAAAAQ==) 2023-07-09T08:13:04.202474+00:00

SQL Translation Profile "SYS"."FOO": original SQL text "BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
SQL Translation Profile "SYS"."FOO": original SQL text "alter session set events '10053 trace name context forever'"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
SQL Translation Profile "SYS"."FOO": original SQL text "BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
AP: qesdpDPCSetupNextExec(enter): SQL_ID=9babjv8yq8ru3
AP: qesdpDPCSetupNextExec: Not an adaptive plan
AP: qesdpInitExecMutable(enter): SQL_ID=9babjv8yq8ru3
AP: qesdpInitExecMut (exit)
AP: qesdpSetupMutable(enter): SQL_ID=9babjv8yq8ru3
SQL Translation Profile "SYS"."FOO": original SQL text "declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;"
SQL Translation Profile "SYS"."FOO": translated SQL text ""
Registered qb: MISC$1 0x7f059f8e4d38 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature(): NULL
SPM: Processing SQL: declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
SPM: disallowed: statement type not supported
kkoctx(): choosing CBO since optimizer mode set to all/first_rows

**************************
Automatic degree of parallelism (AUTODOP)
**************************
Automatic degree of parallelism is disabled: instance state restriction (parallelism not supported).

PM: Considering predicate move-around in query block MISC$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=002rtvjv7wxc9) -----
declare
  stmt varchar2(100) := 'select 1';
  nn number;
begin
  execute immediate stmt
    into nn;
  dbms_output.put_line(' nn => ' || nn );
end;
----- Parser State -----

...
...
...
---------------------
QUERY BLOCK SIGNATURE
---------------------
  signature (): qb_name=SEL$1 nbfros=1 flg=0
    fro(0): flg=4 objn=147 hint_alias="DUAL"@"SEL$1"

SPM: Processing SQL: select 1
SPM: statement not found in SMB
SPM: capture of plan baseline is OFF
kkoctx(): choosing CBO since optimizer mode set to all/first_rows

**************************
Automatic degree of parallelism (AUTODOP)
**************************
Automatic degree of parallelism is disabled: instance state restriction (parallelism not supported).

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=0y30pf6xwqt3x) -----
select 1
----- Parser State -----
...

Of course, there could have been a chance that my SQL within the PL/SQL block is not literally identical to the one defined in my STF, but that's not the case, as a quick check in DBA_SQL_TRANSLATIONS shows:


The SQL_ID of the select 1 to be translated is 0y30pf6xwqt3x, which is also visible in the last SQL optimizer trace.
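Such a check can be sketched as a dictionary query (the column names follow the documentation of DBA_SQL_TRANSLATIONS; treat them as an assumption):

```sql
select sql_id, sql_text, translated_text
from   dba_sql_translations
where  profile_name = 'FOO';
```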


This should provide some insights into how the STF works - and where it just can't be used at all. 

I also did some tests with the parameters 
_STFForceTranslateOracleSQL     if TRUE translation profile will translate Oracle SQL statements
_STFTranslateDynamicSQL         if TRUE translation profile will translate dynamic SQL statements
(mentioned here) - but I could not make any sense of them. Maybe some time I will find another angle of approach to bring them into the whole picture.
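For reference, such hidden parameters would normally be set like this (a sketch only; underscore parameters are unsupported and should not be touched without guidance from Oracle Support, and the casing of the names is taken as listed above):

```sql
alter session set "_STFForceTranslateOracleSQL" = true;
alter session set "_STFTranslateDynamicSQL" = true;
```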


It might be worth mentioning that there is another framework which can replace SQLs: Advanced Query Rewrite - but this has its own limitations.