Thursday, March 10, 2022

How to use dbms_metadata to generate DDL for a Unified Auditing Policy

SQL> set long 5000
SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')  from dual;

DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')
--------------------------------------------------------------------------------

   CREATE AUDIT POLICY "ORA_LOGON_FAILURES" ACTIONS  LOGON
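
If you do not remember the exact policy name, the dictionary views AUDIT_UNIFIED_POLICIES and AUDIT_UNIFIED_ENABLED_POLICIES list the defined and the currently enabled policies, respectively. A quick supplementary sketch:
SQL> col policy_name format a40
SQL> select distinct policy_name from audit_unified_policies order by 1;
SQL> select policy_name from audit_unified_enabled_policies;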

Wednesday, March 2, 2022

How to check when a parameter was changed

Thanks to Sandeep Singh for providing this very useful information.

The following query will reveal any changes to a particular parameter:
select instance_number instance, snap_id, time, parameter_name, old_value, new_value 
from (
       select a.snap_id,to_char(end_interval_time,'DD-MON-YY HH24:MI') TIME, a.instance_number, parameter_name, value new_value,
       lag(parameter_name,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_pname,
       lag(value,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_value ,
       decode(substr(parameter_name,1,2),'__',2,1) calc_flag
from dba_hist_parameter a, dba_Hist_snapshot b , v$instance v
where a.snap_id=b.snap_id
and   a.instance_number=b.instance_number
and   parameter_name like nvl('&parameter_name',parameter_name)
and   a.instance_number like nvl('&instance_number',v.instance_number)
)
where
new_value != old_value
order by 1,2;
In my case the output was
INSTANCE SNAP_ID TIME            PARAMETER_NAME                OLD_VALUE NEW_VALUE
       1   52050 28-FEB-22 14:00 optimizer_adaptive_statistics TRUE      FALSE
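
If you only need the current value and whether it has been changed since instance startup, rather than the AWR history, a quick look at v$parameter is enough. A small supplementary sketch:
col name format a40
col value format a20
select name, value, isdefault, ismodified
from v$parameter
where name = 'optimizer_adaptive_statistics';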

Wednesday, February 23, 2022

How to fix the pending_dst_session error when running the preupgrade script

If the precheck phase of an upgrade from 18c to 19c fails with an error pointing to "pending_dst_session", you can follow Doc ID 1509653.1 "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1) and above using DBMS_DST".

In short, execute the following:
SELECT version FROM v$timezone_file;
In my case the result was

   VERSION
----------
        32
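
The time zone files themselves ship with the Oracle home under $ORACLE_HOME/oracore/zoneinfo (file names follow the pattern timezlrg_<version>.dat), so a quick listing shows which DST versions are available to this installation; just a supplementary check:
oracle@myserver.mydomain.com:[testdb01]# ls $ORACLE_HOME/oracore/zoneinfo/timezlrg_*.dat
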
Then, check the following:
col property_name format a40
col value format a40
set lines 200
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
Result:
PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 31
DST_UPGRADE_STATE                        UPGRADE
Then, create a file called "dst.sql" and add the following:
col property_name format a40
col value format a40
set lines 200
-- session settings recommended by the MOS note
alter session set "_with_subquery"=materialize;
alter session set "_simple_view_merging"=TRUE;
set serveroutput on

-- convert the TSTZ data in the database to the new time zone version
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/

-- close the DST upgrade window
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
exit
Execute the script:
oracle@myserver.mydomain.com:[testdb01]# sqlplus / as sysdba @dst.sql
Output:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.14.0.0.0


Session altered.


Session altered.

Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "DVSYS"."SIMULATION_LOG$"
Number of failures: 0
Table list: "DVSYS"."AUDIT_TRAIL$"
Number of failures: 0
Failures:0

PL/SQL procedure successfully completed.

An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.


PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 0
DST_UPGRADE_STATE                        NONE

SQL> exit
The error should now be fixed and you can try to run the prechecks for the upgrade once more.
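
For reference, re-running the preupgrade checks with the preupgrade.jar shipped in the 19c home typically looks like this (the 19c path below is only a placeholder; adjust it to your environment):
oracle@myserver.mydomain.com:[testdb01]# $ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19.0.0/rdbms/admin/preupgrade.jar TERMINAL TEXT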

When explaining a query that is accessing a partitioned table, what does the Pstart=KEY or Pstop=KEY indicate?

Pstart=KEY or Pstop=KEY indicates that the exact partition to scan cannot be determined at parse time; it is resolved at run time, once the value of the partition key is known.

Some earlier step in the plan produces one or more values for the partition key, so that partition pruning can still take place at run time.

Example: I have a composite partitioned table, with a locally partitioned index:
create table published_documents(
  UNIQUE_ID                   VARCHAR2(160 BYTE) NOT NULL,
  REGYEAR                     NUMBER(18),
  DOCUMENT_TYPE               VARCHAR2(100 CHAR),
  DOCUMENT_NAME               VARCHAR2(1000 CHAR),
  TOPIC                       VARCHAR2(30 CHAR),
  VALID                       CHAR(1 BYTE),
  VERSION                     NUMBER(18),
  DATA_XML                    CLOB,
  FORMAT                      VARCHAR2(1000 CHAR),
  PERIOD                      VARCHAR2(1000 CHAR)
)
PARTITION BY LIST (DOCUMENT_TYPE)
SUBPARTITION BY LIST (PERIOD)
...
);

create index pub_docs_idx1 on published_documents
(regyear, document_type, period)
  local;
Send the following query to the database:
select  document_type, count(*)
from myuser.published_documents
partition(LEGAL)
group by document_type;

The output is as expected:
DOCUMENT_TYPE          COUNT(*)
-------------------- ----------
Affidavit                  7845
Amending Agreement        29909
Contract                   6647

The query results in the following execution plan:
-------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                     |     4 |   128 |   195M  (1)| 02:07:06 |       |       |
|   1 |  PARTITION LIST SINGLE|                     |     4 |   128 |   195M  (1)| 02:07:06 |   KEY |   KEY | 
|   2 |   HASH GROUP BY       |                     |     4 |   128 |   195M  (1)| 02:07:06 |       |       |
|   3 |    PARTITION LIST ALL |                     |  2198M|    65G|   195M  (1)| 02:07:03 |     1 |   114 |
|   4 |     TABLE ACCESS FULL | PUBLISHED_DOCUMENTS |  2198M|    65G|   195M  (1)| 02:07:03 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------
When we specify a named partition, we can see how the optimizer limits its search to the partition named in the PARTITION clause, but it does not yet know how many subpartitions it will have to scan. Since the query contains no predicate on the PERIOD column, all 114 subpartitions must be scanned.

Note that the text "TABLE ACCESS FULL" in step 4 can be somewhat confusing: it refers to a full scan of the partition called "LEGAL" only, not of the entire table.

In my experience, specifying the partition name directly is rather unusual, and mostly done by DBAs.
Let's try a predicate that is more likely to be sent to the Oracle server by a user or a batch program:
select document_type, period, count(*)
from myuser.published_documents
where period = '2018-01'
group by document_type, period;
The output is as expected:
DOCUMENT_TYPE        PERIOD       COUNT(*)
-------------------- ---------- ----------
Affidavit            2018-01          7845
Amending Agreement   2018-01         29909
Contract             2018-01          6647
Payroll              2018-01          7824
HA_related           2018-01         36608
Banking              2018-01         14167
IT                   2018-01          4094

The rows in the output above belong to many different partitions, but they all come from the period 2018-01.

The execution plan for this query is:
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |       |       |
|   1 |  PARTITION LIST ALL     |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |     1 |    11 |
|   2 |   HASH GROUP BY         |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |       |       |
|   3 |    PARTITION LIST SINGLE|                           |  8122K|   302M|  6589K  (1)| 00:04:18 |       |       |
|*  4 |     INDEX SKIP SCAN     | PUB_DOCS_IDX1             |  8122K|   302M|  6589K  (1)| 00:04:18 |   KEY |   KEY |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PERIOD"='2018-01')
       filter("PERIOD"='2018-01')
Here, too, we see that the optimizer first selects all 11 partitions, but then uses the locally partitioned index PUB_DOCS_IDX1 to find the rows matching the string '2018-01'. The optimizer does not yet know how many index subpartitions it will have to scan; this is determined at run time.
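
If you want to see what actually happens at run time, one option (not shown in the plans above, just a sketch) is to run the query with the gather_plan_statistics hint and then fetch the run-time statistics from the cursor cache with DBMS_XPLAN.DISPLAY_CURSOR:
select /*+ gather_plan_statistics */ document_type, period, count(*)
from myuser.published_documents
where period = '2018-01'
group by document_type, period;

-- the PARTITION modifier keeps the Pstart/Pstop columns in the output
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST +PARTITION'));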

Thanks to

  • Jim Brull
  • Justin Cave

Thursday, February 3, 2022

Observation: RMAN saves files according to the end date of the backup

I noticed the following: an archivelog file backed up at 31.01.2022 23:57:18 will NOT be saved in the folder for 31.01.2022. Instead, it will be saved in the folder for 01.02.2022.
Output from RMAN:
list archivelog from time '31.01.2022' until time '01.02.2022';

Output (excerpt):
450706  1    450701  A 31.01.2022 23:56:46
        Name: /u04/fra/proddb01/archivelog/2022_01_31/o1_mf_1_450701__vwojnvs6_.arc

450707  1    450702  A 31.01.2022 23:57:16
        Name: /u04/fra/proddb01/archivelog/2022_01_31/o1_mf_1_450702__vwokkx0p_.arc

450708  1    450703  A 31.01.2022 23:57:18
        Name: /u04/fra/proddb01/archivelog/2022_02_01/o1_mf_1_450703__vx4cmycs_.arc

The file /u04/fra/proddb01/archivelog/2022_02_01/o1_mf_1_450703__vx4cmycs_.arc has the timestamp Feb 1 00:05.
So in this case, the last archivelog generated on 31.01 actually ended up in the folder for files generated on 01.02.
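
To compare the time of the log switch (next_time) with the time the archiving completed, v$archived_log holds both timestamps; a small sketch, with the date range only as an example:
alter session set nls_date_format='DD.MM.YYYY HH24:MI:SS';
select sequence#, next_time, completion_time, name
from v$archived_log
where next_time >= to_date('31.01.2022 23:00:00','DD.MM.YYYY HH24:MI:SS')
order by sequence#;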

Monday, January 31, 2022

Simple SQL to list the indexes on a table

The following SQL lists the indexes defined on a table, along with the columns and their positioning:
SELECT I.INDEX_NAME, I.INDEX_TYPE, I.NUM_ROWS, I.DEGREE, C.COLUMN_NAME, C.COLUMN_POSITION
FROM DBA_INDEXES I JOIN DBA_IND_COLUMNS C
ON (I.INDEX_NAME = C.INDEX_NAME)
WHERE I.OWNER = 'MYSCHEMA'
AND I.OWNER = C.INDEX_OWNER
AND I.TABLE_NAME = 'MYTABLE'
ORDER BY I.INDEX_NAME, C.COLUMN_POSITION;

Wednesday, January 5, 2022

How to set a dynamic parameter in a PostgreSQL database cluster

As with Oracle, some parameters may be set dynamically in a PostgreSQL database cluster. A PostgreSQL cluster uses a parameter file called postgresql.conf, which holds the cluster-wide parameters. If you set a dynamic parameter using the ALTER SYSTEM SET command, the parameter is written to another file called postgresql.auto.conf, whose values always override those in postgresql.conf.

Before the change, postgresql.auto.conf looks like this:
log_line_prefix = '[%m] – %p %q- %u@%h:%d – %a'
wal_level = 'replica'
hot_standby = 'on'

I then make a change to the system configuration:
alter system set hot_standby_feedback=on;
ALTER SYSTEM

After this change, the file postgresql.auto.conf has another entry:
log_line_prefix = '[%m] – %p %q- %u@%h:%d – %a'
wal_level = 'replica'
hot_standby = 'on'
hot_standby_feedback = 'on'

I then have to reload the configuration using the function pg_reload_conf() for the new parameter to take effect:
postgres=#  select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

The current logfile for the PostgreSQL database cluster records this fact:
[2022-01-03 14:45:23.127 CET] – 1683 LOG:  received SIGHUP, reloading configuration files
[2022-01-03 14:45:23.129 CET] – 1683 LOG:  parameter "hot_standby_feedback" changed to "on"

For details, check the PostgreSQL documentation.
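
To double-check that the reload actually picked up the new value, you can ask the server directly; a quick sketch (pending_restart requires PostgreSQL 9.5 or newer):
postgres=# show hot_standby_feedback;
postgres=# select name, setting, context, pending_restart from pg_settings where name = 'hot_standby_feedback';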