Friday, March 11, 2022

How to use adrci to show all messages related to shutdown and startup of a database

These commands will give you all messages related to a shutdown and a startup of an Oracle database, respectively:
adrci> show alert -p "MESSAGE_GROUP LIKE 'shutdown'"
adrci> show alert -p "MESSAGE_GROUP LIKE 'startup'"
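If the server hosts more than one database, make sure adrci is pointed at the correct ADR home before running these commands. A minimal sketch, assuming a home path based on the instance name used later in this post (check the actual list with "show homes"):
adrci> show homes
ADR Homes:
diag/rdbms/testdb01/testdb01
adrci> set homepath diag/rdbms/testdb01/testdb01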
Example output for a shutdown:
Instance shutdown complete (OS id: 4058731)
2022-03-10 18:00:18.042000 +01:00
License high water mark = 863
2022-03-10 18:00:19.115000 +01:00
Instance shutdown complete (OS id: 3611031)
2022-03-10 18:00:33.286000 +01:00
Shutting down instance: further logons disabled
2022-03-10 18:00:36.357000 +01:00
License high water mark = 1
Shutting down archive processes
Archiving is disabled
2022-03-10 18:00:37.519000 +01:00
Shutting down archive processes
Archiving is disabled
2022-03-10 18:00:43.753000 +01:00
Instance shutdown complete (OS id: 3611306)
For the startup sequence, adrci will show you all the details of the startup; in other words, what you would normally see if you tailed the alert log. The output is rather lengthy, so I am shortening it here. It should be familiar to most DBAs:
2022-03-10 18:00:22.413000 +01:00
Oracle instance running with ODM in PGA: Oracle Direct NFS ODM Library Version 6.0
2022-03-10 18:00:25.600000 +01:00
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =249
LICENSE_MAX_USERS = 0
SYS auditing is enabled
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.14.0.0.0.
ORACLE_HOME:    /sw/oracle/product/19c
System name:    Linux
Node name:      myserver.mydomain.com
Release:        4.18.0-348.12.2.el8_5.x86_64
Version:        #1 SMP Mon Jan 17 07:06:06 EST 2022
Machine:        x86_64
Using parameter settings in server-side spfile /sw/oracle/product/19c/dbs/spfiletestdb01.ora
System parameters with non-default values:
etc etc
Since the output is very long, if you simply want to check for an indication of a database startup, you could use this command instead:
adrci> show alert -p "message_text like '%Starting ORACLE instance%'"
which will give you a shorter list to examine, in my case showing that the database was restarted on February 28th and March 10th:
2022-02-28 07:46:54.505000 +01:00
Starting ORACLE instance (restrict) (OS id: 4058595)
2022-02-28 07:47:39.567000 +01:00
Starting ORACLE instance (normal) (OS id: 3300)
2022-03-10 18:00:22.421000 +01:00
Starting ORACLE instance (restrict) (OS id: 3611160)
2022-03-10 18:02:06.831000 +01:00
Starting ORACLE instance (normal) (OS id: 3429)

Thursday, March 10, 2022

How to create a unified auditing policy that captures logons from privileged users

By default, Unified Auditing only captures unsuccessful logon attempts. The predefined policy used for this purpose, ORA_LOGON_FAILURES, is enabled out of the box and audits both privileged and non-privileged users' attempts to log on to the database.

If you want to audit all privileged users that have successfully logged onto the database, you need to create a new policy.

Here is how:
 CREATE AUDIT POLICY PRIVILEGED_USER_LOGONS
   ACTIONS  LOGON
   WHEN 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') IN (''SYS'',''SYSTEM'')'
   EVALUATE PER SESSION; 
Start using it:
AUDIT POLICY PRIVILEGED_USER_LOGONS;
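To verify that the policy is now active, you can query AUDIT_UNIFIED_ENABLED_POLICIES; a quick check, assuming the policy name used above:
select *
from audit_unified_enabled_policies
where policy_name = 'PRIVILEGED_USER_LOGONS';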
The resulting audit record can be found immediately afterwards, with this query against UNIFIED_AUDIT_TRAIL:
select os_username "os user",
       userhost "host",
       authentication_type "authtype",
       dbusername "db user",
       client_program_name "client",
       event_timestamp "time",
       action_name "action",
       system_privilege_used "sys priv",
       unified_audit_policies "aud pol"
from UNIFIED_AUDIT_TRAIL 
where event_Timestamp = (select max(event_Timestamp) from UNIFIED_AUDIT_TRAIL)
order by event_timestamp desc;
Result:
os user  : oracle
host     : myserver.mydomain.com
authtype : (TYPE=(OS));(CLIENT ADDRESS=((PROTOCOL=beq)(HOST=192.168.0.34)));
db user  : SYS
client   : sqlplus@myserver.mydomain.com (TNS V1-V3)
time     : 10.03.2022 11:38:14,972147
action   : LOGON
sys priv : SYSDBA
aud pol  : PRIVILEGED_USER_LOGONS
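If you want to see every record captured by this particular policy rather than just the most recent audit record, a small variation of the query filters on the policy name instead. Note that UNIFIED_AUDIT_POLICIES may hold a comma-separated list of policy names, so LIKE is safer than an equality check:
select dbusername "db user",
       userhost "host",
       event_timestamp "time",
       action_name "action"
from UNIFIED_AUDIT_TRAIL
where unified_audit_policies like '%PRIVILEGED_USER_LOGONS%'
order by event_timestamp desc;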

How to use dbms_metadata to generate DDL for a Unified Auditing Policy

SQL> set long 5000
SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')  from dual;

DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')
--------------------------------------------------------------------------------

   CREATE AUDIT POLICY "ORA_LOGON_FAILURES" ACTIONS  LOGON
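The same approach works for the custom policy created in the previous section, assuming it exists in your database:
SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','PRIVILEGED_USER_LOGONS') from dual;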

Wednesday, March 2, 2022

How to check when a parameter was changed

Thanks to Sandeep Singh for providing this very useful information.

The following query will reveal any changes to a particular parameter:
select instance_number instance, snap_id, time, parameter_name, old_value, new_value
from (
       select a.snap_id,
              to_char(end_interval_time,'DD-MON-YY HH24:MI') time,
              a.instance_number,
              parameter_name,
              value new_value,
              lag(parameter_name,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_pname,
              lag(value,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_value,
              decode(substr(parameter_name,1,2),'__',2,1) calc_flag
       from   dba_hist_parameter a, dba_hist_snapshot b, v$instance v
       where  a.snap_id=b.snap_id
       and    a.instance_number=b.instance_number
       and    parameter_name like nvl('&parameter_name',parameter_name)
       and    a.instance_number like nvl('&instance_number',v.instance_number)
     )
where new_value != old_value
order by 1,2;
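The query prompts for two SQL*Plus substitution variables; because of the NVL calls, leaving a prompt empty makes the corresponding filter match everything. A hypothetical run (the script name parameter_changes.sql is made up for this example) could look like this:
SQL> @parameter_changes.sql
Enter value for parameter_name: optimizer%
Enter value for instance_number: 1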
In my case the output was:
  INSTANCE    SNAP_ID TIME            PARAMETER_NAME                OLD_VALUE NEW_VALUE
---------- ---------- --------------- ----------------------------- --------- ---------
         1      52050 28-FEB-22 14:00 optimizer_adaptive_statistics TRUE      FALSE

Wednesday, February 23, 2022

How to fix the pending_dst_session error when running the preupgrade script

If you receive an error during the precheck phase of an upgrade from 18c to 19c which points to "pending_dst_sessions", you can follow Doc ID 1509653.1 "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1) and above using DBMS_DST".

In short, execute the following:
SELECT version FROM v$timezone_file;
In my case the result was

   VERSION
----------
        32
Then, check the following:
col property_name format a40
col value format a40
set lines 200
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
Result:
PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 31
DST_UPGRADE_STATE                        UPGRADE
The DST_UPGRADE_STATE of UPGRADE shows that a DST upgrade window is still open and was never properly ended. To complete it, create a file called "dst.sql" and add the following:
col property_name format a40
col value format a40
set lines 200
alter session set "_with_subquery"=materialize;
alter session set "_simple_view_merging"=TRUE;
set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
exit
Execute the script:
oracle@myserver.mydomain.com:[testdb01]# sqlplus / as sysdba @dst.sql
Output:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.14.0.0.0


Session altered.


Session altered.

Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "DVSYS"."SIMULATION_LOG$"
Number of failures: 0
Table list: "DVSYS"."AUDIT_TRAIL$"
Number of failures: 0
Failures:0

PL/SQL procedure successfully completed.

An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.


PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 0
DST_UPGRADE_STATE                        NONE

SQL> exit
The error should now be fixed and you can try to run the prechecks for the upgrade once more.
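For reference, the prechecks for a 19c upgrade are typically rerun with the preupgrade.jar shipped in the new Oracle home; the paths below are only examples and must be adjusted to your environment:
# run with the Java shipped in the source home, pointing at preupgrade.jar in the new 19c home
$ORACLE_HOME/jdk/bin/java -jar /u01/app/oracle/product/19c/rdbms/admin/preupgrade.jar TERMINAL TEXT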

When explaining a query that is accessing a partitioned table, what does the Pstart=KEY or Pstop=KEY indicate?

Pstart=KEY or Pstop=KEY indicates that the exact partition cannot be determined at compile time, but will be determined at run time.

Some earlier step in the plan is producing one or more values for the partition key, so that pruning can take place.

Example: I have a composite partitioned table, with a locally partitioned index:
create table published_documents(
  UNIQUE_ID                   VARCHAR2(160 BYTE) NOT NULL,
  REGYEAR                     NUMBER(18),
  DOCUMENT_TYPE               VARCHAR2(100 CHAR),
  DOCUMENT_NAME               VARCHAR2(1000 CHAR),
  TOPIC                       VARCHAR2(30 CHAR),
  VALID                       CHAR(1 BYTE),
  VERSION                     NUMBER(18),
  DATA_XML                    CLOB,
  FORMAT                      VARCHAR2(1000 CHAR),
  PERIOD                      VARCHAR2(1000 CHAR)
)
PARTITION BY LIST (DOCUMENT_TYPE)
SUBPARTITION BY LIST (PERIOD)
...
);

create index pub_docs_idx1 on published_documents
(regyear, document_type, period)
  local;
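The partition and subpartition definitions are left out above. If you want to see how many partitions and subpartitions the table actually has, you can query the data dictionary; a small sketch, using the owner and table name from this example:
select count(*) as partitions
from   dba_tab_partitions
where  table_owner = 'MYUSER'
and    table_name  = 'PUBLISHED_DOCUMENTS';

select count(*) as subpartitions
from   dba_tab_subpartitions
where  table_owner = 'MYUSER'
and    table_name  = 'PUBLISHED_DOCUMENTS';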
Send the following query to the database:
select  document_type, count(*)
from myuser.published_documents
partition(LEGAL)
group by document_type;

The output is as expected:
DOCUMENT_TYPE        COUNT(*)
-------------------- --------
Affidavit                7845
Amending Agreement      29909
Contract                 6647

The query results in the following execution plan:
-------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                     |     4 |   128 |   195M  (1)| 02:07:06 |       |       |
|   1 |  PARTITION LIST SINGLE|                     |     4 |   128 |   195M  (1)| 02:07:06 |   KEY |   KEY | 
|   2 |   HASH GROUP BY       |                     |     4 |   128 |   195M  (1)| 02:07:06 |       |       |
|   3 |    PARTITION LIST ALL |                     | 2198M |   65G |   195M  (1)| 02:07:03 |     1 |   114 |
|   4 |     TABLE ACCESS FULL | PUBLISHED_DOCUMENTS | 2198M |   65G |   195M  (1)| 02:07:03 |   KEY |   KEY |
-------------------------------------------------------------------------------------------------------------
When we specify a named partition, we can see how the optimizer limits its search to the partition named in the query, but it does not yet know how many subpartitions to scan. Since the predicate does not supply any value for the PERIOD column, all 114 subpartitions must be scanned.

Note that the text "TABLE ACCESS FULL" in step 4 can be somewhat confusing: we are only talking about a full scan of the partition called "LEGAL", not of the entire table.

In my experience, specifying the partition name directly is rather unusual, and mostly done by DBAs.
Let's try it with a predicate that is more likely to be sent to the Oracle server by a user or a batch program:
select document_type, period, count(*)
from myuser.published_documents
where period = '2018-01'
group by document_type, period;
The output is as expected:
DOCUMENT_TYPE        PERIOD    COUNT(*)
-------------------- -------- ---------
Affidavit            2018-01       7845
Amending Agreement   2018-01      29909
Contract             2018-01       6647
Payroll              2018-01       7824
HA_related           2018-01      36608
Banking              2018-01      14167
IT                   2018-01       4094

The rows in the output above belong to many different partitions, but they are all from the period 2018-01.

The explain plan for this query would be:
---------------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name                      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |       |       |
|   1 |  PARTITION LIST ALL     |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |     1 |    11 |
|   2 |   HASH GROUP BY         |                           |    50 |  1950 |  6589K  (1)| 00:04:18 |       |       |
|   3 |    PARTITION LIST SINGLE|                           |  8122K|   302M|  6589K  (1)| 00:04:18 |       |       |
|*  4 |     INDEX SKIP SCAN     | PUB_DOCS_IDX1             |  8122K|   302M|  6589K  (1)| 00:04:18 |   KEY |   KEY |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PERIOD"='2018-01')
       filter("PERIOD"='2018-01')
Here, too, we see that the optimizer first selects all 11 partitions, but then uses the locally partitioned index PUB_DOCS_IDX1 to find the rows that match the string '2018-01'. The optimizer does not yet know how many index subpartitions to scan; this will be determined at run time.
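If you want to produce plans like the ones shown above yourself, the standard EXPLAIN PLAN / DBMS_XPLAN combination will include the Pstart and Pstop columns whenever partitioned objects are involved; a minimal sketch:
explain plan for
select document_type, period, count(*)
from myuser.published_documents
where period = '2018-01'
group by document_type, period;

select * from table(dbms_xplan.display);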

Thanks to

  • Jim Brull
  • Justin Cave

Thursday, February 3, 2022

Observation: rman saves files according to end date of backup

I noticed the following: an archivelog file backed up at 31.01.2022 23:57:18 will NOT be saved in the folder for 31.01.2022. Instead, it will be saved in the folder for 01.02.2022.
Output from RMAN:
list archivelog from time '31.01.2022' until time '01.02.2022';
Output (excerpt):
450706  1    450701  A 31.01.2022 23:56:46
        Name: /u04/fra/proddb01/archivelog/2022_01_31/o1_mf_1_450701__vwojnvs6_.arc

450707  1    450702  A 31.01.2022 23:57:16
        Name: /u04/fra/proddb01/archivelog/2022_01_31/o1_mf_1_450702__vwokkx0p_.arc

450708  1    450703  A 31.01.2022 23:57:18
        Name: /u04/fra/proddb01/archivelog/2022_02_01/o1_mf_1_450703__vx4cmycs_.arc

The file /u04/fra/proddb01/archivelog/2022_02_01/o1_mf_1_450703__vx4cmycs_.arc has the timestamp Feb 1 00:05.
So in this case, the last file generated on 31.01 actually ended up in the folder for files generated on 01.02.
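One way to see why is to compare the FIRST_TIME and COMPLETION_TIME of the archived log in V$ARCHIVED_LOG; the date folder appears to follow the time the file was actually written, not the first change time of its contents. A small sketch, using the sequence number from the listing above:
select sequence#,
       to_char(first_time,'DD.MM.YYYY HH24:MI:SS')      first_time,
       to_char(completion_time,'DD.MM.YYYY HH24:MI:SS') completion_time,
       name
from   v$archived_log
where  sequence# = 450703;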