Tuesday, February 4, 2020

How to change configuration for your audit trail


This post applies to Oracle versions from 11.2 up to the present (Oracle 19 as of this writing).

Check the current settings:
SELECT * 
FROM DBA_AUDIT_MGMT_CONFIG_PARAMS
ORDER BY AUDIT_TRAIL;

"PARAMETER_NAME" "PARAMETER_VALUE" "AUDIT_TRAIL"
DB AUDIT TABLESPACE AUDDATA FGA AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 1 STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDDATA STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 UNIFIED AUDIT TRAIL
AUDIT FILE MAX AGE 5 UNIFIED AUDIT TRAIL
DB AUDIT TABLESPACE AUDDATA UNIFIED AUDIT TRAIL
AUDIT WRITE MODE IMMEDIATE WRITE MODE UNIFIED AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL


I will now change the DBMS_AUDIT_MGMT.OS_FILE_MAX_AGE for the XML AUDIT TRAIL from the default 5 days to 2 days.
This is the maximum age of an audit trail file before a new audit trail file gets created:
BEGIN
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
     audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_XML,
     audit_trail_property       => DBMS_AUDIT_MGMT.OS_FILE_MAX_AGE,
     audit_trail_property_value => 2
  );
END;
/

Verify that it was set successfully:
SELECT * 
 FROM DBA_AUDIT_MGMT_CONFIG_PARAMS 
 WHERE AUDIT_TRAIL='XML AUDIT TRAIL'
 AND PARAMETER_NAME='AUDIT FILE MAX AGE';

"PARAMETER_NAME" "PARAMETER_VALUE" "AUDIT_TRAIL"
AUDIT FILE MAX AGE 2 XML AUDIT TRAIL
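
The same procedure can be used for the other properties listed in DBA_AUDIT_MGMT_CONFIG_PARAMS. As a sketch (the value of 5000 KB below is only an example, not a recommendation), the maximum size of an OS audit file could be lowered like this:
BEGIN
  -- example only: cap each OS audit file at 5000 KB instead of the default 10000 KB
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(
     audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_OS,
     audit_trail_property       => DBMS_AUDIT_MGMT.OS_FILE_MAX_SIZE,
     audit_trail_property_value => 5000
  );
END;
/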

For more examples, check the Oracle Documentation.

Why is Oracle producing .aud files for internal sys-statements?



I have recently been in contact with Oracle support regarding an issue where my Oracle 18c database instance writes audit information for internal statements, similar to this:


Sun Jan 26 10:25:41 2020 +01:00
LENGTH : '401'
ACTION :[147] 'select /*+ opt_param('parallel_execution_enabled',
'false') EXEC_FROM_DBMS_XPLAN */ * from gv$sql_plan where 1=0'
DATABASE USER:[1] '/'
PRIVILEGE :[4] 'NONE'
CLIENT USER:[0] ''
CLIENT TERMINAL:[7] 'UNKNOWN'
STATUS:[1] '0'
DBID:[10] '1325844924'
SESSIONID:[1] '0'
USERHOST:[26] 'myhost.mydomain.com'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[1] '3'

Sun Jan 26 10:25:41 2020 +01:00
LENGTH : '375'
ACTION :[121] 'SELECT * FROM gv$sql_plan where sql_id = 'a0f1h9d5muwa6' and inst_id = 1 and child_address = hextoraw('00000004FFF16130')'
DATABASE USER:[1] '/'
PRIVILEGE :[4] 'NONE'
CLIENT USER:[0] ''
CLIENT TERMINAL:[7] 'UNKNOWN'
STATUS:[1] '0'
DBID:[10] '1325844924'
SESSIONID:[1] '0'
USERHOST:[26] 'myhost.mydomain.com'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[1] '3'

If you have migrated to Unified Auditing, Oracle states that "audit records are only expected to be generated in database tables and OS spillover files (*.bin) under audit destination path."
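
If you are unsure whether the instance is actually running with pure unified auditing, the status can be checked like this (the value is TRUE when unified auditing is enabled):
SELECT value
FROM v$option
WHERE parameter = 'Unified Auditing';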

However, dynamic SQL statements parsed or executed using the DBMS_SQL package are still audited in the conventional *.aud type OS files.

To get rid of these messages piling up in your audit file destination (audit_file_dest):
alter system set audit_sys_operations=FALSE scope=spfile;
shutdown immediate
startup
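
After the restart, you can verify the setting, for instance:
SELECT name, value
FROM v$parameter
WHERE name = 'audit_sys_operations';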

If setting audit_sys_operations to FALSE is not desirable, Oracle states that you can request a patch through the following bug number:


Bug 21133343 *.aud file is generated though unified auditing=true and audit_trail=none


Note that you will see the same phenomenon under mixed-mode or classic auditing.
Oracle does not explicitly say that they will provide a patch in this case, though.

Documentation from Oracle support: Doc ID 2020881.1: "OS Audit Files *.aud are Still Generated After Migrating to Unified Audit"


Tuesday, December 10, 2019

What is the Asynchronous Global Index Maintenance introduced in Oracle 12.1?


Asynchronous Global Index Maintenance, also called Deferred Global Index Maintenance, means that Oracle maintains global indexes during the nightly maintenance window when you perform certain operations on partitioned tables that use global indexes. Oracle introduced this in an effort to make online operations less disruptive.

Examples of such operations are:

* DROP PARTITION
* TRUNCATE PARTITION
* MOVE PARTITION (although this operation is not listed in the Oracle 12.1 documentation)
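
For example, a drop like the one below (table and partition names are made up for illustration) returns quickly; the global index stays usable, and the actual cleanup of the orphaned index entries is deferred to the maintenance window:
ALTER TABLE SALES DROP PARTITION P_2018_01 UPDATE GLOBAL INDEXES;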

The execution of the index maintenance is done through the PL/SQL program PMO_DEFERRED_GIDX_MAINT, whose purpose is to clear orphaned data from global indexes, generated during partition maintenance operations.

The program can be verified through DBA_SCHEDULER_PROGRAMS:
SELECT OWNER,PROGRAM_NAME,PROGRAM_TYPE,PROGRAM_ACTION,ENABLED
FROM DBA_SCHEDULER_PROGRAMS
WHERE PROGRAM_NAME='PMO_DEFERRED_GIDX_MAINT';

OWNER   PROGRAM_NAME              PROGRAM_TYPE   PROGRAM_ACTION                                         ENABLED
SYS     PMO_DEFERRED_GIDX_MAINT   PLSQL_BLOCK    dbms_part.cleanup_gidx_internal( noop_okay_in => 1);   TRUE

It is executed through the scheduler every night:
select schedule_name,repeat_interval
from DBA_SCHEDULER_SCHEDULES
where schedule_name='PMO_DEFERRED_GIDX_MAINT_SCHED';

SCHEDULE_NAME                   REPEAT_INTERVAL
PMO_DEFERRED_GIDX_MAINT_SCHED   FREQ=DAILY; BYHOUR=02; BYMINUTE=0; BYSECOND=0

It is executed through the job PMO_DEFERRED_GIDX_MAINT_JOB:
SELECT owner,job_name,program_name,schedule_name,enabled,state,run_count,to_char(last_start_date,'dd.mm.yyyy hh24:mi') "start",stop_on_window_close
FROM DBA_SCHEDULER_JOBS
WHERE PROGRAM_NAME='PMO_DEFERRED_GIDX_MAINT';

OWNER   JOB_NAME                      PROGRAM_NAME              SCHEDULE_NAME                   ENABLED   STATE     RUN_COUNT   start              STOP_ON_WINDOW_CLOSE
SYS     PMO_DEFERRED_GIDX_MAINT_JOB   PMO_DEFERRED_GIDX_MAINT   PMO_DEFERRED_GIDX_MAINT_SCHED   TRUE      RUNNING   1328        10.12.2019 02:00   FALSE


If the job is not finished within a day, it is possible to run PMO_DEFERRED_GIDX_MAINT with parallelism, as described in the Oracle Support Note "Is It Possible To Execute The Job PMO_DEFERRED_GIDX_MAINT_JOB With Parallel Degree? (Doc ID 2126370.1)".


An example from my own experience is that the following command
ALTER TABLE SALES
MOVE PARTITION P_2019_02 ONLINE
TABLESPACE DATA2
ROW STORE COMPRESS ADVANCED
PARALLEL 4;

triggered the statement:
ALTER INDEX "SH"."SALES_IDX1" COALESCE CLEANUP;

I have found that I sometimes need to stop a job that runs past the maintenance window and into office hours.
In that case, I've used the procedure dbms_scheduler.stop_job, like this:
BEGIN
  DBMS_SCHEDULER.STOP_JOB(job_name => 'SYS.PMO_DEFERRED_GIDX_MAINT_JOB', force => TRUE);
END;
/
If you need to find out which indexes are due for coalescing, use this query:
SELECT owner,index_name,orphaned_entries
FROM   dba_indexes
where owner ='SCOTT'
and orphaned_entries='YES'
ORDER BY 1;
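
For an index flagged with orphaned entries, the cleanup can also be triggered manually instead of waiting for the nightly job. Two hedged examples (the owner, index and schema names below are placeholders):
-- coalesce one specific global index
ALTER INDEX SCOTT.SALES_GIDX1 COALESCE CLEANUP;

-- or clean up the global indexes in a schema through the same package the scheduler job uses
EXEC DBMS_PART.CLEANUP_GIDX('SCOTT');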

Tuesday, December 3, 2019

How an incorrect password file format can stop the redo apply process (MRP0) on standby database



Error in dgmgrl shows:
DGMGRL> show configuration;

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Members:
  proddb01   - Primary database
    stby02- Physical standby database
    stby01 - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 36 seconds ago)


Checking the database that throws the error:
DGMGRL> show database stby01

Database - stby01

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          21 hours 43 minutes 37 seconds (computed 0 seconds ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    pipat

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold


Try restarting redo apply:
DGMGRL>  edit database 'stby01'  set state='APPLY-OFF';
Succeeded.
DGMGRL>  edit database 'stby01'  set state='APPLY-ON';
Succeeded.

The alert log reports:
2019-12-03T11:33:22.214114+01:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2019-12-03T11:33:22.270642+01:00
Attempt to start background Managed Standby Recovery process (proddb01)
Starting background process MRP0
2019-12-03T11:33:22.312794+01:00
MRP0 started with pid=105, OS id=20354
2019-12-03T11:33:22.333315+01:00
MRP0: Background Managed Standby Recovery process started (proddb01)
2019-12-03T11:33:27.472196+01:00
 Started logmerger process
2019-12-03T11:33:27.599549+01:00
Managed Standby Recovery starting Real Time Apply
2019-12-03T11:33:27.801888+01:00
Parallel Media Recovery started with 4 slaves
2019-12-03T11:33:28.279378+01:00
Media Recovery Log /u04/fra/STBY01/archivelog/2019_12_02/o1_mf_1_121201__y2thfwyz_.arc
2019-12-03T11:33:28.318586+01:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
MRP0: Background Media Recovery terminated with error 46952
2019-12-03T11:33:28.372536+01:00
Errors in file /orasoft/diag/rdbms/stby01/proddb01/trace/proddb01_pr00_20395.trc:
 ORA-46952: standby database format mismatch for password file '/orasoft/product/122/dbs/orapwproddb01'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 826857150
2019-12-03T11:33:28.447080+01:00
2019-12-03T11:33:28.554534+01:00
MRP0: Background Media Recovery process shutdown (proddb01)

The password file is of an older format and should be recreated in order to resume redo apply. How to do this is outlined in one of my previous posts, available here.
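
On 12.2 and later you can check which format the password file currently has (and verify it again after recreating it) with a query like this, on both the primary and the standby:
SELECT file_name, format
FROM v$passwordfile_info;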

After you have done this, restart redo apply again with

DGMGRL>  edit database 'stby01'  set state='APPLY-ON';
Succeeded.
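
To confirm that the MRP0 process is back and applying redo, you can check v$managed_standby on the standby, for instance:
SELECT process, status, sequence#
FROM v$managed_standby
WHERE process LIKE 'MRP%';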

Put a tail on the standby database's alert log and you'll see that the outstanding logs are churned through quickly.

Update, 04.11.2024

I have also seen situations where recreation of the password file does not help. In such cases:

  • On the standby, start the managed recovery (redo apply) process manually
      SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    
  • Copy the password file from primary node 1 to standby node

    Source: Standby Database MRP Fails With ORA-46952: Standby Database Format Mismatch For Password (Doc ID 2503352.1)

Wednesday, November 27, 2019

How to move a datafile online in Oracle 12.1


Starting with Oracle 12.1, you can move datafiles online.


Generate a move script (here rewriting the path from old_sid to new_sid):
select 'alter database move datafile ''' || file_name || ''' TO ''' || replace(file_name,'old_sid','new_sid') || ''';'
from dba_data_files;

The output will be statements like:
alter database move datafile '/oradata/old_sid/system01.dbf' TO '/oradata/new_sid/system01.dbf';
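
Moving a large datafile can take a while. While the move is running, its progress can usually be followed in v$session_longops with a query along these lines:
SELECT opname, sofar, totalwork, units, message
FROM v$session_longops
WHERE opname LIKE 'Online data file move%'
AND sofar <> totalwork;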
    


Tuesday, November 26, 2019

How to execute a *.sql file using psql in a PostgreSQL database


There are two ways:

1. Already connected to the correct database:
postgres=# \connect proddb01
You are now connected to database "proddb01" as user "postgres".
proddb01=# \conninfo
You are connected to database "proddb01" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
proddb01=#

Then use the \i meta-command in psql to execute your file, like this:
proddb01=# \i yourfile.sql

2. Directly from the operating system command line:
psql proddb01 -f yourfile.sql

You can also direct the output to a log file, using the -o flag:
psql proddb01 -f test.sql -o test.log

The -o flag will redirect everything to the log file and not show anything on your screen.
If you would like to see the result of the script on your screen AND save the output in a log file, use the -L flag instead of the -o flag. This will also log the actual queries sent to the postgres server:
psql proddb01 -f test.sql -L test.log

Note that no "exit" statement is necessary in the .sql scripts you execute through psql directly from the command line; psql quits the session automatically.

How to list the databases in a PostgreSQL instance


Connect to the default database:
psql

List your databases:
select oid as database_id,
       datname as database_name,
       datallowconn as allow_connect,
       datconnlimit as connection_limit
from pg_database
order by oid;

Example output:
 database_id | database_name | allow_connect | connection_limit
-------------+---------------+---------------+------------------
           1 | template1     | t             |               -1
       13880 | template0     | f             |               -1
       13881 | postgres      | t             |               -1
       16426 | proddb01      | t             |               -1
(4 rows)
Or use the psql meta-command "\list+" (or "\l+") to list the available databases:
prod-# \l+
                                                                  List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |      Access privileges       |  Size   | Tablespace |                Description
-----------+----------+----------+------------+------------+------------------------------+---------+------------+--------------------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/postgres                +| 9269 kB | pg_default | default administrative connection database
           |          |          |            |            | postgres=CTc/postgres       +|         |            |
           |          |          |            |            | postgres_exporter=c/postgres |         |            |
 proddb01  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres       +| 25 GB   | pg_default |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +| 7965 kB | pg_default | unmodifiable empty database
           |          |          |            |            | postgres=CTc/postgres        |         |            |
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +| 8221 kB | pg_default | default template for new databases
           |          |          |            |            | postgres=CTc/postgres        |         |            |
(4 rows)

Once again, thanks to Bart Gawrych for providing some very useful listings on his site.
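
If you only need the database sizes that \l+ reports, the same numbers can be fetched with pg_database_size(), for example:
select datname as database_name,
       pg_size_pretty(pg_database_size(datname)) as size
from pg_database
order by pg_database_size(datname) desc;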