Friday, March 11, 2022

What constitutes "default auditing" under Unified Auditing?

Under Mixed-mode auditing and Pure Unified Auditing, the policies ORA_SECURECONFIG and ORA_LOGON_FAILURES are audited by default.

From the documentation:

ORA_SECURECONFIG audits the same default audit settings from Oracle Database Release 11g.
It tracks the use of a number of privileges such as ALTER ANY TABLE, GRANT ANY PRIVILEGE, and CREATE USER. The actions that it tracks include ALTER USER, CREATE ROLE, LOGON, and other commonly performed activities.
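If you want to see exactly which actions and privileges a built-in policy such as ORA_SECURECONFIG covers, you can query the AUDIT_UNIFIED_POLICIES view yourself (exact column names may vary slightly between releases):

```sql
-- List the audit options covered by the ORA_SECURECONFIG policy
SELECT audit_option, audit_option_type
FROM   audit_unified_policies
WHERE  policy_name = 'ORA_SECURECONFIG'
ORDER  BY audit_option;
```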


There are a couple of other policies that are not enabled by default, but are certainly worth auditing. These are:

  • ORA_DATABASE_PARAMETER audits commonly used Oracle Database parameter settings: ALTER DATABASE, ALTER SYSTEM, and CREATE SPFILE.
  • ORA_ACCOUNT_MGMT audits the commonly used user account and privilege settings: CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, ALTER ROLE, SET ROLE, GRANT, and REVOKE.

    Enabling these is as simple as executing the following SQL against the database as a privileged user:
    AUDIT POLICY ORA_DATABASE_PARAMETER;
    AUDIT POLICY ORA_ACCOUNT_MGMT;
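To verify which policies are actually enabled afterwards, query the AUDIT_UNIFIED_ENABLED_POLICIES view (the column is named ENABLED_OPT in some older releases):

```sql
-- Show all currently enabled unified audit policies
SELECT user_name, policy_name, enabled_option
FROM   audit_unified_enabled_policies
ORDER  BY policy_name;
```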
    
How to create a partial index on a table in PostgreSQL

    I was asked by one of my customers to advise on how to create an index for a new column called "sk_type", of datatype varchar, with only 3 distinct values in addition to NULL. Let's call them STRING1, STRING2, STRING3.

    The table is already range-partitioned on the column refyear (an integer holding the reference year). Here is the table DDL:
    CREATE TABLE event
    (
        id bigint NOT NULL DEFAULT nextval('hendelselager.hendelse_id_seq'::regclass),
        skpl character varying(8)  COLLATE pg_catalog."default",
        refyear integer NOT NULL,
        pnum bigint NOT NULL,
        ksystem character varying(128) COLLATE pg_catalog."default" NOT NULL,
        category character varying(128) COLLATE pg_catalog."default" NOT NULL,
        event character varying(128) COLLATE pg_catalog."default" NOT NULL,
        tstamp timestamp without time zone NOT NULL,
        ip_id character varying(128) COLLATE pg_catalog."default" NOT NULL,
        details jsonb,
        CONSTRAINT e_pkey PRIMARY KEY (refyear, event, id),
        CONSTRAINT uc_e01 UNIQUE (refyear, pnum, ksystem, category, event, ip_id)
    ) PARTITION BY RANGE (refyear);
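Since the table is range-partitioned on refyear, each year needs its own partition before rows for that year can be inserted. As a side note, this is a minimal sketch of how such a partition could be added (the partition name is hypothetical):

```sql
-- Create a partition covering refyear = 2022
CREATE TABLE event_2022 PARTITION OF event
    FOR VALUES FROM (2022) TO (2023);
```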
    
    The distribution of values was expected to be heavily skewed right from the start:
    * STRING1 - 95%
    * STRING2 - 5%
    * STRING3 - < 0.5%
    * NULL - < 0.1%

    In the documentation I found that a partial index might be ideal in this situation. A partial index is defined as

    an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate.

    According to the documentation, a partial index could be useful in cases where you want to avoid indexing common values:

    Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up those queries that [actually] do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases.

    In other words, we would only index rows whose column value differs from 'STRING1'.

    First, add the new column:
    ALTER TABLE event
    ADD sk_type character varying(8);
    
    Next, create the partial index. Here is the syntax I used in a sandbox environment:
    CREATE INDEX sk_type_idx ON event (refyear,sk_type)
     WHERE NOT (sk_type = 'STRING1');
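To check that the planner can actually make use of the partial index, run a query for one of the rarer values under EXPLAIN. Whether the index is chosen depends on table size and statistics, so this only shows what you would hope to see:

```sql
-- The predicate matches the index condition (sk_type <> 'STRING1'),
-- so the planner is free to use sk_type_idx
EXPLAIN
SELECT *
FROM   event
WHERE  refyear = 2022
AND    sk_type = 'STRING2';
```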
    
    This approach comes with a caveat, which may or may not be acceptable:

    Observe that this kind of partial index requires that the common values be predetermined, so such partial indexes are best used for data distributions that do not change. The indexes can be recreated occasionally to adjust for new data distributions, but this adds maintenance effort.

    The official PostgreSQL 11 documentation can be found here.

    How to use adrci to show all messages related to shutdown and startup of a database

    These commands will give you all messages related to a shutdown and a startup of an Oracle database, respectively:
    adrci> show alert -p "MESSAGE_GROUP LIKE 'shutdown'"
    adrci> show alert -p "MESSAGE_GROUP LIKE 'startup'"
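adrci can also be run non-interactively from the shell, which is handy in scripts; the -term flag prints the result to the terminal instead of opening an editor (the diag home below is hypothetical):

```shell
adrci exec="set home diag/rdbms/testdb01/testdb01; show alert -p \"MESSAGE_GROUP LIKE 'shutdown'\" -term"
```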
    
    Example output for a shutdown:
    Instance shutdown complete (OS id: 4058731)
    2022-03-10 18:00:18.042000 +01:00
    License high water mark = 863
    2022-03-10 18:00:19.115000 +01:00
    Instance shutdown complete (OS id: 3611031)
    2022-03-10 18:00:33.286000 +01:00
    Shutting down instance: further logons disabled
    2022-03-10 18:00:36.357000 +01:00
    License high water mark = 1
    Shutting down archive processes
    Archiving is disabled
    2022-03-10 18:00:37.519000 +01:00
    Shutting down archive processes
    Archiving is disabled
    2022-03-10 18:00:43.753000 +01:00
    Instance shutdown complete (OS id: 3611306)
    
    For the startup sequence, adrci will show you all the details of the startup; in other words, what you would normally see when tailing the alert log. The output is rather lengthy, so I have shortened it. It should be familiar to most DBAs:
    2022-03-10 18:00:22.413000 +01:00
    Oracle instance running with ODM in PGA: Oracle Direct NFS ODM Library Version 6.0
    2022-03-10 18:00:25.600000 +01:00
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =249
    LICENSE_MAX_USERS = 0
    SYS auditing is enabled
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.14.0.0.0.
    ORACLE_HOME:    /sw/oracle/product/19c
    System name:    Linux
    Node name:      myserver.mydomain.com
    Release:        4.18.0-348.12.2.el8_5.x86_64
    Version:        #1 SMP Mon Jan 17 07:06:06 EST 2022
    Machine:        x86_64
    Using parameter settings in server-side spfile /sw/oracle/product/19c/dbs/spfiletestdb01.ora
    System parameters with non-default values:
    etc etc
    
    Since the output is very long, if you simply want to check for an indication of a database startup you could use this command instead:
    adrci> show alert -p "message_text like '%Starting ORACLE instance%'"
    
    which will give you a shorter list to examine, in my case showing that the database was restarted on February 28th and March 10th:
    2022-02-28 07:46:54.505000 +01:00
    Starting ORACLE instance (restrict) (OS id: 4058595)
    2022-02-28 07:47:39.567000 +01:00
    Starting ORACLE instance (normal) (OS id: 3300)
    2022-03-10 18:00:22.421000 +01:00
    Starting ORACLE instance (restrict) (OS id: 3611160)
    2022-03-10 18:02:06.831000 +01:00
    Starting ORACLE instance (normal) (OS id: 3429)
    

    Thursday, March 10, 2022

    How to create a unified auditing policy that captures logons from privileged users

    By default, only unsuccessful logon attempts are audited under Unified Auditing.
    The policy used for this purpose is ORA_LOGON_FAILURES, and it audits both privileged and non-privileged users' attempts to log on to the database.

    If you want to audit all privileged users that have successfully logged onto the database, you need to create a new policy.

    Here is how:
     CREATE AUDIT POLICY PRIVILEGED_USER_LOGONS
       ACTIONS  LOGON
       WHEN 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') IN (''SYS'',''SYSTEM'')'
       EVALUATE PER SESSION; 
    
    Start using it:
    AUDIT POLICY PRIVILEGED_USER_LOGONS;
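Should you later want to stop using the policy, disable it with NOAUDIT before dropping it:

```sql
-- Stop collecting records for the policy, then remove it
NOAUDIT POLICY PRIVILEGED_USER_LOGONS;
DROP AUDIT POLICY PRIVILEGED_USER_LOGONS;
```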
    
    The resulting audit record can be found immediately afterwards, with this query against UNIFIED_AUDIT_TRAIL:
    select os_username "os user",
           userhost "host",
           authentication_type "authtype",
           dbusername "db user",
           client_program_name "client",
           event_timestamp "time",
           action_name "action",
           system_privilege_used "sys priv",
           unified_audit_policies "aud pol"
    from UNIFIED_AUDIT_TRAIL 
    where event_Timestamp = (select max(event_Timestamp) from UNIFIED_AUDIT_TRAIL)
    order by event_timestamp desc;
    
    Result (a single record, shown vertically for readability):
    os user : oracle
    host    : myserver.mydomain.com
    authtype: (TYPE=(OS));(CLIENT ADDRESS=((PROTOCOL=beq)(HOST=192.168.0.34)));
    db user : SYS
    client  : sqlplus@myserver.mydomain.com (TNS V1-V3)
    time    : 10.03.2022 11:38:14,972147
    action  : LOGON
    sys priv: SYSDBA
    aud pol : PRIVILEGED_USER_LOGONS

    How to use dbms_metadata to generate DDL for a Unified Auditing Policy

    SQL> set long 5000
    SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')  from dual;
    
    DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')
    --------------------------------------------------------------------------------
    
       CREATE AUDIT POLICY "ORA_LOGON_FAILURES" ACTIONS  LOGON
    

    Wednesday, March 2, 2022

    How to check when a parameter was changed

    Thanks to Sandeep Singh for providing this very useful information.

    The following query will reveal any changes to a particular parameter:
    select instance_number instance, snap_id, time, parameter_name, old_value, new_value 
    from (
           select a.snap_id,to_char(end_interval_time,'DD-MON-YY HH24:MI') TIME, a.instance_number, parameter_name, value new_value,
           lag(parameter_name,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_pname,
           lag(value,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_value ,
           decode(substr(parameter_name,1,2),'__',2,1) calc_flag
    from dba_hist_parameter a, dba_Hist_snapshot b , v$instance v
    where a.snap_id=b.snap_id
    and   a.instance_number=b.instance_number
    and   parameter_name like nvl('&parameter_name',parameter_name)
    and   a.instance_number like nvl('&instance_number',v.instance_number)
    )
    where
    new_value != old_value
    order by 1,2;
    
    In my case the output was:
    INSTANCE  SNAP_ID  TIME             PARAMETER_NAME                 OLD_VALUE  NEW_VALUE
    1         52050    28-FEB-22 14:00  optimizer_adaptive_statistics  TRUE       FALSE

    Wednesday, February 23, 2022

    How to fix the pending_dst_session error when running the preupgrade script

    If you receive an error during the precheck phase of an upgrade from 18c to 19c which points to "pending_dst_sessions", you can follow Doc ID 1509653.1 "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1) and above using DBMS_DST".

    In short, execute the following:
    SELECT version FROM v$timezone_file;
    
    In my case the result was
    
       VERSION
    ----------
            32
    
    Then, check the following:
    col property_name format a40
    col value format a40
    set lines 200
    SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME LIKE 'DST_%'
    ORDER BY PROPERTY_NAME;
    
    Result:
    PROPERTY_NAME                            VALUE
    ---------------------------------------- --------------------
    DST_PRIMARY_TT_VERSION                   32
    DST_SECONDARY_TT_VERSION                 31
    DST_UPGRADE_STATE                        UPGRADE
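Note that DST_UPGRADE_STATE above is already UPGRADE, meaning an upgrade window is already open; that is exactly why the precheck complains about pending DST sessions, and why the script below only needs to finish the upgrade and end the window. If the state had instead been NONE, you would first have to open a window yourself, passing the target timezone file version from v$timezone_file:

```sql
-- Open a DST upgrade window targeting timezone file version 32
EXEC DBMS_DST.BEGIN_UPGRADE(32);
```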
    
    Then, create a file called "dst.sql" and add the following:
    col property_name format a40
    col value format a40
    set lines 200
    alter session set "_with_subquery"=materialize;
    alter session set "_simple_view_merging"=TRUE;
    set serveroutput on
    VAR numfail number
    BEGIN
    DBMS_DST.UPGRADE_DATABASE(:numfail,
    parallel => TRUE,
    log_errors => TRUE,
    log_errors_table => 'SYS.DST$ERROR_TABLE',
    log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
    error_on_overlap_time => FALSE,
    error_on_nonexisting_time => FALSE);
    DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
    END;
    /
    VAR fail number
    BEGIN
    DBMS_DST.END_UPGRADE(:fail);
    DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
    END;
    /
    
    SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME LIKE 'DST_%'
    ORDER BY PROPERTY_NAME;
    exit
    
    Execute the script:
    oracle@myserver.mydomain.com:[testdb01]# sqlplus / as sysdba @dst.sql
    
    Output:
    Connected to:
    Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
    Version 18.14.0.0.0
    
    
    Session altered.
    
    
    Session altered.
    
    Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
    Number of failures: 0
    Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
    Number of failures: 0
    Table list: "DVSYS"."SIMULATION_LOG$"
    Number of failures: 0
    Table list: "DVSYS"."AUDIT_TRAIL$"
    Number of failures: 0
    Failures:0
    
    PL/SQL procedure successfully completed.
    
    An upgrade window has been successfully ended.
    Failures:0
    
    PL/SQL procedure successfully completed.
    
    
    PROPERTY_NAME                            VALUE
    ---------------------------------------- --------------------
    DST_PRIMARY_TT_VERSION                   32
    DST_SECONDARY_TT_VERSION                 0
    DST_UPGRADE_STATE                        NONE
    
    SQL> exit
    
    The error should now be fixed, and you can run the prechecks for the upgrade once more.