Thursday, March 17, 2022

How to use active database duplication for creating a physical standby

When you use active database duplication for creating a physical standby database, make sure of the following:

1. Keep the pfile used to start the auxiliary instance to a minimum. Only the following lines are needed:
db_block_size=8192
db_name='proddb01'
Of course, the db_name must be identical to the db_name of the primary database.
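With this minimal pfile in place, start the auxiliary instance in nomount mode before connecting with RMAN (the pfile path below is just an example, adjust it to your environment):
SQL> startup nomount pfile='/sw/oracle/product/12201/dbs/initstby01.ora'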

2. In your duplication script, make sure the spfile contains a correct value for db_unique_name:
run{
        allocate channel c1 type disk;
        allocate channel c2 type disk;
        allocate channel c3 type disk;
        allocate channel c4 type disk;
        allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
SET db_unique_name='stby01'
SET db_domain='mydomain.no'
SET FAL_CLIENT='stby01'
SET FAL_SERVER='proddb01'
SET log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stby01'
SET log_archive_dest_2=''
SET control_files='/data1/oradata/stby01/control01.ctl','/data2/fra/stby01/control02.ctl'
SET STANDBY_FILE_MANAGEMENT='MANUAL'
SET LOG_ARCHIVE_MAX_PROCESSES='2'
SET local_listener='stby01.skead.no'
SET dg_broker_config_file1='/sw/oracle/product/12201/dbs/dr1stby01.dat'
SET dg_broker_config_file2='/sw/oracle/product/12201/dbs/dr2stby01.dat'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stby01,proddb01)'
SET diagnostic_dest='/sw/oracle'
SET db_recovery_file_dest='/data2/fra'
SET db_file_name_convert='/data1/oradata/proddb01','/data1/oradata/stby01'
SET log_file_name_convert='/data3/oradata/proddb01','/data3/oradata/stby01'
NOFILENAMECHECK;
}
Note that the directive "spfile" used right before the individual SET commands does not mean "create spfile". It means "fetch spfile from primary". The SET commands, however, will be written to your local spfile which will then be used to start the auxiliary instance once more, before the actual cloning starts.

When the duplication has finished, the value of the db_name parameter will be identical to that of the primary database, since the spfile is fetched from the primary database server over the network.
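Once the duplication is done and the standby instance is mounted, a quick sanity check against the new database could look like this:
select name, db_unique_name, database_role, open_mode
from v$database;

For a physical standby you would expect DATABASE_ROLE to show PHYSICAL STANDBY, while NAME still shows the db_name inherited from the primary.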

Friday, March 11, 2022

What constitutes "default auditing" under Unified Auditing?

Under Mixed-mode auditing and Pure Unified Auditing, the policies ORA_SECURECONFIG and ORA_LOGON_FAILURES are audited by default.

From the documentation:

ORA_SECURECONFIG audits the same default audit settings from Oracle Database Release 11g.
It tracks the use of a number of privileges such as ALTER ANY TABLE, GRANT ANY PRIVILEGE, and CREATE USER. The actions that it tracks include ALTER USER, CREATE ROLE, LOGON, and other commonly performed activities.


There are a couple of other policies that are not enabled by default but certainly worth auditing. These are:

  • ORA_DATABASE_PARAMETER audits commonly used Oracle Database parameter settings: ALTER DATABASE, ALTER SYSTEM, and CREATE SPFILE.
  • ORA_ACCOUNT_MGMT audits the commonly used user account and privilege settings: CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, ALTER ROLE, SET ROLE, GRANT, and REVOKE.

    Enabling these is as simple as executing the following SQL against the database as a privileged user:
    AUDIT POLICY ORA_DATABASE_PARAMETER;
    AUDIT POLICY ORA_ACCOUNT_MGMT;
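    To verify which unified audit policies are currently enabled, query the data dictionary:
    select policy_name, success, failure
    from audit_unified_enabled_policies
    order by policy_name;
    
    Both the default policies and any you have enabled manually, such as the two above, will show up here.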
    
    How to create a partial index on a table in PostgreSQL

    I was asked by one of my customers to advise on how to create an index for a new column called "sk_type", of datatype varchar, with only 3 distinct values in addition to NULL. Let's call them STRING1, STRING2, STRING3.

    The table is already partitioned on the column refyear. Here is the table DDL:
    CREATE TABLE event
    (
        id bigint NOT NULL DEFAULT nextval('hendelselager.hendelse_id_seq'::regclass),
        skpl character varying(8)  COLLATE pg_catalog."default",
        refyear integer NOT NULL,
        pnum bigint NOT NULL,
        ksystem character varying(128) COLLATE pg_catalog."default" NOT NULL,
        category character varying(128) COLLATE pg_catalog."default" NOT NULL,
        event character varying(128) COLLATE pg_catalog."default" NOT NULL,
        tstamp timestamp without time zone NOT NULL,
        ip_id character varying(128) COLLATE pg_catalog."default" NOT NULL,
        details jsonb,
        CONSTRAINT e_pkey PRIMARY KEY (refyear, event, id),
        CONSTRAINT uc_e01 UNIQUE (refyear, pnum, ksystem, category, event, ip_id)
    ) PARTITION BY RANGE (refyear);
    
    The distribution of values was expected to be very skewed right from the start:
    * STRING1 - 95%
    * STRING2 - 5%
    * STRING3 - less than 0.5%
    * NULL - less than 0.1%

    In the documentation I found that perhaps a partial index would be ideal in this situation. A partial index is defined as

    an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate.

    According to the documentation, a partial index could be useful in cases where you want to avoid indexing common values:

    Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up those queries that [actually] do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases

    In other words, we would only index rows whose value differs from 'STRING1'. Note that rows where sk_type is NULL are also left out of the index, since NOT (NULL = 'STRING1') evaluates to NULL rather than true.

    First, add the new column:
    ALTER TABLE event
    ADD sk_type character varying(8);
    
    Next, create the partial index. Here is the syntax I used in a sandbox environment:
    CREATE INDEX sk_type_idx ON event (refyear,sk_type)
     WHERE NOT (sk_type = 'STRING1');
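    A quick way to confirm that queries for the rare values can use the partial index is to inspect the execution plan (the year and 'STRING2' below are just sample values):
    EXPLAIN ANALYZE
    SELECT *
    FROM event
    WHERE refyear = 2022
      AND sk_type = 'STRING2';
    
    The planner will only consider the index when it can prove that the query's WHERE clause implies the index predicate; here sk_type = 'STRING2' implies NOT (sk_type = 'STRING1'), so an index scan is possible.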
    
    This approach comes with a caveat, which may or may not be acceptable:

    Observe that this kind of partial index requires that the common values be predetermined, so such partial indexes are best used for data distributions that do not change. The indexes can be recreated occasionally to adjust for new data distributions, but this adds maintenance effort.

    The official PostgreSQL 11 documentation can be found here.

    How to use adrci to show all messages related to shutdown and startup of a database

    These commands will give you all messages related to a shutdown and a startup of an Oracle database, respectively:
    adrci> show alert -p "MESSAGE_GROUP LIKE 'shutdown'"
    adrci> show alert -p "MESSAGE_GROUP LIKE 'startup'"
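    If the server hosts several databases, adrci will have multiple homes registered, and you need to point it at the right one first (the diagnostic path below is just an example):
    adrci> show homes
    adrci> set home diag/rdbms/testdb01/testdb01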
    
    Example output for a shutdown:
    Instance shutdown complete (OS id: 4058731)
    2022-03-10 18:00:18.042000 +01:00
    License high water mark = 863
    2022-03-10 18:00:19.115000 +01:00
    Instance shutdown complete (OS id: 3611031)
    2022-03-10 18:00:33.286000 +01:00
    Shutting down instance: further logons disabled
    2022-03-10 18:00:36.357000 +01:00
    License high water mark = 1
    Shutting down archive processes
    Archiving is disabled
    2022-03-10 18:00:37.519000 +01:00
    Shutting down archive processes
    Archiving is disabled
    2022-03-10 18:00:43.753000 +01:00
    Instance shutdown complete (OS id: 3611306)
    
    For the startup sequence, adrci will show you all the details of the startup; in other words, what you would normally see when tailing the alert log. The output is rather lengthy, so I am shortening it here. It should be familiar to most DBAs:
    2022-03-10 18:00:22.413000 +01:00
    Oracle instance running with ODM in PGA: Oracle Direct NFS ODM Library Version 6.0
    2022-03-10 18:00:25.600000 +01:00
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =249
    LICENSE_MAX_USERS = 0
    SYS auditing is enabled
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.14.0.0.0.
    ORACLE_HOME:    /sw/oracle/product/19c
    System name:    Linux
    Node name:      myserver.mydomain.com
    Release:        4.18.0-348.12.2.el8_5.x86_64
    Version:        #1 SMP Mon Jan 17 07:06:06 EST 2022
    Machine:        x86_64
    Using parameter settings in server-side spfile /sw/oracle/product/19c/dbs/spfiletestdb01.ora
    System parameters with non-default values:
    etc etc
    
    Since the output is very long, if you simply want to check for an indication of a database startup you could use this command instead:
    adrci> show alert -p "message_text like '%Starting ORACLE instance%'"
    
    which will give you a shorter list to examine, in my case showing that the database was restarted February 28th and March 10th:
    2022-02-28 07:46:54.505000 +01:00
    Starting ORACLE instance (restrict) (OS id: 4058595)
    2022-02-28 07:47:39.567000 +01:00
    Starting ORACLE instance (normal) (OS id: 3300)
    2022-03-10 18:00:22.421000 +01:00
    Starting ORACLE instance (restrict) (OS id: 3611160)
    2022-03-10 18:02:06.831000 +01:00
    Starting ORACLE instance (normal) (OS id: 3429)
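    If you would rather follow the alert log live, adrci can tail it for you:
    adrci> show alert -tail 50
    adrci> show alert -tail -f
    
    The first command prints the last 50 lines of the alert log; the second keeps following it, much like tail -f.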
    

    Thursday, March 10, 2022

    How to create a unified auditing policy that captures logons from privileged users

    By default, only unsuccessful logon attempts are audited by mandatory auditing under Unified Auditing.
    The policy used for this purpose is ORA_LOGON_FAILURES, and it audits both privileged and non-privileged users' attempts to log on to the database.

    If you want to audit all privileged users that have successfully logged onto the database, you need to create a new policy.

    Here is how:
     CREATE AUDIT POLICY PRIVILEGED_USER_LOGONS
       ACTIONS  LOGON
       WHEN 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') IN (''SYS'',''SYSTEM'')'
       EVALUATE PER SESSION; 
    
    Start using it:
    AUDIT POLICY PRIVILEGED_USER_LOGONS;
    
    The resulting audit record can be found immediately afterwards, with this query against UNIFIED_AUDIT_TRAIL:
    select os_username "os user",
           userhost "host",
           authentication_type "authtype",
           dbusername "db user",
           client_program_name "client",
           event_timestamp "time",
           action_name "action",
           system_privilege_used "sys priv",
           unified_audit_policies "aud pol"
    from UNIFIED_AUDIT_TRAIL 
    where event_Timestamp = (select max(event_Timestamp) from UNIFIED_AUDIT_TRAIL)
    order by event_timestamp desc;
    
    Result:
    os user host authtype db user client time action sys priv aud pol
    oracle myserver.mydomain.com (TYPE=(OS));(CLIENT ADDRESS=((PROTOCOL=beq)(HOST=192.168.0.34))); SYS sqlplus@myserver.mydomain.com (TNS V1-V3) 10.03.2022 11:38:14,972147 LOGON SYSDBA PRIVILEGED_USER_LOGONS
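    If you later want to retire the policy, disable it before dropping it:
    NOAUDIT POLICY PRIVILEGED_USER_LOGONS;
    DROP AUDIT POLICY PRIVILEGED_USER_LOGONS;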

    How to use dbms_metadata to generate DDL for a Unified Auditing Policy

    SQL> set long 5000
    SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')  from dual;
    
    DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')
    --------------------------------------------------------------------------------
    
       CREATE AUDIT POLICY "ORA_LOGON_FAILURES" ACTIONS  LOGON
    

    Wednesday, March 2, 2022

    How to check when a parameter was changed

    Thanks to Sandeep Singh for providing this very useful information.

    The following query will reveal any changes to a particular parameter:
    select instance_number instance, snap_id, time, parameter_name, old_value, new_value 
    from (
           select a.snap_id,to_char(end_interval_time,'DD-MON-YY HH24:MI') TIME, a.instance_number, parameter_name, value new_value,
           lag(parameter_name,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_pname,
           lag(value,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_value ,
           decode(substr(parameter_name,1,2),'__',2,1) calc_flag
    from dba_hist_parameter a, dba_hist_snapshot b, v$instance v
    where a.snap_id=b.snap_id
    and   a.instance_number=b.instance_number
    and   parameter_name like nvl('&parameter_name',parameter_name)
    and   a.instance_number like nvl('&instance_number',v.instance_number)
    )
    where
    new_value != old_value
    order by 1,2;
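    When run in SQL*Plus, the query prompts for the two substitution variables; you can also define them up front (the values below are examples):
    define parameter_name='optimizer_adaptive_statistics'
    define instance_number=1
    
    Pressing ENTER at either prompt makes the query match all parameters or all instances, thanks to the nvl wrappers.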
    
    In my case the output was:
    INSTANCE  SNAP_ID  TIME             PARAMETER_NAME                 OLD_VALUE  NEW_VALUE
    1         52050    28-FEB-22 14:00  optimizer_adaptive_statistics  TRUE       FALSE