Wednesday, April 6, 2022

Wrappers for pg_dump and pg_restore

Export:

#!/bin/bash
# pg_dump dumps a database as a text file or in other formats.
export SCRIPT_NAME=$(basename "$0")
export HOST=$(uname -n)
export TS=$(date +%m.%d.%y_%H_%M_%S)
export RUN_DATE=$(date +%m.%d.%y)
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ExportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log

exec &> "${LOGFILE}"
echo "Starting export job at $(date)"
pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
echo "Ending export job at $(date)"
exit 0
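The wrapper above exits unconditionally, so a failed pg_dump goes unnoticed in the exit status. A small sketch of how the status could be captured (my addition; "false" stands in for the real pg_dump call so the example is self-contained):

```shell
#!/bin/sh
# Hypothetical error handling, not part of the original wrapper.
run_export() {
    false   # stand-in for: pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
}
if run_export; then
    rc=0
    echo "Export completed"
else
    rc=$?
    echo "Export FAILED with status $rc"
fi
```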

Import:

#!/bin/bash
# pg_restore restores a PostgreSQL database from an archive file created by pg_dump.
export SCRIPT_NAME=$(basename "$0")
export HOST=$(uname -n)
export TS=$(date +%m.%d.%y_%H_%M_%S)
export RUN_DATE=$(date +%m.%d.%y)
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ImportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
exec &> "${LOGFILE}"
echo "Starting import job at $(date)"
# First pass: -C (re)creates the musicdb database while connected to postgres
pg_restore ${DUMPDIR}/mussikkdbexport -C -c -d postgres -j 4 -v
# Second pass: -c drops and recreates the objects inside musicdb itself
pg_restore ${DUMPDIR}/mussikkdbexport -c -d musicdb -j 4 -v
echo "Ending import job at $(date)"
exit 0
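A possible refinement (my addition, not part of the original wrapper) is to check that the export directory actually exists before starting the restore. The path below is deliberately nonexistent so the guard triggers:

```shell
#!/bin/sh
# Hypothetical pre-flight check before calling pg_restore.
DUMPDIR=/pgdata/export_does_not_exist_demo
if [ -d "${DUMPDIR}/mussikkdbexport" ]; then
    echo "Dump directory found - starting restore"
else
    echo "Dump directory ${DUMPDIR}/mussikkdbexport not found - aborting"
fi
```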


More articles on pg_dump and pg_restore:

  • How to export and import a database in PostgreSQL
  • How to export and import a schema in PostgreSQL
  • How to list the contents of a custom format PostgreSQL export file
  • How to export and import a schema using the directory format
  • How to export a single table using different format in PostgreSQL
    Tuesday, April 5, 2022

    Workaround for Automatic Diagnostic Repository Errors when executing "validate database" through Data Guard Broker

    If you have a Data Guard setup and are using the broker, you may see the following error when validating your setup before a switchover:
    DGMGRL> validate database stb01
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stb01:     Off
    
      Managed by Clusterware:
        proddb01:  NO
        stb01:     NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
      Automatic Diagnostic Repository Errors:
        Error                       proddb01 stb01
        System data file missing    NO       YES
        User data file missing      NO       YES
    
    This problem and its solution are outlined in Doc ID 2300040.1 "Known issues when using "Validate database" DGMGRL command" at Oracle Support.

    Cause: The issue occurs because the old health check messages were not purged properly and the command VALIDATE DATABASE signals it found a failed check.

    To get rid of this warning, rename the HM_FINDING.ams file in the ADR rdbms metadata folder, or move it to another folder. This is what I did on my standby server:
    cd /u01/oracle/diag/
    find . -name "HM_FINDING.ams"
    ./rdbms/stb01/stb01/metadata/HM_FINDING.ams
    ./rdbms/stb01/proddb01/metadata/HM_FINDING.ams
    ./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams
    
    Rename or remove the files listed above and execute the "validate database" command in dgmgrl again. The message should now be gone for good.

    Note that the file named HM_FINDING.ams will most likely reappear immediately after deletion. But this new copy will not cause the Data Guard Broker to throw warnings.
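    The rename step can also be scripted. The sketch below (my addition) builds a throwaway directory tree so it is safe to run anywhere; in practice you would run the find/mv against your diagnostic_dest:

```shell
#!/bin/sh
# Sketch: rename every HM_FINDING.ams under a diag tree, keeping a .old copy.
# The temporary tree stands in for /u01/oracle/diag on a real server.
DIAG=$(mktemp -d)
mkdir -p "$DIAG/rdbms/stb01/stb01/metadata"
touch "$DIAG/rdbms/stb01/stb01/metadata/HM_FINDING.ams"
find "$DIAG" -name "HM_FINDING.ams" | while read -r f; do
    mv "$f" "${f}.old"
done
renamed=$(find "$DIAG" -name "HM_FINDING.ams.old" | wc -l)
echo "Renamed $renamed file(s)"
rm -rf "$DIAG"
```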

    Tuesday, March 29, 2022

    How to connect via sqlplus using the full connect string

    To connect as sys using the entire connect string, use the following syntax:
    sqlplus /nolog

    SQL> connect sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=proddb01.mydomain.com)(PORT=1534))(CONNECT_DATA=(SERVICE_NAME=proddb01.mydomain.com)(INSTANCE_NAME=proddb01)(SERVER=DEDICATED)))' as sysdba
    Result:
    Enter password:
    Connected.

    SQL> select name,database_role from v$database;

    NAME      DATABASE_ROLE
    --------- ----------------
    proddb01  PRIMARY
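    Because the descriptor is long and easy to mistype, it can help to assemble it from its parts. A hypothetical helper sketch (my addition; nothing here actually connects, and host, port and service are the values from the example above):

```shell
#!/bin/sh
# Build the connect command from its components instead of one long line.
HOST=proddb01.mydomain.com
PORT=1534
SERVICE=proddb01.mydomain.com
DESC="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=${HOST})(PORT=${PORT}))(CONNECT_DATA=(SERVICE_NAME=${SERVICE})))"
CMD="connect sys@'${DESC}' as sysdba"
echo "$CMD"
```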

    Thursday, March 17, 2022

    How to solve the broker message "Warning: standby redo logs not configured for thread 1 on primary"

    DGMGRL> validate database 'proddb01';
    
      Database Role:    Primary database
    
      Ready for Switchover:  Yes
    
      Flashback Database Status:
        proddb01:  Off
    
      Managed by Clusterware:
        proddb01:  NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
    DGMGRL> validate database 'stby01';
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stby01:    Off
    
      Managed by Clusterware:
        proddb01:  NO
        stby01:    NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
      Current Log File Groups Configuration:
        Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
                  (proddb01)              (stby01)
        1         3                       2                       Insufficient SRLs
    
      Future Log File Groups Configuration:
        Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
                  (stby01)              (proddb01)
        1         3                       0                       Insufficient SRLs
        Warning: standby redo logs not configured for thread 1 on proddb01
    
    The solution was found in Doc ID 1956103.1 "Warning: standby redo logs not configured for thread on "

    This is how I corrected the situation:

    On the primary, check the status of the standby redo log files:
    set lines 200
    col member format a50
    select s.group#,s.thread#,s.sequence#,s.archived,s.status,f.member, f.type
    from v$standby_log s, v$logfile f
    where f.type = 'STANDBY'
    and s.group# = f.group#;
    
    
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo01.log               STANDBY
             5          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo02.log               STANDBY
             6          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo03.log               STANDBY
             7          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo04.log               STANDBY
    
    Indeed, there are no standby redo logs assigned to thread 1.

    First, set standby_file_management to MANUAL on both primary and standby, if not already done.
    alter system set standby_file_management=MANUAL;
    

    Still on the primary, drop the standby redo logs:
    alter database drop logfile group 4;
    
    Database altered.
    
    alter database drop logfile group 5;
    
    Database altered.
    
    alter database drop logfile group 6;
    
    Database altered.
    
    alter database drop logfile group 7;
    
    Database altered.
    
    Still on the primary, add new standby redo logs:
    alter database add standby logfile thread 1 group 4('/u03/oradata/proddb01/stb_redo01.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 5('/u03/oradata/proddb01/stb_redo02.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 6('/u03/oradata/proddb01/stb_redo03.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 7('/u03/oradata/proddb01/stb_redo04.log') size 2048M REUSE;
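    The four statements follow an obvious pattern, so they can also be generated. A sketch (my addition; paths and size match the example above). As a rule of thumb, configure one more standby redo log group per thread than there are online redo log groups:

```shell
#!/bin/sh
# Generate the ADD STANDBY LOGFILE statements for groups 4-7 on thread 1.
stmts=$(i=0; for grp in 4 5 6 7; do
    i=$((i + 1))
    printf "alter database add standby logfile thread 1 group %d('/u03/oradata/proddb01/stb_redo%02d.log') size 2048M REUSE;\n" "$grp" "$i"
done)
echo "$stmts"
```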
    
    Run the query above again, and you should see a thread number assigned to each group:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo01.log               STANDBY
             5          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo02.log               STANDBY
             6          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo03.log               STANDBY
             7          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo04.log               STANDBY
    
    On the standby database, stop managed recovery:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    
    Check the status:
         GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1         46 YES ACTIVE     /u03/oradata/stby01/stb_redo01.log                 STANDBY
             5          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo02.log                 STANDBY
             6          0          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo03.log                 STANDBY
             7          0          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo04.log                 STANDBY        
    
    Drop the two standby redo log groups that have not been assigned a thread number:
    alter database drop logfile group 6;
    
    Database altered.
    
    alter database drop logfile group 7;
    
    Database altered.
    
    Add two new groups:
    alter database add standby logfile thread 1 group 6('/u03/oradata/stby01/stb_redo03.log') size 2048M REUSE;
    
    Database altered.
    
    alter database add standby logfile thread 1 group 7('/u03/oradata/stby01/stb_redo04.log')  size 2048M REUSE;
    
    Database altered.
    
    Start managed recovery again:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    Database altered.
    
    Verify by running the same query once more:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1         46 YES ACTIVE     /u03/oradata/stby01/stb_redo01.log               STANDBY
             5          1          0 NO  UNASSIGNED /u03/oradata/stby01/stb_redo02.log               STANDBY
             6          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo03.log               STANDBY
             7          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo04.log               STANDBY
    
    The warning is now gone from the output of data guard broker:
    DGMGRL> validate database 'stby01';
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stby01:  Off
    
      Managed by Clusterware:
        proddb01:  NO
        stby01:  NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    

    Finally, remember to adjust standby_file_management to AUTO on both primary and standby databases:
    alter system set standby_file_management=AUTO;
    

    How to use active database duplication for creating a physical standby

    When you use active database duplication for creating a physical standby database, make sure of the following:

    1. Keep the pfile used to start the auxiliary instance to a minimum. Only the following lines are needed:
    db_block_size=8192
    db_name='proddb01'
    
    Of course, the db_name must be identical to the db_name of the primary database.
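    The two-line pfile can be written like this (a sketch; the temporary file name stands in for something like $ORACLE_HOME/dbs/initstby01.ora, which is an assumption on my part):

```shell
#!/bin/sh
# Write the minimal auxiliary pfile described above to a scratch file.
PFILE=$(mktemp)
cat > "$PFILE" <<'EOF'
db_block_size=8192
db_name='proddb01'
EOF
cat "$PFILE"
```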

    2. In your duplication script, make sure the spfile contains a correct value for db_unique_name:
    run{
            allocate channel c1 type disk;
            allocate channel c2 type disk;
            allocate channel c3 type disk;
            allocate channel c4 type disk;
            allocate auxiliary channel stby type disk;
    duplicate target database for standby from active database
    spfile
    SET db_unique_name='stby01'
    SET db_domain='mydomain.no'
    SET FAL_CLIENT='stby01'
    SET FAL_SERVER='proddb01'
    SET log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stby01'
    SET log_archive_dest_2=''
    SET control_files='/data1/oradata/stby01/control01.ctl','/data2/fra/stby01/control02.ctl'
    SET STANDBY_FILE_MANAGEMENT='MANUAL'
    SET LOG_ARCHIVE_MAX_PROCESSES='2'
    SET local_listener='stby01.skead.no'
    SET dg_broker_config_file1='/sw/oracle/product/12201/dbs/dr1stby01.dat'
    SET dg_broker_config_file2='/sw/oracle/product/12201/dbs/dr2stby01.dat'
    SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stby01,proddb01)'
    SET diagnostic_dest='/sw/oracle'
    SET db_recovery_file_dest='/data2/fra'
    SET db_file_name_convert='/data1/oradata/proddb01','/data1/oradata/stby01'
    SET log_file_name_convert='/data3/oradata/proddb01','/data3/oradata/stby01'
    NOFILENAMECHECK;
    }
    
    Note that the directive "spfile" used right before the individual SET commands does not mean "create spfile"; it means "fetch the spfile from the primary". The SET commands, however, will be written to your local spfile, which is then used to restart the auxiliary instance before the actual cloning starts.

    When the duplication has finished, the value of the db_name parameter will be identical to that of the primary database, since the spfile is fetched from the primary database server over the network.

    Friday, March 11, 2022

    What constitutes "default auditing" under Unified Auditing?

    Under Mixed-mode auditing and Pure Unified Auditing, the policies ORA_SECURECONFIG and ORA_LOGON_FAILURES are audited by default.

    From the documentation:

    ORA_SECURECONFIG audits the same default audit settings from Oracle Database Release 11g.
    It tracks the use of a number of privileges such as ALTER ANY TABLE, GRANT ANY PRIVILEGE, and CREATE USER. The actions that it tracks include ALTER USER, CREATE ROLE, LOGON, and other commonly performed activities.


    There are a couple of other policies that are not enabled by default but certainly worth auditing. These are:

  • ORA_DATABASE_PARAMETER audits commonly used Oracle Database parameter settings: ALTER DATABASE, ALTER SYSTEM, and CREATE SPFILE.
  • ORA_ACCOUNT_MGMT audits the commonly used user account and privilege settings: CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, ALTER ROLE, SET ROLE, GRANT, and REVOKE.

    Enabling these is as simple as executing the following SQL against the database as a privileged user:
    AUDIT POLICY ORA_DATABASE_PARAMETER;
    AUDIT POLICY ORA_ACCOUNT_MGMT;
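    To verify which policies are actually enabled afterwards, the data dictionary view AUDIT_UNIFIED_ENABLED_POLICIES can be queried; a sketch (column names may vary slightly between releases, so check your version's documentation):

```sql
-- List the enabled unified audit policies and whom they apply to.
SELECT user_name, policy_name
FROM   audit_unified_enabled_policies
ORDER  BY policy_name;
```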
    
    How to create a partial index on a table in PostgreSQL

    I was asked by one of my customers to advise on how to create an index for a new column called "sk_type", of datatype varchar, with only 3 distinct values in addition to NULL. Let's call them STRING1, STRING2, STRING3.

    The table is already partitioned on the refyear column. Here is the table DDL:
    CREATE TABLE event
    (
        id bigint NOT NULL DEFAULT nextval('hendelselager.hendelse_id_seq'::regclass),
        skpl character varying(8)  COLLATE pg_catalog."default",
        refyear integer NOT NULL,
        pnum bigint NOT NULL,
        ksystem character varying(128) COLLATE pg_catalog."default" NOT NULL,
        category character varying(128) COLLATE pg_catalog."default" NOT NULL,
        event character varying(128) COLLATE pg_catalog."default" NOT NULL,
        tstamp timestamp without time zone NOT NULL,
        ip_id character varying(128) COLLATE pg_catalog."default" NOT NULL,
        details jsonb,
        CONSTRAINT e_pkey PRIMARY KEY (refyear, event, id),
        CONSTRAINT uc_e01 UNIQUE (refyear, pnum, ksystem, category, event, ip_id)
    ) PARTITION BY RANGE (refyear);
    
    The distribution of values was expected to be heavily skewed right from the start:
    * STRING1 - 95%
    * STRING2 - 5%
    * STRING3 - < 0.5%
    * NULL - < 0.1%

    In the documentation I found that perhaps a partial index would be ideal in this situation. A partial index is defined as

    an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate.

    According to the documentation, a partial index could be useful in cases where you want to avoid indexing common values:

    Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up those queries that [actually] do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases

    In other words, we would only index rows which have column value different from 'STRING1'.

    First, add the new column:
    ALTER TABLE event
    ADD sk_type character varying(8);
    
    Next, create the partial index. Here is the syntax I used in a sandbox environment:
    CREATE INDEX sk_type_idx ON event (refyear,sk_type)
     WHERE NOT (sk_type = 'STRING1');
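    One detail worth being aware of (my addition, not from the original analysis): the predicate NOT (sk_type = 'STRING1') also excludes rows where sk_type is NULL, because NULL = 'STRING1' evaluates to NULL. If the NULL rows should be indexed too, IS DISTINCT FROM treats NULL as a comparable value:

```sql
-- Variant of the partial index that also covers rows where sk_type IS NULL:
CREATE INDEX sk_type_idx ON event (refyear, sk_type)
 WHERE sk_type IS DISTINCT FROM 'STRING1';
```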
    
    This approach comes with a caveat, which may or may not be acceptable:

    Observe that this kind of partial index requires that the common values be predetermined, so such partial indexes are best used for data distributions that do not change. The indexes can be recreated occasionally to adjust for new data distributions, but this adds maintenance effort.

    The official PostgreSQL 11 documentation can be found here.