Tuesday, May 3, 2022

The cause for and solution to the error message "Database mount ID mismatch" in your previously configured standby database

If you have a previously configured standby database that you have converted to a freestanding database, no longer part of a Data Guard setup, you may see errors like this in the alert log:
2022-05-03T12:52:33.896905+02:00
RFS[1332]: Assigned to RFS process (PID:128748)
RFS[1332]: Database mount ID mismatch [0xb40d4ed8:0xb42c30e2] (3020771032:3022794978)
RFS[1332]: Not using real application clusters
Reason:
Even though you have used the Data Guard broker to stop log shipping and have activated the standby database (in effect making it read-write), the broker will not stop the previously configured primary database from shipping logs to its previously configured standby destination.

Solution:
Cut off the log shipping from the previously configured primary database completely by either

1) changing the value of log_archive_dest_state_2 from enabled to defer:
alter system set log_archive_dest_state_2=defer scope=both;
or by

2) removing the value of log_archive_dest_2 altogether:
alter system set log_archive_dest_2='' scope=both;
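
Afterwards you can confirm that the destination is no longer in use by querying v$archive_dest on the previously configured primary (a suggested check, not part of the fix itself); with option 1 the status should show DEFERRED:
select dest_id, status, destination from v$archive_dest where dest_id = 2;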

Friday, April 22, 2022

How to set up replication between two PostgreSQL servers

Compared to Oracle, setting up a PostgreSQL physical standby database is quite simple. With two identically provisioned PostgreSQL 13 servers created, perform the steps below.
Master: test01-sandbox-pgsql.mydomain.com
Slave: test02-sandbox-pgsql.mydomain.com

1. Configure the master database cluster
As the postgres software owner, execute the following:
echo "CREATE ROLE replicate WITH REPLICATION LOGIN password 'mypassword';" | psql
echo "grant app_user to replicate;" | psql
echo "alter system set wal_level = replica;" | psql
echo "select pg_reload_conf();" | psql
echo "alter system set synchronous_commit = local;" | psql
echo "alter system set track_commit_timestamp = on;" |  psql
2. Restart the postgres server
systemctl restart postgresql-13.service
3. Configure the slave database cluster

As the root user, stop the postgres server
systemctl stop postgresql-13.service
4. Remove any files created in the deployment of the slave. They will be replaced anyway
rm -rf /data/*
5. Take a base backup of the master database cluster
pg_basebackup -h test01-sandbox-pgsql.mydomain.com -D /data/ -P -U replicate --wal-method=stream
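
As a side note (not the method used in this post), pg_basebackup can write the standby configuration for you: the -R (--write-recovery-conf) flag creates standby.signal and appends the primary_conninfo setting to postgresql.auto.conf in the target directory, covering most of steps 6 and 7:
pg_basebackup -h test01-sandbox-pgsql.mydomain.com -D /data/ -P -U replicate --wal-method=stream -R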
6. Set some parameters
echo "hot_standby = on" >> /pgconfig/postgresql.auto.conf 

echo "
primary_conninfo      = 'host=masterhost.mydomain.com port=5432 user=replicate password=mypassword'
promote_trigger_file  = '/tmp/MasterNow'
" >  /pgconfig/postgresql.conf  
7. Create a file necessary for the master/slave role determination
touch /data/standby.signal
8. Start the postgres server
systemctl start postgresql-13.service
9. Verification checks

On the master, to check that everything works fine, use the dictionary view pg_stat_replication, which, according to the documentation, contains "one row per WAL sender process, showing statistics about replication to that sender's connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers."
 echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |          client_hostname
-----------+-----------+------------+---------------+-----------------------------------
 replicate | streaming | async      |             0 | test02-sandbox-pgsql.mydomain.com
(1 row)
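
If you want more detail than just the state, the same view also exposes lag columns; a suggested extra query:
echo 'select client_hostname, write_lag, flush_lag, replay_lag from pg_stat_replication;' | psql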
To check the slave status, use the dictionary view pg_stat_wal_receiver. According to the documentation, the view contains "only one row, showing statistics about the WAL receiver from that receiver's connected server":
psql
psql (13.6)
Type "help" for help.

postgres=# \x
Expanded display is on.
postgres=# select status,sender_host,conninfo from pg_stat_wal_receiver;
-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
status      | streaming
sender_host | test01-sandbox-pgsql.mydomain.com
conninfo    | user=replicate password=******** channel_binding=prefer dbname=replication host=test01-sandbox-pgsql.mydomain.com port=5432 fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any
Check that recovery is in progress:
 echo 'SELECT pg_is_in_recovery();' | psql
 pg_is_in_recovery
-------------------
 t
(1 row)
The equivalent query on the master node would state "f" (false). Read more about
  • pg_stat_replication
  • pg_stat_wal_receiver

    Note that some important changes to replication were introduced in PostgreSQL version 12
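
When the slave eventually needs to take over, the promote_trigger_file set in step 6 gives one way to do it: creating that file on the slave triggers promotion. A minimal sketch (pg_promote() or pg_ctl promote are alternatives):
# on the slave, matching promote_trigger_file = '/tmp/MasterNow' from step 6
touch /tmp/MasterNow
# or, from psql on the slave:
echo 'select pg_promote();' | psql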
    Wednesday, April 20, 2022

    Oracle vs PostgreSQL replication terminology

    Functionality                                                 Oracle                    PostgreSQL
    ------------------------------------------------------------  ------------------------  ------------
    A standby server that can accept connections and serves       Active Data Guard         Hot Standby
    read-only queries
    A standby server that cannot be connected to until it is      Physical Standby Server   Warm Standby
    promoted to a master server
    A data-modifying transaction is not considered committed      Maximum Availability      Synchronous
    until all servers have committed the transaction
    Allow some delay between the time of a commit and its         Maximum Performance       Asynchronous
    propagation to the other servers; some transactions might be
    lost in the switch to a backup server, and load-balanced
    servers might return slightly stale results
    Do not allow any transactions to be unprotected at any time   Maximum Protection

  • PostgreSQL 11 documentation here
  • Oracle 12cR2 documentation here
    Wednesday, April 6, 2022

    Wrappers for pg_dump and pg_restore

    Export:

    #!/bin/bash
    # pg_dump dumps a database as a text file or to other formats.
    export SCRIPT_NAME=`basename $0`
    export HOST=`uname -n`
    export TS=`date +\%m.\%d.\%y\_%H_%M_%S`
    export RUN_DATE=`date +\%m.\%d.\%y`
    export RUN_DIR=.
    export LOG_DIR=/tmp
    export DUMPDIR=/pgdata/export
    export JOB_NAME="ExportPGDB"
    export VERSION=1.0.0
    export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
    
    exec &> ${LOGFILE}
    echo "Starting export job at " `date`
    pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
    echo "Ending job at " `date`
    exit
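
    After the export has finished, you can verify the contents of the dump with pg_restore -l before moving on to the import; a quick check (not part of the wrapper), using the dump directory from the script:
    pg_restore -l /pgdata/export/mussikkdbexport | head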
    

    Import:

    #!/bin/bash
    # pg_restore - restore a PostgreSQL database from an archive file created by pg_dump
    export SCRIPT_NAME=`basename $0`
    export HOST=`uname -n`
    export TS=`date +\%m.\%d.\%y\_%H_%M_%S`
    export RUN_DATE=`date +\%m.\%d.\%y`
    export RUN_DIR=.
    export LOG_DIR=/tmp
    export DUMPDIR=/pgdata/export
    export JOB_NAME="ImportPGDB"
    export VERSION=1.0.0
    export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
    exec &> ${LOGFILE}
    echo "Starting job at " `date`
    pg_restore ${DUMPDIR}/mussikkdbexport -C -c -d postgres -j 4 -v
    pg_restore ${DUMPDIR}/mussikkdbexport -c -d musicdb -j 4 -v
    echo "Ending job at " `date`
    exit
    


    More articles on pg_dump and pg_restore:

  • How to export and import a database in PostgreSQL
  • How to export and import a schema in PostgreSQL
  • How to list the contents of a custom format PostgreSQL export file
  • How to export and import a schema using the directory format
  • How to export a single table using different format in PostgreSQL
    Tuesday, April 5, 2022

    Workaround for Automatic Diagnostic Repository Errors when executing "validate database" through Data Guard Broker

    If you have a Data Guard setup and are using the broker, you may see the following error when validating your setup before a switchover:
    DGMGRL> validate database stb01
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stb01:     Off
    
      Managed by Clusterware:
        proddb01:  NO
        stb01:     NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
      Automatic Diagnostic Repository Errors:
        Error                       proddb01 stb01
        System data file missing    NO       YES
        User data file missing      NO       YES
    
    This problem and its solution are outlined in Doc ID 2300040.1 "Known issues when using "Validate database" DGMGRL command" at Oracle Support.

    Cause: The issue occurs because the old health check messages were not purged properly and the command VALIDATE DATABASE signals it found a failed check.

    To get rid of this warning, rename the file HM_FINDING.ams in the ADR rdbms metadata folder, or move it to another folder. This is what I did on my standby server:
     cd /u01/oracle/diag/
    find . -name "HM_FINDING.ams"
    ./rdbms/stb01/stb01/metadata/HM_FINDING.ams
    ./rdbms/stb01/proddb01/metadata/HM_FINDING.ams
    ./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams
    
    Rename or remove the files listed above and execute the "validate database" command in dgmgrl again. The message should now be gone for good.
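
    For example, a minimal sketch of the rename, using the paths found above (the .old suffix is an arbitrary choice):
    cd /u01/oracle/diag/
    mv ./rdbms/stb01/stb01/metadata/HM_FINDING.ams ./rdbms/stb01/stb01/metadata/HM_FINDING.ams.old
    mv ./rdbms/stb01/proddb01/metadata/HM_FINDING.ams ./rdbms/stb01/proddb01/metadata/HM_FINDING.ams.old
    mv ./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams ./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams.old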

    Note that the file named HM_FINDING.ams will most likely reappear immediately after deletion. But this new copy will not cause the Data Guard Broker to throw warnings.

    Tuesday, March 29, 2022

    How to connect via sqlplus using the entire connect string

    To connect as sys using the entire connect string, use the following syntax:
    sqlplus /nolog

    SQL> connect sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=proddb01.mydomain.com)(PORT=1534))(CONNECT_DATA=(SERVICE_NAME=proddb01.mydomain.com)(INSTANCE_NAME=proddb01)(SERVER=DEDICATED)))' as sysdba
    Result:
    Enter password:
    Connected.

    SQL> select name,database_role from v$database;

    NAME      DATABASE_ROLE
    --------- ----------------
    proddb01  PRIMARY
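
    As a side note, the same connection can usually be made with the shorter easy connect syntax, assuming the listener accepts it, using the same host, port and service name as above:
    sqlplus /nolog

    SQL> connect sys@'proddb01.mydomain.com:1534/proddb01.mydomain.com' as sysdba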

    Thursday, March 17, 2022

    How to solve the broker message "Warning: standby redo logs not configured for thread 1 on primary"

    DGMGRL> validate database 'proddb01';
    
      Database Role:    Primary database
    
      Ready for Switchover:  Yes
    
      Flashback Database Status:
        proddb01:  Off
    
      Managed by Clusterware:
        proddb01:  NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
    DGMGRL> validate database 'stby01';
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stby01:    Off
    
      Managed by Clusterware:
        proddb01:  NO
        stby01:    NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
      Current Log File Groups Configuration:
        Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
                  (proddb01)              (stby01)
        1         3                       2                       Insufficient SRLs
    
      Future Log File Groups Configuration:
        Thread #  Online Redo Log Groups  Standby Redo Log Groups Status
                  (stby01)              (proddb01)
        1         3                       0                       Insufficient SRLs
        Warning: standby redo logs not configured for thread 1 on proddb01
    
    The solution was found in Doc ID 1956103.1 "Warning: standby redo logs not configured for thread on "

    This is how I corrected the situation:

    On the primary, check the status of the standby redo log files:
    set lines 200
    col member format a50
    select s.group#,s.thread#,s.sequence#,s.archived,s.status,f.member, f.type
    from v$standby_log s, v$logfile f
    where f.type = 'STANDBY'
    and s.group# = f.group#;
    
    
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo01.log               STANDBY
             5          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo02.log               STANDBY
             6          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo03.log               STANDBY
             7          0          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo04.log               STANDBY
    
    Indeed, there are no standby redo logs assigned to thread 1.

    First, set standby_file_management to MANUAL on both primary and standby, if not already done.
    alter system set standby_file_management=MANUAL;
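
    To confirm the setting on each database before proceeding, a quick check of my own (not from the note):
    show parameter standby_file_management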
    

    Still on the primary, drop the standby redo logs:
    alter database drop logfile group 4;
    
    Database altered.
    
    alter database drop logfile group 5;
    
    Database altered.
    
    alter database drop logfile group 6;
    
    Database altered.
    
    alter database drop logfile group 7;
    
    Database altered.
    
    Still on the primary, add new standby redo logs:
    alter database add standby logfile thread 1 group 4('/u03/oradata/proddb01/stb_redo01.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 5('/u03/oradata/proddb01/stb_redo02.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 6('/u03/oradata/proddb01/stb_redo03.log') size 2048M REUSE;
    alter database add standby logfile thread 1 group 7('/u03/oradata/proddb01/stb_redo04.log') size 2048M REUSE;
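
    The usual recommendation is to have one more standby redo log group per thread than there are online redo log groups, each sized like the online redo logs, which is why four SRLs are created for three ORL groups here. To check the online redo log size, a query of my own (not from the note):
    select group#, thread#, bytes/1024/1024 as mb from v$log;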
    
    Run the query above again, and you should see a thread number assigned to each group:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo01.log               STANDBY
             5          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo02.log               STANDBY
             6          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo03.log               STANDBY
             7          1          0 YES UNASSIGNED /u03/oradata/proddb01/stb_redo04.log               STANDBY
    
    On the standby database, stop managed recovery:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    
    Check the status:
         GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1         46 YES ACTIVE     /u03/oradata/stby01/stb_redo01.log                 STANDBY
             5          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo02.log                 STANDBY
             6          0          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo03.log                 STANDBY
             7          0          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo04.log                 STANDBY        
    
    Drop the two standby redo log groups that have not been assigned a thread number:
    alter database drop logfile group 6;
    
    Database altered.
    
    alter database drop logfile group 7;
    
    Database altered.
    
    Add two new groups:
    alter database add standby logfile thread 1 group 6('/u03/oradata/stby01/stb_redo03.log') size 2048M REUSE;
    
    Database altered.
    
    alter database add standby logfile thread 1 group 7('/u03/oradata/stby01/stb_redo04.log')  size 2048M REUSE;
    
    Database altered.
    
    Start managed recovery again:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    Database altered.
    
    Verify by running the same query once more:
        GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
    ---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
             4          1         46 YES ACTIVE     /u03/oradata/stby01/stb_redo01.log               STANDBY
             5          1          0 NO  UNASSIGNED /u03/oradata/stby01/stb_redo02.log               STANDBY
             6          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo03.log               STANDBY
             7          1          0 YES UNASSIGNED /u03/oradata/stby01/stb_redo04.log               STANDBY
    
    The warning is now gone from the output of the Data Guard broker:
    DGMGRL> validate database 'stby01';
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stby01:  Off
    
      Managed by Clusterware:
        proddb01:  NO
        stby01:  NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    

    Finally, remember to set standby_file_management back to AUTO on both the primary and the standby database:
    alter system set standby_file_management=AUTO;
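
    As a final check, you can rerun the validation in dgmgrl on both databases and confirm that the standby redo log warning is gone on both sides:
    DGMGRL> validate database 'proddb01';
    DGMGRL> validate database 'stby01';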