Wednesday, May 11, 2022

Find available directory objects on your server

set lines 200
col directory_name format a30
col directory_path format a60
select directory_name,directory_path
from dba_directories;
exit
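
The output will look something like this (DATA_PUMP_DIR is a common default directory object; the path is just an example):

 DIRECTORY_NAME                 DIRECTORY_PATH
 ------------------------------ ------------------------------------------------------------
 DATA_PUMP_DIR                  /u01/oracle/admin/proddb01/dpdump/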

Tuesday, May 3, 2022

How to take a standby database out of a Data Guard configuration and convert it to a standalone read/write database using the Data Guard Broker

Deactivate the Data Guard Broker configuration:
dgmgrl / 
show configuration
# stop redo apply on the standby
edit database 'stdb' SET STATE='APPLY-OFF';
disable configuration;
Check if the standby database is opened in READ ONLY WITH APPLY, READ ONLY or MOUNTED mode:
select open_mode from v$database;
If in READ ONLY WITH APPLY or READ ONLY mode, close the database:
 alter database close;
Often, a database that has been opened in READ ONLY mode still has active sessions. In such cases, it may be necessary to shut the database down and open it in mount mode:
 shutdown immediate
 startup mount
Activate the standby database:
 ALTER DATABASE ACTIVATE STANDBY DATABASE;
Verify that the status of the control file has changed from "STANDBY" to "CURRENT":
 select CONTROLFILE_TYPE from v$database;
 
 CONTROL
 -------
 CURRENT
Set the protection mode to maximum performance and open the database:
 
 ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
 ALTER DATABASE OPEN;
If not done by the broker, reset the value of log_archive_dest_2:
 Alter system set log_archive_dest_2='';
Your database should now be standalone and out of the Data Guard configuration.
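As a quick sanity check, the database role should now have changed from PHYSICAL STANDBY to PRIMARY:
 select database_role, open_mode from v$database;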

To avoid errors related to redo shipping, make sure that your old primary is no longer attempting to ship redo to the old standby database, which is now out of the Data Guard configuration.
On the old primary server, set the relevant log_archive_dest_n parameter to DEFER:
 alter system set log_archive_dest_state_2=defer scope=both;
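You can verify that the destination is deferred by querying v$archive_dest:
 select dest_id, status, destination from v$archive_dest where dest_id = 2;
The STATUS column should now show DEFERRED.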

The cause of and solution to the error message "Database mount ID mismatch" in a previously configured standby database

If you have a previously configured standby database and have converted it to a free-standing database, no longer part of a Data Guard setup, you may see errors like the following in the alert log:
2022-05-03T12:52:33.896905+02:00
RFS[1332]: Assigned to RFS process (PID:128748)
RFS[1332]: Database mount ID mismatch [0xb40d4ed8:0xb42c30e2] (3020771032:3022794978)
RFS[1332]: Not using real application clusters
Reason:
Even though you have used the Data Guard Broker to stop log shipping and have activated the standby database (in effect making it read/write), the broker will not stop the previously configured primary database from shipping logs to its previously configured standby destination.

Solution:
Cut off the log shipping from the previously configured primary database completely by either

1) changing the value of log_archive_dest_state_2 from enable to defer:
alter system set log_archive_dest_state_2=defer scope=both;
or by

2) removing the value of log_archive_dest_2 altogether:
alter system set log_archive_dest_2='' scope=both;
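
After either change, the RFS messages should stop appearing in the alert log of the old standby. You can tail the log to confirm (the path below is an example; adjust it to your own diagnostic destination):
 tail -f /u01/oracle/diag/rdbms/stdb/stdb/trace/alert_stdb.log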

Friday, April 22, 2022

How to set up replication between two PostgreSQL servers

Compared to Oracle, setting up a PostgreSQL physical standby database is quite simple. With two identically provisioned PostgreSQL 13 servers created, perform the steps below.
Master: test01-sandbox-pgsql.mydomain.com
Slave: test02-sandbox-pgsql.mydomain.com

1. Configure the master database cluster

As the postgres software owner, execute the following:
echo "CREATE ROLE replicate WITH REPLICATION LOGIN password 'mypassword';" | psql
echo "grant app_user to replicate;" | psql
echo "alter system set wal_level = replica;" | psql
echo "select pg_reload_conf();" | psql
echo "alter system set synchronous_commit = local;" | psql
echo "alter system set track_commit_timestamp = on;" |  psql
2. Restart the postgres server
systemctl restart postgresql-13.service
3. Configure the slave database cluster

As the root user, stop the postgres server
systemctl stop postgresql-13.service
4. Remove any files created in the deployment of the slave. They will be replaced anyway
rm -rf /data/*
5. Take a base backup of the master database cluster
pg_basebackup -h test01-sandbox-pgsql.mydomain.com -D /data/ -P -U replicate --wal-method=stream
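pg_basebackup will prompt for the password of the replicate role. To run it non-interactively, add an entry to ~/.pgpass for the postgres OS user (chmod it to 0600), using the password created in step 1:
test01-sandbox-pgsql.mydomain.com:5432:replication:replicate:mypassword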
6. Set some parameters
echo "hot_standby = on" >> /pgconfig/postgresql.auto.conf 

echo "
primary_conninfo      = 'host=masterhost.mydomain.com port=5432 user=replicate password=mypassword'
promote_trigger_file  = '/tmp/MasterNow'
" >  /pgconfig/postgresql.conf  
7. Create a file necessary for the master/slave role determination
touch /data/standby.signal
8. Start the postgres server
systemctl start postgresql-13.service
9. Verification checks

On the master, to check that everything works fine, use the dictionary view pg_stat_replication which, according to the documentation, contains "one row per WAL sender process, showing statistics about replication to that sender's connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers."
 echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |          client_hostname
-----------+-----------+------------+---------------+-----------------------------------
 replicate | streaming | async      |             0 | test02-sandbox-pgsql.mydomain.com
(1 row)
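pg_stat_replication also exposes lag columns that show how far behind the slave is, for example:
 echo 'select client_hostname, write_lag, flush_lag, replay_lag from pg_stat_replication;' | psql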
To check the slave status, use the dictionary view pg_stat_wal_receiver. According to the documentation, the view contains "only one row, showing statistics about the WAL receiver from that receiver's connected server":
psql
psql (13.6)
Type "help" for help.

postgres=# \x
Expanded display is on.
postgres=# select status,sender_host,conninfo from pg_stat_wal_receiver;
-[ RECORD 1 ]--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
status      | streaming
sender_host | test01-sandbox-pgsql.mydomain.com
conninfo    | user=replicate password=******** channel_binding=prefer dbname=replication host=test01-sandbox-pgsql.mydomain.com port=5432 fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any
Check that recovery is in progress:
 echo 'SELECT pg_is_in_recovery();' | psql
 pg_is_in_recovery
-------------------
 t
(1 row)
The equivalent query on the master node would return "f" (false). Read more about:
  • pg_stat_replication
  • pg_stat_wal_receiver

    Note that some important changes to replication were introduced in PostgreSQL version 12
Wednesday, April 20, 2022

    Oracle vs PostgreSQL replication terminology

  • A standby server that can accept connections and serves read-only queries: Active Data Guard (Oracle), Hot Standby (PostgreSQL).
  • A standby server that cannot be connected to until it is promoted to a master server: Physical Standby Server (Oracle), Warm Standby (PostgreSQL).
  • A data-modifying transaction is not considered committed until all servers have committed the transaction: Maximum Availability (Oracle), Synchronous (PostgreSQL).
  • Allow some delay between the time of a commit and its propagation to the other servers; some transactions might be lost in the switch to a backup server, and load balanced servers might return slightly stale results: Maximum Performance (Oracle), Asynchronous (PostgreSQL).
  • Do not allow any transactions to be unprotected at any time: Maximum Protection (Oracle).
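
    For example, moving the PostgreSQL setup shown above from asynchronous to synchronous replication is just a matter of naming the standby in synchronous_standby_names on the master (the name below assumes the standby connects with the default fallback_application_name, walreceiver):

    echo "alter system set synchronous_standby_names = 'walreceiver';" | psql
    echo "select pg_reload_conf();" | psql

    The sync_state column in pg_stat_replication should then change from async to sync.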

  • PostgreSQL 11 documentation here
  • Oracle 12cR2 documentation here
Wednesday, April 6, 2022

    Wrappers for pg_dump and pg_restore

    Export:

    #!/bin/bash
    # pg_dump dumps a database as a text file or to other formats.
    export SCRIPT_NAME=`basename $0`
    export HOST=`uname -n`
    export TS=`date +\%m.\%d.\%y\_%H_%M_%S`
    export RUN_DATE=`date +\%m.\%d.\%y`
    export RUN_DIR=.
    export LOG_DIR=/tmp
    export DUMPDIR=/pgdata/export
    export JOB_NAME="ExportPGDB"
    export VERSION=1.0.0
    export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
    
    exec &> ${LOGFILE}
    echo "Starting export job at " `date`
    # dump the "music" schema of musicdb in directory format (-Fd), using two parallel jobs
    pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
    echo "Ending job at " `date`
    exit
    

    Import:

    #!/bin/bash
    # pg_restore restores a PostgreSQL database from an archive file created by pg_dump.
    export SCRIPT_NAME=`basename $0`
    export HOST=`uname -n`
    export TS=`date +\%m.\%d.\%y\_%H_%M_%S`
    export RUN_DATE=`date +\%m.\%d.\%y`
    export RUN_DIR=.
    export LOG_DIR=/tmp
    export DUMPDIR=/pgdata/export
    export JOB_NAME="ImportPGDB"
    export VERSION=1.0.0
    export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
    exec &> ${LOGFILE}
    echo "Starting job at " `date`
    # first, drop (-c) and recreate (-C) the musicdb database, connecting through the postgres database
    pg_restore ${DUMPDIR}/mussikkdbexport -C -c -d postgres -j 4 -v
    # then restore into musicdb, dropping existing objects first (-c), using four parallel jobs
    pg_restore ${DUMPDIR}/mussikkdbexport -c -d musicdb -j 4 -v
    echo "Ending job at " `date`
    exit
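
    Both wrappers lend themselves to scheduling with cron; for example, to run the export every Sunday at 02:00 (the script path is hypothetical):

    0 2 * * 0 /home/postgres/scripts/export_musicdb.sh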
    


    More articles on pg_dump and pg_restore:

  • How to export and import a database in PostgreSQL
  • How to export and import a schema in PostgreSQL
  • How to list the contents of a custom format PostgreSQL export file
  • How to export and import a schema using the directory format
  • How to export a single table using different format in PostgreSQL
Tuesday, April 5, 2022

    Workaround for Automatic Diagnostic Repository Errors when executing "validate database" through Data Guard Broker

    If you have a Data Guard setup and are using the broker, you may see the following error when validating your setup before a switchover:
    DGMGRL> validate database stb01
    
      Database Role:     Physical standby database
      Primary Database:  proddb01
    
      Ready for Switchover:  Yes
      Ready for Failover:    Yes (Primary Running)
    
      Flashback Database Status:
        proddb01:  Off
        stb01:     Off
    
      Managed by Clusterware:
        proddb01:  NO
        stb01:     NO
        Warning: Ensure primary database's StaticConnectIdentifier property
        is configured properly so that the primary database can be restarted
        by DGMGRL after switchover
    
      Automatic Diagnostic Repository Errors:
        Error                       proddb01 stb01
        System data file missing    NO       YES
        User data file missing      NO       YES
    
    This problem and its solution are outlined in Doc ID 2300040.1, "Known issues when using "Validate database" DGMGRL command", at Oracle Support.

    Cause: The issue occurs because old health check messages were not purged properly, and the VALIDATE DATABASE command signals that it found a failed check.

    To get rid of this warning, rename the file HM_FINDING.ams in the ADR rdbms metadata folder, or move it to another folder. This is what I did on my standby server:
     cd /u01/oracle/diag/
    find . -name "HM_FINDING.ams"
    ./rdbms/stb01/stb01/metadata/HM_FINDING.ams
    ./rdbms/stb01/proddb01/metadata/HM_FINDING.ams
    ./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams
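
    A simple loop renames all of them in one go (a sketch; run it from the diag directory as above):

     for f in $(find . -name "HM_FINDING.ams"); do mv "$f" "$f".old; done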
    
    Rename or remove the files listed above and execute the "validate database" command in dgmgrl again. The message should now be gone for good.

    Note that the file named HM_FINDING.ams will most likely reappear immediately after deletion. But this new copy will not cause the Data Guard Broker to throw warnings.