A snapshot standby database is a fully updatable standby database.
Redo data is still received from the primary database, but it is not applied immediately. Before the redo data can be applied, the database must first be converted back into a physical standby database.
Any local updates that the snapshot standby database may have received will be discarded when it is converted back into a physical standby database and resynchronized with the primary database.
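The conversion itself is done with two simple statements. A minimal sketch, assuming you are connected as sysdba on the standby and that the standby is mounted (the Data Guard broker command CONVERT DATABASE can be used instead if the broker is enabled):

-- convert the physical standby into an updatable snapshot standby
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- ... open the database and perform testing; local updates are allowed ...

-- after testing: shut down, start up in the mount state, then convert back.
-- All local changes are discarded and redo apply can resume.
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;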
Good sources:
Oracle 12.2 Data Guard Concepts and Administration documentation
Oracle Support: Using Snapshot Standby Database (Doc ID 443720.1)
Monday, December 5, 2022
How to avoid redo logs and standby redo logs being created as OMF files during a DUPLICATE ... FOR STANDBY operation
Recently I had to create a Data Guard physical standby database, and I used RMAN active database duplication ( "duplicate target database for standby from active database" ) to create the database on the remote server.
As the remote server had the same file system layout as the primary, I used the directive NOFILENAMECHECK in the duplicate command.
Consequently, I expected all datafiles, including the redo and standby redo log files, to be created in exactly the same location as the target.
For some reason this did not happen; in fact, the redo and standby redo log files were given OMF file names and placed in the db_recovery_file_dest, which points to the path /recovery_data, while on the primary the same files reside in /u02/oradata.
I found the solution based on a post on the blog "JoeLi's TechLife", and added my own experience to arrive at a workaround.
The author points out:
after the duplication, all other files seem to be fine regarding their names and locations except for redo log files and standby redo log files. They are created under the fast recovery area with OMF file names!
How come? Well here is the reason — duplicate will always re-create redo log files and standby redo log files. And because DB_RECOVERY_FILE_DEST is defined on the Primary server, redo and standby redo log files are created as OMF despite NOFILENAMECHECK is used. And they are not multiplexed as you have on the Primary database!
I was not aware of this.
The author continues by pointing out that he hasn't found a way to avoid this during the duplication itself.
The solution proposed by Joe is:
... to recreate redo log and standby redo log files on the newly created standby server — basically dropping the OMF ones and creating them under the correct locations with correct file names.
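A sketch of that workaround could look like the following (group numbers, file names and sizes are examples only, and standby_file_management may have to be set to MANUAL before logfile operations are allowed on the standby):

-- drop one of the OMF standby redo log groups and recreate it in the desired location
ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/redodata/stb_redo01.log') SIZE 2048M;

-- online redo log groups can be handled the same way with
-- ALTER DATABASE DROP LOGFILE GROUP n and ALTER DATABASE ADD LOGFILE GROUP n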
I did, however, find a solution that worked for me and was quite simple, although not very intuitive: simply set the directive log_file_name_convert in your clone script, as exemplified below:
connect target sys/password@primary
connect auxiliary target sys/password@stb
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
SET DB_UNIQUE_NAME='stb'
SET FAL_CLIENT='stb'
SET FAL_SERVER='primary'
SET LOG_ARCHIVE_DEST_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stb'
SET LOG_ARCHIVE_DEST_2='' comment 'Must be set to empty string during duplication'
SET STANDBY_FILE_MANAGEMENT='AUTO'
SET DG_BROKER_CONFIG_FILE1='/u01/oracle/product/12c/dbs/dr1stb.dat'
SET DG_BROKER_CONFIG_FILE2='/u01/oracle/product/12c/dbs/dr2stb.dat'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stb,primary)'
SET DG_BROKER_START='TRUE'
SET LOG_ARCHIVE_TRACE='0' comment 'Must be set to same value as primary during duplication'
SET dispatchers='(PROTOCOL=TCP) (SERVICE=stbXDB)' comment 'Must be set to unique name of stb db'
SET LOCAL_LISTENER='stb.skead.no' comment 'Must be set to unique name of stb db'
SET LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf' comment 'Must be set to same value as primary during duplication'
SET LOG_FILE_NAME_CONVERT='/redodata','/redodata' comment 'Must be set during cloning to avoid OMF naming of redo log files'
NOFILENAMECHECK
USING COMPRESSED BACKUPSET;
}
Afterwards, the queries

COL MEMBER FORMAT A50
SET LINES 200
SELECT A.GROUP#, A.MEMBER, B.BYTES/1024/1024 "MB", B.ARCHIVED, B.STATUS, B.SEQUENCE#
FROM V$LOGFILE A INNER JOIN V$LOG B ON A.GROUP# = B.GROUP#
ORDER BY GROUP# ASC;

set lines 200
col member format a50
select s.group#, s.thread#, s.sequence#, s.archived, s.status, f.member, f.type
from v$standby_log s, v$logfile f
where f.type = 'STANDBY'
and s.group# = f.group#;

confirm that the standby database was created with non-OMF redo log and standby redo log files:
SQL> @chk_redo.sql

    GROUP# MEMBER                                               MB ARC STATUS            SEQUENCE#
---------- -------------------------------------------------- ---- --- ---------------- ----------
         1 /redodata/redo01.log                               2048 NO  CURRENT                 156
         2 /redodata/redo02.log                               2048 YES UNUSED                    0
         3 /redodata/redo03.log                               2048 YES UNUSED                    0

    GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
         4          1          0 YES UNASSIGNED /redodata/stb_redo01.log                           STANDBY
         5          1          0 YES UNASSIGNED /redodata/stb_redo02.log                           STANDBY
         6          1          0 YES UNASSIGNED /redodata/stb_redo03.log                           STANDBY
         7          1          0 YES UNASSIGNED /redodata/stb_redo04.log                           STANDBY
Friday, December 2, 2022
How to alter a column to use a sequence generated number
In my example, mytable contains a column called id, which is of datatype bigint.
If you want to use a sequence to automatically populate the column at each insert, you can alter the column to support this with the syntax below:
ALTER TABLE myschema.mytable ALTER COLUMN id SET DEFAULT nextval('myschema.mysequence'::regclass);
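If the sequence does not already exist, a minimal sketch could look like this (the column some_column is only a placeholder for illustration):

CREATE SEQUENCE myschema.mysequence OWNED BY myschema.mytable.id;

ALTER TABLE myschema.mytable ALTER COLUMN id SET DEFAULT nextval('myschema.mysequence'::regclass);

-- an insert that omits the id column now gets the next sequence value automatically
INSERT INTO myschema.mytable (some_column) VALUES ('example');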
Wednesday, November 16, 2022
How to log entire output from a shell script
I have recently used exec to send all output from a bash script to a designated logfile.
For example, if you have a complex script which performs multiple tasks, and even executes tasks under a different user along the way, all the output will go into the global logfile indicated by the $GLOGFILE variable:
#!/bin/bash
# name of the running script and a timestamp used in the logfile name
export SCRIPT_NAME=`basename $0`
export TS=`date +%m.%d.%y_%H_%M_%S`
export GLOGFILE=/u01/mylogfile_${TS}.log
# create the logfile and make it readable and writable for everyone
touch ${GLOGFILE}
chmod 666 ${GLOGFILE}
# from this point on, redirect both stdout and stderr to the global logfile
exec 1> ${GLOGFILE} 2>&1
echo "Starting job at " `date`
your shell instructions here
echo "Ending job at " `date`
exit
Good sources:
Linuxfordevices.com
Geeksforgeeks.org
Tuesday, November 15, 2022
How to correct an identity column sequence after import
Thanks to the authors behind Grassroots-oracle for sharing this trick.
I finished an import of a table with an identity column, and for some reason, the sequence started to hand out numbers from the bottom of the range, although it had reached 10632546 in the source table.
My table's DDL:
CREATE TABLE MYTABLE (
  MYID        NUMBER GENERATED ALWAYS AS IDENTITY (
                START WITH 21
                MAXVALUE 9999999999999999999999999999
                MINVALUE 1
                NOCYCLE
                CACHE 20
                NOORDER
                NOKEEP
                NOSCALE)
              NOT NULL,
  EVENT_TYPE  VARCHAR2(75 BYTE) NOT NULL,
  SOURCE      VARCHAR2(50 BYTE) NOT NULL,
  FORMAT      VARCHAR2(75 BYTE) NOT NULL,
  EVKEY       VARCHAR2(60 BYTE) NOT NULL,
  TITLE       VARCHAR2(40 BYTE) NOT NULL,
  TIMESTAMP   TIMESTAMP(6) DEFAULT current_timestamp,
  DOCUMENT    CLOB NOT NULL,
  FILEREF     VARCHAR2(200 BYTE) DEFAULT 'some text' NOT NULL,
  TRANNUM     NUMBER
)
LOB (DOCUMENT) STORE AS SECUREFILE ( TABLESPACE LOB_DATA )
TABLESPACE DATA;

After import, the developers tested an insert and the row was inserted with MYID set to 1.
There is no Primary Key on the table, which I pointed out to the developers.
The sequence name was ISEQ$$_88174, and the query below returned the number 2 after import in the destination database:
select "ISEQ$$_88174".nextval from dual;To "start where the sequence left off" in the source database, all I had to do was to execute:
ALTER TABLE MYTABLE MODIFY (MYID GENERATED ALWAYS AS IDENTITY ( START WITH LIMIT VALUE) );
If I queried the sequence's next value now, it had changed from 2 to 10632547.
I then checked the maximum value of the column:
select max(MYID) from MYTABLE;

And it returned 10632546, in other words, the next value would be the current max value + 1.
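If you need to verify which system-generated sequence backs the identity column, the dictionary view user_tab_identity_cols can be queried (my own addition, not part of the original trick):

select table_name, column_name, sequence_name, identity_options
from user_tab_identity_cols
where table_name = 'MYTABLE';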
Friday, November 11, 2022
Getting ORA-01031: insufficient privileges when creating a view that is referring to another schema's table
If you get ORA-01031 during creation of a view referring to an object in a different schema, try the following:
grant select on foreign_schema.foreign_table to scott with grant option;
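With the grant in place, the view can be created in scott's schema. A minimal example (the view name is just a placeholder):

create or replace view scott.my_view as
select * from foreign_schema.foreign_table;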
Wednesday, November 9, 2022
Listener output showing multiple instances when only one exists
In one of my multitenant setups, I saw the following output from the listener:
lsnrctl status cdb

Service "sales" has 2 instance(s).
  Instance "cdb", status READY, has 2 handler(s) for this service...
  Instance "sales", status UNKNOWN, has 1 handler(s) for this service...

There is only one instance in my setup, the container database called "cdb".
What's causing the listener to view the pdb "sales" as a separate instance, then?
See this post where Uwe Hesse explains:
It is nothing to worry about when you see the same instance showing up twice, once with status READY and once with status UNKNOWN. READY means listener knows this because PMON of that instance told him so (dynamic registration) UNKNOWN means listener knows this because of SID_LIST entry in the listener.ora (static registration)
Quite correctly, I had a static entry in my listener.ora file:
SID_LIST_cdb =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/orasoft/product/19c)
      (SID_NAME = sales)
    )
  )
If I remove the following part
(SID_NAME = sales)

and restart the listener, the extra instance will be removed from the listener output:
lsnrctl stop cdb
lsnrctl start cdb
Output:
Service "sales" has 1 instance(s). Instance "cdb", status READY, has 2 handler(s) for this service...