col PROPERTY_NAME format a20
col PROPERTY_VALUE format a20
col DESCRIPTION format a40

select * from database_properties where property_name like '%UNDO%';

PROPERTY_NAME        PROPERTY_VALUE       DESCRIPTION
-------------------- -------------------- ----------------------------------------
LOCAL_UNDO_ENABLED   TRUE                 true if local undo is enabled
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples that can be easily adapted to other situations.
Wednesday, January 4, 2023
Check if your CDB is set up for local undo
Monday, December 12, 2022
Some useful ADRCI commands
The ADR (Automatic Diagnostic Repository) and its command-line tool adrci were introduced in Oracle 11g and are still available.
I still find it incredibly useful when troubleshooting problems, particularly ORA-600 errors, which need to be sent to Oracle Support.
An ADR may contain files from many homes, or components. You need to set the correct home before you start working with ADR:
show homes
set homepath

Example:
adrci> show homes
ADR Homes:
diag/tnslsnr/zus15d-0110/listener_pzhhtet2
diag/tnslsnr/zus15d-0110/listener_pzhhtet1
diag/rdbms/pzhhtet1/PZHHTET1
diag/rdbms/pzhhtet2/PZHHTET2
adrci> set homepath diag/rdbms/pzhhtet1/PZHHTET1
adrci> show home
ADR Homes:
diag/rdbms/pzhhtet1/PZHHTET1
help show tracefile
show tracefile

Order by timestamp:

show tracefile -t

Order by reversed timestamp:

show tracefile -rt

Filter on tracefile name:

show tracefile %m00% -rt

Alert log operations:
show alert
show alert -tail -f
show alert -tail 50
show alert -p "module_id='DBMS_SCHEDULER'"
show alert -p "module_id != 'DBMS_SCHEDULER'"
show alert -p "module_id LIKE '%SCHEDULER%'"

If you want to extract the Streams-related statements from the alert log, you would use:
show alert -p "message_text like '%STREAM%'"

Show all messages related to a group. For example, a startup:
show alert -p "MESSAGE_GROUP LIKE 'startup'"
spool
spool off

show incident
show incident -mode basic
show incident -mode detail
show incident -mode detail -p "incident_id=incident_ID"
show incident -last 5

Create a package from an incident:
adrci> ips create package incident 1697
Created package 1 based on incident id 1697, correlation level typical

View its contents:
adrci> ips show files package 1
   FILE_ID                1
   FILE_LOCATION          /incident/incdir_1697
   FILE_NAME              +ASM1_lmhb_15352_i1697.trc
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                2
   FILE_LOCATION          /incident/incdir_1697
   FILE_NAME              +ASM1_lmhb_15352_i1697.trm
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                3
   FILE_LOCATION          /trace
   FILE_NAME              +ASM1_lmhb_15352.trc
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                4
   FILE_LOCATION          /trace
   FILE_NAME              +ASM1_lmhb_15352.trm
   LAST_SEQUENCE          0
   EXCLUDE                Included

Generate zip-files on disk:
adrci> ips generate package 1 in /cs/oracle/grid/packages
Generated package 1 in file /cs/oracle/grid/packages/ORA29770_20100427133326_COM_1.zip, mode complete

To remove old incidents, use:
purge -age 2880 -type incident

where -age is given in minutes, so -age 2880 means older than two days.
Automatic purging:
adrci> purge -age 2880 -type trace

This will automatically remove all trace files older than 2880 minutes (2 days). Type "help purge" at the adrci prompt for more options.
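Since the -age values are minute counts, it is easy to miscalculate a retention period. A small shell helper (a sketch; the function name days_to_adrci_minutes is my own) makes the conversion explicit:

```shell
#!/bin/sh
# adrci "purge -age" expects minutes. Convert a number of days
# to the corresponding minute value.
days_to_adrci_minutes() {
  echo $(( $1 * 24 * 60 ))
}

days_to_adrci_minutes 2    # 2880  (two days, as used for trace files)
days_to_adrci_minutes 30   # 43200
days_to_adrci_minutes 90   # 129600
```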
Other examples:
purge -age 129600 -type alert
purge -age 129600 -type incident
purge -age 129600 -type cdump
purge -age 129600 -type stage
purge -age 129600 -type sweep
purge -age 129600 -type hm

129600 minutes = 90 days
43200 minutes = 30 days

or
purge -age 2880 -type trace
purge -age 2880 -type alert
purge -age 2880 -type cdump
purge -age 2880 -type stage
purge -age 2880 -type sweep
purge -age 2880 -type hm

You can create a batch file and run all of the above in one bulk:
adrci script=adrci_settings.cmd

You can spool from within an adrci script:
spool myfile.log

There are two policies in place: the LONGP_POLICY and the SHORTP_POLICY. Events such as incidents fall into the LONGP policy, while trace files and dumps fall into the SHORTP policy. You can configure the policies using the commands below, which set the short and long retention periods to 3 months and 6 months, respectively. Note that set control accepts the time in hours.
show control
set control (SHORTP_POLICY = 2160)
set control (LONGP_POLICY = 4320)

Run in batch mode using the "exec" instruction:
adrci exec="command1; command2 ... command n"
Friday, December 9, 2022
What does the UR = A in tnsnames.ora mean?
I recently had to connect remotely as sysdba to remote instances when I created Physical Standby Databases using RMAN.
There were some connection problems along the way, which were solved by setting the (UR=A) attribute in the tnsnames.ora entry for the database.
What exactly is the UR=A attribute and what does it do?
This is explained very well in a blog post from Ed Chen Logic, so rather than repeating the same points here, continue reading there.
But in short, an instance (memory, background processes, but no data files) will inform the listener to block all connections to its corresponding service by default.
To bypass this restriction, you can add UR=A to the connect descriptor. From the primary server, I will then be able to connect to the standby instance running on prodserver2.oric.no.
Let's demonstrate it:
Start the standby database in nomount:
sqlplus / as sysdba

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1.2885E+10 bytes
Fixed Size                  8635096 bytes
Variable Size            2986347816 bytes
Database Buffers         9865003008 bytes
Redo Buffers               24915968 bytes

Start a default listener on the standby server:
lsnrctl start

Make sure the default service is loaded: wait for a minute, or execute
alter system register;

The listener status is now:
Service "proddb01_stby.oric.no" has 1 instance(s).
  Instance "proddb01", status BLOCKED, has 1 handler(s) for this service...

Create a tnsnames.ora entry for connecting to the standby database from the primary server:
proddb01_stby.oric.no =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddb02.oric.no)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb01_stby.oric.no)
    )
  )

Test it:
sqlplus sys/password@proddb01_stby.oric.no as sysdba

Result:
ORA-12528: TNS:listener: all appropriate instances are blocking new connections

Add the UR=A to the connect_data section:
proddb01_stby.oric.no =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = proddb01.oric.no)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = proddb01_stby.oric.no)
(UR = A)
)
)
And try again:
sqlplus sys/password@proddb01_stby.oric.no as sysdba

Result:
SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 9 16:43:09 2022

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from v$instance;

HOST_NAME
-----------------
proddb02.oric.no

SQL> show parameter db_unique

NAME             TYPE      VALUE
---------------- --------- --------------
db_unique_name   string    proddb01_stby
How to solve "Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module)" when running txkChkPDBCompatability.pl
Background:
As a part of converting your non-CDB Oracle database to a pluggable database, you are supposed to run the perl script txkChkPDBCompatability.pl
Problem:
The script fails with:
Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module) (@INC contains: $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi /$ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1) at ./txkChkPDBCompatability.pl line 61.
BEGIN failed--compilation aborted at ./txkChkPDBCompatability.pl line 61.

Cause:
The PERL5LIB environment variable is not complete. You need to add the location of the ADK and TXK directories, which in turn contains several .pm files.
These folders can be found in $ORACLE_HOME/appsutil/perl
Solution:
Add the $ORACLE_HOME/appsutil/perl to your existing PERL5LIB variable:
export PERL5LIB=$PERL5LIB:$ORACLE_HOME/appsutil/perl

Thanks to the author of Punitoracledba for providing the solution after lots of troubleshooting.
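If you source your environment files repeatedly, the export above will keep appending the same directory to PERL5LIB. A defensive variant (a sketch; the helper name append_perl5lib is my own) adds the directory only when it is missing:

```shell
#!/bin/sh
# Append a directory to PERL5LIB only if it is not already present,
# so the variable does not grow on repeated sourcing.
append_perl5lib() {
  dir="$1"
  case ":${PERL5LIB}:" in
    *":${dir}:"*) ;;                           # already present, do nothing
    *) PERL5LIB="${PERL5LIB:+$PERL5LIB:}$dir" ;;
  esac
  export PERL5LIB
}

append_perl5lib "$ORACLE_HOME/appsutil/perl"
echo "$PERL5LIB"
```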
To see the note used to convert a non-cdb EBS database to a pluggable database, go to Doc ID 2552181.1 at Oracle Support
Tuesday, December 6, 2022
SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter.
When performing a switchover verification from your primary database in your Data Guard setup, you may see the following:
sqlplus / as sysdba

SQL> ALTER DATABASE SWITCHOVER TO stb verify;
ALTER DATABASE SWITCHOVER TO stb verify
*
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details

The alert log reports:
2022-12-06T09:56:34.020025+01:00
ALTER DATABASE SWITCHOVER TO stb verify
2022-12-06T09:56:34.192599+01:00
SWITCHOVER VERIFY: Send VERIFY request to switchover target STB
SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter.
If the switchover target is converted to a primary database, the new primary database will not be protected.
ORA-16475 signalled during: ALTER DATABASE SWITCHOVER TO stb verify...
Solution:
Update the standby database's log_archive_dest_n parameter to prepare it for a future primary role.
In the standby database, update one of the log_archive_dest_n parameters; I picked the next available one from the list, log_archive_dest_2:
alter system set log_archive_dest_2='service=primary.oric.no LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=primary';
Run the verification again:
SQL> ALTER DATABASE SWITCHOVER TO stb verify;

Database altered.

Check the alert log and it will confirm that the database stb can now be turned into a primary database:
2022-12-06T10:03:34.605309+01:00
ALTER DATABASE SWITCHOVER TO stb verify
2022-12-06T10:03:34.773710+01:00
SWITCHOVER VERIFY: Send VERIFY request to switchover target STB
SWITCHOVER VERIFY COMPLETE: READY FOR SWITCHOVER
Completed: ALTER DATABASE SWITCHOVER TO stb verify
What is an Oracle snapshot standby database?
A snapshot standby database is a fully updatable standby database.
Redo data is still received from the primary database, but it is not applied immediately. Before the redo data can be applied, the database must be converted back into a physical standby database.
Any local updates that the snapshot standby database has received will be discarded when the snapshot standby database is converted back into a physical standby database and resynchronized with the primary database.
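The conversion in each direction is a single statement, run on the standby while it is mounted. A sketch (check the prerequisites in the documentation below first; a fast recovery area must be configured, and managed recovery should be stopped):

```sql
-- On the physical standby: make it a fully updatable snapshot standby.
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Later: discard all local changes and resume redo apply as a
-- physical standby (restart the database in mount state first).
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```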
Oracle 12.2 Data Guard Concepts and administration documentation
Oracle support Using Snapshot Standby Database. (Doc ID 443720.1)
Monday, December 5, 2022
How to avoid redo logs and standby redo logs being created as OMF files during a DUPLICATE ... FOR STANDBY operation
Recently I had to create a Data Guard physical standby database, and I used RMAN active database duplication ( "duplicate target database for standby from active database" ) to create the database on the remote server.
As the remote server had the same file system layout as the primary, I used the directive NOFILENAMECHECK in the duplicate command.
Consequently, I expected all datafiles, including the redo and standby redo log files, to be created in exactly the same location as the target.
For some reason this did not happen; in fact, they were given OMF file names and placed in the db_recovery_file_dest, which points to the path /recovery_data, while on the primary the same files reside in /u02/oradata.
I found the explanation in a post called "JoeLi's TechLife" and added my own experience to find a workaround.
The author points out
after the duplication, all other files seem to be fine regarding their names and locations except for redo log files and standby redo log files. They are created under the fast recovery area with OMF file names!
How come? Well here is the reason — duplicate will always re-create redo log files and standby redo log files. And because DB_RECOVERY_FILE_DEST is defined on the Primary server, redo and standby redo log files are created as OMF despite NOFILENAMECHECK is used. And they are not multiplexed as you have on the Primary database!
I was not aware of this.
The author goes on to point out that he hasn't found a solution yet.
The proposed solution according to Joe is
... to recreate redo log and standby redo log files on the newly created standby server — basically dropping the OMF ones and creating them under the correct locations with correct file names.
I did, however, find a solution, which worked for me and was really quite simple, although not very intuitive: simply set the directive log_file_name_convert in your clone script, as exemplified below:
connect target sys/password@primary
connect auxiliary target sys/password@stb
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
SET DB_UNIQUE_NAME='stb'
SET FAL_CLIENT='stb'
SET FAL_SERVER='primary'
SET LOG_ARCHIVE_DEST_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stb'
SET LOG_ARCHIVE_DEST_2='' comment 'Must be set to empty string during duplication'
SET STANDBY_FILE_MANAGEMENT='AUTO'
SET DG_BROKER_CONFIG_FILE1='/u01/oracle/product/12c/dbs/dr1stb.dat'
SET DG_BROKER_CONFIG_FILE2='/u01/oracle/product/12c/dbs/dr2stb.dat'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stb,primary)'
SET DG_BROKER_START='TRUE'
SET LOG_ARCHIVE_TRACE='0' comment 'Must be set to same value as primary during duplication'
SET dispatchers='(PROTOCOL=TCP) (SERVICE=stbXDB)' comment 'Must be set to unique name of stb db'
SET LOCAL_LISTENER='stb.skead.no' comment 'Must be set to unique name of stb db'
SET LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf' comment 'Must be set to same value as primary during duplication'
SET LOG_FILE_NAME_CONVERT='/redodata','/redodata' comment 'Must be set during cloning to avoid OMF naming of redo log files'
NOFILENAMECHECK
USING COMPRESSED BACKUPSET;
}
Afterwards, the following queries

COL MEMBER FORMAT A50
SET LINES 200

SELECT A.GROUP#, A.MEMBER, B.BYTES/1024/1024 "MB", B.ARCHIVED, B.STATUS, B.SEQUENCE#
FROM V$LOGFILE A
INNER JOIN V$LOG B ON A.GROUP# = B.GROUP#
ORDER BY GROUP# ASC;

set lines 200
col member format a50

select s.group#, s.thread#, s.sequence#, s.archived, s.status, f.member, f.type
from v$standby_log s, v$logfile f
where f.type = 'STANDBY'
and s.group# = f.group#;

confirm that the standby database was created with non-OMF redo log files:
SQL> @chk_redo.sql

    GROUP# MEMBER                                                     MB ARC STATUS            SEQUENCE#
---------- -------------------------------------------------- ---------- --- ---------------- ----------
         1 /redodata/redo01.log                                     2048 NO  CURRENT                 156
         2 /redodata/redo02.log                                     2048 YES UNUSED                    0
         3 /redodata/redo03.log                                     2048 YES UNUSED                    0

    GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
         4          1          0 YES UNASSIGNED /redodata/stb_redo01.log                           STANDBY
         5          1          0 YES UNASSIGNED /redodata/stb_redo02.log                           STANDBY
         6          1          0 YES UNASSIGNED /redodata/stb_redo03.log                           STANDBY
         7          1          0 YES UNASSIGNED /redodata/stb_redo04.log                           STANDBY