
Thursday, April 18, 2024

Solution to script warning: here-document at line < num > delimited by end-of-file (wanted `!')

In a script, I had formatted my code as follows:
  for l in $(cat file.txt|grep 'Status'|  awk -F' ' '{print $2}'); do
    if [ $l != "SUCCESS" ]; then
       echo $l ": Errors found. Please check logfile status.log"
    else
      echo "Readjust memory for the remaining cdb instance"
        su - oracle <<!
        
        -- code here --
      !
      runAdjustMemory
    fi
  done
During execution, my script failed with
./myscript.sh: line 32: warning: here-document at line 20 delimited by end-of-file (wanted `!')
./myscript.sh: line 33: syntax error: unexpected end of file


Cause: The termination character, in this case the exclamation point (!), was indented in the code.

Solution: Remove the formatting and pull the termination character all the way to the left margin of your editor:
  for l in $(cat file.txt|grep 'Status'|  awk -F' ' '{print $2}'); do
    if [ $l != "SUCCESS" ]; then
       echo $l ": Errors found. Please check logfile status.log"
    else
      echo "Readjust memory for the remaining cdb instance"
        su - oracle <<!
        
        -- code here --
!
      runAdjustMemory
    fi
  done
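An alternative, if you prefer to keep the here-document indented, is the <<- form: bash then strips leading tab characters (not spaces) from the body lines and from the terminator line. A minimal sketch, assuming the indentation really consists of tabs:
      su - oracle <<-!
	# NOTE: the leading whitespace on these lines must be TAB characters for <<- to strip it
	-- code here --
	!
      runAdjustMemory
If your editor converts tabs to spaces, the terminator will again go undetected, so pulling it to the left margin as shown above remains the safest fix.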

Wednesday, April 17, 2024

Solution to ORA-28547: connection to server failed, probable Oracle Net admin error

When trying to perform sqlplus actions against one of my databases, I received
oracle@oric-dbserver01:[mydb01]# sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Apr 17 10:43:56 2024
Version 19.22.0.0.0

Copyright (c) 1982, 2023, Oracle.  All rights reserved.

ERROR:
ORA-28547: connection to server failed, probable Oracle Net admin error
Cause:

This happens because the Oracle NLS files are not found in their default location on this particular server.

If there are no .nlb files in $ORACLE_HOME/nls/data, you need to find out where they are located and set the ORA_NLS10 variable accordingly.

If you look up the error, you will get a hint about this:
oracle@oric-dbserver01:[mydb01]# oerr ora 28547
A failure occurred during initialization of a network connection from a client process to the Oracle server.  ... 

The most frequent specific causes are: [ oracle lists several potential causes here ]

The character set of the database is not recognized by the client process, which may be caused by an incorrect or unnecessary setting 
of the ORA_NLS10 client environment variable or by a new or user-defined character set installed in the Oracle server and used for the database.


Solution:

Set the ORA_NLS10 environment variable:
export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
When this is done, sqlplus will work.

Don't forget to add the same export to the .bash_profile of the operating system user that owns and runs the Oracle server software. For most installations I have seen, this will be the OS user called "oracle".
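If you are unsure where the NLS files ended up on a given server, a quick shell check (just a sketch; lx1boot.nlb is the NLS boot file that sits together with the other .nlb files) tells you which directory to point ORA_NLS10 at:
ls $ORACLE_HOME/nls/data/*.nlb > /dev/null 2>&1 || echo "No .nlb files in the default location"
find $ORACLE_HOME/nls -name "lx1boot.nlb" 2>/dev/null
# point ORA_NLS10 at the directory that actually contains the .nlb files, for example:
export ORA_NLS10=$ORACLE_HOME/nls/data/9idata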

Solution to [FATAL] [DBT-05509] Failed to connect to the specified database (cdb) in dbca

I was attempting to create a container database using dbca like this:
dbca -createDatabase -responsefile /home/oracle/scripts/cdb.rsp -silent
The following error kept coming up:
[FATAL] [DBT-05509] Failed to connect to the specified database (cdb).
   CAUSE: OS Authentication might be disabled for this database (cdb).
   ACTION: Specify a valid sysdba user name and password to connect to the database.
Solution:

Same solution as in my previous post "My solution to ORA-12701: CREATE DATABASE character set is not known":

If there are no *.nlb files in the default location $ORACLE_HOME/nls/data, then set the ORA_NLS10 variable to the directory where these files actually reside. In my case, they were found in the subdirectory 9idata instead:
export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
Otherwise Oracle won't be able to find the language files it needs to create the database.

My solution to ORA-12701: CREATE DATABASE character set is not known

Some of our Oracle servers are set up to accommodate EBS.

During the provisioning process of a new oracle server, the Oracle database creation failed with the following errors:
SYS@mydb01>SQL>alter database "mydb01" open resetlogs;
alter database "mydb01" open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-12701: CREATE DATABASE character set is not known
According to the EBS 12.1.3 installation guide, the ORA_NLS10 parameter should point to $ORACLE_HOME/nls/data/9idata, where all the NLS files are unpacked during installation.

By default, however, the ORA_NLS10 variable points to $ORACLE_HOME/nls/data. In my case, this directory did not contain any NLS files; they had all been moved to the subdirectory 9idata.

As explained in Oracle Support Doc ID 1058400.6 "ORA-12701 When Creating a Database": the ORA_NLS parameters determine where to find the NLS character set libraries. If they cannot be found, Oracle cannot create the database with the character set you have requested. The error was resolved by setting ORA_NLS10 explicitly during installation:
export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
and rerunning the installation scripts.

Friday, January 26, 2024

[DBT-50000] Unable to check available system memory when running dbca

dbca returned the following error upon launch:
[FATAL] [DBT-50000] Unable to check available system memory.
   CAUSE: An exception occured while checking kernel parameter.
*ADDITIONAL INFORMATION:*
Exception details
 - The default location for framework home is not available. It must be specified
Cause:

The TMP and/or TMPDIR directory doesn't exist.

Solution:

Make sure the .bash_profile contains the correct values for the TMP and TMPDIR variables. They must point to an accessible and writable directory.

In my case:

mkdir -p /u01/ora19c/tmp
Open .bash_profile and add the following:
OWNER=ora19c;export OWNER
TMP=/u01/$OWNER/tmp;export TMP
TMPDIR=/u01/$OWNER/tmp;export TMPDIR
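After sourcing the profile, a quick shell check (just a sketch) confirms that the directories actually exist and are writable:
. ~/.bash_profile
for d in "$TMP" "$TMPDIR"; do
  if [ -d "$d" ] && [ -w "$d" ]; then
    echo "OK: $d"
  else
    echo "Not usable: $d"
  fi
done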
Launch dbca again. It should now proceed as expected.

This article from IBM pointed me in the right direction.

Doc ID 2534894.1 "How To Change Default TEMP Location When Creating Database By DBCA?" points to the same solution, but there Oracle shows how to append the -J flag to your dbca command, e.g.:

 dbca -J-Djava.io.tmpdir=/home/oracle/tmp

Tuesday, August 22, 2023

Cause and solution to ORA-00058: DB_BLOCK_SIZE must be 8192 to mount this database (not 16384)

You attempt to execute dbca silently, using a response file, as shown below:
dbca -createDatabase -responsefile mydb01.rsp -silent
But it errors out with:
[WARNING] ORA-00058: DB_BLOCK_SIZE must be 8192 to mount this database (not 16384)
and the instance is left running, with an unmounted database.

Cause:

This means you are using a pre-defined template for dbca that includes datafiles.

When you do this, you cannot change the db_block_size. It is set to 8K, and carved in stone.

Solution:

Create your own, customized template and refer to that instead.

Remember that templates ending with the extension .dbc contain datafiles, so they are significantly faster to use than custom-made templates ending with the extension .dbt - the latter have to go through the "CREATE DATABASE" statements from scratch.

Templates that do not contain datafiles have the extension .dbt

What you can do in order to have your cake and eat it too, is to first create a database with a non-conforming db_block_size, then create a dbca template based on this new database, including its datafiles.

I will see if I have time to write a blog post about how to do exactly this.
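In the meantime, here is a sketch of what referring to a custom template could look like. The template name my16k.dbt and the explicit -templateName/-initParams overrides are illustrative assumptions; check dbca -help and the keywords in your response file for the exact syntax of your release:
dbca -createDatabase -silent \
     -responsefile mydb01.rsp \
     -templateName my16k.dbt \
     -initParams db_block_size=16384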

Friday, February 24, 2023

Getting ORA-01722 when running adstats.sql after an EBS database upgrade to 19c

An old error from 2015 surfaced today, when we were trying to upgrade a 12.1 EBS database to 19c.

The problem occurred while running adstats.sql, which should be executed at the end of the installation procedure, right before the database is converted to a PDB in a multitenant architecture:
sqlplus / as sysdba @adstats.sql apps
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Feb 23 16:18:26 2023
Version 19.17.0.0.0
Copyright (c) 1982, 2022, Oracle.  All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0
Connected.
--------------------------------------------------
--- adstats.sql started at 2023-02-23 16:18:26 ---
Checking for the DB version and collecting statistics ...
declare
*
Error on line 1:
ORA-01722: invalid number
ORA-06512: on line 9
The solution is the same as in "ORA-01722 when running ausy1120.sql during preparation of new target database for EBS":

Modify the adstats.sql script slightly and correct a typo. In an editor, change the statement
select
  to_number(substr(version,1,instr(version,'.')))
  into :dbver
  from v$instance
  where rownum=1;
to
select
  to_number(substr(version,1,instr(version,'.')-1))
  into :dbver
  from v$instance
  where rownum=1;
Then rerun the script.
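The reason the first variant fails is most likely that it returns the version string including the trailing period (for example '19.'), which to_number cannot parse in sessions where the NLS decimal separator is not a period. A quick way to see the difference from the shell (sketch):
sqlplus -s / as sysdba <<'EOF'
select substr(version,1,instr(version,'.'))    as with_dot,
       substr(version,1,instr(version,'.')-1)  as without_dot
from   v$instance
where  rownum = 1;
EOF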

Tuesday, January 24, 2023

How to avoid error Perl lib version (5.28.1) doesn't match executable version (5.36.0)

When trying to use clone.pl to clone the latest version of the Oracle 19c database, the following error stopped me from proceeding:
/sw/oracle/product/19.18/clone/bin/clone.pl: 
Perl lib version (5.28.1) doesn't match executable '/sw/oracle/product/19.18/perl/bin/perl' 
version (5.36.0) at /sw/oracle/product/19.17/perl/lib/5.28.1/x86_64-linux-thread-multi/Config.pm 
line 62.
It turns out that the error occurred at the following point in my code:
export NEW_ORACLE_HOME=$ORACLE_BASE/product/19.18
export PRE_ORACLE_HOME=$ORACLE_BASE/product/19.17
export NEW_ORACLE_VERSION=19C_230117
su - oracle <<!
cd $NEW_ORACLE_HOME/clone/bin
perl clone.pl ORACLE_HOME=$NEW_ORACLE_HOME ORACLE_HOME_NAME=$NEW_ORACLE_VERSION ORACLE_BASE=$ORACLE_BASE OSDBA_GROUP=dba
!
Cause:
The .bash_profile of the oracle user contained a variable commonly used in EBS database environments:
export PERL5LIB=/sw/oracle/product/19.17/perl/lib/5.28.1:/sw/oracle/product/19.17/perl/lib/site_perl/5.28.1:/sw/oracle/product/19.17/appsutil/perl
and when trying to run the clone.pl program from the new ORACLE_HOME, perl was unable to find the matching libraries.

Solution:
unset the PERL5LIB before running the clone.pl program:
export NEW_ORACLE_HOME=$ORACLE_BASE/product/19.18
export PRE_ORACLE_HOME=$ORACLE_BASE/product/19.17
export NEW_ORACLE_VERSION=19C_230117
su - oracle <<!
cd $NEW_ORACLE_HOME/clone/bin
unset PERL5LIB
perl clone.pl ORACLE_HOME=$NEW_ORACLE_HOME ORACLE_HOME_NAME=$NEW_ORACLE_VERSION ORACLE_BASE=$ORACLE_BASE OSDBA_GROUP=dba
!
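An alternative to the unset, placed at the same point inside the here-document, is to point PERL5LIB at the perl libraries shipped with the new ORACLE_HOME (a sketch; the 5.36.0 directory name is taken from the error message above and may differ in your home):
export PERL5LIB=$NEW_ORACLE_HOME/perl/lib/5.36.0:$NEW_ORACLE_HOME/perl/lib/site_perl/5.36.0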

Friday, December 9, 2022

How to solve "Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module)" when running txkChkPDBCompatability.pl

Background:

As part of converting your non-CDB Oracle database to a pluggable database, you are supposed to run the perl script txkChkPDBCompatability.pl.

Problem:

The script fails with:
Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module) (@INC contains: $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi /$ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1) at ./txkChkPDBCompatability.pl line 61.
BEGIN failed--compilation aborted at ./txkChkPDBCompatability.pl line 61.
Cause:

The PERL5LIB environment variable is not complete. You need to add the location of the ADK and TXK directories, which in turn contain several .pm files.
These folders can be found in $ORACLE_HOME/appsutil/perl.

Solution:

Add $ORACLE_HOME/appsutil/perl to your existing PERL5LIB variable:
export PERL5LIB=$PERL5LIB:$ORACLE_HOME/appsutil/perl
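A quick sanity check (sketch) before rerunning txkChkPDBCompatability.pl, confirming that the module exists and that the new directory shows up in perl's search path:
ls $ORACLE_HOME/appsutil/perl/TXK/ARGS.pm
perl -e 'print join("\n", @INC), "\n"' | grep appsutil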
Thanks to the author of Punitoracledba for providing the solution after lots of troubleshooting.

To see the note used to convert a non-CDB EBS database to a pluggable database, go to Doc ID 2552181.1 at Oracle Support.

Tuesday, November 8, 2022

Workaround for error ERROR: OCI Error DPI (status = 7-DPI-007: invalid OCI handle or descriptor) when starting Golden Gate replicate process

When trying to log in to the GoldenGate CLI (GGSCI), I got the following error:
GGSCI (myserver1.oric.no) 7> dblogin useridalias mydb  domain admin
ERROR: OCI Error DPI (status = 7-DPI-007: invalid OCI handle or descriptor).
Cause:

Incorrectly specified network configuration:
sqlnet.ora
names.default_domain = world
tnsnames.ora
mydb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb)
    )
  )
With names.default_domain = world set in sqlnet.ora, the unqualified alias mydb is resolved as mydb.world, which does not exist in tnsnames.ora. To correct the error, you need to do either of the following:

1. remove the names.default_domain from sqlnet.ora

or

2. add an alias to your tnsnames.ora file
mydb.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = mydb)
    )
  )
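Either way, tnsping is a quick way (sketch) to confirm how the alias resolves before retrying the dblogin:
tnsping mydb
# with names.default_domain = world still present in sqlnet.ora, the attempt is
# resolved as mydb.world and will only succeed once that alias exists in tnsnames.ora
# (or once the default_domain line has been removed)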

Tuesday, October 11, 2022

How to solve "Service name or network name of ... in the PDB is invalid or conflicts with an existing service name or network name in the CDB."

From the alert log during startup of the container database called "cdb":
pdb1(3):***************************************************************
pdb1(3):WARNING: Pluggable Database sales with pdb id - 3 is
pdb1(3):         altered with errors or warnings. Please look into
pdb1(3):         PDB_PLUG_IN_VIOLATIONS view for more details.
pdb1(3):***************************************************************
When I check the PDB_PLUG_IN_VIOLATIONS I see this:
select name,cause,type,message,status,action 
from pdb_plug_in_violations;

NAME CAUSE TYPE MESSAGE STATUS ACTION
sales Service Name Conflict WARNING Service name or network name of service salesXDB in the PDB is invalid or conflicts with an existing service name or network name in the CDB. PENDING Drop the service and recreate it with an appropriate name.

Check the CDB_SERVICES view:
SELECT name,network_name,creation_date,pdb,enabled
FROM   cdb_services
where con_id=3
and name='salesXDB'
ORDER BY 1;

NAME NETWORK_NAME CREATION_DATE PDB ENABLED
salesXDB salesXDB 03/03/2022 11:28:56 sales NO

There is indeed a service called salesXDB, which is a remnant from an earlier point in time when "sales" was a non-CDB database.
It has now been replaced with the container database XDB service, in my case called "cdbXDB".

How to address the warnings

Log onto the sales pluggable database:
sqlplus / as sysdba

alter session set container=sales;

Session altered.
Remove the service from the PDB:
exec dbms_service.delete_service('salesXDB');

PL/SQL procedure successfully completed.
If you check the CDB_SERVICES again, it will now be gone.

Restart the pluggable database:
sqlplus / as sysdba

alter session set container=sales;

Session altered.

shutdown immediate
startup

If you query the PDB_PLUG_IN_VIOLATIONS again, you will see that the value for status in the error is now set to RESOLVED:
TIME NAME CAUSE TYPE MESSAGE STATUS ACTION
11.10.2022 12:49 sales Service Name Conflict WARNING Service name or network name of service salesXDB in the PDB is invalid or conflicts with an existing service name or network name in the CDB. RESOLVED Drop the service and recreate it with an appropriate name.


Source: PDB Name Conflicts With Existing Service Name In The CDB Or The PDB (Doc ID 2247291.1) from Oracle Support

Thursday, June 30, 2022

How to solve ORA-01187: cannot read from file because it failed verification tests

After a successful clone, the alert log reports:
2022-06-30T04:04:17.542368+02:00
Errors in file /orainst/oracle/diag/rdbms/testdb01/testdb01/trace/testdb01_dbw0_130154.trc:
ORA-01186: file 201 failed verification tests
ORA-01122: database file 201 failed verification check
ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'
ORA-01203: wrong incarnation of this file - wrong creation SCN
File 201 not verified due to error ORA-01122

Any operation you try against the database will give the following error stack in return:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'
ORA-06512: at "SYS.DBMS_LOB", line 741
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5420
ORA-06512: at line 1

Check tempfile situation:
SYS@testdb01>SQL>select ts#,status,enabled, name from v$tempfile;

       TS# STATUS  ENABLED    NAME
---------- ------- ---------- ---------------------------------
         3 ONLINE  READ WRITE /data/oradata/testdb01/temp01.dbf
         3 ONLINE  READ WRITE /data/oradata/testdb01/temp02.dbf
A query against dba_temp_files, however, will give an error:
SYS@testdb01>SQL>SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP';
SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP'
                                        *
ERROR at line 1:
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'

Is the temporary tablespace the database default?
SYS@testdb01>SQL>
col property_value format a30
SELECT PROPERTY_VALUE   
FROM DATABASE_PROPERTIES 
WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_VALUE
-----------------------
TEMP
Yes it is, so we cannot drop it; we will have to add new tempfiles and drop the old ones.

Add new tempfile:
SYS@testdb01>SQL>ALTER TABLESPACE temp ADD TEMPFILE '/data/oradata/testdb01/temp.dbf' size 256M;

Tablespace altered.

Drop the old ones:
SYS@testdb01>SQL>ALTER DATABASE TEMPFILE '/data/oradata/testdb01/temp01.dbf' DROP INCLUDING DATAFILES;

Database altered.

SYS@testdb01>SQL>ALTER DATABASE TEMPFILE '/data/oradata/testdb01/temp02.dbf' DROP INCLUDING DATAFILES;

Database altered.


Both of the old tempfiles are now gone from the data dictionary:
SYS@testdb01>SQL>
set lines 200
col tablespace_name format a20
col file_name format a50
SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP';

TABLESPACE_NAME         FILE_NAME
---------------------   --------------------------------------
TEMP                    /data/oradata/testdb01/temp.dbf
Note that the old tempfiles are still on disk:
SYS@testdb01>SQL>!ls -la /data/oradata/testdb01/temp01.dbf
-rw-r----- 1 oracle dba 114302976 Jun 30 04:04 /data/oradata/testdb01/temp01.dbf

If you want to add the old tempfiles back to your temp tablespace, you can do this with the REUSE keyword:
SYS@testdb01>SQL>ALTER TABLESPACE temp ADD TEMPFILE '/data/oradata/testdb01/temp01.dbf' size 256M reuse;

Tablespace altered.
In my case, the other tempfile, /data/oradata/testdb01/temp02.dbf, was not physically present on disk.
The alert log confirms this:
2022-06-30T04:04:18.302852+02:00
Errors in file /data/oracle/diag/rdbms/testdb01/testdb01/trace/testdb01_dbw0_130154.trc:
ORA-01157: cannot identify/lock data file 202 - see DBWR trace file
ORA-01110: data file 202: '/data/oradata/testdb01/temp02.dbf'
ORA-17503: ksfdopn:4 Failed to open file /data/oradata/testdb01/temp02.dbf
ORA-17500: ODM err:File does not exist

Wednesday, June 29, 2022

ORA-01017: incorrect username/password, login denied when using a wallet



If you find yourself in the situation where you cannot make a connection using your wallet because of the error
ORA-01017: incorrect username/password, login denied when using a wallet
it might be worth checking that the user you have created a wallet entry for is also granted the SYSDBA privilege.
Example:
I want to use a database user called "dbamaster" to perform a clone from an active database over the network.
Since I do not wish to reveal the password of the "dbamaster" user, I add the user to a wallet, and then use the notation
connect target /@proddb
connect auxiliary /@testdb
to connect.

Listing the content of the wallet shows that everything is in order:
mkstore -wrl $TNS_ADMIN/wallet -listCredential
Oracle Secret Store Tool Release 19.0.0.0.0 - Production
Version 19.4.0.0.0
Copyright (c) 2004, 2021, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: testdb dbamaster
1: proddb dbamaster
You can connect successfully to both databases as the "dbamaster" user over TNS:
-- From auxiliary to target
sqlplus dbamaster@proddb

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 29 20:03:09 2022
Version 19.15.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

DBAMASTER@proddb>

-- From target to auxiliary
sqlplus dbamaster@testdb

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jun 29 20:43:43 2022
Version 19.15.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.15.0.0.0

DBAMASTER@testdb SQL>


Everything seems to be in order, but the connection still fails with an incorrect username/password error.

Check the v$pwfile_users view, which lists the users that have been granted the SYSDBA privilege:
col account_status format a20
col username format a20
col password_profile format a20
select username,sysdba,sysoper,account_Status,password_profile from v$pwfile_users;

USERNAME             SYSDB SYSOP ACCOUNT_STATUS       PASSWORD_PROFILE
-------------------- ----- ----- -------------------- --------------------
SYS                  TRUE  TRUE  OPEN                 NOEXP

In this case user "dbamaster" was missing.

Add it:
grant sysdba to dbamaster;

Check the list of sysdbas again. You should now have two entries there:
 select username,sysdba,sysoper,account_Status,password_profile from v$pwfile_users;

USERNAME             SYSDB SYSOP ACCOUNT_STATUS       PASSWORD_PROFILE
-------------------- ----- ----- -------------------- --------------------
SYS                  TRUE  TRUE  OPEN                 NOEXP
DBAMASTER            TRUE  FALSE OPEN                 NOEXP

Your connections should now work:
RMAN> connect target /@proddb

connected to target database: proddb (DBID=253631174)

RMAN> connect auxiliary /@testdb

connected to auxiliary database: TESTDB (DBID=296823958)

Friday, June 17, 2022

How to solve ORA-17628: Oracle error 1031 returned by remote Oracle server ORA-01031: insufficient privileges when cloning a non-cdb oracle instance to a PDB

When attempting to clone my database testdb01, a normal non-CDB database, into a CDB and convert it to a PDB, I hit a permission error. I found the solution on Oracle Support's web site, Doc ID 2485839.1.

Prior to this error, I had set up a database link in my CDB:
SYS@cdb>SQL>create database link noncdb connect to system identified by mypassword using 'testdb01.mydomain.no';
I tested the database link; it worked fine:
SYS@cdb>SQL>select host_name from v$instance@noncdb;

HOST_NAME
--------------------------------------------------
mynoncdbserver.mydomain.no
I tried to create the pluggable database, using the appropriate file destination paths already created:
create pluggable database testdb01 
from non$cdb@noncdb 
file_name_convert=('/data1/oradata/testdb01/','/data1/oradata/cdb/testdb01/', 
                   '/data2/oradata/testdb01/', '/data2/oradata/cdb/testdb01/');
Errors out:
ORA-17628: Oracle error 1031 returned by remote Oracle server
ORA-01031: insufficient privileges
To solve the error, simply log on to your non-CDB database as a SYSDBA user and grant the privilege "create pluggable database" to the user you're using for the copy (in my case, SYSTEM):
grant create pluggable database to system;
Try the create pluggable database command again, and it succeeds.
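If you want to confirm the grant from the CDB side before retrying, a quick check over the database link could look like this (sketch):
sqlplus -s / as sysdba <<'EOF'
select grantee, privilege
from   dba_sys_privs@noncdb
where  grantee = 'SYSTEM'
and    privilege = 'CREATE PLUGGABLE DATABASE';
EOF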

Tuesday, May 3, 2022

The cause for and solution to the error message "Database mount ID mismatch" in your previously configured standby database

If you have a previously configured standby database and you have converted it to a freestanding database, no longer part of a Data Guard setup, you may see errors in the alert log looking like this:
2022-05-03T12:52:33.896905+02:00
RFS[1332]: Assigned to RFS process (PID:128748)
RFS[1332]: Database mount ID mismatch [0xb40d4ed8:0xb42c30e2] (3020771032:3022794978)
RFS[1332]: Not using real application clusters
Reason:
Even though you have used the Data Guard broker to stop log shipping and activated the standby database (in effect making it read-write), the broker will not stop the previously configured primary database from shipping logs to its previously configured standby destination.

Solution:
Cut off the log shipping from the previously configured primary database completely by either

1) changing the value of log_archive_dest_state_2 from enabled to defer:
alter system set log_archive_dest_state_2=defer scope=both;
or by

2) removing the value of log_archive_dest_2 altogether:
alter system set log_archive_dest_2='' scope=both;
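To confirm on the previously configured primary that the destination is no longer in use, a quick check of the two parameters (sketch):
sqlplus -s / as sysdba <<'EOF'
show parameter log_archive_dest_2
show parameter log_archive_dest_state_2
EOF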

Tuesday, April 5, 2022

Workaround for Automatic Diagnostic Repository Errors when executing "validate database" through Data Guard Broker

If you have a Data Guard setup and are using the broker, you may see the following error when validating your setup before a switchover:
DGMGRL> validate database stb01

  Database Role:     Physical standby database
  Primary Database:  proddb01

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    proddb01:  Off
    stb01:     Off

  Managed by Clusterware:
    proddb01:  NO
    stb01:     NO
    Warning: Ensure primary database's StaticConnectIdentifier property
    is configured properly so that the primary database can be restarted
    by DGMGRL after switchover

  Automatic Diagnostic Repository Errors:
    Error                       proddb01 stb01
    System data file missing    NO       YES
    User data file missing      NO       YES
This problem and its solution are outlined in Doc ID 2300040.1 "Known issues when using "Validate database" DGMGRL command" at Oracle Support.

Cause: The issue occurs because the old health check messages were not purged properly and the command VALIDATE DATABASE signals it found a failed check.

To get rid of this warning, rename the file HM_FINDING.ams in the ADR rdbms metadata folder, or move it to another folder. This is what I did on my standby server:
 cd /u01/oracle/diag/
find . -name "HM_FINDING.ams"
./rdbms/stb01/stb01/metadata/HM_FINDING.ams
./rdbms/stb01/proddb01/metadata/HM_FINDING.ams
./plsql/user_oracle/host_1804485962_107/metadata/HM_FINDING.ams
Rename or remove the files listed above and execute the "validate database" command in dgmgrl again. The message should now be gone for good.
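In one pass, the renaming could look like this (a sketch; the .old suffix is arbitrary):
cd /u01/oracle/diag
for f in $(find . -name "HM_FINDING.ams"); do
  mv "$f" "$f.old"
done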

Note that the file named HM_FINDING.ams will most likely reappear immediately after deletion. But this new copy will not cause the Data Guard Broker to throw warnings.

Wednesday, February 23, 2022

How to fix the pending_dst_session error when running the preupgrade script

If you receive an error during the precheck phase of an upgrade from 18c to 19c which points to "pending_dst_sessions", you can follow Doc ID 1509653.1 "Updating the RDBMS DST version in 12c Release 1 (12.1.0.1 ) and above using DBMS_DST".

In short, execute the following:
SELECT version FROM v$timezone_file;
In my case the result was

   VERSION
----------
        32
Then, check the following:
col property_name format a40
col value format a40
set lines 200
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
Result:
PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 31
DST_UPGRADE_STATE                        UPGRADE
Then, create a file called "dst.sql" and add the following:
col property_name format a40
col value format a40
set lines 200
alter session set "_with_subquery"=materialize;
alter session set "_simple_view_merging"=TRUE;
set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/

SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;
exit
Execute the script:
oracle@myserver.mydomain.com:[testdb01]# sqlplus / as sysdba @dst.sql
Output:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.14.0.0.0


Session altered.


Session altered.

Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "DVSYS"."SIMULATION_LOG$"
Number of failures: 0
Table list: "DVSYS"."AUDIT_TRAIL$"
Number of failures: 0
Failures:0

PL/SQL procedure successfully completed.

An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.


PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   32
DST_SECONDARY_TT_VERSION                 0
DST_UPGRADE_STATE                        NONE

SQL> exit
The error should now be fixed and you can try to run the prechecks for the upgrade once more.

Wednesday, September 22, 2021

How to resolve ORA-48128 during export

Applicable to Oracle 19.3 and onwards.

The following error may occur when you export data using Data Pump.

Parameter file:
userid=system/mypasswd
directory=DATA_PUMP_DIR
logfile=exp_sales.log
dumpfile=klm.dmp
exclude=statistics
job_name=exp_klm
schemas=KLM
flashback_time=systimestamp

Execute the export:
expdp parfile=expdp.par
Result:
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CREATE_OBJECT_ROWS [USER]
USER:"GRDA_TEST"
ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-48128: opening of a symbolic link is disallowed

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 12630
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734
ORA-06512: at "SYS.KUPW$WORKER", line 11341

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xcf98d208     33543  package body SYS.KUPW$WORKER.WRITE_ERROR_INFORMATION
0xcf98d208     12651  package body SYS.KUPW$WORKER.DETERMINE_FATAL_ERROR
0xcf98d208     11612  package body SYS.KUPW$WORKER.CREATE_OBJECT_ROWS
0xcf98d208     15278  package body SYS.KUPW$WORKER.FETCH_XML_OBJECTS
0xcf98d208      3917  package body SYS.KUPW$WORKER.UNLOAD_METADATA
0xcf98d208     13746  package body SYS.KUPW$WORKER.DISPATCH_WORK_ITEMS
0xcf98d208      2439  package body SYS.KUPW$WORKER.MAIN
0xcf996200         2  anonymous block

KUPW: Object row index into parse items is: 1
KUPW: Parse item count is: 4
KUPW: In function CHECK_FOR_REMAP_NETWORK
KUPW: Nothing to remap
KUPW: In procedure BUILD_OBJECT_STRINGS - non-base info
KUPW: In procedure BUILD_SUBNAME_LIST with USER:KLM.KLM
KUPW: In function NEXT_PO_NUMBER
KUPW: PO number assigned: 13
KUPF$FILE.WRITE_LOB
KUPW: In procedure DETERMINE_FATAL_ERROR with ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-48128: opening of a symbolic link is disallowed
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734

Job "SYSTEM"."EXP_KLM" stopped due to fatal error at Wed Sep 22 08:42:49 2021 elapsed 0 00:00:04
Reason: bug

Solution:
Outlined in Doc ID 2654167.1 "Customer Recommended DataPump Export (EXPDP) Fails Due to ORA-39155 ORA-48128":
ALTER SYSTEM  SET "_disable_directory_link_check" =true  SCOPE=SPFILE;
shutdown immediate
startup
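An alternative worth considering (a sketch; it assumes the path behind DATA_PUMP_DIR is reached through a symbolic link, which is what ORA-48128 complains about) is to re-point the directory object at the fully resolved physical path instead of setting the hidden parameter:
readlink -f /data1/export
sqlplus -s / as sysdba <<'EOF'
-- replace the path below with the physical path reported by readlink above
create or replace directory DATA_PUMP_DIR as '/physical/path/without/symlinks';
EOF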

Monday, August 2, 2021

How to work around ORA-38323: policy conflicts with policy

You try to enable a tablespace-wide ADO policy, and it fails with:
ORA-38323: policy conflicts with policy 42
If you look up the error with oerr ora 38323, Oracle will give you the following cause and action:
* Cause: An attempt was made to create a policy with the same action
         type and based on the same statistic as another policy on the
         object.
 *Action: Use a different action and/or statistic for the new policy,
          or delete the old policy.
Verify that there is a policy on the tablespace already:
SELECT * 
FROM DBA_ILMPOLICIES 
WHERE TABLESPACE IS NOT NULL;
Result:
POLICY_NAME POLICY_TYPE TABLESPACE ENABLED DELETED
P43 DATA MOVEMENT DATA1 YES NO

There is indeed a policy named P43. What kind of policy is it?
SELECT policy_name,action_type,scope,compression_level,condition_type,condition_days,policy_subtype
FROM DBA_ILMDATAMOVEMENTPOLICIES 
WHERE POLICY_NAME IN (SELECT POLICY_NAME FROM DBA_ILMPOLICIES WHERE TABLESPACE IS NOT NULL); 
Result:
POLICY_NAME ACTION_TYPE SCOPE COMPRESSION_LEVEL CONDITION_TYPE   CONDITION_DAYS POLICY_SUBTYPE
P43         COMPRESSION GROUP ADVANCED          LAST ACCESS TIME 1              DISK

So the policy is a directive to the Oracle server to compress all objects after one day of no access. If you need to alter this policy, the old one must be dropped first:
ALTER TABLESPACE DATA1 DEFAULT ILM DELETE POLICY P43;
You are now free to add a new ADO policy to the tablespace:
ALTER TABLESPACE DATA1
DEFAULT ILM ADD POLICY
ROW STORE COMPRESS ADVANCED
GROUP AFTER 30 DAYS OF NO ACCESS;
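To verify the swap, the same dictionary view used above can be queried again (sketch):
sqlplus -s / as sysdba <<'EOF'
select policy_name, policy_type, tablespace, enabled, deleted
from   dba_ilmpolicies
where  tablespace is not null;
EOF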

Wednesday, June 30, 2021

How to deal with ORA-01110 during export

If you receive errors like this during data pump export
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 204: '/datafiles/oradata/proddb/temp.dbf'
ORA-06512: at "SYS.DBMS_LOB", line 724
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4546
ORA-06512: at line 1
you need to drop and recreate your temporary tablespace(s). If you only use bigfile temporary tablespaces, you can use the following script to rectify the situation:
set trimspool on
set lines 200
set feedback off
set verify off
set heading off
set echo off

spool 1.drop_non_default_temp.sql
select 'spool 1.drop_non_default_temp.log' from dual;
select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name not in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;

spool 2.recreate_temp.sql
select 'spool 2.recreate_temp.log' from dual;
select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name not in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 3.alter_default_temp.sql
select 'spool 3.alter_default_temp.log' from dual;
select 'alter database default temporary tablespace TEMP;' from dual;
select 'exit' from dual;
spool off

spool 4.drop_default_temp.sql
select 'spool 4.drop_default_temp.log' from dual;
select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 5.create_default_temp.sql
select 'spool 5.create_default_temp.log' from dual;
select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 6.reset_default_temp.sql
select 'spool 6.reset_default_temp.log' from dual;
select 'alter database default temporary tablespace ' || ts.name || ';'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

exit
Run the script in your database, and it will produce 6 new SQL files, numbered 1 to 6. Run them sequentially, as shown below, and you will have recreated your temporary tablespaces. After this, your data pump export will start as expected.
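Running them in order could look like this from the shell (a sketch; the file names come from the spool commands in the script above):
for f in 1.drop_non_default_temp.sql 2.recreate_temp.sql 3.alter_default_temp.sql \
         4.drop_default_temp.sql 5.create_default_temp.sql 6.reset_default_temp.sql
do
  sqlplus -s / as sysdba @$f
done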