echo "Test: are there subdirectories in /orainst/oracle/cfgtoollogs/dbca/cdb ?" ls $(pwd)/*/ > /dev/null 2>&1 if [ $? == 0 ]; then echo "Yes" export PDBDIR=`ls -d */ | awk -F '/' '{ print $1 }'` grep ERROR --exclude-dir=${PDBDIR} * | egrep -v $ignorelist else echo "No" grep ERROR * | egrep -v $ignorelist fiThe code above is based on a finding in this post on Stack Overflow.
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts starts with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Friday, February 9, 2024
Script snipplet to test for subdirectories
Thursday, February 8, 2024
Script snippet to scan the dbca logs for errors
Here is a way to scan the logs in the directory where dbca creates its logfiles, by default under your $ORACLE_BASE/cfgtoollogs/dbca/$ORACLE_SID.
If it happens to be a multitenant database, dbca will create subdirectories for the PDB(s).
If you want to skip scanning these, exclude them from your grep command, like below:
cd /orasw/oracle/cfgtoollogs/dbca/cdb
export PDBDIR=`ls -d */ | awk -F '/' '{ print $1 }'`
export ignorelist='ORA_IGNORE_CVU_ERRORS|exectask.sh|utlrp0.log|RMANUtil\$RMANUtilErrorListener.handleError:1386|RMANUtil\$RMANUtilErrorListener.handleError:1383|RMANEngine.readSqlOutput:1046'
grep ERROR --exclude-dir=${PDBDIR} * | egrep -v $ignorelist

The errors I have excluded are based on my own observations on what's important or not during dbca usage.
Script to check for the number of control files in a database and create a minimum of two if only one is found
A script that checks for the number of controlfiles and creates two new ones if only one is found.
#!/bin/bash
# 080224: Created
# Check if there is only one controlfile present. If yes, create two new ones in the desired locations

chkcntrfiles() {
  # cleanup old files
  rm -f /tmp/chk_cntrfile_location.sql
  rm -f /tmp/chk_cntrfile_location.log
  sqlfile=/tmp/chk_cntrfile_location.sql
  logfile=/tmp/chk_cntrfile_location.log
  echo "Check controlfile locations in database " $ORACLE_SID
  cat << EoF > ${sqlfile}
set lines 200 heading off feedback off verify off echo off termout off
set pages 0
set trimspool on
col name format a70
spool ${logfile}
select name from v\$controlfile;
exit
EoF
  cd /home/oracle/ && sqlplus -s -L / as sysdba @${sqlfile}
}

chkcntrfiles

if [ `cat ${logfile} | wc -l` -ge 2 ]; then
  echo "There are at least 2 controlfiles present"
else
  echo "There is only one controlfile present."
  echo "Will now shutdown db and make another copy"
  echo "shutdown immediate" | sqlplus / as sysdba
  export org_contr_file=`cat ${logfile}`
  cp -v $org_contr_file /disk1/oradata/$ORACLE_SID/control01.ctl
  cp -v $org_contr_file /disk2/fra/$ORACLE_SID/control02.ctl
  echo "startup nomount" | sqlplus / as sysdba
  echo "alter system set control_files='/disk1/oradata/$ORACLE_SID/control01.ctl','/disk2/fra/$ORACLE_SID/control02.ctl' scope=spfile;" | sqlplus / as sysdba
  echo "startup mount force;" | sqlplus / as sysdba
  echo "Check new controlfile locations in database " $ORACLE_SID
  chkcntrfiles
  if [ `cat ${logfile} | wc -l` -lt 2 ]; then
    echo "We tried creating two new controlfiles but failed for some reason. Manual checkup required."
    exit 1
  fi
  echo "Alles OK!"
  echo "alter database open;" | sqlplus / as sysdba
fi
exit
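If you want to try it, a minimal way to run it could look like the sketch below. The script file name and the SID are hypothetical; I am assuming the standard oraenv utility is available to set the environment for the target instance.

# save the script above as check_controlfiles.sh (hypothetical name) and make it executable
chmod +x check_controlfiles.sh

# set the Oracle environment for the target instance (SID is just an example)
export ORAENV_ASK=NO
export ORACLE_SID=testdb01
. oraenv

# run it - note that it will bounce the database if only one controlfile is found
./check_controlfiles.sh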
Wednesday, January 31, 2024
How come I cannot log on to my instance as sysdba, when the ORACLE_SID variable is set correctly?
Here's a reminder if you use a symbolic link for your ORACLE_HOME variable:
You will not be able to attach to your instance using OS authentication, like under the following circumstances:
echo $ORACLE_SID
testdb01

echo $ORACLE_HOME
/orasw/oracle/product/current

ls -la /orasw/oracle/product/current
lrwxrwxrwx 1 oracle dba 12 Jan 31 08:34 /orasw/oracle/product/current -> 19.21

sqlplus / as sysdba

The above statement will not be able to connect to your instance.
You can, however, connect to the instance by going through the listener instead:
sqlplus sys@testdb01 as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 31 08:38:42 2024
Version 19.21.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.21.0.0.0

SYS@testdb01> show user
USER is "SYS"

So in order to create a bequeath session, you need an ORACLE_HOME variable that does not use symbolic links.
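As a sketch of the workaround, based on the paths shown in the example above, you could point ORACLE_HOME directly at the physical directory the symbolic link resolves to and retry the OS-authenticated connection:

# use the resolved physical path (the symlink target shown above) instead of the symlink
export ORACLE_HOME=/orasw/oracle/product/19.21
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=testdb01

# the bequeath connection should now be accepted
sqlplus / as sysdba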
See this post about IPC and bequeath sessions for more details.
Saturday, January 27, 2024
Solution to ERRORMSG: Invalid SYSTEM database user credential when running txkPostPDBCreationTasks.pl as a part of an EBS database upgrade
When upgrading an EBS database from version 12c to 19c, you have to convert to the multitenant architecture.
Oracle supplies scripts for this entire operation, as outlined in Doc ID 580629.1 "Interoperability Notes: Oracle E-Business Suite Release 12.1 with Oracle Database 19c".
When you reach section 6, "Run the post PDB script", you are supposed to run the txkPostPDBCreationTasks.pl script, which updates the PDB information.
I found that this script is essential and it must run to completion. It is restartable, so if it fails for some reason, you can fix the root cause and rerun the script.
Here is the error I ran into. The logfile shows it, and it's simple to understand:
==============================
Inside searchFileContents()...
==============================
log_file: /u01/app/oracle/product/19x00_231017/appsutil/log/TXK_POST_PDB_Fri_Jan_26_19_15_34_2024/validate_system_password.log
pattern: ERROR
================
Pattern found...
================
EXIT STATUS: 1
Invalid SYSTEM database user credentials.
LOG FILE: /u01/app/oracle/product/19x00_231017/appsutil/log/TXK_POST_PDB_Fri_Jan_26_19_15_34_2024/validate_system_password.log.

*******FATAL ERROR*******
PROGRAM : (/u01/app/oracle/product/19x00_231017/appsutil/bin/txkPostPDBCreationTasks.pl)
TIME    : Fri Jan 26 19:18:47 2024
FUNCTION: main::validateSystemSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid SYSTEM database user credentials.

*******FATAL ERROR*******
PROGRAM : (/u01/app/oracle/product/19x00_231017/appsutil/bin/txkPostPDBCreationTasks.pl)
TIME    : Fri Jan 26 19:18:47 2024
FUNCTION: main::validateSystemSchemaCredentials [ Level 1 ]
ERRORMSG: Invalid SYSTEM database user credentials.

So the system password was not set correctly. Note that this is the only error in the logfile. All other tests had passed!
Since this is a non-EBS password, it can be set manually with the alter user statement, which I did.
Log on to the cdb:
export ORACLE_SID=cdb
sqlplus / as sysdba

alter user system identified by mysecretpassword container=all;

Test your connection, preferably from the apptier. Log on to the PDB, in this case EBSTEST, using the system account:
sqlplus system@EBSTEST

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

SQL> show user
USER is "SYSTEM"

So the password is now confirmed to be correct.
You can now rerun the script:
export ORACLE_SID=cdb

# source the environment
cd $ORACLE_HOME/appsutil
. ./txkSetCfgCDB.env dboraclehome=<your 19c OH path here>

perl $ORACLE_HOME/appsutil/bin/txkPostPDBCreationTasks.pl -dboraclehome=<your 19c OH path here> \
     -outdir=<your 19c OH path here>/appsutil/log -cdbsid=cdb -pdbsid=EBSTEST \
     -appsuser=<APPSUSER> -dbport=<LISTENING PORT> -servicetype=onpremise
Why is it essential that the script runs to completion, correctly without errors?
First of all, it updates important tables in the APPS schema. After an upgrade to 19c, the UTL_FILE_DIRS values are no longer found in the database init.ora file or the spfile; they are instead stored in the database and exposed through v$parameter and v$parameter2.
select name, value from v$parameter where name = 'utl_file_dir';

and

select name, value from v$parameter2 where name = 'utl_file_dir';

should yield the same output as the text file found in your $ORACLE_HOME/dbs/EBSTEST_utlfiledir.txt file.
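For a quick comparison on the database server, you could simply print that text file and check it against the query output (the path is the one mentioned above):

# print the utl_file_dir list kept in the text file
cat $ORACLE_HOME/dbs/EBSTEST_utlfiledir.txt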
In my case, the file

$ORACLE_HOME/dbs/EBSTEST_utlfiledir.txt

showed

/sw/oracle/product/temp/EBSTEST
/usr/tmp/EBSTEST
/<oracle_19c_home>/appsutil/outbound/EBSTEST_<myservername>

Second, it creates two environment files in your $ORACLE_HOME that can be used to source the cdb and the PDB environments, respectively. They are called

cdb_<yourservername>.env
EBSTEST_<yourservername>.env
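As a sketch of how these could be used (file names as listed above, with <yourservername> replaced by your database server's host name):

# source the CDB environment
. $ORACLE_HOME/cdb_<yourservername>.env

# or source the PDB environment
. $ORACLE_HOME/EBSTEST_<yourservername>.env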
Third, it creates your database server's context file, $ORACLE_HOME/appsutil/EBS_<yourservername>.xml
Without all of the above correctly done, you cannot proceed with running autoconfig on your apptier. It will keep failing until you fix the database-part of your migration.
Friday, January 26, 2024
How to find installed patches through SQL in an EBS database
During an EBS upgrade, I had to check whether or not a specific patch, 8796558, still needed to be applied.
It turned out it is no longer relevant; it has been rolled up in cumulative patch sets. See the listing below.
set lines 200
col patch_name format a30
SELECT DISTINCT a.bug_number, e.patch_name, e.patch_type, TRUNC(c.end_date) applied_date
FROM   ad_bugs a,
       ad_patch_run_bugs b,
       ad_patch_runs c,
       ad_patch_drivers d,
       ad_applied_patches e
WHERE  a.bug_id = b.bug_id
AND    b.patch_run_id = c.patch_run_id
AND    c.patch_driver_id = d.patch_driver_id
AND    d.applied_patch_id = e.applied_patch_id
AND    a.bug_number in ('8796558')
ORDER BY 2 DESC;

BUG_NUMBER   PATCH_NAME   PATCH_TYPE   APPLIED_DATE
----------   ----------   ----------   ------------
8796558      22644544     PATCH-SET    04-SEP-19
8796558      21236633     PATCH-SET    27-MAI-16
8796558      19030202     PATCH-SET    15-DES-15
8796558      17774755     PATCH-SET    11-OKT-14

22644544 --> Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 5 [RPC5]
21236633 --> Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 4 [RPC4]
19030202 --> Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 2 [RPC2]
17774755 --> Oracle E-Business Suite Release 12.1.3+ Recommended Patch Collection 1 [RPC1]
[DBT-50000] Unable to check available system memory when running dbca
dbca returned the following error upon launch:
[FATAL] [DBT-50000] Unable to check available system memory.
   CAUSE: An exception occured while checking kernel parameter.
*ADDITIONAL INFORMATION:*
Exception details - The default location for framework home is not available. It must be specified

Cause:
The TMP and/or TMPDIR directory doesn't exist.
Solution:
Make sure the .bash_profile contains the correct values for the TMP and TMPDIR variables. They must point to an accessible and writable directory.
In my case:
mkdir -p /u01/ora19c/tmp

Open .bash_profile, and add the following:
OWNER=ora19c; export OWNER
TMP=/u01/$OWNER/tmp; export TMP
TMPDIR=/u01/$OWNER/tmp; export TMPDIR

Launch dbca again. It should now proceed as expected.
This article from IBM pointed me in the right direction.
Doc ID 2534894.1 "How To Change Default TEMP Location When Creating Database By DBCA?" points to the same solution, but here Oracle shows you how to append the -J flag to your dbca command, e.g.
dbca -J-Djava.io.tmpdir=/home/oracle/tmp