Thursday, June 12, 2014

How to set the db_file_name_convert and log_file_name_convert parameters

In this example, I want to redirect the files to a slightly different path during a clone from prod to test.

In prod, files are to be found under:

* /u02/oradata/proddb01/datafile

In test, I want them placed under:

* /u02/oradata/testdb01

Furthermore, some tempfiles are placed differently than regular datafiles in prod. In test I do not need or want multiple destinations; all files should be placed under /u02/oradata/testdb01. Therefore, my db_file_name_convert parameter must have multiple pairs of source and target locations.

The log files can all be placed under similar paths, so the conversion string only needs to contain the one thing that differs: the ORACLE_SID.

When using a pfile:

db_file_name_convert=('/u02/oradata/proddb01/datafile','/u02/oradata/testdb01',
                      '/u02/oradata/proddb01/tempfile','/u02/oradata/testdb01')

log_file_name_convert=('proddb01','testdb01')

When using an spfile:

alter system set db_file_name_convert='/u02/oradata/proddb01/datafile','/u02/oradata/testdb01',
                                      '/u02/oradata/proddb01/tempfile','/u02/oradata/testdb01' scope=spfile;

alter system set log_file_name_convert='proddb01','testdb01' scope=spfile;

It is also supported to repeat log_file_name_convert multiple times in the parameter file, like this:
log_file_name_convert=('/u01/app/oracle/oradata/proddb01/onlinelog/','/u04/oradata/testdb01/onlinelog/')
log_file_name_convert=('/u01/app/oracle/flash_recovery_area/proddb01/onlinelog/','/u04/oradata/testdb01/onlinelog/')
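
As a side note, the same conversions could be supplied directly in an RMAN DUPLICATE command instead of the auxiliary parameter file. A minimal sketch, untested, assuming TNS aliases proddb01 and testdb01 and a duplicate over the network:

rman target sys@proddb01 auxiliary sys@testdb01

DUPLICATE TARGET DATABASE TO testdb01
  FROM ACTIVE DATABASE
  SPFILE
    SET db_file_name_convert '/u02/oradata/proddb01/datafile','/u02/oradata/testdb01',
                             '/u02/oradata/proddb01/tempfile','/u02/oradata/testdb01'
    SET log_file_name_convert 'proddb01','testdb01';
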
Oracle Docs:
db_file_name_convert
log_file_name_convert

How to pin SGA in memory on an AIX server

On AIX, if you want to pin the SGA in memory, set the parameter LOCK_SGA to TRUE.

A prerequisite for locking the SGA is that you set the proper capabilities for the user owning and spawning the Oracle rdbms processes.

Check the currently set capabilities as follows:
[root@myserver] lsuser -a capabilities oracle
 oracle
Set capabilities:
[root@myserver] chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle
 [root@myserver] lsuser -a capabilities oracle
 oracle capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE
 [root@myserver]
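
With the capabilities in place, the rest is just the parameter change and an instance restart. A minimal sketch, assuming an spfile; the vmo tunable v_pinshm shown here is an assumption on my part and should be verified against your AIX release before use:

# As root: allow pinned shared memory (verify tunable and value for your AIX level)
vmo -p -o v_pinshm=1

# As oracle: request a pinned SGA and bounce the instance
sqlplus / as sysdba <<EOF
alter system set lock_sga=true scope=spfile;
shutdown immediate
startup
EOF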


Wednesday, June 11, 2014

How to work around SEVERE: OUI-10197:Unable to create a new Oracle Home when using clone.pl

When cloning a database home onto a new server, the clone.pl script threw the following error:

perl clone.pl ORACLE_HOME=/u01/oracle/product/11204 ORACLE_HOME_NAME=11204 

SEVERE: OUI-10197:Unable to create a new Oracle Home at /u01/oracle/product/11204. Oracle Home already exists at this location. Select another location.

The server was freshly installed, with newly created volume groups, logical volumes, and file systems.

I then realized that since the software had been tarred and untarred into a new directory, there could be an existing Oracle inventory present, and I would need to "detach" the ORACLE_HOME before cloning it. The tarball did indeed contain an inventory pointer:

 myserver>cd $ORACLE_HOME
 myserver>cat oraInst.loc.org
 inventory_loc=/u01/oracle/oraInventory
 inst_group=dba
Since there was no such directory present, I created it:
myserver>mkdir /u01/oracle/oraInventory/ContentsXML/
myserver>cd /u01/oracle/oraInventory/ContentsXML/
myserver>vi inventory.xml
In the file inventory.xml, I added the following minimal information:
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
 <!-- Do not modify the contents of this file by hand. -->
 <inventory>
 <version_info>
    <saved_with>11.2.0.4.0</saved_with>
    <minimum_ver>2.1.0.6.0</minimum_ver>
 </version_info>
 <home_list>
 <home idx="1" loc="/u01/oracle/product/11204" name="11204" type="O">
 </home></home_list>
 <compositehome_list>
 </compositehome_list>
 </inventory>

I then added
 -invPtrLoc /u01/oracle/product/11204/oraInst.loc
to the file $ORACLE_HOME/clone/config/cs.properties

Next, I detached the ORACLE_HOME:
myserver>cd $ORACLE_HOME/oui/bin
 myserver>./runInstaller -detachHome ORACLE_HOME=/u01/oracle/product/11204 -invPtrLoc /u01/oracle/product/11204/oraInst.loc
 Starting Oracle Universal Installer...

 Checking swap space: must be greater than 500 MB.   Actual 8192 MB    Passed
 The inventory pointer is located at /u01/oracle/product/11204/oraInst.loc
 The inventory is located at /u01/oracle/oraInventory
 'DetachHome' was successful.

Finally I was able to clone the ORACLE_HOME:

perl clone.pl ORACLE_HOME=/u01/oracle/product/11204 ORACLE_HOME_NAME=11204 ORACLE_BASE=/u01/oracle OSDBA_GROUP=dba

 ********************************************************************************

 Your platform requires the root user to perform certain pre-clone
 OS preparation.  The root user should run the shell script 'rootpre.sh' before
 you proceed with cloning.  rootpre.sh can be found at
 /u01/oracle/product/11204/clone directory.
 Answer 'y' if the root user has run 'rootpre.sh' script.

 ********************************************************************************

 Has 'rootpre.sh' been run by the root user? [y/n] (n)
 y
 ./runInstaller -clone -waitForCompletion  "ORACLE_HOME=/u01/oracle/product/11204" "ORACLE_HOME_NAME=11204" "ORACLE_BASE=/u01/oracle" "oracle_install_OSDBA=dba" -silent -noConfig -nowait -invPtrLoc /u01/oracle/product/11204/oraInst.loc
 Starting Oracle Universal Installer...

 Checking swap space: must be greater than 500 MB.   Actual 8192 MB    Passed
 Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-06-11_03-12-31PM. Please wait ...Oracle Universal Installer, Version 11.2.0.4.0 Production
 Copyright (C) 1999, 2013, Oracle. All rights reserved.

 You can find the log of this install session at:
  /u01/oracle/oraInventory/logs/cloneActions2014-06-11_03-12-31PM.log
 .................................................................................................... 100% Done.



 Installation in progress (Wednesday, June 11, 2014 3:12:45 PM CEST)
 ................................................................................                                                80% Done.
 Install successful

 Linking in progress (Wednesday, June 11, 2014 3:12:54 PM CEST)
 Link successful

 Setup in progress (Wednesday, June 11, 2014 3:15:32 PM CEST)
 Setup successful

 End of install phases.(Wednesday, June 11, 2014 3:15:59 PM CEST)
 WARNING:
 The following configuration scripts need to be executed as the "root" user.
 /u01/oracle/product/11204/root.sh
 To execute the configuration scripts:
     1. Open a terminal window
     2. Log in as "root"
     3. Run the scripts

 The cloning of 11204 was successful.
 Please check '/u01/oracle/oraInventory/logs/cloneActions2014-06-11_03-12-31PM.log' for more details.

How to deal with ORA-00020: maximum number of processes (%s) exceeded


I recently had a situation where access to the database was completely blocked because of the infamous error message
ORA-00020: maximum number of processes (%s) exceeded
Facts:
  • The database had processes set to 1000.
  • The Cloud Control agent was spawning hundreds of processes (obviously an error to troubleshoot as a separate action)
  • Connecting through sqlplus with OS authentication (sqlplus / as sysdba) failed for the same reason

At that time, the database had to become available to the users again ASAP.

When I have encountered these situations in the past, I have had to kill all the operating system processes and restart the instance. A brute-force method that is not particularly pretty, but sometimes necessary:

for a in $(ps -ef | grep $ORACLE_SID | grep -v grep | awk '{ print $2 }'); do
  kill -9 $a
done


It normally does the job when you really have no other option.
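
A slightly safer variant anchors the match to Oracle's own process names, so unrelated processes that merely contain the SID string survive. A sketch, assuming the standard ora_xxxx_<SID> and oracle<SID> naming; test the pattern before trusting it:

# Kill background processes (ora_xxxx_SID) and dedicated servers (oracleSID ...)
for pid in $(ps -ef | grep -E "ora_[a-z0-9]+_${ORACLE_SID}$|oracle${ORACLE_SID} " | grep -v grep | awk '{print $2}'); do
  kill -9 $pid
done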
This time, however, after having killed all the processes, Oracle still rejected connections to the database through sqlplus:

    sqlplus /nolog
    
     SQL*Plus: Release 11.2.0.4.0 Production on Fri Jun 6 09:07:23 2014
    
     Copyright (c) 1982, 2013, Oracle.  All rights reserved.
      SQL> connect /
     ERROR:
     ORA-00020: maximum number of processes (1000) exceeded
    
I then found a page at tech.e2sn.com that showed how to use sqlplus with a "preliminary connection".

Simply by using

 sqlplus -prelim "/ as sysdba"

I was able to connect and shut down the database with the abort option:
    sqlplus -prelim "/ as sysdba"
    
     SQL*Plus: Release 11.2.0.4.0 Production on Fri Jun 6 09:09:15 2014
     Copyright (c) 1982, 2013, Oracle.  All rights reserved.
    
     SQL> shutdown abort
     ORACLE instance shut down.
     SQL> exit
     Disconnected from ORACLE
    
After this point, the database could once again be restarted:
    sqlplus / as sysdba
    
     SQL*Plus: Release 11.2.0.4.0 Production on Fri Jun 6 09:10:38 2014
     Copyright (c) 1982, 2013, Oracle.  All rights reserved.
     Connected to an idle instance.
    SQL> startup
     ORACLE instance started.
    
     Total System Global Area 2137886720 bytes
     Fixed Size                  2248080 bytes
     Variable Size            1258291824 bytes
     Database Buffers          855638016 bytes
     Redo Buffers               21708800 bytes
     Database mounted.
      Database opened.
    

The article referred to above is worth reading, but in short, the -prelim option will not try to create private session structures in the SGA. This allows you to connect in order to perform debugging or shutdown operations.
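
The same preliminary connection also works with oradebug, which can be handy for collecting evidence before restarting. A sketch from memory; verify the dump levels against your version:

sqlplus -prelim "/ as sysdba"

SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
SQL> oradebug dump systemstate 266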

Great feature in adrci for searching in the alert log

A potentially very time-saving feature in Oracle's adrci is the ability to search the alert log for specific text, as I had to do to find out when a specific parameter was set:

    adrci
    
     ADRCI: Release 11.2.0.4.0 - Production on Wed Jun 11 12:03:34 2014
    
     Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
     ADR base = "/u01/oracle/admin/proddb01/diagnostic"
     adrci> show homes
     ADR Homes:
     diag/rdbms/proddb01/proddb01
     adrci> show alert -p "message_text like '%event%'"
    
     ADR Home = /u01/oracle/admin/proddb01/diagnostic/diag/rdbms/proddb01/proddb01:
     *************************************************************************
     Output the results to file: /tmp/alert_3343326_1_proddb01_1.ado
     "/tmp/alert_3343326_1_proddb01_1.ado" 44 lines, 2408 characters
     2013-08-25 14:35:46.344000 +02:00
     One of the following events caused this:
     2014-01-02 10:56:34.311000 +01:00
     OS Pid: 8455160 executed alter system set events '10852 trace name context forever, level 16384'
     2014-01-02 10:56:47.555000 +01:00
     ALTER SYSTEM SET event='10852 trace name context forever, level 16384' SCOPE=SPFILE;
     2014-01-10 00:43:02.945000 +01:00
       event                    = "10852 trace name context forever, level 16384"
     2014-01-31 19:32:59.471000 +01:00
       event                    = "10852 trace name context forever, level 16384"
     2014-02-01 09:12:59.653000 +01:00
       event                    = "10852 trace name context forever, level 16384"
     CLOSE: Active sessions prevent database close operation
     2014-02-01 18:10:54.100000 +01:00
       event                    = "10852 trace name context forever, level 16384"
     2014-06-10 19:38:42.536000 +02:00
     ALTER SYSTEM SET event='10852 trace name context forever, level 16384 off' SCOPE=SPFILE;
     2014-06-10 19:43:12.770000 +02:00
       event                    = "10852 trace name context off"
    

Without much effort, I was able to find that the parameter was set on January 2, 2014, and switched off on June 10, 2014.
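
The predicate can also be combined with the timestamp field to narrow the search window. A sketch, assuming the field name ORIGINATING_TIMESTAMP and that date arithmetic is accepted in the predicate; check the available fields in your adrci version:

adrci> show alert -p "message_text like '%event%' and originating_timestamp > systimestamp-30"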

How to disable an event parameter in the database

During an upgrade from 11.2.0.3 to 11.2.0.4, I had to remove an event parameter from the database.

The syntax for this is:
alter system set event="10852 trace name context off" scope=spfile;
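
Since the change is made with scope=spfile, it only takes effect at the next instance restart. A quick sanity check afterwards; the VALUE column should come back empty:

sqlplus / as sysdba

SQL> show parameter event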
    

Friday, June 6, 2014

Why does the TIMESTAMP# column in the AUD$ table contain NULL values?

According to Oracle Support Note 427296.1:

"In database version 10gR1 and above, the TIMESTAMP# column is obsoleted in favor of the new NTIMESTAMP# column."

So when exchanging the TIMESTAMP# column with the NTIMESTAMP# column, my script works as intended, whereas it had previously shown NULL values:

    SELECT DBID "CURRENT DBID" FROM V$DATABASE;
    
     SET TIMING ON
     SET LINES 200
     COL "Earliest" format a30
     col "Latest" format a30
    
     PROMPT Counting the DBIDs and the number of audit entries each
     PROMPT Could take a while...
     SELECT DBID,COUNT(*),MIN(NTIMESTAMP#) "Earliest", MAX(NTIMESTAMP#) "Latest"
     FROM AUD$
     GROUP BY DBID;
    

    Output:
    Counting the DBIDs and the number of audit entries each
     Could take a while...
    
           DBID   COUNT(*) Earliest                       Latest
     ---------- ---------- ------------------------------ ------------------------------
    2367413790       1867 05.06.2014 14.01.30,193254     06.06.2014 06.17.08,485629
    

The views built upon AUD$, for example DBA_AUDIT_TRAIL and DBA_FGA_AUDIT_TRAIL, will of course reflect the correct column from AUD$ (NTIMESTAMP#) in their own TIMESTAMP column.
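
As a quick illustration, the same earliest and latest entries can be read through the documented view; a sketch that simply pulls the overall boundaries:

SELECT MIN(TIMESTAMP) "Earliest", MAX(TIMESTAMP) "Latest"
FROM DBA_AUDIT_TRAIL;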