set lines 200
col owner format a20
col username format a30
col schema format a30

PROMPT ===================================================
PROMPT List of all schemas and number of Objects they own:
PROMPT ===================================================
select u.username "Schema", decode(count(*),1,0, count(*)) "Number of Objects"
from dba_users u left outer join dba_objects o on u.username = o.owner
where u.username in ('USER1','USER2','USER3')
group by u.username
order by username ASC;

set heading off
set trimspool on
set lines 200
set feedback off
spool drop_users.sql
select 'spool drop_users_SD4440.log' from dual;
select 'DROP USER ' || u.username || DECODE(decode(count(*),1,0, count(*)), '0',';' , ' CASCADE;')
from dba_users u left outer join dba_objects o on u.username = o.owner
where u.username in ('USER1','USER2','USER3')
group by u.username;
select 'exit' from dual;
exit
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Tuesday, December 22, 2015
Generate "drop user" statement for schemas with and without objects
Friday, December 18, 2015
How to solve ORA-27102: out of memory on startup
You want to increase your SGA to a larger value, and you have taken care to increase the value of shmmax in the /etc/sysctl.conf file.
When starting up the database with increased values for sga_max_size and sga_target, you hit
SQL> startup
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
Additional information: 1543503872
Additional information: 1
Solution:
Increase the value of shmall as well. Shmall indicates the total amount of shared memory that the system can use at one time (measured in pages).
To find the page size:
getconf PAGE_SIZE
4096
Make sure that shmall is larger than, not identical to, the size of your SGA.
For example, if you want a 12G SGA, do not set shmall to 3145728 which would be the number of pages for 12G
Set it to a value larger instead, for example 3670016, which is the number of pages equivalent to 14G:
3670016 * 4096 = 15032385536
15032385536 / 1024 / 1024 / 1024 = 14G
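The page arithmetic above can be scripted as a quick sanity check. This is a sketch that assumes a 4096-byte page size; confirm yours with getconf PAGE_SIZE before relying on it:

```shell
# Hypothetical helper: number of pages needed for a given size in GB,
# suitable as a value for kernel.shmall.
page_size=4096                      # from: getconf PAGE_SIZE

gb_to_pages() {
    # $1 = size in GB; prints the corresponding page count
    echo $(( $1 * 1024 * 1024 * 1024 / page_size ))
}

gb_to_pages 14    # -> 3670016 (the value used in this post)
gb_to_pages 12    # -> 3145728 (too small for a 12G SGA plus headroom)
```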
As root, open /etc/sysctl.conf in an editor.
Search for the string kernel.shmall
Change the value
Save and exit.
Then, still as root, execute
sysctl -p
Check that the new settings are in place:
/sbin/sysctl -a | grep shm
It should now be possible to open the database.
How to check memory consumption and swapping on a Linux server
The easiest way to check for memory consumption on Linux is in my opinion the "free" utility:
oracle@prodserver1:[proddb01]# free -m
             total       used       free     shared    buffers     cached
Mem:         16056       9027       7029          0        233       1093
-/+ buffers/cache:       7700       8356
Swap:         8191       2674       5517
Add totals with the -t flag:
free -t -m
             total       used       free     shared    buffers     cached
Mem:         16056       9029       7027          0        233       1093
-/+ buffers/cache:       7701       8354
Swap:         8191       2674       5517
Total:       24248      11703      12545
So in this case I am using 7701 MB of memory, while 8354 MB is free for use.
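The two numbers on the "-/+ buffers/cache" line are the interesting ones, since they exclude memory the kernel will release on demand. A small awk sketch to pull them out; the sample below is canned, on a live server you would pipe free -m directly:

```shell
# Canned sample of free -m output (the numbers from this post).
sample='             total       used       free
Mem:         16056       9029       7027
-/+ buffers/cache:       7701       8354
Swap:         8191       2674       5517'

# Fields 3 and 4 of the "-/+ buffers/cache" line are the adjusted
# used and free figures in MB.
echo "$sample" | awk '/buffers\/cache/ {print $3, $4}'    # -> 7701 8354
```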
To get a quick glance, you can have a look at the file /proc/meminfo, grep for the highlights, like this:
egrep 'Mem|Cache|Swap' /proc/meminfo
MemTotal:       16442312 kB
MemFree:         7255224 kB
Cached:          1120380 kB
SwapCached:        45632 kB
SwapTotal:       8388600 kB
SwapFree:        5650100 kB

A good option for finding out if a server is swapping is to use vmstat:
oracle@myserver# vmstat -w 10 4
procs -----------------------memory---------------------- ---swap-- -----io---- -system-- --------cpu--------
 r  b     swpd    free   buff    cache   si  so   bi  bo    in    cs us sy id wa st
 3  0  2772764  659396  20340 10764836    4   4   19   9     3     4 13  6 81  0  0
 6  0  2771740  735076  20364 10766444   82   0   82   7 68360 52089 57 30 13  0  0
10  0  2771484  767816  20376 10766628    8   0   12   4 67349 52155 55 31 15  0  0
 7  0  2771228  747480  22832 10768888   34   0  495  12 67799 52119 57 30 13  0  0

where
* The flag -w indicates wide output
* Sample every 10 seconds
* 4 iterations

Columns:
* si: Amount of memory swapped in from disk (/s)
* so: Amount of memory swapped to disk (/s)

When the server is actively swapping, the value of "so" (swap out) increases.
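Scanning the "so" column can also be scripted. The awk filter below flags swap-out activity in vmstat data lines; the sample rows are made up for illustration, and on a live server you would feed it the output of vmstat itself (skipping the two header lines):

```shell
# Made-up vmstat data rows, standing in for: vmstat -w 10 4 | tail -n +3
# Field 8 is "so", the pages swapped out to disk per second.
sample='6 0 2771740 735076 20364 10766444 82 0 82 7
10 0 2771484 767816 20376 10766628 8 50 12 4
7 0 2771228 747480 22832 10768888 34 0 495 12'

# Count samples with so > 0; any such sample means active swap-out.
echo "$sample" | awk '$8 > 0 {n++} END {print (n ? "swapping out" : "no swap-out")}'
```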
Wednesday, December 16, 2015
How to resolve PLS-00201: identifier 'DBMS_SYSTEM.SET_BOOL_PARAM_IN_SESSION'
During migration of an EBS (E-Business Suite) database, I was left with some packages in the APPS schema that couldn't be compiled.
SQL> alter package apps.ad_parallel_updates_pkg compile body;

Warning: Package Body altered with compilation errors.

SQL> show errors
Errors for PACKAGE BODY APPS.AD_PARALLEL_UPDATES_PKG:

LINE/COL ERROR
-------- -----------------------------------------------------------------
338/8    PL/SQL: Statement ignored
338/8    PLS-00201: identifier 'DBMS_SYSTEM.SET_BOOL_PARAM_IN_SESSION' must be declared
I first tried granting execute on dbms_system directly to the user APPS - no luck.
Then I found the following blog post
https://blogs.oracle.com/mandalika/entry/resolving_pls_00201_identifier_dbms
Which points to the following solution:
SQL> create public synonym dbms_system for dbms_system;

Synonym created.

SQL> grant execute on dbms_system to apps;

Grant succeeded.

SQL> alter package apps.ad_parallel_updates_pkg compile body;

Package body altered.
Sunday, December 13, 2015
Cardinality, Selectivity, Projection and Selection - definitions
From Oracle documentation:
Cardinality
The ratio of distinct values to the number of table rows. A column with only two distinct values in a million-row table would have low cardinality
Selectivity
In a query, the measure of how many rows from a row set pass a predicate test, for example, WHERE last_name = 'Smith'. A selectivity of 0.0 means no rows, whereas a value of 1.0 means all rows. A predicate becomes more selective as the value approaches 0.0 and less selective (or more unselective) as the value approaches 1.0.
Projection
The ability of a SELECT statement to choose a *subset* of the columns in a given table.
In other words, this corresponds to the column listing in the SELECT statement.
Selection
The ability of a SELECT statement to choose a subset of rows in a given table.
In other words, selection corresponds to the WHERE clause of the statement.
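A toy calculation of the two ratios defined above, with numbers invented purely for illustration, done here with awk:

```shell
# Illustration only: a predicate matching 100 rows out of 1,000,000 has
# selectivity 0.0001 (very selective); a column holding only 2 distinct
# values across those rows has a very low cardinality ratio.
awk 'BEGIN {
    total_rows = 1000000
    matching   = 100      # rows passing the WHERE predicate
    distinct   = 2        # distinct values in the column
    printf "selectivity=%.4f cardinality_ratio=%.6f\n", matching/total_rows, distinct/total_rows
}'
```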
Friday, December 11, 2015
Preparations to make before running dbms_redefinition on a large table
To avoid ORA-01555 "snapshot too old", you should take the following precautions before starting an online redefinition of a large table:
1. If possible, create a new BIGFILE undo tablespace:
CREATE BIGFILE UNDO TABLESPACE UNDOTBS2
DATAFILE '/u02/oradata/proddb01/undotbs02_01.dbf' SIZE 128M
AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
RETENTION NOGUARANTEE;
2. Make the new UNDO tablespace database default:
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=BOTH;
3. Alter the retention of UNDO data to GUARANTEED retention:
ALTER TABLESPACE UNDOTBS2 RETENTION GUARANTEE;
4. If possible, create a new BIGFILE temporary tablespace:
CREATE BIGFILE TEMPORARY TABLESPACE TEMP2
TEMPFILE '/u02/oradata/proddb01/temp2_01.dbf' SIZE 1G
AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
5. Make the new temporary tablespace the database default:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP2";
6. Empty the recyclebin. As sysdba:
PURGE RECYCLEBIN;
7. Adjust the parameter undo_retention
Set the parameter high enough to sustain operation of the database for the amount of time you expect the redefinition to last.
For example, if you expect the redefinition to run for 48 hours, set the undo_retention to 172800 seconds:
alter system set undo_retention = 172800 scope=memory;
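The figure 172800 is simply 48 hours expressed in seconds; a tiny helper (illustrative only) to compute the value for any expected duration:

```shell
# Convert an expected runtime in hours to the seconds value needed
# for the undo_retention parameter.
hours_to_seconds() { echo $(( $1 * 60 * 60 )); }

hours_to_seconds 48    # -> 172800, the undo_retention used above
```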
When the online redefinition has completed, return the UNDO tablespace to its normal retention:
ALTER TABLESPACE UNDOTBS2 RETENTION NOGUARANTEE;
About RETENTION GUARANTEED:
... specifies that Oracle Database should preserve unexpired undo data in all undo segments of tablespace even if doing so forces the failure of ongoing operations that need undo space in those segments. This setting is useful if you need to issue an Oracle Flashback Query or an Oracle Flashback Transaction Query to diagnose and correct a problem with the data.
About RETENTION NOGUARANTEE:
... returns the undo behavior to normal. Space occupied by unexpired undo data in undo segments can be consumed if necessary by ongoing transactions. This is the default
Source: Oracle Documentation
More about undo retention can be found here
Setting the minimum undo retention period
Friday, December 4, 2015
How to enable OLAP in an Oracle 11gR2 database
First, check that the prerequisites are fulfilled. The components below must be installed and valid before proceeding.
select comp_id, comp_name, version, status
from dba_registry
where comp_id in ('XDB','XML','JAVAVM','EXF');

COMP_ID    COMP_NAME                      VERSION    STATUS
---------- ------------------------------ ---------- ----------
XDB        Oracle XML Database            11.2.0.4.0 VALID
EXF        Oracle Expression Filter       11.2.0.4.0 VALID
JAVAVM     JServer JAVA Virtual Machine   11.2.0.4.0 VALID
XML        Oracle XDK                     11.2.0.4.0 VALID
If java virtual machine is not installed:
sqlplus / as sysdba
set echo on
spool JServer.log
@?/javavm/install/initjvm.sql;
@?/xdk/admin/initxml.sql;
@?/xdk/admin/xmlja.sql;
@?/rdbms/admin/catjava.sql;
spool off
If XML DB is not yet installed, you need to follow "Master Note for Oracle XML Database (XDB) Install / Deinstall (Doc ID 1292089.1)" from Oracle Support
If Oracle Expression Filter is not installed:
sqlplus / as sysdba
@?/rdbms/admin/catexf.sql
1. Start by shutting down the database:
sqlplus / as sysdba
shutdown immediate

2. Prepare your ORACLE_HOME by relinking the "oracle" executable.
For Oracle 11gR2 and above, use the chopt utility. It was provided to help DBAs add new options and implicitly relink the binaries.
chopt enable olap

Writing to /u01/oracle/product/11204/install/enable_olap.log...
/usr/bin/make -f /u01/oracle/product/11204/rdbms/lib/ins_rdbms.mk olap_on ORACLE_HOME=/u01/oracle/product/11204
/usr/bin/make -f /u01/oracle/product/11204/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/oracle/product/11204
For versions prior to 11gR2:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk olap_on
make -f ins_rdbms.mk ioracle
3. Open your database:
sqlplus / as sysdba
startup
4. Set up OLAP
cd $ORACLE_HOME/olap/admin
sqlplus / as sysdba
spool install_olap.log
@olap.sql SYSAUX TEMP
5. Verify that OLAP was installed:
select * from v$option where parameter='OLAP';

PARAMETER                      VALUE
------------------------------ ------------------------------
OLAP                           TRUE
How to check if the OLAP option is installed in the database
select comp_name, version, status from dba_registry where comp_name like '%OLAP%' order by 1 DESC;

Typical output if OLAP is installed:
COMP_NAME | VERSION | STATUS
---|---|---
Oracle OLAP API | 11.2.0.4.0 | VALID
OLAP Catalog | 11.2.0.4.0 | VALID
OLAP Analytic Workspace | 11.2.0.4.0 | VALID
You can also use:
select * from v$option where parameter='OLAP';
PARAMETER | VALUE
---|---
OLAP | TRUE
Thursday, December 3, 2015
How to manually register the database with the listener
Sometimes there may be a delay before pmon registers itself with the listener.
lsnrctl status will show
lsnrctl status

The listener supports no services
The command completed successfully
To manually force pmon to register, execute
alter system register;
Thursday, November 19, 2015
How to handle RMAN DUPLICATE: Errors in krbm_getDupCopy during duplicate
I recently cloned a 11.2.0.4 database using tape backups as source.
The clone failed right before "open resetlogs". In other words, the database was restored and recovered, but DUPLICATE didn't succeed in opening it. So I did it manually, by recreating the controlfile using a template from the source database, and adjusted for the file layout on the auxiliary. The database opened nicely.
However, later during the day, the application team found that a new clone was needed due to some synch issue caused by a third-party replication tool. In other words, the entire database had to be copied from the latest version of our incremental level 1 backups and restored all over again.
No problem.
As usual, I started by saving away the spfile, password file and the block change tracking file located in $ORACLE_HOME/dbs. Following that, I mounted and dropped the database. All the files were automatically removed from disk by Oracle, and the instance terminated, as expected.
As I started to clone, I noticed some new messages in the alert log of the auxiliary database:
RMAN DUPLICATE: Errors in krbm_getDupCopy
Errors in file /u01/oracle/product/11204/admin/testdb01/diag/rdbms/testdb01/testdb01/trace/testdb01_ora_22544486.trc:
ORA-19625: error identifying file /u02/oradata/testdb01/system01.dbf
ORA-27037: unable to obtain file status
IBM AIX RISC System/6000 Error: 2: No such file or directory
Additional information: 3
This error was repeated for every data file in the database and appended to the same trace file as in the message above.
I found the errors worth checking.
Fortunately, these errors can be ignored. MOS Doc ID 1476641.1 "RMAN DUPLICATE: Errors In Krbm_getDupCopy found in alert.log" explains
"This happens after a previous failed duplicate trial, if the files copied to auxiliary destination have been deleted."
and
"The messages in alert.log indicate that rman cannot use a previous datafile copy for those files and that the files must be copied again to auxiliary destination. Duplicate checks if there are datafile copies already created by a duplicate to avoid restoring the files again."
So this is a part of Oracle restore optimization concept:
"...from 11.2 RMAN always checks if the file copy exists at destination hosts to avoid copying it again"
Great, but how does RMAN do that?
"If duplicate fails between the first restore/copy and the zeroing of the dbid, a second duplicate will find the _rm_dup_@.dat file and the clone default channel will read it into memory, once determined the name of the datafilecopy to be created by the second duplicate it is compared with the existing datafilecopy from the previous duplicate. If the datafilecopy still exists and matches the vital information of the datafile (file number, database id, creation scn, database name) and its checkpoint scn is behind the until scn then the datafilecopy can be used by this new duplicate and restore/copy is not necessary"
The solution is simple:
"As files have already been deleted from auxiliary destination, ignore those messages."
I could also have followed the following advice:
"If you don't want to see those messages in alert.log but datafiles have already been deleted, on Auxiliary host, delete the file $ORACLE_HOME/dbs/_rm_dup_.dat where dup_db is the name of the clone instance."
Wednesday, November 18, 2015
How to drop a redo logfile group and redo logfile group members - syntax
alter database drop logfile group 4;
The drop statement will be confirmed by an entry in the database's alert log.
Don't forget to physically remove the file from disk afterwards.
You can also drop individual Group members:
alter database drop logfile member '/u03/oradata/proddb01/red04a.log';
Pay attention to the restrictions when dropping redo log groups and redo log members!
A good Query for redo log file information can be found here
Query to find basic redo log information
COL MEMBER FORMAT A50
SET LINES 200
SELECT A.GROUP#, A.MEMBER, B.BYTES/1024/1024 "MB", B.ARCHIVED, B.STATUS, B.SEQUENCE#
FROM V$LOGFILE A INNER JOIN V$LOG B ON A.GROUP# = B.GROUP#
ORDER BY GROUP# ASC;

Result:
GROUP# | MEMBER | MB | ARCHIVED | STATUS | SEQUENCE#
---|---|---|---|---|---
1 | /u04/oradata/proddb01/log1a.ora | 1024 | YES | INACTIVE | 90510
1 | /u04/oradata/proddb01/log1b.ora | 1024 | YES | INACTIVE | 90510
2 | /u04/oradata/proddb01/log2b.ora | 1024 | YES | INACTIVE | 90511
2 | /u04/oradata/proddb01/log2a.ora | 1024 | YES | INACTIVE | 90511
3 | /u04/oradata/proddb01/log3b.ora | 1024 | NO | CURRENT | 90508
3 | /u04/oradata/proddb01/log3a.ora | 1024 | NO | CURRENT | 90508
Wednesday, November 11, 2015
How to add a comment on a column
Use this syntax:
COMMENT ON COLUMN SCOTT.EMP.DOB IS 'Date of Birth';
Tuesday, November 10, 2015
How to find the default tablespace type (smallfile vs bigfile)
SQL> select property_value from database_properties where property_name = 'DEFAULT_TBS_TYPE';

PROPERTY_VALUE
---------------
SMALLFILE
Change like this:
alter database set default bigfile tablespace;

The change will of course only take effect for future tablespaces, not existing ones.
Link to Oracle documentation
Monday, November 9, 2015
How to append text at beginning and at end of each line in vi (unix)
Go to top of line with :0
Append text to the end of each line, globally:
:%s/$/text/g
Add text to the beginning of each line, globally:
:%s/^/text/g
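Outside vi, the same two transformations can be done with sed. A sketch using the standard substitute syntax (shown on a pipe; add -i to edit a file in place):

```shell
# Append text to the end of each line ($ anchors the end of line):
printf 'line1\nline2\n' | sed 's/$/ END/'

# Add text to the beginning of each line (^ anchors the start of line):
printf 'line1\nline2\n' | sed 's/^/>> /'
```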
How to install and deinstall Oracle Spatial and Oracle Multimedia
The following advice is an excerpt from a Service Request with Oracle support services, on how to administer Spatial and Multimedia:
To remove Spatial:
connect / as sysdba
drop user MDSYS cascade;

To remove Multimedia:
connect / as sysdba
@?/rdbms/admin/catcmprm.sql ORDIM

You can reinstall both of these components later if needed by running the following scripts. Note, Multimedia must be installed first.
To Install Multimedia:
connect / as sysdba
@?/ord/admin/ordinst.sql SYSAUX SYSAUX
@?/ord/im/admin/catim.sql
execute sys.validate_ordim;

To install Spatial:
connect / as sysdba
@?/md/admin/mdinst.sql
How to check if spatial and multimedia is installed in the database
The following advice was given to me by Oracle Support services, to determine if either of these components are being used in the database:
connect / as sysdba
set pagesize 10000
col owner format a12
col table_name format a35
col column_name format a25
col index_name format a25

-- Is Spatial being used?
select owner, index_name from dba_indexes where ityp_name = 'SPATIAL_INDEX';
select owner, table_name, column_name from dba_tab_columns where data_type= 'SDO_GEOMETRY' and owner != 'MDSYS';

If both of these queries return no rows, Spatial is not being used.
-- Is Multimedia being used?
set serveroutput on;
@?/ord/im/admin/imremchk.sql

If you're not using this component, you will get the message "Oracle Multimedia is not being used".
Tuesday, November 3, 2015
How to check if the database is in restricted mode
To check what kind of logins are allowed in your instance, query the LOGINS column of the v$instance view:
select logins from v$instance;

Output here will be either ALLOWED or RESTRICTED.
Join with v$database for more information:
select a.INSTANCE_NAME, a.ACTIVE_STATE,a.logins, b.open_mode from v$instance a inner join v$database b on UPPER(a.instance_name) = b.name;
Which will show the following output if your database is in restricted mode, and in this example, mounted:
INSTANCE_NAME    ACTIVE_ST LOGINS     OPEN_MODE
---------------- --------- ---------- --------------------
mydb01           NORMAL    RESTRICTED MOUNTED
and for normal mode:
INSTANCE_NAME    ACTIVE_ST LOGINS     OPEN_MODE
---------------- --------- ---------- --------------------
mydb01           NORMAL    ALLOWED    READ WRITE
Monday, November 2, 2015
ORA-01722 when running ausy1120.sql during preparation of new target database for EBS
If you are following note Doc ID 741818.1 "Export/import process for 12.0 or 12.1 using 11gR1 or 11gR2" and run into the following problem when running the ausy1120.sql script:
sqlplus system/*** @ausy1120.sql

SQL*Plus: Release 11.2.0.4.0 Production on Mon Nov 2 14:19:31 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning option

PL/SQL procedure successfully completed.

declare
*
ERROR at line 1:
ORA-01722: invalid number
ORA-06512: at line 5
the solution is to modify the script slightly and correct a typo.
Open the ausy1120.sql file in an editor and change
select to_number(substr(version,1,instr(version,'.'))) into :dbver from v$instance where rownum=1;

to
select to_number(substr(version,1,instr(version,'.')-1)) into :dbver from v$instance where rownum=1;
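The -1 matters because instr returns the position of the first dot, so without it the extracted substring keeps the dot ("11." instead of "11") and to_number raises ORA-01722. A shell analogue of the corrected logic, for illustration only:

```shell
# Keep everything before the first dot, so the result converts cleanly
# to a number (the equivalent of substr(version,1,instr(version,'.')-1)).
version="11.2.0.4.0"
echo "${version%%.*}"    # -> 11   ("11." would not convert to a number)
```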
Then rerun the script. It should finish almost immediately and the output should be similar to:
PL/SQL procedure successfully completed.

PL/SQL procedure successfully completed.

--------------------------------------------------------------------------------
--- ausy1120 started at 02-NOV-2015 14:40:04 ---
'---AUSY1120COMPLETEDAT'||TO_CHAR(SYSDATE,'DD-MON-YYYYHH24:MI:SS')||'----'
--------------------------------------------------------------------------------
--- ausy1120 completed at 02-NOV-2015 14:40:04 ----

Commit complete.
The same error seems to occur when running the aujv1120.sql and the aumsc1120.sql as well. The solution is the same for all three.
Friday, October 30, 2015
Syntax for setting hidden parameter in Oracle
Syntax example:
alter system set "_system_trig_enabled"=TRUE scope=both;
Syntax for setting the default undo tablespace for the instance
ALTER SYSTEM SET undo_tablespace='APPS_UNDOTS1' SCOPE=BOTH;
Thursday, October 29, 2015
How to check your Operating System's code page for a particular character
I have been involved in several migration projects where we had to deal with Western European characters that have an invalid representation in the database and need to be converted.
Like I showed in my post about usage of the chr function, you sometimes need to find the decimal and/or hexadecimal value for a particular character.
But how can you verify that your operating system actually agrees; how would your Operating System translate a character passed from a terminal, e.g. putty?
First, make sure your putty terminal has its translation-settings set to the character set of the server you are logging into: Right-click your putty terminal upper frame, and pick "Change Settings" from the menu. Go to the "translation" settings and then select the correct character set from the drop-down menu "Remote character set".
On Linux platforms, a simple way to check how a character would be translated would be to use the hexdump utility. Thanks to Gray Watson for demonstrating this!
man hexdump

The hexdump utility is a filter which displays the specified files, or the standard input, if no files are specified, in a user specified format.
Let's try to hexdump the character ø, and see what the internal hexadecimal representation is:
echo "ø" | hexdump -v -e '"x" 1/1 "%02X" " "'
xC3 xB8 x0A
The prefix x in front of the values represents hexadecimal values, so the important part here is "C3 and B8" - in a multibyte character set this represents the Scandinavian character ø. (The trailing 0A is simply the newline character that echo appends to its output.)
Another way is to use the "od" utility:
man od

od - dump files in octal and other formats
-x     same as -t x2, select hexadecimal 2-byte units
Using the -x flag straight off will give the hexadecimal value in 2-byte units:
echo "ø" | od -x
0000000 b8c3 000a
0000003
This time the byte pairs appear swapped and should be read backwards for meaning: od -x reads the input in 2-byte units and prints them in the host CPU's byte order, so on a little-endian machine such as x86 the low byte comes first.
However, if you use the -t x1 notation instead:
echo "ø" | od -t x1
0000000 c3 b8 0a
0000003

The values come out as a DBA expects; c3b8 corresponds to decimal value 50104 which in turn represents the Scandinavian letter ø.
(Again, the trailing 0a is the newline character appended by echo.)
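The origin of the trailing 0a can be demonstrated directly: printf without \n adds no newline, while a string ending in \n reproduces it. The two bytes are written as explicit octal escapes so the example does not depend on the terminal's character set:

```shell
# c3 b8 is the UTF-8 encoding of ø (octal \303\270).
printf '\303\270'   | od -An -t x1    # bytes: c3 b8
printf '\303\270\n' | od -An -t x1    # bytes: c3 b8 0a  (0a = newline)
```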
Wednesday, October 28, 2015
How to work around ORA-29275: partial multibyte character when using chr function
If you need to check what is actually stored in a database table, meaning the internal representation, you can use the DUMP function.
A brilliant converter between decimal and hexadecimal numbers can be found here
The documentation for the dump function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions055.htm#SQLRF00635
The documentation for the chr function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions026.htm#SQLRF00616
The datatype table is listed here: https://docs.oracle.com/cd/B19306_01/appdev.102/b14250/oci03typ.htm#i422575
It returns a VARCHAR2 value containing the data type code, the length in bytes, and the internal representation:
SQL> select dump ('ø',1010) from dual;

DUMP('Ø',1010)
--------------------------------------------------------------------------------
Typ=96 Len=2 CharacterSet=AL32UTF8: 195,184
So the type above is 96, representing the VARCHAR2 or NVARCHAR2 datatype; the length is 2 bytes; the character set of the database is AL32UTF8; and the decimal values of the two bytes are 195 and 184.
For a single-byte character set, it would be easy to check which character a decimal value represents under the database's character set: simply pass the decimal value to the chr function.
However, with a multibyte character set, which one of the returned decimal values should you pass?
SQL> select chr(195) from dual;
ERROR:
ORA-29275: partial multibyte character
A workaround is to dump the character to hexadecimal format instead of decimal, by using the argument 1016 as the second argument to the function:
SQL> select dump ('ø',1016) from dual;

DUMP('Ø',1016)
--------------------------------------------------------------------------------
Typ=96 Len=2 CharacterSet=AL32UTF8: c3,b8
The c3b8 hexadecimal value corresponds to the decimal value of 50104. Pass this value to the chr function and you get the result you're looking for:
SQL> select chr(50104) from dual;

CHR(50
------
ø
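As a sanity check of the arithmetic, the hex-to-decimal conversion and the UTF-8 decoding can be reproduced with Python's standard library (this mirrors, but does not use, the Oracle chr function):

```python
# c3b8 in hexadecimal is 50104 in decimal, and the byte pair C3 B8
# decodes to ø under UTF-8:
print(int("c3b8", 16))                        # 50104
print(bytes.fromhex("c3b8").decode("utf-8"))  # ø
```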
A brilliant converter between decimal and hexadecimal numbers can be found here
The documentation for the dump function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions055.htm#SQLRF00635
The documentation for the chr function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions026.htm#SQLRF00616
The datatype table is listed here: https://docs.oracle.com/cd/B19306_01/appdev.102/b14250/oci03typ.htm#i422575
Monday, October 5, 2015
Is it possible to change the character set of the database by recreating the control file?
Question: Is it possible to change the character set of the database by recreating the control file?
Answer: No, this is not supported.
Doc ID 225912.1 states:
"Note also that it's NOT possible to change / adapt the NLS_CHARACTERSET by re-creating the control file.
The characterset specified in CREATE CONTROLFILE DATABASE ... RESETLOGS ....CHARACTER SET; ( the result of an ALTER DATABASE BACKUP CONTROLFILE TO TRACE..) need to be the actual current NLS_CHARACTERSET of the database."
Question: So why does CREATE CONTROLFILE generate a CHARACTERSET clause in the first place then?
Answer: In case media recovery is required after the controlfile has been recreated.
From the documentation:
"If you specify a character set, then Oracle Database reconstructs character set information in the control file. If media recovery of the database is subsequently required, then this information will be available before the database is open, so that tablespace names can be correctly interpreted during recovery."
The documentation goes on to say
"This clause is required only if you are using a character set other than the default, which depends on your operating system."
Without having found any sources on the internet to confirm my interpretation, I will claim that this means:
* If you recreate your controlfile for a database with, say, WE8ISO8859P1 as character set, you will indeed need to specify the character set in the CREATE CONTROLFILE clause.
* If you recreate the controlfile of a database using a UTF-8-based character set, for example AL32UTF8, you can skip this clause altogether if you wish.
How to pause an Oracle database
Sometimes it is useful to pause a database, and let it finish the running jobs at a later time.
Particularly in environments with shared resources, contention may occur. Important databases need to be prioritized, while less important ones should be suspended.
For this purpose, you can use the "suspend" command:
alter system suspend;
Alert log:
Mon Oct 05 10:14:49 2015
Suspending database after alter system suspend command
The database_status field in v$instance will reflect the new state of the database:
SQL> select database_status from v$instance;

DATABASE_STATUS
-----------------
SUSPENDED
When you are ready to resume database activity, issue the following command:
alter system resume;
More info: Oracle Documentation
Tuesday, September 8, 2015
How to recursively zip a folder and its subfolders
Use the -r flag to indicate recursive zipping:
zip -r myfile.zip myfolder
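The recursive behaviour of zip -r can be sketched with Python's zipfile module; the folder and archive names below are illustrative only:

```python
# Walk myfolder and all its subfolders, adding every file to the archive,
# which is what zip -r does.
import os
import tempfile
import zipfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "myfolder", "sub"))
open(os.path.join(root, "myfolder", "sub", "f.txt"), "w").close()

zpath = os.path.join(root, "myfile.zip")
with zipfile.ZipFile(zpath, "w") as z:
    for dirpath, _, files in os.walk(os.path.join(root, "myfolder")):
        for name in files:
            full = os.path.join(dirpath, name)
            z.write(full, os.path.relpath(full, root))  # store paths relative to root

print(zipfile.ZipFile(zpath).namelist())  # ['myfolder/sub/f.txt']
```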
Wednesday, September 2, 2015
How to unzip multiple files on Unix-based operating systems
When extracting multiple zip files, remember to put single quotes around the wildcard pattern:
unzip '*.zip'
For example, when unpacking the files needed to install Oracle 11.2.0.4:
[oracle@lx01 Downloads]$ ls -latr
total 5586768
-rwxr-x---.  1 root   root     1395582860 Sep  2 10:14 p13390677_112040_Linux-x86-64_1of7.zip
-rwxr-x---.  1 root   root     1151304589 Sep  2 10:14 p13390677_112040_Linux-x86-64_2of7.zip
-rwxr-x---.  1 root   root     1205251894 Sep  2 10:15 p13390677_112040_Linux-x86-64_3of7.zip
-rwxr-x---.  1 root   root      656026876 Sep  2 10:16 p13390677_112040_Linux-x86-64_4of7.zip
-rwxr-x---.  1 root   root      599170344 Sep  2 10:16 p13390677_112040_Linux-x86-64_5of7.zip
-rwxr-x---.  1 root   root      488372844 Sep  2 10:16 p13390677_112040_Linux-x86-64_6of7.zip
-rwxr-x---.  1 root   root      119521122 Sep  2 10:16 p13390677_112040_Linux-x86-64_7of7.zip
drwx------. 17 oracle oinstall       4096 Sep  2 10:27 ..
drwxr-xr-x.  2 oracle oinstall       4096 Sep  2 10:34 .
[oracle@lx01 Downloads]$ unzip 'p13390677_112040_Linux-x86-64_*.zip'
Without the quotes, unzip will look for "zip files within zip files".
Thanks to Chris Jean for the tip!
Oracle 11g out-of-place vs. in-place database upgrades
Starting with Oracle 11gR2, Oracle changed their patching strategy.
Previously, a patch set was supposed to be downloaded and installed into an existing ORACLE_HOME.
After 11.2, a "patch set" is really a full release, bundled in a number of zip files.
The default patching method is "out-of-place", which means that every time customers need to upgrade to a new patch set (say, 11.2.0.4), a new ORACLE_HOME should be created, and the new binaries should be installed there.
So you will potentially end up with a number of ORACLE_HOMEs on your server, and you can choose to run your database out of any of these, upgrading whenever you please.
See Mike Dietrich's post about the same topic for more details.
How to fix error thrown by OPatch: java.lang.NullPointerException
Problem:
Opatch throws the following error when you are querying the inventory:
[oracle@lx01 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.11
Copyright (c) 2015, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/11.2.0/dbhome_1/oraInst.loc
OPatch version    : 11.2.0.3.11
OUI version       : 11.2.0.1.0
Log file location : /u01/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2015-09-02_08-52-28AM_1.log

Lsinventory Output file location : /u01/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-09-02_08-52-28AM.txt

--------------------------------------------------------------------------------
java.lang.NullPointerException
        at oracle.sysman.oii.oiin.OiinNetOps.addNICInfo(OiinNetOps.java:144)
        at oracle.sysman.oii.oiin.OiinNetOps.computeNICList(OiinNetOps.java:109)
        at oracle.sysman.oii.oiin.OiinNetOps.(OiinNetOps.java:76)
        at oracle.sysman.oii.oiin.OiinNetOps.getNetOps(OiinNetOps.java:90)
        at oracle.sysman.oii.oiix.OiixNetOps.getFullHostName(OiixNetOps.java:49)
        at oracle.opatch.LsInventoryPrinter.printLocalMachineInfo(LsInventoryPrinter.java:2336)
        at oracle.opatch.LsInventorySession.loadAndPrintInventory(LsInventorySession.java:480)
        at oracle.opatch.LsInventorySession.process(LsInventorySession.java:317)
        at oracle.opatch.OPatchSession.main(OPatchSession.java:2120)
        at oracle.opatch.OPatch.main(OPatch.java:626)
Local Machine Information::
Hostname: lx01.oric.no
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------
OPatch succeeded.
Cause:
In my case, the file /etc/hosts was incorrectly configured.
Solution:
Configure the file /etc/hosts correctly.
[root@lx01 ~]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.55  netmask 255.255.255.0  broadcast 192.168.0.255
Check the /etc/hosts:
[root@lx01 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.55 lx01.oric.no lx01
[oracle@lx01 ~]$ vi /etc/hosts
Update the incorrect ip address from 192.168.1.55 to 192.168.0.55, and save the file.
Opatch will now display expected output:
[oracle@lx01 ~]$ opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.11
Copyright (c) 2015, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/oracle/product/11.2.0/dbhome_1
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/11.2.0/dbhome_1/oraInst.loc
OPatch version    : 11.2.0.3.11
OUI version       : 11.2.0.1.0
Log file location : /u01/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2015-09-02_09-07-43AM_1.log

Lsinventory Output file location : /u01/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-09-02_09-07-43AM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: lx01.oric.no
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1):
Oracle Database 11g                                                  11.2.0.1.0
There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------
OPatch succeeded.
Thursday, August 27, 2015
How to solve: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
After a fresh install of an Oracle 32-bit client on a 64-bit Linux server, we received the following errors when we tried to use any of the binaries under the $ORACLE_CLIENT_HOME/bin directory:
error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
It turned out that only the 64-bit version of libaio was installed:
yum list libaio
Loaded plugins: product-id, rhnplugin, security
This system is receiving updates from RHN Classic or RHN Satellite.
Installed Packages
libaio.x86_64    0.3.107-10.el6    @anaconda-RedHatEnterpriseLinux-201301301459.x86_64/6.4
Available Packages
libaio.i686      0.3.107-10.el6    prod-rhel-x86_64-server-6
Solution is to simply install the missing package (output abbreviated):
su -
yum install libaio.i686

downloading Packages:
libaio-0.3.107-10.el6.i686.rpm          |  21 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : libaio-0.3.107-10.el6.i686    1/1
  Verifying  : libaio-0.3.107-10.el6.i686    1/1

Installed:
  libaio.i686 0:0.3.107-10.el6

Complete!
How to create a symlink in unix
Syntax:
ln -s /path/physical_file symlink
For example
cd $ORACLE_HOME/lib
ln -s /u01/oracle/product/oracle_client32/lib/libclntsh.so.11.1 libclntsh.so
ls -altr libclntsh.so
libclntsh.so -> /u01/oracle/product/oracle_client32/lib/libclntsh.so.11.1
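The same operation is available from Python's standard library; the sketch below uses a scratch directory and illustrative file names rather than the Oracle client paths from the example:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "physical_file")
open(target, "w").close()

link = os.path.join(d, "symlink")
os.symlink(target, link)  # equivalent of: ln -s /path/physical_file symlink

print(os.readlink(link) == target)  # True
```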
Monday, August 24, 2015
What does the DEFAULT DEGREE on tables actually mean?
The DOP (degree of parallelism) setting on a table controls the number of threads per instance used for scanning the table; the setting is either an integer or DEFAULT.
When a table's DEGREE setting is set to DEFAULT, it means that the DOP is calculated via the following formula:
For a single instance:
DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT
For an Oracle RAC configuration:
DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT
The DOP for a table can be viewed in either of the *_TABLES views.
SELECT DEGREE FROM DBA_TABLES WHERE TABLE_NAME = 'EMP' AND OWNER = 'SCOTT';

DEGREE
------------------
1
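A worked example of the two formulas, with illustrative parameter values (not queried from a real instance):

```python
# DEFAULT DOP calculation, assuming these instance parameters:
parallel_threads_per_cpu = 2
cpu_count = 8
instance_count = 2  # only relevant for RAC

dop_single = parallel_threads_per_cpu * cpu_count                # single instance
dop_rac = parallel_threads_per_cpu * cpu_count * instance_count  # RAC

print(dop_single, dop_rac)  # 16 32
```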
Tuesday, August 11, 2015
How to use cpio on AIX
I recently received a patch from Oracle support, and when unpacking with unzip, the resulting file was in the cpio format. Unpacking this file, in turn, is done like this:
mkdir install
cd install
cpio -idcmv < 9205_aix5l64_release.cpio
Tuesday, July 28, 2015
How to create and how to extract a .tar file
Create a tar file of all the files in the current directory:
tar cvf myfile.tar *

or put it in a different directory:

tar cvf $HOME/myfile.tar *

To extract myfile.tar to the current directory:

tar xvf myfile.tar

To a specific directory:

tar xvf myfile.tar -C mydir
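The same create/extract cycle can be driven from Python's tarfile module; the file and directory names below are illustrative:

```python
import os
import tarfile
import tempfile

src = tempfile.mkdtemp()
with open(os.path.join(src, "a.txt"), "w") as f:
    f.write("hello")

archive = os.path.join(src, "myfile.tar")
with tarfile.open(archive, "w") as tar:  # like: tar cvf myfile.tar a.txt
    tar.add(os.path.join(src, "a.txt"), arcname="a.txt")

dest = tempfile.mkdtemp()
with tarfile.open(archive) as tar:       # like: tar xvf myfile.tar -C mydir
    tar.extractall(dest)

print(open(os.path.join(dest, "a.txt")).read())  # hello
```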
Tuesday, July 21, 2015
How to solve ORA-17628: Oracle error 19505 returned by remote Oracle server during clone from active database
After some time, my RMAN "duplicate from active database" script threw the error below:
RMAN-03009: failure of backup command on c1 channel at 07/21/2015 12:13:15
ORA-17627: ORA-12577: Message 12577 not found; product=RDBMS; facility=ORA
RMAN-12019: continuing other job steps, job failed will not be re-run
When I checked the alert log, the cause was clear:
IBM AIX RISC System/6000 Error: 28: No space left on device
Solution: obvious; free up space on, or add space to, the device.
How to solve ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified during clone from active database
During an attempt to clone from active database, the following error was thrown:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 07/21/2015 10:06:15
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-03009: failure of backup command on c1 channel at 07/21/2015 10:06:15
ORA-17629: Cannot connect to the remote database server
ORA-17627: ORA-12154: TNS:could not resolve the connect identifier specified
ORA-17629: Cannot connect to the remote database server
This seems pretty obvious, since the error stack states that there is a connection problem, but it's easy to overlook if you run your command from the destination server and, as in my case, have checked that all connections work, that the passwords are identical, etc.
The solution is simply to add the source and the destination connect descriptor in the $TNS_ADMIN/tnsnames.ora files on both the source and auxiliary server.
See my other article on active database cloning for a more detailed description of the required setup.
Saturday, July 11, 2015
How to enable and disable the automatic SQL tuning advisor in Oracle 11g
To enable automatic SQL tuning, use the ENABLE procedure in the DBMS_AUTO_TASK_ADMIN package:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

To disable automatic SQL tuning, use the DISABLE procedure in the DBMS_AUTO_TASK_ADMIN package:

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
Monday, July 6, 2015
How to create an ACL
-- Step 1: Create the Access Control List and its Privilege Definitions
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'my_acl.xml',
    description => 'Limit the use of package utl_http to webservice',
    principal   => 'SCOTT',
    is_grant    => TRUE,
    privilege   => 'connect');
END;
/

-- Step 2: Assign the Access Control List to One or More Network Hosts
-- After you create the access control list, you are ready to assign it to one or more network host computers.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl  => 'my_acl.xml',
    host => '*');
END;
/
In the code above, there are no restrictions on which ports to use. If desirable, use the lower_port and upper_port parameters of the ASSIGN_ACL procedure. You can also assign the ACL to one specific host, or to a group of hosts.
For example
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl        => 'my_acl.xml',
    host       => 'appserver1.mycompany.com',
    lower_port => 80,
    upper_port => 3999);
END;
/
You can find information about the currently configured ACLs in the views DBA_NETWORK_ACLS and DBA_NETWORK_ACL_PRIVILEGES:
Sources: Oracle Documentation
http://docs.oracle.com/cd/E11882_01/network.112/e36292/authorization.htm#DBSEG121 - creating an ACL
http://docs.oracle.com/cd/E11882_01/network.112/e36292/authorization.htm#DBSEG99984 - finding information about ACL
http://docs.oracle.com/cd/E11882_01/network.112/e36292/authorization.htm#DBSEG106 - Specifying a Group of Network Host
Monday, June 1, 2015
How to fix error >>> DBMS_AW_EXP: BIN$DdyrY during export
During export using data pump, you see the following:
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.968 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
>>> DBMS_AW_EXP: BIN$DdyrY/cyAMLgU5/YLVaAbg==$0 not AW$
Solution: empty the recyclebin:
purge dba_recyclebin;

and restart the export.
Sunday, May 31, 2015
How to use SUBSTR and INSTR to find the path used for the files in a tablespace
SELECT SUBSTR(FILE_NAME, 1,
       INSTR(FILE_NAME, '/', -1, 1) -- position of the last '/': a negative third argument makes INSTR search backwards from the end of the string
       -1) "PATH"                   -- keep everything from position 1 up to, but not including, that last '/'
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME = 'USERS'
If you want to limit the output to the first file found for a tablespace, add
AND ROWNUM < 2;
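The same "keep the directory, drop the file name" logic can be sketched in Python for comparison; str.rfind plays the role of INSTR with a negative start position, and the path below is an illustrative example, not a real data file:

```python
file_name = "/u01/oradata/DB01/users01.dbf"  # hypothetical data file path
path = file_name[:file_name.rfind("/")]      # everything before the last '/'
print(path)  # /u01/oradata/DB01
```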
See also this post for how to use this piece of code in a script using input variables.
Source: "SQL Certified Expert Exam Guide" by Steve O'Hearn, page 220
Thursday, May 28, 2015
How to unzip to a specific folder in Unix-based operating systems
Use the -d switch to extract the content of the zip file to the desired directory:
unzip myzipfile.zip -d myfolder
Wednesday, May 27, 2015
How to create a stored procedure for partition maintenance, executed by the Oracle Scheduler.
I needed to set up a small job that would remove partitions with records older than 35 days from a table.
The procedure should be flexible enough to accept different tables in different schemas, and there should be some basic logging, too.
Below is how I solved it.
First, grant permission, setup the logging table and add a sequence:
connect / as sysdba grant select on dba_tab_partitions to scott; connect scott/tiger whenever sqlerror continue drop sequence part_maint_seq; create sequence part_maint_seq increment by 1 start with 1 nomaxvalue nominvalue nocycle; drop table partition_maintenance; create table partition_maintenance( opid number not null, table_owner varchar2(8), table_name varchar2(30), partition_name varchar2(30), operation varchar2(10), success char(1), exe_date date, exec_by varchar2(30) default 'automatic', errormsg varchar2(512) ); alter table partition_maintenance add constraint part_maint_pk primary key (opid);
Then, create the stored procedure that will do the job. It requires two parameters: table owner and table name. Note that the procedure must be created with invoker's rights (AUTHID CURRENT_USER), since it accesses DBA_TAB_PARTITIONS:
connect scott/tiger
set serveroutput on

CREATE OR REPLACE PROCEDURE DROP_PARTITIONS (
  towner IN VARCHAR2,
  tname  IN VARCHAR2)
AUTHID CURRENT_USER
AS
  PNAME        DBA_TAB_PARTITIONS.PARTITION_NAME%TYPE;
  L_DATA       LONG;
  V_HVD        DATE;
  V_PURGE_DATE DATE;
  SQLTORUN     VARCHAR2(200);
  CURSOR PART_NAME IS
    SELECT PARTITION_NAME
    FROM DBA_TAB_PARTITIONS
    WHERE TABLE_NAME = UPPER(tname)
    AND TABLE_OWNER = UPPER(towner)
    AND INTERVAL = 'YES'
    ORDER BY partition_position;
BEGIN
  OPEN PART_NAME;
  FETCH PART_NAME INTO PNAME;
  IF PART_NAME%ROWCOUNT = 0 THEN
    dbms_output.put_line('no such table as ' || UPPER(towner) || '.' || UPPER(tname));
    INSERT INTO PARTITION_MAINTENANCE
      (OPID, TABLE_OWNER, TABLE_NAME, PARTITION_NAME, OPERATION, SUCCESS, EXE_DATE, ERRORMSG)
    VALUES
      (PART_MAINT_SEQ.NEXTVAL, UPPER(towner), UPPER(tname), PNAME, 'DROP', 'N', SYSDATE,
       'No table such as ' || towner || '.' || tname);
    COMMIT;
  END IF;
  WHILE PART_NAME%FOUND LOOP
    --DBMS_OUTPUT.PUT_LINE( pname );
    SELECT HIGH_VALUE
    INTO L_DATA
    FROM DBA_TAB_PARTITIONS
    WHERE TABLE_OWNER = UPPER(towner)
    AND TABLE_NAME = UPPER(tname)
    AND PARTITION_NAME = PNAME;
    --dbms_output.put_line('partition name: ' || pname || ' ' || trim (substr ( l_data, 12, 10 ) ) );
    V_HVD := TO_DATE(TRIM(SUBSTR(L_DATA, 12, 10)), 'YYYY-MM-DD');
    -- dbms_output.put_line('high value date: ' || v_hvd);
    V_PURGE_DATE := TRUNC(SYSDATE) - 35;
    --dbms_output.put_line('purge date: ' || v_purge_date);
    IF V_HVD < V_PURGE_DATE THEN
      DBMS_OUTPUT.PUT_LINE('partition ' || PNAME || ' too old, high value date for partition is: ' || V_HVD);
      -- drop statement goes here
      SQLTORUN := 'alter table ' || towner || '.' || tname || ' drop partition ' || PNAME || ' UPDATE INDEXES';
      DBMS_OUTPUT.PUT_LINE(SQLTORUN);
      EXECUTE IMMEDIATE SQLTORUN;
      -- logging to a separate table goes here
      INSERT INTO PARTITION_MAINTENANCE
        (OPID, TABLE_OWNER, TABLE_NAME, PARTITION_NAME, OPERATION, SUCCESS, EXE_DATE, ERRORMSG)
      VALUES
        (PART_MAINT_SEQ.NEXTVAL, UPPER(towner), UPPER(tname), PNAME, 'DROP', 'Y', SYSDATE,
         'Partition ' || pname || ' of table ' || towner || '.' || tname || ' was dropped.');
      COMMIT;
    END IF;
    FETCH PART_NAME INTO PNAME;
  END LOOP;
  CLOSE PART_NAME;
END;
/

Finally, create a program of type "STORED_PROCEDURE", define its arguments, and create a job that executes the program on a regular basis:
whenever sqlerror continue
set serveroutput on
BEGIN
  DBMS_SCHEDULER.DROP_JOB( job_name => 'DROP_PARTITION_5WEEK' );
  DBMS_SCHEDULER.DROP_PROGRAM( program_name => 'DROP_PARTITION_PROG' );
  DBMS_SCHEDULER.create_program (
    program_name        => 'DROP_PARTITION_PROG',
    program_type        => 'STORED_PROCEDURE',
    program_action      => 'DROP_PARTITIONS',
    number_of_arguments => 2,
    enabled             => FALSE,
    comments            => 'Program to drop partitions with contents older than 35 days.');
  DBMS_SCHEDULER.define_program_argument (
    program_name      => 'DROP_PARTITION_PROG',
    argument_name     => 'towner',
    argument_position => 1,
    argument_type     => 'VARCHAR2',
    default_value     => 'SCOTT');
  DBMS_SCHEDULER.define_program_argument (
    program_name      => 'DROP_PARTITION_PROG',
    argument_name     => 'tname',
    argument_position => 2,
    argument_type     => 'VARCHAR2',
    default_value     => 'SALES');
  DBMS_SCHEDULER.enable( name => 'DROP_PARTITION_PROG');
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'DROP_PARTITION_5WEEK',
    program_name    => 'DROP_PARTITION_PROG',
    start_date      => TO_TIMESTAMP_TZ('2015/05/26 21:45:00.000000 +02:00','yyyy/mm/dd hh24:mi:ss.ff tzh:tzm'),
    repeat_interval => 'FREQ=WEEKLY;',
    end_date        => NULL,
    enabled         => TRUE,
    comments        => 'Job will execute the program drop_partitions.');
END;
/
Friday, May 22, 2015
Some common errors seen when configuring TSM
ANU2534E Option file error.
The above error can occur when you run the command

tdpoconf showenv

as a non-root user (typically the user owning the oracle database software):
tdpoconf showenv

IBM Tivoli Storage Manager for Databases:
Data Protection for Oracle
Version 6, Release 3, Level 0.0
(C) Copyright IBM Corporation 1997, 2011. All rights reserved.

ANU2534E Option file error.
Solution: Change permissions on two of the configuration files:
chmod 644 /usr/tivoli/tsm/client/oracle/bin64/tdpo.opt
chmod 644 /usr/tivoli/tsm/client/api/bin64/dsm.sys
ANS1217E (RC409) Server name not found in System Options File.
The following error typically occurs during a redirected restore, which is often needed during duplication of Oracle databases:
ANS1217E (RC409) Server name not found in System Options File
The above indicates that the attribute "TCPServeraddress" found in the file /usr/tivoli/tsm/client/api/bin64/dsm.sys must be changed so that it is identical to the setting on the source server.
ANS1087E (RC106) Access to the specified file or directory is denied.
This error could occur when you run the command tdpoconf showenv as a non-root user:
ANS1087E (RC106) Access to the specified file or directory is denied
The above error is resolved by setting correct permissions on the /var/adm/log folder:
su -
cd /var/adm/
chmod 755 log
cd log
chmod 777 tdpoerror.log dsmerror.log
ANS1035S (RC406) Options file '*' could not be found, or it cannot be read.
This error is also triggered when you access tdpo as a non-root user, for example by executing:

tdpoconf showenv
ANS1035S (RC406) Options file '*' could not be found, or it cannot be read.
To solve the problem, grant the right permissions on the files:
su -
chmod 644 /usr/tivoli/tsm/client/api/bin64/dsm.opt
chmod 644 /usr/tivoli/tsm/client/api/bin64/dsm.sys
ANS1025E (RC137) Session rejected Authentication failure.
The above error may occur when you are attempting to set a new password as a part of reconfiguring the server for a redirected restore:
ANS1025E (RC137) Session rejected: Authentication failure
When you run
tdpoconf passwd
you are prompted for password three times:
current password new password confirm new password
At this point during the configuration, the tsm server pointed to by the attribute "TCPServeraddress" in the configuration file /usr/tivoli/tsm/client/api/bin64/dsm.sys believes that it is communicating with the original backup client. Therefore, it is important that you supply the original backup client's password at all three prompts.
ANS0282E (RC168) Password file is not available.
When the above error message occurs, you simply need to execute
tdpoconf passwd
to reset the password. Supply the original backup client's password at all three prompts.
Thursday, May 21, 2015
Valid formats for JDBC URLs
Format for JDBC URL with SID:
•jdbc:oracle:thin:@host:1521:sid
•jdbc:oracle:thin:user/password@host:1521:sid
Format for JDBC URL with Service Name:
•jdbc:oracle:thin:@//host:1521/servicename
•jdbc:oracle:thin:user/password@//host:1521/servicename
The difference is in the use of slashes (/) and colons (:).
Source: Oracle Documentation, note 832455.1
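A small Python helper makes the difference between the two formats explicit; the function name and the host/port/identifier values are hypothetical, used only to illustrate the URL shapes from note 832455.1:

```python
def jdbc_url(host, port, sid=None, service=None):
    """Build a thin-driver JDBC URL in either the SID or the service name format."""
    if sid is not None:
        # SID format: colon between port and sid
        return f"jdbc:oracle:thin:@{host}:{port}:{sid}"
    # Service name format: // before the host, / before the service name
    return f"jdbc:oracle:thin:@//{host}:{port}/{service}"

print(jdbc_url("dbhost", 1521, sid="ORCL"))                   # jdbc:oracle:thin:@dbhost:1521:ORCL
print(jdbc_url("dbhost", 1521, service="sales.example.com"))  # jdbc:oracle:thin:@//dbhost:1521/sales.example.com
```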
Wednesday, May 20, 2015
How to deinstall Oracle 10g database software
In oracle 10g, you need to use the oui (oracle universal installer) script supplied in the $ORACLE_HOME/oui/bin folder, as shown below.
# Log in as the oracle 10g user, and set the environment to the ORACLE_HOME you are about to remove.
# Check the current ORACLE_HOME
echo $ORACLE_HOME
/u01/oracle/product/10.2.0.4/db
# run oui in silent mode, and point to the correct Oracle 10g home:
cd /u01/oracle/product/10.2.0.4/db/oui/bin
./runInstaller -deinstall -silent REMOVE_HOMES={"/u01/oracle/product/10.2.0.4/db"}
Expected output:
Starting Oracle Universal Installer...

No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-05-20_08-51-59AM. Please wait ...
Oracle Universal Installer, Version 10.2.0.4.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved.

Starting deinstall
Deinstall in progress (Wednesday, May 20, 2015 8:52:24 AM CEST)
Configuration assistant "iSQL*Plus Configuration Assistant" succeeded
Configuration assistant "Oracle Database Configuration Assistant" succeeded
Configuration assistant "Oracle Net Configuration Assistant - Deinstall Script" failed
.................................................. 100% Done.
Deinstall successful
End of install phases. (Wednesday, May 20, 2015 8:56:17 AM CEST)
End of deinstallations
Please check '/u01/oracle/oraInventory/logs/silentInstall2015-05-20_08-51-59AM.log' for more details.

To verify the deinstallation, check the oracle inventory, in my case /u01/oracle/oraInventory/ContentsXML/inventory.xml
There should be a row similar to:
Remove the old ORACLE_HOME folder physically from disk:
cd /u01/oracle/product
rm -rf 10.2.0.4
How to deinstall Oracle 11g database software
In oracle 11g, you can use the deinstall script supplied in the $ORACLE_HOME/deinstall folder, as shown below.
In my example, I have two Oracle software installations on the host
/u01/oracle/product/11203 <-- not in use, can be removed
/u01/oracle/product/11204 <-- in use
cd /tmp
# first perform a trial run
/u01/oracle/product/11203/deinstall/deinstall -checkonly
# do the actual deinstall
/u01/oracle/product/11203/deinstall/deinstall
I have found that when you have a listener running on the host out of a different ORACLE_HOME, the deinstall script will often (but not always) want to deconfigure it, even though it runs out of a different ORACLE_HOME. So for a brief period, the host may be without a valid listener process. However, this was never a problem; simply change directory to the working /u01/oracle/product/11204/network/admin folder and restart the listener.
Sometimes (but not always) you may have to manually remove the deinstall folder under the ORACLE_HOME home you just deinstalled.
Thursday, May 7, 2015
How to add a logfile group and a logfile member - syntax
alter database add logfile group 2 (
  '/u01/app/oracle/flash_recovery_area/mydb/onlinelog/redo02a.log',
  '/u01/app/oracle/oradata/mydb/onlinelog/redo02b.log'
) size 50M;
Remember, if you have just recently dropped the redo log members, they are still present physically on disk.
If you'd like to reuse the log file member names, and to avoid
ORA-00301: error in adding log file /u03/oradata/arutvt/redo03.log - file cannot be created,
add the REUSE keyword at the end of the statement:
alter database add logfile group 3 ('/u03/oradata/mydb/redo03.log') size 1024M REUSE;

To add another member to an already existing group:

ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/mydb/redo03.log' TO GROUP 3;

If the redo log file member is already present on disk, use REUSE:

ALTER DATABASE ADD LOGFILE MEMBER '/u03/oradata/mydb/redo3b.log' REUSE TO GROUP 3;
If you do not explicitly specify which log group number you want to create, Oracle will add another group based on the largest log file group number in v$logfile. So if I have 3 groups already and execute

alter database add logfile '/rdodata/mydb/redo04.log' size 2048M;

it will create group number 4 for you, even if you do not explicitly say so:

SYS@cdb> @redo

    GROUP# MEMBER                           MB ARC STATUS           SEQUENCE#
---------- ------------------------ ---------- --- ---------------- ---------
         1 /rdodata/mydb/redo01.log       2048 YES INACTIVE               220
         2 /rdodata/mydb/redo02.log       2048 YES INACTIVE               221
         3 /rdodata/mydb/redo03.log       2048 NO  CURRENT                222
         4 /rdodata/mydb/redo04.log       2048 YES UNUSED                   0
Documentation for Oracle 19c is found here
Thursday, April 30, 2015
How to find information about CPUs on a Linux server
On RHEL-based Linux distributions, cat the file /proc/cpuinfo, and you will find detailed information about the server's CPUs, including model name, cache size and vendor.
Or
cat /proc/cpuinfo | grep processor
processor	: 0
processor	: 1
processor	: 2
processor	: 3
Or
lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 58
Stepping:              0
CPU MHz:               2800.000
BogoMIPS:              5600.00
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              25600K
NUMA node0 CPU(s):     0-3
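If you need the logical CPU count programmatically, the same "grep processor" idea takes a few lines of Python. The sample text below is made up for illustration; on a real Linux host you would read /proc/cpuinfo itself:

```python
# Count logical CPUs the way "grep -c ^processor /proc/cpuinfo" would.
# The sample is a fabricated two-CPU excerpt, not real output.
sample = """\
processor : 0
model name : Intel(R) Xeon(R)
processor : 1
model name : Intel(R) Xeon(R)
"""

def count_processors(cpuinfo_text: str) -> int:
    # Each logical CPU contributes one line starting with "processor".
    return sum(1 for line in cpuinfo_text.splitlines()
               if line.startswith("processor"))

print(count_processors(sample))  # 2

# On a real Linux server:
#   count_processors(open("/proc/cpuinfo").read())
```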
Wednesday, April 29, 2015
How to execute a stored dbms_scheduler job
In sqlplus, execute the following:
execute dbms_scheduler.run_job('CHANGE_PERMS',TRUE);
Friday, April 24, 2015
How to select rows from a specific partition
The syntax to use if you want to select directly from a specific partition:
SELECT row1, row2 ... FROM my_partitioned_table PARTITION (my_partition_name) WHERE ....;
From a subpartition:
SELECT row1, row2 ... FROM my_partitioned_table SUBPARTITION (my_subpartition_name) WHERE ....;
With Oracle 11g interval partitioning, the syntax has changed so that you can reference a date range, rather than a partition name, since the auto generated names won't say anything meaningful about their content:
select row1, row2 .. from my_partitioned_table PARTITION FOR (to_date('25-FEB-2015','dd-mon-yyyy'));
Remember that aliases must be specified after the entire FROM-clause, including the PARTITION-part, like this:
select ee.col1 from my_partitioned_table partition (SYS_P2447) ee where ee.col2 = 'string1' and ee.col3 like '%string2%' order by ee.col1,col2;
If you place the alias immediately after the table name, you'll get the following error upon execution:

partition (SYS_P2447)
          *
ERROR at line 3:
ORA-00924: missing BY keyword
Tuesday, April 21, 2015
What to do when Oracle Universal Installer reports missing package pdksh-5.2.14
On RHEL 6, if you encounter situations where the OUI is complaining about the absence of the package pdksh-5.2.14, do the following:
1. Change directory to /database/stage/cvu/cv/admin
2. Backup cvu_config:
   % cp cvu_config backup_cvu_config
3. Edit cvu_config and change the following line:
   CV_ASSUME_DISTID=OEL4
   to:
   CV_ASSUME_DISTID=OEL6
4. Save the updated cvu_config file
5. Install the 11.2.0.3 or 11.2.0.4 software using /database/runInstaller
   % cd /database
   % ./runInstaller
When installing a 32-bit client like in my case, the OUI will warn that "all requirements have not been met".
This message can be ignored and you can proceed with the installation.
Source: MOS document ID 1454982.1
How to check whether a library is compiled to 32-bit or 64-bit code on AIX
Use the nm utility:
"Displays information about symbols in object files, executable files, and object-file libraries."
In the following case, the Library file is 32-bit, since using the -X64 flag is throwing an exception:
nm -X64 /u01/oracle/product/ora11g_client32/mylib.so
0654-210 /u01/oracle/product/ora11g_client32/mylib.so is not valid in the current object file mode.
	Use the -X option to specify the desired object mode.
while using the -X32 flag will produce output similar to the following (abbreviated):
z00u070:ar3u> nm -X32 /u01/oracle/product/ora11g_client32/mylib.so
../../../../../../../src/bos/usr/ccs/lib/libm/m_tables.c f -
zstcXAForget   U -
zstcXAOpen     U -
zstcXAPrepare  U -
zstcXARecover  U -
zstcXARollback U -
zstcXAStart    U -
Note that you can use the OBJECT_MODE environment variable and instead execute
export OBJECT_MODE=64
nm /u01/oracle/product/ora11g_client32/mylib.so
How to check whether a library is compiled to 32-bit or 64-bit code on Linux
On Linux, use objdump:
objdump -f /u01/oracle/product/oracle_client32/mylibdir/mylibfile.so

/u01/oracle/product/oracle_client32/mylibdir/mylibfile.so:     file format elf32-i386
architecture: i386, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000560
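The elf32/elf64 verdict that objdump prints comes straight from the ELF identification bytes, so a minimal check can be done by hand. Below is a Python sketch of that idea (the file path in the comment is a placeholder): byte 4 of the header, EI_CLASS, is 1 for 32-bit and 2 for 64-bit objects.

```python
# Minimal sketch: decide 32- vs 64-bit by reading the ELF identification
# bytes directly. Not a replacement for objdump, just the core of the check.
def elf_class(header: bytes) -> str:
    if header[:4] != b"\x7fELF":           # ELF magic number
        raise ValueError("not an ELF file")
    # header[4] is EI_CLASS: 1 = ELFCLASS32, 2 = ELFCLASS64
    return {1: "32-bit", 2: "64-bit"}.get(header[4], "unknown")

# Usage on a real shared library (path is a placeholder):
#   with open("/u01/oracle/product/oracle_client32/mylibdir/mylibfile.so", "rb") as f:
#       print(elf_class(f.read(5)))
print(elf_class(b"\x7fELF\x01"))  # 32-bit
```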
Thursday, April 9, 2015
How to use dbms_metadata to generate DDL for profiles
An easy way to migrate your profiles from a source database to a target database during migration is to use the dbms_metadata package.
To generate one call for each profile:
SELECT UNIQUE 'SELECT DBMS_METADATA.GET_DDL(''PROFILE'',' || ''''|| PROFILE || ''') FROM DUAL;' FROM DBA_PROFILES;
In my case, the result was a total of three profiles. Use the resulting rows in the script below:
SET HEADING OFF
SET TRIMSPOOL ON
SET FEEDBACK OFF
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR', TRUE);
-- your calls to dbms_metadata here
SELECT DBMS_METADATA.GET_DDL('PROFILE','PROF') from dual;
SELECT DBMS_METADATA.GET_DDL('PROFILE','DEFAULT') from dual;
SELECT DBMS_METADATA.GET_DDL('PROFILE','ONLINEUSR') from dual;
Tuesday, March 24, 2015
How to use the nmap tool for Oracle networking troubleshooting
The network exploration tool and security/port scanner (nmap) came in handy as I was checking the prerequisites for a Golden Gate installation.
We had to open ports in a firewall between the two servers to allow the Golden Gate Manager processes on each side to communicate with one another.
Initially, the ports seemed to be closed even though the firewall administrator claimed it was open.
Oracle Golden Gate needs two-way communication over the designated manager port, which by default is 7809.
So I used nmap to prove that it was indeed closed.
When the nmap status is closed or filtered, the man page explains these states as follows:
"Closed ports have no application listening on them, though they could open up at any time. Ports are classified as unfiltered when they are responsive to nmap's probes, but nmap cannot determine whether they are open or closed.
Filtered ports means that a firewall, filter, or other network obstacle is blocking the port so that nmap cannot tell whether it is open or closed."
Port 1521 was opened, as requested from the firewall team:
[root@myserver2 ~]# nmap -p 1521 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 14:02 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up (0.0018s latency).
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE SERVICE
1521/tcp open  oracle
Port 7809 was closed, as seen by the output below:
[root@myserver2 ~]# nmap -p 7809 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 15:14 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up.
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE    SERVICE
7809/tcp filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
Later, the port range 7809-7820 was opened, as can be seen below. Note that there is no activity on ports 7810-7820 so they are for the time being marked as closed:
[root@myserver2 ~]# nmap -p 7809-7820 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 15:48 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up (0.0024s latency).
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE  SERVICE
7809/tcp open   unknown
7810/tcp closed unknown
7811/tcp closed unknown
7812/tcp closed unknown
7813/tcp closed unknown
7814/tcp closed unknown
7815/tcp closed unknown
7816/tcp closed unknown
7817/tcp closed unknown
7818/tcp closed unknown
7819/tcp closed unknown
7820/tcp closed unknown
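If nmap is not available on a host, a rough open/closed probe can be scripted with a plain TCP connect. The Python sketch below is my own approximation, not an nmap equivalent: a refused connection maps to "closed", and a timeout (how a firewalled port typically behaves) maps to "filtered". Host and port values are placeholders.

```python
import errno
import socket

def probe(host: str, port: int, timeout: float = 2.0) -> str:
    """Rough single-port check: 'open', 'closed' or 'filtered'."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            rc = s.connect_ex((host, port))
        except OSError:          # timeouts and resolution errors land here
            return "filtered"
    if rc == 0:
        return "open"
    if rc == errno.ECONNREFUSED:
        return "closed"
    return "filtered"

# Usage (placeholder host/port, e.g. the Golden Gate manager port):
#   print(probe("myserver1", 7809))
```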
Thursday, March 12, 2015
How to use DECODE to create a script for compilation of both packages and package bodies
connect scott/tiger
alter session set nls_language='AMERICAN';
set heading off
set trimspool on
set feedback off
set verify off
set echo off
set pagesize 0
spool recompl.lst
select 'alter '||decode(object_type, 'PACKAGE BODY', 'package', object_type) || ' ' ||
       object_name || ' compile' ||
       decode(object_type, 'PACKAGE BODY', ' body;', ';')
from   user_objects
where  status = 'INVALID'
order by object_type;
select 'show errors' from dual;
select 'exit' from dual;
spool off
start recompl.lst
Monday, February 23, 2015
How to solve ORA-02180 when specifying COMPRESSION type
You get
ORA-02180: invalid option for CREATE TABLESPACE

when executing a create tablespace statement like this one:
CREATE TABLESPACE test DATAFILE '/u02/oradata/mydb/test.ora' SIZE 32M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
COMPRESS FOR OLTP
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
Solution:
Add the DEFAULT keyword to specify the default parameters for the database:
CREATE TABLESPACE test DATAFILE '/u02/oradata/mydb/test.ora' SIZE 32M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
DEFAULT COMPRESS FOR OLTP
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
Tuesday, February 17, 2015
How cloning from backup became a little more secure in Oracle 11g
From version 11, Oracle supplied another method to use during cloning, namely "Backup-based duplication without a target connection".
From the Oracle 11g Documentation:
RMAN can perform the duplication in any of the following supported modes:

1. Active duplication
2. Backup-based duplication without a target connection
3. Backup-based duplication with a target connection
4. Backup-based duplication without connection to target and recovery catalog
If you choose method number 2, you need to use the following syntax:
DUPLICATE DATABASE <target database> TO <auxiliary database>;
In the 10g documentation, you will see that you need to use the "target" keyword:
DUPLICATE TARGET DATABASE TO <auxiliary database>;
Oracle points out:
«This mode is useful when the target database is not available or a connection to it is not desirable».
In other words, cloning has become more secure since version 11g, where we can totally avoid connecting to the target database, which is often in production, during cloning from backup.
An example of an incident where a connection to the target could potentially jeopardize production, is when you are scripting jobs for RMAN and accidentally issue "shutdown", which will bring down your target database and not your auxiliary database, which was what you intended.
For the record, the keyword "target" is an unfortunate choice of syntax during cloning. In IT, a "target" would generally be interpreted as a synonym for "destination", which is where you want to clone *TO*.
The emcli utility
If you're working in a large Enterprise and are using Cloud Control, you will have access to a neat utility called emcli.
emcli is short for "Enterprise Manager Command Line Interface" and is documented here.
In short, emcli can be used to perform scripted jobs on hosts, or on databases running on hosts, that are registered in your Cloud Control 12c.
The jobs you create and execute will show up in Cloud Control under "Enterprise"-->"Jobs"-->"Job Activity"
In this particular case, I wanted to use the adrci utility to set new retention policies.
Instead of logging into each and every oracle 11g database server, I used emcli to loop through the different groups that our Cloud Control Administrator had created.
To change target from a specific host to a group, exchange the -target to a valid group name, followed by the keyword "group". For example
-targets="prod_db_servers:group"
First, make sure you're logged into emcli:
emcli login -username=sysman -password=mypassword
Login successful
You can query the targets usable by emcli by using "get_targets" like this:
oracle@myomsserver:[OMS]# emcli get_targets |grep -i myserver1
If you have a large enterprise with many targets, you may need to add "-limit_rows" to the "get_targets" command in order to accommodate a larger list, since "get_targets" will automatically limit the rows extracted to 2000:
oracle@myomsserver: [OMS]# emcli get_targets -limit_rows=5000 |grep -i myserver2
The following command will execute a host command of type "file", and the actual file is pointed out by the -input_file flag:
emcli execute_hostcmd -cmd="/bin/sh -s" -osscript="FILE" -input_file="FILE:/u01/app/scripts/change_adrci_policy.sh" -credential_set_name="HostCredsNormal" -targets="myhost.mydomain.no:host"
The file "change_adrci_policy.sh":
#!/usr/bin/ksh
# Oric Consulting AS
# Vegard Kasa
# Version 2.0
# Special coding required for AIX hosts, as there are many different flavours of setting the environment...
#
echo "Hostname: " `hostname`
echo "Running as user: " `whoami`
export NUMDB=`cat /etc/oratab | egrep -v "\#|\*" | sed '/^$/d' | wc -l | tr -d ' '`
export OS=`uname -s`
echo "There are $NUMDB database(s) on this server."
# Loop through the databases extracted from /etc/oratab
for x in $(cat /etc/oratab | grep -v "#" | grep -v "*" | awk -F":" '{print $1}' | sed '/^$/d'); do
case $OS in
'Linux') echo "This is Linux, sourcing .bash_profile...";
. ~/.bash_profile;
# set the ORACLE_SID, and make sure LIBPATH is set
export ORACLE_SID=$x;
export LIBPATH=$ORACLE_HOME/lib;
# PATH must be set explicitly, even after sourcing profile, to avoid mixing database home binaries
# and Oracle Agent home binaries.
export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/drift:/home/oracle/bin;
echo "adrci exec=\"spool /home/oracle/get_adr_home.lst;show homes;quit\"" > /home/oracle/find_adr_home.sh;
chmod 755 /home/oracle/find_adr_home.sh;
/home/oracle/find_adr_home.sh;
export ADR_HOME=`cat /home/oracle/get_adr_home.lst | grep rdbms | grep $ORACLE_SID`;;
'AIX') echo "This is AIX...";
echo "Now checking for oracle11g installations..."
USEDSHELL=`cat /etc/passwd | grep ora11g | cut -f7 -d":"`
# The user ora11g is not found
if [ -z "${USEDSHELL}" ]; then
echo "Oracle user \"ora11g\" does not exist on server `hostname`"
break;
# The user ora11g is indeed found in /etc/passwd
else
if [ "${USEDSHELL}" = "/usr/bin/bash" ]; then
# We found that ora11g is using bash shell. Source .bash_profile
echo "Sourcing .bash_profile in this environment..."
. ~/.bash_profile
else
# We found that ora11g is using a shell <> from bash. Source .profile
echo "Sourcing .profile in this environment..."
. ~/.profile;
IAM=`whoami`
# We have sourced .profile and the user turns out to be ora10g
if [ "${IAM}" = "ora10g" ]; then
echo "Oracle installation under user \"ora10g\" found."
echo "Only Oracle11g is applicable for adrci purging. Exiting."
break;
fi
fi
fi
# set the ORACLE_SID, and make sure LIBPATH is set
export ORACLE_SID=$x;
export LIBPATH=$ORACLE_HOME/lib
# PATH must be set explicitly, even after sourcing profile, to avoid mixing database home binaries
# and Oracle Agent home binaries.
export PATH=$ORACLE_HOME/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/ora11g/bin:/usr/bin/X11:/sbin:/usr/java131/bin:/usr/vac/bin:/usr/local/bin;
# create an executable file on the target server
echo "adrci exec=\"spool /home/ora11g/get_adr_home.lst;show homes;quit\"" > /home/ora11g/find_adr_home.sh;
chmod 755 /home/ora11g/find_adr_home.sh;
# execute the file
/home/ora11g/find_adr_home.sh;
# grep for the ADR home in the log file produced by the execution above. Look for rdbms home
# Then set the ADR_HOME to the path reflecting the rdbms instance
export ADR_HOME=`cat /home/ora11g/get_adr_home.lst | grep rdbms | grep $ORACLE_SID`;;
esac
# Show the current settings (for logging purposes)
echo "ORACLE_SID is " $ORACLE_SID
echo "ORACLE_HOME is: " $ORACLE_HOME
echo "PATH: " $PATH
echo "LIBPATH: " $LIBPATH
echo "adrci located in " `which adrci`
echo "ADR HOME selected: " $ADR_HOME
# finally, execute the set control statements in adrci against the rdbms home
adrci exec="set home $ADR_HOME;show control;set control \(shortp_policy = 336\);set control\(longp_policy=2160\);show control;quit"
done
exit 0
Friday, January 23, 2015
What is the difference between a BEQUEATH connection and an IPC connection?
A bequeath connection
runs on your local host
bypasses the listener
the protocol creates the server process for you directly
An IPC (Inter-Process Communication) connection
will use the native protocol on each OS, but uses the generic term "IPC" for all of them
can only be used when the Client and Server reside on the same host
can only be used by having the Client connect through the Oracle Listener
the Database Listener must be configured to listen on an IPC endpoint
the listener spawns the server process for you
Example setup:
Listener.ora:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )
tnsnames.ora:
proddb01_ipc =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb01)
    )
  )

Connect to your database locally:
sqlplus /nolog

SQL*Plus: Release 11.2.0.4.0 Production on Thu Jan 22 15:35:44 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.

SQL> connect scott/tiger
Connected.

From another window, create another session:
sqlplus /nolog

SQL*Plus: Release 11.2.0.4.0 Production on Thu Jan 22 15:35:44 2015
Copyright (c) 1982, 2013, Oracle. All rights reserved.

SQL> connect scott/tiger@proddb01_ipc
Connected.
Check the connections and their types:
SELECT S.SID, S.OSUSER, S.PROGRAM, S.USERNAME, S.MACHINE,
       SCI.NETWORK_SERVICE_BANNER, S.LOGON_TIME, S.STATUS
FROM   V$SESSION S
       INNER JOIN V$SESSION_CONNECT_INFO SCI ON S.SID = SCI.SID
WHERE  S.USERNAME = UPPER('scott')
AND    SCI.NETWORK_SERVICE_BANNER LIKE '%IPC%'
OR     SCI.NETWORK_SERVICE_BANNER LIKE INITCAP('%BEQ%')
AND    S.TYPE <> 'BACKGROUND'
ORDER BY LOGON_TIME;
And here is the output. Notice how the first session (a local connect without a connect identifier) results in a Bequeath session, while the other one (...@proddb01_ipc) results in a session using IPC:
SID | OSUSER | PROGRAM | USERNAME | MACHINE | NETWORK_SERVICE_BANNER | LOGON_TIME | STATUS |
---|---|---|---|---|---|---|---|
9 | oracle | sqlplus@myserver.mydomain.com (TNS V1-V3) | SCOTT | myserver.mydomain.com | Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production | 22.01.2015 15:35:49 | INACTIVE |
1160 | oracle | sqlplus@myserver.mydomain.com (TNS V1-V3) | SCOTT | myserver.mydomain.com | Unix Domain Socket IPC NT Protocol Adaptor for Linux: Version 11.2.0.4.0 - Production | 22.01.2015 15:40:44 | INACTIVE |
Thursday, January 22, 2015
A complete script for gathering system, data dictionary and fixed objects stats
alter session set nls_language='AMERICAN';
set timing on
set serveroutput on
set lines 200
col pname format a20

BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS (
    Estimate_Percent => DBMS_STATS.AUTO_SAMPLE_SIZE
   ,method_opt       => 'FOR ALL COLUMNS SIZE AUTO'
   ,Degree           => NULL
   ,Cascade          => DBMS_STATS.AUTO_CASCADE
   ,granularity      => 'AUTO'
   ,No_Invalidate    => DBMS_STATS.AUTO_INVALIDATE);
END;
/

BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/

SELECT PNAME, PVAL1 FROM SYS.AUX_STATS$;

PROMPT Gather workload system stats, sample for 1 hour
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS (
    GATHERING_MODE => 'INTERVAL',
    interval       => 60,
    statid         => 'DAYTIME');
END;
/

SELECT PNAME, PVAL1 FROM SYS.AUX_STATS$;
exit
Tim Hall has written a good article about how the gather_system_stats procedure works, read it at www.oracle-base.com
Wednesday, January 21, 2015
How to relocate the block change tracking file
To relocate the block change tracking file you have two options:
1) shutdown database, mount database, update control file, open database
sqlplus / as sysdba
shutdown immediate
exit

-- Move the block change tracking file to the new location using the appropriate os utility. --

sqlplus / as sysdba
startup mount
ALTER DATABASE RENAME FILE 'ora_home/dbs/change_trk.f' TO '/new_disk/change_trk.f';
ALTER DATABASE OPEN;
OR
2) disable and re-enable block change tracking, and point to the new location when re-enabling.
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';
See Section "4.4.4.3 Moving the Change Tracking File" in the Oracle Documentation regarding this feature.