Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Tuesday, April 21, 2015
How to check whether a library is compiled to 32-bit or 64-bit code on Linux
On Linux, use objdump:

objdump -f /u01/oracle/product/oracle_client32/mylibdir/mylibfile.so

/u01/oracle/product/oracle_client32/mylibdir/mylibfile.so:     file format elf32-i386
architecture: i386, flags 0x00000150:
HAS_SYMS, DYNAMIC, D_PAGED
start address 0x00000560

The "file format elf32-i386" and "architecture: i386" lines show that this library contains 32-bit code.
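If objdump happens not to be installed, the same information can be read straight from the ELF header: byte 5 (offset 4) is the class byte, 1 for 32-bit and 2 for 64-bit. A minimal sketch, demonstrated on a fabricated header since the library path above is site-specific:

```shell
# Report whether an ELF file is 32-bit or 64-bit by reading its class byte.
# od is POSIX; assumes the argument is a readable file.
elf_class() {
  cls=$(od -An -tu1 -j4 -N1 "$1" | tr -d ' ')
  case "$cls" in
    1) echo "32-bit" ;;
    2) echo "64-bit" ;;
    *) echo "not ELF or unknown class" ;;
  esac
}

# Demo on a fabricated 64-bit ELF header (magic \177ELF followed by class byte 2):
printf '\177ELF\002' > /tmp/sample_elf
elf_class /tmp/sample_elf
```

In practice you would call elf_class on the .so file itself, just as with objdump above.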
Thursday, April 9, 2015
How to use dbms_metadata to generate DDL for profiles
An easy way to migrate your profiles from a source database to a target database during migration is to use the dbms_metadata package.
To generate one call for each profile:
SELECT UNIQUE 'SELECT DBMS_METADATA.GET_DDL(''PROFILE'',' || ''''|| PROFILE || ''') FROM DUAL;' FROM DBA_PROFILES;
In my case, the result was a total of three profiles. Use the resulting rows in the script below:
SET HEADING OFF
SET TRIMSPOOL ON
SET FEEDBACK OFF
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR', TRUE);
-- your calls to dbms_metadata here
SELECT DBMS_METADATA.GET_DDL('PROFILE','PROF') from dual;
SELECT DBMS_METADATA.GET_DDL('PROFILE','DEFAULT') from dual;
SELECT DBMS_METADATA.GET_DDL('PROFILE','ONLINEUSR') from dual;
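The two steps can also be glued together with a small shell sketch that turns the profile names from step 1 into the driver script of step 2. The PROFILES list below is illustrative; in practice it comes from the DBA_PROFILES query:

```shell
# Generate the SQL*Plus driver script from a list of profile names.
# PROFILES is hard-coded here for illustration only.
PROFILES="DEFAULT PROF ONLINEUSR"
{
  echo "SET HEADING OFF"
  echo "SET TRIMSPOOL ON"
  echo "SET FEEDBACK OFF"
  echo "EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'SQLTERMINATOR', TRUE);"
  for p in $PROFILES; do
    echo "SELECT DBMS_METADATA.GET_DDL('PROFILE','$p') FROM DUAL;"
  done
} > gen_profile_ddl.sql
cat gen_profile_ddl.sql
```

Run the resulting gen_profile_ddl.sql in SQL*Plus on the source database and spool the output to a file you can execute on the target.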
Tuesday, March 24, 2015
How to use the nmap tool for Oracle networking troubleshooting
The network exploration and security/port scanner tool nmap came in handy as I was checking the prerequisites for a Golden Gate installation.
We had to open ports in a firewall between the two servers to allow the Golden Gate Manager processes on each side to communicate with one another.
Initially, the ports seemed to be closed even though the firewall administrator claimed it was open.
Oracle Golden Gate needs two-way communication over the designated manager port, which by default is 7809.
So I used nmap to prove that it was indeed closed.
When nmap reports a port as closed or filtered, the man page explains the states as follows:
"Closed ports have no application listening on them, though they could open up at any time. Ports are classified as unfiltered when they are responsive to nmap's probes, but nmap cannot determine whether they are open or closed.
Filtered ports means that a firewall, filter, or other network obstacle is blocking the port so that nmap cannot tell whether it is open or closed."
Port 1521 was opened, as requested from the firewall team:
[root@myserver2 ~]# nmap -p 1521 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 14:02 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up (0.0018s latency).
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE SERVICE
1521/tcp open  oracle
Port 7809 was closed, as seen by the output below:
[root@myserver2 ~]# nmap -p 7809 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 15:14 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up.
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE    SERVICE
7809/tcp filtered unknown

Nmap done: 1 IP address (1 host up) scanned in 2.08 seconds
Later, the port range 7809-7820 was opened, as can be seen below. Note that there is no activity on ports 7810-7820 so they are for the time being marked as closed:
[root@myserver2 ~]# nmap -p 7809-7820 myserver1

Starting Nmap 5.51 ( http://nmap.org ) at 2015-03-24 15:48 CET
Nmap scan report for myserver1 (159.216.45.70)
Host is up (0.0024s latency).
rDNS record for 159.216.45.70: myserver1.mydomain.no
PORT     STATE  SERVICE
7809/tcp open   unknown
7810/tcp closed unknown
7811/tcp closed unknown
7812/tcp closed unknown
7813/tcp closed unknown
7814/tcp closed unknown
7815/tcp closed unknown
7816/tcp closed unknown
7817/tcp closed unknown
7818/tcp closed unknown
7819/tcp closed unknown
7820/tcp closed unknown
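If nmap is not installed on the server, a rough reachability probe can be done with bash's built-in /dev/tcp device. A minimal sketch (host and port below are examples); note that unlike nmap it cannot distinguish a closed port from a filtered one:

```shell
# Probe a single TCP port: succeed if a connection can be established.
# Uses bash's /dev/tcp pseudo-device; timeout caps the wait for filtered ports.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed or filtered"
  fi
}

check_port myserver1 7809
```

This is handy for a quick yes/no answer from a host where you cannot install extra packages.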
Thursday, March 12, 2015
How to use DECODE to create a script for compilation of both packages and package bodies
connect scott/tiger
alter session set nls_language='AMERICAN';
set heading off
set trimspool on
set feedback off
set verify off
set echo off
set pagesize 0
spool recompl.lst
select 'alter '||decode(object_type, 'PACKAGE BODY', 'package', object_type) || ' ' || object_name || ' compile' || decode(object_type, 'PACKAGE BODY', ' body;', ';')
from user_objects
where status = 'INVALID'
order by object_type;
select 'show errors' from dual;
select 'exit' from dual;
spool off
start recompl.lst
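The DECODE calls handle the fact that a package body is recompiled with "alter package ... compile body" while every other object type uses its own type name. For an invalid package specification and body, the spooled recompl.lst will contain statements of this shape (the package name MYPKG is illustrative):

```sql
alter PACKAGE MYPKG compile;
alter package MYPKG compile body;
```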
Monday, February 23, 2015
How to solve ORA-02180 when specifying COMPRESSION type
You get "ORA-02180: invalid option for CREATE TABLESPACE" when executing a create tablespace statement like this one:
CREATE TABLESPACE test DATAFILE '/u02/oradata/mydb/test.ora' SIZE 32M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
COMPRESS FOR OLTP
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
Solution:
Add the DEFAULT keyword to specify the default parameters for the database:
CREATE TABLESPACE test DATAFILE '/u02/oradata/mydb/test.ora' SIZE 32M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
DEFAULT COMPRESS FOR OLTP
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;
Tuesday, February 17, 2015
How cloning from backup became a little more secure in Oracle 11g
From version 11, Oracle introduced another method to use during cloning, namely "Backup-based duplication without a target connection".
From the Oracle 11g Documentation:
RMAN can perform the duplication in any of the following supported modes:
1. Active duplication
2. Backup-based duplication without a target connection
3. Backup-based duplication with a target connection
4. Backup-based duplication without connection to target and recovery catalog
If you choose method number 2, you need to use the following syntax:
DUPLICATE DATABASE <target database> TO <auxiliary database>;
In 10g, by contrast, the documentation shows that you need to use the "target" keyword:
DUPLICATE TARGET DATABASE TO <auxiliary database>;
Oracle points out:
«This mode is useful when the target database is not available or a connection to it is not desirable».
In other words, cloning has become more secure since version 11g: when cloning from backup, we can avoid connecting to the target database altogether, and the target is often a production database.
An example of an incident where a connection to the target could jeopardize production: you are scripting RMAN jobs and accidentally issue "shutdown", which brings down your target database rather than the auxiliary database you intended.
For the record, the keyword "target" is an unfortunate choice of syntax for cloning. In IT, a "target" would generally be interpreted as a synonym for "destination", which is where you want to clone *TO*.
The emcli utility
If you're working in a large Enterprise and are using Cloud Control, you will have access to a neat utility called emcli.
emcli is short for "Enterprise Manager Command Line Interface" and is documented here.
In short, emcli can be used to perform scripted jobs on hosts, or on databases running on hosts, that are registered in your Cloud Control 12c.
The jobs you create and execute will show up in Cloud Control under "Enterprise"-->"Jobs"-->"Job Activity"
In this particular case, I wanted to use the adrci utility to set new retention policies.
Instead of logging into each and every Oracle 11g database server, I used emcli to loop through the different groups that our Cloud Control administrator had created.
To change the target from a specific host to a group, exchange the -targets value for a valid group name, followed by the keyword "group". For example:
-targets="prod_db_servers:group"
First, make sure you're logged into emcli:
emcli login -username=sysman -password=mypassword
Login successful
You can query the targets usable by emcli by using "get_targets" like this:
oracle@myomsserver:[OMS]# emcli get_targets |grep -i myserver1
If you have a large enterprise with many targets, you may need to add "-limit_rows" to the "get_targets" command in order to accommodate a larger list, since "get_targets" will automatically limit the rows extracted to 2000:
oracle@myomsserver: [OMS]# emcli get_targets -limit_rows=5000 |grep -i myserver2
The following command will execute a host command of type "file", and the actual file is pointed out by the -input_file flag:
emcli execute_hostcmd -cmd="/bin/sh -s" -osscript="FILE" -input_file="FILE:/u01/app/scripts/change_adrci_policy.sh" -credential_set_name="HostCredsNormal" -targets="myhost.mydomain.no:host"
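To run the same host command against several groups, the call can be wrapped in a loop. A hypothetical sketch; the group names are examples from this environment, and the echo lets you review the generated commands first (remove it to actually execute them):

```shell
# Build (and review) one execute_hostcmd invocation per Cloud Control group.
# Remove the leading "echo" to run the commands for real.
for grp in test_db_servers prod_db_servers; do
  echo emcli execute_hostcmd -cmd="/bin/sh -s" -osscript="FILE" \
       -input_file="FILE:/u01/app/scripts/change_adrci_policy.sh" \
       -credential_set_name="HostCredsNormal" \
       -targets="${grp}:group"
done
```

Each invocation shows up as a separate job under "Enterprise" --> "Jobs" --> "Job Activity" in Cloud Control.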
The file "change_adrci_policy.sh":
#!/usr/bin/ksh
# Oric Consulting AS
# Vegard Kasa
# Version 2.0
#
# Special coding required for AIX hosts, as there are many different flavours of setting the environment...
#
echo "Hostname: " `hostname`
echo "Running as user: " `whoami`
export NUMDB=`cat /etc/oratab | egrep -v "\#|\*" | sed '/^$/d' | wc -l | tr -d ' '`
export OS=`uname -s`
echo "There are $NUMDB database(s) on this server."
# Loop through the databases extracted from /etc/oratab
for x in $(cat /etc/oratab | grep -v "#" | grep -v "*" | awk -F":" '{print $1}' | sed '/^$/d'); do
case $OS in
'Linux') echo "This is Linux, sourcing .bash_profile...";
. ~/.bash_profile;
# set the ORACLE_SID, and make sure LIBPATH is set
export ORACLE_SID=$x;
export LIBPATH=$ORACLE_HOME/lib;
# PATH must be set explicitly, even after sourcing profile, to avoid mixing database home binaries
# and Oracle Agent home binaries.
export PATH=$ORACLE_HOME/bin:/usr/bin:/bin:/usr/local/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/drift:/home/oracle/bin;
echo "adrci exec=\"spool /home/oracle/get_adr_home.lst;show homes;quit\"" > /home/oracle/find_adr_home.sh;
chmod 755 /home/oracle/find_adr_home.sh;
/home/oracle/find_adr_home.sh;
export ADR_HOME=`cat /home/oracle/get_adr_home.lst | grep rdbms | grep $ORACLE_SID`;;
'AIX') echo "This is AIX...";
echo "Now checking for oracle11g installations..."
USEDSHELL=`cat /etc/passwd | grep ora11g | cut -f7 -d":"`
# The user ora11g is not found
if [ -z "${USEDSHELL}" ]; then
echo "Oracle user \"ora11g\" does not exist on server `hostname`"
break;
# The user ora11g is indeed found in /etc/passwd
else
if [ "${USEDSHELL}" = "/usr/bin/bash" ]; then
# We found that ora11g is using bash shell. Source .bash_profile
echo "Sourcing .bash_profile in this environment..."
. ~/.bash_profile
else
# We found that ora11g is using a shell <> from bash. Source .profile
echo "Sourcing .profile in this environment..."
. ~/.profile;
IAM=`whoami`
# We have sourced .profile and the user turns out to be ora10g
if [ "${IAM}" = "ora10g" ]; then
echo "Oracle installation under user \"ora10g\" found."
echo "Only Oracle11g is applicable for adrci purging. Exiting."
break;
fi
fi
fi
# set the ORACLE_SID, and make sure LIBPATH is set
export ORACLE_SID=$x;
export LIBPATH=$ORACLE_HOME/lib
# PATH must be set explicitly, even after sourcing profile, to avoid mixing database home binaries
# and Oracle Agent home binaries.
export PATH=$ORACLE_HOME/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/home/ora11g/bin:/usr/bin/X11:/sbin:/usr/java131/bin:/usr/vac/bin:/usr/local/bin;
# create an executable file on the target server
echo "adrci exec=\"spool /home/ora11g/get_adr_home.lst;show homes;quit\"" > /home/ora11g/find_adr_home.sh;
chmod 755 /home/ora11g/find_adr_home.sh;
# execute the file
/home/ora11g/find_adr_home.sh;
# grep for the ADR home in the log file produced by the execution above. Look for rdbms home
# Then set the ADR_HOME to the path reflecting the rdbms instance
export ADR_HOME=`cat /home/ora11g/get_adr_home.lst | grep rdbms | grep $ORACLE_SID`;;
esac
# Show the current settings (for logging purposes)
echo "ORACLE_SID is " $ORACLE_SID
echo "ORACLE_HOME is: " $ORACLE_HOME
echo "PATH: " $PATH
echo "LIBPATH: " $LIBPATH
echo "adrci located in " `which adrci`
echo "ADR HOME selected: " $ADR_HOME
# finally, execute the set control statements in adrci against the rdbms home
adrci exec="set home $ADR_HOME;show control;set control \(shortp_policy = 336\);set control\(longp_policy=2160\);show control;quit"
done
exit 0