Thursday, November 5, 2020

How to change the control_files parameter used in the spfile

Mind your syntax when you are changing certain multi-value parameters directly in the spfile. Oracle will tell you that the syntax is accepted, but it will give you an error later. The following example is from an attempt to change the control_files parameter.

Won't work:
SQL> alter system set control_files='/oradata/proddb01/control01.ctl, /fra/proddb01/control02.ctl' scope=spfile;

System altered.
During the mount stage, this will give you the following message in the database's alert log:
ORA-205 signalled during: ALTER DATABASE   MOUNT...
ORA-00202: control file: /oradata/proddb01/control01.ctl, /fra/proddb01/control02.ctl
ORA-27037: cannot obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
Works:
SQL> alter system set control_files='/oradata/proddb01/control01.ctl','/fra/proddb01/control02.ctl' scope=spfile;

System altered.
You must enclose each value in its own pair of single quotes (apostrophes).
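Before restarting the instance, you can verify what the spfile actually contains. A simple check, using the paths from the example above, is to query v$spparameter, which returns one row per value for list parameters such as control_files:

-- one row per control file should be returned
select name, ordinal, value
from   v$spparameter
where  name = 'control_files';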

Wednesday, October 21, 2020

How to transfer SQL Profiles from one database to another

Following up on part 1 of my post, How to transfer SQL Plan Baselines from one database to another, I will now show you the steps to transfer a SQL Profile in the same manner.
For SQL Profiles, you depend on the package dbms_sqltune.
I am connecting to the databases as a privileged user called dbadmin throughout this post.

1. First, find the name of the SQL Profile you are about to transfer:
select * from dba_sql_profiles order by created;
The query above will return the SQL Profiles in the order in which they were created. Notice the name of the relevant one, as this will be used as an argument in the procedures later.
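If the database contains many profiles, a narrower variant of the query may be easier to work with; the 7-day filter below is just an example:

select name, created, status, category
from   dba_sql_profiles
where  created > sysdate - 7
order  by created;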

2. On the source database, create a staging table for SQL Profiles:
vi 2.cre_SQLProfile_stgtab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (
    table_name  => 'MYSTAGETAB_SQLPROF'
,   schema_name => 'DBADMIN'
);
END;
/
exit

Execute the file:
sqlplus /nolog @2.cre_SQLProfile_stgtab.sql
3. Pack your SQL Profile into your staging table

Create the file 3.pack_SQLProfile_stgtab.sql:
vi 3.pack_SQLProfile_stgtab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

BEGIN
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (
    profile_name         => 'SYS_SQLPROF_014fb0ca24980001'
,   staging_table_name   => 'MYSTAGETAB_SQLPROF'
,   staging_schema_owner => 'DBADMIN'
);
END;
/
exit
Run the file:
sqlplus /nolog @3.pack_SQLProfile_stgtab.sql
Let's verify that the staging table now has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB_SQLPROF;
4. Create the export parameter file and export the table:

vi 4.expdp_SQLPROFILE_stgtab.par

Add the following:

userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=expdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=exp_MYSTAGETAB_SQLPROF
directory=DP


Make sure the directory DP exists in the database. If not, create it with
create directory DP as '/exports/';
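If you are unsure whether the directory already exists and where it points, you can check it first:

select directory_name, directory_path
from   dba_directories
where  directory_name = 'DP';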
Then export the table:
expdp parfile=4.expdp_SQLPROFILE_stgtab.par
Log on to the destination server.

5. Import the staging table in the destination database

Create an import parameter file:
vi 5.impdp_SQLProfile.par
Add the following:
userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=impdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=imp_MYSTAGETAB_SQLPROF
directory=DP
Run the import:
impdp parfile=5.impdp_SQLProfile.par
6. Unpack the information in the staging table

Create the file for unpacking the SQL Profile into the local SMB:
vi 6.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
BEGIN
   DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
     replace            => true
,    staging_table_name => 'MYSTAGETAB_SQLPROF'
,    profile_name => 'SYS_SQLPROF_014fb0ca24980001'
);
END;
/
exit
Execute the file:
sqlplus /nolog @6.unpack.sql
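Before checking the optimizer, it may be worth confirming that the profile is actually present in the destination database. A quick check, using the profile name from this example, could be:

connect dbadmin
select name, created, status, type
from   dba_sql_profiles
where  name = 'SYS_SQLPROF_014fb0ca24980001';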

Let's verify that the optimizer has actually created a new execution plan based on the SQL Profile:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN "AdaptivePlan",
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
Output:
last active       SQL_ID         PLAN_HASH_VALUE  BASELINE                        SQL_PROFILE                   AdaptivePlan  Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
21.10.2020 13:19  5cbuqgt5m5961  1355560190                                                                                   1          20850937263   1488        20851           14
21.10.2020 12:55  5cbuqgt5m5961  1355560190                                                                                   0          112190741129  27134       112191          4,1
21.10.2020 14:47  5cbuqgt5m5961  3389950398       SQL_PLAN_ds58srfg86uf4f5a21fb1  SYS_SQLPROF_014fb0ca24980001                3          116651        302         0               0
After the SQL Profile was created, the performance finally stabilized.
The documentation for the dbms_sqltune package can be found here.

How to transfer SQL Plan baselines from one database to another

A customer called with a request to transfer a specific SQL plan from production to test. The query was found in the shared pool:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN "AdaptivePlan",
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
last active       SQL_ID         PLAN_HASH_VALUE  BASELINE                        SQL_PROFILE                   AdaptivePlan  Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
21.10.2020 13:34  5cbuqgt5m5961  3389950398       SQL_PLAN_ds58srfg86uf4f5a21fb1  SYS_SQLPROF_014fb0ca24980001                1          210182320     1402169     210             0

In the test database, the same query does not have a plan in the SMB, nor does it have a SQL Profile. The same query as above gave me this result in the test database:
last active       SQL_ID         PLAN_HASH_VALUE  BASELINE  SQL_PROFILE  AdaptivePlan  Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
21.10.2020 10:47  5cbuqgt5m5961  1355560190                                            0          110508458900  27032       110508          4,1
We see that the query needs a SQL Plan Baseline and a SQL Profile to perform well.
Oracle has a way to export the metadata needed to stabilize the query plan and import it into another database.
When exporting SQL Plan Baselines, you use the package dbms_spm.

Here is what I did to transfer the SQL Plan from one database to another.
I am connecting to the databases as a privileged user called dbadmin throughout this post. In part two of this post, I will show you how I transferred the SQL Profile from one database to another.

1. On the source database, create a staging table for SQL Plan Baselines:
vi 1.cre_stagetab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB');
END;
/
exit
Execute the file:
sqlplus /nolog @1.cre_stagetab.sql
Still in the source database, check who the owner of the baseline is:
SELECT plan_name,
         sql_handle,
         creator,
         origin,
         TO_CHAR (created, 'dd.mm.yyyy hh24:mi') "created",
         TO_CHAR (last_executed, 'dd.mm.yyyy hh24:mi') "lastexecuted",
         enabled,
         accepted,
         fixed,
         autopurge,
         module,
         ROUND (ELAPSED_TIME / 1000000) "duration in seconds",
         cpu_time,
         buffer_gets,
         disk_reads
    FROM DBA_SQL_PLAN_BASELINES
where plan_name='SQL_PLAN_ds58srfg86uf4f5a21fb1';
The output shows that the owner is SYS:
PLAN_NAME            SQL_PLAN_ds58srfg86uf4f5a21fb1
SQL_HANDLE           SQL_dc1518bb9e8369c4
CREATOR              SYS
ORIGIN               MANUAL-LOAD-FROM-STS
created              30.03.2020 15:13
lastexecuted         18.10.2020 03:13
ENABLED              YES
ACCEPTED             YES
FIXED                YES
AUTOPURGE            YES
MODULE               JDBC Thin Client
duration in seconds  8
CPU_TIME             3897100
BUFFER_GETS          597270
DISK_READS           715

2. Create the file 2.pack_stagetab.sql:
vi 2.pack_stagetab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.PACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
,   enabled    => 'yes'
,   creator    => 'SYS'
,   plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
END;
/
exit
Run the file:
sqlplus /nolog @2.pack_stagetab.sql
Let's verify that the staging table has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB;
3. Create an export parameter file:
vi 3.expdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=expdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=exp_MYSTAGETAB
directory=DP
Run the export:
expdp parfile=3.expdp.par
When the export is done, transfer the file to the destination server using ssh.
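A minimal scp example, assuming the dump file was written to /exports and the same path exists on the destination server (adjust host name and paths to your environment), could be:

scp /exports/MYSTAGETAB.dmp dbadmin@destinationserver:/exports/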
Then log on to the destination server.

4. Create an import parameter file:
vi 4.impdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=impdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=imp_MYSTAGETAB
directory=DP
Run the import:
impdp parfile=4.impdp.par
5. Create the file for unpacking the SQL Plans to the local SMB:
vi 5.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.UNPACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
    ,plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
  DBMS_OUTPUT.put_line('Number of plans loaded: ' || v_plan_cnt);
END;
/
exit
Run the file:
sqlplus /nolog @5.unpack.sql
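It can also be useful to confirm that the plan actually landed in the destination SMB. A simple check, using the plan name from this example:

connect dbadmin
select plan_name, sql_handle, origin, enabled, accepted,
       to_char(created, 'dd.mm.yyyy hh24:mi') "created"
from   dba_sql_plan_baselines
where  plan_name = 'SQL_PLAN_ds58srfg86uf4f5a21fb1';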
You should now be able to see a different execution plan when executing the query at the top of this article:
last active       SQL_ID         PLAN_HASH_VALUE  BASELINE  SQL_PROFILE  AdaptivePlan  Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
21.10.2020 13:19  5cbuqgt5m5961  1355560190                                            1          20850937263   1488        20851           14
21.10.2020 12:55  5cbuqgt5m5961  1355560190                                            0          112190741129  27134       112191          4,1
The performance actually got worse! The reason is that we are still missing the SQL Profile. I will transfer the SQL Profile to the same database in the next post.

Thursday, October 15, 2020

How to select a specific line number using awk

The file clonetext.txt contains:
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
                        

You would like to get the name of the volume, which is the 4th field on the 3rd line of the file. When grepping for the keyword "Clone", all lines are returned:
 cat clonetext.txt | grep Clone
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
Printing the 4th field with awk helps, but you still get output for every matching line, not just the single value you're interested in:
oracle@oric-db01:[mydb01]# cat clonetext.txt | grep Clone | awk -F' ' '{print $4}'

Name:
myvolume_mirror1
Adding a condition on awk's built-in NR variable (the current record number) solves the problem:
cat clonetext.txt | grep Clone | awk -F' ' 'NR==3 {print $4}'
myvolume_mirror1
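As a side note, grep isn't strictly needed here; awk can do both the filtering and the line selection on its own. A small variant that counts the matching lines inside awk and prints the 4th field of the 3rd match:

awk '/Clone/ { n++; if (n == 3) print $4 }' clonetext.txt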

Monday, October 12, 2020

Different ways to check your PATH variable on a Windows client

To check the Windows registry entry for the value of the PATH variable, open regedit by going to the Windows menu and typing "regedit", then navigate to
HKEY_CURRENT_USER\Environment
If you are using Windows PowerShell, use
Get-ItemProperty -Path "HKCU:\Environment"
In the Windows command-line environment, use
echo %PATH%
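In PowerShell you can also print the PATH of the current session directly; splitting on the semicolon just makes the output easier to read:

$env:Path -split ';'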

Thursday, September 3, 2020

Potential solution to dgmgrl error ORA-16665: time out waiting for the result from a member

After I added a terminal standby database to an existing configuration, the Data Guard Broker seemed to have trouble communicating with the new member. The output from "show configuration" showed the following:
DGMGRL> show configuration;

Configuration - DB01

  Protection Mode: MaxPerformance
  Members:
  DB01      - Primary database
    DB01_STB  - Physical standby database
      DB01_TSTB - Physical standby database (receiving current redo)
        Error: ORA-16665: time out waiting for the result from a member

    DB01_RO   - Physical standby database
When looking at the details by using
show database verbose "DB01_TSTB"
the entire operation took a very long time, and at the end the following message was displayed:
Database Status:
DGM-17016: failed to retrieve status for database "DB01_TSTB"
ORA-16665: time out waiting for the result from a member
The broker log file showed:
09/02/2020 15:08:52
Data Guard Broker Status Summary:
  Type                        Name                            Severity  Status
  Configuration               DB01                            Warning  ORA-16607
  Primary Database            DB01                            Success  ORA-0
  Physical Standby Database   DB01_STB                        Success  ORA-0
  Physical Standby Database   DB01_RO                         Success  ORA-0
  Physical Standby Database   DB01_TSTB                       Error  ORA-16665
The root cause here was firewalls. The terminal standby database could not reach the primary database. Although the terminal standby database isn't set up to receive redo data from the primary database directly, in a broker configuration all members must be able to communicate with each other. A good tool for troubleshooting port and firewall issues is nmap. I installed it on the terminal standby server and issued:
[root@db04_server ~]# nmap -n -p 1511 db01_sever.oric.no

Starting Nmap 6.40 ( http://nmap.org ) at 2020-09-02 14:23 CEST
Nmap scan report for db01_sever.oric.no (xxx.xxx.xxx.xxx)
Host is up (0.016s latency).
PORT     STATE    SERVICE
1511/tcp filtered 3l-l1

Nmap done: 1 IP address (1 host up) scanned in 0.49 seconds
A filtered port means that it is not possible to determine whether the port is open or closed, most often due to firewalls along the way. Further checks in the firewall log files showed
action=Drop service=1511 dst=xxx.xxx.xxx.xxx scr=yyy.yyy.yyy.yyy
where xxx.xxx.xxx.xxx matched the IP address of the primary server, while yyy.yyy.yyy.yyy matched the IP address of the terminal standby server, confirming that the standby's traffic to the primary was being dropped. The network admin opened the port, and the ORA-16665 error immediately disappeared from the dgmgrl output.
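If nmap is not installed, a rough alternative (depending on which nc variant is available on the server) is to probe the port with nc; the host and port below are the ones from this example:

nc -zv -w 3 db01_sever.oric.no 1511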

Monday, August 31, 2020

How to zip files in the current directory

In its simplest form, to zip a number of similarly named files, use this syntax:
zip client_sql_trace client*.trc

where the resulting zip file will be called "client_sql_trace.zip" and will contain all files in the current directory matching client*.trc.
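To confirm what actually ended up in the archive, list its contents afterwards:

unzip -l client_sql_trace.zip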