Wednesday, October 21, 2020

How to transfer SQL Profiles from one database to another

Following up on part 1 of this post series, How to transfer SQL Plan Baselines from one database to another, I will now show you the steps to transfer a SQL Profile in the same manner.
For SQL Profiles, you use the package dbms_sqltune.
I am connecting to the databases as a privileged user called dbadmin throughout this post.

1. First, find the name of the SQL Profile you are about to transfer:
select * from dba_sql_profiles order by created;
The query above will return the SQL Profiles in the order in which they were created. Notice the name of the relevant one, as this will be used as an argument in the procedures later.
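If you already know part of the statement text, a more targeted query can help narrow down the list. This is only a sketch; the LIKE pattern below is a placeholder that you need to adjust to your own SQL:
select name, status, created
from   dba_sql_profiles
where  sql_text like '%your_table_name%'
order  by created desc;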

2. On the source database, create a staging table for SQL Profiles:
vi 2.cre_SQLProfile_stgtab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (
    table_name  => 'MYSTAGETAB_SQLPROF'
,   schema_name => 'DBADMIN'
);
END;
/
exit

Execute the file:
sqlplus /nolog @2.cre_SQLProfile_stgtab.sql
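As an optional sanity check, you can confirm (while connected as dbadmin) that the staging table now exists:
select table_name
from   user_tables
where  table_name = 'MYSTAGETAB_SQLPROF';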
3. Pack your SQL Profile into your staging table

Create the file 3.pack_SQLProfile_stgtab.sql:
vi 3.pack_SQLProfile_stgtab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

BEGIN
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (
    profile_name         => 'SYS_SQLPROF_014fb0ca24980001'
,   staging_table_name   => 'MYSTAGETAB_SQLPROF'
,   staging_schema_owner => 'DBADMIN'
);
END;
/
exit
Run the file:
sqlplus /nolog @3.pack_SQLProfile_stgtab.sql
Let's verify that the staging table now has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB_SQLPROF;
4. Create the export parameter file and export the table:

vi 4.expdp_SQLProfile_stgtab.par

Add the following:

userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=expdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=exp_MYSTAGETAB_SQLPROF
directory=DP


Make sure the directory DP exists in the database. If not, create it with
create directory DP as '/exports/';
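If you are unsure whether the directory object already exists, and where it points, you can check dba_directories first:
select directory_name, directory_path
from   dba_directories
where  directory_name = 'DP';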
Then export the table:
expdp parfile=4.expdp_SQLProfile_stgtab.par
Log on to the destination server.

5. Import the staging table in the destination database

Create an import parameter file:
vi 5.impdp_SQLProfile.par
Add the following:
userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=impdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=imp_MYSTAGETAB_SQLPROF
directory=DP
Run the import:
impdp parfile=5.impdp_SQLProfile.par
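Once the import has finished, it is worth checking that the staging table arrived with the same number of rows as on the source:
connect dbadmin
select count(*)
from  MYSTAGETAB_SQLPROF;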
6. Unpack the information in the staging table

Create the file for unpacking the SQL Profile into the destination database:
vi 6.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
BEGIN
   DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
     replace            => true
,    staging_table_name => 'MYSTAGETAB_SQLPROF'
,    profile_name => 'SYS_SQLPROF_014fb0ca24980001'
);
END;
/
exit
Execute the file:
sqlplus /nolog @6.unpack.sql
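Before looking at the shared pool, you can confirm that the profile is now registered in the destination database:
select name, status, created
from   dba_sql_profiles
where  name = 'SYS_SQLPROF_014fb0ca24980001';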

Let's verify that the optimizer has actually created a new execution plan based on the SQL Profile:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN "AdaptivePlan",
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
Output:
last active | SQL_ID | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:19 | 5cbuqgt5m5961 | 1355560190 | | | | 1 | 20850937263 | 1488 | 20851 | 14
21.10.2020 12:55 | 5cbuqgt5m5961 | 1355560190 | | | | 0 | 112190741129 | 27134 | 112191 | 4,1
21.10.2020 14:47 | 5cbuqgt5m5961 | 3389950398 | SQL_PLAN_ds58srfg86uf4f5a21fb1 | SYS_SQLPROF_014fb0ca24980001 | | 3 | 116651 | 302 | 0 | 0
After the SQL Profile was imported, the performance finally stabilized.
The documentation for the dbms_sqltune package can be found here.

How to transfer SQL Plan baselines from one database to another

A customer called with a request to transfer a specific SQL plan from production to test. The query was found in the shared pool:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN "AdaptivePlan",
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
last active | SQL_ID | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:34 | 5cbuqgt5m5961 | 3389950398 | SQL_PLAN_ds58srfg86uf4f5a21fb1 | SYS_SQLPROF_014fb0ca24980001 | | 1 | 210182320 | 1402169 | 210 | 0

In the test database, the same query does not have a plan in the SMB, nor does it have a SQL Profile. The same query as above gave me this result in the test database:
last active | SQL_ID | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 10:47 | 5cbuqgt5m5961 | 1355560190 | | | | 0 | 110508458900 | 27032 | 110508 | 4,1
We see that the query needs a SQL Plan baseline and a SQL Profile to perform well.
Oracle has a way to export the metadata needed to stabilize the query plan and import it into another database.
When exporting SQL Plan Baselines, you use the package dbms_spm.

Here is what I did to transfer the SQL Plan from one database to another.
I am connecting to the databases as a privileged user called dbadmin throughout this post. In part two of this post, I will show you how I transferred the SQL Profile from one database to another.

1. On the source database, create a staging table for SQL Plan Baselines:
vi 1.cre_stagetab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB');
END;
/
exit
Execute the file:
sqlplus /nolog @1.cre_stagetab.sql
Still in the source database, check who the owner of the baseline is:
SELECT plan_name,
         sql_handle,
         creator,
         origin,
         TO_CHAR (created, 'dd.mm.yyyy hh24:mi') "created",
         TO_CHAR (last_executed, 'dd.mm.yyyy hh24:mi') "lastexecuted",
         enabled,
         accepted,
         fixed,
         autopurge,
         module,
         ROUND (ELAPSED_TIME / 1000000) "duration in seconds",
         cpu_time,
         buffer_gets,
         disk_reads
    FROM DBA_SQL_PLAN_BASELINES
where plan_name='SQL_PLAN_ds58srfg86uf4f5a21fb1';
The output shows that the owner is SYS:
PLAN_NAME            SQL_PLAN_ds58srfg86uf4f5a21fb1
SQL_HANDLE           SQL_dc1518bb9e8369c4
CREATOR              SYS
ORIGIN               MANUAL-LOAD-FROM-STS
created              30.03.2020 15:13
lastexecuted         18.10.2020 03:13
ENABLED              YES
ACCEPTED             YES
FIXED                YES
AUTOPURGE            YES
MODULE               JDBC Thin Client
duration in seconds  8
CPU_TIME             3897100
BUFFER_GETS          597270
DISK_READS           715

2. Create the file 2.pack_stagetab.sql:
vi 2.pack_stagetab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.PACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
,   enabled    => 'yes'
,   creator    => 'SYS'
,   plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
END;
/
exit
Run the file:
sqlplus /nolog @2.pack_stagetab.sql
Let's verify that the staging table has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB;
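If you want to see exactly which plan was packed, you can also query the staging table itself. Note that the staging table's column names may vary between versions; the plan name is typically stored in a column called obj_name, so treat the query below as a sketch and adjust it to your version:
select sql_handle, obj_name
from   MYSTAGETAB
where  obj_name = 'SQL_PLAN_ds58srfg86uf4f5a21fb1';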
3. Create an export parameter file:
vi 3.expdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=expdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=exp_MYSTAGETAB
directory=DP
Run the export:
expdp parfile=3.expdp.par
When the export is done, transfer the file to the destination server using ssh.
At this point, log on to the destination server.

4. Create an import parameter file:
vi 4.impdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=impdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=imp_MYSTAGETAB
directory=DP
Run the import:
impdp parfile=4.impdp.par
5. Create the file for unpacking the SQL Plans to the local SMB:
vi 5.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.UNPACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
    ,plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
  DBMS_OUTPUT.put_line('Number of plans loaded: ' || v_plan_cnt);
END;
/
exit
Run the file:
sqlplus /nolog @5.unpack.sql
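You can also confirm that the baseline is now present in the destination database:
select sql_handle, plan_name, enabled, accepted
from   dba_sql_plan_baselines
where  plan_name = 'SQL_PLAN_ds58srfg86uf4f5a21fb1';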
You should now be able to see a different execution plan when executing the query at the top of this article:
last active | SQL_ID | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:19 | 5cbuqgt5m5961 | 1355560190 | | | | 1 | 20850937263 | 1488 | 20851 | 14
21.10.2020 12:55 | 5cbuqgt5m5961 | 1355560190 | | | | 0 | 112190741129 | 27134 | 112191 | 4,1
The performance actually got worse! The reason is that we are still missing the SQL Profile. I will transfer the SQL Profile to the same server in the next post.

Thursday, October 15, 2020

How to select a specific line number using awk

The file clonetext.txt contains:
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
                        

You would like to get the name of the volume, which is the 4th word on the 3rd line of the file. When grepping for the keyword "Clone", all lines are returned:
 cat clonetext.txt | grep Clone
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
Adding awk to print the 4th column helps, but you still get multiple rows, not just the single one you're interested in:
oracle@oric-db01:[mydb01]# cat clonetext.txt | grep Clone | awk -F' ' '{print $4}'

Name:
myvolume_mirror1
Adding a condition on NR (awk's built-in record number variable) to your command solves the problem:
cat clonetext.txt | grep Clone | awk -F' ' 'NR==3 {print $4}'
myvolume_mirror1

Monday, October 12, 2020

Different ways to check your PATH variable on a Windows client

Check the Windows registry entry for the value of the PATH variable: open the regedit application by going to the Windows menu and typing "regedit", then navigate to
HKEY_CURRENT_USER\Environment
If you are using Windows PowerShell, use
Get-ItemProperty -Path "HKCU:\Environment"
In the Windows command-line environment, use
echo %PATH%