Wednesday, November 8, 2017

How to use ROWTYPE to fetch entire rows for insert into an interim table

Here is how I used a PL/SQL record to insert rows from a table into an interim table, using parallel slaves and direct path:
alter session force parallel dml;
alter session force parallel query;

DECLARE
  CURSOR document_cur IS
    SELECT *
    FROM document
    WHERE trunc(created_date) BETWEEN TO_DATE('01.01.2017','dd.mm.yyyy') AND TRUNC(SYSDATE)
    FETCH FIRST 100000 ROWS ONLY;

   -- declare a cursor-based record; since the cursor selects all columns,
   -- this is equivalent to the table-based record document%ROWTYPE
   v_document_rec document_cur%ROWTYPE;
BEGIN
   OPEN document_cur;
   LOOP
   -- fetch the content of the cursor into the record
   FETCH document_cur into v_document_rec;
   EXIT WHEN document_cur%NOTFOUND;
   INSERT /*+ APPEND PARALLEL(8) */ INTO document_interim VALUES( v_document_rec.ID,
                                                 v_document_rec.document_id,
                                                 v_document_rec.document_name,
                                                 v_document_rec.created_date,
                                                 v_document_rec.security_level,
                                                 v_document_rec.content );
   END LOOP;
   COMMIT;
   CLOSE document_cur;
END;
/
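
If the row volume grows, the row-by-row fetch above can be replaced with a bulk fetch, which cuts down on context switches between the PL/SQL and SQL engines. Here is a minimal sketch of the same idea using BULK COLLECT and FORALL; it assumes document_interim has the same column list as document, and the LIMIT of 10000 is an arbitrary choice:

DECLARE
  CURSOR document_cur IS
    SELECT *
    FROM document
    WHERE trunc(created_date) BETWEEN TO_DATE('01.01.2017','dd.mm.yyyy') AND TRUNC(SYSDATE)
    FETCH FIRST 100000 ROWS ONLY;

  -- a collection type based on the cursor's record structure
  TYPE document_tab_t IS TABLE OF document_cur%ROWTYPE;
  v_documents document_tab_t;
BEGIN
  OPEN document_cur;
  LOOP
    -- fetch up to 10000 rows at a time into the collection
    FETCH document_cur BULK COLLECT INTO v_documents LIMIT 10000;
    EXIT WHEN v_documents.COUNT = 0;

    -- insert the whole batch in a single round-trip to the SQL engine
    FORALL i IN 1 .. v_documents.COUNT
      INSERT INTO document_interim VALUES v_documents(i);
  END LOOP;
  COMMIT;
  CLOSE document_cur;
END;
/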

Friday, November 3, 2017

What is the difference between a bind-sensitive and bind-aware cursor?

The terms "bind-sensitive" and "bind-aware" are used when explaining Oracles Adaptive Cursor Sharing (ACS) feature.

Adaptive cursors sharing makes it possible for a single statement to use different execution plans, depending on the values of its bind variables.

Oracle explains the difference between bind-sensitive and bind-aware in their documentation like this:


"A bind-sensitive cursor is a cursor whose optimal plan may depend on the value of a bind variable."

and

"A bind-aware cursor is a bind-sensitive cursor that is eligible to use different plans for different bind values."


In other words, when a bind-sensitive cursor has been executed, the database saves the execution statistics for the new bind values and compares them with the statistics from previous executions.

If (and only if) there is a significant change, Oracle marks the cursor as bind-aware.
During future executions of the cursor, the CBO will generate a plan based on the bind values and their cardinality estimates.
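
You can check how the optimizer has classified a particular cursor by looking at the IS_BIND_SENSITIVE and IS_BIND_AWARE columns in V$SQL. A small sketch, where &sql_id is just a placeholder for the SQL ID you are investigating:

select sql_id,
       child_number,
       is_bind_sensitive,
       is_bind_aware,
       is_shareable,
       plan_hash_value,
       executions
from   v$sql
where  sql_id = '&sql_id';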

Query to search for details about a specific SQL ID in the cursor cache


I have seen plenty of examples of Oracle 12cR1 selecting a poor execution plan over a good one.
This is a recent example from one of my databases. Notice that there are two different plans for the same SQL ID: one that takes a long time per execution, and a good plan that executes very quickly:

SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN,
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID = 'fczwz45f4d46q'
ORDER BY LAST_ACTIVE_TIME DESC;

Result:

Last active       SQL_ID         PLAN_HASH_VALUE  BASELINE  SQL_PROFILE                   Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
16.08.2017 10:57  fczwz45f4d46q  2259800413                 SYS_SQLPROF_0156739adc130001  0          350642099493  311         350642          1127,5
16.08.2017 01:39  fczwz45f4d46q  927328071                  SYS_SQLPROF_0156739adc130001  9          2675614671    26048       2676            0,1

In this case, my goal is to create a fixed baseline for the execution plan with hash value 927328071.
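
One way to do this, assuming the desired plan is still present in the cursor cache, is to load that specific plan with dbms_spm and mark it as fixed. A sketch:

VARIABLE cnt NUMBER
EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE( -
                    sql_id          => 'fczwz45f4d46q', -
                    plan_hash_value => 927328071, -
                    fixed           => 'YES');
print :cnt

The function returns the number of plans loaded.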

Thursday, November 2, 2017

How to bulk load execution plans for a statement into the SMB


Short background:
A customer was complaining about long execution times for a specific SQL statement.

Checking in the database:
select distinct sql_id,plan_hash_value, child_number,ROUND(ELAPSED_TIME/1000000) "duration in seconds"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and ROUND(ELAPSED_TIME/1000000) > 1000
order by 4 desc
;
These were the queries with the longest elapsed times:
SQL_ID         PLAN_HASH_VALUE  CHILD_NUMBER  duration in seconds
cr43vm86rp9cy  3790053540       0             2387
fhtdk8ms42z0s  3790053540       0             2255
abbv8g54zzhjf  3790053540       0             1424
cragwkcwwvkrq  3790053540       0             1106

Do these SQL IDs have any alternative plans?
select distinct sql_id,count(*) "num plans" 
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
group by sql_id;
Yes they do:
SQL_ID         num plans
cr43vm86rp9cy  2
abbv8g54zzhjf  3
fhtdk8ms42z0s  3
cragwkcwwvkrq  3


Looking more closely at these 4 specific SQL IDs, it quickly became apparent that the queries all had alternative plans generated for them, and that plan hash value 133974613 gave the lowest execution times for all 4 samples:
select distinct sql_id,plan_hash_value, child_number,ROUND(ELAPSED_TIME/1000000) "duration in seconds", LPAD(is_resolved_adaptive_plan,15,' ') "adaptive?"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
order by 1,2
;
SQL_ID         PLAN_HASH_VALUE  CHILD_NUMBER  duration in seconds
abbv8g54zzhjf  133974613        1             1
abbv8g54zzhjf  133974613        2             16
abbv8g54zzhjf  3790053540       0             1424
cr43vm86rp9cy  133974613        1             15
cr43vm86rp9cy  3790053540       0             2387
cragwkcwwvkrq  133974613        1             1
cragwkcwwvkrq  133974613        2             14
cragwkcwwvkrq  3790053540       0             1106
fhtdk8ms42z0s  133974613        1             1
fhtdk8ms42z0s  133974613        2             19
fhtdk8ms42z0s  3790053540       0             2255

We can also confirm this by looking at the average duration for each plan:
select sql_id, plan_hash_value, avg( ROUND(ELAPSED_TIME/1000000) ) "avg duration in seconds"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
group by sql_id, plan_hash_value
order by 3;

Result:

SQL_ID         PLAN_HASH_VALUE  avg duration in seconds
cragwkcwwvkrq  133974613        7,5
abbv8g54zzhjf  133974613        8,5
fhtdk8ms42z0s  133974613        10
cr43vm86rp9cy  133974613        15
cragwkcwwvkrq  3790053540       1106
abbv8g54zzhjf  3790053540       1424
fhtdk8ms42z0s  3790053540       2255
cr43vm86rp9cy  3790053540       2387

We can firmly establish that one of the plans, the one with hash value 133974613, is much more efficient than the other.

In fact, I found that almost all of the SQL IDs that had executed this query would be better off using plan hash value 133974613.


So let's load them into the SQL Plan Management Base (SMB).

First, create an empty SQL Tuning Set (STS):
exec dbms_sqltune.create_sqlset(sqlset_name=>'SD16659');

Populate the STS. Since almost all of the queries would be better off using this particular plan, we can use the plan hash value as the only criterion in the basic_filter parameter:

DECLARE
  my_cur  dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN my_cur FOR
     SELECT VALUE(x)
     FROM   TABLE( dbms_sqltune.select_cursor_cache(basic_filter=>'plan_hash_value = ''133974613''')) x;

     dbms_sqltune.load_sqlset(sqlset_name => 'SD16659', populate_cursor => my_cur);
END;
/

From the view dba_sqlset, we can see that the previous code loaded the STS with 177 statements:

select id, con_dbid, name, owner, created, statement_count
from dba_sqlset 
where name = 'SD16659';

ID  CON_DBID    NAME     OWNER  CREATED              STATEMENT_COUNT
4   2788575331  SD16659  SYS    02.11.2017 12:16:04  177

Some more information about each statement in the STS can be found in the view DBA_SQLSET_STATEMENTS (fetching only the first 3 rows as an example):

select sql_id,plan_timestamp,parsing_schema_name,plan_hash_value,round(elapsed_time/1000000) "duration in seconds",cpu_time,buffer_gets,disk_reads,executions
from   dba_sqlset_statements
where  sqlset_name = 'SD16659'
fetch first 3 rows only;

SQL_ID         PLAN_TIMESTAMP       PARSING_SCHEMA_NAME  PLAN_HASH_VALUE  duration in seconds  CPU_TIME  BUFFER_GETS  DISK_READS  EXECUTIONS
fza8j57a8n779  01.11.2017 23:25:45  MYUSER               133974613        4                    3568458   62313        126         6
2srumd4vbs8j2  01.11.2017 23:51:21  MYUSER               133974613        2                    1925707   17561        10          2
5ptw9m80y4asg                       MYUSER               133974613        4                    3550461   167656       9           18

Now we can manually add them to the database's SQL Plan Management Base.

VARIABLE cnt NUMBER
EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( -
                    sqlset_name=>'SD16659', sqlset_owner=>'SYS');
print :cnt;
When executing the previous code, 100 plans were loaded into the SMB.
Details about the content of your SMB can be found by querying the view DBA_SQL_PLAN_BASELINES.

For example:
SELECT plan_name,
         sql_handle,
         creator,
         origin,
         TO_CHAR (created, 'dd.mm.yyyy hh24:mi') "created",
         TO_CHAR (last_executed, 'dd.mm.yyyy hh24:mi') "lastexecuted",
         enabled,
         accepted,
         fixed,
         autopurge,
         module,
         ROUND (ELAPSED_TIME / 1000000) "duration in seconds",
         cpu_time,
         buffer_gets,
         disk_reads
    FROM DBA_SQL_PLAN_BASELINES
   WHERE TRUNC (created) = TRUNC (SYSDATE)
ORDER BY created ASC
;

The link to Oracle Documentation about manually loading the SMB can be found here

The usage of dbms_spm can be found here

What does it mean to "evolve" a SQL baseline?

Simply put, "evolving" an SQL baseline means testing whether or not an unaccepted plan performs better than any current plan in the plan baseline.

You can use Cloud Control or the package dbms_spm to evolve a plan.
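
Using dbms_spm directly, a minimal sketch could look like this. The SQL handle below is just a placeholder; leaving out plan_name makes Oracle verify all unaccepted plans for that handle. (In 12c there is also a newer, task-based API around dbms_spm.create_evolve_task.)

SET LONG 1000000
VARIABLE report CLOB
EXECUTE :report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE( -
                       sql_handle => 'SQL_b45a5c3ad4c57a11', -
                       verify     => 'YES', -
                       commit     => 'YES');
print :report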


Read more in the Oracle Documentation

Wednesday, November 1, 2017

How to purge a cursor from the shared pool using dbms_shared_pool

Find the address and hash value for the particular SQL you're looking for:
select address, hash_value from v$sqlarea where sql_id like 'bjkd7pcyfns86';


ADDRESS           HASH_VALUE
0000000085CD2838  1021993222

Execute the purge procedure:
SQL> exec dbms_shared_pool.purge('0000000085CD2838, 1021993222','C');

Note that the name of the object is the combined values of address and hash value from v$sqlarea.
The value C is what you want to use for cursors.

Note that even if an SQL Plan Baseline is dropped from the SMB, it may still be used as long as the corresponding cursor remains in the shared pool.

If you have dropped a plan from your SMB that you would like the optimizer to ignore, you must purge the cursor that was previously using the now dropped SQL Plan Baseline from the shared pool, using the command above.
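
As a sketch, a complete cleanup could look like the following. The SQL handle and plan name are placeholders; the address and hash value are the ones found above:

-- drop the plan from the SMB
VARIABLE cnt NUMBER
EXECUTE :cnt := DBMS_SPM.DROP_SQL_PLAN_BASELINE( -
                    sql_handle => 'SQL_b45a5c3ad4c57a11', -
                    plan_name  => 'SQL_PLAN_b8qkw7ajqdw0j');
print :cnt

-- then purge the cursor that was using it from the shared pool
exec dbms_shared_pool.purge('0000000085CD2838, 1021993222','C');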

Wednesday, October 25, 2017

How to use the find command on Unix/Linux systems


Here is how to list the regular files (indicated by -type f) generated during the last 30 minutes in a particular directory on a Linux/Unix server:

find . -mmin -30 -type f -exec ls -l {} +

Another example, listing files in my audit_file_dest older than 10 days:
find /u01/admin/proddb01/adump/ -name '*.aud' -mtime +10 -exec ls -latrh {} \;

Count all files with extension ".trm" generated one day or less ago:
find . -name '*.trm' -mtime -1 -type f -exec ls -l {} + | wc -l

Count all files with extension ".trm" generated more than 30 days ago:
find . -name '*.trm' -mtime +30 -type f -exec ls -l {} + | wc -l

Find all files with a timestamp newer than April 15th, 16:14 in the afternoon:
export TIMENOW=`date +"%a %b %e %H:%M"`
echo $TIMENOW
Thu Apr 15 16:14
find . -type f -newermt "$TIMENOW" -exec ls -la {} \;

Find and delete can be done in different ways; the recommended way is to use the -delete option. In the command below, I want to find and delete all *.trc and *.trm files older than 10 days:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +10 -delete

Delete all *.trm files in the current directory generated earlier than 24 hrs ago:
find . -maxdepth 1 -name '*.trm' -mmin +1440 -type f -delete

If the -delete option is not available, you can use a different method:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +10 -exec rm -f {} \;

Or, using xargs, finding files older than 2 days:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +2 -print0 | xargs -I {} -0 rm -v "{}"

Pipe to wc to get the number of files deleted:

find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +2 -print0 | xargs -I {} -0 rm -v "{}" | wc -l


Some good external sources:

How To Format Date For Display or Use In a Shell Script
Linux / Unix: Find And Remove Files With One Command On Fly
Linux Find Command with Practical Examples