Monday, December 18, 2017

How to make use of the GATHER_PLAN_STATISTICS hint

Use the gather_plan_statistics hint to compare the optimizer's row estimates with the actual number of rows returned.

Notice the case: gather_plan_statistics. I had trouble getting the hint to work when I wrote it in capital letters as GATHER_PLAN_STATISTICS, so stick to lower case!

Use the hint in the SELECT part of your query, for example:
select /*+ gather_plan_statistics */ *
from mytable t1 join mytable t2....

set linesize 200
set pagesize 0
select * from table
(dbms_xplan.display_cursor(NULL,NULL,'TYPICAL ALLSTATS LAST'));
exit

Alternatively, after the query has run to completion, check the cursor cache for the details of your specific SQL ID, and use the display_cursor with the SQL ID explicitly set:
set linesize 200
set pagesize 0
select * from table
(dbms_xplan.display_cursor( 'dnyrjuumj8psq',1,'TYPICAL ALLSTATS LAST'));
exit

Either way will work and give you the following example output:

Plan hash value: 4147659309

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name    | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |         |      1 |        |       |  1862 (100)|          |   3012 |00:08:14.63 |     576K|    553K|       |       |          |
|   1 |  SORT ORDER BY                           |         |      1 |      2 |   294 |  1862   (1)| 00:00:01 |   3012 |00:08:14.63 |     576K|    553K|   372K|   372K|  330K (0)|
|*  2 |   FILTER                                 |         |      1 |        |       |            |          |   3012 |00:08:45.50 |     576K|    553K|       |       |          |
|   3 |    NESTED LOOPS                          |         |      1 |      2 |   294 |  1861   (1)| 00:00:01 |   3012 |00:08:45.50 |     576K|    553K|       |       |          |
|   4 |     NESTED LOOPS                         |         |      1 |      2 |   294 |  1861   (1)| 00:00:01 |   3012 |00:08:45.34 |     573K|    553K|       |       |          |
|   5 |      NESTED LOOPS                        |         |      1 |      2 |   270 |  1859   (1)| 00:00:01 |   3012 |00:08:45.31 |     573K|    553K|       |       |          |
|*  6 |       TABLE ACCESS BY INDEX ROWID BATCHED| T1      |      1 |      1 |    31 |     3   (0)| 00:00:01 |    439 |00:00:00.27 |      89 |     73 |       |       |          |
|*  7 |        INDEX RANGE SCAN                  | T1_IDX4 |      1 |      4 |       |     2   (0)| 00:00:01 |    440 |00:00:00.01 |       3 |      1 |       |       |          |
|*  8 |       TABLE ACCESS BY INDEX ROWID BATCHED| T2      |    439 |      2 |   208 |  1856   (1)| 00:00:01 |   3012 |00:08:54.15 |     573K|    553K|       |       |          |
|*  9 |        INDEX RANGE SCAN                  | T2_IDX4 |    439 |  23118 |       |    69   (0)| 00:00:01 |     12M|00:00:54.73 |   38436 |  37439 |       |       |          |
|* 10 |      INDEX UNIQUE SCAN                   | T3_IDX1 |   3012 |      1 |       |     0   (0)|          |   3012 |00:00:00.01 |       9 |      2 |       |       |          |
|  11 |     TABLE ACCESS BY INDEX ROWID          | T3      |   3012 |      1 |    12 |     1   (0)| 00:00:01 |   3012 |00:00:00.01 |    3012 |      2 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Notice the column "E-Rows". It deviates substantially from the A-Rows column, which shows the actual number of rows returned.

So the optimizer is clearly not able to make a good estimate of the number of rows.

Let's add a couple of statements that will give the optimizer better statistics. This particular database is not certified for any of the 12.1 adaptive tuning features, so I will have to feed the optimizer using other methods:

alter session set statistics_level='ALL';
alter session set optimizer_dynamic_sampling=11;

After running it a second time, the following plan is generated:

Plan hash value: 1509429641

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name   | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |        |      1 |        |       |   735K(100)|          |   3012 |00:09:44.25 |     572K|    551K|       |       |          |
|   1 |  SORT ORDER BY                           |        |      1 |   2986 |   428K|   735K  (1)| 00:00:29 |   3012 |00:09:44.25 |     572K|    551K|   372K|   372K|  330K (0)|
|*  2 |   FILTER                                 |        |      1 |        |       |            |          |   3012 |00:09:44.25 |     572K|    551K|       |       |          |
|*  3 |    HASH JOIN                             |        |      1 |   2986 |   428K|   735K  (1)| 00:00:29 |   3012 |00:09:44.25 |     572K|    551K|  1066K|  1066K|  954K (0)|
|   4 |     NESTED LOOPS                         |        |      1 |   2986 |   393K|   735K  (1)| 00:00:29 |   3012 |00:09:43.01 |     569K|    547K|       |       |          |
|   5 |      NESTED LOOPS                        |        |      1 |     10M|   393K|   735K  (1)| 00:00:29 |     12M|00:00:38.92 |   38488 |  37114 |       |       |          |
|*  6 |       TABLE ACCESS BY INDEX ROWID BATCHED| T1     |      1 |    439 | 13609 |    72   (0)| 00:00:01 |    439 |00:00:00.01 |      76 |      0 |       |       |          |
|*  7 |        INDEX RANGE SCAN                  | T1_IDX4|      1 |    440 |       |     2   (0)| 00:00:01 |    440 |00:00:00.01 |       3 |      0 |       |       |          |
|*  8 |       INDEX RANGE SCAN                   | T2_IDX4|    439 |  23118 |       |    69   (0)| 00:00:01 |     12M|00:00:28.47 |   38412 |  37114 |       |       |          |
|*  9 |      TABLE ACCESS BY INDEX ROWID         | T2     |     12M|      7 |   728 |  1856   (1)| 00:00:01 |   3012 |00:08:47.65 |     531K|    510K|       |       |          |
|  10 |     TABLE ACCESS FULL                    | T3     |      1 |  74866 |   877K|   572   (1)| 00:00:01 |  74866 |00:00:01.22 |    3022 |   3017 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Compare the same columns - E-Rows and A-Rows - and notice how much more accurately the optimizer now estimates the number of rows to be returned.
Notice also that the former of these two plans generates an index lookup on the T3 table, while the latter generates a full table scan. The FTS proved to be the more efficient choice, since the runtime dropped from 15 to 9 minutes.

NOTE: You can only get the results you expect when you actually execute the statement. If you try to add the hint along with the EXPLAIN PLAN FOR statement, you will see warnings like these in your output:
Note
-----
   - Warning: basic plan statistics not available. These are only collected when:
       * hint 'gather_plan_statistics' is used for the statement or
       * parameter 'statistics_level' is set to 'ALL', at session or system level

These will of course be shown even if you do set the statistics_level to ALL. As Mathew McPeak points out in an answer on stackoverflow.com:
The way you are using it, you are printing the plan of the last statement you explained, not the last statement you executed. And "explain" will not execute the query, so it will not benefit from a gather_plan_statistics hint.

How to use dbms_spm.load_plans_from_sqlset

To load a specific SQL ID with a specific plan hash value from an STS into your SQL Plan Management Base, use:

set serveroutput on
declare
   l_num_plans PLS_INTEGER;
begin
 l_num_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET (sqlset_name=> 'LongRunningSQL1',sqlset_owner=>'DBADMIN' ,basic_filter=>'sql_id=''83fhwyajathc3'' and plan_hash_value=3643960001 ');
 DBMS_OUTPUT.put_line('Number of plans loaded: ' || l_num_plans);
end;
/
exit
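
To verify the load, you can query DBA_SQL_PLAN_BASELINES afterwards; a minimal sketch that lists plans created within the last hour:

select sql_handle, plan_name, origin, enabled, accepted
from   dba_sql_plan_baselines
where  created > sysdate - 1/24
order by created desc;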

Documented here

Friday, December 15, 2017

A list of eBS users

To view the users that are installed as a part of an eBS installation, use the following query:

SELECT EBS.ORACLE_USERNAME,U.PROFILE 
FROM APPS.FND_ORACLE_USERID EBS JOIN DBA_USERS U
ON (EBS.ORACLE_USERNAME = U.USERNAME);

Tuesday, December 5, 2017

How to execute and capture an SQL statement in the SMB at the same time

An experienced co-worker showed me this neat trick to execute an SQL statement and capture its execution plan in the SQL Plan Management Base at the same time.

/* First, select the bind variables used in the SQL statement you're interested in, into the variable value_string.
Record them in a text file. You will be using them later
*/
SELECT DECODE (value_string, 'NULL,', ''''',', value_string)
  FROM (  SELECT DECODE (datatype_string,
                         'VARCHAR2(32)', '''' || value_string || ''',',
                         value_string || ',')
                    value_string
            FROM v$sql_bind_capture
           WHERE sql_id = '4d6sdzm63st1u' AND child_number = 1
        ORDER BY position);

/*
Get the SQL text including the bind usage (but not the bind variable values)
*/
SET SERVEROUTPUT ON

DECLARE
   l_sql      VARCHAR2 (32000);
   l_sql_id   VARCHAR2 (30) := '4d6sdzm63st1u';
BEGIN
   BEGIN
      SELECT sql_fulltext
        INTO l_sql
        FROM v$sql                                          
       WHERE sql_id = l_sql_id AND ROWNUM < 2;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         SELECT sql_text
           INTO l_sql
           FROM dba_hist_sqltext
          WHERE sql_id = l_sql_id AND ROWNUM < 2;
   END;

   DBMS_OUTPUT.put_line (l_sql);

/* Capture the baseline for this statement */ 
   EXECUTE IMMEDIATE
      'alter session set optimizer_capture_sql_plan_baselines=true';

/* Send the SQL including the bind variable values to the database */
   EXECUTE IMMEDIATE l_sql
      USING 5022000,
            9834,
            9822,
            9,
            4,
            6;
/* Turn the capture off again */
   EXECUTE IMMEDIATE
      'alter session set optimizer_capture_sql_plan_baselines=false';
END;
/
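
Afterwards, you can check that a baseline was actually captured; a minimal sketch listing the most recently auto-captured plans:

select sql_handle, plan_name, enabled, accepted, created
from   dba_sql_plan_baselines
where  origin = 'AUTO-CAPTURE'
order by created desc;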

Friday, December 1, 2017

How to match an exact string using grep

grep -w string

See it in use here

How to remove unwanted parameters from init.ora using a shell script

1. Create a "removal list" containing your unwanted parameters. For example, put the following into a text file called removal_list.txt, and save it in $ORACLE_HOME/dbs:

_b_tree_bitmap_plans
_fast_full_scan_enabled
_grant_secure_role
_like_with_bind_as_equality
_projection_pushdown

2. Create a shell script, called remove_params.sh, to loop through the lines in removal_list.txt:
for a in $(cat removal_list.txt ); do
 echo now testing $a
 sleep 1
 curr_param=`grep -w $a init${ORACLE_SID}.ora | awk -F '[=]' '{ print $1 }'`
 if [ -n "$curr_param" ]; then
  echo removing $curr_param
  sed -i "/${curr_param}/d" init${ORACLE_SID}.ora
  sleep 1
 else
  echo $a not found in parameter file. Skipping...
  sleep 1
 fi;
done
exit 0

It is important to use the -w flag with grep, as many Oracle initialization parameters have similar names; without it, the script could save multiple values in the curr_param variable.

3. Make your script executable:
chmod 755 remove_params.sh

4. Execute the script:
./remove_params.sh

Expected output:
now testing _b_tree_bitmap_plans
removing _b_tree_bitmap_plans
now testing _fast_full_scan_enabled
removing _fast_full_scan_enabled
now testing _grant_secure_role
_grant_secure_role not found in parameter file. Skipping...
now testing _like_with_bind_as_equality
removing _like_with_bind_as_equality
now testing _projection_pushdown
_projection_pushdown not found in parameter file. Skipping...

Thursday, November 16, 2017

Basic compression, advanced compression, HCC - an overview


Basic Compression:


Basic compression was released in Oracle 9iR2. It is part of the Enterprise Edition and comes at no extra licensing cost.

If you specify only the keyword COMPRESS, it is equivalent to specifying ROW STORE COMPRESS BASIC and enables basic table compression.

Basic Compression

CREATE TABLE SCOTT.test
(
id NUMBER,
fname VARCHAR2(32),
ename VARCHAR2(32),
added_date DATE
)
TABLESPACE USERS
COMPRESS;

The keyword COMPRESS is kept for backward compatibility, but the current preferred syntax is
ROW STORE COMPRESS
or
ROW STORE COMPRESS BASIC
Basic compression does not apply compression to DML operations after the initial load.
With basic compression, Oracle will attempt to compress data during direct-path INSERT operations, but not during conventional-path INSERT operations. Make sure you fulfill the requirements for direct-path inserts.
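
For example, a direct-path load into the table above could look like this (a minimal sketch; scott.test_stage is a hypothetical staging table):

INSERT /*+ APPEND */ INTO scott.test
SELECT id, fname, ename, added_date
FROM   scott.test_stage;

COMMIT;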

Advanced Compression:


Advanced compression was released in Oracle 11gR1 in 2007, and was called "OLTP Table Compression" at the time. You need to purchase the Advanced Compression Option from Oracle to use it.

When you enable table compression by specifying ROW STORE COMPRESS ADVANCED, you enable Advanced Row Compression. Oracle Database compresses data during all DML operations on the table. This form of compression is recommended for OLTP environments.

Advanced Compression

CREATE TABLE SCOTT.test
(
  id          NUMBER,
  fname       VARCHAR2(32),
  ename       VARCHAR2(32),
  added_date  DATE
)
TABLESPACE USERS
ROW STORE COMPRESS ADVANCED;

In earlier releases, Advanced Row Compression was called OLTP table compression and was enabled using COMPRESS FOR OLTP. This syntax is still supported for backward compatibility.
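
Advanced Row Compression can also be enabled on an existing table; note that only data written after the change is compressed (a minimal sketch, reusing the table above):

ALTER TABLE SCOTT.test ROW STORE COMPRESS ADVANCED;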


Hybrid Columnar Compression


For certain storage systems, Oracle has introduced Hybrid Columnar Compression, or HCC.


COLUMN STORE COMPRESS FOR QUERY is useful in data warehousing environments. Valid values are LOW and HIGH, with HIGH providing a higher compression ratio. The default is HIGH:

HCC Compression - non-dormant data

CREATE TABLE SCOTT.test
(
  id          NUMBER,
  fname       VARCHAR2(32),
  ename       VARCHAR2(32),
  added_date  DATE
)
TABLESPACE USERS
COLUMN STORE COMPRESS FOR QUERY HIGH | LOW;

COLUMN STORE COMPRESS FOR ARCHIVE uses higher compression ratios than COLUMN STORE COMPRESS FOR QUERY, and is useful for compressing data that will be stored for long periods of time. Valid values are LOW and HIGH, with HIGH providing the highest possible compression ratio. The default is LOW:

HCC Compression - dormant data

CREATE TABLE SCOTT.test
(
  id          NUMBER,
  fname       VARCHAR2(32),
  ename       VARCHAR2(32),
  added_date  DATE
)
TABLESPACE USERS
COLUMN STORE COMPRESS FOR ARCHIVE HIGH | LOW;

If you attempt to create a table with HCC on a non-supported storage system, you will receive the following error message:
ORA-64307:  Exadata Hybrid Columnar Compression is not supported for tablespaces on this storage type


Sources: Create Table

Oracle Advanced Compression With Oracle 12cR2 - excellent document

Wednesday, November 15, 2017

How to find the number of full table scans



From the Oracle Documentation:

The DBA_HIST_SEG_STAT view displays historical information about segment-level statistics. This view captures the top segments based on a set of criteria and captures information from V$SEGSTAT. The total value is the value of the statistics since instance startup. The delta value is the value of the statistics from the BEGIN_INTERVAL_TIME to the END_INTERVAL_TIME in the DBA_HIST_SNAPSHOT view.

 SELECT O.OBJECT_NAME,O.OWNER, SUM(S.TABLE_SCANS_TOTAL) "# of FTS",(SELECT NUM_ROWS FROM DBA_TABLES T WHERE T.TABLE_NAME = O.OBJECT_NAME) "num rows total"
 FROM DBA_HIST_SEG_STAT S JOIN DBA_OBJECTS O ON (S.OBJ# = O.DATA_OBJECT_ID)
 WHERE O.OWNER NOT IN ( 'SYS','DBSNMP','XDB')
 AND O.OBJECT_TYPE = 'TABLE'
 GROUP BY S.OBJ#,O.OBJECT_NAME,O.OWNER
 ORDER BY 4 DESC;


OBJECT_NAME  OWNER  # of FTS  num rows total
TABLE1       USER1      5702      1583895743
TABLE2       USER1        11       588921781
TABLE3       USER1      6195       241550246

As can be seen from above, the extreme number of rows in TABLE1, coupled with 5702 full table scans, should bring it to your attention as a candidate for partitioning!

How to purge objects from the recycle bin


sqlplus / as sysdba
purge dba_recyclebin; --> purges all recycle bin objects, globally
purge tablespace user_data user scott; --> purge only scott's objects from the user_data tablespace
purge tablespace user_data; --> purge all recycle bin objects from the user_data tablespace

conn scott/tiger
purge recyclebin; --> purges scott's recycle bin only

The next statements purge a specific object from the recycle bin. The first example refers directly to the system-generated name of the object; the second refers to the original name of the object.
purge table "BIN$W/n28BqBgRbgU64JAQoz3A==$0";
purge table emp; 

Tuesday, November 14, 2017

What is the Adaptive SQL Plan Management introduced in Oracle 12c?



By "Adaptive SQL Plan Management", Oracle is referring to a new autotask that is executed every night during the maintenance window, as a part of the "sql tuning advisor".

It will automatically evolve all non-accepted plans in the SQL Plan Management Base.
If the new plans improve the performance of the query, they will automatically be promoted to accepted plans.

In previous versions of Oracle, plans needed to be evolved manually by the DBA by running dbms_spm.evolve_sql_plan_baseline.

Is the sql tuning task enabled?
SELECT CLIENT_NAME, STATUS
FROM   DBA_AUTOTASK_CLIENT
WHERE  CLIENT_NAME = 'sql tuning advisor';


View details about a specific run:
SELECT OWNER,TASK_ID, TASK_NAME,DESCRIPTION,ADVISOR_NAME,EXECUTION_TYPE,STATUS,RECOMMENDATION_COUNT "num rec",EXECUTION_START,EXECUTION_END,LAST_EXECUTION,HOW_CREATED
FROM  DBA_ADVISOR_TASKS 
WHERE TASK_NAME = 'SYS_AUTO_SPM_EVOLVE_TASK';

OWNER  TASK_ID  TASK_NAME                 DESCRIPTION                ADVISOR_NAME        EXECUTION_TYPE  STATUS     num rec  EXECUTION_START      EXECUTION_END        LAST_EXECUTION  HOW_CREATED
SYS          2  SYS_AUTO_SPM_EVOLVE_TASK  Automatic SPM Evolve Task  SPM Evolve Advisor  SPM EVOLVE      COMPLETED        0  13.11.2017 23:00:06  13.11.2017 23:00:07  EXEC_20606      AUTO



If the parameter accept_plans is set to TRUE, it means plans will be automatically evolved:
SELECT PARAMETER_NAME,PARAMETER_VALUE
FROM DBA_ADVISOR_PARAMETERS 
WHERE TASK_NAME LIKE 'SYS_AUTO_SPM_EVOLVE_TASK'
AND PARAMETER_NAME='ACCEPT_PLANS';

PARAMETER_NAME  PARAMETER_VALUE
ACCEPT_PLANS    TRUE


If you want to view the results of such an automatic job, use the following code:

SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 100 LINESIZE 100
SELECT DBMS_SPM.report_evolve_task(task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', execution_name => 'EXEC_20606') AS output
FROM dual;

Wednesday, November 8, 2017

How to use ROWTYPE to fetch entire rows for insert into an interim table

Here is how I used a PL/SQL record to insert rows from a table into an interim table, using parallel slaves and direct path:
alter session force parallel dml;
alter session force parallel query;

DECLARE
  CURSOR document_cur IS
    SELECT *
    FROM document
    WHERE trunc(created_date) BETWEEN TO_DATE('01.01.2017','dd.mm.yyyy') AND TRUNC(SYSDATE)
    FETCH FIRST 100000 ROWS ONLY;

   -- declare a cursor-based record
   v_document_rec document_cur%ROWTYPE;

BEGIN
   OPEN document_cur;
   LOOP
   -- fetch the content of the cursor into the record
   FETCH document_cur into v_document_rec;
   EXIT WHEN document_cur%NOTFOUND;
   INSERT /*+ APPEND PARALLEL(8) */ INTO document_interim VALUES( v_document_rec.ID,
                                                 v_document_rec.document_id,
                                                 v_document_rec.document_name,
                                                 v_document_rec.created_date,
                                                 v_document_rec.security_level,
                                                 v_document_rec.content );
   END LOOP;
   COMMIT;
   CLOSE document_cur;
END;
/

Friday, November 3, 2017

What is the difference between a bind-sensitive and bind-aware cursor?

The terms "bind-sensitive" and "bind-aware" are used when explaining Oracle's Adaptive Cursor Sharing (ACS) feature.

Adaptive cursor sharing makes it possible for a single statement to use different execution plans, depending on the values of its bind variables.

Oracle explains the difference between bind-sensitive and bind-aware in their documentation like this:


"A bind-sensitive cursor is a cursor whose optimal plan may depend on the value of a bind variable."

and

"A bind-aware cursor is a bind-sensitive cursor that is eligible to use different plans for different bind values."


In other words, when a bind-sensitive cursor has been executed, the database saves the execution statistics for the new bind values and compares them with the statistics from previous executions.

If (and only if) there is a significant change, Oracle marks the cursor as bind-aware.
During future executions of the cursor, the CBO will generate a plan based on the bind values and their cardinality estimates.
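
You can check whether the cursors of a statement are bind-sensitive or bind-aware by querying V$SQL; a minimal sketch (substitute your own SQL ID):

select sql_id, child_number, is_bind_sensitive, is_bind_aware, is_shareable
from   v$sql
where  sql_id = '&sql_id';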

Query to search for details about a specific SQL ID in the cursor cache


I have plenty of examples showing that Oracle 12cR1 selects poor execution plans over good ones.
This is a recent example from one of my databases. Notice that there are two different plans for the same SQL ID: one that takes a long time per execution, and a good plan that executes very quickly:

SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN,
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID =('fczwz45f4d46q')
ORDER BY LAST_ACTIVE_TIME DESC;

Result:

Last active       SQL_ID         PLAN_HASH_VALUE  BASELINE  SQL_PROFILE                   Child Num  ELAPSED_TIME  EXECUTIONS  duration (sec)  sec per exe
16.08.2017 10:57  fczwz45f4d46q       2259800413            SYS_SQLPROF_0156739adc130001          0  350642099493         311          350642       1127,5
16.08.2017 01:39  fczwz45f4d46q        927328071            SYS_SQLPROF_0156739adc130001          9    2675614671       26048            2676          0,1

In this case, my goal is to create a fixed baseline for the execution plan with hash value 927328071.
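
A sketch of how that could be done directly from the cursor cache with dbms_spm.load_plans_from_cursor_cache, using the SQL ID and plan hash value from above and marking the plan as fixed:

set serveroutput on
declare
   l_num_plans PLS_INTEGER;
begin
   l_num_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                     sql_id          => 'fczwz45f4d46q',
                     plan_hash_value => 927328071,
                     fixed           => 'YES');
   DBMS_OUTPUT.put_line('Number of plans loaded: ' || l_num_plans);
end;
/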

Thursday, November 2, 2017

How to bulk load execution plans for a statement into the SMB


Short background:
A customer was complaining about long execution times for a specific SQL statement.

Checking in the database:
select distinct sql_id,plan_hash_value, child_number,ROUND(ELAPSED_TIME/1000000) "duration in seconds"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and ROUND(ELAPSED_TIME/1000000) > 1000
order by 4 desc
;
These were the queries with the longest elapsed times:
SQL_ID         PLAN_HASH_VALUE  CHILD_NUMBER  duration in seconds
cr43vm86rp9cy       3790053540             0                 2387
fhtdk8ms42z0s       3790053540             0                 2255
abbv8g54zzhjf       3790053540             0                 1424
cragwkcwwvkrq       3790053540             0                 1106

Do these SQL IDs have any alternative plans?
select distinct sql_id,count(*) "num plans" 
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
group by sql_id;
Yes they do:
SQL_ID         num plans
cr43vm86rp9cy          2
abbv8g54zzhjf          3
fhtdk8ms42z0s          3
cragwkcwwvkrq          3


Looking more closely at these 4 specific SQL IDs, it quickly became apparent that the queries all had alternative plans generated for them, and plan hash value 133974613 gave the lowest execution times for all 4 samples:
select distinct sql_id,plan_hash_value, child_number,ROUND(ELAPSED_TIME/1000000) "duration in seconds", LPAD(is_resolved_adaptive_plan,15,' ') "adaptive?"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
order by 1,2
;
SQL_ID         PLAN_HASH_VALUE  CHILD_NUMBER  duration in seconds
abbv8g54zzhjf        133974613             1                    1
abbv8g54zzhjf        133974613             2                   16
abbv8g54zzhjf       3790053540             0                 1424
cr43vm86rp9cy        133974613             1                   15
cr43vm86rp9cy       3790053540             0                 2387
cragwkcwwvkrq        133974613             1                    1
cragwkcwwvkrq        133974613             2                   14
cragwkcwwvkrq       3790053540             0                 1106
fhtdk8ms42z0s        133974613             1                    1
fhtdk8ms42z0s        133974613             2                   19
fhtdk8ms42z0s       3790053540             0                 2255

We can also confirm by looking at the average duration for each cursor:
select sql_id, plan_hash_value, avg( ROUND(ELAPSED_TIME/1000000) ) "avg duration in seconds"
from v$sql where sql_text like 'SELECT col1, col2, col3 .... FROM table 1 join table 2 %'
and parsing_schema_name='MYUSER'
and sql_id in ('cr43vm86rp9cy','fhtdk8ms42z0s','abbv8g54zzhjf','cragwkcwwvkrq')
group by sql_id, plan_hash_value
order by 3;

Result:

SQL_ID         PLAN_HASH_VALUE  avg duration in seconds
cragwkcwwvkrq        133974613                      7,5
abbv8g54zzhjf        133974613                      8,5
fhtdk8ms42z0s        133974613                       10
cr43vm86rp9cy        133974613                       15
cragwkcwwvkrq       3790053540                     1106
abbv8g54zzhjf       3790053540                     1424
fhtdk8ms42z0s       3790053540                     2255
cr43vm86rp9cy       3790053540                     2387

We can firmly establish that one of the plans, the one with hash value 133974613, is much more efficient than the other.

As a matter of fact, I found that almost all of the SQL IDs that had executed this query would be much better off using plan hash value 133974613.


So let's load them into the management base.

First, create an empty SQL Tuning Set (STS):
exec dbms_sqltune.create_sqlset(sqlset_name=>'SD16659');

Populate the STS. Since almost all of the queries would be better off using this particular plan, we can use the plan hash value as the only criterion in our "basic filter" directive:

DECLARE
  my_cur  dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN my_cur FOR
     SELECT VALUE(x)
     FROM   TABLE( dbms_sqltune.select_cursor_cache(basic_filter=>'plan_hash_value = ''133974613''')) x;

     dbms_sqltune.load_sqlset(sqlset_name => 'SD16659', populate_cursor => my_cur);
END;
/

From the view dba_sqlset, we can see that the previous code loaded the STS with 177 statements:

select name,owner,created,statement_count
from dba_sqlset 
where name = 'SD16659';

ID  CON_DBID    NAME     OWNER  CREATED              STATEMENT_COUNT
 4  2788575331  SD16659  SYS    02.11.2017 12:16:04              177

Some more information about each statement in the STS can be found in the view DBA_SQLSET_STATEMENTS (fetching only the first 3 rows as an example):

select sql_id,plan_timestamp,parsing_schema_name,plan_hash_value,round(elapsed_time/1000000) "duration in seconds",cpu_time,buffer_gets,disk_reads,executions
from   dba_sqlset_statements
where  sqlset_name = 'SD16659'
fetch first 3 rows only;

SQL_ID         PLAN_TIMESTAMP       PARSING_SCHEMA_NAME  PLAN_HASH_VALUE  duration in seconds  CPU_TIME  BUFFER_GETS  DISK_READS  EXECUTIONS
fza8j57a8n779  01.11.2017 23:25:45  MYUSER                     133974613                    4   3568458        62313         126           6
2srumd4vbs8j2  01.11.2017 23:51:21  MYUSER                     133974613                    2   1925707        17561          10           2
5ptw9m80y4asg                       MYUSER                     133974613                    4   3550461       167656           9          18

Now we can manually add them to the database's SQL Plan Management Base.

VARIABLE cnt NUMBER
EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( -
                    sqlset_name=>'SD16659', sqlset_owner=>'SYS');
print :cnt;
When executing the previous code, 100 plans were loaded into the SMB.
Details about the content of your SMB can be found by querying the view DBA_SQL_PLAN_BASELINES.

For example:
SELECT plan_name,
         sql_handle,
         creator,
         origin,
         TO_CHAR (created, 'dd.mm.yyyy hh24:mi') "created",
         TO_CHAR (last_executed, 'dd.mm.yyyy hh24:mi') "lastexecuted",
         enabled,
         accepted,
         fixed,
         autopurge,
         module,
         ROUND (ELAPSED_TIME / 1000000) "duration in seconds",
         cpu_time,
         buffer_gets,
         disk_reads
    FROM DBA_SQL_PLAN_BASELINES
   WHERE TO_CHAR (created, 'dd.mm.yyyy') = TO_CHAR (SYSDATE, 'dd.mm.yyyy')
ORDER BY created ASC
;

The link to Oracle Documentation about manually loading the SMB can be found here

The usage of dbms_spm can be found here

What does it mean to "evolve" a SQL baseline?

Simply put, "evolving" an SQL baseline means testing whether or not an unaccepted plan performs better than any current plan in the plan baseline.

You can use Cloud Control or the package dbms_spm to evolve a plan.
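
A minimal sketch using dbms_spm (the sql_handle value below is a placeholder for the handle of your baseline):

set serveroutput on
declare
   l_report CLOB;
begin
   l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
                  sql_handle => 'SQL_xxxxxxxxxxxxxxxx',
                  verify     => 'YES',
                  commit     => 'YES');
   DBMS_OUTPUT.put_line(l_report);
end;
/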


Read more in the Oracle Documentation

Wednesday, November 1, 2017

How to purge a cursor from the shared pool using dbms_shared_pool

Find the address and hash value for the particular SQL you're looking for:
select address, hash_value from v$sqlarea where sql_id like 'bjkd7pcyfns86';


ADDRESS           HASH_VALUE
0000000085CD2838  1021993222

Execute the purge procedure:
SQL> exec dbms_shared_pool.purge('0000000085CD2838, 1021993222','C');

Note that the name of the object is the combined values of address and hash value from v$sqlarea.
The value C is what you want to use for cursors.

Note that even if an SQL Plan Baseline is dropped from the SMB, it can still get used if it is still in the cache.

If you have dropped a plan from your SMB that you would like the optimizer to ignore, you must clear the cursor that was previously using the now-dropped SQL Plan Baseline out of the shared pool, using the command above.

Wednesday, October 25, 2017

How to use the find command on Unix/Linux systems


Here is how to list the regular files (indicated by -type f) generated during the last 30 minutes in a particular directory on a Linux/Unix server:

find . -mmin -30 -type f -exec ls -l {} +

Another example, listing files in my audit_file_dest older than 10 days:
find /u01/admin/proddb01/adump/ -name '*.aud' -mtime +10 -exec ls -latrh {} \;

Count all files with extension ".trm" generated one day or less ago:
find . -name '*.trm' -mtime -1 -type f -exec ls -l {} + | wc -l

Count all files with extension ".trm" generated more than 30 days ago:
find . -name '*.trm' -mtime +30 -type f -exec ls -l {} + | wc -l

Find all files with a timestamp newer than April 15th, 16:14 in the afternoon:
export TIMENOW=`date +"%a %b %e %H:%M"`
echo $TIMENOW
Thu Apr 15 16:14
find . -type f -newermt "$TIMENOW" -exec ls -la {} \;

Find and delete can be done in different ways, the recommended way is to use the -delete option. In the command below, I want to find and delete all *.trc and *.trm files older than 10 days:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +10 -delete

Delete all *.trm files in the current directory generated earlier than 24 hrs ago:
find *.trm -mmin +1440 -type f -delete

If the -delete option is not available, you can use a different method:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +10 -exec rm -f {} \;

Or, using xargs, finding files older than 2 days:
find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +2 -print0 | xargs -I {} -0 rm -v "{}"

Pipe to wc to get the number of files deleted:

find /u01/oracle/diag/rdbms/mydb/mydb/trace -type f -name 'mydb*.tr*' -mtime +2 -print0 | xargs -I {} -0 rm -v "{}" | wc -l


Some good external sources:

How To Format Date For Display or Use In a Shell Script
Linux / Unix: Find And Remove Files With One Command On Fly
Linux Find Command with Practical Examples

Monday, October 23, 2017

How to use the script command to log everything a script does

The utility "script" will make a typescript of a terminal session.
Very useful if you have a script which produces lots of output, and you need to search for errors after the script has run to completion:

script -c /tmp/install_oracle.sh

Unless you add the name of the output file, the resulting file will be called typescript.

Thursday, October 19, 2017

How to monitor rollback activity after killing long-running session

Found this brilliant script here: http://www.oracle-wiki.net/startsqlshowrollinfo



prompt This query will monitor rollbacks of transactions after killing a job or session. 
prompt If the used undo records/used undo blocks count is going down, then the transaction is rolling back. 
prompt If they are going up, then the transaction is still running.
SET PAUSE ON
SET PAUSE 'Press Return to Continue'
SET PAGESIZE 60
SET LINESIZE 300
 
COLUMN username FORMAT A20
COLUMN sid FORMAT 9999
COLUMN serial# FORMAT 99999
 
SELECT s.username,
       s.sid,
       s.serial#,
       t.used_ublk,
       t.used_urec,
       rs.segment_name,
       r.rssize,
       r.status
FROM   v$transaction t,
       v$session s,
       v$rollstat r,
       dba_rollback_segs rs
WHERE  s.saddr = t.ses_addr
AND    t.xidusn = r.usn
AND   rs.segment_id = t.xidusn
ORDER BY t.used_ublk DESC
/
In my environment, the output looked as follows:
This query will monitor rollbacks of transactions after killing a job or session.
If the used undo records/used undo blocks count is going down, then the transaction is rolling back.
If they are going up, then the transaction is still running.

USERNAME               SID SERIAL#  USED_UBLK  USED_UREC SEGMENT_NAME                       RSSIZE STATUS         
-------------------- ----- ------- ---------- ---------- ------------------------------ ---------- ---------------
MYUSERNAME_XX_XXX_  1194   10616     670875   14088690 _SYSSMU22_1415974968$          1189502976 ONLINE

The USED_UBLK was dropping, so the ongoing rollback could be confirmed to the users.
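
If you also want a rough estimate of how long the rollback will take, you can sample USED_UREC twice and extrapolate; a minimal sketch (the SID 1194 is taken from the output above):

-- first sample: note the value and the time
SELECT t.used_urec
FROM   v$transaction t, v$session s
WHERE  s.saddr = t.ses_addr
AND    s.sid = 1194;

-- take a second sample after, say, 60 seconds; the difference is the number
-- of undo records applied per minute, and used_urec divided by that rate
-- gives an estimate of the minutes left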

Thursday, October 12, 2017

How to delete a specific line from a file, or all characters following a specific character using sed

If you want to search a file for a particular string, and then delete that line, use sed, like this:

sed -i "/_use_column_stats_for_function/d" init.ora

In bash, using the double quotation marks was the only way to make this work when using a variable instead of a fixed string.

To delete a specific line number, in this case line number 7:
 sed -i '7d' myfile.txt
 

To delete lines 7 and 8:
 sed -i '7,8d' insert_data.sql
To delete everything after a specific character, for example #:
sed -i "s/#.*//g" /tmp/autoupgrade.ora

Tuesday, October 10, 2017

How to find the currently enabled unified auditing policies in an Oracle 12c database



Oracle 12c comes with two auditing policies enabled by default: ORA_SECURECONFIG and ORA_LOGON_FAILURES.
You can find the currently enabled policies in the AUDIT_UNIFIED_ENABLED_POLICIES view.
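
For example, a minimal query (the view also shows which users a policy is enabled for):

select user_name, policy_name, success, failure
from   audit_unified_enabled_policies;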

The query below will reveal what the currently enabled policies will actually audit:
select policy_name,audit_option,condition_eval_opt
from audit_unified_policies
where policy_name in (  select policy_name 
                        from audit_unified_enabled_policies);
                        


The result shows that the default Unified Auditing setup covers many of the actions you would certainly want to audit, like DROP USER, ALTER SYSTEM etc.:

POLICY_NAME AUDIT_OPTION CONDITION_EVAL_OPT
ORA_SECURECONFIG LOGMINING NONE
ORA_SECURECONFIG TRANSLATE ANY SQL NONE
ORA_SECURECONFIG EXEMPT REDACTION POLICY NONE
ORA_SECURECONFIG PURGE DBA_RECYCLEBIN NONE
ORA_SECURECONFIG ADMINISTER KEY MANAGEMENT NONE
ORA_SECURECONFIG DROP ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG ALTER ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE EXTERNAL JOB NONE
ORA_SECURECONFIG CREATE ANY JOB NONE
ORA_SECURECONFIG GRANT ANY OBJECT PRIVILEGE NONE
ORA_SECURECONFIG EXEMPT ACCESS POLICY NONE
ORA_SECURECONFIG CREATE ANY LIBRARY NONE
ORA_SECURECONFIG GRANT ANY PRIVILEGE NONE
ORA_SECURECONFIG DROP ANY PROCEDURE NONE
ORA_SECURECONFIG ALTER ANY PROCEDURE NONE
ORA_SECURECONFIG CREATE ANY PROCEDURE NONE
ORA_SECURECONFIG ALTER DATABASE NONE
ORA_SECURECONFIG GRANT ANY ROLE NONE
ORA_SECURECONFIG DROP PUBLIC SYNONYM NONE
ORA_SECURECONFIG CREATE PUBLIC SYNONYM NONE
ORA_SECURECONFIG DROP ANY TABLE NONE
ORA_SECURECONFIG ALTER ANY TABLE NONE
ORA_SECURECONFIG CREATE ANY TABLE NONE
ORA_SECURECONFIG DROP USER NONE
ORA_SECURECONFIG CREATE USER NONE
ORA_SECURECONFIG AUDIT SYSTEM NONE
ORA_SECURECONFIG ALTER SYSTEM NONE
ORA_LOGON_FAILURES LOGON NONE
ORA_SECURECONFIG CREATE DATABASE LINK NONE
ORA_SECURECONFIG DROP DATABASE LINK NONE
ORA_SECURECONFIG ALTER USER NONE
ORA_SECURECONFIG CREATE ROLE NONE
ORA_SECURECONFIG DROP ROLE NONE
ORA_SECURECONFIG SET ROLE NONE
ORA_SECURECONFIG CREATE PROFILE NONE
ORA_SECURECONFIG DROP PROFILE NONE
ORA_SECURECONFIG ALTER PROFILE NONE
ORA_SECURECONFIG ALTER ROLE NONE
ORA_SECURECONFIG CREATE DIRECTORY NONE
ORA_SECURECONFIG DROP DIRECTORY NONE
ORA_SECURECONFIG ALTER DATABASE LINK NONE
ORA_SECURECONFIG CREATE PLUGGABLE DATABASE NONE
ORA_SECURECONFIG ALTER PLUGGABLE DATABASE NONE
ORA_SECURECONFIG DROP PLUGGABLE DATABASE NONE
ORA_SECURECONFIG EXECUTE NONE

If the $ORACLE_HOME/rdbms/admin/secconf.sql script was run during database creation, you will have some other policies in your database, too, but they won't be enabled by default. These can be found by executing the following statement:
select policy_name,count(audit_option) "number of audits"
from AUDIT_UNIFIED_POLICIES
where policy_name not in (select unique policy_name from audit_unified_enabled_policies )
group by policy_name
order by 2 desc;
In my database, the following collection shows up as being created, but not yet enabled:
POLICY_NAME              number of audits
ORA_RAS_POLICY_MGMT                    33
ORA_CIS_RECOMMENDATIONS                26
ORA_RAS_SESSION_MGMT                   14
ORA_ACCOUNT_MGMT                        9
ORA_DATABASE_PARAMETER                  3

Wednesday, September 27, 2017

How to use the dbms_xplan.display_cursor function to find the execution plan of a query in the cursor cache

set linesize 200
set pagesize 0
select * from table
(dbms_xplan.display_cursor( '0syc4upspr638',3,'TYPICAL'));
exit

Where
- the first argument is the SQL ID
- the second argument is the cursor child number
- the third argument is the format string. TYPICAL is the default value.


See the Oracle Documentation for more information on how to use dbms_xplan

Wednesday, September 20, 2017

How to migrate a non-CDB database to a PDB on the same host

There are many ways to migrate your non-CDB Oracle databases to the new Multitenant architecture. Here I will show you how to clone a non-CDB database to a PDB running in a container database.

Assumptions:
You have two Oracle databases of version 12.1 or higher running on the same server:

1. Your original, non-CDB database, called db01
2. Your new container database, called cdb01
Both of these databases run out of an Oracle Home installed in /u01/oracle/product/12c

Step 1: For Oracle 12.1, open your non-CDB in read only mode (not needed from version 12.2 and onwards):
shutdown immediate
startup mount
alter database open read only;

Step 2: Create an XML file that describes the non-CDB database, using the package dbms_pdb:
export ORACLE_SID=db01
sqlplus / as sysdba
Generate the file:
set serveroutput on
begin
  dbms_pdb.describe( pdb_descr_file => '/tmp/ncdb.xml');
end;
/

Step 3: Create the pluggable database

Connect to your CDB, and create the PDB using the script you created in step 2.

export ORACLE_SID=cdb01
sqlplus / as sysdba
create pluggable database pdb01
using '/tmp/ncdb.xml'
copy
file_name_convert = ('/u02/oradata/db01/', '/u02/oradata/cdb01/PDBS/pdb01/');
Note that I am choosing to copy the files from the original location of the non-CDB database, to a brand new one, using the directive file_name_convert. There are other options, too: MOVE and NOCOPY
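
For example, to move the datafiles instead of copying them, the same statement could be written like this (a sketch of the alternative syntax):

create pluggable database pdb01
using '/tmp/ncdb.xml'
move
file_name_convert = ('/u02/oradata/db01/', '/u02/oradata/cdb01/PDBS/pdb01/');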

Step 4: Execute $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


sqlplus / as sysdba
alter session set container=PDB01;
@?/rdbms/admin/noncdb_to_pdb.sql

I did receive errors during this phase:
ORA-65172: cannot run noncdb_to_pdb.sql unless pluggable database is an
unconverted non-container database
ORA-06512: at "SYS.DBMS_PDB", line 154
ORA-06512: at line 1

The error message seemed to be harmless:
oracle@myserver.mydomain.com:[cdb1]# oerr ora 65172
65172, 00000, "cannot run noncdb_to_pdb.sql unless pluggable database is an unconverted non-container database"
// *Cause:  An attempt was made to run 'noncdb_to_pdb.sql' on a pluggable
//          database (PDB) that was not an unconverted non-container database.
// *Action: 'noncdb_to_pdb.sql' is not necessary for this PDB.
//


Further research showed 1) that the PDB was indeed created, and 2) that there were errors in PDB_PLUG_IN_VIOLATIONS:
select PDB_ID,PDB_NAME,STATUS,CON_ID from cdb_pdbs;

    PDB_ID PDB_NAME                       STATUS                          CON_ID
---------- ------------------------------ --------------------------- ----------
         2 PDB01                          NEW                                  2

alter session set container=cdb$root;


SELECT TO_CHAR(TIME,'dd.mm.yyyy hh24:mi') "time",NAME,STATUS,MESSAGE 
FROM PDB_PLUG_IN_VIOLATIONS;

Result:
time NAME STATUS MESSAGE
20.09.2017 14:17 PDB01 PENDING Database option CATJAVA mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17 PDB01 PENDING Database option CONTEXT mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17 PDB01 PENDING Database option JAVAVM mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17 PDB01 PENDING Database option ORDIM mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17 PDB01 PENDING Database option XML mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17 PDB01 PENDING Sync PDB failed with ORA-65177 during 'alter user sys identified by *'
20.09.2017 14:17 PDB01 PENDING Sync PDB failed with ORA-65177 during 'alter user system identified by *'

A quick search on Oracle's support site revealed that these errors can be ignored. See Doc ID 2020172.1 "OPTION WARNING Database option mismatch: PDB installed version NULL" in PDB_PLUG_IN_VIOLATIONS.

Finally, remember to open your PDB in read write mode. It was left in MIGRATE mode after the noncdb_to_pdb.sql script had run and failed:
 select con_id,name,open_mode from v$containers;

    CON_ID NAME      OPEN_MODE
---------- --------- -----------------------------
         1 CDB$ROOT  READ WRITE
         2 PDB01     MIGRATE

alter session set container=PDB01;

alter pluggable database close;

Pluggable database altered.

alter pluggable database open;

 select con_id,name,open_mode from v$containers;

    CON_ID NAME      OPEN_MODE
---------- --------- -----------------------------
         2 PDB01     READ WRITE

Wednesday, September 13, 2017

Oracle Multitenant licensing

For Oracle 12c, you don't need to purchase an additional license when running a Multitenant database shop with only 1 - one - PDB. This configuration is also referred to as "single tenant".

Franck Pachot points this out in a community discussion:


You don't need an option when you have only one pluggable database.
You can detect violation with:

select name, aux_count from dba_feature_usage_statistics where name like '%Pluggable%' or name like '%Multitenant%';

If AUX_COUNT is >1 then you need to buy the option, or drop the additional PDBs

In 12.2 you can set MAX_PDBS to 1 to be sure nobody creates more than allowed.
Here are the features you can use in this configuration (known as 'single-tenant'): http://www.slideshare.net/pachot/12cr2-singletenant-multitenant-features-for-all-editions


In Oracle 19c, the MAX_PDBS can be set to 3 without breaking the license.
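
A minimal sketch of setting the limit (12.2 and later):

alter system set max_pdbs=1 scope=both;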

A workaround for ORA-06553: PLS-213: package STANDARD not accessible when using datapatch in a CDB

Short background:

I was having trouble applying "patch 26550023 - COMBO of OJVM Component 12.1.0.2.170718 DB PSU + DB PSU 12.1.0.2.170814" in my Multitenant environment. The container database only had one PDB at the time, the PDB$SEED.

After successfully running opatch apply for both patches, I ran datapatch -verbose to load the modified SQL into the database. I had already opened my container database in upgrade mode, and also opened the PDB$SEED in upgrade mode by executing
alter pluggable database all open upgrade;
The state of the PDB$SEED could be confirmed in the alert log, as well as from v$pdbs:

SELECT name, open_mode FROM v$pdbs;

NAME       OPEN_MODE
--------- --------------
PDB$SEED   MIGRATE

Still, I kept getting weird errors like


Bootstrapping registry and package to current versions...done
Error in bootstrap log /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_17273_2017_09_13_13_29_29/bootstrap1_CDBVEG_PDBSEED.log:
Error at line 7: ORA-06553: PLS-213: package STANDARD not accessible
Error at line 17: ORA-06553: PLS-213: package STANDARD not accessible
Error at line 25: SP2-0310: unable to open file "/u01/oracle/product/12102/sqlpatch/FALSE.sql"
Prereq check failed, exiting without installing any patches.


There was little information about the problem and potential workarounds to be found on the internet.

After trying different options without success, I could find no other solution than to drop the PDB$SEED container, so that patching could continue.

Here's how:
SQL> 
-- necessary to avoid "ORA-65017: seed pluggable database may not be dropped or altered"
alter session set "_oracle_script"=TRUE;

Session altered.

SQL> alter pluggable database PDB$SEED close;

Pluggable database altered.

SQL> drop pluggable database pdb$seed including datafiles;

Pluggable database dropped.

SQL> alter session set "_oracle_script"=FALSE;

Session altered.

SQL> select * from cdb_pdbs;

no rows selected


After this point, run datapatch again:
oracle@myserver:[cdbveg]# datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Wed Sep 13 13:34:05 2017
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_18072_2017_09_13_13_34_05/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 26027162 (Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017)):
  Installed in the binary registry only
Bundle series PSU:
  ID 170814 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT
    Nothing to roll back
    The following patches will be applied:
      26027162 (Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017))
      26609783 (DATABASE PATCH SET UPDATE 12.1.0.2.170814)

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 26027162 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26027162/21319014/26027162_apply_CDBVEG_CDBROOT_2017Sep13_13_34_18.log (no errors)
Patch 26609783 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_CDBROOT_2017Sep13_13_34_18.log (no errors)
SQL Patching tool complete on Wed Sep 13 13:34:52 2017

Verification that the patches are applied:
SQL> select ACTION,DESCRIPTION,STATUS,BUNDLE_SERIES from registry$sqlpatch;

ACTION     DESCRIPTION                                                            STATUS               BUNDLE_SERIES
---------- ---------------------------------------------------------------------- -------------------- --------------------
APPLY      Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017)        SUCCESS
APPLY      DATABASE PATCH SET UPDATE 12.1.0.2.170814                              SUCCESS              PSU

Tuesday, September 12, 2017

How to attach an ORACLE_HOME to an existing inventory

I wanted to check the patch level in one of my Oracle installations, and the following error was returned:
oracle@tsl0map-dbteam-sandbox-db04:[vegdb01]# opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/product/12102
Central Inventory : /home/oracle/oraInventory
   from           : /u01/oracle/product/12102/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/12102/cfgtoollogs/opatch/opatch2017-09-12_15-30-47PM_1.log

List of Homes on this system:

  Home name= agent13c2, Location= "/u01/oracle/product/agent13c/agent_13.2.0.0.0"
  Home name= 11204, Location= "/u01/oracle/product/11204"
  Home name= OraHome3, Location= "/u01/oracle/product/gg121"
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

It turned out my inventory had not been updated with my new Oracle Home; when I looked in the inventory.xml file on my server, there was no entry for the installation.

To fix this, add the new OH to your inventory. From the OH that is missing, do the following:
cd $ORACLE_HOME/oui/bin
./runInstaller -invPtrLoc /u01/oracle/product/12102/oraInst.loc -attachHome ORACLE_HOME=/u01/oracle/product/12102 ORACLE_HOME_NAME="Ora12cHome"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8191 MB    Passed
The inventory pointer is located at /u01/oracle/product/12102/oraInst.loc
'AttachHome' was successful.

Check the inventory file now, and you'll see a new entry for the OH (in my case the Ora12cHome):
<HOME NAME="agent13c2" LOC="/u01/oracle/product/agent13c/agent_13.2.0.0.0" TYPE="O" IDX="8"/>
<HOME NAME="11204" LOC="/u01/oracle/product/11204" TYPE="O" IDX="1"/>
<HOME NAME="Ora12cHome" LOC="/u01/oracle/product/12102" TYPE="O" IDX="9"/>

How to create a container database in Oracle 12c

Create an initialization file in your $ORACLE_HOME/dbs folder:

enable_pluggable_database=true
processes=300
control_files="/u02/oradata/cdbveg/control01.ctl","/u03/oradata/cdbveg/control02.ctl"
audit_file_dest="/u01/oracle/admin/cdbveg/adump"
audit_trail=DB
compatible=12.1.0.2
db_block_size=8192
db_domain=""
db_name="cdbveg"
db_recovery_file_dest_size=429496729600
db_recovery_file_dest="/u04/fra"
diagnostic_dest=/u01/oracle
dispatchers="(PROTOCOL=TCP) (SERVICE=cdbvegXDB)"
local_listener = "(ADDRESS = (PROTOCOL=TCP)(HOST=myserver.mydomain.com)(PORT=1521))"
open_cursors=300
remote_login_passwordfile=EXCLUSIVE
undo_tablespace=UNDOTBS1
# use ASMM (Automatic Shared Memory Management)
sga_target=2G
sga_max_size=2G
pga_aggregate_target=128M
# Disable AMM (Automatic Memory Management)
memory_target=0
memory_max_target=0

Create the necessary directories:
mkdir -p /u02/oradata/cdbveg
mkdir -p /u03/oradata/cdbveg
mkdir -p /u01/oracle/admin/cdbveg/adump
mkdir -p /u02/oradata/cdbveg/seed
Set your environment variables correctly; in my case they were:
export ORACLE_SID=cdbveg
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/12102

Start an instance:
sqlplus / as sysdba 
startup nomount

Create your container database:
create database cdbveg
user sys identified by MySecretPasswd
user system identified by MySecretPasswd
logfile group 1(
 '/u02/oradata/cdbveg/red01a.log',
 '/u03/oradata/cdbveg/red01b.log'
) size 128M,
group 2 (
 '/u02/oradata/cdbveg/red02a.log',
 '/u03/oradata/cdbveg/red02b.log'
) size 128M
character set al32utf8 national character set al16utf16
extent management local datafile '/u02/oradata/cdbveg/system01.dbf' size 256M autoextend on next 16M maxsize unlimited
sysaux datafile '/u02/oradata/cdbveg/sysaux01.dbf' size 128M autoextend on next 16M maxsize unlimited
default temporary tablespace temp tempfile '/u02/oradata/cdbveg/temp.dbf' size 256M
undo tablespace undotbs1 datafile '/u02/oradata/cdbveg/undotbs01.dbf' size 256M
enable pluggable database
seed
file_name_convert = ('/u02/oradata/cdbveg','/u02/oradata/cdbveg/seed')
system datafiles size 128M autoextend on next 8M maxsize 2048M
sysaux datafiles size 64M
USER_DATA tablespace users datafile '/u02/oradata/cdbveg/seed/users.dbf' size 128M reuse autoextend on next 32M maxsize unlimited;

The above statement will create a container database, and a seed database which will follow the specifications listed under the keyword "seed" in the create-statement above.
The /u02/oradata/cdbveg/seed directory will contain the datafiles used for future creations of PDBs based on the seed template.
In addition to the mandatory system and sysaux tablespaces, I add a tablespace for user data, too.

Finally, run the necessary scripts:
sqlplus / as sysdba 

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catblock.sql
SQL> @?/rdbms/admin/catproc.sql

SQL> alter session set "_ORACLE_SCRIPT"=false;

SQL> @?/rdbms/admin/catoctk.sql
SQL> @?/rdbms/admin/owminst.plb
SQL> @?/sqlplus/admin/pupbld.sql

Verify creation:
select name,con_id,dbid,open_mode from v$containers;

NAME                     CON_ID       DBID OPEN_MODE
-------------------- ---------- ---------- ------------------------------
CDB$ROOT                      1 3469467854 READ WRITE
PDB$SEED                      2  682995139 READ ONLY

Monday, September 11, 2017

Why is my PDB seemingly stuck in RESTRICTED mode?


Check out the view pdb_plug_in_violations - it will point you in the right direction.

In my case, I had created a pluggable database some days after my CDB was finished.

I had completely forgotten about a common user that I had created at that time:

-- create a common user for all CONTAINERS
-- common users must use the prefix c##
create user C##dba identified by MySecretPassword
default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all;

The new PDB was created without the USERS tablespace, which prevented the new PDB from being synchronized with the parent container.
The pdb_plug_in_violations contained the following message:

Sync PDB failed with ORA-959 during 'create user C##dba identified by *default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all'


To resolve the situation, connect to the container with the missing tablespace:
alter session set container=pdbveg2;
Make sure my session OMF parameter is correctly set:
select name,value,DEFAULT_VALUE,ISDEFAULT,ISPDB_MODIFIABLE, ISSES_MODIFIABLE, DESCRIPTION
from V$PARAMETER where name like '%create_file_dest%';
NAME                 VALUE                        DEFAULT_VALUE  ISDEFAULT  ISPDB_MODIFIABLE  ISSES_MODIFIABLE  DESCRIPTION
db_create_file_dest  /u02/oradata/cdbveg/pdbveg2  NONE           TRUE       TRUE              TRUE              default database location

Create the missing tablespace:
create tablespace USERS
datafile size 8M autoextend on next 2M maxsize 2G;
Finally, close and reopen your pluggable database:
alter pluggable database pdbveg2 close;
alter pluggable database pdbveg2 open read write;
Check status:
select CON_ID,name,OPEN_MODE, RESTRICTED
from v$containers;
CON_ID  NAME     OPEN_MODE   RESTRICTED
     4  PDBVEG2  READ WRITE  NO

How to create and drop a pluggable database in Oracle 12c


This method uses OMF (Oracle Managed Files):

1. Create directory on host:
mkdir -p /u02/oradata/cdbveg/pdbveg2

2. Set session parameter to enable OMF:
alter session set db_create_file_dest='/u02/oradata/cdbveg/pdbveg2';

3. Create a pluggable database with a new and local PDBA:
create PLUGGABLE database PDBVEG2
admin user VEGARD
identified by "MySecretPassword"
roles=(DBA);

The statement above copies data files from the seed PDB to the target directory specified in step 2.
It will also grant the local role PDB_DBA to the new PDBA, Vegard.

4. Check the status of the newly created PDB:
select pdb_name,status,con_id from CDB_PDBS;

PDB_NAME  STATUS  CON_ID
PDBVEG1   NORMAL       3
PDB$SEED  NORMAL       2
PDBVEG2   NEW          4

5. Open the pluggable database:
alter pluggable database pdbveg2 open;

6. Check status again:
select pdb_name,status,con_id from CDB_PDBS;
PDB_NAME STATUS CON_ID
PDBVEG1 NORMAL 3
PDB$SEED NORMAL 2
PDBVEG2 NORMAL 4

To drop a pluggable database, perform the following steps:
alter pluggable database PDB1 close;
-- Unplug the database. If you try to drop it (keeping the datafiles) before unplugging it, Oracle reports
-- ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged
alter pluggable database PDB1 unplug into '/tmp/PDB1.xml';
drop pluggable database PDB1;
If you want Oracle to remove the physical files from disk for you in the same operation, use the following syntax:
drop pluggable database PDB1 including datafiles;
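Either way, the dropped PDB should afterwards be gone from the data dictionary, which you can confirm with the same query as in step 4:

select pdb_name, status, con_id from CDB_PDBS;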

Saturday, September 9, 2017

What is the "datapatch" utility released in Oracle 12c?

From Datapatch: Database 12c Post Patch SQL Automation (Doc ID 1585822.1):

Datapatch is the new tool that enables automation of post-patch SQL actions for RDBMS patches.

and

Datapatch can be executed to complete post-patch SQL actions after the database is restarted following patch application.
For patches that do not have post-patch SQL actions to be performed, calling datapatch is a no-op.
For patches that do have post-patch SQL instructions to be invoked on the database instance, datapatch will automatically detect ALL pending actions (from one installed patch or multiple installed patches) and complete the actions as appropriate.


How does Oracle detect ALL pending actions?

Datapatch determines the requisite apply/rollback actions by matching an internal repository with the patch inventory.
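In other words, it reconciles what opatch has installed in the Oracle Home with what has been recorded inside the database. You can inspect both sides yourself; dba_registry_sqlpatch is the documented view on top of registry$sqlpatch:

# the binary side: patches recorded in the Oracle Home inventory
$ORACLE_HOME/OPatch/opatch lsinventory

-- the SQL side: actions recorded in the database
select patch_id, version, action, status, description
from dba_registry_sqlpatch
order by action_time;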


When should datapatch be invoked?

Datapatch should be invoked when the database is restarted after a patching session.

From 12.1.0.2 and onwards, Oracle Corporation points out that they have made some changes to the datapatch utility. The most important one, in my opinion, is that datapatch now assumes "the role of catbundle when applying bundles/PSU."

This means that:

catbundle.sql has been deprecated, and the patch registry is maintained only in registry$sqlpatch.
Application of the PSU no longer updates the registry$history table.
The status of patch application is now entirely maintained in registry$sqlpatch.
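You can observe this yourself: after applying a 12.1.0.2 PSU, no new row appears in the old history view, while dba_registry_sqlpatch (queried above) reflects the patch:

-- no longer updated by PSU application from 12.1.0.2 onwards
select action_time, action, version, comments
from dba_registry_history
order by action_time;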


Here's an abbreviated output from using datapatch as part of applying Oracle's latest PSU, Database Patch Set Update 12.1.0.2.170814 (Includes CPUJul2017):

1. Download and unpack the patch
[oracle@lx01oric software]$ unzip p26609783_121020_Linux-x86-64.zip
2. Shut down your Oracle instance and the listener
3. Check for conflicts:
[oracle@lx01oric software]$ cd 26609783
[oracle@lx01oric 26609783]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/oracle/product/db/12.1/12.1.0.2.0
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/db/12.1/12.1.0.2.0/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_13-41-14PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

4. Apply the patch:
[oracle@lx01oric 26609783]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/product/db/12.1/12.1.0.2.0
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/db/12.1/12.1.0.2.0/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_14-16-35PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   19769480  20299023  20831110  21359755  21948354  22291127  23054246  24006101  24732082  25171037  25755742  26609783  

Do you want to proceed? [y|n]

... output abbreviated.

Composite patch 26609783 successfully applied.
Log file location: /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_14-16-35PM_1.log

OPatch succeeded.

5. Start your database. Once it is open, execute datapatch:
[oracle@lx01oric] cd $ORACLE_HOME/OPatch
[oracle@lx01oric OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Wed Sep  6 14:54:49 2017
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_16001_2017_09_06_14_54_50/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...
Bootstrap timed out after 240 seconds
Retrying bootstrap after retryable errors....done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
  ID 170814 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      26609783 (DATABASE PATCH SET UPDATE 12.1.0.2.170814)

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 26609783 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_CDBROOT_2017Sep06_15_01_26.log (no errors)
Patch 26609783 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_PDBSEED_2017Sep06_15_09_34.log (no errors)
SQL Patching tool complete on Wed Sep  6 15:18:53 2017
[oracle@lx01oric OPatch]$ vi /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_PDBSEED_2017Sep06_15_09_34.log
6. If applicable, activate the OJVM Mitigation patch.

According to Oracle, this patch is applicable in "situations where the latest OJVM PSU cannot be installed immediately... The "Mitigation Patch" is an interim solution to protect against all currently known (Jul 2015) Oracle JavaVM security vulnerabilities in the database until such time as the OJVM PSU can be installed."

Since this is my situation, I install the mitigation patch:
[oracle@lx01oric 26609783]$ cd $ORACLE_HOME/rdbms/admin
[oracle@lx01oric admin]$ sqlplus / as sysdba @dbmsjdev.sql
[oracle@lx01oric admin]$ sqlplus / as sysdba
SQL> exec dbms_java_dev.disable;
PL/SQL procedure successfully completed.
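Should you later get the OJVM PSU in place, the mitigation can be lifted again; as far as I can tell from the mitigation patch documentation, the same package also exposes an enable procedure:

SQL> exec dbms_java_dev.enable;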

Thursday, September 7, 2017

How to solve OUI-10197:Unable to create a new Oracle Home during cloning

Short background:

After a failed attempt to clone an Oracle 12c installation using Oracle's perl script clone.pl, invoked like this:
oracle@myserver:[DBSID]# cd /u01/oracle/product/12102/clone/bin
oracle@myserver:[DBSID]# perl clone.pl ORACLE_HOME=/u01/oracle/product/12102 ORACLE_HOME_NAME=12102 ORACLE_BASE=/u01/oracle OSDBA_GROUP=dba

the following error was thrown:

OUI-10197:Unable to create a new Oracle Home at /u01/oracle/product/12102. Oracle Home already exists at this location. Select another location.
SEVERE:OUI-10197:Unable to create a new Oracle Home at /u01/oracle/product/12102. Oracle Home already exists at this location. Select another location.

The solution is to detach the Oracle Home from the Inventory:
oracle@myserver:[DBSID]# ./runInstaller -detachHome ORACLE_HOME=/u01/oracle/product/12102 invPtrLoc=/u01/oracle/product/12102/oraInst.loc
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 7814 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.

You can now rerun clone.pl.
Note that you do not need to physically remove files from disk and unzip a fresh copy, since the Inventory is now unaware of the previously failed installation attempt.
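If you want to convince yourself, list the homes currently registered in the central inventory; the path below is taken from the log output above, and the exact layout may vary with your setup. The detached home should no longer have a HOME entry in this file:

cat /u01/oraInventory/ContentsXML/inventory.xml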

Rerunning the clone now completes successfully. Log output below.

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 1810 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 7814 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-09-07_09-28-15AM. Please wait ...You can find the log of this install session at:
 /home/oracle/oraInventory/logs/cloneActions2017-09-07_09-28-15AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of 12102 was successful.
Please check '/home/oracle/oraInventory/logs/cloneActions2017-09-07_09-28-15AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
..................................................   95% Done.
As a root user, execute the following script(s):
        1. /u01/oracle/product/12102/root.sh
..................................................   100% Done.