Showing posts with label Oracle 12c new features.

Tuesday, July 14, 2020

Identity columns in Oracle 12c



The identity column, introduced in Oracle 12c, can be used in different ways depending on your needs. Here are some basic examples:

create table songs (
    song_id number generated always as identity,
    title varchar2(128)
);

The "generated always" clause is default and may be omitted.

In this scenario, no insert may supply a value for the column song_id, even if the datatype is correct. Null values are not accepted either. Oracle will generate its own value and will not allow any manipulation of the value recorded in the identity column.
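
For example, with the "generated always" definition above, an insert that supplies its own value is rejected:

insert into songs (song_id, title) values (1, 'Thunder Road');

ORA-32795: cannot insert into a generated always identity column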


To allow overriding the auto-generated numbers, specify "by default" instead of "always":
create table songs (
    song_id number generated by default as identity,
    title varchar2(128)
);
In other words, the statements below would both be permissible:
insert into songs (song_id, title)values(10,'Born to run');
update songs set song_id=20 where song_id=10;
However, null cannot be specified as a legitimate value for the identity column:
insert into songs values(null, 'White wedding');
would return the error
ORA-01400: cannot insert NULL into ("MUSIC"."SONGS"."SONG_ID")

If you need to allow for NULL values in the identity column, you create the table as follows:
create table songs (
    song_id number generated by default on null as identity,
    title varchar2(128)
);

With the above definition, Oracle

* permits overriding the autogenerated numbers
* allows update of the identity column
* allows null values to be specified during inserts

All of the three statements below are thus valid:
insert into songs (song_id, title)values(10,'Born to run');
update songs set song_id=20 where song_id=10;
insert into songs values(null, 'White wedding');
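
Behind the scenes, Oracle backs every identity column with a system-generated sequence. You can inspect it through the USER_TAB_IDENTITY_COLS view, for example:

select column_name, generation_type, sequence_name
from user_tab_identity_cols
where table_name = 'SONGS';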

Tim Hall has published a more in-depth article on the same topic, of his usual high quality.

Friday, October 11, 2019

New security feature in Oracle 12.2: INACTIVE_ACCOUNT_TIME



A profile in Oracle 12.2 can now be configured with the setting INACTIVE_ACCOUNT_TIME, which specifies the maximum number of days an account can remain unused. Unless a new login occurs within the specified number of days, the account will be automatically locked.

If not set, any custom-made profile will inherit the setting of the DEFAULT profile, which is UNLIMITED.

Syntax:
CREATE PROFILE my_profile LIMIT
INACTIVE_ACCOUNT_TIME 30;

ALTER PROFILE my_profile LIMIT
INACTIVE_ACCOUNT_TIME 30;
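
To verify the setting across your profiles, query dba_profiles:

select profile, limit
from dba_profiles
where resource_name = 'INACTIVE_ACCOUNT_TIME';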

Sources:
Oracle 12.2 New Features guide

Automatically Locking Inactive Database User Accounts


Wednesday, May 22, 2019

Deprecated security parameter in Oracle 12c: SEC_CASE_SENSITIVE_LOGON


Oracle states:

"The use of the Oracle instance initialization parameter SEC_CASE_SENSITIVE_LOGON is deprecated in favor of setting the SQLNET.ALLOWED_LOGON_VERSION_SERVER parameter to 12 to ensure that passwords are treated in a case-sensitive fashion.
Disabling password case sensitivity is not supported in Exclusive mode (when SQLNET.ALLOWED_LOGON_VERSION_SERVER is set to 12 or 12a.)"
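
A minimal sketch of the corresponding entry in the server's sqlnet.ora:

# $TNS_ADMIN/sqlnet.ora on the database server
SQLNET.ALLOWED_LOGON_VERSION_SERVER=12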


See this post for an example used during implementation

Documentation here

Thursday, May 16, 2019

How to compress tables on import



Oracle 12c introduces a way to compress tables on the fly during import with impdp.

Use the directive
transform=table_compression_clause:"row store compress advanced"
in your impdp parameter file.
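
A minimal parameter file sketch (the directory and dumpfile names are examples only):

# imp_compress.par
directory=DATA_PUMP_DIR
dumpfile=mydump.dmp
logfile=imp_compress.log
transform=table_compression_clause:"row store compress advanced"

Run it with:
impdp system parfile=imp_compress.par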

Documentation here

Tuesday, April 9, 2019

A workaround for ORA-54033: column to be modified is used in a virtual column expression



Applicable for Oracle 12.1.0.2.

My customer reported that an attempt to execute a simple DDL statement had failed:
ALTER TABLE mytable
MODIFY (COL2 DATE );

The error thrown was: ORA-54033: column to be modified is used in a virtual column expression.

Sure enough, a look at the table revealed a hidden virtual column:

set lines 200
col owner format a15
col table_name format a30
col column_name format a30
col data_default format a60

select owner,table_name,column_name, data_Default
from dba_tab_cols
where table_name='MYTABLE'
and hidden_column='YES';

Output:
OWNER           TABLE_NAME      COLUMN_NAME                    DATA_DEFAULT                                                                    
--------------- --------------- ------------------------------ -----------------------------------------------
SCOTT           MYTABLE         SYS_STSC13O20ML6_5OD25YOF16STK SYS_OP_COMBINED_HASH("COL1","COL2","COL3","COL4")             
1 row selected.


You can also query DBA_STAT_EXTENSIONS to get similar information:
set lines 200
col extension_name format a30
col extension format a50

SELECT EXTENSION_NAME, EXTENSION,creator,droppable
FROM DBA_STAT_EXTENSIONS
WHERE TABLE_NAME='MYTABLE';

Output:
EXTENSION_NAME                 EXTENSION                      CREATOR DROPPABLE
------------------------------ ------------------------------ ------- ---------
SYS_STSC13O20ML6_5OD25YOF16STK ("COL1","COL2","COL3","COL4")  SYSTEM  YES      
1 row selected.

So in order to modify the existing column, the extended statistics will have to be dropped and recreated:
set lines 200
set serveroutput on
set timing on

BEGIN
  dbms_stats.drop_extended_stats('SCOTT', 'MYTABLE', '(COL1,COL2,COL3,COL4)');
END;
/

ALTER TABLE mytable
MODIFY (COL2 DATE );

select dbms_stats.create_extended_stats('SCOTT', 'MYTABLE', '(COL1,COL2,COL3,COL4)') 
from dual;
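
Note that dbms_stats.create_extended_stats only re-creates the extension itself; the extended statistics are not populated until the next gathering, so finish by regathering:

exec dbms_stats.gather_table_stats('SCOTT','MYTABLE');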

Monday, January 7, 2019

How to create a multicolumn list-partitioned table in Oracle 12.2



From Oracle 12.2, you can LIST partition a table using multiple columns:

CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) 
(
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         )
);
Incoming rows with any of the combinations below will enter the partition NORWAY_SOUTH_WEST:

'NOR','AUST-AGDER'
'NOR','VEST-AGDER'
'NOR','ROGALAND'

This functionality removes the need for LIST-LIST subpartitioning, as far as I can see.


Some queries to verify that the partitions exist:

select  partition_name,partition_position,tablespace_name
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';

PARTITION_NAME     PARTITION_POSITION  TABLESPACE_NAME
NORWAY_SOUTH_WEST                   1  USERS
NORWAY_WEST                         2  USERS
NORWAY_CENTRAL                      3  USERS
NORWAY_EAST                         4  USERS

SELECT PARTITIONING_TYPE,PARTITION_COUNT,PARTITIONING_KEY_COUNT 
FROM USER_PART_TABLES 
where table_name='REGIONAL_ORDERS';
PARTITIONING_TYPE  PARTITION_COUNT  PARTITIONING_KEY_COUNT
LIST                             4                       2

SELECT column_name,column_position 
FROM USER_PART_KEY_COLUMNS
where name='REGIONAL_ORDERS';

COLUMN_NAME   COLUMN_POSITION
COUNTRY_CODE                1
REGION_CODE                 2

Let's insert a row:

insert into regional_orders
values (1,'NOR','ROGALAND',3344,to_date('04.01.2019','DD.MM.YYYY'), 1000);
commit;

Analyze the table:
exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'SCOTT',TabName => 'REGIONAL_ORDERS');

Verify:
select partition_name, num_rows
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';
PARTITION_NAME     NUM_ROWS
NORWAY_CENTRAL            0
NORWAY_EAST               0
NORWAY_SOUTH_WEST         1
NORWAY_WEST               0

Insert with a different value:
insert into regional_orders
values (2,'NOR','BUSKERUD',3345,to_date('04.01.2019','DD.MM.YYYY'), 1200);

COMMIT;
Analyze again:
exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'SCOTT',TabName => 'REGIONAL_ORDERS');

Verify:
select partition_name, num_rows
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';

PARTITION_NAME     NUM_ROWS
NORWAY_CENTRAL            0
NORWAY_EAST               1
NORWAY_SOUTH_WEST         1
NORWAY_WEST               0


If you send in a value that is not covered in your pre-specified list, your statement will fail:
insert into regional_orders
values (3,'NOR','NORDLAND',3346,to_date('04.01.2019','DD.MM.YYYY'), 1100);

ORA-14400: inserted partition key does not map to any partition

The solution to this is to either create your table with a default partition or to use the Oracle 12.2 new feature of automatic list partitioning.

Using a default partition looks as follows:
CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) 
                (       
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         ),
                PARTITION NORWAY_REST VALUES (DEFAULT) 
                )
;

Using the AUTOMATIC keyword is as simple as adding it to your partitioning clause and removing the specification for your overflow partition:
CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) AUTOMATIC
                (       
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         )
                )
;

New values will be added automatically and given system-generated partition names.
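
For example, with the AUTOMATIC variant of the table, the insert that failed earlier now succeeds, and a new partition appears under a system-generated name (the name will vary):

insert into regional_orders
values (3,'NOR','NORDLAND',3346,to_date('04.01.2019','DD.MM.YYYY'), 1100);
commit;

select partition_name
from user_tab_partitions
where table_name='REGIONAL_ORDERS';

-- lists the four named partitions plus a
-- system-generated one, e.g. SYS_P1234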

There is an obvious trade-off here: if you use automation, you will not be able to give your partitions logical names, nor place values that logically belong together in the same partition.

For example, the values
'NOR','NORDLAND'
may logically belong to the same partition as the values
'NOR','TROMS'
and
'NOR','FINNMARK'
Had I known at table creation time that such values might arrive, I would of course have created another partition, like this:
PARTITION NORWAY_NORTH VALUES (
                                        ( 'NOR','NORDLAND'),
                                        ( 'NOR','TROMS'),
                                        ( 'NOR','FINNMARK')
                                         )

If you use a default partition, you will always be able to save your incoming rows, and you have full control over your partition names and their values.
But new rows end up in an overflow partition together with any other record that cannot find its way into a properly named and configured partition.
You may need to split your default partition later, to satisfy your business needs or to reduce its size.

Wednesday, November 28, 2018

Changes in privilege "SELECT ANY DICTIONARY" in Oracle 12c


From version 12.1 and onwards, Oracle has introduced some changes to enhance security when granting the system privilege "SELECT ANY DICTIONARY".

In the New Features guide for version 12.1, the authors explain:

The SELECT ANY DICTIONARY privilege no longer permits access to security sensitive data dictionary tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, and XS$VERIFIERS.

This change increases the default security of the database by not allowing access to a subset of data dictionary tables through the SELECT ANY DICTIONARY privilege.

Access to USER$ has also been removed when granting the system privilege SELECT ANY TABLE and the role SELECT_CATALOG_ROLE, but I have not yet found the documentation that confirms this.

Wednesday, November 14, 2018

New parameter in dbms_redefinition.finish_redef_table in 12c



Oracle 12c adds a potentially very useful parameter to the dbms_redefinition package, in the procedure finish_redef_table: dml_lock_timeout.

It specifies a time limit, in seconds, for how long the procedure waits in the DML lock queue to lock the table before it terminates gracefully.

By default, this parameter is set to 0 which means no waiting at all; the procedure will simply error out if it cannot get a lock on the table.

The maximum value is 1000000, which causes the procedure to wait in perpetuity to acquire a DML lock.

Here is an example of its usage, where I have set the limit to 5 minutes (300 seconds):
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname=>'SCOTT',
                                     orig_table=>'EMP',
                                     int_table=>'EMP_INTERIM', 
                                     dml_lock_timeout=>300);
END;
/

Friday, September 28, 2018

How to work around ORA-38338: incorrect ILM policy scope for row-level ADO policies


When adding an ADO policy on a table, like this:

CREATE TABLE TEST_TABLE1
(
  col1 NUMBER(38),
  col2 NUMBER(38),
  col3 DATE
)
NOCOMPRESS 
TABLESPACE USERS
ILM ADD POLICY ROW STORE COMPRESS ADVANCED 
ROW AFTER 1 DAY OF NO ACCESS;

Oracle returned:


ORA-38338: incorrect ILM policy scope

This condition is not valid for a row-level ADO policy. The only valid option for row-level policies is NO MODIFICATION:

ILM ADD POLICY ROW STORE COMPRESS ADVANCED 
ROW AFTER X DAY OF NO MODIFICATION;
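
Putting it together, the corrected table definition from the example above becomes:

CREATE TABLE TEST_TABLE1
(
  col1 NUMBER(38),
  col2 NUMBER(38),
  col3 DATE
)
NOCOMPRESS 
TABLESPACE USERS
ILM ADD POLICY ROW STORE COMPRESS ADVANCED 
ROW AFTER 1 DAY OF NO MODIFICATION;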

Thursday, August 9, 2018

How to connect to a PDB using jdbc

Before you connect to a PDB using jdbc, make sure you have added the following to your database server's $TNS_ADMIN/listener.ora file:

USE_SID_AS_SERVICE_<listener_name>=on

Restart or reload the listener. You should now be able to connect to the service_name representing your PDB.

As an example, I connected from the SQuirreL SQL Client running on Ubuntu to a PDB running in a VirtualBox VM with CentOS and Oracle 12.1.

The entire connection string is

jdbc:oracle:thin:@lx01oric.oric.no:1521:pdbveg1.oric.no
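
Note that the colon before pdbveg1.oric.no is SID syntax; the USE_SID_AS_SERVICE setting makes the listener treat that field as a service name. Alternatively, the thin driver accepts a service name directly with the @//host:port/service syntax, which works without any listener change:

jdbc:oracle:thin:@//lx01oric.oric.no:1521/pdbveg1.oric.no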

Thursday, June 21, 2018

What is the meaning of INCREMENTAL_STALENESS in dbms_stats?



INCREMENTAL_STALENESS controls how Oracle decides whether a partition's statistics are stale.

Its value can be USE_STALE_PERCENT, USE_LOCKED_STATS, or NULL:

USE_STALE_PERCENT - a partition/subpartition is not considered as stale if DML changes are less than the
STALE_PERCENT preference value.

USE_LOCKED_STATS - locked partitions/subpartitions statistics are not considered as stale, regardless of DML changes

NULL - this is the default value, meaning a partition or subpartition is considered as stale as long as it has any DML changes

STALE_PERCENT determines the percentage of rows in a table that must change before the statistics are deemed stale and should be regathered. The default is 10%.


I have used the following script to implement this in one of my 12.2 databases.
The last preference set (DEGREE) has nothing to do with incremental statistics, but I have found that using the default degree of parallelism is a good setting for most databases:

define owner = 'SCOTT';
define table_name = 'EMP';
BEGIN
   dbms_stats.set_table_prefs('&&owner','&&table_name','INCREMENTAL','TRUE');
   dbms_stats.set_table_prefs('&&owner','&&table_name','INCREMENTAL_STALENESS','USE_STALE_PERCENT, USE_LOCKED_STATS');
   dbms_stats.set_table_prefs('&&owner','&&table_name','DEGREE','DBMS_STATS.DEFAULT_DEGREE');
END;
/


Verify your settings with the following script:
 
define owner = 'SCOTT';
define table_name = 'EMP';
set lines 200
set verify off
col "incremental" format a20
col "incr staleness" format a40
col "incremental level" format a10
col "stale percent" format a20
col "synopsis type" format a40
col "degree" format a30
col "granularity" format a20

select dbms_stats.get_prefs('INCREMENTAL','&&owner','&&table_name') "incremental",
        dbms_stats.get_prefs('INCREMENTAL_STALENESS','&&owner','&&table_name') "incr staleness",
        dbms_stats.get_prefs('INCREMENTAL_LEVEL','&&owner','&&table_name') "incremental level",
        dbms_stats.get_prefs('STALE_PERCENT','&&owner','&&table_name') "stale percent",
        dbms_stats.get_prefs('APPROXIMATE_NDV_ALGORITHM','&&owner','&&table_name') "synopsis type",
        dbms_stats.get_prefs('DEGREE','&&owner','&&table_name') "degree" ,
        dbms_stats.get_prefs('GRANULARITY','&&owner','&&table_name') "granularity"
FROM DUAL
/

Example output:

incremental  incr staleness                       incremental level  stale percent  synopsis type          degree  granularity
TRUE         USE_STALE_PERCENT, USE_LOCKED_STATS  PARTITION          10             REPEAT OR HYPERLOGLOG  32      AUTO


Source: "Understanding Optimizer Statistics With Oracle Database 12c Release 2"

Thursday, January 11, 2018

How to toggle between "Mixed mode" Auditing, Traditional Auditing and Unified Auditing


This article is applicable to Oracle database versions 12.1 and onwards.

For newly created databases, mixed mode auditing is enabled by default through the predefined policy ORA_SECURECONFIG. 
 

Verify that the database is using "Mixed Mode" auditing

select parameter, value from v$option where parameter='Unified Auditing';

PARAMETER            VALUE
Unified Auditing     FALSE

Check for any enabled unified audit policies:
select policy_name, enabled_option
from audit_unified_enabled_policies;
POLICY_NAME       ENABLED_OPTION
ORA_SECURECONFIG  BY USER

If v$option shows FALSE for Unified Auditing AND the database has at least one enabled unified audit policy, we are running in "Mixed Mode" auditing.

In Mixed Mode Auditing, all of the existing auditing startup parameters for the database are still valid: AUDIT_TRAIL, AUDIT_FILE_DEST, AUDIT_SYS_OPERATIONS, and AUDIT_SYSLOG_LEVEL. So if your AUDIT_TRAIL is set to "DB", you can still use all the expected data dictionary views to obtain your audit information. If AUDIT_TRAIL is set to "OS", your auditing information will be sent to the location specified by the parameter AUDIT_FILE_DEST.

To enable "pure" Unified Auditing

1. Shutdown the database:
shutdown immediate
2. Relink the Oracle database binaries:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME

3. Enable at least one unified audit policy. By default, two unified audit policies are created when you create a 12.2 database: ORA_SECURECONFIG and ORA_LOGON_FAILURES. The first one is enabled by default, the second one is not. Let's enable ORA_LOGON_FAILURES, too:
audit policy ORA_LOGON_FAILURES;
Verify:
select parameter, value from v$option where parameter='Unified Auditing';

PARAMETER            VALUE
Unified Auditing     TRUE
select *
from audit_unified_enabled_policies;

POLICY_NAME         ENABLED_OPTION  ENTITY_NAME  ENTITY_TYPE  SUCCESS  FAILURE
ORA_LOGON_FAILURES  BY USER         ALL USERS    USER         YES      YES
ORA_SECURECONFIG    BY USER         ALL USERS    USER         YES      YES

If v$option shows TRUE for Unified Auditing AND we have at least one enabled unified auditing policy, we are using "Pure" Unified Auditing. 


It doesn't matter what all the Traditional Auditing parameters are set to at this point; they will not have any effect. 

Your audit information will from now on be written to the table AUDSYS.AUD$UNIFIED.

The SYS.AUD$ and SYS.FGA_LOG$ tables will still be accessible, but not used by the Oracle instance. They will only contain auditing records from before unified auditing was enabled. Consequently, your previously used queries based on familiar data dictionary views such as dba_audit_trail will only return information from before Unified Auditing was enabled.
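
Going forward, query the UNIFIED_AUDIT_TRAIL view instead; a minimal example:

select event_timestamp, dbusername, action_name, return_code
from unified_audit_trail
order by event_timestamp desc;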

The Oracle documentation provides a table which is very helpful in determining the pros and cons of migrating to Unified Auditing.

In my opinion, the most important drawback of Unified Auditing is that it doesn't allow the auditing data to be written to the operating system.


To enable traditional Auditing

1. First, disable any unified audit policies that are currently enabled. Find the currently enabled policies:
select user_name, policy_name, enabled_opt, enabled_option
from audit_unified_enabled_policies;
USER_NAME  POLICY_NAME         ENABLED_OPT  ENABLED_OPTION
ALL USERS  ORA_LOGON_FAILURES  BY           BY USER
ALL USERS  ORA_SECURECONFIG    BY           BY USER

2. Take them out of audit. This step prevents the database from going into mixed mode auditing after you complete this procedure:
noaudit policy ORA_SECURECONFIG;
noaudit policy ORA_LOGON_FAILURES;
3. Shutdown the database:
shutdown immediate
4. Relink the Oracle database binaries:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_off ioracle ORACLE_HOME=$ORACLE_HOME
5. Start the database
sqlplus / as sysdba
startup
The database should now be in Traditional Auditing mode. There will be no more entries logged to the unified_audit_trail. Your audit records will go to the SYS.AUD$ and SYS.FGA_LOG$ tables, or to the operating system, depending on your value for the parameter AUDIT_TRAIL.

More about disabling unified auditing policies can be found here

Another good source for more information about Unified Auditing is this article found at oracle-base.com

How to check if Unified Auditing is enabled in your database



Check that Unified Auditing is enabled with the following query:
col parameter format a20
col value format a20
set lines 200
select parameter, value from v$option where parameter='Unified Auditing';
exit
Output shows that it is enabled:
PARAMETER            VALUE
-------------------- --------------------
Unified Auditing     TRUE

Wednesday, January 10, 2018

What is "Automatic Reoptimization" introduced in Oracle 12cR1?

Some definitions, mostly taken directly from Oracle's documentation or quoted from Tim Hall's www.oracle-base.com.

Automatic Reoptimization


Automatic Reoptimization is the ability of the optimizer to automatically change a plan on subsequent executions of a SQL statement.

Automatic reoptimization takes two forms: Statistics Feedback and Performance Feedback.

Statistics feedback


Statistics Feedback (formerly known as cardinality feedback and first introduced in Oracle 11gR2) is a type of reoptimization that automatically improves plans for repeated queries that have cardinality misestimates.
  • At the end of an execution, the optimizer compares its initial cardinality estimates to the actual number of rows returned by each operation in the plan during execution. 
  • If estimates differ significantly from actual cardinalities, then the optimizer stores the correct estimates for subsequent use and the statement is marked as "reoptimizable". 
  • When the query executes again, the optimizer uses the corrected cardinality estimates instead of its usual estimates, allowing a better plan to be determined.
  • Statistics Feedback is statement specific and is lost if the instance is restarted or the statement is aged out of the shared pool.


Performance Feedback


This form of automatic reoptimization helps improve the degree of parallelism automatically chosen for repeated SQL statements when PARALLEL_DEGREE_POLICY is set to ADAPTIVE.

  • At the end of an initial execution, the optimizer compares the degree of parallelism chosen by the optimizer with the degree of parallelism computed based on the performance statistics (for example, the CPU time) gathered during the actual execution of the statement
  • If the two values vary significantly, then the database marks the statement for reparsing, and stores the initial execution  statistics as feedback. This feedback helps better compute the degree of parallelism for subsequent executions.

Note that even if PARALLEL_DEGREE_POLICY is not set to ADAPTIVE, statistics feedback may influence the degree of parallelism chosen for a statement.

In short, to remember it more easily:

  • statistics feedback deals with incorrect cardinality
  • performance feedback deals with incorrect parallel execution
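
You can check whether a cursor has been marked for reoptimization through V$SQL (substitute your own sql_id):

select sql_id, child_number, is_reoptimizable
from v$sql
where sql_id = '&sql_id';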



Tuesday, November 14, 2017

What is the Adaptive SQL Plan Management introduced in Oracle 12c?



By "Adaptive SQL Plan Management", Oracle is referring to a new autotask that is executed every night during the maintenance window, as a part of the "sql tuning advisor".

It will automatically evolve all non-accepted plans in the SQL plan management base.
If the new plans improve the performance of the query, they will automatically be promoted to accepted plans.

In previous versions of Oracle, plans needed to be evolved manually by the DBA by running dbms_spm.evolve_sql_plan_baselines.
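
For reference, the manual approach looks like this, where the sql_handle is a placeholder for a handle taken from dba_sql_plan_baselines:

SET LONG 1000000
SELECT DBMS_SPM.evolve_sql_plan_baselines(sql_handle => '&sql_handle') AS report
FROM dual;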

Is the sql tuning task enabled?
SELECT CLIENT_NAME, STATUS
FROM   DBA_AUTOTASK_CLIENT
WHERE  CLIENT_NAME = 'sql tuning advisor';


View details about a specific run:
SELECT OWNER,TASK_ID, TASK_NAME,DESCRIPTION,ADVISOR_NAME,EXECUTION_TYPE,STATUS,RECOMMENDATION_COUNT "num rec",EXECUTION_START,EXECUTION_END,LAST_EXECUTION,HOW_CREATED
FROM  DBA_ADVISOR_TASKS 
WHERE TASK_NAME = 'SYS_AUTO_SPM_EVOLVE_TASK';

OWNER  TASK_ID  TASK_NAME                 DESCRIPTION                ADVISOR_NAME        EXECUTION_TYPE  STATUS     num rec  EXECUTION_START      EXECUTION_END        LAST_EXECUTION  HOW_CREATED
SYS    2        SYS_AUTO_SPM_EVOLVE_TASK  Automatic SPM Evolve Task  SPM Evolve Advisor  SPM EVOLVE      COMPLETED  0        13.11.2017 23:00:06  13.11.2017 23:00:07  EXEC_20606      AUTO



If the task parameter ACCEPT_PLANS is set to TRUE, plans will be evolved and accepted automatically:
SELECT PARAMETER_NAME,PARAMETER_VALUE
FROM DBA_ADVISOR_PARAMETERS 
WHERE TASK_NAME LIKE 'SYS_AUTO_SPM_EVOLVE_TASK'
AND PARAMETER_NAME='ACCEPT_PLANS';

PARAMETER_NAME  PARAMETER_VALUE
ACCEPT_PLANS    TRUE


If you want to view the results of such an automatic job, use the following code:

SET LONG 1000000 PAGESIZE 1000 LONGCHUNKSIZE 100 LINESIZE 100
SELECT DBMS_SPM.report_evolve_task(task_name => 'SYS_AUTO_SPM_EVOLVE_TASK', execution_name => 'EXEC_20606') AS output
FROM dual;

Tuesday, October 10, 2017

How to find the currently enabled unified auditing policies in an Oracle 12c database



Oracle 12c comes with two auditing policies enabled by default: ORA_SECURECONFIG and ORA_LOGON_FAILURES.
You can find the currently enabled policies in the AUDIT_UNIFIED_ENABLED_POLICIES view.

The query below reveals what the currently enabled policies will actually audit:
select policy_name,audit_option,condition_eval_opt
from audit_unified_policies
where policy_name in (  select policy_name 
                        from audit_unified_enabled_policies);
                        


The result shows that the default Unified Auditing setup covers many of the actions you would certainly want to audit, like DROP USER, ALTER SYSTEM etc.:

POLICY_NAME AUDIT_OPTION CONDITION_EVAL_OPT
ORA_SECURECONFIG LOGMINING NONE
ORA_SECURECONFIG TRANSLATE ANY SQL NONE
ORA_SECURECONFIG EXEMPT REDACTION POLICY NONE
ORA_SECURECONFIG PURGE DBA_RECYCLEBIN NONE
ORA_SECURECONFIG ADMINISTER KEY MANAGEMENT NONE
ORA_SECURECONFIG DROP ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG ALTER ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE ANY SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE SQL TRANSLATION PROFILE NONE
ORA_SECURECONFIG CREATE EXTERNAL JOB NONE
ORA_SECURECONFIG CREATE ANY JOB NONE
ORA_SECURECONFIG GRANT ANY OBJECT PRIVILEGE NONE
ORA_SECURECONFIG EXEMPT ACCESS POLICY NONE
ORA_SECURECONFIG CREATE ANY LIBRARY NONE
ORA_SECURECONFIG GRANT ANY PRIVILEGE NONE
ORA_SECURECONFIG DROP ANY PROCEDURE NONE
ORA_SECURECONFIG ALTER ANY PROCEDURE NONE
ORA_SECURECONFIG CREATE ANY PROCEDURE NONE
ORA_SECURECONFIG ALTER DATABASE NONE
ORA_SECURECONFIG GRANT ANY ROLE NONE
ORA_SECURECONFIG DROP PUBLIC SYNONYM NONE
ORA_SECURECONFIG CREATE PUBLIC SYNONYM NONE
ORA_SECURECONFIG DROP ANY TABLE NONE
ORA_SECURECONFIG ALTER ANY TABLE NONE
ORA_SECURECONFIG CREATE ANY TABLE NONE
ORA_SECURECONFIG DROP USER NONE
ORA_SECURECONFIG CREATE USER NONE
ORA_SECURECONFIG AUDIT SYSTEM NONE
ORA_SECURECONFIG ALTER SYSTEM NONE
ORA_LOGON_FAILURES LOGON NONE
ORA_SECURECONFIG CREATE DATABASE LINK NONE
ORA_SECURECONFIG DROP DATABASE LINK NONE
ORA_SECURECONFIG ALTER USER NONE
ORA_SECURECONFIG CREATE ROLE NONE
ORA_SECURECONFIG DROP ROLE NONE
ORA_SECURECONFIG SET ROLE NONE
ORA_SECURECONFIG CREATE PROFILE NONE
ORA_SECURECONFIG DROP PROFILE NONE
ORA_SECURECONFIG ALTER PROFILE NONE
ORA_SECURECONFIG ALTER ROLE NONE
ORA_SECURECONFIG CREATE DIRECTORY NONE
ORA_SECURECONFIG DROP DIRECTORY NONE
ORA_SECURECONFIG ALTER DATABASE LINK NONE
ORA_SECURECONFIG CREATE PLUGGABLE DATABASE NONE
ORA_SECURECONFIG ALTER PLUGGABLE DATABASE NONE
ORA_SECURECONFIG DROP PLUGGABLE DATABASE NONE
ORA_SECURECONFIG EXECUTE NONE

If the $ORACLE_HOME/rdbms/admin/secconf.sql script was executed during database creation, you will have some other policies in your database, too, but they won't be enabled by default. These can be found by executing the following statement:
select policy_name,count(audit_option) "number of audits"
from AUDIT_UNIFIED_POLICIES
where policy_name not in (select unique policy_name from audit_unified_enabled_policies )
group by policy_name
order by 2 desc;
In my database, the following collection shows up as being created, but not yet enabled:
POLICY_NAME              number of audits
ORA_RAS_POLICY_MGMT                    33
ORA_CIS_RECOMMENDATIONS                26
ORA_RAS_SESSION_MGMT                   14
ORA_ACCOUNT_MGMT                        9
ORA_DATABASE_PARAMETER                  3
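
Enabling one of these policies is a single statement, for example:

audit policy ORA_CIS_RECOMMENDATIONS;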

Tuesday, September 12, 2017

How to create a container database in Oracle 12c

Create an initialization file in your $ORACLE_HOME/dbs folder:

enable_pluggable_database=true
processes=300
control_files="/u02/oradata/cdbveg/control01.ctl","/u03/oradata/cdbveg/control02.ctl"
audit_file_dest="/u01/oracle/admin/cdbveg/adump"
audit_trail=DB
compatible=12.1.0.2
db_block_size=8192
db_domain=""
db_name="cdbveg"
db_recovery_file_dest_size=429496729600
db_recovery_file_dest="/u04/fra"
diagnostic_dest=/u01/oracle
dispatchers="(PROTOCOL=TCP) (SERVICE=cdbvegXDB)"
local_listener = "(ADDRESS = (PROTOCOL=TCP)(HOST=myserver.mydomain.com)(PORT=1521))"
open_cursors=300
remote_login_passwordfile=EXCLUSIVE
undo_tablespace=UNDOTBS1
# use ASMM (Automatic Shared Memory Management)
sga_target=2G
sga_max_size=2G
pga_aggregate_target=128M
# Disable AMM (Automatic Memory Management)
memory_target=0
memory_max_target=0

Create the necessary directories:
mkdir -p /u02/oradata/cdbveg
mkdir -p /u03/oradata/cdbveg
mkdir -p /u01/oracle/admin/cdbveg/adump
mkdir -p /u02/oradata/cdbveg/seed
Set your environment variables correctly; in my case they were:
export ORACLE_SID=cdbveg
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=/u01/oracle/product/12102

Start an instance:
sqlplus / as sysdba 
startup nomount

Create your container database:
create database cdbveg
user sys identified by MySecretPasswd
user system identified by MySecretPasswd
logfile group 1(
 '/u02/oradata/cdbveg/red01a.log',
 '/u03/oradata/cdbveg/red01b.log'
) size 128M,
group 2 (
 '/u02/oradata/cdbveg/red02a.log',
 '/u03/oradata/cdbveg/red02b.log'
) size 128M
character set al32utf8 national character set al16utf16
extent management local datafile '/u02/oradata/cdbveg/system01.dbf' size 256M autoextend on next 16M maxsize unlimited
sysaux datafile '/u02/oradata/cdbveg/sysaux01.dbf' size 128M autoextend on next 16M maxsize unlimited
default temporary tablespace temp tempfile '/u02/oradata/cdbveg/temp.dbf' size 256M
undo tablespace undotbs1 datafile '/u02/oradata/cdbveg/undotbs01.dbf' size 256M
enable pluggable database
seed
file_name_convert = ('/u02/oradata/cdbveg','/u02/oradata/cdbveg/seed')
system datafiles size 128M autoextend on next 8M maxsize 2048M
sysaux datafiles size 64M
USER_DATA tablespace users datafile '/u02/oradata/cdbveg/seed/users.dbf' size 128M reuse autoextend on next 32M maxsize unlimited;

The above statement creates a container database, plus a seed PDB that follows the specifications listed under the keyword "seed" in the create statement above.
The /u02/oradata/cdbveg/seed directory will contain the datafiles used for future creations of PDBs based on the seed template.
In addition to the mandatory system and sysaux tablespaces, I add a tablespace for user data, too.

Finally, run the necessary scripts:
sqlplus / as sysdba 

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catblock.sql
SQL> @?/rdbms/admin/catproc.sql

SQL> alter session set "_ORACLE_SCRIPT"=false;

SQL> @?/rdbms/admin/catoctk.sql
SQL> @?/rdbms/admin/owminst.plb
SQL> @?/sqlplus/admin/pupbld.sql

Verify creation:
select name,con_id,dbid,open_mode from v$containers;

NAME                     CON_ID       DBID OPEN_MODE
-------------------- ---------- ---------- ------------------------------
CDB$ROOT                      1 3469467854 READ WRITE
PDB$SEED                      2  682995139 READ ONLY

Monday, September 11, 2017

Why is my PDB seemingly stuck in RESTRICTED mode?


Check out the view pdb_plug_in_violations - it will point you in the right direction.

In my case, I had created a pluggable database some days after my CDB was finished.

I had completely forgotten about a common user that I had created at that time:

-- create a common user for all CONTAINERS
-- common users must use the prefix c##
create user C##dba identified by MySecretPassword
default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all;

The new PDB was created without the USERS tablespace, which prevented it from being synchronized with the parent container.
The pdb_plug_in_violations contained the following message:

Sync PDB failed with ORA-959 during 'create user C##dba identified by *default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all'


To resolve the situation, connect to the container with the missing tablespace:
alter session set container=pdbveg2;
Make sure the session's OMF parameter is correctly set:
select name,value,DEFAULT_VALUE,ISDEFAULT,ISPDB_MODIFIABLE, ISSES_MODIFIABLE, DESCRIPTION
from V$PARAMETER where name like '%create_file_dest%';
NAME                 VALUE                        DEFAULT_VALUE  ISDEFAULT  ISPDB_MODIFIABLE  ISSES_MODIFIABLE  DESCRIPTION
db_create_file_dest  /u02/oradata/cdbveg/pdbveg2  NONE           TRUE       TRUE              TRUE              default database location

Create the missing tablespace:
create tablespace USERS
datafile size 8M autoextend on next 2M maxsize 2G;
Finally, close and reopen your pluggable database:
alter pluggable database pdbveg2 close;
alter pluggable database pdbveg2 open read write;
Check status:
select CON_ID,name,OPEN_MODE, RESTRICTED
from v$containers;
CON_ID  NAME     OPEN_MODE   RESTRICTED
     4  PDBVEG2  READ WRITE  NO

How to create and drop a pluggable database in Oracle 12c


This method uses OMF (Oracle Managed Files):

1. Create directory on host:
mkdir -p /u02/oradata/cdbveg/pdbveg2

2. Set session parameter to enable OMF:
alter session set db_create_file_dest='/u02/oradata/cdbveg/pdbveg2';

3. Create a pluggable database with a new and local PDBA:
create PLUGGABLE database PDBVEG2
admin user VEGARD
identified by "MySecretPassword"
roles=(DBA);

The statement above copies data files from the seed PDB to the target directory specified in step 2.
It will also grant the local role PDB_DBA to the new PDBA, Vegard.

4. Check the status of the newly created PDB:
select pdb_name,status,con_id from CDB_PDBS;

PDB_NAME  STATUS  CON_ID
PDBVEG1   NORMAL       3
PDB$SEED  NORMAL       2
PDBVEG2   NEW          4

5. Open the pluggable database:
alter pluggable database pdbveg2 open;

6. Check status again:
select pdb_name,status,con_id from CDB_PDBS;
PDB_NAME  STATUS  CON_ID
PDBVEG1   NORMAL       3
PDB$SEED  NORMAL       2
PDBVEG2   NORMAL       4

To drop a pluggable database, perform the following steps:
alter pluggable database PDB1 close;
-- Unplug the database. If you try to drop it before unplugging it, Oracle reports
-- ORA-65179: cannot keep datafiles for a pluggable database that is not unplugged
alter pluggable database PDB1 unplug into '/tmp/PDB1.xml';
drop pluggable database PDB1;
If you want Oracle to remove the physical files from disk for you in the same operation, use the following syntax:
drop pluggable database PDB1 including datafiles;

Saturday, September 9, 2017

What is the "datapatch" utility released in Oracle 12c?

From Datapatch: Database 12c Post Patch SQL Automation (Doc ID 1585822.1):

Datapatch is the new tool that enables automation of post-patch SQL actions for RDBMS patches.

and

Datapatch can be executed to complete post-patch SQL actions after the database is restarted following patch application.
For patches that do not have post-patch SQL actions to be performed, calling datapatch is a no-op.
For patches that do have post-patch SQL instructions to be invoked on the database instance, datapatch will automatically detect ALL pending actions (from one installed patch or multiple installed patches) and complete the actions as appropriate.


How does Oracle detect ALL pending actions?

Datapatch determines the requisite apply/rollback actions by matching an internal repository with the patch inventory.


When should datapatch be invoked?

Datapatch should be invoked when the database is restarted after a patching session.

From 12.1.0.2 onwards, Oracle points out that they have made some changes to the datapatch utility. The most important, in my opinion, is that datapatch now assumes "the role of catbundle when applying bundles/PSU."

This means that

  • catbundle.sql has been deprecated and the patch registry is maintained only in registry$sqlpatch.
  • Application of the PSU no longer updates the registry$history table.
  • The status for patch application is now entirely maintained in registry$sqlpatch.
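
You can inspect the patch history that datapatch maintains with a query like this:

select patch_id, version, action, status, description
from dba_registry_sqlpatch
order by action_time;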


Here's abbreviated output from using datapatch as part of applying Oracle's latest PSU, Database Patch Set Update 12.1.0.2.170814 (includes CPUJul2017):

1. Download and unpack the patch
[oracle@lx01oric software]$ unzip p26609783_121020_Linux-x86-64.zip
2. Shutdown your oracle instance and the listener
3. Check for conflicts:
[oracle@lx01oric software]$ cd 26609783
oracle@lx01oric 26609783]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/oracle/product/db/12.1/12.1.0.2.0
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/db/12.1/12.1.0.2.0/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_13-41-14PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.

4. Apply the patch:
[oracle@lx01oric 26609783]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.9
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/product/db/12.1/12.1.0.2.0
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/db/12.1/12.1.0.2.0/oraInst.loc
OPatch version    : 12.2.0.1.9
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_14-16-35PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   19769480  20299023  20831110  21359755  21948354  22291127  23054246  24006101  24732082  25171037  25755742  26609783  

Do you want to proceed? [y|n]

... output abbreviated.

Composite patch 26609783 successfully applied.
Log file location: /u01/oracle/product/db/12.1/12.1.0.2.0/cfgtoollogs/opatch/opatch2017-09-06_14-16-35PM_1.log

OPatch succeeded.

5. Start your database. When opened, execute datapatch:
[oracle@lx01oric] cd $ORACLE_HOME/OPatch
[oracle@lx01oric OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Wed Sep  6 14:54:49 2017
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_16001_2017_09_06_14_54_50/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...
Bootstrap timed out after 240 seconds
Retrying bootstrap after retryable errors....done
Determining current state...done

Current state of SQL patches:
Bundle series PSU:
  ID 170814 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      26609783 (DATABASE PATCH SET UPDATE 12.1.0.2.170814)

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 26609783 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_CDBROOT_2017Sep06_15_01_26.log (no errors)
Patch 26609783 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_PDBSEED_2017Sep06_15_09_34.log (no errors)
SQL Patching tool complete on Wed Sep  6 15:18:53 2017
[oracle@lx01oric OPatch]$ vi /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_PDBSEED_2017Sep06_15_09_34.log
6. If applicable, activate the OJVM Mitigation patch.

According to Oracle, this patch is applicable in "situations where the latest OJVM PSU cannot be installed immediately... The "Mitigation Patch" is an interim solution to protect against all currently known (Jul 2015) Oracle JavaVM security vulnerabilities in the database until such time as the OJVM PSU can be installed."

Since this is my situation, I install the mitigation patch:
[oracle@lx01oric 26609783]$ cd $ORACLE_HOME/rdbms/admin
[oracle@lx01oric admin]$ sqlplus / as sysdba @dbmsjdev.sql
[oracle@lx01oric admin]$ sqlplus / as sysdba
SQL> exec dbms_java_dev.disable;
PL/SQL procedure successfully completed.
SQL>