Wednesday, October 30, 2013

How to create a DDL trigger in a schema



CREATE TABLE AUDIT_DDL (
 d date,
 OSUSER varchar2(255),
 CURRENT_USER varchar2(255),
 HOST varchar2(255),
 TERMINAL varchar2(255),
 owner varchar2(30),
 type varchar2(30),
 name varchar2(30),
 sysevent varchar2(30));


connect uid/pwd
CREATE OR REPLACE TRIGGER audit_ddl_trg
AFTER DDL ON SCHEMA
BEGIN
  IF (ora_sysevent = 'TRUNCATE') THEN
    NULL; -- do nothing if we do not care about truncates
  ELSE
    INSERT INTO audit_ddl (d, osuser, current_user, host, terminal, owner, type, name, sysevent)
    VALUES (
      sysdate,
      sys_context('USERENV','OS_USER'),
      sys_context('USERENV','CURRENT_USER'),
      sys_context('USERENV','HOST'),
      sys_context('USERENV','TERMINAL'),
      ora_dict_obj_owner,
      ora_dict_obj_type,
      ora_dict_obj_name,
      ora_sysevent
    );
  END IF;
END;
/
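
To verify that the trigger fires, run any DDL statement in the schema and check the audit table. A minimal test, assuming the table and trigger above were created in the current schema:

CREATE TABLE t_demo (x NUMBER);

SELECT d, osuser, owner, type, name, sysevent
FROM audit_ddl
ORDER BY d DESC;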


Source: Don Burleson http://www.dba-oracle.com/t_ddl_triggers.htm

How to print the predefined error message for an error number in PL/SQL

dbms_output.put_line(sqlerrm(sqlcode));
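
SQLERRM and SQLCODE are typically used inside an exception handler. A minimal sketch (the table name T_MISSING is made up, so the DROP fails on purpose):

set serveroutput on

BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE t_missing';
EXCEPTION
  WHEN OTHERS THEN
    -- SQLCODE holds the current error number; SQLERRM translates it to the message text
    dbms_output.put_line(sqlerrm(sqlcode));
END;
/

This prints something like "ORA-00942: table or view does not exist".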

Tuesday, October 29, 2013

How to set the current schema in a session

alter session set current_schema=SCOTT;

The CURRENT_SCHEMA parameter changes the current schema of the session to the specified schema.
Subsequent unqualified references to schema objects during the session will resolve to objects in the specified schema.
The setting persists for the duration of the session or until you issue another ALTER SESSION SET CURRENT_SCHEMA statement.

This setting offers a convenient way to perform operations on objects in a schema other than that of the current user
without having to qualify the objects with the schema name.

This setting changes the current schema, but it does not change the session user or the current user,
nor does it give the session user any additional system or object privileges for the session.
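
For example, after switching the current schema, an unqualified table name resolves in SCOTT's schema, while the session user stays the same (assuming SCOTT owns an EMP table):

alter session set current_schema=SCOTT;

-- this now resolves to SCOTT.EMP:
select count(*) from emp;

select sys_context('USERENV','SESSION_USER') as sess_user,
       sys_context('USERENV','CURRENT_SCHEMA') as curr_schema
from dual;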

Source: Oracle Documentation

Monday, October 28, 2013

How to exclude certain file systems when searching for files belonging to tablespaces

This statement can be useful, for example, when reorganizing databases.


SELECT file_name FROM dba_data_files
WHERE file_name NOT LIKE '/data/oracle/u01%'
INTERSECT
SELECT file_name FROM dba_data_files
WHERE file_name NOT LIKE '/data/oracle/u02%';
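
The INTERSECT above is equivalent to combining the two predicates with AND in a single query, which may be easier to read:

SELECT file_name FROM dba_data_files
WHERE file_name NOT LIKE '/data/oracle/u01%'
AND file_name NOT LIKE '/data/oracle/u02%';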

Examples of expdp and impdp parameter files, and some useful directives


Export:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=expdp_log_file_name.log
JOB_NAME=EXP_DATA
PARALLEL=8
SCHEMAS=schema1,[schema2],[schema n]
EXCLUDE=STATISTICS

To get a consistent dump file, add the flashback_time or flashback_scn directive, for example:
FLASHBACK_TIME="to_timestamp(to_char(sysdate,'ddmmyyyy hh24:mi:ss'),'ddmmyyyy hh24:mi:ss')" 
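
Alternatively, pin the export to a specific SCN with FLASHBACK_SCN. The current SCN can be found with:

SELECT current_scn FROM v$database;

and then be used in the parameter file (the value below is just an example):

FLASHBACK_SCN=1234567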

Import:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=impdp_log_file_name.log
JOB_NAME=IMP_DATA
SCHEMAS=schema1,[schema2],[schema n]
REMAP_SCHEMA=SCOTT:JAMES
REMAP_TABLE=SCOTT.MYTABLE:MYTABLE_INTERIM
REMAP_TABLESPACE=USERS:TOOLS
PARALLEL=8
TRANSFORM=oid:n
TABLE_EXISTS_ACTION=REPLACE

Note: the directive "TRANSFORM=oid:n" makes sure all imported objects are assigned a new OID instead of inheriting the old OID from the exported objects.

You can remap several tablespaces during the same run; just specify the directive multiple times in the same parameter file:

remap_tablespace=USER_DATA:HISTORICAL_DATA
remap_tablespace=APP_DATA:HISTORICAL_DATA
remap_tablespace=LOB_DATA:HISTORICAL_DATA


To limit export or import to specific tables only, use:
tables=SCOTT.EMP,
SCOTT.DEPT,
SCOTT.SALARIES

To limit export or import to specific partitions only, use:
TABLES=SCOTT.SALES:P_201701

To limit export or import to partitions matching a specific pattern, use:
TABLES=SCOTT.SALES:CONTRACT%2021%

Note that you cannot use both SCHEMAS and TABLES as directives in the same export/import session.


Other useful directives:

CONTENT=METADATA_ONLY (only export object definition, not data)
EXCLUDE=TABLE:"LIKE 'ABC%_TMP'" (exclude tables based on name pattern)
EXCLUDE=TABLE:"= 'BONUS'" (exclude an entire table and its partitions, if any)
EXCLUDE=INDEX:"LIKE 'EMP%'" (exclude indexes that match a certain string)
EXCLUDE=VIEW,PACKAGE,FUNCTION (exclude certain types of objects)
EXCLUDE=TRIGGER:"IN ('TRIG1', 'TRIG2')", INDEX:"='INDX1'", REF_CONSTRAINT (exclude specific triggers, a specific index and referential constraints)
EXCLUDE=SEQUENCE, TABLE:"IN ('EMP', 'DEPT')" (exclude all sequences and specific tables)
EXCLUDE=INDEX:"='MY_INDX'" (exclude a specific index)
EXCLUDE=MATERIALIZED_VIEW (exclude materialized views)
EXCLUDE=MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG (exclude materialized views + mv logs)


INCLUDE=FUNCTION, PACKAGE, TABLE:"='EMP'" (only include specific object types, and the table named "EMP")
INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'" (only include procedures whose names match a specific pattern)

QUERY=SCOTT.TABLE1:"WHERE 1=0" (no data is being exported, but the table is created)
QUERY=SCOTT.TABLE2:"WHERE SALES_DATE >= to_date('01.12.2011','dd.MM.yyyy')" (exports only rows with SALES_DATE on or after 01.12.2011)
EXCLUDE=SCHEMA:"='SCOTT'" (exclude a specific user and all objects of that user)
Note:
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'SCOTT'", only the CREATE USER scott DDL statement will be excluded, and you may not get the results you expect.

To import only the user definition over a database link

See this post.

To compress tables on import

See this post.

A note on integrity checking with data pump import's TABLE_EXISTS_ACTION

TRUNCATE is subject to stricter integrity checking than REPLACE is.

• TRUNCATE deletes existing rows and then loads rows from the source.
• REPLACE drops the existing table and then creates and loads it from the source.

• When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
• When you use TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored.
• For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded.
• When you use TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.

If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints.
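
A sketch of that approach, with made-up table and constraint names:

ALTER TABLE scott.emp DISABLE CONSTRAINT emp_dept_fk;

-- run the import, then delete the rows violating the constraint, e.g.:
DELETE FROM scott.emp e
WHERE NOT EXISTS (SELECT 1 FROM scott.dept d WHERE d.deptno = e.deptno);

ALTER TABLE scott.emp ENABLE CONSTRAINT emp_dept_fk;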

A note about excluding constraints
The following constraints cannot be excluded:
• NOT NULL constraints.
• Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).

Examples:

Exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading:

EXCLUDE=CONSTRAINT

Exclude referential integrity (foreign key) constraints:
EXCLUDE=REF_CONSTRAINT

Turn on compression during export:
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}

For most cases, I would stick to the MEDIUM compression algorithm. Oracle states in the 12.2 documentation:
Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.

I find compression of exports very useful when space is an issue, or transfer between servers over the network is needed.
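
A minimal sketch of the relevant lines in an export parameter file (note that COMPRESSION=ALL or DATA_ONLY requires the Advanced Compression option, and COMPRESSION_ALGORITHM requires 12.2 or later):

COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM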


Sources: Oracle 11g documentation, Oracle 12cR2 documentation, My Oracle Support

How to only export specific partitions using Oracle's data pump utility

In your parameter file, use the following notation:

TABLES=SCOTT.TABLE1:PART_1

You can export multiple partitions in the same run:

TABLES=SCOTT.TABLE1:PART_1,
TABLES=SCOTT.TABLE2:PART_3,
TABLES=SCOTT.TABLE2:PART_4


Note that you may not export data from multiple schemas in this mode; in other words, the notation:

TABLES=SCOTT.TABLE1:PART_1,
TABLES=HR.TABLE1:PART_1

will throw an error:

UDE-00012: table mode exports only allow objects from one schema

Wednesday, October 23, 2013

How to create a Flashback Data Archive

Remember that tablespaces used for flashback data archives must use ASSM (Automatic Segment Space Management).

Create the tablespaces (here I am using Oracle Managed Files, so no file specification is necessary):

SQL> CREATE TABLESPACE LAST_MONTH;

Tablespace created.

SQL> CREATE TABLESPACE LAST_YEAR;

Tablespace created.

Next, create the flashback archive containers:

SQL> CREATE FLASHBACK ARCHIVE ONE_MONTH TABLESPACE LAST_MONTH RETENTION 1 MONTH;

Flashback archive created.

SQL> CREATE FLASHBACK ARCHIVE ONE_YEAR TABLESPACE LAST_YEAR RETENTION 1 YEAR;

Flashback archive created.


Prepare the table so it can use flashback archive:

SQL> ALTER TABLE SCOTT.CUSTOMERS ENABLE ROW MOVEMENT;

Table altered.

Enable flashback archiving by assigning the table to the appropriate flashback archive container:

SQL> ALTER TABLE SCOTT.CUSTOMERS FLASHBACK ARCHIVE ONE_YEAR;

Table altered.
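
Once archiving is enabled, historical row versions can be retrieved with a flashback query; a minimal sketch (history is of course only available from the point archiving was enabled):

SQL> SELECT * FROM SCOTT.CUSTOMERS AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);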


To reverse the operations:

SQL> ALTER TABLE SCOTT.CUSTOMERS NO FLASHBACK ARCHIVE;

Table altered.

SQL> DROP FLASHBACK ARCHIVE ONE_YEAR;

Flashback archive dropped.

SQL> DROP FLASHBACK ARCHIVE ONE_MONTH;

Flashback archive dropped.

Useful views:
  • DBA_FLASHBACK_ARCHIVE
  • DBA_FLASHBACK_ARCHIVE_TABLES
  • DBA_FLASHBACK_ARCHIVE_TS
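
For example, to see which tables are being tracked and by which flashback archive:

SQL> SELECT owner_name, table_name, flashback_archive_name FROM DBA_FLASHBACK_ARCHIVE_TABLES;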