Monday, December 16, 2013

Is Java installed in my database?

Check the following views to confirm whether Java is part of your Oracle installation:

ALL_REGISTRY_BANNERS displays the valid components loaded into the database.
SELECT * FROM ALL_REGISTRY_BANNERS
WHERE   UPPER(BANNER) LIKE '%JAVA%';
If Java is installed, the query would typically return:
BANNER
--------------------------------------------------------------------------------
JServer JAVA Virtual Machine Release 11.1.0.7.0 - Production
Oracle Database Java Packages Release 11.1.0.7.0 - Production
DBA_REGISTRY displays information about the components loaded into the database.
SELECT  COMP_NAME, VERSION, STATUS
FROM    DBA_REGISTRY
WHERE   UPPER(COMP_NAME) LIKE '%JAVA%';
If Java is installed, the query would typically return:
COMP_NAME                                VERSION              STATUS
---------------------------------------- -------------------- ---------------------------------
JServer JAVA Virtual Machine             11.1.0.7.0           VALID
Oracle Database Java Packages            11.1.0.7.0           VALID

V$OPTION lists database options and features.
Some options must be separately purchased and installed, while other features come with the product and are enabled based on the product that is running (Standard Edition, Enterprise Edition, or Personal Oracle).
SELECT * FROM V$OPTION 
WHERE PARAMETER = 'Java';
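If Java is included in your edition, the query would typically return something like the following (the exact formatting may vary between versions):

```
PARAMETER                                VALUE
---------------------------------------- ----------------
Java                                     TRUE
```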

Source: Oracle Documentation

Friday, December 13, 2013

SQL statement to list all database objects and their sizes

Use the WITH clause to create two different subqueries that you can reference later in the query:
WITH
  OBJECT_COUNT AS
   (
    SELECT OWNER, OBJECT_TYPE, COUNT(*) "NUM_OBJECTS"
    FROM DBA_OBJECTS
    GROUP BY OWNER, OBJECT_TYPE
   ),
  SPACE_USAGE AS
   (
    SELECT OWNER, SEGMENT_TYPE, SUM(BYTES) "BYTES"
    FROM DBA_SEGMENTS
    GROUP BY OWNER, SEGMENT_TYPE
   )
SELECT  O.OWNER, O.OBJECT_TYPE, O.NUM_OBJECTS, ROUND(U.BYTES/1024/1024) "MB"
FROM    OBJECT_COUNT O LEFT OUTER JOIN SPACE_USAGE U
ON      O.OWNER = U.OWNER
AND     O.OBJECT_TYPE = U.SEGMENT_TYPE
ORDER BY 1 ASC;

Thursday, December 12, 2013

SQL statement to find all referential integrity constraints that relate to a specific table



Here is how to use a self-join of the view USER_CONSTRAINTS to find constraints that refer to a specific table. Note that you can also use DBA_CONSTRAINTS if logged in as a privileged user.

First, set up a couple of extra tables in the SCOTT schema to illustrate the point (the existing SCOTT objects contain only one foreign key by default):

CONNECT SCOTT/TIGER
CREATE TABLE PORTS
 (
    PORT_ID     NUMBER,
    PORT_NAME   VARCHAR2 (20),
    COUNTRY     VARCHAR2 (40),
    CAPACITY    NUMBER
 );


 CREATE TABLE SHIPS
 (
    SHIP_ID        NUMBER,
    SHIP_NAME      VARCHAR2 (20),
    CAPACITY       NUMBER,
    LENGTH         NUMBER,
    HOME_PORT_ID   NUMBER
 );

 ALTER TABLE SHIPS
 ADD CONSTRAINT PK_SHIPS PRIMARY KEY (SHIP_ID);

 ALTER TABLE PORTS
 ADD CONSTRAINT PK_PORTS PRIMARY KEY (PORT_ID);

 ALTER TABLE SHIPS
 ADD CONSTRAINT SHIPS_PORTS_FK FOREIGN KEY (HOME_PORT_ID)
 REFERENCES PORTS (PORT_ID);

Now, let's find the relationships between the Foreign key constraints and their corresponding unique key constraint:
CONNECT SCOTT/TIGER
SET LINES 200
SET HEADING OFF
SET FEEDBACK OFF
SET PAGES 0
SELECT    B.CONSTRAINT_NAME
        || ' on table '
        || B.TABLE_NAME
        || ' points to table '
        || A.TABLE_NAME
           "Relationship"
   FROM USER_CONSTRAINTS A
        FULL OUTER JOIN USER_CONSTRAINTS B
           ON A.CONSTRAINT_NAME = B.R_CONSTRAINT_NAME
  WHERE A.OWNER = 'SCOTT' AND B.R_CONSTRAINT_NAME IS NOT NULL;
Here you can see the relationships between the constraints and their tables:
Relationship
SHIPS_PORTS_FK on table SHIPS points to table PORTS
FK_DEPTNO on table EMP points to table DEPT

In my case I wanted to drop the constraints, so I concatenated the columns with strings to generate a "drop constraint" script:
CONNECT SCOTT/TIGER

 SELECT    'alter table '
        || B.OWNER
        || '.'
        || B.TABLE_NAME
        || ' drop constraint '
        || B.CONSTRAINT_NAME
        || ';'
  FROM USER_CONSTRAINTS A
        FULL OUTER JOIN USER_CONSTRAINTS B
           ON A.CONSTRAINT_NAME = B.R_CONSTRAINT_NAME
  WHERE A.OWNER = 'SCOTT' AND B.R_CONSTRAINT_NAME IS NOT NULL;
Returned rows:
alter table SCOTT.SHIPS drop constraint SHIPS_PORTS_FK;
 alter table SCOTT.EMP drop constraint FK_DEPTNO;

Note that you can achieve the same goal by using a subquery instead of a FULL OUTER JOIN:
CONNECT SCOTT/TIGER
 SELECT OWNER, TABLE_NAME, CONSTRAINT_NAME
 FROM   USER_CONSTRAINTS
 WHERE  R_CONSTRAINT_NAME IN (
          SELECT CONSTRAINT_NAME
          FROM   USER_CONSTRAINTS
          WHERE  CONSTRAINT_TYPE = 'P'
          AND    OWNER = 'SCOTT'
 );

Output:
OWNER  TABLE_NAME  CONSTRAINT_NAME
SCOTT  EMP         FK_DEPTNO
SCOTT  SHIPS       SHIPS_PORTS_FK






Friday, December 6, 2013

Two different ways to retrieve the current System Change Number

Use either of the following methods:

SQL> col current_scn format 9999999999999
SQL> select current_scn from v$database;

   CURRENT_SCN
--------------
 6525378448555

SQL> col GET_SYSTEM_CHANGE_NUMBER format 9999999999999
SQL> select dbms_flashback.get_system_change_number from dual;

GET_SYSTEM_CHANGE_NUMBER
------------------------
           6525378448442
Oracle officially recommends that if your goal is to obtain an SCN from within an application, you should use the dbms_flashback.get_system_change_number function.

Friday, November 29, 2013

How to create a sequence and a trigger that will emulate "autonumber" in other RDBMS systems

If you have been working with other RDBMS systems, such as Microsoft SQL Server, you may be used to an "autonumber"-style datatype, which automatically generates a fresh number and thereby a unique value in your table. Perfect for generating primary key values.
In Oracle, no such data type exists. To achieve the same functionality, you need to create two additional objects: a sequence and a trigger. The trigger fires on certain events; the sequence generates a new number. Here is a simple example of how to set it up.
CREATE TABLE ADDRESSES
(
 ADDRESS_ID NUMBER,
 EMPLOYEE_ID NUMBER,
 STREET_ADDRESS VARCHAR2(40),
 STREET_ADDRESS2 VARCHAR2(40),
 CITY VARCHAR2 (30),
 STATE VARCHAR2(2),
 ZIP VARCHAR2(5),
 ZIP_PLUS VARCHAR2(4),
 COUNTRY VARCHAR2(20),
 CONSTRAINT PK_ADDRESSES PRIMARY KEY (ADDRESS_ID)-- "OUT-OF-LINE" CONSTRAINT CREATION
);

CREATE SEQUENCE ADDRESSES_SEQ
 START WITH 1
 INCREMENT BY 1
 MINVALUE 1
 MAXVALUE 1000000
 NOCACHE
 NOCYCLE
 NOORDER;

CREATE OR REPLACE TRIGGER ADDRESSES_TRG
BEFORE INSERT
ON ADDRESSES
FOR EACH ROW
WHEN (NEW.ADDRESS_ID IS NULL)
BEGIN
 SELECT ADDRESSES_SEQ.NEXTVAL
 INTO   :NEW.ADDRESS_ID
 FROM   DUAL;
END;
/
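With these objects in place, an INSERT that omits ADDRESS_ID gets its value from the trigger. A sketch (the column values below are made up):

```sql
INSERT INTO ADDRESSES (STREET_ADDRESS, CITY, COUNTRY)
VALUES ('Storgata 1', 'Oslo', 'Norway');

-- The trigger has silently fetched ADDRESSES_SEQ.NEXTVAL:
SELECT ADDRESS_ID, CITY FROM ADDRESSES;
```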

Wednesday, November 27, 2013

How does the 11gR2 feature "deferred segment creation" work?

Starting with 11gR2, Oracle introduced a feature called deferred segment creation. This means that Oracle does not create any segments for a table until you actually insert rows into it.

Oracle creates all needed metadata about the new table in the data dictionary, but doesn't actually allocate any space in the tablespace. This applies to other segment types, such as indexes and table/index partitions, as well.

Syntax:

-- Without deferred segment creation
CREATE TABLE TEST(tab_id number)
SEGMENT CREATION IMMEDIATE
TABLESPACE USERS;

-- With deferred segment creation
CREATE TABLE TEST (tab_id number)
SEGMENT CREATION DEFERRED
TABLESPACE USERS;


The parameter deferred_segment_creation controls the default behavior:
SQL> show parameter deferred

NAME_COL_PLUS_SHOW_PARAM     TYPE        VALUE
---------------------------- ----------- --------
deferred_segment_creation    boolean     TRUE
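To verify the behavior, you can check the SEGMENT_CREATED column in USER_TABLES. A sketch, assuming the TEST table created with SEGMENT CREATION DEFERRED above:

```sql
SELECT TABLE_NAME, SEGMENT_CREATED
FROM   USER_TABLES
WHERE  TABLE_NAME = 'TEST';

-- SEGMENT_CREATED shows NO until the first row is inserted, then YES.
```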

What is the difference between automatic segment space management and manual segment space management?

In a locally managed tablespace, there are two methods that Oracle Database can use to manage segment space: automatic and manual.

Manual segment space management uses linked lists called "freelists" to manage free space in the segment, while automatic segment space management uses bitmaps.

Automatic segment space management is the more efficient method, and is the default for all new permanent, locally managed tablespaces.
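The method is chosen per tablespace with the SEGMENT SPACE MANAGEMENT clause. A sketch (the tablespace names, file names and sizes below are examples):

```sql
-- Automatic segment space management (bitmaps) - the default:
CREATE TABLESPACE ASSM_DATA
  DATAFILE '/u01/oradata/mydb/assm_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Manual segment space management (freelists):
CREATE TABLESPACE MSSM_DATA
  DATAFILE '/u01/oradata/mydb/mssm_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT MANUAL;
```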

How to identify processes that are consuming lots of CPU power

Here is one way, which I have used successfully in the past, to identify processes that are consuming lots of CPU on your server.

First, use the mpstat command, which prints CPU statistics a given number of times at a given interval (in seconds).
For example, mpstat 5 5
would yield the following output:
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw syscl usr sys  wt idl
  0    1   0    0  345  224  589  220    0    0   0   799  29   1   0  70
  0    1   0    0  302  200  752  371    0    0   0  1191  99   1   0   0
  0    0   0    0  341  221  767  375    0    0   0  1301 100   0   0   0
  0    0   0    0  411  256  776  378    0    0   0  1313  99   1   0   0
  0    0   0    0  382  241  738  363    0    0   0  1163  97   3   0   0

In the output sample above, 4 of the 5 samples show CPU 0 with:

* a combined user time and system time of 100
* an idle time of 0

(see the column headings usr, sys and idl).

This indicates that the CPU is completely consumed on this system.

After gathering the data from mpstat, which indicates that the system CPU resources are overtaxed, you can use prstat to identify which processes are consuming the CPU resources.

The prstat -s cpu -n 5 command is used to list the five processes that are consuming the most CPU resources.

* The -s cpu flag tells prstat to sort the output by CPU usage.
* The -n 5 flag tells prstat to restrict the output to the top five processes.
prstat -s cpu -n 5 

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
 25906 oracle   2692M 1912M cpu2     0    0   0:34:39 3.1% oracle/11
 25472 oracle    133M  130M cpu1     0    0   0:00:47 0.4% emagent/7
  8709 oracle     33M   19M cpu28   10    0   0:00:03 0.4% perl/1
 19404 root       88M   24M sleep   21    0 141:23:44 0.3% vxconfigd/1
  9767 root     5752K 5224K sleep   59    0   0:00:01 0.2% vmstat/1

How to find size, free space, used space, allocated space for the tablespaces in your database

SELECT 
        F.FILE_ID,
        F.FILE_NAME,
        ROUND(F.BYTES/1024/1024) "Allocated(MB)",
        NVL(TO_CHAR(ROUND(F.BYTES/1024/1024 - SUM(E.BYTES/1024/1024),1)),'File full') "Used (MB)",
        NVL(TO_CHAR(ROUND(SUM(E.BYTES/1024/1024),1)), '0') "Free (MB)",
        LPAD(DECODE(F.AUTOEXTENSIBLE,'YES', 'JA','NO','NEIN!'), 11,' ') "Autoextend?",
        LPAD(DECODE(ROUND(F.MAXBYTES/1024/1024),0,'N/A',ROUND(F.MAXBYTES/1024/1024)),15,' ') "Max size (MB)"
FROM    DBA_DATA_FILES F LEFT OUTER JOIN DBA_FREE_SPACE E
ON      E.FILE_ID = F.FILE_ID
GROUP BY F.FILE_ID,F.FILE_NAME,F.BYTES,AUTOEXTENSIBLE,MAXBYTES
ORDER BY 1;

For a specific tablespace:
alter session set nls_language='american';
SET LINES 400
COL "File ID" format 999
COL "Tablespace name" format a20
COL "File name" format a50
COL "Available space (MB)" format 9999999999
COL "Free space (MB)" format a20
COL "Percentage used" format a20
COL "Max size (MB)" format a20
COL "Can file grow?" format a20
 
WITH   free_spc AS (
                  SELECT FILE_ID, SUM(BYTES/1024/1024) AS MBFree
                  FROM DBA_FREE_SPACE fre
                  WHERE TABLESPACE_NAME = UPPER('&&tablespace')
                  GROUP BY FILE_ID
          )
SELECT  F.FILE_ID "File ID",
             F.TABLESPACE_NAME "Tablespace name",
             F.FILE_NAME "File name",
             ROUND(F.USER_BYTES/1024/1024) "Available space (MB)",
             NVL( TO_CHAR(ROUND(FS.MBFree)), 'NADA') "Free space (MB)",
             LPAD(NVL(ROUND(( ROUND(F.BYTES/1024/1024 - FS.MBFree)/ROUND(F.BYTES/1024/1024) )  * 100),100),15,' ') || ' %' "Percentage used",
             LPAD(DECODE(F.AUTOEXTENSIBLE,'YES', 'JA','NO','NEIN!'), 11,' ') "Can file grow?",
             LPAD(DECODE(ROUND(F.MAXBYTES/1024/1024),0,'N/A',ROUND(F.MAXBYTES/1024/1024)),15,' ') "Max size (MB)"
FROM DBA_DATA_FILES F LEFT JOIN free_spc FS
ON F.FILE_ID = FS.FILE_ID
WHERE F.TABLESPACE_NAME = UPPER('&&tablespace')
ORDER BY F.FILE_ID ASC;

How to compare the contents of memory parameters and spfile parameters

Compare the current in-memory parameter values (V$PARAMETER) with the values stored in the spfile (V$SPPARAMETER):
set pages 100
set lines 400
col init_value format a36
col spfile_value format a36
col name format a38
SELECT P.NAME, P.VALUE INIT_VALUE, S.VALUE SPFILE_VALUE 
FROM   V$PARAMETER P 
    JOIN V$SPPARAMETER S ON P.NAME=S.NAME 
    WHERE P.ISBASIC='TRUE' 
    ORDER BY NAME;

How to clean away control characters from a text file

I found this little example on the internet and saved it as a shorthand - it has come in handy many times. Unfortunately I have forgotten the name of the original author.

When files are sent from Windows to unix, you will sometimes see that they have ^M's in them when you open them in a unix text editor.

For example
------------------------------------------------------------------
import java.util.Hashtable; ^M
import java.util.Properties; ^Mimport java.io.IOException;
import org.xml.sax.AttributeList; ^M
import org.xml.sax.HandlerBase; ^Mimport org.xml.sax.SAXException;

/**^M
  * XMLHandler: This class parses the elements contained^M
  * within a XML message and builds a Hashtable^M

------------------------------------------------------------------
Notice that some programs are not consistent in the way they insert line breaks, so you end up with some lines that have both a carriage return and a ^M, and some
lines that have a ^M and no carriage return (and so blend into one). There are two steps to clean this up.

1. replace all extraneous ^M:

:%s/^M$//g

BE SURE YOU MAKE the ^M USING "CTRL-V CTRL-M", NOT BY TYPING CARET AND M! This expression replaces all the ^M's that have line breaks
after them with nothing. (The dollar sign anchors the search to the end of a line.)

2. replace all ^M's that need to become line breaks:

:%s/^M/\r/g

Once again: BE SURE YOU MAKE the ^M USING "CTRL-V CTRL-M", NOT BY TYPING CARET AND M! This expression replaces all the ^M's that didn't have
line breaks after them with a real line break (vim interprets \r in the replacement part as a newline).

Voila! Clean file. Map this to something if you do it frequently.
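If you prefer to do the same cleanup outside the editor, here is a sketch using standard unix tools (GNU sed is assumed for the \r escape; the file names are made up):

```shell
# Step 1: strip carriage returns that sit just before a line break.
# Step 2: turn any remaining carriage returns into proper line breaks.
printf 'import a;\r\nimport b;\rimport c;\n' > dosfile.txt
sed -e 's/\r$//' dosfile.txt | tr '\r' '\n' > unixfile.txt
cat unixfile.txt
```

On many systems, the dos2unix utility achieves the same result for the simple CRLF case.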

How to write a procedure that lets developers set the time for the database using the FIXED_DATE parameter

The following procedure may be used to let designated database developers change the system time according to their testing needs.

CREATE OR REPLACE PACKAGE admin_pack AUTHID DEFINER
AS
 PROCEDURE set_fixed_date(vv_fixed_date VARCHAR2);
 PROCEDURE unset_fixed_date;
END admin_pack;
/

CREATE OR REPLACE PACKAGE body admin_pack
AS
PROCEDURE set_fixed_date (vv_fixed_date VARCHAR2) IS
 BEGIN
  IF UPPER(vv_fixed_date) = 'NONE' THEN
     EXECUTE IMMEDIATE 'alter system set fixed_date=none';
  ELSE
     EXECUTE IMMEDIATE 'alter system set fixed_date=''' || vv_fixed_date || '''';
  END IF;
END;
PROCEDURE unset_fixed_date IS
 BEGIN
  EXECUTE IMMEDIATE 'alter system set fixed_date=none';
 END;
END admin_pack;
/


CREATE PUBLIC SYNONYM ADMIN_PACK FOR SYS.ADMIN_PACK;
GRANT EXECUTE ON ADMIN_PACK TO SCOTT;
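A usage sketch, run as a user granted execute on the package (the date value below is made up; FIXED_DATE accepts the format YYYY-MM-DD-HH24:MI:SS):

```sql
EXEC admin_pack.set_fixed_date('2013-12-24-12:00:00')

-- SYSDATE is now frozen at the fixed point in time:
SELECT SYSDATE FROM DUAL;

-- Return to normal clock behavior:
EXEC admin_pack.unset_fixed_date
```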

Note the AUTHID clause in the package declaration:

A unit whose AUTHID value is CURRENT_USER is called an "invoker's rights" unit, or IR unit. A unit whose AUTHID value is DEFINER is called a "definer's rights" unit, or DR unit.

The AUTHID property of a unit determines whether the unit is IR or DR, and it affects both name resolution and privilege checking at run time. It determines:

• whether the context for name resolution is CURRENT_SCHEMA or the owner's schema.
• whether the privileges checked are those of the CURRENT_USER and the enabled roles, or those of the definer.

At compilation time, the AUTHID property has no effect—both DR and IR units are treated like DR units. At run time, however, the AUTHID property determines whether a unit is IR or DR, and the unit is treated accordingly.

Author: Tomasz Gozdz
Documentation for the AUTHID DEFINER SETTING
Documentation for the FIXED_DATE parameter

Friday, November 22, 2013

What is the difference between BY ACCESS and BY SESSION in Oracle Auditing?


The difference between BY ACCESS and BY SESSION lies in how the audited actions are recorded in the DBA_AUDIT_TRAIL data dictionary view:

BY ACCESS: Inserts one record in the audit trail for each audited statement.

BY SESSION: Inserts only one audit record in the audit trail, for each user and schema object, during a session that includes an audited action.

Example:

First, enable auditing on the table TABLE1:

AUDIT SELECT, UPDATE, DELETE, INSERT ON HR.TABLE1 BY SESSION WHENEVER SUCCESSFUL;

Secondly, assuming there is activity against TABLE1, check the audit trail:
SELECT  timestamp, os_username, obj_name,userhost, username,action,action_name, ses_actions
FROM    dba_audit_trail
WHERE   obj_name IN ('TABLE1')
AND     username = 'HR'
ORDER BY timestamp DESC;
The query in my case returned the following:
TIMESTAMP            OS_USERNAME  OBJ_NAME  USERHOST          USERNAME  ACTION  ACTION_NAME  SES_ACTIONS
04.07.2011 10:35:20  joe          TABLE1    laptop-joe        HR        103     SESSION REC  ---------S------
08.02.2011 18:11:10  joe          TABLE1    appserver1        HR        2       INSERT
08.02.2011 18:11:10  joe          TABLE1    appserver1        HR        6       UPDATE
08.02.2011 18:11:10  jack         TABLE1    appserver1        HR        7       DELETE
08.02.2011 14:50:41  tina         TABLE1    workstation-tina  HR        103     SESSION REC  ---S--S---S-----

For statements audited BY SESSION, the SES_ACTIONS column contains a pattern indicating which operations took place.
The 16-character string in the SES_ACTIONS column represents the different actions, one position per action type, in the following order (positions 14, 15, and 16 are reserved for future use):

* ALTER, AUDIT, COMMENT, DELETE, GRANT, INDEX, INSERT, LOCK, RENAME, SELECT, UPDATE, REFERENCES, and EXECUTE.

The characters are:

* - - None
* S - Success
* F - Failure
* B - Both

Finally, interpret the audit trail:
The first row in the table indicates that a successful SELECT (the S is found in position 10) was executed against TABLE1 on 04.07.2011 10:35:20.
The last row indicates a successful DELETE, INSERT and UPDATE (the S is found in position 4, 7 and 11) in TABLE1 during the session that was audited 08.02.2011 14:50:41.
The three rows where ACTION_NAME is INSERT, UPDATE and DELETE are self-explanatory.

At the tail of the AUDIT clause comes the directive WHENEVER [NOT] SUCCESSFUL, which tells Oracle to audit only SQL statements and operations that succeed (SUCCESSFUL), or only those that fail or result in errors (NOT SUCCESSFUL). If you omit this clause, then Oracle Database performs the audit regardless of success or failure.

Thursday, November 21, 2013

Why won't my DML statement use partition pruning?

I had a case today where a developer issued a DELETE statement against a 600 million row table.
The table is range partitioned, with one partition per month, and is created with parallel degree 8.

The DML statement looked correct, and was using the partition key in the predicate:
delete 
from  mytable 
where val_date = to_date('31.10.2013','DD.MM.YYYY');
The execution plan looked as follows:
---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT      |            |   611K|    16M|  5597   (2)| 00:01:08 |       |       |        |      |            |
|   1 |  DELETE               | MYTABLE    |       |       |            |          |       |       |        |      |            |
|   2 |   PX COORDINATOR      |            |       |       |            |          |       |       |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000   |   611K|    16M|  5597   (2)| 00:01:08 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |            |   611K|    16M|  5597   (2)| 00:01:08 |   118 |   118 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| MYTABLE    |   611K|    16M|  5597   (2)| 00:01:08 |   118 |   118 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("VAL_DATE"=TO_DATE(' 2013-10-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
It looks like Oracle is ignoring the predicate and performing a full table scan.
True or not?
When explaining without parallel slaves:
explain plan for 
delete /*+ no_parallel (a) */ 
from mytable a
where val_date = to_date('31.10.2013','DD.MM.YYYY');
select * from table(dbms_xplan.display);
Then the execution plan is revealed as follows:
------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT        |            |   611K|    16M| 40443   (2)| 00:08:06 |       |       |
|   1 |  DELETE                 | MYTABLE    |       |       |            |          |       |       |
|   2 |   PARTITION RANGE SINGLE|            |   611K|    16M| 40443   (2)| 00:08:06 |   118 |   118 |
|*  3 |    TABLE ACCESS FULL    | MYTABLE    |   611K|    16M| 40443   (2)| 00:08:06 |   118 |   118 |
-----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("VAL_DATE"=TO_DATE(' 2013-10-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Note that when you use the NO_PARALLEL hint, you need to reference the table by an alias, in my case "a". The statement below
delete /*+ no_parallel */ 
from mytable...

will not, to my surprise, override the parallel settings, while
delete /*+ no_parallel (a)*/ 
from mytable a...
will.

In conclusion, simulating a non-parallel execution revealed that the query was indeed using partition pruning (as can be seen from the operation "PARTITION RANGE SINGLE" at Id 2 in the plan above). In the original plan, this is not pointed out by Oracle, and it can easily lead you to think that Oracle is not taking advantage of partitioning.

Tuesday, November 19, 2013

How to hide the input from the user when using the accept directive in sqlplus

accept sch prompt "Schema : "
accept pwd prompt "Password (not displayed) : " hide

Errors ORA-31693 and ORA-29913 thrown during import

Yesterday when importing some test data from production to test, I encountered the following error during import:

. . imported "< schema name >"."< table name > ":"< partition name >" 77.08 KB 0 rows
ORA-31693: Table data object "< schema name >"."< table name > ":"< partition name >" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
. . imported "< schema name >"."< table name > ":"< partition name >" 25.69 KB 0 rows

The preceding export executed successfully, without any errors or warnings. I used multiple threads, and the result was 7 dump files.

I first tried reimporting the table, which in many similar cases actually works. This time it didn't; the same error was thrown.

The solution was to remove the parallel directive from the import parameter file and reimport only the missing table:

USERID='/ AS SYSDBA'
DIRECTORY=DPUMP
DUMPFILE=EXDP_SCHEMA_EXPORT_201311181340_%U.DMP
LOGFILE=IMPDP_missing_tables.log
PARALLEL=1
TRANSFORM=OID:N
TABLE_EXISTS_ACTION=REPLACE
REMAP_SCHEMA=< schema_name >:
TABLES=< schema_name >.< table_name >


. . imported "< schema name >"."< table name > ":"< partition name >" 77.08 KB 0 rows
. . imported "< schema name >"."< table name > ":"< partition name >" 1.205 GB 6758593 rows
. . imported "< schema name >"."< table name > ":"< partition name >" 25.69 KB 0 rows

Monday, November 18, 2013

How to use the APPEND hint to optimize INSERT statements


When using the APPEND hint, you are telling the optimizer to use direct-path INSERT.

Serialized inserts:
Simply specify the APPEND hint in each INSERT statement, either immediately after the INSERT keyword or immediately after the SELECT keyword in the subquery of the INSERT statement.
INSERT /*+ APPEND */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;

Parallelized inserts:
First, enable parallel DML in your session:
ALTER SESSION ENABLE PARALLEL DML;
Then, specify the APPEND hint with the PARALLEL hint:
INSERT /*+ APPEND PARALLEL */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;

You can use NOLOGGING with APPEND to make the process even faster. NOLOGGING allows the direct-path INSERT operation to generate a negligible amount of REDO:
INSERT /*+ APPEND PARALLEL NOLOGGING */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;


Note:
* If the database or tablespace is in FORCE LOGGING mode, then direct path INSERT always logs, regardless of the logging setting.
* Direct-path INSERT is the default insert method when running in parallel mode, but unless you have specified the DEGREE attribute for your table, you still need to use the PARALLEL hint for each insert operation.
* Oracle Database performs index maintenance at the end of direct-path INSERT operations on tables that have indexes.





What types of indexes can be created on a partitioned table?

My customer wanted to introduce partitioning on certain tables. I was asked to give an overview of the different indexing options available after a table has been partitioned.

In short, there are four types of indexes to be considered for partitioned tables:

1. Local Partitioned Indexes
2. Global Partitioned Indexes
3. Global Non-partitioned Indexes
4. Bitmap Indexes



1. Local Partitioned Indexes

Local partitioned indexes always inherit the partitioning key from the table and are fully aligned with the table partitions. Oracle calls this concept "equipartitioning": each partition of a local index is associated with exactly one partition of the table.

Local indexes can only be unique if the partitioning key is a part of the index key.

New partitions are added to local indexes only when you add a partition to the underlying table. Likewise, they are dropped only when you drop a partition from the underlying table. In other words, you cannot explicitly add a partition to a local index.
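As a sketch, a local index is created with the LOCAL keyword (the table and column names below are made up):

```sql
CREATE TABLE SALES_HIST
(
   SALE_ID    NUMBER,
   REGION     VARCHAR2(20),
   SALE_DATE  DATE
)
PARTITION BY RANGE (SALE_DATE)
(
   PARTITION P2013 VALUES LESS THAN (TO_DATE('01.01.2014','DD.MM.YYYY')),
   PARTITION PMAX  VALUES LESS THAN (MAXVALUE)
);

-- The LOCAL keyword equipartitions the index with the table:
CREATE INDEX SALES_HIST_IDX1 ON SALES_HIST (SALE_DATE) LOCAL;
```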

Two types of local indexes

There are two types of local indexes: Prefixed and non-prefixed.

Prefixed means that the table's partitioning key column is referenced first in the index key, while a non-prefixed index does not reference it first.

Oracle states that non-prefixed local indexes are useful for historical databases, although I haven't had any experience yet that would confirm this.

For an example of a unique local prefixed index along with some useful queries, click here
For an example of a local non-prefixed index along with some useful queries, click here

The obvious advantage of using local indexes is manageability and flexibility. Any operation on a single partition will only affect the relevant index partition, and not the entire index as such.

2. Global Partitioned Indexes

Partitioned indexes whose partitions are not aligned with the table's partitions are called global partitioned indexes. In other words, there is no relation between a table's partitions and a global index's partitions.

The keys in a particular index partition may refer to rows stored in multiple underlying table partitions or subpartitions.

Global partitioned indexes give the flexibility to choose a partitioning key that is most optimal for performance. Oracle also claims that under some circumstances, having multiple segments for an index can be beneficial for performance.

Two types of global partitioned indexes

There are two types of Global Partitioned Indexes:

* Global Hash Partitioned indexes: improve performance by spreading out contention when the index is monotonically growing on the right-hand side. Using a hashing algorithm, Oracle creates the specified number of hash partitions and spreads the index blocks uniformly across them.

For an example, see this post


* Global Range Partitioned indexes: adds flexibility: the degree of partitioning and the partitioning key are independent from the table's partitioning method.

For an example, see this post


Both of these types can be defined on any type of partitioned table (list, range, hash etc.).

3. Global Non-partitioned Indexes

These are simply normal indexes. The fact that the underlying table is partitioned does not matter. The index spans all partitions.

Global indexes can be unique.

4. Bitmap Indexes
You can create bitmap indexes on partitioned tables, with the restriction that the bitmap indexes must be local to the partitioned table. They cannot be global indexes.
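A sketch, assuming a hypothetical range-partitioned table SALES_HIST with a REGION column:

```sql
-- The LOCAL keyword is mandatory for bitmap indexes on partitioned tables;
-- attempting to create a global bitmap index raises an error (ORA-25122):
CREATE BITMAP INDEX SALES_HIST_BIX ON SALES_HIST (REGION) LOCAL;
```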

Friday, November 15, 2013

What is the difference between direct-path INSERT and conventional INSERT?

• During conventional INSERT operations, the database reuses free space in the table, interleaving newly inserted data with existing data. During such operations, the database also maintains referential integrity constraints.

• During direct-path INSERT operations, the database appends the inserted data after existing data in the table. Data is written directly into the datafiles, bypassing the buffer cache. Free space in the existing data is not reused, and referential integrity constraints are ignored. Together, these characteristics can enhance performance.


Source: Oracle Documentation

Possible reason for RMAN-06059: "expected archived log not found, loss of archived log compromises recoverability"

After having changed the archive log destination, I received the following when performing ARCHIVELOG backups:

Starting backup at 25-NOV-2008
current log archived
released channel: t1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 11/25/2008 10:57:56
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /app/oracle/admin/PRODDP/archive670098694_1_1.arc
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3

I needed to run a crosscheck after pointing the recovery catalog at the new archive log destination:
allocate channel for maintenance type disk; 
run {
set archivelog destination to '/app/oracle/admin/PRODDB/archive';
change archivelog all crosscheck;
}

After this had been run, I could back up the archived logs successfully to tape (or disk):
run {
allocate channel t1 type sbt PARMS="BLKSIZE=1048576" maxpiecesize 32G maxopenfiles 64;
send 'NB_ORA_POLICY=O_SYR_loc1,NB_ORA_CLIENT=prodserver1-bkp.mydomain.net,NB_ORA_SCHED=O_SYR_loc1_user';
backup archivelog all; 
release channel t1;
}

How to use dbms_random to generate an always-changing string

For a demonstration of Flashback Data Archive, I had to create a daily update to a column in a test table.

The following function did the trick:

select dbms_random.string( 'L',dbms_random.value(10,30) ) from dual;

by returning strings similar to jsytbbtmxrusgmlz or gilqpizjaislueuocqlxydpngtup.

"L" is an optional parameter to the STRING function, specifying what type of string should be produced. "L" means "always lowercase".

The numbers 10 and 30 passed to the VALUE function make the returned string between 10 and 30 characters long.

Other types of strings you can generate are:

• 'u', 'U' - uppercase alpha characters
• 'l', 'L' - lowercase alpha characters
• 'a', 'A' - mixed case alpha characters
• 'x', 'X' - uppercase alpha-numeric characters
• 'p', 'P' - any printable characters

Oracle-Base contains an excellent article about dbms_random.




How to use the PRODUCT_USER_PROFILE (PUP) table to prevent SQL*Plus access to database


This article is relevant for versions earlier than 18.1. In Oracle 18c, the PRODUCT_USER_PROFILE table is deprecated, and it is removed completely in Oracle 19c. For more information, consult the documentation.

In cases where an Oracle client is installed on a user's PC, skilled users can easily use SQL*Plus as a "backdoor" to the database by connecting directly through SQL*Plus instead of through their designated applications. Here is a trick to prevent that from happening, without too much effort for the DBA or for the PC maintenance staff.

What is it?
-----------
Users can be restricted based on the SQL*Plus product usage.
This is achieved by inserting the restrictions in the PRODUCT_USER_PROFILE (PUP) table, owned by the SYSTEM user.
SQL*Plus reads the restrictions from the PUP table when a user logs in and applies those restrictions for the session.

How does it work?
-----------------
SQL*Plus will not read the PUP table if a user logs in with the SYSDBA or
SYSOPER privilege and therefore no restrictions will apply.

How to set up?
--------------
The table SQLPLUS_PRODUCT_PROFILE is automatically created in the SYSTEM schema on installation.
To create it explicitly, run the pupbld.sql script.

Then insert the following rows to prevent user SCOTT from doing SELECT and DML:

insert into product_user_profile(product,userid,attribute,char_value) values ('SQL*Plus','SCOTT','SELECT','DISABLED');
insert into product_user_profile (product,userid,attribute,char_value) values ('SQL*Plus','SCOTT','UPDATE','DISABLED');
insert into product_user_profile (product,userid,attribute,char_value) values ('SQL*Plus','SCOTT','INSERT','DISABLED');
insert into product_user_profile (product,userid,attribute,char_value) values ('SQL*Plus','SCOTT','DELETE','DISABLED');

Check with:
SELECT product,userid,attribute,char_value
FROM system.product_user_profile;

PRODUCT    USERID    ATTRIBUTE             CHAR_VALUE
---------- --------- --------------------  -------------
SQL*Plus   SCOTT     DELETE                DISABLED
SQL*Plus   SCOTT     INSERT                DISABLED
SQL*Plus   SCOTT     SELECT                DISABLED
NB!
Disabled commands can still be executed from PL/SQL: for example, a DELETE disabled for user SCOTT can easily be run from within a PL/SQL block!
This can be avoided by disabling PL/SQL block creation itself:
insert into system.product_user_profile (product, userid, attribute, char_value)
values ('SQL*Plus', 'SCOTT', 'DECLARE', 'DISABLED');

insert into system.product_user_profile (product, userid, attribute, char_value)
values ('SQL*Plus', 'SCOTT', 'BEGIN', 'DISABLED');
Result:

SQL> connect scott/tiger
Connected.
SQL> select * from user_tables;
SP2-0544: Command "select" disabled in Product User Profile

Restrictions:

- Can only be used for SQL*Plus
- Only applies to local databases. The PUP table restrictions are not enforced when connecting to a remote database over a database link.
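To lift the restrictions again, simply delete the corresponding rows from the PUP table; since SQL*Plus only reads the table at login, the change takes effect the next time the user connects:

DELETE FROM system.product_user_profile
WHERE  product = 'SQL*Plus'
AND    userid  = 'SCOTT';

COMMIT;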

More info here

How to temporarily alter a PROFILE

Temporarily disable the password verify function and set it back to the original value again when you're done:

SQL> ALTER PROFILE DEFAULT LIMIT password_verify_function NULL;
Profile altered.

SQL> ALTER PROFILE DEFAULT LIMIT password_verify_function VERIFY_PASSWORD;

Profile altered.

How to enable automatically implemented SQL profiles recommended by the SQL Tuning Advisor

During the maintenance window of an Oracle 11g database, several out-of-the-box jobs are scheduled to run. One of them is the Automatic SQL Tuning Advisor, which comes up with recommendations on how to improve the performance of your most resource-intensive SQL statements.

One of the findings of the Automatic SQL Tuning Advisor may be that a SQL profile can improve the performance of a statement.
You can set up Oracle to automatically implement such SQL profiles for you. Here is how:

First, make sure the required parameters are set:

alter system set optimizer_use_sql_plan_baselines=true scope=both;     -- (default)
alter system set optimizer_capture_sql_plan_baselines=true scope=both;
Check the status of the Automatic Tuning job:
SELECT client_name, status, consumer_group, window_group
FROM   dba_autotask_client
WHERE  client_name = 'sql tuning advisor';
If not enabled already, enable the automatic tuning job:
BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation => NULL,
    window_name => NULL);
END;
/
Then customize so that profiles are automatically implemented:
BEGIN
  DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE');
END;
/
Done!
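To verify that the task parameter was changed, you can query DBA_ADVISOR_PARAMETERS for the automatic tuning task:

SELECT parameter_name, parameter_value
FROM   dba_advisor_parameters
WHERE  task_name      = 'SYS_AUTO_SQL_TUNING_TASK'
AND    parameter_name = 'ACCEPT_SQL_PROFILES';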
Note: This feature is not necessarily beneficial for your particular database. Try it out carefully and observe the overall effect on performance.

To disable automatic SQL tuning, use the DISABLE procedure in the DBMS_AUTO_TASK_ADMIN package:
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'sql tuning advisor', 
    operation => NULL, 
    window_name => NULL);
END;
/

Thursday, November 14, 2013

Guaranteed restore points in standby databases

"Before an important release, I decided to create a guaranteed restore point on the primary database.
Unfortunately I forgot to create the same on my standby database. Can I use the as of timestamp to create a guaranteed restore point in the past?"

No, this is not supported.

The syntax below:
SQL> create restore point PRE_RELEASE2_STBY as of timestamp to_date('09.11.2013 09:15:00','DD.MM.YYYY HH24:MI:SS') guarantee flashback database;

Will fail with:
ERROR at line 1:
ORA-38864: cannot create a guaranteed restore point with user specified SCN or
time.

// *Cause:  An SCN or timestamp was specified when creating a guaranteed
//          restore point. This is not supported. You can only create a
//          guaranteed restore point as of the current time.
// *Action: Avoid this combination of options for creating a restore point.

You will have to do with a normal restore point instead:
CREATE RESTORE POINT PRE_RELEASE2_STBY 
AS OF TIMESTAMP TO_DATE('09.11.2013 09:15:00','DD.MM.YYYY HH24:MI:SS');
But you will receive the following error:

ERROR at line 1:
ORA-38867: database not open: cannot create restore point with the specified
timestamp

// *Cause:  An attempt was made to create a restore point with the specified
//          timestamp while the database was not open. The database must be
//          open in order to map the timestamp to an SCN.
// *Action: Open the database first or pick another way to create the restore 
//          point.
So on a standby database, using the AS OF TIMESTAMP is not allowed on a normal restore point, either.
How about using the SCN?

SQL> SELECT timestamp_to_scn(to_date('09.11.2013 09:15:00','DD.MM.YYYY HH24:MI:SS')) SCN 
     FROM   dual;

     SCN
--------
170259165
SQL> create restore point PRE_RELEASE2_STBY AS OF SCN 170259165;
Restore point created.

How does the profile password security setting work?

Oracle's profile settings for password security can be somewhat confusing at first, particularly when you consider the ramifications of the default values.

Here are some of my notes on the matter:

The PASSWORD_LOCK_TIME sets the number of days an account will be locked after the specified number of consecutive failed login attempts. After that time passes, the account is automatically unlocked. If you specify PASSWORD_LOCK_TIME as UNLIMITED, then the account must be explicitly unlocked using an ALTER USER statement.

The FAILED_LOGIN_ATTEMPTS specifies the number of failed attempts to log in to the user account before the account is locked. In the DEFAULT profile, the limit is 10 failed attempts.

The PASSWORD_LIFE_TIME specifies the number of days the same password can be used for authentication.
A setting of UNLIMITED lets the user keep the same password and continue to connect indefinitely.

The PASSWORD_GRACE_TIME specifies the number of days in the grace period, during which a warning is issued at login but login is still allowed. If the password is not changed within the grace period, it expires and further logins to the account are rejected until the password is changed.

The PASSWORD_REUSE_TIME specifies the number of days that must pass before a password can be reused.

The PASSWORD_REUSE_MAX specifies the number of password changes required before the current password can be reused.

These two parameters, PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX, must be set in conjunction with each other. For these parameters to have any effect, you must specify an integer for both of them.

* If you specify an integer for both of these parameters, then the user cannot reuse a password until it has been changed the number of times specified by PASSWORD_REUSE_MAX within the number of days specified by PASSWORD_REUSE_TIME.
For example, if you set PASSWORD_REUSE_TIME to 30 and PASSWORD_REUSE_MAX to 10, then the user can reuse the password after 30 days if the password has been changed at least 10 times in the meantime.
* If you specify an integer for either of these parameters and specify UNLIMITED for the other, then the user can never reuse a password.
* If you specify DEFAULT for either parameter, then Oracle Database uses the value defined in the DEFAULT profile. By default, all parameters are set to UNLIMITED in the DEFAULT profile.
If you have not changed the default setting of UNLIMITED in the DEFAULT profile, then the database treats the value for that parameter as UNLIMITED.
* If you set both of these parameters to UNLIMITED, then the database ignores both of them.


The PASSWORD_VERIFY_FUNCTION clause lets a PL/SQL password complexity verification script be passed as an argument to the CREATE PROFILE statement.
Oracle Database provides a default script, but you can create your own routine or use third-party software instead.
* For function, specify the name of the password complexity verification routine.
    * Specify NULL to indicate that no password verification is performed.

Example:

The first time scott tries to log in to the database after 90 days have passed (this can be any day after the 90th day: the 91st, the 100th, or later), he receives a warning message that his password will expire in three days.
If three days pass, and he does not change his password, then the password expires.
After this, he receives a prompt to change his password on any attempt to log in, and cannot log in until he does so.

CREATE PROFILE prof 
 LIMIT
 FAILED_LOGIN_ATTEMPTS 4 <-- allow 4 consecutive failed login attempts
 PASSWORD_LOCK_TIME 30   <-- lock the account for 30 days when that limit is exceeded
 PASSWORD_LIFE_TIME 90   <-- the password is valid for 90 days
 PASSWORD_GRACE_TIME 3;  <-- after 90 days, the user has 3 days to change it

ALTER USER scott PROFILE prof;
Another example, where I alter the default profile to require the following minimum of any new user:
ALTER PROFILE "DEFAULT" LIMIT
 PASSWORD_LIFE_TIME 90     <-- renewal of password every 90 days
 PASSWORD_GRACE_TIME 10    <-- give the user a 10-day grace period
 PASSWORD_REUSE_MAX 12     <-- require at least 12 different passwords before one can be reused
 PASSWORD_REUSE_TIME 90    <-- the number of days that must pass before a password can be reused
 FAILED_LOGIN_ATTEMPTS 5   <-- allow 5 consecutive failed login attempts
 PASSWORD_LOCK_TIME .0208  <-- after 5 failed logins, lock the account for 30 minutes, then allow new attempts
 PASSWORD_VERIFY_FUNCTION VERIFY_PASSWORD; <-- enforce password complexity as desired

To reveal whether an account was locked as a consequence of violating FAILED_LOGIN_ATTEMPTS, look at the STATUS column of the DBA_USERS view: if the value shows "LOCKED(TIMED)", then you know you are dealing with a user who has tried to log on too many times.
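As a side note, the fractional PASSWORD_LOCK_TIME used above is simply 30 minutes expressed as a fraction of a day, and a timed lock can always be lifted manually before it expires:

SELECT 30/(24*60) AS lock_time_days FROM dual;  -- returns .0208333...

ALTER USER scott ACCOUNT UNLOCK;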

How to map the operating system sessions with RMAN Channels


When only one RMAN session is active, the easiest method for determining the server session ID for an RMAN channel is to execute the following query on the target database while the RMAN job is executing:

COLUMN CLIENT_INFO FORMAT a30
COLUMN SID FORMAT 999
COLUMN SPID FORMAT 9999

SELECT s.SID, p.SPID, s.CLIENT_INFO
FROM V$PROCESS p, V$SESSION s
WHERE p.ADDR = s.PADDR
AND CLIENT_INFO LIKE 'rman%'
;
If you do not run the SET COMMAND ID command in the RMAN job, then the CLIENT_INFO column displays in the following format:

rman channel=channel_id

For example, in my case:

SID SPID         CLIENT_INFO
---- ------------ ------------------------------
  14 8374         rman channel=ORA_SBT_TAPE_1
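If you do run SET COMMAND ID in the RMAN job, the session is even easier to spot, since the string you choose is recorded in CLIENT_INFO as id=string. A sketch, with an arbitrary command id:

run {
 set command id to 'nightly_arch';
 allocate channel t1 type sbt;
 backup archivelog all;
 release channel t1;
}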

As pointed out in the Oracle documentation, you can monitor the sbt wait events:
COLUMN SECONDS_IN_WAIT FORMAT 999
COLUMN STATE FORMAT a20
COLUMN CLIENT_INFO FORMAT a30

SELECT p.SPID, EVENT, SECONDS_IN_WAIT AS SEC_WAIT, 
       sw.STATE, CLIENT_INFO
FROM V$SESSION_WAIT sw, V$SESSION s, V$PROCESS p
WHERE sw.EVENT LIKE 's%bt%'
       AND s.SID=sw.SID
       AND s.PADDR=p.ADDR
;

The output from the above could look like this:
SPID EVENT             SEC_WAIT   STATE                CLIENT_INFO
---- ----------------- ---------- -------------------- ------------------------------
8642 Backup: sbtbackup 600        WAITING              rman channel=ORA_SBT_TAPE_1
indicating that RMAN has been waiting for the sbtbackup function to return for ten minutes.

What is an RMAN CROSSCHECK?

Purpose: To ensure that data about backups in the recovery catalog or control file is synchronized with actual files on disk or in the media management catalog.

If the backup is on disk, then the CROSSCHECK command checks whether the header of the file is valid.
If the backup is on tape, then the crosscheck command checks that the backups exist in the media management software's catalog.

Backup pieces and image copies can have the status AVAILABLE, EXPIRED, or UNAVAILABLE.

NOTE:
The CROSSCHECK command *does not* delete operating system files or remove repository records.
For such operations, you must use the DELETE command.

In short, in RMAN terms, obsolete means "the file is not needed", whereas expired means "the file was not found".
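A typical maintenance sequence is therefore to crosscheck first and clean up afterwards. For example:

crosscheck backup;
crosscheck archivelog all;
delete expired backup;
delete noprompt obsolete;

DELETE EXPIRED removes the repository records that the crosscheck marked EXPIRED, while DELETE OBSOLETE removes backups that are no longer needed under the retention policy.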

How to properly escape quotation marks in an SQL statement

Example:

You want to insert the string

order_date=to_date('2013-11-14','YYYY-MM-DD')

into a column of a table.

For Oracle to understand that you want the single quotes to be part of the inserted string, you need to escape each of them with a second single quote, as below:

INSERT INTO SCOTT.FILTER_TABLE
       (filter_name,
        filter_priority,
        table_owner_pattern,
        table_name_pattern,
        column_name_pattern,
        sql_filter)
VALUES ('MYFILTER',
        3,
        'LXPROD%',
        '%',
        'ORDER_DATE',
        'order_date=to_date(''&&target_date'',''YYYY-MM-DD'')');
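As an alternative to doubling the quotes, the quoting mechanism introduced in Oracle 10g (q-quoting) lets you write the single quotes as-is. The last value in the statement above could equally well be written:

q'[order_date=to_date('&&target_date','YYYY-MM-DD')]'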

Friday, November 8, 2013

How to check for progress on a long-running RMAN job

If you want to know whether your backup is actually doing anything, the following SQL is useful:

col INPUT_BYTES format  999,999,999,999
col OUTPUT_BYTES format 999,999,999,999
col OPERATION format a30
col OUTPUT_DEVICE_TYPE format a9
col RECID format 99999
col PARENT_RECID format 99999
col PARENT_RECID heading P_RECID
col SID format 99999
col STATUS format a30

set linesize 300
set pagesize 5000
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY MM DD HH24:MI:SS';
SELECT   ST.SID, 
         LO.SERIAL#,
         ST.RECID, 
         ST.PARENT_RECID, 
         ST.STATUS, 
         ST.START_TIME, 
         ST.END_TIME, 
         ST.OPERATION,
         ST.OBJECT_TYPE, 
         ST.OUTPUT_DEVICE_TYPE, 
         ST.INPUT_BYTES, 
         ST.OUTPUT_BYTES, 
         TY.INPUT_TYPE,
         LO.MESSAGE,
         ROUND(LO.TIME_REMAINING/60) "ETA (MIN)"
FROM V$RMAN_STATUS ST INNER JOIN V$SESSION_LONGOPS LO ON ST.SID = LO.SID
LEFT OUTER JOIN V$RMAN_BACKUP_TYPE TY ON TY.INPUT_TYPE = ST.OBJECT_TYPE
WHERE ST.STATUS = 'RUNNING'
ORDER BY ST.RECID ASC, ST.START_TIME DESC;

Output:
SID SERIAL# RECID PARENT_RECID STATUS START_TIME END_TIME OPERATION OBJECT_TYPE OUTPUT_DEVICE_TYPE INPUT_BYTES OUTPUT_BYTES INPUT_TYPE ETA (MIN)
227 42437 44435   RUNNING 24.01.2014 12:50:36 24.01.2014 16:06:12 RMAN     0 0   0
227 42676 44435   RUNNING 24.01.2014 12:50:36 24.01.2014 16:06:12 RMAN     0 0   0
227 42437 44436 44435 RUNNING 24.01.2014 12:51:11 24.01.2014 16:06:12 BACKUP DB FULL DISK 298811883520 52378599424 DB FULL 0
227 42676 44436 44435 RUNNING 24.01.2014 12:51:11 24.01.2014 16:06:12 BACKUP DB FULL DISK 298811883520 52378599424 DB FULL 0

As long as your INPUT_BYTES and OUTPUT_BYTES bytes are increasing then something is happening!

How to use RMANs REPORT command

Report which objects need backup under the currently configured retention policy:

REPORT NEED BACKUP;

Variants:

REPORT NEED BACKUP RECOVERY WINDOW OF 2 DAYS DATABASE SKIP TABLESPACE TBS_2;
REPORT NEED BACKUP TABLESPACE TBS_3; # uses configured retention policy
REPORT NEED BACKUP INCREMENTAL 2; # checks entire database
REPORT NEED BACKUP RECOVERY WINDOW OF 2 DAYS DATABASE DEVICE TYPE SBT;


To identify datafiles affected by an unrecoverable operation (such as a direct-path insert) and the type of backup
required to ensure the datafiles can be restored from backup:

REPORT UNRECOVERABLE;

To report obsolete backups under the retention policy:
REPORT OBSOLETE;
REPORT OBSOLETE RECOVERY WINDOW OF 3 DAYS;
REPORT OBSOLETE REDUNDANCY 1;


Lists and displays information about the database files:
REPORT SCHEMA;

If you use a recovery catalog:

REPORT SCHEMA AT TIME 'SYSDATE-14'; # schema 14 days ago
REPORT SCHEMA AT SCN 1000; # schema at scn 1000
REPORT SCHEMA AT SEQUENCE 100 THREAD 1; # schema at sequence 100

What is the difference between DELETE INPUT and DELETE ALL INPUT in RMAN?

With DELETE INPUT, RMAN only deletes the specific copy of the archived redo log chosen for the backup set.
With DELETE ALL INPUT, RMAN will delete each backed-up archived redo log file from all log archiving destinations.

For example, assume that you archive to /arc_dest1, /arc_dest2, and /arc_dest3, and you run the following command:
BACKUP DEVICE TYPE sbt 
  ARCHIVELOG ALL 
  DELETE ALL INPUT;

In this case RMAN backs up only one copy of each log sequence number in these directories, and then deletes all copies of any
log that it backed up from the (potentially multiple) archiving destinations.

If you had specified DELETE INPUT rather than DELETE ALL INPUT, then RMAN would delete only the specific archived redo log files
that it backed up (for example, it would delete the archived redo log files in /arc_dest1 if those were the files
used as the source of the backup, but it would leave the contents of /arc_dest2 and /arc_dest3 intact).

How to use the VALIDATE options in RMAN - both 10g and 11g


Validate if database can be backed up (10g)
-------------------------------------------
BACKUP VALIDATE validates that all database files and archived logs can be backed up.
When you run BACKUP VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during
a real backup. RMAN does not, however, actually produce any backup sets or image copies.

Validate against logical corruption (10g)
-----------------------------------------
BACKUP VALIDATE CHECK LOGICAL checks for logical corruptions.
In a logical corruption, the contents of the block are logically inconsistent.
Examples of logical corruption include corruption of a row piece or index entry.
If RMAN detects logical corruption, then it logs the block in the alert log and server session trace file.

Validate if a file or backupset can be restored (10g)
-------------------------------------------------------
RESTORE ... VALIDATE tests whether RMAN can restore a specific file or set of files from a backup.
RMAN chooses which backups to use.
Validation of backups of the datafiles only reads the backups and does not affect the production datafiles.
RMAN reads all blocks in the backup piece or image copy. RMAN also validates offsite backups.
The validation is identical to a real restore operation except that RMAN does not write output files.
The absence of error messages means that RMAN has confirmed that it can use these backups successfully
during a real restore and recovery.
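For example, to test whether the whole database, its archived logs, and the control file can be restored, without writing any output files:

RESTORE DATABASE VALIDATE;
RESTORE ARCHIVELOG ALL VALIDATE;
RESTORE CONTROLFILE VALIDATE;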

Validate against physical corruption (11g)
------------------------------------------
In Oracle Database 11g, a new RMAN command, VALIDATE DATABASE, makes it trivial to check that the database is healthy and has no bad blocks.
It can substitute for the dbverify tool used in previous releases.
If physical corruption is detected, it is logged in the Automatic Diagnostic Repository.
To validate the whole database:
RMAN> validate database;

 You can also validate a specific tablespace: 

 RMAN> validate tablespace users;

 Or, datafile: 

 RMAN> validate datafile 1;

 Or, even a block in a datafile: 

 RMAN> validate datafile 4 block 56;

Add all the archivelogs, too:

RMAN> validate database plus archivelog check logical;

The VALIDATE command extends far beyond datafiles, however. You can validate the spfile, control file copies, recovery files, the Flash Recovery Area, and so on.
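A few of these variants (the backup set key below is just an example; find real keys with LIST BACKUP):

 RMAN> validate spfile;

 RMAN> validate recovery area;

 RMAN> validate backupset 10;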

Oracle 19c documentation here

What are restore points and how are they used?

Definition:

A restore point is an alias to the system change number (SCN) of the database at the time the restore point was created.

Types of restore points:

1. Normal
2. Guaranteed

For both types, the name of the restore point and the SCN are recorded in the database control file.

Normal restore points are very lightweight. The control file can maintain a record of thousands of normal restore points with no significant impact upon database performance. Normal restore points eventually age out of the control file if not manually deleted, so they require no ongoing maintenance.

Guaranteed restore points, on the other hand, guarantee that Oracle retains the flashback logs needed for a Flashback Database operation.

A guaranteed restore point does not age out of the control file and must be explicitly dropped.
Guaranteed restore points will utilize space in the flash recovery area.
The flash recovery area must therefore be defined and large enough to hold the flashback logs for the duration of the guaranteed restore point's existence.

Creation example:

CREATE RESTORE POINT PRE_EOD_201208;
CREATE RESTORE POINT PRE_RELEASE2 GUARANTEE FLASHBACK DATABASE;
Usage:

You can use restore points with any commands that recognize a RESTORE POINT clause as a shorthand for specifying an SCN.

Examples:

FLASHBACK DATABASE TO RESTORE POINT < restore point name >;
RESTORE DATABASE TO RESTORE POINT < restore point name >;
FLASHBACK TABLE emp TO RESTORE POINT < restore point name >;

To view the restore points created in your database, use:
select name,scn,time,database_incarnation#, 
guarantee_flashback_database,storage_size/1024/1024 "MB"
from v$restore_point;

NAME                SCN               TIME                                     DATABASE_INCARNATION# GUA        MB
------------------- ---------------- ---------------------------------------- --------------------- --- ----------
AKSEPT_TEST_START   153050263689      17-NOV-14 07.53.33.000000000 AM                              2 YES      14336


To create a normal restore point, you need either
  • the SELECT ANY DICTIONARY system privilege, or
  • the FLASHBACK ANY TABLE system privilege

To create a guaranteed restore point, you need SYSDBA privileges.

To view or use a restore point, the user needs any of the following:
  • the SELECT ANY DICTIONARY system privilege
  • the FLASHBACK ANY TABLE system privilege
  • the SELECT_CATALOG_ROLE role

To drop a restore point:
DROP RESTORE POINT AKSEPT_TEST_START;
    
How to drop a plan from the SMB

    Find the SQL Plan Baselines name and sql handle:
    SELECT  SQL_HANDLE,PLAN_NAME
    FROM    DBA_SQL_PLAN_BASELINES 
    WHERE   PLAN_NAME ='SQL_PLAN_b8p2xjvyx9ja5bcc2b4c9';
    
    Run the following PL/SQL as a privileged user (a DBA or "as sysdba")
    SET SERVEROUTPUT ON
    DECLARE
      l_plans_dropped  PLS_INTEGER;
    BEGIN
      l_plans_dropped := DBMS_SPM.drop_sql_plan_baseline (
        sql_handle => 'SQL_b4545d8efdd4c545',
        plan_name  => 'SQL_PLAN_b8p2xjvyx9ja5bcc2b4c9');
    
      DBMS_OUTPUT.put_line('Number of plans dropped: ' || l_plans_dropped);
    END;
    /
    

    How to perform a full restore of Oracle

    The following scripts have been tested and proven to work.
    Make sure the NB_ORA_POLICY is correctly entered and adheres to the documentation from your MML provider:
    
    #!/bin/ksh
    LOGFILE=$SCRIPT_BASE/log/restore_${ORACLE_SID}_`date +%Y%m%d%H%M%S`.log
    export RECOVERY_CATALOG=catowner/**********@RMANCAT
    nohup rman target / catalog $RECOVERY_CATALOG << EOF > $LOGFILE 2>&1  &
    startup force nomount;
     set DBID=1554594349;
     run {
      allocate channel t1 type sbt;
      allocate channel t2 type sbt
      send 'NB_ORA_POLICY=ora_dbserver1_00_netwzone1,NB_ORA_CLIENT=dbserver1-bkp.mydomain.com,NB_ORA_SCHED=ora_dbserver1_00_netwzone1_user';
      set until sequence < logseq > thread 1;
      restore controlfile;
      alter database mount;
      restore database;
      recover database;
     }
     alter database open resetlogs;
    
    EOF
    echo "RMAN restore started in background. Check logfile: $LOGFILE"
    

    A good crontab help text

    # Use the hash sign to prefix a comment
    # +---------------- minute (0 - 59)
    # |  +------------- hour (0 - 23)
    # |  |  +---------- day of month (1 - 31)
    # |  |  |  +------- month (1 - 12)
    # |  |  |  |  +---- day of week (0 - 7) (Sunday=0 or 7)
    # |  |  |  |  |
    # *  *  *  *  *  command to be executed
    
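    As an example, an entry that runs a (hypothetical) archivelog backup script every night at 02:30 would look like this:

    # Run the archivelog backup every night at 02:30
    30 2 * * * /home/oracle/scripts/backup_arch.ksh > /tmp/backup_arch.log 2>&1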
    

    How to use EXPLAIN PLAN statement and view the results using the DBMS_XPLAN.DISPLAY function



    This table function simply displays the contents of the plan table.

    Execute an explain plan command on a SELECT statement:

    EXPLAIN PLAN FOR
       SELECT * FROM emp e, dept d
       WHERE e.deptno = d.deptno
       AND   e.ename  = 'benoit';
    
    Display the plan using the DBMS_XPLAN.DISPLAY table function:
    SET LINESIZE 130
    SET PAGESIZE 0
    SELECT * FROM table(DBMS_XPLAN.DISPLAY);
    
    This query produces the following output:
    Plan hash value: 3693697075
    ---------------------------------------------------------------------------
    | Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    ---------------------------------------------------------------------------
    |   0 | SELECT STATEMENT   |      |     1 |    57 |     6  (34)| 00:00:01 |
    |*  1 |  HASH JOIN         |      |     1 |    57 |     6  (34)| 00:00:01 |
    |*  2 |   TABLE ACCESS FULL| EMP  |     1 |    37 |     3  (34)| 00:00:01 |
    |   3 |   TABLE ACCESS FULL| DEPT |     4 |    80 |     3  (34)| 00:00:01 |
    ---------------------------------------------------------------------------
    
    Predicate Information (identified by operation id):
    ---------------------------------------------------
       1 - access("E"."DEPTNO"="D"."DEPTNO")
       2 - filter("E"."ENAME"='benoit')
    
    15 rows selected.
    

    You can also use the SET STATEMENT_ID directive to identify a specific plan:
    EXPLAIN PLAN SET STATEMENT_ID='EXPL1' FOR
    SELECT * FROM emp e, dept d
       WHERE e.deptno = d.deptno
       AND e.ename='benoit';
    
    SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY('plan_table','EXPL1','serial'));
    

    If no statement_id is specified, the function shows you the plan of the most recently explained statement.

    Source: Oracle Documentation

    How to create a Range-Partitioned Global Index

    You have the following table:
    CREATE TABLE sales
      ( prod_id       NUMBER(6)
      , cust_id       NUMBER
      , time_id       DATE
      , channel_id    CHAR(1)
      , promo_id      NUMBER(6)
      , quantity_sold NUMBER(3)
      , amount_sold   NUMBER(10,2)
      )
     PARTITION BY RANGE (time_id)
     ( PARTITION sales_q1_2006 VALUES LESS THAN (TO_DATE('01-APR-2006','dd-MON-yyyy'))
        TABLESPACE tsa
     , PARTITION sales_q2_2006 VALUES LESS THAN (TO_DATE('01-JUL-2006','dd-MON-yyyy'))
        TABLESPACE tsb
     , PARTITION sales_q3_2006 VALUES LESS THAN (TO_DATE('01-OCT-2006','dd-MON-yyyy'))
        TABLESPACE tsc
     , PARTITION sales_q4_2006 VALUES LESS THAN (TO_DATE('01-JAN-2007','dd-MON-yyyy'))
        TABLESPACE tsd
     )
     ENABLE ROW MOVEMENT; 
    
    Note: The ENABLE ROW MOVEMENT clause is specified to allow the automatic migration of a row to a new partition if an update to a key value is made that would place the row in a different partition.

    The rules for creating range-partitioned global indexes are similar to those for creating range-partitioned tables.
    Each index partition is named but is stored in the default tablespace for the index.
    CREATE INDEX amount_sold_ix ON sales(amount_sold)
       GLOBAL PARTITION BY RANGE(amount_sold)
          ( PARTITION p_100 VALUES LESS THAN (100)
          , PARTITION p_1000 VALUES LESS THAN (1000)
          , PARTITION p_10000 VALUES LESS THAN (10000)
          , PARTITION p_100000 VALUES LESS THAN (100000)
          , PARTITION p_1000000 VALUES LESS THAN (1000000)
          , PARTITION p_greater_than_1000000 VALUES LESS THAN (maxvalue)
          );
    
    Source: Oracle Documentation

    Thursday, November 7, 2013

    Some useful asmcmd commands

    When you need to work with ASM disks, you can use asmcmd (the ASM Command Line Utility). I have found the following commands very useful:

    asmcmd -p 
    (-p option gives you the current directory at any given time - convenient)
    
    lsdg
    --> lists disk groups
    
    lsdsk -s
    --> -s shows various I/O related statistics of the disks
    
    lsdsk --candidate -p
    --> lists all available ASM disk candidates
    
    lsdsk --member -p
    --> lists all the current members of a disk group, as opposed to the candidates.
    
    or simply
    
    lsdsk -p
    

    How to check the database's flashback settings

    I normally use the following script to gather the most important facts about my flashback settings and readiness for a potential flashback database operation:

    alter session set nls_date_format='DD.MM.YYYY HH24:MI:SS';
    set linesize 300
    set trimspool on
    col name format a35
    col time format a35
    col guaranteed format a10
    col "oldest flback SCN" format 9999999999999
    col SCN format 9999999999999
    set pagesize 200
    spool chk_flashb.log
    prompt =================================================
    Prompt give me the
    prompt * estimated flashback log size
    prompt * retention target
    prompt * current accumulated size of all flashback logs
    prompt =================================================
    select estimated_flashback_size/1024/1024 "Estimated Flbacklog Size mb",
           retention_target/60                "Hours of flback logs",
           flashback_size/1024/1024           "Current Flbacklog Size mb"
    from v$flashback_database_log
    /
    
    prompt ===============================================
    Prompt How far back can the database be flashed back?
    prompt ===============================================
    select oldest_flashback_scn  "oldest flback SCN",
           oldest_flashback_time "oldest flback time"
    from v$flashback_database_log
    /
    
    prompt =================================================
    prompt show the restore points created the last 2 weeks
    prompt =================================================
    SELECT NAME,
           SCN,
           TIME,
           DATABASE_INCARNATION#,
           GUARANTEE_FLASHBACK_DATABASE "guaranteed",
           STORAGE_SIZE
    FROM V$RESTORE_POINT
    where time >= SYSDATE-14;
    Prompt ===================================
    prompt Show the flash recovery area usage
    Prompt ===================================
    select * from v$flash_recovery_area_usage;
    
    Prompt ======================================
    prompt I/O information for flashback logging
    Prompt ======================================
    select begin_time "Begin time",
            end_time "End time",
            round(flashback_data/1024/1024) "MB of flbk data written",
            round(redo_data/1024/1024)      "MB of redo data"
    from   v$flashback_database_stat
    order by begin_time asc;
    exit
    

    Wednesday, November 6, 2013

    How to use the dbms_xplan.display_awr function to find the execution plan of a query in the AWR

    set linesize 200
    set pagesize 0
    select * from table 
    (dbms_xplan.display_awr( '44snxh96pfuxb',1084138060,null, 'TYPICAL'));
    
    Where
    - the first argument is the SQL ID
    - the second argument is the plan hash value (optional)
    - the third argument is the DB_ID. If not specified, the value from V$DATABASE is used
    - the fourth argument is the format string. TYPICAL is the default value.
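    The same call can be written with named parameters, which makes the argument order explicit. A sketch, using the SQL ID and plan hash value from the example above:

    ```sql
    SELECT *
    FROM   TABLE(dbms_xplan.display_awr(
             sql_id          => '44snxh96pfuxb',
             plan_hash_value => 1084138060,
             db_id           => NULL,
             format          => 'TYPICAL'));
    ```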

    Source: Oracle Documentation

    How to use dbms_xplan.display_sql_plan_baseline to reveal execution plan for a SQL in the SMB

    1. Find the plan_name:

    SELECT sql_text,plan_name
    FROM DBA_SQL_PLAN_BASELINES
    WHERE sql_text LIKE '%your sql text%';

    2. Use the dbms_xplan package to generate information and the execution plan for this specific SQL plan:
    
    SET LONG 10000
    SET LINES 200
    SET PAGES 300
    SELECT *
    FROM   TABLE(DBMS_XPLAN.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_2f5ea4968dd47f8f'));
    
    

    How to find the plan for a particular SQL in the SMB


    SELECT sql_text,plan_name,sql_handle,creator,origin,created, last_executed,enabled,accepted,fixed,autopurge,module
    FROM   DBA_SQL_PLAN_BASELINES
    WHERE  sql_text LIKE '%your sql text%';
    

    Joined with v$sql, to reveal the SQL ID, and with some formatting:
    select  b.SQL_handle,
            b.plan_name,
            b.origin,
            to_char(b.created, 'dd.mm.rrrr hh24:mi') created,
            to_char(b.last_modified, 'dd.mm.rrrr hh24:mi')last_mod, 
            to_char(b.last_executed, 'dd.mm.rrrr hh24:mi') last_exe,
            b.enabled,
            b.accepted, 
            b.fixed,
            b.optimizer_cost,
            b.executions,
            b.cpu_time,
            b.buffer_Gets,
            b.disk_Reads,
            round((b.elapsed_time/b.executions)/1000000,1) "seconds per execution",
            a.sql_id 
    FROM DBA_SQL_PLAN_BASELINES b left outer join v$sql a
    ON (b.plan_name = a.sql_plan_baseline)
    where b.sql_text LIKE 'SELECT * FROM( SELECT KLM.VIEWTYPE KLMTYPE , TS%';
    

    Result may look like the following:
    SQL_HANDLE           PLAN_NAME                      ORIGIN      CREATED          LAST_MOD         LAST_EXE         ENABLED ACCEPTED FIXED OPTIMIZER_COST EXECUTIONS    CPU_TIME BUFFER_GETS DISK_READS seconds per execution SQL_ID
    -------------------- ------------------------------ ----------- ---------------- ---------------- ---------------- ------- -------- ----- -------------- ---------- ----------- ----------- ---------- --------------------- -------------
    SQL_5bdd86105a3f22a2 SQL_PLAN_5rrc621d3y8p2a8f06e05 MANUAL-LOAD 11.08.2016 14:42 12.08.2016 07:36 12.08.2016 07:36 YES     YES      YES            10948         27    35862546     7299186        297                   1,4 5h0syvrgsjf0j
    SQL_5bdd86105a3f22a2 SQL_PLAN_5rrc621d3y8p23e45ab6f MANUAL-LOAD 11.08.2016 14:42 11.08.2016 14:47                  NO      YES      NO             30537          1     3549461      434057       5371                  10,8
    SQL_5bdd86105a3f22a2 SQL_PLAN_5rrc621d3y8p28dd60076 MANUAL-LOAD 11.08.2016 14:42 11.08.2016 15:02                  NO      YES      NO              1324          2 18519310636     1131813          0                9253,6
    

    From the output above, we can see that the first plan is by far the most efficient, which is why it was marked as FIXED in the SMB.

    How to manually load an execution plan for an SQL statement into the SMB


    Facts:
    * Your customer has a query against a couple of tables with very volatile contents
    * The problem is simply solved by analyzing the tables involved using dbms_stats.gather_table_stats. After that, the query runs with acceptable performance

    Solution:
    * Add the SQL's execution plan to the SMB as a FIXED statement, so the optimizer will use it next time

    Here is how:

    1. Make sure the parameter optimizer_use_sql_plan_baselines is set to TRUE.
    This is the default in 11g.
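    The parameter can be checked with a simple query against V$PARAMETER (or with SHOW PARAMETER in sqlplus):

    ```sql
    SELECT name, value
    FROM   v$parameter
    WHERE  name = 'optimizer_use_sql_plan_baselines';
    ```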

    2. Find the sql_id for the query you want to mark as fixed.
    The easiest way is to use Enterprise Manager.
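    If you do not have Enterprise Manager available, a query against V$SQL can serve the same purpose. A sketch, assuming you can identify the statement by a fragment of its text:

    ```sql
    SELECT sql_id, plan_hash_value, executions, elapsed_time
    FROM   v$sql
    WHERE  sql_text LIKE '%your sql text%';
    ```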

    3. When found, make a note of the sql_id. Then create an STS as follows:
    BEGIN
       DBMS_SQLTUNE.create_sqlset
       (sqlset_name  => 'ACCEPTED_NESTED_LOOP_QUERY',
        description  => 'Freeze of execution plan requested by customer.',
        sqlset_owner => 'DB_ADMIN'
        );
    END;
    /
    
    4. Add the sql_id found in step 2 to your STS:
    
    DECLARE
       sqlset_cur dbms_sqltune.sqlset_cursor;
       bf VARCHAR2(82);
    BEGIN 
      bf := q'#UPPER(PARSING_SCHEMA_NAME) = 'SCOTT' AND UPPER(SQL_ID) = 'BHAANST01X9YB' #';
      OPEN sqlset_cur FOR 
      SELECT VALUE(P) FROM TABLE( 
       dbms_sqltune.select_cursor_cache(bf, NULL, NULL, NULL, NULL, 1, NULL, 'TYPICAL')) P;
       dbms_sqltune.load_sqlset( 
         sqlset_name     => 'ACCEPTED_NESTED_LOOP_QUERY',
         populate_cursor => sqlset_cur, 
         load_option     => 'MERGE', 
         update_option   => 'ACCUMULATE', 
         sqlset_owner    => 'DB_ADMIN');
    END;
    /
    

    5. Load the plan into the SQL Management Base (SMB). Put the following into a file called manual_load.sql:
    declare
       l_num_plans PLS_INTEGER;
    begin
       l_num_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
         sqlset_name  => 'ACCEPTED_NESTED_LOOP_QUERY',
         sqlset_owner => 'DB_ADMIN',
         basic_filter => NULL,
         fixed        => 'YES',
         enabled      => 'YES',
         commit_rows  => 1000
         );
       DBMS_OUTPUT.put_line('Number of plans loaded: ' || l_num_plans);
    end;
    /
    
    Result:
    $ sqlplus /nolog
    
    SQL> connect DB_ADMIN
    Enter password:
    Connected.
    SQL> set serveroutput on
    SQL> @manual_load.sql
    Number of plans loaded: 1
    
    PL/SQL procedure successfully completed.
    

    6. Verify that the plan is indeed saved:
    SELECT sql_text,
           plan_name,
           sql_handle,
           creator,
           origin,
           created, 
           last_executed,
           enabled,
           accepted,
           fixed,
           autopurge,
           module
    FROM   DBA_SQL_PLAN_BASELINES
    WHERE  SQL_TEXT like '%your sql text%';
    
    
    SQL_TEXT PLAN_NAME SQL_HANDLE CREATOR ORIGIN CREATED LAST_EXECUTED ENABLED ACCEPTED FIXED AUTOPURGE MODULE
    your statement will be visible here SYS_SQL_PLAN_50701d3df0871b58 SYS_SQL_c95f410d50701d3d DBA_ADMIN MANUAL-LOAD 06.11.2013 15:36:20.000000   YES YES YES YES TOAD background query session

    To view the execution plan that is now fixed in the SMB, use the DISPLAY_SQL_PLAN_BASELINE function of the DBMS_XPLAN package:

    sqlplus / as sysdba
    SQL>
    set long 10000
    set lines 200
    set pages 300
    SELECT *
    FROM TABLE(DBMS_XPLAN.display_sql_plan_baseline(plan_name=>'SYS_SQL_PLAN_50701d3df0871b58'));


    Further reading on the topic:
    Oracle-base.com: SQL Plan Management in Oracle Database 11g Release 1
    Oracle Database Performance Tuning Guide
    Adaptive Cursors and SQL Plan Management by Arup Nanda

    Tuesday, November 5, 2013

    Components and Granules in the SGA


    The SGA comprises a number of memory components, which are pools of memory used to satisfy a particular class of memory allocation requests.
    Examples of memory components include

    * the shared pool (used to allocate memory for SQL and PL/SQL execution)
    * the java pool (used for java objects and other java execution memory)
    * the buffer cache (used for caching disk blocks)

    All SGA components allocate and deallocate space in units of granules.
    Oracle Database tracks SGA memory use in internal numbers of granules for each SGA component.

    The memory for dynamic components in the SGA is allocated in the unit of granules. Granule size is determined by total SGA size.
    Generally speaking, on most platforms, if the total SGA size is equal to or less than 1 GB, then granule size is 4 MB.
    For SGAs larger than 1 GB, granule size is 16 MB.

    You can query the V$SGAINFO view to see the granule size that is being used by an instance.
    The same granule size is used for all dynamic components in the SGA.
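    For example, the granule size in use can be read directly from V$SGAINFO:

    ```sql
    SELECT name, bytes, bytes/1024/1024 AS mb
    FROM   v$sgainfo
    WHERE  name = 'Granule Size';
    ```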

    Source: Oracle Documentation

    How to find the shared memory kernel parameter setting for a Sun Solaris Server

    Use the utility prctl:

    prctl -n project.max-shm-memory -i project 110
    
    project: 110: user.oracle
    NAME    PRIVILEGE       VALUE    FLAG   ACTION   RECIPIENT
    project.max-shm-memory
            privileged      17.0GB      -   deny     -
            system          16.0EB    max   deny      
    
    
    Here, 17 GB is the shared memory configuration ("privileged").

    This corresponds to the setting in /etc/project:
    cat /etc/project
    system:0::::
    user.root:1::::
    noproject:2::::
    default:3::::
    group.staff:10::::
    user.oracle:110:Oracle default project:::project.max-shm-memory=(privileged,18253611008,deny)
    

    18253611008/1024/1024 = 17 GB

    Monday, November 4, 2013

    What is standard (or traditional) auditing?


    From the Oracle Documentation:

    "In standard auditing, you enable auditing of SQL statements, privileges, schema objects, and network or multitier activities. You can audit a specific schema table if you want."

    You need to set AUDIT_TRAIL to "DB" or "DB,EXTENDED" in order to start using standard auditing.
    After this is done, you may execute the AUDIT statement to start auditing of different actions according to your need.
    You can choose to write the audit-trail to the database itself, or to operating system files.
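    A minimal sketch of the setup described above (the audited object SCOTT.EMP is just an example):

    ```sql
    -- AUDIT_TRAIL is a static parameter; an instance restart is required
    ALTER SYSTEM SET audit_trail='DB','EXTENDED' SCOPE=SPFILE;

    -- after the restart, enable auditing of the actions you need, e.g.:
    AUDIT SELECT, INSERT, UPDATE, DELETE ON SCOTT.EMP BY ACCESS;
    ```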

    Oracle Database audits all data manipulation language (DML) operations, such as INSERT, UPDATE, MERGE, and DELETE on the SYS.AUD$ and SYS.FGA_LOG$ tables by non-SYS users. (It performs this audit even if you have not set audit options for the AUD$ and FGA_LOG$ tables.)

    Typically, non-SYS users do not have access to these tables, except if they have been explicitly granted access. If a non-SYS user tampers with the data in the SYS.FGA_LOG$ and SYS.AUD$ tables, then Oracle Database writes an audit record for each action.

    To view the standard audit trail, query the views DBA_AUDIT_TRAIL and DBA_COMMON_AUDIT_TRAIL.

    Source: Oracle 11g Documentation


    Note that in later releases, Oracle often refers to standard auditing as "Traditional Auditing", since Oracle 12c introduced the concept of Unified Auditing.

    What is mandatory auditing?

    Oracle Database always audits certain database-related operations and writes them to the operating system audit files.

    It includes the actions of any user who is logged in with the SYSDBA or SYSOPER privilege. This is called mandatory auditing.

    Even if you have enabled the database audit trail (that is, setting the AUDIT_TRAIL parameter to DB), Oracle Database still writes mandatory records to operating system files.

    Mandatory auditing includes the following operations:

    * Database startup
    * SYSDBA and SYSOPER logins
    * Database shutdown
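    The destination of these operating system audit files is controlled by the AUDIT_FILE_DEST parameter, which can be checked like this:

    ```sql
    SELECT value
    FROM   v$parameter
    WHERE  name = 'audit_file_dest';
    ```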

    Source: Oracle Documentation

    What is Row Movement and how is it used?

    When you add the clause "enable row movement" to a create table statement, you are giving Oracle permission to change the ROWIDs.

    Features such as flashback table depend on row movement being enabled on the table.

    To enable row movement on an existing table:

    ALTER TABLE SCOTT.EMP ENABLE ROW MOVEMENT;

    Enabling row movement can break Oracle features that rely on stored ROWIDs, such as nested tables.
    Check for such objects before you enable row movement:

    SELECT COUNT(*)
    FROM dba_nested_tables
    WHERE owner = 'SCOTT';


    Sources: Oracle Documentation and Don Burleson

    Sunday, November 3, 2013

    How to audit ALTER TABLE statements on a specific table


    Enable auditing on specific table:
    AUDIT ALTER ON SCOTT.T1;
    
    Verify that the object is being audited:
    SELECT OBJECT_NAME,OBJECT_TYPE,ALT
    FROM  DBA_OBJ_AUDIT_OPTS
    WHERE  OWNER ='SCOTT'
    ORDER BY 1 ASC; 
    
    Result:
    |------------------------------------|
    |object_name  | object_type  | alt   |
    |-------------|--------------|-------|
    |T1           | TABLE        | S/S   |
    |------------------------------------|
    
    Object is indeed being audited.

    Make a change to the table:
    ALTER TABLE SCOTT.T1
     ADD (age  number(2));
    
    Check that the change is captured in the audit-trail:
    SELECT TIMESTAMP, OS_USERNAME, OBJ_NAME,USERHOST, SESSIONID, USERNAME,ACTION_NAME, RETURNCODE
    FROM   DBA_AUDIT_TRAIL
    WHERE  OBJ_NAME = 'T1'
    AND    OWNER = 'SCOTT'
    ORDER  BY TIMESTAMP DESC;
    
    |-----------------------------------------------------------------------------------------------------------|
    |TIMESTAMP     | OS_USERNAME | OBJ_NAME  | USERHOST| SESSIONID | USERNAME | ACTION_NAME  | RETURNCODE |
    |-----------------------------------------------------------------------------------------------------------|
    |03.11.2013 08:32:28 | SCOTT | T1    | scottpc  | 172154    | SCOTT   | ALTER TABLE  | 0         |
    |03.11.2013 08:32:18 | SCOTT | T1    | scottpc  | 172154 | SCOTT   | ALTER TABLE  | 1430      |
    |03.11.2013 08:32:18 | SCOTT | T1    | scottpc  | 172154 | SCOTT   | ALTER TABLE  | 911       |
    |01.11.2013 16:14:17 | SCOTT | T1    | scottpc  | 234766 | SCOTT   | ALTER TABLE  | 0         |
    |-----------------------------------------------------------------------------------------------------------|
    
    The return codes can be checked by using the oerr utility that comes with the oracle installed binaries, for example:
    oerr ora 1430
    01430, 00000, "column being added already exists in table"
    // *Cause:
    // *Action:
    

    Note that other DDL statements, too, are captured in the audit trail for the table, such as DROP and TRUNCATE.
    If you drop the table without purging the recycle bin, the auditing will still be in place, and if you query DBA_OBJ_AUDIT_OPTS after a DROP without purge, it will show the table's recycle bin name in the view.
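    You can verify this yourself: after a DROP without purge, the same query against DBA_OBJ_AUDIT_OPTS used earlier will return the recycle bin name instead of the original table name, along the lines of:

    ```sql
    SELECT object_name, object_type, alt
    FROM   dba_obj_audit_opts
    WHERE  owner = 'SCOTT';
    -- object_name now shows something like BIN$...==$0
    ```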

    How to use flashback table

    Use a point-in-time in the past:

    FLASHBACK TABLE SCOTT.EMP TO TIMESTAMP
    TO_TIMESTAMP('09-12-2008 14:00:00', 'DD-MM-YYYY HH24:MI:SS');
    
    Notice the peculiar syntax:
    "TO TIMESTAMP TO_TIMESTAMP..."
    
    Use a specific SCN:
    FLASHBACK TABLE SCOTT.EMP TO SCN 144003819;
    

    If you want to flash back multiple tables, you can use many table names separated by a comma as follows:
    FLASHBACK TABLE SCOTT.EMP, SCOTT.DEPT TO SCN 144003819;
    

    Use a restore point:
    FLASHBACK TABLE SCOTT.EMP TO RESTORE POINT BEFORE_MAINTENANCE;
    
    Flashback to before drop:
    FLASHBACK TABLE SCOTT.EMP TO BEFORE DROP;
    

    You can add the RENAME TO clause to rename the retrieved table:
    FLASHBACK TABLE SCOTT.EMP TO BEFORE DROP RENAME TO EMP_OLD;
    

    Note:
    For Oracle Flashback Table to succeed, the system must retain enough undo information to satisfy the specified SCN or timestamp, and the integrity constraints specified on the tables cannot be violated. Also, row movement must be enabled on the table.

    Row movement must be enabled for the table you attempt to flash back. It can be switched on right before you use the flashback table feature. Flashing back to a time when row movement was *not* enabled is no problem; you can still use flashback table, as long as row movement is enabled *at the time* you execute the flashback table command.

    There is one important exception to the requirement that row movement must be enabled: flashing back a table TO BEFORE DROP. Oracle calls such an operation a "flashback drop", and it uses the dropped data in the recycle bin rather than undo data.

    You can check the content of the recyclebin before you flash back a table to before it was dropped, like this:
    select owner,object_name,original_name,operation,droptime,can_undrop from dba_recyclebin where original_name='EMP';
    
    OWNER                          OBJECT_NAME                    ORIGINAL_NAME                  OPERATION DROPTIME            CAN
    ------------------------------ ------------------------------ ------------------------------ --------- ------------------- ---
    SCOTT                  BIN$StmkAPSYpxbgUzwGAQph5w==$0 EMP               DROP      2017-03-16:14:00:09 YES
    



    There are some restrictions on flashback table. Read about them in the Oracle documentation under "Notes on Flashing Back Dropped Tables".