Humans will often need some time to react to the output from your scripts. To briefly "pause" the output from a pl/sql script, add the following directive:
dbms_lock.sleep(no_of_seconds);
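A minimal sketch of how this can be used inside an anonymous PL/SQL block (assuming the executing user has EXECUTE on DBMS_LOCK; newer releases also expose the same functionality through dbms_session.sleep):

set serveroutput on
BEGIN
  dbms_output.put_line('First step done - pausing for 5 seconds so the output can be read...');
  dbms_lock.sleep(5);   -- suspends the session for approximately 5 seconds
  dbms_output.put_line('Resuming.');
END;
/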
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to", since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Thursday, October 31, 2013
How to enable tracing for a session
To enable tracing for a session, use the dbms_monitor package:
execute dbms_monitor.session_trace_enable(session_id => 127, serial_num => 29, waits => TRUE, binds => FALSE);
Disable as follows:
execute dbms_monitor.session_trace_disable(127,29);
Alternatively, use the dbms_system package:
execute dbms_system.set_sql_trace_in_session(17,45171,true);
When finished, disable tracing:
dbms_system.stop_trace_in_session(17,45171);
To enable the writing of a trace file for your current session:
execute dbms_session.set_sql_trace(true);
Note that the user needs the ALTER SESSION privilege to perform tracing:
GRANT ALTER SESSION TO scott;
Note 293661.1 published on MOS is another great source to use the dbms_monitor package efficiently.
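Once tracing is enabled, you will usually want to locate the resulting trace file. A couple of lookups I find handy (assuming 11g or later, where v$diag_info and v$process.tracefile are available):

-- trace file for your own session:
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';

-- trace file for another session, identified by its SID (here 127):
SELECT p.tracefile
FROM v$process p JOIN v$session s ON s.paddr = p.addr
WHERE s.sid = 127;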
Syntax for using the dbms_stats.gather_table_stats and gather_schema_stats procedures
To use the default options:
BEGIN
  SYS.DBMS_STATS.GATHER_TABLE_STATS (
     OwnName          => 'SCOTT'
    ,TabName          => 'EMP'
    ,Estimate_Percent => DBMS_STATS.AUTO_SAMPLE_SIZE
    ,method_opt       => 'FOR ALL COLUMNS SIZE AUTO'
    ,Degree           => NULL
    ,Cascade          => DBMS_STATS.AUTO_CASCADE
    ,granularity      => 'AUTO'
    ,No_Invalidate    => DBMS_STATS.AUTO_INVALIDATE);
END;
/

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS (
    ownname          => 'SCOTT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.DEFAULT_DEGREE,
    granularity      => 'ALL',
    cascade          => DBMS_STATS.AUTO_CASCADE,
    options          => 'GATHER',
    no_invalidate    => DBMS_STATS.AUTO_INVALIDATE);
END;
/
estimate_percent is the percentage of rows to estimate. NULL means compute statistics, which gives the most accurate results, but may be completely inappropriate for large tables. Use the constant DBMS_STATS.AUTO_SAMPLE_SIZE to let Oracle figure out the appropriate sample size automatically. In my opinion, this is sufficient for most situations.
granularity is the granularity of statistics to collect (only relevant if the table is partitioned).
'ALL' - gathers all (subpartition, partition, and global) statistics
'AUTO' - determines the granularity based on the partitioning type. (DEFAULT)
'DEFAULT' - This option is obsolete. You should use the 'GLOBAL AND PARTITION' for this functionality.
'GLOBAL' - gathers global statistics
'GLOBAL AND PARTITION' - gathers the global and partition level statistics. No subpartition level statistics are gathered even if it is a composite partitioned object
'PARTITION' - gathers partition-level statistics
'SUBPARTITION' - gathers subpartition-level statistics
cascade means whether or not statistics for the indexes on this table should be gathered.
Using this option is equivalent to running the GATHER_INDEX_STATS Procedure on each of the table's indexes.
Use the constant DBMS_STATS.AUTO_CASCADE to have Oracle determine whether index statistics should be collected (the default).
degree is the degree of parallelism. The default is NULL, and instructs Oracle to use the DOP (Degree of Parallelism) specified by the DEGREE-clause when the table or index was created. By using the constant DBMS_STATS.DEFAULT_DEGREE you tell Oracle to calculate the DOP for you, using the formula (PARALLEL_THREADS_PER_CPU * CPU_COUNT). Oracle may still choose to use serial execution, if the size of the Object doesn't warrant a parallel execution. See this post for more information about the DEGREE setting.
method_opt controls whether and how histograms are collected.
It can be:
FOR ALL [INDEXED | HIDDEN] COLUMNS [size_clause]
FOR COLUMNS [size clause] column|attribute [size_clause] [,column|attribute [size_clause]...]
size_clause: SIZE {integer | REPEAT | AUTO | SKEWONLY}
integer : Number of histogram buckets. Must be in the range [1,254].
'REPEAT' - Collects histograms only on the columns that already have histograms.
'AUTO' - Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.
'SKEWONLY' - Oracle determines the columns to collect histograms based on the data distribution of the columns.
The default is FOR ALL COLUMNS SIZE AUTO. Note that you can change what's default using the SET_PARAM Procedure.
options: 'GATHER' (the default) forces a complete re-gathering of statistics for all objects in the schema; 'GATHER AUTO' gathers statistics only for objects that Oracle determines have stale or missing statistics.
What is a histogram?
Column statistics may be stored as histograms. These histograms provide accurate estimates of the distribution of column data. Histograms provide improved selectivity estimates in the presence of data skew, resulting in optimal execution plans with nonuniform data distributions.
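To check which columns ended up with histograms after gathering statistics, you can query the dictionary, for example like this (a sketch; adjust owner and table name):

SELECT column_name, num_distinct, num_buckets, histogram
FROM dba_tab_col_statistics
WHERE owner = 'SCOTT'
AND table_name = 'EMP'
ORDER BY column_name;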
About privileges:
To call this procedure you must either a) be owner of the table or b) have been granted the ANALYZE ANY privilege.
See also this post about how to gather statistics on a specific partition only
Source: Oracle documentation
Wednesday, October 30, 2013
How to create a DDL trigger in a schema
CREATE TABLE AUDIT_DDL (
  d            date,
  OSUSER       varchar2(255),
  CURRENT_USER varchar2(255),
  HOST         varchar2(255),
  TERMINAL     varchar2(255),
  owner        varchar2(30),
  type         varchar2(30),
  name         varchar2(30),
  sysevent     varchar2(30));
connect uid/pwd
CREATE OR REPLACE TRIGGER audit_ddl_trg
AFTER ddl ON SCHEMA
BEGIN
  IF (ora_sysevent = 'TRUNCATE') THEN
    null; -- If we do not care about truncate
  ELSE
    INSERT INTO audit_ddl(d, osuser, current_user, host, terminal, owner, type, name, sysevent)
    VALUES (sysdate,
            sys_context('USERENV','OS_USER'),
            sys_context('USERENV','CURRENT_USER'),
            sys_context('USERENV','HOST'),
            sys_context('USERENV','TERMINAL'),
            ora_dict_obj_owner,
            ora_dict_obj_type,
            ora_dict_obj_name,
            ora_sysevent);
  END IF;
END;
/
Source: Don Burleson http://www.dba-oracle.com/t_ddl_triggers.htm
How to print the predefined error message for an error number in pl/sql
dbms_output.put_line(sqlerrm(sqlcode));
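A minimal sketch of how this is typically used in an exception handler (the failing INSERT is just a hypothetical example):

set serveroutput on
BEGIN
  INSERT INTO emp (empno) VALUES (NULL);  -- hypothetical statement expected to fail
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(sqlerrm(sqlcode)); -- prints the predefined message for the current error
    RAISE;
END;
/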
Tuesday, October 29, 2013
How to set the current schema in a session
alter session set current_schema=SCOTT;
The CURRENT_SCHEMA parameter changes the current schema of the session to the specified schema.
Subsequent unqualified references to schema objects during the session will resolve to objects in the specified schema.
The setting persists for the duration of the session or until you issue another ALTER SESSION SET CURRENT_SCHEMA statement.
This setting offers a convenient way to perform operations on objects in a schema other than that of the current user
without having to qualify the objects with the schema name.
This setting changes the current schema, but it does not change the session user or the current user,
nor does it give the session user any additional system or object privileges for the session.
Source: Oracle Documentation
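A short illustration (assuming your user has been granted SELECT on SCOTT's tables):

ALTER SESSION SET current_schema = SCOTT;

-- the unqualified table name now resolves to SCOTT.EMP:
SELECT count(*) FROM emp;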
Monday, October 28, 2013
How to exclude certain file systems when searching for files belonging to tablespaces
This statement can be useful, for example, when reorganizing databases:
SELECT file_name FROM dba_data_files
WHERE file_name NOT LIKE '/data/oracle/u01%'
INTERSECT
SELECT file_name FROM dba_data_files
WHERE file_name NOT LIKE '/data/oracle/u02%';
Examples of expdp and impdp parameter files, and some useful directives
Export:
USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=expdp_log_file_name.log
JOB_NAME=EXP_DATA
PARALLEL=8
SCHEMAS=schema1,[schema2],[schema n]
EXCLUDE=STATISTICS
To get a consistent dump file, add the flashback_time or flashback_scn directive, for example:
FLASHBACK_TIME="to_timestamp(to_char(sysdate,'ddmmyyyy hh24:mi:ss'),'ddmmyyyy hh24:mi:ss')"
Import:
USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=impdp_log_file_name.log
JOB_NAME=IMP_DATA
SCHEMAS=schema1,[schema2],[schema n]
REMAP_SCHEMA=SCOTT:JAMES
REMAP_TABLE=SCOTT.MYTABLE:MYTABLE_INTERIM
REMAP_TABLESPACE=USERS:TOOLS
PARALLEL=8
TRANSFORM=oid:n
TABLE_EXISTS_ACTION=REPLACE
Note: the directive "TRANSFORM=oid:n" makes sure all imported objects will be assigned a new OID instead of inhibiting the old OID from exported objects.
You can remap serveral tablespaces during the same run, just specify the directive multiple times in the same parameter file:
remap_tablespace=USER_DATA:HISTORICAL_DATA
remap_tablespace=APP_DATA:HISTORICAL_DATA
remap_tablespace=LOB_DATA:HISTORICAL_DATA
To limit export or import to specific tables only, use:
tables=SCOTT.EMP, SCOTT.DEPT, SCOTT.SALARIES
To limit export or import to specific partitions only, use:
TABLES=SCOTT.SALES:P_201701
To limit export or import to partitions matching a specific pattern, use:
TABLES=SCOTT.SALES:CONTRACT%2021%
Note that you cannot use both SCHEMAS and TABLES as directives in the same export/import session.
Other useful directives:
CONTENT=METADATA_ONLY (only export object definitions, not data)
EXCLUDE=TABLE:"LIKE 'ABC%_TMP'" (exclude tables based on a name pattern)
EXCLUDE=TABLE:"= 'BONUS'" (exclude an entire table and its partitions, if any)
EXCLUDE=INDEX:"LIKE 'EMP%'" (exclude indexes that match a certain string)
EXCLUDE=VIEW,PACKAGE,FUNCTION (exclude certain types of objects)
EXCLUDE=TRIGGER:"IN ('TRIG1', 'TRIG2')", INDEX:"='INDX1'", REF_CONSTRAINT (exclude specific triggers, a specific index and referential constraints)
EXCLUDE=SEQUENCE, TABLE:"IN ('EMP', 'DEPT')" (exclude sequences and specific tables)
EXCLUDE=INDEX:"='MY_INDX'" (exclude a specific index)
EXCLUDE=MATERIALIZED_VIEW (exclude materialized views)
EXCLUDE=MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG (exclude materialized views + mv logs)
INCLUDE=FUNCTION, PACKAGE, TABLE:"='EMP'" (only include specific object types, and tables called "EMP")
INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'" (only include procedures named according to a specific string)
QUERY=SCOTT.TABLE1:"WHERE 1=0" (no data is exported, but the table is created)
QUERY=SCOTT.TABLE2:"WHERE SALES_DATE >= to_date('01.12.2011','dd.MM.yyyy')" (exports only rows with SALES_DATE on or after 01.12.2011)
EXCLUDE=SCHEMA:"='SCOTT'" (exclude a specific user and all objects of that user)
Note:
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'SCOTT'", only the CREATE USER scott DDL statement will be excluded, and you may not get the results you expect.
To import only the user definition over a database link
See this post.
To compress tables on import
See this post
A note on integrity checking in data pump import TABLE_EXISTS_ACTION
TRUNCATE is subject to stricter integrity checking than REPLACE.
• TRUNCATE deletes existing rows and then loads rows from the source.
• REPLACE drops the existing table and then creates and loads it from the source.
• When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
• When you use TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored.
• For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded.
• When you use TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.
If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints.
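For reference, a sketch of the relevant lines in an import parameter file when you want to truncate existing tables and skip rows that violate constraints (using the directives mentioned above):

TABLE_EXISTS_ACTION=TRUNCATE
DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS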
A note about excluding Constraints
The following constraints cannot be excluded:
• NOT NULL constraints.
• Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).
Examples:
Exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading:
EXCLUDE=CONSTRAINT
Exclude referential integrity (foreign key) constraints:
EXCLUDE=REF_CONSTRAINT
Turn on compression during export:
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}
For most cases, I would stick to the MEDIUM compression algorithm. Oracle states in the 12.2 documentation:
Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.
I find compression of exports very useful when space is an issue, or transfer between servers over the network is needed.
Sources: Oracle 11g Documentation
Oracle 12cR2 documentation
Source: My Oracle Support
How to only export specific partitions using Oracle's data pump utility
In your parameter file, use the following notation:
TABLES=SCOTT.TABLE1:PART_1
You can export more partitions in the same run:
TABLES=SCOTT.TABLE1:PART_1,
TABLES=SCOTT.TABLE2:PART_3,
TABLES=SCOTT.TABLE2:PART_4
Note that you may not export data from multiple schemas in this mode; in other words, the notation:
TABLES=SCOTT.TABLE1:PART_1,
TABLES=HR.TABLE1:PART_1
will throw an error:
UDE-00012: table mode exports only allow objects from one schema
Wednesday, October 23, 2013
How to create a Flashback Data Archive
Remember that tablespaces used for flashback data archive must use ASSM (Automatic Segment Space Management)
Create a tablespace (here I am using Oracle Managed Files, no file specification necessary)
SQL> CREATE TABLESPACE LAST_MONTH;
Tablespace created.
SQL> CREATE TABLESPACE LAST_YEAR;
Tablespace created.
Next, create the flashback archive containers:
SQL> CREATE FLASHBACK ARCHIVE ONE_MONTH TABLESPACE LAST_MONTH RETENTION 1 MONTH;
FLASHBACK ARCHIVE CREATED.
SQL> CREATE FLASHBACK ARCHIVE ONE_YEAR TABLESPACE LAST_YEAR RETENTION 1 YEAR;
FLASHBACK ARCHIVE CREATED.
Prepare the table so it can use flashback archive:
SQL> ALTER TABLE SCOTT.CUSTOMERS ENABLE ROW MOVEMENT;
TABLE ALTERED.
Enable flashback archiving by assigning the table to the appropriate flashback archive container:
SQL> ALTER TABLE SCOTT.CUSTOMERS FLASHBACK ARCHIVE LAST_YEAR;
TABLE ALTERED.
To reverse the operations:
SQL> ALTER TABLE CUSTOMERS NO FLASHBACK ARCHIVE;
TABLE ALTERED.
SQL> DROP FLASHBACK ARCHIVE ONE_YEAR;
FLASHBACK ARCHIVE DROPPED.
SQL> DROP FLASHBACK ARCHIVE ONE_MONTH;
FLASHBACK ARCHIVE DROPPED.
Useful views:
DBA_FLASHBACK_ARCHIVE
DBA_FLASHBACK_ARCHIVE_TABLES
DBA_FLASHBACK_ARCHIVE_TS
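A quick sketch of how these views can be used to verify the setup (column names as I recall them from the standard data dictionary; check against your version):

-- the archives and their retention:
SELECT flashback_archive_name, retention_in_days FROM dba_flashback_archive;

-- the tablespaces backing each archive:
SELECT flashback_archive_name, tablespace_name FROM dba_flashback_archive_ts;

-- the tables currently tracked, and by which archive:
SELECT owner_name, table_name, flashback_archive_name FROM dba_flashback_archive_tables;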
What are the potential use cases for Flashback Data Archive, introduced in Oracle 11gR1?
Flashback Data Archive is handy for many purposes. Here are some ideas:
- To audit for recording how data changed
- To enable an application to undo changes (correct mistakes)
- To debug how data has been changed
- To comply with some regulations that require data must not be changed after some time.
Flashback Data Archives are not regular tables so they can't be changed by typical users.
- Recording audit trails on cheaper storage thereby allowing more retention at less cost
Remember: inserts do not create archive records, because they are not changes to existing data.
Source: Arup Nanda
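The archived row versions are read back with ordinary flashback query syntax. A minimal sketch against the CUSTOMERS table used earlier (the column in the WHERE clause is hypothetical):

SELECT *
FROM scott.customers AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '30' DAY)
WHERE customer_id = 42;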
Tuesday, October 22, 2013
What are the different types of database links used in Oracle?
Connected user
A local user accessing a database link in which no fixed username and password have been specified.
Note that the connected user does not have to be the user who created the link; it is whichever user is accessing the link.
Example:
CREATE PUBLIC DATABASE LINK sales USING 'sales_db';
Fixed user
A user whose username/password is part of the link definition.
CREATE PUBLIC DATABASE LINK sales CONNECT TO scott IDENTIFIED BY tiger USING 'sales_db';
If you are logged in locally as scott and try to use the database link, a connection as scott is made on the remote side, too.
If the user scott doesn't exist in the remote database, an error will be thrown.
In some cases the database link must contain the entire connect string:
CREATE PUBLIC DATABASE LINK mydatabaselink CONNECT TO scott IDENTIFIED BY tiger USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=remoteserver1.domain.com)(PORT=1521))(CONNECT_DATA= (SID=proddb01)))';
Current user
The link uses the userid/password of the current user to log onto the remote database.
CREATE DATABASE LINK sales CONNECT TO CURRENT_USER USING 'sales';
Note: Current user links are only available through the Oracle Advanced Security option.
Database links are either private or public, authenticated or non-authenticated.
Specify PUBLIC to create a public database link available to all users.
If the value of the GLOBAL_NAMES initialization parameter is TRUE, then the database link must have the same name as the database to which it connects.
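Once a link exists, it is used by appending @link_name to the remote object name, and dropped when no longer needed. A small sketch using the 'sales' link from the examples above:

-- query a remote table through the link:
SELECT * FROM emp@sales;

-- drop the link when it is no longer needed:
DROP PUBLIC DATABASE LINK sales;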
How to generate scripts to move partitions to a new tablespace while updating the indexes
Probably many ways of doing this, but here is how I generated scripts to move table partitions from one tablespace to another:
-- Find table partitions with GLOBAL indexes:
SELECT 'ALTER TABLE ' || TABLE_OWNER || '.' || TABLE_NAME || ' MOVE PARTITION "' || PARTITION_NAME || '" TABLESPACE DATA2 NOCOMPRESS UPDATE GLOBAL INDEXES;'
FROM dba_tab_partitions
WHERE table_owner = 'USER1'
AND tablespace_name = 'DATA1'
AND subpartition_count = 0
AND table_name IN (SELECT DISTINCT table_name FROM dba_part_indexes WHERE owner = 'USER1' AND locality = 'GLOBAL')
ORDER BY table_name DESC;

-- Find table partitions with LOCAL indexes:
SELECT 'ALTER TABLE ' || TABLE_OWNER || '.' || TABLE_NAME || ' MOVE PARTITION "' || PARTITION_NAME || '" TABLESPACE DATA2 NOCOMPRESS UPDATE INDEXES;'
FROM dba_tab_partitions
WHERE table_owner = 'USER1'
AND tablespace_name = 'DATA1'
AND subpartition_count = 0
AND table_name IN (SELECT DISTINCT table_name FROM dba_part_indexes WHERE owner = 'USER1' AND locality = 'LOCAL')
ORDER BY table_name DESC;

-- Find table partitions with no index at all:
SELECT 'ALTER TABLE ' || TABLE_OWNER || '.' || TABLE_NAME || ' MOVE PARTITION "' || PARTITION_NAME || '" TABLESPACE DATA2 NOCOMPRESS;'
FROM dba_tab_partitions
WHERE table_owner = 'USER1'
AND tablespace_name = 'DATA1'
AND subpartition_count = 0
AND table_name NOT IN (SELECT table_name FROM dba_part_indexes WHERE owner = 'USER1')
ORDER BY table_name DESC;
How to delete lines in vi (unix)
d1G = delete to top including current line
dG = delete from current line to bottom of file, including current line
How to make a table read only (11g only)
ALTER TABLE table_name READ ONLY;
The command would make the table read-only even for its owner.
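A brief sketch of the expected behaviour; as far as I recall, DML against a read-only table is rejected with ORA-12081:

ALTER TABLE emp READ ONLY;

DELETE FROM emp;
-- fails with something like: ORA-12081: update operation not allowed on table "SCOTT"."EMP"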
To reverse the operation:
ALTER TABLE table_name READ WRITE;
Monday, October 21, 2013
What is the definition of "Partition Pruning"?
From the Oracle 11 documentation
Oracle Database explicitly recognizes partitions and subpartitions.
It then optimizes SQL statements to mark the partitions or subpartitions that need to be accessed and eliminates (prunes) unnecessary partitions or subpartitions from access by those SQL statements.
In other words, partition pruning is the skipping of unnecessary index and data partitions or subpartitions in a query.
See also my post about the difference between dynamic and static pruning
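As a sketch, assuming a SALES table range-partitioned on SALE_DATE (hypothetical names), a predicate on the partitioning key lets the optimizer skip every partition outside the requested range:

SELECT SUM(amount)
FROM sales
WHERE sale_date BETWEEN DATE '2013-01-01' AND DATE '2013-12-31';

-- the execution plan (PARTITION RANGE ITERATOR, with the Pstart/Pstop columns) shows which partitions are actually accessed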
How to move a table containing LOB columns to a new tablespace
Find out which tables have LOB objects:
SELECT TABLE_NAME, COLUMN_NAME, SEGMENT_NAME, TABLESPACE_NAME, INDEX_NAME
FROM DBA_LOBS
WHERE OWNER = 'USR1';

Generate a "move table" script:

select 'alter table ' || owner || '.' || table_name || ' move lob(' || column_name || ') store as (tablespace LOB_DATA);'
from dba_tab_columns c
where c.owner = 'USR1'
and c.data_type like '%LOB%';
Query returns the following statements:
alter table USR1.LG_LOG move lob(MESSAGE_CONTEXT) store as (tablespace LOB_DATA);
alter table USR1.TFW_TEST_RESULT move lob(SQL_NUM_MATCHED_ERROR_ROWS) store as (tablespace LOB_DATA);
Note:
The LOB index is an internal structure that is strongly associated with LOB storage.
This implies that a user may not drop the LOB index or rebuild it.
Sunday, October 20, 2013
How to disable flashback archiver process, introduced to support Oracle 11gR1 feature "Flashback Data Archive"
If you do not use the flashback data archive feature, and have no plans to do so, you might as well disable the process, even though it is defined as "lightweight".
First, confirm that the operating system process is running:
$ ps -ef | grep FBDA
oracle 18199 8264 0 13:22:12 pts/5 0:00 grep FBDA
--> The background process FBDA is running
Check the setting
sqlplus / as sysdba:
select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value" from x$ksppi a, x$ksppcv b, x$ksppsv c where a.indx = b.indx and a.indx = c.indx and substr(ksppinm,1,1)='_' and ksppinm like '%archiver%' order by a.ksppinm; Parameter Session Value Instance Value ----------------------------------- --------------- -------------- _disable_flashback_archiver 0 0 _flashback_archiver_partition_size 0 0by setting the parameter _disable_flashback_archiver and restarting your instance, you disable the flashback archiver process:
alter system set "_disable_flashback_archiver"=1 scope=spfile;
Saturday, October 19, 2013
How to instruct Oracle to use the flash recovery area as archive log destination
Find out what the current setting is:
sqlplus / as sysdba

SQL> show parameter db_recovery

NAME                         TYPE          VALUE
---------------------------- ------------- ------------------------
db_recovery_file_dest        string        /data/oracle/PRODDB/f01
db_recovery_file_dest_size   big integer   500G

SQL> show parameter log_archive

NAME                        TYPE           VALUE
--------------------------- -------------- --------------------------
log_archive_dest_1          string         LOCATION=/data/oracle/PRODDB/u01/newarchloc/
Then, switch to the flash recovery area:
alter system set log_archive_dest_1='location=use_db_recovery_file_dest' scope=both;
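To verify that the change took effect, check the destination afterwards, for example:

SQL> archive log list

-- or query the destination directly:
SELECT dest_name, destination, status
FROM v$archive_dest
WHERE dest_name = 'LOG_ARCHIVE_DEST_1';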
How to use the dbms_scheduler package to collect intermittent statistics on volatile tables
In the following example I am using the dbms_scheduler package to analyze certain volatile tables every hour:
BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
     job_name        => 'GATHERSTATS'
    ,start_date      => TO_TIMESTAMP_TZ('2009/05/13 12:17:57.508450 +02:00','yyyy/mm/dd hh24:mi:ss.ff tzh:tzm')
    ,repeat_interval => 'FREQ=HOURLY;'
    ,end_date        => NULL
    ,job_class       => 'DEFAULT_JOB_CLASS'
    ,job_type        => 'PLSQL_BLOCK'
    ,job_action      => 'begin
                           dbms_stats.gather_table_stats(
                             ownname          => ''TBR'',
                             tabname          => ''LOGONS'',
                             estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                             cascade          => DBMS_STATS.AUTO_CASCADE,
                             degree           => null,
                             no_invalidate    => DBMS_STATS.AUTO_INVALIDATE,
                             granularity      => ''AUTO'',
                             method_opt       => ''FOR ALL COLUMNS SIZE AUTO'');
                         end;'
    ,comments        => 'your comment here');
END;
/
Oracle 11g enhanced partitioning options
Reference partitioning
======================
allows you to equi-partition related tables in the same database, even if the partitioning key columns are not present in all the child tables.
Interval partitioning
======================
embodies that highly desirable fire-and-forget power. You define an interval and Oracle takes care of the maintenance, forever.
Extended composite partitioning
===============================
The extension of composite partitioning to range-range, list-range, list-hash, and list-list exposes new possibilities for better partitioning choices and manageability.
Transportable partitions
=========================
Data Pump now allows you to transport and plug-in a single partition, a feature that is guaranteed to be quite useful in archival and retention.
Virtual column partitioning
============================
You can design the best possible partitioning strategy that mimics business flow by partitioning on virtual columns.
System Partitioning
===================
If a table lacks a column that is a candidate for partitioning, the table can still be partitioned by using system partitioning,
where Oracle creates as many partitions as you specify.
The partitions must be explicitly referenced by the developers and users.
So instead of the statement:
insert into sales3 values (1,101,1);
you will have to use:
insert into sales3 partition (p1) values (1,101,1);
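For illustration, a sketch of how such a table could be created (the column names are assumptions; the point is the PARTITION BY SYSTEM clause with explicitly named partitions):

CREATE TABLE sales3
( sale_id  NUMBER,
  prod_id  NUMBER,
  quantity NUMBER
)
PARTITION BY SYSTEM
( PARTITION p1,
  PARTITION p2
);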
Source: Arup Nanda
How to perform an online reorganization of a table - step by step
First, check that your user is eligible to use the DBMS_REDEFINITION package. It requires generous privileges.
Step 1: determine if the table can be reorganized:
whenever sqlerror exit
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept move_method prompt 'Move method (cons_use_pk or cons_use_rowid): '
set serveroutput on
EXEC DBMS_REDEFINITION.can_redef_table('&&owner','&&table_name',dbms_redefinition.&&move_method);
exit

Step 2: Create an interim table with the desirable layout:
If the outcome of step one was positive, you can proceed. You can for example create the interim table partitioned, while the original table is a normal table without partitions.
Important:
If you copy/paste DDL from SQL Developer, TOAD or similar tools, and use this as a template for your interim table, make sure *all* columns allow NULL values.
Otherwise, you will receive errors when running the COPY_TABLE_DEPENDENTS procedure in step 4
Step 3: start the redefinition:
set serveroutput on
set feedback off
set verify off
whenever sqlerror exit
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept interim_table_name prompt 'Interim table name: '
accept move_method prompt 'Move method (cons_use_pk or cons_use_rowid): '
begin
  DBMS_REDEFINITION.start_redef_table(uname=>'&&owner',
                                      orig_table=>'&&table_name',
                                      int_table=>'&&interim_table_name',
                                      col_mapping=>NULL,
                                      options_flag=>dbms_redefinition.&&move_method);
end;
/
exit

Step 4: Copy the dependent objects to the interim table
If you want the indexes from the original table to be copied over along with triggers, constraints etc, use the directive
copy_indexes=>dbms_redefinition.cons_orig_params
If you prefer creating the indexes manually after you have finished the redefinition, use
copy_indexes=>0
whenever sqlerror exit
set serveroutput on
set feedback off
set verify off
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept interim_table prompt 'Interim table name: '
DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(uname=>'&&owner',
                                          orig_table=>'&&table_name',
                                          int_table=>'&&interim_table',
                                          -- copy_indexes=>dbms_redefinition.cons_orig_params,
                                          copy_indexes=>0,
                                          copy_triggers=>TRUE,
                                          copy_constraints=>TRUE,
                                          copy_privileges=>TRUE,
                                          ignore_errors=>FALSE,
                                          num_errors => l_num_errors,
                                          copy_statistics=>TRUE,
                                          copy_mvlog=>TRUE);
  DBMS_OUTPUT.put_line('l_num_errors=' || l_num_errors);
END;
/
exit

Step 5: synchronize the interim table with potential intermittent transactions:
whenever sqlerror exit
set serveroutput on
set feedback off
set verify off
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept interim_table prompt 'Interim table name: '
begin
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(uname=>'&&owner',orig_table=>'&&table_name',int_table=>'&&interim_table');
end;
/
exit

Step 6: finish the redefinition:
whenever sqlerror exit
set serveroutput on
set feedback off
set verify off
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept interim_table prompt 'Interim table name: '
begin
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname=>'&&owner', orig_table=>'&&table_name', int_table=>'&&interim_table');
end;
/
exit

At any time before the execution of FINISH_REDEF_TABLE, you can choose to abort the procedure by calling the dbms_redefinition.abort_redef_table procedure:
whenever sqlerror exit
accept table_name prompt 'Table name: '
accept owner prompt 'Table owner: '
accept interim_table prompt 'Interim table name: '
set serverout on
exec dbms_redefinition.abort_redef_table(uname=>'&&owner',orig_table=>'&&table_name',int_table=>'&&interim_table');
exit
Tip: when you create your interim table, give it a short name, like MYTABLE.
I have had cases where longer names, even though they are below the 30 character limit, will create trouble during the execution of the DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS procedure.
Errors during the redefinition can be seen from the DBA_REDEFINITION_ERRORS view:
SET LONG 2000
SET LINES 300
COL OBJECT_TYPE FORMAT A20
COL OBJECT_OWNER FORMAT A10
COL OBJECT_NAME FORMAT A20
COL BASE_TABLE_NAME FORMAT A20
COL DDL_TXT FORMAT A50
COL ERR_TXT FORMAT A100

SELECT OBJECT_TYPE, OBJECT_OWNER, OBJECT_NAME, BASE_TABLE_NAME, DDL_TXT, ERR_TXT
FROM DBA_REDEFINITION_ERRORS;
Some useful session views - short description
V$SESSION --> displays information about the current session
V$SESSION_EVENT --> shows all the wait events waited for by the session. Note that it will show the data only if the session is still active in the database
V$SESSTAT --> shows the resource usage for a session
Some useful statistics visible in the V$SESSTAT view:
- physical reads: the number of database blocks retrieved from disk
- db block changes: the number of database blocks changed in the session
- bytes sent via SQL*Net to client: the number of bytes sent to the client over the network, which is used to determine the outbound data traffic to the client
V$ACTIVE_SESSION_HISTORY --> shows historical session information
DBA_HIST_ACTIVE_SESS_HISTORY --> contains the same data as V$ACTIVE_SESSION_HISTORY, but with less granularity; samples are retained only every 10th second
How to use sqlerrm to reveal the meaning of audit information
As an example, the audit information may look like the following:
ALTER SESSION SET NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS';
SELECT os_username, userhost, timestamp, returncode FROM dba_audit_session WHERE action_name = 'LOGON' AND returncode > 0 AND trunc(timestamp) = to_date('08.07.2013', 'DD.MM.YYYY') AND username = 'TSF' ORDER BY timestamp DESC;
Result:
OS_USERNAME USERHOST    TIMESTAMP           RETURNCODE
----------- ----------- ------------------- ----------
billyb      CHM06071130 08.07.2013 12:24:07       1017
billyb      CHM06071130 08.07.2013 10:06:06      28000

You can use the sqlerrm keyword to print the meaning of a return code, like this:

exec dbms_output.put_line(sqlerrm(-1017));
exec dbms_output.put_line(sqlerrm(-28000));

These commands will yield:
ORA-01017: invalid username/password; logon denied ORA-28000: the account is locked
How to name a trace file to find it more easily
alter session set tracefile_identifier = 'your string for easy recognition here';
For example:
alter session set tracefile_identifier = 'data_pump_import_trace.trc';
If you trace your session now, the trace file will be much easier to find in the udump directory (pre-11g) or in the diagnostic_dest/trace directory (11g and onwards)
How to set a timeout for DDL operations in oracle11g - to avoid "resource busy" error
ALTER SESSION SET DDL_LOCK_TIMEOUT = 10;
when a DDL statement in the session does not get the exclusive lock, it will not error out.
Instead, it will wait for 10 seconds. In that 10 seconds, it continually re-tries the DDL operation until it is successful or the time expires, whichever comes first.
For example, if the following statement is executed:
SQL> alter table sales add (tax_code varchar2(10));
the statement waits for the lock instead of erroring out immediately.
Arup Nanda describes it as "somewhat like a telephone programmed to re-try a busy number".
To set it database-wide:
ALTER SYSTEM SET DDL_LOCK_TIMEOUT = 10;
The reason for "ORA-12052: cannot fast refresh materialized view" during Online Redefinition
Online Redefinition of an object relies on Materialized View (MView) logs to keep track of all data changes in the table under redefinition *while* the redefinition is going on.
If there is any change in the definition (DDL), there is no guarantee that the MView logs can capture all changes (they can only track DML).
The internal process works like this:
- Set up Mview log to track any changes on the table and consider the temp table as an MView of the original table
- Transfer all the data in the original table to the temporary table
- WHILE this is going on, any DATA changes to the table will be logged in the MView log
- If you perform a SYNC, Oracle essentially performs a "MVIEW FAST REFRESH" using the MView log to replicate the data changes. The MView log is then cleared.
- Once the data transfer is complete, the MView log (if not empty) is processed - the "MView" / temp table is now up-to-date.
- Upon executing dbms_redefinition.finish_redef_table, the MView log is dropped, and the MView is converted back to a simple table.
- After that, the names of the original and temp table are switched automatically.
If at any point during this process, the structure of the table is changed or DDL in general is executed (say, TRUNCATE, adding a column, dropping a partition, exchanging a partition), then the MView log cannot in itself guarantee it has all the information needed to "replicate" this change of the original table.
Hence, the MView log is regarded as "unusable" and the error you observed occurs:
ORA-42009: error occurred while synchronizing the redefinition
ORA-12052: cannot fast refresh materialized view BWALM.TOMBA
ORA-06512: at "SYS.DBMS_REDEFINITION", line 119
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1740
ORA-06512: at line 2
Oracle therefore recommends avoiding all DDL on the original table while ONLINE REDEFINITION is ongoing.
How to completely move a sub-partitioned table from one tablespace to another, including attributes
1. move the subpartitions physically:
ALTER TABLE lkw.sales MOVE SUBPARTITION P_012013_WEEK1 TABLESPACE SALES_2013_DATA [ UPDATE INDEXES ];
2. alter the default attributes for the *partition* (as opposed to the subpartition which was physically moved in the previous step):
ALTER TABLE lkw.sales MODIFY DEFAULT ATTRIBUTES FOR PARTITION "P_012013" TABLESPACE SALES_2013_DATA;
3. Finally, alter the default attributes for the table:
ALTER TABLE lkw.sales MODIFY DEFAULT ATTRIBUTES TABLESPACE SALES_2013_DATA;
The default attributes for partitions and subpartitions can be found by querying the view DBA_PART_TABLES.
A script that will generate the DDL for you:
accept table_owner prompt 'Table owner: '
accept table_name prompt 'Table name: '
accept new_tabspc prompt 'New tablespace: '
accept old_tabspc prompt 'Old tablespace: '
set lines 300
set heading off
set feedback off
set verify off
set echo off
set pages 0
set trimspool on
spool exec_movesubpart_&&table_owner..&&table_name..sql
select 'alter session set nls_language=''american'';' from dual;
select 'alter session force parallel ddl;' from dual;
select 'set timing on' from dual;
select 'set lines 200' from dual;
select 'set trimspool on' from dual;
select 'spool exec_move_subpart_&&table_owner..&&table_name..log' from dual;
select 'exec DBMS_APPLICATION_INFO.SET_MODULE(module_name=>''Move'', action_name=>''MoveSubPart_&&table_name'');' from dual;
select 'exec DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info=>''MoveSubPart_&&table_name'');' from dual;
select 'exec DBMS_SESSION.SET_IDENTIFIER (client_id=>''MoveSubPart_&&table_name''); ' from dual;

SELECT 'alter table ' || ts.TABLE_OWNER || '.' || ts.TABLE_NAME || ' move subpartition ' || ts.subpartition_name || ' ONLINE TABLESPACE &&new_tabspc ROW STORE COMPRESS ADVANCED UPDATE GLOBAL INDEXES;'
FROM dba_tab_subpartitions ts
WHERE ts.table_name = '&&table_name'
AND ts.tablespace_name = '&&old_tabspc'
AND ts.table_owner = '&&table_owner'
ORDER BY ts.subpartition_name DESC;

select 'prompt alter default attributes for partitions:' from dual;

SELECT 'alter table ' || ts.TABLE_OWNER || '.' || ts.TABLE_NAME || ' MODIFY DEFAULT ATTRIBUTES FOR PARTITION ' || ts.partition_name || ' TABLESPACE &&new_tabspc;'
FROM dba_tab_subpartitions ts
WHERE ts.table_name = '&&table_name'
AND ts.tablespace_name = '&&old_tabspc'
AND ts.table_owner = '&&table_owner'
ORDER BY ts.subpartition_name DESC;

select 'exit' from dual;
exit
From 12.1 and onwards, this operation can be done online, meaning that it becomes a non-blocking DDL statement, allowing DML against the partition to continue without interruption. See this post for details.
Friday, October 18, 2013
How to speed up index rebuild
ALTER SESSION FORCE PARALLEL DDL;
ALTER INDEX <schema>.<index_name> REBUILD PARALLEL 4 NOLOGGING;
Note the "NOLOGGING" - this tells oracle that it will not log the *creation* of the index.
This result in faster creation time. The index would not be recoverable during a recover operations, since oracle would not have the information in its redo logs. This may be perfectly acceptable for indexes, since they can easily be recreated anyway if we ever have to recover the database.
After the index is created, remove the PARALLEL clause to impose normal, non-parallelized usage of the index:
ALTER INDEX <schema>.<index_name> NOPARALLEL;
Update regarding rebuilds of large partitioned tables with global indexes:
It is my experience that trying to run multiple rebuilds simultaneously, with a low degree of parallelism, or no parallelism at all, is a terrible idea. You will get nowhere and your sessions will generate wait events all over the place.
If you find yourself in a situation where you have to rebuild a number of large global non-partitioned indexes on a large partitioned table, it's better to throw all the CPU power you can find on the database server, and focus all CPU power on building one index at a time. You should run with an aggressive number of parallel slaves.
In my example, I had a virtual VMWare server with 38 cores and exclusive rights to the database.
The following statement was reasonably effective:
ALTER INDEX myindx REBUILD PARALLEL 34 ONLINE NOLOGGING;
Then, when the index has been rebuilt, move to the next index and repeat the procedure.
How to use the ONLINE keyword during index rebuilds
This feature eliminates the need for DML-blocking locks when creating or rebuilding an index, making the online index operations fully transparent:
ALTER INDEX <schema>.<index_name> REBUILD TABLESPACE <tablespace_name> ONLINE;
How to create an interval range partitioned table based on a function
CREATE TABLE SCOTT.MY_TABLE (
  PID          NUMBER(12),
  PTDBAL       NUMBER,
  BUSINESSDAY  VARCHAR2(255 BYTE),
  PTBALCCY     NUMBER,
  CREATIONTIME TIMESTAMP(6),
  COCODE       VARCHAR2(4 BYTE),
  OPERATOR     VARCHAR2(255 BYTE),
  VERSION      TIMESTAMP(6),
  MAID         NUMBER(12),
  LPER         NUMBER(12),
  ARCHIVALFLAG CHAR(1 BYTE),
  CREATIONDATE AS ( CAST(CREATIONTIME AS DATE) ) VIRTUAL
)
-- Partition by range on the virtual column
PARTITION BY RANGE (CREATIONDATE)
-- Use 11gR1 Interval Partitioning
INTERVAL (NUMTOYMINTERVAL(1,'MONTH'))
( PARTITION P082013 VALUES LESS THAN (TO_DATE('01.09.2013','DD.MM.YYYY')) )
TABLESPACE DATA;