From Oracle 10g onwards, you can enable block change tracking to speed up incremental backups. In short,
"if change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile"
Syntax:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/path/PRODDB_rman_change_track.dbf' REUSE;
Database altered.
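To verify that the feature is active, or to switch it off again, something along these lines should do:
-- check the status of block change tracking (STATUS shows ENABLED or DISABLED)
SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;
-- to turn the feature off again:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;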
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Monday, February 24, 2014
Thursday, February 13, 2014
SQL: view memory parameters
set lines 200
set pages 100
col name format a30
col value format a40
select name from v$database
/
select name, value/1024/1024 "MB"
from v$parameter
where name in ('java_pool_size', 'db_cache_size', 'shared_pool_size',
               'large_pool_size', 'streams_pool_size', 'sga_max_size',
               'sga_target', 'memory_max_target', 'memory_target',
               'pga_aggregate_target')
/
exit
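For a component-level breakdown of the SGA, a query against V$SGAINFO along these lines can also be handy:
-- list SGA components, largest first
select name, round(bytes/1024/1024) "MB", resizeable
from v$sgainfo
order by bytes desc
/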
An easy way to find DML locks
The view DBA_DML_LOCKS is a convenient view to use if you want to identify locks caused by uncommitted DML statements.
Consider the following simple example:
INSERT INTO FAMILY_TREE
(PERSON_ID, FULL_NAME, GENDER, CITIZENSHIP, COUNTRY_OF_RESIDENCE)
VALUES
(5, 'Arne Kåsa', 'M', 'Norwegian', 'Sweden');
-- Do not commit --
Logged in as a privileged user from another session, execute:
SELECT * FROM DBA_DML_LOCKS WHERE OWNER = 'VEGARDK';
SESSION_ID | OWNER | NAME | MODE_HELD | MODE_REQUESTED | LAST_CONVERT | BLOCKING_OTHERS |
---|---|---|---|---|---|---|
362 | VEGARDK | FAMILY_TREE | Row-X (SX) | None | 39 | Not Blocking |
COMMIT;
SELECT * FROM DBA_DML_LOCKS WHERE OWNER = 'VEGARDK';
No rows returned.
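To identify the session holding the lock, DBA_DML_LOCKS can be joined with V$SESSION, roughly like this:
-- show who is holding DML locks on objects owned by VEGARDK
SELECT l.session_id, s.serial#, s.username, s.osuser, s.program,
       l.name, l.mode_held, l.blocking_others
FROM   dba_dml_locks l
JOIN   v$session s ON s.sid = l.session_id
WHERE  l.owner = 'VEGARDK';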
Wednesday, February 12, 2014
How to work around ORA-14080: partition cannot be split along the specified high bound
Problem:
My table had partitions for Jan 2014 and March 2014. Any attempt to split the overflow partition at this point will throw the error message
ORA-14080: partition cannot be split along the specified high bound
The following query shows my partitions for year 2014:
SELECT PARTITION_NAME, HIGH_VALUE, NUM_ROWS
FROM DBA_TAB_PARTITIONS
WHERE TABLE_NAME = 'SALES'
AND TABLE_OWNER = 'SCOTT'
AND (PARTITION_NAME LIKE '%Y2014' OR PARTITION_NAME LIKE '%MAX%')
ORDER BY PARTITION_NAME;
Output:
PARTITION_NAME | HIGH_VALUE | NUM_ROWS |
---|---|---|
SALES_M01_Y2014 | TO_DATE(' 2014-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 3936 |
SALES_M03_Y2014 | TO_DATE(' 2014-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 2308 |
SALES_MAX | MAXVALUE | 0 |
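The statement that triggered the error was presumably something along these lines; the February bound falls below the lower boundary of the overflow partition SALES_MAX (which starts where SALES_M03_Y2014 ends), so Oracle rejects it:
-- reconstructed example of the failing split
ALTER TABLE SCOTT.SALES
SPLIT PARTITION SALES_MAX AT (TO_DATE('2014-02-01', 'YYYY-MM-DD')) INTO (
  PARTITION SALES_M02_Y2014 TABLESPACE SCOTT_DATA,
  PARTITION SALES_MAX);
-- ORA-14080: partition cannot be split along the specified high bound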
My solution:
Split the partition with the highest bound (March 2014) into two new ones, one that will take care of the rows for February 2014, and another one which will be empty:
SQL> ALTER TABLE SCOTT.SALES
     SPLIT PARTITION SALES_M03_Y2014 AT (TO_DATE('2014-03-01', 'YYYY-MM-DD')) INTO (
       PARTITION SALES_M02_Y2014 TABLESPACE SCOTT_DATA,
       PARTITION SALES_MAX_NEW TABLESPACE SCOTT_DATA);

Table altered.

Rebuild local indexes, if any:
SQL> ALTER TABLE SCOTT.SALES MODIFY PARTITION SALES_M02_Y2014 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.
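To check whether any local index partitions remain unusable after the split, a query like this against DBA_IND_PARTITIONS should do:
SELECT index_owner, index_name, partition_name, status
FROM   dba_ind_partitions
WHERE  index_owner = 'SCOTT'
AND    status = 'UNUSABLE';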
Analyze the partition, and check to make sure that it really does contain zero rows. You can use the initial query in this post for that:
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','SALES','SALES_M02_Y2014');

PL/SQL procedure successfully completed.
Let's execute the query above once more:
PARTITION_NAME | HIGH_VALUE | NUM_ROWS |
---|---|---|
SALES_M01_Y2014 | TO_DATE(' 2014-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 3936 |
SALES_M02_Y2014 | TO_DATE(' 2014-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 2308 |
SALES_MAX | MAXVALUE | 0 |
SALES_MAX_NEW | TO_DATE(' 2014-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 0 |
Drop the temporary partition:
SQL> ALTER TABLE SCOTT.SALES DROP PARTITION SALES_MAX_NEW;

Table altered.

We now have:
PARTITION_NAME | HIGH_VALUE | NUM_ROWS |
---|---|---|
SALES_M01_Y2014 | TO_DATE(' 2014-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 3936 |
SALES_M02_Y2014 | TO_DATE(' 2014-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN') | 2308 |
SALES_MAX | MAXVALUE | 0 |
ORA-39726: unsupported add/drop column operation on compressed tables when attempting to drop column on a (previously) compressed table
My customer had a partitioned table that was previously compressed with the COMPRESS FOR ALL OPERATIONS option.
One of the developers wanted to drop a column on the table, but received the following error message:
ORA-39726: unsupported add/drop column operation on compressed tables
According to Oracle, this is expected behaviour. Oracle Support note 1068820.1 "Error ORA-39726 Drop Column Operation On Compressed Tables 10.2 Release" explains:
"In release 10g, a drop column attempt on a compressed table must raise an ORA-39726 error if the column has to be physically removed from the data blocks.
Hence "DROP COLUMN" and "DROP COLUMN UNUSED" are both illegal because they may have to touch the datablocks. SET UNUSED is OK because it is just a data
dictionary operation (as is DROP COLUMN on virtual columns)."
The Oracle support note concludes:
"In 11g it is allowed to drop columns from a compressed table IF compatible is set to 11.1 or higher AND table was created with the "compress for all OLTP" option but even in this situation there is no real drop but internally the database sets the column UNUSED to avoid long-running decompression and recompression operations."
My database unfortunately has compatible set to 11.0. So the only option at this point is to use the syntax
ALTER TABLE owner.table_name SET UNUSED COLUMN column_name;
This will render the column as good as dropped; another column with the same name can then be added to the table.
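To illustrate with made-up names (the table SALES_HIST and the column OLD_FLAG are purely hypothetical):
-- hypothetical table and column names
ALTER TABLE SCOTT.SALES_HIST SET UNUSED COLUMN OLD_FLAG;
-- a new column with the same name can be added right away:
ALTER TABLE SCOTT.SALES_HIST ADD (OLD_FLAG VARCHAR2(1));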
The number of unused columns for a table can be tracked with
SELECT *
FROM USER_UNUSED_COL_TABS
WHERE TABLE_NAME='table_name';
Monday, February 10, 2014
Working with TIME ZONE information
To view the time zone settings for the database and your session:
SELECT DBTIMEZONE, SESSIONTIMEZONE FROM DUAL;
DBTIMEZONE | SESSIONTIMEZONE |
---|---|
+01:00 | Europe/Zurich |
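The session time zone can be changed with ALTER SESSION, using either a named region or a fixed offset:
ALTER SESSION SET TIME_ZONE = 'Europe/Oslo';
-- or, with an explicit offset from UTC:
ALTER SESSION SET TIME_ZONE = '+02:00';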
Some other useful functions:
SELECT SYSTIMESTAMP, LOCALTIMESTAMP, CURRENT_TIMESTAMP, CURRENT_DATE FROM DUAL;
SYSTIMESTAMP | LOCALTIMESTAMP | CURRENT_TIMESTAMP | CURRENT_DATE |
---|---|---|---|
10.02.2014 09:17:00.301998 +01:00 | 10.02.2014 09:17:00.302006 | 10.02.2014 09:17:00.302006 +01:00 | 10.02.2014 09:17:00 |
Some conversion functions:
FROM_TZ
-- converts a TIMESTAMP to TIMESTAMP WITH TIME ZONE
SELECT LOCALTIMESTAMP, FROM_TZ(LOCALTIMESTAMP, '+01:00') FROM DUAL;

LOCALTIMESTAMP                 FROM_TZ(LOCALTIMESTAMP,'+01:00')
------------------------------ --------------------------------
18.03.14 09:23:40.032126       18.03.14 09:23:40.032126 +01:00
-- or alternatively, look up the time zone used in your session
-- and use this in a subquery
SELECT LOCALTIMESTAMP,
       FROM_TZ(LOCALTIMESTAMP, (SELECT SESSIONTIMEZONE FROM DUAL)) "Converted to TSTZ"
FROM DUAL;

LOCALTIMESTAMP                 Converted to TSTZ
------------------------------ ----------------------------------------
18.03.14 09:28:39.470322       18.03.14 09:28:39.470322 +01:00
TO_TIMESTAMP_TZ
-- converts a string to a TIMESTAMP WITH TIME ZONE datatype
SELECT TO_TIMESTAMP_TZ('03.02.1973 16:00:00', 'DD.MM.YYYY HH24:MI:SS') FROM DUAL;

TO_TIMESTAMP_TZ('03.02.197316:00:00','DD.MM.YYYYHH24:MI:SS')
---------------------------------------------------------------------------
03.02.73 16:00:00.000000000 +01:00
TO_TIMESTAMP
-- converts a string to a TIMESTAMP datatype
SELECT TO_TIMESTAMP('03.02.1973 16:00:00:123456', 'DD.MM.YYYY HH24:MI:SS:FF') FROM DUAL;

TO_TIMESTAMP('03.02.197316:00:00:123456','DD.MM.YYYYHH24:MI:SS:FF')
---------------------------------------------------------------------------
03.02.73 16:00:00.123456000
CAST
CAST can be used for a variety of conversions. It takes an expression as its first parameter and returns an instance of the datatype you are converting to.
It is useful for converting strings that represent dates or times into proper datetime datatypes.
-- to convert a string into a TIMESTAMP WITH LOCAL TIME ZONE, use CAST
SELECT CAST('03-FEB-11' AS TIMESTAMP WITH LOCAL TIME ZONE) "String to TWLTZ" FROM DUAL;

String to TWLTZ
----------------------------
03.02.11 00:00:00.000000

For a string with more precision, use TO_TIMESTAMP to convert to a timestamp first:
SELECT CAST( TO_TIMESTAMP('03.02.1973 16:05:00', 'DD.MM.YYYY HH24:MI:SS')
             AS TIMESTAMP WITH LOCAL TIME ZONE ) "TS to TSWLTZ"
FROM DUAL;

TS to TSWLTZ
-----------------------------
03.02.73 16:05:00.000000
EXTRACT
Extracts the value indicated by the constant immediately following the opening parenthesis from a valid timestamp expression.
Note that EXTRACT requires a timestamp expression as an argument.
Its return type is character if you extract TIMEZONE_REGION or TIMEZONE_ABBR, but numeric for all others.
SELECT EXTRACT(MINUTE FROM LOCALTIMESTAMP),
       EXTRACT(YEAR FROM LOCALTIMESTAMP),
       EXTRACT(MONTH FROM LOCALTIMESTAMP),
       EXTRACT(DAY FROM LOCALTIMESTAMP),
       EXTRACT(HOUR FROM LOCALTIMESTAMP),
       EXTRACT(SECOND FROM LOCALTIMESTAMP)
FROM DUAL;
EXTRACT(MINUTEFROMLOCALTIMESTAMP) | EXTRACT(YEARFROMLOCALTIMESTAMP) | EXTRACT(MONTHFROMLOCALTIMESTAMP) | EXTRACT(DAYFROMLOCALTIMESTAMP) | EXTRACT(HOURFROMLOCALTIMESTAMP) | EXTRACT(SECONDFROMLOCALTIMESTAMP) |
---|---|---|---|---|---|
3 | 2014 | 2 | 10 | 10 | 24.256402 |
Using the SYSTIMESTAMP, which includes TIME ZONE, in the EXTRACT function:
SELECT EXTRACT(TIMEZONE_HOUR FROM SYSTIMESTAMP) "TZ_HOUR", EXTRACT(TIMEZONE_MINUTE FROM SYSTIMESTAMP) "TZ_MINUTE" FROM DUAL;
TZ_HOUR | TZ_MINUTE |
---|---|
1 | 0 |
SYS_EXTRACT_UTC
Extract the universal time (UTC) based on a timestamp that contains time zone information:
SELECT SYS_EXTRACT_UTC(SYSTIMESTAMP) FROM DUAL;
SYS_EXTRACT_UTC(SYSTIMESTAMP) |
---|
10.02.2014 09:49:37.390230 |
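It also accepts a TIMESTAMP WITH TIME ZONE literal, for example:
SELECT SYS_EXTRACT_UTC(TIMESTAMP '2014-02-10 10:00:00 +01:00') FROM DUAL;
-- returns 10.02.2014 09:00:00 (the display format depends on your NLS settings)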
AT TIME ZONE
Converts a datetime value to another time zone. AT TIME ZONE can convert TIMESTAMP, TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE.
But take care: NOT the DATE datatype! (A workaround for DATE values is sketched after the examples below.)
Note that the parentheses are not required, but they make the expression more readable:
SELECT (LOCALTIMESTAMP) AT TIME ZONE DBTIMEZONE "DB time" FROM DUAL;
DB time |
---|
10.02.2014 10:59:45.901974 +01:00 |
Or, using a named time zone region:
SELECT (CURRENT_TIMESTAMP) AT TIME ZONE 'Europe/Moscow' "Moscow Time" FROM DUAL;
Moscow Time |
---|
10.02.2014 13:14:06.632136 +03:00 |
Specifying an offset from UTC:
SELECT (CURRENT_TIMESTAMP) AT TIME ZONE '-5:00' "New York Time" FROM DUAL;
New York Time |
---|
10.02.2014 05:15:51.388302 -05:00 |
Use a TIMESTAMP value instead of TIMESTAMP WITH TIME ZONE:
SELECT (LOCALTIMESTAMP) AT TIME ZONE 'America/Los_Angeles' "LA Time" FROM DUAL;
LA Time |
---|
10.02.2014 02:19:39.958949 -08:00 |
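Since AT TIME ZONE does not accept a plain DATE, one workaround is to CAST the DATE to a TIMESTAMP first, roughly like this:
-- cast the DATE to TIMESTAMP, then apply AT TIME ZONE
SELECT (CAST(SYSDATE AS TIMESTAMP)) AT TIME ZONE 'America/Los_Angeles' "LA Time" FROM DUAL;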
AT LOCAL
Convert the source data into the local time equivalent:
SELECT FROM_TZ( TO_TIMESTAMP('10.02.2014 02:21:06.938886', 'DD.MM.YYYY HH24:MI:SS:FF') ,'US/Hawaii') AT LOCAL "Hawaiian to Local Time" FROM DUAL;
Hawaiian to Local Time |
---|
10.02.2014 13:21:06.938886000 +01:00 |
Without the AT LOCAL clause, you would of course get the timestamp in the time zone specified:
SELECT FROM_TZ( TO_TIMESTAMP('10.02.2014 02:21:06.938886', 'DD.MM.YYYY HH24:MI:SS:FF') ,'US/Hawaii') "Hawaiian time" FROM DUAL;
Hawaiian time |
---|
10.02.2014 02:21:06.938886000 -10:00 |
Thursday, February 6, 2014
Which privileges are needed to execute DBMS_REDEFINITION package?
Execute privileges on the DBMS_REDEFINITION package are granted to EXECUTE_CATALOG_ROLE.
In addition to having execute privileges on this package, you must be granted the following privileges:
CREATE ANY TABLE
ALTER ANY TABLE
DROP ANY TABLE
LOCK ANY TABLE
SELECT ANY TABLE
In addition, the following privileges are required to execute the COPY_TABLE_DEPENDENTS procedure:
CREATE ANY TRIGGER
CREATE ANY INDEX
Source: Oracle Documentation
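A grant script could look something like this (REDEF_USER is just a made-up user name):
-- REDEF_USER is a hypothetical grantee
GRANT EXECUTE ON SYS.DBMS_REDEFINITION TO REDEF_USER;
GRANT CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE,
      LOCK ANY TABLE, SELECT ANY TABLE TO REDEF_USER;
-- additionally, for COPY_TABLE_DEPENDENTS:
GRANT CREATE ANY TRIGGER, CREATE ANY INDEX TO REDEF_USER;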