
Monday, September 16, 2019

How to work around ORA-03206 when running the RCU (Repository Creation Utility) used by ODI



During installation you may encounter the error
ORA-03206: maximum file size of (13107200) blocks in AUTOEXTEND clause is out of range

In my case, the RCU attempted to create a temporary tablespace with an AUTOEXTEND maximum of 100G. With an 8 KB block size that is 13,107,200 blocks, which exceeds the smallfile datafile limit of 4,194,303 blocks (roughly 32 GB), so the error is thrown.
I checked the database creation scripts, and the database was indeed created with SMALLFILE as the default tablespace type.
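
If you want to verify the current setting directly, the default tablespace type is also exposed through the view DATABASE_PROPERTIES:

SELECT PROPERTY_VALUE
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME = 'DEFAULT_TBS_TYPE';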

To solve the problem, alter your database default setting:

ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE;

and retry the operation.

Documentation for Oracle 12.2 can be found here

Thursday, May 16, 2019

How to compress tables on import



Oracle 12c introduced a way to compress tables on the fly during import with impdp.

Use the directive
transform=table_compression_clause:"row store compress advanced"
in your impdp parameter file.
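
For example, a minimal import parameter file could look like this (the directory, dump file and schema names are just placeholders):

userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=myschema.dmp
LOGFILE=imp_myschema_compressed.log
SCHEMAS=MYSCHEMA
TRANSFORM=table_compression_clause:"row store compress advanced"

Keep in mind that Advanced Row Compression requires the Advanced Compression option.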

Documentation here

Monday, June 25, 2018

How to move data using Transportable Tablespaces



Mission:

* Move schema FXMART with all of its data from source database proddb01 to destination database testdb01
* In the same operation, rename schema FXMART to FXMARTTST in the destination database testdb01.

While there are certainly a number of ways to accomplish such a task, I will show you how to use transportable tablespaces.

1. In your source database, check which tablespaces your schema is writing to:

SELECT UNIQUE TABLESPACE_NAME
FROM DBA_SEGMENTS
WHERE OWNER = 'FXMART';

TABLESPACE_NAME
FXDATA_2016
FX_MART_DATA
FXDATA_2014
FXDATA_2015

2. In your source database, check that these are self-contained:
exec dbms_tts.transport_set_check('FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015', true);

Check errors using the view TRANSPORT_SET_VIOLATIONS:
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

If you see no rows here, proceed to step 3.

3. In your source database, set the tablespaces to be transported to read only:
alter tablespace FXDATA_2016 read only;
alter tablespace FX_MART_DATA read only;
alter tablespace FXDATA_2014 read only;
alter tablespace FXDATA_2015 read only;

4. Make sure you actually have a directory object called "DP". If not, create it:
CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';
5. In your source database, export the metadata for the tablespaces to be moved using data pump export.

First, create a parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_TABLESPACES=FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015
EXCLUDE=STATISTICS
METRICS=YES

Then, execute the export:
expdp parfile=expdp_tts.par

6. Identify the data files which belong to the tablespaces to be transported.

To identify the files, use this simple query:
SELECT FILE_NAME
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME IN (
    SELECT UNIQUE TABLESPACE_NAME
    FROM DBA_SEGMENTS
    WHERE OWNER = 'FXMART'
  );

FILE_NAME
/u02/oradata/proddb01/fx_mart_data.dbf
/u02/oradata/proddb01/fxdata_2014.dbf
/u02/oradata/proddb01/fxdata_2015.dbf
/u02/oradata/proddb01/fxdata_2016.dbf


7. Transport the resulting dump file, along with the datafiles of the tablespaces to be transported, over the network to the destination server using scp:
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/FXMOVE_TSS.dmp oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2016.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2015.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2014.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fx_mart_data.dbf oracle@destinationserver:/u02/exports/

When the scp session is over, set the tablespaces back to read/write to allow normal usage in the source database:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;

8. In the destination database, create a directory object that points to the location where the datafiles and the dump file now reside:

CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';

9. In the destination database, drop the tablespaces to be replaced:
drop tablespace FXDATA_2016 including contents and datafiles;
drop tablespace FXDATA_2014 including contents and datafiles;
drop tablespace FX_MART_DATA including contents and datafiles;
drop tablespace FXDATA_2015 including contents and datafiles;

10. In the destination database, drop and recreate the schema to be repopulated with fresh data. Note that the user's default tablespace needs to be temporarily set to USERS, as the tablespaces we intend to use as the default are not yet imported:

drop user FXMARTTST cascade;

CREATE USER FXMARTTST
  IDENTIFIED BY mysecretpassword
  DEFAULT TABLESPACE USERS 
  TEMPORARY TABLESPACE TEMP
  PROFILE APP_SCHEMAS;

-- Grant of system privileges and/or roles here

11. On the destination server, create an import parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=imp_FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_DATAFILES=/u02/oradata/testdb01/fx_mart_data.dbf,/u02/oradata/testdb01/fxdata_2014.dbf,/u02/oradata/testdb01/fxdata_2015.dbf,/u02/oradata/testdb01/fxdata_2016.dbf
REMAP_SCHEMA=FXMART:FXMARTTST
METRICS=YES
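
Before running the import, make sure the datafiles actually sit where TRANSPORT_DATAFILES points; in this example they were copied to /u02/exports, so move them into place first and then run the import (the parameter file name below is just my own choice):

cp /u02/exports/fx_mart_data.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2014.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2015.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2016.dbf /u02/oradata/testdb01/

impdp parfile=impdp_tts.par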

12. In the destination database, make the freshly imported tablespaces read write:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;

All done. You should now be able to log on to the database as FXMARTTST and use the refreshed data.

Wednesday, March 2, 2016

How to identify whether or not Oracle Configuration Manager is in use

Change directory to $ORACLE_HOME
# cd $ORACLE_HOME
# ls ccr
bin  config  doc  hosts  inventory  lib  README.pdf  state

If the directory ccr exists, OCM has been installed.

Change directory to ccr/bin:
# cd ccr/bin
# ls -la
drwxr-xr-x 2 oracle dba  4096 Feb 29 16:11 .
drwxr-xr-x 9 oracle dba  4096 Feb 29 16:11 ..
-rwx------ 1 oracle dba 19169 Feb 29 16:11 common
-rw------- 1 oracle dba 47285 Feb 29 16:11 deployPackages
-rwx------ 1 oracle dba  9063 Feb 29 16:11 emocmrsp
-rwx------ 1 oracle dba  5881 Feb 29 16:11 emSnapshotEnv
-rwx------ 1 oracle dba  6116 Feb 29 16:11 lockfile
-rwxr----- 1 oracle dba  1630 Feb 29 16:11 ocmJarUtil
-rw-r--r-- 1 oracle dba  3539 Feb 29 16:11 OCMJarUtil.class
-rwx------ 1 oracle dba 58374 Feb 29 16:11 setupCCR
-rw-r--r-- 1 oracle dba   903 Feb 29 16:11 strip_path.awk

If the file emCCR exists, OCM has been configured.

If not, as in my case above, OCM is installed but not configured for this server.
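
A quick sketch that combines both checks from the shell (assuming ORACLE_HOME is set in the environment):

[ -d "$ORACLE_HOME/ccr" ] && echo "OCM is installed" || echo "OCM is not installed"
[ -f "$ORACLE_HOME/ccr/bin/emCCR" ] && echo "OCM is configured" || echo "OCM is not configured"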

Monday, June 1, 2015

How to fix error >>> DBMS_AW_EXP: BIN$DdyrY during export

During export using data pump, you see the following:

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.968 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
>>> DBMS_AW_EXP: BIN$DdyrY/cyAMLgU5/YLVaAbg==$0 not AW$
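
The BIN$ prefix reveals that these are dropped objects still sitting in the recycle bin. You can check what is in there first:

SELECT OWNER, ORIGINAL_NAME, OBJECT_NAME, TYPE
FROM DBA_RECYCLEBIN;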

Solution: empty the recyclebin:
purge dba_recyclebin;
and restart the export.

Thursday, March 6, 2014

How to deal with impdp error ORA-29913: error in executing ODCIEXTTABLEOPEN callout

I recently hit this error from one of my customers during import of a schema:

ORA-31693: Table data object "SH"."SALES_REG1_Q1" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout

After some searching on the internet, I found evidence of bug 8393456, and it is indeed confirmed on the Oracle Support website.

The problem seems to be related to cases where you use both PARALLEL (>1) and REMAP_SCHEMA at the same time.
Oracle versions affected are 11.2.0.1, 11.1.0.7 and 10.2.0.4.

I implemented the workaround by setting PARALLEL=1 and I can confirm that it solves the problem.
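
For reference, an import parameter file using the workaround could look like this (the schema and file names are placeholders):

userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=exp_sh_%U.dmp
LOGFILE=imp_sh.log
REMAP_SCHEMA=SH:SH_COPY
PARALLEL=1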

Thursday, January 16, 2014

How does Oracle Data Pump Import handle referential integrity constraint violations?

To find out, let's use the two tables SHIPS and PORTS, which are connected by a referential constraint from SHIPS to PORTS, called FK_SHIPS_PORTS.
-- PORTS must be created first, since SHIPS references it through a foreign key
CREATE TABLE PORTS
(
  PORT_ID    NUMBER,
  PORT_NAME  VARCHAR2(20 BYTE),
  COUNTRY    VARCHAR2(40 BYTE),
  CAPACITY   NUMBER,
  CONSTRAINT PK_PORTS PRIMARY KEY (PORT_ID)
);

CREATE TABLE SHIPS
(
  SHIP_ID       NUMBER,
  SHIP_NAME     VARCHAR2(20 BYTE),
  CAPACITY      NUMBER,
  LENGTH        NUMBER,
  HOME_PORT_ID  NUMBER,
  LIFEBOATS     NUMBER(3),
  CONSTRAINT PK_SHIPS PRIMARY KEY (SHIP_ID),
  CONSTRAINT FK_SHIPS_PORTS FOREIGN KEY (HOME_PORT_ID)
  REFERENCES PORTS (PORT_ID)
);
In other words, you cannot add a ship without a valid PORT_ID that already exists in table PORTS.

Table PORTS:

PORT_ID  PORT_NAME   COUNTRY  CAPACITY
      1  Baltimore   USA
      2  Charleston  USA
      3  Tampa       USA
      4  Miami       USA

Table SHIPS:

SHIP_ID  SHIP_NAME        CAPACITY  LENGTH  HOME_PORT_ID  LIFEBOATS
      1  Codd Crystal         2052     855             1         80
     15  Codd Champion        2000     650                       30
      2  Codd Elegance        2974     952             2         95
     16  Codd Victorious      2055     876             2
     17  Codd Grandeur        2030     840             4
     18  Codd Prince          1500     550             2         32
     20  Codd Norway          1500     900             3         80

I now remove one row from the PORTS table, so that new rows being imported will have a missing parent key. To do that, I also need to remove any child records from SHIPS; in my case, only one:

DELETE TESTUSER.SHIPS WHERE HOME_PORT_ID = 1;
DELETE TESTUSER.PORTS WHERE PORT_ID = 1;
COMMIT;

1 row deleted.
1 row deleted.
Commit complete.

SCENARIO 1: "What happens if we use TABLE_EXISTS_ACTION=REPLACE when the table being replaced has a referential constraint to another table?"

Result:

• The operation reports an error because of a constraint violation.
• All rows are imported into SHIPS, including the offending ones.
• The referential constraint is dropped.

Error message:
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER"."SHIPS" 7.289 KB 7 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39083: Object type REF_CONSTRAINT failed to create with error:
ORA-02298: cannot validate (TESTUSER.FK_SHIPS_PORTS) - parent keys not found
Failing sql is:
ALTER TABLE "TESTUSER"."SHIPS" ADD CONSTRAINT "FK_SHIPS_PORTS" FOREIGN KEY ("HOME_PORT_ID") REFERENCES "TESTUSER"."PORTS" ("PORT_ID") DEFERRABLE ENABLE


SCENARIO 2: "What happens if we use TABLE_EXISTS_ACTION=TRUNCATE when the table being truncated has a referential constraint to another table?"

Result:

• The operation reports an error because of a constraint violation.
• The SHIPS table is left truncated; no rows are loaded.
• The referential constraint stays.

Error message:
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "TESTUSER"."SHIPS" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "TESTUSER"."SHIPS" failed to load/unload and is being skipped due to error:
ORA-02291: integrity constraint (TESTUSER.FK_SHIPS_PORTS) violated - parent key not found


Both cases require manual intervention afterwards to clean up the offending rows and then recreate the constraint.
One way to do so is to temporarily allow orphan rows: disable the constraint, load the rows, then re-enable the constraint using the NOVALIDATE option.

First disable the constraint:
SQL> ALTER TABLE SHIPS DISABLE CONSTRAINT FK_SHIPS_PORTS;

Table altered.

Load the data once more:
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "TESTUSER"."SHIPS" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER"."SHIPS" 7.289 KB 7 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT

Finally, enable the constraint again. However, tell Oracle to disregard existing rows and only apply the constraint to new rows:
SQL> ALTER TABLE SHIPS ENABLE NOVALIDATE CONSTRAINT FK_SHIPS_PORTS;

Table altered.

SQL>

The example is based on Steve O'Hearn's "OCA Oracle Database SQL Certified Expert Exam Guide"

Tuesday, November 19, 2013

Errors ORA-31693 and ORA-29913 thrown during import

Yesterday when importing some test data from production to test, I encountered the following error during import:

. . imported "< schema name >"."< table name > ":"< partition name >" 77.08 KB 0 rows
ORA-31693: Table data object "< schema name >"."< table name > ":"< partition name >" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
. . imported "< schema name >"."< table name > ":"< partition name >" 25.69 KB 0 rows

The preceding export had executed successfully, without any errors or warnings. I used multiple threads, and the result was 7 dump files.

I first tried reimporting the table, which in many similar cases actually works. This time it didn't; the same error was thrown.

The solution was to remove the parallel directive from the import parameter file and reimport only the missing table:

USERID='/ AS SYSDBA'
DIRECTORY=DPUMP
DUMPFILE=EXDP_SCHEMA_EXPORT_201311181340_%U.DMP
LOGFILE=IMPDP_missing_tables.log
PARALLEL=1
TRANSFORM=OID:N
TABLE_EXISTS_ACTION=REPLACE
REMAP_SCHEMA=< schema_name >:
TABLES=< schema_name >.< table_name >


. . imported "< schema name >"."< table name > ":"< partition name >" 77.08 KB 0 rows
. . imported "< schema name >"."< table name > ":"< partition name >" 1.205 GB 6758593 rows
. . imported "< schema name >"."< table name > ":"< partition name >" 25.69 KB 0 rows

Thursday, November 7, 2013

Some useful asmcmd commands

When you need to work with ASM disks, you can use asmcmd (the ASM Command Line Utility). I have found the following commands very useful:

asmcmd -p
(the -p option shows the current directory in the prompt at any given time - convenient)

lsdg
--> lists disk groups

lsdsk -s
--> -s shows various I/O related statistics of the disks

lsdsk --candidate -p
--> lists all available ASM disk candidates

lsdsk --member -p
--> lists all the current members of a disk group, as opposed to the candidates.

or simply

lsdsk -p
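
asmcmd talks to the local ASM instance, so point your environment at the Grid Infrastructure home and the ASM SID before starting it (the path and SID below are just examples):

export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM
export PATH=$ORACLE_HOME/bin:$PATH
asmcmd -p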

Friday, November 1, 2013

How to create a consistent dump set using expdp

Use the FLASHBACK_TIME or FLASHBACK_SCN directives.

The simplest approach:

FLASHBACK_TIME="to_timestamp(to_char(sysdate,'ddmmyyyy hh24:mi:ss'),'ddmmyyyy hh24:mi:ss')"
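
If you prefer FLASHBACK_SCN instead, fetch the current SCN and pass it in (the SCN value below is of course just an example):

SELECT CURRENT_SCN FROM V$DATABASE;

FLASHBACK_SCN=1234567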

Monday, October 28, 2013

Examples of expdp and impdp parameter files, and some useful directives


Export:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=expdp_log_file_name.log
JOB_NAME=EXP_DATA
PARALLEL=8
SCHEMAS=schema1,[schema2],[schema n]
EXCLUDE=STATISTICS

To get a consistent dump file, add the flashback_time or flashback_scn directive, for example:
FLASHBACK_TIME="to_timestamp(to_char(sysdate,'ddmmyyyy hh24:mi:ss'),'ddmmyyyy hh24:mi:ss')" 

Import:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=impdp_log_file_name.log
JOB_NAME=IMP_DATA
SCHEMAS=schema1,[schema2],[schema n]
REMAP_SCHEMA=SCOTT:JAMES
REMAP_TABLE=SCOTT.MYTABLE:MYTABLE_INTERIM
REMAP_TABLESPACE=USERS:TOOLS
PARALLEL=8
TRANSFORM=oid:n
TABLE_EXISTS_ACTION=REPLACE

Note: the directive "TRANSFORM=oid:n" makes sure all imported objects are assigned a new OID instead of inheriting the OID from the exported objects.

You can remap several tablespaces during the same run; just specify the directive multiple times in the same parameter file:

remap_tablespace=USER_DATA:HISTORICAL_DATA
remap_tablespace=APP_DATA:HISTORICAL_DATA
remap_tablespace=LOB_DATA:HISTORICAL_DATA


To limit export or import to specific tables only, use:
tables=SCOTT.EMP,
SCOTT.DEPT,
SCOTT.SALARIES

To limit export or import to specific partitions only, use:
TABLES=SCOTT.SALES:P_201701

To limit export or import to partitions matching a specific pattern, use:
TABLES=SCOTT.SALES:CONTRACT%2021%

Note that you cannot use both SCHEMAS and TABLES as directives in the same export/import session.


Other useful directives:

CONTENT=METADATA_ONLY (only export object definition, not data)
EXCLUDE=TABLE:"LIKE 'ABC%_TMP'" (exclude tables based on name pattern)
EXCLUDE=TABLE:"= 'BONUS'" (exclude an entire table and its partitions, if any)
EXCLUDE=INDEX:"LIKE 'EMP%'" (exclude indexes that match a certain string)
EXCLUDE=VIEW,PACKAGE, FUNCTION (exclude certain types of objects)
EXCLUDE = TRIGGER:"IN ('TRIG1', 'TRIG2')", INDEX:"='INDX1'", REF_CONSTRAINT (exclude specific triggers, a specific index and referential constraints)
EXCLUDE=SEQUENCE, TABLE:"IN ('EMP', 'DEPT')" (exclude specific tables)
EXCLUDE=INDEX:"='MY_INDX'" (exclude a specific index)
EXCLUDE=MATERIALIZED_VIEW (exclude materialized views)
EXCLUDE=MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG (exclude materialized views + mv logs)


INCLUDE=FUNCTION, PACKAGE, TABLE:"='EMP'" (only include specific objects types, and tables called "EMP")
INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'" (only include procedures named according to a specific string)

QUERY=SCOTT.TABLE1:"WHERE 1=0" (no data is being exported, but the table is created)
QUERY=SCOTT.TABLE2:"WHERE SALES_DATE >= to_date('01.12.2011','dd.MM.yyyy')" (exports only rows with SALES_DATE on or after 01.12.2011)
EXCLUDE=SCHEMA:"='SCOTT'" (exclude a specific user and all objects of that user)
Note:
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'SCOTT'", only the CREATE USER scott DDL statement will be excluded, and you may not get the results you expect.

To import only the user definition over a database link

See this post.

To compress tables on import

See this post

A note on integrity checking in data pump import TABLE_EXISTS_ACTION

TRUNCATE is subject to stricter integrity checking than REPLACE.

• TRUNCATE deletes existing rows and then loads rows from the source.
• REPLACE drops the existing table and then creates and loads it from the source.

• When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
• When you use TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored.
• For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded.
• When you use TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.

If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints.
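
For example, a sketch of the relevant lines in an import parameter file (SKIP_CONSTRAINT_ERRORS applies to loads done through the external tables access method):

TABLE_EXISTS_ACTION=TRUNCATE
DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS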

A note about excluding Constraints
The following constraints cannot be excluded:
• NOT NULL constraints.
• Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).

Examples:

Exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading:

EXCLUDE=CONSTRAINT

Exclude referential integrity (foreign key) constraints:
EXCLUDE=REF_CONSTRAINT

Turn on compression during export:
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}
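
Put together in a parameter file it might look like this (note that the ALL and DATA_ONLY settings require the Advanced Compression option, and COMPRESSION_ALGORITHM is available from 12c onwards):

COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM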

For most cases, I would stick to the MEDIUM compression algorithm. Oracle states in the 12.2 documentation:
Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.

I find compression of exports very useful when space is an issue, or transfer between servers over the network is needed.


Sources: Oracle 11g Documentation
Oracle 12cR2 documentation

Source: My Oracle Support

How to export only specific partitions using Oracle's Data Pump utility

In your parameter file, use the following notation:

TABLES=SCOTT.TABLE1:PART_1

You can export several partitions in the same run:

TABLES=SCOTT.TABLE1:PART_1,
TABLES=SCOTT.TABLE2:PART_3,
TABLES=SCOTT.TABLE2:PART_4


Note that you may not export data from multiple schemas in this mode; in other words, the notation:

TABLES=SCOTT.TABLE1:PART_1,
TABLES=HR.TABLE1:PART_1

will throw an error:

UDE-00012: table mode exports only allow objects from one schema
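
If you really need partitions from more than one schema, the workaround is to run one export per schema, each with its own parameter file, for example (the file names are my own):

exp_scott_parts.par:
TABLES=SCOTT.TABLE1:PART_1

exp_hr_parts.par:
TABLES=HR.TABLE1:PART_1

Then run:
expdp parfile=exp_scott_parts.par
expdp parfile=exp_hr_parts.par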