
Wednesday, September 22, 2021

How to resolve ORA-48128 during export

Applicable to Oracle 19.3 and onwards.

The following error may occur when you export data using Data Pump.

Parameter file:
userid=system/mypasswd
directory=DATA_PUMP_DIR
logfile=exp_sales.log
dumpfile=klm.dmp
exclude=statistics
job_name=exp_klm
schemas=KLM
flashback_time=systimestamp

Execute the export:
expdp parfile=expdp.par
Result:
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CREATE_OBJECT_ROWS [USER]
USER:"KLM"
ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-48128: opening of a symbolic link is disallowed

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 12630
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734
ORA-06512: at "SYS.KUPW$WORKER", line 11341

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xcf98d208     33543  package body SYS.KUPW$WORKER.WRITE_ERROR_INFORMATION
0xcf98d208     12651  package body SYS.KUPW$WORKER.DETERMINE_FATAL_ERROR
0xcf98d208     11612  package body SYS.KUPW$WORKER.CREATE_OBJECT_ROWS
0xcf98d208     15278  package body SYS.KUPW$WORKER.FETCH_XML_OBJECTS
0xcf98d208      3917  package body SYS.KUPW$WORKER.UNLOAD_METADATA
0xcf98d208     13746  package body SYS.KUPW$WORKER.DISPATCH_WORK_ITEMS
0xcf98d208      2439  package body SYS.KUPW$WORKER.MAIN
0xcf996200         2  anonymous block

KUPW: Object row index into parse items is: 1
KUPW: Parse item count is: 4
KUPW: In function CHECK_FOR_REMAP_NETWORK
KUPW: Nothing to remap
KUPW: In procedure BUILD_OBJECT_STRINGS - non-base info
KUPW: In procedure BUILD_SUBNAME_LIST with USER:KLM.KLM
KUPW: In function NEXT_PO_NUMBER
KUPW: PO number assigned: 13
KUPF$FILE.WRITE_LOB
KUPW: In procedure DETERMINE_FATAL_ERROR with ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-48128: opening of a symbolic link is disallowed
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734

Job "SYSTEM"."EXP_KLM" stopped due to fatal error at Wed Sep 22 08:42:49 2021 elapsed 0 00:00:04
Reason: this is caused by an Oracle bug; see the MOS note referenced below.

Solution:
Outlined in Doc ID 2654167.1 "Customer Recommended DataPump Export (EXPDP) Fails Due to ORA-39155 ORA-48128":
ALTER SYSTEM SET "_disable_directory_link_check"=TRUE SCOPE=SPFILE;
shutdown immediate
startup
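
To verify that the hidden parameter is active after the restart, a query along these lines should work when connected as SYSDBA (a minimal sketch; hidden parameters are not shown in V$PARAMETER, so it reads the X$ views directly):

-- check the current value of the hidden parameter (run as SYSDBA)
SELECT a.ksppinm  AS parameter_name,
       b.ksppstvl AS current_value
FROM   x$ksppi a
JOIN   x$ksppcv b ON a.indx = b.indx
WHERE  a.ksppinm = '_disable_directory_link_check';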

Tuesday, June 8, 2021

Workaround for ORA-39358 during import

When running import, you may sometimes run into the problem below:
impdp parfile=myparfile.par

Import: Release 12.2.0.1.0 - Production on Tue Jun 8 14:54:34 2021

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39358: Export dump file version 18.0.0 not compatible with target version 12.2.0
Solution:
 oerr ora 39358
39358, 00000, "Export dump file version %s not compatible with target version %s"
// *Cause:    The Oracle Data Pump export job version was newer than the target
//            compatibility version.
// *Action:   Upgrade the target database to a compatibility level of at least
//            the export job version, or rerun the export job with a job
//            version that is lower than or equal to the compatibility version
//            of the target database.
The simplest way out for me was to downgrade the export job version. On the source database server, add the following parameter to your export parameter file:
VERSION=12.2
So that my complete parameter file looked as follows:
userid=system/passwd
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=HR.dmp
LOGFILE=exp_HR.log
JOB_NAME=HR
EXCLUDE=STATISTICS
schemas=HR
VERSION=12.2
Rerun the export:
expdp parfile=myparfile.par
Transfer the newly generated dumpfile to your destination server, and rerun the import. This time around, you shouldn't see any errors relating to incompatible versions.
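
If you are unsure which version to specify, check the compatibility setting of the target database first; a quick check could look like this:

-- On the target database:
SQL> show parameter compatible

-- or, equivalently:
SELECT name, value FROM v$parameter WHERE name = 'compatible';

Use a VERSION value that is lower than or equal to the value reported here.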

Friday, April 30, 2021

How to instruct data pump import to load whatever it can from a corrupted source


Oracle 18c introduces a new option to the parameter DATA_OPTIONS: CONTINUE_LOAD_ON_FORMAT_ERROR.
This option will tell data pump import to skip forward to the start of the next granule if a stream format error is encountered while loading data.
In previous versions, if a format error was found, impdp would abort and already loaded rows would be rolled back.

If you find yourself in a situation where all recovery options have failed you, then you might save at least some of the source data using this parameter.

Used in a simple parameter file called impdp.par:
USERID=system
FULL=y 
DUMPFILE=mydumpfile.dmp 
DIRECTORY=DP
TABLE_EXISTS_ACTION=replace 
DATA_OPTIONS=CONTINUE_LOAD_ON_FORMAT_ERROR
Execute:
impdp parfile=impdp.par

The feature is documented in the 18c new features documentation.

See also the utilities documentation for 18c

Thursday, May 16, 2019

How to compress tables on import



Oracle 12c introduces a way to compress tables on the fly, during import using impdp.

Use the directive
transform=table_compression_clause:"row store compress advanced"
in your impdp parameter file.
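
For context, a complete import parameter file using this transform could look like the sketch below (directory, dump file and schema names are just placeholders):

userid='/ as sysdba'
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=mydumpfile.dmp
LOGFILE=imp_compress.log
SCHEMAS=SCOTT
TRANSFORM=table_compression_clause:"row store compress advanced"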

Documentation here

Monday, June 25, 2018

How to move data using Transportable Tablespaces



Mission:

* Move schema FXMART with all of its data from source database proddb01 to destination database testdb01
* In the same operation, rename schema FXMART to FXMARTTST in the destination database testdb01.

While there are certainly a number of ways to accomplish such a task, I will show you how to use transportable tablespaces.

1. In your source database, check which tablespaces your schema is writing to:

SELECT UNIQUE TABLESPACE_NAME
FROM DBA_SEGMENTS
WHERE OWNER = 'FXMART';

TABLESPACE_NAME
FXDATA_2016
FX_MART_DATA
FXDATA_2014
FXDATA_2015

2. In your source database, check that these are self-contained:
exec dbms_tts.transport_set_check('FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015', true);

Check errors using the view TRANSPORT_SET_VIOLATIONS:
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

If you see no rows here, proceed to step 3.

3. In your source database, set the tablespaces to be transported to read only:
alter tablespace FXDATA_2016 read only;
alter tablespace FX_MART_DATA read only;
alter tablespace FXDATA_2014 read only;
alter tablespace FXDATA_2015 read only;

4. Make sure you actually have a directory object called "DP". If not, create it:
CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';
5. In your source database, export the metadata for the tablespaces to be moved using data pump export.

First, create a parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_TABLESPACES=FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015
EXCLUDE=STATISTICS
METRICS=YES

Then, execute the export:
expdp parfile=expdp_tts.par

6. Identify the data files which belong to the tablespaces to be transported.

To identify the files, use this simple query:
SELECT FILE_NAME
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME IN (
    SELECT UNIQUE TABLESPACE_NAME
    FROM DBA_SEGMENTS
    WHERE OWNER = 'FXMART'
  );

FILE_NAME
/u02/oradata/proddb01/fx_mart_data.dbf
/u02/oradata/proddb01/fxdata_2014.dbf
/u02/oradata/proddb01/fxdata_2015.dbf
/u02/oradata/proddb01/fxdata_2016.dbf


7. Transport the resulting dump file, together with the datafiles of the tablespaces to be transported, over the network to the destination server using scp:
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/FXMOVE_TSS.dmp oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2016.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2015.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fxdata_2014.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/oradata/proddb01/fx_mart_data.dbf oracle@destinationserver:/u02/exports/

When the scp session is over, set the tablespaces back to read/write to allow normal usage in the source database:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;

8. In the destination database, create a directory object that points to the location where the datafiles and the dump file now reside:

CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';

9. In the destination database, drop the tablespaces to be replaced:
drop tablespace FXDATA_2016 including contents and datafiles;
drop tablespace FXDATA_2014 including contents and datafiles;
drop tablespace FX_MART_DATA including contents and datafiles;
drop tablespace FXDATA_2015 including contents and datafiles;

10. In the destination database, drop and recreate the schema to be repopulated with fresh data. Note that the user's default tablespace needs to be temporarily set to USERS, since the tablespaces we intend to use as the default are not yet imported:

drop user FXMARTTST cascade;

CREATE USER FXMARTTST
  IDENTIFIED BY mysecretpassword
  DEFAULT TABLESPACE USERS 
  TEMPORARY TABLESPACE TEMP
  PROFILE APP_SCHEMAS;

-- Grant of system privileges and/or roles here
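
Before running the import, make sure the transferred datafiles are available at the location referenced by TRANSPORT_DATAFILES in the parameter file below. In this example that means copying them from /u02/exports into the datafile directory of testdb01 (adjust the paths to your own setup):

cp /u02/exports/fx_mart_data.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2014.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2015.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2016.dbf /u02/oradata/testdb01/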

11. On the destination server, create an import parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=imp_FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_DATAFILES=/u02/oradata/testdb01/fx_mart_data.dbf,/u02/oradata/testdb01/fxdata_2014.dbf,/u02/oradata/testdb01/fxdata_2015.dbf,/u02/oradata/testdb01/fxdata_2016.dbf
REMAP_SCHEMA=FXMART:FXMARTTST
METRICS=YES
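
Save the parameter file (the name impdp_tts.par below is just an example) and run the import:
impdp parfile=impdp_tts.par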

12. In the destination database, make the freshly imported tablespaces read write:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;
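
Since FXMARTTST was created with USERS as a temporary default tablespace in step 10, you may also want to point it to one of the freshly imported tablespaces; which one to pick depends on your application, for example:

ALTER USER FXMARTTST DEFAULT TABLESPACE FX_MART_DATA;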

All done. You should now be able to log on to the database as FXMARTTST and use the refreshed data.

Friday, March 31, 2017

How to import only users over a network link

Use the following parameter file to import only the user definitions from your source database to your test database:
Remember that the database link used in the NETWORK_LINK parameter (here proddb1.mydomain.com) must be correctly defined and working before you import the data.

userid=system/yourpassword
network_link=proddb1.mydomain.com
logfile=DATA_PUMP_DIR:user_definitions.log
full=Y
include=USER

To import a specific user only:
INCLUDE=USER:"='SCOTT'"

To include a specific user and remap it to another user name during import:
REMAP_SCHEMA=SCOTT:JIM
INCLUDE=USER:"='SCOTT'"

Remember to check whether the SCOTT user is assigned a default tablespace that doesn't exist in the target database.
If this is the case, you need to use the REMAP_TABLESPACE directive to make it work:
REMAP_TABLESPACE=SCOTT_DATA:JIM_DATA
... or potentially pre-create the tablespace SCOTT_DATA in the target database, but this may not be what you wish.
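
A quick way to check is to compare the user's default tablespace in the source database with the tablespaces that exist in the target, for example (using the database link name from the parameter file above):

-- Default tablespace of SCOTT in the source database, over the database link
SELECT username, default_tablespace
FROM   dba_users@proddb1.mydomain.com
WHERE  username = 'SCOTT';

-- Tablespaces available in the target database
SELECT tablespace_name FROM dba_tablespaces;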

Thursday, March 9, 2017

Errors ORA-39083 and ORA-28365 during import

During an import you get:

ORA-39083: Object type TABLE:"SCOTT"."EMP" failed to create with error:
ORA-28365: wallet is not open

Action:
Open the wallet:
SQL> administer key management set keystore open identified by "mysecretpassword";

keystore altered.

Check status of the wallet:

select status from v$encryption_wallet;

STATUS
------
OPEN

You can now perform your import as you normally would.

After the import, you can close the wallet:
administer key management set keystore close identified by "mysecretpassword";
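
If you are running a multitenant (CDB) setup, keep in mind that the keystore usually has to be open in the PDB you are importing into as well. A sketch, assuming a pluggable database called PDB1:

-- From CDB$ROOT, open the keystore in all containers:
administer key management set keystore open identified by "mysecretpassword" container=ALL;

-- Or switch to the specific PDB and open it there:
alter session set container=PDB1;
administer key management set keystore open identified by "mysecretpassword";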

Tuesday, June 14, 2016

How to compare the number of objects per schema in two databases

After a migration of a database to a new server, or after any kind of replication, you would like to get some verification that your objects were created.

The query below can help you with this task.

You need a database link from the target database back to the source database:
create public database link proddb01_old 
connect to system
identified by mysecretpassword
using 'myproddb01';

Verify that the link works as intended:
SQL> select d.name, i.host_name from v$database@proddb01_old d, v$instance@proddb01_old i;

NAME      HOST_NAME
--------- -------------------------------
PRODDB01 myserver01.mydomain.com

Save the following in a script, and execute it from the target database via sqlplus:
set lines 200
col count_new format 99999
col count_old format 99999
col "object owner" format a30
col "Result" format a20
set pages 100

prompt ============================================
prompt compare users and their object count
prompt ============================================

WITH newprod_users AS(
 SELECT target.owner AS new_owner, count(*) AS count_new
 FROM   dba_objects target
 GROUP BY owner
 ORDER BY 1
),
oldprod_users AS (
 SELECT source.owner AS old_owner, count(*) AS count_old
 FROM   dba_objects@proddb01_old  source
 GROUP BY owner
 ORDER BY 1)
SELECT newprod_users.new_owner "object owner", 
       newprod_users.count_new "NEW PROD",  
       oldprod_users.count_old "OLD PROD",
       DECODE(newprod_users.count_new,
              oldprod_users.count_old,'Object counts match','Object counts differ') "Result"
FROM   newprod_users JOIN oldprod_users
ON     newprod_users.new_owner = oldprod_users.old_owner
ORDER BY "Result" DESC;

Example output:
============================================
compare users and their object count
============================================

object owner                     NEW PROD   OLD PROD Result
------------------------------ ---------- ---------- --------------------

USER1                                1329       1329 Object counts match
USER2                                   2          2 Object counts match
USER3                                1750       1750 Object counts match
USER4                                 409        389 Object counts differ
USER5                                 961       1167 Object counts differ
USER6                                 251        256 Object counts differ


The report can easily be modified to produce, for example, a semicolon-separated list, which can in turn be pulled into an MS Excel sheet if desirable.
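
One way to do this, assuming you run the script through SQL*Plus 12.2 or newer, is to switch on CSV markup with a semicolon as delimiter before executing the query (older versions can use SET COLSEP ';' instead):

set markup csv on delimiter ';' quote off
spool schema_object_counts.csv
-- run the comparison query from above here
spool off
set markup csv off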

Monday, June 1, 2015

How to fix error >>> DBMS_AW_EXP: BIN$DdyrY during export

During export using data pump, you see the following:

Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 7.968 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
>>> DBMS_AW_EXP: BIN$DdyrY/cyAMLgU5/YLVaAbg==$0 not AW$

Solution: empty the recyclebin:
purge dba_recyclebin;
and restart the export.
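
If you want to see which dropped objects are causing the messages before you purge them, you can list the contents of the recycle bin first:

SELECT owner, object_name, original_name, type, droptime
FROM   dba_recyclebin
ORDER BY droptime;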

Monday, October 28, 2013

Examples of expdp and impdp parameter files, and some useful directives


Export:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=expdp_log_file_name.log
JOB_NAME=EXP_DATA
PARALLEL=8
SCHEMAS=schema1,[schema2],[schema n]
EXCLUDE=STATISTICS

To get a consistent dump file, add the flashback_time or flashback_scn directive, for example:
FLASHBACK_TIME="to_timestamp(to_char(sysdate,'ddmmyyyy hh24:mi:ss'),'ddmmyyyy hh24:mi:ss')" 

Import:

USERID='/ as sysdba'
DIRECTORY=DPUMP
DUMPFILE=file_name_%U.dmp
LOGFILE=impdp_log_file_name.log
JOB_NAME=IMP_DATA
SCHEMAS=schema1,[schema2],[schema n]
REMAP_SCHEMA=SCOTT:JAMES
REMAP_TABLE=SCOTT.MYTABLE:MYTABLE_INTERIM
REMAP_TABLESPACE=USERS:TOOLS
PARALLEL=8
TRANSFORM=oid:n
TABLE_EXISTS_ACTION=REPLACE

Note: the directive "TRANSFORM=oid:n" makes sure all imported objects are assigned a new OID instead of inheriting the old OID from the exported objects.

You can remap several tablespaces during the same run; just specify the directive multiple times in the same parameter file:

remap_tablespace=USER_DATA:HISTORICAL_DATA
remap_tablespace=APP_DATA:HISTORICAL_DATA
remap_tablespace=LOB_DATA:HISTORICAL_DATA


To limit export or import to specific tables only, use:
tables=SCOTT.EMP,
SCOTT.DEPT,
SCOTT.SALARIES

To limit export or import to specific partitions only, use:
TABLES=SCOTT.SALES:P_201701

To limit export or import to partitions matching a specific pattern, use:
TABLES=SCOTT.SALES:CONTRACT%2021%

Note that you cannot use both SCHEMAS and TABLES as directives in the same export/import session.


Other useful directives:

CONTENT=METADATA_ONLY (only export object definition, not data)
EXCLUDE=TABLE:"LIKE 'ABC%_TMP'" (exclude tables based on name pattern)
EXCLUDE=TABLE:"= 'BONUS'" (exclude an entire table and its partitions, if any)
EXCLUDE=INDEX:"LIKE 'EMP%'" (exclude indexes that match a certain string)
EXCLUDE=VIEW,PACKAGE,FUNCTION (exclude certain types of objects)
EXCLUDE=TRIGGER:"IN ('TRIG1', 'TRIG2')", INDEX:"='INDX1'", REF_CONSTRAINT (exclude specific triggers, a specific index and referential constraints)
EXCLUDE=SEQUENCE, TABLE:"IN ('EMP', 'DEPT')" (exclude all sequences and specific tables)
EXCLUDE=INDEX:"='MY_INDX'" (exclude a specific index)
EXCLUDE=MATERIALIZED_VIEW (exclude materialized views)
EXCLUDE=MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG (exclude materialized views + mv logs)


INCLUDE=FUNCTION, PACKAGE, TABLE:"='EMP'" (only include specific object types, and tables called "EMP")
INCLUDE=PROCEDURE:"LIKE 'MY_PROC_%'" (only include procedures named according to a specific string)

QUERY=SCOTT.TABLE1:"WHERE 1=0" (no rows are exported, but the table definition is still included)
QUERY=SCOTT.TABLE2:"WHERE SALES_DATE >= to_date('01.12.2011','dd.MM.yyyy')" (exports only rows with SALES_DATE on or after 01.12.2011)
EXCLUDE=SCHEMA:"='SCOTT'" (exclude a specific user and all objects of that user)
Note:
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within users' schemas.
If you try to exclude a user by using a statement such as EXCLUDE=USER:"= 'SCOTT'", only the CREATE USER scott DDL statement will be excluded, and you may not get the results you expect.

To import only the user definition over a database link

See this post.

To compress tables on import

See this post

A note on integrity checking with the data pump import parameter TABLE_EXISTS_ACTION

TRUNCATE is subject to stricter integrity checking than REPLACE.

• TRUNCATE deletes existing rows and then loads rows from the source.
• REPLACE drops the existing table and then creates and loads it from the source.

• When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
• When you use TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored.
• For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded.
• When you use TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.

If the existing table has active constraints and triggers, it is loaded using the external tables access method. If any row violates an active constraint, the load fails and no data is loaded. You can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the Import command line.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before re-enabling the constraints.
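
A sketch of that last approach, using a hypothetical foreign key constraint EMP_DEPT_FK on SCOTT.EMP (names and the cleanup predicate are only examples):

-- Disable the constraint before the import
ALTER TABLE SCOTT.EMP DISABLE CONSTRAINT EMP_DEPT_FK;

-- Run the import, then remove the offending rows, for example:
DELETE FROM SCOTT.EMP e
WHERE NOT EXISTS (SELECT 1 FROM SCOTT.DEPT d WHERE d.DEPTNO = e.DEPTNO);
COMMIT;

-- Re-enable and validate the constraint afterwards
ALTER TABLE SCOTT.EMP ENABLE VALIDATE CONSTRAINT EMP_DEPT_FK;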

A note about excluding constraints
The following constraints cannot be excluded:
• NOT NULL constraints.
• Constraints needed for the table to be created and loaded successfully (for example, primary key constraints for index-organized tables or REF SCOPE and WITH ROWID constraints for tables with REF columns).

Examples:

Exclude all nonreferential constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading:

EXCLUDE=CONSTRAINT

Exclude referential integrity (foreign key) constraints:
EXCLUDE=REF_CONSTRAINT

Turn on compression during export:
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH}

For most cases, I would stick to the MEDIUM compression algorithm. Oracle states in the 12.2 documentation:
Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.

I find compression of exports very useful when space is an issue, or transfer between servers over the network is needed.
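
As an illustration, a compressed schema export could be run with a parameter file along these lines (keep in mind that COMPRESSION=ALL and DATA_ONLY require a license for the Advanced Compression Option):

userid='/ as sysdba'
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=compressed_export_%U.dmp
LOGFILE=expdp_compressed.log
SCHEMAS=SCOTT
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM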


Sources:
Oracle 11g documentation
Oracle 12cR2 documentation
My Oracle Support