Wednesday, June 30, 2021

How to deal with ORA-01110 during export

If you receive errors like these during a Data Pump export:
ORA-01187: cannot read from file 204 because it failed verification tests
ORA-01110: data file 204: '/datafiles/oradata/proddb/temp.dbf'
ORA-06512: at "SYS.DBMS_LOB", line 724
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4546
ORA-06512: at line 1
you need to drop and recreate your temporary tablespace(s). If you only use bigfile temporary tablespaces, you can use the following script to rectify the situation:
set trimspool on
set lines 200
set feedback off
set verify off
set heading off
set echo off

spool 1.drop_non_default_temp.sql
select 'spool 1.drop_non_default_temp.log' from dual;
select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name not in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off

spool 2.recreate_temp.sql
select 'spool 2.recreate_temp.log' from dual;
select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name not in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 3.alter_default_temp.sql
select 'spool 3.alter_default_temp.log' from dual;
select 'alter database default temporary tablespace TEMP;' from dual;
select 'exit' from dual;
spool off

spool 4.drop_default_temp.sql
select 'spool 4.drop_default_temp.log' from dual;
select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 5.create_default_temp.sql
select 'spool 5.create_default_temp.log' from dual;
select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

spool 6.reset_default_temp.sql
select 'spool 6.reset_default_temp.log' from dual;
select 'alter database default temporary tablespace ' || ts.name || ';'
from v$tempfile tf join v$tablespace ts
on (ts.ts# = tf.ts#)
and ts.name in(
    select property_value
    FROM DATABASE_PROPERTIES
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
);
select 'exit' from dual;
spool off;

exit
Run the script in your database, and it will produce six new SQL files, numbered 1 to 6. Run them sequentially, and your temporary tablespaces will be recreated. After this, your Data Pump export will start as expected.
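A minimal way to run the six generated files in order, assuming you are logged in on the database server and can connect as SYSDBA:
sqlplus / as sysdba @1.drop_non_default_temp.sql
sqlplus / as sysdba @2.recreate_temp.sql
sqlplus / as sysdba @3.alter_default_temp.sql
sqlplus / as sysdba @4.drop_default_temp.sql
sqlplus / as sysdba @5.create_default_temp.sql
sqlplus / as sysdba @6.reset_default_temp.sql
Each generated file ends with an exit, so every invocation returns to the shell when it is done.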

Monday, June 28, 2021

How to add an ADO compression policy to a table partition

ADO (Automatic Data Optimization) is one component of the Oracle 12c ILM (Information Lifecycle Management) solution.
For ADO to work, access to and modification of data need to be tracked continuously. This is done by enabling the heat map in your database:
alter system set heat_map=on scope=both;
The database will immediately start sampling information. After a while, the heat map will contain information about how your objects are being used.
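To verify that heat map tracking is enabled, you can check the parameter directly (a quick sanity check; the parameter is named heat_map):
select name, value from v$parameter where name = 'heat_map';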

My table contains 211 partitions. I would like to add an ADO policy to one of them to illustrate how ADO policies work.
Let's list the 3 largest partitions:
select p.TABLE_NAME, s.partition_name,s.tablespace_name, p.compression,p.num_rows,Round(sum(s.bytes)/1024/1024/1024,1) "GB"
from dba_segments S join dba_tab_partitions p
on (s.partition_name = p.partition_name and s.segment_name = p.table_name)
where s.segment_name='ENTITIY_PR'
and   s.owner='SSB'
group by p.table_name,s.partition_name,s.tablespace_name,p.compression,p.num_rows
order by 5 desc
fetch first 3 rows only;

TABLE_NAME  PARTITION_NAME  TABLESPACE_NAME  COMPRESSION  NUM_ROWS  GB
ENTITIY_PR  SYS_P5055       SSB              DISABLED     43448193  3,3
ENTITIY_PR  SYS_P4518       SSB              DISABLED     43447546  3,4
ENTITIY_PR  SYS_P4709       SSB              DISABLED     43217045  3,3


Add a policy to have Oracle automatically compress the partition segment after one day without any access (read or write):
alter table SSB.ENTITIY_PR 
modify partition SYS_P5055 ilm add policy 
row store compress advanced segment after 1 days of no access;
In addition to "no access", you can use "low access", "no modification", or "creation" to express the activity type.
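Purely as an illustration (not executed here), a policy keyed on modification instead of access might look like this; the 30-day window is an arbitrary choice:
alter table SSB.ENTITIY_PR
modify partition SYS_P5055 ilm add policy
row store compress advanced segment after 30 days of no modification;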

Verify that the ADO policy is in place:
SELECT policy_name,
       object_owner,
       object_name,
       subobject_name,
       object_type,
       inherited_from,
       enabled,
       deleted
FROM   dba_ilmobjects
WHERE  object_owner='SSB'
AND    object_name='ENTITIY_PR'
AND    subobject_name='SYS_P5055'
ORDER BY 1;
POLICY_NAME  OBJECT_OWNER  OBJECT_NAME  SUBOBJECT_NAME  OBJECT_TYPE      INHERITED_FROM        ENABLED  DELETED
P22          SSB           ENTITIY_PR   SYS_P5055       TABLE PARTITION  POLICY NOT INHERITED  YES      NO

What does the heat map say about this particular partition?
select * 
from dba_heat_map_segment
where owner='SSB'
and   object_name='ENTITIY_PR' 
AND   subobject_name='SYS_P5055';

No rows returned.

So this partition hasn't actually been modified or accessed since the heat map was enabled.
Let's access the partition with a query, and see if this will be registered in the heat map.
SELECT SYSTIMESTAMP FROM DUAL; --> 25-JUN-21 10.04.30

SELECT *
FROM   SSB.ENTITIY_PR
PARTITION (SYS_P5055)
fetch first 10 rows only;
Run the query against dba_heat_map_segment again, and there is now an entry in the heat map for my visit to the table:
OWNER  OBJECT_NAME  SUBOBJECT_NAME  SEGMENT_WRITE_TIME  SEGMENT_READ_TIME    FULL_SCAN            LOOKUP_SCAN
SSB    ENTITIY_PR   SYS_P5055                           25.06.2021 10:04:34  25.06.2021 10:04:34
Notice that the column "LOOKUP_SCAN" has no value.
Let's try to send a query to the database that will require a PK index lookup:
SELECT *
FROM   SSB.ENTITIY_PR
PARTITION (SYS_P5055)
WHERE systemid = 1145708618;
Run the query against dba_heat_map_segment again, and you can now see that the lookup was recorded in the heat map:
OWNER  OBJECT_NAME  SUBOBJECT_NAME  SEGMENT_WRITE_TIME  SEGMENT_READ_TIME    FULL_SCAN            LOOKUP_SCAN
SSB    ENTITIY_PR   SYS_P5055                           25.06.2021 10:25:33  25.06.2021 10:25:33  25.06.2021 10:25:33
If you're wondering why the timestamps in dba_heat_map_segment keep incrementing even though you're not executing any queries, it is because the object is still in the shared pool. There is no such thing as purging a table from the shared pool, but you can purge the cursors that refer to it. To find those cursors, use the following script:
set lines 200
spool purge.sql
set heading off
set feedback off
set verify off
set echo off
set pages 0
set trimspool on

select 'exec DBMS_SHARED_POOL.PURGE (''' || ADDRESS ||','|| HASH_VALUE || ''',''C'');'
from V$SQLAREA where SQL_ID IN (SELECT SQL_ID FROM v$sqlarea where sql_text like 'SELECT * FROM%SSB.ENTITIY_PR%SYS_P5055%');
select 'exit' from dual;
exit
Run the script and it will spool a file "purge.sql" to your current directory, which looks like this in my case:
exec DBMS_SHARED_POOL.PURGE ('00000000BF56D560,1541949375','C');
exec DBMS_SHARED_POOL.PURGE ('00000000C67994B8,3904135316','C');
exec DBMS_SHARED_POOL.PURGE ('00000000C67973D8,27827680','C');
Run these statements, and the cursors referring to ENTITIY_PR will be removed from memory, and the incrementing timestamps will stop. Obviously, if other users are actively using the object, you need to be careful about purging the shared pool.
After one full day of no access, let's check if the ADO policy has done its work:
select p.TABLE_NAME, s.partition_name,s.tablespace_name, p.compression,p.num_rows,Round(sum(s.bytes)/1024/1024/1024,1) "GB"
from dba_segments S join dba_tab_partitions p
on (s.partition_name = p.partition_name)
where s.segment_name='ENTITIY_PR'
and   s.owner='SSB'
and   s.partition_name in ( 'SYS_P5055')
and   s.segment_name = p.table_name
group by p.table_name,s.partition_name,s.tablespace_name,p.compression,p.num_rows
order by 5 desc;
The output confirms that the partition has been compressed, and if you check the output from the same query earlier in this post, you'll notice that the size of the partition has dropped from 3,3G to 1,3G, a 60% reduction:
TABLE_NAME  PARTITION_NAME  TABLESPACE_NAME  COMPRESSION  NUM_ROWS  GB
ENTITIY_PR  SYS_P5055       SSB              ENABLED      43448193  1,3
Now that the policy has been enforced, what is its status?
SELECT policy_name,
       object_owner,
       object_name,
       subobject_name,
       object_type,
       inherited_from,
       enabled,
       deleted
FROM   dba_ilmobjects
WHERE  object_owner='SSB'
AND    object_name='ENTITIY_PR'
AND    subobject_name='SYS_P5055'
ORDER BY 1;
POLICY_NAME  OBJECT_OWNER  OBJECT_NAME  SUBOBJECT_NAME  OBJECT_TYPE      INHERITED_FROM        ENABLED  DELETED
P22          SSB           ENTITIY_PR   SYS_P5055       TABLE PARTITION  POLICY NOT INHERITED  NO       NO
Note how the policy is no longer enabled, now that it has been executed.
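To see when the ADO background evaluation actually ran, the ILM task views can help. A quick look at the most recent tasks (dba_ilmtasks is a documented view; the five-row limit is just for brevity):
select task_id, state, creation_time, start_time, completion_time
from dba_ilmtasks
order by task_id desc
fetch first 5 rows only;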
Further reading:
The official Oracle 19c ADO documentation. As always, the excellently written tutorials from Tim Hall. I also used examples from the book "Oracle Database 12c New Features", Chapter 8, by Robert G. Freeman, published by Oracle Press.

Friday, June 25, 2021

utlu122s.sql is replaced from version 19c onwards

During upgrades, you will notice that any old script you've assembled for upgrading 12c and 18c databases will break when it tries to run the script utlu122s.sql. The script has been renamed from version 19c onwards, and is now called utlusts.sql.
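The invocation should be unchanged apart from the file name; assuming a standard $ORACLE_HOME layout, something like:
sqlplus / as sysdba @$ORACLE_HOME/rdbms/admin/utlusts.sql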

Thanks to Rajasekhar Amudala for pointing this out on his blog.

How to purge tables from the shared pool

Tables cannot be directly purged from the shared pool using the dbms_shared_pool package. Instead, the cursors referring to the table can be purged. To generate the purge statements, use v$sqlarea:
set lines 200
spool purge.sql
set heading off
set feedback off
set verify off
set echo off
set pages 0
set trimspool on

select 'exec DBMS_SHARED_POOL.PURGE (''' || ADDRESS ||','|| HASH_VALUE || ''',''C'');'
from V$SQLAREA 
where SQL_ID IN (SELECT SQL_ID 
                 FROM v$sqlarea 
                 WHERE sql_text like 'SELECT * FROM%SSB.ENTITIY_PR%SYS_P5055%');
select 'exit' from dual;
exit
Execute the file "purge.sql", and the cursors referring to the object SSB.ENTITIY_PR and the partition SYS_P5055 will be purged. The Oracle 19c documentation for dbms_shared_pool can be found here.
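If you already know the SQL_ID of a cursor you want to purge, you can skip the LIKE search and generate the address/hash pair directly (a minimal sketch; substitute your own SQL_ID):
select 'exec DBMS_SHARED_POOL.PURGE (''' || address || ',' || hash_value || ''',''C'');'
from v$sqlarea
where sql_id = '&sql_id';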

Wednesday, June 23, 2021

How to list index sizes

For non-partitioned global indexes in a specific schema:
select i.index_name, s.tablespace_name,round(sum(bytes)/1024/1024/1024) "GB"
from dba_indexes i join dba_segments s
on (i.index_name = s.segment_name and i.owner = s.owner)
where i.owner='SH'
and i.partitioned='NO'
group by i.index_name,s.tablespace_name
order by 3 desc;
INDEX_NAME    TABLESPACE_NAME  GB
AN_EEG_IDX    SH               2406
PK_EEG        SH               994
UIDX_REG_EEG  SH               502
IDX_REL_IDX   SH               156

For non-partitioned global indexes on a specific table:
select i.index_name, s.tablespace_name,round(sum(bytes)/1024/1024/1024) "GB"
from dba_indexes i join dba_segments s
on (i.index_name = s.segment_name and i.owner = s.owner)
where i.owner='SH'
and i.partitioned='NO'
and i.table_name = 'ACCOUNTING'
group by i.index_name,s.tablespace_name
order by 3 desc;

INDEX_NAME  TABLESPACE_NAME  GB
PK_ACC      SH               994
IDX_ACC1    SH               120

For partitioned indexes on a specific table:
select s.segment_name,round(sum(s.bytes)/1024/1024/1024,2) "GB" 
from dba_segments s
where s.owner='SH'
and s.segment_name IN (select unique index_name 
                       from dba_indexes 
                       where table_name='SALES' 
                       and PARTITIONED='YES'
                       and index_type <> 'LOB'
                       and owner='SH')
group by s.segment_name;

SEGMENT_NAME  GB
IDX_UK1       28
IDX_ACC_UK2   78,24

For all partitioned indexes in a specific schema:
select i.index_name,i.table_name,s.tablespace_name,round(sum(s.bytes)/1024/1024/1024) "GB"
from dba_indexes i join dba_segments s
on (i.index_name = s.segment_name and i.owner = s.owner)
where i.owner='SH'
and i.partitioned='YES'
and i.index_type <> 'LOB'
group by i.index_name,i.table_name,s.tablespace_name
order by 2 ;

INDEX_NAME       TABLE_NAME       TABLESPACE_NAME  GB
IDX_OP_AA        ENTITY           SH               260
IDX_OP_NA        ENTITY_HISTORY   SH               1082
IDX_VER_AA       EVENT            SH               28
IDX_VER_AA_HIST  EVENT            SH               78
IDX_WW_UK2       RELATION         SH               226
IDX_RELNO        RELATION_ENTITY  SH               350

Thursday, June 17, 2021

How to list partitions and their LOB segment sizes

SELECT  ts.table_owner "owner",
        ts.table_name "table name",
        s.SEGMENT_NAME "segment name",
        s.SEGMENT_TYPE "segment type",
        s.SEGMENT_SUBTYPE "lob type",
        s.PARTITION_NAME "lob part name",
        lp.column_name "lob name",
        lp.compression "lob compression",
        lp.partition_name "table part name",
        ts.num_rows "num rows",
        lp.tablespace_name "tablespace",
        ROUND(sum(s.bytes)/1024/1024/1024) "size GB"
FROM dba_segments s JOIN dba_lob_partitions lp
ON (s.partition_name = lp.lob_partition_name)  
   JOIN DBA_TAB_PARTITIONS ts
        ON (TS.PARTITION_NAME = lp.PARTITION_NAME) 
WHERE lp.table_name='MYTABLE'
AND   ts.table_name='MYTABLE'
-- The line below limits the output to a specific tablespace; remove it to list all tablespaces
AND   s.tablespace_name='DATA1'
-- To limit output to specific table subpartitions only, uncomment the following row
--AND   lp.subpartition_name like 'SYS_SUBP186786%'
AND s.segment_name IN ( SELECT lpt.lob_name 
                        FROM dba_lob_partitions lpt 
                        WHERE lpt.table_name IN ( 'MYTABLE' ) )
GROUP BY ts.table_owner,ts.table_name,s.SEGMENT_NAME,s.SEGMENT_TYPE,s.SEGMENT_SUBTYPE,s.PARTITION_NAME,lp.COMPRESSION,/*lp.subpartition_name*/lp.partition_name, lp.column_name,ts.num_rows,lp.tablespace_name
/*
To limit output to lob partitions with a specific size,
uncomment the HAVING restriction below and change the operator to suit your needs ( <, >, = ),
or use BETWEEN x AND y
*/
-- HAVING ROUND(sum(s.bytes)/1024/1024/1024) between 1 and 10
ORDER BY 12 DESC;
Output:
owner  table name  segment name               segment type   lob type    lob part name  lob name  lob compression  table part name  num rows  tablespace  size GB
JIM    MYTABLE     SYS_LOB0000079505C00019$$  LOB PARTITION  SECUREFILE  SYS_LOB_P3273  MYLOB1    NO               SYS_P3270        864051    DATA1       20
JIM    MYTABLE     SYS_LOB0000079505C00019$$  LOB PARTITION  SECUREFILE  SYS_LOB_P4684  MYLOB1    NO               SYS_P4681        593520    DATA1       17
JIM    MYTABLE     SYS_LOB0000079505C00019$$  LOB PARTITION  SECUREFILE  SYS_LOB_P5692  MYLOB1    NO               SYS_P5689        2021989   DATA1       16


For queries against a subpartitioned table, see this post.

How to generate dbms_application_info settings together with DDL in a sql script

Many DBAs forget to incorporate dbms_application_info into their own scripts. This is particularly important if you start long-running maintenance jobs that could last for days or weeks.

By sending MODULE and ACTION to the database server, you allow for more granular searching in v$session and end-to-end tracing using trcsess. In addition, more information is collected by any ongoing auditing, so it's easier to see who did what later.
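For example, while a move job tagged this way is running, it can be picked out of v$session (the module name 'Move' matches what the snippet below sets):
select sid, serial#, username, module, action, client_info
from v$session
where module = 'Move';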

Here is a snippet that can be used when generating a DDL script for multiple objects.
accept table_owner prompt 'Table owner: '
accept table_name prompt 'Table name: '
accept new_tabspc prompt 'New tablespace: '
accept old_tabspc prompt 'Old tablespace: '

set lines 300
set heading off
set feedback off
set verify off
set echo off
set pages 0
set trimspool on
spool exec_move_lob_&&table_owner..&&table_name..sql
select 'alter session set nls_language=''american'';' from dual;
select 'alter session force parallel ddl;' from dual;
select 'set timing on' from dual;
select 'set lines 200' from dual;
select 'set trimspool on' from dual;
select 'spool exec_move_lob_&&table_owner..&&table_name..log' from dual;
select 'exec DBMS_APPLICATION_INFO.SET_MODULE(module_name=>''Move'', action_name=>''MoveLOB_&&table_name'');' from dual;
select 'exec DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info=>''MoveLOB_&&table_name'');' from dual;
select 'exec DBMS_SESSION.SET_IDENTIFIER (client_id=>''MoveLOB_&&table_name''); ' from dual;

-- your DDL statements generated here, for example:
select 'alter table ' || owner || '.' || table_name || ' move tablespace &&new_tabspc online;'
from dba_tables
where owner='&&table_owner'
and tablespace_name='&&old_tabspc';

select 'exit' from dual;
exit

Thursday, June 10, 2021

How to extract all mountpoints for oracle data files in a database

To extract all unique mount points used for datafiles in a database, use the query below:
SELECT distinct SUBSTR(FILE_NAME, 1,
               INSTR(FILE_NAME, '/', -1, 1) --> find the position of the last '/' by searching backwards from the end of the string
              -1) "PATH" --> keep everything up to, but not including, that last '/'
FROM DBA_DATA_FILES
;

Example output:
PATH
---------------------------------
/sales_db_02/oradata/SALES
/salesdb/oradata/SALES
/sales_db_01/oradata/SALES
If the database is in mount mode (for example, a physical standby database), exchange dba_data_files with v$datafile:
 SELECT distinct SUBSTR(NAME, 1,
                   INSTR(NAME, '/', -1, 1)  
                  -1) "PATH" 
FROM v$datafile;

For logfiles, exchange dba_data_files with v$logfile and file_name with member:
  SELECT distinct SUBSTR(member, 1,
               INSTR(member, '/', -1, 1) 
              -1) "PATH" 
FROM v$logfile
;

Tuesday, June 8, 2021

Workaround for ORA-39358 during import

When running import, you may sometimes run into the problem below:
impdp parfile=myparfile.par

Import: Release 12.2.0.1.0 - Production on Tue Jun 8 14:54:34 2021

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
ORA-39002: invalid operation
ORA-39358: Export dump file version 18.0.0 not compatible with target version 12.2.0
Solution:
 oerr ora 39358
39358, 00000, "Export dump file version %s not compatible with target version %s"
// *Cause:    The Oracle Data Pump export job version was newer than the target
//            compatibility version.
// *Action:   Upgrade the target database to a compatibility level of at least
//            the export job version, or rerun the export job with a job
//            version that is lower than or equal to the compatibility version
//            of the target database.
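Before picking a VERSION value, it can be helpful to check the target database's compatibility setting, since that is what the dump file version is compared against:
show parameter compatible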
The simplest way out for me was to downgrade the export. On the source database server, add the following parameter to your export parameter file:
VERSION=12.2
My complete parameter file then looked as follows:
userid=system/passwd
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=HR.dmp
LOGFILE=exp_HR.log
JOB_NAME=HR
EXCLUDE=STATISTICS
schemas=HR
VERSION=12.2
Rerun the export:
expdp parfile=myparfile.par
Transfer the newly generated dumpfile to your destination server, and rerun the import. This time around, you shouldn't see any errors relating to incompatible versions.