Wednesday, May 12, 2021

Simple script for finding a leftover FK constraint to an interim table after a redefinition

After an online redefinition of a table, you may be left with foreign key constraints from other tables that still point to the old interim table.

This simple script fixed my problem:
alter session set nls_language='american';
set lines 200
col table_name format a30
col constraint_name format a30
col r_constraint_name format a30
col status format a20

SELECT  TABLE_NAME, CONSTRAINT_NAME
FROM    DBA_CONSTRAINTS
WHERE   OWNER = '&&owner'
AND     CONSTRAINT_TYPE='R'
AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='&&interim_table');

prompt copy and paste output above as input to table_name and constraint_name:
prompt
ALTER TABLE &&owner..&&table_name DROP CONSTRAINT &constraint_name;
DROP TABLE &&owner..&&interim_table PURGE;
exit



Result:
Enter value for owner: SCOTT
old   3: WHERE   OWNER = '&&owner'
new   3: WHERE   OWNER = 'SCOTT'
Enter value for interim_table: ORDERS_INTERIM
old   5: AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='&&interim_table')
new   5: AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='ORDERS_INTERIM')

TABLE_NAME                     CONSTRAINT_NAME
------------------------------ ------------------------------
ORDERROWS                      TMP$$_FK__ORD__ENT0

copy and paste output above as input to table_name and constraint_name:
Enter value for table_name: ORDERROWS
Enter value for constraint_name: TMP$$_FK__ORD__ENT0
old   1: ALTER TABLE &&owner..&table_name DROP CONSTRAINT &constraint_name
new   1: ALTER TABLE SCOTT.ORDERROWS DROP CONSTRAINT TMP$$_FK__ORD__ENT0

Table altered.

old   1: DROP TABLE &&owner..ORDERS_INTERIM PURGE
new   1: DROP TABLE SCOTT.ORDERS_INTERIM PURGE

Table dropped.
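If several constraints reference the interim table, a small variation of the query can generate the DROP statements for you instead of relying on copy and paste. This is just a sketch, using the same dictionary view and substitution variables as above:

-- generate one DROP CONSTRAINT statement per leftover FK pointing to the interim table
SELECT 'ALTER TABLE ' || owner || '.' || table_name ||
       ' DROP CONSTRAINT ' || constraint_name || ';' AS drop_stmt
FROM   dba_constraints
WHERE  owner = '&&owner'
AND    constraint_type = 'R'
AND    r_constraint_name IN (SELECT constraint_name
                             FROM   dba_constraints
                             WHERE  owner = '&&owner'
                             AND    table_name = '&&interim_table');

Spool the output to a file and run it, or paste the generated statements back into SQL*Plus.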

How to repeat a command for all the lines in a text file using vi

This is a handy command for all those using the vi editor to fix scripts with many lines. For example, I had to create statements like this:
catalog start with '/recovery_data/PRODDB01/archivelog/2021_05_06/o1_mf_1_391574_j98pt1n9_.arc';
and then execute them as a script in RMAN. I did the following:
cd /recovery_data/PRODDB01/archivelog/2021_05_06
ls *.arc > register_06052021.cmd
open the file in vi:
vi register_06052021.cmd
check the number of lines:
:set number [enter]
Press Shift+G - you're taken to the end of the file. In my case, the file had a total of 98 lines. Go back to the top of the file:
:0 [enter]
Now record your action by typing:
q a I catalog start with '/recovery_data/PRODDB01/archivelog/2021_05_06/  ESC j q
Repeat it 97 times:
97 @ a
I also needed to add a closing '; at the end of each statement to be able to pass it to RMAN. This turned out to be as easy as going to the top of the file again and recording a new macro:
q a A '; ESC j q
Repeat it 97 times:
97 @ a
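If you prefer not to record a macro, the same edits should also be possible with two global substitutions in ex mode (a sketch, using the same directory path as above; the # characters are just alternative delimiters, since the path contains forward slashes):
:%s#^#catalog start with '/recovery_data/PRODDB01/archivelog/2021_05_06/#
:%s#$#';#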

How to clear a standby redo logfile and then drop it

During setup of a physical standby database, one of the standby redo logfiles was marked as "ACTIVE" when querying v$standby_log:
SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log ORDER BY thread#, group#;
  THREAD#     GROUP#  SEQUENCE#      BYTES ARCHIVED  STATUS
---------- ---------- ---------- ---------- --------- ------------------------------
         0         12          0 2097152000 YES       UNASSIGNED
         0         13          0 2097152000 YES       UNASSIGNED
         0         14          0 2097152000 YES       UNASSIGNED
         0         15          0 2097152000 YES       UNASSIGNED
         0         16          0 2097152000 YES       UNASSIGNED
         0         17          0 2097152000 YES       UNASSIGNED
         0         18          0 2097152000 YES       UNASSIGNED
         1         11     392344 2097152000 YES       ACTIVE
The file does not even exist on my system; the path was somehow copied from the primary database, which has a different file structure.
The member of standby redo logfile group 11, as listed in v$logfile, shows a path inherited from the primary:
  GROUP# TYPE                  MEMBER
---------- --------------------- ------------------------------------------------------------
        11 STANDBY               /backup/fast_recovery_area/PRODDB01/onlinelog/stb_redo01.log
The path /backup/fast_recovery_area/PRODDB01/onlinelog doesn't even exist on my server. This makes it impossible to drop and recreate it:
SQL> alter database drop standby logfile group 11;
alter database drop standby logfile group 11
*
ERROR at line 1:
ORA-00313: open failed for members of log group 11 of thread 1
ORA-00312: online log 11 thread 1:
'/fra/stdb/onlinelog/stb_redo01.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Solution: Clear the logfile:
SQL> alter database clear logfile group 11;

Database altered.
Then drop and recreate the standby redo logfile:
SQL> alter database drop standby logfile group 11;

Database altered.

SQL> alter database add standby logfile group 11 ('/data1/oradata/PRODDB01/stb_redo01.log') size 2000M;
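To verify the result, you can query v$standby_log and v$logfile again; the recreated group should now point to the new path and, as long as it is not yet in use, show status UNASSIGNED. A quick check:

-- check status and member of the recreated standby redo logfile group
SELECT group#, thread#, sequence#, status FROM v$standby_log WHERE group# = 11;
SELECT group#, type, member FROM v$logfile WHERE group# = 11;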

Friday, April 30, 2021

How to install the Oracle database sample schemas

1. Go to https://github.com/oracle/db-sample-schemas/releases/latest and download the desired version of the sample schemas
2. Upload the zip file to your server
3. Unpack to any directory. $ORACLE_HOME/demo/schema is a good candidate
4. Prepare the scripts by running this perl command, which replaces occurrences of the token `__SUB__CWD__` with your current directory:
perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat
5. Make sure you know the system and the sys passwords, and execute:
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP /u01/oracle/18c/demo/schema/log/ mydb
The list of arguments is straightforward:
1. system password
2. sys password
3-8. passwords for the six sample schemas (HR, OE, PM, IX, SH, BI)
9. default tablespace for the sample schemas
10. temporary tablespace for the sample schemas
11. log location
12. connect string. When using multitenant, you need to pass a PDB name; the CDB root cannot be used
 
Read more here
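When mksample has finished, a quick sanity check is to verify that the schema accounts exist in the database you installed into (this assumes you installed all six schemas):

-- list the sample schema accounts
SELECT username, account_status, default_tablespace
FROM   dba_users
WHERE  username IN ('HR','OE','PM','IX','SH','BI')
ORDER  BY username;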

Solution for errors when selecting from external tables after installing the SH sample schema

If you try to select from the external tables created by the SH sample schema and receive an error like this:
SQL>select count(*) from SALES_TRANSACTIONS_EXT;
select count(*) from SALES_TRANSACTIONS_EXT
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04001: error opening file $ORACLE_HOME/demo/schema/log/ext_1v3.log
It is most likely because you have used the operating system variable $ORACLE_HOME during installation, like this:
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP $ORACLE_HOME/demo/schema/log/ mydb

This causes the directory objects created as part of the installation to contain the literal variable rather than the physical path, which makes them unusable:
CREATE OR REPLACE DIRECTORY 
DATA_FILE_DIR AS '$ORACLE_HOME/demo/schema/sales_history';

CREATE OR REPLACE DIRECTORY 
LOG_FILE_DIR AS '$ORACLE_HOME/demo/schema/log';
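You can confirm that this is the problem by checking the registered paths for the directory objects; if they contain the literal string $ORACLE_HOME, the directories are broken:

SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name IN ('DATA_FILE_DIR','LOG_FILE_DIR');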

Use an absolute path instead:
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP /u01/oracle/18c/demo/schema/log/ mydb

And your directories will be created correctly:
CREATE OR REPLACE DIRECTORY 
DATA_FILE_DIR AS '/u01/oracle/18c/demo/schema/sales_history';

CREATE OR REPLACE DIRECTORY 
LOG_FILE_DIR AS '/u01/oracle/18c/demo/schema/log';

You should now be able to select from your external table:
SQL>connect sh/shpwd@mydb
Connected.
SQL>select count(*) from SALES_TRANSACTIONS_EXT;

  COUNT(*)
----------
    916039
The Oracle 18c installation of the sample schemas is documented here.
The Oracle 18c sample schemas on github

How to instruct data pump import to load whatever it can from a corrupted source


Oracle 18c introduces a new option to the parameter DATA_OPTIONS: CONTINUE_LOAD_ON_FORMAT_ERROR.
This option will tell data pump import to skip forward to the start of the next granule if a stream format error is encountered while loading data.
In previous versions, if a format error was found, impdp would abort and already loaded rows would be rolled back.

If you find yourself in a situation where all recovery options have failed you, then you might save at least some of the source data using this parameter.

Here it is used in a simple parameter file called impdp.par:
USERID=system
FULL=y 
DUMPFILE=mydumpfile.dmp 
DIRECTORY=DP
TABLE_EXISTS_ACTION=replace 
DATA_OPTIONS=CONTINUE_LOAD_ON_FORMAT_ERROR
Execute:
impdp parfile=impdp.par

The feature is documented in the 18c new features documentation.

See also the utilities documentation for 18c

How to create a private temporary table from Oracle 18c and onwards


The default is to drop both the table definition and its data when the transaction ends with a commit or rollback:
create private temporary table ora$ptt_mytemptab (id number, txt varchar2(30)) -- example column list
on commit drop definition;

You can also let the table and its data live for the entire duration of the session:
create private temporary table ora$ptt_mytemptab (id number, txt varchar2(30))
on commit preserve definition;
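A minimal usage sketch (the column names above are just examples) showing that the ON COMMIT DROP DEFINITION variant lives and dies with the transaction:

-- assumes the table was created with ON COMMIT DROP DEFINITION as in the first example
insert into ora$ptt_mytemptab values (1, 'only visible to my session');
select count(*) from ora$ptt_mytemptab;
commit;
-- the commit drops both the definition and the data; a new select against the table will now fail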
The notation ora$ptt is derived from the parameter private_temp_table_prefix:
SQL>show parameter private_temp_table_prefix

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
private_temp_table_prefix            string      ORA$PTT_

The prefix must be included in the table name in the CREATE statement. You can find information about your PTTs in the dictionary:
set lines 200
col table_name format a20
col tablespace_name format a20
col duration format a20
col num_rows format 99999999
SELECT sid, serial#, table_name, tablespace_name, duration, num_rows
FROM   user_private_temp_tables;
In my case the output is:

      SID    SERIAL# TABLE_NAME           TABLESPACE_NAME      DURATION              NUM_ROWS
---------- ---------- -------------------- -------------------- -------------------- ---------
       280      12720 ORA$PTT_MYTEMPTAB   TEMP                 SESSION                    107
The column "duration" will show whether it is preserving data for the session or the transaction.

Read more about PTTs in the Oracle 18c documentation