Wednesday, May 12, 2021

How to clear a standby redo logfile and then drop it

During setup of a physical standby database, one of the standby redo logfiles was marked as "ACTIVE" when querying v$standby_log:
SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log ORDER BY thread#, group#;
  THREAD#     GROUP#  SEQUENCE#      BYTES ARCHIVED  STATUS
---------- ---------- ---------- ---------- --------- ------------------------------
         0         12          0 2097152000 YES       UNASSIGNED
         0         13          0 2097152000 YES       UNASSIGNED
         0         14          0 2097152000 YES       UNASSIGNED
         0         15          0 2097152000 YES       UNASSIGNED
         0         16          0 2097152000 YES       UNASSIGNED
         0         17          0 2097152000 YES       UNASSIGNED
         0         18          0 2097152000 YES       UNASSIGNED
         1         11     392344 2097152000 YES       ACTIVE
The file does not even exist on my system; the path was somehow copied over from the primary database, which has a different file structure.
Querying v$logfile for the member of the active group:
SELECT group#, type, member FROM v$logfile WHERE group# = 11;

returns:

  GROUP# TYPE                  MEMBER
---------- --------------------- ------------------------------------------------------------
        11 STANDBY               /backup/fast_recovery_area/PRODDB01/onlinelog/stb_redo01.log
The path /backup/fast_recovery_area/PRODDB01/onlinelog doesn't even exist on my server. This makes it impossible to drop and recreate it:
SQL> alter database drop standby logfile group 11;
alter database drop standby logfile group 11
*
ERROR at line 1:
ORA-00313: open failed for members of log group 11 of thread 1
ORA-00312: online log 11 thread 1:
'/backup/fast_recovery_area/PRODDB01/onlinelog/stb_redo01.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Solution: Clear the logfile:
SQL> alter database clear logfile group 11;

Database altered.
Then drop and recreate the standby redo logfile:
SQL> alter database drop standby logfile group 11;

Database altered.

SQL> alter database add standby logfile group 11 ('/data1/oradata/PRODDB01/stb_redo01.log') size 2000M;
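
The recreated group should now show up in v$standby_log with status UNASSIGNED; a quick way to verify:
SQL> select thread#, group#, sequence#, status from v$standby_log where group# = 11;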

Friday, April 30, 2021

How to install the Oracle database sample schemas

1. Go to https://github.com/oracle/db-sample-schemas/releases/latest and download the desired version of the sample schemas
2. Upload the zip file to your server
3. Unpack to any directory. $ORACLE_HOME/demo/schema is a good candidate
4. Prepare the scripts by running this perl command, which replaces all occurrences of the token `__SUB__CWD__` with your current working directory:
perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat
5. Make sure you know the system and the sys passwords, and execute
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP /u01/oracle/18c/demo/schema/log/ mydb
The list of arguments is pretty obvious: 
1. system password 
2. sys password 
3-8. passwords for the six sample schemas (HR, OE, PM, IX, SH, BI) you are about to install 
9. default tablespace for the sample schemas 
10. temporary tablespace for the sample schemas 
11. log location 
12. connect string. When using multitenant, you need to pass a pdb name as an argument. The cdb cannot be used
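
After the script completes, a quick sanity check is to connect as one of the freshly created schemas and count the rows of a known table; HR.EMPLOYEES ships with 107 rows:
SQL> connect hr/hrpwd@mydb
Connected.
SQL> select count(*) from employees;

  COUNT(*)
----------
       107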
 
Read more here

Solution for errors when selecting from external tables after installing the SH sample schema

If you are trying to select from the external tables created in the SH sample schemas, and receive an error like this:
SQL>select count(*) from SALES_TRANSACTIONS_EXT;
select count(*) from SALES_TRANSACTIONS_EXT
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04001: error opening file $ORACLE_HOME/demo/schema/log/ext_1v3.log
It is most likely because you have used the operating system variable $ORACLE_HOME during installation, like this:
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP $ORACLE_HOME/demo/schema/log/ mydb

This causes the directory objects created as part of the installation to contain the literal variable rather than the physical path, which renders them unusable:
CREATE OR REPLACE DIRECTORY 
DATA_FILE_DIR AS '$ORACLE_HOME/demo/schema/sales_history';

CREATE OR REPLACE DIRECTORY 
LOG_FILE_DIR AS '$ORACLE_HOME/demo/schema/log';
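
You can confirm what was actually stored by querying the data dictionary (assuming the directory objects are visible to your user):
SELECT directory_name, directory_path
FROM   all_directories
WHERE  directory_name IN ('DATA_FILE_DIR', 'LOG_FILE_DIR');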

Use an absolute path instead:
sqlplus /nolog
@mksample systempwd syspwd hrpwd oepwd pmpwd ixpwd shpwd bipwd EXAMPLE TEMP /u01/oracle/18c/demo/schema/log/ mydb

And your directories will be created correctly:
CREATE OR REPLACE DIRECTORY 
DATA_FILE_DIR AS '/u01/oracle/18c/demo/schema/sales_history';

CREATE OR REPLACE DIRECTORY 
LOG_FILE_DIR AS '/u01/oracle/18c/demo/schema/log';

You should now be able to select from your external table:
SQL>connect sh/shpwd@mydb
Connected.
SQL>select count(*) from SALES_TRANSACTIONS_EXT;

  COUNT(*)
----------
    916039
The Oracle 18c installation of the sample schemas is documented here.
The Oracle 18c sample schemas on github

How to instruct data pump import to load whatever it can from a corrupted source


Oracle 18c introduces a new option to the parameter DATA_OPTIONS: CONTINUE_LOAD_ON_FORMAT_ERROR.
This option will tell data pump import to skip forward to the start of the next granule if a stream format error is encountered while loading data.
In previous versions, if a format error was found, impdp would abort and already loaded rows would be rolled back.

If you find yourself in a situation where all recovery options have failed you, then you might save at least some of the source data using this parameter.

Used in a simple parameter file called impdp.par:
USERID=system
FULL=y 
DUMPFILE=mydumpfile.dmp 
DIRECTORY=DP
TABLE_EXISTS_ACTION=replace 
DATA_OPTIONS=CONTINUE_LOAD_ON_FORMAT_ERROR
Execute:
impdp parfile=impdp.par
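
The same parameters can also be passed directly on the command line instead of through a parfile (same hypothetical dump file and directory object as above):
impdp system full=y dumpfile=mydumpfile.dmp directory=DP table_exists_action=replace data_options=continue_load_on_format_error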

The feature is documented in the 18c new features documentation.

See also the utilities documentation for 18c

How to create a private temporary table from Oracle 18c and onwards


The default is to drop both the table definition and its data when the transaction ends with a commit or rollback:
create private temporary table ora$ptt_mytemptab (id number, name varchar2(20))
on commit drop definition;

You could also let the data be visible for the entire duration of the session:
create private temporary table ora$ptt_mytemptab (id number, name varchar2(20))
on commit preserve definition;
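
A minimal sketch of the session-scoped variant in action (hypothetical data):
SQL> insert into ora$ptt_mytemptab select level, 'row ' || level from dual connect by level <= 107;

107 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from ora$ptt_mytemptab;

  COUNT(*)
----------
       107
The rows survive the commit; with on commit drop definition they would be gone, together with the table itself.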
The prefix ora$ptt_ is dictated by the parameter private_temp_table_prefix:
SQL>show parameter private_temp_table_prefix

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
private_temp_table_prefix            string      ORA$PTT_

It must be included in the create statement of the PTT. You can find information about your PTTs in the dictionary:
set lines 200
col table_name format a20
col tablespace_name format a20
col duration format a20
col num_rows format 99999999
SELECT sid, serial#, table_name, tablespace_name, duration, num_rows
FROM   user_private_temp_tables;
In my case the output is:

       SID    SERIAL# TABLE_NAME           TABLESPACE_NAME      DURATION              NUM_ROWS
---------- ---------- -------------------- -------------------- -------------------- ---------
       280      12720 ORA$PTT_MYTEMPTAB    TEMP                 SESSION                    107
The column "duration" will show whether it is preserving data for the session or the transaction.

Read more about PTTs in the Oracle 18c documentation

Monday, April 19, 2021

trcsess utility arguments

trcsess  [output=output_file_name]
         [session=session_id]
         [clientid=client_id]
         [service=service_name]
         [action=action_name]
         [module=module_name]
         [trace_files]
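
For example, to extract everything belonging to one session (identified by sid.serial#) from all trace files in the current directory (file names and session id are hypothetical):
trcsess output=single_session.trc session=127.4503 *.trc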
Documentation and examples for Oracle 19c can be found here

tkprof arguments

From Oracle Support note 29012.1
   print=integer    List only the first 'integer' SQL statements.
   insert=filename  List SQL statements and data inside INSERT statements.
   sys=no           TKPROF does not list SQL statements run as user SYS.
   record=filename  Record statements found in the trace file.
   sort=option      Set of zero or more of the following sort options:

     prscnt  number of times parse was called
     prscpu  cpu time parsing
     prsela  elapsed time parsing
     prsdsk  number of disk reads during parse
     prsqry  number of buffers for consistent read during parse
     prscu   number of buffers for current read during parse
     prsmis  number of misses in library cache during parse

     execnt  number of times execute was called
     execpu  cpu time spent executing
     exeela  elapsed time executing
     exedsk  number of disk reads during execute
     exeqry  number of buffers for consistent read during execute
     execu   number of buffers for current read during execute
     exerow  number of rows processed during execute
     exemis  number of library cache misses during execute

     fchcnt  number of times fetch was called
     fchcpu  cpu time spent fetching
     fchela  elapsed time fetching
     fchdsk  number of disk reads during fetch
     fchqry  number of buffers for consistent read during fetch
     fchcu   number of buffers for current read during fetch
     fchrow  number of rows fetched

     userid  userid of user that parsed the cursor
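
A typical invocation (trace file name is hypothetical), sorting by elapsed execute and fetch time, limiting the report to the top 10 statements and excluding SYS-issued SQL:
tkprof PRODDB01_ora_12345.trc PRODDB01_ora_12345.txt sort=exeela,fchela print=10 sys=no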