Wednesday, December 16, 2020

How to fix error SYS.DBMS_BACKUP_RESTORE version 18.11.00.00 in TARGET database is not current

Even if you're not using a recovery catalog, you may encounter this issue if you have recently patched your database. Simply connect to the target database with RMAN to verify that the internal packages need updating:
oracle@myserver.mydomain.com:[proddb01]# rman target / nocatalog

Recovery Manager: Release 18.0.0.0.0 - Production on Wed Dec 16 13:51:43 2020
Version 18.12.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

PL/SQL package SYS.DBMS_BACKUP_RESTORE version 18.11.00.00 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 18.11.00.00 in TARGET database is not current
connected to target database: PRODDB01 (DBID=1234567890)
using target database control file instead of recovery catalog
Solution: follow Doc ID 888818.1, "Rman 06190 Connecting to target database after upgrade". In short, connect to the database as sysdba and run the following scripts:
SQL> @$ORACLE_HOME/rdbms/admin/dbmsrman.sql
SQL> @$ORACLE_HOME/rdbms/admin/prvtrmns.plb   
SQL> @$ORACLE_HOME/rdbms/admin/dbmsbkrs.sql 
SQL> @$ORACLE_HOME/rdbms/admin/prvtbkrs.plb

Friday, December 4, 2020

How to view the contents of a zip file without extracting it

Method 1:
zip -sf myzipfile.zip
Method 2:
zipinfo myzipfile.zip
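If neither zip nor zipinfo happens to be installed, a third option is the small command-line interface to Python's standard-library zipfile module. A sketch, with made-up file names:

```shell
# Create a throwaway archive and list its contents without extracting it.
# "python3 -m zipfile" ships with the Python standard library.
printf 'hello\n' > note.txt
python3 -m zipfile -c myzipfile.zip note.txt   # create the archive
python3 -m zipfile -l myzipfile.zip            # list its contents
```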

How to mark an existing SQL Plan baseline as fixed

Thanks a lot to Tim Hall for providing this simple example:
SET SERVEROUTPUT ON
DECLARE
  l_plans_altered  PLS_INTEGER;
BEGIN
  l_plans_altered := DBMS_SPM.alter_sql_plan_baseline(
    sql_handle      => 'SQL_7ff180a4583f257d',
    plan_name       => 'SQL_PLAN_7zwc0njc3y9bx2c993bf4',
    attribute_name  => 'fixed',
    attribute_value => 'YES');

  DBMS_OUTPUT.put_line('Plans Altered: ' || l_plans_altered);
END;
/
The details for the plan baselines can be found like this:
SELECT sql_handle,plan_name,origin,enabled,accepted,fixed
FROM dba_sql_plan_baselines
WHERE plan_name ='SQL_PLAN_7zwc0njc3y9bx2c993bf4';
Result:
SQL_HANDLE           | PLAN_NAME                      | ORIGIN       | ENABLED | ACCEPTED | FIXED
SQL_7ff180a4583f257d | SQL_PLAN_7zwc0njc3y9bx2c993bf4 | AUTO-CAPTURE | YES     | YES      | YES

After fixing the plan, all the plan alternatives for the SQL ID can be seen below:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN,
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('2mympbsn3r4rk')
ORDER BY sql_id,LAST_ACTIVE_TIME DESC;
Result:
last active      | SQL_ID        | PLAN_HASH_VALUE | BASELINE                       | SQL_PROFILE | IS_RESOLVED_ADAPTIVE_PLAN | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
04.12.2020 09:49 | 2mympbsn3r4rk | 480132689       | SQL_PLAN_7zwc0njc3y9bx3bf43977 |             |                           | 1         | 612838711    | 116        | 613            | 5,3
04.12.2020 09:49 | 2mympbsn3r4rk | 3102497174      | SQL_PLAN_7zwc0njc3y9bx2c993bf4 |             | Y                         | 2         | 203961       | 191        | 0              | 0
04.12.2020 08:49 | 2mympbsn3r4rk | 480132689       | SQL_PLAN_7zwc0njc3y9bx3bf43977 |             |                           | 0         | 2247452482   | 30         | 2247           | 74,9

Tuesday, November 10, 2020

Date formatting in PostgreSQL

select action_type, to_char(action_date,'DD Mon YYYY'), count(*) from actions group by action_type, to_char(action_date,'DD Mon YYYY');
action_type     |   to_char   | count
-----------------+-------------+-------
 action1 | 27 Oct 2020 | 47831
 action1 | 22 Oct 2020 |   640
 action1 | 20 Oct 2020 |     1
 action1 | 22 Oct 2020 |  1654
 action1 | 26 Oct 2020 |   290
 action2 | 21 Oct 2020 |     8
 action2 | 27 Oct 2020 |   140
 action2 | 26 Oct 2020 |   900

Documentation here

Thursday, November 5, 2020

How to change the control_files parameter used in the spfile

Mind your syntax when you are changing certain multi-value parameters directly in the spfile. Oracle will tell you that the syntax is accepted, but it will give you an error later. The following example is from an attempt to change the control_files parameter.

Won't work:
SQL> alter system set control_files='/oradata/proddb01/control01.ctl, /fra/proddb01/control02.ctl' scope=spfile;

System altered.
During mount stage, this will give you the following message in the database's alert log:
ORA-205 signalled during: ALTER DATABASE   MOUNT...
ORA-00202: control file: /oradata/proddb01/control01.ctl, /fra/proddb01/control02.ctl
ORA-27037: cannot obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
Works:
SQL> alter system set control_files='/oradata/proddb01/control01.ctl','/fra/proddb01/control02.ctl' scope=spfile;

System altered.
Each file path must be enclosed in its own pair of single quotes.

Wednesday, October 21, 2020

How to transfer SQL Profiles from one database to another

Following up on part 1, my post on How to transfer SQL Plan Baselines from one database to another, I will now show you the steps to transfer a SQL Profile in the same manner.
For SQL Profiles you depend on the package dbms_sqltune.
I am connecting to the databases as a privileged user called dbadmin throughout this post.

1. First, find the name of the SQL Profile you are about to transfer:
select * from dba_sql_profiles order by created;
The query above will return the SQL Profiles in the order in which they were created. Notice the name of the relevant one, as this will be used as an argument in the procedures later.

2. On the source database, create a staging table for SQL Profiles:
vi 2.cre_SQLProfile_stgtab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (
    table_name  => 'MYSTAGETAB_SQLPROF'
,   schema_name => 'DBADMIN'
);
END;
/
exit

Execute the file:
sqlplus /nolog @2.cre_SQLProfile_stgtab.sql
3. Pack your SQL Profile into your staging table

Create the file 3.pack_SQLProfile_stgtab.sql:
vi 3.pack_SQLProfile_stgtab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

BEGIN
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (
    profile_name         => 'SYS_SQLPROF_014fb0ca24980001'
,   staging_table_name   => 'MYSTAGETAB_SQLPROF'
,   staging_schema_owner => 'DBADMIN'
);
END;
/
exit
Run the file:
sqlplus /nolog @3.pack_SQLProfile_stgtab.sql
Let's verify that the staging table now has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB_SQLPROF;
4. Create the export parameter file and export the table:

vi 4.expdp_SQLPROFILE_stgtab.sql

Add the following:

userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=expdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=exp_MYSTAGETAB_SQLPROF
directory=DP


Make sure the directory DP exists in the database. If not, create it with
create directory DP as '/exports/';
Then export the table:
expdp parfile=4.expdp_SQLPROFILE_stgtab.sql
Logon to the destination server. 

5. Import the staging table in the destination database

Create an import parameter file:
vi 5.impdp_SQLProfile.par
Add the following:
userid=dbadmin
tables=MYSTAGETAB_SQLPROF
logfile=impdp_MYSTAGETAB_SQLPROF.log
dumpfile=MYSTAGETAB_SQLPROF.dmp
job_name=imp_MYSTAGETAB_SQLPROF
directory=DP
Run the import:
impdp parfile=5.impdp_SQLProfile.par
6. Unpack the information in the staging table

Create the file for unpacking the SQL Profile in the destination database:
vi 6.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
BEGIN
   DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
     replace            => true
,    staging_table_name => 'MYSTAGETAB_SQLPROF'
,    profile_name => 'SYS_SQLPROF_014fb0ca24980001'
);
END;
/
exit
Execute the file:
sqlplus /nolog @6.unpack.sql

Let's verify that the optimizer has actually created a new execution plan based on the SQL Profile:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN,
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
Output:
last active      | SQL_ID        | PLAN_HASH_VALUE | BASELINE                       | SQL_PROFILE                  | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:19 | 5cbuqgt5m5961 | 1355560190      |                                |                              |              | 1         | 20850937263  | 1488       | 20851          | 14
21.10.2020 12:55 | 5cbuqgt5m5961 | 1355560190      |                                |                              |              | 0         | 112190741129 | 27134      | 112191         | 4,1
21.10.2020 14:47 | 5cbuqgt5m5961 | 3389950398      | SQL_PLAN_ds58srfg86uf4f5a21fb1 | SYS_SQLPROF_014fb0ca24980001 |              | 3         | 116651       | 302        | 0              | 0
After the SQL Profile was created, the performance finally stabilized.
The documentation for the dbms_sqltune package can be found here

How to transfer SQL Plan baselines from one database to another

A customer called with a request to transfer a specific SQL plan from production to test. The query was found in the shared pool:
SELECT TO_CHAR(LAST_ACTIVE_TIME, 'dd.mm.yyyy hh24:mi') "last active",
       SQL_ID,
       PLAN_HASH_VALUE,
       SQL_PLAN_BASELINE "BASELINE", 
       SQL_PROFILE,
       IS_RESOLVED_ADAPTIVE_PLAN "AdaptivePlan",
       CHILD_NUMBER AS "Child Num",
       ELAPSED_TIME,
       EXECUTIONS,
       ROUND(ELAPSED_TIME/1000000) "duration (sec)"
       ,CASE WHEN EXECUTIONS > 0 THEN
        ROUND( (ELAPSED_TIME/EXECUTIONS)/1000000, 1)
        ELSE
            NULL
       END "sec per exe"
FROM V$SQL 
WHERE SQL_ID in('5cbuqgt5m5961')
ORDER BY LAST_ACTIVE_TIME DESC;
last active      | SQL_ID        | PLAN_HASH_VALUE | BASELINE                       | SQL_PROFILE                  | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:34 | 5cbuqgt5m5961 | 3389950398      | SQL_PLAN_ds58srfg86uf4f5a21fb1 | SYS_SQLPROF_014fb0ca24980001 |              | 1         | 210182320    | 1402169    | 210            | 0

In the test database, the same query does not have a plan in the SMB, nor does it have a SQL Profile. The same query as above gave me this result in the test database:
last active      | SQL_ID        | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 10:47 | 5cbuqgt5m5961 | 1355560190      |          |             |              | 0         | 110508458900 | 27032      | 110508         | 4,1
We see that the query needs a SQL Plan baseline and a SQL Profile to perform well.
Oracle has a way to export the metadata needed to stabilize the query plan and import it into another database.
When exporting SQL Plan baselines, you use the package dbms_spm.

Here is what I did to transfer the SQL Plan from one database to another.
I am connecting to the databases as a privileged user called dbadmin throughout this post. In part two of this post, I will show you how I transferred the SQL Profile from one database to another.

1. On the source database, create a staging table for SQL Plan Baselines:
vi 1.cre_stagetab.sql
Add the following:
connect dbadmin
BEGIN
  DBMS_SPM.CREATE_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB');
END;
/
Execute the file:
sqlplus /nolog @1.cre_stagetab.sql
Still in the source database, check who the owner of the baseline is:
SELECT plan_name,
         sql_handle,
         creator,
         origin,
         TO_CHAR (created, 'dd.mm.yyyy hh24:mi') "created",
         TO_CHAR (last_executed, 'dd.mm.yyyy hh24:mi') "lastexecuted",
         enabled,
         accepted,
         fixed,
         autopurge,
         module,
         ROUND (ELAPSED_TIME / 1000000) "duration in seconds",
         cpu_time,
         buffer_gets,
         disk_reads
    FROM DBA_SQL_PLAN_BASELINES
where plan_name='SQL_PLAN_ds58srfg86uf4f5a21fb1';
The output shows that the owner is SYS:
PLAN_NAME                      | SQL_HANDLE           | CREATOR | ORIGIN               | created          | lastexecuted     | ENABLED | ACCEPTED | FIXED | AUTOPURGE | MODULE           | duration in seconds | CPU_TIME | BUFFER_GETS | DISK_READS
SQL_PLAN_ds58srfg86uf4f5a21fb1 | SQL_dc1518bb9e8369c4 | SYS     | MANUAL-LOAD-FROM-STS | 30.03.2020 15:13 | 18.10.2020 03:13 | YES     | YES      | YES   | YES       | JDBC Thin Client | 8                   | 3897100  | 597270      | 715

2. Create the file 2.pack_stagetab.sql:
vi 2.pack_stagetab.sql
Add the following:
conn dbadmin
alter session set nls_language='american';

DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.PACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
,   enabled    => 'yes'
,   creator    => 'SYS'
,   plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
END;
/
exit
Run the file:
sqlplus /nolog @2.pack_stagetab.sql
Let's verify that the staging table has some rows in it:
connect dbadmin
select count(*)
from  MYSTAGETAB;
3. Create an export parameter file:
vi 3.expdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=expdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=exp_MYSTAGETAB
directory=DP
Run the export:
expdp parfile=3.expdp.par
When the export is done, transfer the file to the destination server using ssh.
At this point logon to the destination server.

4. Create an import parameter file:
vi 4.impdp.par
Add the following. Make sure the directory DP exists and is writable:
userid=dbadmin
tables=MYSTAGETAB
logfile=impdp_MYSTAGETAB.log
dumpfile=MYSTAGETAB.dmp
job_name=imp_MYSTAGETAB
directory=DP
Run the import:
impdp parfile=4.impdp.par
5. Create the file for unpacking the SQL Plans to the local SMB:
vi 5.unpack.sql
Add the following:
conn dbadmin
alter session set nls_language='american';
set serveroutput on
DECLARE
  v_plan_cnt NUMBER;
BEGIN
  v_plan_cnt := DBMS_SPM.UNPACK_STGTAB_BASELINE (
    table_name => 'MYSTAGETAB'
    ,plan_name  => 'SQL_PLAN_ds58srfg86uf4f5a21fb1'
);
  DBMS_OUTPUT.put_line('Number of plans loaded: ' || v_plan_cnt);
END;
/
exit
Run the file:
sqlplus /nolog @5.unpack.sql
You should now be able to see a different execution plan when executing the query at the top of this article:
last active      | SQL_ID        | PLAN_HASH_VALUE | BASELINE | SQL_PROFILE | AdaptivePlan | Child Num | ELAPSED_TIME | EXECUTIONS | duration (sec) | sec per exe
21.10.2020 13:19 | 5cbuqgt5m5961 | 1355560190      |          |             |              | 1         | 20850937263  | 1488       | 20851          | 14
21.10.2020 12:55 | 5cbuqgt5m5961 | 1355560190      |          |             |              | 0         | 112190741129 | 27134      | 112191         | 4,1
The performance actually got worse! The reason is that we are still missing the SQL Profile. In the next post, I will transfer the SQL Profile to the same server.

Thursday, October 15, 2020

How to select a specific line number using awk

The file clonetext.txt contains:
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
                        

You would like to get the name of the volume, which is the 4th word on the 3rd line of the file. When grepping for the keyword "Clone", all lines are returned:
 cat clonetext.txt | grep Clone
  Clone Volume: true
                      Clone Parent server Name: myserver1-cluster49
                        SnapClone Parent Volume: myvolume_mirror1
Printing the 4th field with awk helps, but you still get multiple rows, not just the single one you're interested in:
oracle@oric-db01:[mydb01]# cat clonetext.txt | grep Clone | awk -F' ' '{print $4}'

Name:
myvolume_mirror1
Using awk's built-in NR variable (the current record number) solves the problem:
cat clonetext.txt | grep Clone | awk -F' ' 'NR==3 {print $4}'
myvolume_mirror1
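The cat | grep | awk pipeline can also be collapsed into a single awk invocation, since awk can do the pattern matching itself. A sketch, recreating the same example file:

```shell
# Recreate the example file
printf '  Clone Volume: true\n  Clone Parent server Name: myserver1-cluster49\n  SnapClone Parent Volume: myvolume_mirror1\n' > clonetext.txt
# /Clone/ selects matching lines; ++n counts them; print $4 fires on the 3rd match
awk '/Clone/ && ++n==3 {print $4}' clonetext.txt
# → myvolume_mirror1
```

Counting matches with ++n (rather than NR) keeps working even if non-matching lines appear between the Clone lines.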

Monday, October 12, 2020

Different ways to check your path variable on a windows client

Check the Windows registry for the value of the PATH variable: open the regedit application (go to the Windows menu and start typing "regedit") and navigate to
HKEY_CURRENT_USER\Environment
If using Windows PowerShell, use
Get-ItemProperty -Path "HKCU:\Environment"
In the Windows command-line environment, use
echo %PATH%

Thursday, September 3, 2020

Potential solution to dgmgrl error ORA-16665: time out waiting for the result from a member

After having added a terminal standby database to an existing configuration, the Data Guard Broker seemed to have trouble communicating with the new member. The output from "show configuration" showed the following:
DGMGRL> show configuration;

Configuration - DB01

  Protection Mode: MaxPerformance
  Members:
  DB01      - Primary database
    DB01_STB  - Physical standby database
      DB01_TSTB - Physical standby database (receiving current redo)
        Error: ORA-16665: time out waiting for the result from a member

    DB01_RO   - Physical standby database
When looking at the details by using
show database verbose "DB01_TSTB"
the operation takes a long time, and at the end, the following message is displayed:
Database Status:
DGM-17016: failed to retrieve status for database "DB01_TSTB"
ORA-16665: time out waiting for the result from a member
The broker log file showed:
09/02/2020 15:08:52
Data Guard Broker Status Summary:
  Type                        Name                            Severity  Status
  Configuration               DB01                            Warning  ORA-16607
  Primary Database            DB01                            Success  ORA-0
  Physical Standby Database   DB01_STB                        Success  ORA-0
  Physical Standby Database   DB01_RO                         Success  ORA-0
  Physical Standby Database   DB01_TSTB                       Error  ORA-16665
Root cause here was firewalls. The terminal standby database could not reach the primary database. Although the terminal standby database isn't set up to receive redo data from the primary database directly, in a broker configuration all members must be able to communicate with each other. A good tool for troubleshooting port and firewall issues is nmap. I installed it on the terminal standby server and issued:
[root@db04_server ~]# nmap -n -p 1511 db01_sever.oric.no

Starting Nmap 6.40 ( http://nmap.org ) at 2020-09-02 14:23 CEST
Nmap scan report for db01_sever.oric.no (xxx.xxx.xxx.xxx)
Host is up (0.016s latency).
PORT     STATE    SERVICE
1511/tcp filtered 3l-l1

Nmap done: 1 IP address (1 host up) scanned in 0.49 seconds
A filtered port means that it is not possible to determine whether the port is open or closed, most often due to firewalls along the way. Further checks in the firewall log files showed
action=Drop service=1511 dst=xxx.xxx.xxx.xxx scr=yyy.yyy.yyy.yyy
where xxx.xxx.xxx.xxx matched the IP address of the terminal standby server, while yyy.yyy.yyy.yyy matched the IP address of the primary server. The network admin opened the port, and the ORA-16665 error immediately disappeared from the dgmgrl output.

Monday, August 31, 2020

How to zip files in the current directory

In its simplest form, to zip a number of similarly named files, use this syntax:
zip client_sql_trace client*.trc

where the resulting zip file will be called "client_sql_trace.zip" and will contain all files in the current directory matching the pattern client*.trc.

Tuesday, August 18, 2020

How to check the password file version

oracle@oric01.oric.no:[PRODDB01]# orapwd describe file=/u01/oracle/product/12201/dbs/orapwPRODDB01
Password file Description : format=LEGACY
If you try to create the password file with a password that is too simple, you may see this error from orapwd:
OPW-00029: Password complexity failed for SYS user : Password must contain at least 8 characters
The format is set to 12.2 by default, and from the orapwd help text there doesn't seem to be anything lower than this.
If you for some reason need to keep an old password which is too simple for today's default settings, you can create a password file in the legacy format by specifying format=legacy. This option is not shown in the help text:
orapwd file=/u01/oracle/product/12201/dbs/orapwPRODDB01 format=legacy force=Y

Enter password for SYS:

Thursday, August 13, 2020

How to perform a full export and import of an entire database in PostgreSQL


Export the database "db01" from your source postgres server:

pg_dump db01 -v -Fc >db01.dmp

The -v means "verbose" and -Fc means "custom format" (as opposed to "directory format", which would be indicated by -Fd).

Import the database "db01" in your destination postgres server

First, transfer the db01.dmp file from the source server to the target server using scp.
pg_restore -v -d db01 -1 db01.dmp
The -1 option means "all or nothing": either the entire operation succeeds, or nothing is done.

If the database doesn't exist, you need to connect to another database in the cluster first:
pg_restore -v -C -d postgres /exports/full_db_export.dmp

Export the database "db01" using the directory format:

pg_dump -Fd db01 -f /exports/full_db01 -v

This will create the directory /exports/full_db01 which contains all the files necessary for restore. The -v means "verbose" and -Fd means "directory format"

Import the database "db01":


Again, transfer the needed files, this time the entire directory /exports/full_db01 from the source server to the target server using scp.
Then import the database, either by recreating the database indicated by the -C option:
pg_restore /exports/full_db01 -C -c -d postgres -j 2 -v

or by importing the objects into an existing database, in my case "db02":
pg_restore /exports/full_db01 -c -d db02 -j 2 -v

  • The -C option means "create database".
  • The -d postgres option indicates the database to which the pg_restore utility makes its initial connection.
  • The -c option means "drop the database objects before creating".
  • -j is the number of jobs started in parallel.

    The official documentation for the pg_dump utility can be found here

    Monday, August 10, 2020

    What is the difference between DBA_AUDIT_TRAIL and DBA_COMMON_AUDIT_TRAIL?

    The difference between these views is that DBA_COMMON_AUDIT_TRAIL contains information from both standard and fine-grained auditing, while DBA_AUDIT_TRAIL only contains information from standard auditing.

    DBA_AUDIT_TRAIL displays all standard audit trail entries.
    This view is populated only in an Oracle Database where unified auditing is not enabled.
    The SQL_BIND and SQL_TEXT columns are only populated if the AUDIT_TRAIL initialization parameter is set to 'db, extended'.

    DBA_COMMON_AUDIT_TRAIL displays:
    • All standard audit trail entries
    • Fine-grained audit trail entries
    • Mandatory audit trail entries
    • SYS audit records written in XML format

    The SQL_BIND and SQL_TEXT columns are only populated if the AUDIT_TRAIL initialization parameter is set to 'db, extended' or 'xml, extended', or if the AUDIT_SYS_OPERATIONS initialization parameter is set to TRUE.

    How to set parallelism for a data guard recovery process

    If you need to manually set the apply-process DOP (Degree of Parallelism), you can do this by setting the property ApplyParallel=num where num is the number of parallel processes. 

    In my setup, I have three databases: 

    • A primary database (hdal)
    • A physical standby database (hdal_stb) which serves as a "cascading" database
    • A terminal standby database (hdal_tstb) which receives redo from the cascading standby database
    Logged in on any one of the participating servers of your data guard configuration, start dgmgrl: 
    dgmgrl / as sysdba
    Connected to "hdal_stb"
    Connected as SYSDBA.
    DGMGRL> show configuration;
    
    Configuration - DGConfig1
    
      Protection Mode: MaxPerformance
      Members:
      hdal      - Primary database
        hdal_stb  - Physical standby database
          hdal_tstb - Physical standby database (receiving current redo)
    
    DGMGRL> edit database 'hdal_stb' set property ApplyParallel=4;
    
    In the database's alert log, we can see that Oracle is cancelling the ongoing recovery process:
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
    2020-08-10T09:26:24.249847+02:00
    MRP0: Background Media Recovery cancelled with status 16037
    Managed Standby Recovery not using Real Time Apply
    Recovery interrupted!
    Recovered data files to a consistent state at change 27754456
    2020-08-10T09:26:24.510262+02:00
    ORA-16037: user requested cancel of managed recovery operation
    2020-08-10T09:26:24.617519+02:00
    MRP0: Background Media Recovery process shutdown (hdal)
    2020-08-10T09:26:25.250904+02:00
    Managed Standby Recovery Canceled (hdal)
    
    And then starting it again:
    Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT PARALLEL 4 NODELAY
    2020-08-10T09:26:25.284547+02:00
    Attempt to start background Managed Standby Recovery process (hdal)
    Starting background process MRP0
    2020-08-10T09:26:25.306150+02:00
    MRP0 started with pid=77, OS id=115854
    2020-08-10T09:26:25.311038+02:00
    MRP0: Background Managed Standby Recovery process started (hdal)
    2020-08-10T09:26:30.339894+02:00
     Started logmerger process
    2020-08-10T09:26:30.378497+02:00
    Managed Standby Recovery starting Real Time Apply
    2020-08-10T09:26:30.515607+02:00
    Parallel Media Recovery started with 4 slaves
    2020-08-10T09:26:30.741846+02:00
    Media Recovery Waiting for thread 1 sequence 8299 (in transit)
    
    Verify the new setting by looking at the properties (output truncated for brevity):
    DGMGRL> show database verbose hdal_stb
    
    Database - hdal_stb
    
      Role:               PHYSICAL STANDBY
      Intended State:     APPLY-ON
      Transport Lag:      0 seconds (computed 1 second ago)
      Apply Lag:          0 seconds (computed 1 second ago)
      Average Apply Rate: 0 Byte/s
      Active Apply Rate:  0 Byte/s
      Maximum Apply Rate: 0 Byte/s
      Real Time Query:    OFF
      Instance(s):
        hdal
    
      Properties:
        DGConnectIdentifier             = 'hdal_stb'
        LogXptMode                      = 'ASYNC'
        ApplyParallel                   = '4'
    
    The change will be propagated to the other members of the data guard configuration. The data guard broker logfile, located in $ORACLE_BASE/diag/rdbms/$UNIQUE_NAME/$ORACLE_SID/trace, shows:
    08/10/2020 09:26:24
    Forwarding EDIT_RES_PROP operation to member hdal for processing
    08/10/2020 09:26:31
    Updated broker configuration file available, loading from "$ORACLE_HOME/dbs/dr2hdal_stb.dat"
    08/10/2020 09:42:35
    Forwarding MON_PROPERTY operation to member hdal_tstb for processing
    
    The cascading database hdal_tstb simply confirms that the broker configuration has been updated:
    08/10/2020 09:26:31
    Updated broker configuration file available, loading from "$ORACLE_HOME/dbs/dr2hdal_tstb.dat"
    
    The primary database is also verifying the change, although a bit more detailed:
    08/10/2020 09:26:24
    EDIT DATABASE hdal_stb SET PROPERTY applyparallel = 4
    08/10/2020 09:26:31
    EDIT INSTANCE hdal ON DATABASE hdal_stb SET PROPERTY applyparallel = 4 completed successfully
    
    Documentation for dgmgrl version 19c can be found here

    Tuesday, August 4, 2020

    What constitutes "mandatory auditing" under Unified Auditing?

    • Activities from administrative users such as SYSDBA, SYSBACKUP, and SYSKM.
    • The following audit-related activities:
      CREATE AUDIT POLICY
      ALTER AUDIT POLICY
      DROP AUDIT POLICY
      AUDIT
      NOAUDIT
      EXECUTE of the DBMS_FGA PL/SQL package
      EXECUTE of the DBMS_AUDIT_MGMT PL/SQL package
    • Access to sensitive columns in the optimizer dictionary tables.
    • ALTER TABLE attempts on the AUDSYS audit trail table.
    • Top level statements by the administrative users SYS, SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM, until the database opens.
    • All user-issued DML statements on the SYS.AUD$ and SYS.FGA_LOG$ dictionary tables.
    • Any attempts to modify the data or metadata of the unified audit internal table. SELECT statements on this table are not audited by default or mandatorily.
    • All configuration changes that are made to Oracle Database Vault.


    The audit information can be found in the view UNIFIED_AUDIT_TRAIL.
    Documentation for Mandatory Unified Auditing in Oracle 12.2 can be found here

    Monday, August 3, 2020

    How to find offending process when getting error umount.nfs: /data1: device is busy

    As root, use lsof ("list open files"):
    [root@myserver /home/oracle]# lsof | grep '/data1'
    extract    14041        oracle   25r      REG               0,23      2896 1235955379 /data1/goldengate/dirdat/et000000127 (storage1:/Oracle/myserver/data1)
    
    You will see a list of processes. Terminate these with the kill-command:
    kill 13859 14041 
    
    After this is done, you can unmount the nfs file system:
     umount -f /data1
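    If lsof isn't installed, Linux exposes every process's open files under /proc/<pid>/fd, which can be scanned directly. A minimal sketch (the /data1 mount point is the hypothetical one from above):

```shell
# Open a file on descriptor 3 in the current shell, just to demonstrate:
exec 3< /etc/passwd
readlink /proc/$$/fd/3          # shows which file the descriptor points at
# Scan all processes for descriptors pointing under /data1:
for fd in /proc/[0-9]*/fd/*; do
  case "$(readlink "$fd" 2>/dev/null)" in
    /data1/*) echo "holder: ${fd%/fd/*}" ;;   # /proc/<pid> of the offending process
  esac
done
```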
    

    Friday, July 31, 2020

    How to connect to and start a specific database using dgmgrl

    The data guard broker allows you to connect to any of the participating members of the configuration:
    oracle@myserver:[hdal]# dgmgrl / as sysdba
    DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jul 31 10:44:47 2020
    
    Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
    
    Welcome to DGMGRL, type "help" for information.
    Connected to "hdal"
    Connected as SYSDBA.
    DGMGRL> show configuration
    
    Configuration - DGConfig1
    
      Protection Mode: MaxPerformance
      Members:
      hdal      - Primary database
        hdal_stb  - Physical standby database
          hdal_tstb - Physical standby database (receiving current redo)
    
    Fast-Start Failover: DISABLED
    
    Configuration Status:
    SUCCESS   (status updated 51 seconds ago)
    
    DGMGRL> connect sys@hdal_stb as sysdba
    Password:
    Connected to "hdal_stb"
    Connected as SYSDBA.
    
    Note that the password files must be identical for all members of the configuration.
    You can startup and shutdown individual members of your configuration without leaving the data guard broker interface, for example:
    DGMGRL> startup mount
    ORACLE instance started.
    Database mounted.
    
    Documentation on how to use Oracle 12.2 version of dgmgrl is found here

    Wednesday, July 29, 2020

    How to rebuild an index partition

    To limit a rebuild to a specific index partition only, use the following syntax:
    ALTER INDEX SH.SALES_IDX4 
    REBUILD PARTITION SYS_P51490 
    TABLESPACE DATA04 ONLINE;
    
    Notice the use of the ONLINE keyword, which will allow DML against the table (and thus updates to the index) during the rebuild. To generate rebuild statements, one index at a time, I have used the script below. It takes as arguments:
    1. The index owner
    2. The index name
    3. The new tablespace where you want to place your partitions
    4. The old tablespace from which you want to move out
    set termout off
    select 'alter session set nls_language=''american'';' from dual;
    set termout on
    accept index_owner prompt 'Index owner: '
    accept index_name prompt 'Index name: '
    accept new_tabspc prompt 'New tablespace: '
    accept old_tabspc prompt 'Old tablespace: '
    
    set lines 300
    set heading off
    set feedback off
    set verify off
    set echo off
    set pages 0
    set trimspool on
    spool exec_move_part_&&index_owner..&&index_name..sql
    select 'alter session set nls_language=''american'';' from dual;
    
    select 'alter session force parallel ddl;' from dual;
    select 'set timing on' from dual;
    select 'set lines 200' from dual;
    select 'set trimspool on' from dual;
    select 'spool exec_rebuild_part_&&index_owner..&&index_name..log' from dual;
    select 'exec DBMS_APPLICATION_INFO.SET_MODULE(module_name=>''Move'', action_name=>''Rebuild_part_&&index_name'');' from dual;
    select 'exec DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info=>''Rebuild_part_t_&&index_name'');' from dual;
    select 'exec DBMS_SESSION.SET_IDENTIFIER (client_id=>''Rebuild_part_&&index_name''); ' from dual;
    
    select 'prompt moving the index partitions from &&old_tabspc to &&new_tabspc,' from dual;
    select 'prompt and setting default attributes for the table ' from dual;
    SELECT 'alter index ' || idx.owner || '.' ||  idx.index_name || ' rebuild partition ' || idxpart.partition_name || ' TABLESPACE &&new_tabspc ONLINE;'
    FROM    dba_indexes idx, dba_ind_partitions idxpart
    WHERE   idx.owner = '&&index_owner'
    and     idxpart.TABLESPACE_NAME = '&&old_tabspc'
    AND     idx.owner = idxpart.index_owner
    AND     idx.index_name = idxpart.index_name
    AND     idx.index_name = '&&index_name'
    AND     idxpart.subpartition_count=0
    ORDER BY idx.owner, idx.index_name;
    
    select distinct 'alter index ' ||   IDXPART.INDEX_OWNER || '.' || IDXPART.INDEX_NAME || ' modify default attributes tablespace &&new_tabspc;'
    FROM   DBA_IND_PARTITIONS IDXPART JOIN DBA_SEGMENTS S
    ON     (IDXPART.INDEX_NAME = S.SEGMENT_NAME AND IDXPART.INDEX_OWNER = S.OWNER AND IDXPART.PARTITION_NAME = S.PARTITION_NAME)
    WHERE  IDXPART.INDEX_OWNER='&&index_owner'
    AND    IDXPART.INDEX_NAME='&&index_name'
    AND    IDXPART.TABLESPACE_NAME='&&old_tabspc';
    
    select 'spool off' from dual;
    select 'exit' from dual;
    spool off
    exit
    
    
    Run it as a user with access to the data dictionary: 

    sqlplus / as sysdba @gen_move_idx_part.sql 

    It will generate a new script called exec_move_part_<index_owner>.<index_name>.sql which will perform the actual rebuild: 
    sqlplus / as sysdba @exec_move_part_<index_owner>.<index_name>.sql

    I also added some calls to dbms_application_info so that the session can be easily identified in v$session. Remember, it is sometimes desirable to set the PARALLEL degree and the NOLOGGING option during a rebuild, to complete it as fast as possible. See this post for more info.
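    With those dbms_application_info calls in place, the rebuild session can be picked out of v$session. A minimal sketch, assuming the module/action values set by the generated script above:

    ```sql
    -- Locate the rebuild session via the module/action set by dbms_application_info
    SELECT sid, serial#, module, action, client_info
    FROM   v$session
    WHERE  module = 'Move'
    AND    action LIKE 'Rebuild_part%';
    ```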

    How to list index partitions and their sizes

    To list index partitions for index SALES_IDX1:
    SELECT IDXPART.INDEX_OWNER, IDXPART.INDEX_NAME, IDXPART.PARTITION_NAME, IDXPART.TABLESPACE_NAME, ROUND(SUM(S.BYTES)/1024/1024/1024) "GB"
    FROM   DBA_IND_PARTITIONS IDXPART JOIN DBA_SEGMENTS S
    ON (IDXPART.INDEX_NAME = S.SEGMENT_NAME AND IDXPART.INDEX_OWNER = S.OWNER AND IDXPART.PARTITION_NAME = S.PARTITION_NAME)
    WHERE  IDXPART.INDEX_NAME='SALES_IDX1'
    AND    S.SEGMENT_TYPE='INDEX PARTITION'
    -- To only list partitions in tablespace DATA1, uncomment the following line:
    -- AND    IDXPART.TABLESPACE_NAME='DATA1'
    GROUP BY IDXPART.INDEX_OWNER, IDXPART.INDEX_NAME, IDXPART.PARTITION_NAME, IDXPART.TABLESPACE_NAME
    ORDER BY 5 DESC;
    

    INDEX_OWNER  INDEX_NAME  PARTITION_NAME  TABLESPACE_NAME  GB
    SH           SALES_IDX1  SYS_P177667     DATA1             8
    SH           SALES_IDX1  SYS_P177518     DATA1             5
    SH           SALES_IDX1  SYS_P44844      DATA1             3
    SH           SALES_IDX1  SYS_P44663      DATA1             2
    SH           SALES_IDX1  SYS_P177677     DATA1             2

    To list all index partitions for tablespace DATA1:
    SELECT IDXPART.INDEX_OWNER, IDXPART.INDEX_NAME, IDXPART.PARTITION_NAME, IDXPART.TABLESPACE_NAME, ROUND(SUM(S.BYTES)/1024/1024/1024) "GB"
    FROM   DBA_IND_PARTITIONS IDXPART JOIN DBA_SEGMENTS S
    ON (IDXPART.INDEX_NAME = S.SEGMENT_NAME AND IDXPART.INDEX_OWNER = S.OWNER AND IDXPART.PARTITION_NAME = S.PARTITION_NAME)
    WHERE  S.SEGMENT_TYPE='INDEX PARTITION'
    AND    IDXPART.TABLESPACE_NAME='DATA1'
    GROUP BY IDXPART.INDEX_OWNER, IDXPART.INDEX_NAME, IDXPART.PARTITION_NAME, IDXPART.TABLESPACE_NAME
    ORDER BY 5 DESC;
    

    INDEX_OWNER  INDEX_NAME  PARTITION_NAME  GB
    SH           SALES_IDX1  SYS_P177667      8
    SH           SALES_IDX1  SYS_P177518      5
    SH           SALES_IDX1  SYS_P44844       3
    SH           SALES_IDX1  SYS_P177677      2
    SH           SALES_IDX1  SYS_P44663       2
    SH           SALES_IDX2  SYS_P179608      1
    SH           SALES_IDX2  SYS_P178334      1
    SH           SALES_IDX2  SYS_P178459      1
    SH           SALES_IDX3  SYS_P28534       0
    SH           SALES_IDX3  SYS_P50905       0

    To list the names of all partitioned indexes and their total sizes, in a particular tablespace:
    SELECT DISTINCT IDXPART.INDEX_OWNER, IDXPART.INDEX_NAME, IDXPART.TABLESPACE_NAME, 
        (SELECT ROUND(SUM(S.BYTES)/1024/1024/1024) 
         FROM DBA_SEGMENTS S
         WHERE S.SEGMENT_NAME = IDXPART.INDEX_NAME
         AND   S.OWNER = IDXPART.INDEX_OWNER
         ) GB
    FROM   DBA_IND_PARTITIONS IDXPART 
    WHERE  IDXPART.TABLESPACE_NAME='DATA1'
    ORDER BY 4 DESC;
    
    INDEX_OWNER  INDEX_NAME  TABLESPACE_NAME  GB
    SH           SALES_IDX1  DATA1            567
    SH           SALES_IDX2  DATA1            511
    SH           SALES_IDX3  DATA1            331

    To check which tablespaces the different partitions in an index reside in:
    SELECT TABLESPACE_NAME,COUNT(*)
    FROM DBA_IND_PARTITIONS
    WHERE INDEX_NAME='SALES_IDX4'
    GROUP BY TABLESPACE_NAME;
    
    TABLESPACE_NAME  COUNT(*)
    DATA1                  13
    DATA2                 832
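    After moving or rebuilding partitions, it can also be worth checking that none were left unusable. A simple sketch against the data dictionary:

    ```sql
    -- List any index partitions marked UNUSABLE (these need a rebuild before use)
    SELECT index_owner, index_name, partition_name, status
    FROM   dba_ind_partitions
    WHERE  status = 'UNUSABLE';
    ```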

    Tuesday, July 21, 2020

    How to fix RMAN-04006 when cloning from active database




    Background:
    During an attempt to clone a database using active duplication, I received an error when testing connectivity to the target and auxiliary servers. I normally do a test run first, to verify that I can connect without problems, before starting my cloning script.

    My script to test the connection is simple and called "testconnection.cmd".
    The connections are made through a wallet, so that you do not have to expose your passwords in any scripts or any shell prompts. See my previous post for details about setting up a wallet.
    connect target /@proddb01.oric.no
    connect auxiliary /@testdb01.oric.no
    run{
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate auxiliary channel aux1 type disk;
    allocate auxiliary channel aux2 type disk;
    configure device type disk parallelism 2;
    }
    exit
    

    Execute it as follows:
    rman cmdfile=testconnection.cmd
    

    connected to target database: prodb01 (DBID=2078894010, not open)
    connected to auxiliary database (not started)
    
    RMAN-04006: error from auxiliary database: ORA-01034: ORACLE not available
    ORA-27101: shared memory realm does not exist
    

    Cause:
    An incorrect or outdated password file on the auxiliary database server

    Solution:
    Copy the password file from your target database server to your auxiliary database server and retry the operation.
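    Once the password file is in place and the auxiliary instance has been started, you can verify that it is picked up by querying v$pwfile_users. A quick sketch:

    ```sql
    -- Users with password file entries on the auxiliary instance
    SELECT username, sysdba, sysoper FROM v$pwfile_users;
    ```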

    When all is good, the output from the auxiliary database should be
    connected to auxiliary database: testdb01 (not mounted)
    

    Tuesday, July 14, 2020

    Identity columns in Oracle 12c



    The identity column introduced in Oracle 12c can be used in different ways, according to your needs. Here are some basic examples:

    create table songs (
        song_id number generated always as identity,
        title varchar2(128)
    );
    

    The "generated always" clause is default and may be omitted.

    In this scenario, no insert may supply a value for the column song_id, even if the datatype is correct. Null values are not accepted either. Oracle generates its own value and does not allow any manipulation of the value recorded in the identity column.
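    For example, trying to supply a value for a GENERATED ALWAYS identity column is rejected:

    ```sql
    insert into songs (song_id, title) values (10, 'Born to run');
    -- ORA-32795: cannot insert into a generated always identity column
    ```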


    Allow for the possibility to override the auto generated numbers by specifying "by default" instead of "always":
    create table songs (
        song_id number generated by default as identity,
        title varchar2(128)
    );
    
    In other words, the statements below would both be permissible:
    insert into songs (song_id, title)values(10,'Born to run');
    update songs set song_id=20 where song_id=10;
    
    However, null cannot be specified as a legitimate value for the identity column:
    insert into songs values(null, 'White wedding');
    
    would return the error
    ORA-01400: cannot insert NULL into ("MUSIC"."SONGS"."SONG_ID")
    

    If you need to allow for NULL values in the identity column, you create the table as follows:
    create table songs (
        song_id number generated by default on null as identity,
        title varchar2(128)
    );
    

    With the above definition, Oracle

    * permits overriding the autogenerated numbers
    * allows update of the identity column
    * allows null values to be specified during inserts

    All of the three statements below are thus valid:
    insert into songs (song_id, title)values(10,'Born to run');
    update songs set song_id=20 where song_id=10;
    insert into songs values(null, 'White wedding');
    

    Tim Hall has published a more in-depth article on the same topic, as usual of the highest quality.
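    To check how an identity column was declared after the fact, the declaration is exposed in the data dictionary:

    ```sql
    -- GENERATION_TYPE shows ALWAYS or BY DEFAULT; IDENTITY_OPTIONS shows the sequence settings
    SELECT table_name, column_name, generation_type, identity_options
    FROM   user_tab_identity_cols;
    ```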

    Friday, June 19, 2020

    How to run RMAN with debug information


    Tested with Oracle 12.2

    For example, to trace a database duplication session, I have put the following into a file called "run_duplication.cmd":

    spool trace to run_duplication.trc
    spool log to run_duplication.log
    set echo on;
    debug on;


    connect target sys/password@sourcedb
    connect auxiliary sys/password@auxdb

    run{
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate auxiliary channel aux1 type disk;
    allocate auxiliary channel aux2 type disk;
    configure device type disk parallelism 2;
    DUPLICATE DATABASE "sourcedb.mydomain.com" TO auxdb
    FROM ACTIVE DATABASE
    TABLESPACE TOOLS
    USING COMPRESSED BACKUPSET;
    }
    exit

    When you execute your rman script:

    rman cmdfile='run_duplication.cmd'

    your log and trace files will be generated in your working directory.

    Wednesday, May 13, 2020

    How to use the TIMESTAMP_TO_SCN function


    select timestamp_to_scn(to_timestamp('2020-05-11 14:36:22', 'YYYY-MM-DD HH24:MI:SS')) "scn" from dual;

    scn
    ----------
    20325895

    Reversed:
    select scn_to_timestamp(20325895) "timestamp" from dual;

    timestamp
    ----------------------------------
    11-MAY-20 02.36.20.000000000 PM
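    For reference, the current SCN of the database can be fetched directly, which is handy when comparing against the converted values:

    ```sql
    SELECT current_scn FROM v$database;
    ```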

    Tuesday, May 12, 2020

    How to avoid the error Can't locate Data/Dumper.pm in @INC during installation of AHF



    During installation of ahf, I received the following error:
    Extracting AHF to /u01/ahf/oracle.ahf
    Can't locate Data/Dumper.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_per l /usr/lib64/perl5 /usr/share/perl5 . /u01/ahf/oracle.ahf/tfa/bin /u01/ahf/oracle.ahf/tfa/bin/common /u01/ahf/oracle.ahf/tfa/bin/modules /u01/ahf/oracle.a hf/tfa/bin/common/exceptions) at /u01/ahf/oracle.ahf/tfa/bin/common/tfactlshare.pm line 1350, line 1.
    BEGIN failed--compilation aborted at /u01/ahf/oracle.ahf/tfa/bin/common/tfactlshare.pm line 1350, line 1.
    Compilation failed in require at /u01/ahf/oracle.ahf/tfa/bin/tfasetup.pl line 107, line 1.
    BEGIN failed--compilation aborted at /u01/ahf/oracle.ahf/tfa/bin/tfasetup.pl line 107, line 1.

    Solution: Install missing packages:
    su -
    yum install perl-Data-Dumper

    After installing the missing package, uninstall the partially installed AHF:
    tfactl uninstall

    Then clean up the old files. Go to the directory where you attempted to install AHF:
    cd /u01/ahf
    rm -rf oracle.ahf

    Go to the directory where the zipped file was extracted, and run the installation again:
    cd /u01/oracle/patch/AHF
    ./ahf_setup

    Tuesday, April 21, 2020

    Potential solution to ORA-1033 during configuration of standby database in data guard broker


    In my environment, the following message was displayed in dgmgrl when trying to configure a data guard environment:

    DGMGRL> add database 'hdal_stb' as  connect identifier is 'hdal_stb';
    Error: ORA-1033: ORACLE initialization or shutdown in progress
    
    Failed.
    

    Prior to this attempt, I had duplicated the target database to the auxiliary using "duplicate target database for standby from active database" - and the output looked fine.

    Turns out that you cannot set the parameter redo_transport_user to any other value than SYS when cloning for standby.
    In my environment, we use a dedicated user for this purpose, as we avoid using the SYS user as much as possible.

    I adjusted the parameter to read SYS on both the primary and standby instance, and reran the duplication.

    Afterwards, the database could be added:
    DGMGRL> add database 'hdal_stb' as  connect identifier is 'hdal_stb';
    Database "hdal_stb" added
    

    Friday, April 17, 2020

    How to use mkstore to set up passwordless duplication



    Oracle provides a method called "Secure External Password Store" to hide your passwords in a wallet, instead of using them directly in your scripts.

    The Secure External Password Store feature can be used without restriction in all product editions; it does not require a license for the Advanced Security Option (ASO).

    What exactly is a "secure external password store"?
    Oracle describes it as "a secure software container that stores authentication and signing credentials".
    You use the mkstore utility to manage the secure external password store.

    Personally, I find them very useful when performing duplication. Here is how I set things up when I was cloning a database for standby.


    NOTE: If you intend to clone for standby, you need to add the SYS user and nothing else to your wallet!
    Any other privileged user will give you trouble during cloning.



    On the standby server, create the directory where you want the wallet to reside. I normally put it in the $TNS_ADMIN directory:
    mkdir -p $TNS_ADMIN/wallet

    Create the wallet:
    mkstore -wrl $TNS_ADMIN/wallet -create
    Oracle Secret Store Tool : Version 12.1.0.2
    Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

    Enter password:
    Enter password again:

    Add a credential for one or more databases:
    mkstore -wrl $TNS_ADMIN/wallet -createCredential hdal_primary dbaadmin
    mkstore -wrl $TNS_ADMIN/wallet -createCredential hdal_stby dbaadmin

    Note!
    If you need to include a domain in your credential, use " " around your database name:
    mkstore -wrl $TNS_ADMIN/wallet -createCredential "hdal_primary.mydomain.com" dbaadmin

    If you later need to modify the same credential:
    mkstore -wrl $TNS_ADMIN/wallet -modifyCredential hdal_stby dbaadmin

    If you later need to delete a credential:
    mkstore -wrl $TNS_ADMIN/wallet -deleteCredential hdal_stby

    Verify that the credentials were added:
    mkstore -wrl $TNS_ADMIN/wallet -listCredential
    Oracle Secret Store Tool : Version 12.1.0.2
    Copyright (c) 2004, 2014, Oracle and/or its affiliates. All rights reserved.

    Enter wallet password:
    List credential (index: connect_string username)
    2: hdal_stby dbaadmin
    1: hdal_primary dbaadmin
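    Before testing with rman, the wallet can be verified with a quick sqlplus login; a sketch, assuming the alias hdal_primary resolves via tnsnames.ora:

    ```sql
    -- In sqlplus: authenticate via the wallet instead of typing a password
    connect /@hdal_primary
    show user
    ```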

    Add the following in the server's $TNS_ADMIN/sqlnet.ora:
    #Point out the server ("client") wallet
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
      (DIRECTORY = /orasw/product/12201/network/admin/wallet)
      )
     )
     
    #This setting causes all CONNECT /@db_connect_string statements to use the information in the wallet at the specified location to authenticate to databases:
    SQLNET.WALLET_OVERRIDE = TRUE
     
    #BUG 20721271 DUPLICATE FOR STANDBY FROM ACTIVE DATABASE HANGS WHILE RESTORING CONTROL FILE
    #Turning this parameter on disables the ability to send and receive "break" messages using urgent data provided by the underlying protocol.
    #This would apply to all protocols used by the client.
    DISABLE_OOB=on
    

    Using an environment variable directly in the sqlnet.ora file, instead of an absolute path, works fine too:
    WALLET_LOCATION =
      (SOURCE =
        (METHOD = FILE)
        (METHOD_DATA =
      (DIRECTORY = $TNS_ADMIN/wallet)
      )
     )
    


    You should now be able to connect to both databases using rman. Test with a simple script called conntest.cmd.

    connect target /@hdal_primary
    connect auxiliary /@hdal_stby
    run{
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate auxiliary channel aux1 type disk;
    allocate auxiliary channel aux2 type disk;
    configure device type disk parallelism 2;
    }
    exit
    

    Output in my case was:
    connected to target database: HDAL (DBID=1893290026)
    
    connected to auxiliary database: HDAL (not mounted)
    
    using target database control file instead of recovery catalog
    allocated channel: c1
    channel c1: SID=1192 device type=DISK
    
    allocated channel: c2
    channel c2: SID=52 device type=DISK
    
    allocated channel: aux1
    channel aux1: SID=1152 device type=DISK
    
    allocated channel: aux2
    channel aux2: SID=1150 device type=DISK
    
    old RMAN configuration parameters:
    CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
    new RMAN configuration parameters:
    CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
    new RMAN configuration parameters are successfully stored
    released channel: c1
    released channel: c2
    released channel: aux1
    released channel: aux2