Wednesday, August 7, 2024

Workaround for ORA-65011: Pluggable database does not exist in an EBS environment

If you have upgraded an EBS database to Oracle 19c, you will also have migrated to the multitenant architecture, which is required.

During the upgrade, Oracle set the parameter _pdb_name_case_sensitive to TRUE by default, as explained in Doc ID 2642230.1.
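
If you are unsure whether the parameter is actually set in your database, a quick sanity check is to query the x$ views (a sketch; run as SYSDBA in the root container):
select i.ksppinm name, v.ksppstvl value
from x$ksppi i, x$ksppcv v
where i.indx = v.indx
and i.ksppinm = '_pdb_name_case_sensitive';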

If you then converted your old EBS database to a pluggable database with a lowercase name, a "flashback pluggable database" operation will fail when run from sqlplus:
oracle@oraserver01:[pdb1]# echo $ORACLE_SID
cdb
oracle@oraserver01:[pdb1]# sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 6 10:23:55 2024
Version 19.23.0.0.0

SYS@_container_name SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 pdb1                           MOUNTED
SYS@_container_name SQL> flashback pluggable database "pdb1" to restore point SD76759_1;
flashback pluggable database "pdb1" to restore point SD76759_1
*
ERROR at line 1:
ORA-65011: Pluggable database PDB1 does not exist.


Workaround: Use RMAN instead
oracle@oraserver01:[pdb1]# rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Aug 6 10:26:51 2024
Version 19.23.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB (DBID=2245029826)

RMAN> flashback pluggable database "pdb1" to restore point SD76759_1;

Starting flashback at 06-AUG-24

starting media recovery

archived log for thread 1 with sequence 21576 is already on disk as file /oradisk3/recoveryarea/CDB/archivelog/2024_07_11/o1_mf_1_21576__nw9trq2c_.arc
media recovery complete, elapsed time: 00:00:01
Finished flashback at 06-AUG-24
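
Note that after the flashback completes, the PDB must be opened with RESETLOGS before it can be used. A minimal sketch, run from the same RMAN session (RMAN accepts many SQL commands directly in 12c and later):
RMAN> alter pluggable database "pdb1" open resetlogs;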

Friday, July 19, 2024

What to do if SET SERVEROUTPUT ON doesn't work

If you have an anonymous PL/SQL block and you have turned on serveroutput, like this:
set serveroutput on
but still see no output, you may need to enable output explicitly first:
exec dbms_output.enable();
set serveroutput on
In a multitenant environment, you will also need to re-enable serveroutput every time you switch containers.

So, if you do
DECLARE
  V_PDBNAME   VARCHAR2(128) := 'PDB1';  -- example PDB name
  V_SQLSTRING VARCHAR2(200);
BEGIN
  V_SQLSTRING := 'ALTER SESSION SET CONTAINER=' || V_PDBNAME;
  DBMS_OUTPUT.PUT_LINE(V_SQLSTRING);
  EXECUTE IMMEDIATE V_SQLSTRING;
END;
/
you will need to repeat the enabling of serveroutput if you want to see output from additional blocks further down in your script.
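
A minimal sketch of that pattern in a SQL*Plus script (the PDB name is just an example):
set serveroutput on
exec dbms_output.enable();

-- first block runs here, then switch container
alter session set container = PDB1;

-- re-enable output before the next block
set serveroutput on
exec dbms_output.enable();

begin
  dbms_output.put_line('Now running in the new container');
end;
/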

How to check if your PostgreSQL server is part of a replication setup

If the server is down, check the entries in postgresql.auto.conf.

The following postgresql.auto.conf is from a replica server:
wal_level = 'replica'
hot_standby = on
hot_standby_feedback = 'on'
primary_slot_name = 'stby1'
max_connections = 636

primary_conninfo      = 'host=pgserver01.oric.no port=5432 user=replicate password=mysecretpassword'
promote_trigger_file  = '/tmp/MasterNow'
The primary server may have a very simple postgresql.auto.conf file:
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
wal_level = 'replica'
If the server is up, use the following queries:

1. On the primary server, check that data is shipped over to the replica server:
[postgres@pgserver01.oric.no /pgdata/data]$ echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |        client_hostname
-----------+-----------+------------+---------------+--------------------------------
 replicate | streaming | async      |             0 | pgserver02.oric.no
(1 row)
2. The following query should return f (false) on the primary server and t (true) on the replica server:
[postgres@pgserver01 /pgdata/data/log]$ echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 f
(1 row)

[postgres@pgserver02 /pgdata/data/log]$  echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 t
(1 row)
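
3. As an extra check on the replica, the view pg_stat_wal_receiver (PostgreSQL 9.6 and newer; the sender_host column requires version 11) shows whether the WAL receiver is actually streaming from the primary. A sketch:
[postgres@pgserver02 /pgdata/data/log]$ echo "select status, sender_host, slot_name from pg_stat_wal_receiver;" | psql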

Tuesday, July 16, 2024

How to check StaticConnectIdentifier and other properties using Data Guard Broker

DGMGRL> show database cdb 'StaticConnectIdentifier'
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oricdb01.oric.no)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb_DGMGRL.skead.no)(INSTANCE_NAME=cdb)(SERVER=DEDICATED)))'
There are several other properties that you can check, too. See this post for a couple of examples.

The documentation differentiates between monitorable (read-only) and configurable properties.
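
For example (a sketch, with property names taken from the Broker documentation), other configurable properties can be checked the same way, or you can dump all properties for a database with the verbose variant:
DGMGRL> show database cdb 'LogXptMode'
DGMGRL> show database cdb 'DelayMins'
DGMGRL> show database verbose cdb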

Friday, July 12, 2024

Datapatch seems to "hang" or takes an extraordinarily long time to complete. What could be wrong?

When using the datapatch utility to apply or roll back patches in my Oracle 19c ORACLE_HOME, I was surprised at how long it took to (in this case) roll back the patches from version 19.23 to 19.22.

The patching was just "hanging" without any sign of progress. I left the shell running for several hours, thinking it simply needed more time.

When I came back the morning after, it had not progressed at all.

The cause was found in the logfiles located in $ORACLE_BASE/cfgtoollogs:

cd $ORACLE_BASE/cfgtoollogs/sqlpatch

# list your directories. Enter the most recent ones:
ls -latr 

drwxr-xr-x  2 oracle dba 4.0K Feb  1 18:25 sqlpatch_9072_2024_02_01_18_13_47
drwxr-xr-x  2 oracle dba 4.0K May 14 09:21 sqlpatch_2561578_2024_05_14_09_20_33
-rw-r--r--  1 oracle dba    0 May 23 18:12 36199232_25601966.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36199232
-rw-r--r--  1 oracle dba    0 May 23 18:12 36420641_25643730.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36420641
-rw-r--r--  1 oracle dba    0 May 23 18:12 36233263_25638263.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36233263
drwxr-xr-x  2 oracle dba 4.0K May 23 18:25 sqlpatch_4642_2024_05_23_18_11_41
drwxr-xr-x  2 oracle dba 4.0K Jul 12 10:30 sqlpatch_1073748_2024_07_12_10_13_30

cd sqlpatch_1073748_2024_07_12_10_13_30
 ls -altrh
total 75M
-rw-r--r--  1 oracle dba  12K Jul 12 10:14 install1.sql
-rw-------  1 oracle dba 3.7M Jul 12 10:26 sqlpatch_catcon_0.log
-rw-------  1 oracle dba  689 Jul 12 10:26 sqlpatch_catcon__catcon_1073748.lst
-rw-r--r--  1 oracle dba  12K Jul 12 10:26 sqlpatch_summary.json
-rw-r--r--  1 oracle dba  133 Jul 12 10:26 sqlpatch_progress.json
-rw-r--r--  1 oracle dba 5.5M Jul 12 10:26 sqlpatch_invocation.log
-rw-r--r--  1 oracle dba  66M Jul 12 10:26 sqlpatch_debug.log
The source of the error was found in the file sqlpatch_catcon_0.log:
CREATE OR REPLACE PACKAGE BODY ku$_dpload AS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.SOURCE$ by 8192 in tablespace SYSTEM
This simple error caused the rest of the script to fail:
SQL> --
SQL> -- ============================================================================
SQL> -- BEGINNING OF APPLY EXECUTION
SQL> -- ============================================================================
SQL> --
SQL> -- Initial phase sets up internal infrastructure for rest of dpload.
SQL> --
SQL> SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual;
SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual
       *
ERROR at line 1:
ORA-04067: not executed, package body "SYS.KU$_DPLOAD" does not exist
ORA-06508: PL/SQL: could not find program unit being called: "SYS.KU$_DPLOAD"
ORA-06512: at line 1
Obvious solution: increase the size of the existing SYSTEM datafile, or add a new one:
sqlplus / as sysdba
alter session set container=PDB01;
select f.file_name,f.bytes/1024/1024 "MB", f.autoextensible,f.maxbytes/1024/1024 "MaxMB", t.bigfile
from dba_data_files f, v$tablespace t
where f.tablespace_name='SYSTEM'
and f.tablespace_name = t.name;


FILE_NAME                           MB AUTOEXTENSIBLE MaxMB        BIGFILE
---------------------------- --------- -------------- ------------ -------
/oradata/pdb01/system01.dbf      32712 YES            32767,984375 NO
So we have a smallfile tablespace which is full!
Add a datafile, still in the same session, connected to PDB01:
alter tablespace system 
add datafile '/oradata/pdb01/system02.dbf' size 256M autoextend on next 128M maxsize unlimited;
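
With free space available in SYSTEM again, datapatch can simply be re-run (a sketch; adjust paths and options to your environment):
cd $ORACLE_HOME/OPatch
./datapatch -verbose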

Thursday, July 11, 2024

How to solve RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process

In one of my cloned databases, the FRA was filling up and pushing towards the limit of 2 TB.

Since it was cloned from a production database that uses Oracle GoldenGate, the capture processes followed along, but were not dropped in the cloned database.

This caused the deletion policy in RMAN to be overridden, because Oracle thinks it will need the archivelogs for GoldenGate, even though no GoldenGate is configured for this particular database.

When deleting an archivelog, RMAN would throw a warning and refuse to delete it:
RMAN> delete archivelog sequence 47447;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1352 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=1354 device type=DISK
RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
archived log file name=/u04/fra/CDB/archivelog/2024_03_13/o1_mf_1_47447__j5wx4r32_.arc thread=1 sequence=47447
The query below revealed the names of the blocking capture processes:
sqlplus / as sysdba --> log on to root container
SYS@_container_name SQL>  SELECT CAPTURE_NAME,
            CAPTURE_TYPE, STATUS,
            to_char(REQUIRED_CHECKPOINT_SCN,'999999999999999')as REQ_SCN ,
            to_char(OLDEST_SCN,'999999999999999')as OLDEST_SCN
     FROM DBA_CAPTURE;   2    3    4    5

CAPTURE_NAME         CAPTURE_TY STATUS   REQ_SCN          OLDEST_SCN
-------------------- ---------- -------- ---------------- ----------------
OGG$CAP_MYDB2ABC     LOCAL      DISABLED     426508588124     426508588124
OGG$CAP_MYDB2DEF     LOCAL      DISABLED     426508561845     426508561845
The solution was to use the package dbms_capture_adm to drop the captures:
sqlplus / as sysdba --> log on to root container

SYS@_container_name SQL> exec dbms_capture_adm.drop_capture('OGG$CAP_MYDB2ABC');
SYS@_container_name SQL> exec dbms_capture_adm.drop_capture('OGG$CAP_MYDB2DEF');
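
With the capture processes gone, the deletion policy is no longer overridden and the archivelogs can be deleted as usual, for example (a sketch):
RMAN> delete noprompt archivelog all completed before 'sysdate-1';
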
Thanks to Bobby Curtis for pointing me in the right direction with his post about the same topic.