
Wednesday, September 25, 2024

How to plug and unplug a PDB in a multitenant configuration

What exactly does it mean to unplug and plug a database in a multitenant architecture?

  • To unplug means to close the PDB and then generate its manifest file.
  • To plug means using the manifest file to create a new pluggable database.

    In the examples below, I am unplugging the databases pdb1 and pdb2 into two different manifest files:
    sqlplus / as sysdba
    alter pluggable database pdb1 close immediate;
    alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';
    
    alter pluggable database pdb2 close immediate;
    alter pluggable database pdb2 unplug into '/u01/app/oracle/oradata/pdb2.xml';
    
    The XML file created by the unplug operation contains the names and full paths of the tablespaces and data files of the unplugged PDB.

    This information is then used by a subsequent plug-in operation.

    After unplugging a PDB, you can drop the pluggable database while physically keeping its data files, like this:
    drop pluggable database pdb1 keep datafiles;
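
    A quick sanity check after such a drop could look like this (a sketch; the directory path is an assumption based on the examples in this post):
    -- The PDB should be gone from the data dictionary:
    select pdb_name, status from cdb_pdbs where pdb_name='PDB1'; --> should return no rows
    -- ...while its data files remain on disk (path assumed):
    host ls -l /u01/app/oracle/oradata/cdb1/pdb1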
    
    If you wish to plug the database into a different CDB, it is a good idea to check the compatibility of the database with the CDB first. This is particularly true if the new CDB was created with newer binaries than the original CDB, or if it is on a different host. In the new CDB, execute the following PL/SQL code:
    set serveroutput on
    
    DECLARE
       compatible BOOLEAN := FALSE;
    BEGIN  
       compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
            pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
       IF compatible THEN
          DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
       ELSE
          DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
       END IF;
    END;
    /
    
    If the output shows that the database is compatible with the new CDB, proceed to plug in the database.

    Plug operations can be done using two different methods: NOCOPY and COPY.

    The NOCOPY method plugs the PDB into another (or the same) CDB by reusing the data files of the unplugged PDB, without any physical file copy:
    create pluggable database pdb_plug_nocopy using '/u01/app/oracle/oradata/pdb1.xml'
    NOCOPY
    TEMPFILE REUSE;
    
    When using the NOCOPY option, the plug-in operation takes only a few seconds. The original data files of the unplugged PDB now belong to the plugged-in PDB in the new (or the same) CDB. A file with the same name as the temp file specified in the XML file already exists in the target location; therefore, the TEMPFILE REUSE clause is required.

    The COPY method physically copies the data files from the original location to a new destination:
    mkdir -p /u01/app/oracle/oradata/cdb2/pdb_plug_copy
    
    create pluggable database pdb_plug_copy using '/u01/app/oracle/oradata/pdb2.xml'
    COPY
    FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2','/u01/app/oracle/oradata/cdb2/pdb_plug_copy');
    
    Verify the status, open mode, and file location of the plugged-in PDB. (The example below shows the output for the PDB created with the COPY method, but the check applies regardless of the method used.)
    select pdb_name, status from cdb_pdbs where pdb_name='PDB_PLUG_COPY'; --> should return PDB_PLUG_COPY and NEW
    select open_mode from v$pdbs where name='PDB_PLUG_COPY'; --> should return MOUNTED
    
    select name from v$datafile where con_id=(select con_id from v$pdbs where name='PDB_PLUG_COPY'); --> should return the full path and name of the data files belonging to the SYSTEM and SYSAUX tablespaces.
    
    Whichever method you used, NOCOPY or COPY, you now have to open the newly plugged-in database in the new CDB:

    alter pluggable database pdb_plug_nocopy open;
    
    alter pluggable database pdb_plug_copy open;
    
    show con_name
    show pdbs
    
    Source: Oracle 12cR1 tutorial
    Wednesday, August 7, 2024

    Workaround for ORA-65011: Pluggable database does not exist in an EBS environment

    If you have upgraded an EBS database to Oracle 19c, you will also have migrated to the multitenant architecture, which is required.

    During the upgrade, Oracle set the parameter _pdb_name_case_sensitive to TRUE by default, as explained in Doc ID 2642230.1.
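
    If you are unsure how this parameter is set, a common way to inspect a hidden parameter is the query below (a sketch; run as SYS in CDB$ROOT):
    select i.ksppinm name, v.ksppstvl value
    from   x$ksppi i, x$ksppsv v
    where  i.indx = v.indx
    and    i.ksppinm = '_pdb_name_case_sensitive';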

    If you then converted your old EBS database to a pluggable database with a lowercase name, a "flashback pluggable database" operation will fail when run from sqlplus:
    oracle@oraserver01:[pdb1]# echo $ORACLE_SID
    cdb
    oracle@oraserver01:[pdb1]# sqlplus / as sysdba
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 6 10:23:55 2024
    Version 19.23.0.0.0
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           MOUNTED
    SYS@_container_name SQL> flashback pluggable database "pdb1" to restore point SD76759_1;
    flashback pluggable database "pdb1" to restore point SD76759_1
    *
    ERROR at line 1:
    ORA-65011: Pluggable database PDB1 does not exist.
    


    Workaround: Use RMAN instead
    oracle@oraserver01:[pdb1]# rman target /
    
    Recovery Manager: Release 19.0.0.0.0 - Production on Tue Aug 6 10:26:51 2024
    Version 19.23.0.0.0
    
    Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: CDB (DBID=2245029826)
    
    RMAN> flashback pluggable database "pdb1" to restore point SD76759_1;
    
    Starting flashback at 06-AUG-24
    
    starting media recovery
    
    archived log for thread 1 with sequence 21576 is already on disk as file /oradisk3/recoveryarea/CDB/archivelog/2024_07_11/o1_mf_1_21576__nw9trq2c_.arc
    media recovery complete, elapsed time: 00:00:01
    Finished flashback at 06-AUG-24
    

    Friday, July 12, 2024

    Datapatch seems to "hang" or takes an extraordinarily long time to complete. What could be wrong?

    When using the utility datapatch to either apply or roll back patches in my Oracle 19c ORACLE_HOME, I was surprised at the time it took to (in this case) roll back the patches from version 19.23 to 19.22.

    The patching was just "hanging" without any sign of progress. I left the shell open for several hours, thinking progress would eventually be made and that patience was all that was needed.

    When I came back the next morning, it had not progressed at all.

    The explanation was found in the log files located in $ORACLE_BASE/cfgtoollogs:

    cd $ORACLE_BASE/cfgtoollogs/sqlpatch
    
    # List your directories. Enter the most recent one:
    ls -latr 
    
    drwxr-xr-x  2 oracle dba 4.0K Feb  1 18:25 sqlpatch_9072_2024_02_01_18_13_47
    drwxr-xr-x  2 oracle dba 4.0K May 14 09:21 sqlpatch_2561578_2024_05_14_09_20_33
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36199232_25601966.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36199232
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36420641_25643730.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36420641
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36233263_25638263.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36233263
    drwxr-xr-x  2 oracle dba 4.0K May 23 18:25 sqlpatch_4642_2024_05_23_18_11_41
    drwxr-xr-x  2 oracle dba 4.0K Jul 12 10:30 sqlpatch_1073748_2024_07_12_10_13_30
    
    cd sqlpatch_1073748_2024_07_12_10_13_30
     ls -altrh
    total 75M
    -rw-r--r--  1 oracle dba  12K Jul 12 10:14 install1.sql
    -rw-------  1 oracle dba 3.7M Jul 12 10:26 sqlpatch_catcon_0.log
    -rw-------  1 oracle dba  689 Jul 12 10:26 sqlpatch_catcon__catcon_1073748.lst
    -rw-r--r--  1 oracle dba  12K Jul 12 10:26 sqlpatch_summary.json
    -rw-r--r--  1 oracle dba  133 Jul 12 10:26 sqlpatch_progress.json
    -rw-r--r--  1 oracle dba 5.5M Jul 12 10:26 sqlpatch_invocation.log
    -rw-r--r--  1 oracle dba  66M Jul 12 10:26 sqlpatch_debug.log
    
    The source of the error was found in the file sqlpatch_catcon_0.log:
    CREATE OR REPLACE PACKAGE BODY ku$_dpload AS
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01653: unable to extend table SYS.SOURCE$ by 8192 in tablespace SYSTEM
    
    This simple error caused the rest of the script to fail:
    SQL> --
    SQL> -- ============================================================================
    SQL> -- BEGINNING OF APPLY EXECUTION
    SQL> -- ============================================================================
    SQL> --
    SQL> -- Initial phase sets up internal infrastructure for rest of dpload.
    SQL> --
    SQL> SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual;
    SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual
           *
    ERROR at line 1:
    ORA-04067: not executed, package body "SYS.KU$_DPLOAD" does not exist
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.KU$_DPLOAD"
    ORA-06512: at line 1
    
    The obvious solution: increase the size of the data file, or add a new file to the SYSTEM tablespace:
    sqlplus / as sysdba
    alter session set container=PDB01;
    select f.file_name,f.bytes/1024/1024 "MB", f.autoextensible,f.maxbytes/1024/1024 "MaxMB", t.bigfile
    from dba_data_files f, v$tablespace t
    where f.tablespace_name='SYSTEM'
    and f.tablespace_name = t.name;
    
    
    
    FILE_NAME                    MB     AUTOEXTENSIBLE  MaxMB          BIGFILE
    ---------------------------  -----  --------------  -------------  -------
    /oradata/pdb01/system01.dbf  32712  YES             32767,984375   NO

    So we have a smallfile tablespace which is full!

    Add a data file, still in the same session connected to PDB01:
    alter tablespace system 
    add datafile '/oradata/pdb01/system02.dbf' size 256M autoextend on next 128M maxsize unlimited;
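
    With free space available in SYSTEM, datapatch can simply be run again; it will re-evaluate what is still pending and resume (a sketch):
    cd $ORACLE_HOME/OPatch
    ./datapatch -verbose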
    

    Friday, June 14, 2024

    SQL for troubleshooting services in a multitenant environment

    Query for troubleshooting services in a multitenant environment:
    set lines 200
    col name format a40
    col network_name format a40
    col pdb format a20
    col global format a20
    
    select name,network_name,con_id 
    from v$active_services
    order by con_id,name;
      
    select name,network_name,con_id,pdb,global 
    from v$services 
    order by con_id,name;
     
    select con_id#,name,network_name 
    from  cdb_service$ 
    order by con_id#,name ;
    exit
    

    Tuesday, June 4, 2024

    Solution to ORA-29548: Java system class reported: release of Java system classes in the database (19.0.0.0.240116 1.8) does not match that of the oracle executable (19.0.0.0.240416 1.8)

    When I was trying to drop and recreate a schema in my PDB, I received the following error:
    drop user myuser cascade
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-29548: Java system class reported: release of Java system classes in the database (19.0.0.0.240116 1.8) does not match that of the oracle executable (19.0.0.0.240416 1.8)
    
    This occurred because I had cloned the database from a source with a lower CPU level. To correct the situation, shut down the entire CDB:
    SYS@_container_name SQL> show con_name
    
    CON_NAME
    ------------------------------
    CDB$ROOT
    SYS@_container_name SQL> shutdown
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    
    Start up the database in upgrade mode:
    SYS@_container_name SQL> startup upgrade
    Database mounted.
    Database opened.
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       MIGRATE    YES
             4 pdb1                           MOUNTED
    
    Open the pdb1 in upgrade mode, too:
    SYS@_container_name SQL> alter pluggable database all open upgrade ;
    
    Pluggable database altered.
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       MIGRATE    YES
             4 pdb1                           MIGRATE    YES
    
    Run datapatch:
    cd $ORACLE_HOME/OPatch/
    ./datapatch -verbose
    
    Restart the database and open it normally:
    SYS@_container_name SQL> startup
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           MOUNTED
    SYS@_container_name SQL> alter pluggable database all open;
    
    Pluggable database altered.
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           READ WRITE NO
    
    You can now drop the user:
    SYS@_container_name SQL> alter session set container="pdb1";
    
    Session altered.
    
    SYS@_container_name SQL> drop user myuser cascade;
    
    User dropped.
    

    Monday, May 6, 2024

    How to solve errors like "Interim patch num/num (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB"

    After running autoupgrade, I was left with the following message in the logfile $ORACLE_BASE/cfgtoollogs/upgrade/auto/status/status.log:
    [Stage Name]    NONCDBTOPDB
    [Status]        FAILURE
    [Start Time]    2024-05-06 14:29:45
    [Duration]      0:05:33
    [Log Directory] /u01/oracle/txs01/101/noncdbtopdb
    Cause:AutoUpgException [After running noncdb_to_pdb.sql, the pdb was closed or was opened in restricted mode]
    Reason:None
    Action:None
    Info:None
    ExecutionError:Yes
    Error Message:AutoUpgException [After running noncdb_to_pdb.sql, the pdb was closed or was opened in restricted mode]
    
    A query against the database shows that there are errors in the pdb_plug_in_violations view:
    SELECT TIME,NAME,CAUSE,STATUS,MESSAGE FROM PDB_PLUG_IN_VIOLATIONS;
    
    TIME                      NAME    CAUSE       STATUS     MESSAGE
    -------------------       -----   ---------   --------   ------------------------------------------------------------------------------------------------------------------------------
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35926646/25513953 (OJVM RELEASE UPDATE: 19.22.0.0.240116 (35926646)): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 36092868/25496280 (DATAPUMP BUNDLE PATCH 19.22.0.0.0): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35648110/25365038 (OJVM RELEASE UPDATE: 19.21.0.0.231017 (35648110)): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35787077/25410019 (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   '19.22.0.0.0 Release_Update 2401040239' is installed in the CDB but '19.21.0.0.0 Release_Update 2309301519' is installed in the PDB
    
    The solution is in Doc ID 2604940.1 "Datapatch precheck failed : Release update is BEING installed to PDB but is not installed in CDB$ROOT.This will cause a patch mismatch between this PDB and CDB$ROOT":

    When executed, datapatch tries to apply the release update in the PDB first, but does not apply it in CDB$ROOT.

    The solution is to execute datapatch individually in each of the containers, starting with CDB$ROOT:
    datapatch -verbose -pdbs CDB\$ROOT
    datapatch -verbose -pdbs TXS01
    
    sqlplus / as sysdba
    alter session set container=PDB$SEED;
    alter session set "_oracle_script"=TRUE;
    alter pluggable database pdb$seed close immediate instances=all;
    alter pluggable database pdb$seed OPEN READ WRITE;
    select open_mode from v$database;
    exit
    
    datapatch -verbose -pdbs PDB\$SEED
    
    sqlplus / as sysdba
    alter session set "_oracle_script"=FALSE;
    
    You should now see that the status has changed from PENDING to RESOLVED:
    TIME                      NAME    CAUSE       STATUS     MESSAGE
    -------------------       -----   ---------   --------   ------------------------------------------------------------------------------------------------------------------------------
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35926646/25513953 (OJVM RELEASE UPDATE: 19.22.0.0.240116 (35926646)): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 36092868/25496280 (DATAPUMP BUNDLE PATCH 19.22.0.0.0): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35648110/25365038 (OJVM RELEASE UPDATE: 19.21.0.0.231017 (35648110)): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35787077/25410019 (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   '19.22.0.0.0 Release_Update 2401040239' is installed in the CDB but '19.21.0.0.0 Release_Update 2309301519' is installed in the PDB
    
    You may now clear the errors:
    SYS@cdb>SQL>exec DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS( pdb_name => 'TXS01');
    
    PL/SQL procedure successfully completed.
    
    SYS@cdb>SQL>select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;
    
    no rows selected
    

    Saturday, April 13, 2024

    How to open the PDB$SEED database for patching in read/write mode

    The following should only be done if you need to patch the PDB$SEED using datapatch, or under instructions from Oracle Support.

    sqlplus / as sysdba
    Connected to:
    
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.22.0.0.0
    
    SYS@cdb>SQL> alter session set container=PDB$SEED;
    
    Session altered.
    
    SYS@cdb>SQL>alter session set "_oracle_script"=TRUE;
    
    Session altered.
    
    SYS@cdb>SQL>alter pluggable database pdb$seed close immediate instances=all;
    
    Pluggable database altered.
    
    SYS@cdb>SQL>alter pluggable database pdb$seed OPEN READ WRITE;
    
    Pluggable database altered.
    
    SYS@cdb>SQL>select open_mode from v$database;
    
    OPEN_MODE
    --------------------
    READ WRITE
    
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ WRITE NO
    SYS@cdb>SQL>exit
    
    To put the PDB$SEED back into READ ONLY and RESTRICTED mode:
    SYS@cdb>SQL>alter pluggable database PDB$SEED close;
    
    Pluggable database altered.
    
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       MOUNTED
    SYS@cdb>SQL>alter pluggable database PDB$SEED open read only;
    
    Pluggable database altered.
    
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
    
    SYS@cdb>SQL>alter session set "_oracle_script"=FALSE;
    
    Session altered.
    
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
    
    SYS@cdb>SQL>alter system enable restricted session;
    
    System altered.
    
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  YES
    
    

    Thursday, March 7, 2024

    How to solve TNS-01194: The listener command did not arrive in a secure transport

    On one of my servers running Oracle 19c with a container database and a PDB, I had trouble getting the services to automatically register themselves with the listener.

    The database itself seemed healthy; the v$active_services view showed that my services were indeed alive.

    But still, the services wouldn't be registered by LREG and thus wouldn't be available to the listener for servicing incoming requests.

    I turned on logging for the listener by setting the following parameter in listener.ora:
    LOGGING_LISTENER=on
    
    Then restart the listener. Logging starts:
    lsnrctl start
    LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 07-MAR-2024 11:22:33
    
    Copyright (c) 1991, 2023, Oracle.  All rights reserved.
    
    Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
    Start Date                07-MAR-2024 11:11:35
    Uptime                    0 days 0 hr. 10 min. 57 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /orainst/product/19c/network/admin/listener.ora
    Listener Log File         /orainst/diag/tnslsnr/myserver/listener/alert/log.xml
    
    I then put a tail on the listener log and registered the services with the listener manually:
    sqlplus / as sysdba
    alter system register;
    


    I then noticed the output on the listener.log file:
     07-MAR-2024 10:37:56 * service_register_NSGR * 1194
     
     TNS-01194: The listener command did not arrive in a secure transport
    
    Look up the error:
    oracle@myserver.oric.no:[cdb]# oerr tns 01194
    01194, 00000, "The listener command did not arrive in a secure transport"
    // *Cause: Most of the listener administrative commands are only intended to
    // be issued in a secure transport, which are configured in
    // secure_control_ parameter. If the parameter is set, then the listener
    // accepts administrative requests only on those secure transports.
    // *Action: Make sure the command is issued using a transport specified
    // in secure transport list.
    
    I then noticed that my listener.ora parameter SECURE_REGISTER_LISTENER was set to TCP:
    SECURE_REGISTER_LISTENER = (TCP)
    
    To allow for dynamic instance registration, I needed to allow for the other protocol, IPC, too:
    SECURE_REGISTER_LISTENER = (TCP,IPC)
    
    My tests showed that they both need to be present, in that particular order.

    In fact, the listener.ora file could be as simple as this:
    ADR_BASE_LISTENER = /orainst/oracle
    LOGGING_LISTENER=on
    TRACE_LEVEL_LISTENER=off
    SECURE_REGISTER_LISTENER = (TCP,IPC)
    
      SID_LIST_LISTENER =
        (SID_LIST =
        (SID_DESC =
            (ORACLE_HOME = /orainst/oracle/product/19c)
            (SID_NAME = cdb)
          )
        )
    
      LISTENER =
        (DESCRIPTION_LIST =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
          )
        )
    
    and still allow for remote client connections over TCP.
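
    After editing listener.ora, the change can be activated and verified like this (a sketch):
    lsnrctl reload    # re-read listener.ora
    sqlplus / as sysdba
    alter system register;
    exit
    lsnrctl status    # the services should now be listed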

    Sources: Oracle Net LISTENER Parameters for 19c

    Thursday, November 23, 2023

    Solution to ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

    I had a situation where the error
    ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA
    
    was thrown when connecting to a PDB.

    The PDB was up, and the listener was running and serving the service_name I wanted to connect to.

    The solution was to reverse the order of directory_path in sqlnet.ora from
    NAMES.DIRECTORY_PATH= (EZCONNECT,TNSNAMES)
    
    to
    NAMES.DIRECTORY_PATH= (TNSNAMES,EZCONNECT)
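
    A quick way to verify that the alias now resolves through tnsnames.ora (a sketch, assuming a tnsnames.ora entry named pdb1):
    tnsping pdb1
    sqlplus system@pdb1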
    

    Tuesday, September 12, 2023

    How to delete a pluggable database with dbca using the command line

    dbca -silent -deletePluggableDatabase \
    -sourceDB cdb \
    -pdbName pdb3
    

    How to create a pluggable database with dbca using the command line

    In the following code snippet, I am creating the pluggable database "pdb3" as a copy of the already existing pluggable database "pdb1":
    dbca -silent -createPluggableDatabase \
    -sourceDB cdb.skead.no \
    -pdbName pdb3 \
    -sourcePDB pdb1 \
    -createPDBFrom PDB \
    -pdbDatafileDestination "/dbfiles/oradata/cdb/pdb3/" \
    -pdbAdminPassword mysecretpassword1 \
    -sysPassword mysecretpassword2 \
    -systemPassword mysecretpassword3
    
    If you want to use the SEED database as source instead of an already existing database, remove the directive
    -sourcePDB pdb1 \
    
    and use this directive instead:
    -createPDBFrom DEFAULT \
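
    Assembled, a seed-based creation could then look like this (a sketch reusing the hypothetical names from above; your dbca version may require additional options such as -pdbAdminUserName):
    dbca -silent -createPluggableDatabase \
    -sourceDB cdb.skead.no \
    -pdbName pdb3 \
    -createPDBFrom DEFAULT \
    -pdbDatafileDestination "/dbfiles/oradata/cdb/pdb3/" \
    -pdbAdminPassword mysecretpassword1 \
    -sysPassword mysecretpassword2 \
    -systemPassword mysecretpassword3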
    
    Source: Oracle 19c documentation

    Friday, September 8, 2023

    Script for creating a multitenant database with dbca

    Prerequisites:
  • You have a valid Oracle 19c installation on the server
  • You have already set your Oracle environment, so that PATH, ORACLE_SID etc. are in place when you execute the script

    The script called run_dbca.sh:
    #!/bin/bash
    start=$(date +%s.%N)
    export GLOGFILE=run_dbca.log
    touch ${GLOGFILE}
    chmod 666 ${GLOGFILE}
    exec 1> ${GLOGFILE} 2>&1
    echo "Now running run_dbca.sh"
    echo "Resetting oratab..."
    echo '' > /etc/oratab
    if [ $? -ne 0 ]; then
     echo "Could not erase oratab. Exit."
     exit 1
    else
     echo "oratab erased. Continuing..."
    fi
    export instanceAlive=`ps -ef| grep pmon | grep -v grep |awk '{ print $8}' | cut -f 3 -d"_"`
    
    if [ -n "${instanceAlive}" ]; then
     echo "Instance for database $ORACLE_SID is already running. Shut it down first"
     exit 1
    fi
    dbca -createDatabase -responseFile ./${ORACLE_SID}.rsp -silent
    echo "Finished running run_dbca.sh"
    dur=$(echo "$(date +%s.%N) - $start" | bc)
    printf "Total execution time for run_dbca.sh: %.6f seconds\n" $dur
    exit 0
    
    The response file can be seen below. I saved it in a file called cdb.rsp and put it in the same directory.

    I commented out most instructions, as they are not needed.

    The original template dbca.rsp is delivered by default with your Oracle installation and resides in $ORACLE_HOME/assistants/dbca.
    #-------------------------------------------------------------------------------
    # Do not change the responseFileVersion directive!
    #-------------------------------------------------------------------------------
    responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
    gdbName=cdb.oric.no
    sid=cdb
    createAsContainerDatabase=true
    numberOfPDBs=1
    pdbName=pdb1
    useLocalUndoForPDBs=true
    pdbAdminPassword=mysecretpassword
    templateName=New_Database.dbt
    sysPassword=mysyspassword
    systemPassword=mysystempassword
    datafileDestination=/data01/oradata/cdb
    recoveryAreaDestination=/data02/fra
    characterSet=AL32UTF8
    initParams=max_pdbs=3,db_create_file_dest_size=1024GB
    memoryPercentage=75
    automaticMemoryManagement=false
    enableArchive=true
    redoLogFileSize=2048
    dbOptions=JSERVER:false,DV:false,ORACLE_TEXT:false,IMEDIA:false,CWMLITE:false,SPATIAL:false,OMS:false,APEX:false
    
    Execute the script like this:
    ./run_dbca.sh
    
    Follow the progression of the creation in the log run_dbca.log:
    tail -f run_dbca.log
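
    When dbca has finished, a quick sanity check of the new CDB could look like this (a sketch):
    sqlplus / as sysdba
    select name, open_mode from v$pdbs;
    archive log list
    exit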
    
    Sources:

    Oracle 19c documentation: About creating a database with dbca

    Oracle 19c documentation: Managing templates with dbca

    Oracle-Base.com: Multitenant : Create and Configure a Pluggable Database (PDB) in Oracle Database 12c Release 1 (12.1) - still a relevant and good source.

    Oracle-base.com: a dbca template

    Oracle-base.com: Database Configuration Assistant (DBCA) : Creating Databases in Silent Mode
    Thursday, September 7, 2023

    Syntax for switching over a container database using data guard broker

    Here I am switching over a container database called cdb to the standby server, which is running a container database with the unique name cdb_stby1.

    First, view the configuration:
    DGMGRL> show configuration
    
    Configuration - DGConfig1
    
      Protection Mode: MaxPerformance
      Members:
      cdb       - Primary database
        cdb_stby1 - Physical standby database
    
    Fast-Start Failover:  Disabled
    
    Configuration Status:
    SUCCESS   (status updated 84 seconds ago)
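
    Before switching over, it can be worth letting the broker validate that the standby is ready to take over; the output includes a "Ready for Switchover" line (a sketch):
    DGMGRL> validate database 'cdb_stby1';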
    
    Perform the switchover:
    DGMGRL> switchover to 'cdb_stby1';
    
    Typical output from a successful switchover:
    Performing switchover NOW, please wait...
    Operation requires a connection to database "cdb_stby1"
    Connecting ...
    Connected to "cdb_stby1"
    Connected as SYSDBA.
    New primary database "cdb_stby1" is opening...
    Operation requires start up of instance "cdb" on database "cdb"
    Starting instance "cdb"...
    Connected to an idle instance.
    ORACLE instance started.
    Connected to "cdb"
    Database mounted.
    Switchover succeeded, new primary is "cdb_stby1"
    
    If you wish to switch back, make sure you log on to whichever server is running your primary database at the time.
    (Do NOT use OS authentication with dgmgrl / as sysdba; it will throw an error.)
    dgmgrl sys@cdb_stby1 as sysdba
    DGMGRL for Linux: Release 19.0.0.0.0 - Production on Thu Sep 7 15:27:27 2023
    Version 19.20.0.0.0
    
    Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
    
    Welcome to DGMGRL, type "help" for information.
    Password:
    Connected to "cdb_stby1"
    Connected as SYSDBA.
    DGMGRL> show configuration;
    
    Configuration - DGConfig1
    
      Protection Mode: MaxPerformance
      Members:
      cdb_stby1 - Primary database
        cdb       - Physical standby database
    
    Fast-Start Failover:  Disabled
    
    Configuration Status:
    SUCCESS   (status updated 46 seconds ago)
    
    DGMGRL> switchover to cdb
    Performing switchover NOW, please wait...
    Operation requires a connection to database "cdb"
    Connecting ...
    Connected to "cdb"
    Connected as SYSDBA.
    New primary database "cdb" is opening...
    Operation requires start up of instance "cdb" on database "cdb_stby1"
    Starting instance "cdb"...
    Connected to an idle instance.
    ORACLE instance started.
    Connected to "cdb_stby1"
    Database mounted.
    Switchover succeeded, new primary is "cdb"
    
    

    Thursday, August 31, 2023

    More on how to display the current container in a multitenant database

    My listener is listening for connections to a service called "myservice1", which runs out of the pluggable database "pdb1":
    lsnrctl status
    Service "pdb1" has 1 instance(s). <-- the default service for the pluggable database
      Instance "cdb", status READY, has 1 handler(s) for this service...
    
    Service "myservice1" has 1 instance(s). <-- service defined by the DBA
      Instance "cdb", status READY, has 1 handler(s) for this service...
    
    In the code snippet below, I am executing a script called disp_session.sql, which will

  • connect directly to a service served by the listener
  • switch schema within my session

    The script displays the username, current schema, and the service my connection is using:
    alter session set current_schema=scott;
    set lines 200
    col service format a20
    col container format a20
    col username format a20
    col "container ID" format a20
    col schemaname format a20
    select sys_context ('userenv','SERVICE_NAME')   service,
           sys_context ('userenv','CON_NAME')       container,
           sys_context ('userenv','CON_ID')         "container ID",
           sys_context ('userenv','CURRENT_USER')   username,
           sys_context ('userenv','CURRENT_SCHEMA') schemaname
    from dual;
    
    oracle@server1.oric.no:[cdb]# sqlplus system@myservice1 @disp_session.sql
    
    Session altered.
    
    SERVICE              CONTAINER            container ID         USERNAME             SCHEMANAME
    -------------------- -------------------- -------------------- -------------------- --------------------
    myservice1           pdb1                 4                    SYSTEM               SCOTT
    
    Useful information when debugging client connections ;-)
    Thursday, August 3, 2023

    Script to check the services in a multitenant container database

    A simple script to check services running out of a specific PDB, or for all PDBs if the root container is selected:
    column pdb_name format a20
    column status format a20
    prompt
    prompt List of PDBs available on this server:
    prompt
    select p.pdb_name
    from dba_pdbs p join v$pdbs v
    on (p.pdb_id = v.con_id)
    where open_mode='READ WRITE'
    /
    
    accept pdb_name DEFAULT 'CDB$ROOT' prompt 'Select a PDB (default=root container): '
    alter session set container=&&pdb_name;
    col con_name format a40
    set lines 200
    select 'You are connected to: ' || sys_context('USERENV','CON_NAME') "con_name"
    from dual;
    
    col name format a20
    col global format a20
    col pdb format a20
    col con_name format a20
    col network_name format a20
    col creation_Date format a20
    prompt V$SERVICES
    select name,global,pdb,network_name from v$services
    /
    
    prompt V$ACTIVE_SERVICES
    select name,global,con_name,network_name from v$active_services
    /
    
    prompt DBA_SERVICES
    select service_id,name,network_name,creation_date,pdb
    from dba_services
    /
    
    prompt CDB_SERVICES
    select service_id,name,network_name,creation_date,pdb
    from cdb_services
    /
    exit
    

    Monday, June 19, 2023

    Using the autoupgrade tool to migrate a non-multitenant database to a PDB on the same host

    The autoupgrade tool can convert a non-cdb database to a PDB running in a container database.
    If you need to switch to the multitenant architecture, this could be a way forward.

    In my case, I had a non-CDB database of version 19c running on my server testserver1.oric.no, and I wanted to convert it to a PDB.

    This is what I did to move away from the non-cdb architecture:

    1. Download the latest version of autoupgrade from Oracle Support: AutoUpgrade Tool (Doc ID 2485457.1)

    On your database server, make a backup of the existing $ORACLE_HOME/rdbms/admin/autoupgrade.jar file, then transfer the new version of the file you just downloaded from Oracle into the same folder.

    2. Create a container database.

    You can execute the database configuration assistant silently to quickly set one up. Put this in a shell script and call it for example "cre_cdb.sh":
    export ORACLE_SID=cdb
    dbca -silent -createDatabase \
     -templateName General_Purpose.dbc \
     -gdbname cdb -sid cdb -responseFile NO_VALUE \
     -characterSet AL32UTF8 \
     -sysPassword secret \
     -systemPassword secret \
     -createAsContainerDatabase true \
     -numberOfPDBs 0 \
     -databaseType MULTIPURPOSE \
     -memoryMgmtType auto_sga \
     -totalMemory 2048 \
     -storageType FS \
     -datafileDestination "/data01/oradata/" \
     -redoLogFileSize 50 \
     -emConfiguration NONE \
     -ignorePreReqs
    
    Execute it:
    chmod 755 cre_cdb.sh
    ./cre_cdb.sh
    
    3. Set some parameters in the cdb:
    alter system set db_recovery_file_dest_size=10G;
    alter system set db_recovery_file_dest='/fra';
    alter system set db_create_file_dest = '/data01/oradata';
    shutdown immediate
    startup mount
    alter database archivelog;
    alter database open;
    exit
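
    A quick check that the CDB is now running in archivelog mode (an optional sketch):
    sqlplus / as sysdba
    archive log list
    exit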
    
    4. When the container database is ready, create a configuration file.

    Put this in a file called "cre_sample_file.sh":
    $ORACLE_BASE/product/19c/jdk/bin/java \
      -jar $ORACLE_BASE/product/19c/rdbms/admin/autoupgrade.jar \
      -create_sample_file config /sw/oracle/admin/mydb/sql/config.txt noncdbtopdb
    
    Execute it:
    chmod 755 cre_sample_file.sh
    ./cre_sample_file.sh
    
    The output should be a file that you now open in an editor and edit. This is how mine looked:
    upg1.log_dir=/sw/oracle/cfgtoollogs/autoupgrade/mydb
    upg1.sid=mydb
    upg1.source_home=/sw/oracle/product/19c
    upg1.target_cdb=cdb
    upg1.target_home=/sw/oracle/product/19c
    upg1.target_pdb_name=mydb
    upg1.start_time=NOW                      # Optional. 10 Minutes from now
    upg1.upgrade_node=testserver1.oric.no    # Optional. To find out the name of your node, run the hostname utility. Default is ''localhost''
    upg1.run_utlrp=yes                       # Optional. Whether or not to run utlrp after upgrade
    upg1.target_version=19                   # Oracle version of the target ORACLE_HOME.  Only required when the target Oracle database version is 12.2
    
    There are many additional options that could potentially be relevant, but in my case the ones listed above were all that was needed.

    5. Analyze the database before converting.

    Put the following in a script called "analyze_db.sh":
    $ORACLE_BASE/product/19c/jdk/bin/java \
      -jar $ORACLE_BASE/product/19c/rdbms/admin/autoupgrade.jar \
      -config /sw/oracle/admin/mydb/sql/config.txt -mode analyze
    
    Execute it:
    chmod 755 analyze_db.sh
    ./analyze_db.sh
    
    In my case, this job returned very quickly, since there is no upgrade to be done; we are simply converting from a non-CDB to a PDB!

    6. Finally, convert the database to a pdb.

    Put the following into a file called "deploy.sh":
    $ORACLE_BASE/product/19c/jdk/bin/java \
      -jar $ORACLE_BASE/product/19c/rdbms/admin/autoupgrade.jar \
      -config /sw/oracle/admin/mydb/sql/config.txt -mode deploy
    
    Execute it:
    chmod 755 deploy.sh
    ./deploy.sh
    
    The deploy phase goes through several steps and leaves you at the AutoUpgrade console. You can watch the progress by executing "lsj" at the prompt:
    AutoUpgrade 23.1.230224 launched with default internal options
    Processing config file ...
    +--------------------------------+
    | Starting AutoUpgrade execution |
    +--------------------------------+
    1 Non-CDB(s) will be processed
    Type 'help' to list console commands
    upg> lsj
    +----+-------+-----------+---------+-------+----------+-------+-------------------+
    |Job#|DB_NAME|      STAGE|OPERATION| STATUS|START_TIME|UPDATED|            MESSAGE|
    +----+-------+-----------+---------+-------+----------+-------+-------------------+
    | 101|   mydb|NONCDBTOPDB|EXECUTING|RUNNING|  15:19:20| 1s ago|Compatibility check|
    +----+-------+-----------+---------+-------+----------+-------+-------------------+
    Total jobs 1
    
    upg> lsj
    +----+-------+-----------+---------+-------+----------+-------+------------------+
    |Job#|DB_NAME|      STAGE|OPERATION| STATUS|START_TIME|UPDATED|           MESSAGE|
    +----+-------+-----------+---------+-------+----------+-------+------------------+
    | 101|   mydb|NONCDBTOPDB|EXECUTING|RUNNING|  15:19:20| 3s ago|Executing describe|
    +----+-------+-----------+---------+-------+----------+-------+------------------+
    Total jobs 1
    .
    .
    .
    upg> lsj
    +----+-------+---------+---------+-------+----------+-------+-------+
    |Job#|DB_NAME|    STAGE|OPERATION| STATUS|START_TIME|UPDATED|MESSAGE|
    +----+-------+---------+---------+-------+----------+-------+-------+
    | 101|   mydb|DBUPGRADE|EXECUTING|RUNNING|  15:19:20| 8s ago|Running|
    +----+-------+---------+---------+-------+----------+-------+-------+
    Total jobs 1
    
    upg> Job 101 completed
    ------------------- Final Summary --------------------
    Number of databases            [ 1 ]
    
    Jobs finished                  [1]
    Jobs failed                    [0]
    Jobs restored                  [0]
    Jobs pending                   [0]
    
    
    
    Please check the summary report at:
    /sw/oracle/admin/mydb/sql/cfgtoollogs/upgrade/auto/status/status.html
    /sw/oracle/admin/mydb/sql/cfgtoollogs/upgrade/auto/status/status.log
    
    Looking at the log file /sw/oracle/admin/mydb/sql/cfgtoollogs/upgrade/auto/status/status.log I see that many of the stages were very quick to return:
    ==========================================
              Autoupgrade Summary Report
    ==========================================
    [Date]           Mon Jun 19 15:28:58 CEST 2023
    [Number of Jobs] 1
    ==========================================
    [Job ID] 101
    ==========================================
    [DB Name]                mydb
    [Version Before Upgrade] 19.19.0.0.0
    [Version After Upgrade]  19.19.0.0.0
    ------------------------------------------
    [Stage Name]    PREUPGRADE
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:19:20
    [Duration]      0:00:00
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/preupgrade
    ------------------------------------------
    [Stage Name]    DRAIN
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:19:21
    [Duration]      0:00:00
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/drain
    ------------------------------------------
    [Stage Name]    NONCDBTOPDB
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:19:21
    [Duration]      0:08:51
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/noncdbtopdb
    ------------------------------------------
    [Stage Name]    DBUPGRADE
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:28:12
    [Duration]      0:00:42
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/dbupgrade
    ------------------------------------------
    [Stage Name]    POSTUPGRADE
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:28:57
    [Duration]      0:00:00
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/postupgrade
    ------------------------------------------
    [Stage Name]    SYSUPDATES
    [Status]        SUCCESS
    [Start Time]    2023-06-19 15:28:58
    [Duration]
    [Log Directory] /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/sysupdates
    ------------------------------------------
    Summary: /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/dbupgrade/datapatch_summary.log
    
    If I move into the /sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/noncdbtopdb folder, I can see the following:
    ls -altrh
    total 480K
    -rw------- 1 oracle dba 6.9K Jun 19 15:20 mydb-MYDB.xml
    -rw------- 1 oracle dba 1.7K Jun 19 15:20 createpdb_mydb_MYDB.log
    -rw------- 1 oracle dba 323K Jun 19 15:28 noncdbtopdb_mydb_MYDB.log
    -rw------- 1 oracle dba 127K Jun 19 15:28 noncdb_to_pdb_mydb.log
    drwx------ 2 oracle dba 4.0K Jun 19 15:28 .
    drwx------ 8 oracle dba 4.0K Jun 19 15:28 ..
    
    This stage executes the command
    create pluggable database "MYDB" using '/sw/oracle/cfgtoollogs/autoupgrade/mydb/mydb/101/noncdbtopdb/mydb-MYDB.xml' NOCOPY tempfile reuse
    
    to perform the actual plug-in operation of the old non-cdb database, so that it becomes a PDB. This was the only stage that took a bit of time to complete, almost 9 minutes.

    There are numerous ways to migrate from the non-container architecture to the multitenant architecture; this is just one of them.

    Things to notice:

    * My non-CDB database was called "mydb" - notice the lowercase name. During the process, this was ignored and the database was renamed to MYDB when it was recreated as a PDB. There seems to be no way to change this behaviour.

    Credit to Tim Hall for the usual excellent work in his blog post about the same topic, which also covers upgrading to a higher version of the Oracle software.

    The official Oracle 19c documentation about AutoUpgrade Configuration File for Non-CDB Upgrades on the Same System is essential reading.

    Lots of good stuff on the Mike Dietrich blog

    Wednesday, May 10, 2023

    Can the recyclebin be turned off for a container database but remain on in a pluggable database?

    Yes, the multitenant architecture allows this.
    SQL>select name, value , ISPDB_MODIFIABLE from V$parameter where name = 'recyclebin';
    
    NAME                           VALUE                ISPDB
    ------------------------------ -------------------- -----
    recyclebin                     OFF                  TRUE
    
    Switch container to a PDB:
    SQL>alter session set container=veg7;
    
    Session altered.
    
    SQL>show con_name
    
    CON_NAME
    ------------------------------
    VEG7
    
    SQL>select name, value , ISPDB_MODIFIABLE from V$parameter where name = 'recyclebin';
    
    NAME                           VALUE                ISPDB
    ------------------------------ -------------------- -----
    recyclebin                     ON                   TRUE
    
    Remember that the pluggable database must be closed and reopened if you toggle the recyclebin on/off:
    SQL>alter session set container=veg7;
    
    Session altered.
    
    SQL>alter system set recyclebin=on container=current scope=spfile;
    
    System altered.
    
    SQL>alter pluggable database close;
    
    Pluggable database altered.
    
    cdb>SQL>alter pluggable database open;
    
    Pluggable database altered.
    
    If you want to toggle the recyclebin on/off for the container database, the procedure is the same except that the entire instance will have to be bounced.
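
    For the root container, that could look like this (a sketch; note the full instance restart):
    alter session set container=CDB$ROOT;
    alter system set recyclebin=on scope=spfile;
    shutdown immediate
    startup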

    Monday, May 1, 2023

    How to use RMAN DUPLICATE FROM ACTIVE DATABASE to clone an entire CDB and all PDBs to a remote host



    In this post I will show how to clone an entire cdb with all PDBs using the DUPLICATE FROM ACTIVE DATABASE command, which has been around for many years. 

    You can still reuse your existing scripts for these operations, with some minor tweaks, even after you have moved to the multitenant architecture.

    My target server is called prodserver
    My auxiliary server is called testserver

    On prodserver, the container database is called "cdb" and for the time being, there is only one pluggable database running there, called "pdbprod", as shown below:


    orasoft@prodserver:[pdbprod] sqlplus / as sysdba
    SYS@CDB$ROOT SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 pdbprod                        READ WRITE NO
    
    On my auxiliary server testserver, I also have a container database called "cdb", and a PDB called "pdbtest":
    orasoft@testserver:[pdbtest] sqlplus / as sysdba
    SYS@CDB$ROOT SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 pdbtest                        READ WRITE NO
    
    First, I recommend configuring a wallet, to avoid exposing your passwords in scripts or on the Linux command line.

    Add a common user that exists in both container databases and make sure it has SYSDBA privileges.
    In this example, I add a user called c##cloneadmin and use the aliases "cdbprod" and "cdbaux" for the two databases:
    mkstore -wrl $TNS_ADMIN/wallet -createCredential cdbprod c##cloneadmin
    mkstore -wrl $TNS_ADMIN/wallet -createCredential cdbaux c##cloneadmin
    
    Add the connect descriptions to your tnsnames.ora file, on both source and target.
    Make sure the passwordless connections work before you attempt cloning. Test like this:
    rman 
    connect target /@cdbprod
    connect auxiliary /@cdbaux
    run{
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate auxiliary channel aux1 type disk;
    allocate auxiliary channel aux2 type disk;
    }
    exit
    
    Do not proceed until you connect successfully to both.

    Create a minimal initcdb.ora file, containing only these two parameters:
    *.db_name='cdb'
    *.enable_pluggable_database=TRUE
    
    Startup your auxiliary container database in nomount mode using the initcdb.ora file above:
    sqlplus / as sysdba
    startup nomount pfile=initcdb.ora
    
    Create an RMAN script for the duplication; in my example I put it in a file called run_duplication.cmd:
    connect target /@cdbprod
    connect auxiliary /@cdbaux
    run{
    allocate channel c1 type disk;
    allocate channel c2 type disk;
    allocate auxiliary channel aux1 type disk;
    allocate auxiliary channel aux2 type disk;
    configure device type disk parallelism 2;
    debug io;
    DUPLICATE DATABASE TO cdb
    FROM ACTIVE DATABASE
    USING BACKUPSET
    SPFILE
     set db_file_name_convert='pdbprod','pdbtest'
     set audit_file_dest='/oraadmin/oracle/admin/cdb/adump'
     set core_dump_dest='/oraadmin/oracle/diag/rdbms/cdb/cdb/cdump'
     set control_files='/dbfiles01/oradata/CDB/control01.ctl','/dbfiles02/fra/CDB/control02.ctl'
     set sga_target='33621540864'
     set sga_max_size='33621540864'
     set pga_aggregate_target='12773752832'
     set shared_pool_size='2751463424'
     set streams_pool_size='268435456'
     set service_names='cdb,pdbtest'
     set db_recovery_file_dest_size='3221225472000'
    NOFILENAMECHECK 
    ;
    debug off;
    }
    exit
    
    Remember that NOFILENAMECHECK is required when using the same data file names on different hosts. Without it, RMAN throws the error below and aborts your script:
    RMAN-05001: auxiliary file name /dbfiles02/oradata/CDB/undotbs01.dbf conflicts with a file used by the target database
    
    Create a shell script that calls the RMAN script, for example run_duplication.sh:
    rman cmdfile='run_duplication.cmd' debug trace='duplicate.trc' log='duplicate.log'
    
    Make the script executable, then execute it. If the source database is very large and you expect the duplication to take more than 2-3 hours, you may want to execute it in the background:
    chmod 755 run_duplication.sh
    nohup ./run_duplication.sh &
    
    When the duplication finished, the cdb and pdbprod were opened on the remote server. The last lines of the logfile state:
    RMAN-06400: database opened
    RMAN-06162: sql statement: alter pluggable database all open
    RMAN-03091: Finished Duplicate Db at 29-APR-23
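
    On the auxiliary server, the result can be verified right away (a sketch):
    sqlplus / as sysdba
    select name, db_unique_name from v$database;
    select name, open_mode from v$pdbs;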
    
    Sources:
    Duplicating a CDB from the Oracle 19c documentation.

    Wednesday, January 4, 2023

    Check if your CDB is set up for local undo

    col PROPERTY_NAME format a20
    col PROPERTY_VALUE format a20
    col DESCRIPTION format a40
    col LOCAL_UNDO_ENABLED format a20
    select * from database_properties where property_name like '%UNDO%';
    
    PROPERTY_NAME        PROPERTY_VALUE       DESCRIPTION
    -------------------- -------------------- ----------------------------------------
    LOCAL_UNDO_ENABLED   TRUE                 true if local undo is enabled
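
    If the query returns FALSE, local undo can be enabled; the CDB must first be restarted in upgrade mode (a sketch, run as SYSDBA in CDB$ROOT):
    shutdown immediate
    startup upgrade
    alter database local undo on;
    shutdown immediate
    startup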