Monday, December 2, 2024

How to use rsync between two servers

For example, to synchronize two servers so that they end up with identical file structures:

  • Source server: server1.oric.no 
  • Destination server: server2.oric.no 
  • Execute as: root
target=server2.oric.no && rsync -vaz --delete /u01/app/myfiles ebsuser@$target:/u01/app
This will sync the entire "/u01/app/myfiles" folder + all subfolders.

The --delete flag in rsync ensures that files in the destination directory (/u01/app/myfiles on the target) that are not present in the source directory (/u01/app/myfiles) are deleted. This helps keep the destination directory synchronized with the source directory.

Thursday, October 31, 2024

Error: ORA-16831: operation not allowed on this member

When I tried to add a standby database to my configuration, I received:
DGMGRL>  add database 'cdb_stby1' as connect identifier is 'cdb_stby1';
Error: ORA-16831: operation not allowed on this member
This message means that the database you are trying to add, in this case cdb_stby1, is not ready to become a standby database in your configuration.

It is easy to verify by querying the "database_role" column in the v$database view.

It needs to have a role that is NOT PRIMARY, for example the PHYSICAL STANDBY role:
select database_role, open_mode,name from v$database;

DATABASE_ROLE    OPEN_MODE            NAME
---------------- -------------------- ------------------------------
PHYSICAL STANDBY MOUNTED              CDB
If your database is opened with the role PRIMARY, you need to clone it from the primary database, for example by using RMAN's DUPLICATE ... FOR STANDBY FROM ACTIVE DATABASE.
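A minimal RMAN sketch of such a duplication (the connect identifiers cdb and cdb_stby1 are assumptions standing in for your own TNS aliases, and the standby instance must already be started in NOMOUNT):

```sql
-- rman target sys@cdb auxiliary sys@cdb_stby1
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;
```

Once the duplicate finishes, v$database on the standby should report DATABASE_ROLE = PHYSICAL STANDBY, and the ADD DATABASE command in DGMGRL should succeed.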

Tuesday, October 29, 2024

Lock and expire users

An anonymous PL/SQL block to lock and expire open accounts:
SET VERIFY OFF
connect "SYS"/"&&sysPassword" as SYSDBA
set echo on
spool /oracle/admin/mydb01/scripts/lockAccount.log append
BEGIN
 FOR item IN ( SELECT USERNAME, AUTHENTICATION_TYPE FROM DBA_USERS WHERE ACCOUNT_STATUS IN ('OPEN', 'LOCKED', 'EXPIRED') AND USERNAME NOT IN (
'SYS','SYSTEM') )
 LOOP
IF item.AUTHENTICATION_TYPE='PASSWORD' THEN
  dbms_output.put_line('Locking and Expiring: ' || item.USERNAME);
  execute immediate 'alter user ' ||
         sys.dbms_assert.enquote_name(
         sys.dbms_assert.schema_name(
         item.USERNAME),false) || ' password expire account lock' ;
 ELSE
  dbms_output.put_line('Locking: ' || item.USERNAME);
  execute immediate 'alter user ' ||
         sys.dbms_assert.enquote_name(
         sys.dbms_assert.schema_name(
         item.USERNAME),false) || ' account lock' ;
 END IF;
 END LOOP;
END;
/
spool off

Tuesday, October 15, 2024

Arguments that can be passed to dbca - usage output

This can certainly come in handy one day...

        -createDatabase - Command to Create a database.
                -responseFile | (-gdbName,-templateName)
                -responseFile - 
                -gdbName 
                -templateName 
                [-useWalletForDBCredentials  Specify true to load database credentials from wallet]
                        -dbCredentialsWalletLocation 
                        [-dbCredentialsWalletPassword ]
                [-characterSet ]
                [-dvConfiguration  Specify true to configure and enable database vault.]
                        -dvUserName 
                        -dvUserPassword 
                        [-dvAccountManagerPassword ]
                        [-dvAccountManagerName ]
                [-datafileDestination ]
                [-datafileJarLocation ]
                [-runCVUChecks ]
                [-sid ]
                [-redoLogFileSize ]
                [-registerWithDirService ]
                        -dirServiceUserName 
                        [-databaseCN ]
                        [-dirServiceCertificatePath ]
                        [-dirServicePassword ]
                        [-dirServiceUser ]
                        [-ldapDirectoryAccessType ]
                        [-useSYSAuthForLDAPAccess ]
                        [-walletPassword ]
                [-systemPassword ]
                [-nodelist ]
                [-oracleHomeUserPassword ]
                [-sysPassword ]
                [-enableArchive  Specify true to enable archive]
                        [-archiveLogMode ]
                        [-archiveLogDest ]
                [-memoryMgmtType ]
                [-variables ]
                [-listeners ]
                [-olsConfiguration  Specify true to configure and enable Oracle Label Security.]
                        [-configureWithOID This flag configures Oracle Label Security with OID.]
                [-createAsContainerDatabase ]
                        [-pdbName ]
                        [-numberOfPDBs ]
                        [-pdbStorageMAXSizeInMB ]
                        [-pdbStorageMAXTempSizeInMB ]
                        [-useLocalUndoForPDBs   Specify false to disable local undo tablespace for PDBs.]
                        [-pdbAdminPassword ]
                        [-pdbOptions ]
                [-recoveryAreaDestination ]
                        [-recoveryAreaSize ]
                [-createListener ]
                [-useOMF  Specify true to use Oracle-Managed Files.]
                [-memoryPercentage | -totalMemory]
                [-memoryPercentage ]
                [-totalMemory ]
                [-dbOptions ]
                [-sampleSchema ]
                [-variablesFile ]
                [-customScripts ]
                [-databaseType ]
                [-oracleHomeUserName ]
                [-initParams ]
                        [-initParamsEscapeChar ]
                [-policyManaged | -adminManaged]
                [-policyManaged ]
                        -serverPoolName 
                        [-pqPoolName ]
                        [-createServerPool ]
                                [-pqPoolName ]
                                [-forceServerPoolCreation ]
                                [-pqCardinality ]
                                [-cardinality ]
                [-adminManaged ]
                [-nationalCharacterSet ]
                [-storageType < FS | ASM >]
                        -datafileDestination 
                        [-asmsnmpPassword ]
                [-databaseConfigType ]
                        [-RACOneNodeServiceName ]
                [-emConfiguration ]
                        [-dbsnmpPassword ]
                        [-emPassword ]
                        [-emUser ]
                        [-emExpressPort ]
                        [-omsHost ]
                        [-omsPort ]
                        [-emExpressPortAsGlobalPort ]

Monday, October 14, 2024

How to add a post-creation script to your dbca createDatabase statement

Given that you have saved your response file in the oracle user's home directory, use the -customScripts flag to point to a script that will be run *after* the CDB is ready. I have found this useful in situations where you want a non-default setting that Oracle simply denies you every time you create a new CDB.
dbca -createDatabase -responseFile /home/oracle/${ORACLE_SID}.rsp -customScripts /home/oracle/post_cdb_creation.sql -silent
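What goes into post_cdb_creation.sql is up to you; the script below is a purely hypothetical example of the kind of non-default settings you might apply (both statements are illustrative, not taken from any particular standard):

```sql
-- Hypothetical post-creation adjustments that dbca does not offer directly:
alter system set open_cursors=1000 scope=spfile;
-- Flashback requires archivelog mode (enableArchive=true in the response file):
alter database flashback on;
```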

Thursday, October 10, 2024

autoupgrade error TARGET_CDB_COMPATIBILITY

If you are converting your non-CDB to a PDB using autoupgrade, you might run into this error:
2024-10-10 14:19:51.697 INFO
build.version:24.6.240905
build.hash:0ca273885
build.date:2024/09/05 11:30:40 -0400
build.max_target_version:23
build.MOS_NOTE:2485457.1
build.supported_target_versions:12.2,18,19,21,23
build.type:production
build.MOS_LINK:https://support.oracle.com/epmos/faces/DocumentDisplay?id=2485457.1
build.label:(HEAD, tag: v24.6, origin/stable_devel, stable_devel)
build.hash_date:2024/09/05 11:23:06 -0400

2024-10-10 14:19:51.850 INFO No preupgrade user action defined
2024-10-10 14:20:13.184 INFO Analyzing mydb01, 62 checks will run using 8 threads
2024-10-10 14:20:39.765 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/prechecks/mydb01_preupgrade.html not found
2024-10-10 14:20:39.767 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/prechecks/mydb01_preupgrade.log not found
2024-10-10 14:20:39.768 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/dbupgrade/datapatch_summary.log not found
2024-10-10 14:20:39.805 ERROR The following checks have ERROR severity and no auto fixup is available or
the fixup failed to resolve the issue. Fix them before continuing:
mydb01 TARGET_CDB_COMPATIBILITY
2024-10-10 14:20:39.925 ERROR Dispatcher failed: The following checks have ERROR severity and no auto fixup is available or
the fixup failed to resolve the issue. Fix them before continuing:
mydb01 TARGET_CDB_COMPATIBILITY
oracle.upgrade.autoupgrade.utils.errors.AutoUpgException: The following checks have ERROR severity and no auto fixup is available or
the fixup failed to resolve the issue. Fix them before continuing:
mydb01 TARGET_CDB_COMPATIBILITY
        at oracle.upgrade.autoupgrade.dbchecks.standard.checks.ChecksController.evaluateBlockerChecks(ChecksController.java:310)
        at oracle.upgrade.autoupgrade.dbchecks.standard.checks.ChecksController.executeChecks(ChecksController.java:166)
        at oracle.upgrade.autoupgrade.dispatcher.facade.subsystems.ExecuteChecks.executeStage(ExecuteChecks.java:95)
        at oracle.commons.dispatcher.JobStage.executeStage(JobStage.java:80)
        at oracle.commons.dispatcher.RunJobDefinition.runJob(RunJobDefinition.java:144)
        at oracle.upgrade.autoupgrade.dispatcher.v2.def.RunAutoUpgradeJob.executeDispatcher(RunAutoUpgradeJob.java:62)
        at oracle.upgrade.autoupgrade.dispatcher.AutoUpgDispatcher.run_(AutoUpgDispatcher.java:262)
        at oracle.upgrade.autoupgrade.dispatcher.AutoUpgDispatcher.run(AutoUpgDispatcher.java:245)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)
Cause:
The database you are converting to a pluggable database is installed with more options than the cdb you are trying to plug it into.

Solution:
Recreate your CDB so that it has at least the same options installed as the database you are trying to convert to a PDB.

Check the installed options in both databases like this:
set lines 200 pages 100
col comp_name format a40
col status format a20
SELECT COMP_NAME,COMP_ID,VERSION,STATUS FROM DBA_REGISTRY;

Wednesday, October 9, 2024

Workaround for ORA-27069 attempt to do I/O beyond the range of the file during RMAN clone from active database

This is the message I received when starting an active database duplication using RMAN:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/09/2024 10:40:19
RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file
The rman script:
connect target sys/pwd@mydb1
connect auxiliary sys/pwd@mydb2
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate auxiliary channel aux1 type disk;
allocate auxiliary channel aux2 type disk;
configure device type disk parallelism 2;
debug io;
DUPLICATE TARGET DATABASE TO mydb2
FROM ACTIVE DATABASE
USING COMPRESSED BACKUPSET;
debug off;
}
Solution:
  • Use a pfile to start the auxiliary instance, and *not* an spfile. If you do not have one, log onto the not-yet-mounted instance and create one:
    sqlplus / as sysdba
    create pfile from spfile;
    
  • Set the environment variable ORA_RMAN_SGA_TARGET (in MB) to the same as, or a little more than, the auxiliary database's total SGA:
    export ORA_RMAN_SGA_TARGET=7900
    Run the rman script again and it should proceed past the mount-stage and start restoring the files.
Thursday, October 3, 2024

    Workaround for RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file when using dbca

    Late night error when trying to create a cdb out of the same ORACLE_HOME as an older, non-cdb database:

    dbca reports:
    [ 2024-10-03 18:34:35.876 CEST ] Prepare for db operation
    DBCA_PROGRESS : 10%
    [ 2024-10-03 18:34:35.956 CEST ] Copying database files
    DBCA_PROGRESS : 40%
    DBCA_PROGRESS : 100%
    [ 2024-10-03 18:35:04.332 CEST ] [FATAL] Recovery Manager failed to restore datafiles. Refer logs for details.
    DBCA_PROGRESS : 10%
    DBCA_PROGRESS : 0%
    
    Detailed log file shows:
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file
    
    Potential cause: your memory is too small to hold the extra instance you are attempting to create.

    Potential solution: scale your total memory up. If necessary, adjust hugepages to fit the extra instance.

    Wednesday, October 2, 2024

    Workaround for error ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid

    When trying to update your database's LOCAL_LISTENER parameter like this:
    alter system set local_listener=LISTENER_CDB scope=both;
    
    and you get the following error stack:
    ERROR at line 1:
    ORA-32017: failure in updating SPFILE
    ORA-00119: invalid specification for system parameter LOCAL_LISTENER
    ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid
    ORA-00132: syntax error or unresolved network name 'LISTENER_CDB'
    
    The solution is to first change your $TNS_ADMIN/tnsnames.ora to correspond to the value you wish to set the local_listener parameter to. For example, change the following entry:
    LISTENER =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
    
    to
    LISTENER_CDB =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
    
    For more information about the local_listener parameter, see this earlier post
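With the renamed entry in place, the parameter change should go through; a short sketch of the remaining steps (assuming the alias LISTENER_CDB now resolves via tnsnames.ora):

```sql
alter system set local_listener=LISTENER_CDB scope=both;
-- Ask the instance to re-register its services with the listener:
alter system register;
```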

    Wednesday, September 25, 2024

    How to plug and unplug a PDB in a multitenant configuration

    What exactly does it mean to unplug and plug a database, in a multitenant architecture?

  • To unplug means to close the PDB and then generate its manifest file.
  • To plug means using the manifest file to create a new pluggable database.

    In the examples below, I am unplugging the databases pdb1 and pdb2 into two different manifest files:
    sqlplus / as sysdba
    alter pluggable database pdb1 close immediate;
    alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';
    
    alter pluggable database pdb2 close immediate;
    alter pluggable database pdb2 unplug into '/u01/app/oracle/oradata/pdb2.xml';
    
    The XML file created in the unplug operation contains information about the names and the full paths of the tablespaces, as well as data files of the unplugged PDB.

    This information will then be used by a subsequent plug-in operation.

    After having unplugged a pdb you can drop the pluggable database but physically keep the datafiles belonging to it, like this:
    drop pluggable database pdb1 keep datafiles;
    
    If you wish to plug the database into a different CDB, it is a good idea to check the compatibility of the database with the CDB first. This is particularly true if the new CDB is created with newer binaries than the original CDB, or if it is on a different host. In the new CDB, execute the following PL/SQL code:
    set serveroutput on
    
    DECLARE
       compatible BOOLEAN := FALSE;
    BEGIN  
       compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
            pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
       if compatible then
          DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
       else DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
       end if;
    END;
    /
    
    If the output shows that the database is compatible with the new CDB, proceed to plug in the database.

    Plug operations can be done using two different methods: NOCOPY and COPY.

    Using the NOCOPY method will use the data files of the unplugged PDB to plug the PDB into another (or the same) CDB without any physical file copy:
    create pluggable database pdb_plug_nocopy using '/u01/app/oracle/oradata/pdb1.xml'
    NOCOPY
    TEMPFILE REUSE;
    
    When using the NOCOPY option, the plugin operation lasts a few seconds. The original data files of the unplugged PDB now belong to the new plugged-in PDB in the new (or the same) CDB. A file with the same name as the temp file specified in the XML file exists in the target location. Therefore, the TEMPFILE_REUSE clause is required.

    Using the COPY method will physically copy the data files from the original destination to a new destination:
    mkdir -p /u01/app/oracle/oradata/cdb2/pdb_plug_copy
    
    create pluggable database pdb_plug_copy using '/u01/app/oracle/oradata/pdb2.xml'
    COPY
    FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2','/u01/app/oracle/oradata/cdb2/pdb_plug_copy');
    
    Verify the status, open mode and file location of the plugged-in PDB (in the example below, I am showing the output for the PDB created using the COPY method, but the check should be done regardless of the method used):
    select pdb_name, status from cdb_pdbs where pdb_name='PDB_PLUG_COPY'; --> should return PDB_PLUG_COPY and NEW
    select open_mode from v$pdbs where name='PDB_PLUG_COPY'; --> should return MOUNTED
    
    select name from v$datafile where con_id=(select con_id from v$pdbs where name='PDB_PLUG_COPY'); --> should return the full path and name of the datafiles belonging to the system and sysaux tablespaces.
    
    Whether you used the NOCOPY or the COPY method, you will now have to open the newly plugged-in database in the new CDB:

    alter pluggable database pdb_plug_nocopy open;
    
    alter pluggable database pdb_plug_copy open;
    
    show con_name
    show pdbs
    
    Source: Oracle 12cR1 tutorial
Monday, September 23, 2024

    How to prevent dbca from creating folders in capital letters during database creation

    This post is derived from my previous post, but I have come to realize that I have needed to look up this particular detail on at least a couple of occasions, so it deserves a post of its own.

    To prevent dbca from creating folders with capital letters during database creation, you need to change the directives
    datafileDestination=/disk1/oradata/{DB_UNIQUE_NAME}/
    recoveryAreaDestination=/disk2/flash_recovery_area/{DB_UNIQUE_NAME}
    
    to
    datafileDestination=/disk1/oradata/mydb
    recoveryAreaDestination=/disk2/flash_recovery_area/mydb
    
    in your response file.

    The response file would then look something like:
    #-------------------------------------------------------------------------------
    # Do not change the following system generated value.
    #-------------------------------------------------------------------------------
    responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
    gdbName=mydb.oric.no
    sid=mydb
    databaseConfigType=SI
    createAsContainerDatabase=false
    templateName=/u01/oracle/product/19c/assistants/dbca/templates/General_Purpose.dbc
    sysPassword=manager1
    systemPassword=manager1
    datafileDestination=/disk1/oradata/mydb
    recoveryAreaDestination=/disk2/flash_recovery_area/mydb
    storageType=FS
    characterSet=al32utf8
    variables=
    initParams=db_recovery_file_dest_size=50G
    memoryPercentage=75
    databaseType=MULTIPURPOSE
    enableArchive=true
    redoLogFileSize=2048
    

    Thursday, September 19, 2024

    What is the $ORACLE_HOME/dbs/hc_$ORACLE_SID.dat file?

    In every $ORACLE_HOME/dbs folder you will find a file named hc_$ORACLE_SID.dat.

    What is it, and is it essential for your instance?

    Oracle Support states:

    The $ORACLE_HOME/dbs/hc_.dat is created for the instance health check monitoring. It contains information used to monitor the instance health and to determine why it went down if the instance isn't up.

    The file can be deleted while the instance is up; it won't cause any harm to your instance.

    In earlier versions of the Oracle database software a bug existed that would trigger an ORA-7445 if the file was deleted while the database was up, but this was fixed as early as in version 11.2.

    The file is created at every instance startup.

    Source: What Is The $ORACLE_HOME/dbs/hc_.dat File? (Doc ID 390474.1) from Oracle Support

    Tuesday, September 17, 2024

    Where does an Oracle EBS 12.2 appserver save logs from the concurrent worker processes?

    Look in the directory $ADOP_LOG_HOME

    In here, every session will create its own subfolder. In my case:
    10  11  2  3  4  5  6  7  8  9
    
    I had to enter the folder named after session 11.

    In here you will find folders named according to execution time, for example

    20240916_135516

    Inside this folder, you will find folders named according to action, for example "prepare", "cutover", or "apply".

    In my case, I stepped inside the "apply" directory and found a folder named after my appserver.

    Finally, you will find a folder named according to the patch number, for example
    36876222_N
    
    with a log directory underneath it.

    So the path $ADOP_LOG_HOME/11/20240916_135516/apply/oric-ebsapp01/36876222_N/log is the complete path to the log directory for the session I was looking for.
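Rather than drilling down one directory at a time, you can let find locate the log directories. A minimal sketch (the tree below is mocked up in a temporary directory to mirror the structure described above; the session and patch numbers are just examples):

```shell
# Mock up an ADOP log tree for the demo:
ADOP_LOG_HOME=$(mktemp -d)
logdir="$ADOP_LOG_HOME/11/20240916_135516/apply/oric-ebsapp01/36876222_N/log"
mkdir -p "$logdir"

# List every log directory under an "apply" phase, any session:
find "$ADOP_LOG_HOME" -type d -name log -path '*/apply/*'
```

On a real appserver, drop the mock-up lines and point the find command at your actual $ADOP_LOG_HOME.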

    Thursday, September 5, 2024

    How do I store multiple connections to the same database in my wallet?

    If you have a wallet which already contains an alias for a database, you cannot add another credential using the same alias. If you try, mkstore will give you the following error:
    Secret Store error occurred: oracle.security.pki.OracleSecretStoreException: Credential already exists
    
    This is actually documented:

    Each user account must have its own unique connection string; you cannot create one connection string for multiple users.

    So what to do, then, if you find yourself in a situation where you need two different users to access the database in a passwordless manner using a wallet?

    Workaround:

    Use multiple aliases in tnsnames.ora pointing to the same database service.

    List the current contents of the wallet:
    mkstore -wrl $TNS_ADMIN/wallet -listCredential
    Oracle Secret Store Tool Release 19.0.0.0.0 - Production
    Version 19.4.0.0.0
    Copyright (c) 2004, 2024, Oracle and/or its affiliates. All rights reserved.
    
    Enter wallet password:
    List credential (index: connect_string username)
    1: mydb1_scott scott
    
    So user scott can log into the database without typing a password; his actual password is securely stored in the wallet.

    The tnsnames.ora file knows which service to connect to whenever "mydb1_scott" is used:
    mydb1_scott =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = server1.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = mydb1.oric.no)
          (INSTANCE_NAME = cdb)
        )
      )
    
    Let's add another connect_identifier in the tnsnames.ora file:
    mydb1_jim =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = server1.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = mydb1.oric.no)
          (INSTANCE_NAME = cdb)
        )
      )
    
    Let's test it, using a random user, for example, system:
    sqlplus system@mydb1_jim
    Enter password:
    SQL>show user
    USER is "SYSTEM"
    
    So the connect identifier works; it points to a valid service registered with the listener on the server.

    Let's add another entry to the wallet for user jim, so that he can connect without supplying his password:
    mkstore -wrl $TNS_ADMIN/wallet -createCredential mydb1_jim jim
    Oracle Secret Store Tool Release 19.0.0.0.0 - Production
    Version 19.4.0.0.0
    Copyright (c) 2004, 2024, Oracle and/or its affiliates. All rights reserved.
    
    Your secret/Password is missing in the command line
    Enter your secret/Password: <-- password for user jim
    Re-enter your secret/Password: <-- reenter password for user jim
    Enter wallet password: <-- wallet password
    
    Jim can now log in using his own password stored in the wallet:
    sqlplus /@mydb1_jim
    SQL>show user
    USER is "JIM"
    
    The contents of the wallet are now:
    mkstore -wrl $TNS_ADMIN/wallet -listCredential
    Oracle Secret Store Tool Release 19.0.0.0.0 - Production
    Version 19.4.0.0.0
    Copyright (c) 2004, 2024, Oracle and/or its affiliates. All rights reserved.
    
    Enter wallet password:
    List credential (index: connect_string username)
    2: mydb1_jim jim
    1: mydb1_scott scott
    

    Friday, August 23, 2024

    How to cleanup temporary tablespaces in a PDB and create new ones in the right location

    Notice how temporary tablespace groups can be used instead of directly referencing the temporary tablespace name:
    alter session set container=PDB1;
    
    CREATE TEMPORARY TABLESPACE TEMP3
    TEMPFILE
      '/disk1/oradata/PDB1/temp03.dbf' SIZE 20G AUTOEXTEND ON NEXT 128K MAXSIZE 20G
    TABLESPACE GROUP TEMP_NEW
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
    FLASHBACK ON;
    
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_NEW;
    
    DROP TABLESPACE TEMP1 INCLUDING CONTENTS AND DATAFILES;
    DROP TABLESPACE TEMP2 INCLUDING CONTENTS AND DATAFILES;
    
    CREATE TEMPORARY TABLESPACE TEMP1
    TEMPFILE
      '/disk1/oradata/PDB1/temp01.dbf' SIZE 1G AUTOEXTEND ON NEXT 128K MAXSIZE 20G
    TABLESPACE GROUP TEMP
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
    FLASHBACK ON;
    
    CREATE TEMPORARY TABLESPACE TEMP2
    TEMPFILE
      '/disk1/oradata/PDB1/temp02.dbf' SIZE 1G AUTOEXTEND ON NEXT 128K MAXSIZE 20G
    TABLESPACE GROUP TEMP
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
    FLASHBACK ON;
    
    ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;
    
    DROP TABLESPACE TEMP3 INCLUDING CONTENTS AND DATAFILES;
    
    exit
    
    More about temporary tablespace groups in Oracle 19c here

    Our solution to ORA-00600: internal error, arguments: [kkmmctbf:bad intcoln]

    We were facing a problem in one of our databases with jobs that stopped immediately after attempting to start, aborting their respective processes.

    Error:
    2024-08-20T10:09:51.641274+02:00
    Errors in file /orasw/rdbms/diag/rdbms/cdb/cdb/trace/cdb_j000_3187052.trc:
    ORA-00600: internal error code, arguments: [kkmmctbf:bad intcoln], [0], [], [], [], [], [], [], [], [], [], []
    2024-08-20T10:09:51.645053+02:00
    opidrv aborting process J000 ospid (3187052) as a result of ORA-600
    
    Cause:

  • Right prior to the incident we ran out of free space in the SYSTEM tablespace, and a new datafile was promptly added.
  • The internal table SYS.SOURCE$ contains all the PL/SQL code for the database (both Oracle-provided code and your own).
  • Our schemas are editioned, and for some reason we seem to be hitting bug number 14163397: "Trigger on editioning view can get ORA-600 [kkmmctbf:bad intcoln] (Doc ID 14163397.8)"

    Solution:
    alter system set "_ignore_fg_deps"=ALL container=all scope=both;
    
    The above parameter is used to ignore fine-grained dependencies during invalidation. Flush the shared pool and buffer cache if the old versions are still held in memory:
    alter system flush shared_pool;
    alter system flush buffer_cache;
    
    After a little while, the jobs started to run automatically again.

    The above solution was found with assistance from Oracle Support and should not be attempted in a production environment without prior investigation and only upon recommendation from Oracle Support.

    Wednesday, August 7, 2024

    How to find a failed job in the data dictionary tables

    My adrci output showed a failed job (abbreviated):
    ADR Home = /u01/oracle/diag/rdbms/mydb01/mydb01:
    *************************************************************************
    INCIDENT_ID          PROBLEM_KEY                                                 CREATE_TIME
    -------------------- ----------------------------------------------------------- -------------------------
    9720716              ORA 600 [kdxolrs2_comp: samekey]                            2024-08-06 17:37:32.674000 +02:00
    9720796              ORA 600 [kdxolrs2_comp: samekey]                            2024-08-06 17:37:24.616000 
    2 rows fetched (*** more available ***)
    
    adrci> show incident -mode detail -p "incident_id=9720716"
    
       INCIDENT_ID                   9720716
       STATUS                        ready
       CREATE_TIME                   2024-08-06 17:37:32.674000 +02:00
       PROBLEM_ID                    21
       PROBLEM_KEY                   ORA 600 [kdxolrs2_comp: samekey]
       FIRSTINC_TIME                 2024-08-06 17:37:24.616000 +02:00
       LASTINC_TIME                  2024-08-06 17:37:32.674000 +02:00
       KEY_NAME                      PQ
       KEY_NAME                      Service
       KEY_VALUE                     SYS$USERS
       KEY_NAME                      Module
       KEY_VALUE                     DBMS_SCHEDULER
       KEY_NAME                      Action
       KEY_VALUE                     ORA$AT_OS_OPT_SY_125693
       OWNER_ID                      1
       INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_p007_11096_i9720716.trc
       OWNER_ID                      1
       INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/trace/mydb01_p007_11096.trc
       OWNER_ID                      1
       INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_mz00_1311372_i9720716_a.trc
    1 row fetched
    
    
    The file mydb01_p007_11096_i9720716.trc shows:
    *** MODULE NAME:(DBMS_SCHEDULER) 2024-08-06T17:37:32.699564+02:00
    *** ACTION NAME:(ORA$AT_OS_OPT_SY_125693) 2024-08-06T17:37:32.699567+02:00
    
    [TOC00002]
    ========= Dump for incident 9720716 (ORA 600 [kdxolrs2_comp: samekey]) ========
    
    *** 2024-08-06T17:37:32.709272+02:00
    dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
    [TOC00003]
    ----- Current SQL Statement for this session (sql_id=82xfc2m10quym) -----
    select /*+ opt_param('_optimizer_use_auto_indexes' 'on')  
    parallel_index(t, "mypk",8)  
    dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) 
    no_monitoring xmlindex_sel_idx_tbl 
    opt_param('optimizer_inmemory_aware' 'false') 
    no_substrb_pad  no_expand index_ffs(t,"mypk") */ 
    count(*) as nrw,approx_count_distinct(sys_op_lbid(632968,'L',t.rowid)) as nlb,null as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) 
    as clf from "myuser"."mytable" t where "ID" is not null
    [TOC00003-END]
    
    To find the record in the data dictionary holding the history for this run, use the following query:
    set lines 300
    col client_name format a30
    col job_name format a30
    col job_status format a20
    col job_start_time format a30
    col job_duration format a20
    col job_info format a80
    
    SELECT client_name,job_name,job_status,to_char(job_start_time,'dd.mm.yyyy hh24:mi:ss') "job_start_time" ,job_duration,job_info
    FROM  dba_autotask_job_history 
    WHERE JOB_NAME = 'ORA$AT_OS_OPT_SY_125693';
    
    CLIENT_NAME JOB_NAME JOB_STATUS job_start_time JOB_DURATION JOB_INFO
    auto optimizer stats collection ORA$AT_OS_OPT_SY_125693 FAILED 06.08.2024 17:00:02 +000 00:37:34 ORA-12801: error signaled in parallel query server P007 ORA-00600: internal error code, arguments: [kdxolrs2_comp: samekey], [0], [0], [0], [0], [], [], [], [], [], [], []
    Further details can be seen in the view DBA_OPTSTAT_OPERATION_TASKS, if necessary.
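Tracking down which trace files belong to a given incident can be scripted. A small sketch (the function name is made up; the directory layout mirrors the paths shown above):

```shell
# List the *.trc files that belong to one incident: given an ADR home
# and an incident number, look under the matching incdir_<n> directory.
find_incident_traces() {
  adr_home="$1"; incident="$2"
  find "$adr_home/incident/incdir_$incident" -name '*.trc' 2>/dev/null
}
```

For example, `find_incident_traces /u01/oracle/diag/rdbms/mydb01/mydb01 9720716` would list the incident trace files referenced in the output above.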

    Workaround for ORA-65011: Pluggable database does not exist in an EBS environment

    If you have upgraded an EBS database to Oracle 19c, you will also have migrated to the multitenant architecture, which is required.

    During the upgrade, Oracle sets the parameter _pdb_name_case_sensitive to TRUE by default, as explained in Doc ID 2642230.1.

    If you then proceeded to convert your old EBS database to a pluggable database with a lowercase name, a "flashback pluggable database" operation will fail when executed from sqlplus:
    oracle@oraserver01:[pdb1]# echo $ORACLE_SID
    cdb
    oracle@oraserver01:[pdb1]# sqlplus / as sysdba
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 6 10:23:55 2024
    Version 19.23.0.0.0
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           MOUNTED
    SYS@_container_name SQL> flashback pluggable database "pdb1" to restore point SD76759_1;
    flashback pluggable database "pdb1" to restore point SD76759_1
    *
    ERROR at line 1:
    ORA-65011: Pluggable database PDB1 does not exist.
    


    Workaround: Use RMAN instead
    oracle@oraserver01:[pdb1]# rman target /
    
    Recovery Manager: Release 19.0.0.0.0 - Production on Tue Aug 6 10:26:51 2024
    Version 19.23.0.0.0
    
    Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database: CDB (DBID=2245029826)
    
    RMAN> flashback pluggable database "pdb1" to restore point SD76759_1;
    
    Starting flashback at 06-AUG-24
    
    starting media recovery
    
    archived log for thread 1 with sequence 21576 is already on disk as file /oradisk3/recoveryarea/CDB/archivelog/2024_07_11/o1_mf_1_21576__nw9trq2c_.arc
    media recovery complete, elapsed time: 00:00:01
    Finished flashback at 06-AUG-24
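Since _pdb_name_case_sensitive=TRUE makes lowercase PDB names case sensitive, it can be handy to know up front which names will need double quotes in SQL. A minimal sketch (the helper name is made up):

```shell
# Hypothetical helper: a PDB name containing lowercase letters is case
# sensitive when _pdb_name_case_sensitive=TRUE and must be double-quoted
# in statements such as FLASHBACK PLUGGABLE DATABASE.
needs_quoting() {
  case "$1" in
    *[a-z]*) echo yes ;;
    *)       echo no ;;
  esac
}
```

Here `needs_quoting pdb1` prints yes, while `needs_quoting PDB1` prints no.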
    

    Monday, July 22, 2024

    Friday, July 19, 2024

    What to do if SET SERVEROUTPUT ON doesn't work

    If you have an anonymous piece of PL/SQL code and you have turned on serveroutput, like this:
    set serveroutput on
    
    but DBMS_OUTPUT still prints nothing, you may need to enable output first:
    exec dbms_output.enable();
    set serveroutput on
    
    In a multitenant environment, you will need to re-enable serveroutput each time you switch containers.

    So, if you do
    DECLARE
      V_PDBNAME   VARCHAR2(128) := 'pdb1';  -- example PDB name
      V_SQLSTRING VARCHAR2(200);
    BEGIN
      V_SQLSTRING := 'ALTER SESSION SET CONTAINER=' || V_PDBNAME;
      DBMS_OUTPUT.PUT_LINE(V_SQLSTRING);
      EXECUTE IMMEDIATE V_SQLSTRING;
    END;
    /
    
    you will need to enable serveroutput again if you want to perform additional actions further down in your script.

    How to check if your PostgreSQL server is part of a replication setup

    If the server is down, check the entries in postgresql.auto.conf.

    The following postgresql.auto.conf is from a standby (replica) server:
    wal_level = 'replica'
    hot_standby = on
    hot_standby_feedback = 'on'
    primary_slot_name = 'stby1'
    max_connections = 636
    
    primary_conninfo      = 'host=pgserver01.oric.no port=5432 user=replicate password=mysecretpassword'
    promote_trigger_file  = '/tmp/MasterNow'
    
    The primary (master) server may have a very simple postgresql.auto.conf file:
    # Do not edit this file manually!
    # It will be overwritten by the ALTER SYSTEM command.
    wal_level = 'replica'
    
    If the server is up, use queries:

    1. On the primary server, check that data is shipped over to the replica server:
    [postgres@pgserver01.oric.no /pgdata/data]$ echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
      usename  |   state   | sync_state | sync_priority |        client_hostname
    -----------+-----------+------------+---------------+--------------------------------
     replicate | streaming | async      |             0 | pgserver02.oric.no
    (1 row)
    
    2. The following query should return f (false) on the primary server and t (true) on the replica server:
    [postgres@pgserver01 /pgdata/data/log]$ echo "select pg_is_in_recovery();" | psql
     pg_is_in_recovery
    -------------------
     f
    (1 row)
    
    [postgres@pgserver02 /pgdata/data/log]$  echo "select pg_is_in_recovery();" | psql
     pg_is_in_recovery
    -------------------
     t
    (1 row)
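When the server is down and only the configuration files are available, the postgresql.auto.conf check above can be scripted. A heuristic sketch (assumes the file lives in the data directory, as in the examples; it is no substitute for pg_is_in_recovery() on a running server):

```shell
# Guess the replication role from postgresql.auto.conf: a replica
# normally carries a primary_conninfo entry, a primary does not.
pg_role_from_conf() {
  conf="$1/postgresql.auto.conf"
  if grep -q '^primary_conninfo' "$conf" 2>/dev/null; then
    echo replica
  else
    echo primary
  fi
}
```

For example, `pg_role_from_conf /pgdata/data` would print replica on a server configured like the first file above.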
    

    Tuesday, July 16, 2024

    How to check StaticConnectIdentifier and other properties using Data Guard Broker

    DGMGRL> show database cdb 'StaticConnectIdentifier'
      StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oricdb01.oric.no)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb_DGMGRL.skead.no)(INSTANCE_NAME=cdb)(SERVER=DEDICATED)))'
    
    There are several other properties that you can check, too. See this post for a couple of examples.

    The documentation differentiates between monitorable (read-only) and configurable properties.

    Friday, July 12, 2024

    Datapatch seems to "hang" or takes an extraordinarily long time to complete. What could be wrong?

    When using the datapatch utility to either apply or roll back patches in my Oracle 19c ORACLE_HOME, I was surprised at the time it took to (in this case) roll back the patches from version 19.23 to 19.22.

    The patching was just "hanging" without any sign of progress. I left the shell for several hours, thinking that progress would eventually be made and that patience was all that was needed.

    When I came back the morning after, it had not progressed at all.

    The solution was found in the logfiles located under $ORACLE_BASE/cfgtoollogs:

    cd $ORACLE_BASE/cfgtoollogs/sqlpatch
    
    # list your directories. Enter the most recent ones:
    ls -latr 
    
    drwxr-xr-x  2 oracle dba 4.0K Feb  1 18:25 sqlpatch_9072_2024_02_01_18_13_47
    drwxr-xr-x  2 oracle dba 4.0K May 14 09:21 sqlpatch_2561578_2024_05_14_09_20_33
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36199232_25601966.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36199232
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36420641_25643730.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36420641
    -rw-r--r--  1 oracle dba    0 May 23 18:12 36233263_25638263.lock
    drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36233263
    drwxr-xr-x  2 oracle dba 4.0K May 23 18:25 sqlpatch_4642_2024_05_23_18_11_41
    drwxr-xr-x  2 oracle dba 4.0K Jul 12 10:30 sqlpatch_1073748_2024_07_12_10_13_30
    
    cd sqlpatch_1073748_2024_07_12_10_13_30
     ls -altrh
    total 75M
    -rw-r--r--  1 oracle dba  12K Jul 12 10:14 install1.sql
    -rw-------  1 oracle dba 3.7M Jul 12 10:26 sqlpatch_catcon_0.log
    -rw-------  1 oracle dba  689 Jul 12 10:26 sqlpatch_catcon__catcon_1073748.lst
    -rw-r--r--  1 oracle dba  12K Jul 12 10:26 sqlpatch_summary.json
    -rw-r--r--  1 oracle dba  133 Jul 12 10:26 sqlpatch_progress.json
    -rw-r--r--  1 oracle dba 5.5M Jul 12 10:26 sqlpatch_invocation.log
    -rw-r--r--  1 oracle dba  66M Jul 12 10:26 sqlpatch_debug.log
    
    The source of the error was found in the file sqlpatch_catcon_0.log:
    CREATE OR REPLACE PACKAGE BODY ku$_dpload AS
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-01653: unable to extend table SYS.SOURCE$ by 8192 in tablespace SYSTEM
    
    This simple error caused the rest of the script to fail:
    SQL> --
    SQL> -- ============================================================================
    SQL> -- BEGINNING OF APPLY EXECUTION
    SQL> -- ============================================================================
    SQL> --
    SQL> -- Initial phase sets up internal infrastructure for rest of dpload.
    SQL> --
    SQL> SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual;
    SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual
           *
    ERROR at line 1:
    ORA-04067: not executed, package body "SYS.KU$_DPLOAD" does not exist
    ORA-06508: PL/SQL: could not find program unit being called: "SYS.KU$_DPLOAD"
    ORA-06512: at line 1
    
    Obvious solution: increase the size of the SYSTEM datafile, or add a new one:
    sqlplus / as sysdba
    alter session set container=PDB01;
    select f.file_name,f.bytes/1024/1024 "MB", f.autoextensible,f.maxbytes/1024/1024 "MaxMB", t.bigfile
    from dba_data_files f, v$tablespace t
    where f.tablespace_name='SYSTEM'
    and f.tablespace_name = t.name;
    
    
    
    FILE_NAME                           MB AUTOEXTENSIBLE        MaxMB BIGFILE
    --------------------------- ---------- -------------- ------------ -------
    /oradata/pdb01/system01.dbf      32712 YES            32767,984375 NO
    So we have a smallfile tablespace which is full!
    Add a datafile, still in the same session connected to the PDB01:
    alter tablespace system 
    add datafile '/oradata/pdb01/system02.dbf' size 256M autoextend on next 128M maxsize unlimited;
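As an aside, the first step of this post, locating the most recent sqlpatch invocation directory under $ORACLE_BASE/cfgtoollogs/sqlpatch, can be sketched as a small helper (the function name is made up):

```shell
# Print the most recently modified sqlpatch_* invocation directory;
# that is usually where a seemingly hanging datapatch run is logging.
latest_sqlpatch_dir() {
  ls -1dt "$1"/sqlpatch_*/ 2>/dev/null | head -1
}
```

Running `latest_sqlpatch_dir $ORACLE_BASE/cfgtoollogs/sqlpatch` saves you the `ls -latr` step when hunting for the current run's logs.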
    

    Thursday, July 11, 2024

    How to solve RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process

    In one of my cloned databases, the FRA was filling up and pushing towards the limit of 2 TB.

    Since it was cloned from a production database using Golden Gate, the capture processes followed along, but were not dropped in the cloned database.

    This caused the deletion policy in RMAN to be overridden, because Oracle thinks it will need the archivelogs for Golden Gate, even though no Golden Gate is configured for this particular database.

    When deleting an archivelog, RMAN would throw an error:
    RMAN> delete archivelog sequence 47447;
    
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=1352 device type=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: SID=1354 device type=DISK
    RMAN-08137: warning: archived log not deleted, needed for standby or upstream capture process
    archived log file name=/u04/fra/CDB/archivelog/2024_03_13/o1_mf_1_47447__j5wx4r32_.arc thread=1 sequence=47447
    
    The query below revealed the names of the blocking capture processes:
    sqlplus / as sysdba --> log on to root container
    SYS@_container_name SQL>  SELECT CAPTURE_NAME,
                CAPTURE_TYPE, STATUS,
                to_char(REQUIRED_CHECKPOINT_SCN,'999999999999999')as REQ_SCN ,
                to_char(OLDEST_SCN,'999999999999999')as OLDEST_SCN
         FROM DBA_CAPTURE;
    
    CAPTURE_NAME         CAPTURE_TY STATUS   REQ_SCN          OLDEST_SCN
    -------------------- ---------- -------- ---------------- ----------------
    OGG$CAP_MYDB2ABC     LOCAL      DISABLED     426508588124     426508588124
    OGG$CAP_MYDB2DEF     LOCAL      DISABLED     426508561845     426508561845
    
    The solution was to use the package dbms_capture_adm to drop the capture processes:
    sqlplus / as sysdba --> log on to root container
    
    SYS@_container_name SQL> exec dbms_capture_adm.drop_capture('OGG$CAP_MYDB2ABC');
    SYS@_container_name SQL> exec dbms_capture_adm.drop_capture('OGG$CAP_MYDB2DEF');
    
    Thanks to Bobby Curtis for pointing me in the right direction with his post about the same topic.

    Tuesday, July 9, 2024

    Stupid mistake that causes TNS-12533: TNS:illegal ADDRESS parameters during TNS name resolution

    Consider the following tnsnames.ora file:
    cdb =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = cdb)
        )
      )
    
    
     pdb1 =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
          (CONNECT_DATA =
            (SERVER = DEDICATED)
            (SERVICE_NAME = pdb1.oric.no)
            (INSTANCE_NAME = cdb)
          )
        )
    
    A tnsping against the cdb would return
     tnsping cdb
    
    Used parameter files:
    $ORACLE_HOME/network/admin/sqlnet.ora
    
    
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = cdb))) pdb1 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = pdb1.oric.no) (INSTANCE_NAME = cdb) ) )
    TNS-12533: TNS:illegal ADDRESS parameters
    
    For pdb1, the error was:
    TNS-03505: Failed to resolve name
    
    The solution is simple: open tnsnames.ora in an editor and remove the erroneous leading blank space before "pdb1":
     pdb1 =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
          (CONNECT_DATA =
            (SERVER = DEDICATED)
            (SERVICE_NAME = pdb1.oric.no)
            (INSTANCE_NAME = cdb)
          )
        )
    
    Make sure every tnsnames.ora entry starts at the left margin of the file, and this problem goes away.
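The check for misindented entries can be automated. A rough sketch (the function name is made up; continuation lines, which start with a parenthesis, are deliberately ignored):

```shell
# Report tnsnames.ora lines where an alias definition ("name =") is
# indented instead of starting in column one. Such entries break name
# resolution with TNS-12533 / TNS-03505 as described above.
find_indented_aliases() {
  grep -nE '^[[:space:]]+[A-Za-z][A-Za-z0-9_.]*[[:space:]]*=' "$1"
}
```

Running `find_indented_aliases $ORACLE_HOME/network/admin/tnsnames.ora` would have flagged the indented "pdb1" entry immediately.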

    Friday, June 14, 2024

    SQL for troubleshooting services in multitenant environment

    Query for troubleshooting services in a multitenant environment:
    set lines 200
    col name format a40
    col network_name format a40
    col pdb format a20
    col global format a20
    
    select name,network_name,con_id 
    from v$active_services
    order by con_id,name;
      
    select name,network_name,con_id,pdb,global 
    from v$services 
    order by con_id,name;
     
    select con_id#,name,network_name 
    from  cdb_service$ 
    order by con_id#,name ;
    exit
    

    Wednesday, June 5, 2024

    Some Oracle Net related terms and what they mean

    The terms used in Oracle Networking can be very similar and somewhat confusing.

    Here is an attempt to explain some of them.

    What is a network service name?

    The network service name is the alias on the first line (BRNY01) below:
    BRNY01 =
    (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oric-sandbox-db01.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = brny01.oric.no)
        )
      )
    
    A network service name resolves to a connect descriptor.

    What is a connect descriptor?

    A specially formatted description of the destination for a network connection. A connect descriptor contains destination service and network route information.
    Everything contained within the DESCRIPTION part constitutes the connect descriptor:
    (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oric-sandbox-db01.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = brny01.oric.no)
        )
      )
    
    What is a connect identifier?

    A connect identifier can be a network service name, a database service name, or a network service alias. Users have to pass a username and password together with the connect identifier to connect to a database.

    For all intents and purposes, think of a connect identifier as a database name, or a nickname for a database that you make up yourself.

    What is a connect string?

    By connect string we mean the information the user passes along to connect, such as username, password and connect identifier:
    CONNECT scott/tiger@net_service_name
    

    Tuesday, June 4, 2024

    Solution to ORA-29548: Java system class reported: release of Java system classes in the database (19.0.0.0.240116 1.8) does not match that of the oracle executable (19.0.0.0.240416 1.8)

    When I was trying to drop and recreate a schema in my PDB, I received the following error:
    drop user myuser cascade
    *
    ERROR at line 1:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-29548: Java system class reported: release of Java system classes in the database (19.0.0.0.240116 1.8) does not match that of the oracle executable (19.0.0.0.240416 1.8)
    
    This occurred because I had cloned the database from a source with a lower CPU level. To correct the situation, shut down the entire CDB:
    SYS@_container_name SQL> show con_name
    
    CON_NAME
    ------------------------------
    CDB$ROOT
    SYS@_container_name SQL> shutdown
    Database closed.
    Database dismounted.
    ORACLE instance shut down.
    
    Start up the database in upgrade mode:
    SYS@_container_name SQL> startup upgrade
    Database mounted.
    Database opened.
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       MIGRATE    YES
             4 pdb1                           MOUNTED
    
    Open the pdb1 in upgrade mode, too:
    SYS@_container_name SQL> alter pluggable database all open upgrade ;
    
    Pluggable database altered.
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       MIGRATE    YES
             4 pdb1                           MIGRATE    YES
    
    Run datapatch:
    cd $ORACLE_HOME/OPatch/
    ./datapatch -verbose
    
    Shut down the database, then open it normally:
    SYS@_container_name SQL> startup
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           MOUNTED
    SYS@_container_name SQL> alter pluggable database all open;
    
    Pluggable database altered.
    
    SYS@_container_name SQL> show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             4 pdb1                           READ WRITE NO
    
    You can now drop the user:
    SYS@_container_name SQL> alter session set container="pdb1";
    
    Session altered.
    
    SYS@_container_name SQL> drop user myuser cascade;
    
    User dropped.
    

    Wednesday, May 29, 2024

    How I removed a data guard config completely

    Here is my config:
    DGMGRL> show configuration;
    
    Configuration - DGConfig1
    
      Protection Mode: MaxPerformance
      Members:
      kej01       - Primary database
        kej01_stby1 - Physical standby database
    
    Fast-Start Failover:  Disabled
    
    Configuration Status:
    SUCCESS   (status updated 66 seconds ago)
    
    1. stop the redo apply process:
    DGMGRL>  edit database kej01_stby1 set state='APPLY-OFF';
    Succeeded.
    
    2. remove the standby database from the configuration:
    DGMGRL> remove database kej01_stby1;
    Removed database "kej01_stby1" from the configuration
    
    3. Remove the configuration itself:
    DGMGRL> remove configuration;
    Removed configuration
    
    4. On both the primary and standby server, edit the $TNS_ADMIN/listener.ora by removing these entries:
    
    (SID_DESC =
          (GLOBAL_DBNAME = kej01_DGMGRL)
          (ORACLE_HOME = /orasw/19c)
          (SID_NAME = kej01)
        )
       (SID_DESC =
          (GLOBAL_DBNAME = kej01_DGMGRL.skead.no)
          (ORACLE_HOME = /orasw/19c)
          (SID_NAME = kej01)
        )
     
    Make sure to stop/start the listener afterwards.
    5. On the previous standby, reset db_unique_name:
    SYS@kej01>SQL>alter system set db_unique_name='kej01' scope=spfile;
    
    6. On both servers, stop the broker processes:
    SYS@kej01>SQL>alter system set dg_broker_start=false scope=both;
    
    System altered.
    
    7. On the standby, finish database recovery:
    SYS@kej01>alter database recover managed standby database finish;
    
    Database altered.
    
    The database is still mounted as a physical standby:
    SYS@kej01>SQL>select open_mode, database_role from v$database;
    
    OPEN_MODE            DATABASE_ROLE
    -------------------- ----------------
    MOUNTED              PHYSICAL STANDBY
    
    If you open it now, it will be in READ ONLY status:
    SYS@kej01>SQL>alter database open;
    
    Database altered.
    
    SYS@kej01>select open_mode, database_role from v$database;
    
    OPEN_MODE            DATABASE_ROLE
    -------------------- ----------------
    READ ONLY            PHYSICAL STANDBY
    
    
    8. Instruct the former standby database that it is now indeed a normal ("primary") database:
    SYS@kej01>ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
    
    Database altered.
    
    The alert log will show that we are cutting all ties to the previous role:
    2024-05-29T15:55:09.123174+02:00
    ALTER DATABASE SWITCHOVER TO PRIMARY (kej01)
    2024-05-29T15:55:09.125019+02:00
    Maximum wait for role transition is 15 minutes.
    TMI: kcv_commit_to_so_to_primary wait for MRP to finish BEGIN 2024-05-29 15:55:09.125651
    TMI: kcv_commit_to_so_to_primary wait for MRP to finish END 2024-05-29 15:55:09.126990
    TMI: kcv_commit_to_so_to_primary Switchover from physical BEGIN 2024-05-29 15:55:09.130097
    
    Standby terminal recovery start SCN: 4695193
    RESETLOGS after incomplete recovery UNTIL CHANGE 4695565 time 05/29/2024 15:28:15
    ET  (PID:1614518): ORL pre-clearing operation disabled by switchover
    Online log /log/oradata/kej01/redo1.log: Thread 1 Group 1 was previously cleared
    Online log /log/oradata/kej01/redo2.log: Thread 1 Group 2 was previously cleared
    Online log /log/oradata/kej01/redo3.log: Thread 1 Group 3 was previously cleared
    Standby became primary SCN: 4695192
    2024-05-29T15:55:16.588617+02:00
    Setting recovery target incarnation to 3
    2024-05-29T15:55:16.662719+02:00
    NET  (PID:1614518): Database role cleared from PHYSICAL STANDBY [kcvs.c:1133]
    Switchover: Complete - Database mounted as primary
    TMI: kcv_commit_to_so_to_primary Switchover from physical END 2024-05-29 15:55:16.667784
    Completed: ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN
    2024-05-29T15:55:27.547369+02:00
    alter database open
    
    
    At this point, my standby database is once again mounted, but it has now assumed the PRIMARY role, which is what I want:
    SYS@kej01>SQL>select open_mode, database_role from v$database;
    
    OPEN_MODE            DATABASE_ROLE
    -------------------- ----------------
    MOUNTED              PRIMARY
    
    9. Open it now, and it will be in read write mode:
    SYS@kej01>SQL>alter database open;
    
    Database altered.
    
    SYS@kej01>SQL>select open_mode, database_role from v$database;
    
    OPEN_MODE            DATABASE_ROLE
    -------------------- ----------------
    READ WRITE           PRIMARY
    
    According to the Oracle documentation, the syntax above is pre-12.1 style, which is still supported, but DBAs are encouraged to use
    ALTER DATABASE SWITCHOVER TO target_db_name [FORCE] [VERIFY];
    and
    ALTER DATABASE FAILOVER TO target_db_name;
    
    instead.

    Friday, May 17, 2024

    Easy Connect syntax

    Basic syntax:
    sqlplus uid/pwd@//servername.domain.com:port_number/service_name
    
    If you do not specify a port, 1521 is assumed.
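The default-port rule can be illustrated with a tiny helper that assembles an Easy Connect string (the function name is made up for this sketch):

```shell
# Build an Easy Connect string //host:port/service_name; the port
# defaults to 1521 when omitted, mirroring sqlplus behaviour.
ezconnect() {
  host="$1"; service="$2"; port="${3:-1521}"
  printf '//%s:%s/%s\n' "$host" "$port" "$service"
}
```

For example, `ezconnect oric-sandbox-db01.oric.no cdb.oric.no` prints //oric-sandbox-db01.oric.no:1521/cdb.oric.no.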

    Example connecting to a container database:
    C:\Users\vegard>sqlplus system/mypassword@oric-sandbox-db01.oric.no/cdb.oric.no
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.23.0.0.0
    
    SQL> show user
    USER is "SYSTEM"
    SQL> show con_name
    
    CON_NAME
    ------------------------------
    CDB$ROOT
    
    Example connecting to a pdb plugged into the same container database:
    C:\Users\vegard>sqlplus system/mypassword@oric-sandbox-db01.oric.no/vksa.oric.no
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.23.0.0.0
    
    SQL> show user
    USER is "SYSTEM"
    SQL> show con_name
    
    CON_NAME
    ------------------------------
    VKSA01
    SQL>
    
    You can also use easy connect syntax with the tnsping utility, just to check whether an Oracle listener is responding on a specific server.

    Example using tnsping from my Windows 11 client, using a cmd/powershell terminal:
    C:\Users\vegard>tnsping "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oric-sandbox-db01.oric.no)(PORT=1521)))"
    
    Attempting to contact (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oric-sandbox-db01.oric.no)(PORT=1521)))
    OK (50 msec)
    
    Documentation for the Net Services Administrator's Guide can be found here

    Wednesday, May 8, 2024

    Syntax for dropping a temporary file from a temporary tablespace

    For future reference, here is how to drop and recreate a temporary file in an existing temporary tablespace, but with a changed path. If applicable, set the container:
    alter session set container=pdb1;
    
    Drop the current tempfile, and recreate it in the desired location. Make it autoextensible:
    alter database tempfile '/data/oradata/db01/temp1.dbf' drop including datafiles;
    alter tablespace TEMP1 add tempfile '/data/oradata/db02/temp1.dbf' size 1G; 
    alter database tempfile '/data/oradata/db02/temp1.dbf' autoextend on next 1G maxsize unlimited;
    
    A good query for temporary tablespaces:
    select t.ts#, t.name,t.bigfile, t.con_id, f.name "file_name", tg.group_name
    from v$tempfile f join v$tablespace t
    on (t.TS# = f.TS#)
    and t.con_id = f.CON_ID join DBA_TABLESPACE_GROUPS tg on (tg.tablespace_name = t.name);
    
           TS# NAME                           BIG     CON_ID file_name                                GROUP_NAME
    ---------- ------------------------------ --- ---------- ---------------------------------------- ------------------------------
             3 TEMP1                          YES          3 /u02/oradata/pdb1/temp1.dbf                    TEMP
             4 TEMP2                          YES          3 /u02/oradata/pdb1/temp2.dbf                    TEMP
    

    Monday, May 6, 2024

    How to solve errors like "Interim patch num/num (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB"

    After running autoupgrade, I was left with the following message in the logfile $ORACLE_BASE/cfgtoollogs/upgrade/auto/status/status.log:
    [Stage Name]    NONCDBTOPDB
    [Status]        FAILURE
    [Start Time]    2024-05-06 14:29:45
    [Duration]      0:05:33
    [Log Directory] /u01/oracle/txs01/101/noncdbtopdb
    Cause:AutoUpgException [After running noncdb_to_pdb.sql, the pdb was closed or was opened in restricted mode]
    Reason:None
    Action:None
    Info:None
    ExecutionError:Yes
    Error Message:AutoUpgException [After running noncdb_to_pdb.sql, the pdb was closed or was opened in restricted mode]
    
    A query against the database shows that there are errors in the pdb_plug_in_violations view:
    SELECT TIME,NAME,CAUSE,STATUS,MESSAGE FROM PDB_PLUG_IN_VIOLATIONS;
    
    TIME                      NAME    CAUSE       STATUS     MESSAGE
    -------------------       -----   ---------   --------   ------------------------------------------------------------------------------------------------------------------------------
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35926646/25513953 (OJVM RELEASE UPDATE: 19.22.0.0.240116 (35926646)): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 36092868/25496280 (DATAPUMP BUNDLE PATCH 19.22.0.0.0): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35648110/25365038 (OJVM RELEASE UPDATE: 19.21.0.0.231017 (35648110)): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   Interim patch 35787077/25410019 (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   PENDING   '19.22.0.0.0 Release_Update 2401040239' is installed in the CDB but '19.21.0.0.0 Release_Update 2309301519' is installed in the PDB
    
    The solution is in Doc ID 2604940.1 "Datapatch precheck failed : Release update is BEING installed to PDB but is not installed in CDB$ROOT. This will cause a patch mismatch between this PDB and CDB$ROOT":

    When executed, datapatch tries to apply the release update in the PDB first, without applying it in CDB$ROOT.

    The solution is to execute datapatch individually in each of the containers, starting with CDB$ROOT:
    datapatch -verbose -pdbs CDB\$ROOT
    datapatch -verbose -pdbs TXS01
    
    sqlplus / as sysdba
    alter session set container=PDB$SEED;
    alter session set "_oracle_script"=TRUE;
    alter pluggable database pdb$seed close immediate instances=all;
    alter pluggable database pdb$seed OPEN READ WRITE;
    select open_mode from v$database;
    exit
    
    datapatch -verbose -pdbs PDB\$SEED
    
    sqlplus / as sysdba
    alter session set "_oracle_script"=FALSE;
    
    You should now see that the status has changed from PENDING to RESOLVED:
    TIME                      NAME    CAUSE       STATUS     MESSAGE
    -------------------       -----   ---------   --------   ------------------------------------------------------------------------------------------------------------------------------
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35926646/25513953 (OJVM RELEASE UPDATE: 19.22.0.0.240116 (35926646)): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 36092868/25496280 (DATAPUMP BUNDLE PATCH 19.22.0.0.0): Installed in the CDB but not in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35648110/25365038 (OJVM RELEASE UPDATE: 19.21.0.0.231017 (35648110)): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   Interim patch 35787077/25410019 (DATAPUMP BUNDLE PATCH 19.21.0.0.0): Not installed in the CDB but installed in the PDB
    06.05.2024 15.23.20       TXS01   SQL Patch   RESOLVED   '19.22.0.0.0 Release_Update 2401040239' is installed in the CDB but '19.21.0.0.0 Release_Update 2309301519' is installed in the PDB
    
    You may now clear the errors:
    SYS@cdb>SQL>exec DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS( pdb_name => 'TXS01');
    
    PL/SQL procedure successfully completed.
    
    SYS@cdb>SQL>select time,name,cause,status,message from PDB_PLUG_IN_VIOLATIONS order by name;
    
    no rows selected
    

    Monday, April 22, 2024

    grep for all database names in /etc/oratab

    The file /etc/oratab contains:
    #
    mydb01:/orainst/product/19c:Y
    cdb:/orainst/product/19c:N
    
    To extract the database names:
    grep "^[^#]" /etc/oratab | awk -F':' '{print $1}'
    
    Result:
    mydb01
    cdb
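The same extraction wrapped in a reusable function, also printing the autostart flag from the third field (a sketch; the function name is made up):

```shell
# Print SID and autostart flag for every non-comment entry in an
# oratab-style file. Comment and blank lines are skipped.
parse_oratab() {
  awk -F: '/^[^#]/ && NF >= 3 {print $1, $3}' "$1"
}
```

For the /etc/oratab shown above, `parse_oratab /etc/oratab` prints "mydb01 Y" and "cdb N".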
    

    Thursday, April 18, 2024

    Solution to script warning: here-document at line < num > delimited by end-of-file (wanted `!')

    In a script, I had formatted my code as follows:
      for l in $(cat file.txt|grep 'Status'|  awk -F' ' '{print $2}'); do
        if [ $l != "SUCCESS" ]; then
           echo $l ": Errors found. Please check logfile status.log"
        else
          echo "Readjust memory for the remaining cdb instance"
            su - oracle <<!
            
            -- code here --
          !
          runAdjustMemory
        fi
      done
    
    During execution, my script failed with
    ./myscript.sh: line 32: warning: here-document at line 20 delimited by end-of-file (wanted `!')
    ./myscript.sh: line 33: syntax error: unexpected end of file
    


    Cause: The termination character, in this case the exclamation point (!), was indented in the code.

    Solution: Remove the indentation and pull the termination character all the way to the left margin of your editor:
      for l in $(cat file.txt|grep 'Status'|  awk -F' ' '{print $2}'); do
        if [ $l != "SUCCESS" ]; then
           echo $l ": Errors found. Please check logfile status.log"
        else
          echo "Readjust memory for the remaining cdb instance"
            su - oracle <<!
            
            -- code here --
    !
          runAdjustMemory
        fi
      done
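
    As an alternative (a sketch, not taken from the original script): bash also offers the <<- form of the here-document operator, which strips leading TAB characters (spaces do not count) from each body line and from the terminator itself, so the terminator may stay visually indented as long as the indentation is tabs:

```shell
#!/bin/bash
# Sketch: with <<- the leading tabs are stripped, so the indented
# terminator on the last line is still recognized. cat stands in
# for "su - oracle" here; the indentation below is literal tabs.
cat <<-!
	-- code here --
	!
```

    This keeps the script readable without moving the terminator to column one, but it only works if your editor inserts real tabs rather than spaces.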
    

    Wednesday, April 17, 2024

    Solution to ORA-28547: connection to server failed, probable Oracle Net admin error

    When trying to perform sqlplus actions against one of my databases, I received
    oracle@oric-dbserver01:[mydb01]# sqlplus / as sysdba
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Wed Apr 17 10:43:56 2024
    Version 19.22.0.0.0
    
    Copyright (c) 1982, 2023, Oracle.  All rights reserved.
    
    ERROR:
    ORA-28547: connection to server failed, probable Oracle Net admin error
    
    Cause:

    This happens because the Oracle NLS files are not found in their default location on this particular server.

    If there are no .nlb files in $ORACLE_HOME/nls/data, you need to find out where they are located and set the ORA_NLS10 environment variable accordingly.

    If you look up the error, you will get a hint about this:
    oracle@oric-dbserver01:[mydb01]# oerr ora 28547
    A failure occurred during initialization of a network connection from a client process to the Oracle server.  ... 
    
    The most frequent specific causes are: [ oracle lists several potential causes here ]
    
    The character set of the database is not recognized by the client process, which may be caused by an incorrect or unnecessary setting 
    of the ORA_NLS10 client environment variable or by a new or user-defined character set installed in the Oracle server and used for the database.
    


    Solution:

    Set the ORA_NLS10 environment variable:
    export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
    
    When this is done, sqlplus will work.

    Don't forget to add the same line to the .bash_profile of the operating system user that owns and runs the Oracle server software. For most installations I have seen, this will be the OS user called "oracle".
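
    The check and the fix can be combined in a small shell sketch. The fallback path (the 9idata subdirectory) matches the case described above, but the ORACLE_HOME default here is an assumption; adjust it for your installation:

```shell
#!/bin/bash
# Sketch: only set ORA_NLS10 when the default NLS location has no
# .nlb files. The ORACLE_HOME fallback path below is an assumption.
ORACLE_HOME=${ORACLE_HOME:-/orainst/product/19c}

if ! ls "$ORACLE_HOME"/nls/data/*.nlb >/dev/null 2>&1; then
  if ls "$ORACLE_HOME"/nls/data/9idata/*.nlb >/dev/null 2>&1; then
    # The .nlb files live in the 9idata subdirectory instead:
    export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
    echo "ORA_NLS10 set to $ORA_NLS10"
  else
    echo "No .nlb files found under $ORACLE_HOME/nls/data" >&2
  fi
fi
```

    Running this once from .bash_profile makes the setting survive new login shells, which is where sqlplus and dbca pick it up.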

    Solution to [FATAL] [DBT-05509] Failed to connect to the specified database (cdb) in dbca

    I was attempting to create a container database using dbca like this:
    dbca -createDatabase -responsefile /home/oracle/scripts/cdb.rsp -silent
    
    The following error kept coming up:
    [FATAL] [DBT-05509] Failed to connect to the specified database (cdb).
       CAUSE: OS Authentication might be disabled for this database (cdb).
       ACTION: Specify a valid sysdba user name and password to connect to the database.
    
    Solution:

    Same solution as in my previous post "My solution to ORA-12701: CREATE DATABASE character set is not known":

    If there are no *.nlb files in the default location $ORACLE_HOME/nls/data, set the ORA_NLS10 environment variable to the directory where these files actually reside. In my case, they were found in the subdirectory 9idata instead:
    export ORA_NLS10=$ORACLE_HOME/nls/data/9idata
    
    Otherwise Oracle won't be able to find the language files it needs to create the database.