Wednesday, October 9, 2024

Workaround for ORA-27069 attempt to do I/O beyond the range of the file during RMAN clone from active database

This is the message I received when starting an active database duplication using RMAN:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/09/2024 10:40:19
RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file
The RMAN script:
connect target sys/pwd@mydb1
connect auxiliary sys/pwd@mydb2
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate auxiliary channel aux1 type disk;
allocate auxiliary channel aux2 type disk;
configure device type disk parallelism 2;
debug io;
DUPLICATE TARGET DATABASE TO mydb2
FROM ACTIVE DATABASE
USING COMPRESSED BACKUPSET;
debug off;
}
Solution:
  • Use a pfile to start the auxiliary instance, and *not* an spfile. If you do not have one, log onto the not-yet-started instance and create one:
    sqlplus / as sysdba
    create pfile from spfile;
    
  • Set the environment variable ORA_RMAN_SGA_TARGET (in MB) to the same value as, or slightly more than, the auxiliary database's total SGA:
    export ORA_RMAN_SGA_TARGET=7900
    Run the RMAN script again and it should proceed past the mount stage and start restoring the files.
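The two steps above boil down to exporting the variable in the same shell session that will run RMAN. A minimal sketch, assuming you have already read the auxiliary database's total SGA from its pfile (7900 MB is the example value from this post):

```shell
# Example value only: set this to the auxiliary database's total SGA in MB,
# or slightly above it.
SGA_MB=7900
export ORA_RMAN_SGA_TARGET=$SGA_MB
echo "ORA_RMAN_SGA_TARGET=$ORA_RMAN_SGA_TARGET"
# then re-run the rman duplication script from this same shell session
```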
    Thursday, October 3, 2024

    Workaround for RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file when using dbca

    Late night error when trying to create a cdb out of the same ORACLE_HOME as an older, non-cdb database:

    dbca reports:
    [ 2024-10-03 18:34:35.876 CEST ] Prepare for db operation
    DBCA_PROGRESS : 10%
    [ 2024-10-03 18:34:35.956 CEST ] Copying database files
    DBCA_PROGRESS : 40%
    DBCA_PROGRESS : 100%
    [ 2024-10-03 18:35:04.332 CEST ] [FATAL] Recovery Manager failed to restore datafiles. Refer logs for details.
    DBCA_PROGRESS : 10%
    DBCA_PROGRESS : 0%
    
    Detailed log file shows:
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file
    
    Potential cause: your memory is too small to hold the extra instance you are attempting to create.

    Potential solution: scale your total memory up. If necessary, adjust hugepages to fit the extra instance.
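As a rough sanity check before touching vm.nr_hugepages, you can compute how many extra hugepages the new instance would need. This is a sketch with assumed example values: an 8 GB SGA for the new instance and the typical 2 MB hugepage size (check Hugepagesize in /proc/meminfo on your own server):

```shell
# Example values only: adjust SGA_MB to the new instance's planned SGA and
# HUGEPAGE_KB to the Hugepagesize reported in /proc/meminfo (typically 2048).
SGA_MB=8192
HUGEPAGE_KB=2048
PAGES_NEEDED=$(( SGA_MB * 1024 / HUGEPAGE_KB ))
echo "the new instance needs roughly $PAGES_NEEDED hugepages on top of current usage"
```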

    Wednesday, October 2, 2024

    Workaround for error ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid

    When trying to update your database's LOCAL_LISTENER parameter like this:
    alter system set local_listener=LISTENER_CDB scope=both
    
    and you get the following error stack:
    ERROR at line 1:
    ORA-32017: failure in updating SPFILE
    ORA-00119: invalid specification for system parameter LOCAL_LISTENER
    ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid
    ORA-00132: syntax error or unresolved network name 'LISTENER_CDB'
    
    The solution is to first change your $TNS_ADMIN/tnsnames.ora so that it contains the alias you wish to set the local_listener parameter to. For example, change the following entry:
    LISTENER =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
    
    to
    LISTENER_CDB =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
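Before retrying the alter system command, it is worth confirming that the alias is actually present in the file. The snippet below is a self-contained sketch: it builds a throwaway tnsnames.ora containing the example entry from this post and greps for the alias, exactly as you would against your real $TNS_ADMIN/tnsnames.ora:

```shell
# Build a mock tnsnames.ora in a temp directory (content taken from this post)
TNS_DIR=$(mktemp -d)
cat > "$TNS_DIR/tnsnames.ora" <<'EOF'
LISTENER_CDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
EOF
# The alias must appear at the start of a line to be resolvable
if grep -q '^LISTENER_CDB' "$TNS_DIR/tnsnames.ora"; then
  RESULT=found
else
  RESULT=missing
fi
echo "$RESULT"
```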
    
    For more information about the local_listener parameter, see this earlier post

    Wednesday, September 25, 2024

    How to plug and unplug a PDB in a multitenant configuration

    What exactly does it mean to unplug and plug a database in a multitenant architecture?

  • To unplug means to close the PDB and then generate its manifest file.
  • To plug means using the manifest file to create a new pluggable database.

    In the examples below, I am unplugging the databases pdb1 and pdb2 into two different manifest files:
    sqlplus / as sysdba
    alter pluggable database pdb1 close immediate;
    alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';
    
    alter pluggable database pdb2 close immediate;
    alter pluggable database pdb2 unplug into '/u01/app/oracle/oradata/pdb2.xml';
    
    The XML file created by the unplug operation contains the names and full paths of the tablespaces, as well as the data files, of the unplugged PDB.

    This information is then used by a subsequent plug-in operation.

    After having unplugged a pdb you can drop the pluggable database but physically keep the datafiles belonging to it, like this:
    drop pluggable database pdb1 keep datafiles;
    
    If you wish to plug the database into a different CDB, it is a good idea to check the compatibility of the database with the CDB first. This is particularly true if the new CDB was created with newer binaries than the original CDB, or if it is on a different host. In the new CDB, execute the following PL/SQL code:
    set serveroutput on
    
    DECLARE
       compatible BOOLEAN := FALSE;
    BEGIN  
       compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
            pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
       IF compatible THEN
          DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
       ELSE
          DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
       END IF;
    END;
    /
    
    If the output shows that the database is compatible with the new CDB, proceed to plug in the database.

    Plug operations can be done using two different methods: NOCOPY and COPY.

    Using the NOCOPY method will use the data files of the unplugged PDB to plug the PDB into another (or the same) CDB without any physical file copy:
    create pluggable database pdb_plug_nocopy using '/u01/app/oracle/oradata/pdb1.xml'
    NOCOPY
    TEMPFILE REUSE;
    
    When using the NOCOPY option, the plug-in operation takes only a few seconds. The original data files of the unplugged PDB now belong to the plugged-in PDB in the new (or the same) CDB. A file with the same name as the temp file specified in the XML file already exists in the target location; therefore, the TEMPFILE REUSE clause is required.

    Using the COPY method will physically copy the datafiles from their original location to a new destination:
    mkdir -p /u01/app/oracle/oradata/cdb2/pdb_plug_copy
    
    create pluggable database pdb_plug_copy using '/u01/app/oracle/oradata/pdb2.xml'
    COPY
    FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2','/u01/app/oracle/oradata/cdb2/pdb_plug_copy');
    
    Verify the status, open mode and file location of the plugged-in PDB (the example below shows the output for the PDB created using the COPY method, but the check should always be done regardless of the method used):
    select pdb_name, status from cdb_pdbs where pdb_name='PDB_PLUG_COPY'; --> should return PDB_PLUG_COPY and NEW
    select open_mode from v$pdbs where name='PDB_PLUG_COPY'; --> should return MOUNTED
    
    select name from v$datafile where con_id=(select con_id from v$pdbs where name='PDB_PLUG_COPY'); --> should return the full path and name of the datafiles belonging to the system and sysaux tablespaces.
    
    Whether you used the NOCOPY or the COPY method, you now have to open the newly plugged-in database in the new CDB:

    alter pluggable database pdb_plug_nocopy open;
    
    alter pluggable database pdb_plug_copy open;
    
    show con_name
    show pdbs
    
    Source: Oracle 12cR1 tutorial
    Monday, September 23, 2024

    How to prevent dbca from creating folders in capital letters during database creation

    This post is derived from my previous post, but I have come to realize that I have needed to look up this particular detail on at least a couple of occasions, so it deserves a post of its own.

    To prevent dbca from creating folders with capital letters during database creation, you need to change the directives
    datafileDestination=/disk1/oradata/{DB_UNIQUE_NAME}/
    recoveryAreaDestination=/disk2/flash_recovery_area/{DB_UNIQUE_NAME}
    
    to
    datafileDestination=/disk1/oradata/mydb
    recoveryAreaDestination=/disk2/flash_recovery_area/mydb
    
    in your response file.

    The response file would then look something like:
    #-------------------------------------------------------------------------------
    # Do not change the following system generated value.
    #-------------------------------------------------------------------------------
    responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
    gdbName=mydb.oric.no
    sid=mydb
    databaseConfigType=SI
    createAsContainerDatabase=false
    templateName=/u01/oracle/product/19c/assistants/dbca/templates/General_Purpose.dbc
    sysPassword=manager1
    systemPassword=manager1
    datafileDestination=/disk1/oradata/mydb
    recoveryAreaDestination=/disk2/flash_recovery_area/mydb
    storageType=FS
    characterSet=al32utf8
    variables=
    initParams=db_recovery_file_dest_size=50G
    memoryPercentage=75
    databaseType=MULTIPURPOSE
    enableArchive=true
    redoLogFileSize=2048
    
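    With the response file saved, dbca can then be run in silent mode against it. The path below is a hypothetical example location; the flags are the standard dbca silent-mode syntax:

```shell
# Hypothetical response-file location; run the resulting command as the
# oracle OS user.
RSP=/home/oracle/mydb.rsp
CMD="dbca -silent -createDatabase -responseFile $RSP"
echo "$CMD"
```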

    Thursday, September 19, 2024

    What is the $ORACLE_HOME/dbs/hc_$ORACLE_SID.dat file?

    In every $ORACLE_HOME/dbs folder you will find a file named hc_$ORACLE_SID.dat.

    What is it, and is it essential for your instance?

    Oracle Support states:

    The $ORACLE_HOME/dbs/hc_.dat is created for the instance health check monitoring. It contains information used to monitor the instance health and to determine why it went down if the instance isn't up.

    The file can be deleted while the instance is up; it won't cause any harm to your instance.

    In earlier versions of the Oracle database software a bug existed that would trigger an ORA-7445 if the file was deleted while the database was up, but this was fixed as early as in version 11.2.

    The file is created at every instance startup.
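    As a sketch of where to look (the ORACLE_HOME and ORACLE_SID defaults below are example values; your real environment will already have them set):

```shell
# Example defaults only; a real Oracle environment sets these variables.
ORACLE_HOME=${ORACLE_HOME:-/u01/oracle/product/19c}
ORACLE_SID=${ORACLE_SID:-mydb}
HC_FILE="$ORACLE_HOME/dbs/hc_${ORACLE_SID}.dat"
echo "$HC_FILE"
# safe to remove while the instance runs; recreated at the next startup:
# rm -f "$HC_FILE"
```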

    Source: What Is The $ORACLE_HOME/dbs/hc_.dat File? (Doc ID 390474.1) from Oracle Support

    Tuesday, September 17, 2024

    Where does an Oracle EBS 12.2 appserver save logs from the concurrent worker processes?

    Look in the directory $ADOP_LOG_HOME

    In here, every session will create its own subfolder. On my server, the listing looked like this:
    10  11  2  3  4  5  6  7  8  9
    
    In my case, I had to enter the folder named after session 11.

    In here you will find folders named according to execution time, for example

    20240916_135516

    Inside this folder, you will find folders named according to action, for example "prepare", "cutover", or "apply".

    In my case, step inside the "apply" directory and you will find a folder named after your appserver.

    Finally, you will find a folder named according to the patch number, for example
    36876222_N
    
    with a log directory underneath it.

    So the path $ADOP_LOG_HOME/11/20240916_135516/apply/oric-ebsapp01/36876222_N/log is the complete path to the log directory for the session I was looking for.
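    The directory layout described above can be sketched and searched like this. The snippet builds a throwaway mock of the tree, reusing the example session number, timestamp, phase, appserver name and patch number from this post, then locates the log directory the way you would under a real $ADOP_LOG_HOME:

```shell
# Mock of the adop log layout:
#   $ADOP_LOG_HOME/<session>/<timestamp>/<phase>/<appserver>/<patch>/log
BASE=$(mktemp -d)
LOGDIR="$BASE/11/20240916_135516/apply/oric-ebsapp01/36876222_N/log"
mkdir -p "$LOGDIR"
# find the log directory; against a real tree, replace $BASE with $ADOP_LOG_HOME
FOUND=$(find "$BASE" -type d -name log | head -1)
echo "$FOUND"
```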