dbca -createDatabase -responseFile /home/oracle/${ORACLE_SID}.rsp -customScripts /home/oracle/post_cdb_creation.sql -silent
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Monday, October 14, 2024
How to add a post-creation script to your dbca createDatabase statement
Given that you have saved your response file in the oracle user's home directory, use the -customScripts flag to point to a script that will be run *after* the CDB is ready. I have found this useful in situations where you want to enforce a non-default standard that Oracle simply denies you every time you create a new CDB.
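As an illustration, such a post-creation script could look like the sketch below. The statements are examples of my own choosing, not taken from any particular standard, and the paths and settings are hypothetical; adjust them to whatever your own standard requires:

```sql
-- /home/oracle/post_cdb_creation.sql (illustrative sketch)
-- Runs once against the newly created CDB, as SYS.
alter system set db_create_file_dest='/u02/oradata' scope=both;
alter profile default limit password_life_time unlimited;
exit
```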
Thursday, October 10, 2024
autoupgrade error TARGET_CDB_COMPATIBILITY
If you are converting your non-cdb to a pdb using autoupgrade, you might run into this error:
2024-10-10 14:19:51.697 INFO build.version:24.6.240905 build.hash:0ca273885 build.date:2024/09/05 11:30:40 -0400 build.max_target_version:23 build.MOS_NOTE:2485457.1 build.supported_target_versions:12.2,18,19,21,23 build.type:production build.MOS_LINK:https://support.oracle.com/epmos/faces/DocumentDisplay?id=2485457.1 build.label:(HEAD, tag: v24.6, origin/stable_devel, stable_devel) build.hash_date:2024/09/05 11:23:06 -0400
2024-10-10 14:19:51.850 INFO No preupgrade user action defined
2024-10-10 14:20:13.184 INFO Analyzing mydb01, 62 checks will run using 8 threads
2024-10-10 14:20:39.765 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/prechecks/mydb01_preupgrade.html not found
2024-10-10 14:20:39.767 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/prechecks/mydb01_preupgrade.log not found
2024-10-10 14:20:39.768 INFO File /u01/oracle/cfgtoollogs/autoupgrade/mydb01/mydb01/104/dbupgrade/datapatch_summary.log not found
2024-10-10 14:20:39.805 ERROR The following checks have ERROR severity and no auto fixup is available or the fixup failed to resolve the issue. Fix them before continuing: mydb01 TARGET_CDB_COMPATIBILITY
2024-10-10 14:20:39.925 ERROR Dispatcher failed: The following checks have ERROR severity and no auto fixup is available or the fixup failed to resolve the issue. Fix them before continuing: mydb01 TARGET_CDB_COMPATIBILITY
oracle.upgrade.autoupgrade.utils.errors.AutoUpgException: The following checks have ERROR severity and no auto fixup is available or the fixup failed to resolve the issue. Fix them before continuing: mydb01 TARGET_CDB_COMPATIBILITY
    at oracle.upgrade.autoupgrade.dbchecks.standard.checks.ChecksController.evaluateBlockerChecks(ChecksController.java:310)
    at oracle.upgrade.autoupgrade.dbchecks.standard.checks.ChecksController.executeChecks(ChecksController.java:166)
    at oracle.upgrade.autoupgrade.dispatcher.facade.subsystems.ExecuteChecks.executeStage(ExecuteChecks.java:95)
    at oracle.commons.dispatcher.JobStage.executeStage(JobStage.java:80)
    at oracle.commons.dispatcher.RunJobDefinition.runJob(RunJobDefinition.java:144)
    at oracle.upgrade.autoupgrade.dispatcher.v2.def.RunAutoUpgradeJob.executeDispatcher(RunAutoUpgradeJob.java:62)
    at oracle.upgrade.autoupgrade.dispatcher.AutoUpgDispatcher.run_(AutoUpgDispatcher.java:262)
    at oracle.upgrade.autoupgrade.dispatcher.AutoUpgDispatcher.run(AutoUpgDispatcher.java:245)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

Cause:
The database you are converting to a pluggable database is installed with more options than the cdb you are trying to plug it into.
Solution:
Recreate your cdb so that it has the same options installed as the database you are trying to convert to a pdb.
Check the installed options in both databases like this:
set lines 200 pages 100
col comp_name format a40
col status format a20
SELECT COMP_NAME, COMP_ID, VERSION, STATUS FROM DBA_REGISTRY;
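To compare the two result sets quickly, one option (my own suggestion, not part of the original fix) is to spool the COMP_ID column from each database to a file and run comm on the sorted lists. The file names and their contents below are purely illustrative:

```shell
# Illustrative: each file is assumed to hold one COMP_ID per line,
# spooled from DBA_REGISTRY in the non-cdb and the cdb respectively.
cat > /tmp/options.noncdb.txt <<'EOF'
CATALOG
CATPROC
JAVAVM
XML
EOF
cat > /tmp/options.cdb.txt <<'EOF'
CATALOG
CATPROC
XML
EOF
# comm requires sorted input (these lists already are); -23 prints
# lines only in the first file, i.e. options installed in the non-cdb
# but missing from the cdb you are trying to plug it into.
comm -23 /tmp/options.noncdb.txt /tmp/options.cdb.txt
```

Any option printed here must be installed in the cdb (or the cdb recreated) before the conversion can proceed.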
Wednesday, October 9, 2024
Workaround for ORA-27069 attempt to do I/O beyond the range of the file during RMAN clone from active database
This is the message I received when starting an active database duplication using RMAN:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 10/09/2024 10:40:19
RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file

The rman script:

connect target sys/pwd@mydb1
connect auxiliary sys/pwd@mydb2
run{
 allocate channel c1 type disk;
 allocate channel c2 type disk;
 allocate auxiliary channel aux1 type disk;
 allocate auxiliary channel aux2 type disk;
 configure device type disk parallelism 2;
 debug io;
 DUPLICATE TARGET DATABASE TO mydb2 FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET;
 debug off;
}

Solution:

Use a pfile to start the auxiliary instance, and *not* an spfile. If you do not have one, log onto the not-yet-mounted instance and create one:

sqlplus / as sysdba
create pfile from spfile;

Set the environment variable ORA_RMAN_SGA_TARGET to the same value (or a little more) as the auxiliary database's total SGA, expressed in MB:

export ORA_RMAN_SGA_TARGET=7900

Run the rman script again and it should proceed past the mount stage and start restoring the files.
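As a rough helper (my own sketch, not part of the original workaround), the MB value can be derived from the byte count that "show sga" prints in sqlplus. The sample line is hard-coded here for illustration; in practice you would capture it from the sqlplus output of the auxiliary database:

```shell
# Illustrative: convert the "show sga" byte count to MB, rounded up,
# plus a 100 MB safety margin, and print the matching export command.
sga_line="Total System Global Area 8264876032 bytes"
sga_bytes=$(echo "$sga_line" | awk '{print $5}')
sga_mb=$(( (sga_bytes + 1048575) / 1048576 + 100 ))
echo "export ORA_RMAN_SGA_TARGET=$sga_mb"
```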
Thursday, October 3, 2024
Workaround for RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file when using dbca
Late night error when trying to create a cdb out of the same ORACLE_HOME as an older, non-cdb database:
dbca reports:
[ 2024-10-03 18:34:35.876 CEST ] Prepare for db operation
DBCA_PROGRESS : 10%
[ 2024-10-03 18:34:35.956 CEST ] Copying database files
DBCA_PROGRESS : 40%
DBCA_PROGRESS : 100%
[ 2024-10-03 18:35:04.332 CEST ] [FATAL] Recovery Manager failed to restore datafiles. Refer logs for details.
DBCA_PROGRESS : 10%
DBCA_PROGRESS : 0%

Detailed log file shows:
RMAN-03015: error occurred in stored script Memory Script
RMAN-04014: startup failed: ORA-27069: attempt to do I/O beyond the range of the file

Potential cause: your memory is too small to hold the extra instance you are attempting to create.
Potential solution: scale your total memory up. If necessary, adjust hugepages to fit the extra instance.
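A back-of-the-envelope check (my own sketch, not from the original post) for the hugepages adjustment: with the usual 2 MB hugepage size on x86_64 (see Hugepagesize in /proc/meminfo), the extra instance needs roughly SGA-size-in-MB divided by 2 additional pages. The 8 GB SGA below is an assumed example:

```shell
# Illustrative: estimate how many extra 2 MB hugepages a new instance
# with an assumed 8 GB SGA would need on top of the current setting.
sga_mb=8192
hugepage_kb=2048
extra_pages=$(( sga_mb * 1024 / hugepage_kb ))
echo "increase vm.nr_hugepages by at least $extra_pages"
```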
Wednesday, October 2, 2024
Workaround for error ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid
When trying to update your database's LOCAL_LISTENER parameter like this:
alter system set local_listener=LISTENER_CDB scope=both;

you may get the following error stack:
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00141: all addresses specified for parameter LOCAL_LISTENER are invalid
ORA-00132: syntax error or unresolved network name 'LISTENER_CDB'

The solution is to first change your $TNS_ADMIN/tnsnames.ora so that it contains the network name you wish to set the local_listener parameter to. For example, change the following line:
LISTENER = (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))

to

LISTENER_CDB = (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))

For more information about the local_listener parameter, see this earlier post.
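If you want to script the rename, a sed one-liner does it, assuming the single-line entry format shown above. This sketch works on a scratch copy in /tmp so nothing real is touched; in practice you would of course edit the file in $TNS_ADMIN:

```shell
# Illustrative only: rename the LISTENER alias to LISTENER_CDB in a
# scratch copy of tnsnames.ora.
cat > /tmp/tnsnames.ora <<'EOF'
LISTENER = (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.oric.no)(PORT = 1521))
EOF
sed -i 's/^LISTENER =/LISTENER_CDB =/' /tmp/tnsnames.ora
grep LISTENER_CDB /tmp/tnsnames.ora
```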
Wednesday, September 25, 2024
How to plug and unplug a PDB in a multitenant configuration
What exactly does it mean to unplug and plug a database in a multitenant architecture?
To unplug means to close the PDB and then generate its manifest file.
To plug means using the manifest file to create a new pluggable database.
In the examples below, I am unplugging the databases pdb1 and pdb2 into two different manifest files:
sqlplus / as sysdba

alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 unplug into '/u01/app/oracle/oradata/pdb1.xml';
alter pluggable database pdb2 close immediate;
alter pluggable database pdb2 unplug into '/u01/app/oracle/oradata/pdb2.xml';

The XML file created in the unplug operation contains information about the names and the full paths of the tablespaces, as well as the data files, of the unplugged PDB.
This information will then be used by a subsequent plug-in operation.
After having unplugged a pdb you can drop the pluggable database but physically keep the datafiles belonging to it, like this:
drop pluggable database pdb1 keep datafiles;

If you wish to plug the database into a different CDB, it is a good idea to check the compatibility of the database with the CDB first. This is particularly true if the new CDB is created with newer binaries than the original CDB, or if it is on a different host. In the new CDB, execute the following PL/SQL code:
set serveroutput on
DECLARE
  compatible BOOLEAN := FALSE;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/u01/app/oracle/oradata/pdb1.xml');
  if compatible then
    DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? YES');
  else
    DBMS_OUTPUT.PUT_LINE('Is pluggable PDB1 compatible? NO');
  end if;
END;
/

If the output shows that the database is compatible with the new CDB, proceed to plug in the database.
Plug operations can be done using two different methods: NOCOPY and COPY.
Using the NOCOPY method will use the data files of the unplugged PDB to plug the PDB into another (or the same) CDB without any physical file copy:
create pluggable database pdb_plug_nocopy using '/u01/app/oracle/oradata/pdb1.xml' NOCOPY TEMPFILE REUSE;

When using the NOCOPY option, the plug-in operation lasts only a few seconds. The original data files of the unplugged PDB now belong to the plugged-in PDB in the new (or the same) CDB. A file with the same name as the temp file specified in the XML file already exists in the target location; therefore, the TEMPFILE REUSE clause is required.
Using the COPY method physically copies the datafiles from the original location to a new destination:
mkdir -p /u01/app/oracle/oradata/cdb2/pdb_plug_copy

create pluggable database pdb_plug_copy using '/u01/app/oracle/oradata/pdb2.xml' COPY FILE_NAME_CONVERT=('/u01/app/oracle/oradata/cdb1/pdb2','/u01/app/oracle/oradata/cdb2/pdb_plug_copy');

Verify the status, open mode and the file locations of the plugged-in PDB (in the example below, I am showing the output for the PDB created using the COPY method, but this check should always be done, regardless of the method used):
select pdb_name, status from cdb_pdbs where pdb_name='PDB_PLUG_COPY';
--> should return PDB_PLUG_COPY and NEW

select open_mode from v$pdbs where name='PDB_PLUG_COPY';
--> should return MOUNTED

select name from v$datafile where con_id=(select con_id from v$pdbs where name='PDB_PLUG_COPY');
--> should return the full path and name of the datafiles belonging to the system and sysaux tablespaces.

Whether you used the NOCOPY or the COPY method, you will now have to open the newly plugged-in database in the new CDB:
alter pluggable database pdb_plug_nocopy open;
alter pluggable database pdb_plug_copy open;
show con_name
show pdbs

Source: Oracle 12cR1 tutorial
Monday, September 23, 2024
How to prevent dbca from creating folders in capital letters during database creation
This post is derived from my previous post, but I have come to realize that I have needed to look up this particular detail on at least a couple of occasions, so it deserves a post of its own.
To prevent dbca from creating folders with capital letters during database creation, you need to change the directives
datafileDestination=/disk1/oradata/{DB_UNIQUE_NAME}/
recoveryAreaDestination=/disk2/flash_recovery_area/{DB_UNIQUE_NAME}

to

datafileDestination=/disk1/oradata/mydb
recoveryAreaDestination=/disk2/flash_recovery_area/mydb

in your response file.
The response file would then look something like:
#-------------------------------------------------------------------------------
# Do not change the following system generated value.
#-------------------------------------------------------------------------------
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v19.0.0
gdbName=mydb.oric.no
sid=mydb
databaseConfigType=SI
createAsContainerDatabase=false
templateName=/u01/oracle/product/19c/assistants/dbca/templates/General_Purpose.dbc
sysPassword=manager1
systemPassword=manager1
datafileDestination=/disk1/oradata/mydb
recoveryAreaDestination=/disk2/flash_recovery_area/mydb
storageType=FS
characterSet=al32utf8
variables=
initParams=db_recovery_file_dest_size=50G
memoryPercentage=75
databaseType=MULTIPURPOSE
enableArchive=true
redoLogFileSize=2048