Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.

Friday, August 23, 2024

How to clean up temporary tablespaces in a PDB and create new ones in the right location

Notice how temporary tablespace groups can be used instead of directly referencing the temporary tablespace name:

alter session set container=PDB1;

CREATE TEMPORARY TABLESPACE TEMP3 TEMPFILE '/disk1/oradata/PDB1/temp03.dbf' SIZE 20G
  AUTOEXTEND ON NEXT 128K MAXSIZE 20G
  TABLESPACE GROUP TEMP_NEW
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  FLASHBACK ON;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_NEW;

DROP TABLESPACE TEMP1 INCLUDING CONTENTS AND DATAFILES;
DROP TABLESPACE TEMP2 INCLUDING CONTENTS AND DATAFILES;

CREATE TEMPORARY TABLESPACE TEMP1 TEMPFILE '/disk1/oradata/PDB1/temp01.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 128K MAXSIZE 20G
  TABLESPACE GROUP TEMP
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  FLASHBACK ON;

CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/disk1/oradata/PDB1/temp02.dbf' SIZE 1G
  AUTOEXTEND ON NEXT 128K MAXSIZE 20G
  TABLESPACE GROUP TEMP
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
  FLASHBACK ON;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

DROP TABLESPACE TEMP3 INCLUDING CONTENTS AND DATAFILES;

exit

More about temporary tablespace groups in Oracle 19c here.
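If you want to confirm the result afterwards, group membership and the PDB's default temporary tablespace can be checked with a couple of dictionary queries; a minimal sketch, run while connected to PDB1:

-- List which temporary tablespaces belong to which tablespace group
SELECT group_name, tablespace_name
FROM   dba_tablespace_groups
ORDER  BY group_name;

-- Show the current default temporary tablespace (may be a group name)
SELECT property_value
FROM   database_properties
WHERE  property_name = 'DEFAULT_TEMP_TABLESPACE';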
Our solution to ORA-00600: internal error, arguments: [kkmmctbf:bad intcoln]
We were facing a problem in one of our databases: jobs stopped immediately after attempting to start, aborting their respective processes.
Error:
2024-08-20T10:09:51.641274+02:00
Errors in file /orasw/rdbms/diag/rdbms/cdb/cdb/trace/cdb_j000_3187052.trc:
ORA-00600: intern feilkode, argumenter: [kkmmctbf:bad intcoln], [0], [], [], [], [], [], [], [], [], [], []
2024-08-20T10:09:51.645053+02:00
opidrv aborting process J000 ospid (3187052) as a result of ORA-600

Cause:
Right prior to the incident we ran out of free space in the SYSTEM tablespace, and a new datafile was promptly added.
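For reference, adding a datafile to SYSTEM is a one-liner; the file name and sizes below are only placeholders, not the values we used:

ALTER TABLESPACE SYSTEM
  ADD DATAFILE '/disk1/oradata/CDB/system02.dbf' SIZE 2G
  AUTOEXTEND ON NEXT 256M MAXSIZE 8G;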
The internal table SYS.SOURCE$ contains all the PL/SQL code in the database (both Oracle-supplied code and your own).
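To get an idea of how much space SYS.SOURCE$ occupies in the SYSTEM tablespace, a query along these lines can be used (requires access to DBA_SEGMENTS):

SELECT owner, segment_name, tablespace_name, ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  owner = 'SYS'
AND    segment_name = 'SOURCE$';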
Our schemas are editioned, and for some reason we seem to be hitting bug number 14163397: "Trigger on editioning view can get ORA-600 [kkmmctbf:bad intcoln] (Doc ID 14163397.8)"
Solution:
alter system set "_ignore_fg_deps"=ALL container=all scope=both;The above parameter is used to ignore fine grain dependencies during invalidation. Flush the shared pool and buffer cache if its holding up in memory the old versions:
alter system flush shared_pool;
alter system flush buffer_cache;

After a little while, the jobs started to run automatically again.
The above solution was found with assistance from Oracle Support and should not be attempted in a production environment without prior investigation and only upon recommendation from Oracle Support.
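Should you later want to back the change out again (for example after the bug fix has been applied), resetting the underscore parameter ought to look something like the sketch below; this is not from the original case, so verify it with Oracle Support first:

-- remove the parameter from the spfile; an instance restart is needed for it to take effect
alter system reset "_ignore_fg_deps" scope=spfile;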
Wednesday, August 7, 2024
How to find a failed job in the data dictionary tables
My adrci output showed a failed job (abbreviated below). Further details can be seen in the view DBA_OPTSTAT_OPERATION_TASKS, if necessary.
ADR Home = /u01/oracle/diag/rdbms/mydb01/mydb01:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------------------------------------------------- -------------------------
9720716 ORA 600 [kdxolrs2_comp: samekey] 2024-08-06 17:37:32.674000 +02:00
9720796 ORA 600 [kdxolrs2_comp: samekey] 2024-08-06 17:37:24.616000
2 rows fetched (*** more available ***)
adrci> show incident -mode detail -p "incident_id=9720716"
INCIDENT_ID 9720716
STATUS ready
CREATE_TIME 2024-08-06 17:37:32.674000 +02:00
PROBLEM_ID 21
PROBLEM_KEY ORA 600 [kdxolrs2_comp: samekey]
FIRSTINC_TIME 2024-08-06 17:37:24.616000 +02:00
LASTINC_TIME 2024-08-06 17:37:32.674000 +02:00
KEY_NAME PQ
KEY_NAME Service
KEY_VALUE SYS$USERS
KEY_NAME Module
KEY_VALUE DBMS_SCHEDULER
KEY_NAME Action
KEY_VALUE ORA$AT_OS_OPT_SY_125693
OWNER_ID 1
INCIDENT_FILE /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_p007_11096_i9720716.trc
OWNER_ID 1
INCIDENT_FILE /u01/oracle/diag/rdbms/mydb01/mydb01/trace/mydb01_p007_11096.trc
OWNER_ID 1
INCIDENT_FILE /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_mz00_1311372_i9720716_a.trc
1 row fetched
The file mydb01_p007_11096_i9720716.trc shows:
*** MODULE NAME:(DBMS_SCHEDULER) 2024-08-06T17:37:32.699564+02:00
*** ACTION NAME:(ORA$AT_OS_OPT_SY_125693) 2024-08-06T17:37:32.699567+02:00

[TOC00002]
========= Dump for incident 9720716 (ORA 600 [kdxolrs2_comp: samekey]) ========

*** 2024-08-06T17:37:32.709272+02:00
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
[TOC00003]
----- Current SQL Statement for this session (sql_id=82xfc2m10quym) -----
select /*+ opt_param('_optimizer_use_auto_indexes' 'on') parallel_index(t, "mypk",8) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring xmlindex_sel_idx_tbl opt_param('optimizer_inmemory_aware' 'false') no_substrb_pad no_expand index_ffs(t,"mypk") */ count(*) as nrw,approx_count_distinct(sys_op_lbid(632968,'L',t.rowid)) as nlb,null as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) as clf from "myuser"."mytable" t where "ID" is not null
[TOC00003-END]

To find the record in the data dictionary holding the history for this run, use the following query:
set lines 300
col client_name format a30
col job_name format a30
col job_status format a20
col job_start_time format a30
col job_duration format a20
col job_info format a80

SELECT client_name, job_name, job_status,
       to_char(job_start_time,'dd.mm.yyyy hh24:mi:ss') "job_start_time",
       job_duration, job_info
FROM   dba_autotask_job_history
WHERE  job_name = 'ORA$AT_OS_OPT_SY_125693';
CLIENT_NAME | JOB_NAME | JOB_STATUS | job_start_time | JOB_DURATION | JOB_INFO |
---|---|---|---|---|---|
auto optimizer stats collection | ORA$AT_OS_OPT_SY_125693 | FAILED | 06.08.2024 17:00:02 | +000 00:37:34 | ORA-12801: error signaled in parallel query server P007 ORA-00600: internal error code, arguments: [kdxolrs2_comp: samekey], [0], [0], [0], [0], [], [], [], [], [], [], [] |
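The view DBA_OPTSTAT_OPERATION_TASKS mentioned at the top of this post can then be joined to DBA_OPTSTAT_OPERATIONS to drill down into the individual objects processed by that run. A sketch of such a query follows; the exact column names should be verified against your release with DESC before use:

-- Drill down from the failed job to the per-object stats gathering tasks
SELECT o.id, o.operation, o.start_time, o.status,
       t.target, t.status AS task_status
FROM   dba_optstat_operations o
JOIN   dba_optstat_operation_tasks t ON t.opid = o.id
WHERE  o.job_name = 'ORA$AT_OS_OPT_SY_125693';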
Workaround for ORA-65011: Pluggable database does not exist in an EBS environment
If you have upgraded an EBS database to Oracle 19c, you will also have migrated to the multitenant architecture, which is required.
During the upgrade, Oracle set the parameter _pdb_name_case_sensitive to TRUE by default, as explained in Doc ID 2642230.1.
If you then proceeded to convert your old EBS database to a pluggable database with a lowercase name, a "flashback pluggable database" operation will fail when run from sqlplus:
oracle@oraserver01:[pdb1]# echo $ORACLE_SID
cdb
oracle@oraserver01:[pdb1]# sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 6 10:23:55 2024
Version 19.23.0.0.0

SYS@_container_name SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 pdb1                           MOUNTED

SYS@_container_name SQL> flashback pluggable database "pdb1" to restore point SD76759_1;
flashback pluggable database "pdb1" to restore point SD76759_1
*
ERROR at line 1:
ORA-65011: Pluggable database PDB1 does not exist.
Workaround: Use RMAN instead
oracle@oraserver01:[pdb1]# rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Aug 6 10:26:51 2024
Version 19.23.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB (DBID=2245029826)

RMAN> flashback pluggable database "pdb1" to restore point SD76759_1;

Starting flashback at 06-AUG-24
starting media recovery
archived log for thread 1 with sequence 21576 is already on disk as file /oradisk3/recoveryarea/CDB/archivelog/2024_07_11/o1_mf_1_21576__nw9trq2c_.arc
media recovery complete, elapsed time: 00:00:01
Finished flashback at 06-AUG-24
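To double-check which restore points exist and which container they belong to before flashing back, a query like this can be run from the root container:

-- Restore points and their owning container (CON_ID 0 = CDB-wide)
SELECT name, time, guarantee_flashback_database, con_id
FROM   v$restore_point
ORDER  BY time;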
Monday, July 22, 2024
Syntax for flashback database to a specific timestamp using sqlplus
flashback pluggable database PDB1 to timestamp to_timestamp('2024-07-22 15:12:01', 'YYYY-MM-DD HH24:MI:SS');
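Note that the PDB normally has to be closed before the flashback and opened with RESETLOGS afterwards. A minimal sketch of the surrounding steps, run from the root container, using the same example timestamp:

alter pluggable database PDB1 close immediate;
flashback pluggable database PDB1 to timestamp to_timestamp('2024-07-22 15:12:01', 'YYYY-MM-DD HH24:MI:SS');
alter pluggable database PDB1 open resetlogs;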
Friday, July 19, 2024
What to do if SET SERVEROUTPUT ON doesn't work
If you have an anonymous piece of pl/sql code and you have turned on serveroutput, like this:
set serveroutput on

You may need to enable output first:
exec dbms_output.enable();
set serveroutput on

In a multitenant environment, you will need to enable serveroutput after every time you switch container.
So, if you do
BEGIN
  V_SQLSTRING := 'ALTER SESSION SET CONTAINER=' || '' || V_PDBNAME || '';
  DBMS_OUTPUT.PUT_LINE(V_SQLSTRING);
  EXECUTE IMMEDIATE V_SQLSTRING;
END;
/

you will need to repeat the enabling of serveroutput again, if you want to perform additional actions further down in your script.
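A minimal illustration of the pattern in sqlplus (the PDB name is just an example):

-- serveroutput must be switched on again after the container switch
set serveroutput on
exec dbms_output.put_line('running in the root container')
alter session set container=PDB1;
set serveroutput on
exec dbms_output.put_line('running in PDB1')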
How to check if your PostgreSQL server is part of a replication setup
If the server is down, check the entries in postgresql.auto.conf.
The following postgres.auto.conf is from a slave server:
wal_level = 'replica'
hot_standby = on
hot_standby_feedback = 'on'
primary_slot_name = 'stby1'
max_connections = 636
primary_conninfo = 'host=pgserver01.oric.no port=5432 user=replicate password=mysecretpassword'
promote_trigger_file = '/tmp/MasterNow'

The master server may have a very simple postgres.auto.conf file:
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
wal_level = 'replica'

If the server is up, use queries:
1. On the primary server, check that data is shipped over to the replica server:
[postgres@pgserver01.oric.no /pgdata/data]$ echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |        client_hostname
-----------+-----------+------------+---------------+--------------------------------
 replicate | streaming | async      |             0 | pgserver02.oric.no
(1 row)

2. The following query should return f (false) on the primary server, t (true) on the replica server:
[postgres@pgserver01 /pgdata/data/log]$ echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 f
(1 row)

[postgres@pgserver02 /pgdata/data/log]$ echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 t
(1 row)
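On the replica side, the incoming WAL stream can also be inspected via pg_stat_wal_receiver (the sender_host and sender_port columns require PostgreSQL 11 or newer):

-- Run on the replica: shows the streaming status and which host/slot it receives WAL from
select status, sender_host, sender_port, slot_name from pg_stat_wal_receiver;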