Wednesday, August 7, 2024

How to find a failed job in the data dictionary tables

My adrci output showed a failed job (abbreviated):
ADR Home = /u01/oracle/diag/rdbms/mydb01/mydb01:
*************************************************************************
INCIDENT_ID          PROBLEM_KEY                                                 CREATE_TIME
-------------------- ----------------------------------------------------------- -------------------------
9720716              ORA 600 [kdxolrs2_comp: samekey]                            2024-08-06 17:37:32.674000 +02:00
9720796              ORA 600 [kdxolrs2_comp: samekey]                            2024-08-06 17:37:24.616000 
2 rows fetched (*** more available ***)

adrci> show incident -mode detail -p "incident_id=9720716"

   INCIDENT_ID                   9720716
   STATUS                        ready
   CREATE_TIME                   2024-08-06 17:37:32.674000 +02:00
   PROBLEM_ID                    21
   PROBLEM_KEY                   ORA 600 [kdxolrs2_comp: samekey]
   FIRSTINC_TIME                 2024-08-06 17:37:24.616000 +02:00
   LASTINC_TIME                  2024-08-06 17:37:32.674000 +02:00
   KEY_NAME                      PQ
   KEY_NAME                      Service
   KEY_VALUE                     SYS$USERS
   KEY_NAME                      Module
   KEY_VALUE                     DBMS_SCHEDULER
   KEY_NAME                      Action
   KEY_VALUE                     ORA$AT_OS_OPT_SY_125693
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_p007_11096_i9720716.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/trace/mydb01_p007_11096.trc
   OWNER_ID                      1
   INCIDENT_FILE                 /u01/oracle/diag/rdbms/mydb01/mydb01/incident/incdir_9720716/mydb01_mz00_1311372_i9720716_a.trc
1 row fetched

The file mydb01_p007_11096_i9720716.trc shows:
*** MODULE NAME:(DBMS_SCHEDULER) 2024-08-06T17:37:32.699564+02:00
*** ACTION NAME:(ORA$AT_OS_OPT_SY_125693) 2024-08-06T17:37:32.699567+02:00

[TOC00002]
========= Dump for incident 9720716 (ORA 600 [kdxolrs2_comp: samekey]) ========

*** 2024-08-06T17:37:32.709272+02:00
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
[TOC00003]
----- Current SQL Statement for this session (sql_id=82xfc2m10quym) -----
select /*+ opt_param('_optimizer_use_auto_indexes' 'on')  
parallel_index(t, "mypk",8)  
dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) 
no_monitoring xmlindex_sel_idx_tbl 
opt_param('optimizer_inmemory_aware' 'false') 
no_substrb_pad  no_expand index_ffs(t,"mypk") */ 
count(*) as nrw,approx_count_distinct(sys_op_lbid(632968,'L',t.rowid)) as nlb,null as ndk,sys_op_countchg(substrb(t.rowid,1,15),1) 
as clf from "myuser"."mytable" t where "ID" is not null
[TOC00003-END]
To find the record in the data dictionary holding the history for this run, use the following query:
set lines 300
col client_name format a30
col job_name format a30
col job_status format a20
col job_start_time format a30
col job_duration format a20
col job_info format a80

SELECT client_name,job_name,job_status,to_char(job_start_time,'dd.mm.yyyy hh24:mi:ss') "job_start_time" ,job_duration,job_info
FROM  dba_autotask_job_history 
WHERE JOB_NAME = 'ORA$AT_OS_OPT_SY_125693';
CLIENT_NAME                     JOB_NAME                JOB_STATUS  job_start_time       JOB_DURATION   JOB_INFO
auto optimizer stats collection ORA$AT_OS_OPT_SY_125693 FAILED      06.08.2024 17:00:02  +000 00:37:34  ORA-12801: error signaled in parallel query server P007
                                                                                                        ORA-00600: internal error code, arguments: [kdxolrs2_comp: samekey], [0], [0], [0], [0], [], [], [], [], [], [], []
Further details can be seen in the view DBA_OPTSTAT_OPERATION_TASKS, if necessary.
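A starting point could be something like this (a sketch; the column list is an assumption, so adjust it to what your version exposes):
SELECT opid, target, target_type, status, start_time, end_time
FROM   dba_optstat_operation_tasks
WHERE  status = 'FAILED'
ORDER BY start_time DESC;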

Workaround for ORA-65011: Pluggable database does not exist in an EBS environment

If you have upgraded an EBS database to Oracle 19c, you will also have migrated to the multitenant architecture, which is required.

During the upgrade, Oracle set the parameter _pdb_name_case_sensitive to TRUE by default, as explained in Doc ID 2642230.1.
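You can verify the current setting of the hidden parameter with a query along these lines (a sketch; it reads the x$ tables and must therefore be run as SYS):
SELECT i.ksppinm AS parameter, v.ksppstvl AS value
FROM   x$ksppi i, x$ksppcv v
WHERE  i.indx = v.indx
AND    i.ksppinm = '_pdb_name_case_sensitive';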

If you then converted your old EBS database to a pluggable database with a lowercase name, a "flashback pluggable database" operation will fail when run from sqlplus:
oracle@oraserver01:[pdb1]# echo $ORACLE_SID
cdb
oracle@oraserver01:[pdb1]# sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 6 10:23:55 2024
Version 19.23.0.0.0

SYS@_container_name SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         4 pdb1                           MOUNTED
SYS@_container_name SQL> flashback pluggable database "pdb1" to restore point SD76759_1;
flashback pluggable database "pdb1" to restore point SD76759_1
*
ERROR at line 1:
ORA-65011: Pluggable database PDB1 does not exist.


Workaround: Use RMAN instead
oracle@oraserver01:[pdb1]# rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Tue Aug 6 10:26:51 2024
Version 19.23.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB (DBID=2245029826)

RMAN> flashback pluggable database "pdb1" to restore point SD76759_1;

Starting flashback at 06-AUG-24

starting media recovery

archived log for thread 1 with sequence 21576 is already on disk as file /oradisk3/recoveryarea/CDB/archivelog/2024_07_11/o1_mf_1_21576__nw9trq2c_.arc
media recovery complete, elapsed time: 00:00:01
Finished flashback at 06-AUG-24
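
Note that after the flashback, the PDB must be opened with RESETLOGS before it can be taken into use again; a sketch, still from the RMAN prompt and keeping the quoted lowercase name:
RMAN> alter pluggable database "pdb1" open resetlogs;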

Friday, July 19, 2024

What to do if SET SERVEROUTPUT ON doesn't work

If you have an anonymous piece of PL/SQL code and you have turned on serveroutput, like this:
set serveroutput on
but you still don't see any output, you may need to enable output first:
exec dbms_output.enable();
set serveroutput on
In a multitenant environment, you will need to enable serveroutput again each time you switch containers.

So, if you do
DECLARE
  -- V_PDBNAME would typically come from a cursor or a parameter; hardcoded here as an example
  V_PDBNAME   VARCHAR2(128) := 'PDB1';
  V_SQLSTRING VARCHAR2(200);
BEGIN
  V_SQLSTRING := 'ALTER SESSION SET CONTAINER=' || V_PDBNAME;
  DBMS_OUTPUT.PUT_LINE(V_SQLSTRING);
  EXECUTE IMMEDIATE V_SQLSTRING;
END;
/
you will need to enable serveroutput again if you want to see output from additional actions further down in your script.
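A minimal sketch of what that looks like in a script (the PDB name is just an example):
set serveroutput on
exec dbms_output.put_line('visible before switching containers')
alter session set container=PDB1;
set serveroutput on
exec dbms_output.put_line('visible again, because serveroutput was re-enabled after the switch')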

How to check if your PostgreSQL server is part of a replication setup

If the server is down, check the entries in postgresql.auto.conf.

The following postgresql.auto.conf is from a replica server:
wal_level = 'replica'
hot_standby = on
hot_standby_feedback = 'on'
primary_slot_name = 'stby1'
max_connections = 636

primary_conninfo      = 'host=pgserver01.oric.no port=5432 user=replicate password=mysecretpassword'
promote_trigger_file  = '/tmp/MasterNow'
The primary server may have a very simple postgresql.auto.conf file:
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
wal_level = 'replica'
If the server is up, you can use queries instead:

1. On the primary server, check that data is shipped over to the replica server:
[postgres@pgserver01.oric.no /pgdata/data]$ echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |        client_hostname
-----------+-----------+------------+---------------+--------------------------------
 replicate | streaming | async      |             0 | pgserver02.oric.no
(1 row)
2. The following query should return f (false) on the primary server and t (true) on the replica server:
[postgres@pgserver01 /pgdata/data/log]$ echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 f
(1 row)

[postgres@pgserver02 /pgdata/data/log]$  echo "select pg_is_in_recovery();" | psql
 pg_is_in_recovery
-------------------
 t
(1 row)
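Since the replica's postgresql.auto.conf above refers to primary_slot_name = 'stby1', it can also be worth checking on the primary that the replication slot actually exists and is active (a sketch):
[postgres@pgserver01 /pgdata/data]$ echo "select slot_name, slot_type, active from pg_replication_slots;" | psql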

Tuesday, July 16, 2024

How to check StaticConnectIdentifier and other properties using Data Guard Broker

DGMGRL> show database cdb 'StaticConnectIdentifier'
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=oricdb01.oric.no)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb_DGMGRL.skead.no)(INSTANCE_NAME=cdb)(SERVER=DEDICATED)))'
There are several other properties that you can check, too; see this post for a couple of examples.

The documentation differentiates between monitorable (read-only) and configurable properties.
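
As a sketch of how to work with the configurable ones (LogXptMode is just a common example, not something taken from this configuration), you can list all properties of a database and change one with EDIT:
DGMGRL> show database verbose cdb
DGMGRL> edit database cdb set property 'LogXptMode'='SYNC';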

Friday, July 12, 2024

Datapatch seems to "hang" or takes an extraordinarily long time to complete. What could be wrong?

When using the utility datapatch to either apply or roll back patches in my Oracle 19c ORACLE_HOME, I was surprised at the time it took to (in this case) roll back the patches from version 19.23 to 19.22.

The patching was just "hanging" without any sign of progress. I left the shell open for several hours, thinking that progress would eventually be made and that all it needed was patience.

When I came back the morning after, it had not progressed at all.

The solution was found in the logfiles located in $ORACLE_BASE/cfgtoollogs:

cd $ORACLE_BASE/cfgtoollogs/sqlpatch

# list your directories. Enter the most recent one:
ls -latr 

drwxr-xr-x  2 oracle dba 4.0K Feb  1 18:25 sqlpatch_9072_2024_02_01_18_13_47
drwxr-xr-x  2 oracle dba 4.0K May 14 09:21 sqlpatch_2561578_2024_05_14_09_20_33
-rw-r--r--  1 oracle dba    0 May 23 18:12 36199232_25601966.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36199232
-rw-r--r--  1 oracle dba    0 May 23 18:12 36420641_25643730.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36420641
-rw-r--r--  1 oracle dba    0 May 23 18:12 36233263_25638263.lock
drwxr-xr-x  3 oracle dba 4.0K May 23 18:12 36233263
drwxr-xr-x  2 oracle dba 4.0K May 23 18:25 sqlpatch_4642_2024_05_23_18_11_41
drwxr-xr-x  2 oracle dba 4.0K Jul 12 10:30 sqlpatch_1073748_2024_07_12_10_13_30

cd sqlpatch_1073748_2024_07_12_10_13_30
 ls -altrh
total 75M
-rw-r--r--  1 oracle dba  12K Jul 12 10:14 install1.sql
-rw-------  1 oracle dba 3.7M Jul 12 10:26 sqlpatch_catcon_0.log
-rw-------  1 oracle dba  689 Jul 12 10:26 sqlpatch_catcon__catcon_1073748.lst
-rw-r--r--  1 oracle dba  12K Jul 12 10:26 sqlpatch_summary.json
-rw-r--r--  1 oracle dba  133 Jul 12 10:26 sqlpatch_progress.json
-rw-r--r--  1 oracle dba 5.5M Jul 12 10:26 sqlpatch_invocation.log
-rw-r--r--  1 oracle dba  66M Jul 12 10:26 sqlpatch_debug.log
The source of the error was found in the file sqlpatch_catcon_0.log:
CREATE OR REPLACE PACKAGE BODY ku$_dpload AS
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01653: unable to extend table SYS.SOURCE$ by 8192 in tablespace SYSTEM
This simple error caused the rest of the script to fail:
SQL> --
SQL> -- ============================================================================
SQL> -- BEGINNING OF APPLY EXECUTION
SQL> -- ============================================================================
SQL> --
SQL> -- Initial phase sets up internal infrastructure for rest of dpload.
SQL> --
SQL> SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual;
SELECT ku$_dpload.prepare('setup') AS fname FROM sys.dual
       *
ERROR at line 1:
ORA-04067: not executed, package body "SYS.KU$_DPLOAD" does not exist
ORA-06508: PL/SQL: could not find program unit being called: "SYS.KU$_DPLOAD"
ORA-06512: at line 1
Obvious solution: increase the size of the existing datafile, or add a new one:
sqlplus / as sysdba
alter session set container=PDB01;
select f.file_name,f.bytes/1024/1024 "MB", f.autoextensible,f.maxbytes/1024/1024 "MaxMB", t.bigfile
from dba_data_files f, v$tablespace t
where f.tablespace_name='SYSTEM'
and f.tablespace_name = t.name;


FILE_NAME                        MB     AUTOEXTENSIBLE  MaxMB         BIGFILE
/oradata/pdb01/system01.dbf      32712  YES             32767,984375  NO
So we have a smallfile tablespace that is full!
Add a datafile, still in the same session connected to PDB01:
alter tablespace system 
add datafile '/oradata/pdb01/system02.dbf' size 256M autoextend on next 128M maxsize unlimited;
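
With free space in SYSTEM again, the interrupted run can simply be repeated; a sketch, assuming the standard location of the tool:
cd $ORACLE_HOME/OPatch
./datapatch -verbose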