Monday, December 12, 2022

Some useful ADRCI commands

The ADR (Automatic Diagnostic Repository) and its command-line tool adrci were introduced in Oracle 11g and are still available.

I still find it incredibly useful when troubleshooting problems, particularly ORA-600 errors, which need to be reported to Oracle Support.

An ADR may contain files from many homes, or components. You need to set the correct home before you start working with the ADR:
show homes
set homepath 
Example:
adrci> show homes
ADR Homes:
diag/tnslsnr/zus15d-0110/listener_pzhhtet2
diag/tnslsnr/zus15d-0110/listener_pzhhtet1
diag/rdbms/pzhhtet1/PZHHTET1
diag/rdbms/pzhhtet2/PZHHTET2

adrci> set homepath diag/rdbms/pzhhtet1/PZHHTET1
adrci> show home
ADR Homes:
diag/rdbms/pzhhtet1/PZHHTET1
Tracefile operations:
help show tracefile
show tracefile
Order by timestamp:
show tracefile -t
Order by reversed timestamp:
show tracefile -rt
Filter on tracefile name:
show tracefile %m00% -rt
Alert log operations:
show alert
show alert -tail -f
show alert -tail 50

show alert -p "module_id='DBMS_SCHEDULER'"
show alert -p "module_id != 'DBMS_SCHEDULER'"
show alert -p "module_id LIKE '%SCHEDULER%'"
If you want to extract the Streams related statements from the alert log, you would use:
show alert -p "message_text like '%STREAM%'"
Show all messages related to a group. For example, a startup:
show alert -p "MESSAGE_GROUP LIKE 'startup'"
spool 
spool off

show incident
show incident -mode basic 
show incident -mode detail
show incident -mode detail -p "incident_id=<incident_id>"
show incident -last 5
create a package from an incident:
adrci> ips create package incident 1697
Created package 1 based on incident id 1697, correlation level typical
view its contents:
adrci> ips show files package 1
   FILE_ID                1
   FILE_LOCATION          /incident/incdir_1697
   FILE_NAME              +ASM1_lmhb_15352_i1697.trc
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                2
   FILE_LOCATION          /incident/incdir_1697
   FILE_NAME              +ASM1_lmhb_15352_i1697.trm
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                3
   FILE_LOCATION          /trace
   FILE_NAME              +ASM1_lmhb_15352.trc
   LAST_SEQUENCE          0
   EXCLUDE                Included

   FILE_ID                4
   FILE_LOCATION          /trace
   FILE_NAME              +ASM1_lmhb_15352.trm
   LAST_SEQUENCE          0
   EXCLUDE                Included
generate zip-files on disk:
adrci> ips generate package 1 in /cs/oracle/grid/packages
Generated package 1 in file /cs/oracle/grid/packages/ORA29770_20100427133326_COM_1.zip, mode complete
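The packaging steps can also be driven non-interactively with adrci's exec mode. A minimal sketch, reusing the incident id and home path from the example above; the /tmp destination is an assumption, and the adrci call is skipped on hosts without the tool:

```shell
# Sketch: create and generate an incident package in one adrci batch call.
# Incident id and homepath are taken from the examples in this post.
INCIDENT_ID=1697
HOMEPATH=diag/rdbms/pzhhtet1/PZHHTET1
DEST=/tmp/packages
mkdir -p "${DEST}"

# Build the batch command string first, so it can be inspected or logged
ADRCI_CMD="set homepath ${HOMEPATH}; ips create package incident ${INCIDENT_ID}; ips generate package 1 in ${DEST}"
echo "adrci exec=\"${ADRCI_CMD}\""

if command -v adrci >/dev/null 2>&1; then
    adrci exec="${ADRCI_CMD}" || echo "adrci call failed"
else
    echo "adrci not found; skipping (not an Oracle host)"
fi
```

Note that the package number (1 here) is returned by ips create package, so on a real system you would read it from the adrci output rather than hard-code it.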
To remove old incidents, use
purge -age 2880 -type incident
where -age is given in minutes, so 2880 means older than two days.

Automatic purging:
adrci> purge -age 2880 -type trace
This will remove all trace files older than 2880 minutes (2 days). Type "help purge" at the adrci prompt for more options.

Other examples:
purge -age 129600 -type alert
purge -age 129600 -type incident
purge -age 129600 -type cdump
purge -age 129600 -type stage
purge -age 129600 -type sweep
purge -age 129600 -type hm
129600 minutes = 90 days; 43200 minutes = 30 days. Or:
purge -age 2880 -type trace
purge -age 2880 -type alert
purge -age 2880 -type cdump
purge -age 2880 -type stage
purge -age 2880 -type sweep
purge -age 2880 -type hm
You can put all of the above in a batch file and run it in one go:
adrci script=adrci_settings.cmd
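A minimal sketch of generating such a batch file, assuming a /tmp location and a 90-day retention; since purge -age takes minutes, the script converts from days first:

```shell
# Sketch: generate an adrci purge script from a retention given in days.
# purge -age is in minutes, so convert first. The /tmp location and the
# 90-day retention are assumptions for this example.
RETENTION_DAYS=90
RETENTION_MIN=$(( RETENTION_DAYS * 24 * 60 ))   # 90 days = 129600 minutes

SCRIPT=/tmp/adrci_settings.cmd
: > "${SCRIPT}"
for t in trace alert incident cdump stage sweep hm; do
    echo "purge -age ${RETENTION_MIN} -type ${t}" >> "${SCRIPT}"
done
cat "${SCRIPT}"
# Run it with: adrci script=/tmp/adrci_settings.cmd
```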
You can spool from within an adrci script:
spool myfile.log
There are two retention policies in place: the LONGP_POLICY and the SHORTP_POLICY. Events such as incidents fall under the LONGP policy, while trace files and dumps fall under the SHORTP policy. You can configure the policies using the commands below, which set the short and long retention periods to 3 months and 6 months, respectively. Note that set control accepts the time in hours.
show control
set control (SHORTP_POLICY = 2160)
set control (LONGP_POLICY = 4320)
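The two values can be derived rather than memorized; a small shell sketch of the day-to-hour conversion (month lengths approximated as 30 days):

```shell
# Sketch: set control takes hours, so derive the values from days.
SHORTP_DAYS=90     # ~3 months
LONGP_DAYS=180     # ~6 months
SHORTP_HOURS=$(( SHORTP_DAYS * 24 ))
LONGP_HOURS=$(( LONGP_DAYS * 24 ))
echo "set control (SHORTP_POLICY = ${SHORTP_HOURS})"   # 2160
echo "set control (LONGP_POLICY = ${LONGP_HOURS})"     # 4320
```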
Run in batch mode using the "exec" instruction:
adrci exec="command1; command2 ... command n"

Friday, December 9, 2022

What does the UR = A in tnsnames.ora mean?

I recently had to connect as sysdba to remote instances when I created Physical Standby Databases using RMAN.

There were some connection problems along the way, which were solved by setting the (UR=A) attribute in the tnsnames.ora entry for the database.

What exactly is the UR=A attribute and what does it do?

This is explained very well in a blog post from Ed Chen Logic so rather than explaining the same points all over again, continue reading there.

But in short: by default, an instance (memory and background processes, but no mounted data files) will tell the listener to block all connections to its corresponding service.

To bypass this restriction, you can add UR=A to the connect descriptor. From the primary server, I will then be able to connect to the standby instance running on prodserver2.oric.no.

Let's demonstrate it:

Start the standby database in nomount:
sqlplus / as sysdba
startup nomount
ORACLE instance started.

Total System Global Area 1.2885E+10 bytes
Fixed Size                  8635096 bytes
Variable Size            2986347816 bytes
Database Buffers         9865003008 bytes
Redo Buffers               24915968 bytes
Start a default listener on the standby server:
lsnrctl start
Make sure the default service is loaded: wait for a minute, or execute
alter system register;
Listener status is now:
Service "proddb01_stby.oric.no" has 1 instance(s).
  Instance "proddb01", status BLOCKED, has 1 handler(s) for this service...
Create a tnsnames.ora entry for connecting to the standby database from the primary server:
proddb01_stby.oric.no =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddb02.oric.no)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb01_stby.oric.no)
    )
  )
Test it:
sqlplus sys/password@proddb01_stby.oric.no as sysdba
Result:
ORA-12528: TNS:listener: all appropriate instances are blocking new connections
Add the UR=A to the connect_data section:
proddb01_stby.oric.no =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = proddb02.oric.no)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = proddb01_stby.oric.no)
      (UR = A)
    )
  )
And try again:
sqlplus sys/password@proddb01_stby.oric.no as sysdba
Result:
SQL*Plus: Release 12.2.0.1.0 Production on Fri Dec 9 16:43:09 2022

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select host_name from v$instance;

HOST_NAME
-----------------
proddb02.oric.no

SQL> show parameter db_unique

NAME             TYPE      VALUE
---------------- --------- --------------
db_unique_name   string    proddb01_stby

How to solve "Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module)" when running txkChkPDBCompatability.pl

Background:

As a part of converting your non-CDB Oracle database to a pluggable database, you are supposed to run the Perl script txkChkPDBCompatability.pl.

Problem:

The script fails with:
Can't locate TXK/ARGS.pm in @INC (you may need to install the TXK::ARGS module) (@INC contains: $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/site_perl/5.28.1/x86_64-linux-thread-multi /$ORACLE_HOME/perl/lib/site_perl/5.28.1 $ORACLE_HOME/perl/lib/5.28.1/x86_64-linux-thread-multi $ORACLE_HOME/perl/lib/5.28.1) at ./txkChkPDBCompatability.pl line 61.
BEGIN failed--compilation aborted at ./txkChkPDBCompatability.pl line 61.
Cause:

The PERL5LIB environment variable is incomplete. You need to add the location of the ADK and TXK directories, which in turn contain several .pm files.
These folders can be found in $ORACLE_HOME/appsutil/perl

Solution:

Add the $ORACLE_HOME/appsutil/perl to your existing PERL5LIB variable:
export PERL5LIB=$PERL5LIB:$ORACLE_HOME/appsutil/perl
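If PERL5LIB happens to be unset, the export above produces a path with a leading colon. A small sketch of a safer append (the ORACLE_HOME value is a dummy for illustration):

```shell
# Sketch: append $ORACLE_HOME/appsutil/perl to PERL5LIB without leaving a
# leading ":" when PERL5LIB starts out unset. Paths are illustrative.
ORACLE_HOME=${ORACLE_HOME:-/u01/app/oracle/product/19c}
PERL5LIB=${PERL5LIB:+${PERL5LIB}:}${ORACLE_HOME}/appsutil/perl
export PERL5LIB
echo "PERL5LIB=${PERL5LIB}"
```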
Thanks to the author of Punitoracledba for providing the solution after lots of troubleshooting.

To see the note used to convert a non-CDB EBS database to a pluggable database, go to Doc ID 2552181.1 at Oracle Support

Tuesday, December 6, 2022

SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter.

When performing a switchover verification from your primary database in your Data Guard setup, you may see the following:
sqlplus / as sysdba

SQL>  ALTER DATABASE SWITCHOVER TO stb verify;
 ALTER DATABASE SWITCHOVER TO stb verify
*
ERROR at line 1:
ORA-16475: succeeded with warnings, check alert log for more details
Alert log reports:
2022-12-06T09:56:34.020025+01:00
ALTER DATABASE SWITCHOVER TO stb verify
2022-12-06T09:56:34.192599+01:00
SWITCHOVER VERIFY: Send VERIFY request to switchover target STB
SWITCHOVER VERIFY WARNING: switchover target has no standby database defined in LOG_ARCHIVE_DEST_n parameter.
If the switchover target is converted to a primary database, the new primary database will not be protected.
ORA-16475 signalled during: ALTER DATABASE SWITCHOVER TO stb verify...


Solution:
Update the standby database log_archive_dest_n parameter, to prepare it for a future primary role.

In the standby database, update one of the log_archive_dest_n parameters. I picked the next available from the list, log_archive_dest_2:
alter system set log_archive_dest_2='service=primary.oric.no LGWR ASYNC VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE) DB_UNIQUE_NAME=primary';
Run the verification again:
SQL>  ALTER DATABASE SWITCHOVER TO stb verify;

Database altered.
Check the alert log and it will confirm that the database stb can now be turned into a primary database:

2022-12-06T10:03:34.605309+01:00
ALTER DATABASE SWITCHOVER TO stb verify
2022-12-06T10:03:34.773710+01:00
SWITCHOVER VERIFY: Send VERIFY request to switchover target STB
SWITCHOVER VERIFY COMPLETE: READY FOR SWITCHOVER
Completed: ALTER DATABASE SWITCHOVER TO stb verify

What is an Oracle snapshot standby database?

A snapshot standby database is a fully updatable standby database.

Redo data is still received from the primary database, but it is not applied immediately. Before the redo data can be applied, the database must be converted back into a physical standby database.

Any local updates that the snapshot standby database has received will be discarded when it is converted back into a physical standby database and resynchronized with the primary database.

Oracle 12.2 Data Guard Concepts and administration documentation

Oracle support Using Snapshot Standby Database. (Doc ID 443720.1)

Monday, December 5, 2022

How to avoid redo logs and standby redo logs being created as OMF files during a DUPLICATE ... FOR STANDBY operation

Recently I had to create a Data Guard physical standby database, and I used RMAN active database duplication ( "duplicate target database for standby from active database" ) to create the database on the remote server.

As the remote server had the same file system layout as the primary, I used the directive
NOFILENAMECHECK
in the duplicate command.

Consequently, I expected all files, including the redo and standby redo log files, to be created in exactly the same locations as on the target.

For some reason this did not happen; instead, the redo and standby redo log files were given OMF file names and placed in the db_recovery_file_dest, which points to the path /recovery_data, while on the primary the same files reside in /u02/oradata.

I found the solution based on a post called "JoeLi's TechLife" and added my own experience to find a workaround.

The author points out

after the duplication, all other files seem to be fine regarding their names and locations except for redo log files and standby redo log files. They are created under the fast recovery area with OMF file names!

How come? Well here is the reason — duplicate will always re-create redo log files and standby redo log files. And because DB_RECOVERY_FILE_DEST is defined on the Primary server, redo and standby redo log files are created as OMF despite NOFILENAMECHECK is used. And they are not multiplexed as you have on the Primary database!


I was not aware of this.

The author continues to point out that he hasn't found a solution, yet.

The proposed solution according to Joe is

... to recreate redo log and standby redo log files on the newly created standby server — basically dropping the OMF ones and creating them under the correct locations with correct file names.

I did, however, find a solution which worked for me and was really quite simple, although not very intuitive: simply set the directive log_file_name_convert in your clone script, as exemplified below:

connect target sys/password@primary
connect auxiliary target sys/password@stb
run{
        allocate channel c1 type disk;
        allocate channel c2 type disk;
        allocate channel c3 type disk;
        allocate channel c4 type disk;
        allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
SET DB_UNIQUE_NAME='stb'
SET FAL_CLIENT='stb'
SET FAL_SERVER='primary'
SET LOG_ARCHIVE_DEST_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stb'
SET LOG_ARCHIVE_DEST_2='' comment 'Must be set to empty string during duplication'
SET STANDBY_FILE_MANAGEMENT='AUTO'
SET DG_BROKER_CONFIG_FILE1='/u01/oracle/product/12c/dbs/dr1stb.dat'
SET DG_BROKER_CONFIG_FILE2='/u01/oracle/product/12c/dbs/dr2stb.dat'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stb,primary)'
SET DG_BROKER_START='TRUE'
SET LOG_ARCHIVE_TRACE='0' comment 'Must be set to same value as primary during duplication'
SET dispatchers='(PROTOCOL=TCP) (SERVICE=stbXDB)' comment 'Must be set to unique name of stb db'
SET LOCAL_LISTENER='stb.skead.no' comment 'Must be set to unique name of stb db'
SET LOG_ARCHIVE_FORMAT='%t_%s_%r.dbf' comment 'Must be set to same value as primary during duplication'
SET LOG_FILE_NAME_CONVERT='/redodata','/redodata' comment 'Must be set during cloning to avoid OMF naming of redo log files'
NOFILENAMECHECK
USING COMPRESSED BACKUPSET;
}
Afterwards, the query
COL MEMBER FORMAT A50
SET LINES 200
SELECT A.GROUP#,A.MEMBER, B.BYTES/1024/1024 "MB", B.ARCHIVED,B.STATUS,B.SEQUENCE#
FROM V$LOGFILE A INNER JOIN  V$LOG B ON A.GROUP# = B.GROUP#
ORDER BY GROUP# ASC;

set lines 200
col member format a50
select s.group#,s.thread#,s.sequence#,s.archived,s.status,f.member, f.type
from v$standby_log s, v$logfile f
where f.type = 'STANDBY'
and s.group# = f.group#;

confirms that the redo and standby redo log files were created with non-OMF file names:
SQL> @chk_redo.sql

    GROUP# MEMBER                                                     MB ARC STATUS            SEQUENCE#
---------- -------------------------------------------------- ---------- --- ---------------- ----------
         1 /redodata/redo01.log                            2048 NO  CURRENT                 156
         2 /redodata/redo02.log                            2048 YES UNUSED                    0
         3 /redodata/redo03.log                            2048 YES UNUSED                    0


    GROUP#    THREAD#  SEQUENCE# ARC STATUS     MEMBER                                             TYPE
---------- ---------- ---------- --- ---------- -------------------------------------------------- -------
         4          1          0 YES UNASSIGNED /redodata/stb_redo01.log                  STANDBY
         5          1          0 YES UNASSIGNED /redodata/stb_redo02.log                  STANDBY
         6          1          0 YES UNASSIGNED /redodata/stb_redo03.log                  STANDBY
         7          1          0 YES UNASSIGNED /redodata/stb_redo04.log                  STANDBY

Friday, December 2, 2022

How to alter a column to use a sequence generated number

In my example, mytable contains a column called id, which is of datatype bigint.

If you want to use a sequence to automatically populate the column at each insert, you can alter the column to support this with the syntax below:
ALTER TABLE myschema.mytable
    ALTER COLUMN id SET DEFAULT nextval('myschema.mysequence'::regclass);

Wednesday, November 16, 2022

How to log entire output from a shell script

I have recently used exec to send all output from a bash script to a designated logfile.

For example, if you have a complex script which performs multiple tasks and even execute tasks under a different user along the way, all the output will go into the global logfile indicated by the $GLOGFILE variable:
#!/bin/bash
export SCRIPT_NAME=`basename $0`
export TS=`date +%m.%d.%y_%H_%M_%S`
export GLOGFILE=/u01/mylogfile_${TS}.log
touch ${GLOGFILE}
chmod 666 ${GLOGFILE}
exec 1> ${GLOGFILE} 2>&1
echo "Starting job at " `date`

your shell instructions here

echo "Ending job at " `date`
exit
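A self-contained variant of the pattern above can be run anywhere; here GLOGFILE is moved to /tmp and the placeholder task is replaced with commands that write to both stdout and stderr:

```shell
# Self-contained sketch of the exec pattern: after the "exec" line, all
# stdout and stderr from the script lands in the logfile. Uses /tmp for demo.
TS=$(date +%m.%d.%y_%H_%M_%S)
GLOGFILE=/tmp/mylogfile_${TS}.log
touch "${GLOGFILE}"
chmod 666 "${GLOGFILE}"
exec 1> "${GLOGFILE}" 2>&1

echo "Starting job at $(date)"
ls /nonexistent_dir || true      # stderr from failing commands is captured too
echo "Ending job at $(date)"
```

Running the script prints nothing to the terminal; both echo lines and the ls error message end up in the logfile.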


Good sources:

  • Linuxfordevices.com
  • Geeksforgeeks.org
    Tuesday, November 15, 2022

    How to correct an identity column sequence after import

    Thanks to the authors behind Grassroots-oracle for sharing this trick.

    I finished an import of a table with an identity column, and for some reason, the sequence started handing out numbers at the bottom of the range, although it had reached 10632546 in the source table.

    My table's DDL:
    CREATE TABLE MYTABLE
    (
      MYID                   NUMBER GENERATED ALWAYS AS IDENTITY ( START WITH 21 MAXVALUE 9999999999999999999999999999 MINVALUE 1 NOCYCLE CACHE 20 NOORDER NOKEEP NOSCALE) NOT NULL,
      EVENT_TYPE             VARCHAR2(75 BYTE)      NOT NULL,
      SOURCE                 VARCHAR2(50 BYTE)      NOT NULL,
      FORMAT                 VARCHAR2(75 BYTE)      NOT NULL,
      EVKEY                  VARCHAR2(60 BYTE)      NOT NULL,
      TITLE                  VARCHAR2(40 BYTE)      NOT NULL,
      TIMESTAMP              TIMESTAMP(6)           DEFAULT current_timestamp,
      DOCUMENT               CLOB                   NOT NULL,
      FILEREF                VARCHAR2(200 BYTE)     DEFAULT 'some text' NOT NULL,
      TRANNUM                NUMBER
    )
    LOB (DOCUMENT) STORE AS SECUREFILE (
      TABLESPACE  LOB_DATA
    )
    TABLESPACE DATA;
    
    After import, the developers tested an insert and the row was inserted with MYID set to 1.

    There is no Primary Key on the table, which I pointed out to the developers.

    The sequence name was ISEQ$$_88174, and the query below returned the number 2 after import in the destination database:
    select "ISEQ$$_88174".nextval
    from dual;
    
    To "start where the sequence left off" in the source database, all I had to do was to execute:
    ALTER TABLE MYTABLE
    MODIFY (MYID  GENERATED ALWAYS AS IDENTITY ( START WITH LIMIT VALUE) );
    

    If I queried the sequence's next value now, it had changed from 2 to 10632547.

    I then checked the maximum value of the column:
    select max(MYID) from MYTABLE;
    
    And it returned 10632546; in other words, the next value would be the current max value + 1.

    Friday, November 11, 2022

    Getting ORA-01031: insufficient privileges when creating a view that is referring to another schema's table

    If you get ORA-01031 during creation of a view referring to a table in a different schema, try the following:
    grant select on foreign_schema.foreign_table to scott with grant option;
    

    Wednesday, November 9, 2022

    Listener output showing multiple instances when only one exists

    In one of my multitenant setups, I saw the following output from the listener:
    lsnrctl status cdb
    
    Service "sales" has 2 instance(s).
      Instance "cdb", status READY, has 2 handler(s) for this service...
      Instance "sales", status UNKNOWN, has 1 handler(s) for this service...
    
    There is only one instance in my setup, the container database called "cdb".
    What's causing the listener to view the pdb "sales" as a separate instance, then?

    See this post where Uwe Hesse explains:

    It is nothing to worry about when you see the same instance showing up twice, once with status READY and once with status UNKNOWN. READY means listener knows this because PMON of that instance told him so (dynamic registration) UNKNOWN means listener knows this because of SID_LIST entry in the listener.ora (static registration)

    Sure enough, I had a static entry in my listener.ora file:
    SID_LIST_cdb =
      (SID_LIST =
        (SID_DESC =
          (ORACLE_HOME= /u01/orasoft/product/19c)
          (SID_NAME = sales)
        )
      )
    

    If I remove the following part
    (SID_NAME = sales)
    
    and restart the listener, the extra instance will be removed from the listener output:
    lsnrctl stop cdb
    lsnrctl start cdb
    

    Output:
    Service "sales" has 1 instance(s).
      Instance "cdb", status READY, has 2 handler(s) for this service...
    

    Tuesday, November 8, 2022

    Workaround for error ERROR: OCI Error DPI (status = 7-DPI-007: invalid OCI handle or descriptor) when starting Golden Gate replicate process

    When trying to login to the golden gate cli interface, I got the following error:
    GGSCI (myserver1.oric.no) 7> dblogin useridalias mydb  domain admin
    ERROR: OCI Error DPI (status = 7-DPI-007: invalid OCI handle or descriptor).
    
    Cause:

    Incorrectly specified network configuration:
    sqlnet.ora
    names.default_domain = world
    
    tnsnames.ora
    mydb =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = mydb)
        )
      )
    
    To correct the error, you need to do either of the following:

    1. remove the names.default_domain from sqlnet.ora

    or

    2. add an alias to your tnsnames.ora file
    mydb.world =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = myserver1.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = mydb)
        )
      )
    

    Monday, October 31, 2022

    How to clear resolved errors from PDB_PLUG_IN_VIOLATIONS

    The query below shows 4 resolved messages in the view PDB_PLUG_IN_VIOLATIONS:
    SELECT NAME,CAUSE,TYPE,MESSAGE,STATUS
    FROM PDB_PLUG_IN_VIOLATIONS
    WHERE NAME='PDB$SEED'
    
    NAME CAUSE TYPE MESSAGE STATUS
    PDB$SEED SQL Patch ERROR Interim patch 31424070/24854845 (APPSST19C XTTS PDB - TABLE IMPORT/CREATION FAILED WITH ORA-39083 ORA-14334): Installed in the CDB but not in the PDB RESOLVED
    PDB$SEED SQL Patch ERROR Interim patch 34086870/24803071 (OJVM RELEASE UPDATE: 19.16.0.0.220719 (34086870)): Not installed in the CDB but installed in the PDB RESOLVED
    PDB$SEED SQL Patch ERROR '19.16.0.0.0 Release_Update 2207030222' is installed in the CDB but no release updates are installed in the PDB RESOLVED
    PDB$SEED SQL patch error ERROR Interim patch 34086870/24803071 (OJVM RELEASE UPDATE: 19.16.0.0.220719 (34086870)): ROLLBACK with status INITIALIZE in the CDB. RESOLVED

    The resolved messages can be easily removed with the procedure DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS:
    SYS@CDB$ROOT SQL> exec DBMS_PDB.CLEAR_PLUGIN_VIOLATIONS( pdb_name => 'PDB$SEED');
    

    Useful resources:

  • Mike Dietrich on the same topic
  • The Oracle 19c Documentation for the PDB_PLUG_IN_VIOLATIONS view

    Note that there is an Unpublished Bug 16192980 : NO SIMPLE WAY TO CLEAN ROWS FROM PDB_PLUG_IN_VIOLATIONS AFTER DBMS_PDB CALL that prevents you from discarding no-impact warnings.
    According to Oracle there should be a fix for this in version 19.10, but I still struggle to remove messages like
      Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.
      
    which can be ignored. Read more about Database option mismatch errors in Doc ID 2020172.1
    Friday, October 21, 2022

    Rename schema in PostgreSQL: syntax

    \connect db01
    alter schema schema1 rename to schema1_old;
    
    Documented here

    Rename database in PostgreSQL: syntax

    alter database db01 rename to db01_old;
    
    Documented here

    See this post for how to create a new database in PostgreSQL

    Wednesday, October 19, 2022

    How to prevent a user from login into a postgres database

    alter role scott with nologin;
    ALTER ROLE
    
    Remember that in postgres, "users" and "roles" are used interchangeably, so this would also work:
    alter user scott with nologin;
    ALTER ROLE
    
    In either case, the postgres server will echo "ALTER ROLE" back to the administrator.

    To see the result of such an operation:
    echo "\du" | psql
    
    Example output:
                                                   List of roles
                Role name            |                         Attributes                         |      Member of
    ---------------------------------+------------------------------------------------------------+---------------------
     scott                           | Cannot login                                              +| {business_users}
     

    You can also query the postgres data dictionary for the answer, like this:
    postgres=# select rolcanlogin from pg_roles where rolname='jim';
     rolcanlogin
    -------------
     t
    (1 row)
    
    postgres=# select rolcanlogin from pg_roles where rolname='scott';
     rolcanlogin
    -------------
     f
    (1 row)
    
    where t and f indicate that the user can and cannot log in, respectively.

    Tuesday, October 11, 2022

    How to solve "Service name or network name of ... in the PDB is invalid or conflicts with an existing service name or network name in the CDB."

    From the alert log during startup of the container database called "cdb":
    pdb1(3):***************************************************************
    pdb1(3):WARNING: Pluggable Database sales with pdb id - 3 is
    pdb1(3):         altered with errors or warnings. Please look into
    pdb1(3):         PDB_PLUG_IN_VIOLATIONS view for more details.
    pdb1(3):***************************************************************
    
    When I check the PDB_PLUG_IN_VIOLATIONS I see this:
    select name,cause,type,message,status,action 
    from pdb_plug_in_violations;
    

    NAME CAUSE TYPE MESSAGE STATUS ACTION
    sales Service Name Conflict WARNING Service name or network name of service salesXDB in the PDB is invalid or conflicts with an existing service name or network name in the CDB. PENDING Drop the service and recreate it with an appropriate name.

    Check the CDB_SERVICES view:
    SELECT name,network_name,creation_date,pdb,enabled
    FROM   cdb_services
    where con_id=3
    and name='salesXDB'
    ORDER BY 1;
    

    NAME NETWORK_NAME CREATION_DATE PDB ENABLED
    salesXDB salesXDB 03/03/2022 11:28:56 sales NO

    There is indeed a service called salesXDB, a remnant from an earlier point in time when "sales" was a non-CDB database.
    It has now been replaced by the container database's XDB service, in my case called "cdbXDB".

    How to address the warnings

    Log onto the sales pluggable database:
    sqlplus / as sysdba
    
    alter session set container=sales;
    
    Session altered.
    
    Remove the service from the PDB:
    exec dbms_service.delete_service('salesXDB');
    
    PL/SQL procedure successfully completed.
    
    If you check the CDB_SERVICES again, it will now be gone.

    Restart the pluggable database:
    sqlplus / as sysdba
    
    alter session set container=sales;
    
    Session altered.
    
    shutdown immediate
    startup
    

    If you query the PDB_PLUG_IN_VIOLATIONS again, you will see that the status of the error is now set to RESOLVED:
    time NAME CAUSE TYPE MESSAGE STATUS ACTION
    11.10.2022 12:49 sales Service Name Conflict WARNING Service name or network name of service salesXDB in the PDB is invalid or conflicts with an existing service name or network name in the CDB. RESOLVED Drop the service and recreate it with an appropriate name.


    Source: PDB Name Conflicts With Existing Service Name In The CDB Or The PDB (Doc ID 2247291.1) from Oracle Support

    How do I make sure that services are automatically brought online in an Oracle 19c Pluggable Database?

    In an Oracle 19c pluggable database, the service_names parameter has no bearing on whether or not the services are brought up automatically at instance or server restart.

    If the service was started in the PDB and you then saved the pluggable database's state, it will be brought online together with the pluggable database automatically.

    First, create and start your desired service from the PDB:
    SQL> alter session set container=sales;
    
    Session altered.
    
    SQL> exec dbms_service.create_service(service_name=>'online_users', network_name=>'online_users');
    
    PL/SQL procedure successfully completed.
    
    SQL> exec dbms_service.start_service('online_users');
    
    PL/SQL procedure successfully completed.
    
    Switch to your root container:
    SYS@cdb>SQL>alter session set container=CDB$ROOT;
    
    Session altered.
    
    Then, save the state of your PDB:
    SYS@cdb>SQL>alter pluggable database sales save state;
    
    Pluggable database altered.
    
    Or, alternatively, save all current states of your PDBs:
    SYS@cdb>SQL> ALTER PLUGGABLE DATABASE ALL SAVE STATE;
    
    Pluggable database altered.
    

    The service_names parameter is actually deprecated in Oracle 19c. The documentation states
    To manage your services, Oracle recommends that you instead use the SRVCTL command-line utility, the GDSCTL command-line utility, or the DBMS_SERVICE PL/SQL package.

    How to check the code level currently in use in an EBS database

    To find the E-Business Suite and AD/TXK patch levels, log into the EBS container and use the following two queries:
    col abbreviation format a20
    col name format a40
    col codelevel format a20
    select  ABBREVIATION, NAME, codelevel FROM apps.AD_TRACKABLE_ENTITIES where abbreviation in ('txk','ad')
    
    ABBREVIATION         NAME                                     CODELEVEL
    -------------------- ---------------------------------------- --------------------
    ad                   Applications DBA                         C.11
    txk                  Oracle Applications Technology Stack     C.11
    
    select release_name from apps.FND_PRODUCT_GROUPS;
    
    RELEASE_NAME
    ------------
    12.2.9
    

    Friday, September 30, 2022

    What exactly is the LOCAL_LISTENER parameter, and what does it do?

    What is the local_listener parameter?

    It is a parameter that points to the listener running on the local server.

    What is the purpose of the local_listener parameter?

    It is used for dynamic listener registration: the process where the database contacts the listener and automatically registers the services it offers.

    How is the database registering its services with the listener?

    From Oracle 12c and onwards, this registration process is handled by the LREG process.
    The lreg process is easy to spot from the operating system:
    ps -fu oracle |grep lreg
    oracle     22023       1  0 Sep19 ?        00:00:30 ora_lreg_cdb
    
    How is dynamic listener registration implemented?

    Start the listener process with or without a parameter file. If you do not have a listener.ora parameter file, the listener will run using default values.
    In either case, simply start the listener process by typing
    lsnrctl start
    
    After a little while (usually within a minute), the database will have registered its services with the listener.
    If you do not want to wait, you can manually force a registration by logging onto the database as a DBA and issuing:
    alter system register;
    
    How will the database find the listener process?

    This is where the local_listener parameter comes in: it tells the database where to find the listener, and which port to use.
    The value of my local_listener parameter is:
    SYS@cdb>SQL>show parameter local
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    local_listener                       string      LISTENER_CDB
    

    My $TNS_ADMIN/tnsnames.ora file has an entry that matches the value LISTENER_CDB:
    LISTENER_CDB =
      (ADDRESS = (PROTOCOL = TCP)(HOST = testserver1.oric.no)(PORT = 1521))
    
    This makes it possible for the database to register its services with the listener.
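    If you need to set or change the parameter yourself, it can be done online. A minimal sketch, reusing the LISTENER_CDB alias from the tnsnames.ora entry above (the inline address with port 1522 is a hypothetical alternative):

    ```sql
    -- point local_listener at the tnsnames.ora alias shown above
    alter system set local_listener='LISTENER_CDB' scope=both;
    -- or supply the address inline (hypothetical non-default port):
    -- alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=testserver1.oric.no)(PORT=1522))' scope=both;
    -- then force immediate registration instead of waiting
    alter system register;
    ```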

    Even if local_listener is not set at all, dynamic registration will still work, provided your listener runs with the default values, which are
    (ADDRESS = (PROTOCOL=TCP)(HOST=hostname)(PORT=1521)) where hostname is the network name of the local host.
    
    How does the database know which services should be registered with the listener?

    It registers the services listed in the v$active_services view.
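    A quick way to inspect that list yourself, sketched against the view named above:

    ```sql
    -- services the instance will register with the listener
    select name, con_id from v$active_services order by con_id, name;
    ```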

    How does the local_listener parameter work under the multitenant architecture?

    The local_listener parameter works identically in a non-CDB and a multitenant database.
    However, in a multitenant setup, remember that each pdb has its own corresponding default service.
    This service cannot be stopped unless you unplug or shut down your pdb.

    If you attempt to stop the default service of a running pdb, you will receive:
    ORA-44793: cannot stop internal services
    
    Any other services created after this point can be stopped and started at will, and the listener will follow suit.


    Let's see how it works:

    First, list the services currently registered with the listener (output from lsnrctl status):
    Services Summary...
    Service "cdb.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    Service "cdbXDB.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    Service "sales.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    The command completed successfully
    
    The listener supports the services for the root container ("cdb") and the pdb ("sales").

    Let's try to create and start a new service in my pdb called "sales":
    SQL> alter session set container=sales;
    
    Session altered.
    
    SQL> exec dbms_service.create_service(service_name=>'online_users', network_name=>'online_users');
    
    PL/SQL procedure successfully completed.
    
    SQL> exec dbms_service.start_service('online_users');
    
    PL/SQL procedure successfully completed.
    
    List the services supported by the listener now:
    Services Summary...
    Service "cdb.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    Service "cdbXDB.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    Service "online_users.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    Service "sales.oric.no" has 1 instance(s).
      Instance "cdb", status READY, has 1 handler(s) for this service...
    
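    To reverse the example, a user-created service can be stopped and removed again with dbms_service; the pdb's own default service, as noted above, cannot. A sketch:

    ```sql
    alter session set container=sales;
    -- stop and remove the service created above
    exec dbms_service.stop_service('online_users');
    exec dbms_service.delete_service('online_users');
    ```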

    For automatic restart of pluggable databases and their services, please see this post.

    Documentation:

  • local_listener
  • service_names
  • dbms_service
  • A great blog post by Mr. Ed Stevens on the same topic
    Wednesday, September 28, 2022

    How to use the html markup option in sqlplus

    Use the following directives in sqlplus:
    set timing on
    set trimspool on
    set markup html on spool on head MYHEADLINEHERE table "width='50%' border='1'"
    spool myfile.html
    REM run your queries here, then close the spool and reset the markup:
    spool off
    set markup html off
    

    Tuesday, September 27, 2022

    What is the difference between dba_services and v$services?

    Why do we see differences in output between the dba_services and v$services views?
    SQL>select name from v$services;
    
    NAME
    ------------------------------
    SYS$BACKGROUND
    SYS$USERS
    ebs_accnt
    accnt
    cdbXDB
    cdb
    accnt_ebs_patch
    
    7 rows selected.
    
    
    
    SQL>  select name from dba_services;
    
    NAME
    ------------------------------
    SYS$BACKGROUND
    SYS$USERS
    accntXDB
    cdb
    cdbXDB
    cdb.skead.no
    ebs_PATCH
    
    7 rows selected.
    
    New services were added and a few services were stopped recently.

    Answer:

    The difference is due to the following:

    * DBA_SERVICES - This view lists all the services that are in the database. This includes both started and stopped services.
    * V$SERVICES - This view lists only the services that have been started at some point since instance startup. When you stop a running service, it will still be listed in this view until the next instance restart.

    Oracle states that "The difference between these outputs does not pose any issue."

    Source: Differences in outputs between DBA_SERVICES and V$SERVICES view (Doc ID 1496074.1) found at support.oracle.com

    Thursday, September 15, 2022

    What is the catcon.pl script used in a Multitenant Database installation?

    What is the catcon.pl script?

    From Doc ID 1932340.1:

    Oracle provides the catcon.pl script to execute scripts against the container database and its pluggable databases at once. In a CDB, the catcon.pl script is the best way to run SQL scripts and SQL statements: it can run them in the root and in specified PDBs in the correct order, and it generates log files.

    It can be found in the $ORACLE_HOME/rdbms/admin folder.

    Example of usage

    To execute preupgrd.sql in the CDB and all PDBs, copy preupgrd.sql and utlppkf.sql from the software version you want to upgrade to into a temporary location (say /u01/oracle). The PDBs should be open before executing the script; their status can be checked using:
    SYS@cdb> connect / as sysdba
    SYS@cdb>SQL>show pdbs
    
        CON_ID CON_NAME                       OPEN MODE  RESTRICTED
    ---------- ------------------------------ ---------- ----------
             2 PDB$SEED                       READ ONLY  NO
             3 SALES                          READ WRITE NO
    
    
    At OS prompt, execute:
    cd $ORACLE_HOME/rdbms/admin
    $ORACLE_HOME/perl/bin/perl catcon.pl -d /u01/oracle -l /home/oracle/preupgrd_logs -b preupgrade_log_base_name preupgrd.sql
    
    The arguments used are:

    -d = directory containing the file to be run (Location of preupgrade script)
    -l = directory to use for spool log files
    -b = base name for log and spool file names

    Note that if neither the -c nor the -C parameter is specified, catcon.pl runs the script in all containers by default:

    -c - container(s) in which to run sqlplus scripts for example, -c 'PDB1 PDB2'
    -C - container(s) in which NOT to run sqlplus scripts, i.e. containers to skip
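    As a sketch of the -c switch, the following would run utlrp.sql (the standard recompilation script shipped in $ORACLE_HOME/rdbms/admin) only in the SALES pdb; the log directory and base name are hypothetical:

    ```shell
    cd $ORACLE_HOME/rdbms/admin
    $ORACLE_HOME/perl/bin/perl catcon.pl -d $ORACLE_HOME/rdbms/admin \
      -l /home/oracle/utlrp_logs -b utlrp -c 'SALES' utlrp.sql
    ```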

    Sources:
    "How to execute sql scripts in Multitenant environment (catcon.pl) (Doc ID 1932340.1)"
    "How to recompile invalid objects in all PDBs at the same time (Doc ID 2880643.1)"

    Both from Oracle Support.

    Thursday, September 1, 2022

    What is the missing privilege when receiving ORA-01031: insufficient privileges when switching container?

    You need to grant the SET CONTAINER system privilege to a common user, in order for the user to be able to switch container.

    Example: you have a common user used for auditing, C##AUDITADMIN:
    CREATE USER c##auditadmin IDENTIFIED BY <password> CONTAINER=ALL;
    GRANT AUDIT_ADMIN TO c##auditadmin CONTAINER=ALL;
    GRANT SELECT ANY TABLE TO c##auditadmin CONTAINER=ALL;
    GRANT CREATE SESSION TO c##auditadmin CONTAINER=ALL;
    
    You log in with your user to the root container:
    sqlplus c##auditadmin@cdb
    Enter password:
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.16.0.0.0
    
    C##AUDITADMIN@cdb SQL> alter session set container=pdb1;
    ERROR:
    ORA-01031: insufficient privileges
    
    To grant the required privilege, log in as sysdba:
    sqlplus / as sysdba
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.16.0.0.0
    
    Grant the SET CONTAINER privilege:
    SYS@cdb SQL> grant set container to c##auditadmin container=all;
    
    Grant succeeded.
    
    Connect with C##AUDITADMIN again, and switch to the PDB1 container within your session:
    sqlplus c##auditadmin@cdb
    
    Connected to:
    Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
    Version 19.16.0.0.0
    
    C##AUDITADMIN@cdb SQL> show con_name
    
    CON_NAME
    ------------------------------
    CDB$ROOT
    C##AUDITADMIN@cdb SQL> alter session set container=pdb1;
    
    Session altered.
    
    C##AUDITADMIN@cdb SQL> show con_name
    
    CON_NAME
    ------------------------------
    PDB1
    
    

    Wednesday, August 31, 2022

    Rename table in postgreSQL: syntax

    Rename a table:
    alter table myschema.mytable rename to mytable_old;
    
    Change ownership for a table:
    alter table myschema.mytable owner to anotherschema;
    

    Friday, August 26, 2022

    How to generate insert statements for a table in PostgreSQL

    Thanks to Carlos Becker for pointing out the following solution for generating insert statements for a table in PostgreSQL:
    pg_dump mydb -n 'myschema' -a -Fp -t myschema.mytable --column-inserts | gzip > /data1/insert_statements.sql.gz

    How to find the oid for a table in PostgreSQL

    Connect to the relevant database using the tool of your choice, then execute the following to find the oid of a table:
    mydb # select 'myschema.mytable'::regclass::oid;
       oid
    ----------
     19561436
    (1 row)
    
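    The cast also works in the other direction, mapping an oid back to a qualified table name; a sketch assuming the oid from the output above:

    ```sql
    -- map an oid back to its table name
    select 19561436::regclass;
    ```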

    Thursday, August 25, 2022

    Query to find schema size in PostgreSQL

    SELECT pg_size_pretty(sum(pg_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)))::bigint) FROM pg_tables 
    WHERE schemaname = 'yourschema';
    
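    Note that pg_relation_size counts only the table's main data fork. A variant that also includes indexes and TOAST data, under the same assumption about the schema name:

    ```sql
    SELECT pg_size_pretty(sum(pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)))::bigint)
    FROM pg_tables
    WHERE schemaname = 'yourschema';
    ```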

    Wednesday, August 24, 2022

    Does PostgreSQL support global indexes on partitioned tables?

    PostgreSQL does not support global indexes, i.e. indexes that span all partitions of a partitioned table. The documentation states:

    Although all partitions must have the same columns as their partitioned parent, partitions may have their own indexes, constraints and default values, distinct from those of other partitions.

    and

    Create an index on the key column(s), as well as any other indexes you might want, on the partitioned table. ... This automatically creates a matching index on each partition, and any partitions you create or attach later will also have such an index. An index or unique constraint declared on a partitioned table is “virtual” in the same way that the partitioned table is: the actual data is in child indexes on the individual partition tables.

    A proposal to add such a feature seems to have been made back in 2019, but no implementation currently exists.

    I also found this comparison between Oracle and PostgreSQL that elaborates on the topic a bit more.

    Tuesday, August 23, 2022

    How to extract the hostname without the FQDN in a shell variable

    Taken from https://stackoverflow.com/questions/36151206/hostname-variable-in-shell-script-not-fqdn and really useful:
    export HOST=${HOSTNAME%%.*}
    
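    The %%.* expansion removes the longest suffix starting with a dot, leaving only the short host name. A self-contained sketch with a hypothetical FQDN:

    ```shell
    # hypothetical FQDN; on a real server $HOSTNAME is set by the shell
    HOSTNAME=testserver1.oric.no
    # %%.* strips the longest match of ".*" from the end, i.e. the domain part
    HOST=${HOSTNAME%%.*}
    echo "$HOST"   # testserver1
    ```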

    Friday, August 12, 2022

    Workaround for ORA-65011 in an upgraded EBS database running multitenant

    If you are converting a non-cdb EBS database to a pdb in a multitenant setup, and your non-cdb had a name in lower case ("pdb1" instead of "PDB1"), the various configuration scripts that you run during the upgrade will set the hidden parameter _pdb_name_case_sensitive to TRUE.

    If you try to connect to your pdb as you normally would, you will see this error:
    SYS@cdb SQL>  alter session set container=pdb1;
    ERROR:
    ORA-65011: Pluggable database PDB1 does not exist.
    
    Check the setting of _pdb_name_case_sensitive:
    SYS@cdb SQL> show parameter pdb
    
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ------------------------------
    _pdb_name_case_sensitive             boolean     TRUE
    

    To connect, enclose your pdb name with double quotation marks:
    SYS@cdb SQL> alter session set container="pdb1";
    
    Session altered.
    
    SYS@cdb SQL> show con_name
    
    CON_NAME
    ------------------------------
    pdb1
    SYS@cdb SQL> show con_id
    
    CON_ID
    ------------------------------
    3
    
    The note ORA-65011: Pluggable Database Does Not Exist (Doc ID 2642230.1) from Oracle Support confirms this finding.

    Tuesday, August 2, 2022

    How to change an Oracle instance parameter based on database version and container type

    Here is a simple pl/sql script that sets a parameter based on database version and container type.

    set serveroutput on
    DECLARE
    
    v_version     sys.v_$instance.version%type;
    v_param_value sys.v_$system_parameter.value%type;
    v_pdb boolean;
    v_con_id number;
    v_con_type varchar(7);
    
    BEGIN
    
    -- find current setting of optimizer_dynamic_sampling
    select value
    into   v_param_value
    from   v$parameter
    where  name = 'optimizer_dynamic_sampling';
    
    -- find current(major) version
    select version
    into   v_version
    from   sys.v_$instance;
    
    -- find container type
    select 
        case when (sys_context('USERENV','CON_ID') = 0 ) then 'NON-CDB' 
           when (sys_context('USERENV','CON_ID') = 1 ) then 'CDB'
           when (sys_context('USERENV','CON_ID') > 1 ) then 'PDB'
        end
    into v_con_type
    from DUAL;
    
    --dbms_output.put_line('v_con_type: ' || v_con_type);
    --dbms_output.put_line('version: ' || v_version);
    --dbms_output.put_line('optimizer_dynamic_sampling: ' || v_param_value);
    
    v_pdb := (v_con_type = 'PDB');
    
    
    CASE WHEN v_version IN ('18.0.0.0.0','19.0.0.0.0') AND v_param_value = '0' THEN
        dbms_output.put_line('version is >= 18 and optimizer_dynamic_sampling = 0');
        
        IF v_pdb = FALSE THEN
          dbms_output.put_line('execute immediate alter system set optimizer_dynamic_sampling=4 scope=both;');
          execute immediate 'alter system set optimizer_dynamic_sampling=4 scope=both';
        ELSIF v_pdb = TRUE THEN
          dbms_output.put_line('execute immediate alter system set optimizer_dynamic_sampling=4 container=current scope=both;');
          execute immediate 'alter system set optimizer_dynamic_sampling=4 container=current scope=both';
        END IF;
        
    WHEN v_version IN ('18.0.0.0.0','19.0.0.0.0') AND v_param_value <> '0' THEN   
         dbms_output.put_line('version is >= 18 and optimizer_dynamic_sampling is already set.');
    ELSE
      dbms_output.put_line('version is < 18');
    END CASE;
    
    END;
    /
    

    Thursday, July 7, 2022

    How to check for existence of multiple files in a shell script

    # check that the required installation files exist
    if [ ! -f /tmp/file1 ] || [ ! -f /tmp/file2 ] || [ ! -f /tmp/file3 ]
    then
      echo "Some of the files file1, file2 or file3 are missing from /tmp"
      exit 1
    fi
    
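    The same check can be written as a loop that reports exactly which files are missing; a self-contained sketch using a scratch directory and hypothetical file names:

    ```shell
    # simulate the check in a temporary directory
    dir=$(mktemp -d)
    touch "$dir/file1" "$dir/file2"   # file3 deliberately left missing
    missing=""
    for f in file1 file2 file3; do
        [ -f "$dir/$f" ] || missing="$missing $f"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
    fi
    rm -rf "$dir"
    ```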

    Tuesday, July 5, 2022

    How to list all nfs mountpoints

    findmnt -t nfs
    
    TARGET SOURCE                                               FSTYPE OPTIONS
    /u01   pzl2ora1:/Oracle/software/pzh0oric/u01 nfs    rw,nodiratime,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=300,acregmax=300,acdirmin=300,ac
    /u02   pzl2ora1:/Oracle/pzh0oric/u02          nfs    rw,nodiratime,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=300,acregmax=300,acdirmin=300,ac
    /u03   pzl2ora1:/Oracle/pzh0oric/u03          nfs    rw,nodiratime,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=300,acregmax=300,acdirmin=300,ac
    /u04   pzl2ora1:/Oracle/pzh0oric/u04          nfs    rw,nodiratime,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=300,acregmax=300,acdirmin=300,ac
    

    Thursday, June 30, 2022

    How to solve ORA-01187: cannot read from file because it failed verification tests

    After a successful clone, the alert log reports:
    2022-06-30T04:04:17.542368+02:00
    Errors in file /orainst/oracle/diag/rdbms/testdb01/testdb01/trace/testdb01_dbw0_130154.trc:
    ORA-01186: file 201 failed verification tests
    ORA-01122: database file 201 failed verification check
    ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'
    ORA-01203: wrong incarnation of this file - wrong creation SCN
    File 201 not verified due to error ORA-01122
    

    Any operation you try against the database will give the following error stack in return:
    ORA-01187: cannot read from file  because it failed verification tests
    ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'
    ORA-06512: at "SYS.DBMS_LOB", line 741
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 5420
    ORA-06512: at line 1
    

    Check tempfile situation:
    SYS@testdb01>SQL>select ts#,status,enabled, name from v$tempfile;
    
           TS# STATUS  ENABLED    NAME
    ---------- ------- ---------- ---------------------------------
             3 ONLINE  READ WRITE /data/oradata/testdb01/temp01.dbf
             3 ONLINE  READ WRITE /data/oradata/testdb01/temp02.dbf
    
    A query against dba_temp_files however, will give an error:
    SYS@testdb01>SQL>SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP';
    SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP'
                                            *
    ERROR at line 1:
    ORA-01187: cannot read from file  because it failed verification tests
    ORA-01110: data file 201: '/data/oradata/testdb01/temp01.dbf'
    

    Is the temporary tablespace database default?
    SYS@testdb01>SQL>
    col property_value format a30
    SELECT PROPERTY_VALUE   
    FROM DATABASE_PROPERTIES 
    WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';
    
    PROPERTY_VALUE
    -----------------------
    TEMP
    
    Yes it is, so we cannot drop it; instead, we have to add new tempfiles and drop the old ones.

    Add new tempfile:
    SYS@testdb01>SQL>ALTER TABLESPACE temp ADD TEMPFILE '/data/oradata/testdb01/temp.dbf' size 256M;
    
    Tablespace altered.
    

    Drop the old ones:
    SYS@testdb01>SQL>ALTER DATABASE TEMPFILE '/data/oradata/testdb01/temp01.dbf' DROP INCLUDING DATAFILES;
    
    Database altered.
    
    SYS@testdb01>SQL>ALTER DATABASE TEMPFILE '/data/oradata/testdb01/temp02.dbf' DROP INCLUDING DATAFILES;
    
    Database altered.
    
    

    Both of the old tempfiles are now gone from the data dictionary:
    SYS@testdb01>SQL>
    set lines 200
    col tablespace_name format a20
    col file_name format a50
    SELECT tablespace_name, file_name  FROM dba_temp_files WHERE tablespace_name = 'TEMP';
    
    TABLESPACE_NAME         FILE_NAME
    ---------------------   --------------------------------------
    TEMP                    /data/oradata/testdb01/temp.dbf
    
    Note that the old tempfiles are still on disk:
    SYS@testdb01>SQL>!ls -la /data/oradata/testdb01/temp01.dbf
    -rw-r----- 1 oracle dba 114302976 Jun 30 04:04 /data/oradata/testdb01/temp01.dbf
    

    If you want to add the old tempfiles back to your temp tablespace, you can do so with the REUSE keyword:
    SYS@testdb01>SQL>ALTER TABLESPACE temp ADD TEMPFILE '/data/oradata/testdb01/temp01.dbf' size 256M reuse;
    
    Tablespace altered.
    
    In my case, the other tempfile, /data/oradata/testdb01/temp02.dbf, was not physically present on disk.
    The alert log confirms this:
    2022-06-30T04:04:18.302852+02:00
    Errors in file /data/oracle/diag/rdbms/testdb01/testdb01/trace/testdb01_dbw0_130154.trc:
    ORA-01157: cannot identify/lock data file 202 - see DBWR trace file
    ORA-01110: data file 202: '/data/oradata/testdb01/temp02.dbf'
    ORA-17503: ksfdopn:4 Failed to open file /data/oradata/testdb01/temp02.dbf
    ORA-17500: ODM err:File does not exist