Wednesday, December 22, 2021

What is the difference between "force parallel" and "enable parallel" used in the "alter session" statement in Oracle?

What is the difference between these two statements?
ALTER SESSION ENABLE PARALLEL DML | DDL | QUERY;
and
ALTER SESSION FORCE PARALLEL DDL | DML | QUERY;
Answer:

The difference here lies in the details: the ENABLE statement merely permits parallel execution, and you still have to request it through a parallel attribute on the object or a parallel hint in the statement. If you don't, Oracle executes the statements serially. The FORCE statement parallelizes everything it can with the default DOP (degree of parallelism), without you having to state anything about it in your DML, DDL or query statements.

If the default DOP isn't good enough for you (for example during an index rebuild), you can force your session to use a DOP higher than the default, like this:
ALTER SESSION FORCE PARALLEL DDL | DML | QUERY PARALLEL 32;
This will override any other DOP in the same session and use 32 parallel workers.
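
As a quick sketch of the practical difference (the table, column and DOP below are made up for illustration): with ENABLE, the update still needs a parallel hint or a parallel attribute on the table to actually run in parallel, while with FORCE no hint is required:
-- parallel DML is merely enabled; the hint decides the DOP
ALTER SESSION ENABLE PARALLEL DML;
UPDATE /*+ PARALLEL(sales 8) */ sales SET status = 'PROCESSED' WHERE status = 'NEW';
COMMIT;

-- parallel DML is forced; every eligible statement runs with DOP 8, no hint needed
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
UPDATE sales SET status = 'PROCESSED' WHERE status = 'NEW';
COMMIT;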

Alter session in 19c is documented here
The concept of forcing/enabling parallelization is explained here

Tuesday, December 21, 2021

How to work around error [INS-08101] Unexpected error while executing the action at state: ‘supportedOSCheck’

Thanks so much to Martin Berger for his blog post showing how to work around an error that shows up when you attempt to install Oracle 19c software on a RHEL8 distribution.

I wanted to install Oracle software on a RH8.5 Linux server:

cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
  

This is the error I received:

[INS-08101] Unexpected error while executing the action at state: 'supportedOSCheck'

In my case, the only thing I had to do was to add another environment variable:
export CV_ASSUME_DISTID=OEL7.8
Execute the installer again and you will see a different screen:
./runInstaller



Wednesday, December 15, 2021

How to find tables with specific column data types in a PostgreSQL database

Thanks to Bart Gawrych for blogging this very useful post, on which my own post is based.

I needed to find all tables in a specific schema that used json or jsonb data type columns. Here is my query, saved in a file called "find_cols.sql":
select col.table_schema,
       col.table_name,
       col.ordinal_position as column_id,
       col.column_name,
       col.data_type
from information_schema.columns col
join information_schema.tables tab on tab.table_schema = col.table_schema
                                   and tab.table_name = col.table_name
                                   and tab.table_type = 'BASE TABLE'
where col.table_schema in ('myschema')
      and col.data_type IN ( 'json','jsonb' )
order by col.table_schema,
         col.table_name,
         col.ordinal_position;

Execute it in psql directly on the server like this:
psql mydb -f find_cols.sql

Wednesday, December 8, 2021

How to add a string to a specific line in a text file

Here is an example of how I inserted a line needed in a postgres pg_restore command, at line 20:
sed -i '20s/^/SET search_path to sales,public;\n/' myfile.txt

If your string contains special characters, remember to escape them properly. In the next example, I want to insert the string \timing on at line number 7 in myfile.txt:
 sed -i '7s/^/\\timing on\n/' myfile.txt

How to replace a specific string in a very large text file

Here is how I removed the string "public." from a 4G file:
sed -i 's/public\.//g' mylargetextfile.txt
Notice the backslash in front of the period: it tells sed to interpret the dot literally, not as the "any character" wildcard used in regular expressions. Source: Stackoverflow

Wednesday, November 3, 2021

An Oracle procedure that allows users to kill their own sessions

First, create a view that lets the users check their own sessions:
connect / as sysdba

create or replace view my_sessions as
select username,sid,serial#,osuser,status,program,machine
from v$session
where username=SYS_CONTEXT('userenv','CURRENT_SCHEMA');
Create a public synonym, and grant select to public:
create public synonym my_sessions for my_sessions;
grant select on my_sessions to public;
For the procedure to compile and run in the SYSTEM schema, the "alter system" privilege and the select privilege on v_$session must be granted directly to the user - privileges received through a role are not visible inside definer's rights PL/SQL.
grant alter system to system;
grant select on v_$session to system;
Log in as system, and create the procedure:
CREATE OR REPLACE PROCEDURE KILL_SESSION(PN_SID NUMBER,PN_SERIAL NUMBER) AUTHID DEFINER
AS
LV_USER VARCHAR2(30);
EXE_USER VARCHAR2(30);
BEGIN
    SELECT S.USERNAME INTO LV_USER 
    FROM V$SESSION S, USER_USERS U 
    WHERE S.SID    = PN_SID 
    AND S.SERIAL#  = PN_SERIAL;

    SELECT USERNAME INTO EXE_USER
    FROM V$SESSION 
    WHERE AUDSID=SYS_CONTEXT('userenv','SESSIONID');

 IF EXE_USER = LV_USER THEN
    EXECUTE IMMEDIATE 'alter system kill session '''||PN_SID||','||PN_SERIAL||'''';
    dbms_output.put_line('Session ' || PN_SID || ',' || PN_SERIAL || ' killed.');
ELSE
    RAISE_APPLICATION_ERROR(-20000,'Only your own sessions may be killed.');
END IF;
END;
/  
Create a synonym for the procedure and give everyone execute privileges on it:
CREATE PUBLIC SYNONYM KILL_SESSION FOR KILL_SESSION;
GRANT EXECUTE ON KILL_SESSION TO PUBLIC;
Now the users have a way to check their own sessions, and to terminate them if they choose to do so:
connect scott
select * from my_sessions;

USERNAME                    SID    SERIAL# OSUSER               STATUS               PROGRAM                                          MACHINE
-------------------- ---------- ---------- -------------------- -------------------- ------------------------------------------------ ----------------------------------------------------------------
SCOTT                        1152       9510 oracle               INACTIVE             sqlplus@myserver.mydomain.com (TNS V1- myserver.mydomain.com
SCOTT                        1726      31531 oracle               ACTIVE               sqlplus@myserver.mydomain.com (TNS V1- myserver.mydomain.com
Kill a session:
scott@mydb> exec kill_session(1152,9510);
Afterwards, the view will show session 1152 in status KILLED:
USERNAME                    SID    SERIAL# OSUSER               STATUS               PROGRAM                                          MACHINE
-------------------- ---------- ---------- -------------------- -------------------- ------------------------------------------------ ----------------------------------------------------------------
SCOTT                        1152       9510 oracle               KILLED             sqlplus@myserver.mydomain.com (TNS V1- myserver.mydomain.com
SCOTT                        1726      31531 oracle               ACTIVE               sqlplus@myserver.mydomain.com (TNS V1- myserver.mydomain.com

Friday, October 29, 2021

How to extract a specific file from a .tar file

Say I would like to extract an old version of the oracle inventory, packed in a tar file called "oracle.tar". Change directory to the physical location you want to extract the files to:
cd /home/oracle
Find the path to the file you are interested in:
tar -tvf oracle.tar | grep oraInst.loc
-rw-rw---- oracle/dba        55 2018-01-25 15:02 ./oraInventory/oraInst.loc
Extract the tar file using the path displayed above:
tar xvf /u01/data/oracle.tar ./oraInventory/oraInst.loc

Tuesday, October 26, 2021

How to only insert rows that do not violate a specific constraint

PostgreSQL offers an alternative to raising an exception upon a unique key violation: ON CONFLICT ON CONSTRAINT
INSERT INTO sales.sales_history 
SELECT * from sales.sales_history_p201801 
ON CONFLICT ON CONSTRAINT uc_sh1 DO NOTHING;
This comes in handy when transferring rows from an old table to a partially populated new table; duplicate rows will simply be ignored.
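
Note that ON CONFLICT ON CONSTRAINT requires a matching unique (or exclusion) constraint on the target table. A minimal sketch of what the constraint uc_sh1 could look like - the column name sale_id is an assumption:
ALTER TABLE sales.sales_history
  ADD CONSTRAINT uc_sh1 UNIQUE (sale_id);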

Documentation for PostgreSQL 11 can be found here.

Monday, October 18, 2021

How to export a single table in PostgreSQL

Example of exporting a single table. In the directory-format:
pg_dump -Fd mydb -t myschema.mytable -v -f  /data/mydirectory -j 2
Where
-Fd = export into a directory-format archive
-t = table name
-v = verbose
-f = the target directory
-j = number of export jobs
Note that -f can be omitted for file-based output formats.
Since I am using the directory output format, however, I am required to use it, and it then specifies the target directory instead of a file. The directory is created by pg_dump and must not exist beforehand.

In plain text file format:
pg_dump mydb -n 'myschema' -a -Fp  -t mytable | gzip > mydb_myschema_mytable.gz
Where
-Fp = plain text format
-a  = data only, no metadata. In the case of plain text, this means no "create table" statement is generated
-t  = table name
When the export is done, you may unpack the mydb_myschema_mytable.gz and you will observe that the file contains COPY directives for each row.

In custom format:
pg_dump mydb -n 'myschema' -a -Fc  -t mytable | gzip > mydb_myschema_mytable_custom.gz
Where all the options are identical to the plain text format, except for
-Fc = custom data format

Friday, October 15, 2021

How to list the number of rows per partition in PostgreSQL

I found an old post from 2013 on stackoverflow.com which solved a problem I had finding the number of rows per partition in PostgreSQL.

I couldn't find a simple way to list the estimated number of rows per partition and subpartition in PostgreSQL so I created a view as suggested by the member MatheusOl:
CREATE OR REPLACE VIEW row_counts AS
    SELECT relname, reltuples::numeric
    FROM pg_class pg, information_schema.tables i
    WHERE pg.relname = i.table_name
        AND relkind='r'
        AND table_schema NOT IN ('pg_catalog', 'information_schema');
Log on to your database using psql and create the view. Then query it; the result is something similar to what you would get from dba_tab_partitions and dba_tab_subpartitions in Oracle:
            relname              |  reltuples
---------------------------------+-------------
 event_sp_2019_ukf               |       20799
 event_sp_2019_vasplpr           |           0
 event_sp_2019_ukp               |         120
 event_sp_2019_ltp               |           0
 event_sp_2019_smp               |          95

The view does not distinguish between tables, partitions or subpartitions - they are all tables in terms of object type.
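
If you only want the partitions belonging to one specific parent table, pg_inherits can narrow the list down. A sketch, assuming a parent table named myschema.event_sp:
SELECT c.relname, c.reltuples::numeric
FROM pg_class c
JOIN pg_inherits i ON i.inhrelid = c.oid
WHERE i.inhparent = 'myschema.event_sp'::regclass
ORDER BY c.relname;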

Tuesday, October 12, 2021

In PostgreSQL, what does "regclass" mean?

"regclass" is an alias for an Object Identifier, which are used internally by PostgreSQL as primary keys for several data dictionary tables.

It can be used as a translator when querying the dictionary tables in postgres.

For example, the following query produces an error, since the predicate expects an integer ("oid"):
select relname,relowner,reltuples,relkind  from pg_class where oid='myuser.mytable_1';
ERROR:  invalid input syntax for type oid: "myuser.mytable_1"
LINE 1: ...lowner,reltuples,relkind  from pg_class where oid='myuser.mytabl...

However, if you add "::regclass", the alias for the object is used instead of the literal string:
 select relname,relowner,reltuples,relkind  from pg_class where oid='myuser.mytable_1'::regclass;
  relname  | relowner | reltuples | relkind
-----------+----------+-----------+---------
 mytable_1 |    69094 |         0 | p

Note that there are several types of OIDs. For example, for roles (users), you should use "regrole" instead of "regclass":
select relname,relowner,reltuples,relkind  from pg_class where relowner='myuser'::regrole;
        relname        | relowner | reltuples | relkind
-----------------------+----------+-----------+---------
 unique_idx_2          |    69094 |         0 | I
 pg_toast_191274       |    69094 |         0 | t
 mytable_1_id_seq      |    69094 |         1 | S
 mytable_1_pkey        |    69094 |         0 | I
 unique_idx_1          |    69094 |         0 | i
 mytable_2             |    69094 |         0 | p
 mytable_3             |    69094 |         0 | r
 

You can also use them together; each cast then translates to its own type of OID. In the case below it's really not necessary, but it illustrates the point nevertheless:
 select relname,relowner,reltuples,relkind  from pg_class where oid='myuser.mytable_1'::regclass and relowner='myuser'::regrole;
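
The cast also works in the opposite direction: applying ::regclass or ::regrole to an OID column in the select list prints the name instead of the number. A small sketch against the same hypothetical table:
select oid::regclass as table_name,
       relowner::regrole as owner,
       reltuples,
       relkind
from pg_class
where oid = 'myuser.mytable_1'::regclass;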
More information about OIDs
More information about pg_class

Thursday, September 30, 2021

How to rename a database to use lower case instead of upper case characters

At my customer's site, we have standardized the Oracle SIDs (System ID) to use lower case characters.

During an upgrade, the SID was accidentally set to upper case. This new uppercase SID was picked up by Oracle Cloud Control and all of a sudden we had a non-standardized database within our ranks. Not a disaster, but irritating nevertheless.

Given that the environment variable in your operating system is also set to lower case, you can simply restart the database with a new value for the parameter db_name in the parameter file to rectify the situation. There is no need to use the nid ("new id") tool or to recreate the controlfile. A simple restart with the right value for db_name will do.

It's easy to verify this conclusion if you have access to another database which adheres to the standard. Check the value of the column name in v$database:
select name from v$database;

NAME
---------
PRODDB01

SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      PRODDB01

As expected, the name queried from v$database is displayed in upper case, as Oracle always does by default, but we also see that the value of the parameter db_name is actually in lower case.

In other words, there is no connection between the value of ORACLE_SID and the name of the database used in the controlfile. Oracle will always refer to the database in upper case. How we present and use this value in the operating system, is up to us.

These are the simple steps to get the value of the db_name parameter aligned with the value of your operating system variable ORACLE_SID:

Check your environment variable:
env | grep ORA
ORACLE_SID=testdb01 
It is indeed shown in lower case.

Create a new pfile:
sqlplus / as sysdba
create pfile='inittestdb01.ora' from spfile;

Edit the pfile so that the db_name parameter has its value in lower case:
Old value:
*.db_name='TESTDB01'
New value:
*.db_name='testdb01'

Restart the database:
sqlplus / as sysdba
shutdown immediate
startup
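
To verify the change after the restart, a quick check (a sketch, reusing the testdb01 example above):
sqlplus / as sysdba
show parameter db_name
select name from v$database;
db_name should now come back in lower case, while v$database still reports the name in upper case.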

If desirable, create a new spfile and restart the database one final time to instruct Oracle to use it instead of the pfile:
create spfile from pfile='inittestdb01.ora';
shutdown immediate
startup

Tuesday, September 28, 2021

How to use opatch to check if Oracle Gateway for DRDA is installed

Log on to your oracle server, and type
cd $ORACLE_HOME/OPatch
./opatch lsinventory -oh /full/path/to/oracle_home -details > /home/oracle/inventory_details.log
The top lines of the generated file will reveal what's installed in your ORACLE_HOME:
cat /home/oracle/inventory_details.log | more
Result:
Installed Top-level Products (2):

Oracle Database 12c                                                  12.1.0.2.0
Oracle Database Gateways                                             12.1.0.2.0
There are 2 products installed in this Oracle Home.
After that, more details about your installation are listed:

Installed Products (137):
Oracle Database Gateway for DRDA                                     12.1.0.2.0
Oracle Database Gateway for ODBC                                     12.1.0.2.0
Oracle Database Gateways                                             12.1.0.2.0

Wednesday, September 22, 2021

How to resolve ORA-48128 during export

Applicable to Oracle 19.3 and onwards.

The following error may occur when you export data using Data Pump

Parameter file:
userid=system/mypasswd
directory=DATA_PUMP_DIR
logfile=exp_sales.log
dumpfile=klm.dmp
exclude=statistics
job_name=exp_klm
schemas=KLM
flashback_time=systimestamp

Execute the export:
expdp parfile=expdp.par
Result:
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.CREATE_OBJECT_ROWS [USER]
USER:"GRDA_TEST"
ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-48128: opening of a symbolic link is disallowed

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPW$WORKER", line 12630
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 105
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734
ORA-06512: at "SYS.KUPW$WORKER", line 11341

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0xcf98d208     33543  package body SYS.KUPW$WORKER.WRITE_ERROR_INFORMATION
0xcf98d208     12651  package body SYS.KUPW$WORKER.DETERMINE_FATAL_ERROR
0xcf98d208     11612  package body SYS.KUPW$WORKER.CREATE_OBJECT_ROWS
0xcf98d208     15278  package body SYS.KUPW$WORKER.FETCH_XML_OBJECTS
0xcf98d208      3917  package body SYS.KUPW$WORKER.UNLOAD_METADATA
0xcf98d208     13746  package body SYS.KUPW$WORKER.DISPATCH_WORK_ITEMS
0xcf98d208      2439  package body SYS.KUPW$WORKER.MAIN
0xcf996200         2  anonymous block

KUPW: Object row index into parse items is: 1
KUPW: Parse item count is: 4
KUPW: In function CHECK_FOR_REMAP_NETWORK
KUPW: Nothing to remap
KUPW: In procedure BUILD_OBJECT_STRINGS - non-base info
KUPW: In procedure BUILD_SUBNAME_LIST with USER:KLM.KLM
KUPW: In function NEXT_PO_NUMBER
KUPW: PO number assigned: 13
KUPF$FILE.WRITE_LOB
KUPW: In procedure DETERMINE_FATAL_ERROR with ORA-39155: error expanding dump file name "/data1/export/klm.dmp"
ORA-06512: at "SYS.KUPF$FILE", line 9793
ORA-48128: opening of a symbolic link is disallowed
ORA-06512: at "SYS.KUPF$FILE_INT", line 1211
ORA-06512: at "SYS.KUPF$FILE", line 9734

Job "SYSTEM"."EXP_KLM" stopped due to fatal error at Wed Sep 22 08:42:49 2021 elapsed 0 00:00:04
Reason: bug

Solution:
Outlined in Doc ID 2654167.1 "Customer Recommended: DataPump Export (EXPDP) Fails Due to ORA-39155 ORA-48128":
ALTER SYSTEM  SET "_disable_directory_link_check" =true  SCOPE=SPFILE;
shutdown immediate
startup
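
To confirm that the hidden parameter is in effect after the restart, the usual underscore-parameter query against the x$ views can be used (a sketch; run it as SYS):
SELECT a.ksppinm AS parameter, b.ksppstvl AS value
FROM   x$ksppi a, x$ksppcv b
WHERE  a.indx = b.indx
AND    a.ksppinm = '_disable_directory_link_check';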

Monday, September 20, 2021

PostgreSQL: how to use the to_date, to_char and cast conversion functions

Building on the example in my previous post, this is how you can use the to_date and to_char functions in predicates against both the original, non-partitioned table and the partitioned version of the same.

To query the non-partitioned table, you could first convert to character using the desired format, and then convert back to date:
select *
from myuser.mytable
where to_date(to_char(created_date,'YYYY.MM.DD'),'YYYY.MM.DD') between '2020.01.01' and  '2020.01.31';

To query the partitioned table, you need an explicit cast to the same data type used by the partition key column:
select *
from myuser.mytable
where  cast(created_date as date) between '2020.01.01' and  '2020.01.31';
I realized this difference when the number of rows returned was not what I expected. After I changed my SQL statement to use CAST, the number of rows was exactly what I expected.
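
To convince yourself that the cast predicate really allows partition pruning, EXPLAIN is helpful; the plan should only list the partitions covering the requested interval. A sketch against the partitioned table:
explain
select *
from myuser.mytable
where cast(created_date as date) between '2020.01.01' and '2020.01.31';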

PostgreSQL: how to create a partitioned table using a conversion function

Consider the following table:
CREATE TABLE myuser.mytable (
    id integer NOT NULL,
    created_date timestamp without time zone NOT NULL,
    event_json text NOT NULL,
    format_version text NOT NULL,
    rep_id text NOT NULL,
    appversion text NOT NULL
);
Let's say you want to partition this table by RANGE, and you would like to use the column "created_date" which is of data type "TIMESTAMP WITHOUT TIME ZONE".
You could use a conversion function like CAST in the partition key expression, like this:
  CREATE TABLE myuser.mytable (
    id integer NOT NULL,
    created_date timestamp without time zone NOT NULL,
    event_json text NOT NULL,
    format_version text NOT NULL,
    rep_id text NOT NULL,
    appversion text NOT NULL
)
partition by range(cast(created_date as date));
Obviously you proceed with adding your partitions the way you normally would in PostgreSQL. Remember that the upper bound of a range partition is exclusive, so each monthly partition should end at the first day of the following month, for example:
CREATE TABLE myuser.mytable_p_202001
 PARTITION OF myuser.mytable
 FOR VALUES FROM ('2020.01.01') TO ('2020.02.01');

CREATE TABLE myuser.mytable_p_202002
 PARTITION OF myuser.mytable
 FOR VALUES FROM ('2020.02.01') TO ('2020.03.01');

etc

CREATE TABLE myuser.mytable_p_default
 PARTITION OF myuser.mytable
 DEFAULT;

Beware: If you try to create a primary key constraint on the partition key column in this case, you will receive
DETAIL:  PRIMARY KEY constraints cannot be used when partition keys include expressions.

You are of course free to add a normal search index on the column:
CREATE INDEX myidx1 ON myuser.mytable USING btree ( cast (created_date as date) );
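
To check which partition a given row actually lands in, the hidden tableoid column comes in handy. A small sketch; the inserted values are made up:
insert into myuser.mytable
values (1, timestamp '2020-01-15 10:00:00', '{}', 'v1', 'rep1', '1.0');

select tableoid::regclass as partition_name, id, created_date
from myuser.mytable;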
Read more about the CAST function in the documentation
Another good source is postgresqltutorial.com

Friday, September 17, 2021

How to solve error in pg_top stating "relation does not exist"

When you execute pg_top, you get to see the top database sessions in a way familiar to most people with experience from Unix environments.



There are many options to pg_top, and one that I wanted to use, was
E       - show execution plan (UPDATE/DELETE safe)
When you press E, you are prompted for a process ID:
Re-determine execution plan: 119692

But I kept getting the message:
Query Plan:

ERROR:  relation "myschema.mytable" does not exist
LINE 4:     from myschema.mytable
                 ^
Hit any key to continue:

Solution: Execute pg_top with database name, host name, port, username and password, like this:
$ pg_top -d mydb -h myhost.mydomain.com -p 5432  -U mysuperuser -W
After I executed pg_top in this manner, the E option worked as expected and displayed the execution plan.

Thursday, September 16, 2021

How to list the contents of a custom format PostgreSQL export file

List the content of a "custom format" export file created by pg_dump by specifying the -l flag:

pg_restore /data1/exports/musicdb_Fc.dmp -l


List the content of a "directory format" export file created by pg_dump by specifying the -l flag. Note that the path should point to the directory in which the toc.dat file resides, in my case /data1/exports/mydir:
pg_restore /data1/exports/mydir/ -l


In both cases, output will be something akin to
;
; Archive created at 2021-09-16 12:51:17 CEST
;     dbname: musicdb
;     TOC Entries: 29
;     Compression: -1
;     Dump Version: 1.13-0
;     Format: CUSTOM
;     Integer: 4 bytes
;     Offset: 8 bytes
;     Dumped from database version: 11.12
;     Dumped by pg_dump version: 11.12
;
;
; Selected TOC Entries:
;
10; 2615 45112854 SCHEMA - music postgres
3764; 0 0 ACL - SCHEMA music postgres
202; 1259 45112865 TABLE music albums music
203; 1259 45112868 TABLE music albums_1974 music
204; 1259 45112871 TABLE music albums_1979 music
201; 1259 45112863 SEQUENCE music albums_album_id_seq music
205; 1259 45112874 TABLE music albums_default music
207; 1259 45112879 TABLE music artists music
206; 1259 45112877 SEQUENCE music artists_artist_id_seq music
209; 1259 45112884 TABLE music bands music
208; 1259 45112882 SEQUENCE music bands_band_id_seq music
200; 1259 45112858 TABLE music songs music
199; 1259 45112856 SEQUENCE music songs_song_id_seq music
3750; 0 45112868 TABLE DATA music albums_1974 music
3751; 0 45112871 TABLE DATA music albums_1979 music
3752; 0 45112874 TABLE DATA music albums_default music
3754; 0 45112879 TABLE DATA music artists music
3756; 0 45112884 TABLE DATA music bands music
3748; 0 45112858 TABLE DATA music songs music
3765; 0 0 SEQUENCE SET music albums_album_id_seq music
3766; 0 0 SEQUENCE SET music artists_artist_id_seq music
3767; 0 0 SEQUENCE SET music bands_band_id_seq music
3768; 0 0 SEQUENCE SET music songs_song_id_seq music
3620; 2606 45112862 CONSTRAINT music songs songs_pkey music

Tuesday, September 14, 2021

How to export and import a schema using the directory format

What exactly is the directory format used by pg_dump? 

 The documentation states: 

  Dumps can be output in script or archive file formats

 and 

The alternative archive file formats must be used with pg_restore to rebuild the database. 

Do they bring any advantages over the plain-text format?

    Archive file formats allow pg_restore to

  • be selective about what is restored
  • reorder the items prior to being restored
  • be portable across architectures.
  • be used to examine the contents of an archive
  • use parallel workers during export and import

    The documentation states that pg_dump's archive file formats, used together with pg_restore, provide a flexible archival and transfer mechanism.

    There are two types of archive file format output:

  • The “custom” format (-Fc)
  • The “directory” format (-Fd)

    Both of these allow for selection and reordering of all archived items, support parallel restoration, and are compressed by default, but note that only the “directory” format supports parallelization of the export utility pg_dump.

    In other words, moving data with pg_dump + pg_restore should be a transition most seasoned Oracle DBAs can make without much trouble; the usage of these utilities is a lot like Oracle Data Pump.


    Export schema "music" in database "musicdb" using the directory format archive:
    pg_dump -Fd musicdb -n music -v -f /export/musicdbexport -j 2
    I am using two workers indicated by the -j flag.
    You need to point to a valid path for your directory, which is /export. pg_dump will create the subdirectory "musicdbexport" for you, and according to the documentation it should not be created beforehand.

    If you list the path after export, you'll see files like these:
    cd /export/musicdbexport
    total 40K
    drwxr-xr-x 3 postgres postgres 4.0K Sep  7 16:22 ..
    -rw-r--r-- 1 postgres postgres 8.0K Sep  7 16:22 toc.dat
    -rw-r--r-- 1 postgres postgres   53 Sep  7 16:22 3756.dat.gz
    -rw-r--r-- 1 postgres postgres  292 Sep  7 16:22 3754.dat.gz
    -rw-r--r-- 1 postgres postgres   67 Sep  7 16:22 3751.dat.gz
    -rw-r--r-- 1 postgres postgres  142 Sep  7 16:22 3748.dat.gz
    -rw-r--r-- 1 postgres postgres   25 Sep  7 16:22 3750.dat.gz
    drwx------ 2 postgres postgres 4.0K Sep  7 16:22 .
    -rw-r--r-- 1 postgres postgres   88 Sep  7 16:22 3752.dat.gz
    
    Transfer these files, including the toc.dat file, to the destination server.

    Import schema "music" in database "musicdb" using the directory format archive:
    pg_restore /pgdata/export -C -c -d postgres -j 4 -v
    -C means "create the target database"
    -c means "drop the database objects before creating"
    -j is the number of jobs started in parallel
    Remember that the file toc.dat must exist in the directory, otherwise you'll see the error message pg_restore: [archiver] directory "/pgdata/export" does not appear to be a valid archive ("toc.dat" does not exist)

    Remember, you don't have to drop the database first, it's only an option you have. You could also choose to import the schema "music" directly into an existing database:
    pg_restore /pgdata/export -d musicdb -j 4 -v

    If you examine the log afterwards, you'll typically see output like this:
    pg_restore: connecting to database for restore
    
    So pg_restore simply connects to the default database (named "postgres") in order to create the database from which the schema "music" was exported.

    Following that, we see how the database "musicdb" is created:
    pg_restore: processing item 3759 ENCODING ENCODING
    pg_restore: processing item 3760 STDSTRINGS STDSTRINGS
    pg_restore: processing item 3761 SEARCHPATH SEARCHPATH
    pg_restore: processing item 3762 DATABASE musicdb
    pg_restore: creating DATABASE "musicdb"
    


    Then the pg_restore connects to the new database and creates the schema in there:
    pg_restore: connecting to new database "musicdb"
    pg_restore: processing item 10 SCHEMA music
    pg_restore: creating SCHEMA "music"
    


    After that, the objects are imported one by one:
    pg_restore: processing item 202 TABLE albums
    pg_restore: creating TABLE "music.albums"
    
    Like schema imports in Oracle, any users that may have grants on the objects being imported must exist beforehand for any GRANT statements to execute successfully.

    In my case, these errors were thrown because I did not take care to create a user called "music" up front:

    pg_restore: [archiver (db)] Error while PROCESSING TOC:
    pg_restore: [archiver (db)] Error from TOC entry 202; 1259 45112865 TABLE albums music
    pg_restore: [archiver (db)] could not execute query: ERROR:  role "music" does not exist
        Command was: ALTER TABLE music.albums OWNER TO music;
    


    pg_dump is documented here
    pg_restore is documented here
    Wednesday, September 8, 2021

    How to analyze a table in PostgreSQL

    Just like Oracle, PostgreSQL stores statistics about tables in an internal catalog called pg_statistic. This is used by the query planner to determine the most efficient execution path. To gather fresh statistics on a table, use the following simple syntax:
    \connect mydatabase
    
    analyze verbose myschema.mytable;
    
    Documentation here
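
    To see when a table was last analyzed, either manually or by autoanalyze, query pg_stat_user_tables (a sketch):
    select schemaname, relname, last_analyze, last_autoanalyze
    from pg_stat_user_tables
    where schemaname = 'myschema' and relname = 'mytable';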

    Thursday, August 26, 2021

    What is vacuuming in PostgreSQL and how do I vacuum a table?

    Under the headline "Vacuuming Basics" in the PostgreSQL documentation, you can read amongst other things:
    PostgreSQL's VACUUM command has to process each table on a regular basis for several reasons:
    * To recover or reuse disk space occupied by updated or deleted rows.
    * To update data statistics used by the PostgreSQL query planner.
    * To update the visibility map, which speeds up index-only scans.
    * To protect against loss of very old data due to transaction ID wraparound or multixact ID wraparound.
    

    Each of these reasons dictates performing VACUUM operations of varying frequency and scope,


    Further explanation can be found together with the syntax description for VACUUM:

    VACUUM reclaims storage occupied by dead tuples.
    In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a VACUUM is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables.

    Vacuum a specific table in verbose mode:
    psql mydb
    You are now connected to database "mydb" as user "postgres".
    
    vacuum (verbose, analyze) myschema.mytable;
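
    To judge whether a table needs vacuuming at all, the dead tuple counter in pg_stat_user_tables gives a hint (a sketch):
    select schemaname, relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
    from pg_stat_user_tables
    where relname = 'mytable';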

    Friday, August 20, 2021

    How to fetch first n rows only in PostgreSQL

    The equivalent of Oracle's "fetch first n rows only" in PostgreSQL is:
    select *
    from myschema.mytable
    limit 10;
    
    If you want to select the next 20 rows, but skip the rows you already selected the first time, use the keyword OFFSET:
    select *
    from myschema.mytable
    limit 20 offset 10;
    
    The PostgreSQL 11 documentation for this feature can be found here
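
    Note that PostgreSQL also accepts the SQL-standard FETCH FIRST syntax, so the Oracle form works almost verbatim. A sketch - the ORDER BY (on an assumed column id) is there because neither LIMIT nor FETCH FIRST guarantees a stable row order without it:
    select *
    from myschema.mytable
    order by id
    fetch first 10 rows only;

    The paging variant offset 10 rows fetch next 20 rows only works as well.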

    Wednesday, August 18, 2021

    Solution for no ocijdbc18 in java.library.path when using sqlcl

    sqlnet.ora:
    NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
    NAMES.DEFAULT_DOMAIN = oric.no
    
    tnsnames.ora
    testdb01,testdb01.oric.no =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = testserver.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = testdb01)
        )
      )
      


    Using tnsping:
    C:\>tnsping testdb01
    
    TNS Ping Utility for 64-bit Windows: Version 19.0.0.0.0 - Production on 17-AUG-2021 12:42:14
    
    Used parameter files:
    C:\app\client\vegard\network\admin\sqlnet.ora
    Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = testserver.oric.no)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = testdb01)))
    OK (40 msec)
    
    Using sqlplus:
    C:\>sqlplus system/mypassword@testdb01
    
    SQL*Plus: Release 19.0.0.0.0 - Production on Tue Aug 17 12:39:47 2021
    Version 19.3.0.0.0
    
    Last Successful login time: Tue Aug 17 2021 12:36:35 +02:00
    
    Connected to:
    Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
    Version 18.14.0.0.0
    
    Using sqlcl:
    C:\>sql.exe system/mypassword@testdb01
    
    SQLcl: Release 19.2.1 Production on Tue Aug 17 12:44:36 2021
    
    Copyright (c) 1982, 2021, Oracle.  All rights reserved.
    
      USER          = system
      URL           = jdbc:oracle:oci8:@testdb01
      Error Message = no ocijdbc18 in java.library.path
      USER          = system
      URL           = jdbc:oracle:thin:@testdb01
      Error Message = IO Error: could not resolve the connect identifier  "testdb01
      USER          = system
      URL           = jdbc:oracle:thin:@testdb01:1521/testdb01
      Error Message = IO Error: could not resolve the connect identifier  "testdb01:1521/testdb01"
    Username? (RETRYING) ('system/*********@testdb01'?)
    
    Why is sqlcl having trouble connecting, when the listener is accepting requests and connections through sqlplus succeed?

    My experience is that sqlcl doesn't seem to accept a tnsnames.ora entry with multiple aliases.

    Solution: Split the tnsnames.ora entry into two different entries.

    Change it from
    testdb01, testdb01.oric.no =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = testserver.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = testdb01)
        )
      )
      
    to
    testdb01 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = testserver.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = testdb01)
        )
      )
    
    testdb01.oric.no =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = testserver.oric.no)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = testdb01)
        )
      )
      

    Monday, August 2, 2021

    How to work around ORA-38323: policy conflicts with policy

    You try to add a tablespace-wide ADO policy and receive the following error:
    ORA-38323: policy conflicts with policy 42
    
    If you look up the error with oerr ora 38323, Oracle will give you the following explanation:
    * Cause: An attempt was made to create a policy with the same action
             type and based on the same statistic as another policy on the
             object.
     *Action: Use a different action and/or statistic for the new policy,
              or delete the old policy.
    
    Verify that there is a policy on the tablespace already:
    SELECT * 
    FROM DBA_ILMPOLICIES 
    WHERE TABLESPACE IS NOT NULL;
    
    Result:
    POLICY_NAME POLICY_TYPE TABLESPACE ENABLED DELETED
    P43 DATA MOVEMENT DATA1 YES NO

    There is indeed a policy named P43. What kind of policy is it?
    SELECT policy_name,action_type,scope,compression_level,condition_type,condition_days,policy_subtype
    FROM DBA_ILMDATAMOVEMENTPOLICIES 
    WHERE POLICY_NAME IN (SELECT POLICY_NAME FROM DBA_ILMPOLICIES WHERE TABLESPACE IS NOT NULL); 
    
    Result:
    POLICY_NAME ACTION_TYPE SCOPE COMPRESSION_LEVEL CONDITION_TYPE   CONDITION_DAYS POLICY_SUBTYPE
    P43         COMPRESSION GROUP ADVANCED          LAST ACCESS TIME 1              DISK

    So the policy is a directive to the Oracle server to compress all objects after one day of no access. If you need to alter this policy, the old one must be dropped first:
    ALTER TABLESPACE DATA1 DEFAULT ILM DELETE POLICY P43;
    
    You are now free to add a new ADO policy to the tablespace:
    ALTER TABLESPACE DATA1
    DEFAULT ILM ADD POLICY
    ROW STORE COMPRESS ADVANCED
    GROUP AFTER 30 DAYS OF NO ACCESS;
    

    How to add a default ADO compression policy to a tablespace

    Create the tablespace:
    CREATE BIGFILE TABLESPACE DATA1 DATAFILE
      '/oradata/mydb/data1.dbf' SIZE 1024M AUTOEXTEND ON NEXT 256M MAXSIZE 2T
    DEFAULT
    COMPRESS FOR OLTP
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE
    SEGMENT SPACE MANAGEMENT AUTO
    DEFAULT ILM ADD POLICY 
    COMPRESS FOR ALL OPERATIONS 
    GROUP 
    AFTER 1 DAYS OF NO ACCESS;
    
    Notice the "GROUP" keyword. It states the scope of an ADO policy, and can be a group of related objects, segment level or row-level, indicated by the keywords GROUP, ROW, or SEGMENT.

    According to the documentation, by default, this will give you advanced compression on heap tables, standard compression for indexes and LOW for LOB segments created in the tablespace.

    Any table created in this tablespace from now on will inherit the ADO policy. Let's see it in action:
    create table sh.test_ado_tabspc_compression as select * from sh.sales;
    
    Table created.
    
    Verify:
    SELECT policy_name,
           object_owner,
           object_name,
           subobject_name,
           object_type,
           inherited_from,
           enabled,
           deleted
    FROM   dba_ilmobjects
    WHERE  object_owner='SH'
    AND    object_name='TEST_ADO_TABSPC_COMPRESSION'
    ORDER BY 1;
    
    Result:
    POLICY_NAME OBJECT_OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE INHERITED_FROM ENABLED DELETED
    P43 SH TEST_ADO_TABSPC_COMPRESSION   TABLE TABLESPACE YES NO

    The table has inherited the ADO policy from the tablespace it was created in.

    Wednesday, June 30, 2021

    How to deal with ORA-01110 during export

    If you receive errors like this during data pump export
    ORA-01187: cannot read from file  because it failed verification tests
    ORA-01110: data file 204: '/datafiles/oradata/proddb/temp.dbf'
    ORA-06512: at "SYS.DBMS_LOB", line 724
    ORA-06512: at "SYS.DBMS_DATAPUMP", line 4546
    ORA-06512: at line 1
    
    you need to drop and recreate your temporary tablespace(s). If you only use bigfile temporary tablespaces, you can use the following script to rectify the situation:
    set trimspool on
    set lines 200
    set feedback off
    set verify off
    set heading off
    set echo off
    
    spool 1.drop_non_default_temp.sql
    select 'spool 1.drop_non_default_temp.log' from dual;
    select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
    from v$tempfile tf join v$tablespace ts
    on (ts.ts# = tf.ts#)
    and ts.name not in(
        select property_value
        FROM DATABASE_PROPERTIES
        WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
    );
    select 'exit' from dual;
    
    spool 2.recreate_temp.sql
    select 'spool 2.recreate_temp.log' from dual;
    select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
    from v$tempfile tf join v$tablespace ts
    on (ts.ts# = tf.ts#)
    and ts.name not in(
        select property_value
        FROM DATABASE_PROPERTIES
        WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
    );
    select 'exit' from dual;
    spool off;
    
    spool 3.alter_default_temp.sql
    select 'spool 3.alter_default_temp.log' from dual;
    select 'alter database default temporary tablespace TEMP;' from dual;
    select 'exit' from dual;
    spool off
    
    spool 4.drop_default_temp.sql
    select 'spool 4.drop_default_temp.log' from dual;
    select 'drop tablespace ' || ts.name || ' including contents and datafiles;'
    from v$tempfile tf join v$tablespace ts
    on (ts.ts# = tf.ts#)
    and ts.name in(
        select property_value
        FROM DATABASE_PROPERTIES
        WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
    );
    select 'exit' from dual;
    spool off;
    
    spool 5.create_default_temp.sql
    select 'spool 5.create_default_temp.log' from dual;
    select 'create bigfile temporary tablespace ' || ts.name || ' tempfile ''' || tf.name || ''' SIZE 32M REUSE AUTOEXTEND ON NEXT 1M MAXSIZE UNLIMITED;'
    from v$tempfile tf join v$tablespace ts
    on (ts.ts# = tf.ts#)
    and ts.name in(
        select property_value
        FROM DATABASE_PROPERTIES
        WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
    );
    select 'exit' from dual;
    spool off;
    
    spool 6.reset_default_temp.sql
    select 'spool 6.reset_default_temp.log' from dual;
    select 'alter database default temporary tablespace ' || ts.name || ';'
    from v$tempfile tf join v$tablespace ts
    on (ts.ts# = tf.ts#)
    and ts.name in(
        select property_value
        FROM DATABASE_PROPERTIES
        WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE'
    );
    select 'exit' from dual;
    spool off;
    
    exit
    
    Run the script in your database, and it will produce 6 new sql files, numbered 1 to 6. Run them sequentially and you will have recreated your temporary tablespaces. After this your data pump export will start as expected.

    Monday, June 28, 2021

    How to add an ADO compression policy to a table partition

    ADO (Automatic Data Optimization) is one component of the Oracle 12c ILM (Information Life Cycle Management) solution.
    For ADO to work, access to and modification of data needs to be constantly tracked. This is done by enabling a heat map in your database:
    alter system set heat_map=on scope=both;
    
    The database will immediately start sampling information. After a while, the heat map will contain information about how your objects are being used.

    My table contains 211 partitions. I would like to add an ADO policy to one of them to illustrate how ADO policies work.
    Let's list the 3 largest partitions:
    select p.TABLE_NAME, s.partition_name,s.tablespace_name, p.compression,p.num_rows,Round(sum(s.bytes)/1024/1024/1024,1) "GB"
    from dba_segments S join dba_tab_partitions p
    on (s.partition_name = p.partition_name)
    where s.segment_name='ENTITIY_PR'
    and   s.owner='SSB'
    group by p.table_name,s.partition_name,s.tablespace_name,p.compression,p.num_rows
    order by 5 desc
    fetch first 3 rows only;
    

    TABLE_NAME PARTITION_NAME TABLESPACE_NAME COMPRESSION NUM_ROWS GB
    ENTITIY_PR SYS_P5055      SSB             DISABLED    43448193 3,3
    ENTITIY_PR SYS_P4518      SSB             DISABLED    43447546 3,4
    ENTITIY_PR SYS_P4709      SSB             DISABLED    43217045 3,3


    Add a policy to have Oracle automatically compress the partition segment after one day without any access (read and write):
      
    alter table SSB.ENTITIY_PR 
    modify partition SYS_P5055 ilm add policy 
    row store compress advanced segment after 1 days of no access;
    
    In addition to no access, you could also use low access, no modification, or creation to express the activity type.

    Verify that the ADO policy is in place:
    SELECT policy_name,
           object_owner,
           object_name,
           subobject_name,
           object_type,
           inherited_from,
           enabled,
           deleted
    FROM   dba_ilmobjects
    WHERE  object_owner='SSB'
    AND    object_name='ENTITIY_PR'
    AND    subobject_name='SYS_P5055'
    ORDER BY 1;
    
    POLICY_NAME OBJECT_OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE INHERITED_FROM ENABLED DELETED
    P22 SSB ENTITIY_PR SYS_P5055 TABLE PARTITION POLICY NOT INHERITED YES NO

    What does the heat map say about this particular partition?
    select * 
    from dba_heat_map_segment
    where owner='SSB'
    and   object_name='ENTITIY_PR' 
    AND   subobject_name='SYS_P5055';
    
    
    No rows returned.
    
    So this partition hasn't actually been modified or accessed since the heat map was enabled.
    Let's access the partition with a query, and see if this will be registered in the heat map.
    SELECT SYSTIMESTAMP FROM DUAL; --> 25-JUN-21 10.04.30
    
    SELECT *
    FROM   SSB.ENTITIY_PR
    PARTITION (SYS_P5055)
    fetch first 10 rows only;
    
    Run the query against dba_heat_map_segment again, and there is now an entry in the heat map for my visit to the table:
    OWNER OBJECT_NAME SUBOBJECT_NAME SEGMENT_WRITE_TIME SEGMENT_READ_TIME FULL_SCAN LOOKUP_SCAN
    SSB ENTITIY_PR SYS_P5055   25.06.2021 10:04:34 25.06.2021 10:04:34  
    Notice that the column "LOOKUP_SCAN" has no value.
    Let's try to send a query to the database that will require a PK index lookup:
    SELECT *
    FROM   SSB.ENTITIY_PR
    PARTITION (SYS_P5055)
    WHERE systemid = 1145708618;
    
    Run the query against dba_heat_map_segment again, and you can now see that the lookup was recorded in the heat map:
    OWNER OBJECT_NAME SUBOBJECT_NAME SEGMENT_WRITE_TIME SEGMENT_READ_TIME FULL_SCAN LOOKUP_SCAN
    SSB ENTITIY_PR SYS_P5055   25.06.2021 10:25:33 25.06.2021 10:25:33 25.06.2021 10:25:33
    If you wonder why the timestamps in dba_heat_map_segment are incrementing, although you're not executing any queries, it is because the object is still in the shared pool. There is no such thing as purging a table from the shared pool. However, you can purge the cursors that refer to it. To find those cursors, use the following query:
    set lines 200
    spool purge.sql
    set heading off
    set feedback off
    set verify off
    set echo off
    set pages 0
    set trimspool on
    
    select 'exec DBMS_SHARED_POOL.PURGE (''' || ADDRESS ||','|| HASH_VALUE || ''',''C'');'
    from V$SQLAREA where SQL_ID IN (SELECT SQL_ID FROM v$sqlarea where sql_text like 'SELECT * FROM%SSB.ENTITIY_PR%SYS_P5055%');
    select 'exit' from dual;
    exit
    
    Run the script and it will spool a file "purge.sql" to your current directory, which looks like this in my case:
    exec DBMS_SHARED_POOL.PURGE ('00000000BF56D560,1541949375','C');
    exec DBMS_SHARED_POOL.PURGE ('00000000C67994B8,3904135316','C');
    exec DBMS_SHARED_POOL.PURGE ('00000000C67973D8,27827680','C');
    
    Run these statements, and the cursors referring to ENTITIY_PR will be removed from memory, and the incrementing timestamps will stop. Obviously, if other users are actually using the object, you would need to be careful about purging the shared pool.
    After one full day of no access, let's check if the ADO policy has done its work:
    select p.TABLE_NAME, s.partition_name,s.tablespace_name, p.compression,p.num_rows,Round(sum(s.bytes)/1024/1024/1024,1) "GB"
    from dba_segments S join dba_tab_partitions p
    on (s.partition_name = p.partition_name)
    where s.segment_name='ENTITIY_PR'
    and   s.owner='SSB'
    and   s.partition_name in ( 'SYS_P5055')
    and   s.segment_name = p.table_name
    group by p.table_name,s.partition_name,s.tablespace_name,p.compression,p.num_rows
    order by 5 desc;
    
    The output confirms that the partition has been compressed, and if you check the output from the same query earlier in this post, you'll notice that the size of the partition has dropped from 3,3G to 1,3G, a 60% reduction:
    TABLE_NAME PARTITION_NAME TABLESPACE_NAME COMPRESSION NUM_ROWS GB
    ENTITIY_PR SYS_P5055      SSB             ENABLED     43448193 1,3
    When the policy now has been enforced, what is the status of the policy?
    SELECT policy_name,
           object_owner,
           object_name,
           subobject_name,
           object_type,
           inherited_from,
           enabled,
           deleted
    FROM   dba_ilmobjects
    WHERE  object_owner='SSB'
    AND    object_name='ENTITIY_PR'
    AND    subobject_name='SYS_P5055'
    ORDER BY 1;
    
    POLICY_NAME OBJECT_OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_TYPE INHERITED_FROM ENABLED DELETED
    P22 SSB ENTITIY_PR SYS_P5055 TABLE PARTITION POLICY NOT INHERITED NO NO
    Note how the rule is no longer enabled, now that it has been executed.
    Further reading:
    The official Oracle 19c ADO documentation
    As always, the excellently written tutorials from Tim Hall
    I also used examples from the book "Oracle Database 12c New Features", Chapter 8, by Robert G. Freeman, published by Oracle Press.

    Friday, June 25, 2021

    utlu122s.sql is replaced from version 19c

    During upgrades, you will notice that any old scripts you've assembled to upgrade 12c and 18c databases will break when they try to run the script utlu122s.sql. From version 19c onwards it has been renamed, and is now called utlusts.sql.

    Thanks to Rajasekhar Amudala for pointing this out on his blog.
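
    A sketch of the invocation after an upgrade - treat the exact argument as an assumption and check the header of the script in your own ORACLE_HOME:
    sqlplus / as sysdba
    @$ORACLE_HOME/rdbms/admin/utlusts.sql TEXT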

    How to purge tables from the shared pool

    Tables cannot be directly purged from the shared pool using the dbms_shared_pool package. Instead, the cursors referring to the table can be purged. To generate the statements, use v$sqlarea:
    set lines 200
    spool purge.sql
    set heading off
    set feedback off
    set verify off
    set echo off
    set pages 0
    set trimspool on
    
    select 'exec DBMS_SHARED_POOL.PURGE (''' || ADDRESS ||','|| HASH_VALUE || ''',''C'');'
    from V$SQLAREA 
    where SQL_ID IN (SELECT SQL_ID 
                     FROM v$sqlarea 
                     WHERE sql_text like 'SELECT * FROM%SSB.ENTITIY_PR%SYS_P5055%');
    select 'exit' from dual;
    exit
    
    Execute the file "purge.sql" and the cursors refering to the object "SBB.ENTITY_PR" and the partition "SYS_P5055" will be purged. The Oracle 19c documentation for dbms_shared_pool can be found here

    Wednesday, June 23, 2021

    How to list index sizes

    For non-partitioned global indexes in a specific schema:
    select i.index_name, s.tablespace_name,round(sum(bytes)/1024/1024/1024) "GB"
    from dba_indexes i join dba_segments s
    on (i.index_name = s.segment_name)
    where i.owner='SH'
    and i.partitioned='NO'
    group by i.index_name,s.tablespace_name
    order by 3 desc;
    
    INDEX_NAME   TABLESPACE_NAME GB
    AN_EEG_IDX   SH              2406
    PK_EEG       SH              994
    UIDX_REG_EEG SH              502
    IDX_REL_IDX  SH              156

    For non-partitioned global indexes on a specific table:
    select i.index_name, s.tablespace_name,round(sum(bytes)/1024/1024/1024) "GB"
    from dba_indexes i join dba_segments s
    on (i.index_name = s.segment_name)
    where i.owner='SH'
    and i.partitioned='NO'
    and i.table_name = 'ACCOUNTING'
    group by i.index_name,s.tablespace_name
    order by 3 desc;
    

    INDEX_NAME TABLESPACE_NAME GB
    PK_ACC     SH              994
    IDX_ACC1   SH              120

    For partitioned indexes on a specific table:
    select s.segment_name,round(sum(s.bytes)/1024/1024/1024,2) "GB" 
    from dba_segments s
    where segment_name IN (select unique index_name 
                            from dba_indexes 
                            where table_name='SALES' 
                            and PARTITIONED='YES'
                            and index_type <> 'LOB'
                            and owner='SH')
    group by s.segment_name;
    

    SEGMENT_NAME GB
    IDX_UK1      28
    IDX_ACC_UK2  78,24

    For all partitioned indexes in a specific schema:
    select i.index_name,i.table_name,s.tablespace_name,round(sum(s.bytes)/1024/1024/1024) "GB"
    from dba_indexes i join dba_segments s
    on (i.index_name = s.segment_name)
    where i.owner='SH'
    and i.partitioned='YES'
    and i.index_type <> 'LOB'
    group by i.index_name,i.table_name,s.tablespace_name
    order by 2 ;
    

    INDEX_NAME      TABLE_NAME      TABLESPACE_NAME GB
    IDX_OP_AA       ENTITY          SH              260
    IDX_OP_NA       ENTITY_HISTORY  SH              1082
    IDX_VER_AA      EVENT           SH              28
    IDX_VER_AA_HIST EVENT           SH              78
    IDX_WW_UK2      RELATION        SH              226
    IDX_RELNO       RELATION_ENTITY SH              350
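
    To break one of the partitioned indexes down per partition, dba_segments can be grouped on partition_name as well. A sketch, reusing one of the index names from the output above:
    select s.segment_name, s.partition_name, round(sum(s.bytes)/1024/1024/1024,2) "GB"
    from dba_segments s
    where s.owner = 'SH'
    and s.segment_name = 'IDX_OP_AA'
    and s.segment_type = 'INDEX PARTITION'
    group by s.segment_name, s.partition_name
    order by 3 desc;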