Tuesday, December 10, 2019

What is the Asynchronous Global Index Maintenance introduced in Oracle 12.1?


Asynchronous Global Index Maintenance, also called Deferred Global Index Maintenance, means that when you perform certain operations on a partitioned table with global indexes, Oracle defers the maintenance of those indexes to the nightly maintenance window instead of doing it as part of the operation. Oracle introduced this to make such operations less disruptive.

Examples of such operations are

* DROP PARTITION
* TRUNCATE PARTITION
* MOVE PARTITION (although this operation is not listed in the Oracle 12.1 documentation)

The index maintenance itself is carried out by the scheduler program PMO_DEFERRED_GIDX_MAINT, whose purpose is to clear orphaned entries from global indexes, generated during partition maintenance operations.

The program can be verified through DBA_SCHEDULER_PROGRAMS:
SELECT OWNER,PROGRAM_NAME,PROGRAM_TYPE,PROGRAM_ACTION,ENABLED
FROM DBA_SCHEDULER_PROGRAMS
WHERE PROGRAM_NAME='PMO_DEFERRED_GIDX_MAINT';

OWNER PROGRAM_NAME PROGRAM_TYPE PROGRAM_ACTION ENABLED
SYS PMO_DEFERRED_GIDX_MAINT PLSQL_BLOCK dbms_part.cleanup_gidx_internal( noop_okay_in => 1); TRUE

It is executed through the scheduler every night, according to this schedule:
select schedule_name,repeat_interval
from DBA_SCHEDULER_SCHEDULES
where schedule_name='PMO_DEFERRED_GIDX_MAINT_SCHED';

SCHEDULE_NAME REPEAT_INTERVAL
PMO_DEFERRED_GIDX_MAINT_SCHED FREQ=DAILY; BYHOUR=02; BYMINUTE=0; BYSECOND=0
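
If the 02:00 run clashes with other nightly activity, you should be able to adjust the schedule with dbms_scheduler.set_attribute. A minimal sketch, assuming you want to move the run to 04:00 (adjusting this Oracle-supplied schedule is my own suggestion, so test it outside production first):
BEGIN
  -- assumption: move the nightly run from 02:00 to 04:00
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.PMO_DEFERRED_GIDX_MAINT_SCHED',
    attribute => 'repeat_interval',
    value     => 'FREQ=DAILY; BYHOUR=04; BYMINUTE=0; BYSECOND=0');
END;
/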

It is executed through the job PMO_DEFERRED_GIDX_MAINT_JOB:
SELECT owner,job_name,program_name,schedule_name,enabled,state,run_count,to_char(last_start_date,'dd.mm.yyyy hh24:mi') "start",stop_on_window_close
FROM DBA_SCHEDULER_JOBS
WHERE PROGRAM_NAME='PMO_DEFERRED_GIDX_MAINT';

OWNER JOB_NAME                    PROGRAM_NAME            SCHEDULE_NAME                 ENABLED STATE   RUN_COUNT start            STOP_ON_WINDOW_CLOSE
SYS   PMO_DEFERRED_GIDX_MAINT_JOB PMO_DEFERRED_GIDX_MAINT PMO_DEFERRED_GIDX_MAINT_SCHED TRUE    RUNNING 1328      10.12.2019 02:00 FALSE


If the job does not finish within a day, it is possible to run PMO_DEFERRED_GIDX_MAINT with a parallel degree, as described in the Oracle Support note "Is It Possible To Execute The Job PMO_DEFERRED_GIDX_MAINT_JOB With Parallel Degree? (Doc ID 2126370.1)".


An example from my own experience is that the following command
ALTER TABLE SALES
MOVE PARTITION P_2019_02 ONLINE
TABLESPACE DATA2
ROW STORE COMPRESS ADVANCED
PARALLEL 4;

triggered the statement:
ALTER INDEX "SH"."SALES_IDX1" COALESCE CLEANUP;

I have found that I sometimes need to stop a job that runs past the maintenance window and into office hours.
In that case, I have used the procedure dbms_scheduler.stop_job, like this:
BEGIN
  DBMS_SCHEDULER.STOP_JOB(job_name => 'SYS.PMO_DEFERRED_GIDX_MAINT_JOB', force => TRUE);
END;
/
If you need to find out which indexes are due for coalescing, use this query:
SELECT owner,index_name,orphaned_entries
FROM   dba_indexes
where owner ='SCOTT'
and orphaned_entries='YES'
ORDER BY 1;
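
If you do not want to wait for the nightly job, the cleanup can also be triggered manually. The ALTER INDEX ... COALESCE CLEANUP statement shown earlier works per index; dbms_part.cleanup_gidx works per schema and table. A sketch, using positional parameters since I do not recall the formal parameter names offhand:
-- per index
ALTER INDEX SCOTT.SALES_IDX1 COALESCE CLEANUP;

-- per schema and table (owner, table name)
exec dbms_part.cleanup_gidx('SCOTT','SALES');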

Tuesday, December 3, 2019

How an incorrect password file format can stop the redo apply process (MRP0) on standby database



The error shown in dgmgrl:
DGMGRL> show configuration;

Configuration - DGConfig1

  Protection Mode: MaxPerformance
  Members:
  proddb01   - Primary database
    stby02 - Physical standby database
    stby01 - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 36 seconds ago)


Checking the database that throws the error:
DGMGRL> show database stby01

Database - stby01

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          21 hours 43 minutes 37 seconds (computed 0 seconds ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    pipat

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold


Try restarting redo apply:
DGMGRL>  edit database 'stby01'  set state='APPLY-OFF';
Succeeded.
DGMGRL>  edit database 'stby01'  set state='APPLY-ON';
Succeeded.

The alert log reports:
2019-12-03T11:33:22.214114+01:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2019-12-03T11:33:22.270642+01:00
Attempt to start background Managed Standby Recovery process (proddb01)
Starting background process MRP0
2019-12-03T11:33:22.312794+01:00
MRP0 started with pid=105, OS id=20354
2019-12-03T11:33:22.333315+01:00
MRP0: Background Managed Standby Recovery process started (proddb01)
2019-12-03T11:33:27.472196+01:00
 Started logmerger process
2019-12-03T11:33:27.599549+01:00
Managed Standby Recovery starting Real Time Apply
2019-12-03T11:33:27.801888+01:00
Parallel Media Recovery started with 4 slaves
2019-12-03T11:33:28.279378+01:00
Media Recovery Log /u04/fra/STBY01/archivelog/2019_12_02/o1_mf_1_121201__y2thfwyz_.arc
2019-12-03T11:33:28.318586+01:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
MRP0: Background Media Recovery terminated with error 46952
2019-12-03T11:33:28.372536+01:00
Errors in file /orasoft/diag/rdbms/stby01/proddb01/trace/proddb01_pr00_20395.trc:
 ORA-46952: standby database format mismatch for password file '/orasoft/product/122/dbs/orapwproddb01'
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 826857150
2019-12-03T11:33:28.447080+01:00
2019-12-03T11:33:28.554534+01:00
MRP0: Background Media Recovery process shutdown (proddb01)

The password file is of an older format and must be recreated in order to resume redo apply. How to do this is outlined in one of my previous posts.
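
If you are on 12.2 or newer, I believe you can also check the current password file format directly from SQL before recreating it, using v$passwordfile_info (column names may vary slightly between releases):
SELECT file_name, format, is_asm
FROM   v$passwordfile_info;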

After you have done this, restart redo apply again with

DGMGRL>  edit database 'stby01'  set state='APPLY-ON';
Succeeded.

Put a tail on the standby database's alert log and you will see that the outstanding logs are churned through quickly.

Update, 04.11.2024

I have also seen situations where recreation of the password file does not help. In such cases:

  • On the standby, restart managed recovery (redo apply) manually
      SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    
  • Copy the password file from primary node 1 to the standby node

    Source: Standby Database MRP Fails With ORA-46952: Standby Database Format Mismatch For Password (Doc ID 2503352.1)
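
    Either way, a quick check that MRP0 is back and applying redo can be done on the standby, for example like this:
    SELECT process, status, thread#, sequence#
    FROM   v$managed_standby
    WHERE  process LIKE 'MRP%';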
    Wednesday, November 27, 2019

    How to move a datafile online in Oracle 12.1


    Starting with Oracle 12.1, you can move datafiles online.


    Generate move-script:
    select 'alter database move datafile ''' || file_name || ''' TO ''' || replace(file_name,'old_sid','new_sid') || ''';'
    from dba_data_files;
    

    Output will be:
    alter database move datafile '/oradata/old_sid/system01.dbf' TO '/oradata/new_sid/system01.dbf';
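
    For large files the move can take a while. As far as I know, the operation is reported in v$session_longops, so progress can be followed with a query along these lines (the opname filter is an assumption; adjust it to what you see in your own database):
    SELECT sid, serial#, opname, sofar, totalwork,
           ROUND(sofar/totalwork*100,1) AS pct_done
    FROM   v$session_longops
    WHERE  opname LIKE 'Online data file move%'
    AND    sofar < totalwork;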
    


    Tuesday, November 26, 2019

    How to execute a *.sql file using psql in a PostgreSQL database



    There are two ways:

    1. Already connected to the correct database:
    postgres=# \connect proddb01
    You are now connected to database "proddb01" as user "postgres".
    proddb01=# \conninfo
    You are connected to database "proddb01" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
    proddb01=#
    

    Then use the \i meta-command in psql to execute your file, like this:
    proddb01=# \i yourfile.sql
    

    2. Directly from the OS command line (this is a shell command, not something you type inside psql):
    psql proddb01 -f yourfile.sql
    

    You can also direct the output to a log file, using the -o flag:
    psql proddb01 -f test.sql -o test.log
    

    The -o flag will redirect everything to a log file and not show anything on your screen.
    If you would like to see the result of the script on your screen AND save the output in a log file, use the -L flag instead of the -o flag. This will also log the actual query sent to the postgres server:
    psql proddb01 -f test.sql -L test.log
    

    Note that no "exit" statement is necessary in your .sql scripts that you execute through psql directly from the command line. psql quits the session automatically.

    How to list the databases in a PostgreSQL instance


    Connect to the default database:
    psql
    

    List your databases:
    select oid as database_id,
           datname as database_name,
           datallowconn as allow_connect,
           datconnlimit as connection_limit
    from pg_database
    order by oid;
    

    Example output:
    database_id | database_name | allow_connect | connection_limit
    -------------+---------------+---------------+------------------
               1 | template1     | t             |               -1
           13880 | template0     | f             |               -1
           13881 | postgres      | t             |               -1
           16426 | proddb01      | t             |               -1
    (4 rows)
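
    If you also want the size of each database, add the built-in functions pg_database_size() and pg_size_pretty() to the query:
    select datname as database_name,
           pg_size_pretty(pg_database_size(datname)) as size
    from pg_database
    order by pg_database_size(datname) desc;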
    

    Or use the psql meta-command "\list+" (or "\l+") to list the available databases:
    prod-# \l+
                                                                          List of databases
       Name    |  Owner   | Encoding |  Collate   |   Ctype    |      Access privileges       |  Size   | Tablespace |                Description
    -----------+----------+----------+------------+------------+------------------------------+---------+------------+--------------------------------------------
     postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/postgres                +| 9269 kB | pg_default | default administrative connection database
               |          |          |            |            | postgres=CTc/postgres       +|         |            |
               |          |          |            |            | postgres_exporter=c/postgres |         |            |
     proddb01  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres       +| 25 GB   | pg_default |
     template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +| 7965 kB | pg_default | unmodifiable empty database
               |          |          |            |            | postgres=CTc/postgres        |         |            |
     template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +| 8221 kB | pg_default | default template for new databases
               |          |          |            |            | postgres=CTc/postgres        |         |            |
    (4 rows)
    
    Once again, thanks to Bart Gawrych for providing some very useful listings on his site.

    Thursday, November 14, 2019

    How to move XML LOB segments in a partitioned table to a new tablespace



    The examples used in this article are based on Oracle 18c.

    When moving a partition of a table with XMLTYPE columns to a new tablespace, the LOB segments created for the XMLTYPE columns follow along with the partition automatically. Only the LOB partitions based on the data type CLOB (and I assume also BLOB) have to be moved explicitly.

    The table looks as follows. Note the column types of CLOB and XMLTYPE:
    CREATE TABLE CS_DOCUMENTS
    (
      ENTRY_ID                 VARCHAR2(100 CHAR),
      BATCH_ID                 NUMBER(28),
      DOC_ID                   VARCHAR2(100 CHAR),
      DOC_TYPE                 VARCHAR2(100 CHAR),
      DOC_NAME                 VARCHAR2(4000 CHAR),
      STATUS                   VARCHAR2(4000 CHAR),
      PUBLISHED                TIMESTAMP(6),
      CREATED                  TIMESTAMP(6),
      RAW_DATA                 CLOB,
      DOKCOUNT                 INTEGER,
      REVISION                 INTEGER,
      XML_P_HEADER             SYS.XMLTYPE,
      XML_P_HEADER_FORMAT      VARCHAR2(4000 CHAR),
      XML_P_DATA               SYS.XMLTYPE,
      XML_P_DATA_FORMAT        VARCHAR2(4000 CHAR),
      XML_P_EXTENSION          SYS.XMLTYPE,
      XML_P_EXTENSION_FORMAT   VARCHAR2(4000 CHAR)
    )
    -- CLOB
    LOB (RAW_DATA) STORE AS SECUREFILE (
      TABLESPACE  DATA1
    )
    -- XMLTYPE
    XMLTYPE XML_P_HEADER STORE AS SECUREFILE BINARY XML (
      TABLESPACE  DATA1
      )
    -- XMLTYPE
    XMLTYPE XML_P_DATA STORE AS SECUREFILE BINARY XML (
      TABLESPACE  DATA1
      )
    -- XMLTYPE
    XMLTYPE XML_P_EXTENSION STORE AS SECUREFILE BINARY XML (
      TABLESPACE  DATA1
    )
    TABLESPACE MOTTAK_DATA
    PARTITION BY RANGE (BATCH_ID)
    INTERVAL(1000)
    (  
      PARTITION P_INIT VALUES LESS THAN (1001)
        NOCOMPRESS 
        TABLESPACE DATA1
        LOB (RAW_DATA) STORE AS SECUREFILE (
          TABLESPACE  DATA1
        )
     )
    ;
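
    A quick way to see how the CLOB and XMLTYPE columns map to LOB segments (including the hidden SYS_NC columns Oracle creates for the XMLTYPE columns) is to query DBA_LOBS:
    SELECT column_name, segment_name, tablespace_name, securefile
    FROM   dba_lobs
    WHERE  owner='CS'
    AND    table_name='CS_DOCUMENTS';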
    
    The tablespace DATA1 is filling up, and there is a need to move some partitions to another tablespace, DATA2.
    In this example, I am moving the latest added partitions first, and working my way backwards.

    Let's look at the most recently added partition:
    SELECT /*+ result_cache */ LP.TABLE_NAME,LP.COLUMN_NAME,LP.LOB_NAME,LP.PARTITION_NAME,LP.LOB_PARTITION_NAME,LP.PARTITION_POSITION,LP.TABLESPACE_NAME,ROUND(SUM(S.BYTES)/1024/1024/1024,2) "GB"
     FROM DBA_LOB_PARTITIONS LP join DBA_TAB_PARTITIONS TP
     ON (LP.PARTITION_NAME = TP.PARTITION_NAME) JOIN DBA_SEGMENTS S
        ON (LP.LOB_PARTITION_NAME = S.PARTITION_NAME)
     WHERE  TP.TABLE_NAME='CS_DOCUMENTS'
     AND     TP.PARTITION_NAME = (
                 SELECT PARTITION_NAME
                FROM DBA_TAB_PARTITIONS 
                WHERE TABLE_NAME='CS_DOCUMENTS' 
                AND partition_position=(SELECT MAX(PARTITION_POSITION) FROM DBA_TAB_PARTITIONS WHERE TABLE_NAME='CS_DOCUMENTS' AND TABLESPACE_NAME != 'DATA2'
                )
                )
     GROUP BY LP.TABLE_NAME,LP.COLUMN_NAME,LP.LOB_NAME,LP.PARTITION_NAME,LP.LOB_PARTITION_NAME,LP.LOB_INDPART_NAME,LP.PARTITION_POSITION,LP.TABLESPACE_NAME
     ORDER BY PARTITION_NAME DESC;
    

    Output:
    TABLE_NAME   COLUMN_NAME  LOB_NAME                  PARTITION_NAME LOB_PARTITION_NAME PARTITION_POSITION TABLESPACE_NAME GB
    CS_DOCUMENTS RAW_DATA     SYS_LOB0044210973C00012$$ SYS_P6533446   SYS_LOB_P6533447   62                 DATA1           31,24
    CS_DOCUMENTS SYS_NC00017$ SYS_LOB0044210973C00017$$ SYS_P6533446   SYS_LOB_P6533449   62                 DATA1           0,01
    CS_DOCUMENTS SYS_NC00020$ SYS_LOB0044210973C00020$$ SYS_P6533446   SYS_LOB_P6533451   62                 DATA1           0,01
    CS_DOCUMENTS SYS_NC00023$ SYS_LOB0044210973C00023$$ SYS_P6533446   SYS_LOB_P6533453   62                 DATA1           0,01

    The LOB partition SYS_LOB_P6533447 is occupying 31GB of space.

    Move the partition to another tablespace. Notice how I specify a tablespace for the table partition and for the CLOB column, but not for any of the XMLTYPE columns.

    I also take the opportunity to compress the LOB at the same time, and use the ONLINE clause to allow DML against the table during the move operation:
    set timing on
    exec DBMS_APPLICATION_INFO.SET_CLIENT_INFO (client_info=>'move_objects');
    exec DBMS_APPLICATION_INFO.SET_MODULE(module_name=>'move_objects', action_name=>NULL);
    exec DBMS_APPLICATION_INFO.SET_ACTION(action_name=>'move_lobs');
    ALTER TABLE CS_DOCUMENTS
     MOVE PARTITION SYS_P6533446
        ROW STORE COMPRESS ADVANCED
        TABLESPACE DATA2
        LOB (RAW_DATA) STORE AS SECUREFILE (
          TABLESPACE  DATA2
          COMPRESS    MEDIUM
          )
     ONLINE;
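
    The DBMS_APPLICATION_INFO calls above make the session easy to spot while the move is running, for example from another session:
    SELECT sid, serial#, status, module, action
    FROM   v$session
    WHERE  module='move_objects';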
    

    The situation after the move:
    SELECT /*+ result_cache */ LP.TABLE_NAME,LP.COLUMN_NAME,LP.LOB_NAME,LP.PARTITION_NAME,LP.LOB_PARTITION_NAME,LP.PARTITION_POSITION,LP.TABLESPACE_NAME,SUM(S.BYTES)/1024/1024/1024 "GB"
     FROM DBA_LOB_PARTITIONS LP join DBA_TAB_PARTITIONS TP
     ON (LP.PARTITION_NAME = TP.PARTITION_NAME) JOIN DBA_SEGMENTS S
        ON (LP.LOB_PARTITION_NAME = S.PARTITION_NAME)
     WHERE  TP.TABLE_NAME='CS_DOCUMENTS'
     AND     TP.PARTITION_NAME = (
                 SELECT PARTITION_NAME
                FROM DBA_TAB_PARTITIONS 
                WHERE TABLE_NAME='CS_DOCUMENTS' 
                AND partition_position=(SELECT MIN(PARTITION_POSITION) FROM DBA_TAB_PARTITIONS WHERE TABLE_NAME='CS_DOCUMENTS' AND TABLESPACE_NAME = 'DATA2'
                )
                )
     GROUP BY LP.TABLE_NAME,LP.COLUMN_NAME,LP.LOB_NAME,LP.PARTITION_NAME,LP.LOB_PARTITION_NAME,LP.LOB_INDPART_NAME,LP.PARTITION_POSITION,LP.TABLESPACE_NAME
     ORDER BY PARTITION_NAME DESC;
    

    Output:
    TABLE_NAME   COLUMN_NAME  LOB_NAME                  PARTITION_NAME LOB_PARTITION_NAME PARTITION_POSITION TABLESPACE_NAME GB
    CS_DOCUMENTS RAW_DATA     SYS_LOB0044210973C00012$$ SYS_P6533446   SYS_LOB_P6533447   62                 DATA2           0,0078125
    CS_DOCUMENTS SYS_NC00017$ SYS_LOB0044210973C00017$$ SYS_P6533446   SYS_LOB_P6533449   62                 DATA2           0,0078125
    CS_DOCUMENTS SYS_NC00020$ SYS_LOB0044210973C00020$$ SYS_P6533446   SYS_LOB_P6533451   62                 DATA2           0,0078125
    CS_DOCUMENTS SYS_NC00023$ SYS_LOB0044210973C00023$$ SYS_P6533446   SYS_LOB_P6533453   62                 DATA2           0,0078125

    ALTER TABLE .... MOVE PARTITION statements can be generated on a partition-by-partition basis with:
    SELECT 'alter table ' || TABLE_OWNER || '.' || TABLE_NAME || ' move partition ' || PARTITION_NAME || ' ROW STORE COMPRESS ADVANCED TABLESPACE DATA2 lob (' || COLUMN_NAME || ') store as SECUREFILE (tablespace DATA2 COMPRESS MEDIUM) online update indexes;' 
    FROM DBA_LOB_PARTITIONS 
    WHERE TABLE_OWNER = 'CS'
    and table_name='CS_DOCUMENTS'
    AND PARTITION_POSITION = (SELECT MAX(PARTITION_POSITION) FROM DBA_LOB_PARTITIONS WHERE TABLE_NAME='CS_DOCUMENTS' AND TABLESPACE_NAME != 'DATA2')
    AND COLUMN_NAME IN (SELECT COLUMN_NAME FROM DBA_TAB_COLUMNS WHERE TABLE_NAME='CS_DOCUMENTS' AND DATA_TYPE LIKE ('%LOB%'));
    

    Tuesday, November 12, 2019

    How to convert a non-partitioned table into a partitioned table in Oracle version 12.2 and onwards




    With Oracle 12.2 and higher versions, it is really simple to convert a non-partitioned table to a partitioned table: you can now use the "ALTER TABLE .... MODIFY" syntax.



    Below I am showing how I used this feature for one of my tables.

    First, find some basic info about the table as it is right now:

    SELECT TABLE_NAME, PARTITIONED,NUM_ROWS,COMPRESSION
    FROM  DBA_TABLES
    WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';

    Output:
    TABLE_NAME           PARTITIONED NUM_ROWS COMPRESSION
    SEGMENT_SIZE_SAMPLES NO          948401   DISABLED


    Alter the table. I am taking the opportunity to compress the table at the same time as I am getting it partitioned. I am also using the ONLINE keyword, allowing DML against the table during the operation:
    ALTER TABLE SEGMENT_SIZE_SAMPLES MODIFY 
    PARTITION BY RANGE (SAMPLE_DATE)
    INTERVAL
    (
       NUMTOYMINTERVAL(1,'MONTH')
    )
    (
    PARTITION P_INIT VALUES LESS THAN (TO_DATE('2014-01','YYYY-MM') )
    )
    ROW STORE COMPRESS ADVANCED
    ONLINE;
    


    Analyze the table:
    exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'DBDRIFT',TabName => 'SEGMENT_SIZE_SAMPLES');
    

    Check the table properties again:
    SELECT TABLE_NAME, PARTITIONED,NUM_ROWS,COMPRESSION
    FROM  DBA_TABLES
    WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';
    

    Output:
    TABLE_NAME           PARTITIONED NUM_ROWS COMPRESSION
    SEGMENT_SIZE_SAMPLES YES         1000719

    How about compression?

    Check the DBA_TAB_PARTITIONS (I am only showing the first 5 rows for brevity):
    SELECT TABLE_NAME, PARTITION_NAME,NUM_ROWS,COMPRESSION, COMPRESS_FOR
    FROM DBA_TAB_PARTITIONS
    WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES'
    FETCH FIRST 5 ROWS ONLY;
    

    Output:
    TABLE_NAME           PARTITION_NAME NUM_ROWS COMPRESSION COMPRESS_FOR
    SEGMENT_SIZE_SAMPLES P_INIT         0        ENABLED     ADVANCED
    SEGMENT_SIZE_SAMPLES SYS_P17074     19320    ENABLED     ADVANCED
    SEGMENT_SIZE_SAMPLES SYS_P17075     24955    ENABLED     ADVANCED
    SEGMENT_SIZE_SAMPLES SYS_P17076     24150    ENABLED     ADVANCED
    SEGMENT_SIZE_SAMPLES SYS_P17077     24934    ENABLED     ADVANCED


    How about existing indexes?

    By default, existing non-unique indexes are also partitioned during the alteration of the table.

    There are two indexes on my table, one unique index supporting the primary key constraint, and one non-unique index.

    Before the table was altered, both were unpartitioned:
    SELECT INDEX_NAME,UNIQUENESS, PARTITIONED 
    FROM DBA_INDEXES
    WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';
    

    INDEX_NAME UNIQUENESS PARTITIONED
    SEGMENT_SIZE_SAMPLES_IDX1 UNIQUE NO
    SEGMENT_SIZE_SAMPLES_IDX2 NONUNIQUE NO

    After the table was partitioned, the non-unique index was automatically partitioned, too:
    INDEX_NAME UNIQUENESS PARTITIONED
    SEGMENT_SIZE_SAMPLES_IDX1 UNIQUE NO
    SEGMENT_SIZE_SAMPLES_IDX2 NONUNIQUE YES
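
    You can also check whether the new index partitioning is LOCAL or GLOBAL via DBA_PART_INDEXES:
    SELECT index_name, partitioning_type, locality
    FROM   dba_part_indexes
    WHERE  table_name='SEGMENT_SIZE_SAMPLES';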

    Update 18.03.2020:

    Another example using LIST partitioning:
    alter table MUSIC.ALBUM modify
    partition by list (GENRE)
    (
        partition P_ROCK values ('ROCK'),
        partition P_POP values ('POP'),
        partition P_CLASSICAL values ('CLASSICAL'),
        partition P_MISC values (default)
    )
    online;
    

    Update 12.03.2021:
    An example using subpartitions:
    -- First, give the LOB the desired attributes, if you wish to change any of them:
    alter TABLE STOCKS MOVE
    LOB (DOC)
    STORE AS SECUREFILE(
      TABLESPACE  DATA1
      ENABLE      STORAGE IN ROW
      CHUNK       8192
      COMPRESS    MEDIUM
    );
    
    -- Alter the table
    alter TABLE STOCKS MODIFY
    PARTITION BY LIST (stockname)
    SUBPARTITION BY RANGE (LASTUPDATED)
    (   -- First partition is called a_name
        PARTITION a_name VALUES('a-abc')
        (
        -- older values
        SUBPARTITION SP_a_name_1_older VALUES LESS THAN ( TO_DATE('2016-01', 'YYYY-MM')) TABLESPACE DATA1
        LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA1 ),
        -- 2016
        subpartition SP_a_name_201601 VALUES LESS THAN (TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2016
        LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2016 ),
    
    ...continue with one subpartition per month per year...
    
    ...continue with partitions b_name, c_name etc.
    
    -- Final partition is called z_name
     PARTITION z_name VALUES(default)
      (
        -- 2016
        SUBPARTITION SP_sp_z_name_older VALUES LESS THAN ( TO_DATE('2016-01', 'YYYY-MM')) TABLESPACE DATA1,
        subpartition SP_sp_z_name_201601 VALUES LESS THAN (TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2016
        LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2016 ),
    
    ...continue with one subpartition per month per year...
    
     LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2022 ),
        subpartition SP_z_name_202212 VALUES LESS THAN (TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2022
        LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2022 ),
        SUBPARTITION SP_z_name_DEFAULT VALUES LESS THAN (MAXVALUE) TABLESPACE DATA1
        LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA1)
        )
    )
    ONLINE
    UPDATE INDEXES(
      IDX_1 LOCAL,
      IDX_2 LOCAL
    );
      

    Note that the UPDATE INDEXES clause converts IDX_1 and IDX_2 to LOCAL indexes on the fly.
    Source: Oracle Documentation
    I also recommend looking at Oracle-base.com for further reading.