Showing posts with label Tables.

Wednesday, August 31, 2022

Rename a table in PostgreSQL: syntax

Rename a table:
alter table myschema.mytable rename to mytable_old;
Change the owner of a table (the new owner must be an existing role):
alter table myschema.mytable owner to anotherschema;
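
To verify the result, you can for example query the pg_tables catalog view (a quick sketch, using the names from the examples above):
select schemaname, tablename, tableowner
from pg_tables
where tablename = 'mytable_old';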

Tuesday, November 12, 2019

How to convert a non-partitioned table into a partitioned table in Oracle 12.2 and later




With Oracle 12.2 and later, it is really simple to convert a non-partitioned table to a partitioned table: you can now use the "ALTER TABLE ... MODIFY" syntax.



Below I am showing how I used this feature for one of my tables.

First, find some basic info about the table as it is right now:

SELECT TABLE_NAME, PARTITIONED,NUM_ROWS,COMPRESSION
FROM  DBA_TABLES
WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';

Output:
TABLE_NAME            PARTITIONED  NUM_ROWS  COMPRESSION
SEGMENT_SIZE_SAMPLES  NO             948401  DISABLED


Alter the table. I am taking the opportunity to compress the table at the same time as it is being partitioned, and I am using the ONLINE keyword, which allows DML against the table during the operation:
ALTER TABLE SEGMENT_SIZE_SAMPLES MODIFY 
PARTITION BY RANGE (SAMPLE_DATE)
INTERVAL
(
   NUMTOYMINTERVAL(1,'MONTH')
)
(
PARTITION P_INIT VALUES LESS THAN (TO_DATE('2014-01','YYYY-MM') )
)
ROW STORE COMPRESS ADVANCED
ONLINE;


Analyze the table:
exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'DBDRIFT',TabName => 'SEGMENT_SIZE_SAMPLES');

Check the table properties again:
SELECT TABLE_NAME, PARTITIONED,NUM_ROWS,COMPRESSION
FROM  DBA_TABLES
WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';

Output:
TABLE_NAME            PARTITIONED  NUM_ROWS  COMPRESSION
SEGMENT_SIZE_SAMPLES  YES           1000719

How about compression?

Check DBA_TAB_PARTITIONS (only the first 5 rows are shown for brevity):
SELECT TABLE_NAME, PARTITION_NAME,NUM_ROWS,COMPRESSION, COMPRESS_FOR
FROM DBA_TAB_PARTITIONS
WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES'
FETCH FIRST 5 ROWS ONLY;

Output:
TABLE_NAME            PARTITION_NAME  NUM_ROWS  COMPRESSION  COMPRESS_FOR
SEGMENT_SIZE_SAMPLES  P_INIT                 0  ENABLED      ADVANCED
SEGMENT_SIZE_SAMPLES  SYS_P17074         19320  ENABLED      ADVANCED
SEGMENT_SIZE_SAMPLES  SYS_P17075         24955  ENABLED      ADVANCED
SEGMENT_SIZE_SAMPLES  SYS_P17076         24150  ENABLED      ADVANCED
SEGMENT_SIZE_SAMPLES  SYS_P17077         24934  ENABLED      ADVANCED


How about existing indexes?

By default, existing non-unique indexes are also partitioned during the alteration of the table.

There are two indexes on my table, one unique index supporting the primary key constraint, and one non-unique index.

Before the table was altered, both were unpartitioned:
SELECT INDEX_NAME,UNIQUENESS, PARTITIONED 
FROM DBA_INDEXES
WHERE TABLE_NAME='SEGMENT_SIZE_SAMPLES';

INDEX_NAME UNIQUENESS PARTITIONED
SEGMENT_SIZE_SAMPLES_IDX1 UNIQUE NO
SEGMENT_SIZE_SAMPLES_IDX2 NONUNIQUE NO

After the table was partitioned, the non-unique index was automatically partitioned, too:
INDEX_NAME UNIQUENESS PARTITIONED
SEGMENT_SIZE_SAMPLES_IDX1 UNIQUE NO
SEGMENT_SIZE_SAMPLES_IDX2 NONUNIQUE YES
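
If you want to control the index conversion yourself rather than relying on the defaults, the same ALTER TABLE ... MODIFY statement accepts an UPDATE INDEXES clause (a sketch reusing my table; adjust the index names and index types to your needs):
ALTER TABLE SEGMENT_SIZE_SAMPLES MODIFY 
PARTITION BY RANGE (SAMPLE_DATE)
INTERVAL
(
   NUMTOYMINTERVAL(1,'MONTH')
)
(
PARTITION P_INIT VALUES LESS THAN (TO_DATE('2014-01','YYYY-MM') )
)
ROW STORE COMPRESS ADVANCED
ONLINE
UPDATE INDEXES
(
  SEGMENT_SIZE_SAMPLES_IDX2 LOCAL
);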

Update 18.03.2020:

Another example using LIST partitioning:
alter table MUSIC.ALBUM modify
partition by list (GENRE)
(
    partition P_ROCK values ('ROCK'),
    partition P_POP values ('POP'),
    partition P_CLASSICAL values ('CLASSICAL'),
    partition P_MISC values (default)
)
online;
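
Afterwards, the resulting partitions and their value lists can be verified in the data dictionary (a quick check, assuming you are connected as the table owner):
select partition_name, high_value, num_rows
from user_tab_partitions
where table_name = 'ALBUM';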

Update 12.03.2021:
An example using subpartitions:
-- First, give the LOB the desired attributes, if you wish to change any of them:
alter TABLE STOCKS MOVE
LOB (DOC)
STORE AS SECUREFILE(
  TABLESPACE  DATA1
  ENABLE      STORAGE IN ROW
  CHUNK       8192
  COMPRESS    MEDIUM
);

-- Alter the table
alter TABLE STOCKS MODIFY
PARTITION BY LIST (stockname)
SUBPARTITION BY RANGE (LASTUPDATED)
(   -- First partition is called a_name
    PARTITION a_name VALUES('a-abc')
    (
    -- older values
    SUBPARTITION SP_a_name_1_older VALUES LESS THAN ( TO_DATE('2016-01', 'YYYY-MM')) TABLESPACE DATA1
    LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA1 ),
    -- 2016
    subpartition SP_a_name_201601 VALUES LESS THAN (TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2016
    LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2016 ),

...continue with one subpartition per month per year...

...continue with partitions b_name, c_name, etc...

-- Final partition is called z_name
 PARTITION z_name VALUES(default)
  (
    -- older values
    SUBPARTITION SP_sp_z_name_older VALUES LESS THAN ( TO_DATE('2016-01', 'YYYY-MM')) TABLESPACE DATA1,
    subpartition SP_sp_z_name_201601 VALUES LESS THAN (TO_DATE(' 2016-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2016
    LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2016 ),

...continue with one subpartition per month per year...

 LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2022 ),
    subpartition SP_z_name_202212 VALUES LESS THAN (TO_DATE(' 2023-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) TABLESPACE DATA_2022
    LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA_2022 ),
    SUBPARTITION SP_z_name_DEFAULT VALUES LESS THAN (MAXVALUE) TABLESPACE DATA1
    LOB (DOC) STORE AS SECUREFILE  ( TABLESPACE DATA1)
    )
)
ONLINE
UPDATE INDEXES(
  IDX_1 LOCAL,
  IDX_2 LOCAL
);
  

Note that the UPDATE INDEXES clause converts IDX_1 and IDX_2 to LOCAL indexes, on the fly.
Source: Oracle Documentation
I also recommend looking at Oracle-Base.com for further reading.

Thursday, February 16, 2017

How to create a global temporary table

An example of a Global Temporary Table (GTT) would be:
CREATE GLOBAL TEMPORARY TABLE INCOMING_DATA (
  ROW_ID                      NUMBER(10) NOT NULL,
  SSN                         NUMBER(11),
  ENTRY_DATE                  DATE NOT NULL,
  HANDLED_BY                  VARCHAR2(30 BYTE) NOT NULL,
  CONTENT_CODE                VARCHAR2(4 BYTE),
  MESSAGE                     CLOB
 )
ON COMMIT DELETE ROWS
;

ON COMMIT DELETE ROWS indicates that Oracle will purge the rows when a transaction ends (after a COMMIT, ROLLBACK or an implicit COMMIT).
Instead of ON COMMIT DELETE ROWS you could also use ON COMMIT PRESERVE ROWS, which keeps the rows until the end of the user's session.

You can add indexes, triggers and views on a GTT. You can even truncate a temporary table, and it will only affect the current session's rows, leaving other users' rows intact.
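
A small sketch of the ON COMMIT DELETE ROWS behaviour, using the INCOMING_DATA table above (the inserted values are made up for illustration):
INSERT INTO INCOMING_DATA (ROW_ID, ENTRY_DATE, HANDLED_BY)
VALUES (1, SYSDATE, 'SCOTT');

SELECT COUNT(*) FROM INCOMING_DATA;   -- returns 1 in this session

COMMIT;

SELECT COUNT(*) FROM INCOMING_DATA;   -- returns 0; the rows were purged when the transaction ended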


With Oracle 12c came the ability for temporary tables to store their UNDO segments in the temporary tablespace, rather than in the conventional undo tablespace, which in turn removes the need to generate REDO for that undo.

To enable this for your session, use:

ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

before creating the GTT.

Sources:

Oracle-Base: article about GTT and another one explaining the 12c new feature "temporary undo"

Oracle Documentation: Overview of GTT
Check my note on private temporary tables, available from Oracle 18c.

Monday, August 24, 2015

What does the DEFAULT DEGREE on tables actually mean?

The DEGREE setting on a table controls its degree of parallelism (DOP): the number of parallel execution threads per instance used when scanning the table. It can be an integer or DEFAULT.

When a table's DEGREE setting is DEFAULT, the DOP is calculated with the following formula:

For a single instance:
DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT

For an Oracle RAC configuration:
DOP = PARALLEL_THREADS_PER_CPU x CPU_COUNT x INSTANCE_COUNT
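
For example, on a single instance with PARALLEL_THREADS_PER_CPU = 2 and CPU_COUNT = 16 (example values, just to illustrate the arithmetic), a full scan of a table with DEGREE = DEFAULT would get a DOP of 2 x 16 = 32.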

The DEGREE setting for a table can be viewed in the *_TABLES views (DBA_TABLES, ALL_TABLES or USER_TABLES).

SELECT DEGREE
FROM   DBA_TABLES
WHERE  TABLE_NAME = 'EMP'
AND    OWNER = 'SCOTT';


DEGREE
------------------
         1
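
To switch a table between an explicit DOP and DEFAULT, you can use ALTER TABLE (a sketch, using the same table as above):
-- Use the default DOP (DEGREE will show DEFAULT afterwards)
ALTER TABLE SCOTT.EMP PARALLEL;

-- Use an explicit DOP of 4
ALTER TABLE SCOTT.EMP PARALLEL 4;

-- Disable parallelism again (DEGREE = 1)
ALTER TABLE SCOTT.EMP NOPARALLEL;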

Wednesday, February 12, 2014

ORA-39726: unsupported add/drop column operation on compressed tables when attempting to drop a column on a (previously) compressed table

My customer had a partitioned table that was previously compressed with the COMPRESS FOR ALL OPERATIONS option.
One of the developers wanted to drop a column on the table, but received the following error message:

ORA-39726: unsupported add/drop column operation on compressed tables

According to Oracle, this is expected behaviour. Oracle Support note 1068820.1 "Error ORA-39726 Drop Column Operation On Compressed Tables 10.2 Release" explains:

"In release 10g, a drop column attempt on a compressed table must raise an ORA-39726 error if the column has to be physically removed from the data blocks.
Hence "DROP COLUMN" and "DROP COLUMN UNUSED" are both illegal because they may have to touch the datablocks. SET UNUSED is OK because it is just a data
dictionary operation (as is DROP COLUMN on virtual columns)."

The Oracle support note concludes:

"In 11g it is allowed to drop columns from a compressed table IF compatible is set to 11.1 or higher AND table was created with the "compress for all OLTP" option but even in this situation there is no real drop but internally the database sets the column UNUSED to avoid long-running decompression and recompression operations."

My database unfortunately has COMPATIBLE set to 11.0, so the only option at this point is to use the syntax

ALTER TABLE owner.table_name SET UNUSED COLUMN column_name;

This renders the column as good as dropped, and another column with the same name can be added to the table.
The number of unused columns for a table can be tracked with:

SELECT *
FROM USER_UNUSED_COL_TABS
WHERE TABLE_NAME='table_name';
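
Since the old column name is released immediately, you can, if needed, add a fresh column with the same name right away (a hypothetical sketch; the datatype is just an example):
ALTER TABLE owner.table_name ADD (column_name VARCHAR2(30));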

Tuesday, January 28, 2014

How to set a column to UNUSED

Setting a column to UNUSED can be a solution when you cannot or should not attempt to drop a column due to performance reasons.

Syntax:
ALTER TABLE [table name] SET UNUSED (col1, col2, ... coln);
or
ALTER TABLE [table name] SET UNUSED COLUMN col1;
Example:
ALTER TABLE MYTABLE
SET UNUSED COLUMN MODIFIED_DATE;

Verify that the columns were set to UNUSED:
SELECT * 
FROM USER_UNUSED_COL_TABS;

Result:
TABLE_NAME COUNT
MYTABLE 1
INVOICES_TMP 1

To physically remove the unused columns later, use:
ALTER TABLE MYTABLE
DROP UNUSED COLUMNS;

The column is now gone:

SELECT * 
FROM USER_UNUSED_COL_TABS;
TABLE_NAME COUNT
INVOICES_TMP 1

You can also use the keyword CHECKPOINT when dropping the unused columns:
ALTER TABLE MYTABLE
DROP UNUSED COLUMNS CHECKPOINT 1000;

The Oracle documentation says about the CHECKPOINT clause:

"This clause causes a checkpoint to be applied after processing the specified number of rows"
and
"Checkpointing cuts down on the amount of undo logs accumulated during the drop column operation to avoid a potential exhaustion of undo Space."

Note that:
* Setting a column to UNUSED drops any constraints and indexes on the column.
* An UNUSED column can no longer be accessed, nor can it be recovered.
* You can, however, add a new column with the same name as any UNUSED column in the table.

Sources:
Oracle Documentation: Marking columns unused
Oracle Documentation: ALTER TABLE

Friday, November 29, 2013

How to create a sequence and a trigger that will emulate "autonumber" in other RDBMS systems

If you have been working with other RDBMS systems, such as Microsoft SQL Server or Microsoft Access, you may be used to an autonumber (or identity) column, which automatically inserts a freshly generated number and creates a unique value in your table. Perfect for generating primary key values.
In Oracle, no such data type exists. To achieve the same functionality, you need to create two additional objects: a sequence and a trigger. The trigger fires on certain events, and the sequence generates a new number. Here is a simple example of how to set it up.
CREATE TABLE ADDRESSES
(
 ADDRESS_ID NUMBER,
 EMPLOYEE_ID NUMBER,
 STREET_ADDRESS VARCHAR2(40),
 STREET_ADDRESS2 VARCHAR2(40),
 CITY VARCHAR2 (30),
 STATE VARCHAR2(2),
 ZIP VARCHAR2(5),
 ZIP_PLUS VARCHAR2(4),
 COUNTRY VARCHAR2(20),
 CONSTRAINT PK_ADDRESSES PRIMARY KEY (ADDRESS_ID)-- "OUT-OF-LINE" CONSTRAINT CREATION
);

CREATE SEQUENCE ADDRESSES_SEQ
 START WITH 1
 INCREMENT BY 1
 MINVALUE 1
 MAXVALUE 1000000
 NOCACHE
 NOCYCLE
 NOORDER;

CREATE OR REPLACE TRIGGER ADDRESSES_TRG
BEFORE INSERT
ON ADDRESSES
FOR EACH ROW
WHEN (NEW.ADDRESS_ID IS NULL)
BEGIN
 SELECT ADDRESSES_SEQ.NEXTVAL
 INTO   :NEW.ADDRESS_ID
 FROM   DUAL;
END;
/
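
A quick usage sketch (the sample values are made up for illustration): leave ADDRESS_ID out of the INSERT and let the trigger populate it from the sequence.
INSERT INTO ADDRESSES (STREET_ADDRESS, CITY, STATE, ZIP, COUNTRY)
VALUES ('1 Main Street', 'Springfield', 'IL', '62701', 'USA');

SELECT ADDRESS_ID, STREET_ADDRESS
FROM ADDRESSES;
-- ADDRESS_ID has been filled in with the next value from ADDRESSES_SEQ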

Wednesday, November 27, 2013

How does the 11gR2 feature "deferred segment creation" work?

Starting with 11gR2, Oracle introduced a feature called deferred segment creation. This means that Oracle will not need to create any segments for a table until you actually insert rows into it.

Oracle creates all the needed metadata about the new table in the data dictionary, but doesn't actually allocate any space in the tablespace. This applies to other segment types, such as indexes and table/index partitions, as well.

Syntax:

-- Without deferred segment creation
CREATE TABLE TEST(tab_id number)
SEGMENT CREATION IMMEDIATE
TABLESPACE USERS;

-- With deferred segment creation
CREATE TABLE TEST (tab_id number)
SEGMENT CREATION DEFERRED
TABLESPACE USERS;


The parameter deferred_segment_creation controls the default behavior:
SQL> show parameter deferred

NAME_COL_PLUS_SHOW_PARAM     TYPE        VALUE
---------------------------- ----------- --------
deferred_segment_creation    boolean     TRUE
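
To see the effect, you can check USER_SEGMENTS before and after the first insert into the deferred-segment TEST table above (a quick sketch; the expected outcome is indicated in the comments):
-- Immediately after creation, no segment exists
SELECT segment_name, bytes
FROM user_segments
WHERE segment_name = 'TEST';
-- no rows selected

INSERT INTO test VALUES (1);
COMMIT;

-- The first insert allocated the segment
SELECT segment_name, bytes
FROM user_segments
WHERE segment_name = 'TEST';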

Monday, November 18, 2013

How to use the APPEND hint to optimize INSERT statements


When using the APPEND hint, you are telling the optimizer to use direct-path INSERT.

Serialized inserts:
Simply specify the APPEND hint in each INSERT statement, either immediately after the INSERT keyword or immediately after the SELECT keyword in the subquery of the INSERT statement.
INSERT /*+ APPEND */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;

Parallelized inserts:
First, enable parallel DML in your session:
ALTER SESSION ENABLE PARALLEL DML;
Then, specify the APPEND hint with the PARALLEL hint:
INSERT /*+ APPEND PARALLEL */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;

You can combine NOLOGGING with APPEND to make the process even faster. Note that NOLOGGING is a table (or tablespace) attribute, not a hint: set it on the target table before the load, and the direct-path INSERT will then generate only a negligible amount of REDO:
ALTER TABLE sales NOLOGGING;

INSERT /*+ APPEND PARALLEL */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;


Note:
* If the database or tablespace is in FORCE LOGGING mode, then direct path INSERT always logs, regardless of the logging setting.
* Direct-path INSERT is the default insert method when running in parallel mode, but unless you have specified the DEGREE attribute for your table, you still need to use the PARALLEL hint for each insert operation.
* Oracle Database performs index maintenance at the end of direct-path INSERT operations on tables that have indexes.
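
A simple way to verify that direct-path insert was actually used: until you commit, the table you just loaded cannot be read in the same transaction (a sketch; the error is raised by design):
INSERT /*+ APPEND */ INTO sales 
       SELECT product_id, customer_id, TRUNC(sales_date), 3, promotion_id, quantity, amount
       FROM sales_activity_direct;

SELECT COUNT(*) FROM sales;
-- ORA-12838: cannot read/modify an object after modifying it in parallel

COMMIT;

-- After the commit, the table can be queried again
SELECT COUNT(*) FROM sales;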





Tuesday, October 22, 2013

How to make a table read only (11g and newer)

ALTER TABLE table_name READ ONLY;

The command would make the table read-only even for its owner.
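
Any DML attempted against the table will then fail (a sketch; some_column is just a placeholder):
UPDATE table_name SET some_column = 'X';
-- ORA-12081: update operation not allowed on table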

To reverse the operation:

ALTER TABLE table_name READ WRITE;