Thursday, January 31, 2019

How to solve SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled



Logged in as the user owning the objects, you get the following error when trying to use the autotrace feature:
SP2-0618: Cannot find the Session Identifier.  Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

Solution: run the necessary scripts first:
conn / as sysdba
@$ORACLE_HOME/sqlplus/admin/plustrce.sql
grant plustrace to scott;

conn scott/password

@$ORACLE_HOME/rdbms/admin/utlxplan.sql
exit

After this is done, the errors should no longer appear.
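A quick way to verify the fix, using the scott account from the example above (a sketch):

conn scott/password
set autotrace traceonly statistics
select * from dual;
set autotrace off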

last - show a listing of the last logged-in users


A potentially very useful tool when investigating what really happened is the command "last".

Here is an abbreviated output:
root     pts/4        psl0ssh01.skead. Thu Jan 31 12:56   still logged in
oracle   pts/4        w7x64proddvh111. Thu Jan 31 12:50 - 12:52  (00:02)
root     pts/6        psl0ssh01.skead. Thu Jan 31 12:18 - 12:23  (00:05)
root     pts/5        psl0ssh01.skead. Thu Jan 31 11:08   still logged in
admin    pts/5        psa0addm2.skead. Wed Jan 30 20:21 - 20:23  (00:01)
admin    pts/5        psa0addm2.skead. Wed Jan 30 20:11 - 20:21  (00:09)

The shell variable HISTTIMEFORMAT and the "history" command



If you need to investigate the history of commands on your Linux/Unix server, set the HISTTIMEFORMAT variable first:
HISTTIMEFORMAT="%d/%m/%y %T " 

After that, you'll see a much more precise listing of your history:

  962  31/01/19 11:08:50 cd product/
  963  31/01/19 11:08:50 ls -al
  964  31/01/19 11:08:50 du -sh *
  965  31/01/19 11:08:50 cd ..

instead of
  962  cd product/
  963  ls -al
  964  du -sh *
  965  cd ..
The man page for history states:

If the HISTTIMEFORMAT variable is set, the time stamp information associated with each history entry is written to the history file, marked with the history comment character.
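To make the setting survive new sessions, you can for example append it to your ~/.bash_profile (assuming bash):

echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile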

Wednesday, January 30, 2019

How to solve "User "superuser" has an expired password" in postgreSQL



psql
psql (11.2)
Type "help" for help.

postgres=# alter role superuser valid until 'infinity';
ALTER ROLE
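If you first want to check which roles have a password expiry set, you can query the pg_roles catalog:

select rolname, rolvaliduntil
from pg_roles
where rolvaliduntil is not null;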

How to find the log file for a PostgreSQL database



Look in the file $PGDATA/postgresql.conf:

#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

log_destination = stderr # Valid values are combinations of
                                        # stderr, csvlog, syslog, and eventlog,
                                        # depending on platform.  csvlog
                                        # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
                                        # into log files. Required to be on for
                                        # csvlogs.
                                        # (change requires restart)

# These are only used if logging_collector is on:
log_directory = log # directory where log files are written,
                                        # can be absolute or relative to PGDATA

log_filename = 'postgresql-%a.log' # log file name pattern,
                                        # can include strftime() escapes

log_truncate_on_rotation = on # If on, an existing log file with the
                                        # same name as the new log file will be
                                        # truncated rather than appended to.
                                        # But such truncation only occurs on
                                        # time-driven rotation, not on restarts
                                        # or size-driven rotation.  Default is
                                        # off, meaning append to existing files
                                        # in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
                                        # happen after that time.  0 disables.

log_rotation_size = 0 # Automatic rotation of logfiles will
                                        # happen after that much log output.
                                        # 0 disables.


So in my case, the directory $PGDATA/log should contain the database server's log files.

The directory listing shows:

[postgres@myserver /pgdata/data/log]$ ls -latrh
total 5.6M
drwx------  2 postgres postgres 4.0K Dec 23 00:00 .
-rw-------  1 postgres postgres 867K Jan 24 23:59 postgresql-Thu.log
-rw-------  1 postgres postgres 869K Jan 25 23:59 postgresql-Fri.log
-rw-------  1 postgres postgres 873K Jan 26 23:59 postgresql-Sat.log
-rw-------  1 postgres postgres 873K Jan 27 23:59 postgresql-Sun.log
-rw-------  1 postgres postgres 873K Jan 28 23:59 postgresql-Mon.log
-rw-------  1 postgres postgres 873K Jan 29 23:59 postgresql-Tue.log
drwx------ 21 postgres postgres 4.0K Jan 30 10:47 ..
-rw-------  1 postgres postgres 407K Jan 30 11:01 postgresql-Wed.log

Since I am writing this post on a Wednesday, the log currently being written to is postgresql-Wed.log, and putting a tail on it is somewhat like tailing the alert log of an Oracle database.
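For example:

tail -f $PGDATA/log/postgresql-Wed.log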

Source: The online documentation

How to check the status, stop and start a postgreSQL server


The PostgreSQL server comes with a wrapper called pg_ctl, which simplifies some admin tasks for you.

Check if the variable PGDATA is set:
[postgres@myserver ~]$ echo $PGDATA
/pgdata/data

Check status:
[postgres@myserver ~]$ pg_ctl status
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"

If the PGDATA variable is not set in your operating system, you need to specify the -D flag, like this:
[postgres@myserver ~]$ pg_ctl status -D /pgdata/data
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"

Check if the database accepts incoming requests:
[postgres@myserver ~]$  pg_isready -d testdb01
/var/run/postgresql:5432 - accepting connections

Stop:

[postgres@myserver ~]$ pg_ctl stop
waiting for server to shut down.... done
server stopped

Start:

[postgres@myserver ~]$ pg_ctl start
waiting for server to start....2019-01-30 10:35:04 CET LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-01-30 10:35:04 CET LOG:  listening on IPv6 address "::", port 5432
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  redirecting log output to logging collector process
2019-01-30 10:35:04 CET HINT:  Future log output will appear in directory "log".
 done
server started
If your PostgreSQL server is running on RHEL7 or newer, you should use systemctl to control the service:
systemctl start postgresql-11
systemctl stop postgresql-11
systemctl restart postgresql-11
systemctl status postgresql-11
Example output (abbreviated) from the status command:
● postgresql-11.service - PostgreSQL 11 database server
   Loaded: loaded (/etc/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-01-08 02:20:51 CET; 1 months 10 days ago
     Docs: https://www.postgresql.org/docs/11/static/
 Main PID: 1910 (postmaster)
   CGroup: /system.slice/postgresql-11.service
           ├─ 1910 /usr/pgsql-11/bin/postmaster -D /postgres/data
           ├─ 2047 postgres: logger
           ├─ 2390 postgres: checkpointer
           ├─ 2392 postgres: background writer
           ├─ 2394 postgres: walwriter
           ├─ 2395 postgres: autovacuum launcher
           ├─ 2396 postgres: archiver   last was 00000001000002360000009A
           ├─ 2397 postgres: stats collector
           ├─ 2398 postgres: logical replication launcher
           ├─ 2766 postgres: postgres_exporter postgres localhost(60360) idle
           ├─ 3061 postgres: postgres_exporter postgres localhost(44202) idle
           ├─48217 postgres: postgres postgres [local] idle


If the postgres server is down and you need to check the currently installed version, you can use one of these methods:
which psql
/sw/postgres/pgsql-13/bin/psql

cat /u02/pgdata/datafiles/PG_VERSION
13

cat /etc/PG_VERSION
13.ansible
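If the binaries are in your PATH, they can also report their version without the server running (a third option, assuming a standard installation):

postgres --version
psql --version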

Tuesday, January 29, 2019

The difference between REDO and UNDO


UNDO                                                  REDO
----                                                  ----
Used during normal database operations                Used during instance and media recovery only
Contains changed column values                        Contains "change vectors", a description of a change made to a single block
Stored inside the database, in a separate tablespace  Stored outside the database, in redo log files
Can be utilized for many purposes                     Has one purpose only

The redo log files have one and only one purpose in life: recovery (instance and media recovery).
The undo tablespace(s) have one primary purpose in life: to support read consistency. However, unlike redo, UNDO data can also be utilized for many other purposes, such as Oracle's Flashback technologies.
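Both can be inspected in a running instance; for example:

-- the online redo log files
select group#, bytes, status from v$log;

-- the current undo tablespace
show parameter undo_tablespace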

Oracle 12.2 Documentation about the redo logs can be found here

Monday, January 28, 2019

How to avoid ORA-00947 "not enough values" when performing an ITAS operation



If you are inserting rows from a table using an ITAS (Insert Table As Select) statement, and the table being written to has a different layout than the table being selected from, you may struggle to get your INSERT statement to succeed.

For example, let's say that the interim_table below has a virtual column defined on it that the original table does not have. When attempting to execute your ITAS statement, you may see the error

ORA-00947: not enough values

To avoid this, enclose the column listing in parentheses, while omitting it from the SELECT statement constituting the VALUES-list, like this:

insert into interim_table
( 
  tq_id
  ,ename
  ,setting
)
  SELECT
    f.tq_id,
    f.ename,
    f.setting
  FROM original_table f JOIN interim_table wf
  ON wf.tq_id = f.tq_id;

How to enable DML logging



DML logging can be used as a workaround if you have a large number of rows to process, and you expect that some of these will fail to be inserted correctly due to data quality problems. In other words, some rows may violate one or more constraints on the receiving table.

Here is a short demonstration that I have recently tested as a workaround for a client.

I have a table called MYTABLE, which looks like this:
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 ORDER_ID                                  NOT NULL NUMBER
 ORG_ORDERID                                        NUMBER
 VALUE_ID_TYPE                                      VARCHAR2(20)
 COL_PREFIX                                         VARCHAR2(80)
 COL_NAME                                           VARCHAR2(500)
 COL_SUFFIX                                         VARCHAR2(20)

There is a unique constraint on the table:
--create the index
CREATE UNIQUE INDEX C_COL_UIX ON MYTABLE
(ORG_ORDERID, COL_PREFIX, COL_NAME, COL_SUFFIX)
TABLESPACE USERS;

-- create the constraint, using index created above
ALTER TABLE MYTABLE ADD (
  CONSTRAINT C_COL_CONS
  UNIQUE (ORG_ORDERID, COL_PREFIX, COL_NAME, COL_SUFFIX)
  USING INDEX C_COL_UIX);

The actual DML logging is performed by adding the clause
LOG ERRORS INTO error_table ('INSERT') REJECT LIMIT UNLIMITED;
at the end of the INSERT statement.

Before we can do this we need to create the error log table:
BEGIN
  DBMS_ERRLOG.create_error_log (dml_table_name => 'MYTABLE');
END;
/

The statement above will create a table called ERR$_MYTABLE, where rejected rows will be inserted into.

For this test, I will now verify that the values I intend to use actually exist in the table:
select count(*)
from mytable
where org_orderid = 122
and col_prefix = 'A'
and col_name = 'B'
and col_suffix = 'X'
;

  COUNT(*)
----------
         1

Any attempt to insert a row into the table with the same values as in the select statement above (122, A, B, X) will violate constraint C_COL_CONS and be logged in the table ERR$_MYTABLE.

Let's execute an INSERT statement that will violate the constraint:
INSERT
INTO mytable (order_id, org_orderid, value_id_type, col_prefix, col_name, col_suffix)
VALUES(2850841, 122, 'D', 'A', 'B', 'X')
LOG ERRORS INTO ERR$_MYTABLE ('INSERT') REJECT LIMIT UNLIMITED;

As expected, the row violates the unique constraint. Instead of aborting the statement, Oracle logs the error:
ORA-00001: unique constraint (C_COL_CONS) violated


Let's check the log table:
SELECT * FROM ERR$_MYTABLE;

ORA_ERR_NUMBER$  : 1
ORA_ERR_MESG$    : ORA-00001: unique constraint (C_COL_CONS) violated
ORA_ERR_ROWID$   :
ORA_ERR_OPTYP$   : I
ORA_ERR_TAG$     : INSERT
ORDER_ID         : 2850841
ORG_ORDERID      : 122
VALUE_ID_TYPE    : D
COL_PREFIX       : A
COL_NAME         : B
COL_SUFFIX       : X

If you execute a batch job with thousands of rows, the offending rows will be caught so they can be corrected later. The non-offending rows will be neatly inserted where they belong.
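A sketch of a typical follow-up after such a batch job, inspecting the rejected rows and clearing them once they have been corrected and reprocessed:

select ora_err_number$, ora_err_mesg$, order_id
from err$_mytable;

delete from err$_mytable;
commit;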

Thursday, January 17, 2019

What types of materialized views can be created in an Oracle database?


There are three types of Materialized Views that can be created in an Oracle database:

1. Materialized Views with Aggregates

* The most commonly used type of materialized view, particularly in data warehouse databases.

Example:
CREATE MATERIALIZED VIEW SEQ_AGGR_INFO 
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE
AS
select RDATE, count(SEQNO) as antall, max(SEQNO) as makssekvens
from DATE_TO_SEQNO
group by RDATE
order by 1, 2;

Requirements:

* The SELECT list must contain all of the GROUP BY columns (if present)
* There must be a COUNT(*) and a COUNT(column) on any aggregated columns.
* Materialized view logs must be present on all tables referenced in the query that defines the materialized view (a sample log is shown after this list).
* Valid aggregate functions are: SUM, COUNT(x), COUNT(*), AVG, VARIANCE, STDDEV, MIN, and MAX.
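For the example above, a materialized view log supporting fast refresh could look like this (a sketch, based on the columns used in the MV query):

CREATE MATERIALIZED VIEW LOG ON DATE_TO_SEQNO
WITH ROWID, SEQUENCE (RDATE, SEQNO)
INCLUDING NEW VALUES;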

2. Materialized Views containing only joins

* These materialized views contain only joins, and no aggregates

Example:
CREATE MATERIALIZED VIEW active_data_mv
TABLESPACE data1
CACHE
REFRESH FORCE ON COMMIT
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT 
    act.q_id, 
    act.doc_type, 
    act.next_q_id, 
    act.prev_q_id,
    act.created, 
    act.created_by, 
    act.last_modified, 
    act.last_modified_by, 
    act.contents, 
    act.accum_value, 
    act.accum_total
FROM active_data act JOIN archived_data arc
ON arc.q_id = act.q_id;

An advantage of this type of materialized view is that expensive joins are precalculated.
Only REFRESH ON COMMIT or REFRESH ON DEMAND is legal for materialized views containing only joins. When ON COMMIT is used, the refresh is performed at commit time of the transaction that does DML on one of the materialized view's detail tables.


3. Nested Materialized Views

* A nested materialized view is a materialized view whose definition is based on another materialized view.

I have no personal experience using this type of materialized view, but Oracle has supplied some examples if you follow the link above.

Sources: Oracle 12.2 Data Warehousing Guide

Some simple timestamp expressions


Here is a simple template for how to add hours, minutes and seconds using Oracle timestamp expressions:

select sysdate, sysdate+1/24 "+ 1hr",  sysdate+1/24/60 "+ 1 min",  sysdate+1/24/60/60 "+ 1 sec", sysdate+10/24/60/60 "+ 10 sec"
from dual;

SYSDATE              + 1hr                 + 1 min               + 1 sec               + 10 sec
17.01.2019 13:34:38  17.01.2019 14:34:38   17.01.2019 13:35:38   17.01.2019 13:34:39   17.01.2019 13:34:48
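The same arithmetic can also be written with ANSI interval literals, which many find more readable:

select sysdate + interval '1' hour    as "+ 1hr",
       sysdate + interval '1' minute  as "+ 1 min",
       sysdate + interval '10' second as "+ 10 sec"
from dual;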

What is a "fast refresh" of materialized views?



A FAST REFRESH of a materialized view applies only the changed data to the materialized view. This way, you don't have to rebuild the entire materialized view every time the data in the base tables changes. Oracle keeps a log of the changed data in a separate object called a materialized view log.

To indicate a FAST REFRESH, use the following syntax:
CREATE MATERIALIZED VIEW SEQ_AGGR_INFO
REFRESH FAST ON COMMIT | ON DEMAND | ON STATEMENT | START WITH SYSDATE NEXT SYSDATE+1/24/60
ENABLE QUERY REWRITE
AS
select RDATE, count(SEQNO) as antall, max(SEQNO) as makssekvens
from DATE_TO_SEQNO
group by RDATE
order by 1, 2;

ON COMMIT instructs Oracle to refresh the materialized view whenever a transaction against the master table commits.
ON DEMAND instructs Oracle that the materialized view should not be refreshed unless the user manually executes one of the DBMS_MVIEW refresh procedures.
ON STATEMENT instructs Oracle to automatically refresh the materialized view every time a DML operation is performed on any of the materialized view's base tables.
START WITH specifies a datetime expression for the first automatic refresh time. NEXT indicates the interval between automatic refreshes.
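For example, an ON DEMAND materialized view can be refreshed manually like this, where 'F' requests a fast refresh:

exec DBMS_MVIEW.REFRESH('SEQ_AGGR_INFO', 'F');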

General restrictions for FAST REFRESH are listed here

Tuesday, January 15, 2019

The simplest way to see if your database is part of a data guard setup



Execute this query:
select * from v$dataguard_config;

DB_UNIQUE_NAME  PARENT_DBUN  DEST_ROLE         CURRENT_SCN   CON_ID
proddb01        NONE         PRIMARY DATABASE  238029367683  0
proddb01_stby1  proddb01     PHYSICAL STANDBY  238029353772  0
In this example, I am logged into the database proddb01, the primary database.

The column PARENT_DBUN is interesting. It displays the "DB_UNIQUE_NAME of the parent database, also known as the database that supplies live redo to the destination."

If executed on the standby database, the output will be reversed, but the principle is the same.
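Another quick check, which works on any database in the configuration, is to look at the role of the database you are logged into:

select name, db_unique_name, database_role, open_mode
from v$database;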

Documentation for the view V$DATAGUARD_CONFIG can be found here

Friday, January 11, 2019

How to troubleshoot query rewrites



1. Run the script $ORACLE_HOME/rdbms/admin/utlxrw.sql, logged in as the schema that owns the materialized view. This creates the REWRITE_TABLE used below.

2. Analyze the query by logging on as the schema owning the materialized view and executing:
set serveroutput on

begin
      DBMS_MVIEW.EXPLAIN_REWRITE('select dato, max(sekvensnummer) '||
            'from DATE_TO_SEQNUM where dato <= :B1 group by dato','SEQ_AGGREGATED','x');
end;
/
where the first argument is the query you expect to be rewritten, and the second argument is the name of the materialized view.

3. The potential reasons for the failed rewriting can be found in the table REWRITE_TABLE:
select message from rewrite_table;
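It is also worth verifying that query rewrite is enabled in your session or instance:

show parameter query_rewrite_enabled
alter session set query_rewrite_enabled = true;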

Thursday, January 10, 2019

What are Index-organized tables and how are they created?


An example of creating an IOT table from my own experience:
CREATE TABLE DATE_TO_SEQNUM (
    DATO DATE, 
    SEQNUM NUMBER(19) NOT NULL,
CONSTRAINT PK_DTS_IOT PRIMARY KEY (DATO,SEQNUM)
)
ORGANIZATION INDEX
TABLESPACE USERS;


If you expect a lot of repeating entries in the IOT, you can use index compression, like this:
CREATE TABLE DATE_TO_SEQNUM (
    DATO DATE, 
    SEQNUM NUMBER(19) NOT NULL,
CONSTRAINT PK_DTS_IOT PRIMARY KEY (DATO,SEQNUM)
)
ORGANIZATION INDEX
TABLESPACE USERS 
COMPRESS;
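To verify that a table really is index-organized, you can check the data dictionary:

select table_name, iot_type
from user_tables
where table_name = 'DATE_TO_SEQNUM';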

In my case, the IOT really was this simple. There are some other important directives when creating an IOT, too, which may be applicable in other cases: the OVERFLOW clause, the INCLUDING keyword, and the PCTTHRESHOLD clause. I will come back to them when I have had a chance to put them into practice. In the meantime, read about them here.

Here are some statements from the Oracle documentation which define what an IOT is, and the potential benefits they can provide:

* Index-organized tables are tables stored in an index structure.

* In an index-organized table, rows are stored in an index defined on the primary key for the table. Each index entry in the B-tree also stores the non-key column values. Thus, the index is the data, and the data is the index.

* Index-organized tables provide faster access to table rows by primary key or a valid prefix of the key. The presence of non-key columns of a row in the leaf block avoids an additional data block I/O.

* Index-organized tables are useful when related pieces of data must be stored together or data must be physically stored in a specific order.


Sources: Oracle Database 12.2 documentation

Wednesday, January 9, 2019

How to create a LIST-RANGE sub-partitioned table in PostgreSQL



Create the table:
CREATE TABLE orders(
  id            integer,
  country_code  VARCHAR(5),
  customer_id   integer,
  order_date    DATE,
  order_total   numeric(8,2)
)
PARTITION BY LIST (country_code);

Create the partitions for your table; I have defined two. Define each so that it is ready to be further divided into sub-partitions:
CREATE TABLE orders_NO
 PARTITION OF orders 
 FOR VALUES IN ('no') 
 PARTITION BY RANGE (order_total);

CREATE TABLE orders_SE
 PARTITION OF orders 
 FOR VALUES IN ('se') 
 PARTITION BY RANGE (order_total);

Finally, create your sub-partitions and set their boundaries. Note that the FROM bound is inclusive and the TO bound is exclusive, so adjacent partitions should share their boundary value (otherwise a value like 2000.50 would not map to any partition):
CREATE TABLE small_orders_no
 PARTITION OF orders_NO FOR VALUES FROM (0) TO (2000);

CREATE TABLE medium_orders_no
 PARTITION OF orders_NO FOR VALUES FROM (2000) TO (4000);

CREATE TABLE large_orders_no
 PARTITION OF orders_NO FOR VALUES FROM (4000) TO (maxvalue);

CREATE TABLE small_orders_se
 PARTITION OF orders_SE FOR VALUES FROM (0) TO (2000);

CREATE TABLE medium_orders_se
 PARTITION OF orders_SE FOR VALUES FROM (2000) TO (4000);

CREATE TABLE large_orders_se
 PARTITION OF orders_SE FOR VALUES FROM (4000) TO (maxvalue);
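To check where rows end up, you can insert a test row and inspect the tableoid pseudo-column (a quick sketch; the row below should land in small_orders_no):

INSERT INTO orders VALUES (1, 'no', 100, '2018-02-01', 1500.00);

SELECT tableoid::regclass AS partition, *
FROM orders;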

How to avoid ERROR: no partition of relation ... found for row in PostgreSQL



Building on my previous post, I would now like to insert the following rows into my LIST partitioned table:
INSERT INTO ORDERS values (1,'NO',100,'01.02.2018',1000);
INSERT INTO ORDERS values (2,'SE',101,'01.03.2018',1200);

So: one row that I expect to go into the partition for Norway, and another row into the partition for Sweden.

I execute the following:
INSERT INTO ORDERS values (1,'NO',100,'01.02.2018',1000);

which results in an error:
Error: ERROR: no partition of relation "orders" found for row
  Detail: Partition key of the failing row contains (country_code) = (NO).

Contrary to Oracle, with PostgreSQL you need to explicitly create your partition first, like this:
CREATE TABLE orders_NO
 PARTITION OF orders FOR VALUES IN ('NO');

After this is done, PostgreSQL will accept your INSERT-statement.

You can also enforce constraints at the same time, if desirable:
CREATE TABLE orders_NO
 PARTITION OF orders (
 CONSTRAINT orders_id_nn CHECK (id is not null)
) FOR VALUES IN ('NO');
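To list the partitions that now exist for the table, you can use psql's \d+ meta-command:

\d+ orders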

Tuesday, January 8, 2019

How to create a LIST-partitioned table with a primary key in PostgreSQL


In my experiments with a new RDBMS, I have tried to replicate in Postgres what I am familiar with from Oracle.

Postgres supports three partitioning strategies: RANGE, LIST and HASH.

Here are some findings regarding LIST partitioning, which is extensively used in Oracle by my customers:

To create a list partitioned table in Postgres:

CREATE TABLE orders(
  order_id      integer,
  country_code  VARCHAR(5),
  customer_id   integer,
  order_date    DATE,
  order_total   numeric(8,2)
)
PARTITION BY LIST (country_code);

Add the desired partitions:
CREATE TABLE orders_NO
 PARTITION OF orders FOR VALUES IN ('NO');

CREATE TABLE orders_SE
 PARTITION OF orders FOR VALUES IN ('SE');

When working with Oracle, the equivalent table in Oracle could have been defined with a primary key on the column order_id:

CREATE TABLE orders(
  order_id      number primary key,
  country_code  varchar2(5),
  customer_id   number,
  order_date    date,
  order_total   number(8,2)
)
PARTITION BY LIST (country_code);

With PostgreSQL, however, doing so will result in an error:
Error: ERROR: insufficient columns in PRIMARY KEY constraint definition
Detail: PRIMARY KEY constraint on table "orders" lacks column "country_code" which is part of the partition key

If the primary key is changed to use the partitioning column, you'll get a valid primary key:
CREATE TABLE orders(
  id            integer,
  country_code  VARCHAR(5) primary key,
  customer_id   integer,
  order_date    DATE,
  order_total   numeric(8,2)
)
PARTITION BY LIST (country_code);

But this is not what I want, since my table is designed to store multiple values for each country.

To create a partitioned table including a valid primary key:
CREATE TABLE orders(
  id            integer,
  country_code  VARCHAR(5),
  customer_id   integer,
  order_date    DATE,
  order_total   numeric(8,2),
PRIMARY KEY (id,country_code)
)
PARTITION BY LIST (country_code);

As long as the primary key includes the partition key, the syntax is accepted by the postgres server.
This is different from Oracle, where you define the primary key independently of the partition key.

As with Oracle, a primary key constraint will automatically create a unique B-tree index.

Note that
PRIMARY KEY (id,country_code)
can alternatively be replaced by
UNIQUE (id,country_code)

The effect is nearly the same: both create a unique B-tree index, but a unique constraint allows NULL values in its columns, while a primary key does not.

Monday, January 7, 2019

How to connect to a postgres database using jdbc

Using SquirrelSQL, I connected to a postgres database with the PostgreSQL JDBC driver.
The entire connection string is
jdbc:postgresql://mypostgresserver01:5432/vegdb01

How to create a multicolumn list-partitioned table in Oracle 12.2



From Oracle 12.2, you can LIST partition a table using multiple columns:

CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) 
(
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         )
);
Incoming data that has any of the combinations below will enter the partition NORWAY_SOUTH_WEST:

'NOR','AUST-AGDER'
'NOR','VEST-AGDER'
'NOR','ROGALAND'

This functionality removes the need for LIST-LIST subpartitioning, as far as I can see.


Some queries to verify that the partitions exist:

select  partition_name,partition_position,tablespace_name
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';

PARTITION_NAME     PARTITION_POSITION  TABLESPACE_NAME
NORWAY_SOUTH_WEST  1                   USERS
NORWAY_WEST        2                   USERS
NORWAY_CENTRAL     3                   USERS
NORWAY_EAST        4                   USERS

SELECT PARTITIONING_TYPE,PARTITION_COUNT,PARTITIONING_KEY_COUNT 
FROM USER_PART_TABLES 
where table_name='REGIONAL_ORDERS';

PARTITIONING_TYPE  PARTITION_COUNT  PARTITIONING_KEY_COUNT
LIST               4                2

SELECT column_name,column_position 
FROM USER_PART_KEY_COLUMNS
where name='REGIONAL_ORDERS';

COLUMN_NAME   COLUMN_POSITION
COUNTRY_CODE  1
REGION_CODE   2

Let's insert a row:

insert into regional_orders
values (1,'NOR','ROGALAND',3344,'04.01.2019', 1000);
commit;

Analyze the table:
exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'SCOTT',TabName => 'REGIONAL_ORDERS');

Verify:
select partition_name, num_rows
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';

PARTITION_NAME     NUM_ROWS
NORWAY_CENTRAL     0
NORWAY_EAST        0
NORWAY_SOUTH_WEST  1
NORWAY_WEST        0

Insert with a different value:
insert into regional_orders
values (2,'NOR','BUSKERUD',3345,'04.01.2019', 1200);

COMMIT;
Analyze again:
exec DBMS_STATS.GATHER_TABLE_STATS (OwnName => 'SCOTT',TabName => 'REGIONAL_ORDERS');

Verify:
select partition_name, num_rows
from user_tab_partitions 
where table_name='REGIONAL_ORDERS';

PARTITION_NAME     NUM_ROWS
NORWAY_CENTRAL     0
NORWAY_EAST        1
NORWAY_SOUTH_WEST  1
NORWAY_WEST        0


If you send in a value that is not covered in your pre-specified list, your statement will fail:
insert into regional_orders
values (3,'NOR','NORDLAND',3346,'04.01.2019', 1100);

ORA-14400: inserted partition key does not map to any partition

The solution to this is either to create your table with a default partition, or to use the Oracle 12.2 new feature of automatic list partitioning.

Using a default partition will be as follows:
CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) 
                (       
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         ),
                PARTITION NORWAY_REST VALUES (DEFAULT) 
                )
;

Using the AUTOMATIC keyword is as easy as adding the keyword AUTOMATIC to your partitioning clause and removing the specification for the default partition:
CREATE TABLE regional_orders(
  id            NUMBER,
  country_code  VARCHAR2(5),
  region_code   VARCHAR2(20),
  customer_id   NUMBER,
  order_date    DATE,
  order_total   NUMBER(8,2),
  CONSTRAINT regional_orders_pk PRIMARY KEY (id)
)
PARTITION BY LIST (country_code,region_code) AUTOMATIC
                (       
                PARTITION NORWAY_SOUTH_WEST VALUES (
                                        ( 'NOR','AUST-AGDER'),
                                        ( 'NOR','VEST-AGDER'),
                                        ( 'NOR','ROGALAND')
                                         ),
                PARTITION NORWAY_WEST VALUES (
                                        ( 'NOR','HORDALAND'),
                                        ( 'NOR','SOGN OG FJORDANDE'),
                                        ( 'NOR','MØRE OG ROMSDAL')
                                         ),
                PARTITION NORWAY_CENTRAL VALUES (
                                        ( 'NOR','TRØNDELAG')
                                         ),
                PARTITION NORWAY_EAST VALUES (
                                        ( 'NOR','BUSKERUD'),
                                        ( 'NOR','HEDMARK'),
                                        ( 'NOR','TELEMARK')
                                         )
                )
;

New values will be added automatically and given system-generated partition names.

There is an obvious trade-off here: if you use automation, you will not be able to give your partitions logical names, or to place values that logically belong together in the same partition.

For example, the values
'NOR','NORDLAND'
may logically belong to the same partition as the values
'NOR','TROMS'
and
'NOR','FINNMARK'
If I had known at the time of table creation that such values might be sent, I would of course have created another partition, like this:
PARTITION NORWAY_NORTH VALUES (
                                        ( 'NOR','NORDLAND'),
                                        ( 'NOR','TROMS'),
                                        ( 'NOR','FINNMARK')
                                         )

If you use a default partition, you will always be able to save your incoming rows, and you have full control over your partition names and their values.
But new rows end up in the default partition together with any other record that cannot find its way into a properly named and configured partition.
You may need to split your default partition later, to satisfy your business needs or to reduce its size.

Friday, January 4, 2019

How to solve ORA-00997: illegal use of LONG datatype when querying DBA_TAB_PARTITIONS



If you need to selectively find partitions based on their upper-bound value, like this:

select partition_name,high_value
from dba_tab_partitions
where table_name='MYTABLE'
and high_value='201805';

you will run into the following error:
ORA-00997: illegal use of LONG datatype

One solution I found was to create a temporary table, like this:
CREATE GLOBAL TEMPORARY TABLE PARTITION_HIGH_VAL
ON COMMIT PRESERVE ROWS
AS
select 
     PARTITION_NAME,
     to_lob(high_value) as text_clob
     from dba_tab_partitions 
     where TABLE_OWNER='SH'
     and table_name='SALES';

Then, use the table PARTITION_HIGH_VAL to view the upper bounds of each partition:
select PARTITION_NAME,TO_CHAR(TEXT_CLOB) "high value" 
from partition_high_Val;

PARTITION_NAME high value
SYS_P79451 201806
SYS_P71617 201805
SYS_P70243 201803
SYS_P70231 201804
SYS_P70112 201802


Since this is a global temporary table created with ON COMMIT PRESERVE ROWS, the rows survive commits but are deleted when your session exits.
Obviously, you could also store these results in a heap-organized table if you need to keep the data for a longer period.

Wednesday, January 2, 2019

How to setup Oracle Network Integrity



Encryption will protect your data from being read in clear text while in transit.
However, data can still be tampered with and arrive modified at the receiver's end.

To protect against modification, you need to ensure the integrity of the data being sent over the network.

Data integrity protection in Oracle works independently of the encryption process. In other words, you can use both encryption and checksumming together, or either one separately.

To implement network integrity, add the following to your server’s sqlnet.ora file:
SQLNET.CRYPTO_CHECKSUM_SERVER = requested
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER= (SHA256, SHA384, SHA512, SHA1, MD5)
All of the hash algorithms in the list create a checksum that changes if the data is altered in any way.

The value "requested" in the CRYPTO_CHECKSUM_SERVER instructs the Oracle server that calculating checksums when sending network packets to the client is desirable, but not required. Other valid settings are "required", "accepted" and "rejected". See the documentation for details.

The list of available checksum algorithms is stated in SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER, and the first algorithm in the list common to both client and server will be selected. During my tests, SHA256 was selected.
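For completeness, the client side has matching parameters in its sqlnet.ora file. A minimal example could be (an illustration; not required when the server uses "requested"):

SQLNET.CRYPTO_CHECKSUM_CLIENT = accepted
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT = (SHA256)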

More information can be found in the official documentation