Friday, December 21, 2018

How to create a materialized view with query rewrite enabled


In this example, I am creating a materialized view to support queries against a table with 20 million rows, queries that look for the maximum sequence number within a range of dates.

Oracle describes the benefits of Materialized views like this:

"One of the major benefits of creating and maintaining materialized views is the ability to take advantage of query rewrite, which transforms a SQL statement expressed in terms of tables or views into a statement accessing one or more materialized views that are defined on the detail tables."

Since I want the materialized view to be refreshed every time a new row is committed to the base table, I need to specify that the mview should use "fast refresh" on commit.

First, create a materialized view log to store changed rows in the base tables that your materialized views are built on. This is required when using FAST REFRESH:
CREATE MATERIALIZED VIEW LOG ON DATE_TO_SEQNO
TABLESPACE DATA1
CACHE
ENABLE ROW MOVEMENT
WITH ROWID (SEQNO, RDATE)
INCLUDING NEW VALUES;

Then, create the materialized view. Query rewrite is disabled by default, so it must be enabled explicitly:
CREATE MATERIALIZED VIEW SEQ_AGGR_INFO
SEGMENT CREATION IMMEDIATE
TABLESPACE DATA1
CACHE
ENABLE ROW MOVEMENT
ROW STORE COMPRESS ADVANCED
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE
AS
select RDATE, count(SEQNO) as antall, max(SEQNO) as makssekvens
from DATE_TO_SEQNO
group by RDATE
order by 1, 2;

Verify the mview's existence:
SELECT mview_name,rewrite_enabled,refresh_mode,refresh_method, last_refresh_type, last_refresh_date,last_refresh_end_time,staleness,compile_state
FROM USER_MVIEWS;

MVIEW_NAME REWRITE_ENABLED REFRESH_MODE REFRESH_METHOD LAST_REFRESH_TYPE LAST_REFRESH_DATE LAST_REFRESH_END_TIME STALENESS COMPILE_STATE
SEQ_AGGR_INFO Y COMMIT FAST COMPLETE 18.01.2019 13:58:24 18.01.2019 13:58:24 FRESH VALID

Verify the mview log's existence:
SELECT master,log_table,rowids, primary_key,filter_columns, include_new_values
FROM USER_MVIEW_LOGS;

MASTER LOG_TABLE ROWIDS PRIMARY_KEY FILTER_COLUMNS INCLUDE_NEW_VALUES
DATE_TO_SEQNO MLOG$_DATE_TO_SEQNO YES NO YES YES


The following query will now use the materialized view, due to the QUERY REWRITE directive used in the creation of the mview:
var B1 VARCHAR2(10);
EXEC :B1 := to_date('01.01.2018','dd.mm.yyyy');
set lines 200
set autot on
set timing on
select max(seqno)
from DATE_TO_SEQNO
where rdate = :B1;

The explain plan verifies it:
MAX(SEQNO)
----------
  18580449

Elapsed: 00:00:00.05

Execution Plan
----------------------------------------------------------
Plan hash value: 1856567295

--------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                        |     1 |    14 |   243   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE               |                        |     1 |    14 |            |          |
|*  2 |   MAT_VIEW REWRITE ACCESS FULL| SEQ_AGGR_INFO          |     1 |    14 |   243   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("SEQ_AGGR_INFO"."RDATE"=:B1)

To drop the materialized view and its log object:
drop materialized view log on DATE_TO_SEQNO;
drop materialized view SEQ_AGGR_INFO;

Sources:
Create Materialized View Log

Create Materialized view

Datawarehousing guide

Thursday, December 20, 2018

How to generate a script to rebuild unusable indexes



set trimspool on
set lines 200
set pages 0
set heading off
set verify off
set feedback off
set echo off
spool rebuild_stmts.sql

select 'spool rebuild_stmts.log' from dual;
SELECT 'alter index ' || idx.owner || '.' || idx.index_name || ' rebuild online;'
FROM dba_indexes idx
where idx.status = 'UNUSABLE'
ORDER BY idx.owner, idx.index_name
/
select 'exit' from dual;

exit

Wednesday, December 19, 2018

How to find the version of postgres installed on a server


$ su - postgres
Last login: Tue Dec 18 23:03:07 CET 2018
$ pg_config --version
PostgreSQL 11.1

Using psql:
[postgres@til0drift-dbteam-sandbox-pgsql01 ~]$ psql
psql (11.2)
Type "help" for help.

postgres=# select version();
                                                 version
---------------------------------------------------------------------------------------------------------
 PostgreSQL 11.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
(1 row)

Monday, December 10, 2018

How to send a file to a specific directory using wget

Here demonstrated while getting a file from our RH satellite server:

wget -P /home/oracle/myfiles http://satellite/oracle/cloningscripts/post_clone.sql

If the directory /home/oracle/myfiles is not present, it will be created.

Friday, December 7, 2018

How to create the SCOTT user

To create the user SCOTT, run the script

$ORACLE_HOME/rdbms/admin/utlsampl.sql

Observe that user SCOTT will be created with the following statement:

GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE TO SCOTT IDENTIFIED BY tiger;

Note that if the default profile is using a password verification function, the creation will fail.

To work around this, exchange the default password "tiger" with something different that honors the verification function, for example "T123G456R".
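
For illustration, the edited grant line inside utlsampl.sql could then look like this (the password below is just an example that happens to satisfy a typical verification function):

GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE TO SCOTT IDENTIFIED BY T123G456R;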

Remember to set
SET TERMOUT ON
SET ECHO ON
at the top, if you want to see the script output.

Wednesday, November 28, 2018

Changes in privilege "SELECT ANY DICTIONARY" in Oracle 12c


From version 12.1 and onwards, Oracle has introduced some changes to enhance security when granting the system privilege "SELECT ANY DICTIONARY".

In the New Features guide for version 12.1, the authors explain:

The SELECT ANY DICTIONARY privilege no longer permits access to security sensitive data dictionary tables DEFAULT_PWD$, ENC$, LINK$, USER$, USER_HISTORY$, and XS$VERIFIERS.

This change increases the default security of the database by not allowing access to a subset of data dictionary tables through the SELECT ANY DICTIONARY privilege.

Access to USER$ has also been removed when granting the system privilege SELECT ANY TABLE and the role SELECT_CATALOG_ROLE, but I have not yet found the documentation that confirms this.
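
If you want to see which grantees actually hold this privilege in your own database, a quick check could be (a sketch):

SELECT grantee
FROM dba_sys_privs
WHERE privilege = 'SELECT ANY DICTIONARY';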

Monday, November 19, 2018

How to use rman to create a backup-based clone on a remote server


It has been a while since I needed to use this technique, since we rely on storage snapshot clones these days. Sometimes though, a good old-fashioned rman clone based on backups is the only way to solve a problem.
Since I always avoid working on the production server during cloning, most steps are done logged onto the auxiliary server.

The method I used is based on "Backup-Based Duplication Without a Target Database and Recovery Catalog Connection"
Here is what I did:

1. On the production server, take a backup of the source database using RMAN:
rman target / nocatalog
backup database plus archivelog;

From this point and onwards, everything is done while logged onto the destination server.

2. Get the files from the source server:
ssh testserver
cd /u05
mkdir bup
cd bup
scp -r prodserver:/u05/flash_recovery_area/PRODDB01/2018_11_16/* .

3. Prepare the auxiliary pfile. Note that the parameters I use exceed the actual required parameters.
But since the auxiliary instance will replace an already existing database instance, which is already tuned and has the correct memory parameters, I choose to include them. Notice also the db_file_name_convert and log_file_name_convert parameters. They control where rman will place the files during the restore process.
cd $ORACLE_HOME/dbs
vi inittest1.ora

Add the following:

*.db_name='test1'
*.db_unique_name='test1'
*.audit_file_dest='/u01/oracle/admin/test1/adump'
*.audit_trail='DB'
*.compatible='12.2.0'
*.control_files='/u02/oradata/test1/control01.ctl','/u04/fra/test1/control02.ctl'
*.db_block_size=8192
*.db_file_name_convert='proddb01','test1'
*.log_file_name_convert='proddb01','test1'
*.db_recovery_file_dest='/u05/flash_recovery_area/test1'
*.db_recovery_file_dest_size=1000G
*.diagnostic_dest='/u01/oracle'
*.nls_language='NORWEGIAN'
*.nls_territory='NORWAY'
*.open_cursors=300
*.optimizer_adaptive_plans=FALSE
*.optimizer_dynamic_sampling=0
*.optimizer_mode='ALL_ROWS'
*.pga_aggregate_target=7222M
*.processes=1500
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=21696M
*.shared_pool_size=2624M
*.streams_pool_size=256M
*.undo_tablespace='UNDOTBS1'
*.diagnostic_dest=/u01/oracle

4. I prefer using an spfile over a pfile. Therefore I take the time here to create an spfile:
sqlplus / as sysdba
startup nomount pfile=inittest1.ora
create spfile from pfile;

5. Startup the auxiliary instance in nomount-mode using the spfile:
shutdown abort
startup nomount

6. Start the duplication process:
rman auxiliary /
duplicate database to test1
backup location '/u05/bup';

RMAN went through the normal restore, name switching and recovery phases. Finally, the database was opened with the resetlogs option.

Wednesday, November 14, 2018

New parameter in dbms_redefinition.finish_redef_table in 12c



Oracle 12c adds a potentially very useful parameter to the dbms_redefinition package, in the procedure finish_redef_table: dml_lock_timeout

It specifies a time limit in seconds for how long a DML statement waits in a DML lock queue before the procedure terminates gracefully.

By default, this parameter is set to 0, which means no waiting at all; the procedure will simply error out if it cannot get a lock on the table.

The maximum value is 1000000 and will cause any DML statements to wait in perpetuity to acquire a DML lock.

Here is an example of its usage, where I have set the limit to 5 minutes (300 seconds):
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE(uname=>'SCOTT',
                                     orig_table=>'EMP',
                                     int_table=>'EMP_INTERIM', 
                                     dml_lock_timeout=>300);
END;
/

Monday, October 29, 2018

How do you configure an Oracle Golden Gate extract to run in Integrated Capture Mode?

There are different parameters valid for Golden Gate extracts, depending on which mode you choose (integrated mode or classical mode).

* Integrated extract parameters are found here
* Classical extract parameters are found here

Some of the parameters can be used in both configurations, while others are specific to one or the other.

An example of a parameter that is unique to Integrated Capture mode is
TRANLOGOPTIONS INTEGRATEDPARAMS( option, ...)
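
For illustration, a minimal integrated extract parameter file using this option could look something like this (the extract name, credential alias, trail path and values are just examples):

EXTRACT myext
USERIDALIAS ggadmin_alias
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 256, PARALLELISM 2)
EXTTRAIL ./dirdat/et
TABLE SCOTT.*;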

You instruct the Golden Gate software to start your extract in Integrated mode with the following ggsci command:

ADD EXTRACT MYEXT INTEGRATED TRANLOG, BEGIN NOW

To instruct the Golden Gate software to start the same extract in classical mode, alter the instructions slightly:
ADD EXTRACT MYEXT, TRANLOG, BEGIN NOW

Potential solution to Golden Gate error message "Parameter [INTEGRATEDPARAMS] is not valid for this configuration"


I was trying to set up an Integrated Capture in my Golden Gate environment, and kept getting the following error when starting my extract process:

OGG-10144  Oracle GoldenGate Capture for Oracle, myext.prm:  (myext.prm) line 3: Parameter [INTEGRATEDPARAMS] is not valid for this configuration.
ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, myext.prm:  PROCESS ABENDING.

In my case, this was caused by the fact that the extract was configured with classical capture, like this:

ADD EXTRACT MYEXT, TRANLOG, BEGIN NOW

The correct way to configure an extract in integrated capture mode is:

ADD EXTRACT MYEXT INTEGRATED TRANLOG, BEGIN NOW

How to lookup parameter definitions in Golden Gate 12.2


A potentially very useful feature when working with Golden Gate configurations is the ability to look up parameter definitions with the new "info param" command:

GGSCI (myserver.mydomain.com as ggadmin@testdb01) 31> info param tranlogoptions.INTEGRATEDPARAMS

param name  : tranlogoptions.integratedparams
description : (Oracle) Valid for Extract in integrated capture mode (Oracle Standard or Enterprise Edition 11.2.0.3 or later) 
Passes parameters and values to the Oracle database logmining server when Extract is in integrated capture mode.
argument    : no argument
options     : ANNOTATEDDLSIZE, CAPTURE_IDKEY_OBJECTS
              CAPTURE_SEQUENCE_NEXTVAL, DISABLE_ON_LIMIT
              DOWNSTREAM_REAL_TIME_MINE, ENABLE_PROCEDURAL_REPLICATION
              IGNORE_TRANSACTION, IGNORE_UNSUPPORTED_TABLE, INCLUDE_OBJECTS
              INLINE_LOB_OPTIMIZATION, INLINE_SFLOB_OPTIMIZATION, MAXIMUM_SCN
              MAX_SGA_SIZE, MERGE_THRESHOLD, MESSAGE_LIMIT
              MESSAGE_TRACKING_FREQUENCY, PARALLELISM
              SKIP_AUTOFILTERED_TABLE_DDL, SPLIT_THRESHOLD, STARTUP_SECONDS
              TIME_LIMIT, TRACE_LEVEL, USE_RAC_SERVICE, WRITE_ALERT_LOG
              XOUT_CLIENT_EXISTS
component(s): EXTRACT
mode(s)     : Integrated Extract
platform(s) : AIX
              HPUX-IT
              HPUX-PA
              Linux
              Solaris SPARC
              Solaris x86
              Windows x64
versions    :
    max ver : 12.3.0.1.85
database(s) : all supported databases (on the supported platforms).
status      : current
mandatory   : false
dynamic     : false
relations   : none

Source: http://www.oracle-scn.com/oracle-goldengate-12-2-new-feature-info-param/

Friday, October 26, 2018

How to avoid OGG-10470 during an "add trandata" operation

When configuring a table for an Oracle GoldenGate replication, you need to enable supplemental logging for the table.

If you see the following message at the end of ADD TRANDATA command in ggsci:
GGSCI (myserver.mydomain.com as gg@testdb01) 8> ADD TRANDATA SCOTT.EMP

2018-10-25 11:59:35  INFO    OGG-15132  Logging of supplemental redo data enabled for table SCOTT.EMP.

2018-10-25 11:59:35  INFO    OGG-15133  TRANDATA for scheduling columns has been added on table SCOTT.EMP.

2018-10-25 11:59:35  INFO    OGG-15135  TRANDATA for instantiation CSN has been added on table SCOTT.EMP.

2018-10-25 11:59:35  ERROR   OGG-10470  Error encountered during gathering support information on table SCOTT.EMP.
ERROR: OCI Error ORA (status = 942-ORA-00942: table or view does not exist)

You need to grant access to the dictionary for your Golden Gate database user. In my case:
GRANT SELECT ANY DICTIONARY TO GG; 

From now on, any attempt to add supplemental logging for tables will end without errors, like this:

2018-10-26 13:01:41  INFO    OGG-10471  ***** Oracle Goldengate support information on table SCOTT.EMP. *****
Oracle Goldengate support native capture on table SCOTT.EMP.
Oracle Goldengate marked following column as key columns on table SCOTT.EMP: EMP_ID.

How to add supplemental logging on a table

alter table scott.emp 
add supplemental log group gg_feed (feed_id) always;

To reverse:
alter table scott.emp
drop supplemental log group gg_feed;

Thursday, October 18, 2018

What is the meaning of the "10G 11G 12C" value in DBA_USERS.PASSWORD_VERSIONS?


Since I wrote my post What is the meaning of the "10G 11G" value in DBA_USERS.PASSWORD_VERSIONS?, Oracle 12c added yet another value to this column.

It's now common to see the string

PASSWORD_VERSIONS
10G 11G 12C

when you query the password_versions column of dba_users view.
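
You can check the value for a given account with a simple query (a sketch):

SELECT username, password_versions
FROM dba_users
WHERE username = 'SCOTT';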

What does it mean?

It's a list of password versions that were generated at the time the account was created.
This list will look a little different depending on your setting of the parameter SQLNET.ALLOWED_LOGON_VERSION_SERVER in your $TNS_ADMIN/sqlnet.ora at the time of account creation.

Oracle explains:

"The PASSWORD_VERSIONS column shows the list of password versions that exist for the account. 10G refers to the earlier case-insensitive Oracle password DES-based version, 11G refers to the SHA-1 version, and 12C refers to the SHA-2-based SHA-512 version."


In my 12.2 database, I have set the following parameter in my $TNS_ADMIN/sqlnet.ora file:
SQLNET.ALLOWED_LOGON_VERSION_SERVER=11

to allow older clients to connect.

When I then create a user with the CREATE USER statement, it will automatically generate all three password versions. Since my SQLNET.ALLOWED_LOGON_VERSION_SERVER was set to 11 at the time of creation, my password will indeed be case-sensitive, since case-sensitive passwords were introduced in version 11.1 of the Oracle software.

If I adjust the sqlnet.ora parameter:
SQLNET.ALLOWED_LOGON_VERSION_SERVER=12

and drop/recreate the user, my password version will have changed:

PASSWORD_VERSIONS
11G 12C

The setting of SQLNET.ALLOWED_LOGON_VERSION_SERVER has the following effects:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11 will keep generating the 10G, 11G and 12C password versions
SQLNET.ALLOWED_LOGON_VERSION_SERVER=12 will generate the 11G and 12C password versions, and exclude the 10G password version.
SQLNET.ALLOWED_LOGON_VERSION_SERVER=12a will generate only the 12C password version

Oracle calls these three settings Greatest level of compatibility, Medium level of security, and Highest level of security, respectively.

If you for some reason want the old-school case-insensitive password versions to apply, set your SQLNET.ALLOWED_LOGON_VERSION_SERVER to 10 or lower, and make sure the parameter sec_case_sensitive_logon is set to FALSE.

Any user created after setting SQLNET.ALLOWED_LOGON_VERSION_SERVER to 10 or lower will be able to log on using a case-insensitive password.



Sunday, October 7, 2018

How to list all schemas and their sizes


If you want a list of schema sizes, join dba_segments with dba_users to limit the output to schemas that are not Oracle-maintained:
set lines 200
col owner  format a30
col "GB" format 999999
SELECT s.owner,sum(s.bytes/1024/1024/1024) "GB"
FROM dba_segments s join dba_users u
on (s.owner = u.username)
where u.oracle_maintained = 'N'
group by owner
ORDER BY 2 desc;

Example output
OWNER                               GB
------------------------------ -------
USER1                            19577
USER2                             6144
USER3                             2306

Friday, September 28, 2018

How to work around ORA-38338: incorrect ILM policy scope for row-level ADO policies


When adding an ADO policy on a table, like this:

CREATE TABLE TEST_TABLE1
(
  col1 NUMBER(38),
  col2 NUMBER(38),
  col3 DATE
)
NOCOMPRESS 
TABLESPACE USERS
ILM ADD POLICY ROW STORE COMPRESS ADVANCED 
ROW AFTER 1 DAY OF NO ACCESS;

Oracle returned

ORA-38338: incorrect ILM policy scope

This policy is not valid as a row-level ADO policy. The only valid condition for row-level policies is NO MODIFICATION:

ILM ADD POLICY ROW STORE COMPRESS ADVANCED 
ROW AFTER X DAY OF NO MODIFICATION;

Monday, September 24, 2018

How to deinstall an old oracle installation using oui

You can use the following command to remove an obsolete oracle software installation from the central inventory:
$ORACLE_HOME/oui/bin/runInstaller -silent -deinstall REMOVE_HOMES={"/u01/oracle/product/11204"}

Remember to clean out the physical files with an appropriate operating system command.
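
For example, something along these lines (double-check the path before removing anything):

rm -rf /u01/oracle/product/11204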

The above method is an alternative to the method outlined in this post.

Friday, September 21, 2018

Parameters that can be removed after upgrade to 12.2

From Information For Parameters _upgrade_optim and _upgrade_capture_noops (Doc ID 2182783.1):


The following parameters get added after an upgrade to 12.2:

_upgrade_optim = FALSE
_upgrade_capture_noops = FALSE

These parameters are part of the upgrade optimizations. They currently do not have any effect on the upgrade.

Since these parameters do not have any impact, they can safely be removed after the upgrade to 12.2.


They can be removed like this:
echo "alter system reset \"_upgrade_optim\" scope=spfile SID='*';" | sqlplus / as sysdba
echo "alter system reset \"_upgrade_capture_noops\" scope=spfile SID='*';" | sqlplus / as sysdba

Thursday, September 20, 2018

How to load a plan from the AWR into the SMB


To load a plan found in an AWR snapshot into the SMB, you need to add it to an STS first, then load it into the SMB.
Here is how:

1. create an empty STS:
exec dbms_sqltune.create_sqlset(sqlset_name=>'mysts');

2. Note the snapshots where your plan was recorded, and add the plans to your STS:
DECLARE
  my_cur  dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN my_cur FOR
     SELECT VALUE(x)
     FROM   TABLE( dbms_sqltune.select_workload_repository(begin_snap=>5840, 
                                                           end_snap=>5841, 
                                                           basic_filter=>'sql_id = ''45wmakdh9ak9s''')) x;

     dbms_sqltune.load_sqlset(sqlset_name => 'mysts', populate_cursor => my_cur);
END;
/

You can now check your STS for its contents:
select sqlset_name,
     sql_id,
     plan_timestamp,
     parsing_schema_name "schema",
     plan_hash_value,round(elapsed_time/1000000) "duration in seconds",
     cpu_time,
     buffer_gets,
     disk_reads,
     executions
from   dba_sqlset_statements
where  sqlset_name = 'mysts'
;

SQLSET_NAME  SQL_ID         PLAN_TIMESTAMP       schema  PLAN_HASH_VALUE  duration in seconds    CPU_TIME  BUFFER_GETS  DISK_READS  EXECUTIONS
mysts        45wmakdh9ak9s  11.09.2018 15:22:13  SH      3827183161                         0           0            0           0           0
mysts        45wmakdh9ak9s  06.09.2018 16:45:26  SH      4026264003                       579    85550842     24062750      101808     3278857

We can see from the output that there are actually two different plans in the STS now, one with plan_hash_value=3827183161 and another one with plan_hash_value=4026264003.

In my case I wanted only the one with plan_hash_value=3827183161.

3. Finally, load the contents of your STS into your SMB:
VARIABLE cnt NUMBER
EXECUTE :cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( -
                    sqlset_name=>'mysts', basic_filter=>'plan_hash_value=3827183161', sqlset_owner=>'SYS');

print :cnt;

How to drop an STS


exec dbms_sqltune.drop_sqlset ('mysts');

Yet another example on how to create and populate an STS


Here is how to create an STS and populate it with SQL statements matching a particular SQL ID:
exec dbms_sqltune.create_sqlset(sqlset_name=>'mysts');

DECLARE
  my_cur  dbms_sqltune.sqlset_cursor;
BEGIN
  OPEN my_cur FOR
     SELECT VALUE(x)
     FROM   TABLE( dbms_sqltune.select_cursor_cache(basic_filter=>'sql_id = ''45wmakdh9ak9s''')) x;

     dbms_sqltune.load_sqlset(sqlset_name => 'mysts', populate_cursor => my_cur);
END;
/

Check the contents of the STS:
set lines 200

col name format a20
col owner format a20
col created format a30
col statement_count format 999999999999
select name,owner,created,statement_count
from dba_sqlset
where name = 'mysts';

And check the details of the statements packed in the STS, like this:
select sql_id,plan_timestamp,parsing_schema_name,plan_hash_value,round(elapsed_time/1000000) "duration in seconds",cpu_time,buffer_gets,disk_reads,executions
from   dba_sqlset_statements
where  sqlset_name = 'mysts'
fetch first 3 rows only;

SQL_ID         PLAN_TIMESTAMP       PARSING_SCHEMA_NAME  PLAN_HASH_VALUE  duration in seconds    CPU_TIME  BUFFER_GETS  DISK_READS  EXECUTIONS
45wmakdh9ak9s  18.09.2018 23:48:21  SCOTT                4026264003                      7718  1331447472    336095417      828189    40286996

Tuesday, September 18, 2018

How the DBTIMEZONE is set in an Oracle database


A customer wanted me to look into why his two databases running the exact same application code were returning different time zones.

select name, created,(select dbtimezone from dual) "dbtimezone"
from v$database;


NAME CREATED dbtimezone
testdb1 25.04.2016 16:40:12 +02:00


NAME CREATED dbtimezone
testdb2 08.11.2017 14:07:51 +01:00

The os-command date +"%Z %z" returned CEST +0200 on both servers. Why the difference?

It didn't take long to find a good answer from Oracle about this topic.


Oracle states:

The database timezone is the time zone of the location where the database is installed

and

Use the SET TIME_ZONE clause to set the time zone of the database. If you do not specify the SET TIME_ZONE clause, then the database uses the operating system time zone of the server.

And also, from the document "Timestamps & time zones - Frequently Asked Questions (Doc ID 340512.1)":

If not specified with the CREATE DATABASE statement, the database time zone defaults to the server's O/S time zone offset. Note that if your server runs in a time zone affected by DST, then the database time zone would default to whatever the current offset is at the time of database creation, so you can have databases created with the same script on the same server with a different database time zone.


In Norway, we use DST, more commonly referred to as "summertime" and "wintertime". The first database was installed April 25th, after the server had switched to summertime. The second database was installed November 8th, after the server had switched to wintertime.

Oracle continues with the following about the database time zone:


The database time zone is not as important as it sounds. First of all it does not influence functions like sysdate, or systimestamp.

These function take their contents (date and time, and in the case of systimestamp also time zone) completely from the OS without any "Oracle" intervention.

and

A common misconception is that the database timezone needs to be "your" timezone. This is NOT true.
The database timezone has NO relation with "where" the server is located.
There is NO advantage whatsoever in using your timezone or a named timezone as database timezone.
The best setting for dbtimezone is simply +00:00 (or any other OFFSET like -09:00, +08:00, ...),

if your current dbtimezone value is an OFFSET then please leave it like it is.
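
To check, and if really needed change, the database time zone, something like this could be used (a sketch; the ALTER only succeeds if no tables contain TIMESTAMP WITH LOCAL TIME ZONE data, and it takes effect after the next restart):

select dbtimezone from dual;

alter database set time_zone = '+00:00';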

Wednesday, September 12, 2018

How to use the purge command in adrci





In version 11gR1 Oracle introduced the Automatic Diagnostic Repository (ADR), which serves as a common repository for all log files, trace files, incidents etc. that the database produces.

ADR has a "short" and a "long" purging policy. The default values are:

• 720 hours (30 days) for SHORTP_POLICY
• 8760 hours (365 days) for LONGP_POLICY

Each policy controls different types of content:

These files are controlled by the value of LONGP_POLICY:

ALERT
INCIDENT
SWEEP
STAGE
HM

These files are controlled by the value of SHORTP_POLICY:

TRACE
CDUMP
UTSCDMP
IPS

Sometimes it may be necessary to manually purge the ADR. The following example will delete all trace files older than 48 hours:
adrci> purge -age 48 -type trace
Other examples (2160 hrs = 90 days):
purge -age 2160 -type alert
purge -age 2160 -type incident
purge -age 2160 -type cdump
purge -age 2160 -type stage
purge -age 2160 -type sweep
purge -age 2160 -type hm

The policies can be adjusted according to need by using the following commands:
adrci> show control

ADR Home = /u01/oracle/diag/rdbms/proddb01/proddb01:

*************************************************************************
ADRID                SHORTP_POLICY        LONGP_POLICY
-------------------- -------------------- --------------------
597879840            720                  8760

Set a new policy (2 and 3 days, respectively):
set control (SHORTP_POLICY = 48)
set control (LONGP_POLICY = 72)

An example from one of my databases:
adrci> show incident -mode basic

ADR Home = /u01/oracle/diag/rdbms/proddb01/proddb01:
*************************************************************************

INCIDENT_ID          PROBLEM_KEY                                                 CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
30858                ORA 603                                                     2014-03-19 11:26:41.287000 +01:00
30857                ORA 600 [kqlidchg0]                                         2014-03-19 11:26:38.226000 +01:00
27284                ORA 603                                                     2014-03-18 14:42:28.228000 +01:00
27283                ORA 600 [kqlidchg0]                                         2014-03-18 14:42:23.706000 +01:00
.
.
.
17044                ORA 603                                                     2013-09-26 15:37:30.870000 +02:00
17043                ORA 600 [kqlidchg0]                                         2013-09-26 15:37:27.255000 +02:00
17042                ORA 603                                                     2013-09-26 15:35:45.918000 +02:00
17041                ORA 600 [kqlidchg0]                                         2013-09-26 15:35:42.239000 +02:00

45 rows fetched
Remove incidents that are older than 90 days:
adrci> purge -age 2160 -type incident
Another check of the incident list shows that the number has been reduced:
adrci> show incident -mode basic

ADR Home = /u01/oracle/diag/rdbms/sergat/sergat:
*************************************************************************

INCIDENT_ID          PROBLEM_KEY                                                 CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
30858                ORA 603                                                     2014-03-19 11:26:41.287000 +01:00
30857                ORA 600 [kqlidchg0]                                         2014-03-19 11:26:38.226000 +01:00
27284                ORA 603                                                     2014-03-18 14:42:28.228000 +01:00
27283                ORA 600 [kqlidchg0]                                         2014-03-18 14:42:23.706000 +01:00
.
.
.
24 rows fetched

How to work around ORA-14402: updating partition key column would cause a partition change

The error
ORA-14402: updating partition key column would cause a partition change
occurs when you try to update the value of a partition key column so that the row would have to move to a different partition.
Such an update is handled internally as a DELETE followed by an INSERT, which gives the row a new ROWID.
The ROWID of a row is assigned when the row is first inserted and is, by default, not allowed to change. Therefore, Oracle will reject any DML that would cause a rowid to change.

Example:
update sales.quarterly_sales
set sold_month = '2016-02'
WHERE unique_sales_id = '3d6fb1ad-243d-4435-97d8-9ca4bfde3ab5';

Since the partition key is the column "sold_month", it will return ORA-14402.

The workaround is to allow row movement for the table:

alter table sales.quarterly_sales enable row movement;

You can now update the row.

You may want to disable row movement again when you are finished, so that rowids can no longer change:
alter table sales.quarterly_sales disable row movement;

Further reading:
Doc ID 1518567.1 "FAQ: Row Movement Common Questions and Problems on Partitioned Tables"
Doc ID 236191.1: "Updating Partition Key Column Fails with ORA-14402"

Thursday, August 9, 2018

How to connect to a PDB using jdbc

Before you connect to a PDB using jdbc, make sure you have added the following to your database server's $TNS_ADMIN/listener.ora file:

USE_SID_AS_SERVICE_<listener_name>=on

Restart or reload the listener. You should now be able to connect to the service_name representing your PDB.

Here is a screenshot from SQuirrel SQL client running on Ubuntu, connecting to a PDB running on a Virtual Box with CentOS and Oracle 12.1:

The entire connection string is

jdbc:oracle:thin:@lx01oric.oric.no:1521:pdbveg1.oric.no
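
For comparison, without the USE_SID_AS_SERVICE workaround the standard thin-driver syntax for a service name would be:

jdbc:oracle:thin:@//lx01oric.oric.no:1521/pdbveg1.oric.no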

Friday, June 29, 2018

Overview over archivelogs present in the FRA


The following query reveals that my FRA contains archived logs from the last 3 days only (indicated by an "A" - available).

-- Set your NLS_DATE_FORMAT in your session to avoid ORA-01830: date format picture ends before converting entire input string
alter session set NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS';

select trunc(completion_time),status,count(*)
from v$archived_log
where completion_time > to_date('20.06.2018 00:00:00')
group by trunc(completion_time),status
order by 1 desc;

Output:

TRUNC(COMPLETION_TIME)  STATUS  COUNT(*)
29.06.2018              A             37
28.06.2018              A             57
27.06.2018              A             54
27.06.2018              D              1
26.06.2018              D             75
25.06.2018              D             75
24.06.2018              D             53
23.06.2018              D             55
22.06.2018              D             59
21.06.2018              D             58
20.06.2018              D             54

Archive log files prior to these have been deleted (indicated by a "D") by rman after having been successfully backed up.

Thursday, June 28, 2018

How to solve ORA-00069: cannot acquire lock -- table locks disabled

Unfortunately, the only known way out of this is to

1. stop the database
2. start in restricted mode with "startup restrict"
3. enable table locks:
 alter table scott.emp enable table lock;
4. start the database in normal mode, and you will now be able to acquire locks on the table.
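
To find out which tables have table locks disabled in the first place, a query like this may help (a sketch):

select owner, table_name
from dba_tables
where table_lock = 'DISABLED';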

How to fix ORA-02449: unique/primary keys in table referenced by foreign keys after an online redefinition



When attempting to drop the interim table after a successful online redefinition, you may get:

ORA-02449: unique/primary keys in table referenced by foreign keys

This is easy to overlook - the online redef procedure does not remove the foreign key constraints pointing from other tables to your (now) obsolete interim table.

To find these tables and their constraints:
SELECT  TABLE_NAME,CONSTRAINT_NAME,CONSTRAINT_TYPE,R_CONSTRAINT_NAME,STATUS
FROM    DBA_CONSTRAINTS
WHERE   OWNER = 'SH'
AND     CONSTRAINT_TYPE='R'
AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='SALES_INTERIM')

TABLE_NAME CONSTRAINT_NAME CONSTRAINT_TYPE R_CONSTRAINT_NAME STATUS
INVOICES TMP$$_INVOICES_FK0 R TMP$$_SALES_PK0 DISABLED
REFUNDS TMP$$_REFUNDS_FK0 R TMP$$_SALES_PK0 DISABLED
REPLENISHMENT TMP$$_REPLENISHMENT_FK0 R TMP$$_SALES_PK0 DISABLED

As can be seen from the output above, there are indeed constraints from other tables pointing to the primary key on the interim table. They are disabled, but they nevertheless prevent us from dropping the table.

Let's make sure that each of these "funny named" constraints has a sibling constraint, one that is named correctly after the redefinition:

SELECT TABLE_NAME,CONSTRAINT_NAME,R_CONSTRAINT_NAME,STATUS
FROM DBA_CONSTRAINTS 
WHERE TABLE_NAME in (
                    SELECT  TABLE_NAME
                    FROM    DBA_CONSTRAINTS
                    WHERE   OWNER = 'SH'
                    AND     CONSTRAINT_TYPE='R'
                    AND     R_CONSTRAINT_NAME IN (SELECT CONSTRAINT_NAME FROM DBA_CONSTRAINTS WHERE TABLE_NAME='SALES_INTERIM')
)
AND CONSTRAINT_TYPE='R'
AND (CONSTRAINT_NAME LIKE '%REPLENISHMENT%' OR CONSTRAINT_NAME LIKE '%INVOICES%' OR CONSTRAINT_NAME LIKE '%REFUNDS%')
ORDER BY TABLE_NAME,CONSTRAINT_NAME;

Result:
TABLE_NAME CONSTRAINT_NAME R_CONSTRAINT_NAME STATUS
INVOICES INVOICES_FK SALES_PK ENABLED
INVOICES TMP$$_INVOICES_FK0 TMP$$_SALES_PK0 DISABLED
REFUNDS REFUNDS_FK SALES_PK ENABLED
REFUNDS TMP$$_REFUNDS_FK0 TMP$$_SALES_PK0 DISABLED
REPLENISHMENT REPLENISHMENT_FK SALES_PK ENABLED
REPLENISHMENT TMP$$_REPLENISHMENT_FK0 TMP$$_SALES_PK0 DISABLED

As you can see, the constraints come in pairs and the ones with names starting with TMP$$ are leftovers from the redefinition.

You can safely drop these constraints. Put the following in a .sql script and run it as sysdba:
alter session set nls_language='american';
set lines 200
set pages 0
set heading off
set feedback off
set trimspool on
set verify off
set echo off
spool 7.alter_table.sql

select 'spool 7.alter_table.log' from dual;

SELECT    'alter table '
        || B.OWNER
        || '.'
        || B.TABLE_NAME
        || ' drop constraint '
        || B.CONSTRAINT_NAME
        || ';'
  FROM DBA_CONSTRAINTS A
        FULL OUTER JOIN DBA_CONSTRAINTS B
           ON A.CONSTRAINT_NAME = B.R_CONSTRAINT_NAME
  WHERE A.OWNER = 'SH'
  AND A.TABLE_NAME='SALES_INTERIM'
  AND B.R_CONSTRAINT_NAME IS NOT NULL;
select 'exit' from dual;

exit

Run the script, and you will have a new script containing the drop-clauses:
alter table SH.INVOICES drop constraint TMP$$_INVOICES_FK0;
alter table SH.REFUNDS drop constraint TMP$$_REFUNDS_FK0;
alter table SH.REPLENISHMENT drop constraint TMP$$_REPLENISHMENT_FK0;

After these statements have been successfully executed, you can go ahead and drop the interim table.

How to solve ORA-14024 during execution of DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS


Background: you are redefining a table and have successfully completed the DBMS_REDEFINITION.START_REDEF_TABLE procedure.

As you are executing the next step, DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS, you receive the following error:

ERROR at line 1:
ORA-14024: number of partitions of LOCAL index must equal that of the underlying table

Cause: Your interim table has been created with more partitions than the original table you're trying to redefine.

Verify like this:

select 'original table: ' || MAX(PARTITION_POSITION) "number of partitions"
from dba_tab_partitions where table_name='SALES' 
union
select 'interim table: ' || MAX(PARTITION_POSITION) 
from dba_tab_partitions where table_name='SALES_INTERIM' 
;

number of partitions
interim table: 19
original table: 18

A simple solution here is to avoid creating the indexes as a part of COPY_TABLE_DEPENDENTS, and create them afterwards instead.

Simply change the directive
copy_indexes=>dbms_redefinition.cons_orig_params

to
copy_indexes=>0

when you execute DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS.

When you're done redefining your table, make sure you recreate the indexes.

If this is not acceptable, you need to add another partition to your original table, so that the missing index partitions will be automatically created.


Update 21.06.2021: I faced a similar challenge today as I was trying to redefine a table with interval range partitioning. Since I was testing my redefinition strategy in a test environment first, the table in question had 46 partitions with zero rows. This raised ORA-14024: number of partitions of LOCAL index must equal that of the underlying table, and I realized I could quite easily work around the problem by simply dropping the empty partitions. To be sure that you don't drop any partitions that actually contain rows, analyze them first:
set lines 200
spool analyze_part.sql
set   trimspool on
set   verify off
set   heading off
set   pages 0
set   echo off
set   feedback off
select 'alter session set nls_language=''american'';' from dual;
select 'set timing on' from dual;
select 'set lines 200' from dual;
select 'set trimspool on' from dual;
select 'spool analyze_part.log' from dual;
select 'exec DBMS_STATS.GATHER_TABLE_STATS (OwnName=> ''TRACKER'', TabName => ''EVENTS'', partname => ''' || partition_name || ''', granularity => ''PARTITION'');' 
from dba_tab_partitions
where table_name='EVENTS'
and num_rows = 0
and partition_name like 'SYS%';
Run the analyze-script:
sqlplus / as sysdba @analyze_part.sql
Afterwards, the num_rows column for the same partitions should show 0 (zero) rows. In other words, they can be safely dropped.

To generate a "drop partition" script, put the following in an sql-script:
set lines 200
spool drop_empty_part.sql
set   trimspool on
set   verify off
set   heading off
set   pages 0
set   echo off
set   feedback off
select 'alter session set nls_language=''american'';' from dual;
select 'set timing on' from dual;
select 'set lines 200' from dual;
select 'set trimspool on' from dual;
select 'spool drop_empty_part.log' from dual;

select 'ALTER TABLE ' || table_owner || '.' || table_name ||  ' DROP PARTITION ' || PARTITION_NAME ||  ' UPDATE INDEXES;'
from dba_tab_partitions
where table_name='EVENTS'
and num_rows = 0
and partition_name like 'SYS%';
select 'exit' from dual;
exit

Execute the script:
sqlplus / as sysdba @drop_empty_part.sql
The number of partitions in the original table and the interim table should now match. Now, execute the DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS again. You shouldn't see the ORA-14024 this time around :-)

Monday, June 25, 2018

How to move data using Transportable Tablespaces



Mission:

* Move schema FXMART with all of its data from source database proddb01 to destination database testdb01
* In the same operation, rename schema FXMART to FXMARTTST in the destination database testdb01.

While there are certainly a number of ways to accomplish such a task, I will show you how to use transportable tablespaces.

1. In your source database, check which tablespaces your schema is writing to:

SELECT UNIQUE TABLESPACE_NAME
FROM DBA_SEGMENTS
WHERE OWNER = 'FXMART';

TABLESPACE_NAME
FXDATA_2016
FX_MART_DATA
FXDATA_2014
FXDATA_2015

2. In your source database, check that these are self-contained:
exec dbms_tts.transport_set_check('FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015', true);

Check errors using the view TRANSPORT_SET_VIOLATIONS:
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

If you see no rows here, proceed to step 3.

3. In your source database, set the tablespaces to be transported to read only:
alter tablespace FXDATA_2016 read only;
alter tablespace FX_MART_DATA read only;
alter tablespace FXDATA_2014 read only;
alter tablespace FXDATA_2015 read only;

4. Make sure you actually have a directory object called "DP". If not, create it:
CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';
5. In your source database, export the metadata for the tablespaces to be moved using data pump export.

First, create a parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_TABLESPACES=FXDATA_2016,FX_MART_DATA,FXDATA_2014,FXDATA_2015
EXCLUDE=STATISTICS
METRICS=YES

Then, execute the export:
expdp parfile=expdp_tts.par

6. Identify the data files which belong to the tablespaces to be transported.

To identify the files, use this simple query:
SELECT FILE_NAME
FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME IN (
    SELECT UNIQUE TABLESPACE_NAME
    FROM DBA_SEGMENTS
    WHERE OWNER = 'FXMART'
  );

FILE_NAME
/u02/oradata/proddb01/fx_mart_data.dbf
/u02/oradata/proddb01/fxdata_2014.dbf
/u02/oradata/proddb01/fxdata_2015.dbf
/u02/oradata/proddb01/fxdata_2016.dbf


7. Transport the resulting dmpfile along with the datafiles of the tablespaces to be transported, over the network to the destination server, using scp
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/FXMOVE_TSS.dmp oracle@destinationserver:/u02/exports/.
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/fxdata_2016.dbf oracle@destinationserver:/u02/exports/.
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/fxdata_2015.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/fxdata_2014.dbf oracle@destinationserver:/u02/exports/
scp -i /home/oracle/.ssh/destinationserver_rsa /u02/exports/fx_mart_data.dbf oracle@destinationserver:/u02/exports/

When the scp session is over, set the tablespaces back to read/write to allow normal usage in the source database:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;

8. In the destination database, create a directory object that points to the location where the datafiles and the dump file now resides:

CREATE OR REPLACE DIRECTORY DP AS '/u02/exports';

9. In the destination database, drop the tablespaces to be replaced:
drop tablespace FXDATA_2016 including contents and datafiles;
drop tablespace FXDATA_2014 including contents and datafiles;
drop tablespace FX_MART_DATA including contents and datafiles;
drop tablespace FXDATA_2015 including contents and datafiles;

10. In the destination database, drop and recreate the schema to be repopulated with fresh data. Note that the user's default tablespace needs to be set temporarily to USERS, as the tablespaces we intend to use as the default tablespace are not yet imported:

drop user FXMARTTST cascade;

CREATE USER FXMARTTST
  IDENTIFIED BY mysecretpassword
  DEFAULT TABLESPACE USERS 
  TEMPORARY TABLESPACE TEMP
  PROFILE APP_SCHEMAS;

-- Grant of system privileges and/or roles here

11. On the destination server, create an import parameter file as follows:
userid='/ as sysdba'
DIRECTORY=DP
DUMPFILE=FXMOVE_TSS.dmp
LOGFILE=imp_FXMOVE_TSS.log
JOB_NAME=FXMOVE
TRANSPORT_DATAFILES=/u02/oradata/testdb01/fx_mart_data.dbf,/u02/oradata/testdb01/fxdata_2014.dbf,/u02/oradata/testdb01/fxdata_2015.dbf,/u02/oradata/testdb01/fxdata_2016.dbf
REMAP_SCHEMA=FXMART:FXMARTTST
METRICS=YES
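
Before running the import, copy the transported datafiles from /u02/exports to the locations referenced by TRANSPORT_DATAFILES, and then run the import. A sketch, assuming the parameter file above was saved as imp_tts.par:

cp /u02/exports/fx_mart_data.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2014.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2015.dbf /u02/oradata/testdb01/
cp /u02/exports/fxdata_2016.dbf /u02/oradata/testdb01/

impdp parfile=imp_tts.par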

12. In the destination database, make the freshly imported tablespaces read write:
alter tablespace FXDATA_2016 read write;
alter tablespace FX_MART_DATA read write;
alter tablespace FXDATA_2014 read write;
alter tablespace FXDATA_2015 read write;

All done. You should now be able to log on to the database as FXMARTTST and use the refreshed data.

Thursday, June 21, 2018

What is the meaning of INCREMENTAL_STALENESS in dbms_stats?



INCREMENTAL_STALENESS means "how we decide if a partition's statistics are stale".

Its value can be USE_STALE_PERCENT, USE_LOCKED_STATS, or NULL (the default):

USE_STALE_PERCENT - a partition/subpartition is not considered as stale if DML changes are less than the
STALE_PERCENT preference value.

USE_LOCKED_STATS - locked partitions/subpartitions statistics are not considered as stale, regardless of DML changes

NULL - this is the default value, meaning a partition or subpartition is considered as stale as long as it has any DML changes

STALE_PERCENT is the value that determines the percentage of rows in a table that have to change before the statistics are deemed stale and should be regathered. Default = 10%.


I have used the following script to implement this in one of my 12.2 databases.
The last preference set (DEGREE) has nothing to do with incremental statistics, but I have found that using the default degree of parallelism is a good setting for most databases:

define owner = 'SCOTT';
define table_name = 'EMP';
BEGIN
   dbms_stats.set_table_prefs('&&owner','&&table_name','INCREMENTAL','TRUE');
   dbms_stats.set_table_prefs('&&owner','&&table_name','INCREMENTAL_STALENESS','USE_STALE_PERCENT, USE_LOCKED_STATS');
   dbms_stats.set_table_prefs('&&owner','&&table_name','DEGREE','DBMS_STATS.DEFAULT_DEGREE');
END;
/


Verify your settings with the following script:
 
define owner = 'SCOTT';
define table_name = 'EMP';
set lines 200
set verify off
col "incremental" format a20
col "incr staleness" format a40
col "incremental level" format a10
col "stale percent" format a20
col "synopsis type" format a40
col "degree" format a30
col "granularity" format a20

select dbms_stats.get_prefs('INCREMENTAL','&&owner','&&table_name') "incremental",
        dbms_stats.get_prefs('INCREMENTAL_STALENESS','&&owner','&&table_name') "incr staleness",
        dbms_stats.get_prefs('INCREMENTAL_LEVEL','&&owner','&&table_name') "incremental level",
        dbms_stats.get_prefs('STALE_PERCENT','&&owner','&&table_name') "stale percent",
        dbms_stats.get_prefs('APPROXIMATE_NDV_ALGORITHM','&&owner','&&table_name') "synopsis type",
        dbms_stats.get_prefs('DEGREE','&&owner','&&table_name') "degree" ,
        dbms_stats.get_prefs('GRANULARITY','&&owner','&&table_name') "granularity"
FROM DUAL
/

Example output:

incremental  incr staleness                       incremental level  stale percent  synopsis type          degree  granularity
TRUE         USE_STALE_PERCENT, USE_LOCKED_STATS  PARTITION          10             REPEAT OR HYPERLOGLOG  32      AUTO


Source: "Understanding Optimizer Statistics With Oracle Database 12c Release 2"

Wednesday, June 20, 2018

How to solve ORA-55622 during DROP USER ... CASCADE;



During a DROP USER ... CASCADE operation, you may encounter the following error:
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-55622: DML, ALTER and CREATE UNIQUE INDEX operations are not allowed on table "SCOTT"."SYS_FBA_TCRV_83287"

Solution: disable flashback archive:

Check the currently enabled flashback archive tables:
set lines 155
col TABLE_NAME for a30
col OWNER_NAME for a30
col FLASHBACK_ARCHIVE_NAME for a30
col ARCHIVE_TABLE_NAME for a30
col STATUS for a30
SELECT * 
FROM DBA_FLASHBACK_ARCHIVE_TABLES 
WHERE OWNER_NAME=upper('SCOTT');

Put the following in a script called gen_disable_fda.sql:
set trimspool on
set heading off
set feedback off
set echo off
set verify off
spool disable_flb_archive.sql
SELECT 'spool disable_flb_archive.log' FROM dual;
SELECT 'alter table ' || owner_name || '.' || table_name || ' no flashback archive;' 
FROM DBA_FLASHBACK_ARCHIVE_TABLES 
WHERE OWNER_NAME=upper('SCOTT');
SELECT 'exit' FROM dual;
exit

The script above will generate alter table statements for all SCOTT-owned tables with flashback archive enabled:
sqlplus / as sysdba @gen_disable_fda.sql

Finally, run the script disable_flb_archive.sql, which now contain statements as shown below:
alter table SCOTT.EMP no flashback archive;

After this has been done, the schema may be dropped.

Wednesday, June 13, 2018

Display the current locks in a tree-like fashion



Oracle provides some built-in scripts and views that can be used to monitor locks in the database


http://docs.oracle.com/database/121/ADMIN/monitoring.htm#ADMIN11255

To illustrate this, I will open a session that updates a table, then create another session that updates the same table:

Session# 1:
select SYS_CONTEXT('userenv','con_name') "container name",
       SYS_CONTEXT('userenv','con_id') "container id",
       SYS_CONTEXT('userenv','CURRENT_SCHEMA') "Current schema",
       SYS_CONTEXT('userenv','SID') "SID"
FROM DUAL;

container name       container id         Current schema                 SID
-------------------- -------------------- ------------------------------ -----
VEGDB01              3                    SCOTT                          362

-- Update the table rows:
 update emp set sal=1000;

14 rows updated.


I will keep the session alive, without doing commit/rollback.

From session# 2:
select SYS_CONTEXT('userenv','con_name') "container name",
        SYS_CONTEXT('userenv','con_id') "container id",
        SYS_CONTEXT('userenv','CURRENT_SCHEMA') "Current schema",
        SYS_CONTEXT('userenv','SID') "SID"
FROM DUAL;

container name       container id         Current schema                 SID
-------------------- -------------------- ------------------------------ -----
VEGDB01              3                    SCOTT                          130

--Update the same rows as in session# 1:
update emp set sal=2000;

This session will not give you the prompt back, as it is trying to acquire a lock on the table rows.

I then ran the utllockt.sql script:
sqlplus / as sysdba @$ORACLE_HOME/rdbms/admin/utllockt.sql

Output (abbreviated):
WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED  MODE_HELD     LOCK_ID1          LOCK_ID2
----------------- ----------------- --------------- ------------ ----------------- -----------------
362               None                
   130            Transaction       Exclusive       Exclusive     524308             17836


The leftmost session (362) is blocking the sessions listed underneath it.

Let's check what the session with SID = 130 is doing:
SELECT sess.sid,sess.serial#,sess.sql_id,s.plan_hash_value, s.child_number,sess.client_info,substr(sql_text,1,30) "sql text",sess.program,sess.pq_status,
        decode(sess.blocking_session_status,'VALID','blocked', 'NO HOLDER','Not blocked') "blocked?",
        sess.blocking_session "blocking session SID",sess.event
FROM V$SESSION sess inner join v$sql s 
on (sess.sql_id = s.sql_id)
WHERE sess.SID IN (130)
and sess.sql_child_number = s.child_number;

This query confirms the output from the Oracle-supplied utllockt.sql, and reveals what the waiting session is trying to do, in this case to execute DML against table rows that are already locked for update by another session:

SID  SERIAL#  SQL_ID         PLAN_HASH_VALUE  sql text                 PROGRAM   PQ_STATUS  blocked?  blocking session SID  EVENT
130  9755     1gpj28ptjj43p  1494045816       update emp set sal=2000  sqlplus@  ENABLED    blocked   362                   enq: TX - row lock contention


Monday, June 11, 2018

How to find the number of times a query has been executed


The following query was given to me by an experienced colleague. It shows the number of times different variations of a particular query have been executed. The information is collected from V$SQL, which samples the contents of the Library Cache, a part of the Shared Pool.

SELECT SQL_ID, COUNT(DISTINCT SQL_ID) NUM_SQL_IDS, COUNT(CHILD_NUMBER) ANT_CHILD, SUM(EXECUTIONS) 
FROM V$SQL
WHERE SQL_TEXT LIKE ' select count(*) TOTALS from   scott.emp%'
GROUP BY ROLLUP(SQL_ID)
;

Here is part of the output:

SQL_ID         NUM_SQL_IDS  ANT_CHILD  SUM(EXECUTIONS)
01b71y5wmr2bx            1          1                0
081ha7tszas19            1          3                2
0bfn4yj267xy0            1          2               34
0dk42bqfz6fbr            1          3                6
g1vq03475hqgc            1          3               10
g3hky4g8vu108            1          3                9
--------- Abbreviated ---------
                       274        549             1940

From the output we can see that there are a number of variations of the query being executed. The numbers are being rolled up nicely at the end, letting us know that there are 274 unique SQL_IDs, 549 different child cursors with a total of 1940 executions.

This information can certainly be useful during an effort to tune the library cache.

Monday, June 4, 2018

one-liner for debugging script parameter output

To view the output of your script parameters, put this line at the beginning of your script:

printf "Parameters passed are : $*\n"

What are 'TCP Socket (Kgas)' Waits?

Doc ID 416451.1 "What are 'TCP Socket (Kgas)' Waits?" from Oracle support explains:

A session is waiting for an external host to provide requested data over a network socket. The time that this wait event tracks does not indicate a problem, and even a long wait time is not a reason to contact Oracle Support.

and

An application that communicates with a remote host must wait until the data it will read has arrived.
The db session cannot proceed to do anything else until the external host provides the requested data over the network socket.


The proposed "solution":

From the database point of view, these waits can safely be ignored; the wait event does not represent a database issue. It merely reports the total elapsed time for a network connection to be established or for data to arrive from over the network.
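
If you want to see whether any sessions are currently waiting on this event, a quick check could be (a sketch):

select sid, event, seconds_in_wait
from v$session
where event = 'TCP Socket (KGAS)';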


Check also Doc ID 558510.1 "WAITEVENT: "TCP Socket (KGAS)" Reference Note"

Tuesday, May 29, 2018

Shell script snippet for resetting a password

This little code snippet uses a Linux/Unix escape character to send a line break into a string of multiple commands:

#!/bin/bash
. /home/oracle/.bash_profile
userid=$1
password=$2
userexist="WHENEVER SQLERROR EXIT;"
echo -e "alter session set container=PDB01;\n$userexist\nalter user $userid identified by $password account unlock;" | sqlplus -L -s / as sysdba

Tuesday, May 22, 2018

Workaround for ORA-46372: audit file not found


In my environment, I had the following errors popping up in Cloud Control:

ORA-46372: audit file '/u01/oracle/audit/proddb01/ora_audit_10.bin' not found
ORA-17503: ksfdopn:4 Failed to open file
/u01/oracle/audit/proddb01/ora_audit_10.binin
ORA-17500: ODM err:File does not exist

It can be manually reproduced by executing the following query:
select count(1) from unified_audit_trail;

It seems to be related to BUG 26352615 - SPILLOVER AUDIT FILES CANNOT BE FOUND ORA-46372 for platform Linux x86-64. As of May 22nd 2018, it is still in development according to Oracle Support.

In the directory /u01/oracle/audit/proddb01, there is quite correctly no file named ora_audit_10.binin. However, there is one called ora_audit_10.bin

The following workaround worked for me:

Go to the directory /u01/oracle/audit/proddb01
Change the name of the file ora_audit_10.bin to ora_audit_10.binin

Test like this:
select count(1) from unified_audit_trail;

  COUNT(1)
----------
        25

Update 09.03.2020:

Oracle has released a patch for 26352615. It is available for a number of RUs. I just applied the patch for 12c Release 12.2.0.1.200114DBJAN2020RU, and it solved the problem outlined above.
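
To check whether the fix is already present in an Oracle home, you could for example list the bugs fixed by the installed patches (a sketch):

$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep 26352615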

Tuesday, April 24, 2018

Partitioned index properties



The following query will reveal information about the indexes on a partitioned table.

Very useful when you want to investigate if a partitioned table is correctly and efficiently indexed:

SELECT C.TABLE_NAME,I.INDEX_NAME,I.PARTITIONING_TYPE "part type", I.SUBPARTITIONING_TYPE "sub type", I.LOCALITY,I.ALIGNMENT,C.COLUMN_NAME,C.COLUMN_POSITION "col pos", I.INTERVAL,I.AUTOLIST, II.VISIBILITY
FROM DBA_PART_INDEXES I JOIN DBA_IND_COLUMNS C
ON (I.INDEX_NAME = C.INDEX_NAME)
 JOIN DBA_INDEXES II
   ON (II.INDEX_NAME = I.INDEX_NAME)
WHERE I.TABLE_NAME IN (SELECT TABLE_NAME FROM dba_tables WHERE PARTITIONED='YES' and table_name='MYTABLE')
ORDER BY I.INDEX_NAME,C.COLUMN_POSITION;

In my case, the results were:

TABLE_NAME INDEX_NAME  part type sub type LOCALITY ALIGNMENT    COLUMN_NAME    col pos INTERVAL                   AUTOLIST VISIBILITY
MYTABLE    MYTABLE_SI1 RANGE     NONE     LOCAL    PREFIXED     CREATED_DAY    1       NUMTOYMINTERVAL(1,'MONTH') NO       VISIBLE
MYTABLE    MYTABLE_SI1 RANGE     NONE     LOCAL    PREFIXED     SEQ_NUM        2       NUMTOYMINTERVAL(1,'MONTH') NO       VISIBLE
MYTABLE    MYTABLE_SI2 RANGE     NONE     LOCAL    NON_PREFIXED SUBMITTED_DATE 1       NUMTOYMINTERVAL(1,'MONTH') NO       VISIBLE
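
As a follow-up, it can be useful to verify that none of the local index partitions have been left in an unusable state, for example after partition maintenance. A small sketch, reusing the same table name MYTABLE:

SELECT IP.INDEX_NAME, IP.PARTITION_NAME, IP.STATUS
FROM DBA_IND_PARTITIONS IP
WHERE IP.INDEX_NAME IN (SELECT INDEX_NAME
                        FROM DBA_PART_INDEXES
                        WHERE TABLE_NAME = 'MYTABLE')
AND IP.STATUS <> 'USABLE';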

Thursday, April 19, 2018

How do I find which objects were analyzed during an autostats job?


Step 1

Find the start and stop times of the maintenance window (the &&day substitution variable holds the name of the weekday, e.g. WEDNESDAY):
select window_name, cast(last_start_date as date) "Start Time", DURATION, cast(last_start_date+duration as date) "End Time"
FROM DBA_SCHEDULER_WINDOWS
WHERE ENABLED='TRUE'
AND WINDOW_NAME LIKE UPPER('%&&day%');

Which results in:
WINDOW_NAME Start Time DURATION End Time
WEDNESDAY_WINDOW 18.04.2018 23:59:00 +00 04:00:00.000000 19.04.2018 03:59:00

So my Wednesday window starts at 23:59 and ends 4 hours later, at 03:59.
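
Note that &&day is a SQL*Plus substitution variable. To avoid being prompted for it interactively, it can be defined up front, so the queries in the following steps reuse the same value (a small convenience, assuming the queries are run from SQL*Plus):

-- define the weekday once; all subsequent references to &&day reuse this value
DEFINE day = WEDNESDAY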

Step 2
Add the query above as a subquery in a query against DBA_OPTSTAT_OPERATIONS, to get the IDs of all the sub-steps that were executed during this window:
select ID
from DBA_OPTSTAT_OPERATIONS 
where operation='gather_database_stats (auto)' 
and CAST(start_time AS DATE) BETWEEN 
                                ( SELECT CAST (LAST_START_DATE AS DATE)
                                  FROM  DBA_SCHEDULER_WINDOWS
                                  WHERE ENABLED='TRUE'
                                  AND   WINDOW_NAME LIKE UPPER('%&&day%')
                                 )
                              AND 
                                 (
                                  SELECT CAST (LAST_START_DATE+DURATION AS DATE)
                                  FROM  DBA_SCHEDULER_WINDOWS
                                  WHERE ENABLED='TRUE'
                                  AND   WINDOW_NAME LIKE UPPER('%&&day%')
                                 )
;

Step 3
Put everything together by querying DBA_OPTSTAT_OPERATION_TASKS, to see which tasks were executed and which objects were analyzed during the time window:

-- get the details for every task executed by "gather_database_stats (auto)" 
SELECT opid,
       target,
       target_objn,
       target_type,
       target_size,
       cast(start_time as date) "Start Time",
       cast(end_time as date) "End Time", 
       status
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST (START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
order by CAST(start_time as DATE) asc;

Sample output (displaying the first and last 5 rows of the record set):
OPID  TARGET                                                   TARGET_OBJN  TARGET_TYPE      TARGET_SIZE  Start Time           End Time             STATUS
48848 SYS.ET$16E30DF80001                                      6177051      TABLE            0            18.04.2018 23:59:30  18.04.2018 23:59:30  FAILED
48848 SYS.I_SEQ1                                               79           INDEX            1            18.04.2018 23:59:30  18.04.2018 23:59:31  COMPLETED
48848 SYS.SEQ$                                                 74           TABLE            7            18.04.2018 23:59:30  18.04.2018 23:59:31  COMPLETED
48848 SYS.OPTSTAT_HIST_CONTROL$                                506          TABLE            1            18.04.2018 23:59:31  18.04.2018 23:59:31  COMPLETED
48848 SYS.REG$                                                 5775         TABLE            1            18.04.2018 23:59:31  18.04.2018 23:59:31  COMPLETED
...
48885 SYS.WRH$_SERVICE_STAT.WRH$_SERVICE_STAT_2490506650_826  11986128     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885 SYS.WRH$_SEG_STAT_PK.WRH$_SEG_STAT_2490506650_826       11986126     INDEX PARTITION  1            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885 SYS.WRH$_SGASTAT_U.WRH$_SGASTAT_2490506650_826          11986114     INDEX PARTITION  1            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885 SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826          11986124     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885 SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826          11986124     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED

Check the overall status of your run:
SELECT target_type,status,count(*)
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST(START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
group by target_type,status
order by status;

Result:
TARGET_TYPE      STATUS     COUNT(*)
INDEX            COMPLETED  501
INDEX PARTITION  COMPLETED  177
TABLE            COMPLETED  394
TABLE PARTITION  COMPLETED  362
TABLE            FAILED     54


To further filter out those objects that failed, use:
SELECT distinct target,target_objn,target_type
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST(START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
AND status = 'FAILED'
order by TARGET;

Which in my case revealed that the following objects need attention:

TARGET                 TARGET_OBJN  TARGET_TYPE
SYS.ET$16E30DF80001    6177051      TABLE
SYS.X$LOGMNR_CONTENTS  4294951535   TABLE
SYS.X$LOGMNR_REGION    4294951631   TABLE
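
Once the underlying cause has been addressed, statistics for the failed objects can be regathered manually. A minimal sketch using DBMS_STATS; the owner and table name below are placeholders, and the X$ fixed tables are covered by GATHER_FIXED_OBJECTS_STATS rather than GATHER_TABLE_STATS:

BEGIN
  -- placeholder owner/table: substitute one of the failed ordinary tables
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'MYTABLE', cascade => TRUE);
  -- statistics on X$ fixed tables (such as X$LOGMNR_CONTENTS) are gathered database-wide
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/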