
Thursday, March 15, 2018

The error

SP2-0552: Bind variable "B2" not declared.

can arise from an incorrect bind variable declaration in your SQL*Plus scripts.

For example, I had the following in a script:
var B2 DATE;
EXEC :B2 := to_date('22.02.2018','dd.mm.yyyy');

For SQL*Plus to accept this, you need to declare the bind variable as VARCHAR2, even though you intend to use it as a DATE in your query.

Declare it as VARCHAR2 instead:

var B2 VARCHAR2(10);
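You can then keep the value as a string and convert it where it is used. A minimal sketch, with a hypothetical table emp and DATE column hiredate:

var B2 VARCHAR2(10);
exec :B2 := '22.02.2018';

-- convert the string bind variable to a DATE inside the query
select *
from emp
where hiredate = to_date(:B2, 'dd.mm.yyyy');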


Thursday, February 22, 2018

DST_UPGRADE_STATE set to DATAPUMP(1) when running preupgrade_fixups.sql

While preparing for a 12.2 upgrade, I found that the preupgrade tool reported an incomplete DST update, with DST_UPGRADE_STATE set to "DATAPUMP(1)".

The preupgrade tool reported:
$ORACLE_HOME/jdk/bin/java -jar /tmp/preupgradetool/preupgrade.jar TERMINAL TEXT
Report generated by Oracle Database Pre-Upgrade Information Tool Version
12.2.0.1.0

Upgrade-To version: 12.2.0.1.0

=======================================
Status of the database prior to upgrade
=======================================


Complete any pending DST update operation before starting the database
     upgrade.

     There is an unfinished DST update operation in the database.  It's
     current state is: DATAPUMP(1)

     There must not be any Daylight Savings Time (DST) update operations
     pending in the database before starting the upgrade process.
     Refer to My Oracle Support Note 1509653.1 for more information.


Then, running the preupgrade_fixups.sql:
Fixup
Check Name                Status  Further DBA Action
----------                ------  ------------------
purge_recyclebin          Passed  None
pending_dst_session       Failed  Manual fixup required.
invalid_objects_exist     Failed  None
dictionary_stats          Passed  None


Result:
SYS@proddb01 SQL> r
  1  SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
  2  FROM DATABASE_PROPERTIES
  3  WHERE PROPERTY_NAME LIKE 'DST_%'
  4* ORDER BY PROPERTY_NAME

PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   18
DST_SECONDARY_TT_VERSION                 14
DST_UPGRADE_STATE                        DATAPUMP(1)

According to https://blog.oracle-ninja.com/2013/09/17/stuck-timezone-upgrades-and-smart-scans the solution would be, as sysdba:

1. ALTER SESSION SET EVENTS '30090 TRACE NAME CONTEXT FOREVER, LEVEL 32';
2. exec dbms_dst.unload_secondary;

I tried this, with the following result:
SYS@proddb01 SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;

PROPERTY_NAME                            VALUE
---------------------------------------- --------------------
DST_PRIMARY_TT_VERSION                   18
DST_SECONDARY_TT_VERSION                 0
DST_UPGRADE_STATE                        NONE

I also executed

ALTER SESSION SET EVENTS '30090 TRACE NAME CONTEXT FOREVER, OFF';

just to be on the safe side.

Finally, run preupgrade_fixups.sql again:
Check Name                Status  Further DBA Action
----------                ------  ------------------
purge_recyclebin          Passed  None
pending_dst_session       Passed  None
invalid_objects_exist     Failed  None
dictionary_stats          Passed  None

PL/SQL procedure successfully completed.

The pending_dst_session check now passes, and you are ready to upgrade.

Thursday, January 18, 2018

How to work around ORA-01017 in a migrated 12c database



You may see some users in your 12c database that have their password_versions set to 10G only:

select username, password_versions from dba_users where username='SCOTT';

USERNAME             PASSWORD_VERSIONS
-------------------- -----------------
SCOTT                10G

At the same time, most other users have their password_versions set to the value 10G 11G 12C.

Oracle uses a different password version in each of these three releases:

* Oracle 10g uses a DES-based password version.
* Oracle 11g uses an SHA-1-based password version.
* Oracle 12c uses an SHA-2-based (SHA-512) password version.

Since an Oracle 12c database runs in exclusive mode by default, users with passwords generated in previous versions
will not be able to log in (exclusive mode means that SQLNET.ALLOWED_LOGON_VERSION_SERVER is set to either 12 or 12a).

Workaround is to force a password reset so that the password is generated for the current version.
But before you do that, you need to change the database's minimum allowed authentication protocol.
This is done by editing the file $TNS_ADMIN/sqlnet.ora.
The parameter controlling the sqlnet authentication protocol is SQLNET.ALLOWED_LOGON_VERSION_SERVER.

Here is how I did it:

1. Find the user(s) with password versions of 10G
select username, password_versions 
from dba_users 
where password_versions = '10G';

I have seen cases where there is a trailing space after the string '10G', so you may actually need to search for '10G '.
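To guard against trailing blanks, you could match on a trimmed value instead; a sketch of the same lookup:

select username, password_versions
from dba_users
where trim(password_versions) = '10G';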

2. Edit the $TNS_ADMIN/sqlnet.ora file so that the database doesn't run in exclusive mode. Add

SQLNET.ALLOWED_LOGON_VERSION_SERVER = 11

3. Restart the database

4. Expire the user(s) that you want to force a password reset for:
alter user SCOTT password expire;

5. Try to connect as the user:
connect SCOTT
Enter password:
ERROR:
ORA-28001: the password has expired


Changing password for SCOTT
New password:
Retype new password:
Password changed
Connected.

Check that the user now has the correct password versions:
select username, password_versions,account_status from dba_users where username='SCOTT';

USERNAME             PASSWORD_VERSIONS ACCOUNT_STATUS
-------------------- ----------------- --------------------------------
SCOTT                10G 11G 12C       OPEN

6. When all the affected users have been changed, set the database to run in exclusive mode.
Change the SQLNET.ALLOWED_LOGON_VERSION_SERVER from 11 to 12 in $TNS_ADMIN/sqlnet.ora, and restart once more.


Wednesday, September 20, 2017

How to migrate a non-CDB database to a PDB on the same host

There are many ways to migrate your non-CDB Oracle databases to the multitenant architecture. Here I will show you how to clone a non-CDB database to a PDB running in a container database.

Assumptions:
You have two Oracle databases of version 12.1 or higher running on the same server:

1. Your original, non-CDB database, called db01
2. Your new container database, called cdb01

Both of these databases run out of an Oracle Home installed in /u01/oracle/product/12c.

Step 1: For Oracle 12.1, open your non-CDB in read only mode (not needed from version 12.2 and onwards):
shutdown immediate
startup mount
alter database open read only;

Step 2: create an xml file that describes the non-CDB database using the package dbms_pdb:
export ORACLE_SID=db01
sqlplus / as sysdba
Generate the file:
set serveroutput on
begin
  dbms_pdb.describe( pdb_descr_file => '/tmp/ncdb.xml');
end;
/

Step 3: Create the pluggable database

Connect to your CDB, and create the PDB using the script you created in step 2.

export ORACLE_SID=cdb01
sqlplus / as sysdba
create pluggable database pdb01
using '/tmp/ncdb.xml'
copy
file_name_convert = ('/u02/oradata/db01/', '/u02/oradata/cdb01/PDBS/pdb01/');
Note that I am choosing to copy the files from the original location of the non-CDB database to a brand new one, using the file_name_convert directive. There are other options, too: MOVE and NOCOPY.
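If the data files are already where you want them, a NOCOPY variant could be used instead; a minimal sketch (the PDB then takes over the existing files in place):

create pluggable database pdb01
using '/tmp/ncdb.xml'
nocopy
tempfile reuse;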

Step 4: execute $ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


sqlplus / as sysdba
alter session set container=PDB01;
@?/rdbms/admin/noncdb_to_pdb.sql

I did receive errors during this phase:
ORA-65172: cannot run noncdb_to_pdb.sql unless pluggable database is an
unconverted non-container database
ORA-06512: at "SYS.DBMS_PDB", line 154
ORA-06512: at line 1

The error message seemed to be harmless:
oracle@myserver.mydomain.com:[cdb1]# oerr ora 65172
65172, 00000, "cannot run noncdb_to_pdb.sql unless pluggable database is an unconverted non-container database"
// *Cause:  An attempt was made to run 'noncdb_to_pdb.sql' on a pluggable
//          database (PDB) that was not an unconverted non-container database.
// *Action: 'noncdb_to_pdb.sql' is not necessary for this PDB.
//


Further research showed that 1) the PDB was indeed created, and 2) there were errors in PDB_PLUG_IN_VIOLATIONS:
select PDB_ID,PDB_NAME,STATUS,CON_ID from cdb_pdbs

    PDB_ID PDB_NAME                       STATUS                          CON_ID
---------- ------------------------------ --------------------------- ----------
         2 PDB01                          NEW                                  2

alter session set container=cdb$root;


SELECT TO_CHAR(TIME,'dd.mm.yyyy hh24:mi') "time",NAME,STATUS,MESSAGE 
FROM PDB_PLUG_IN_VIOLATIONS;

Result:
TIME              NAME    STATUS    MESSAGE
20.09.2017 14:17  PDB01   PENDING   Database option CATJAVA mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17  PDB01   PENDING   Database option CONTEXT mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17  PDB01   PENDING   Database option JAVAVM mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17  PDB01   PENDING   Database option ORDIM mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17  PDB01   PENDING   Database option XML mismatch: PDB installed version 12.1.0.2.0. CDB installed version NULL.
20.09.2017 14:17  PDB01   PENDING   Sync PDB failed with ORA-65177 during 'alter user sys identified by *'
20.09.2017 14:17  PDB01   PENDING   Sync PDB failed with ORA-65177 during 'alter user system identified by *'

A quick search on Oracle's support site revealed that these errors can be ignored. See Doc ID 2020172.1: "OPTION WARNING Database option mismatch: PDB installed version NULL" in PDB_PLUG_IN_VIOLATIONS.

Finally, remember to open your PDB in read write mode. It was left in MIGRATE mode after the noncdb_to_pdb.sql script had run and failed:
 select con_id,name,open_mode from v$containers;

    CON_ID NAME      OPEN_MODE
---------- --------- -----------------------------
         1 CDB$ROOT  READ WRITE
         2 PDB01     MIGRATE

alter session set container=PDB01;

alter pluggable database close;

Pluggable database altered.

alter pluggable database open;

 select con_id,name,open_mode from v$containers;

    CON_ID NAME      OPEN_MODE
---------- --------- -----------------------------
         2 PDB01     READ WRITE

Wednesday, September 13, 2017

A workaround for ORA-06553: PLS-213: package STANDARD not accessible when using datapatch in a CDB

Short background:

I was having trouble applying "patch 26550023 - COMBO of OJVM Component 12.1.0.2.170718 DB PSU + DB PSU 12.1.0.2.170814" in my Multitenant environment. The container database only had one PDB at the time, the PDB$SEED.

After successfully running opatch apply for both patches, I ran datapatch -verbose to load the modified SQL into the database. I had already opened my container database in upgrade mode, and also opened the PDB$SEED in upgrade mode by executing
alter pluggable database all open upgrade;
The state of the PDB$SEED could be confirmed in the alert log, as well as from v$pdbs:

SELECT name, open_mode FROM v$pdbs;

NAME       OPEN_MODE
--------- --------------
PDB$SEED   MIGRATE

Still, I kept getting weird errors like


Bootstrapping registry and package to current versions...done
Error in bootstrap log /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_17273_2017_09_13_13_29_29/bootstrap1_CDBVEG_PDBSEED.log:
Error at line 7: ORA-06553: PLS-213: package STANDARD not accessible
Error at line 17: ORA-06553: PLS-213: package STANDARD not accessible
Error at line 25: SP2-0310: unable to open file "/u01/oracle/product/12102/sqlpatch/FALSE.sql"
Prereq check failed, exiting without installing any patches.


There was little information about the problem and potential workarounds to be found on the internet.

After trying different options without success, I could find no other solution than to drop the PDB$SEED container, so that patching could continue.

Here's how:
SQL> 
-- necessary to avoid "ORA-65017: seed pluggable database may not be dropped or altered"
alter session set "_oracle_script"=TRUE;

Session altered.

SQL> alter pluggable database PDB$SEED close;

Pluggable database altered.

SQL> drop pluggable database pdb$seed including datafiles;

Pluggable database dropped.

SQL> alter session set "_oracle_script"=FALSE;

Session altered.

SQL> select * from cdb_pdbs;

no rows selected


After this point, run datapatch again:
oracle@myserver:[cdbveg]# datapatch -verbose
SQL Patching tool version 12.1.0.2.0 Production on Wed Sep 13 13:34:05 2017
Copyright (c) 2012, 2016, Oracle.  All rights reserved.

Log file for this invocation: /u01/oracle/cfgtoollogs/sqlpatch/sqlpatch_18072_2017_09_13_13_34_05/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 26027162 (Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017)):
  Installed in the binary registry only
Bundle series PSU:
  ID 170814 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT
    Nothing to roll back
    The following patches will be applied:
      26027162 (Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017))
      26609783 (DATABASE PATCH SET UPDATE 12.1.0.2.170814)

Installing patches...
Patch installation complete.  Total patches installed: 2

Validating logfiles...
Patch 26027162 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26027162/21319014/26027162_apply_CDBVEG_CDBROOT_2017Sep13_13_34_18.log (no errors)
Patch 26609783 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/oracle/cfgtoollogs/sqlpatch/26609783/21481899/26609783_apply_CDBVEG_CDBROOT_2017Sep13_13_34_18.log (no errors)
SQL Patching tool complete on Wed Sep 13 13:34:52 2017

Verification that the patches are applied:
SQL> select ACTION,DESCRIPTION,STATUS,BUNDLE_SERIES from registry$sqlpatch;

ACTION     DESCRIPTION                                                            STATUS               BUNDLE_SERIES
---------- ---------------------------------------------------------------------- -------------------- --------------------
APPLY      Database PSU 12.1.0.2.170718, Oracle JavaVM Component (JUL2017)        SUCCESS
APPLY      DATABASE PATCH SET UPDATE 12.1.0.2.170814                              SUCCESS              PSU

Tuesday, September 12, 2017

How to attach an ORACLE_HOME to an existing inventory

I wanted to check the patch level in one of my Oracle installations, and the following error was returned:
oracle@tsl0map-dbteam-sandbox-db04:[vegdb01]# opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.3
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/oracle/product/12102
Central Inventory : /home/oracle/oraInventory
   from           : /u01/oracle/product/12102/oraInst.loc
OPatch version    : 12.1.0.1.3
OUI version       : 12.1.0.2.0
Log file location : /u01/oracle/product/12102/cfgtoollogs/opatch/opatch2017-09-12_15-30-47PM_1.log

List of Homes on this system:

  Home name= agent13c2, Location= "/u01/oracle/product/agent13c/agent_13.2.0.0.0"
  Home name= 11204, Location= "/u01/oracle/product/11204"
  Home name= OraHome3, Location= "/u01/oracle/product/gg121"
Inventory load failed... OPatch cannot load inventory for the given Oracle Home.
Possible causes are:
   Oracle Home dir. path does not exist in Central Inventory
   Oracle Home is a symbolic link
   Oracle Home inventory is corrupted
LsInventorySession failed: OracleHomeInventory gets null oracleHomeInfo

OPatch failed with error code 73

It turned out that my inventory had not been updated with my new Oracle Home: when I looked in the inventory.xml file on the server, there was no entry for the installation.

To fix this, attach the new Oracle Home to your inventory. From the Oracle Home that is missing, do the following:
cd $ORACLE_HOME/oui/bin
./runInstaller -invPtrLoc /u01/oracle/product/12102/oraInst.loc -attachHome ORACLE_HOME=/u01/oracle/product/12102 ORACLE_HOME_NAME="Ora12cHome"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8191 MB    Passed
The inventory pointer is located at /u01/oracle/product/12102/oraInst.loc
'AttachHome' was successful.

Check the inventory file now, and you'll see a new entry for the OH (in my case the Ora12cHome):
<HOME NAME="agent13c2" LOC="/u01/oracle/product/agent13c/agent_13.2.0.0.0" TYPE="O" IDX="8"/>
<HOME NAME="11204" LOC="/u01/oracle/product/11204" TYPE="O" IDX="1"/>
<HOME NAME="Ora12cHome" LOC="/u01/oracle/product/12102" TYPE="O" IDX="9"/>

Monday, September 11, 2017

Why is my PDB seemingly stuck in RESTRICTED mode?


Check out the view pdb_plug_in_violations - it will point you in the right direction.

In my case, I had created a pluggable database some days after my CDB was finished.

I had completely forgotten about a common user that I had created at that time:

-- create a common user for all CONTAINERS
-- common users must use the prefix c##
create user C##dba identified by MySecretPassword
default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all;

The new PDB was created without the USERS tablespace, which prevented it from being synchronized with the root container.
The pdb_plug_in_violations view contained the following message:

Sync PDB failed with ORA-959 during 'create user C##dba identified by *default tablespace USERS
temporary tablespace TEMP
quota unlimited on USERS
container=all'
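The message above was found with a query along these lines (a sketch; I filter out rows already marked RESOLVED):

select time, name, cause, status, message
from pdb_plug_in_violations
where status <> 'RESOLVED'
order by time;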


To resolve the situation, connect to the container with the missing tablespace:
alter session set container=pdbveg2;
Make sure the OMF parameter (db_create_file_dest) is correctly set for the session:
select name,value,DEFAULT_VALUE,ISDEFAULT,ISPDB_MODIFIABLE, ISSES_MODIFIABLE, DESCRIPTION
from V$PARAMETER where name like '%create_file_dest%';
NAME             : db_create_file_dest
VALUE            : /u02/oradata/cdbveg/pdbveg2
DEFAULT_VALUE    : NONE
ISDEFAULT        : TRUE
ISPDB_MODIFIABLE : TRUE
ISSES_MODIFIABLE : TRUE
DESCRIPTION      : default database location

Create the missing tablespace:
create tablespace USERS
datafile size 8M autoextend on next 2M maxsize 2G;
Finally, close and reopen your pluggable database:
alter pluggable database pdbveg2 close;
alter pluggable database pdbveg2 open read write;
Check status:
select CON_ID,name,OPEN_MODE, RESTRICTED
from v$containers;
    CON_ID NAME      OPEN_MODE  RESTRICTED
---------- --------- ---------- ----------
         4 PDBVEG2   READ WRITE NO

Thursday, September 7, 2017

How to solve OUI-10197:Unable to create a new Oracle Home during cloning

Short background:

After a failed attempt to clone an Oracle 12c installation, using Oracle's perl script clone.pl, like this:
oracle@myserver:[DBSID]# cd /u01/oracle/product/12102/clone/bin
oracle@myserver:[DBSID]# perl clone.pl ORACLE_HOME=/u01/oracle/product/12102 ORACLE_HOME_NAME=12102 ORACLE_BASE=/u01/oracle OSDBA_GROUP=dba

the following error was thrown:

OUI-10197:Unable to create a new Oracle Home at /u01/oracle/product/12102. Oracle Home already exists at this location. Select another location.
SEVERE:OUI-10197:Unable to create a new Oracle Home at /u01/oracle/product/12102. Oracle Home already exists at this location. Select another location.

The solution is to detach the Oracle Home from the Inventory:
oracle@myserver:[DBSID]# ./runInstaller -detachHome ORACLE_HOME=/u01/oracle/product/12102 invPtrLoc=/u01/oracle/product/12102/oraInst.loc
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 7814 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.

You can now rerun clone.pl.
Note that you do not need to physically remove files from disk and unzip new ones, since your Inventory is now unaware of the previously failed installation attempt.

Simply run your clone again. Log output below.

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 1810 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 7814 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2017-09-07_09-28-15AM. Please wait ...You can find the log of this install session at:
 /home/oracle/oraInventory/logs/cloneActions2017-09-07_09-28-15AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of 12102 was successful.
Please check '/home/oracle/oraInventory/logs/cloneActions2017-09-07_09-28-15AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
..................................................   95% Done.
As a root user, execute the following script(s):
        1. /u01/oracle/product/12102/root.sh
..................................................   100% Done.

Sunday, September 3, 2017

Solution for ORA-01442: column to be modified to NOT NULL is already NOT NULL during online redefinition

When trying to execute dbms_redefinition.copy_table_dependents, like this:
whenever sqlerror exit
set serveroutput on
set feedback off
set verify   off
set timing on

DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'USER1',
    orig_table       => 'DOCUMENTS',
    int_table        => 'DOCUMENTS_INTERIM',
    copy_indexes     => 0,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => l_num_errors,
    copy_statistics  => TRUE,
    copy_mvlog       => TRUE);
  DBMS_OUTPUT.put_line('l_num_errors=' || l_num_errors);
END;
/
You get the following error:
ORA-01442: column to be modified to NOT NULL is already NOT NULL
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1646

Cause:
Your interim table has NOT NULL constraints. This is easy to overlook, particularly if you created your interim table with a CTAS ("Create Table As Select") statement.
Your interim table should not have any constraints before you execute the copy_table_dependents procedure.
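A quick way to list the offending check constraints on the interim table (a sketch, using the owner and interim table from the example above):

select constraint_name, search_condition
from dba_constraints
where owner = 'USER1'
and table_name = 'DOCUMENTS_INTERIM'
and constraint_type = 'C';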

Solution:
Drop the NOT NULL constraint, and retry the operation:
SQL> alter table user1.documents_interim drop constraint SYS_C0018782;

Table altered.

Note that you do not have to abort the redefinition procedure at this point and start all over again.

Simply drop the constraint, and retry your operation.

Thursday, August 31, 2017

A workaround for ORA-12008 and ORA-14400 during online redefinition

To introduce partitions in one of my tables, I was using online redefinition on a rather large table.
The following error was thrown about 30 minutes after I had started the redefinition with dbms_redefinition.start_redef:
begin
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-12801: error signaled in parallel query server P004
ORA-14400: inserted partition key does not map to any partition
ORA-06512: at "SYS.DBMS_REDEFINITION", line 75
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3459
ORA-06512: at line 2

It turned out that there were NULL values in the column that would be the future partition key.
Everything was explained very well in Doc ID 2103273.1 "ORA-14400: inserted partition key does not map to any partition, with possible ORA-12008: error in materialized view refresh path".

You can solve this in two ways:

a) find the rows with null values, and update them

or

b) use an overflow partition, which works as a "catch all" basket for rows that can't be mapped to a specific partition. NULL values fall into this category.

Since I was using interval range partitioning, I had to go for the first option. If updating the rows is not possible, you can't use interval partitioning; you need to explicitly define every partition plus an overflow partition in your interim table.
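In my case that meant locating and updating the offending rows before restarting the redefinition. A sketch, with hypothetical names scott.mytable and period, and a placeholder value you would replace with whatever is correct for your data:

-- find the rows that cannot be mapped to any partition
select count(*) from scott.mytable where period is null;

-- give them a value that maps to a partition (placeholder value!)
update scott.mytable
set period = to_date('2015-01', 'YYYY-MM')
where period is null;

commit;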

Wednesday, July 26, 2017

How to solve "INS-32035: The chosen installation conflicts with software already installed in the given oracle home" when installing into an old directory

Workaround for the error
INS-32035: The chosen installation conflicts with software already installed in the given oracle home
from the Oracle Universal Installer:

1. Detach the home, using runInstaller from the ORACLE_HOME you are installing from:
cd $current_oracle_home/oui/bin
./runInstaller -detachHome ORACLE_HOME=/u01/oracle/product/11204 invPtrLoc=/u01/oracle/product/11201/oraInst.loc

2. Edit the file /u01/oraInventory/ContentsXML/inventory.xml

In my case from:

<HOME_LIST>
<HOME NAME="OraDb11g_home1" LOC="/u01/oracle/product/db/11201" TYPE="O" IDX="1"/>
<HOME NAME="OraDb11g_home2" LOC="/u01/oracle/product/db/11204" TYPE="O" IDX="2"/>
</HOME_LIST>

to

<HOME_LIST>
<HOME NAME="OraDB11g_home1" LOC="/u01/oracle/product/db/11201" TYPE="O" IDX="1"/>
</HOME_LIST> 

3. Delete the files in /u01/oracle/product/db/11204 physically from disk, before you try again.

That's it.
Universal Installer should now recognize the selected directory, and not give you any complaints when reinstalling into the same directory.

Thursday, July 13, 2017

Possible solution to ORA-01450: maximum key length (3800) exceeded when rebuilding an index

As a part of disabling TDE in a test database, I was moving indexes out of the TDE encrypted tablespace to another, similarly created tablespace, but one without encryption.

When trying to execute

ALTER INDEX SCOTT.MYTABLE_U01 REBUILD ONLINE TABLESPACE DATA32K_NOTDE
one of the indexes returned an error during the online rebuild:

ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01450: maximum key length (3800) exceeded

This error is normally worked around by one of the following actions:

1. Rebuild the database with a larger blocksize
2. Add a new tablespace with a larger blocksize (my preferred solution)
3. Make the index smaller, meaning drop and recreate the index and omit one or more of the previously indexed columns


This is documented in Doc ID 747107.1 "ORA-01450 Error on Create Index" at Oracle Support.

For me though, it worked by simply changing

REBUILD ONLINE 

to

REBUILD

and the index rebuild executed without problems.
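In other words, the statement that eventually went through was simply the offline variant of the same rebuild:

ALTER INDEX SCOTT.MYTABLE_U01 REBUILD TABLESPACE DATA32K_NOTDE;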
The Oracle documentation for 12cR1 lists a few restrictions on the use of online rebuilds, but none of them seems relevant to the error I observed.

For more information about the ALTER INDEX statement, see the official documentation from Oracle.

Friday, April 7, 2017

How to work around ORA-12008 and ORA-01858 during an online redefinition

The following error occurred during redefinition:

Move method (cons_use_pk or cons_use_rowid): cons_use_rowid
begin
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-01858: a non-numeric character was found where a numeric was expected
ORA-06512: at "SYS.DBMS_REDEFINITION", line 75
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3459
ORA-06512: at line 2

The reason:
One of the reasons I wanted to redefine the table was to make it partitioned.
One of the columns in the interim table was therefore defined as DATE instead of VARCHAR2:

Original table:
CREATE TABLE SCOTT.MYTABLE
(
  SEQ_NUM              NUMBER Generated as Identity ( START WITH 9984 MAXVALUE 9999999999999999999999999999 MINVALUE 1 NOCYCLE CACHE 20 NOORDER NOKEEP) CONSTRAINT SYS_C0073650 NOT NULL,
  ENTRY_ID             VARCHAR2(50 BYTE)            NULL,
  ENAME                VARCHAR2(100 BYTE)           NULL,
  CREATED_DT           TIMESTAMP(6)                 NULL,
  REVISION#            INTEGER                      NULL,
  ACTIVE_YEAR          INTEGER                      NULL,
  PERIOD               VARCHAR2(10 BYTE)            NULL,
  CONDITION            VARCHAR2(50 BYTE)            NULL,
  FN_ID                VARCHAR2(11 BYTE)            NULL
)
TABLESPACE USERS;

Interim table:
CREATE TABLE SCOTT.MYTABLE_INTERIM
(
  SEQ_NUM              NUMBER,
  ENTRY_ID             VARCHAR2(50 BYTE),
  ENAME                VARCHAR2(100 BYTE),
  CREATED_DT           TIMESTAMP(6),
  REVISION#            INTEGER,
  ACTIVE_YEAR          INTEGER,
  PERIOD               DATE,
  CONDITION            VARCHAR2(50 BYTE),
  FN_ID                VARCHAR2(11 BYTE)
)
PARTITION BY RANGE (PERIOD)
INTERVAL( NUMTOYMINTERVAL(1,'MONTH'))
(
  PARTITION P_INIT VALUES LESS THAN (TO_DATE('2015-01', 'YYYY-MM'))
)
TABLESPACE USERS
;

The requirement from the developers was that the table should be range partitioned on dates in the format 'YYYY-MM'.

When starting the online redefinition process, I hit the error at the top of this post.

I started to search for the root cause, and executed the following:
select seq_num,TO_DATE(PERIOD,'YYYY-MM') from Scott.mytable
fetch first 5 rows only;
which resulted in
ORA-01858: a non-numeric character was found where a numeric was expected

It turned out that the following SQL would return the data I wanted, in the correct format, defined with the correct data type:
select seq_num,to_date(to_char(TO_DATE(PERIOD),'YYYY-MM'),'YYYY-MM')
from Scott.mytable
fetch first 5 rows only;

This expression had to go into the redefinition command, like this:
begin
DBMS_REDEFINITION.start_redef_table(uname=>'EVENT',
orig_table=>'MYTABLE',
int_table=>'MYTABLE_INTERIM',
col_mapping =>'SEQ_NUM SEQ_NUM,ENTRY_ID ENTRY_ID,ENAME ENAME,CREATED_DT CREATED_DT,REVISION# REVISION#,ACTIVE_YEAR ACTIVE_YEAR,to_date(to_char(TO_DATE(PERIOD),''YYYY-MM''),''YYYY-MM'') PERIOD,CONDITION CONDITION,FN_ID FN_ID',
options_flag=>dbms_redefinition.cons_use_pk);
end;
/

After this change, the redefinition executed successfully.

Monday, April 3, 2017

ORA-01775: Looping chain of synonyms during repository installation

During the installation of a repository for an Oracle tool, the following error occurred:

Select distinct mrc_name from schema_version_registry;

Select distinct mrc_name from schema_version_registry
                              *
ERROR on line 1:
ORA-01775: Looping chain of synonyms

Cause: There was only a synonym present in the database, and no underlying base object:
SQL> SELECT owner, SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from DBA_SYNONYMS where table_name='SCHEMA_VERSION_REGISTRY'

OWNER                SYNONYM_NAME               TABLE_OWNER          TABLE_NAME
-------------------- -------------------------- -------------------- ----------------------------------------
PUBLIC               SCHEMA_VERSION_REGISTRY    SYSTEM               SCHEMA_VERSION_REGISTRY


Solution is to drop the public synonym:
SQL> drop public synonym SCHEMA_VERSION_REGISTRY;

Synonym dropped.

Source: ORA-01775 or ORA-980 from Public Synonym when Base Table is Missing (Doc ID 392705.1)

Wednesday, March 29, 2017

How to solve ORA-01031: insufficient privileges when creating a materialized view

At first, I got the following error from the Oracle server when executing the "CREATE MATERIALIZED VIEW" script:
where    tab2.year >   to_number(to_char(sysdate, 'YYYY')) - 4
                                       *
ERROR at line 36:
ORA-01031: insufficient privileges

Solution:
If you try to create a materialized view based on tables in a different schema, you need the privilege

GLOBAL QUERY REWRITE

as well as

CREATE TABLE
CREATE MATERIALIZED VIEW

Grant the privileges:

grant GLOBAL QUERY REWRITE to scott;
grant CREATE TABLE to scott;
grant CREATE MATERIALIZED VIEW to scott;

Verify the user's privileges:
connect scott/tiger

select * from user_sys_privs;


USERNAME                       PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
EKSTERN                        CREATE SESSION                           NO
EKSTERN                        CREATE TABLE                             NO
EKSTERN                        CREATE MATERIALIZED VIEW                 NO
EKSTERN                        GLOBAL QUERY REWRITE                     NO

This should fix your ORA-01031 problems.
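With those privileges in place, a cross-schema materialized view with query rewrite enabled can be created. A minimal sketch, assuming a hypothetical table hr.employees (the owner also needs SELECT on the referenced table):

create materialized view scott.emp_per_dept
build immediate
refresh complete on demand
enable query rewrite
as
select department_id, count(*) as emp_count
from hr.employees
group by department_id;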

Source: Oracle Support "Create Local Materialized View with Query Rewrite Option Fails with ORA-1031 Insufficient Privileges" (Doc ID 1079983.6)

Thursday, March 9, 2017

Errors ORA-39083 and ORA-28365 during import

During an import you get:

ORA-39083: Object type TABLE:"SCOTT"."EMP" failed to create with error:
ORA-28365: wallet is not open
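The reason is usually that the object being imported belongs to a TDE-encrypted tablespace (or has encrypted columns). To see which tablespaces are encrypted, a query like this works (a sketch):

select tablespace_name, encrypted
from dba_tablespaces
order by tablespace_name;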

Action:
Open the wallet:
SQL> administer key management set keystore open identified by "mysecretpassword";

keystore altered.

Check status of the wallet:

select status from  v$encryption_Wallet;

STATUS
------
OPEN

You can now perform your import as you normally would.

After the import, you can close the wallet:
administer key management set keystore close identified by "mysecretpassword";

Monday, February 20, 2017

How to solve ORA-14323: cannot add partition when DEFAULT partition exists

You have a list-partitioned table with a DEFAULT (catch-all) partition, and you try to add another partition to it:
ALTER TABLE RECEIVED_DOCUMENTS 
ADD PARTITION EES_COUNTRIES VALUES('NORWAY','ICELAND')
Result:
ERROR at line 1:
ORA-14323: cannot add partition when DEFAULT partition exists

Solution: split the DEFAULT partition ("OTHERS") instead:
ALTER TABLE RECEIVED_DOCUMENTS 
SPLIT PARTITION "OTHERS" 
VALUES ('NORWAY','ICELAND')
INTO ( PARTITION EES_COUNTRIES,
       PARTITION OTHERS
       )
UPDATE INDEXES;
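Afterwards, the new partition layout can be verified with a query along these lines (a sketch; add table_owner if needed):

select partition_name, high_value
from dba_tab_partitions
where table_name = 'RECEIVED_DOCUMENTS'
order by partition_position;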

Thursday, January 12, 2017

How to solve ORA-02391 exceeded simultaneous SESSIONS_PER_USER limit

The error definition:

//  *Cause: An attempt was made to exceed the maximum number of
//          concurrent sessions allowed by the SESSION_PER_USER clause
//          of the user profile.
//  *Action: End one or more concurrent sessions or ask the database
//           administrator to increase the SESSION_PER_USER limit of
//           the user profile.

You need to adjust the SESSIONS_PER_USER limit of the profile the user is assigned to:
alter profile general_users LIMIT SESSIONS_PER_USER <num> | default | unlimited;

num = a fixed number of sessions; the upper bound for the number of concurrent sessions the user can create
default = the user is subject to the limit on this resource defined by the DEFAULT profile
unlimited = the user assigned this profile can create an unlimited number of concurrent sessions
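For example, to find the profile the user is assigned to and its current limit, and then raise the limit to 20 concurrent sessions (a sketch with hypothetical names):

-- find the user's profile and the current SESSIONS_PER_USER limit
select u.username, u.profile, p.limit
from dba_users u
join dba_profiles p on p.profile = u.profile
where u.username = 'SCOTT'
and p.resource_name = 'SESSIONS_PER_USER';

-- raise the limit on that profile
alter profile general_users limit sessions_per_user 20;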

Friday, September 30, 2016

Missing trailing slash in listener.ora caused ORA-27101 when attempting to connect

Not too long ago, I got an error when connecting to my database using TOAD:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory

However, when connecting to the same database with sqlplus from the command line from a remote client, I could connect without errors.


Doc ID 1296982.1 "DVCA receives ORA-01034, ORA-27101" pointed me in the right direction:


"The value of ORACLE_HOME (or ORACLE_SID) passed to the DVCA utility does not match the value of ORACLE_HOME (or ORACLE_SID) that was in effect when the instance was started.

The shared memory segment key for an Oracle instance uses a hashed value based on the contents of ORACLE_HOME and ORACLE_SID. So, if one or the other of these values does not match what was used to start the instance, the resulting hash will not match, and one will encounter "ORA-27101: shared memory realm does not exist" when trying to connect to the instance.

This can commonly be caused by the presence (or lack thereof) of trailing slashes in the string for ORACLE_HOME, as passed to the -oh parameter of DVCA."


I changed my listener.ora file from:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = mydb)
      (ORACLE_HOME = /u01/oracle/product/11204 )
      (SID_NAME = mydb)
    )
  )
to
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = mydb)
      (ORACLE_HOME = /u01/oracle/product/11204/) <-- Note the trailing slash character at the end of the ORACLE_HOME path
      (SID_NAME = mydb)
    )
  )
Reload the listener with lsnrctl reload, and the listener once again accepted connections from TOAD.

Wednesday, April 20, 2016

How to solve ORA-28017: The password file is in the legacy format


SQL> alter user sys identified by iossS1Qwmk_kKfGHqs0UVu93xxswQ;
alter user sys identified by iossS1Qwmk_kKfGHqs0UVu93xxswQ
*
ERROR at line 1:
ORA-28017: The password file is in the legacy format.
Cause:
#oerr ora 28017
28017, 00000, "The password file is in the legacy format."
// *Cause:    There are multiple possibilities for the cause of the error:
//
//              * An attempt was made to grant SYSBACKUP, SYSDG or SYSKM.
//              * These administrative privileges could not be granted unless
//                the password file used a newer format ("12" or higher).
//              * An attempt was made to grant a privilege to a user who
//                has a large password hash which cannot be stored in
//                the password file unless the password file uses a newer
//                format ("12" or higher).
//              * An attempt was made to grant or revoke a common administrative
//                privilege in a CDB
// *Action:   Regenerate the password file in the newer format ("12" or higher).
//            Use the newer password file format ("12" or higher) if you need to
//            grant a user the SYSBACKUP, SYSDG or SYSKM administrative
//            privileges, or if you need to grant a privilege to a user
//            who has a large password hash value.

Cause: the password file is in the legacy (pre-12) format. You typically see this error after a migration from 11g to 12c.

Solution: regenerate the password file in the newer format ("12" or higher).

First, check the users that are affected by the change:
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
DBAMON                         TRUE  FALSE FALSE FALSE FALSE FALSE          0
In my case, two users are affected: SYS and DBAMON.


Regenerate the password file:
# orapwd file=$ORACLE_HOME/dbs/orapwproddb01 entries=5 force=y

Enter password for SYS:

Check the password file users again. SYS was added as a result of the recreation of the password file:
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0

To add DBAMON as a sysdba user, grant the SYSDBA privilege to the account:
SQL> grant sysdba to DBAMON;

Grant succeeded.
Check again, and DBAMON is now registered as a privileged user in the password file:
SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP SYSAS SYSBA SYSDG SYSKM     CON_ID
------------------------------ ----- ----- ----- ----- ----- ----- ----------
SYS                            TRUE  TRUE  FALSE FALSE FALSE FALSE          0
DBAMON                         TRUE  FALSE FALSE FALSE FALSE FALSE          0