Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.

Tuesday, May 29, 2018

Shell script snippet for resetting a password

This little code snippet uses a Linux/Unix escape character to send a line shift into a string of multiple commands:

#!/bin/bash
. /home/oracle/.bash_profile
userid=$1
password=$2
userexist="WHENEVER SQLERROR EXIT;"
echo -e "alter session set container=PDB01;\n$userexist\nalter user $userid identified by $password account unlock;" | sqlplus -L -s / as sysdba
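Assuming the snippet is saved as an executable file called reset_password.sh (a hypothetical name) and run as the oracle OS user, it could be invoked with the username and the new password as positional parameters:

chmod +x reset_password.sh
./reset_password.sh SCOTT "MyNewSecret#1"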
Tuesday, May 22, 2018
Workaround for ORA-46372: audit file not found
In my environment, I had the following errors popping up in Cloud Control:
ORA-46372: audit file '/u01/oracle/audit/proddb01/ora_audit_10.bin' not found
ORA-17503: ksfdopn:4 Failed to open file /u01/oracle/audit/proddb01/ora_audit_10.binin
ORA-17500: ODM err:File does not exist
It can be manually reproduced by executing the following query:
select count(1) from unified_audit_trail;
It seems to be related to BUG 26352615 - SPILLOVER AUDIT FILES CANNOT BE FOUND ORA-46372, for platform Linux x86-64. As of May 22nd, 2018, the bug is still in development according to Oracle Support.
In the directory /u01/oracle/audit/proddb01, there is quite correctly no file named ora_audit_10.binin. However, there is one called ora_audit_10.bin.
The following workaround worked for me:
Go to the directory /u01/oracle/audit/proddb01
Change the name of the file ora_audit_10.bin to ora_audit_10.binin
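A minimal shell sketch of the two steps above, using the path and file names from my environment (adjust them to match the names in your own error message):

cd /u01/oracle/audit/proddb01
# rename the existing spillover audit file to the name the database is looking for
mv ora_audit_10.bin ora_audit_10.binin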
Test like this:
select count(1) from unified_audit_trail;

  COUNT(1)
----------
        25
Update 09.03.2020:
Oracle has released a patch for bug 26352615. It is available for a number of Release Updates. I just applied the patch for 12c Release 2 (Release Update 12.2.0.1.200114, DBJAN2020RU), and it solved the problem outlined above.
Tuesday, April 24, 2018
Partitioned index properties
The following query will reveal information about the indexes on a partitioned table.
Very useful when you want to investigate if a partitioned table is correctly and efficiently indexed:
SELECT C.TABLE_NAME,
       I.INDEX_NAME,
       I.PARTITIONING_TYPE "part type",
       I.SUBPARTITIONING_TYPE "sub type",
       I.LOCALITY,
       I.ALIGNMENT,
       C.COLUMN_NAME,
       C.COLUMN_POSITION "col pos",
       I.INTERVAL,
       I.AUTOLIST,
       II.VISIBILITY
FROM   DBA_PART_INDEXES I
       JOIN DBA_IND_COLUMNS C ON (I.INDEX_NAME = C.INDEX_NAME)
       JOIN DBA_INDEXES II ON (II.INDEX_NAME = I.INDEX_NAME)
WHERE  I.TABLE_NAME IN (SELECT TABLE_NAME
                        FROM   dba_tables
                        WHERE  PARTITIONED = 'YES'
                        AND    table_name = 'MYTABLE')
ORDER BY I.INDEX_NAME, C.COLUMN_POSITION;
In my case, the results were:
TABLE_NAME | INDEX_NAME | part type | sub type | LOCALITY | ALIGNMENT | COLUMN_NAME | col pos | INTERVAL | AUTOLIST | VISIBILITY |
---|---|---|---|---|---|---|---|---|---|---|
MYTABLE | MYTABLE_SI1 | RANGE | NONE | LOCAL | PREFIXED | CREATED_DAY | 1 | NUMTOYMINTERVAL(1,'MONTH') | NO | VISIBLE |
MYTABLE | MYTABLE_SI1 | RANGE | NONE | LOCAL | PREFIXED | SEQ_NUM | 2 | NUMTOYMINTERVAL(1,'MONTH') | NO | VISIBLE |
MYTABLE | MYTABLE_SI2 | RANGE | NONE | LOCAL | NON_PREFIXED | SUBMITTED_DATE | 1 | NUMTOYMINTERVAL(1,'MONTH') | NO | VISIBLE |
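To also verify that none of the local index partitions have become unusable, a follow-up query against DBA_IND_PARTITIONS can be added. This is a sketch, under the same assumption that the table is called MYTABLE:

SELECT ip.index_name,
       ip.partition_name,
       ip.status,
       ip.tablespace_name
FROM   dba_ind_partitions ip
WHERE  ip.index_name IN (SELECT index_name
                         FROM   dba_part_indexes
                         WHERE  table_name = 'MYTABLE')
ORDER BY ip.index_name, ip.partition_position;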
Thursday, April 19, 2018
How do I find which objects were analyzed during an autostats job?
Step 1
Find the start and stop times of the maintenance window:
select window_name,
       cast(last_start_date as date) "Start Time",
       duration,
       cast(last_start_date+duration as date) "End Time"
from   DBA_SCHEDULER_WINDOWS
where  ENABLED = 'TRUE'
and    WINDOW_NAME LIKE UPPER('%&&day%');
Which results in:
WINDOW_NAME | Start Time | DURATION | End Time |
---|---|---|---|
WEDNESDAY_WINDOW | 18.04.2018 23:59:00 | +00 04:00:00.000000 | 19.04.2018 03:59:00 |
So my Wednesday window starts at 23:59 and ends 4 hours later, at 03:59.
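Note that &&day in the query above is a SQL*Plus substitution variable. If you don't want to be prompted for it interactively, it can be defined up front, for example (WEDNESDAY used here as an example value):

DEFINE day = WEDNESDAY
-- show the current value of the variable
DEFINE day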
Step 2
Add the query above as a subquery in a query against DBA_OPTSTAT_OPERATIONS, to get the IDs of all the sub-steps that were executed during this window:
select ID
from   DBA_OPTSTAT_OPERATIONS
where  operation = 'gather_database_stats (auto)'
and    CAST(start_time AS DATE) BETWEEN
       (SELECT CAST(LAST_START_DATE AS DATE)
        FROM   DBA_SCHEDULER_WINDOWS
        WHERE  ENABLED = 'TRUE'
        AND    WINDOW_NAME LIKE UPPER('%&&day%'))
       AND
       (SELECT CAST(LAST_START_DATE+DURATION AS DATE)
        FROM   DBA_SCHEDULER_WINDOWS
        WHERE  ENABLED = 'TRUE'
        AND    WINDOW_NAME LIKE UPPER('%&&day%'));
Step 3
Put everything together by querying DBA_OPTSTAT_OPERATION_TASKS, to see which tasks and objects were analyzed during the time window:
-- get the details for every task executed by "gather_database_stats (auto)"
SELECT opid,
       target,
       target_objn,
       target_type,
       target_size,
       cast(start_time as date) "Start Time",
       cast(end_time as date) "End Time",
       status
FROM   DBA_OPTSTAT_OPERATION_TASKS
WHERE  OPID IN (
        -- get the unique ID for the "gather_database_stats (auto)" jobs
        SELECT ID
        FROM   DBA_OPTSTAT_OPERATIONS
        WHERE  OPERATION = 'gather_database_stats (auto)'
        AND    CAST(START_TIME AS DATE) BETWEEN
               -- get the start and stop times for the last window
               -- for a particular weekday
               (SELECT CAST(LAST_START_DATE AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
               AND
               (SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
       )
ORDER BY CAST(start_time AS DATE) ASC;
Sample output (displaying the first and last 5 rows of the record set):
OPID | TARGET | TARGET_OBJN | TARGET_TYPE | TARGET_SIZE | Start Time | End Time | STATUS |
---|---|---|---|---|---|---|---|
48848 | SYS.ET$16E30DF80001 | 6177051 | TABLE | 0 | 18.04.2018 23:59:30 | 18.04.2018 23:59:30 | FAILED |
48848 | SYS.I_SEQ1 | 79 | INDEX | 1 | 18.04.2018 23:59:30 | 18.04.2018 23:59:31 | COMPLETED |
48848 | SYS.SEQ$ | 74 | TABLE | 7 | 18.04.2018 23:59:30 | 18.04.2018 23:59:31 | COMPLETED |
48848 | SYS.OPTSTAT_HIST_CONTROL$ | 506 | TABLE | 1 | 18.04.2018 23:59:31 | 18.04.2018 23:59:31 | COMPLETED |
48848 | SYS.REG$ | 5775 | TABLE | 1 | 18.04.2018 23:59:31 | 18.04.2018 23:59:31 | COMPLETED |
... | ... | ... | ... | ... | ... | ... | ... |
48885 | SYS.WRH$_SERVICE_STAT.WRH$_SERVICE_STAT_2490506650_826 | 11986128 | TABLE PARTITION | 5 | 19.04.2018 02:00:48 | 19.04.2018 02:00:48 | COMPLETED |
48885 | SYS.WRH$_SEG_STAT_PK.WRH$_SEG_STAT_2490506650_826 | 11986126 | INDEX PARTITION | 1 | 19.04.2018 02:00:48 | 19.04.2018 02:00:48 | COMPLETED |
48885 | SYS.WRH$_SGASTAT_U.WRH$_SGASTAT_2490506650_826 | 11986114 | INDEX PARTITION | 1 | 19.04.2018 02:00:48 | 19.04.2018 02:00:48 | COMPLETED |
48885 | SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826 | 11986124 | TABLE PARTITION | 5 | 19.04.2018 02:00:48 | 19.04.2018 02:00:48 | COMPLETED |
48885 | SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826 | 11986124 | TABLE PARTITION | 5 | 19.04.2018 02:00:48 | 19.04.2018 02:00:48 | COMPLETED |
Check the overall status of your run:
SELECT target_type, status, count(*)
FROM   DBA_OPTSTAT_OPERATION_TASKS
WHERE  OPID IN (
        -- get the unique ID for the "gather_database_stats (auto)" jobs
        SELECT ID
        FROM   DBA_OPTSTAT_OPERATIONS
        WHERE  OPERATION = 'gather_database_stats (auto)'
        AND    CAST(START_TIME AS DATE) BETWEEN
               -- get the start and stop times for the last window
               -- for a particular weekday
               (SELECT CAST(LAST_START_DATE AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
               AND
               (SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
       )
GROUP BY target_type, status
ORDER BY status;
Result:
TARGET_TYPE | STATUS | COUNT(*) |
---|---|---|
INDEX | COMPLETED | 501 |
INDEX PARTITION | COMPLETED | 177 |
TABLE | COMPLETED | 394 |
TABLE PARTITION | COMPLETED | 362 |
TABLE | FAILED | 54 |
To further filter out those objects that failed, use:
SELECT DISTINCT target, target_objn, target_type
FROM   DBA_OPTSTAT_OPERATION_TASKS
WHERE  OPID IN (
        -- get the unique ID for the "gather_database_stats (auto)" jobs
        SELECT ID
        FROM   DBA_OPTSTAT_OPERATIONS
        WHERE  OPERATION = 'gather_database_stats (auto)'
        AND    CAST(START_TIME AS DATE) BETWEEN
               -- get the start and stop times for the last window
               -- for a particular weekday
               (SELECT CAST(LAST_START_DATE AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
               AND
               (SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                FROM   DBA_SCHEDULER_WINDOWS
                WHERE  ENABLED = 'TRUE'
                AND    WINDOW_NAME LIKE UPPER('%&&day%'))
       )
AND status = 'FAILED'
ORDER BY target;
Which in my case revealed that the following objects need attention:
TARGET | TARGET_OBJN | TARGET_TYPE |
---|---|---|
SYS.ET$16E30DF80001 | 6177051 | TABLE |
SYS.X$LOGMNR_CONTENTS | 4294951535 | TABLE |
SYS.X$LOGMNR_REGION | 4294951631 | TABLE |
Wednesday, April 18, 2018
Potential solution for ORA-12008: error in materialized view or zonemap refresh path
During the sync phase of an online redefinition, I received the error:
begin
ORA-42009: error occurred while synchronizing the redefinition
ORA-12008: error in materialized view or zonemap refresh path
ORA-01843: not a valid month
ORA-06512: at "SYS.DBMS_REDEFINITION", line 219
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5392
Solution:
Alter your session setting for NLS_TERRITORY. Add the line

alter session set nls_territory='NORWAY';

to your script, substituting the territory that matches the date format used in your virtual column definition (NORWAY in my case, as shown below).
This seems to be due to the fact that I am partitioning on a virtual column, which is in turn based on a column of datatype TIMESTAMP:
CREATE TABLE USER1.TAB1_INTERIM
(
  ENTRY_ID      VARCHAR2(36 BYTE)  NULL,
  DNAME         VARCHAR2(11 BYTE)  NULL,
  DTYPE         VARCHAR2(64 BYTE)  NULL,
  CREATED_TIME  TIMESTAMP(6)       NULL,
  DYEAR         INTEGER            NULL,
  CREATED_DAY   AS ( CAST( TO_CHAR(CREATED_TIME,'DD.MM.YYYY') AS DATE ) ) VIRTUAL
)
TABLESPACE MYTABLESPACE
PARTITION BY RANGE (CREATED_DAY)
INTERVAL( NUMTOYMINTERVAL(1,'MONTH'))
(
  PARTITION P_INIT VALUES LESS THAN (TO_DATE('01.01.2017', 'DD.MM.YYYY')) TABLESPACE MYTABLESPACE
);
Since the creator of the database didn't set the NLS_TERRITORY to NORWAY, it is using the default value "AMERICA".
Without altering the default NLS settings, data from the CREATED_TIME column will therefore be displayed according to US formatting rules:
select created_time from user1.tab1 fetch first 1 rows only;

CREATED_TIME
--------------------------
26-SEP-17 09.30.17.040000 AM
I would like the data to be formatted according to Norwegian locale rules, which is why I use 'DD.MM.YYYY' in my virtual column definition. I tested this in sqlplus:
alter session set nls_territory='NORWAY';

select created_time from user1.tab1 fetch first 1 rows only;

CREATED_TIME
--------------------------
26.09.2017 09.30.17,040000
So my script would look like this:
alter session set nls_territory='NORWAY';

begin
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(uname=>'USER1',orig_table=>'TAB1',int_table=>'TAB1_INTERIM');
end;
/
and it completed successfully.
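If you want to verify which territory a session is actually running with before kicking off the synchronization, a quick sanity check against NLS_SESSION_PARAMETERS can be used (not part of the original script):

select parameter, value
from   nls_session_parameters
where  parameter in ('NLS_TERRITORY','NLS_DATE_FORMAT');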
Tuesday, April 17, 2018
Syntax for index hint
Use /*+ INDEX(table_name index_name) */
For example:
set autot trace exp stat

select /*+ INDEX(MYTABLE MY_IDX1) */ min(entry_num)
from   sales.sales_log
where  sold_date >= '05.12.2017';
Which gave me the following plan:
-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |         |     1 |    11 |     9   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                             |         |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |         |     1 |    11 |     9   (0)| 00:00:01 |     1 |    13 |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES   |     1 |    11 |     9   (0)| 00:00:01 |     1 |    13 |
|   4 |     INDEX FULL SCAN                         | MY_IDX1 |     3 |       |     3   (0)| 00:00:01 |     1 |    13 |
-----------------------------------------------------------------------------------------------------------------------------
Line 4 of the explain plan output shows that the index MY_IDX1 is being used.
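Note that if the table is given an alias in the query, the hint must reference the alias rather than the table name. A hypothetical variant of the query above, using the alias s:

select /*+ INDEX(s MY_IDX1) */ min(entry_num)
from   sales.sales_log s
where  sold_date >= '05.12.2017';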
Friday, April 13, 2018
A solution for ORA-28040
When connecting to Oracle 12c databases you may get
ERROR: ORA-28040: No matching authentication protocol
upon login.
The error is defined as follows:
28040, 0000, "No matching authentication protocol"
Cause: There was no acceptable authentication protocol for either client or server.
Action: The administrator should set the values of the SQLNET.ALLOWED_LOGON_VERSION_SERVER and SQLNET.ALLOWED_LOGON_VERSION_CLIENT parameters, on both the client and on the server, to values that match the minimum version software supported in the system. This error is also raised when the client is authenticating to a user account which was created without a verifier suitable for the client software version. In this situation, that account's password must be reset, in order for the required verifier to be generated and allow authentication to proceed successfully.
Solution: edit your $TNS_ADMIN/sqlnet.ora file to include this directive:
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
which will allow older clients to connect. Oracle claims that the default value after an upgrade to Oracle 12.1 is 11, but I have found that the parameter needs to be explicitly set in the server's sqlnet.ora file before older clients are allowed to connect.
Note that the parameter SQLNET.ALLOWED_LOGON_VERSION_CLIENT is relevant only when the database server itself acts as a client against other database servers. Consequently, unless I know for certain that a database server is actually used as a client against other remote database servers, I specify the parameters like this for my Oracle 12c database servers:
SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=12
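The "Action" text above also mentions password verifiers. To see which verifier versions exist for a given account, you can query the PASSWORD_VERSIONS column of DBA_USERS (SCOTT used here as an example account):

select username, password_versions
from   dba_users
where  username = 'SCOTT';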
Check the Oracle documentation for more info