Thursday, April 19, 2018

How do I find which objects were analyzed during an autostats job?


Step 1

Find the start and stop times of the maintenance window:
select window_name,
       cast(last_start_date as date) "Start Time",
       DURATION,
       cast(last_start_date+duration as date) "End Time"
FROM DBA_SCHEDULER_WINDOWS
WHERE ENABLED='TRUE'
AND WINDOW_NAME LIKE UPPER('%&&day%');

Which results in:
WINDOW_NAME       Start Time           DURATION             End Time
WEDNESDAY_WINDOW  18.04.2018 23:59:00  +00 04:00:00.000000  19.04.2018 03:59:00

So my Wednesday window starts at 23:59 and ends 4 hours later, at 03:59.
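The queries in this post rely on the SQL*Plus substitution variable &&day to pick the window for a particular weekday. A minimal sketch for setting it before running the queries (the value WEDNESDAY is just an example):

-- Set the substitution variable used by the window queries in this post.
-- Any weekday matching your window names will do; WEDNESDAY is just an example.
undefine day
define day = 'WEDNESDAY'

-- Or prompt for it interactively:
-- accept day prompt 'Weekday (e.g. MONDAY): '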

Step 2
Add the query above as a subquery in a query against DBA_OPTSTAT_OPERATIONS, to get the IDs of all the sub-steps that were executed during this window:
select ID
from DBA_OPTSTAT_OPERATIONS 
where operation='gather_database_stats (auto)' 
and CAST(start_time AS DATE) BETWEEN 
                                ( SELECT CAST (LAST_START_DATE AS DATE)
                                  FROM  DBA_SCHEDULER_WINDOWS
                                  WHERE ENABLED='TRUE'
                                  AND   WINDOW_NAME LIKE UPPER('%&&day%')
                                 )
                              AND 
                                 (
                                  SELECT CAST (LAST_START_DATE+DURATION AS DATE)
                                  FROM  DBA_SCHEDULER_WINDOWS
                                  WHERE ENABLED='TRUE'
                                  AND   WINDOW_NAME LIKE UPPER('%&&day%')
                                 )
;

Step 3
Put everything together by querying DBA_OPTSTAT_OPERATION_TASKS, to see which tasks ran and which objects were analyzed during the time window:

-- get the details for every task executed by "gather_database_stats (auto)" 
SELECT opid,
       target,
       target_objn,
       target_type,
       target_size,
       cast(start_time as date) "Start Time",
       cast(end_time as date) "End Time", 
       status
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST (START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
order by CAST(start_time as DATE) asc;

Sample output (displaying the first and last 5 rows of the record set):
OPID   TARGET                                                   TARGET_OBJN  TARGET_TYPE      TARGET_SIZE  Start Time           End Time             STATUS
48848  SYS.ET$16E30DF80001                                      6177051      TABLE            0            18.04.2018 23:59:30  18.04.2018 23:59:30  FAILED
48848  SYS.I_SEQ1                                               79           INDEX            1            18.04.2018 23:59:30  18.04.2018 23:59:31  COMPLETED
48848  SYS.SEQ$                                                 74           TABLE            7            18.04.2018 23:59:30  18.04.2018 23:59:31  COMPLETED
48848  SYS.OPTSTAT_HIST_CONTROL$                                506          TABLE            1            18.04.2018 23:59:31  18.04.2018 23:59:31  COMPLETED
48848  SYS.REG$                                                 5775         TABLE            1            18.04.2018 23:59:31  18.04.2018 23:59:31  COMPLETED
...
48885  SYS.WRH$_SERVICE_STAT.WRH$_SERVICE_STAT_2490506650_826  11986128     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885  SYS.WRH$_SEG_STAT_PK.WRH$_SEG_STAT_2490506650_826       11986126     INDEX PARTITION  1            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885  SYS.WRH$_SGASTAT_U.WRH$_SGASTAT_2490506650_826          11986114     INDEX PARTITION  1            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885  SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826          11986124     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED
48885  SYS.WRH$_SEG_STAT.WRH$_SEG_STAT_2490506650_826          11986124     TABLE PARTITION  5            19.04.2018 02:00:48  19.04.2018 02:00:48  COMPLETED

Check the overall status of your run:
SELECT target_type,status,count(*)
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST(START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
group by target_type,status
order by status;

Result:
TARGET_TYPE      STATUS     COUNT(*)
INDEX            COMPLETED       501
INDEX PARTITION  COMPLETED       177
TABLE            COMPLETED       394
TABLE PARTITION  COMPLETED       362
TABLE            FAILED           54


To list only the objects that failed, use:
SELECT distinct target,target_objn,target_type
FROM DBA_OPTSTAT_OPERATION_TASKS 
WHERE OPID IN ( -- get the unique ID for the "gather_database_stats (auto)" jobs
                SELECT ID FROM DBA_OPTSTAT_OPERATIONS 
                WHERE OPERATION='gather_database_stats (auto)' 
                AND CAST(START_TIME AS DATE) BETWEEN 
                                                -- get the start and stop times for the last window
                                                -- for a particular weekday
                                                (SELECT CAST(LAST_START_DATE AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
                                                AND (
                                                      SELECT CAST(LAST_START_DATE+DURATION AS DATE)
                                                      FROM DBA_SCHEDULER_WINDOWS
                                                      WHERE ENABLED='TRUE'
                                                      AND WINDOW_NAME LIKE UPPER('%&&day%')
                                                     )
)
AND status = 'FAILED'
order by TARGET;

Which in my case revealed that the following objects need attention:

TARGET                 TARGET_OBJN  TARGET_TYPE
SYS.ET$16E30DF80001    6177051      TABLE
SYS.X$LOGMNR_CONTENTS  4294951535   TABLE
SYS.X$LOGMNR_REGION    4294951631   TABLE

Wednesday, April 18, 2018

Potential solution for ORA-12008: error in materialized view or zonemap refresh path



During the sync phase of an online redefinition, I received the following error:
begin
ORA-42009: error occurred while synchronizing the redefinition
ORA-12008: error in materialized view or zonemap refresh path
ORA-01843: not a valid month
ORA-06512: at "SYS.DBMS_REDEFINITION", line 219
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5392

Solution:
Alter your session setting for NLS_TERRITORY. Add the line

alter session set nls_territory='NORWAY';

(or whichever territory matches the date format used in your expressions) to your script.


This seems to be due to the fact that I am partitioning on a virtual column, which is in turn derived from a column of datatype TIMESTAMP:
CREATE TABLE USER1.TAB1_INTERIM
(
  ENTRY_ID             VARCHAR2(36 BYTE)        NULL,
  DNAME                VARCHAR2(11 BYTE)        NULL,
  DTYPE                VARCHAR2(64 BYTE)        NULL,
  CREATED_TIME         TIMESTAMP(6)             NULL,
  DYEAR                INTEGER                  NULL,
  CREATED_DAY          AS (
                           CAST(
                             TO_CHAR(CREATED_TIME,'DD.MM.YYYY')
                             AS DATE
                           )
                          ) VIRTUAL
)
TABLESPACE MYTABLESPACE
PARTITION BY RANGE (CREATED_DAY)
INTERVAL( NUMTOYMINTERVAL(1,'MONTH'))
(
  PARTITION P_INIT VALUES LESS THAN (TO_DATE('01.01.2017', 'DD.MM.YYYY'))
  TABLESPACE MYTABLESPACE
);

Since the creator of the database didn't set NLS_TERRITORY to NORWAY, the database is using the default value "AMERICA".

Without altering the default NLS settings, the CREATED_TIME column will therefore be displayed according to the US format:
select created_time from user1.tab1 fetch first 1 rows only;

CREATED_TIME
--------------------------
26-SEP-17 09.30.17.040000 AM

I would like the data to be formatted according to Norwegian locale rules, which is why I use the format mask 'DD.MM.YYYY' in my virtual column definition. I tested this in sqlplus:
alter session set nls_territory='NORWAY';
select created_time from user1.tab1 fetch first 1 rows only;

CREATED_TIME
--------------------------
26.09.2017 09.30.17,040000
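To illustrate why the virtual column expression is sensitive to the session NLS settings, here is a small sketch against DUAL (no application tables assumed). To my knowledge, CAST of a string to DATE uses the session's NLS_DATE_FORMAT, which defaults to DD-MON-RR under AMERICA and to a DD.MM-style format under NORWAY:

-- Under the default territory, the CAST cannot parse a DD.MM.YYYY string:
alter session set nls_territory='AMERICA';
select cast(to_char(systimestamp,'DD.MM.YYYY') as date) from dual;
-- ORA-01843: not a valid month

-- Under NORWAY, the default date format matches, and the CAST succeeds:
alter session set nls_territory='NORWAY';
select cast(to_char(systimestamp,'DD.MM.YYYY') as date) from dual;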



So my script would look like this:
alter session set nls_territory='NORWAY';
begin
   DBMS_REDEFINITION.SYNC_INTERIM_TABLE(uname=>'USER1',orig_table=>'TAB1',int_table=>'TAB1_INTERIM');
end;
/

and it completed successfully.
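If in doubt, you can verify which NLS settings the session is actually using before running the synchronization:

-- Check the NLS settings currently in effect for the session
select parameter, value
from   nls_session_parameters
where  parameter in ('NLS_TERRITORY','NLS_DATE_FORMAT');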




Tuesday, April 17, 2018

Syntax for index hint

Use /*+ INDEX(table_name index_name) */, where table_name must be the table name or alias exactly as it appears in the query.

For example:
set autot trace exp stat
select /*+ INDEX(sales_log MY_IDX1) */ min(entry_num)
from sales.sales_log
where sold_date >= '05.12.2017';

Which gave me the following plan:
-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |               |     1 |    11 |     9   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                             |               |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR                  |               |     1 |    11 |     9   (0)| 00:00:01 |     1 |    13 |
|*  3 |    TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES         |     1 |    11 |     9   (0)| 00:00:01 |     1 |    13 |
|   4 |     INDEX FULL SCAN                         | MY_IDX1       |     3 |       |     3   (0)| 00:00:01 |     1 |    13 |
-----------------------------------------------------------------------------------------------------------------------------

The operation with Id 4 in the explain plan output shows that the index MY_IDX1 is being used.
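Note that if the table is given an alias in the query, the hint must reference the alias rather than the table name, otherwise the hint is silently ignored. A small sketch based on the example above (the alias s is arbitrary):

select /*+ INDEX(s MY_IDX1) */ min(entry_num)
from   sales.sales_log s
where  sold_date >= to_date('05.12.2017','dd.mm.yyyy');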

Friday, April 13, 2018

A solution for ORA-28040



When connecting to Oracle 12c databases you may get

ERROR:
ORA-28040: No matching authentication protocol

upon login.

The error is defined as follows:

28040, 0000, "No matching authentication protocol"
Cause: There was no acceptable authentication protocol for either client or server.

Action: The administrator should set the values of the SQLNET.ALLOWED_LOGON_VERSION_SERVER and SQLNET.ALLOWED_LOGON_VERSION_CLIENT parameters, on both the client and on the server, to values that match the minimum version software supported in the system. This error is also raised when the client is authenticating to a user account which was created without a verifier suitable for the client software version. In this situation, that account's password must be reset, in order for the required verifier to be generated and allow authentication to proceed successfully.


Solution: edit your $TNS_ADMIN/sqlnet.ora file to include this directive:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=8

which will allow older clients to connect. Oracle claims that the default value after an upgrade to Oracle 12.1 is 11, but I have found that the parameter needs to be explicitly set in the server's sqlnet.ora file before older clients are allowed to connect.

Note that the parameter SQLNET.ALLOWED_LOGON_VERSION_CLIENT is relevant only when the database server itself acts as a client against other database servers. Consequently, unless I know for certain that a database server is actually used as a client against other remote database servers, I will specify the parameters like this for my Oracle 12c database servers:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=12
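As the error text above points out, an affected account may also need a password reset so that a verifier matching the client version is generated. A hedged way to check which verifiers an account already has (the username SCOTT is just a placeholder):

-- List the password verifiers that exist for an account (10G, 11G, 12C, ...)
select username, password_versions
from   dba_users
where  username = 'SCOTT';

-- If the verifier needed by the old client (typically 10G) is missing,
-- reset the password after adjusting sqlnet.ora so it gets regenerated:
-- alter user SCOTT identified by "********";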


Check the Oracle documentation for more info

Tuesday, April 10, 2018

How to change the start time for a maintenance window

For example, if you want to change the start time for the MONDAY_WINDOW from the default 22:00 to 23:59, use the following syntax:

BEGIN
DBMS_SCHEDULER.DISABLE(name=>'"SYS"."MONDAY_WINDOW"',force=>TRUE);
END;
/

BEGIN
DBMS_SCHEDULER.SET_ATTRIBUTE(name=>'"SYS"."MONDAY_WINDOW"',attribute=>'REPEAT_INTERVAL',value=>'FREQ=WEEKLY;BYDAY=MON;BYHOUR=23;BYMINUTE=59;BYSECOND=0');
END;
/

BEGIN
DBMS_SCHEDULER.ENABLE(name=>'"SYS"."MONDAY_WINDOW"');
END;
/
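To verify that the change took effect, check DBA_SCHEDULER_WINDOWS afterwards:

-- Confirm the new repeat interval and that the window is enabled again
select window_name, repeat_interval, duration, enabled
from   dba_scheduler_windows
where  window_name = 'MONDAY_WINDOW';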

More on finding failed autostats jobs



In a previous post, I showed how to check if an autostats job had succeeded.

Recently I had to unlock table stats on a very large partitioned table, and then lock down older partition stats explicitly, to allow Oracle to gather statistics for the most recent partitions automatically with the auto stats job.

I wanted to know if Oracle was able to analyze the partitions within the designated 4-hour weekday window.
When you read the code snippet below, please keep in mind that I have defined my weekday windows to start at 23:59 every evening.

I found the following query to be useful:

select distinct 
       opid, 
       target,
       target_type, 
       to_char(start_Time,'dd.mm.yyyy hh24:mi:ss') "starttime",
       to_char(end_time,'dd.mm.yyyy hh24:mi:ss')   "endtime",
       status
from dba_optstat_operation_tasks 
where opid IN (
            SELECT ID
            FROM DBA_OPTSTAT_OPERATIONS
            WHERE OPERATION LIKE 'gather_database_stats (auto)'
            AND TO_CHAR(START_TIME,'dd.mm.yyyy hh24:mi') = '09.04.2018 23:59'
)
and target like '%SALES%';

The result:

OPID   TARGET                      TARGET_TYPE      starttime            endtime              STATUS
47815  SH.SALES                    TABLE            09.04.2018 23:59:39  10.04.2018 03:25:17  COMPLETED
47815  SH.SALES.P_SALES_DATA_2016  TABLE PARTITION  10.04.2018 03:25:41  10.04.2018 03:59:00  TIMED OUT
47815  SH.SALES.P_SALES_DATA_MAX   TABLE PARTITION  10.04.2018 03:25:33  10.04.2018 03:25:33  COMPLETED

All in all, the autostats job that started on April 9th, 2018 at 23:59 had the following outcome:
select status,count(*)
from dba_optstat_operation_tasks 
where opid IN (
            SELECT ID
            FROM DBA_OPTSTAT_OPERATIONS
            WHERE OPERATION LIKE 'gather_database_stats (auto)'
            AND TO_CHAR(START_TIME,'dd.mm.yyyy hh24:mi') = '09.04.2018 23:59'
)
group by status;

STATUS     COUNT(*)
FAILED            1
TIMED OUT        12
COMPLETED       339

From the output you can now proceed to find out why the analysis failed or timed out. The query below will list these objects for you:

select distinct opid,
       target,
       target_type,
       to_char(start_Time,'dd.mm.yyyy hh24:mi:ss') "starttime",
       to_char(end_time,'dd.mm.yyyy hh24:mi:ss')   "endtime",
       status
from dba_optstat_operation_tasks 
where opid IN (
            SELECT ID
            FROM DBA_OPTSTAT_OPERATIONS
            WHERE OPERATION LIKE 'gather_database_stats (auto)'
            AND TO_CHAR(START_TIME,'dd.mm.yyyy hh24:mi') = '09.04.2018 23:59'
)
and STATUS IN ('TIMED OUT','FAILED');

Thursday, March 15, 2018

The error

SP2-0552: Bind variable "B2" not declared.

can arise when using incorrect declarations in your SQL scripts.

For example, I had the following in a script:
var B2 DATE;
EXEC :B2 := to_date('22.02.2018','dd.mm.yyyy');

For sqlplus to accept this, you need to declare the bind variable as VARCHAR2, even though you intend to use it as a DATE in your query.

Declare it as VARCHAR2 instead:

var B2 VARCHAR2(10);
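Putting it together, a corrected sketch could look like this (the table and column names are made up for illustration):

-- Declare the bind variable as a string, then convert it explicitly in the query
var B2 VARCHAR2(10);
EXEC :B2 := '22.02.2018';

select count(*)
from   mytable                                     -- hypothetical table
where  created_date >= to_date(:B2,'dd.mm.yyyy');  -- explicit conversion to DATE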