Tuesday, August 26, 2014

What is the meaning of the "10G 11G" value in DBA_USERS.PASSWORD_VERSIONS?

When the value of DBA_USERS.PASSWORD_VERSIONS is shown as "10G 11G", it means that both old and new-style hash values are available for the user.

Note that from 11gR1 onwards the hashed password values are no longer exposed through the DBA_USERS view; they are both stored in the dictionary table USER$, the 10G style hash in the column PASSWORD and the 11G SHA-1 style hash in the column SPARE4.

A NULL value in USER$.SPARE4 indicates that the password has not been changed since the migration and that the user still has the old, case-insensitive password.

In my query below, USER1 through USER4 have been migrated and will now take advantage of 11g password case-sensitivity, if enabled.

USER5 is still using 10G style case insensitive passwords.

USER6 was created after the migration and will also take advantage of 11g password case-sensitivity, if enabled.

SELECT U.NAME "Name",
            NVL(REGEXP_REPLACE(U.SPARE4,'^.+', 'Password changed since migration',1,0), 'Password unchanged since migration') "Action undertaken",
            DU.PASSWORD_VERSIONS "Password Version"
FROM USER$ U LEFT OUTER JOIN DBA_USERS DU
ON U.NAME = DU.USERNAME
WHERE DU.USERNAME NOT IN (SELECT ROLE FROM DBA_ROLES)
ORDER BY NAME ASC;


Name   Action undertaken                   Password Version
USER1  Password changed since migration    10G 11G
USER2  Password changed since migration    10G 11G
USER3  Password changed since migration    10G 11G
USER4  Password changed since migration    10G 11G
USER5  Password unchanged since migration  10G
USER6  Password changed since migration    11G

Users that are imported from an earlier release into an 11g database will keep their case-insensitive passwords until the password is changed.
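
Whether case-sensitive password checking is actually enforced is controlled by the initialization parameter SEC_CASE_SENSITIVE_LOGON (default TRUE in 11g). A quick way to check it, shown here as a minimal sketch:

SHOW PARAMETER sec_case_sensitive_logon

-- or, as a query:
SELECT NAME, VALUE
FROM   V$PARAMETER
WHERE  NAME = 'sec_case_sensitive_logon';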


To generate "alter user identified by values" statements, use the following:

SELECT 'alter user ' || NAME || ' identified by values ''' || SPARE4 || ';' || PASSWORD || ''';'
FROM   USER$
WHERE  NAME IN ('USER1','USER2');
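
The generated statements take the following shape (the hash values are placeholders here, not real hashes):

alter user USER1 identified by values '<value of SPARE4>;<value of PASSWORD>';

Applying such a statement sets both hash values for the user without ever knowing the clear-text password, which is handy when moving users between databases.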

Sources: Oracle Documentation

Monday, August 25, 2014

How to use trace event 10046


ALTER SESSION SET max_dump_file_size = unlimited;
ALTER SESSION SET tracefile_identifier = 'normal_run_with_trace_10046';
ALTER SESSION SET statistics_level = ALL;
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- Your SQL statement(s) here

ALTER SESSION SET events '10046 trace name context off';
EXIT
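
The trace file is written to the session's trace directory, and the tracefile_identifier set above becomes part of the file name. On 11g and later you can ask the database for the exact path (a minimal sketch, run from the traced session):

SELECT VALUE AS trace_file
FROM   V$DIAG_INFO
WHERE  NAME = 'Default Trace File';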

Alternatively, use the oradebug utility:

select s.sid, s.serial#, p.spid
from   v$session s, v$process p
where  s.sid = 29
and    s.paddr = p.addr;

Output:

SID  SERIAL#  SPID
29   9        24510662
Check that the process is indeed there:
prodserver1>ps -ef | grep 24510662
  ora11g 24510662        1   0 17:24:42      -  0:00 oraclemagr (LOCAL=NO)
prodserver1>proctree 24510662
  24510662    oraclemagr (LOCAL=NO)

Enable tracing on the operating system process, inside sqlplus:
SQL> oradebug setospid 24051998
Oracle pid: 420, Unix process pid: 24051998, image: oracle@ystu032ma
SQL> oradebug unlimit
SQL> oradebug event 10046 trace name context forever, level 12

When done, disable tracing:
SYS@magr SQL> oradebug setospid 24051998
Oracle pid: 420, Unix process pid: 24051998, image: oracle@ystu032ma

SQL> oradebug event 10046 trace name context off

You can now parse the trace file produced through tkprof:
tkprof mytracefile.trc mytracefile.out sys=yes waits=yes sort=exemis

For an extensive list of options that can be used with tkprof, simply type tkprof at the command prompt without any arguments.

An excellent note on 10046 trace event is called "How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues (Doc ID 376442.1)".

Another great note on tracing in general is "Tracing Enhancements Using DBMS_MONITOR (In 10g, 11g and Above) (Doc ID 293661.1)"

Both notes are available on the My Oracle Support site.
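
As the second note describes, DBMS_MONITOR offers a supported alternative to oradebug for tracing another session. A minimal sketch, using the SID and serial# found in the query above:

-- Enable extended SQL trace (waits and binds, equivalent to 10046 level 12):
exec DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 29, serial_num => 9, waits => TRUE, binds => TRUE);

-- When done, disable it again:
exec DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 29, serial_num => 9);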

Friday, August 1, 2014

Why are there multiple rows in V$SESSION_LONGOPS when running an RMAN backup?

Answer:
When performing an RMAN backup, V$SESSION_LONGOPS is populated with two types of rows:

Detailed rows: show the progress of a single job step, for example the creation of one backup set, and are updated continuously while the step runs.

Aggregated rows: summarize the progress across all the job steps of the backup command. They are updated only after each detailed step has finished.

Example:
Below I am querying V$SESSION_LONGOPS after an incremental level 0 backup has started, using 6 channels. Notice that both detailed rows and an aggregated row are present:
SELECT SID,SERIAL#,OPNAME,TARGET_DESC,TOTALWORK,SOFAR,UNITS,START_TIME,TIME_REMAINING,ELAPSED_SECONDS,MESSAGE
FROM V$SESSION_LONGOPS 
WHERE  TOTALWORK <> 0
AND SOFAR <> TOTALWORK
ORDER BY SERIAL# ASC;

SID  SERIAL#  OPNAME                             TARGET_DESC  TOTALWORK  SOFAR    UNITS   START_TIME           TIME_REMAINING  ELAPSED_SECONDS  MESSAGE
203  7409     RMAN: incremental datafile backup  Set Count    4096000    1004926  Blocks  01.08.2014 12:28:28  557             181              RMAN: incremental datafile backup: Set Count 137085: 1004926 out of 4096000 Blocks done
398  15079    RMAN: incremental datafile backup  Set Count    3018368    1122302  Blocks  01.08.2014 12:28:28  306             181              RMAN: incremental datafile backup: Set Count 137088: 1122302 out of 3018368 Blocks done
439  17451    RMAN: incremental datafile backup  Set Count    2560000    1127038  Blocks  01.08.2014 12:28:29  229             180              RMAN: incremental datafile backup: Set Count 137089: 1127038 out of 2560000 Blocks done
357  31627    RMAN: incremental datafile backup  Set Count    3044608    1115518  Blocks  01.08.2014 12:28:28  313             181              RMAN: incremental datafile backup: Set Count 137087: 1115518 out of 3044608 Blocks done
476  55467    RMAN: incremental datafile backup  Set Count    2557440    940542   Blocks  01.08.2014 12:28:29  309             180              RMAN: incremental datafile backup: Set Count 137090: 940542 out of 2557440 Blocks done
512  55527    RMAN: aggregate input              backup       76514816   4360436  Blocks  01.08.2014 12:28:28  2085            126              RMAN: aggregate input: backup 33: 4360436 out of 76514816 Blocks done
281  56085    RMAN: incremental datafile backup  Set Count    3251200    969214   Blocks  01.08.2014 12:28:28  426             181              RMAN: incremental datafile backup: Set Count 137086: 969214 out of 3251200 Blocks done

Let's join V$SESSION_LONGOPS with V$SESSION and (if desirable) V$PROCESS to view only the detailed rows and estimate the progress in percent. This query is very useful for quickly getting an overview of your backup and checking whether it is progressing at the expected speed:

SELECT S.CLIENT_INFO "Client Info", SL.OPNAME "Operation", SL.MESSAGE, SL.SID, SL.SERIAL#, P.SPID "OS Process ID",
       SL.SOFAR "So Far", SL.TOTALWORK "Totalwork", ROUND(SL.SOFAR/SL.TOTALWORK*100,2) "% complete"
FROM   V$SESSION_LONGOPS SL INNER JOIN V$SESSION S ON SL.SID = S.SID
                            INNER JOIN V$PROCESS P ON P.ADDR = S.PADDR
WHERE  SL.OPNAME LIKE 'RMAN%'
AND    SL.OPNAME NOT LIKE '%aggregate%'
AND    SL.TOTALWORK != 0
AND    SL.SOFAR <> SL.TOTALWORK
/
Client Info                 Operation                          MESSAGE                                                                                   SID  SERIAL#  OS Process ID  So Far   Totalwork  % complete
rman channel=tsm_channel_0  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137096: 1173118 out of 2252800 Blocks done  203  7409     42926864       1173118  2252800    52,07
rman channel=tsm_channel_1  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137095: 2029054 out of 2393600 Blocks done  281  56085    52101750       2029054  2393600    84,77
rman channel=tsm_channel_2  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137097: 293246 out of 2096640 Blocks done   357  31627    56361104       293246   2096640    13,99
rman channel=tsm_channel_3  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137099: 174206 out of 2048000 Blocks done   398  15079    52756692       174206   2048000    8,51
rman channel=tsm_channel_4  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137098: 460286 out of 2048000 Blocks done   439  17451    29032454       460286   2048000    22,47
rman channel=tsm_channel_5  RMAN: incremental datafile backup  RMAN: incremental datafile backup: Set Count 137100: 196094 out of 2048000 Blocks done   476  55467    9700724        196094   2048000    9,57

Notice how V$RMAN_STATUS only holds information at the aggregated level (2 rows, one for each operation, although the first spawns the other):
SELECT SID,OPERATION,STATUS,MBYTES_PROCESSED, START_TIME, END_TIME, OBJECT_TYPE, OUTPUT_DEVICE_TYPE 
FROM V$RMAN_STATUS WHERE STATUS = 'RUNNING';

SID  OPERATION  STATUS   MBYTES_PROCESSED  START_TIME           OBJECT_TYPE  OUTPUT_DEVICE_TYPE
512  BACKUP     RUNNING  226641,953125     01.08.2014 12:28:28  DB INCR      SBT_TAPE
512  RMAN       RUNNING  0                 01.08.2014 12:28:28
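
If you only need a rough estimate of when the backup will finish, the TIME_REMAINING column from V$SESSION_LONGOPS (shown in the first query) can be projected onto SYSDATE. A minimal sketch:

SELECT SID, SERIAL#, OPNAME, TIME_REMAINING,
       SYSDATE + TIME_REMAINING/86400 "Estimated completion"
FROM   V$SESSION_LONGOPS
WHERE  OPNAME LIKE 'RMAN%'
AND    TOTALWORK <> 0
AND    SOFAR <> TOTALWORK;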



Tuesday, July 15, 2014

What is the difference between the columns PID and SPID in v$process?

The PID column is an internal counter that Oracle uses for its own processes. It is incremented by one for every new process. Consequently, since every dedicated session creates its own server process (also called a "shadow process"), you will see the number increment by one for each session that connects.

The documentation for V$PROCESS states that PID is the "Oracle Process Identifier", which is confusing.
In most circumstances, the column that you actually need is SPID, which shows the operating system process identifier.
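
A quick way to see both identifiers side by side for your own session (a minimal sketch):

SELECT P.PID "Oracle PID", P.SPID "OS PID", S.SID, S.SERIAL#
FROM   V$PROCESS P JOIN V$SESSION S ON S.PADDR = P.ADDR
WHERE  S.SID = TO_NUMBER(SYS_CONTEXT('USERENV','SID'));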

More on the TO_DSINTERVAL conversion function

The TO_DSINTERVAL function is very flexible.
All you need to do is to specify the interval you're looking for, and it will be converted into a valid INTERVAL DAY TO SECOND value. The available interval fields are DAY, HOUR, MINUTE and SECOND.

Examples:

Turn 2 days into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '2' DAY)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'2'DAY)
 --------------------------------------------------
 +02 00:00:00.000000
 1 row selected.

Turn 75 minutes into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '75' MINUTE)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'75'MINUTE)
 --------------------------------------------------
 +00 01:15:00.000000
 1 row selected.

Turn 60 minutes into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '60' MINUTE)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'60'MINUTE)
 --------------------------------------------------
 +00 01:00:00.000000
 1 row selected.

Turn 60 seconds into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '60' SECOND)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'60'SECOND)
 --------------------------------------------------
 +00 00:01:00.000000
 1 row selected.

Turn 24 hours into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '24' HOUR)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'24'HOUR)
 --------------------------------------------------
 +01 00:00:00.000000
 1 row selected.

Turn 6 hours into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '6' HOUR)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'6'HOUR)
 --------------------------------------------------
 +00 06:00:00.000000
 1 row selected.

Turn 61.2 seconds into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '61.2' SECOND)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'61.2'SECOND)
 --------------------------------------------------
 +00 00:01:01.200000
 1 row selected.

Turn 1 hour into a valid INTERVAL datatype
SELECT TO_DSINTERVAL(INTERVAL '1' HOUR)
 FROM DUAL;

 TO_DSINTERVAL(INTERVAL'1'HOUR)
 --------------------------------------------------
 +00 01:00:00.000000
 1 row selected.
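
The resulting interval can be used directly in date arithmetic. A small sketch, using the documented string format 'DD HH24:MI:SS':

-- Subtract 1 day, 6 hours and 30 minutes from the current date and time
SELECT SYSDATE - TO_DSINTERVAL('1 06:30:00') "Point in time"
FROM DUAL;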

Thursday, July 10, 2014

A convenient way to declare variables in PL/SQL

In PL/SQL programming, a convenient (and programmatically safe) way to specify data types for your variables is to inherit them from the table columns using the %TYPE attribute:

v_tab_name DBA_TABLES.TABLE_NAME%TYPE;
v_tabspc_name DBA_TABLES.TABLESPACE_NAME%TYPE;
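
A minimal sketch showing the declarations in use (it assumes access to DBA_TABLES and that serveroutput is on):

DECLARE
  v_tab_name    DBA_TABLES.TABLE_NAME%TYPE;
  v_tabspc_name DBA_TABLES.TABLESPACE_NAME%TYPE;
BEGIN
  -- The variables automatically follow any future change to the column definitions
  SELECT TABLE_NAME, TABLESPACE_NAME
  INTO   v_tab_name, v_tabspc_name
  FROM   DBA_TABLES
  WHERE  OWNER = 'SYS'
  AND    TABLE_NAME = 'DUAL';

  DBMS_OUTPUT.PUT_LINE(v_tab_name || ' resides in tablespace ' || v_tabspc_name);
END;
/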

How to drop stuck purge job created through DBMS_AUDIT_MGMT


In the process of implementing automatic purging of the audit trail, I came across a problem that really puzzled me: during debugging I had to drop and recreate the purge job that I had previously created through the dbms_audit_mgmt package.

I was completely unable to drop the scheduler job; every attempt threw a different error:

Trying to drop the job through the dbms_audit_mgmt.drop_purge_job procedure:
exec dbms_audit_mgmt.drop_purge_job(audit_trail_purge_name=>'PURGE_ALL_AUDIT_TRAILS');

Then trying to drop it directly through the dbms_scheduler.drop_job procedure:
exec dbms_scheduler.drop_job(job_name=>'PURGE_ALL_AUDIT_TRAILS',force=>TRUE);

Both attempts returned the error:
ERROR at line 1:
ORA-27475: "SYS.PURGE_ALL_AUDIT_TRAILS" must be a job
ORA-06512: at "SYS.DBMS_ISCHED", line 224
ORA-06512: at "SYS.DBMS_SCHEDULER", line 657
ORA-06512: at line 2

Trying to enable the job explicitly, in case the scheduled job was in an unknown state to the database:
exec dbms_scheduler.enable(name=>'PURGE_ALL_AUDIT_TRAILS');

which resulted in
ERROR at line 1:
ORA-27476: SYS.PURGE_ALL_AUDIT_TRAILS does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4440
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2803
ORA-06512: at line 1

Trying to recreate the job:
BEGIN
  DBMS_AUDIT_MGMT.create_purge_job(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24 /* hours */,
    audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
    use_last_arch_timestamp    => TRUE);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(sqlerrm(sqlcode));
END;
/

Result:
ORA-46254: 'PURGE_ALL_AUDIT_TRAILS' already exists

Particularly the last two error messages were discouraging: you cannot enable the job because it doesn't exist, nor can you recreate it, since it apparently already DOES exist.

Solution:
The solution is described in Oracle Support note 1252235.1, "Unable to Create or Drop Purge Job Using DBMS_AUDIT_MGMT".

In short, log on as user sys and execute the following SQL statements:
SELECT JOB_NAME,STATE
FROM DBA_SCHEDULER_JOBS
WHERE JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS'; <-- No rows should be returned

SELECT JOB_NAME,JOB_STATUS,AUDIT_TRAIL
FROM DBA_AUDIT_MGMT_CLEANUP_JOBS
WHERE JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS'; <-- 1 row should be returned, the job that's "stuck"

DELETE FROM SYS.DAM_CLEANUP_JOBS$
WHERE JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS'; <-- Delete the row from one of Oracle's internal tables

COMMIT; <-- commit the change
And you're done: the job can now be recreated, after which it will show up as usual when you query the DBA_SCHEDULER_JOBS view:
SELECT STATE,ENABLED
FROM DBA_SCHEDULER_JOBS
WHERE  JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';
STATE      ENABLED
SCHEDULED  TRUE