Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to", since they derive directly from my own personal experience. My goal is to provide simple examples that can be easily adapted to other situations.
Thursday, July 10, 2014
A convenient way to declare variables in PL/SQL
In PL/SQL programming, a convenient (and programmatically safe) way to specify data types for your variables is to let them be inherited from the table columns:
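For example, the following anonymous block declares two variables that automatically follow the definitions of the corresponding columns in DBA_TABLES (the query against SYS.AUD$ is just an illustration I've chosen here):

DECLARE
  v_tab_name    DBA_TABLES.TABLE_NAME%TYPE;
  v_tabspc_name DBA_TABLES.TABLESPACE_NAME%TYPE;
BEGIN
  -- the variables inherit their data types from the table columns,
  -- so a future change to the column definitions will not break this code
  SELECT TABLE_NAME, TABLESPACE_NAME
  INTO   v_tab_name, v_tabspc_name
  FROM   DBA_TABLES
  WHERE  OWNER = 'SYS'
  AND    TABLE_NAME = 'AUD$';

  DBMS_OUTPUT.PUT_LINE(v_tab_name || ' resides in tablespace ' || v_tabspc_name);
END;
/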
How to drop stuck purge job created through DBMS_AUDIT_MGMT
In the process of implementing automatic purging of the audit trail, I came across a problem that really puzzled me: during debugging, I had to drop and recreate the purge job that I had previously created through the dbms_audit_mgmt package.
I was completely unable to drop a particular scheduler job. All attempts threw different errors:
Trying to drop the job through the dbms_audit_mgmt.drop_purge_job procedure:
BEGIN
  dbms_audit_mgmt.drop_purge_job(audit_trail_purge_name=>'PURGE_ALL_AUDIT_TRAILS');
END;
/
Then trying to drop it directly through the dbms_scheduler.drop_job procedure:
BEGIN
  dbms_scheduler.drop_job(job_name=>'PURGE_ALL_AUDIT_TRAILS', force=>TRUE);
END;
/
Both attempts returned the error:
ERROR at line 1:
ORA-27475: "SYS.PURGE_ALL_AUDIT_TRAILS" must be a job
ORA-06512: at "SYS.DBMS_ISCHED", line 224
ORA-06512: at "SYS.DBMS_SCHEDULER", line 657
ORA-06512: at line 2
Trying to enable the job explicitly, in case the scheduled job was in an unknown state to the database:
exec dbms_scheduler.enable(name=>'PURGE_ALL_AUDIT_TRAILS');
which resulted in
ERROR at line 1:
ORA-27476: SYS.PURGE_ALL_AUDIT_TRAILS does not exist
ORA-06512: at "SYS.DBMS_ISCHED", line 4440
ORA-06512: at "SYS.DBMS_SCHEDULER", line 2803
ORA-06512: at line 1
Trying to recreate the job:
BEGIN
  DBMS_AUDIT_MGMT.create_purge_job(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
    audit_trail_purge_interval => 24 /* hours */,
    audit_trail_purge_name     => 'PURGE_ALL_AUDIT_TRAILS',
    use_last_arch_timestamp    => TRUE);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(sqlerrm(sqlcode));
END;
/
Result:
ORA-46254: 'PURGE_ALL_AUDIT_TRAILS' already exists
Particularly the last two error messages were discouraging: you cannot enable the job because it doesn't exist, nor can you recreate it, since it already DOES exist.
Solution:
From Oracle Support note 1252235.1, "Unable to Create or Drop Purge Job Using DBMS_AUDIT_MGMT".
In short, log on as user sys and execute the following SQL statements:
SELECT JOB_NAME, STATE
FROM   DBA_SCHEDULER_JOBS
WHERE  JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';   <-- No rows should be returned

SELECT JOB_NAME, JOB_STATUS, AUDIT_TRAIL
FROM   DBA_AUDIT_MGMT_CLEANUP_JOBS
WHERE  JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';   <-- 1 row should be returned, the job that's "stuck"

DELETE FROM SYS.DAM_CLEANUP_JOBS$
WHERE  JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';   <-- Delete the row from one of Oracle's internal tables

COMMIT;                                       <-- Commit the change

And you're done; your job can be recreated, after which it will turn up as usual when you query the DBA_SCHEDULER_JOBS view:
SELECT STATE,ENABLED FROM DBA_SCHEDULER_JOBS WHERE JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';
STATE           ENABLED
--------------- -------
SCHEDULED       TRUE
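As a final sanity check, you can also verify that the recreated job is registered again in the audit trail cleanup view used earlier:

SELECT JOB_NAME, JOB_STATUS, AUDIT_TRAIL
FROM   DBA_AUDIT_MGMT_CLEANUP_JOBS
WHERE  JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';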
Tuesday, July 8, 2014
How to purge the log entries for a specific dbms_scheduler job
exec dbms_scheduler.purge_log(log_history=>0, job_name=>'YOUR_JOB_NAME');
Zero is actually the default for log_history, so it doesn't have to be specified.
If you want to save some of the logs, you can specify for example that Oracle should only purge logs older than, say, two days:
exec dbms_scheduler.purge_log(log_history=>2, job_name=>'YOUR_JOB_NAME');
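If you want to verify what is left after the purge, the job's log entries can be inspected in DBA_SCHEDULER_JOB_LOG (a simple sketch; replace the job name with your own):

SELECT LOG_ID, LOG_DATE, OPERATION, STATUS
FROM   DBA_SCHEDULER_JOB_LOG
WHERE  JOB_NAME = 'YOUR_JOB_NAME'
ORDER BY LOG_DATE;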
Wednesday, July 2, 2014
How to work around ORA-01830: date format picture ends before converting entire input string
When querying a column of type TIMESTAMP WITH TIME ZONE, I wanted to omit the fractional seconds and the time zone information.
The query
SELECT STATE, START_DATE "START DATE" FROM DBA_SCHEDULER_JOBS WHERE OWNER = 'SYS' AND JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';
would return
STATE     START DATE
--------- --------------------------------
SCHEDULED 02.07.2014 13:51:16,558701 +02:00

I simply didn't want to display that level of information. When trying to format the string as follows:
TO_DATE(START_DATE,'DD.MM.YYYY HH24:MI:SS') "START DATE"
I ran into the error:
TO_DATE(START_DATE,'DD.MM.YYYY HH24:MI:SS') "START DATE"
            *
ERROR at line 2:
ORA-01830: date format picture ends before converting entire input string
Workaround:
In your session, set
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD.MM.YYYY HH24:MI:SS';
ALTER SESSION SET NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS';
Then cast the data type from TIMESTAMP WITH TIME ZONE to TIMESTAMP:
CAST(START_DATE AS TIMESTAMP)

and finally, convert to DATE:

TO_DATE(CAST(START_DATE AS TIMESTAMP),'DD.MM.YYYY HH24:MI:SS')

Put together:
SELECT STATE,
       TO_DATE(CAST(START_DATE AS TIMESTAMP),'DD.MM.YYYY HH24:MI:SS') "START DATE"
FROM   DBA_SCHEDULER_JOBS
WHERE  OWNER = 'SYS'
AND    JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';

Output:
STATE           START DATE
--------------- ------------------------------
SCHEDULED       02.07.2014 13:51:16

Another example that worked for me was against a column defined like this:
DATELASTUPDATED TIMESTAMP(6) NOT NULL,

In my result set I only wanted rows from 29.12.2016; the time of day was not interesting. Simply use the function TRUNC:
WHERE trunc(DateLastUpdated) = '29.12.2016'
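Note that the literal '29.12.2016' relies on an implicit conversion controlled by the session's NLS_DATE_FORMAT. If you want the filter to work regardless of session settings, a variant with an explicit conversion (just a sketch of the same predicate) would be:

WHERE TRUNC(DateLastUpdated) = TO_DATE('29.12.2016','DD.MM.YYYY')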
Tuesday, July 1, 2014
How to print the database incarnations from V$DATABASE_INCARNATION using a hierarchical query
I just realized that V$DATABASE_INCARNATION is a perfect candidate for using hierarchical queries for formatting.
Let's start with the current incarnation as the root node, then work our way down through the older incarnations:
SELECT INCARNATION#,
       LPAD(' ', LEVEL*2) || STATUS "STATUS",
       RESETLOGS_TIME,
       RESETLOGS_CHANGE#,
       PRIOR_INCARNATION#
FROM   V$DATABASE_INCARNATION
START WITH INCARNATION# = 4
CONNECT BY PRIOR PRIOR_INCARNATION# = INCARNATION#;

INCARNATION# STATUS               RESETLOGS_ RESETLOGS_CHANGE# PRIOR_INCARNATION#
------------ -------------------- ---------- ----------------- ------------------
           4   CURRENT            30.06.2014      136759423847                  3
           3     PARENT           07.05.2013      130807968161                  2
           2       PARENT         14.02.2011           1002970                  1
           1         PARENT       07.10.2010                 1                  0

The line
CONNECT BY PRIOR PRIOR_INCARNATION# = INCARNATION#
indicates how the recursion proceeds from one row to the next, and should be read as
"print the rows that has INCARNATION# equal to the previous row's PRIOR_INCARNATION#"
In reverse order, starting with the oldest incarnation and working our way up through the incarnation list:
SELECT INCARNATION#,
       LPAD(' ', LEVEL*2) || STATUS "STATUS",
       RESETLOGS_TIME,
       RESETLOGS_CHANGE#,
       PRIOR_INCARNATION#
FROM   V$DATABASE_INCARNATION
START WITH INCARNATION# = 1
CONNECT BY PRIOR_INCARNATION# = PRIOR INCARNATION#;

INCARNATION# STATUS               RESETLOGS_ RESETLOGS_CHANGE# PRIOR_INCARNATION#
------------ -------------------- ---------- ----------------- ------------------
           1   PARENT             07.10.2010                 1                  0
           2     PARENT           14.02.2011           1002970                  1
           3       PARENT         07.05.2013      130807968161                  2
           4         CURRENT      30.06.2014      136759423847                  3
Here, the line
CONNECT BY PRIOR_INCARNATION# = PRIOR INCARNATION#
should be read as
"print the next rows that has its PRIOR_INCARNATION# equal to the previous row's INCARNATION#"
I find the semantics of the recursion to be important; when you understand what you're actually selecting, it's so much easier to construct the SQL statement and get it right the first time around.
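For completeness: on 11gR2 and later, the same walk from the oldest incarnation can also be written with a recursive WITH clause. This is only a sketch of an alternative, not part of the hierarchical-query syntax above:

WITH incarnations (INCARNATION#, STATUS, RESETLOGS_TIME, PRIOR_INCARNATION#, LVL) AS (
  -- anchor: the oldest incarnation
  SELECT INCARNATION#, STATUS, RESETLOGS_TIME, PRIOR_INCARNATION#, 1
  FROM   V$DATABASE_INCARNATION
  WHERE  INCARNATION# = 1
  UNION ALL
  -- recursive step: the row whose PRIOR_INCARNATION# points back to the previous row
  SELECT d.INCARNATION#, d.STATUS, d.RESETLOGS_TIME, d.PRIOR_INCARNATION#, i.LVL + 1
  FROM   V$DATABASE_INCARNATION d
  JOIN   incarnations i ON d.PRIOR_INCARNATION# = i.INCARNATION#
)
SELECT INCARNATION#, LPAD(' ', LVL*2) || STATUS "STATUS", RESETLOGS_TIME, PRIOR_INCARNATION#
FROM   incarnations;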
Monday, June 30, 2014
log_archive_format default value
Having trouble finding out why the value of the parameter LOG_ARCHIVE_FORMAT does not seem to apply to your database?
From Oracle 10g and onwards, the default value for LOG_ARCHIVE_FORMAT is %t_%S_%r.dbf
However, this parameter only makes a difference if you are NOT using a "Fast Recovery Area" (in version 10, Oracle called the same thing a "Flash Recovery Area").
If you have defined an FRA by setting the parameters db_recovery_file_dest and db_recovery_file_dest_size, the archived logs will automatically be named according to the OMF (Oracle Managed Files) standard, and the value of LOG_ARCHIVE_FORMAT will be ignored.
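To quickly check whether an FRA is configured and what LOG_ARCHIVE_FORMAT is currently set to, you can run (a minimal sketch, using SQL*Plus):

show parameter db_recovery_file_dest
show parameter log_archive_format

-- or query the FRA usage directly:
SELECT NAME, SPACE_LIMIT, SPACE_USED
FROM   V$RECOVERY_FILE_DEST;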
An example of an OMF managed file in the FRA:
/fra/PRODDB01/archivelog/2014_06_30/o1mv_1_1_9vxzch5_.arc
From the documentation:
"The fast recovery area is an Oracle Database managed space that can be used to hold RMAN disk backups, control file autobackups and archived redo log files. The files placed in this location are maintained by Oracle Database and the generated file names are maintained in Oracle Managed Files (OMF) format."
At first glance, letting the LOG_ARCHIVE_FORMAT default to such a poor naming standard may seem like a bad idea, but I guess it illustrates the point of defining an FRA vs. using conventional archive log destinations.
Wednesday, June 25, 2014
How to gather dictionary statistics and fixed-objects statistics
Use the procedures
dbms_stats.gather_dictionary_stats
and
dbms_stats.gather_fixed_objects_stats
Examples:
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS (
     estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
    ,method_opt       => 'FOR ALL COLUMNS SIZE AUTO'
    ,degree           => NULL
    ,cascade          => DBMS_STATS.AUTO_CASCADE
    ,granularity      => 'AUTO'
    ,no_invalidate    => DBMS_STATS.AUTO_INVALIDATE);
END;
/

BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
Used in a simple shell script:
#!/usr/bin/ksh
# Gather dictionary and fixed objects stats as recommended by Oracle
cat << EoF > ${SCRIPT_BASE}/sql/gather_dict_stats.sql
alter session set nls_date_format='DD-MM-YYYY HH24:MI:SS';
spool ${SCRIPT_BASE}/log/gather_dict_stats.log append
set timing on
select 'Start time: ' || sysdate as "startime" from dual;
execute dbms_stats.gather_dictionary_stats(estimate_percent=>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=>'FOR ALL COLUMNS SIZE AUTO',degree=>DBMS_STATS.DEFAULT_DEGREE,granularity=>'ALL',cascade=>DBMS_STATS.AUTO_CASCADE,options=>'GATHER AUTO',no_invalidate=>DBMS_STATS.AUTO_INVALIDATE);
execute dbms_stats.gather_fixed_objects_stats(NULL);
select 'End time: ' || sysdate as "endtime" from dual;
exit
EoF

sqlplus / as sysdba @${SCRIPT_BASE}/sql/gather_dict_stats.sql
rm -r ${SCRIPT_BASE}/sql/gather_dict_stats.sql
exit
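If you want to confirm afterwards that the statistics were actually gathered, one possibility (my own suggestion, not taken from the sources below) is to look at the optimizer statistics operation history:

SELECT OPERATION, START_TIME, END_TIME
FROM   DBA_OPTSTAT_OPERATIONS
WHERE  OPERATION LIKE 'gather%stats'   -- e.g. gather_dictionary_stats
ORDER BY START_TIME DESC;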
A good source for further reading: Maria Colgan's blog