Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Tuesday, March 29, 2022
How to connect via sqlplus with the connect string
To connect as sys using the entire connect string, use the following syntax:
sqlplus /nolog

SQL> connect sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=proddb01.mydomain.com)(PORT=1534))(CONNECT_DATA=(SERVICE_NAME=proddb01.mydomain.com)(INSTANCE_NAME=proddb01)(SERVER=DEDICATED)))' as sysdba
Enter password:
Connected.

SQL> select name, database_role from v$database;

NAME      DATABASE_ROLE
--------- ----------------
proddb01  PRIMARY
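As a side note, the same connection can usually be expressed more compactly with Oracle's EZConnect syntax, host:port/service_name. A minimal sketch of building that string from the pieces of the long descriptor (my own illustration, not part of the original example):

```python
# Build the EZConnect equivalent (host:port/service_name) of a full
# TNS descriptor's address and service data.
def ezconnect(host: str, port: int, service_name: str) -> str:
    return f"{host}:{port}/{service_name}"

print(ezconnect("proddb01.mydomain.com", 1534, "proddb01.mydomain.com"))
# proddb01.mydomain.com:1534/proddb01.mydomain.com
```

With the values above, the connection could then be made as `sqlplus sys@'proddb01.mydomain.com:1534/proddb01.mydomain.com' as sysdba`.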
Thursday, March 17, 2022
How to solve message from broker Warning: standby redo logs not configured for thread 1 on primary
DGMGRL> validate database 'proddb01';
Database Role: Primary database
Ready for Switchover: Yes
Flashback Database Status:
tdridbt1: Off
Managed by Clusterware:
tdridbt1: NO
Warning: Ensure primary database's StaticConnectIdentifier property
is configured properly so that the primary database can be restarted
by DGMGRL after switchover
DGMGRL> validate database 'stby01';
Database Role: Physical standby database
Primary Database: proddb01
Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)
Flashback Database Status:
proddb01: Off
stby01: Off
Managed by Clusterware:
proddb01: NO
stby01: NO
Warning: Ensure primary database's StaticConnectIdentifier property
is configured properly so that the primary database can be restarted
by DGMGRL after switchover
Current Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(proddb01) (stby01)
1 3 2 Insufficient SRLs
Future Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(stby01) (proddb01)
1 3 0 Insufficient SRLs
Warning: standby redo logs not configured for thread 1 on proddb01
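The "Insufficient SRLs" status follows from Oracle's documented rule of thumb: each thread should have one more standby redo log group than it has online redo log groups. A trivial sketch of that rule, applied to the numbers in the output above:

```python
# Data Guard rule of thumb: per thread,
# standby redo log groups = online redo log groups + 1.
def recommended_srl_groups(online_redo_groups: int) -> int:
    return online_redo_groups + 1

# The validate output shows 3 online redo log groups but only 2 SRL
# groups on the standby side, hence "Insufficient SRLs".
print(recommended_srl_groups(3))  # 4
```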
The solution was found in Doc ID 1956103.1, "Warning: standby redo logs not configured for thread". This is how I corrected the situation:
On the primary, check the status of the standby redo log files:
set lines 200
col member format a50
select s.group#, s.thread#, s.sequence#, s.archived, s.status, f.member, f.type
from v$standby_log s, v$logfile f
where f.type = 'STANDBY'
and s.group# = f.group#;

GROUP#  THREAD#  SEQUENCE#  ARC  STATUS      MEMBER                                TYPE
------  -------  ---------  ---  ----------  ------------------------------------  -------
     4        0          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo01.log  STANDBY
     5        0          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo02.log  STANDBY
     6        0          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo03.log  STANDBY
     7        0          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo04.log  STANDBY

Indeed, there are no standby redo logs assigned to thread 1.
First, set standby_file_management to MANUAL on both primary and standby, if not already done.
alter system set standby_file_management=MANUAL;
Still on the primary, drop the standby redo logs:
alter database drop logfile group 4;
Database altered.
alter database drop logfile group 5;
Database altered.
alter database drop logfile group 6;
Database altered.
alter database drop logfile group 7;
Database altered.

Still on the primary, add new standby redo logs:
alter database add standby logfile thread 1 group 4('/u03/oradata/proddb01/stb_redo01.log') size 2048M REUSE;
alter database add standby logfile thread 1 group 5('/u03/oradata/proddb01/stb_redo02.log') size 2048M REUSE;
alter database add standby logfile thread 1 group 6('/u03/oradata/proddb01/stb_redo03.log') size 2048M REUSE;
alter database add standby logfile thread 1 group 7('/u03/oradata/proddb01/stb_redo04.log') size 2048M REUSE;

Run the query above again, and you should see a thread number assigned to each group:
GROUP#  THREAD#  SEQUENCE#  ARC  STATUS      MEMBER                                TYPE
------  -------  ---------  ---  ----------  ------------------------------------  -------
     4        1          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo01.log  STANDBY
     5        1          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo02.log  STANDBY
     6        1          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo03.log  STANDBY
     7        1          0  YES  UNASSIGNED  /u03/oradata/proddb01/stb_redo04.log  STANDBY

On the standby database, stop managed recovery:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

Check the status:
GROUP#  THREAD#  SEQUENCE#  ARC  STATUS      MEMBER                              TYPE
------  -------  ---------  ---  ----------  ----------------------------------  -------
     4        1         46  YES  ACTIVE      /u03/oradata/stby01/stb_redo01.log  STANDBY
     5        1          0  YES  UNASSIGNED  /u03/oradata/stby01/stb_redo02.log  STANDBY
     6        0          0  YES  UNASSIGNED  /u03/oradata/stby01/stb_redo03.log  STANDBY
     7        0          0  YES  UNASSIGNED  /u03/oradata/stby01/stb_redo04.log  STANDBY

Drop the two standby redo log groups that have not been assigned a thread number:
alter database drop logfile group 6;
Database altered.
alter database drop logfile group 7;
Database altered.

Add two new groups:
alter database add standby logfile thread 1 group 6('/u03/oradata/stby01/stb_redo03.log') size 2048M REUSE;
Database altered.
alter database add standby logfile thread 1 group 7('/u03/oradata/stby01/stb_redo04.log') size 2048M REUSE;
Database altered.

Start managed recovery again:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Database altered.

Verify by running the same query once more:
GROUP#  THREAD#  SEQUENCE#  ARC  STATUS      MEMBER                              TYPE
------  -------  ---------  ---  ----------  ----------------------------------  -------
     4        1         46  YES  ACTIVE      /u03/oradata/stby01/stb_redo01.log  STANDBY
     5        1          0  NO   UNASSIGNED  /u03/oradata/stby01/stb_redo02.log  STANDBY
     6        1          0  YES  UNASSIGNED  /u03/oradata/stby01/stb_redo03.log  STANDBY
     7        1          0  YES  UNASSIGNED  /u03/oradata/stby01/stb_redo04.log  STANDBY

The warning is now gone from the output of the Data Guard broker:
DGMGRL> validate database 'stby01';
Database Role: Physical standby database
Primary Database: proddb01
Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)
Flashback Database Status:
proddb01: Off
stby01: Off
Managed by Clusterware:
proddb01: NO
stby01: NO
Warning: Ensure primary database's StaticConnectIdentifier property
is configured properly so that the primary database can be restarted
by DGMGRL after switchover
Finally, remember to adjust standby_file_management to AUTO on both primary and standby databases:
alter system set standby_file_management=AUTO;
How to use active database duplication for creating a physical standby
When you use active database duplication for creating a physical standby database, make sure of the following:
1. Keep the pfile used to start the auxiliary instance to a minimum. Only the following lines are needed:
db_block_size=8192
db_name='proddb01'

Of course, the db_name must be identical to the db_name of the primary database.
2. In your duplication script, make sure the spfile contains a correct value for db_unique_name:
run{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate auxiliary channel stby type disk;
duplicate target database for standby from active database
spfile
SET db_unique_name='stby01'
SET db_domain='mydomain.no'
SET FAL_CLIENT='stby01'
SET FAL_SERVER='proddb01'
SET log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stby01'
SET log_archive_dest_2=''
SET control_files='/data1/oradata/stby01/control01.ctl','/data2/fra/stby01/control02.ctl'
SET STANDBY_FILE_MANAGEMENT='MANUAL'
SET LOG_ARCHIVE_MAX_PROCESSES='2'
SET local_listener='stby01.skead.no'
SET dg_broker_config_file1='/sw/oracle/product/12201/dbs/dr1stby01.dat'
SET dg_broker_config_file2='/sw/oracle/product/12201/dbs/dr2stby01.dat'
SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(stby01,proddb01)'
SET diagnostic_dest='/sw/oracle'
SET db_recovery_file_dest='/data2/fra'
SET db_file_name_convert='/data1/oradata/proddb01','/data1/oradata/stby01'
SET log_file_name_convert='/data3/oradata/proddb01','/data3/oradata/stby01'
NOFILENAMECHECK;
}
Note that the directive "spfile" used right before the individual SET commands does not mean "create spfile". It means "fetch spfile from primary". The SET commands, however, will be written to your local spfile which will then be used to start the auxiliary instance once more, before the actual cloning starts.
When the duplication has finished, the value of the "db_name" parameter will be identical to that of the primary database, since we are fetching the spfile from your primary database server over the network.
Friday, March 11, 2022
What constitutes "default auditing" under Unified Auditing?
Under Mixed-mode auditing and Pure Unified Auditing, the policies ORA_SECURECONFIG and ORA_LOGON_FAILURES are audited by default.
From the documentation:
ORA_SECURECONFIG audits the same default audit settings from Oracle Database Release 11g.
It tracks the use of a number of privileges such as ALTER ANY TABLE, GRANT ANY PRIVILEGE, and CREATE USER. The actions that it tracks include ALTER USER, CREATE ROLE, LOGON, and other commonly performed activities.
There are a couple of other policies that are not enabled by default but certainly worth auditing. These are:
ORA_DATABASE_PARAMETER audits commonly used Oracle Database parameter settings: ALTER DATABASE, ALTER SYSTEM, and CREATE SPFILE.
ORA_ACCOUNT_MGMT audits the commonly used user account and privilege settings: CREATE USER, ALTER USER, DROP USER, CREATE ROLE, DROP ROLE, ALTER ROLE, SET ROLE, GRANT, and REVOKE.
Enabling these is as simple as executing the following SQL against the database as a privileged user:
AUDIT POLICY ORA_DATABASE_PARAMETER;
AUDIT POLICY ORA_ACCOUNT_MGMT;
How to create a partial index on a table in PostgreSQL
I was asked by one of my customers to advise on how to create an index for a new column called "sk_type", of datatype varchar, with only 3 distinct values in addition to NULL.
Let's call them STRING1, STRING2, STRING3.
The table is already partitioned on the column refyear. Here is the table DDL:
CREATE TABLE event (
    id bigint NOT NULL DEFAULT nextval('hendelselager.hendelse_id_seq'::regclass),
    skpl character varying(8) COLLATE pg_catalog."default",
    refyear integer NOT NULL,
    pnum bigint NOT NULL,
    ksystem character varying(128) COLLATE pg_catalog."default" NOT NULL,
    category character varying(128) COLLATE pg_catalog."default" NOT NULL,
    event character varying(128) COLLATE pg_catalog."default" NOT NULL,
    tstamp timestamp without time zone NOT NULL,
    ip_id character varying(128) COLLATE pg_catalog."default" NOT NULL,
    details jsonb,
    CONSTRAINT e_pkey PRIMARY KEY (refyear, event, id),
    CONSTRAINT uc_e01 UNIQUE (refyear, pnum, ksystem, category, event, ip_id)
) PARTITION BY RANGE (refyear);

The distribution of values was expected to be very skewed right from the start:
* STRING1 - 95%
* STRING2 - 5%
* STRING3 - < 0.5%
* NULL - < 0.1%
In the documentation I found that perhaps a partial index would be ideal in this situation. A partial index is defined as
an index built over a subset of a table; the subset is defined by a conditional expression (called the predicate of the partial index). The index contains entries only for those table rows that satisfy the predicate.
According to the documentation, a partial index could be useful in cases where you want to avoid indexing common values:
Since a query searching for a common value (one that accounts for more than a few percent of all the table rows) will not use the index anyway, there is no point in keeping those rows in the index at all. This reduces the size of the index, which will speed up those queries that [actually] do use the index. It will also speed up many table update operations because the index does not need to be updated in all cases
In other words, we would only index rows which have column value different from 'STRING1'.
First, add the new column:
ALTER TABLE event ADD sk_type character varying(8);

Next, create the partial index. Here is the syntax I used in a sandbox environment:
CREATE INDEX sk_type_idx ON event (refyear,sk_type)
WHERE NOT (sk_type = 'STRING1');
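One subtlety worth noting (my addition): with the predicate NOT (sk_type = 'STRING1'), rows where sk_type IS NULL are not indexed either, because under SQL's three-valued logic the comparison evaluates to NULL rather than true. A small Python sketch simulating which part of the expected distribution actually lands in the index:

```python
# Simulate SQL three-valued logic for the partial-index predicate
# NOT (sk_type = 'STRING1'). None models SQL NULL; a row is indexed
# only when the predicate is strictly true.
def is_indexed(sk_type):
    if sk_type is None:   # NULL = 'STRING1' -> NULL; NOT NULL -> NULL
        return False      # not true, so the row is excluded from the index
    return not (sk_type == 'STRING1')

# Approximate shares from the expected distribution in the post.
distribution = {'STRING1': 95.0, 'STRING2': 5.0, 'STRING3': 0.5, None: 0.1}
indexed_pct = sum(pct for val, pct in distribution.items() if is_indexed(val))
print(indexed_pct)  # 5.5 -> only about 5.5% of the rows end up in the index
```

If NULL rows should also be findable through the index, the predicate could instead be written as `sk_type IS DISTINCT FROM 'STRING1'`, which treats NULL as a comparable value.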
This approach comes with a caveat, which may or may not be acceptable:
Observe that this kind of partial index requires that the common values be predetermined, so such partial indexes are best used for data distributions that do not change. The indexes can be recreated occasionally to adjust for new data distributions, but this adds maintenance effort.
The official PostgreSQL 11 documentation can be found here
How to use adrci to show all messages related to shutdown and startup of a database
These commands will give you all messages related to a shutdown and a startup of an Oracle database, respectively:
adrci> show alert -p "MESSAGE_GROUP LIKE 'shutdown'"
adrci> show alert -p "MESSAGE_GROUP LIKE 'startup'"

Example output for a shutdown:
Instance shutdown complete (OS id: 4058731)
2022-03-10 18:00:18.042000 +01:00
License high water mark = 863
2022-03-10 18:00:19.115000 +01:00
Instance shutdown complete (OS id: 3611031)
2022-03-10 18:00:33.286000 +01:00
Shutting down instance: further logons disabled
2022-03-10 18:00:36.357000 +01:00
License high water mark = 1
Shutting down archive processes
Archiving is disabled
2022-03-10 18:00:37.519000 +01:00
Shutting down archive processes
Archiving is disabled
2022-03-10 18:00:43.753000 +01:00
Instance shutdown complete (OS id: 3611306)

For the startup sequence, adrci will show you all the details of the startup, in other words what you normally see if you put a tail on the alert log. This is rather lengthy output, so I am shortening it here. It should be familiar to most DBAs:
2022-03-10 18:00:22.413000 +01:00
Oracle instance running with ODM in PGA: Oracle Direct NFS ODM Library Version 6.0
2022-03-10 18:00:25.600000 +01:00
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR
ILAT =249
LICENSE_MAX_USERS = 0
SYS auditing is enabled
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.14.0.0.0.
ORACLE_HOME: /sw/oracle/product/19c
System name: Linux
Node name: myserver.mydomain.com
Release: 4.18.0-348.12.2.el8_5.x86_64
Version: #1 SMP Mon Jan 17 07:06:06 EST 2022
Machine: x86_64
Using parameter settings in server-side spfile /sw/oracle/product/19c/dbs/spfiletestdb01.ora
System parameters with non-default values:
etc etc

Since the output is very long, if you simply want to check for an indication of a database startup you could use this command instead:
adrci> show alert -p "message_text like '%Starting ORACLE instance%'"

which will give you a shorter list to examine, in my case showing that the database was restarted February 28th and March 10th:
2022-02-28 07:46:54.505000 +01:00
Starting ORACLE instance (restrict) (OS id: 4058595)
2022-02-28 07:47:39.567000 +01:00
Starting ORACLE instance (normal) (OS id: 3300)
2022-03-10 18:00:22.421000 +01:00
Starting ORACLE instance (restrict) (OS id: 3611160)
2022-03-10 18:02:06.831000 +01:00
Starting ORACLE instance (normal) (OS id: 3429)
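If you prefer to post-process a saved copy of the alert log rather than query it through adrci, the same filtering can be done with a few lines of script. A minimal sketch (my own illustration, not an adrci feature) that pairs each "Starting ORACLE instance" message with the timestamp line that precedes it:

```python
import re

# In the alert-log format shown above, a timestamp line precedes each
# message line. Collect the timestamp for every startup message.
def startup_times(alert_text: str) -> list:
    ts_re = re.compile(r'^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}')
    times = []
    last_ts = None
    for line in alert_text.splitlines():
        if ts_re.match(line):
            last_ts = line.strip()
        elif 'Starting ORACLE instance' in line and last_ts:
            times.append(last_ts)
    return times

sample = """2022-02-28 07:46:54.505000 +01:00
Starting ORACLE instance (restrict) (OS id: 4058595)
2022-03-10 18:00:22.421000 +01:00
Starting ORACLE instance (restrict) (OS id: 3611160)"""
print(startup_times(sample))
# ['2022-02-28 07:46:54.505000 +01:00', '2022-03-10 18:00:22.421000 +01:00']
```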
Thursday, March 10, 2022
How to create a unified auditing policy that captures logons from privileged users
By default, only unsuccessful logon attempts are audited under Unified Auditing.
The policy used for this purpose is ORA_LOGON_FAILURES, and it will audit both privileged and non-privileged users' attempts to logon to the database.
If you want to audit all privileged users that have successfully logged onto the database, you need to create a new policy.
Here is how:
CREATE AUDIT POLICY PRIVILEGED_USER_LOGONS
ACTIONS LOGON
WHEN 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') IN (''SYS'',''SYSTEM'')'
EVALUATE PER SESSION;

Start using it:
AUDIT POLICY PRIVILEGED_USER_LOGONS;

The resulting audit record can be found immediately afterwards, with this query against UNIFIED_AUDIT_TRAIL:
select os_username "os user",
       userhost "host",
       authentication_type "authtype",
       dbusername "db user",
       client_program_name "client",
       event_timestamp "time",
       action_name "action",
       system_privilege_used "sys priv",
       unified_audit_policies "aud pol"
from UNIFIED_AUDIT_TRAIL
where event_timestamp = (select max(event_timestamp) from UNIFIED_AUDIT_TRAIL)
order by event_timestamp desc;

Result:
| os user | host | authtype | db user | client | time | action | sys priv | aud pol |
|---|---|---|---|---|---|---|---|---|
| oracle | myserver.mydomain.com | (TYPE=(OS));(CLIENT ADDRESS=((PROTOCOL=beq)(HOST=192.168.0.34))); | SYS | sqlplus@myserver.mydomain.com (TNS V1-V3) | 10.03.2022 11:38:14,972147 | LOGON | SYSDBA | PRIVILEGED_USER_LOGONS |
How to use dbms_metadata to generate DDL for a Unified Auditing Policy
SQL> set long 5000
SQL> select DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES') from dual;

DBMS_METADATA.GET_DDL('AUDIT_POLICY','ORA_LOGON_FAILURES')
--------------------------------------------------------------------------------
CREATE AUDIT POLICY "ORA_LOGON_FAILURES" ACTIONS LOGON
Wednesday, March 2, 2022
How to check when a parameter was changed
Thanks to Sandeep Singh for providing this very useful information.
The following query will reveal any changes to a particular parameter:
select instance_number instance, snap_id, time, parameter_name, old_value, new_value
from (
  select a.snap_id,
         to_char(end_interval_time,'DD-MON-YY HH24:MI') TIME,
         a.instance_number,
         parameter_name,
         value new_value,
         lag(parameter_name,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_pname,
         lag(value,1) over (partition by parameter_name, a.instance_number order by a.snap_id) old_value,
         decode(substr(parameter_name,1,2),'__',2,1) calc_flag
  from dba_hist_parameter a, dba_hist_snapshot b, v$instance v
  where a.snap_id=b.snap_id
  and a.instance_number=b.instance_number
  and parameter_name like nvl('&parameter_name',parameter_name)
  and a.instance_number like nvl('&instance_number',v.instance_number)
)
where new_value != old_value
order by 1,2;

In my case the output was:
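The core of the query is the LAG analytic function: for each parameter, the value in every AWR snapshot is compared with the value in the previous snapshot, and only rows where the two differ are reported. The same idea in a small Python sketch, using made-up snapshot data (my own illustration):

```python
# Mimic LAG(value) OVER (PARTITION BY parameter_name ORDER BY snap_id):
# report a change whenever a parameter's value differs from its value
# in the previous snapshot.
def parameter_changes(history):
    """history: list of (snap_id, parameter_name, value), ordered by snap_id."""
    changes = []
    last_value = {}  # parameter_name -> value seen in the previous snapshot
    for snap_id, name, value in history:
        if name in last_value and last_value[name] != value:
            changes.append((snap_id, name, last_value[name], value))
        last_value[name] = value
    return changes

history = [
    (52048, 'optimizer_adaptive_statistics', 'TRUE'),
    (52049, 'optimizer_adaptive_statistics', 'TRUE'),
    (52050, 'optimizer_adaptive_statistics', 'FALSE'),  # changed in this snapshot
]
print(parameter_changes(history))
# [(52050, 'optimizer_adaptive_statistics', 'TRUE', 'FALSE')]
```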
| INSTANCE | SNAP_ID | TIME | PARAMETER_NAME | OLD_VALUE | NEW_VALUE |
|---|---|---|---|---|---|
| 1 | 52050 | 28-FEB-22 14:00 | optimizer_adaptive_statistics | TRUE | FALSE |