The difference is mainly that the orapki tool deals with certificates rather than the wallet itself, while mkstore is more of a tool for administering privileged users and their passwords, so that you can set up connections without exposing passwords in your scripts.
Can both of these tools be used to manage my wallets?
Yes.
From MOS Doc ID 2044185.1 "What is an Oracle Wallet?":
If you configure TDE, the database creates the wallet for you when you issue the ALTER SYSTEM command to initialize TDE. The other tools to create and inspect wallets are Oracle Wallet Manager (owm), which is a GUI tool, and the command-line tools orapki, for setting up certificates, and mkstore, better suited to storing so-called secret store entries like the user credentials mentioned above. The tools can often be used interchangeably: for example, if you create a wallet for TDE using SQL in the database, you can later inspect its contents using mkstore.
What exactly is the orapki tool?
From the latest Oracle documentation:
The orapki utility manages public key infrastructure (PKI) elements, such as wallets and certificate revocation lists, from the command line.
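As an illustration, here are some common orapki invocations (the wallet directory and certificate file are example paths, and the commands assume an Oracle client or server installation with orapki on the PATH):

```
# Create a new wallet with auto-login enabled
orapki wallet create -wallet /u01/app/oracle/wallet -auto_login

# Display the contents of a wallet (certificates and trusted certificates)
orapki wallet display -wallet /u01/app/oracle/wallet

# Add a trusted certificate to the wallet
orapki wallet add -wallet /u01/app/oracle/wallet -trusted_cert -cert /tmp/ca.crt
```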
What exactly is the mkstore tool?
From the latest Oracle Documentation:
The mkstore command-line utility manages credentials from an external password store.
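As a sketch, typical mkstore commands for managing a secure external password store look like this (the wallet path, TNS alias and credentials below are examples):

```
# Create a wallet with a secret store
mkstore -wrl /u01/app/oracle/wallet -create

# Add a credential for the TNS alias testdb01
mkstore -wrl /u01/app/oracle/wallet -createCredential testdb01 scott tiger

# List the stored credentials
mkstore -wrl /u01/app/oracle/wallet -listCredential
```

With the credential in place and WALLET_LOCATION set in sqlnet.ora, a script can connect without a password on the command line, for example: sqlplus /@testdb01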
What exactly is a wallet?
A wallet is a password-protected container that is used to store authentication and signing credentials, including private keys, certificates, and trusted certificates needed by SSL.
It can be stored directly on the server, wherever suits the DBA. The path must be specified in the client's sqlnet.ora file, using the WALLET_LOCATION directive.
For an example of how to set up a wallet, see this post.
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to", since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Friday, June 17, 2022
How to solve ORA-17628: Oracle error 1031 returned by remote Oracle server ORA-01031: insufficient privileges when cloning a non-cdb oracle instance to a PDB
When attempting to clone my database testdb01, a normal non-CDB database, into a CDB and convert it to a PDB, I hit a permission error. I found the solution on Oracle Support's web site, Doc ID 2485839.1.
Prior to this error, I had set up a database link in my CDB:
SYS@cdb>SQL>create database link noncdb connect to system identified by mypassword using 'testdb01.mydomain.no';

I tested the database link; it worked fine:
SYS@cdb>SQL>select host_name from v$instance@noncdb;

HOST_NAME
--------------------------------------------------
mynoncdbserver.mydomain.no

I tried to create the pluggable database, using the appropriate file destination paths already created:
create pluggable database testdb01 from non$cdb@noncdb
file_name_convert=('/data1/oradata/testdb01/','/data1/oradata/cdb/testdb01/',
                   '/data2/oradata/testdb01/','/data2/oradata/cdb/testdb01/');

This errors out:
ORA-17628: Oracle error 1031 returned by remote Oracle server
ORA-01031: insufficient privileges

To solve the error, simply log on to your non-CDB database as a sysdba user and grant the privilege "create pluggable database" to the user you are using for the copy (in my case, SYSTEM):
grant create pluggable database to system;

Try the create pluggable database command again, and it succeeds.
Tuesday, June 14, 2022
How to alter a flashback data archive that needs additional quota
SELECT F.FLASHBACK_ARCHIVE_NAME, F.TABLESPACE_NAME, F.QUOTA_IN_MB,
       (SELECT ROUND(SUM(S.BYTES)/1024/1024/1024)
          FROM DBA_SEGMENTS S
         WHERE S.TABLESPACE_NAME = F.TABLESPACE_NAME) "OCCUPIED_GB"
  FROM DBA_FLASHBACK_ARCHIVE_TS F;

FLASHBACK_ARCHIVE_NAME | TABLESPACE_NAME | QUOTA_IN_MB | OCCUPIED_GB |
---|---|---|---|
MY_FDA | FDA | 20480 | 19 |
If you've reached the quota, set a larger one. Since the existing quota here is already 20480 MB (20 GB), the new quota must exceed that, for example 40 GB:

ALTER FLASHBACK ARCHIVE MY_FDA MODIFY TABLESPACE FDA QUOTA 40G;
Tuesday, May 31, 2022
How to solve ORA-65035: unable to create pluggable database from PDB$SEED
The error means:
oerr ora 65035
65035, 00000, "unable to create pluggable database from %s"
// *Cause:  An attempt was made to clone a pluggable database that did not have
//          local undo enabled.
// *Action: Enable local undo for the PDB and retry the operation.

So let's do that: enable local undo in our CDB, so that we can create new PDBs from the PDB$SEED container:
SYS@cdb01>SQL>show con_name

CON_NAME
------------------------------
CDB$ROOT

SYS@cdb01>SQL>alter database local undo on;
alter database local undo on
*
ERROR at line 1:
ORA-65192: database must be in UPGRADE mode for this operation
SYS@cdb01>SQL>shutdown
SYS@cdb01>SQL>startup upgrade
SYS@cdb01>SQL>ALTER DATABASE LOCAL UNDO ON;

Database altered.

SYS@cdb01>SQL>shutdown immediate
SYS@cdb01>SQL>startup

Verify that local undo is enabled:
column property_name format a30
column property_value format a30
select property_name, property_value
from database_properties
where property_name = 'LOCAL_UNDO_ENABLED';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
LOCAL_UNDO_ENABLED             TRUE

You can now create your PDB:
SYS@cdb01>SQL>create pluggable database veg1 admin user pdbadmin identified by mypassword file_name_convert=('/data/pdbseed','/data/veg1');

Pluggable database created.
Wednesday, May 11, 2022
Find available directories on your server
set lines 200
col directory_name format a30
col directory_path format a60
select directory_name, directory_path
from dba_directories;
exit
Tuesday, May 3, 2022
How to take a standby database out of a data guard configuration and convert it to a standalone read/write database using the Data Guard Broker
Deactivate the Data Guard Broker configuration:
dgmgrl /
show configuration
# disable log shipping
edit database 'stdb' SET STATE='APPLY-OFF';
disable configuration;

Check whether the standby database is opened in READ ONLY WITH APPLY, READ ONLY or MOUNTED mode:
select open_mode from v$database;

If in READ ONLY WITH APPLY or READ ONLY mode, close the database:
alter database close;

Often, a database that has been opened in READ ONLY mode still has active sessions. In such cases, it may be necessary to shut the database down and open it in mount mode:
shutdown immediate
startup mount

Activate the standby database:
ALTER DATABASE ACTIVATE STANDBY DATABASE;

Verify that the status of the control file has changed from "STANDBY" to "CURRENT":
select controlfile_type from v$database;

CONTROL
-------
CURRENT

Open the database:
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
ALTER DATABASE OPEN;

If not done by the broker, reset the value of log_archive_dest_2:
alter system set log_archive_dest_2='';

Your database should now be standalone and out of the Data Guard configuration.
To avoid errors related to redo shipping, make sure that your old primary is no longer attempting to ship redo log information to the old standby database, which is now out of the Data Guard configuration.
On the old primary server, set the relevant log_archive_dest_n parameter to DEFER:
alter system set log_archive_dest_state_2=defer scope=both;
The cause for and solution to the error message "Database mount ID mismatch" in your previously configured standby database
If you have a previously configured standby database and have converted it to a free-standing database, no longer part of a Data Guard setup, you may see errors like this in the alert log:
2022-05-03T12:52:33.896905+02:00
RFS[1332]: Assigned to RFS process (PID:128748)
RFS[1332]: Database mount ID mismatch [0xb40d4ed8:0xb42c30e2] (3020771032:3022794978)
RFS[1332]: Not using real application clusters

Reason:
Even though you have used the Data Guard Broker to stop log shipping and activated the standby database (in effect making it read-write), the broker will not stop the previously configured primary database from shipping logs to its previously configured standby destination.
Solution:
Cut off the log shipping from the previously configured primary database completely by either
1) changing the value of log_archive_dest_state_2 from enabled to defer:
alter system set log_archive_dest_state_2=defer scope=both;

or by
2) removing the value of log_archive_dest_2 altogether:
alter system set log_archive_dest_2='' scope=both;