Monday, March 16, 2020

How to compress an existing table online



The ONLINE keyword makes it simple to compress an existing table online:
alter table scott.emp
move tablespace new_tablespace
row store compress advanced
online;

If you omit the tablespace clause, Oracle rebuilds the table in the tablespace it is already residing in, and the table will be reorganized and compressed in place.
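A sketch of the in-place variant, using the same demo table (note that ROW STORE COMPRESS ADVANCED requires the Advanced Compression option to be licensed):

```sql
-- Rebuild and compress scott.emp in its current tablespace, online:
alter table scott.emp
move
row store compress advanced
online;
```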

How to modify a unified auditing policy to make exceptions based on login information


The audit policy had been created as follows:
create audit policy all_select_policy actions select;

Here is how you can alter an existing policy so that it makes an exception for sessions created by the user DBAADMIN:
alter audit policy ALL_SELECT_POLICY 
condition 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') NOT IN (''DBAADMIN'')' 
evaluate per Session;

The Oracle-supplied policy ORA_LOGON_FAILURES automatically audits all failed login attempts. You can alter it to exclude certain uninteresting connections, such as those from DBSNMP, like this:
alter audit policy ORA_LOGON_FAILURES
condition 'SYS_CONTEXT (''USERENV'',''CURRENT_USER'') NOT IN (''DBSNMP'')'
evaluate per session;
For more information about the SYS_CONTEXT function, check the official 12.2 documentation.
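To see the value the policy condition will compare against for your current session, you can query SYS_CONTEXT directly, for example:

```sql
-- Returns the user name the audit condition evaluates for this session:
select sys_context('USERENV','CURRENT_USER') as audited_user
from dual;
```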

Friday, March 13, 2020

Remember to delete archivelogs from obsolete incarnations after a flashback database operation



I have recently been supporting a customer by setting guaranteed restore points and using them to flash the database back on request, typically when they need to redeploy some code for their application.

It's easy to forget that every time you open the database with the RESETLOGS option, you create another incarnation of your database.
Here is what my flash recovery area looked like:
cd /fra/SALESDB/archivelogs
du -sh *
300M    2020_01_30
36G     2020_02_03
248M    2020_02_04
1.5G    2020_02_05
273M    2020_02_06
284M    2020_02_07
332M    2020_02_08
3.7G    2020_02_09
510G    2020_02_11
15G     2020_02_12
1.1G    2020_02_13
386M    2020_02_14
237M    2020_02_15
3.7G    2020_02_16
14G     2020_02_17
523G    2020_02_18
1.5G    2020_02_19
208M    2020_02_20
213M    2020_02_21
239M    2020_02_22
3.6G    2020_02_23
217M    2020_02_24
293M    2020_02_25
1.5G    2020_02_26
258M    2020_02_27
261M    2020_02_28
296M    2020_02_29
3.6G    2020_03_01
507G    2020_03_02
216M    2020_03_03
13G     2020_03_04
214M    2020_03_05
211M    2020_03_06
237M    2020_03_07
3.6G    2020_03_08
288M    2020_03_09
244M    2020_03_10
1.6G    2020_03_11
16G     2020_03_12
504G    2020_03_13


The latest incarnation was created on 12.03.2020:
RMAN> list incarnation of database;

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       SALESDB 1626528189       PARENT  270012427233 30.01.2020 11:39:03
2       2       SALESDB 1626528189       PARENT  270012433324 30.01.2020 11:51:46
3       3       SALESDB 1626528189       PARENT  270020068677 11.02.2020 11:00:20
4       4       SALESDB 1626528189       PARENT  270020175869 13.02.2020 12:02:52
5       5       SALESDB 1626528189       PARENT  270020238995 18.02.2020 10:50:25
6       6       SALESDB 1626528189       PARENT  270020500540 02.03.2020 13:51:53
7       7       SALESDB 1626528189       CURRENT 270020646622 12.03.2020 14:18:33

This situation caused the flash recovery area to fill, since Oracle wasn't able to delete archivelogs from the previous incarnation.

I solved this by deleting archivelogs from before the last incarnation was created.

First, list the logs you want to delete:
rman target / nocatalog log=list_archlogs.txt
RMAN> list archivelog all completed before "to_date('12.03.2020 14:18:33','DD.MM.YYYY HH24:MI:SS')";

When I checked the file, these were all older logs, residing in folders dated before the last incarnation was created.

I then deleted them as follows:
delete archivelog all completed before "to_date('12.03.2020 14:18:33','DD.MM.YYYY HH24:MI:SS')";

Usage of the flash recovery area has now dropped to 17%, and the file listing of /fra/SALESDB/archivelogs now shows mostly empty folders:
4.0K    2020_01_30
8.0K    2020_02_03
4.0K    2020_02_04
4.0K    2020_02_05
4.0K    2020_02_06
4.0K    2020_02_07
4.0K    2020_02_08
16K     2020_02_09
48K     2020_02_11
4.0K    2020_02_12
4.0K    2020_02_13
4.0K    2020_02_14
4.0K    2020_02_15
4.0K    2020_02_16
4.0K    2020_02_17
52K     2020_02_18
4.0K    2020_02_19
4.0K    2020_02_20
4.0K    2020_02_21
4.0K    2020_02_22
4.0K    2020_02_23
4.0K    2020_02_24
4.0K    2020_02_25
4.0K    2020_02_26
4.0K    2020_02_27
4.0K    2020_02_28
4.0K    2020_02_29
4.0K    2020_03_01
52K     2020_03_02
4.0K    2020_03_03
4.0K    2020_03_04
4.0K    2020_03_05
4.0K    2020_03_06
4.0K    2020_03_07
4.0K    2020_03_08
4.0K    2020_03_09
4.0K    2020_03_10
4.0K    2020_03_11
15G     2020_03_12
504G    2020_03_13

Thursday, March 12, 2020

Workaround for warning ORA-16714: the value of property LogFileNameConvert is inconsistent with the member setting



Warning reported by Data Guard:
oracle@oric-dbserver01:[vegdb01]# dgmgrl /
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Thu Mar 12 08:02:54 2020
Version 18.9.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "vegdb01"
Connected as SYSDG.
DGMGRL> show configuration;

Configuration - myConfig

  Protection Mode: MaxPerformance
  Members:
  vegdb01      - Primary database
    vegdb01_stby - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with member setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING   (status updated 10 seconds ago)

DGMGRL> show database vegdb01_stby

Database - vegdb01_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 143.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    vegdb01
      Warning: ORA-16714: the value of property LogFileNameConvert is inconsistent with the member setting

Database Status:
WARNING

Solved by setting the log_file_name_convert on the standby side.
Using dgmgrl:
edit DATABASE "vegdb01_stby" SET PROPERTY LogFileNameConvert =  " "," "; 
Or set it using sqlplus on the standby side; see this post for how to do that.
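A sketch of the sqlplus alternative, with hypothetical redo log paths; log_file_name_convert is a static parameter, so it has to be set in the spfile and the standby restarted:

```sql
-- On the standby; the two paths below are placeholders for your own locations:
alter system set log_file_name_convert='/redo/primary/','/redo/standby/' scope=spfile sid='*';
shutdown immediate
startup mount
alter database recover managed standby database disconnect from session;
```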

Wednesday, March 11, 2020

How to grant access to all tables in a schema in a PostgreSQL database


PostgreSQL offers a feature that Oracle currently does not: the ability to grant access rights to all objects in a specific schema with a single statement.
Here is how.

Connect to the database "klmdb" and grant the user JIM DML and query rights on all tables in the schema "klm":
postgres-# \connect klmdb
You are now connected to database "klmdb" as user "postgres".
klmdb=# GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA klm to JIM;
or
klmdb=# GRANT ALL ON ALL TABLES IN SCHEMA KLM TO JIM;
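Note that GRANT ... ON ALL TABLES IN SCHEMA only affects tables that already exist when the statement runs. To also cover tables created in the schema later, you can combine it with ALTER DEFAULT PRIVILEGES; a sketch, assuming future tables in "klm" will be created by the user postgres:

```sql
-- Privileges below are applied automatically to tables that postgres
-- creates in schema klm from now on:
ALTER DEFAULT PRIVILEGES IN SCHEMA klm
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO jim;
```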

Friday, March 6, 2020

How to describe a table in a different schema in PostgreSQL



First, list the databases on the server:
[postgres@oric-pg01.oric.no ~]$ psql
psql (11.6)
Type "help" for help.

postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
postgres=# \list
                                    List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |      Access privileges
-----------+----------+----------+------------+------------+------------------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/postgres                +
           |          |          |            |            | postgres=CTc/postgres       +
           |          |          |            |            | postgres_exporter=c/postgres
 kldb      | postgres | UTF8     | en_US.utf8 | en_US.utf8 | postgres=CTc/postgres       +
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres                 +
           |          |          |            |            | postgres=CTc/postgres
(4 rows)
Then, connect to the correct database
postgres=# \connect kldb
You are now connected to database "kldb" as user "postgres".

You can now describe any table in the database:
kldb=# \d kld.actions

                                          Table "kld.actions"
     Column     |           Type           | Collation | Nullable |                             Default
----------------+--------------------------+-----------+----------+-----------------------------------------------------------------
 id             | integer                  |           | not null | nextval('kld.action_id_seq'::regclass)
 qid            | character varying(11)    |           | not null |
 systemid       | character varying(20)    |           | not null |
 actiontype     | character varying(20)    |           | not null |
 relation       | character varying(50)    |           |          |
 messageheader  | character varying(50)    |           |          |
 rnumber        | character varying(13)    |           |          |
 versionID      | numeric                  |           |          |
 errormsg       | character varying(4000)  |           |          |
 occured_time   | timestamp with time zone |           | not null | now()
Indexes:
    "pk__actions" PRIMARY KEY, btree (id)
    "idx__actions_sk1" btree (partsid, systemid, hendelsestype)

You can also use the metacommand \dt or \dt+, like this:
prod=# \dt+ sales.*
                                       List of relations
  Schema   |                Name                | Type  |   Owner   |    Size    | Description
-----------+------------------------------------+-------+-----------+------------+-------------
 sales     | mytable1                           | table | jim       | 8192 bytes |
 sales     | mytable2                           | table | jim       | 8192 bytes |
 sales     | mytable3                           | table | jim       | 16 kB      |
 sales     | mytable4                           | table | jim       | 9757 MB    |
 sales     | mytable5                           | table | jim       | 312 GB     |
 (5 rows)


Thursday, February 27, 2020

How to install and configure a simple replication using Golden Gate 19.1 Classic Architecture



In this article I will show the steps to set up a very simple schema replication using the latest Oracle Golden Gate software, which is 19.1 at the time of writing. Although the Microservices Architecture has been released, I will be implementing an Oracle-to-Oracle one-way replication, using the "Classic Architecture".

Servers involved:

Source server: db-server1.oric.no
Source database: td1
Oracle RDBMS version 12.2.0.1
Oracle RDBMS installation path: /u01/oracle/product/12201
Software owner: oracle

Destination server: db-server2.oric.no
Destination database: td2
Oracle RDBMS version 19.0
Oracle RDBMS installation path: /u01/oracle/product/19x00
Software owner: oracle

The steps in this article are based on the Oracle documentation available online, as well as Document ID 1276058.1, available at support.oracle.com.

Step 1: Download the software from otn.oracle.com

Step 2: Create a temporary directory on your servers:

su - oracle
mkdir -p /u01/oracle/tmp/software
Step 3: Upload the software to both of your servers

Step 4: Change directory to your temporary directory and unpack the GG software on each server

cd /u01/oracle/tmp/software
unzip 191004_fbo_ggs_Linux_x64_shiphome.zip
Step 5: Start an X-server on your client, like Xming

Step 6: Configure your servers to forward X-windows.
If xauth is not installed, you may need to install it as root with yum install xauth and then follow the instructions in this post

Step 7: Start the installer:

cd /u01/oracle/tmp/software/fbo_ggs_Linux_x64_shiphome/Disk1
./runInstaller

Follow the installation wizard. It is a very simple procedure consisting of 4 screens. The installation will be identical on both servers except for the first screen, which needs to be adapted to the database version running on the server you're installing on:


When the installation is finished, you will see lots of subdirectories in the Golden Gate installation directory, such as dirchk, dirdat, dirprm, etc.

Step 8 (optional): Create an environment file for the golden gate software, for convenience:
cd
vi .gg191
# add the following lines to the file .gg191:
export ORACLE_BASE=/u01/oracle
export ORACLE_SID=td1
export ORACLE_UNQNAME=td1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export ORACLE_HOME_LISTNER=$ORACLE_HOME
PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$JAVA_HOME/bin
export PATH
export CDPATH=.:$HOME:$ORACLE_HOME:$ORACLE_BASE:$ORACLE_BASE/admin/$ORACLE_SID:$ORACLE_BASE/diag/rdbms/$ORACLE_SID/$ORACLE_SID
DISPLAY=localhost:10.0
export DISPLAY
export GGHOME=$ORACLE_BASE/product/gg191
export ORACLE_GG=$GGHOME
export LIBPATH=$ORACLE_HOME/lib:$GGHOME:/lib:/usr/lib
export LD_LIBRARY_PATH=$LIBPATH
export PATH=$GGHOME:$GGHOME/OPatch:$ORACLE_HOME/bin:/usr/bin:/bin:/etc:/usr/sbin:/usr/ucb:$HOME/bin:/usr/bin/X11:/sbin:/usr/java14/bin:.
set -o vi
clear
echo "OH is : $ORACLE_HOME"
echo "GG is : $GGHOME"
cd $GGHOME;pwd

Step 9: Prepare the database for integrated capture mode

a) Implement forced logging and minimal supplemental logging for your source database:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE FORCE LOGGING;

SQL> SELECT supplemental_log_data_min, force_logging FROM v$database;

SUPPLEME FORCE_LOGGING
-------- ---------------
YES      YES

b) Enable golden gate replication in both your source and target database instance:
SQL> alter system set ENABLE_GOLDENGATE_REPLICATION=true scope=both;

System altered.

c) Create the Golden Gate user:
create bigfile tablespace ogg_data 
datafile  '/oradata/ogg_data.dbf' SIZE 128M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE USER OGG
  IDENTIFIED BY mypassword
  DEFAULT TABLESPACE OGG_DATA
  TEMPORARY TABLESPACE TEMP
  PROFILE NOEXP
  ACCOUNT UNLOCK;

-- 4 Roles for OGG
GRANT CONNECT TO OGG;
GRANT RESOURCE TO OGG;
GRANT SELECT_CATALOG_ROLE TO OGG;
ALTER USER OGG DEFAULT ROLE ALL;

GRANT ALTER ANY TABLE TO OGG;
GRANT ALTER SYSTEM TO OGG;
GRANT ALTER SESSION TO OGG;
GRANT BECOME USER TO OGG;
GRANT SELECT ANY DICTIONARY TO OGG;
GRANT SELECT ANY TABLE TO OGG;
GRANT SELECT ANY TRANSACTION TO OGG;
GRANT UNLIMITED TABLESPACE TO OGG;

-- base privileges for Oracle 11.2.0.4 and later
exec dbms_goldengate_auth.grant_admin_privilege('OGG','CAPTURE');
d) Edit the GLOBALS file on both servers. This is done in the ggsci CLI. The parameter added here is GGSCHEMA, which specifies the schema containing the database objects that are owned by Oracle GoldenGate. Here is how I did it:
oracle@db-server1.oric.no # ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.4 OGGCORE_19.1.0.0.0_PLATFORMS_191017.1054_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Oct 18 2019 01:38:51
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.

GGSCI (db-server1.oric.no) 1> edit params ./GLOBALS
Add the following line, then save and close it:
GGSCHEMA ogg
e) Configure connection types for integrated processes. I will use a USERALIAS, which is based on the Oracle Credential Store Framework (CSF).
oracle@db-server1.oric.no # ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 19.1.0.0.4 OGGCORE_19.1.0.0.0_PLATFORMS_191017.1054_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Oct 18 2019 01:38:51
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


GGSCI (db-server1.oric.no) 1> add credentialstore

Credential store created.

GGSCI (db-server1.oric.no) 2> alter credentialstore add user OGG, password mypassword alias ggadmin

Credential store altered.

#test it
GGSCI (db-server1.oric.no) 4> dblogin useridalias ggadmin
Successfully logged into database.

Repeat the same procedure on the target host.

f) Since I want DDL to be replicated, I need to enable schema-level supplemental logging:
GGSCI (db-server1.oric.no as OGG@td1) 6> add schematrandata SCOTT
2020-02-26 10:45:12  INFO    OGG-01788  SCHEMATRANDATA has been added on schema "SCOTT".
2020-02-26 10:45:12  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema "SCOTT".
2020-02-26 10:45:12  INFO    OGG-10154  Schema level PREPARECSN set to mode NOWAIT on schema "SCOTT".

The source server should now be ready for extract.

10. Prepare the destination database for integrated apply

a) create the Golden Gate database user.
Note: The DBA role seems to be important to make DDL replication work

create bigfile tablespace ogg_data datafile '/oradata/td2/ogg_data.dbf' SIZE 128M AUTOEXTEND ON NEXT 32M MAXSIZE UNLIMITED
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
SEGMENT SPACE MANAGEMENT AUTO;

CREATE USER OGG
  IDENTIFIED BY mypassword
  DEFAULT TABLESPACE OGG_DATA
  TEMPORARY TABLESPACE TEMP
  PROFILE NOEXP
  ACCOUNT UNLOCK;

-- 5 Roles for OGG
GRANT CONNECT TO OGG;
GRANT RESOURCE TO OGG;
GRANT SELECT_CATALOG_ROLE TO OGG;
GRANT DBA TO OGG;
ALTER USER OGG DEFAULT ROLE ALL;

GRANT ALTER ANY TABLE TO OGG;
GRANT ALTER SYSTEM TO OGG;
GRANT ALTER SESSION TO OGG;
GRANT BECOME USER TO OGG;
GRANT SELECT ANY DICTIONARY TO OGG;
GRANT SELECT ANY TABLE TO OGG;
GRANT SELECT ANY TRANSACTION TO OGG;
GRANT UNLIMITED TABLESPACE TO OGG;
GRANT INSERT ANY TABLE TO OGG;
GRANT UPDATE ANY TABLE TO OGG;
GRANT DELETE ANY TABLE TO OGG;

-- base privileges for Oracle 11.2.0.4 and later
exec dbms_goldengate_auth.grant_admin_privilege('OGG','apply');
exit


11. Instantiate the replication environment by running an initial load

There are different methods available for loading an empty target database with data from the source. They are
  1. Load using Data Pump
  2. Loading Data from File to Replicat
  3. Loading Data with an Oracle GoldenGate Direct Load
  4. Loading Data with a Direct Bulk Load to SQL*Loader

I will use method number 1 in this article. With this method, you do not have to set up any initial-load processes on the target.
You can use either a full export or a schema export.

Check that the source server is ready for instantiation:
GGSCI (db-server1.oric.no as OGG@td1) 10> INFO SCHEMATRANDATA SCOTT
2020-02-27 09:15:52  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "SCOTT".
2020-02-27 09:15:52  INFO    OGG-01980  Schema level supplemental logging is enabled on schema "SCOTT" for all scheduling columns.
2020-02-27 09:15:52  INFO    OGG-10462  Schema "SCOTT" have 23 prepared tables for instantiation.
The last line shown above confirms the database has been prepared for instantiation.
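As far as I know, the same information can be cross-checked from SQL on the source database, using the dictionary view DBA_CAPTURE_PREPARED_TABLES:

```sql
-- Lists the tables in the SCOTT schema that have been prepared for
-- instantiation, with the SCN at which each was prepared:
select table_owner, table_name, scn
from dba_capture_prepared_tables
where table_owner = 'SCOTT';
```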

12. Create a Manager parameter file on the source server

GGSCI (db-server1.oric.no as OGG@td1) 16> edit params mgr

PORT 7809
DYNAMICPORTLIST 7802-7803
PURGEOLDEXTRACTS ./dirdat/et* , USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 60
AUTORESTART EXTRACT *, RETRIES 3, WAITMINUTES 10, RESETMINUTES 60
AUTOSTART EXTRACT *
DOWNREPORTMINUTES 15
LAGCRITICALSECONDS 10
LAGINFOMINUTES 0
LAGREPORTMINUTES 15

13. Create an extract parameter file on the source server
GGSCI (db-server1.oric.no  as OGG@td1) 20> edit params vkext
EXTRACT vkext
--- User login
USERIDALIAS ggadmin
DISCARDFILE ./dirrpt/vkapps.dsc, APPEND
DISCARDROLLOVER AT 01:00 ON SUNDAY
EXTTRAIL ./dirdat/et
REPORT AT 01:00
STATOPTIONS REPORTFETCH
REPORTCOUNT every 10 minutes, RATE
REPORTROLLOVER AT 01:15 ON SUNDAY
--- DDL Parameters
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
TABLE SCOTT.* ;

14. Create an Extract Pump parameter file on the source server
GGSCI (db-server1.oric.no  as OGG@td1) 23> edit params vkpump
EXTRACT VKPUMP
USERIDALIAS ggadmin
RMTHOST db-server2.oric.no, MGRPORT 7809
RMTTRAIL ./dirdat/rt
ddl include mapped
ddloptions report
TABLE SCOTT.* ;

15. Register the extracts:
GGSCI (db-server1.oric.no as OGG@td1) 27> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (db-server1.oric.no as OGG@td1) 28> register extract vkext, database
2020-02-27 11:59:21  INFO    OGG-02003  Extract VKEXT successfully registered with database at SCN 55440705.

GGSCI (db-server1.oric.no as OGG@td1) 29> add extract vkext, integrated tranlog, begin now
EXTRACT (Integrated) added.


GGSCI (db-server1.oric.no as OGG@td1) 31> add exttrail ./dirdat/et, extract vkext, megabytes 128
EXTTRAIL added.

GGSCI (db-server1.oric.no as OGG@td1) 32> add extract vkpump, exttrailsource ./dirdat/et
EXTRACT added.

GGSCI (db-server1.oric.no as OGG@td1) 34> add rmttrail ./dirdat/rt, extract vkpump, megabytes 128
RMTTRAIL added.

16. Create the parameter file for the manager on the target server:

GGSCI (db-server2.oric.no) 4> edit params mgr
PORT 7809
DYNAMICPORTLIST 7802-7803
PURGEOLDEXTRACTS ./dirdat/rt* , USECHECKPOINTS, MINKEEPHOURS 72
AUTORESTART REPLICAT *, RETRIES 3, WAITMINUTES 10, RESETMINUTES 60
AUTOSTART REPLICAT *
DOWNREPORTMINUTES 15
LAGCRITICALSECONDS 10
LAGINFOMINUTES 0

17. Create the parameter file for the replicat process on the target server:

GGSCI (db-server2.oric.no) 1> dblogin useridalias ggadmin
Successfully logged into database.

GGSCI (db-server2.oric.no as OGG@td2) 3> edit params vkrepl
REPLICAT vkrepl
DISCARDFILE ./dirrpt/vkrepl.dsc, APPEND
USERIDALIAS ggadmin
REPORTCOUNT EVERY 30 MINUTES, RATE
REPORTROLLOVER AT 01:00 ON SUNDAY
DISCARDROLLOVER AT 01:00 ON SUNDAY
--- DDL Parameters
DDL INCLUDE ALL
DDLOPTIONS REPORT
MAP SCOTT.*, TARGET SCOTT.*;

18. On the target, create a checkpoint table:

GGSCI (db-server2.oric.no as OGG@td2) 7> add checkpointtable ogg.chktbl

Successfully created checkpoint table ogg.chktbl.

19. On the target, start the manager

GGSCI (db-server2.oric.no as OGG@td2) 7> start mgr

20. On the target, add the replicat process and trail files:

GGSCI (db-server2.oric.no as OGG@td2) 13> add replicat vkrepl, integrated, exttrail ./dirdat/rt, checkpointtable ogg.chktbl
REPLICAT (Integrated) added.

21. Start all of the Golden Gate processes on both servers before target instantiation:

# source
GGSCI (db-server1.oric.no as OGG@td1) 15> start mgr

# target
GGSCI (db-server2.oric.no as OGG@td2) 15> start mgr
If this doesn't bring up all processes, check the file $GGHOME/ggserr.log for errors and correct them as indicated. You could see a number of errors at this point, ranging from a misspelled keyword or an incorrectly specified group name to an incorrectly specified port number.

When ready to try again, you can start individual Golden Gate processes, like this:
GGSCI (db-server1.oric.no as OGG@td1) 15> start extract *
or name a specific process, for example:
GGSCI (db-server1.oric.no as OGG@td1) 15> start extract vkext

22. On the source server db-server1.oric.no, export the schema I'd like replicated:


a) create a directory called DATA_PUMP_DIR
create or replace directory DATA_PUMP_DIR as '/data/dpdir';
grant read,write on directory DATA_PUMP_DIR to system;

b) create a parameter file for the data pump export. Add the following lines to the file expdp_instantiate.par:
userid=system
directory=DATA_PUMP_DIR
schemas=SCOTT
dumpfile=instantiate_scott.dmp
logfile=instantiate_scott.log

c) export the data
expdp parfile=expdp_instantiate.par


23. On the target server, create a directory similar to the one created on the source.

24. Transfer the dumpfile to the target server db-server2.oric.no.

25. Create a parameter file for the data pump import. Add the following lines to the file impdp_instantiate.par:

userid=system
directory=DATA_PUMP_DIR
dumpfile=instantiate_scott.dmp
logfile=imp_instantiate_scott4.log
job_name=instantiate_scott
remap_tablespace=DOCS_2014:USERS
remap_tablespace=DOCS_2015:USERS
remap_tablespace=DOCS_2016:USERS
remap_tablespace=SCOTT_DATA:USERS
remap_tablespace=DOCS_DATA:USERS
remap_tablespace=LOB_DATA:USERS
table_exists_action=SKIP
exclude=statistics

As you can see from the above directives, I am taking the opportunity to consolidate all my replicated data into one single tablespace, USERS.

26. Test your replication.
Insert a row into your source database td1, and check that it's being transferred over the network to the target database td2.


Insert into COUNTRY_TABLE
   (ID, DYEAR, COUNTRY, CREATED, CREATED_BY, 
    LAST_CHANGED, LAST_CHANGED_BY, VERSION)
 Values
   ('443hy70-s421-11x5-8031-0054567837649', 2018, 'Japan', TO_TIMESTAMP('02/01/2018 00:00:00,000000','DD/MM/YYYY HH24:MI:SS,FF'), 'BOB',    TO_TIMESTAMP('07/03/2019 10:56:18,563307','DD/MM/YYYY HH24:MI:SS,FF'), 'JIM', 4);
COMMIT;

Tail of the $GGHOME/ggserr.log file on the target server db-server2.oric.no shows:
2020-02-27T15:14:27.981+0100  INFO    OGG-02243  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  Opened trail file /u01/oracle/product/gg191/dirdat/rt000000000 at 2020-02-27 15:14:27.981153.
2020-02-27T15:14:27.981+0100  INFO    OGG-03506  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  The source database character set, as determined from the trail file, is UTF-8.
2020-02-27T15:14:27.981+0100  INFO    OGG-06506  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  Wildcard MAP resolved (entry SCOTT.*): MAP "SCOTT"."COUNTRY_TABLE", TARGET SCOTT."COUNTRY_TABLE".
2020-02-27T15:14:27.992+0100  INFO    OGG-02756  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  The definition for table SCOTT.COUNTRY_TABLE is obtained from the trail file.
2020-02-27T15:14:27.992+0100  INFO    OGG-06511  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  Using following columns in default map by name: ID, DYEAR, COUNTRY, CREATED, CREATED_BY, LAST_CHANGED, LAST_CHANGED_BY, VERSION, DOCUMENT, PERIOD.
2020-02-27T15:14:27.992+0100  INFO    OGG-10155  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  Instantiation CSN filtering is enabled on table SCOTT.COUNTRY_TABLE at CSN 55,489,598.
2020-02-27T15:14:27.992+0100  INFO    OGG-06510  Oracle GoldenGate Delivery for Oracle, vkrepl.prm:  Using the following key columns for target table SCOTT.COUNTRY_TABLE: ID.

When checking the target database, the inserted row has indeed been transferred.