1. Put the following at the top of your script:

\set ECHO queries

2. Execute the script using the -e flag:

psql -e -f myfile.sql
SELECT conname FROM pg_constraint WHERE conrelid = 'mytable'::regclass;

    conname
----------------
 sys_c002733781
 sys_c002733782
(2 rows)

2. Drop them:
ALTER TABLE mytable DROP CONSTRAINT sys_c002733781;
ALTER TABLE
ALTER TABLE mytable DROP CONSTRAINT sys_c002733782;
ALTER TABLE

3. Load your data using your preferred method.
4. Re-create the constraints with the NOT VALID option:

alter table mytable add constraint sys_c002733781 foreign key(job_id) references jobs(id) on delete cascade not valid;
ALTER TABLE
alter table mytable add constraint sys_c002733782 foreign key(task_id) references tasks(id) on delete cascade not valid;
ALTER TABLE

5. Validate constraints:
alter table mytable validate constraint sys_c002733781;
ALTER TABLE
alter table mytable validate constraint sys_c002733782;
ALTER TABLE
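If you want to confirm that the constraints are now marked as validated, a quick check against pg_constraint can be used (a small sketch, reusing the table name from the example above):

SELECT conname, convalidated
FROM   pg_constraint
WHERE  conrelid = 'mytable'::regclass;

Both rows should show convalidated = t once the VALIDATE CONSTRAINT statements have completed.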
SELECT schema_name, object_name, object_type
FROM (
  -- Tables, Views, Indexes, etc.
  SELECT n.nspname AS schema_name,
         c.relname AS object_name,
         CASE c.relkind
           WHEN 'r' THEN 'TABLE'
           WHEN 'v' THEN 'VIEW'
           WHEN 'm' THEN 'MATERIALIZED VIEW'
           WHEN 'i' THEN 'INDEX'
           WHEN 'S' THEN 'SEQUENCE'
           WHEN 'f' THEN 'FOREIGN TABLE'
           WHEN 'p' THEN 'PARTITIONED TABLE'
           WHEN 'I' THEN 'PARTITIONED INDEX'
           ELSE c.relkind::text
         END AS object_type
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = current_schema()
  UNION ALL
  -- Functions
  SELECT n.nspname AS schema_name,
         p.proname AS object_name,
         'FUNCTION' AS object_type
  FROM pg_proc p
  JOIN pg_namespace n ON n.oid = p.pronamespace
  WHERE n.nspname = current_schema()
) AS objects
ORDER BY object_type, object_name;

It only shows objects in the current schema (like Oracle USER_OBJECTS).
To list objects in all non-system schemas instead (similar to Oracle DBA_OBJECTS), replace

WHERE n.nspname = current_schema()

with
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
psql -h postgres01.oric.no -U mysuperuser postgres -c "COPY (SELECT name || '=' || setting FROM pg_settings ORDER BY name) TO STDOUT WITH CSV HEADER" > pg_config.csv
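If you only want the parameters that have been changed from their built-in defaults, a variation of the same command can be used (a sketch; pg_settings.source is 'default' for unchanged parameters):

psql -h postgres01.oric.no -U mysuperuser postgres -c "COPY (SELECT name || '=' || setting FROM pg_settings WHERE source <> 'default' ORDER BY name) TO STDOUT WITH CSV" > pg_config_changed.csv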
SELECT s.schemaname,
       s.relname AS table_name,
       s.n_live_tup,
       s.last_analyze,
       s.last_autoanalyze
FROM pg_stat_all_tables s
JOIN pg_class c ON c.relname = s.relname
LEFT JOIN pg_inherits i ON i.inhrelid = c.oid OR i.inhparent = c.oid
WHERE s.schemaname = 'myschema'
  AND ( s.relname = 'mytable'
        OR c.oid IN ( SELECT inhrelid
                      FROM pg_inherits
                      WHERE inhparent = ( SELECT oid FROM pg_class WHERE relname = 'mytable' ) ) )
ORDER BY s.n_live_tup DESC, s.last_analyze;

Example output
 schemaname | table_name      | n_live_tup | last_analyze                  | last_autoanalyze
------------+-----------------+------------+-------------------------------+-------------------------------
 myschema   | mytable_2015_02 |   95788115 | 2025-08-06 12:58:22.952592+00 |
 myschema   | mytable_2015_03 |   78505350 | 2025-08-06 12:58:37.147433+00 |
 myschema   | mytable_2015_04 |   71211253 | 2025-08-06 12:58:51.311452+00 |
 myschema   | mytable_2015_01 |   68255510 | 2025-08-06 12:58:08.42708+00  |
 myschema   | mytable_2015_05 |   62075043 | 2025-08-06 12:59:05.434118+00 |
 myschema   | mytable_2015_06 |   60888876 | 2025-08-06 12:59:19.918657+00 | 2025-08-06 11:31:19.551345+00
 myschema   | mytable_2015_12 |   57485594 | 2025-08-06 13:00:43.112316+00 | 2025-08-27 03:13:11.394606+00
 myschema   | mytable_2016_01 |   57405307 | 2025-08-06 13:00:55.395113+00 | 2025-08-25 13:35:07.749773+00
SELECT pg_current_wal_lsn(); -- On the primary

In my case, the result was

 pg_current_wal_lsn
--------------------
 0/E015420

On the standby, run

SELECT pg_last_wal_replay_lsn();

In my case, the result was

 pg_last_wal_replay_lsn
-------------------------
 0/E015420
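Since the two LSNs are equal, the standby has replayed everything the primary has written. If they differ, the lag can be expressed in bytes with pg_wal_lsn_diff, run on the primary (a small sketch, using the replay LSN reported by the standby above):

SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/E015420') AS lag_bytes;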
psql -h hostname -U username -d database

Options:
Option | Description
---|---
-h <host> | Hostname or IP address of the PostgreSQL server
-p <port> | Port number (default is 5432)
-U <username> | Username to connect as
-d <dbname> | Database name
-W | Prompt for password (useful if password auth is required)
-f <filename> | Execute commands from a SQL script file
-c "<command>" | Execute a single SQL command and exit
-v var=value | Set psql variable (used inside scripts)
--set=var=value | Same as -v, more explicit
--dbname=<connstr> | Full connection string, e.g.: postgresql://user:pass@host:port/dbname

Example
psql -h pg01.oric.no -p 5432 -U myuser -d mydb

Alternatively, use a URL, which is more scripting friendly:
psql "postgresql://replicate@pg01.oric.no:5432/mydb"
SELECT column_name FROM information_schema.columns WHERE table_name = 'pg_stat_wal_receiver';

Output
      column_name
-----------------------
 pid
 last_msg_receipt_time
 latest_end_lsn
 latest_end_time
 sender_port
 receive_start_lsn
 receive_start_tli
 written_lsn
 flushed_lsn
 received_tli
 last_msg_send_time
 status
 slot_name
 sender_host
 conninfo
(15 rows)

Or use the psql metacommand "\d+":
echo '\d+ pg_stat_wal_receiver' | psql

Output
View "pg_catalog.pg_stat_wal_receiver" Column | Type | Collation | Nullable | Default | Storage | Description -----------------------+--------------------------+-----------+----------+---------+----------+------------- pid | integer | | | | plain | status | text | | | | extended | receive_start_lsn | pg_lsn | | | | plain | receive_start_tli | integer | | | | plain | written_lsn | pg_lsn | | | | plain | flushed_lsn | pg_lsn | | | | plain | received_tli | integer | | | | plain | last_msg_send_time | timestamp with time zone | | | | plain | last_msg_receipt_time | timestamp with time zone | | | | plain | latest_end_lsn | pg_lsn | | | | plain | latest_end_time | timestamp with time zone | | | | plain | slot_name | text | | | | extended | sender_host | text | | | | extended | sender_port | integer | | | | plain | conninfo | text | | | | extended | View definition: SELECT pid, status, receive_start_lsn, receive_start_tli, written_lsn, flushed_lsn, received_tli, last_msg_send_time, last_msg_receipt_time, latest_end_lsn, latest_end_time, slot_name, sender_host, sender_port, conninfo FROM pg_stat_get_wal_receiver() s(pid, status, receive_start_lsn, receive_start_tli, written_lsn, flushed_lsn, received_tli, last_msg_send_time, last_msg_receipt_time, latest_end_lsn, latest_end_time, slot_name, sender_host, sender_port, conninfo) WHERE pid IS NOT NULL;
ps -fu postgres | grep $(pgrep -f postmaster)

postgres   1913      1  0 Jul17 ?  00:02:51 /usr/pgsql-15/bin/postmaster -D /var/lib/pgsql/15/data/
postgres   1982   1913  0 Jul17 ?  00:00:24 postgres: logger
postgres   2063   1913  0 Jul17 ?  00:00:18 postgres: checkpointer
postgres   2064   1913  0 Jul17 ?  00:00:03 postgres: background writer
postgres   2119   1913  0 Jul17 ?  00:00:07 postgres: walwriter
postgres   2120   1913  0 Jul17 ?  00:00:14 postgres: autovacuum launcher
postgres   2121   1913  0 Jul17 ?  00:00:01 postgres: archiver
postgres   2122   1913  0 Jul17 ?  00:00:00 postgres: logical replication launcher
postgres   2657   1913  0 Jul17 ?  00:02:05 postgres: postgres_exporter postgres localhost(48674) idle

The command lists all processes owned by the user "postgres" and greps for the output of the command "pgrep -f postmaster", which returns a list of process IDs.
pgrep -c postmaster
9
SELECT schemaname, relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'documents';

 schemaname | relname   | last_vacuum | last_autovacuum               | last_analyze                  | last_autoanalyze
------------+-----------+-------------+-------------------------------+-------------------------------+-------------------------------
 scott      | documents |             | 2025-06-25 02:05:20.692869+02 | 2025-07-12 22:04:05.119051+02 | 2025-06-17 08:41:12.567351+02
(1 row)

Explanation of columns:

last_vacuum: the last time the table was vacuumed manually
last_autovacuum: the last time the table was vacuumed by the autovacuum daemon
last_analyze: the last time the table was analyzed manually
last_autoanalyze: the last time the table was analyzed by the autovacuum daemon
Analyze the entire database:

ANALYZE;

Analyze a specific table:

ANALYZE my_table;

Analyze specific columns only:

ANALYZE my_table (column1, column2);
postgres=# alter system set log_min_duration_statement='2s';
ALTER SYSTEM
postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement
----------------------------
 2s
(1 row)

I expect every statement that lasts longer than 2 seconds to be logged in the PostgreSQL server's main log.
postgres=# SELECT pg_sleep(3);
 pg_sleep
----------

(1 row)

From another session, tail the log from today:
tail -F /var/log/postgres/postgresql-2025-07-11_000000.log | grep --line-buffered "duration:.*statement:"

Output
2025-07-11 09:45:42 CEST [173349]: [5-1] user=postgres,db=postgres,app=psql,client=[local] LOG: duration: 3003.698 ms statement: SELECT pg_sleep(3);
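When you are done testing, the parameter can be reset to its default again (a quick sketch):

ALTER SYSTEM RESET log_min_duration_statement;
SELECT pg_reload_conf();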
cd $ORACLE_GG
tree -L 2 --noreport $(ls -d dir* | grep -v '^dirdat$')
mydb=> \d mytable
                     Partitioned table "myschema.mytable"
      Column       |           Type           | Collation | Nullable |           Default
-------------------+--------------------------+-----------+----------+------------------------------
 id                | bigint                   |           | not null | generated always as identity
 ad_updtud_log_id  | bigint                   |           | not null |
 period            | integer                  |           | not null |
 created           | timestamp with time zone |           | not null |
Partition key: RANGE (id)
Indexes:
    "mytable_pkey" PRIMARY KEY, btree (id, period)
    "mytable_id_index" btree (ad_updtud_log_id)
    "mytable_created_index" btree (created)
Foreign-key constraints:
    "fk_mytable" FOREIGN KEY (ad_updtud_log_id, period) REFERENCES mytable2(id, period)
Number of partitions: 180 (Use \d+ to list them.)

This is how you can list all partitions belonging to a range-partitioned table in PostgreSQL, based on the partition bound values:
SELECT child.relname AS partition_name,
       regexp_replace(pg_get_expr(child.relpartbound, child.oid), '.*FROM \(''(\d+).*', '\1')::bigint AS range_start,
       pg_get_expr(child.relpartbound, child.oid) AS partition_range
FROM pg_inherits
JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
JOIN pg_class child  ON pg_inherits.inhrelid  = child.oid
JOIN pg_namespace nm ON nm.oid = child.relnamespace
WHERE parent.relname = 'mytable'
  AND nm.nspname = 'myschema'
ORDER BY range_start;

Result (excerpt only):
  partition_name    | range_start | partition_range
--------------------+-------------+-------------------------------------------------
 mytable_0          |           0 | FOR VALUES FROM ('0') TO ('30000000')
 mytable_30000000   |    30000000 | FOR VALUES FROM ('30000000') TO ('60000000')
 mytable_60000000   |    60000000 | FOR VALUES FROM ('60000000') TO ('90000000')
 mytable_90000000   |    90000000 | FOR VALUES FROM ('90000000') TO ('120000000')
 mytable_120000000  |   120000000 | FOR VALUES FROM ('120000000') TO ('150000000')
 mytable_150000000  |   150000000 | FOR VALUES FROM ('150000000') TO ('180000000')
 mytable_180000000  |   180000000 | FOR VALUES FROM ('180000000') TO ('210000000')
 mytable_210000000  |   210000000 | FOR VALUES FROM ('210000000') TO ('240000000')
 mytable_240000000  |   240000000 | FOR VALUES FROM ('240000000') TO ('270000000')
 mytable_270000000  |   270000000 | FOR VALUES FROM ('270000000') TO ('300000000')
 mytable_300000000  |   300000000 | FOR VALUES FROM ('300000000') TO ('330000000')
 mytable_330000000  |   330000000 | FOR VALUES FROM ('330000000') TO ('360000000')
 mytable_360000000  |   360000000 | FOR VALUES FROM ('360000000') TO ('390000000')
 mytable_390000000  |   390000000 | FOR VALUES FROM ('390000000') TO ('420000000')
 mytable_420000000  |   420000000 | FOR VALUES FROM ('420000000') TO ('450000000')
 mytable_450000000  |   450000000 | FOR VALUES FROM ('450000000') TO ('480000000')
Similar to

alter session set current_schema=<schema_name>

in Oracle, search_path in PostgreSQL is a session-level setting that determines the order in which schemas are searched when you reference database objects without a schema name. It consists of a list of schema names. When you run a query like

SELECT * FROM mytable;

PostgreSQL checks each schema in the list, in order, to find mytable.

The default value is

"$user", public

This means: PostgreSQL first looks for a schema with the same name as the current user, then falls back to the public schema.

To show the current value of search_path:
SHOW search_path;

Set it for the session:

SET search_path TO schema1, public;

or

SET search_path TO schema1, schema2, public;

The last example is important in cases where you log on to a database with a user that has no matching schema. Consider the following example:
psql -h server1.oric.no -U jim -d musicdb

I am logging on to the database "musicdb" with a user called "jim". By default, jim will have his own user name, followed by public, in his search path:
musicdb=> show search_path;
   search_path
-----------------
 "$user", public
(1 row)

I have already given user jim the privileges needed to "see" the objects created by the schema "music", which exists in the database "musicdb".
musicdb=> set search_path to 'music','$user','public';
SET
musicdb=> show search_path;
          search_path
-------------------------------
 music, "$user", public
(1 row)

The "current_user" and "current_schema" functions will now return the actual user name and the first match in the search_path, respectively:
musicdb=> select current_user, current_schema;
 current_user | current_schema
--------------+----------------
 jim          | music
(1 row)

Why is it important?
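Because any unqualified object name is resolved through the search_path: with "music" first in the list, a plain table reference now finds music's tables without schema-qualifying them. A small illustration, assuming a hypothetical table called albums exists in the music schema:

SELECT count(*) FROM albums;  -- resolves to music.albums, since music is first in the search_path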
postgres=# alter database db01 connection limit 0;
ALTER DATABASE

Verify:
SELECT datname, datconnlimit FROM pg_database WHERE datname = 'db01';
-[ RECORD 1 ]+-------------
datname      | db01
datconnlimit | 0

Set back to unlimited:
ALTER DATABASE db01 CONNECTION LIMIT -1;
ALTER DATABASE

Verify:
SELECT datname, datconnlimit FROM pg_database WHERE datname = 'db01';
-[ RECORD 1 ]+-------------
datname      | db01
datconnlimit | -1

To limit the connections for a specific user only:
psql
psql (15.13)
Type "help" for help.

postgres=# alter user read_db01 connection limit 0;
ALTER ROLE
postgres=# alter user read_db01 connection limit -1;
ALTER ROLE
postgres=#

The current setting can be verified with:
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolname = 'read_db01';
      rolname      | rolconnlimit
-------------------+--------------
 read_db01         |           -1

or, list all users that have restrictions on the number of connections:
SELECT rolname, rolconnlimit FROM pg_roles WHERE rolconnlimit != -1;
      rolname      | rolconnlimit
-------------------+--------------
 pganalyze         |            5
 read_db01         |            0
(2 rows)
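Note that a connection limit of 0 only blocks new connections; sessions that are already connected are left alone. If those also need to go, they can be terminated with pg_terminate_backend (a sketch, terminating every backend connected to db01 except your own):

SELECT pg_terminate_backend(pid)
FROM   pg_stat_activity
WHERE  datname = 'db01'
AND    pid <> pg_backend_pid();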
psql
CREATE ROLE common_users WITH NOLOGIN NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION NOBYPASSRLS;
GRANT pg_read_all_stats TO common_users;

Then, create the user ro_user1:
create user ro_user1 password 'mysecretpassword';
grant common_users to ro_user1;
grant connect on database db01 to ro_user1;

Log into the database db01 and revoke and grant some privileges:
psql
\connect db01
revoke all on schema schema1 from ro_user1;
grant usage on schema schema1 to ro_user1;
grant select on all tables in schema schema1 to ro_user1;

Confirm the privileges:
\connect postgres
select database_privs('ro_user1');
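One thing worth noting: GRANT SELECT ON ALL TABLES IN SCHEMA only covers tables that exist at the time the grant is issued, so tables created later will not be visible to ro_user1 automatically. To cover future tables as well, a default-privileges grant can be added (a sketch; it must be run, while connected to db01, by the role that will create the new tables, typically the schema owner):

ALTER DEFAULT PRIVILEGES IN SCHEMA schema1 GRANT SELECT ON TABLES TO ro_user1;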
shared_buffers

The amount of memory the database server uses for shared memory buffers.
The postgres documentation suggests starting with allocating 25% of the available memory to the shared database memory pool.

max_wal_size

Controls the maximum size the Write-Ahead Logging (WAL) files can grow before triggering a checkpoint. Checkpoints are relatively expensive operations, so we do not want them to occur too often. On the other hand, too infrequent checkpointing may increase recovery times. max_wal_size can be set to balance performance and recovery time by influencing how often checkpoints occur.
work_mem
Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files.
The documentation points out that a complex query might perform several sort and hash operations at the same time, with each operation generally being allowed to use as much memory as this value specifies before it starts to write data into temporary files, and that several running sessions could be executing such operations at the same time. So even if the 6MB specified as its value does not seem like much, it could mean significant memory usage on a busy system.
It is similar to pga_aggregate_target in an Oracle database: the amount of memory set aside for all private global areas on the server. After the introduction of this parameter in Oracle 9i, the private global area parameters used back then, for example sort_area_size and hash_area_size, were no longer necessary.
maintenance_work_mem

Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.
It can be set higher than work_mem:
Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps.
autovacuum_work_mem

Specifies the maximum amount of memory to be used by each autovacuum worker process. If this value is specified without units, it is taken as kilobytes. It defaults to -1, indicating that the value of maintenance_work_mem should be used instead. The setting has no effect on the behavior of VACUUM when run in other contexts.
Here is a table with my settings for my 16G memory server:

Parameter | Value
---|---
shared_buffers | 4GB
max_wal_size | 8GB
work_mem | 6MB
maintenance_work_mem | 479MB
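These values can be applied with ALTER SYSTEM followed by a configuration reload (a sketch of the commands; note that shared_buffers only takes effect after a restart of the server, not a reload):

ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET work_mem = '6MB';
ALTER SYSTEM SET maintenance_work_mem = '479MB';
SELECT pg_reload_conf();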
grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB

On my 16G server, I would like to start with 25% of the total memory as available for shared_buffers:
su - postgres
sudo systemctl stop postgresql-15.service
postgres --shared-buffers=4096MB -D $PGDATA -C shared_memory_size_in_huge_pages
2102

Update /etc/sysctl.conf. Add
vm.nr_hugepages=2102

Reload the sysctl settings:
sysctl -p

Now, tell the postgres server to use huge pages, if they are available.
huge_pages = try

Add the following lines to /etc/security/limits.conf so that postgres can lock down the memory set aside for huge pages:
postgres soft memlock unlimited
postgres hard memlock unlimited

Reboot the server and verify that huge pages are being used:
cat /proc/meminfo | grep Huge

Interestingly, the parameter huge_page_size should only be used if you wish to override the default huge page size on your system. The default is zero (0). When set to 0, the default huge page size on the system will be used. In my case this is what I want, so the parameter can be ignored.
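From PostgreSQL 15 on, you can also ask the running server how many huge pages it calculated that it needs, using the same read-only parameter that was queried with postgres -C above (a quick check from psql):

SHOW shared_memory_size_in_huge_pages;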
SELECT con.conname AS constraint_name,
       CASE con.contype
         WHEN 'p' THEN 'PRIMARY KEY'
         WHEN 'u' THEN 'UNIQUE'
         WHEN 'f' THEN 'FOREIGN KEY'
         WHEN 'c' THEN 'CHECK'
         WHEN 'x' THEN 'EXCLUSION'
         ELSE con.contype
       END AS constraint_type,
       rel.relname AS table_name,
       pg_get_constraintdef(con.oid) AS definition
FROM pg_constraint con
JOIN pg_class rel ON rel.oid = con.conrelid
JOIN pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE nsp.nspname = 'owner'
  AND rel.relname = 'table_name'
ORDER BY constraint_name;
Put the COPY command in a file, for example cp_at.sql:

COPY at_locations FROM '/external/data/geolocations_at_fixed.csv'
WITH ( FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false );

and execute it like this:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet
vinmonopolet=> \i cp_at.sql

You will see the error
ERROR:  must be superuser or have privileges of the pg_read_server_files role to COPY from a file
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.

This is because the COPY command runs server-side, and the PostgreSQL server expects to be able to access the file, and its entire path, on the server.
Use psql's client-side \copy command instead:

\copy vinmonopolet.at_locations FROM '/external/data/geolocations_at_fixed.csv' WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)

and execute it either with the \i metacommand as above, or with the -f flag:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet -f cp_at.sql
Password for user vinmonopolet:
COPY 5

This works because the \copy command is interpreted client-side: psql reads the file from your local machine (in my case, the "local machine" was indeed the postgres server, but the point stands) and pipes the content into the database. It succeeds because no elevated privileges are needed for this method.
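As the HINT in the error message says, anyone can also COPY from stdin. So another variant, which avoids the intermediate SQL file entirely, is to pipe the file into a server-side COPY ... FROM STDIN (a sketch using the same file and connection as above):

cat /external/data/geolocations_at_fixed.csv | psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet -c "COPY at_locations FROM STDIN WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)"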
ADR Homes:
diag/rdbms/db01/db01
diag/rdbms/db02/db02
diag/rdbms/db03/db03
diag/rdbms/db04/db04
diag/rdbms/db05/db05
diag/clients/user_oracle/RMAN_610688766_110
diag/clients/user_oracle/host_610688766_110
diag/clients/user_oracle/host_610688766_82
diag/tnslsnr/dbhost1/listener
diag/tnslsnr/dbhost1/mylsnr
diag/tnslsnr/dbhost1/testlistener

This caused operations against ADR to fail with
ADR HOME selected: diag/rdbms/db01/db01 diag/rdbms/db02/db02 diag/rdbms/db03/db03 diag/rdbms/db04/db04

DIA-48415: Syntax error found in string [diag/rdbms/db01/db01] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db02/db02] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db03/db03] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db04/db04] at column [26]
Syntax error

Solution: clean out the old folders physically:
cd /u01/oracle/diag/rdbms
rm -rf db02 db03 db04 db05
cd /u01/oracle/diag/tnslsnr/dbhost1
rm -rf mylsnr testlistener
cd /u01/oracle/diag/clients/user_oracle
rm -rf *

Afterwards, check your adrci settings again. They should look better now:
ADR base = "/u01/oracle" adrci> show homes ADR Homes: diag/rdbms/db01/db01 diag/clients/user_oracle/RMAN_610688766_110 diag/tnslsnr/dbhost1/listener
ls -d */
dir1/  dir2/  dir3/  dir4/

Also good:
ls -l | grep "^d" drwxr-xr-x 4 root root 4096 Oct 13 2022 dir1 drwxr-xr-x 2 root root 4096 Mar 14 11:17 dir2 drwxr-xr-x 2 root root 4096 Mar 26 15:51 dir2 drwxr-xr-x 2 root root 4096 Apr 27 2021 dir4A third option:
ls -F | grep "/$" dir1/ dir2/ dir3/ dir4/
dbca -createCloneTemplate -sourceSID source_database_sid | -sourceDB source_database_name
   -templateName new_database_template_name
   [-promptForWalletPassword]
   [-rmanParallelism parallelism_integer_value]
   [-maxBackupSetSizeInMB maximum_backup_set_size_in_MB]
   [-dataFileBackup {true | false}]
   [-datafileJarLocation data_files_backup_directory]
   [-sysDBAUserName SYSDBA_user_name]
   [-sysDBAPassword SYSDBA_user_password]
   [-useWalletForDBCredentials {true | false} -dbCredentialsWalletPassword wallet_account_password -dbCredentialsWalletLocation wallet_files_directory]
   [-uploadToCloud -opcLibPath OPC_library_path -opcConfigFile OPC_configuration_file_name [-rmanEncryptionPassword rman_encryption_password ] [-compressBackup { true | false } ] [-walletPassword database_wallet_password]

Documentation here
"xauth: file /root/.Xauthority does not exist"really mean?
# RHEL/Centos su - yum install xauth #Debian apt install xauth2. Make sure you use the -X flag (X forwarding) when connecting to the server
ssh -X root@yourserver

3. On the server, edit the file /etc/ssh/sshd_config, and make sure these lines exist and are set to yes
X11Forwarding yes
X11UseLocalhost yes

4. Restart the ssh daemon
systemctl restart sshd

5. Try launching a graphical app, like xterm
touch ~/.Xauthority
xauth generate :0 . trusted
responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v12.2.0
gdbName=tstdwh1.oric.no
sid=tstdwh1
databaseConfigType=SI
policyManaged=false
createServerPool=false
force=false
createAsContainerDatabase=false
templateName=/u01/oracle/product/19c/assistants/dbca/templates/New_Database.dbt
sysPassword=mysecreatpassword
systemPassword=mysecreatpassword
datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/
datafileDestination=/data02/oradata/{DB_UNIQUE_NAME}/
recoveryAreaDestination=/data04/fra/{DB_UNIQUE_NAME}
storageType=FS
characterSet=AL32UTF8
nationalCharacterSet=AL16UTF16
registerWithDirService=false
listeners=LISTENER
skipListenerRegistration=false
variables=ORACLE_BASE_HOME=/u01/oracle/product/19c,DB_UNIQUE_NAME=tstdwh1,ORACLE_BASE=/u01/oracle,PDB_NAME=,DB_NAME=tstdwh1,ORACLE_HOME=/u01/oracle/product/19c,SID=tstdwh1
initParams=undo_tablespace=UNDOTBS1,sga_target=4710MB,db_block_size=32768BYTES,nls_language=NORWEGIAN,dispatchers=(PROTOCOL=TCP) (SERVICE=tstdwh1XDB),diagnostic_dest={ORACLE_BASE},control_files=("{ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/control01.ctl", "/u04/fra/{DB_UNIQUE_NAME}/control02.ctl"),remote_login_passwordfile=EXCLUSIVE,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,processes=400,pga_aggregate_target=1570MB,nls_territory=NORWAY,local_listener=LISTENER_TSTDWH1,db_recovery_file_dest_size=7851MB,open_cursors=300,log_archive_format=%t_%s_%r.dbf,db_domain=oric.no,compatible=19.0.0,db_name=tstdwh1,db_recovery_file_dest=/u04/fra/{DB_UNIQUE_NAME},audit_trail=db
sampleSchema=false
memoryPercentage=40
automaticMemoryManagement=false

Notice the parameter db_block_size=32768BYTES in the "initParams" directive, and the fact that I am using the New_Database.dbt template. This template is more flexible than the others delivered by Oracle, but makes the whole database creation process take more time, as it does not include any database files.
dbca -createDatabase -responsefile ./${ORACLE_SID}.rsp -silent

Or, create a wrapper around the command, make it executable and execute it in the background:
#!/bin/bash
start=$(date +%s.%N)
export GLOGFILE=run_dbca.log
touch ${GLOGFILE}
chmod 666 ${GLOGFILE}
exec 1> ${GLOGFILE} 2>&1
echo "Now running run_dbca.sh"
echo "Resetting oratab..."
echo '' > /etc/oratab
if [ $? -ne 0 ]; then
  echo "Could not erase oratab. Exit."
  exit 1
else
  echo "oratab erased. Continuing..."
fi
export instanceAlive=`ps -ef| grep pmon | grep -v grep |awk '{ print $8}' | cut -f 3 -d"_"`
if [ ! -z ${instanceAlive} ]; then
  echo "Instance for database $ORACLE_SID is already running. Shut it down first"
  exit 1
fi
dbca -createDatabase -responsefile ./${ORACLE_SID}.rsp -silent
echo "Finished running run_dbca.sh"
dur=$(echo "$(date +%s.%N) - $start" | bc)
printf "Total execution time for run_dbca.sh: %.6f seconds\n" $dur
exit 0
chmod 755 run_dbca.sh
nohup ./run_dbca.sh &
ORA-12546: TNS:permission denied

Cause: Incorrect permissions on the oracle binary.
oracle@myserver:[cdb]# ls -l $ORACLE_HOME/bin/oracle
-rw-r--r-- 1 oracle dba 0 Jan 24 09:57 /u01/oracle/product/19c/bin/oracle

Solution: relink the oracle binary:

oracle@myserver:[cdb]# cd $ORACLE_HOME/rdbms/lib
oracle@myserver:[cdb]# make -f ins_rdbms.mk ioracle

Afterwards, permissions are correct:
oracle@myserver:[cdb]# ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 462603312 Apr 11 10:54 /u01/oracle/product/19c/bin/oracle
\connect mydb
\prompt 'Enter value for querid: ' v_quid
\echo You entered: :v_quid
SELECT query_sql_text
FROM query_store.query_texts_view
WHERE query_text_id = :'v_quid';

Run it:
psql -h pgserver1.oric.no -U dbadmin -d postgres -f find_rest_of_statement.sql
Enter value for querid: -7644745457188850826
You entered: -7644745457188850826
                                                     query_sql_text
------------------------------------------------------------------------------------------------------------------------
 THE QUERY TEXT WILL SHOW UP HERE :-)
CREATE TABLE mytable (
    id bigserial,
    key text COLLATE pg_catalog."default" NOT NULL,
    outdata jsonb,
    partno integer,
    koffset bigint,
    handled_at timestamp with time zone,
    inserted_at timestamp with time zone NOT NULL DEFAULT now(),
    kyear integer NOT NULL, -- Ensure it's NOT NULL to work properly with partitions
    nk boolean NOT NULL,
    CONSTRAINT mytable_pkey PRIMARY KEY (id, kyear)
) PARTITION BY RANGE (kyear);

Create the partitions. Make it suitable for ranges that adhere to the standard YYYYMM:
CREATE TABLE mytable_202501 PARTITION OF mytable FOR VALUES FROM (202501) TO (202502);
CREATE TABLE mytable_202502 PARTITION OF mytable FOR VALUES FROM (202502) TO (202503);

Insert some test values:
INSERT INTO mytable (key, outdata, partno, koffset, handled_at, kyear, nk)
VALUES
 ('A123', '{"data": "test1"}', 101, 500, NOW(), 202501, TRUE),
 ('B456', '{"data": "test2"}', 102, 600, NOW(), 202501, FALSE),
 ('C789', '{"data": "test3"}', 103, 700, NOW(), 202501, TRUE);

INSERT INTO mytable (key, outdata, partno, koffset, handled_at, kyear, nk)
VALUES
 ('D111', '{"data": "test4"}', 104, 800, NOW(), 202502, FALSE),
 ('E222', '{"data": "test5"}', 105, 900, NOW(), 202502, TRUE),
 ('F333', '{"data": "test6"}', 106, 1000, NOW(), 202502, FALSE);

Confirm that the data is indeed there:
select * from mytable;

Finally, verify that the rows have been committed to two different partitions:
musicdb=> SELECT relname, n_tup_ins
musicdb-> FROM pg_stat_all_tables
musicdb-> WHERE relname IN ('mytable_202501', 'mytable_202502');
    relname     | n_tup_ins
----------------+-----------
 mytable_202501 |         3
 mytable_202502 |         3
(2 rows)
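You can also verify that the planner prunes partitions when the partition key is used in the WHERE clause. A quick sketch; the plan should only reference the mytable_202501 partition:

EXPLAIN SELECT * FROM mytable WHERE kyear = 202501;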
BEGIN
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => FALSE);
END;
/

Be careful:
SQL> SELECT * FROM DBA_AUDIT_MGMT_LAST_ARCH_TS;

no rows selected

No rows returned from this query means that there has been no previous cleanup performed.
BEGIN
  DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP (
    AUDIT_TRAIL_TYPE  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    LAST_ARCHIVE_TIME => SYSTIMESTAMP-30);
END;
/
ERROR at line 1:
ORA-46258: Cleanup not initialized for the audit trail
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 61
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 2283
ORA-06512: at line 17
Put the following in a file called check_init_status.sql:

SET SERVEROUTPUT ON
BEGIN
  IF DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
    DBMS_OUTPUT.PUT_LINE('Cleanup is initialized.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Cleanup is not initialized.');
  END IF;
END;
/

Check status:
sqlplus / as sysdba @check_init_status.sql

Cleanup is not initialized.
PL/SQL procedure executed.

To initialize cleanup of the standard audit trail, put the following in a file called init_cleanup.sql:
BEGIN
  DBMS_AUDIT_MGMT.INIT_CLEANUP (
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    default_cleanup_interval => 48);
END;
/

Initialize cleanup:
sqlplus / as sysdba @init_cleanup.sql

Check status again:
sqlplus / as sysdba @check_init_status.sql

Cleanup is initialized.
PL/SQL procedure executed.
ERROR at line 1:
ORA-46267: Insufficient space in 'SYSAUX' tablespace, cannot complete operation
ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1087
ORA-06512: at line 3
select count(*) from sys.aud$;
"COUNT(*)" |
---|
141 362 956 |
select min(ntimestamp#) "mindate", max(ntimestamp#) "maxdate" from sys.aud$;
"mindate" | "maxdate" |
---|---|
21.11.2016 08:29:16,325777 | 04.02.2020 14:08:49,660074 |
BEGIN
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/

Let's check the status after the execution of the clean_audit_trail procedure: how many rows do we have left in our standard audit trail?
select count(*) from sys.aud$;
"COUNT(*)" |
---|
2 356 049 |
select min(ntimestamp#) "mindate", max(ntimestamp#) "maxdate" from sys.aud$;
"mindate" | "maxdate" |
---|---|
04.03.2020 08:09:01,325371 | 04.02.2020 14:08:49,660074 |
[FATAL] [INS-32715] The source home (/u01/oracle/product/19c) is not registered in the central inventory.
ACTION: Ensure that the source home is registered in the central inventory.

Cause:
opatch lsinventory

Oracle Home       : /u01/oracle/product/19c
Central Inventory : /u01/oraInventory
   from           : /u01/oracle/product/19c/oraInst.loc

Check the file for obvious errors:
cd /u01/oraInventory/ContentsXML
cat inventory.xml

(excerpt)
<HOME_LIST>
<HOME NAME="19c" LOC="/u01/oracle/product/19c" TYPE="O" IDX="8"/>
</HOME_LIST>

Solution:
Attach the ORACLE_HOME to the inventory once more, even though it looks correct to begin with:

$ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME="$ORACLE_HOME" ORACLE_HOME_NAME="19c"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 25599 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
 /home/oracle/oraInventory/logs/AttachHome2025-02-06_01-07-51PM.log
'AttachHome' was successful.

Try the runInstaller command again, and it should work.
yum list installed | grep bounce

Search repository:
yum search bounce
=========================================================== Name & Summary Matched: bounce ============================================================
pgbouncer.x86_64 : Lightweight connection pooler for PostgreSQL
=============================================================== Summary Matched: bounce ===============================================================

Install:
yum install pgbouncer.x86_64

Verify installation:
yum list installed | grep bounce
pgbouncer.x86_64    1.24.0-42PGDG.rhel8    @YOURREPOSITORY_PostgreSQL_PostgreSQL_common_RHEL8_x86_64
[root] chown postgres:postgres /etc/pgbouncer/ -R

Create the file /etc/pgbouncer/pgbouncer.ini:
;
; pgbouncer config file
;
[databases]
postgres = port=5432 dbname=musicdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
admin_users = music
auth_type = scram-sha-256
auth_file = /postgres_data/config/users.txt
logfile = /postgres_data/config/pgbouncer.log
pidfile = /postgres_data/config/pgbouncer.pid
server_reset_query = DISCARD ALL;
pool_mode = session
default_pool_size = 5
log_pooler_errors = 0

Find the SCRAM authentication key:
psql select '"'||rolname||'" "'||rolpassword||'"' from pg_authid where rolname='music';
ss -ltpn | grep 6432
--> no output? Good, go ahead!

As the postgres user, start the pgbouncer process:
pgbouncer -d /etc/pgbouncer/pgbouncer.ini

The port will now be occupied:
ss -ltpn | grep 6432
LISTEN 0      128        127.0.0.1:6432      0.0.0.0:*    users:(("pgbouncer",pid=392231,fd=12))

You can now connect to your pgbouncer instance. There are a number of different show-commands:
psql -p 6432 -h 127.0.0.1 -U music pgbouncer -c "show config" Password for user music: key | value | default | changeable -----------------------------+--------------------------------------------------------+--------------------------------------------------------+------------ admin_users | music | | yes application_name_add_host | 0 | 0 | yes auth_dbname | | | yes auth_file | /var/lib/pgsql/users.txt | | yesAnd some control commands:
psql -p 6432 -h 127.0.0.1 -U music pgbouncer -c "reload" Password for user music: RELOAD
To connect to the pgbouncer admin console rather than making a direct database connection, use the syntax below, where you exchange the database name ("musicdb") with "pgbouncer":
psql -p 6432 -h 127.0.0.1 -U music -d pgbouncer
Password for user music:
psql (15.10, server 1.24.0/bouncer)
WARNING: psql major version 15, server major version 1.24.
         Some psql features might not work.
Type "help" for help.

pgbouncer=# \conninfo
You are connected to database "pgbouncer" as user "music" on host "127.0.0.1" at port "6432".
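To route an ordinary database session through pgbouncer instead, connect on port 6432 using one of the aliases defined in the [databases] section of pgbouncer.ini; with the configuration above, the alias "postgres" maps to the database musicdb:

psql -p 6432 -h 127.0.0.1 -U music -d postgres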