Minimalistic Oracle
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Wednesday, February 18, 2026
Useful query against v$archived_log in recovery situations
The following query summarizes the archived logs still on disk, per thread: how many there are, the sequence range they cover, and the time span they cover:

SELECT thread#,
       COUNT(*)        AS cnt,
       MIN(sequence#)  AS min_seq,
       MAX(sequence#)  AS max_seq,
       MIN(first_time) AS min_time,
       MAX(next_time)  AS max_time
FROM   v$archived_log
WHERE  deleted = 'NO'
GROUP  BY thread#
ORDER  BY thread#;

   THREAD#    CNT    MIN_SEQ    MAX_SEQ   MIN_TIME                MAX_TIME
__________ ______ __________ __________ ______________________ ______________________
         1     80       7936       8015  2026-02-16 15:38:02    2026-02-18 14:15:42
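In a recovery situation it is just as important to know whether that sequence range is contiguous. Here is a minimal sketch of such a check, using LEAD() over v$archived_log to list any holes in the archived log sequences per thread; it deliberately ignores multiple archive destinations and earlier incarnations, so treat it as a starting point:

-- Find holes in the archived log sequences, per thread
SELECT thread#,
       sequence# + 1 AS gap_starts_at,
       next_seq  - 1 AS gap_ends_at
FROM  (SELECT thread#,
              sequence#,
              LEAD(sequence#) OVER (PARTITION BY thread# ORDER BY sequence#) AS next_seq
       FROM   v$archived_log
       WHERE  deleted = 'NO')
WHERE  next_seq > sequence# + 1;

If this returns no rows, the logs listed above form an unbroken chain from MIN_SEQ to MAX_SEQ.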
Wednesday, February 4, 2026
Dropping an audit policy for a specific user
You have the following audit policy:
CREATE AUDIT POLICY monitor_sys_everything ACTIONS ALL;
AUDIT POLICY monitor_sys_everything BY SYS;

I wanted to drop the auditing policy:

SQL> noaudit policy monitor_sys_everything;
Noaudit succeeded.

SQL> drop audit policy monitor_Sys_everything;
drop audit policy monitor_Sys_everything
*
ERROR at line 1:
ORA-46361: Audit policy cannot be dropped as it is currently enabled.

Reason:
The policy was enabled for a specific user (SYS). The plain "noaudit policy" statement used above only disables the policy where it was enabled for all users; it does not disable it for the user it was explicitly enabled for.
Solution:
Disable it for the specific user:
SQL> NOAUDIT POLICY monitor_sys_everything BY SYS;

It can now be dropped:
drop audit policy monitor_sys_everything;
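If you are unsure how, or for whom, a policy is enabled, the data dictionary will tell you. A minimal check against AUDIT_UNIFIED_ENABLED_POLICIES (the exact column names, such as ENTITY_NAME vs USER_NAME, vary a little between releases, so I simply select everything here):

SELECT *
FROM   audit_unified_enabled_policies
WHERE  policy_name = 'MONITOR_SYS_EVERYTHING';

Once the policy no longer shows up in this view, the drop will succeed.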
Thursday, January 29, 2026
Compression for Postgres tables
In PostgreSQL you don’t create a “compressed table” in the Oracle sense. Rather, compression is applied per column via TOAST, plus (if desirable) a few extension/filesystem tricks.
A few things to keep in mind:
- COMPRESSION lz4 / COMPRESSION pglz is a column option.
- It only matters for TOAST-able types; it won’t change anything for integer, date, etc.
- Compression only happens when the row gets large enough for TOAST to kick in (roughly when a row exceeds ~2 kB); see the small demo after this list.
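To see the last point in action, here is a tiny demo (PostgreSQL 14 or newer; the table name toast_demo is just a throwaway name for this illustration). pg_column_compression() returns NULL for values that were never compressed, and the algorithm name for values TOAST decided to compress:

CREATE TABLE toast_demo (short_val text, long_val text);

INSERT INTO toast_demo
VALUES ('hello', repeat('x', 10000));  -- the second value is large enough to trigger TOAST

SELECT pg_column_compression(short_val) AS short_val_compression,  -- NULL: stored inline, uncompressed
       pg_column_compression(long_val)  AS long_val_compression    -- pglz or lz4, depending on the default
FROM   toast_demo;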
Postgres compresses large variable-length columns using TOAST:
text, varchar, bytea, jsonb, xml, numeric, etc. TOAST is applied when a row is too big to fit in an 8 kB page; large values are stored in a separate TOAST table, optionally compressed.
There is no built-in heap/row compression for normal fixed-width columns (e.g. integer, bigint) in vanilla Postgres.
From PostgreSQL 14 onward you can choose compression algorithm per column: pglz (classic) or lz4 (faster, generally preferred).
Here is how I did it in my test environment:
First, ensure your PostgreSQL server was compiled with LZ4 support:

pg_config --configure | grep lz4

This shows the long list of options that were used when PostgreSQL was built. Look for '--with-lz4'.
Set compression globally:
postgres=# show default_toast_compression;
 default_toast_compression
----------------------------
 pglz
(1 row)

postgres=# ALTER SYSTEM SET default_toast_compression = 'lz4';
ALTER SYSTEM
postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show default_toast_compression;
 default_toast_compression
----------------------------
 lz4

Optional: default to LZ4 for this session only:
SET default_toast_compression = 'lz4';

Yet another option is to set LZ4 for a specific database:
ALTER DATABASE mydb SET default_toast_compression = 'lz4';

Create the table:
CREATE TABLE app_logs (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION lz4,
details jsonb COMPRESSION lz4
);
Note: You can switch existing columns to LZ4 (or back to pglz):
ALTER TABLE app_logs
ALTER COLUMN message SET COMPRESSION lz4,
ALTER COLUMN details SET COMPRESSION lz4;
Note that this ALTER TABLE only changes compression for future TOAST entries. To actually recompress existing rows you need to cause a rewrite. Common options:
-- 1) Table rewrite (heavy, but clean)
ALTER TABLE app_logs SET (toast_tuple_target = 2040);  -- optional tweak
VACUUM FULL app_logs;

-- or 2) Cluster on some index (also rewrites)
CLUSTER app_logs USING app_logs_pkey;
ANALYZE app_logs;

Any bulk rewrite (incl. CREATE TABLE AS ..., INSERT INTO new SELECT ... FROM old) will store new TOAST values using the new compression method.
Check that the table is using column compression for TOAST values:
-- Main table vs TOAST table sizes
SELECT
relname,
pg_size_pretty(pg_relation_size(oid)) AS heap_size,
pg_size_pretty(pg_total_relation_size(oid)) AS total_with_indexes_toast
FROM pg_class
WHERE relname IN ('app_logs2','app_logs3','app_logs4');
-- Look at TOAST table directly
SELECT
c1.relname AS main_table,
c2.relname AS toast_table,
pg_size_pretty(pg_total_relation_size(c2.oid)) AS toast_total
FROM pg_class c1
JOIN pg_class c2 ON c1.reltoastrelid = c2.oid
WHERE c1.relname IN ('app_logs2','app_logs3','app_logs4');
In a simple test, I created three tables with three different compression directives and inserted one value long enough to make sure it was TOASTed:
CREATE TABLE app_logs2 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text,
details jsonb
);
CREATE TABLE app_logs3 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION lz4,
details jsonb COMPRESSION lz4
);
CREATE TABLE app_logs4 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION pglz,
details jsonb COMPRESSION pglz
);
INSERT INTO app_logs2 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
INSERT INTO app_logs3 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
INSERT INTO app_logs4 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
As expected, app_logs2 (which has no explicit COMPRESSION clause) defaulted to lz4 via the global setting. Running the size queries shown above:
relname | heap_size | total_with_indexes_toast
-----------+------------+--------------------------
app_logs2 | 8192 bytes | 32 kB
app_logs3 | 8192 bytes | 32 kB
app_logs4 | 8192 bytes | 48 kB
(3 rows)
main_table | toast_table | toast_total
------------+------------------+-------------
app_logs2 | pg_toast_2510179 | 8192 bytes
app_logs3 | pg_toast_2510188 | 8192 bytes
app_logs4 | pg_toast_2510197 | 24 kB
(3 rows)
Remember, per-column compression coming from default_toast_compression does not show up in \d+ unless it was explicitly set in the column definition. So even though table app_logs2 uses compression for TOASTable columns, it does not reveal this fact when being described:
CREATE TABLE app_logs2 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text,
details jsonb
);
\d+ app_logs2
Table "myschema.app_logs2"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
----------+--------------------------+-----------+----------+-------------------------------------------+----------+-------------+--------------+-------------
log_id | bigint | | not null | nextval('app_logs2_log_id_seq'::regclass) | plain | | |
log_time | timestamp with time zone | | not null | | plain | | |
level | text | | not null | | extended | | |
message | text | | | | extended | | |
details | jsonb | | | | extended | | |
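If you want to see what is actually in effect, the catalogs are more helpful than \d+ (PostgreSQL 14 or newer). pg_attribute.attcompression shows the compression method configured per column (empty means "fall back to default_toast_compression"), and pg_column_compression() shows what was really used for the values already stored:

-- Configured compression per column (empty = fall back to default_toast_compression)
SELECT attname, attcompression
FROM   pg_attribute
WHERE  attrelid = 'myschema.app_logs2'::regclass
AND    attnum > 0
AND    NOT attisdropped;

-- Compression actually used for the stored values
SELECT pg_column_compression(message) AS message_compression,
       pg_column_compression(details) AS details_compression
FROM   app_logs2;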
Sunday, January 25, 2026
How to detach an ORACLE_HOME from an existing inventory
I want to detach an existing oracle_home from my inventory.
Note that the last argument must be the full path to the Oracle software, NOT the Oracle home name found in the inventory:
./runInstaller -silent -detachHome ORACLE_HOME="/sw/oracle/product/19c29"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
/home/oracle/oraInventory/logs/DetachHome2026-01-25_12-51-17PM.log
'DetachHome' was successful.
Thursday, November 20, 2025
How to unload rows in an Oracle database to disk using sqlcl
In sqlcl, execute these lines to unload SCOTT.EMP to disk:
set arraysize 200
set load batch_rows 200
UNLOAD TABLE SCOTT.EMP DIR /datadisk1/export

Do not use a semicolon at the end of the command!
Find the total size of all databases in a PostgreSQL cluster
SELECT database, size FROM (
SELECT datname AS database,
pg_size_pretty(pg_database_size(datname)) AS size,
0 AS sort_order
FROM pg_database
UNION ALL
SELECT 'TOTAL',
pg_size_pretty(SUM(pg_database_size(datname))),
1
FROM pg_database
) AS sub
ORDER BY sort_order, size DESC;
Example output:
database | size
----------------------------------+---------
mydb01 | 7819 kB
mydb02 | 7795 kB
postgres | 7739 kB
template0 | 7731 kB
template1 | 7715 kB
proddb01 | 76 GB
proddb02 | 2971 GB
proddb03 | 22 GB
warehouse01 | 11 TB
testdb01 | 106 MB
TOTAL | 14 TB
(11 rows)
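Note that ORDER BY size DESC sorts the pretty-printed text, not the underlying byte counts, which is why the larger databases do not come first in the output above. A small variant that orders on pg_database_size() directly lists the databases in true size order (the grand total can still be read from the TOTAL row of the query above):

SELECT datname AS database,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM   pg_database
ORDER  BY pg_database_size(datname) DESC;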
Monday, November 17, 2025
Find the number of columns in a table in PostgreSQL
SELECT count(*) FROM information_schema.columns WHERE table_name = 'mytable' AND table_schema = 'jim';
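A catalog-based alternative (assuming the same schema and table names as above) goes straight to pg_attribute and explicitly excludes system columns and dropped columns:

SELECT count(*)
FROM   pg_attribute
WHERE  attrelid = 'jim.mytable'::regclass
AND    attnum > 0
AND    NOT attisdropped;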