SELECT last_value FROM myschema.mysequence;
or
SELECT nextval('myschema.mysequence');
Note: nextval() advances the sequence — use last_value if you just want to inspect without side effects.
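On PostgreSQL 10 and later, the pg_sequences view offers another side-effect-free way to inspect a sequence (using the same schema and sequence names as above):

```sql
-- Side-effect-free inspection via the pg_sequences view (PostgreSQL 10+).
-- last_value is NULL if the sequence has never been used in this database.
SELECT schemaname, sequencename, last_value
FROM pg_sequences
WHERE schemaname = 'myschema'
  AND sequencename = 'mysequence';
```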
SELECT has_schema_privilege('myuser', 'myschema', 'USAGE');
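If the function returns f, the missing privilege can be granted first (by the schema owner or a superuser):

```sql
-- Allow myuser to reference objects in the schema
GRANT USAGE ON SCHEMA myschema TO myuser;
```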
If the reply is
 has_schema_privilege
----------------------
 t

you can log in:
psql -h localhost -d mydb -U myuser
and execute
\x
Expanded display is on.
SELECT
'TRIGGER' AS object_type,
t.tgname AS trigger_name,
n.nspname AS schema_name,
c.relname AS table_name,
-- Decode tgenabled letters
CASE t.tgenabled
WHEN 'O' THEN 'ENABLE'
WHEN 'A' THEN 'ENABLE ALWAYS'
WHEN 'R' THEN 'ENABLE REPLICA'
WHEN 'D' THEN 'DISABLED'
ELSE t.tgenabled
END AS enabled_state,
-- Trigger timing from tgtype
CASE
WHEN (t.tgtype & 2) <> 0 THEN 'BEFORE'
WHEN (t.tgtype & 64) <> 0 THEN 'INSTEAD OF'
ELSE 'AFTER'
END AS timing,
-- Events
(SELECT string_agg(event, ', ')
FROM (
SELECT unnest(
ARRAY[
CASE WHEN (t.tgtype & 4) <> 0 THEN 'INSERT' END,
CASE WHEN (t.tgtype & 8) <> 0 THEN 'DELETE' END,
CASE WHEN (t.tgtype & 16) <> 0 THEN 'UPDATE' END,
CASE WHEN (t.tgtype & 32) <> 0 THEN 'TRUNCATE' END
]
) AS event
) ev
) AS events,
pron.nspname || '.' || p.proname AS trigger_function
FROM pg_trigger t
JOIN pg_class c ON c.oid = t.tgrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_proc p ON p.oid = t.tgfoid
JOIN pg_namespace pron ON pron.oid = p.pronamespace
WHERE n.nspname = current_schema()
AND NOT t.tgisinternal
AND t.tgconstraint = 0
AND c.relkind <> 'p'
ORDER BY trigger_name;
Which will list your triggers:
-[ RECORD 1 ]----+------------------------------------------------------------------
object_type      | TRIGGER
trigger_name     | mytrg1
schema_name      | myschema
table_name       | mytable1
enabled_state    | ENABLE
timing           | BEFORE
events           | UPDATE
trigger_function | myschema.trigger_fct_upd_mytable1
-[ RECORD 2 ]----+------------------------------------------------------------------
object_type      | TRIGGER
trigger_name     | mytrg2
schema_name      | myschema
table_name       | mytable2
enabled_state    | ENABLE
timing           | AFTER
events           | DELETE
trigger_function | myschema.trg_close_fct
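With the triggers identified, individual triggers can be disabled and re-enabled per table (the table and trigger names below are taken from the listing above):

```sql
-- Disable one trigger on a table, then re-enable it
ALTER TABLE myschema.mytable1 DISABLE TRIGGER mytrg1;
ALTER TABLE myschema.mytable1 ENABLE TRIGGER mytrg1;

-- Or switch all of the table's triggers at once
ALTER TABLE myschema.mytable1 DISABLE TRIGGER ALL;
```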
SELECT thread#,
COUNT(*) AS cnt,
MIN(sequence#) AS min_seq,
MAX(sequence#) AS max_seq,
MIN(first_time) AS min_time,
MAX(next_time) AS max_time
FROM v$archived_log
WHERE deleted = 'NO'
GROUP BY thread#
ORDER BY thread#;
THREAD# CNT MIN_SEQ MAX_SEQ MIN_TIME MAX_TIME
__________ ______ __________ __________ ______________________ ______________________
1 80 7936 8015 2026-02-16 15:38:02 2026-02-18 14:15:42
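If the overview above is being used for housekeeping, the listed archived logs can be backed up and removed with RMAN. A minimal sketch — the one-time backup condition and the two-day window are examples, adjust them to your own backup policy:

```
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2';
```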
CREATE AUDIT POLICY monitor_sys_everything ACTIONS ALL;
AUDIT POLICY monitor_sys_everything BY SYS;

I wanted to drop the auditing policy:

SQL> noaudit policy monitor_sys_everything;

Noaudit succeeded.

SQL> drop audit policy monitor_sys_everything;
drop audit policy monitor_sys_everything
*
ERROR at line 1:
ORA-46361: Audit policy cannot be dropped as it is currently enabled.

Reason: the policy was enabled BY SYS, so it must be disabled with the matching BY SYS clause; a plain NOAUDIT POLICY does not disable it:

SQL> NOAUDIT POLICY monitor_sys_everything BY SYS;

It can now be dropped:

drop audit policy monitor_sys_everything;
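To confirm what is enabled before and after the NOAUDIT, the data dictionary can be queried. A sketch — note that the column names vary slightly between releases (ENABLED_OPTION and ENTITY_NAME shown here are from 12.2 onwards; older releases use ENABLED_OPT and USER_NAME):

```sql
-- List currently enabled unified audit policies
SELECT policy_name, enabled_option, entity_name
FROM   audit_unified_enabled_policies
ORDER  BY policy_name;
```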
text, varchar, bytea, jsonb, xml, numeric, etc.
TOAST will be applied when a row is too big to fit in an 8 kB page; large values are stored in a separate TOAST table, optionally compressed.
pg_config --configure | grep lz4
It will show you the long list of options that were used when PostgreSQL was built. Look for '--with-lz4'.
show default_toast_compression;
 default_toast_compression
---------------------------
 pglz
(1 row)

postgres=# ALTER SYSTEM SET default_toast_compression = 'lz4';
ALTER SYSTEM
postgres=# SELECT pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

postgres=# show default_toast_compression;
 default_toast_compression
---------------------------
 lz4

Optional: default to LZ4 for this session:
SET default_toast_compression = 'lz4';
Yet another option is to set LZ4 for a specific database:
ALTER DATABASE mydb SET default_toast_compression = 'lz4';
Create the table:
CREATE TABLE app_logs (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION lz4,
details jsonb COMPRESSION lz4
);
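On PostgreSQL 14 and later, pg_column_compression() shows which method was actually used for each stored value, which is handy for verifying the settings above:

```sql
-- NULL means the value is not compressed
-- (e.g. short enough to be stored inline)
SELECT log_id,
       pg_column_compression(message) AS message_comp,
       pg_column_compression(details) AS details_comp
FROM app_logs;
```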
Note: for an existing table, the compression method can be changed with:
ALTER TABLE app_logs
ALTER COLUMN message SET COMPRESSION lz4,
ALTER COLUMN details SET COMPRESSION lz4;
Note that ALTER TABLE ... SET COMPRESSION only affects values stored after the change; existing rows keep whatever compression they were written with. To actually recompress existing rows you need to cause a rewrite. Common options:
-- 1) Table rewrite (heavy, but clean)
ALTER TABLE app_logs SET (toast_tuple_target = 2040);  -- optional tweak
VACUUM FULL app_logs;

-- or 2) Cluster on some index (also rewrites)
CLUSTER app_logs USING app_logs_pkey;
ANALYZE app_logs;

Any bulk rewrite that re-stores the values themselves (incl. CREATE TABLE AS ..., INSERT INTO new SELECT ... FROM old) will store new TOAST values using the new compression method. Be aware that VACUUM FULL and CLUSTER may copy already-compressed values as-is, so a value-level rewrite is the reliable way to recompress existing data.
-- Main table vs TOAST table sizes
SELECT
relname,
pg_size_pretty(pg_relation_size(oid)) AS heap_size,
pg_size_pretty(pg_total_relation_size(oid)) AS total_with_indexes_toast
FROM pg_class
WHERE relname IN ('app_logs2','app_logs3','app_logs4');
-- Look at TOAST table directly
SELECT
c1.relname AS main_table,
c2.relname AS toast_table,
pg_size_pretty(pg_total_relation_size(c2.oid)) AS toast_total
FROM pg_class c1
JOIN pg_class c2 ON c1.reltoastrelid = c2.oid
WHERE c1.relname IN ('app_logs2','app_logs3','app_logs4');
In a simple test, I created three tables with three different compression directives and inserted one value long enough to ensure it was TOASTed:
CREATE TABLE app_logs2 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text,
details jsonb
);
CREATE TABLE app_logs3 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION lz4,
details jsonb COMPRESSION lz4
);
CREATE TABLE app_logs4 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text COMPRESSION pglz,
details jsonb COMPRESSION pglz
);
INSERT INTO app_logs2 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
INSERT INTO app_logs3 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
INSERT INTO app_logs4 (log_time, level, message, details)
VALUES (
now(),
'INFO',
repeat('x', 100000), -- make it large enough to be TOASTed
jsonb_build_object('k', repeat('y', 100000))
);
As expected, app_logs2 defaulted to lz4 (set globally):
SELECT
relname,
pg_size_pretty(pg_relation_size(oid)) AS heap_size,
pg_size_pretty(pg_total_relation_size(oid)) AS total_with_indexes_toast
FROM pg_class
WHERE relname IN ('app_logs2','app_logs3','app_logs4');
-- Look at TOAST table directly
SELECT
c1.relname AS main_table,
c2.relname AS toast_table,
pg_size_pretty(pg_total_relation_size(c2.oid)) AS toast_total
FROM pg_class c1
JOIN pg_class c2 ON c1.reltoastrelid = c2.oid
WHERE c1.relname IN ('app_logs2','app_logs3','app_logs4');
relname | heap_size | total_with_indexes_toast
-----------+------------+--------------------------
app_logs2 | 8192 bytes | 32 kB
app_logs3 | 8192 bytes | 32 kB
app_logs4 | 8192 bytes | 48 kB
(3 rows)
main_table | toast_table | toast_total
------------+------------------+-------------
app_logs2 | pg_toast_2510179 | 8192 bytes
app_logs3 | pg_toast_2510188 | 8192 bytes
app_logs4 | pg_toast_2510197 | 24 kB
(3 rows)
Remember, compression that comes from default_toast_compression does not show up in \d+; the Compression column is only filled in when the method was explicitly set in the column definition.
CREATE TABLE app_logs2 (
log_id bigserial PRIMARY KEY,
log_time timestamptz NOT NULL,
level text NOT NULL,
message text,
details jsonb
);
\d+ app_logs2
Table "myschema.app_logs2"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
----------+--------------------------+-----------+----------+-------------------------------------------+----------+-------------+--------------+-------------
log_id | bigint | | not null | nextval('app_logs2_log_id_seq'::regclass) | plain | | |
log_time | timestamp with time zone | | not null | | plain | | |
level | text | | not null | | extended | | |
message | text | | | | extended | | |
details | jsonb | | | | extended | | |
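The catalog makes this explicit: in pg_attribute (PostgreSQL 14+), attcompression is blank when the column has no per-column method and simply relies on the value of default_toast_compression at write time:

```sql
-- attcompression: '' (blank) = no per-column setting,
-- 'p' = pglz, 'l' = lz4
SELECT attname, attcompression
FROM pg_attribute
WHERE attrelid = 'app_logs2'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```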
./runInstaller -silent -detachHome ORACLE_HOME="/sw/oracle/product/19c29"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 6143 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
 /home/oracle/oraInventory/logs/DetachHome2026-01-25_12-51-17PM.log
'DetachHome' was successful.
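Should the home need to be registered in the inventory again later, the counterpart is -attachHome. A sketch — the home name used here is a hypothetical example:

```
./runInstaller -silent -attachHome ORACLE_HOME="/sw/oracle/product/19c29" \
    ORACLE_HOME_NAME="OraHome19c29"
```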
set arraysize 200
set load batch_rows 200
UNLOAD TABLE SCOTT.EMP DIR /datadisk1/export
Do not use a semicolon after the end of the command!