Thursday, June 5, 2025

PostgreSQL search_path basics

Similar to Oracle's alter session set current_schema=<schema_name>, PostgreSQL's search_path is a session-level setting that determines the order in which schemas are searched when you reference database objects without a schema name. It consists of a list of schema names. When you run a query like
SELECT * FROM mytable;
PostgreSQL checks each schema in the list — in order — to find mytable.

The default value is:
 "$user", public 
This means:
  1. Look for a schema named after the current user.
  2. If not found or not accessible, try the public schema.

How to view and set it

Check your current search_path:
 SHOW search_path; 
Set it for the session:
 SET search_path TO schema1, public; 
or
 SET search_path TO schema1, schema2, public; 
The last example is important when you log on to a database as a user that has no schema of its own. Consider the following example:
 psql -h server1.oric.no -U jim -d musicdb
I am logging on to the database "musicdb" as a user called "jim". By default, jim will have a schema named after his own user, followed by public, in his search_path:
musicdb=> show search_path;
   search_path
-----------------
 "$user", public
(1 row)
I have already given user jim the privileges needed to "see" the objects in the schema "music", which exists in the database "musicdb".

For convenience, add schema "music" to the search_path:
musicdb=> set search_path to 'music','$user','public';
SET
musicdb=> show search_path;
          search_path
-------------------------------
 music, "$user", public
(1 row)
The "current_user" and "current_schema" functions will now return the actual user name and the first matching schema in the search_path, respectively:
musicdb=> select current_user, current_schema;
 current_user | current_schema
--------------+----------------
 jim          | music
(1 row)
Why is it important?
It controls where PostgreSQL looks first for unqualified object names, and it allows you to skip schema prefixes when working with objects in other schemas.
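Note that SET search_path only lasts for the current session. To make a setting like the one above stick for a given user, it can be stored on the role; a sketch using the role and schema names from the example above:

```sql
-- Give user jim a persistent search_path in database musicdb;
-- new sessions will pick it up automatically:
ALTER ROLE jim IN DATABASE musicdb SET search_path = music, "$user", public;

-- Revert to the installation default if needed:
-- ALTER ROLE jim IN DATABASE musicdb RESET search_path;
```

The setting takes effect for new sessions only; already-connected sessions keep their current search_path.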

How to limit connections in a postgres database

This is how you can restrict connections for a specific database to zero:
postgres=# alter database db01 connection limit 0;
ALTER DATABASE
Verify
SELECT datname, datconnlimit FROM pg_database WHERE datname = 'db01';
-[ RECORD 1 ]+-------------
datname      | db01
datconnlimit | 0
Set back to unlimited:
ALTER DATABASE db01 CONNECTION LIMIT -1;
ALTER DATABASE
Verify:
SELECT datname, datconnlimit FROM pg_database WHERE datname = 'db01';
-[ RECORD 1 ]+-------------
datname      | db01
datconnlimit | -1
To limit the connections for a specific user only:
 psql
psql (15.13)
Type "help" for help.

postgres=# alter user read_db01 connection limit 0;
ALTER ROLE
postgres=# alter user read_db01 connection limit -1;
ALTER ROLE
postgres=#
The current setting can be verified with:
 SELECT rolname, rolconnlimit
FROM pg_roles
WHERE rolname = 'read_db01';
      rolname      | rolconnlimit
-------------------+--------------
 read_db01         |           -1
or, list all users that have restrictions on the number of connections:
 SELECT rolname, rolconnlimit
FROM pg_roles
WHERE rolconnlimit != -1;
      rolname      | rolconnlimit
-------------------+--------------
 pganalyze         |            5
 read_db01         |            0
(2 rows)
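Note that datconnlimit only blocks new connections; sessions that are already connected are unaffected. A sketch for terminating any remaining sessions in db01 after setting the limit to 0 (assumes superuser or the pg_signal_backend role):

```sql
-- Kick out every session currently connected to db01, except our own:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'db01'
  AND pid <> pg_backend_pid();
```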

create user in postgres - basic syntax

I will create

  • a group role called common_users
  • a user called ro_user1
  • in a database called db01

    The read-only user ro_user1 should be able to perform queries against all tables in the schema schema1.

    First, create the role common_users by logging onto the postgres (default) database
    psql
    
    CREATE ROLE common_users WITH
      NOLOGIN
      NOSUPERUSER
      INHERIT
      NOCREATEDB
      NOCREATEROLE
      NOREPLICATION
      NOBYPASSRLS;
    
    GRANT pg_read_all_stats TO common_users;
    
    Then, create the user ro_user1:
    create user ro_user1 password 'mysecretpassword';
    grant common_users to ro_user1;
    grant connect on database db01 to ro_user1;
    
    Log into the database db01 and revoke and grant some privileges:
    psql
    \connect db01
    revoke all on schema schema1 from ro_user1;
    grant usage on schema schema1 to ro_user1;
    grant select on all tables in schema schema1 to ro_user1;
    
    Confirm the privileges (database_privs is a custom helper function, not a built-in):
    \connect postgres
    select database_privs('ro_user1');
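Note that GRANT SELECT ON ALL TABLES only covers the tables that exist at the time of the grant. To make tables created later readable as well, default privileges can be set up; a sketch, assuming new objects in schema1 are created by a role also named schema1:

```sql
-- Future tables created by role schema1 in schema schema1
-- will automatically be readable by ro_user1:
ALTER DEFAULT PRIVILEGES FOR ROLE schema1 IN SCHEMA schema1
    GRANT SELECT ON TABLES TO ro_user1;
```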
    
    Tuesday, June 3, 2025

    PostgreSQL Memory Parameters and how they relate

    Here are some of the important memory parameters in a PostgreSQL server, and how they relate to one another.

    shared_buffers

    The amount of memory the database server uses for shared memory buffers.

    The PostgreSQL documentation suggests starting by allocating 25% of the available memory to the shared database memory pool:

    If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system... because PostgreSQL also relies on the operating system cache, it is unlikely that an allocation of more than 40% of RAM to shared_buffers will work better than a smaller amount.

    It also points out the necessity of considering database checkpointing:

    Larger settings for shared_buffers usually require a corresponding increase in max_wal_size, in order to spread out the process of writing large quantities of new or changed data over a longer period of time.

    max_wal_size

    Controls the maximum size the Write-Ahead Log (WAL) files can grow to before triggering a checkpoint. Checkpoints are relatively expensive operations, so we do not want them to occur too often. On the other hand, too infrequent checkpointing may increase recovery times. max_wal_size can be set to balance performance and recovery time by influencing how often checkpoints occur.

    work_mem

    Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files.

    The documentation points out that a complex query might perform several sort and hash operations at the same time, with each operation generally being allowed to use as much memory as this value specifies before it starts to write data into temporary files, and that several running sessions could be executing such operations at the same time. So even if the 6MB specified as its value below does not seem like much, it could mean significant memory usage on a busy system.

    It is similar to pga_aggregate_target in an Oracle database: the amount of memory set aside for all private global areas on the server. After the introduction of that parameter in Oracle 9i, the older private global area parameters, for example sort_area_size and hash_area_size, were no longer necessary.

    maintenance_work_mem

    Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

    It can be set higher than work_mem: Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps.

    autovacuum_work_mem

    Specifies the maximum amount of memory to be used by each autovacuum worker process. If this value is specified without units, it is taken as kilobytes. It defaults to -1, indicating that the value of maintenance_work_mem should be used instead. The setting has no effect on the behavior of VACUUM when run in other contexts.

    Here is a table with my settings for my 16 GB memory server:

    Parameter              Value
    --------------------   -----
    shared_buffers         4GB
    max_wal_size           8GB
    work_mem               6MB
    maintenance_work_mem   479MB


    Source: PostgreSQL documentation
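The effective values of the parameters above can be checked from within the database, for example with a quick look at pg_settings:

```sql
-- Show the memory parameters discussed in this post, with their units
-- and where each value came from (default, configuration file, etc.):
SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_wal_size', 'work_mem',
               'maintenance_work_mem', 'autovacuum_work_mem');
```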

    How to setup huge pages on a postgreSQL server

    The Postgres Documentation states:

    The use of huge pages results in smaller page tables and less CPU time spent on memory management, increasing performance.

    This is how I set up huge pages on one of my PostgreSQL servers.

    First, check the server's huge page size. My huge pages are 2M each:
     grep Hugepagesize /proc/meminfo
    Hugepagesize:       2048 kB
    
    On my 16 GB server, I would like to start by making 25% of the total memory available for shared_buffers:
    su - postgres
    sudo systemctl stop postgresql-15.service
    postgres --shared-buffers=4096MB -D $PGDATA -C shared_memory_size_in_huge_pages
    2102
    
    Update /etc/sysctl.conf. Add
    vm.nr_hugepages=2102
    
    Reload the kernel settings:
    sysctl -p
    
    Now, tell the postgres server to use huge pages, if they are available.

    Add the following directive to the config file postgresql.conf
    huge_pages = try
    
    Add the following lines to /etc/security/limits.conf so that postgres is allowed to lock the memory set aside for huge pages:
    postgres soft memlock unlimited
    postgres hard memlock unlimited
    
    Reboot the server and verify that huge pages are being used:
    cat /proc/meminfo | grep Huge
    
    Interestingly, the parameter huge_page_size should only be used if you wish to override the default huge page size on your system. It defaults to zero (0), which means the system's default huge page size is used. In my case that is exactly what I want, so the parameter can be left alone.
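The huge page settings can also be verified from inside the database (shared_memory_size_in_huge_pages requires PostgreSQL 15 or later):

```sql
-- huge_pages: the policy (try/on/off)
-- huge_page_size: 0 means "use the system default"
-- shared_memory_size_in_huge_pages: number of huge pages the server needs
SELECT name, setting
FROM pg_settings
WHERE name IN ('huge_pages', 'huge_page_size',
               'shared_memory_size_in_huge_pages');
```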

    Friday, May 23, 2025

    How to find constraints on a table in postgres



    SELECT
        con.conname AS constraint_name,
        CASE con.contype
            WHEN 'p' THEN 'PRIMARY KEY'
            WHEN 'u' THEN 'UNIQUE'
            WHEN 'f' THEN 'FOREIGN KEY'
            WHEN 'c' THEN 'CHECK'
            WHEN 'x' THEN 'EXCLUSION'
            ELSE con.contype
        END AS constraint_type,
        rel.relname AS table_name,
        pg_get_constraintdef(con.oid) AS definition
    FROM
        pg_constraint con
    JOIN
        pg_class rel ON rel.oid = con.conrelid
    JOIN
        pg_namespace nsp ON nsp.oid = rel.relnamespace
    WHERE
        nsp.nspname = 'owner'
        AND rel.relname = 'table_name'
    ORDER BY
        constraint_name;
    

    Thursday, May 22, 2025

    PostgreSQL: difference between \copy and COPY

    If you put a statement like this in a file:
    COPY at_locations
    FROM '/external/data/geolocations_at_fixed.csv'
    WITH (
        FORMAT csv,
        DELIMITER ';',
        NULL 'NULL',
        HEADER false
    );
    
    and execute it like this:
    psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet
    vinmonopolet=> \i cp_at.sql
    
    You will see the error
    ERROR:  must be superuser or have privileges of the pg_read_server_files role to COPY from a file
    HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
    
    This is because the COPY command runs server-side: the PostgreSQL server expects to be able to access the file, at the given path, on the server itself.

    This in turn requires that the user executing the COPY command is a superuser or has been granted the pg_read_server_files role.

    As the error message says, we could use the client-side command \copy instead. Put this in the same script
    \copy vinmonopolet.at_locations FROM '/external/data/geolocations_at_fixed.csv' WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)
    
    and execute either with the \i as above, or use the -f flag directly at the prompt:
    psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet -f cp_at.sql
    Password for user vinmonopolet:
    COPY 5
    
    This works because the \copy command is interpreted client-side: psql reads the file from your local machine (in my case, the "local machine" was indeed the postgres server, but the point stands) and pipes the content into the database. It succeeds because no elevated privileges are needed for this method.
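A third pattern worth knowing: COPY ... FROM STDIN also runs server-side, but reads the data stream from the client connection, so it needs no file-access privileges either. A sketch reusing the file and connection details from the example above:

```shell
# Stream the csv file from the client into the server-side COPY:
cat /external/data/geolocations_at_fixed.csv | \
  psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet \
       -c "COPY at_locations FROM STDIN WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)"
```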

    Wednesday, April 30, 2025

    Cleanup of the ADR

    Sometimes you may want to clean up your ADR (Automatic Diagnostic Repository). I ran into this problem shortly after creating a new database called "db01" on my server.

    In my case, I had the following ADR homes set:

    ADR Homes:
    diag/rdbms/db01/db01
    diag/rdbms/db02/db02
    diag/rdbms/db03/db03
    diag/rdbms/db04/db04
    diag/rdbms/db05/db05
    diag/clients/user_oracle/RMAN_610688766_110
    diag/clients/user_oracle/host_610688766_110
    diag/clients/user_oracle/host_610688766_82
    diag/tnslsnr/dbhost1/listener
    diag/tnslsnr/dbhost1/mylsnr
    diag/tnslsnr/dbhost1/testlistener
    
    This caused operations against ADR to fail with
    ADR HOME selected:  diag/rdbms/db01/db01 diag/rdbms/db02/db02 diag/rdbms/db03/db03 diag/rdbms/db04/db04 
    DIA-48415: Syntax error found in string [diag/rdbms/db01/db01] at column [26]
    
    DIA-48415: Syntax error found in string [diag/rdbms/db02/db02] at column [26]
    
    DIA-48415: Syntax error found in string [diag/rdbms/db03/db03] at column [26]
    
    DIA-48415: Syntax error found in string [diag/rdbms/db04/db04] at column [26]
    
    Syntax error
    
    Solution: clean out the old folder physically:
    cd /u01/oracle/diag/rdbms
    rm -rf db02 db03 db04 db05
    
    cd /u01/oracle/diag/tnslsnr/dbhost1
    rm -rf mylsnr testlistener
    
    cd /u01/oracle/diag/clients/user_oracle
    rm -rf *
    
    Afterwards, check your adrci settings again. They should look better now:
    ADR base = "/u01/oracle"
    adrci> show homes
    ADR Homes:
    diag/rdbms/db01/db01
    diag/clients/user_oracle/RMAN_610688766_110
    diag/tnslsnr/dbhost1/listener
    

    Friday, April 25, 2025

    List folders only - no files

    For future quick reference:

    I prefer:
    ls -d */
    
    dir1/      dir2/       dir3/   dir4/
    
    Also good:
    ls -l | grep "^d"
    
    drwxr-xr-x 4 root   root           4096 Oct 13  2022 dir1
    drwxr-xr-x 2 root   root           4096 Mar 14 11:17 dir2
    drwxr-xr-x 2 root   root           4096 Mar 26 15:51 dir3
    drwxr-xr-x 2 root   root           4096 Apr 27  2021 dir4
    
    A third option:
    ls -F | grep "/$"
     
    dir1/
    dir2/
    dir3/
    dir4/
    

    Arguments that can be passed to "dbca -createCloneTemplate" (usage output)

    dbca -createCloneTemplate
       -sourceSID source_database_sid | -sourceDB source_database_name
       -templateName new_database_template_name
       [-promptForWalletPassword]
       [-rmanParallelism parallelism_integer_value]
       [-maxBackupSetSizeInMB maximum_backup_set_size_in_MB]
       [-dataFileBackup {true | false}]
       [-datafileJarLocation data_files_backup_directory]
       [-sysDBAUserName SYSDBA_user_name]
       [-sysDBAPassword SYSDBA_user_password]
       [-useWalletForDBCredentials {true | false}
           -dbCredentialsWalletPassword wallet_account_password
           -dbCredentialsWalletLocation wallet_files_directory]
       [-uploadToCloud 
           -opcLibPath OPC_library_path
           -opcConfigFile OPC_configuration_file_name
        [-rmanEncryptionPassword rman_encryption_password ]]
       [-compressBackup { true | false } ]
       [-walletPassword database_wallet_password]
    
    The full reference can be found in the Oracle documentation.

    What is the reason behind the message "xauth: file /root/.Xauthority does not exist"?

    What does the message that you sometimes see when attempting to launch X applications on a Linux server
    "xauth:  file /root/.Xauthority does not exist"
    
    really mean?

    It means that the .Xauthority file, which stores X11 authentication cookies, does not yet exist in your home directory (in this case, /root/).

    This is common when:

  • X11 forwarding has not been set up or used yet.
  • You are logging in as root and have not initialized X11 in that session.
  • The file was deleted or never created.

    To work around the issue, follow these steps:

    1. Make sure xauth is installed
    # RHEL/CentOS
    su - 
    yum install xauth
    
    # Debian
    apt install xauth
    
    2. Make sure you use the -X flag (X forwarding) when connecting to the server
    ssh -X root@yourserver
    
    3. On the server, edit the file /etc/ssh/sshd_config, and make sure these lines exist and are set to yes
    X11Forwarding yes
    X11UseLocalhost yes
    
    4. Restart ssh daemon
    systemctl restart sshd
    
    5. Try launching a graphical app, like xterm

    If these steps work, your X11 forwarding is properly set up, and .Xauthority will be created automatically.

    6. If you need to manually create the .Xauthority file

    This is rarely necessary, but you can run:
    touch ~/.Xauthority
    xauth generate :0 . trusted
    
    Wednesday, April 23, 2025

    How to create a database with non-default blocksize using dbca

    Prerequisites:
  • You have a valid Oracle 19c installation on a server
  • You have already set your Oracle environment, so that your PATH, ORACLE_SID etc. are in place when you execute your script

    1. create a response file called $ORACLE_SID.rsp, in this case, tstdwh1.rsp:
    responseFileVersion=/oracle/assistants/rspfmt_dbca_response_schema_v12.2.0
    gdbName=tstdwh1.oric.no
    sid=tstdwh1
    databaseConfigType=SI
    policyManaged=false
    createServerPool=false
    force=false
    createAsContainerDatabase=false
    templateName=/u01/oracle/product/19c/assistants/dbca/templates/New_Database.dbt
    sysPassword=mysecretpassword
    systemPassword=mysecretpassword
    datafileJarLocation={ORACLE_HOME}/assistants/dbca/templates/
    datafileDestination=/data02/oradata/{DB_UNIQUE_NAME}/
    recoveryAreaDestination=/data04/fra/{DB_UNIQUE_NAME}
    storageType=FS
    characterSet=AL32UTF8
    nationalCharacterSet=AL16UTF16
    registerWithDirService=false
    listeners=LISTENER
    skipListenerRegistration=false
    variables=ORACLE_BASE_HOME=/u01/oracle/product/19c,DB_UNIQUE_NAME=tstdwh1,ORACLE_BASE=/u01/oracle,PDB_NAME=,DB_NAME=tstdwh1,ORACLE_HOME=/u01/oracle/product/19c,SID=tstdwh1
    initParams=undo_tablespace=UNDOTBS1,sga_target=4710MB,db_block_size=32768BYTES,nls_language=NORWEGIAN,dispatchers=(PROTOCOL=TCP) (SERVICE=tstdwh1XDB),diagnostic_dest={ORACLE_BASE},control_files=("{ORACLE_BASE}/oradata/{DB_UNIQUE_NAME}/control01.ctl", "/u04/fra/{DB_UNIQUE_NAME}/control02.ctl"),remote_login_passwordfile=EXCLUSIVE,audit_file_dest={ORACLE_BASE}/admin/{DB_UNIQUE_NAME}/adump,processes=400,pga_aggregate_target=1570MB,nls_territory=NORWAY,local_listener=LISTENER_TSTDWH1,db_recovery_file_dest_size=7851MB,open_cursors=300,log_archive_format=%t_%s_%r.dbf,db_domain=oric.no,compatible=19.0.0,db_name=tstdwh1,db_recovery_file_dest=/u04/fra/{DB_UNIQUE_NAME},audit_trail=db
    sampleSchema=false
    memoryPercentage=40
    automaticMemoryManagement=false
    
    Notice the parameter db_block_size=32768BYTES in the initParams directive, and the fact that I am using the New_Database.dbt template. This template is more flexible than the others delivered by Oracle, but makes the whole database creation process take more time, as it does not include any database files.

    Also notice the absence of the directive "databaseType" - it is only meaningful if you let DBCA choose a built-in template based on that type (like OLTP or Data Warehouse).

    2. execute the dbca directly in the shell:
    dbca -createDatabase -responsefile ./${ORACLE_SID}.rsp -silent 
    
    Or, create a wrapper around the command, make it executable and execute it in the background:
    #!/bin/bash
    start=$(date +%s.%N)
    export GLOGFILE=run_dbca.log
    touch ${GLOGFILE}
    chmod 666 ${GLOGFILE}
    exec 1> ${GLOGFILE} 2>&1
    echo "Now running run_dbca.sh"
    echo "Resetting oratab..."
    echo '' > /etc/oratab
    if [ $? -ne 0 ]; then
     echo "Could not erase oratab. Exit."
     exit 1
    else
     echo "oratab erased. Continuing..."
    fi
    export instanceAlive=`ps -ef| grep pmon | grep -v grep |awk '{ print $8}' | cut -f 3 -d"_"`
    
    if [ ! -z ${instanceAlive} ]; then
     echo "Instance for database $ORACLE_SID is already running. Shut it down first"
     exit 1
    fi
    dbca -createDatabase -responsefile ./${ORACLE_SID}.rsp -silent
    echo "Finished running run_dbca.sh"
    dur=$(echo "$(date +%s.%N) - $start" | bc)
    printf "Total execution time for run_dbca.sh: %.6f seconds\n" $dur
    exit 0
    
    chmod 755 run_dbca.sh
    nohup ./run_dbca.sh &
    
    Friday, April 11, 2025

    Solution to "ORA-12546: TNS:permission denied" when starting sqlplus

    Problem: when trying to start sqlplus, the error below is thrown:
    ORA-12546: TNS:permission denied
    
    Cause: Incorrect permissions on the oracle binary.

    It needs the setuid bit set. Without it, the oracle binary cannot open sockets or shared memory properly, and ORA-12546 is thrown when trying to connect via sqlplus.

    Solution: Relink the oracle binary
    oracle@myserver:[cdb]# ls -l $ORACLE_HOME/bin/oracle
    -rw-r--r-- 1 oracle dba 0 Jan 24 09:57 /u01/oracle/product/19c/bin/oracle
    oracle@myserver:[cdb]# cd $ORACLE_HOME/rdbms/lib
    oracle@myserver:[cdb]# make -f ins_rdbms.mk ioracle
    
    Afterwards, permissions are correct:
    oracle@myserver:[cdb]# ls -l $ORACLE_HOME/bin/oracle
    -rwsr-s--x 1 oracle dba 462603312 Apr 11 10:54 /u01/oracle/product/19c/bin/oracle
    

    Wednesday, April 9, 2025

    How to mimic sqlplus variable behaviour in PostgreSQL

    Here is how you can mimic the use of variables at the prompt using psql and PostgreSQL, like you would with Oracle and sqlplus/sqlcl:
    \connect mydb
    \prompt 'Enter value for querid: ' v_quid
    \echo You entered: :v_quid
    SELECT query_sql_text FROM query_store.query_texts_view WHERE query_text_id = :'v_quid';
    
    Run it:
    psql -h pgserver1.oric.no -U dbadmin -d postgres -f find_rest_of_statement.sql
    
    Enter value for querid: -7644745457188850826
    You entered: -7644745457188850826
                                                        query_sql_text                                                    
    ----------------------------------------------------------------------------------------------------------------------
          THE QUERY TEXT WILL SHOW UP HERE :-)                                                                                                    
    
    

    Wednesday, April 2, 2025

    How to check the number of rows inserted into the different partitions in a PostgreSQL partitioned table

    Create the table:
    CREATE TABLE mytable
    (
        id bigserial,
        key text COLLATE pg_catalog."default" NOT NULL,
        outdata jsonb,
        partno integer,
        koffset bigint,
        handled_at timestamp with time zone,
        inserted_at timestamp with time zone NOT NULL DEFAULT now(),
        kyear integer NOT NULL, -- Ensure it's NOT NULL to work properly with partitions
        nk boolean NOT NULL,
        CONSTRAINT mytable_pkey PRIMARY KEY (id,kyear)
    )
    PARTITION BY RANGE (kyear);
    
    Create the partitions. Make them suitable for ranges that follow the YYYYMM format:
    CREATE TABLE mytable_202501
    PARTITION OF mytable
    FOR VALUES FROM (202501) TO (202502);
    
    CREATE TABLE mytable_202502
    PARTITION OF mytable
    FOR VALUES FROM (202502) TO (202503);
    
    Insert some test values:
    INSERT INTO mytable (key, outdata, partno, koffset, handled_at, kyear, nk)
    VALUES
        ('A123', '{"data": "test1"}', 101, 500, NOW(), 202501, TRUE),
        ('B456', '{"data": "test2"}', 102, 600, NOW(), 202501, FALSE),
        ('C789', '{"data": "test3"}', 103, 700, NOW(), 202501, TRUE);
    
    INSERT INTO mytable (key, outdata, partno, koffset, handled_at, kyear, nk)
    VALUES
        ('D111', '{"data": "test4"}', 104, 800, NOW(), 202502, FALSE),
        ('E222', '{"data": "test5"}', 105, 900, NOW(), 202502, TRUE),
        ('F333', '{"data": "test6"}', 106, 1000, NOW(), 202502, FALSE);
    
    Confirm that the data is indeed there:
    select * from mytable;
    
    Finally, verify that the rows have been committed to two different partitions:
    musicdb=> SELECT relname, n_tup_ins
    musicdb-> FROM pg_stat_all_tables
    musicdb-> WHERE relname IN ('mytable_202501', 'mytable_202502');
        relname     | n_tup_ins
    ----------------+-----------
     mytable_202501 |         3
     mytable_202502 |         3
    (2 rows)
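Keep in mind that n_tup_ins in pg_stat_all_tables is a statistics counter: it is updated asynchronously and is reset by pg_stat_reset(). For an exact answer, the rows can also be counted per partition directly, using the system column tableoid:

```sql
-- tableoid identifies the physical partition each row is stored in:
SELECT tableoid::regclass AS partition, count(*) AS row_count
FROM mytable
GROUP BY tableoid
ORDER BY 1;
```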
    

    Wednesday, March 5, 2025

    How to manually purge your standard audit trail


    In this post I will show you how you can manage your audit trail using the package dbms_audit_mgmt.
    It can be used for all kinds of audit trails: OS, XML, unified and standard auditing.
    You refer to them using the constants for audit_trail_type, defined in the package documentation.

    In this post, I am using DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD.

    Note: If you simply want to purge the entire audit trail, use the procedure below:
    BEGIN
      DBMS_AUDIT_MGMT.clean_audit_trail(
       audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
       use_last_arch_timestamp => FALSE);
    END;
    /
    
    Be careful:

    Setting use_last_arch_timestamp to FALSE will effectively do a TRUNCATE TABLE SYS.AUD$ behind the scenes, and takes only a couple of minutes to execute, even with millions of rows to be purged. You may lose your audit data, which can have legal consequences.


    If desirable, audit trails can be cleaned based on the "Last Archive Timestamp" value.
    The Last Archive Timestamp represents the timestamp of the most recent audit record that was securely archived:
    SQL> SELECT * FROM DBA_AUDIT_MGMT_LAST_ARCH_TS;
    
    no rows selected
    
    
    No rows returned from this query means that there has been no previous cleanup performed.

    In cases where you want to keep some of your audit data, you can manually set the "Last Archive Timestamp" to a value of your choice.
    Here I set it to a month ago from today:
    BEGIN
      DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP (
        AUDIT_TRAIL_TYPE => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
        LAST_ARCHIVE_TIME => SYSTIMESTAMP-30);
    END;
    /
    

    If you try to use the SET_LAST_ARCHIVE_TIMESTAMP procedure before cleanup has been initialized, you will receive an error:

    ERROR at line 1:
     ORA-46258: Cleanup not initialized for the audit trail
     ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 61
     ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 2283
     ORA-06512: at line 17
    

    So make sure to initialize the cleanup first. Note that "initialization" will automatically relocate the AUD$ table to the SYSAUX tablespace.
    Use the IS_CLEANUP_INITIALIZED function to verify the state of the cleanup before and after INIT_CLEANUP has been executed.
    Put the following in a file called check_init_status.sql:
    SET SERVEROUTPUT ON
     BEGIN
      IF DBMS_AUDIT_MGMT.IS_CLEANUP_INITIALIZED(DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD) THEN
       DBMS_OUTPUT.PUT_LINE('Cleanup is initialized.');
      ELSE
       DBMS_OUTPUT.PUT_LINE('Cleanup is not initialized.');
      END IF;
     END;
     /
    
    Check status:
    sqlplus / as sysdba @check_init_status.sql
    
    Cleanup is not initialized.
    
    PL/SQL procedure executed.
    
    To initialize cleanup of the standard audit trail, put the following in a file called init_cleanup.sql:
    BEGIN
       DBMS_AUDIT_MGMT.INIT_CLEANUP (
       audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
       default_cleanup_interval => 48);
     END;
     /
    
    Initialize cleanup:
    sqlplus / as sysdba @init_cleanup.sql
    
    Check status again:
    sqlplus / as sysdba @check_init_status.sql
    
    Cleanup is initialized.
    
    PL/SQL procedure executed.
    

    Note that you need sufficient space in the SYSAUX tablespace to proceed. Autoextend on the datafiles will not help; there needs to be sufficient space allocated in advance:
    ERROR at line 1:
     ORA-46267: Insufficient space in 'SYSAUX' tablespace, cannot complete operation
     ORA-06512: at "SYS.DBMS_AUDIT_MGMT", line 1087
     ORA-06512: at line 3
    

    How many rows do we have in our standard audit trail?
    select count(*) from sys.aud$;
    
    "COUNT(*)"
    141 362 956


    How old is that data?
    select min(ntimestamp#) "mindate", max(ntimestamp#) "maxdate"
    from sys.aud$;
    
    "mindate" "maxdate"
    21.11.2016 08:29:16,325777 04.02.2020 14:08:49,660074

    So our data dates back to 2016. Time to clean up.
    Setting "use_last_arch_timestamp" to TRUE makes sure I preserve the audit records more recent than the Last Archive Timestamp set earlier (one month ago):
    BEGIN
      DBMS_AUDIT_MGMT.clean_audit_trail(
       audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
       use_last_arch_timestamp => TRUE);
    END;
    /
    
    Let's check the status after the execution of the clean_audit_trail procedure: how many rows do we have left in our standard audit trail?
    select count(*) from sys.aud$;
    
    "COUNT(*)"
    2 356 049


    How old is that data?
    select min(ntimestamp#) "mindate", max(ntimestamp#) "maxdate"
    from sys.aud$;
    
    "mindate" "maxdate"
    04.03.2020 08:09:01,325371 04.02.2020 14:08:49,660074
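Instead of running clean_audit_trail manually, the purge can also be scheduled with the same package; a sketch using DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (the job name is my own choice, the interval is in hours):

```sql
BEGIN
  DBMS_AUDIT_MGMT.CREATE_PURGE_JOB(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_purge_interval => 24,                     -- run every 24 hours
    audit_trail_purge_name     => 'STD_AUDIT_PURGE_JOB',  -- any name you like
    use_last_arch_timestamp    => TRUE);                  -- honor the archive timestamp
END;
/
```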

    Thursday, February 6, 2025

    Potential solution to INS-32715 when executing runInstaller

    Error when running runInstaller:
    [FATAL] [INS-32715] The source home (/u01/oracle/product/19c) is not registered in the central inventory.
       ACTION: Ensure that the source home is registered in the central inventory.
    
    Cause:
    There is something wrong with the inventory, although it seems correct.

    Find the inventory file:
    opatch lsinventory
    
    Oracle Home       : /u01/oracle/product/19c
    Central Inventory : /u01/oraInventory
       from           : /u01/oracle/product/19c/oraInst.loc
    
    
    Check the file for obvious errors:
    cd /u01/oraInventory/ContentsXML
    cat inventory.xml (excerpt)
    
    <HOME_LIST>
    <HOME NAME="19c" LOC="/u01/oracle/product/19c" TYPE="O" IDX="8"/>
    </HOME_LIST>
    
    Solution:
    Attach the ORACLE_HOME to the inventory once more, even though it looks correct to begin with:
    
     $ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME="$ORACLE_HOME" ORACLE_HOME_NAME="19c"
    Starting Oracle Universal Installer...
    
    Checking swap space: must be greater than 500 MB.   Actual 25599 MB    Passed
    The inventory pointer is located at /etc/oraInst.loc
    You can find the log of this install session at:
     /home/oracle/oraInventory/logs/AttachHome2025-02-06_01-07-51PM.log
    'AttachHome' was successful.
    
    Try the runInstaller command again, and it should work.

    Monday, January 27, 2025

    How to install and configure PG Bouncer on Linux x86_64

    Install the PGBouncer software


    List packages already installed on the system:
    yum list installed |grep bounce
    
    Search repository:
     yum search bounce
    =========================================================== Name & Summary Matched: bounce ============================================================
    pgbouncer.x86_64 : Lightweight connection pooler for PostgreSQL
    =============================================================== Summary Matched: bounce ===============================================================
    
    Install:
     yum install pgbouncer.x86_64
    
    Verify installation:
     yum list installed |grep bounce
    pgbouncer.x86_64                                              1.24.0-42PGDG.rhel8                     @YOURREPOSITORY_PostgreSQL_PostgreSQL_common_RHEL8_x86_64
    
    Done with the installation!


    Configure the pgbouncer for connections made by the user "music" to my database called "musicdb"

    [root] chown postgres:postgres /etc/pgbouncer/ -R
    
    Create the file /etc/pgbouncer/pgbouncer.ini:
    ;
    ; pgbouncer config file
    ;
    [databases]
    postgres    = port=5432 dbname=musicdb
    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    admin_users = music
    auth_type   = scram-sha-256
    auth_file   = /postgres_data/config/users.txt
    logfile     = /postgres_data/config/pgbouncer.log
    pidfile     = /postgres_data/config/pgbouncer.pid
    server_reset_query = DISCARD ALL;
    pool_mode = session
    default_pool_size = 5
    log_pooler_errors = 0
    
    Find the SCRAM authentication verifier for the user:
    psql
    select '"'||rolname||'" "'||rolpassword||'"' from pg_authid where rolname='music';
    
      
    Create the file /postgres_data/config/users.txt

    vi /postgres_data/config/users.txt

    Add the following, and paste the value of the key you found in the query above, in between the second pair of quotation marks:

    "music" "SCRAM-SHA-256_your_SCRAM_key_here"


    Make sure port 6432 isn't occupied
     ss -ltpn | grep 6432 --> no output? Go ahead! 
    
    As the postgres user, start the pgbouncer process
    pgbouncer -d /etc/pgbouncer/pgbouncer.ini
    
    The port will now be occupied:
    ss -ltpn | grep 6432
    LISTEN 0      128        127.0.0.1:6432      0.0.0.0:*    users:(("pgbouncer",pid=392231,fd=12))
    
    You can now connect to your pgbouncer instance. There are a number of different show-commands:
    psql -p 6432 -h 127.0.0.1 -U music pgbouncer -c "show config"
    Password for user music:
                 key             |                         value                          |                        default                         | changeable
    -----------------------------+--------------------------------------------------------+--------------------------------------------------------+------------
     admin_users                 | music                                                  |                                                        | yes
     application_name_add_host   | 0                                                      | 0                                                      | yes
     auth_dbname                 |                                                        |                                                        | yes
     auth_file                   | /var/lib/pgsql/users.txt                               |                                                        | yes
    
    And some control commands:
    psql -p 6432 -h 127.0.0.1 -U music pgbouncer -c "reload"
    Password for user music:
    RELOAD
    

    To connect through the pgbouncer rather than a direct connection, use the syntax below, where you exchange the database name ("musicdb") with "pgbouncer":
      psql -p 6432 -h 127.0.0.1 -U music -d pgbouncer
    Password for user music:
    psql (15.10, server 1.24.0/bouncer)
    WARNING: psql major version 15, server major version 1.24.
             Some psql features might not work.
    Type "help" for help.
    
    pgbouncer=# \conninfo
    You are connected to database "pgbouncer" as user "music" on host "127.0.0.1" at port "6432".
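Some other show commands I find useful when inspecting the pooler, run from the same pgbouncer admin console:

```sql
SHOW pools;    -- pool status per database/user pair
SHOW clients;  -- client connections into pgbouncer
SHOW servers;  -- pgbouncer's connections to the real server
SHOW stats;    -- traffic and request statistics
```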