Tuesday, June 3, 2025

PostgreSQL Memory Parameters and how they relate

Here are some of the important memory parameters in a PostgreSQL server, and how they relate to one another.

shared_buffers

The amount of memory the database server uses for shared memory buffers.

The PostgreSQL documentation suggests starting out by allocating 25% of the available memory to the shared database memory pool:

If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system... because PostgreSQL also relies on the operating system cache, it is unlikely that an allocation of more than 40% of RAM to shared_buffers will work better than a smaller amount.

It also points out the necessity of considering database checkpointing:

Larger settings for shared_buffers usually require a corresponding increase in max_wal_size, in order to spread out the process of writing large quantities of new or changed data over a longer period of time.
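A quick way to inspect the current values of both parameters is to query the pg_settings view:

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_wal_size');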

max_wal_size

Controls the maximum size the Write-Ahead Log (WAL) can grow to before triggering a checkpoint. Checkpoints are relatively expensive operations, so we do not want them to occur too often. On the other hand, too infrequent checkpointing may increase recovery times. max_wal_size can be set to balance performance and recovery time by influencing how often checkpoints occur.
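To judge whether checkpoints are triggered mostly by the timer (checkpoint_timeout) or by WAL growth, you can look at the checkpoint counters. On PostgreSQL 15 and 16 they live in pg_stat_bgwriter; from version 17 they have moved to the pg_stat_checkpointer view:

-- Many requested checkpoints relative to timed ones suggests max_wal_size is too small
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;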

work_mem

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files.

The documentation points out that a complex query might perform several sort and hash operations at the same time, with each operation generally being allowed to use as much memory as this value specifies before it starts to write data into temporary files, and that several running sessions could be executing such operations concurrently. So even though the 6MB I have specified as its value (see the table below) does not seem like much, it could mean significant memory usage on a busy system.

It is similar to pga_aggregate_target in an Oracle database: the target amount of memory for all Program Global Areas (PGAs) on the server. After this parameter was introduced in Oracle 9i, the individual work area parameters used until then, for example sort_area_size and hash_area_size, were no longer necessary.
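work_mem can also be raised for a single session just before running a memory-hungry query, rather than increasing the server-wide default. A small sketch, using a hypothetical table big_table:

-- Raise work_mem for this session only
SET work_mem = '256MB';

-- The plan output shows whether the sort stayed in memory
-- ("Sort Method: quicksort") or spilled to disk ("external merge")
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY created_at;

RESET work_mem;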

maintenance_work_mem

Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY.

It can be set higher than work_mem: Since only one of these operations can be executed at a time by a database session, and an installation normally doesn't have many of them running concurrently, it's safe to set this value significantly larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps.
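It can also be raised for a single session, for example right before building a large index. A sketch, with a hypothetical table sales:

-- Give this session's CREATE INDEX more memory to work with
SET maintenance_work_mem = '1GB';
CREATE INDEX sales_customer_id_idx ON sales (customer_id);
RESET maintenance_work_mem;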

autovacuum_work_mem

Specifies the maximum amount of memory to be used by each autovacuum worker process. If this value is specified without units, it is taken as kilobytes. It defaults to -1, indicating that the value of maintenance_work_mem should be used instead. The setting has no effect on the behavior of VACUUM when run in other contexts.
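On a server where the parameter has not been changed, psql confirms the default:

postgres=# SHOW autovacuum_work_mem;
 autovacuum_work_mem
---------------------
 -1
(1 row)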

Here is a table with the settings for my 16 GB memory server:

Parameter              Value
shared_buffers         4GB
max_wal_size           8GB
work_mem               6MB
maintenance_work_mem   479MB
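These values can go in postgresql.conf, or be set with ALTER SYSTEM, which writes them to postgresql.auto.conf. A sketch of the latter approach; note that shared_buffers only takes effect after a restart, while the others can be picked up by a reload:

ALTER SYSTEM SET shared_buffers = '4GB';          -- requires a server restart
ALTER SYSTEM SET max_wal_size = '8GB';
ALTER SYSTEM SET work_mem = '6MB';
ALTER SYSTEM SET maintenance_work_mem = '479MB';
SELECT pg_reload_conf();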


Source: PostgreSQL documentation

How to set up huge pages on a PostgreSQL server

The PostgreSQL documentation states:

The use of huge pages results in smaller page tables and less CPU time spent on memory management, increasing performance.

This is how I set up huge pages on one of my PostgreSQL servers.

First, check the server's huge page size. My huge pages are 2M each:
 grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
On my 16 GB server, I would like to start with 25% of the total memory as available for shared_buffers. Stop the server, then ask the postgres binary how many huge pages that amount of shared memory would require:
sudo systemctl stop postgresql-15.service
su - postgres
postgres --shared-buffers=4096MB -D $PGDATA -C shared_memory_size_in_huge_pages
2102
The number is somewhat higher than the 2048 pages that 4096 MB / 2 MB would suggest, because the server needs shared memory for more than just the buffer pool.

Update /etc/sysctl.conf. Add
vm.nr_hugepages=2102
Reload the kernel settings:
sysctl -p
Now, tell the postgres server to use huge pages, if they are available.

Add the following directive to the config file postgresql.conf
huge_pages = try
Add the following lines to /etc/security/limits.conf so that postgres is allowed to lock the memory set aside for huge pages:
postgres soft memlock unlimited
postgres hard memlock unlimited
Reboot the server and verify that huge pages are being used:
cat /proc/meminfo | grep Huge
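The output should look something like this; the exact numbers will vary, but HugePages_Total should match what you configured, and some pages should be in use or reserved:

HugePages_Total:    2102
HugePages_Free:      881
HugePages_Rsvd:      833
HugePages_Surp:        0
Hugepagesize:       2048 kB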
Interestingly, the parameter huge_page_size should only be used if you wish to override the default huge page size on your system. It defaults to zero (0), meaning the system default huge page size is used. In my case that is exactly what I want, so the parameter can be left alone.
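Once the server is back up, you can also verify the relevant settings from psql. Note that SHOW huge_pages only reports the configured value, not whether huge pages are actually in use; that is what the /proc/meminfo check above answers (PostgreSQL 17 adds huge_pages_status for this):

postgres=# SHOW huge_pages;
 huge_pages
------------
 try
(1 row)

postgres=# SHOW shared_memory_size_in_huge_pages;
 shared_memory_size_in_huge_pages
----------------------------------
 2102
(1 row)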

Friday, May 23, 2025

How to find constraints on a table in PostgreSQL



-- Replace 'owner' and 'table_name' below with your schema and table names
SELECT
    con.conname AS constraint_name,
    CASE con.contype
        WHEN 'p' THEN 'PRIMARY KEY'
        WHEN 'u' THEN 'UNIQUE'
        WHEN 'f' THEN 'FOREIGN KEY'
        WHEN 'c' THEN 'CHECK'
        WHEN 'x' THEN 'EXCLUSION'
        ELSE con.contype::text -- cast, so every branch of the CASE returns text
    END AS constraint_type,
    rel.relname AS table_name,
    pg_get_constraintdef(con.oid) AS definition
FROM
    pg_constraint con
JOIN
    pg_class rel ON rel.oid = con.conrelid
JOIN
    pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE
    nsp.nspname = 'owner'
    AND rel.relname = 'table_name'
ORDER BY
    constraint_name;
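Against a hypothetical orders table with a primary key and a foreign key, the output looks something like this (names made up for illustration):

 constraint_name  | constraint_type | table_name |                     definition
------------------+-----------------+------------+----------------------------------------------------
 orders_custid_fk | FOREIGN KEY     | orders     | FOREIGN KEY (custid) REFERENCES customers(custid)
 orders_pk        | PRIMARY KEY     | orders     | PRIMARY KEY (orderid)
(2 rows)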

Thursday, May 22, 2025

PostgreSQL: difference between \copy and COPY

If you put a statement like this in a file:
COPY at_locations
FROM '/external/data/geolocations_at_fixed.csv'
WITH (
    FORMAT csv,
    DELIMITER ';',
    NULL 'NULL',
    HEADER false
);
and execute it like this:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet
vinmonopolet=> \i cp_at.sql
You will see the error
ERROR:  must be superuser or have privileges of the pg_read_server_files role to COPY from a file
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
This is because the COPY command runs server-side: the PostgreSQL server process itself must be able to access the file, at the full path given, on the server.

This in turn requires that the user executing the COPY command is a superuser or has been granted the pg_read_server_files role.

As the error message says, we could use the client-side command \copy instead. Put this in the same script:
\copy vinmonopolet.at_locations FROM '/external/data/geolocations_at_fixed.csv' WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)
and execute either with the \i as above, or use the -f flag directly at the prompt:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet -f cp_at.sql
Password for user vinmonopolet:
COPY 5
This works because the \copy command is interpreted client-side: psql reads the file from your local machine (in my case, the "local machine" was indeed the postgres server, but the point stands) and pipes the content into the database. It succeeds because no elevated privileges are needed for this method.
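As the HINT in the error message points out, anyone can also COPY from stdin. So a third option, equivalent to what \copy does under the hood, is to pipe the file into a COPY ... FROM STDIN statement:

cat /external/data/geolocations_at_fixed.csv | psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet \
  -c "COPY at_locations FROM STDIN WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)"

When a COPY ... FROM STDIN is sent with -c, psql feeds its own standard input to the server as the data stream.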

Wednesday, April 30, 2025

Cleanup of the ADR

Sometimes you may want to clean up your ADR (Automatic Diagnostic Repository). I ran into this need right after creating a new database called "db01" on my server.

In my case, I had the following ADR homes set:

ADR Homes:
diag/rdbms/db01/db01
diag/rdbms/db02/db02
diag/rdbms/db03/db03
diag/rdbms/db04/db04
diag/rdbms/db05/db05
diag/clients/user_oracle/RMAN_610688766_110
diag/clients/user_oracle/host_610688766_110
diag/clients/user_oracle/host_610688766_82
diag/tnslsnr/dbhost1/listener
diag/tnslsnr/dbhost1/mylsnr
diag/tnslsnr/dbhost1/testlistener
This caused operations against the ADR to fail with
ADR HOME selected:  diag/rdbms/db01/db01 diag/rdbms/db02/db02 diag/rdbms/db03/db03 diag/rdbms/db04/db04
DIA-48415: Syntax error found in string [diag/rdbms/db01/db01] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db02/db02] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db03/db03] at column [26]
DIA-48415: Syntax error found in string [diag/rdbms/db04/db04] at column [26]
Solution: clean out the old folders physically:
cd /u01/oracle/diag/rdbms
rm -rf db02 db03 db04 db05

cd /u01/oracle/diag/tnslsnr/dbhost1
rm -rf mylsnr testlistener

cd /u01/oracle/diag/clients/user_oracle
rm -rf *
Afterwards, check your adrci settings. They should look better now:
ADR base = "/u01/oracle"
adrci> show homes
ADR Homes:
diag/rdbms/db01/db01
diag/clients/user_oracle/RMAN_610688766_110
diag/tnslsnr/dbhost1/listener

Friday, April 25, 2025

List folders only - no files

For future quick reference:

I prefer:
ls -d */

dir1/      dir2/      dir3/      dir4/
Also good:
ls -l | grep "^d"

drwxr-xr-x 4 root   root           4096 Oct 13  2022 dir1
drwxr-xr-x 2 root   root           4096 Mar 14 11:17 dir2
drwxr-xr-x 2 root   root           4096 Mar 26 15:51 dir3
drwxr-xr-x 2 root   root           4096 Apr 27  2021 dir4
A third option:
ls -F | grep "/$"
 
dir1/
dir2/
dir3/
dir4/

Arguments that can be passed to "dbca -createCloneTemplate", as shown in its usage output

dbca -createCloneTemplate
   -sourceSID source_database_sid | -sourceDB source_database_name
   -templateName new_database_template_name
   [-promptForWalletPassword]
   [-rmanParallelism parallelism_integer_value]
   [-maxBackupSetSizeInMB maximum_backup_set_size_in_MB]
   [-dataFileBackup {true | false}]
   [-datafileJarLocation data_files_backup_directory]
   [-sysDBAUserName SYSDBA_user_name]
   [-sysDBAPassword SYSDBA_user_password]
   [-useWalletForDBCredentials {true | false}
       -dbCredentialsWalletPassword wallet_account_password
       -dbCredentialsWalletLocation wallet_files_directory]
   [-uploadToCloud 
       -opcLibPath OPC_library_path
       -opcConfigFile OPC_configuration_file_name
       [-rmanEncryptionPassword rman_encryption_password]]
   [-compressBackup { true | false } ]
   [-walletPassword database_wallet_password]
Source: Oracle documentation