Tuesday, June 3, 2025

How to set up huge pages on a PostgreSQL server

The PostgreSQL documentation states:

The use of huge pages results in smaller page tables and less CPU time spent on memory management, increasing performance.

This is how I set up huge pages on one of my PostgreSQL servers.

First, check the server's huge page size. My huge pages are 2M each:
 grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB
On my 16G server, I would like to start by allocating 25% of the total memory to shared_buffers:
su - postgres
sudo systemctl stop postgresql-15.service
postgres --shared-buffers=4096MB -D $PGDATA -C shared_memory_size_in_huge_pages
2102
Note that 2102 is slightly more than 4096 MB / 2 MB = 2048 pages, because the server's total shared memory segment is somewhat larger than shared_buffers alone. Update /etc/sysctl.conf and add
vm.nr_hugepages=2102
Reload the kernel parameters:
sysctl -p
Now, tell the PostgreSQL server to use huge pages if they are available.

Add the following directive to the config file postgresql.conf
huge_pages = try
Add the following lines to /etc/security/limits.conf so that postgres can lock the memory set aside for huge pages:
postgres soft memlock unlimited
postgres hard memlock unlimited
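This can be confirmed after the next login (or after the reboot below) as the postgres user; it should return unlimited:
su - postgres -c "ulimit -l"
unlimited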
Reboot the server and verify that huge pages are being used:
cat /proc/meminfo | grep Huge
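If huge pages are in use, HugePages_Total should match the number you configured, and HugePages_Free should drop below the total once PostgreSQL has started. The output looks something like this (the Free and Rsvd values below are illustrative):
AnonHugePages:         0 kB
HugePages_Total:    2102
HugePages_Free:       56
HugePages_Rsvd:       10
HugePages_Surp:        0
Hugepagesize:       2048 kB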
Note that the parameter huge_page_size only needs to be set if you wish to override the default huge page size on your system. It defaults to zero (0), which means the system default is used. In my case that is exactly what I want, so the parameter can be left alone.

Friday, May 23, 2025

How to find constraints on a table in PostgreSQL

The following query lists all constraints defined on a table, based on the pg_constraint catalog. Replace 'owner' and 'table_name' with your schema and table name:
SELECT
    con.conname AS constraint_name,
    CASE con.contype
        WHEN 'p' THEN 'PRIMARY KEY'
        WHEN 'u' THEN 'UNIQUE'
        WHEN 'f' THEN 'FOREIGN KEY'
        WHEN 'c' THEN 'CHECK'
        WHEN 'x' THEN 'EXCLUSION'
        ELSE con.contype::text
    END AS constraint_type,
    rel.relname AS table_name,
    pg_get_constraintdef(con.oid) AS definition
FROM
    pg_constraint con
JOIN
    pg_class rel ON rel.oid = con.conrelid
JOIN
    pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE
    nsp.nspname = 'owner'
    AND rel.relname = 'table_name'
ORDER BY
    constraint_name;
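As a quicker, interactive alternative, psql's \d meta-command also lists a table's constraints (indexes, check constraints, foreign keys) at the bottom of its output:
\d owner.table_name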

Thursday, May 22, 2025

PostgreSQL: difference between \copy and COPY

If you put a statement like this in a file:
COPY at_locations
FROM '/external/data/geolocations_at_fixed.csv'
WITH (
    FORMAT csv,
    DELIMITER ';',
    NULL 'NULL',
    HEADER false
);
and execute it like this:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet
vinmonopolet=> \i cp_at.sql
You will see the error
ERROR:  must be superuser or have privileges of the pg_read_server_files role to COPY from a file
HINT:  Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
This is because the COPY command runs server-side, and the PostgreSQL server expects to be able to access the file, and its entire path, on the server itself.

This in turn requires that the user executing the COPY command is a superuser or has been granted the pg_read_server_files role.

As the error message says, we could use the client-side command \copy instead. Put this in the same script:
\copy vinmonopolet.at_locations FROM '/external/data/geolocations_at_fixed.csv' WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)
and execute either with the \i as above, or use the -f flag directly at the prompt:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet -f cp_at.sql
Password for user vinmonopolet:
COPY 5
This works because the \copy command is interpreted client-side: psql reads the file from your local machine (in my case, the "local machine" happened to be the postgres server itself, but the point stands) and pipes the content into the database. It succeeds because no elevated privileges are needed for this method.
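Under the hood, \copy is essentially a client-side wrapper around COPY ... FROM STDIN. You can get the same effect yourself by redirecting the file into psql; a sketch, using the same file and table as above:
psql -h myserver.oric.no -U vinmonopolet -d vinmonopolet \
     -c "COPY at_locations FROM STDIN WITH (FORMAT csv, DELIMITER ';', NULL 'NULL', HEADER false)" \
     < /external/data/geolocations_at_fixed.csv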

Wednesday, April 30, 2025

Cleanup of the ADR

Sometimes you may want to clean up your ADR (Automatic Diagnostic Repository). I ran into this problem shortly after creating a new database called "db01" on my server.

In my case, I had the following adr homes set:

ADR Homes:
diag/rdbms/db01/db01
diag/rdbms/db02/db02
diag/rdbms/db03/db03
diag/rdbms/db04/db04
diag/rdbms/db05/db05
diag/clients/user_oracle/RMAN_610688766_110
diag/clients/user_oracle/host_610688766_110
diag/clients/user_oracle/host_610688766_82
diag/tnslsnr/dbhost1/listener
diag/tnslsnr/dbhost1/mylsnr
diag/tnslsnr/dbhost1/testlistener
This caused operations against the ADR to fail with:
ADR HOME selected:  diag/rdbms/db01/db01 diag/rdbms/db02/db02 diag/rdbms/db03/db03 diag/rdbms/db04/db04 
DIA-48415: Syntax error found in string [diag/rdbms/db01/db01] at column [26]

DIA-48415: Syntax error found in string [diag/rdbms/db02/db02] at column [26]

DIA-48415: Syntax error found in string [diag/rdbms/db03/db03] at column [26]

DIA-48415: Syntax error found in string [diag/rdbms/db04/db04] at column [26]

Syntax error
Solution: remove the stale directories from disk:
cd /u01/oracle/diag/rdbms
rm -rf db02 db03 db04 db05

cd /u01/oracle/diag/tnslsnr/dbhost1
rm -rf mylsnr testlistener

cd /u01/oracle/diag/clients/user_oracle
rm -rf *
Afterwards, check your adrci settings; the list of homes should look better now:
ADR base = "/u01/oracle"
adrci> show homes
ADR Homes:
diag/rdbms/db01/db01
diag/clients/user_oracle/RMAN_610688766_110
diag/tnslsnr/dbhost1/listener
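A side note: in the normal case, if you only need adrci to operate against a single home rather than deleting the others from disk, you can also scope the session with set homepath:
adrci> set homepath diag/rdbms/db01/db01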

Friday, April 25, 2025

List folders only - no files

For future quick reference:

I prefer:
ls -d */

dir1/      dir2/       dir3/   dir4/
Also good:
ls -l | grep "^d"

drwxr-xr-x 4 root   root           4096 Oct 13  2022 dir1
drwxr-xr-x 2 root   root           4096 Mar 14 11:17 dir2
drwxr-xr-x 2 root   root           4096 Mar 26 15:51 dir3
drwxr-xr-x 2 root   root           4096 Apr 27  2021 dir4
A third option:
ls -F | grep "/$"
 
dir1/
dir2/
dir3/
dir4/
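A fourth option, which avoids parsing ls output and therefore behaves well in scripts:
find . -mindepth 1 -maxdepth 1 -type d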

Arguments that can be passed to "dbca -createCloneTemplate"

dbca -createCloneTemplate
   -sourceSID source_database_sid | -sourceDB source_database_name
   -templateName new_database_template_name
   [-promptForWalletPassword]
   [-rmanParallelism parallelism_integer_value]
   [-maxBackupSetSizeInMB maximum_backup_set_size_in_MB]
   [-dataFileBackup {true | false}]
   [-datafileJarLocation data_files_backup_directory]
   [-sysDBAUserName SYSDBA_user_name]
   [-sysDBAPassword SYSDBA_user_password]
   [-useWalletForDBCredentials {true | false}
       -dbCredentialsWalletPassword wallet_account_password
       -dbCredentialsWalletLocation wallet_files_directory]
   [-uploadToCloud 
       -opcLibPath OPC_library_path
       -opcConfigFile OPC_configuration_file_name
        [-rmanEncryptionPassword rman_encryption_password]]
   [-compressBackup { true | false } ]
   [-walletPassword database_wallet_password]
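For reference, a minimal invocation needs only the mandatory arguments. A sketch, assuming a source database named db01 and a hypothetical template name:
dbca -silent -createCloneTemplate -sourceDB db01 -templateName db01_clone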
Documentation here

What is the reason behind the message "xauth: file /root/.Xauthority does not exist"?

What does the message
"xauth:  file /root/.Xauthority does not exist"
which you sometimes see when attempting to launch X applications on a Linux server, really mean?

It means that the .Xauthority file, which stores X11 authentication cookies, does not yet exist in your home directory (in this case, /root/).

This is common when:

  • X11 forwarding has not been set up or used yet.
  • You are logging in as root and have not initialized X11 in that session.
  • The file was deleted or never created.

To work around the issue, follow these steps:

1. Make sure xauth is installed
# RHEL/CentOS
su -
yum install xauth

# Debian
apt install xauth

2. Make sure you use the -X flag (X forwarding) when connecting to the server
ssh -X root@yourserver

3. On the server, edit the file /etc/ssh/sshd_config and make sure these lines exist and are set to yes
X11Forwarding yes
X11UseLocalhost yes

4. Restart the ssh daemon
systemctl restart sshd

5. Try launching a graphical app, like xterm

If these steps work, your X11 forwarding is properly set up, and .Xauthority will be created automatically.

6. If you need to manually create the .Xauthority file

This is rarely necessary, but you can run:
touch ~/.Xauthority
xauth generate :0 . trusted
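Either way, you can verify that a cookie is now present with:
xauth list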