Stop and disable the current postgres services (in this case, postgres version 15):
systemctl stop postgresql-15
systemctl disable postgresql-15

Remove the packages using "yum remove":
rpm -qa | grep postgres | xargs yum remove -y
[postgres@myserver.no ~]$ echo "SELECT * FROM pg_available_extensions;" | psql | egrep 'cube|earthdistance'
 cube          | 1.5 |  | data type for multidimensional cubes
 earthdistance | 1.1 |  | calculate great-circle distances on the surface of the Earth

Yes, they are.
postgres=# \connect mydb1
You are now connected to database "mydb1" as user "postgres".
mydb1=# create extension if not exists cube with schema myschema1 cascade;
CREATE EXTENSION
mydb1=# create extension if not exists earthdistance with schema myschema1 cascade;
CREATE EXTENSION
mydb1=# exit

Check if they have indeed been installed:
[postgres@myserver.no ~]$ psql mydb1
psql (15.4)
Type "help" for help.
mydb1=# \dx
List of installed extensions
Name | Version | Schema | Description
--------------------+---------+---------------------+------------------------------------------------------------------------
cube | 1.5 | myschema1 | data type for multidimensional cubes
earthdistance | 1.1 | myschema1 | calculate great-circle distances on the surface of the Earth
pg_qualstats | 2.0.4 | public | An extension collecting statistics about quals
pg_stat_statements | 1.10 | public | track planning and execution statistics of all SQL statements executed
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
(5 rows)
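A quick smoke test confirms that the new extensions actually work. This is a sketch: since the extensions were installed into myschema1, that schema may need to be on the search_path, and the coordinates below (roughly Oslo and Trondheim) are just illustrative. The earthdistance functions ll_to_earth(latitude, longitude) and earth_distance return a great-circle distance in meters:

```sql
-- Make the extension functions visible (they were installed into myschema1):
SET search_path TO myschema1, public;

-- Approximate great-circle distance between Oslo and Trondheim, in meters:
SELECT earth_distance(ll_to_earth(59.91, 10.75),
                      ll_to_earth(63.43, 10.39));
```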
ALTER TABLE myschema.mytable ALTER COLUMN id SET DEFAULT nextval('myschema.mysequence'::regclass);
\connect db01
alter schema schema1 rename to schema1_old;

This is documented in the PostgreSQL documentation for ALTER SCHEMA.
alter role scott with nologin;
ALTER ROLE

Remember that in postgres, "users" and "roles" are used interchangeably, so this would also work:

alter user scott with nologin;
ALTER ROLE

In either case, the postgres server will echo "ALTER ROLE" back to the administrator.
echo "\du" | psql

Example output:

                List of roles
 Role name |   Attributes   |    Member of
-----------+----------------+------------------
 scott     | Cannot login  +| {business_users}
postgres=# select rolcanlogin from pg_roles where rolname='jim';
 rolcanlogin
-------------
 t
(1 row)

postgres=# select rolcanlogin from pg_roles where rolname='scott';
 rolcanlogin
-------------
 f
(1 row)

where t and f indicate that the user can and cannot log in, respectively.
alter table myschema.mytable rename to mytable_old;

Change ownership for a table:
alter table myschema.mytable owner to anotherschema;
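To verify the change afterwards, the owner can be read from the pg_tables catalog view (schema and table names as in the example above):

```sql
-- Check the current owner of the table:
SELECT tableowner
FROM pg_tables
WHERE schemaname = 'myschema'
  AND tablename  = 'mytable';
```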
pg_dump mydb -n 'myschema' -a -Fp -t myschema.mytable --column-inserts | gzip > /data1/insert_statements.sql.gz
mydb=# select 'myschema.mytable'::regclass::oid;
   oid
----------
 19561436
(1 row)
SELECT pg_size_pretty(sum(pg_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)))::bigint) FROM pg_tables WHERE schemaname = 'yourschema';
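A related one-liner, using the built-in pg_database_size function, reports the size of an entire database rather than a single schema:

```sql
-- Total size of the database, human-readable:
SELECT pg_size_pretty(pg_database_size('mydb'));
```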
echo "CREATE ROLE replicate WITH REPLICATION LOGIN password 'mypassword';" | psql
echo "grant app_user to replicate;" | psql
echo "alter system set wal_level = replica;" | psql
echo "select pg_reload_conf();" | psql
echo "alter system set synchronous_commit = local;" | psql
echo "alter system set track_commit_timestamp = on;" | psql

2. Restart the postgres server
systemctl restart postgresql-13.service

3. Configure the slave database cluster
systemctl stop postgresql-13.service

4. Remove any files created in the deployment of the slave. They will be replaced anyway
rm -rf /data/*

5. Take a base backup of the master database cluster
pg_basebackup -h test01-sandbox-pgsql.mydomain.com -D /data/ -P -U replicate --wal-method=stream

6. Set some parameters
echo "hot_standby = on" >> /pgconfig/postgresql.auto.conf
echo "
primary_conninfo = 'host=masterhost.mydomain.com port=5432 user=replicate password=mypassword'
promote_trigger_file = '/tmp/MasterNow'
" >> /pgconfig/postgresql.conf

7. Create a file necessary for the master/slave role determination
touch /data/standby.signal

8. Start the postgres server
systemctl start postgresql-13.service

9. Verification checks
echo 'select usename,state,sync_state,sync_priority,client_hostname from pg_stat_replication;' | psql
  usename  |   state   | sync_state | sync_priority |          client_hostname
-----------+-----------+------------+---------------+-----------------------------------
 replicate | streaming | async      |             0 | til0dbgr-sandbox-pgsql02.skead.no
(1 row)

To check the slave status, use the dictionary view pg_stat_wal_receiver. According to the documentation, the view will "contain only one row, showing statistics about the WAL receiver from that receiver's connected server":
psql
psql (13.6)
Type "help" for help.
postgres=# \x
Expanded display is on.
postgres=# select status,sender_host,conninfo from pg_stat_wal_receiver;
-[ RECORD 1 ]---------------------------------------------------------------
status      | streaming
sender_host | test01-sandbox-pgsql.mydomain.com
conninfo    | user=replicate password=******** channel_binding=prefer dbname=replication host=test01-sandbox-pgsql.mydomain.com port=5432 fallback_application_name=walreceiver sslmode=prefer sslcompression=0 ssl_min_protocol_version=TLSv1.2 gssencmode=prefer krbsrvname=postgres target_session_attrs=any

Check that recovery is in progress:
echo 'SELECT pg_is_in_recovery();' | psql
 pg_is_in_recovery
-------------------
 t
(1 row)

The equivalent query on the master node would return "f" (false).
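Should the slave later need to take over, the setup in step 6 already points promote_trigger_file at /tmp/MasterNow, so promotion is simply a matter of creating that file on the slave:

```shell
# Create the trigger file configured as promote_trigger_file;
# the standby will detect it and promote itself to master.
touch /tmp/MasterNow
```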
Functionality | Oracle | PostgreSQL |
---|---|---|
A standby server that can accept connections and serves read-only queries | Active Data Guard | Hot Standby |
A standby server that cannot be connected to until it is promoted to a master server | Physical Standby Server | Warm Standby |
A data-modifying transaction is not considered committed until all servers have committed the transaction | Maximum Availability | Synchronous |
Allow some delay between the time of a commit and its propagation to the other servers. Some transactions might be lost in the switch to a backup server, and load-balanced servers might return slightly stale results | Maximum Performance | Asynchronous |
Do not allow any transactions to be unprotected at any time | Maximum Protection | |
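To move the streaming setup above from asynchronous toward synchronous replication, one option is to list the standby in synchronous_standby_names on the master. This is only a sketch: it assumes the standby connects with the default application name walreceiver, as seen in the conninfo output from pg_stat_wal_receiver earlier:

```sql
-- Require synchronous commit confirmation from the named standby:
ALTER SYSTEM SET synchronous_standby_names = 'walreceiver';
SELECT pg_reload_conf();
```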
#!/bin/bash
# pg_dump dumps a database as a text file or to other formats.
export SCRIPT_NAME=`basename $0`
export HOST=`uname -n`
export TS=`date +%m.%d.%y_%H_%M_%S`
export RUN_DATE=`date +%m.%d.%y`
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ExportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
exec &> ${LOGFILE}
echo "Starting export job at " `date`
pg_dump -Fd musicdb -n music -v -f ${DUMPDIR}/mussikkdbexport -j 2
echo "Ending job at " `date`
exit
#!/bin/bash
# pg_restore - restore a PostgreSQL database from an archive file created by pg_dump
export SCRIPT_NAME=`basename $0`
export HOST=`uname -n`
export TS=`date +%m.%d.%y_%H_%M_%S`
export RUN_DATE=`date +%m.%d.%y`
export RUN_DIR=.
export LOG_DIR=/tmp
export DUMPDIR=/pgdata/export
export JOB_NAME="ImportPGDB"
export VERSION=1.0.0
export LOGFILE=${LOG_DIR}/${SCRIPT_NAME}_${TS}.log
exec &> ${LOGFILE}
echo "Starting job at " `date`
pg_restore ${DUMPDIR}/mussikkdbexport -C -c -d postgres -j 4 -v
pg_restore ${DUMPDIR}/mussikkdbexport -c -d musicdb -j 4 -v
echo "Ending job at " `date`
exit
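Since both scripts are non-interactive and write their own logfiles, they are well suited for scheduling with cron. A hypothetical crontab entry for the postgres user (the script path is an assumption) could look like this:

```shell
# Hypothetical: run the export script every night at 02:00
0 2 * * * /home/postgres/scripts/export_pgdb.sh
```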
CREATE TABLE event (
    id bigint NOT NULL DEFAULT nextval('hendelselager.hendelse_id_seq'::regclass),
    skpl character varying(8) COLLATE pg_catalog."default",
    refyear integer NOT NULL,
    pnum bigint NOT NULL,
    ksystem character varying(128) COLLATE pg_catalog."default" NOT NULL,
    category character varying(128) COLLATE pg_catalog."default" NOT NULL,
    event character varying(128) COLLATE pg_catalog."default" NOT NULL,
    tstamp timestamp without time zone NOT NULL,
    ip_id character varying(128) COLLATE pg_catalog."default" NOT NULL,
    details jsonb,
    CONSTRAINT e_pkey PRIMARY KEY (refyear, event, id),
    CONSTRAINT uc_e01 UNIQUE (refyear, pnum, ksystem, category, event, ip_id)
) PARTITION BY RANGE (refyear);

The distribution of values was expected to be very skewed right from the start:
ALTER TABLE event ADD sk_type character varying(8);

Next, create the partial index. Here is the syntax I used in a sandbox environment:
CREATE INDEX sk_type_idx ON event (refyear,sk_type)
WHERE NOT (sk_type = 'STRING1');
This approach comes with a caveat, which may or may not be acceptable: the planner can only use the partial index for queries whose WHERE clause implies the index predicate, i.e. queries that themselves exclude the value 'STRING1'.
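Since the event table shown earlier is range-partitioned on refyear, each year also needs its own partition. A hypothetical partition for 2018 (the name is an assumption) would look like this:

```sql
-- Hypothetical yearly partition; the upper bound is exclusive:
CREATE TABLE event_p2018 PARTITION OF event
    FOR VALUES FROM (2018) TO (2019);
```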
log_line_prefix = '[%m] – %p %q- %u@%h:%d – %a'
wal_level = 'replica'
hot_standby = 'on'

I then make a change to the system configuration:
alter system set hot_standby_feedback=on;
ALTER SYSTEM

After this change, the file postgresql.auto.conf has another entry:
log_line_prefix = '[%m] – %p %q- %u@%h:%d – %a'
wal_level = 'replica'
hot_standby = 'on'
hot_standby_feedback = 'on'
I will then have to reload the server configuration using the function pg_reload_conf() for the new parameter to take effect:
postgres=# select pg_reload_conf();
 pg_reload_conf
----------------
 t
(1 row)

The current logfile for the PostgreSQL database cluster records this fact:
[2022-01-03 14:45:23.127 CET] – 1683 LOG:  received SIGHUP, reloading configuration files
[2022-01-03 14:45:23.129 CET] – 1683 LOG:  parameter "hot_standby_feedback" changed to "on"

For details, check the documentation.
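A simple way to confirm that the reload took effect is to ask the server for the current value of the parameter:

```sql
SHOW hot_standby_feedback;
```

This should now report "on".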
select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)

https://www.postgresql.org/docs/11/functions-admin.html#FUNCTIONS-RECOVERY-INFO-TABLE
select col.table_schema,
       col.table_name,
       col.ordinal_position as column_id,
       col.column_name,
       col.data_type
from information_schema.columns col
join information_schema.tables tab
  on tab.table_schema = col.table_schema
 and tab.table_name = col.table_name
 and tab.table_type = 'BASE TABLE'
where col.table_schema in ('myschema')
  and col.data_type in ('json', 'jsonb')
order by col.table_schema, col.table_name, col.ordinal_position;

Execute it with psql directly on the server like this:
psql mydb -f find_cols.sql
INSERT INTO sales.sales_history
SELECT * FROM sales.sales_history_p201801
ON CONFLICT ON CONSTRAINT uc_sh1 DO NOTHING;

This comes in handy as a way of transferring rows from an old table to a partially populated new table. Duplicate rows will simply be ignored.
pg_dump -Fd mydb -t myschema.mytable -v -f /data/mydirectory -j 2

Where

-Fd = export into a directory-format archive
-t  = table name
-v  = verbose
-f  = the target directory
-j  = number of export jobs

Note that -f can be omitted for file-based output formats.
pg_dump mydb -n 'myschema' -a -Fp -t mytable | gzip > mydb_myschema_mytable.gz

Where

-Fp = plain-text format
-a  = data only, no metadata. In the case of plain text, this means no "create table" statement is generated
-t  = table name

When the export is done, you may unpack mydb_myschema_mytable.gz and you will observe that the file contains COPY directives for each row.
pg_dump mydb -n 'myschema' -a -Fc -t mytable | gzip > mydb_myschema_mytable_custom.gz

Where all the options are identical to the plain-text format, except for
-Fc = custom data format
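To load such a custom-format dump back in, pg_restore can read the archive from standard input. This is a sketch: it assumes the database and the target table already exist, since the dump was taken with -a (data only):

```shell
# Unpack the archive and stream it into pg_restore (data only, verbose):
gunzip -c mydb_myschema_mytable_custom.gz | pg_restore -d mydb -a -v
```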