psql
psql (11.2)
Type "help" for help.
postgres=# alter role superuser valid until 'infinity';
ALTER ROLE
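Before (or after) applying the fix, you can check which expiry date a role has by querying pg_roles. A minimal sketch, using the role name superuser from the example above:

```sql
-- List the role and its password expiry.
-- 'infinity' or NULL means the password never expires.
SELECT rolname, rolvaliduntil
FROM pg_roles
WHERE rolname = 'superuser';
```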
Minimalistic Oracle contains a collection of practical examples from my encounters with Oracle technologies. When relevant, I also write about other technologies, like Linux or PostgreSQL. Many of the posts start with "how to" since they derive directly from my own personal experience. My goal is to provide simple examples, so that they can be easily adapted to other situations.
Wednesday, January 30, 2019
How to solve "User "superuser" has an expired password" in PostgreSQL
How to find the log file for a PostgreSQL database
Look in the file $PGDATA/postgresql.conf:
#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------
# - Where to Log -
log_destination = stderr # Valid values are combinations of
# stderr, csvlog, syslog, and eventlog,
# depending on platform. csvlog
# requires logging_collector to be on.
# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
# into log files. Required to be on for
# csvlogs.
# (change requires restart)
# These are only used if logging_collector is on:
log_directory = log # directory where log files are written,
# can be absolute or relative to PGDATA
log_filename = 'postgresql-%a.log' # log file name pattern,
# can include strftime() escapes
log_truncate_on_rotation = on # If on, an existing log file with the
# same name as the new log file will be
# truncated rather than appended to.
# But such truncation only occurs on
# time-driven rotation, not on restarts
# or size-driven rotation. Default is
# off, meaning append to existing files
# in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
# happen after that time. 0 disables.
log_rotation_size = 0 # Automatic rotation of logfiles will
# happen after that much log output.
# 0 disables.
So in my case, the directory $PGDATA/log should contain the database server's log files.
The directory listing shows:
[postgres@myserver /pgdata/data/log]$ ls -latrh
total 5.6M
drwx------  2 postgres postgres 4.0K Dec 23 00:00 .
-rw-------  1 postgres postgres 867K Jan 24 23:59 postgresql-Thu.log
-rw-------  1 postgres postgres 869K Jan 25 23:59 postgresql-Fri.log
-rw-------  1 postgres postgres 873K Jan 26 23:59 postgresql-Sat.log
-rw-------  1 postgres postgres 873K Jan 27 23:59 postgresql-Sun.log
-rw-------  1 postgres postgres 873K Jan 28 23:59 postgresql-Mon.log
-rw-------  1 postgres postgres 873K Jan 29 23:59 postgresql-Tue.log
drwx------ 21 postgres postgres 4.0K Jan 30 10:47 ..
-rw-------  1 postgres postgres 407K Jan 30 11:01 postgresql-Wed.log
Since I am writing this post on a Wednesday, the log currently being written to is postgresql-Wed.log, and putting a tail on that will be somewhat like tailing the alert log of an Oracle database.
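Since log_filename uses strftime() escapes, the name of today's log file can be derived in the shell. A small sketch, assuming the log_directory and log_filename pattern shown above:

```shell
# Build today's log file name from the pattern 'postgresql-%a.log'
# (%a expands to the abbreviated weekday name, e.g. "Wed").
logfile="postgresql-$(date +%a).log"
echo "$logfile"

# Follow it, much like tailing an Oracle alert log:
# tail -f "$PGDATA/log/$logfile"
```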
Source: The online documentation
How to check the status, stop and start a PostgreSQL server
The PostgreSQL server comes with a wrapper called pg_ctl, which simplifies some admin tasks for you.
Check if the variable PGDATA is set:
[postgres@myserver ~]$ echo $PGDATA
/pgdata/data
Check status:
[postgres@myserver ~]$ pg_ctl status
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"
If the PGDATA variable is not set in your operating system, you need to specify the -D flag, like this:
[postgres@myserver ~]$ pg_ctl status -D /pgdata/data
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"
Check if the database accepts incoming requests:
[postgres@myserver ~]$ pg_isready -d testdb01
/var/run/postgresql:5432 - accepting connections
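pg_isready also communicates the result through its exit status, which is handy in scripts. A sketch based on the documented exit codes (0 through 3), with the database name testdb01 taken from the example above:

```shell
# Translate pg_isready exit codes into their documented meanings.
describe_pg_status() {
  case "$1" in
    0) echo "accepting connections" ;;
    1) echo "rejecting connections (e.g. during startup)" ;;
    2) echo "no response to the connection attempt" ;;
    3) echo "no attempt was made (e.g. invalid parameters)" ;;
    *) echo "unknown status" ;;
  esac
}

# Usage (guarded so the sketch is harmless where pg_isready is absent):
if command -v pg_isready >/dev/null 2>&1; then
  rc=0; pg_isready -d testdb01 || rc=$?
  describe_pg_status "$rc"
fi
```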
Stop:

[postgres@myserver ~]$ pg_ctl stop
waiting for server to shut down.... done
server stopped

# stop immediate - similar to Oracle's "shutdown abort". Instance recovery needed on startup
pg_ctl -D $PGDATA stop -m immediate
waiting for server to shut down.... done
server stopped

# stop fast - similar to Oracle's "shutdown immediate". Rolls back uncommitted transactions and shuts down gracefully. No instance recovery needed
pg_ctl -D $PGDATA stop -m fast
waiting for server to shut down.... done
server stopped

# stop smart - similar to Oracle's "shutdown normal". Waits for all clients to disconnect. No instance recovery needed, but it may take time to complete
pg_ctl -D $PGDATA stop -m smart
waiting for server to shut down.... done
server stopped
Start:
[postgres@myserver ~]$ pg_ctl start
waiting for server to start....2019-01-30 10:35:04 CET LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-01-30 10:35:04 CET LOG:  listening on IPv6 address "::", port 5432
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  redirecting log output to logging collector process
2019-01-30 10:35:04 CET HINT:  Future log output will appear in directory "log".
done
server started

RHEL7 or later
If your postgres server is running on RHEL7 or newer, you should use systemctl to control the service:
systemctl start postgresql-11
systemctl stop postgresql-11
systemctl restart postgresql-11
systemctl status postgresql-11

Example output (abbreviated) from the status command:
● postgresql-11.service - PostgreSQL 11 database server
   Loaded: loaded (/etc/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-01-08 02:20:51 CET; 1 months 10 days ago
     Docs: https://www.postgresql.org/docs/11/static/
 Main PID: 1910 (postmaster)
   CGroup: /system.slice/postgresql-11.service
           ├─ 1910 /usr/pgsql-11/bin/postmaster -D /postgres/data
           ├─ 2047 postgres: logger
           ├─ 2390 postgres: checkpointer
           ├─ 2392 postgres: background writer
           ├─ 2394 postgres: walwriter
           ├─ 2395 postgres: autovacuum launcher
           ├─ 2396 postgres: archiver   last was 00000001000002360000009A
           ├─ 2397 postgres: stats collector
           ├─ 2398 postgres: logical replication launcher
           ├─ 2766 postgres: postgres_exporter postgres localhost(60360) idle
           ├─ 3061 postgres: postgres_exporter postgres localhost(44202) idle
           └─48217 postgres: postgres postgres [local] idle
If the postgres server is down and you need to check the currently installed version, you can use one of these methods:
which psql
/sw/postgres/pgsql-13/bin/psql

cat /u02/pgdata/datafiles/PG_VERSION
13

cat /etc/PG_VERSION
13.ansible
Tuesday, January 29, 2019
The difference between REDO and UNDO
UNDO | REDO
---|---
Used during normal database operations | Used during instance and media recovery only
Contains changed column values | Contains "change vectors", a description of a change made to a single block
Stored inside the database, in a separate tablespace | Stored outside the database, in redo log files
Can be utilized for many purposes | Has one purpose only
The redo log files have one and only one purpose in life: recovery (instance recovery and media recovery).
The undo tablespace(s) have one primary purpose in life: to support read consistency. However, unlike redo, UNDO data can also be utilized for many other purposes, such as Oracle's Flashback technologies.
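A quick way to look at both sides from SQL*Plus is to query the data dictionary. A sketch, assuming you have the necessary privileges on these views:

```sql
-- Redo: the online redo log groups and their sizes
SELECT group#, bytes/1024/1024 AS mb, status
FROM v$log;

-- Undo: the undo tablespace(s) in the database
SELECT tablespace_name, status
FROM dba_tablespaces
WHERE contents = 'UNDO';
```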
Oracle 12.2 Documentation about the redo logs can be found here
Monday, January 28, 2019
How to avoid ORA-00947 "not enough values" when performing an ITAS operation
If you are inserting rows from a table using an ITAS (Insert Table As Select) statement, and the table being written to has a different layout than the table being selected from, you may struggle to get your INSERT statement to succeed.
For example, let's say that the interim_table below has a virtual column defined on it that the original table does not have. When you attempt to execute the ITAS statement, you may see the error
ORA-00947: not enough values
To avoid this, enclose an explicit column list in parentheses after the table name, leaving the virtual column out, and select only the corresponding columns in the SELECT statement, like this:
insert into interim_table (
   tq_id
  ,ename
  ,setting
)
SELECT f.tq_id, f.ename, f.setting
FROM original_table f
JOIN interim_table wf
  ON wf.tq_id = f.tq_id;
How to enable DML logging
DML logging can be used as a workaround if you have a large number of rows to process, and you expect that some of these will fail to be inserted correctly due to data quality problems. In other words, some rows may violate one or more constraints on the receiving table.
Here is a short demonstration that I have recently tested as a workaround for a client.
I have a table called MYTABLE, which looks like this:
Name                  Null?    Type
--------------------- -------- -------------------
ORDER_ID              NOT NULL NUMBER
ORG_ORDERID                    NUMBER
VALUE_ID_TYPE                  VARCHAR2(20)
COL_PREFIX                     VARCHAR2(80)
COL_NAME                       VARCHAR2(500)
COL_SUFFIX                     VARCHAR2(20)
There is a unique constraint on the table:
-- create the index
CREATE UNIQUE INDEX C_COL_UIX ON MYTABLE (ORG_ORDERID, COL_PREFIX, COL_NAME, COL_SUFFIX)
TABLESPACE USERS;

-- create the constraint, using the index created above
ALTER TABLE MYTABLE ADD (
  CONSTRAINT C_COL_CONS UNIQUE (ORG_ORDERID, COL_PREFIX, COL_NAME, COL_SUFFIX)
  USING INDEX C_COL_UIX);
The actual DML logging is enabled by adding the clause

LOG ERRORS INTO error_table ('INSERT') REJECT LIMIT UNLIMITED;
Before we can do this we need to create the error log table:
BEGIN
  DBMS_ERRLOG.create_error_log (dml_table_name => 'MYTABLE');
END;
/
The statement above will create a table called ERR$_MYTABLE, into which rejected rows will be inserted.
For this test, I will now verify that the values I intend to use, actually exist in the table:
select count(*)
from mytable
where org_orderid = 122
and col_prefix = 'A'
and col_name = 'B'
and col_suffix = 'X';
  COUNT(*)
----------
         1
Any attempt to insert a row into the table with the same values as in the select statement above (122, A, B, X) will violate constraint C_COL_CONS, and the rejected row will be logged in the table ERR$_MYTABLE.
Let's execute an INSERT statement that will violate the constraint:
INSERT INTO mytable (order_id, col_prefix, col_name, col_suffix, col_type)
VALUES (122, 'A', 'B', 'X', 'D')
LOG ERRORS INTO ERR$_MYTABLE ('INSERT') REJECT LIMIT UNLIMITED;
As expected, I get the following error:
ORA-00001: unique constraint (C_COL_CONS) violated
Let's check the log table:
SELECT * FROM ERR$_MYTABLE;
ORA_ERR_NUMBER$ | ORA_ERR_MESG$ | ORA_ERR_ROWID$ | ORA_ERR_OPTYP$ | ORA_ERR_TAG$ | ORDER_ID | ORG_ORDERID | VALUE_ID_TYPE | COL_PREFIX | COL_NAME | COL_SUFFIX
---|---|---|---|---|---|---|---|---|---|---
1 | ORA-00001: unique constraint (C_COL_CONS) violated | | I | INSERT | 2850841 | 122 | D | A | B | X
If you execute a batch job with thousands of rows, the offending rows will be caught in the error table so they can be corrected later. The non-offending rows will be neatly inserted where they belong.
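Once the data quality problems have been corrected, the rejected rows can be replayed from the error table. A hedged sketch; DBMS_ERRLOG mirrors the base table's columns in the error table, and the final DELETE is optional housekeeping:

```sql
-- Re-attempt the rows rejected by the original INSERT ('I' = insert)
INSERT INTO mytable (order_id, org_orderid, value_id_type, col_prefix, col_name, col_suffix)
SELECT order_id, org_orderid, value_id_type, col_prefix, col_name, col_suffix
FROM err$_mytable
WHERE ora_err_optyp$ = 'I';

-- Clear the rows that have been dealt with
DELETE FROM err$_mytable
WHERE ora_err_optyp$ = 'I';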
Thursday, January 17, 2019
What types of materialized views can be created in an Oracle database?
There are three types of Materialized Views that can be created in an Oracle database:
1. Materialized Views with Aggregates
* The most commonly used materialized view, particularly in data warehouse databases.
Example:
CREATE MATERIALIZED VIEW SEQ_AGGR_INFO
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE
AS
select RDATE, count(SEQNO) as antall, max(SEQNO) as makssekvens
from DATE_TO_SEQNO
group by RDATE
order by 1, 2;
Requirements:
* The SELECT-list must contain all of the GROUP BY columns (if present)
* There must be a COUNT(*) and a COUNT(column) on any aggregated columns.
* Materialized view logs must be present on all tables referenced in the query that defines the materialized view.
* Valid aggregate functions are: SUM, COUNT(x), COUNT(*), AVG, VARIANCE, STDDEV, MIN, and MAX.
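The materialized view log requirement can be met with a statement along these lines. A sketch using the table from the example above; for fast refresh with aggregates, the log typically needs ROWID, SEQUENCE, the referenced columns, and INCLUDING NEW VALUES:

```sql
CREATE MATERIALIZED VIEW LOG ON DATE_TO_SEQNO
  WITH ROWID, SEQUENCE (RDATE, SEQNO)
  INCLUDING NEW VALUES;
```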
2. Materialized Views containing only joins
* Materialized views that contain only joins and no aggregates
Example:
CREATE MATERIALIZED VIEW active_data
TABLESPACE data1
CACHE
REFRESH FORCE ON COMMIT
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT act.q_id, act.doc_type, act.next_q_id, act.prev_q_id,
       act.created, act.created_by, act.last_modified, act.last_modified_by,
       act.contents, act.accum_value, act.accum_total
FROM active_data act
JOIN archived_data arc ON arc.q_id = act.q_id;
An advantage of this type of materialized view is that expensive joins are precalculated.
Only REFRESH ON COMMIT or REFRESH ON DEMAND is legal with mviews containing only joins. When ON COMMIT is used, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table.
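For ON DEMAND materialized views, the refresh is triggered manually, for example through the DBMS_MVIEW package. A sketch, using the materialized view name from the example above ('f' = fast, 'c' = complete, '?' = force):

```sql
BEGIN
  DBMS_MVIEW.REFRESH(list => 'ACTIVE_DATA', method => 'f');
END;
/
```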
3. Nested Materialized Views
* A nested materialized view is a materialized view whose definition is based on another materialized view.
I have no personal experience using this type of materialized view, but Oracle has supplied some examples if you follow the link above.
Sources: Oracle 12.2 Data Warehousing Guide