Friday, February 1, 2019

How to create a trigger that will add newly created tables to a role


A commonly requested piece of functionality in my day-to-day work is read-only access for users to various tables in the database. This is easy to solve by creating a role and granting SELECT on the tables to that role.

Every time a new table is created, the role must be updated. This quickly becomes impractical to administer manually, and should be automated by creating a trigger on the schema owning the tables/views.

Here is how:

Create a new role:
create role my_ro_role;

Grant the CREATE JOB privilege to scott, so that the trigger can submit jobs:
grant create job to scott;

Note that if you do not have the password for the schema that should own the trigger, you cannot use the SYS user to create it on their behalf. You need SYSTEM or another DBA user to avoid the error
ORA-30510: system triggers cannot be defined on the schema of SYS user

Create the trigger. In this example, I am using the system user to accomplish this:
conn system/password
create or replace trigger scott.add_to_ro_role
after CREATE on schema
declare
 l_str varchar2(255);
 l_job number;
begin
 if ( ora_dict_obj_type = 'TABLE' ) then
  -- DDL such as GRANT cannot be executed directly inside a DDL trigger,
  -- so the grant is submitted as a job that runs right after the CREATE completes.
  -- The double quotes are placeholders that are converted to single quotes below.
  l_str := 'execute immediate "grant select on ' || ora_dict_obj_name || ' to my_ro_role";';
  dbms_job.submit( l_job, replace(l_str,'"','''') );
 end if;
end;
/

Every time a new table is created in the SCOTT schema, SELECT on the same table will be granted to the role.
The role can then be granted to individual users.
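
As a quick sanity check, you could grant the role to a test user and verify that the grant appears after a new table is created. The user JANE and the table TEST_TAB below are just made-up examples, and since dbms_job performs the grant asynchronously, it may take a moment before it shows up:

-- as a DBA:
grant my_ro_role to jane;

-- as scott:
create table test_tab (id number);

-- then verify that the job has executed the grant:
select grantee, table_name, privilege
from   dba_tab_privs
where  grantee = 'MY_RO_ROLE';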

Thanks to Tron Malmø-Lund for the idea and the code to implement it! ;-)

Thursday, January 31, 2019

How to solve SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled



When logged in as the user owning the objects, you get the following error when trying to use the autotrace feature:
SP2-0618: Cannot find the Session Identifier.  Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

Solution: run the necessary scripts first:
conn / as sysdba
@$ORACLE_HOME/sqlplus/admin/plustrce.sql
grant plustrace to scott;

conn scott/password

@$ORACLE_HOME/rdbms/admin/utlxplan.sql
exit

After this is done, the errors should no longer appear.
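
To verify that autotrace now works, turn it on and run any query. The table below is just the standard SCOTT.EMP demo table; use any table you have access to:

conn scott/password
set autotrace on
select count(*) from emp;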

last - show a listing of the last logged-in users


A potentially very useful tool when investigating what really happened is the command "last".

Here is an abbreviated output:
root     pts/4        psl0ssh01.skead. Thu Jan 31 12:56   still logged in
oracle   pts/4        w7x64proddvh111. Thu Jan 31 12:50 - 12:52  (00:02)
root     pts/6        psl0ssh01.skead. Thu Jan 31 12:18 - 12:23  (00:05)
root     pts/5        psl0ssh01.skead. Thu Jan 31 11:08   still logged in
admin    pts/5        psa0addm2.skead. Wed Jan 30 20:21 - 20:23  (00:01)
admin    pts/5        psa0addm2.skead. Wed Jan 30 20:11 - 20:21  (00:09)
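
The output can be narrowed down with the standard options of the last command, for example:

last -n 10      # only the 10 most recent entries
last oracle     # entries for a specific user
last reboot     # system reboots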

The shell variable HISTTIMEFORMAT and the "history" command



If you need to investigate the history of commands on your Linux/Unix server, set the HISTTIMEFORMAT variable first:
HISTTIMEFORMAT="%d/%m/%y %T " 

After that, you'll find a much more precise listing of your history:

  962  31/01/19 11:08:50 cd product/
  963  31/01/19 11:08:50 ls -al
  964  31/01/19 11:08:50 du -sh *
  965  31/01/19 11:08:50 cd ..

instead of
  962  cd product/
  963  ls -al
  964  du -sh *
  965  cd ..

The bash man page states:

If the HISTTIMEFORMAT variable is set, the time stamp information associated with each history entry is written to the history file, marked with the history comment character.
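
To make the setting permanent for future sessions, you can add it to your shell startup file (assuming bash):

echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bashrc
source ~/.bashrc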

Wednesday, January 30, 2019

How to solve "User "superuser" has an expired password" in postgreSQL



Connect with psql and extend the password validity for the affected role (in this example, the role is called "superuser"):

psql
psql (11.2)
Type "help" for help.

postgres=# alter role superuser valid until 'infinity';
ALTER ROLE
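
To confirm that the change took effect, check the rolvaliduntil column in pg_roles; it should now show "infinity":

postgres=# select rolname, rolvaliduntil from pg_roles where rolname = 'superuser';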

How to find the log file for a PostgreSQL database



Look in the file $PGDATA/postgresql.conf:

#------------------------------------------------------------------------------
# REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

log_destination = stderr # Valid values are combinations of
                                        # stderr, csvlog, syslog, and eventlog,
                                        # depending on platform.  csvlog
                                        # requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on # Enable capturing of stderr and csvlog
                                        # into log files. Required to be on for
                                        # csvlogs.
                                        # (change requires restart)

# These are only used if logging_collector is on:
log_directory = log # directory where log files are written,
                                        # can be absolute or relative to PGDATA

log_filename = 'postgresql-%a.log' # log file name pattern,
                                        # can include strftime() escapes

log_truncate_on_rotation = on # If on, an existing log file with the
                                        # same name as the new log file will be
                                        # truncated rather than appended to.
                                        # But such truncation only occurs on
                                        # time-driven rotation, not on restarts
                                        # or size-driven rotation.  Default is
                                        # off, meaning append to existing files
                                        # in all cases.
log_rotation_age = 1d # Automatic rotation of logfiles will
                                        # happen after that time.  0 disables.

log_rotation_size = 0 # Automatic rotation of logfiles will
                                        # happen after that much log output.
                                        # 0 disables.


So in my case, the directory $PGDATA/log should contain the database server's log files.

The directory listing shows:

[postgres@myserver /pgdata/data/log]$ ls -latrh
total 5.6M
drwx------  2 postgres postgres 4.0K Dec 23 00:00 .
-rw-------  1 postgres postgres 867K Jan 24 23:59 postgresql-Thu.log
-rw-------  1 postgres postgres 869K Jan 25 23:59 postgresql-Fri.log
-rw-------  1 postgres postgres 873K Jan 26 23:59 postgresql-Sat.log
-rw-------  1 postgres postgres 873K Jan 27 23:59 postgresql-Sun.log
-rw-------  1 postgres postgres 873K Jan 28 23:59 postgresql-Mon.log
-rw-------  1 postgres postgres 873K Jan 29 23:59 postgresql-Tue.log
drwx------ 21 postgres postgres 4.0K Jan 30 10:47 ..
-rw-------  1 postgres postgres 407K Jan 30 11:01 postgresql-Wed.log

Since I am writing this post on a Wednesday, the log currently being written to is postgresql-Wed.log, and putting a tail on it will be somewhat like tailing the alert log of an Oracle database.
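
If you are unsure which file the server is currently writing to, PostgreSQL 10 and newer can tell you directly, and the file can then be tailed from the shell (the file name below is simply the one from my listing above):

postgres=# select pg_current_logfile();

[postgres@myserver ~]$ tail -f $PGDATA/log/postgresql-Wed.log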

Source: The online documentation

How to check the status, stop and start a postgreSQL server


The PostgreSQL server comes with a wrapper called pg_ctl, which simplifies some admin tasks for you.

Check if the variable PGDATA is set:
[postgres@myserver ~]$ echo $PGDATA
/pgdata/data

Check status:
[postgres@myserver ~]$ pg_ctl status
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"

If the PGDATA variable is not set in your operating system, you need to specify the -D flag, like this:
[postgres@myserver ~]$ pg_ctl status -D /pgdata/data
pg_ctl: server is running (PID: 27870)
/usr/pgsql-11/bin/postgres "-D" "/pgdata/data"

Check if the database accepts incoming requests:
[postgres@myserver ~]$  pg_isready -d testdb01
/var/run/postgresql:5432 - accepting connections

Stop:

[postgres@myserver ~]$ pg_ctl stop
waiting for server to shut down.... done
server stopped
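
pg_ctl also accepts an explicit shutdown mode through the -m flag; "fast" is the default in recent versions, "smart" waits for all clients to disconnect, and "immediate" aborts without a clean shutdown:

[postgres@myserver ~]$ pg_ctl stop -m smart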

Start:

[postgres@myserver ~]$ pg_ctl start
waiting for server to start....2019-01-30 10:35:04 CET LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-01-30 10:35:04 CET LOG:  listening on IPv6 address "::", port 5432
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2019-01-30 10:35:04 CET LOG:  redirecting log output to logging collector process
2019-01-30 10:35:04 CET HINT:  Future log output will appear in directory "log".
 done
server started

If your PostgreSQL server is running on RHEL 7 or newer, you should use systemctl to control the service:
systemctl start postgresql-11
systemctl stop postgresql-11
systemctl restart postgresql-11
systemctl status postgresql-11

Example output (abbreviated) from the status command:
● postgresql-11.service - PostgreSQL 11 database server
   Loaded: loaded (/etc/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-01-08 02:20:51 CET; 1 months 10 days ago
     Docs: https://www.postgresql.org/docs/11/static/
 Main PID: 1910 (postmaster)
   CGroup: /system.slice/postgresql-11.service
           ├─ 1910 /usr/pgsql-11/bin/postmaster -D /postgres/data
           ├─ 2047 postgres: logger
           ├─ 2390 postgres: checkpointer
           ├─ 2392 postgres: background writer
           ├─ 2394 postgres: walwriter
           ├─ 2395 postgres: autovacuum launcher
           ├─ 2396 postgres: archiver   last was 00000001000002360000009A
           ├─ 2397 postgres: stats collector
           ├─ 2398 postgres: logical replication launcher
           ├─ 2766 postgres: postgres_exporter postgres localhost(60360) idle
           ├─ 3061 postgres: postgres_exporter postgres localhost(44202) idle
           ├─48217 postgres: postgres postgres [local] idle
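
If you want the service to start automatically at boot, enable it through systemctl as well:

systemctl enable postgresql-11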


If the PostgreSQL server is down and you need to check the currently installed version, you can use one of these methods:
which psql
/sw/postgres/pgsql-13/bin/psql

cat /u02/pgdata/datafiles/PG_VERSION
13

cat /etc/PG_VERSION
13.ansible
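
If the binaries are in your PATH, the client and server executables can also report their version directly, even when the server is not running:

psql --version
postgres --version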