Thursday, November 23, 2023

How to use strace to figure out what files are being accessed by a shell script

I had a situation where an EBS-supplied script, adstrtal.sh, would not start; it kept throwing the error
ORA-12541: TNS:no listener
This happened although $TNS_ADMIN was correctly set, and both sqlplus and tnsping confirmed that the database was open and the listener was up, accepting connections on the desired port.

The sqlnet.log file created in the same directory from which I executed adstrtal.sh displayed the connection being attempted:
Fatal NI connect error 12541, connecting to:
 (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=EBS32.oric.no)(CID=(PROGRAM=sqlplus)(HOST=oric-ebsapp-utv.oric.no)(USER=ebs)))
 (ADDRESS=(PROTOCOL=TCP)(HOST=162.20.5.225)(PORT=1521)))
Note that we are not using the default port 1521, but a different port, so the connection attempt shown above was going to the wrong port.

I then used strace to find the source of the error, like this:
strace -f -o /tmp/strace.out ./adstrtal.sh apps/****
When going through the /tmp/strace.out file, I was pointed in the right direction:
openat(AT_FDCWD, "$INST_TOP/ora/10.1.3/network/admin/EBS32_oric-ebsapp-utv_ifile.ora", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
It turns out that adstrtal.sh was looking for a tnsnames.ora entry not in the tnsnames.ora in $TNS_ADMIN ($INST_TOP/ora/10.1.2/network/admin), but rather in $INST_TOP/ora/10.1.3/network/admin.

As soon as I had added the correct tnsnames.ora entry under $INST_TOP/ora/10.1.3/network/admin, the adstrtal.sh script worked.
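Grepping the trace for failed opens narrows a large strace file down quickly. A minimal sketch; the sample log line written below is a made-up stand-in for the real /tmp/strace.out, so the pipeline can be demonstrated stand-alone:

```shell
# Hypothetical sample of one strace log line (a real run would already have
# written /tmp/strace.out via "strace -f -o /tmp/strace.out ..."):
printf '%s\n' 'openat(AT_FDCWD, "/app/oracle/network/admin/EBS32_ifile.ora", O_RDONLY) = -1 ENOENT (No such file or directory)' > /tmp/strace.out

# Failed file lookups end in ENOENT; filter those, then narrow the result
# down to Oracle network configuration files:
grep -E 'openat\(.*= -1 ENOENT' /tmp/strace.out | grep -iE 'tnsnames|sqlnet|ifile|\.ora'
```

The first grep keeps only opens that failed with "No such file or directory"; the second keeps only lines that look like Oracle network configuration files.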

Solution to ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA

I had a situation where the error
ORA-12504: TNS:listener was not given the SERVICE_NAME in CONNECT_DATA
was thrown when connecting to a PDB.

The PDB was up, and the listener was running and serving the service_name I wanted to connect to.

The solution was to reverse the order of directory_path in sqlnet.ora from
NAMES.DIRECTORY_PATH= (EZCONNECT, TNSNAMES)
to
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
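The difference between the two resolution methods can be seen from the connect identifiers they handle; the alias, host, and service names below are made-up examples:

    # TNSNAMES: the identifier is an alias looked up in tnsnames.ora
    sqlplus system@PDB1

    # EZCONNECT: the identifier itself carries host, port, and service name
    sqlplus system@//dbhost.example.com:1521/pdb1.example.com

With EZCONNECT listed first, a plain alias may be treated as a hostname with no service name, which can surface as ORA-12504 instead of falling through to the tnsnames.ora lookup; listing TNSNAMES first lets the alias resolve.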

Friday, November 10, 2023

What are EBS snapshots?

I found this info in the document Oracle® Applications Maintenance Utilities, Release 12.1, Part No. E13676-02. There are two types of snapshots: APPL_TOP snapshots and global snapshots.

  • An APPL_TOP snapshot lists patches and versions of files in the APPL_TOP.
  • A global snapshot lists patches and latest versions of files in the entire Applications system (that is, across all APPL_TOPs).

Both APPL_TOP snapshots and global snapshots may be either current view snapshots or named view snapshots.

A current view snapshot is created once and updated when appropriate to maintain a consistent view.

A partial view snapshot allows you to synchronize only selected files from a current view.

A named view snapshot is a copy of the current view snapshot at a particular time (not necessarily the latest current view snapshot), and is not updated.
Friday, October 27, 2023

Generation of "alter database rename file" scripts

I have written several blog posts where I generate "alter database rename file" statements using the familiar syntax

    select 'alter database move datafile ''' || file_name || ''' TO ''' || replace(file_name,'old_sid','new_sid') || ''';'

Recently, an experienced co-worker showed me another version of the script which is, in my opinion, much simpler:

    select 'alter database move datafile ' || chr(39) || file_name || chr(39) || ' TO ' || chr(39) || replace(file_name,'old_sid','new_sid') || chr(39) || ';'
    from dba_data_files;

By referring to chr(39) instead of escaping the quote character with another quote, the script becomes simpler to read and less error-prone. This will be particularly important as your scripts get more complex.

The same method can of course be applied when generating scripts for moving table partitions, indexes etc.
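The same idea carries over to shell scripting: the octal escape \047 is the single quote, the shell analogue of chr(39). A stand-alone sketch with made-up file paths (a real script would take the paths from the database):

```shell
# \047 is the octal code for the single quote, so the printf format string
# needs no quote-doubling. The data file paths below are made-up examples.
for f in /u01/oradata/old_sid/system01.dbf /u01/oradata/old_sid/users01.dbf; do
  new=$(printf '%s' "$f" | sed 's/old_sid/new_sid/')
  printf 'alter database move datafile \047%s\047 TO \047%s\047;\n' "$f" "$new"
done
```

Each iteration prints one fully quoted "alter database move datafile" statement, with the single quotes supplied by \047.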

Wednesday, October 11, 2023

How to see the number of huge pages configured on a Linux server

On a RHEL server, to see whether huge pages have been configured, you can use

    cat /proc/meminfo | grep -iE 'tables|huge'
    
    PageTables:        33368 kB
    AnonHugePages:         0 kB
    ShmemHugePages:        0 kB
    FileHugePages:         0 kB
    HugePages_Total:    5121
    HugePages_Free:        9
    HugePages_Rsvd:        9
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    Hugetlb:        10487808 kB
    
From the above, we can see that the total number of huge pages is 5121.
Each page is 2M in size, as can be seen from Hugepagesize.
The total size of the huge pages amounts to 10242M, or 10G.
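The arithmetic can be checked directly in the shell; the two input values below are the ones from the listing above:

```shell
pages=5121          # HugePages_Total from /proc/meminfo
pagesize_kb=2048    # Hugepagesize from /proc/meminfo, in kB

# Total huge page memory = number of pages * page size
total_kb=$((pages * pagesize_kb))
printf 'Total huge page memory: %d kB = %d MB\n' "$total_kb" "$((total_kb / 1024))"
```

This prints 10487808 kB = 10242 MB, matching the Hugetlb line in the listing.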

Franck Pachot has written a really neat piece of code to format the output of /proc/meminfo to display huge pages usage on a Linux server.

    awk '/Hugepagesize:/{p=$2} / 0 /{next} / kB$/{v[sprintf("%9d GB %-s",int($2/1024/1024),$0)]=$2;next} {h[$0]=$2} /HugePages_Total/{hpt=$2} /HugePages_Free/{hpf=$2} {h["HugePages Used (Total-Free)"]=hpt-hpf} END{for(k in v) print sprintf("%-60s %10d",k,v[k]/p); for (k in h) print sprintf("%9d GB %-s",p*h[k]/1024/1024,k)}' /proc/meminfo|sort -nr|grep --color=auto -iE "^|( HugePage)[^:]*"
    
The output is much easier to draw conclusions from, and it gives quite a lot of other useful information about your system, too:
    32767 GB VmallocTotal:   34359738367 kB                    16777215
    24 GB CommitLimit:    26025928 kB                          12707
    19 GB SwapTotal:      20971516 kB                          10239
    19 GB SwapFree:       20969176 kB                          10238
    19 GB MemTotal:       20596632 kB                          10056
    12 GB DirectMap1G:    12582912 kB                           6144
    10 GB Hugetlb:        10487808 kB                           5121
    10 GB HugePages_Total:    5121
    9 GB HugePages Used (Total-Free)
    9 GB DirectMap2M:    10082304 kB                           4923
    7 GB MemAvailable:    8173656 kB                           3991
    6 GB Cached:          6329396 kB                           3090
    5 GB Inactive:        5862704 kB                           2862
    4 GB Inactive(file):  4679556 kB                           2284
    2 GB Committed_AS:    2912096 kB                           1421
    1 GB MemFree:         1835428 kB                            896
    1 GB Inactive(anon):  1183148 kB                            577
    1 GB AnonPages:       1056944 kB                            516
    1 GB Active(file):    1502496 kB                            733
    1 GB Active:          1514284 kB                            739
    0 GB VmallocUsed:       30708 kB                             14
    0 GB Unevictable:       12376 kB                              6
    0 GB SwapCached:          432 kB                              0
    0 GB SUnreclaim:       101008 kB                             49
    0 GB SReclaimable:     375752 kB                            183
    0 GB Slab:             476760 kB                            232
    0 GB Shmem:            139612 kB                             68
    0 GB Percpu:             4960 kB                              2
    0 GB PageTables:        33448 kB                             16
    0 GB Mlocked:           12376 kB                              6
    0 GB Mapped:           322284 kB                            157
    0 GB KReclaimable:     375752 kB                            183
    0 GB KernelStack:        6080 kB                              2
    0 GB HugePages_Surp:        0
    0 GB HugePages_Rsvd:        9
    0 GB Hugepagesize:       2048 kB                              1
    0 GB HugePages_Free:        9
    0 GB Dirty:              1272 kB                              0
    0 GB DirectMap4k:      403328 kB                            196
    0 GB Buffers:            2852 kB                              1
    0 GB Active(anon):      11788 kB                              5
    

Thursday, October 5, 2023

How to create a template that includes a seed database using dbca

This is how you can create a new template that will include the database "mydb01" as a seed database for future installations:
    dbca -silent -createCloneTemplate -sourceDB mydb01 -templateName minimal_nonmt_seed -maintainFileLocations true -sysDBAPassword mysecretpassword -sysDBAUserName sys -rmanParallelism 2 -dataFileBackup true -datafileDestination /u01/oracle/oradata/19c/assistants/dbca/templates
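
To create a database from the template later, the same tool can be used with -createDatabase. A sketch only, with a hypothetical database name and placeholder passwords; verify the flags against your dbca version:

    dbca -silent -createDatabase -templateName minimal_nonmt_seed.dbc -gdbname newdb01 -sid newdb01 -sysPassword mysecretpassword -systemPassword mysecretpassword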
    

Tuesday, October 3, 2023

What to do if your flash recovery area is filled with "foreign archive logs"?

One of my databases came to a halt when the flash recovery area filled completely.

It wasn't a problem with space pressure, but merely a condition that made the flash recovery area *seem* full, when it really wasn't.

Clearing the archiver-stuck condition is easy enough: simply increase the value of db_recovery_file_dest_size. But how do you fix the source of the problem?
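Increasing the limit can be done online; a sketch, where the 200G value is only an example:

    SQL> alter system set db_recovery_file_dest_size = 200G scope=both;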

My flash recovery area was, at the time, filled up with foreign archive logs:
    SYS@CDB$ROOT SQL> select * from v$flash_recovery_area_usage;
    
    FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
    ----------------------- ------------------ ------------------------- --------------- ----------
    CONTROL FILE                             0                         0               0          0
    REDO LOG                                 0                         0               0          0
    ARCHIVED LOG                         14.04                         0             235          0
    BACKUP PIECE                           .06                         0              44          0
    IMAGE COPY                               0                         0               0          0
    FLASHBACK LOG                            0                         0               0          0
    FOREIGN ARCHIVED LOG                  85.8                         0            1558          0
    AUXILIARY DATAFILE COPY                  0                         0               0          0
    
    8 rows selected.
    
To fix this, log on to the database using rman:
    rman target / nocatalog
    
Here you can list the foreign archivelogs:
    list foreign archivelogs all;
    
They were all from May 2023, and they had ended up there because the database was cloned using storage snapshots.

To clear them out of the flash recovery area, you first need to crosscheck them:
    RMAN> crosscheck foreign archivelog all;
    
Then, delete them:
    RMAN> delete noprompt foreign archivelog all ;
    
The crosscheck is important. Without it, Oracle cannot delete the entries from the controlfile, which means that v$flash_recovery_area_usage will still report the area as full.

After the delete, the flash recovery area is no longer full:
    SYS@CDB$ROOT SQL> select * from v$flash_recovery_area_usage;
    
    FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
    ----------------------- ------------------ ------------------------- --------------- ----------
    CONTROL FILE                             0                         0               0          0
    REDO LOG                                 0                         0               0          0
    ARCHIVED LOG                         18.21                         0             280          0
    BACKUP PIECE                           .06                         0              44          0
    IMAGE COPY                               0                         0               0          0
    FLASHBACK LOG                            0                         0               0          0
    FOREIGN ARCHIVED LOG                     0                         0               0          0
    AUXILIARY DATAFILE COPY                  0                         0               0          0
    
    8 rows selected.
    

Documented in Doc ID 1617965.1, "What commands may be used to remove foreign archivelog files?", at Oracle Support.