Showing posts with label Globalization. Show all posts

Wednesday, August 2, 2023

What are the DST time zone files?

What are the Oracle Time Zone files?
The Oracle Database time zone files contain the valid time zone names. The following information is also included for each time zone:

* Offset from Coordinated Universal Time (UTC)
* Transition times for Daylight Saving Time
* Abbreviations for standard time and Daylight Saving Time
Where do these timezone files exist?
The time zone files are stored in the $ORACLE_HOME/oracore/zoneinfo directory.  

Oracle Database supplies multiple versions of time zone files, and there are two types of file associated with each version:

* a large file, which contains all the time zones defined in the database
* a small file, which contains only the most commonly used time zones. 

The large files are named timezlrg_version_number.dat and the small files timezone_version_number.dat, where version_number is the version number of the time zone file. 
What is the default timezone file?
The default time zone file is the large time zone file with the highest version number that was delivered when the Oracle software was installed.
In Oracle Database 19c, the default is version 32 (DST32): $ORACLE_HOME/oracore/zoneinfo/timezlrg_32.dat.
How do I determine which time zone file is in use in my database?
select * from V$TIMEZONE_FILE;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_32.dat              32          0
How do I find the database time zone?
SELECT dbtimezone FROM DUAL;

DBTIME
------
+00:00
When is the database time zone set?
Set the database time zone when the database is created by using the SET TIME_ZONE clause of the CREATE DATABASE statement. 
If you do not set the database time zone, then it defaults to the time zone of the server's operating system.

The time zone may be set to a named region or an absolute offset from UTC. 

To set the time zone to a named region, use a statement similar to the following example:

CREATE DATABASE db01
...
SET TIME_ZONE='Europe/London';

To set the time zone to an offset from UTC, use a statement similar to the following example:

CREATE DATABASE db01
...
SET TIME_ZONE='-05:00';
What operations are affected by the database time zone?
The database time zone is relevant only for TIMESTAMP WITH LOCAL TIME ZONE columns. Oracle recommends that you set the database time zone to UTC (0:00) to avoid data conversion and improve performance when data is transferred among databases. This is especially important for distributed databases, replication, and exporting and importing.

If your applications do not use NAMED time zone information, you are not using the Oracle DST data, and Oracle DST patches have no impact on your system.
You may still apply Oracle DST patches, but the Oracle DST information is simply never used.
Will the time zone files be updated during quarterly patching?
Starting with Oracle Database 19c RU 19.18.0, all available DST patches are installed with the RU and deployed into the $ORACLE_HOME/oracore/zoneinfo directory. 
Installing DST patches does not affect database operation.
How are the time zone files delivered?
Each Oracle Database release includes a time zone file that is current at the time of the release and a number of older version files. 

The time zone files that are supplied with the Oracle Database are updated periodically to reflect changes in transition rules for various time zone regions.
Between Oracle Database releases, new time zone file versions may be provided in patch sets or individual patches to reflect the changes in transition rules for various time zone regions. 
Older time zone file versions allow you to run upgraded databases without a need to immediately upgrade the time zone file to the most current version.
Is the update of the DST time zone files absolutely necessary? In many cases, not at all.

The MOS note "Primary Note DST FAQ : Updated DST Transitions and New Time Zones in Oracle RDBMS and OJVM Time Zone File Patches (Doc ID 412160.1)" states:
Please DO note that even if you are located in a country that has changed the DST start or end day, in many cases there is no need to "update the Oracle RDBMS DST information".
If your OS has been patched or correctly configured, then the "oracle time" (= sysdate) will be correct.
Only if actual time zones are used in SQL or PL/SQL is an "Oracle RDBMS DST" update needed.
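What the named-region DST rules encode can be illustrated outside the database. This is a minimal sketch using Python's zoneinfo module, which reads the same IANA tz data Oracle's DST files are derived from; Europe/Oslo is an arbitrary example region, not something from the MOS note.

```python
# Illustration of what DST rules encode: the same named region has a
# different UTC offset and abbreviation depending on the date.
from datetime import datetime
from zoneinfo import ZoneInfo

oslo = ZoneInfo("Europe/Oslo")
winter = datetime(2023, 1, 15, 12, 0, tzinfo=oslo)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=oslo)

print(winter.utcoffset(), winter.tzname())  # 1:00:00 CET
print(summer.utcoffset(), summer.tzname())  # 2:00:00 CEST
```

Only code that resolves named regions like this needs current transition rules; a plain offset such as +02:00 never changes.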
What about multitenant?
  • Each container in a multitenant environment has its own time zone file
  • Oracle allows different containers to have different time zone file versions, so you have the option of upgrading only a subset of containers in a CDB
  • To perform a time zone data upgrade across an entire CDB, you must upgrade the CDB root and each PDB separately.
  • A new PDB is always assigned the time zone version of PDB$SEED.
  • PDB$SEED is always assigned the time zone version at the time of CDB creation.
  • The time zone version of PDB$SEED cannot be changed.
  • Documentation:

  • Choosing a Time Zone File
  • Upgrading the Time Zone File and Timestamp with Time Zone Data
  • All Time Zone Files (DST) Included in Release Updates (RUs)
    Tuesday, September 18, 2018

    How the DBTIMEZONE is set in an Oracle database


    A customer wanted me to look into why his two databases running the exact same application code were returning different time zones.

    select name, created,(select dbtimezone from dual) "dbtimezone"
    from v$database;


    NAME      CREATED              dbtimezone
    --------- -------------------- ----------
    testdb1   25.04.2016 16:40:12  +02:00

    NAME      CREATED              dbtimezone
    --------- -------------------- ----------
    testdb2   08.11.2017 14:07:51  +01:00

    The os-command date +"%Z %z" returned CEST +0200 on both servers. Why the difference?

    It didn't take long to find a good answer from Oracle about this topic.


    Oracle states:

    The database timezone is the time zone of the location where the database is installed

    and

    Use the SET TIME_ZONE clause to set the time zone of the database. If you do not specify the SET TIME_ZONE clause, then the database uses the operating system time zone of the server.

    And also, from the document "Timestamps & time zones - Frequently Asked Questions (Doc ID 340512.1)":

    If not specified with the CREATE DATABASE statement, the database time zone defaults to the server's O/S time zone offset. Note that if your server runs in a time zone affected by DST, then the database time zone would default to whatever the current offset is at the time of database creation, so you can have databases created with the same script on the same server with a different database time zone.


    In Norway, we use DST, more commonly referred to as "summertime" and "wintertime". The first database was installed April 25th, after the server had switched to summertime. The second database was installed November 8th, after the server had switched to wintertime.
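    A quick sketch of this, with Python's zoneinfo standing in for the server's OS time zone rules (Europe/Oslo is an assumption; the timestamps are the two creation times from the query above):

```python
# The OS offset in effect at each database creation time explains the
# two different dbtimezone values.
from datetime import datetime
from zoneinfo import ZoneInfo

oslo = ZoneInfo("Europe/Oslo")
created_testdb1 = datetime(2016, 4, 25, 16, 40, 12, tzinfo=oslo)  # summertime
created_testdb2 = datetime(2017, 11, 8, 14, 7, 51, tzinfo=oslo)   # wintertime

print(created_testdb1.utcoffset())  # 2:00:00 -> dbtimezone +02:00
print(created_testdb2.utcoffset())  # 1:00:00 -> dbtimezone +01:00
```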

    Oracle continues with the following about the database time zone:


    The database time zone is not as important as it sounds. First of all it does not influence functions like sysdate, or systimestamp.

    These function take their contents (date and time, and in the case of systimestamp also time zone) completely from the OS without any "Oracle" intervention.

    and

    A common misconception is that the database timezone needs to be "your" timezone. This is NOT true.
    The database timezone has NO relation with "where" the server is located.
    There is NO advantage whatsoever in using your timezone or a named timezone as database timezone.
    The best setting for dbtimezone is simply +00:00 (or any other OFFSET like -09:00, +08:00, ...); if your current dbtimezone value is an OFFSET, then please leave it as it is.

    Wednesday, April 18, 2018

    Potential solution for ORA-12008: error in materialized view or zonemap refresh path



    During the sync phase of an online redefinition, I received the error:
    begin
    ORA-42009: error occurred while synchronizing the redefinition
    ORA-12008: error in materialized view or zonemap refresh path
    ORA-01843: not a valid month
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 219
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 5392
    

    Solution:
    Alter your session setting for NLS_TERRITORY. Add the line

    alter session set nls_territory='';
    

    to your script.


    This seems to be due to the fact that I am partitioning on a virtual column, which is in turn derived from a column of datatype TIMESTAMP:
    CREATE TABLE USER1.TAB1_INTERIM
    (
    ENTRY_ID             VARCHAR2(36 BYTE)        NULL,
    DNAME                VARCHAR2(11 BYTE)        NULL,
    DTYPE                VARCHAR2(64 BYTE)        NULL,
    CREATED_TIME         TIMESTAMP(6)             NULL,
    DYEAR                INTEGER                  NULL,
    CREATED_DAY AS (
    CAST(
    TO_CHAR(CREATED_TIME,'DD.MM.YYYY')
    AS DATE
    )
    ) VIRTUAL
    )
    TABLESPACE MYTABLESPACE
    PARTITION BY RANGE (CREATED_DAY)
    INTERVAL( NUMTOYMINTERVAL(1,'MONTH'))
    (
    PARTITION P_INIT VALUES LESS THAN (TO_DATE('01.01.2017', 'DD.MM.YYYY'))
    TABLESPACE MYTABLESPACE
    );
    

    Since the creator of the database didn't set NLS_TERRITORY to NORWAY, the database is using the default value, AMERICA.

    Without any altering of the default NLS settings, the CREATED_TIME column data will therefore be displayed according to the US format:
    select created_time from user1.tab1 fetch first 1 rows only;
    
    CREATED_TIME
    --------------------------
    26-SEP-17 09.30.17.040000 AM
    

    I would like the data to be formatted according to Norwegian locale rules, so I use 'DD.MM.YYYY' in my virtual column. So I tested in sqlplus:
    alter session set nls_territory='NORWAY';
    select created_time from user1.tab1 fetch first 1 rows only;
    
    CREATED_TIME
    --------------------------
    26.09.2017 09.30.17,040000
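    The failure mode can be mimicked outside the database. This is a rough Python analogue, not the Oracle mechanism itself; the two strings are the display formats shown above:

```python
# Parsing a US-style date string with a Norwegian-style mask fails,
# much like the implicit conversion under NLS_TERRITORY=AMERICA
# raised ORA-01843.
from datetime import datetime

us_style = "26-SEP-17"    # AMERICA-style display
try:
    datetime.strptime(us_style, "%d.%m.%Y")  # DD.MM.YYYY mask
except ValueError:
    print("not a valid month")               # analogue of ORA-01843

no_style = "26.09.2017"   # NORWAY-style display
print(datetime.strptime(no_style, "%d.%m.%Y").date())  # 2017-09-26
```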
    



    So my script would look like this:
    alter session set nls_territory='NORWAY';
    begin
       DBMS_REDEFINITION.SYNC_INTERIM_TABLE(uname=>'USER1',orig_table=>'TAB1',int_table=>'TAB1_INTERIM');
    end;
    /
    

    and it completed successfully.




    Friday, April 7, 2017

    How to create a logon trigger

    In this particular example, I create a logon trigger that sets one of many available NLS session parameters, and also sets an undocumented parameter that I was recommended to set by Oracle Support Services:

    CREATE OR REPLACE TRIGGER logon_optimizer 
    after logon on database
    begin 
       if user in ('SCOTT','JACK','BOB')
       then 
          execute immediate 'alter session set NLS_LANGUAGE="GERMAN"';
          execute immediate 'alter session set "_optimizer_unnest_disjunctive_subq"= FALSE';
       end if; 
    end;
    /
    

    Test to verify that it works:

    Connect scott/tiger
    set lines 200
    col parameter format a30
    col value format a40
    select * from nls_session_parameters;
    
    
    PARAMETER                      VALUE
    ------------------------------ -----------
    NLS_LANGUAGE                   GERMAN
    NLS_TERRITORY                  NORWAY
    

    Now generate an error:
    select * from xxx
                  *
    ERROR at line 1:
    ORA-00942: Tabelle oder View nicht vorhanden
    

    Tuesday, April 4, 2017

    How to find the NLS_LENGTH_SEMANTICS used for your PL/SQL objects

    Use the view DBA_PLSQL_OBJECT_SETTINGS:
    SELECT OWNER,NAME,TYPE,NLS_LENGTH_SEMANTICS
    FROM DBA_PLSQL_OBJECT_SETTINGS
    WHERE OWNER IN ('USER1','USER2');
    

    Join with DBA_OBJECTS for creation time and last ddl time etc:
    SELECT POS.OWNER,POS.NAME,POS.TYPE,POS.NLS_LENGTH_SEMANTICS, O.LAST_DDL_TIME,O.CREATED, O.STATUS
    FROM DBA_PLSQL_OBJECT_SETTINGS POS JOIN DBA_OBJECTS O
    ON (POS.NAME = O.OBJECT_NAME)
    AND POS.OWNER = 'SCOTT'
    ORDER BY NAME;
    

    Monday, November 21, 2016

    How to install and use the character set scanner

    Although deprecated since the release of DMU (Database Migration Assistant for Unicode), there are still cases where you need to run csscan.

    One such incident happened to me when migrating a database used for Oracle's E-Business Suite.

    Some errors were found in DMU, but when clicking on the number of rows with errors, the Cleansing Editor showed up empty. To get the rowids of the offending rows, I had to resort to the character set scanner.

    If it is not installed, do as follows:

    1. Edit the script $ORACLE_HOME/rdbms/admin/csminst.sql so that it has a valid password and so that its objects will be saved in a separate tablespace:
    create user csmig identified by ***** account lock
    /
    alter user csmig default tablespace tools quota unlimited on tools
    /
    
    2. execute the script as sysdba
    sqlplus / as sysdba @csminst.sql
    

    That's it. The csscanner tool is now installed.

    A simple usage example: scanning a single table.
    First, I created a parameter file:
    userid='sys/**** as sysdba'
    table=scott.emp
    fromchar=WE8ISO8859P1
    tochar=al32utf8
    array=4096000
    process=4
    feedback=1000
    

    Execute it as follows:
    csscan parfile=myscan.par
    

    This process creates three files for you:
    1. scan.out - Screen log file
    2. scan.err - individual exception report. This is where you would typically find the rows that contain errors
    3. scan.txt - contains the Database Scan Summary Report.

    Here's how my scan.err looked after I scanned the table mentioned above:
    Database Scan Individual Exception Report
    
    
    [Database Scan Parameters]
    
    Parameter                      Value
    ------------------------------ ------------------------------------------------
    CSSCAN Version                 v2.1
    Instance Name                  proddb01
    Database Version               11.2.0.4.0
    Scan type                      Selective tables
    Scan CHAR data?                YES
    Database character set         WE8ISO8859P1
    FROMCHAR                       WE8ISO8859P1
    TOCHAR                         al32utf8
    Scan NCHAR data?               NO
    Array fetch buffer size        4096000
    Number of processes            4
    Capture convertible data?      NO
    ------------------------------ ------------------------------------------------
    
    [Data Dictionary individual exceptions]
    
    
    [Application data individual exceptions]
    
    User  : SCOTT
    Table : EMP
    Column: ATTRIBUTE15
    Type  : VARCHAR2(150)
    Number of Exceptions         : 2
    Max Post Conversion Data Size: 153
    
    ROWID              Exception Type      Size Cell Data(first 30 bytes)
    ------------------ ------------------ ----- ------------------------------
    AAS/WmAHwAACI5ZAAC exceed column size   153 
    AAS/WmAHwAACI5ZAAD exceed column size   153 
    ------------------ ------------------ ----- ------------------------------
    
    As expected, the offending rows contained Scandinavian characters, which will use 2 bytes each after conversion to a Unicode database. The text will therefore no longer fit, and the data owner needs to take action before the migration.
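    The arithmetic behind the reported "Max Post Conversion Data Size: 153" can be sketched like this (the sample string is made up, assuming three Scandinavian letters in a 150-byte value):

```python
# 150 single-byte characters that include three Scandinavian letters
# become 153 bytes in UTF-8 - too big for a VARCHAR2(150 BYTE) column.
s = "x" * 147 + "øæå"                 # 150 characters
print(len(s.encode("iso-8859-1")))    # 150 bytes in WE8ISO8859P1
print(len(s.encode("utf-8")))         # 153 bytes in AL32UTF8
```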

    For more information, look up the official documentation here

    Wednesday, October 5, 2016

    Find column semantics



    When migrating from single-byte to multi-byte character sets, one way to move your data without data loss is to change the semantics used. In other words, VARCHAR2 and CHAR type columns can be redefined to use character semantics instead of the default byte semantics.

    A schema-by-schema overview of the tables and columns involved can be extracted by using the query below:

    select owner "owner",
                    CASE 
                        WHEN char_used = 'B' then 'BYTE'
                        WHEN char_used = 'C' then 'CHAR'
                    END "semantics"
    ,count(*) "antall"
    from dba_tab_columns
    where owner in ('SCOTT','RYAN')
    and data_Type in ('CHAR','VARCHAR2')
    group by OWNER, ROLLUP(char_used);
    

    owner  semantics  antall
    -----  ---------  ------
    SCOTT  BYTE           62
    SCOTT  CHAR          610
    SCOTT                672
    RYAN   BYTE          486
    RYAN                 486

    In this example, the SCOTT schema contains columns of both semantics types.
    The schema RYAN contains only the default, BYTE.

    Wednesday, August 24, 2016

    How to add minutes, seconds, hours or days to a datetime value

    Simply add the desired interval to your timestamp value:

    select current_timestamp,
           current_timestamp + interval '3' minute "current_timestamp+3min" 
    from dual;
    
    
    CURRENT_TIMESTAMP                                 current_timestamp+3min
    ----------------------------------------          ---------------------------------------------------------------------------
    24.08.2016 19.44.32,808781 +2:00                  24.08.2016 19.47.32,808781000 +2:00
    

    There are numerous possibilities to add very granular time intervals to an existing datetime value.

    Check the examples of interval literals in the official documentation.
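    The same arithmetic sketched in Python, for comparison (the timestamp is the one from the example above):

```python
# Adding a 3-minute interval to a timestamp.
from datetime import datetime, timedelta

ts = datetime(2016, 8, 24, 19, 44, 32)
print(ts + timedelta(minutes=3))  # 2016-08-24 19:47:32
```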

    Wednesday, July 13, 2016

    SYSTIMESTAMP and SYSDATE VS. CURRENT_TIMESTAMP and CURRENT_DATE

    Most DBAs I've spoken to are familiar with the function SYSDATE, but there are some other functions, too, that may prove useful, even for DBAs who aren't directly involved in developing applications.
    For example, when querying the data dictionary, it can certainly be an advantage to be familiar with some of the other built-in date functions that come with the Oracle database.

    The first two functions reflect the datetime settings of the operating system on which the database runs:

    * The SYSTIMESTAMP function returns a value of the TIMESTAMP WITH TIME ZONE datatype. It includes fractional seconds and the time zone.
    * The SYSDATE function returns a value of the DATE datatype. It includes the time of day, but neither fractional seconds nor a time zone.

    The three functions below reflect the settings of your session:

    * The CURRENT_DATE function returns a value of the DATE datatype in the session time zone
    * The CURRENT_TIMESTAMP function returns a value of the TIMESTAMP WITH TIME ZONE datatype in the session time zone
    * The LOCALTIMESTAMP function returns a value of the TIMESTAMP datatype in the session time zone

    Sitting in Oslo and querying a database with dbtimezone set to +2:00:
    select dbtimezone from dual;
    
    DBTIME
    +02:00
    I get the following default output when I execute the queries at 11:45:


    select sysdate, systimestamp, current_date,current_timestamp(5), localtimestamp(3) from dual
    
    
    SYSDATE:            13.07.2016 11:45:52
    SYSTIMESTAMP:       13.07.2016 11.45.52,597707 +02:00
    CURRENT_DATE:       13.07.2016 11:45:52
    CURRENT_TIMESTAMP:  13.07.2016 11.45.52,59771 +02:00
    LOCALTIMESTAMP:     13.07.2016 11.45.52,598

    Now I will change my session time zone:
    alter session set time_zone='America/New_York';
    
    Session altered.
    


    Execute the functions again, and the difference between them becomes apparent:
    select sysdate, systimestamp, current_date,current_timestamp(5), localtimestamp(3) from dual;
    

    SYSDATE:            13.07.2016 11:49:15
    SYSTIMESTAMP:       13.07.2016 11.49.15,381888 +02:00
    CURRENT_DATE:       13.07.2016 05:49:15
    CURRENT_TIMESTAMP:  13.07.2016 05.49.15,38189 AMERICA/NEW_YORK
    LOCALTIMESTAMP:     13.07.2016 05.49.15,382



    * CURRENT_DATE now returns the date and time as it would be in New York
    * CURRENT_TIMESTAMP does the same, but uses the region name as the time zone indicator
    * LOCALTIMESTAMP returns the timestamp as it would be in New York

    The precision arguments to current_timestamp and localtimestamp (5 and 3, respectively) override the default precision of 6 for fractional seconds.
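    The session-zone behaviour can be sketched in Python. This is only an analogue: Europe/Oslo is assumed as the server zone, and the timestamp is the one from the example above.

```python
# One instant, rendered in the server zone and in the session zone.
from datetime import datetime
from zoneinfo import ZoneInfo

server_time = datetime(2016, 7, 13, 11, 49, 15, tzinfo=ZoneInfo("Europe/Oslo"))
session_time = server_time.astimezone(ZoneInfo("America/New_York"))

print(server_time.strftime("%d.%m.%Y %H:%M:%S"))   # 13.07.2016 11:49:15
print(session_time.strftime("%d.%m.%Y %H:%M:%S"))  # 13.07.2016 05:49:15
```

Note that both values denote the same instant; only the rendering differs, just as with SYSDATE vs. CURRENT_DATE.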

    Tuesday, April 5, 2016

    MS word characters that have no representation in WE8ISO8859P1 and WE8ISO8859P15

    During a globalization effort, I found the following interesting information about the differences between the WE8MSWIN1252 and WE8ISO8859P15 character sets:

    * 27 codepoints are NOT defined/used in WE8ISO8859P15 but are filled in / used in WE8MSWIN1252

    Note that the WE8MSWIN1252 codepoints

    * 91 = U+2018 : LEFT SINGLE QUOTATION MARK
    * 92 = U+2019 : RIGHT SINGLE QUOTATION MARK
    * 93 = U+201C : LEFT DOUBLE QUOTATION MARK
    * 94 = U+201D : RIGHT DOUBLE QUOTATION MARK

    are the default quotation marks of the Microsoft Word product, so if you have data that comes from Microsoft Office products, you *need* a WE8MSWIN1252 database character set.

    Make sure your clients' NLS_LANG is also correct: WE8ISO8859P15 is *not* correct as NLS_LANG for Windows clients.
    Note 179133.1 The correct NLS_LANG in a Windows Environment


    A more common problem arises in environments using English, West European, or Latin American (French, Spanish, Portuguese, Dutch, Italian, ...) Windows clients: a lot of setups use an NLS_LANG set to WE8ISO8859P15 on the client side. For Windows systems this is not correct, and in most cases it means that actual WE8MSWIN1252 codes are stored in the WE8ISO8859P15 database. The most commonly seen characters are the € symbol and these quotes: ‘’“” - the 1252 "smart quotes" used in Microsoft Office. They look similar to the "normal" US7ASCII quote " in most fonts, but they are often different characters, which results in confusion. The Courier New font, for example, distinguishes them quite visibly.


    So watch out for cut-n-paste errors based on MS Word documents! They often result in characters that have no representation under the most commonly used non-Unicode character set.
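    The mismatch is easy to demonstrate. As a sketch, Python's cp1252 and iso-8859-15 codecs correspond to WE8MSWIN1252 and WE8ISO8859P15:

```python
# Word's smart quote exists in WE8MSWIN1252 (codepoint 0x91) but has
# no codepoint at all in WE8ISO8859P15.
smart_quote = "\u2018"                     # LEFT SINGLE QUOTATION MARK
print(smart_quote.encode("cp1252").hex())  # 91
try:
    smart_quote.encode("iso-8859-15")
except UnicodeEncodeError:
    print("no representation in ISO-8859-15")
```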

    Wednesday, February 3, 2016

    11.2.0.2 and onwards: v$parameter reflects the NLS session parameters, not the NLS instance parameters



    After a migration of a database from single-byte to multi-byte character set, I wanted to compare the nls_parameters.

    As a part of the migration, we had to change semantics on several of our tables from BYTE to CHAR to accommodate expanding data.

    One issue that came up during our initial discussions on how to complete the migration was whether it would be beneficial to set nls_length_semantics to CHAR in the init.ora file. But according to Oracle's documentation (Doc ID 144808.1 "Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS)", section G: "Summary of best practices"), NLS_LENGTH_SEMANTICS should be kept at BYTE at the database level, so that idea was rejected.

    After the import was finished, I wanted to compare the parameters to quality check the new instance.

    I queried the v$parameter view, as many DBAs would normally do and found that nls_length_semantics seemed to have been set to CHAR at the instance level:
    select name,value from v$parameter where name = 'nls_length_semantics';
    
    NAME                  VALUE
    --------------------  -----
    nls_length_semantics  CHAR

    After a quick search on My Oracle Support I found Doc ID 1368966.1 "NLS settings like NLS_DATE_FORMAT in spfile or init.ora ignored, incorrect or not as expected on Oracle 11.2.0.2 and higher"

    Turns out that this is not an error, but the result of a corrected bug (bug no 8722860).

    In previous releases, a query of v$parameter or "show parameter" in sql*plus would reflect the instance parameters.
    This is incorrect, as Oracle points out:

    The documentation set states that :
    * the SQL*Plus command show parameter displays the values of initialization parameters in effect for the current session.
    * the V$PARAMETER view displays information about the initialization parameters that are currently in effect for the session


    As Oracle puts it
    "From 11.2.0.2 onwards V$PARAMETER and the SQL*Plus command show parameter reflect the session NLS settings as it should."

    So make sure to use v$system_parameter instead, which reflects the correct instance setting:
    select name,value from V$SYSTEM_PARAMETER where name = 'nls_length_semantics';
    
    NAME                  VALUE
    --------------------  -----
    nls_length_semantics  BYTE


    Sources:
    Doc ID 144808.1 "Examples and limits of BYTE and CHAR semantics usage (NLS_LENGTH_SEMANTICS)"
    Doc ID 1368966.1 "NLS settings like NLS_DATE_FORMAT in spfile or init.ora ignored, incorrect or not as expected on Oracle 11.2.0.2 and higher"
    Doc ID 241047.1: "The Priority of NLS Parameters Explained (Where To Define NLS Parameters)"

    Thursday, October 29, 2015

    How to check your Operating System's code page for a particular character

    I have been involved in several migration projects where we had to deal with Western European characters, which have invalid representations in the database and need to be converted.

    Like I showed in my post about usage of the chr function, you sometimes need to find the decimal and/or hexadecimal value for a particular character.

    But how can you verify that your operating system actually agrees; how would it translate a character passed from a terminal, e.g. putty?

    First, make sure your putty terminal has its translation-settings set to the character set of the server you are logging into: Right-click your putty terminal upper frame, and pick "Change Settings" from the menu. Go to the "translation" settings and then select the correct character set from the drop-down menu "Remote character set".

    On Linux platforms, a simple way to check how a character would be translated would be to use the hexdump utility. Thanks to Gray Watson for demonstrating this!

    man hexdump
    The hexdump utility is a filter which displays the specified files, or the standard input, if no files are specified, in a user specified format.
    

    Let's try to hexdump the character ø, and see what the internal hexadecimal representation is:
    echo "ø" | hexdump -v -e '"x" 1/1 "%02X" " "'
    xC3 xB8 x0A 
    

    The prefix x in front of the values indicates hexadecimal, so the important part here is "C3 B8" - in a multibyte character set this byte pair represents the Scandinavian character ø. (The trailing 0A is simply the newline character that echo appends to its output.)

    Another way is to use the "od" utility:
    man od
    
    od - dump files in octal and other formats
           -x     same as -t x2, select hexadecimal 2-byte units
    
    

    Using the -x flag straight off will give the hexadecimal value in 2-byte units:
    echo "ø" | od -x
    0000000 b8c3 000a
    0000003
    

    This time the values are byte-swapped and should be read backwards: od -x interprets the input as 2-byte units in the host's byte order, which is little-endian on x86.

    However, if you use the -t x notation instead,:
    echo "ø" | od -t x1
    0000000 c3 b8 0a
    0000003
    
    The values come out as a DBA expects; c3 b8 corresponds to the decimal value 50104, which in turn represents the Scandinavian letter ø.

    (And again, the trailing 0a is just the newline appended by echo.)
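    The same check can be done in Python rather than the shell:

```python
# The UTF-8 encoding of ø is the byte pair C3 B8, which read as a
# single hexadecimal number is decimal 50104.
ch = "ø"
print(ch.encode("utf-8").hex(" "))  # c3 b8
print(int("c3b8", 16))              # 50104
```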





    Wednesday, October 28, 2015

    How to work around ORA-29275: partial multibyte character when using chr function

    If you need to check what is actually stored in a database table, meaning the internal representation, you can use the DUMP function.

    It will return a VARCHAR2 value containing the data type code, the length in bytes, and the internal representation.

    SQL> select dump ('ø',1010) from dual;
    
    DUMP('Ø',1010)
    --------------------------------------------------------------------------------
    Typ=96 Len=2 CharacterSet=AL32UTF8: 195,184
    

    So the type above is 96, representing the CHAR or NCHAR datatype (a string literal is typed as CHAR); the length is 2 bytes; the database character set is AL32UTF8; and the decimal values of the bytes are 195 and 184.

    For a single-byte character set, it would be easy to check which character the decimal value represented under the database's character set: simply use the chr function and pass the decimal value as an argument.

    However, with a multibyte character set, which one of the returned decimal values should you choose?
    SQL> select chr(195) from dual;
    
    ERROR:
    ORA-29275: partial multibyte character
    

    A workaround is to dump the character to hexadecimal format instead of decimal, by using the argument 1016 as the second argument to the function:
    SQL>  select dump ('ø',1016) from dual;
    
    DUMP('Ø',1016)
    --------------------------------------------------------------------------------
    Typ=96 Len=2 CharacterSet=AL32UTF8: c3,b8
    

    The c3b8 hexadecimal value corresponds to the decimal value of 50104. Pass this value to the chr function and you get the result you're looking for:

    SQL> select chr(50104) from dual;
    
    CHR(50
    ------
    ø
    

    A brilliant converter between decimal and hexadecimal numbers can be found here

    The documentation for the dump function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions055.htm#SQLRF00635

    The documentation for the chr function can be found at http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions026.htm#SQLRF00616

    The datatype table is listed here: https://docs.oracle.com/cd/B19306_01/appdev.102/b14250/oci03typ.htm#i422575

    Monday, October 5, 2015

    Is it possible to change the character set of the database by recreating the control file?

    Question: Is it possible to change the character set of the database by recreating the control file?
    Answer: No, this is not supported.

    Doc ID 225912.1 states:

    "Note also that it's NOT possible to change / adapt the NLS_CHARACTERSET by re-creating the control file.
    The characterset specified in CREATE CONTROLFILE DATABASE ... RESETLOGS ....CHARACTER SET ; ( the result of an ALTER DATABASE BACKUP CONTROLFILE TO TRACE..) needs to be the actual current NLS_CHARACTERSET of the database."


    Question: So why does CREATE CONTROLFILE generate a CHARACTERSET clause in the first place then?
    Answer: In case media recovery is required after the controlfile has been recreated.

    From the documentation:
    "If you specify a character set, then Oracle Database reconstructs character set information in the control file. If media recovery of the database is subsequently required, then this information will be available before the database is open, so that tablespace names can be correctly interpreted during recovery."

    The documentation goes on to say

    "This clause is required only if you are using a character set other than the default, which depends on your operating system."

    Without having found any sources on the internet to confirm my interpretation, I will claim that this means:

    * If you recreate your controlfile for a database with, say, WE8ISO8859P1 as character set, you will indeed need to specify the character set in the CREATE CONTROLFILE statement.

    * If you recreate the controlfile of a database using a Unicode character set, for example AL32UTF8, you can skip this clause altogether if you wish.

    Wednesday, July 2, 2014

    How to work around ORA-01830: date format picture ends before converting entire input string

    When querying a column of type TIMESTAMP WITH TIMEZONE, I wanted to omit the fractional seconds + the time zone information.

    The query

     SELECT STATE,
            START_DATE "START DATE"
     FROM   DBA_SCHEDULER_JOBS
     WHERE  OWNER = 'SYS' AND JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';
    

    would return
    STATE      START DATE
    ---------  --------------------------------
    SCHEDULED  02.07.2014 13:51:16,558701 +02:00
    
    and I simply didn't want to display that level of information. When trying to format the string as follows:

    TO_DATE(START_DATE,'DD.MM.YYYY HH24:MI:SS') "START DATE"
    

    I ran into the error:
       TO_DATE(START_DATE,'DD.MM.YYYY HH24:MI:SS') "START DATE"
                    *
     ERROR at line 2:
     ORA-01830: date format picture ends before converting entire input string
    

    Workaround:
    In your session, set
    ALTER SESSION SET NLS_TIMESTAMP_FORMAT='DD.MM.YYYY HH24:MI:SS';
    ALTER SESSION SET NLS_DATE_FORMAT='DD.MM.YYYY HH24:MI:SS';
    

    Then cast the data type from TIMESTAMP WITH TIME ZONE to TIMESTAMP:
    CAST(START_DATE AS TIMESTAMP)
    
    and finally, convert to DATE:
    TO_DATE(CAST(START_DATE AS TIMESTAMP),'DD.MM.YYYY HH24:MI:SS')
    
    Put together:
    SELECT STATE,
            TO_DATE(CAST(START_DATE AS TIMESTAMP),'DD.MM.YYYY HH24:MI:SS') "START DATE"
     FROM   DBA_SCHEDULER_JOBS
     WHERE  OWNER = 'SYS' AND JOB_NAME = 'PURGE_ALL_AUDIT_TRAILS';
    
    Output:
    STATE           START DATE
     --------------- ------------------------------
     SCHEDULED       02.07.2014 13:51:16
    
    Another example that worked for me was against a column defined like this:
      DATELASTUPDATED  TIMESTAMP(6)                 NOT NULL,
    
    In my result set I only wanted rows from 29.12.2016; the time of day was not interesting. Simply use the function TRUNC:
    WHERE  trunc(DateLastUpdated) = '29.12.2016'
    
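    Note that the comparison above relies on an implicit string-to-date conversion, which only works when the session's NLS_DATE_FORMAT happens to match the string. A variant that does not depend on session settings (using the same hypothetical column as above):

    WHERE  TRUNC(DateLastUpdated) = TO_DATE('29.12.2016', 'DD.MM.YYYY')
    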

    Tuesday, June 24, 2014

    Where to find the database characterset in the dictionary

    SQL> select value$ from sys.props$ where name = 'NLS_CHARACTERSET';
    
     VALUE$
     --------------------------------------------------------------------------------
     WE8MSWIN1252
    
     SQL> select * from nls_database_parameters WHERE PARAMETER='NLS_CHARACTERSET';
    
     PARAMETER                      VALUE
     ------------------------------ ----------------------------------------
     NLS_CHARACTERSET               WE8MSWIN1252
    

    Thursday, June 19, 2014

    What does the TERRITORY component of the NLS_LANG parameter control?

    From OTN:

    "The territory component of the NLS_LANG parameter controls the operation of a subset of globalization support features.
    It specifies conventions such as the default date, monetary, and numeric formats. Each supported territory has a unique name, for example, AMERICA, FRANCE, or CANADA.
    If the territory is not specified, then the value is derived from the language value."


    For examples please see the Globalization Support Guide
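    As a quick illustration of the conventions the territory controls, switching NLS_TERRITORY in a session changes how TO_CHAR renders the currency symbol and the group and decimal separators. The exact output depends on your character set and territory definitions:

    ALTER SESSION SET NLS_TERRITORY = 'AMERICA';
    SELECT TO_CHAR(1234.56, 'L999G999D99') FROM DUAL;  -- e.g. $1,234.56

    ALTER SESSION SET NLS_TERRITORY = 'FRANCE';
    SELECT TO_CHAR(1234.56, 'L999G999D99') FROM DUAL;  -- different currency symbol and separators
    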

    Wednesday, June 18, 2014

    How to work around ORA-12705 when connecting to your database

    You get:

    ERROR:
     ORA-12705: Cannot access NLS data files or invalid environment specified
    
    Cause:
    An incorrect or misspelled NLS_LANG setting.

    For example
    NLS_LANG=AMERICAN_NORWAY.UTF-8
    
    is incorrect.

    However
    export NLS_LANG=AMERICAN_NORWAY.UTF8
    
    is correct.
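    If you are unsure which spellings are accepted, the valid character set names can be looked up in the V$NLS_VALID_VALUES view:

    SELECT value
    FROM   V$NLS_VALID_VALUES
    WHERE  parameter = 'CHARACTERSET'
    AND    value LIKE '%UTF%';
    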

    Thursday, May 8, 2014

    What is the difference between the views nls_database_parameters and nls_instance_parameters?

    The NLS_DATABASE_PARAMETERS view will display what the NLS settings were when the database was created. These are fixed at the database level and cannot be changed.

    The NLS_INSTANCE_PARAMETERS view reflects parameters set for your instance in the init.ora file or the server parameter file (spfile).


    Source: James Koopman in Databasejournal.com

    This matches my settings for a random database:
    SQL> show parameter nls_
     
    NAME                                  TYPE        VALUE
     ------------------------------------ ----------- ------------------------------
     nls_comp                             string      BINARY
     nls_language                         string      AMERICAN
     nls_length_semantics                 string      BYTE
     nls_nchar_conv_excp                  string      FALSE
     nls_territory                        string      AMERICA
     
    SQL> SELECT * FROM NLS_INSTANCE_PARAMETERS WHERE value IS NOT NULL;
     
    PARAMETER                       VALUE
     ------------------------------ ----------------------------------------
     NLS_LANGUAGE                   AMERICAN
     NLS_TERRITORY                  AMERICA
     NLS_COMP                       BINARY
     NLS_LENGTH_SEMANTICS           BYTE
     NLS_NCHAR_CONV_EXCP            FALSE
     
    5 rows selected.
    

    Monday, February 10, 2014

    Working with TIME ZONE information



    To view the time zone settings for the database and your session:
    SELECT DBTIMEZONE, SESSIONTIMEZONE
    FROM DUAL;
    
    DBTIMEZONE SESSIONTIMEZONE
    +01:00 Europe/Zurich

    Some other useful functions:

  • SYSTIMESTAMP --> Returns the system date including fractional seconds and time zone, datatype TIMESTAMP WITH TIME ZONE.
  • LOCALTIMESTAMP --> Returns the user session's local timestamp, datatype TIMESTAMP.
  • CURRENT_TIMESTAMP --> Returns datatype TIMESTAMP WITH TIME ZONE, in the local time zone set for the session.
  • CURRENT_DATE --> Returns datatype DATE, in the local time zone set for the session.
    SELECT  SYSTIMESTAMP,
            LOCALTIMESTAMP,
            CURRENT_TIMESTAMP,
            CURRENT_DATE
    FROM DUAL;
    
    SYSTIMESTAMP LOCALTIMESTAMP CURRENT_TIMESTAMP CURRENT_DATE
    10.02.2014 09:17:00.301998 +01:00 10.02.2014 09:17:00.302006 10.02.2014 09:17:00.302006 +01:00 10.02.2014 09:17:00

    Some conversion functions:

    FROM_TZ
    -- converts a TIMESTAMP to TIMESTAMP WITH TIME ZONE
    
    SELECT LOCALTIMESTAMP, FROM_TZ(LOCALTIMESTAMP, '+01:00') FROM DUAL;
    
    LOCALTIMESTAMP                 FROM_TZ(LOCALTIMESTAMP,'+01:00')
    ------------------------------ ---------------------------------------------------------------------------
    18.03.14 09:23:40.032126       18.03.14 09:23:40.032126 +01:00
    
    
    
    
    -- or alternatively, look up the timezone used in your session
    -- and use this in a subquery 
    SELECT LOCALTIMESTAMP, 
           FROM_TZ(LOCALTIMESTAMP, (SELECT SESSIONTIMEZONE FROM DUAL)) "Converted to TSTZ"
    FROM DUAL;
    
    LOCALTIMESTAMP                 Converted to TSTZ
    ------------------------------ ----------------------------------------
    18.03.14 09:28:39.470322       18.03.14 09:28:39.470322 +01:00
    
    

    TO_TIMESTAMP_TZ
    
    --converts a string to a TIMESTAMP WITH TIME ZONE datatype
    SELECT TO_TIMESTAMP_TZ('03.02.1973 16:00:00', 'DD.MM.YYYY HH24:MI:SS') 
    FROM DUAL;
    
    TO_TIMESTAMP_TZ('03.02.197316:00:00','DD.MM.YYYYHH24:MI:SS')
    ---------------------------------------------------------------------------
    03.02.73 16:00:00.000000000 +01:00
    
    
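    If the input string itself carries an offset, it can be picked up with the TZH and TZM format elements (a sketch; without them, the session time zone is assumed, as above):

    SELECT TO_TIMESTAMP_TZ('03.02.1973 16:00:00 -05:00', 'DD.MM.YYYY HH24:MI:SS TZH:TZM')
    FROM DUAL;
    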

    TO_TIMESTAMP
    --converts a string to a TIMESTAMP datatype
    SELECT TO_TIMESTAMP('03.02.1973 16:00:00:123456','DD.MM.YYYY HH24:MI:SS:FF') 
    FROM DUAL;
    
    TO_TIMESTAMP('03.02.197316:00:00:123456','DD.MM.YYYYHH24:MI:SS:FF')
    ---------------------------------------------------------------------------
    03.02.73 16:00:00.123456000
    
    

    CAST
    Cast can be used for a variety of conversions. It takes an expression as its first parameter, and returns an instance of the datatype you are converting to.
    It is useful for converting strings representing dates or times into proper datetime datatypes.
    -- to convert a string into a TIMESTAMP WITH LOCAL TIME ZONE, use CAST
    SELECT CAST('03-FEB-11'
           AS TIMESTAMP WITH LOCAL TIME ZONE) "String to TWLTZ"
    FROM DUAL;
    
    String to TWLTZ
    ----------------------------
    03.02.11 00:00:00.000000
    
    
    For a string with more precision, use TO_TIMESTAMP to convert to a timestamp first:
    
    SELECT CAST(
                TO_TIMESTAMP('03.02.1973 16:05:00','DD.MM.YYYY HH24:MI:SS')
                AS TIMESTAMP WITH LOCAL TIME ZONE
               ) "TS to TSWLTZ"
    FROM DUAL;
    
    TS to TSWLTZ
    -----------------------------
    03.02.73 16:05:00.000000
    
    

    EXTRACT
    Extracts the value indicated by the constant immediately following the opening parenthesis, from a valid timestamp expression.
    Note that EXTRACT requires a timestamp expression as an argument.
    Its return type is character if you extract TIMEZONE_REGION or TIMEZONE_ABBR, but numeric for all others.

    SELECT  EXTRACT(MINUTE FROM LOCALTIMESTAMP),
            EXTRACT(YEAR   FROM LOCALTIMESTAMP),
            EXTRACT(MONTH  FROM LOCALTIMESTAMP),
            EXTRACT(DAY    FROM LOCALTIMESTAMP),
            EXTRACT(HOUR   FROM LOCALTIMESTAMP),
            EXTRACT(SECOND FROM LOCALTIMESTAMP)
    FROM DUAL;
    
    
    EXTRACT(MINUTEFROMLOCALTIMESTAMP) EXTRACT(YEARFROMLOCALTIMESTAMP) EXTRACT(MONTHFROMLOCALTIMESTAMP) EXTRACT(DAYFROMLOCALTIMESTAMP) EXTRACT(HOURFROMLOCALTIMESTAMP) EXTRACT(SECONDFROMLOCALTIMESTAMP)
    3 2014 2 10 10 24.256402

    Using the SYSTIMESTAMP, which includes TIME ZONE, in the EXTRACT function:
    SELECT  EXTRACT(TIMEZONE_HOUR FROM SYSTIMESTAMP)   "TZ_HOUR",
            EXTRACT(TIMEZONE_MINUTE FROM SYSTIMESTAMP) "TZ_MINUTE"
    FROM DUAL;
    
    TZ_HOUR TZ_MINUTE
    1 0

    SYS_EXTRACT_UTC
    Extracts the universal time (UTC) from a timestamp that contains time zone information:
    SELECT SYS_EXTRACT_UTC(SYSTIMESTAMP)
    FROM DUAL;
    
    SYS_EXTRACT_UTC(SYSTIMESTAMP)
    10.02.2014 09:49:37.390230

    AT TIME ZONE
    Converts a datetime value to another time zone. AT TIME ZONE can convert TIMESTAMP, TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL TIME ZONE values.
    But take care: it does NOT accept the DATE datatype!
    Note that the parentheses are not required, but they make the expression more readable:
    SELECT (LOCALTIMESTAMP) AT TIME ZONE DBTIMEZONE "DB time"
    FROM DUAL;
    
    DB time
    10.02.2014 10:59:45.901974 +01:00

    Or, using a named time zone region:
    SELECT (CURRENT_TIMESTAMP) AT TIME ZONE 'Europe/Moscow' "Moscow Time"
    FROM DUAL;
    
    Moscow Time
    10.02.2014 13:14:06.632136 +03:00

    Specifying an offset from UTC:
    SELECT (CURRENT_TIMESTAMP) AT TIME ZONE '-5:00' "New York Time"
    FROM DUAL;
    
    New York Time
    10.02.2014 05:15:51.388302 -05:00

    Use a TIMESTAMP value instead of TIMESTAMP WITH TIME ZONE:
    SELECT (LOCALTIMESTAMP) AT TIME ZONE 'America/Los_Angeles' "LA Time"
    FROM DUAL;
    
    LA Time
    10.02.2014 02:19:39.958949 -08:00
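    And since AT TIME ZONE does not accept a DATE directly, a DATE value can first be cast to TIMESTAMP (a sketch using SYSDATE):

    SELECT CAST(SYSDATE AS TIMESTAMP) AT TIME ZONE 'UTC' "Sysdate in UTC"
    FROM DUAL;
    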

    AT LOCAL
    Converts the source data into the session's local time equivalent:
    SELECT 
            FROM_TZ(
                    TO_TIMESTAMP('10.02.2014 02:21:06.938886', 'DD.MM.YYYY HH24:MI:SS:FF')
                   ,'US/Hawaii') 
            AT LOCAL "Hawaiian to Local Time"
    FROM DUAL;
    
    Hawaiian to Local Time
    10.02.2014 13:21:06.938886000 +01:00

    Without the AT LOCAL clause, you would of course get the timestamp in the time zone specified:
    SELECT  FROM_TZ(
                    TO_TIMESTAMP('10.02.2014 02:21:06.938886', 'DD.MM.YYYY HH24:MI:SS:FF')
                   ,'US/Hawaii') "Hawaiian time"
    FROM DUAL;
    Hawaiian time
    10.02.2014 02:21:06.938886000 -10:00