Archives of the TeradataForum
Message Posted: Mon, 06 Oct 2008 @ 15:44:35 GMT
Anonymously Posted: Mon, 6 Oct 2008 15:34
Our V2R6.0 Teradata system is channel-attached to an MVS/z/OS mainframe for loads etc., and SQL Assistant is one of our main query tools. A subset of the European accented characters has been identified which, when exported from the operational system to an IBM dataset, displays as the correct characters when the dataset is browsed. Our mainframe systems use the default US/UK English EBCDIC character set. We can load and export the data to/from Teradata without problem but cannot get it to display properly. In summary:
1. EBCDIC source data containing the international characters loads properly into two versions of the target table, i.e. one where the character set for the column is LATIN and one where it is UNICODE.
Am I right in thinking that this is because UTF16 is used on the Teradata server regardless of the table definition?
2. The data from both versions of the Teradata table exports properly via a Mainframe BTEQ job to a file/dataset.
3. The data from both versions of the Teradata table will not display properly on screen via a SQL SELECT statement.
This is the case via both Mainframe BTEQ and when using SQL Assistant.
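For what it's worth, the pattern in points 1-3 is consistent with the bytes themselves being correct at every stage and only the final rendering going wrong. A minimal Python sketch illustrates this (the codepage name cp037 for our EBCDIC variant is an assumption on my part):

```python
# The same accented character has a different byte value in each encoding,
# so conversions done at load/export time are lossless, while a viewer
# that decodes the bytes with the wrong codepage shows junk on screen.

ch = "é"  # stand-in for one of the European accented characters

ebcdic = ch.encode("cp037")      # EBCDIC source bytes (cp037 assumed)
latin = ch.encode("latin-1")     # bytes in a Teradata LATIN column
utf16 = ch.encode("utf-16-be")   # Teradata's internal Unicode form
utf8 = ch.encode("utf-8")        # bytes under a UTF8 session character set

# Round trips through any single encoding are lossless, which is why
# both the LATIN and the UNICODE versions of the table load and export
# cleanly (points 1 and 2).
assert ebcdic.decode("cp037") == latin.decode("latin-1") == ch

# But decoding with the wrong codepage corrupts the display (point 3):
print(utf8.decode("latin-1"))  # UTF-8 bytes read as Latin-1 -> 'Ã©'
```

In other words, the data in the tables and in the exported datasets can be perfectly sound while the screen still shows the wrong glyphs.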
In the case of SQL Assistant, we have tried altering the Session Character Set of the ODBC data source to UTF8 and to UTF16. In both cases we have also tried SELECT TRANSLATE(col_name USING UNICODE_TO_LATIN) FROM tbl_name. The characters in question can be copied and pasted from a browse of the IBM dataset into a WordPad file that defaults to the Helv font.
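On the TRANSLATE attempt: my understanding is that UNICODE_TO_LATIN re-encodes the server's Unicode value into the single-byte Latin repertoire, but it cannot influence how the client then renders those bytes. A rough Python analogue (the function name and sample value are mine, purely illustrative):

```python
def unicode_to_latin(value: str) -> bytes:
    """Rough analogue of TRANSLATE(col USING UNICODE_TO_LATIN):
    re-encode a Unicode string into the one-byte-per-character Latin
    repertoire. Characters outside that repertoire fail to translate."""
    return value.encode("latin-1")

stored = "José"  # illustrative value only
latin_bytes = unicode_to_latin(stored)

# The translation itself succeeds; what appears on screen still depends
# on the codepage/font the client applies when rendering these bytes.
print(latin_bytes.decode("latin-1"))  # correct viewer codepage -> 'José'
print(latin_bytes.decode("cp437"))    # wrong codepage -> wrong glyphs
```

Which would explain why TRANSLATE made no difference for us: the problem is on the rendering side, not in the stored or translated data.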
Having read through the manuals and an "Orange Book" article, my understanding is that Unicode is the preferred approach from a standards viewpoint, but I cannot find anything that explains our issue. If anyone out there knows how to make this work, your help would be appreciated.
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016