Archives of the TeradataForum
Message Posted: Tue, 25 Nov 2003 @ 10:11:20 GMT
Subj: Re: CLI charset problem..
From: Victor Sokovin
| No, output file is not empty. It contains some untranslatable chars (like ^Z^Z^Z^Z) in place of unicode columns.
| You cannot read these columns even with Win clients with Unicode support.
Right, it looks like some further investigation might be necessary. It would be useful to know more about your setup: how you
define the table, how you get data in and out of it, the client platforms and utilities involved (does the data come from a Linux
client as well?), whether ODBC is used somewhere along the way, etc. Some unwanted conversion might have taken place while you were
populating the table, so the first question is whether the data currently stored in the table is OK.
If you have not done so already, you could try using Char2HexInt to examine the hex values of the chars in the Unicode column. So,
instead of 'select unicode_column' in your test query, use 'select Char2HexInt (unicode_column)'. This will make the 'viewing part'
easy. Perhaps you could imitate your data population process with some simple Latin sample like a bunch of letters A, B, C etc. If
the test query does not return their proper Unicode hex values, then you'll have to look into the data loading processes. Try the
simplest inserts in BTEQ to see whether that makes any difference.
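For example, a minimal round trip might look like the sketch below. The table and column names here are hypothetical (they are not
from your posting), and the expected hex string assumes the column is defined with CHARACTER SET UNICODE:

     /* Hypothetical test table; names are for illustration only. */
     CREATE TABLE test_unicode
         (id INTEGER,
          uc_col VARCHAR(10) CHARACTER SET UNICODE);

     /* Populate with a simple Latin sample via a plain BTEQ insert. */
     INSERT INTO test_unicode VALUES (1, 'ABC');

     /* Look at the stored code points instead of the translated chars. */
     /* If 'ABC' is stored correctly, this should return 004100420043,  */
     /* i.e. the Unicode code points for A, B and C.                    */
     SELECT Char2HexInt(uc_col) FROM test_unicode;

If the sample comes back correct but your real data does not, that would point to the loading process rather than to the client
session settings.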
There could be many ramifications here, so it would be useful if you could post some more test results.
Regards,
Victor