Archives of the TeradataForum

Message Posted: Tue, 27 Jan 2004 @ 12:57:10 GMT
Hi All, I have a data file in UTF-8 character set format. How do I load this file into a table with a VARCHAR column? I am using the script below, but every row ends up in the ET_ error table with error 2673. I am passing the character set parameter on the mload command line as follows:
mload -c utf8 < *mld
####################################
.logtable lg_unicode2;
.Begin Import Mload tables unicode2;
.Layout unicode2;
.Field col1 * char(2);
.DML Label INSERT;
Insert into unicode2 ( col1 ) values (:col1);
.Import infile unicode
format text
Layout unicode2
apply INSERT;
.End Mload;
.Logoff;
####################################
The table definition has just a single VARCHAR column, col1:
CREATE MULTISET TABLE DR_LOAD_DB_GH.unicode2 ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL
(
col1 VARCHAR(40) CHARACTER SET UNICODE NOT CASESPECIFIC)
PRIMARY INDEX ( col1 );
Note: The UTF-8 data file also contains multibyte Japanese characters, so a single character could be 2 or 3 bytes long and a string could look like: <MB-2><MB-2><MB-2>.

Has anybody tried this before? What layout should I specify, and what should the FORMAT clause of the .IMPORT command contain? I am assuming that for this scenario Teradata needs to be Kanji-enabled. Is that right?

Thanks in advance for your help.

--Vivek.
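For reference, a minimal sketch of how the layout might change to avoid the 2673 (source parcel length mismatch) errors, assuming the input file has one newline-terminated UTF-8 string per record. With FORMAT TEXT the last (here, only) field of the layout may be declared VARCHAR, and its length is counted in bytes, so 120 bytes would cover 40 characters at up to 3 bytes each for the VARCHAR(40) UNICODE target column. The 120-byte width is an assumption, not part of the original script, and the session character set still has to be supplied on the command line (mload -c UTF8). This is a sketch under those assumptions, not a tested fix.

####################################
.logtable lg_unicode2;
.Begin Import Mload tables unicode2;
.Layout unicode2;
/* Single variable-length field; 120 bytes is an assumed width     */
/* covering 40 characters x 3 bytes each in UTF-8                  */
.Field col1 * VARCHAR(120);
.DML Label INSERT;
Insert into unicode2 ( col1 ) values (:col1);
.Import infile unicode
format text
Layout unicode2
apply INSERT;
.End Mload;
.Logoff;
####################################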