Archives of the TeradataForum

Message Posted: Tue, 17 Apr 2012 @ 15:17:23 GMT
For some time we'd been going back and forth between Oracle and Teradata about this issue and why the UTF8 works on Oracle but fails on Teradata. Finally Teradata agreed to come up with something that would "pre-clean" our data from Oracle, so that we did not fail on the load. This "pre-clean" access module is called cp2uni_axm.so.

We have placed this code in the /edw/ directory of the server (the same Linux server that holds the FastLoad script) and call it from the FastLoad scripts as follows. After:

    SET RECORD VARTEXT "";

insert the following:

    /* access module filters untranslatable unicode characters */
    /* ErrorChar u+0023 --> # */
    /*           u+003F --> ? */
    /* Remove TRACE to reduce log-file clutter. */
    axsmod /edw/cp2uni_axm.so "CodePage=UTF8, ErrorChar=U+003F, EOR=0A, Format=FIXED";

What this does is take the input file (for example 'myfile.dat') and put it through the translation routine, replacing any "strange characters" with "?". (Note: we did ask for "¿" but they must have dropped the request, or had it upside down :-) and gave us "?" instead. "¿" is Alt 168.)

Anyway, we do not have access to the executable's code - you may have to ask Teradata for it - and are therefore unable to comment on what it does or how it does it; all we can say is that it seems to work and does indeed replace strange characters with "?".

Hope that makes sense.

Regards
David Clough
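Since the access module itself is closed, here is a minimal sketch (not Teradata's actual code) of what such a pre-clean pass conceptually does: decode the input bytes as UTF-8 and substitute a fixed error character (here "?", i.e. U+003F, matching ErrorChar=U+003F) for any byte sequence that is not valid UTF-8. The function name `preclean` and the custom error-handler name are illustrative assumptions only.

```python
import codecs

def preclean(raw: bytes, error_char: str = "?") -> str:
    """Hypothetical sketch: replace each undecodable byte with error_char."""
    def handler(exc):
        # exc is a UnicodeDecodeError; resume decoding just past the bad byte
        return (error_char, exc.end)

    codecs.register_error("preclean", handler)  # illustrative handler name
    return raw.decode("utf-8", errors="preclean")

# Example: b"\xff" is never valid in UTF-8, so it becomes "?"
dirty = b"valid text \xff\xfe more text"
print(preclean(dirty))
```

The real module also honours record format and end-of-record settings (Format=FIXED, EOR=0A); this sketch only illustrates the character-substitution aspect.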
Copyright 2016 - All Rights Reserved
Last Modified: 15 Jun 2023