 

Archives of the TeradataForum

Message Posted: Sat, 08 May 2004 @ 15:18:26 GMT


     


Subj:   Re: MLoad issues
 
From:   Michael Larkins

Hi Dave:

You mentioned that converting the source data might be difficult; however, using the native data formats would be much faster than making the load utility search byte by byte for a delimiter.

Now that I have that out of my system, I suggest you use a different delimiter, one that does not appear anywhere in your data. I realize this is changing the "source data", but you didn't indicate how the data is created. If an unload utility is being used, it should have an option to specify the delimiter, just as FastLoad, MultiLoad, TPump, and even Queryman do; a sketch of the MultiLoad side follows below.
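For example, here is a minimal MultiLoad sketch using a pipe as the delimiter; the file name, database, table, layout, and label names are made up for illustration:

    .LOGTABLE mydb.mload_log;
    .LOGON tdpid/user,password;

    .BEGIN IMPORT MLOAD TABLES mydb.target_table;

    .LAYOUT file_layout;
    .FIELD in_col1 * VARCHAR(20);
    .FIELD in_col2 * VARCHAR(200);

    .DML LABEL insert_target;
    INSERT INTO mydb.target_table (col1, col2)
    VALUES (:in_col1, :in_col2);

    .IMPORT INFILE sourcedata.txt
        FORMAT VARTEXT '|'      /* a delimiter that never appears in the data */
        LAYOUT file_layout
        APPLY insert_target;

    .END MLOAD;
    .LOGOFF;

In VARTEXT format every .FIELD must be defined as VARCHAR, which also ties into the length question below.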

As for your second question, the input data has to fit into the memory location defined in the .LAYOUT. I might suggest making the field the largest you might ever be sent (this sounds like a problem on the creation side of the equation). Then load your target table or a temporary staging table, and run a query, using CHARACTERS (CHARACTER_LENGTH) or OCTET_LENGTH, to do whatever you want with rows that contain a data value longer than the column should allow. You might as well scrub your bad data in parallel instead of serially.
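As a sketch of that scrubbing step, assuming the staging column was declared wider than needed (say VARCHAR(200)) while the real limit is 20 characters; the table and column names are made up:

    /* Inspect the rows whose value exceeds the intended 20-character limit */
    SELECT col1, col2
    FROM   mydb.stage_table
    WHERE  CHARACTER_LENGTH(TRIM(col2)) > 20;

    /* Move only the clean rows into the real target table */
    INSERT INTO mydb.target_table (col1, col2)
    SELECT col1, col2
    FROM   mydb.stage_table
    WHERE  CHARACTER_LENGTH(TRIM(col2)) <= 20;

Because both queries run as ordinary SQL against the staging table, the AMPs do the scrubbing in parallel.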

Hope these suggestions at least give you a place to start.


Regards,

Michael Larkins
Certified Teradata Master



     