 

Archives of the TeradataForum

Message Posted: Tue, 11 Aug 2009 @ 13:30:41 GMT


     


Subj:   Re: How to bypass bad data in Multiload
 
From:   Jinesh P V

Michael,

We too had a similar requirement and used the following approach.

1. Write records with the correct number of fields into a named pipe and run MultiLoad against that named pipe. Records with an incorrect number of fields can be diverted to a separate file for investigation and notification.

2. Define the fields in the layout larger than necessary. We ended up defining a field as VARCHAR(20) for a column that is actually VARCHAR(10). This stopped MultiLoad from complaining "Data item too large..."; values longer than the table column's length are truncated automatically with no error.

3. Add "IGNORE DUPLICATE ROWS" to the .DML command to avoid "UV" (uniqueness violation) errors caused by duplicate rows.

4. Add the "nostop" option to the import statement so that MultiLoad does not terminate on the first error it sees.

5. Decide what percentage of rows may go into the error tables. Fail the load if the error percentage exceeds the allowed threshold. Additional checks for specific error codes and column names can also be performed.
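Steps 2 through 4 map onto the MultiLoad script roughly as follows. The table, column, label, and file names are hypothetical, and option spellings should be checked against the MultiLoad manual for your release:

```
.LAYOUT cust_layout;
.FIELD  cust_name  * VARCHAR(20);  /* table column is VARCHAR(10); oversized on purpose (step 2) */

.DML LABEL ins_cust
     IGNORE DUPLICATE ROWS;        /* avoid UV errors on duplicate rows (step 3) */
INSERT INTO mydb.customer (cust_name) VALUES (:cust_name);

.IMPORT INFILE /tmp/mload_pipe     /* named pipe carrying the pre-filtered records (step 1) */
        LAYOUT  cust_layout
        APPLY   ins_cust;          /* the post suggests adding "nostop" here (step 4) */
```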
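The threshold check in step 5 could be a post-load query comparing error-table row counts against the target table. The table names and the 5 percent threshold below are assumptions; your script's .BEGIN MLOAD statement determines the actual error-table names:

```
/* Hypothetical post-load check (step 5): flag the load as failed      */
/* when error rows exceed 5 percent of all rows processed.             */
SELECT CASE
         WHEN et.cnt * 100.0 / NULLIF(tot.cnt + et.cnt, 0) > 5.0
         THEN 'FAIL' ELSE 'OK'
       END AS load_status
FROM  (SELECT COUNT(*) AS cnt FROM mydb.customer_ET) et
CROSS JOIN
      (SELECT COUNT(*) AS cnt FROM mydb.customer) tot;
```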
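For step 1, the field-count filter can be sketched in Python. The delimiter and expected field count below are assumptions for illustration; the original post does not specify them:

```python
# Assumed values for illustration; the post does not specify them.
DELIM = "|"          # field delimiter of the input records
EXPECTED_FIELDS = 5  # number of fields the MultiLoad layout expects

def split_records(lines, expected=EXPECTED_FIELDS, delim=DELIM):
    """Separate well-formed records from those with the wrong field count.

    Good records are what gets written to the named pipe that MultiLoad
    reads; bad records go to a separate reject file for investigation.
    """
    good, bad = [], []
    for line in lines:
        fields = line.rstrip("\n").split(delim)
        (good if len(fields) == expected else bad).append(line)
    return good, bad
```

In practice the good list would be streamed to a pipe created with mkfifo (the path the mload script imports from) and the bad list written to a reject file for the investigation/notification mentioned above.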



     
 
 
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 15 Jun 2023