Archives of the TeradataForum

Message Posted: Mon, 24 Feb 2003 @ 19:57:02 GMT


     

Subj:   Re: Counting duplicate rows thru MultiLoad
 
From:   Venkata Panabakam

  My experience has been that writes to that UV table may be performed row by row rather than in block fashion, so it can be slow.
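
  For context, the UV table is the second error table named in the MultiLoad job; a minimal sketch of where it is declared (all object names here are hypothetical):

      .LOGTABLE   log_target;
      .BEGIN IMPORT MLOAD
          TABLES      target_tbl
          WORKTABLES  wt_target
          ERRORTABLES et_target uv_target;

  The second ERRORTABLES entry (uv_target) is where MultiLoad writes the duplicate and uniqueness-violation rows discussed here, during the application phase -- the row-at-a-time writing described above.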


  Why not MultiLoad the 25 million rows into a MULTISET staging table? Then use SQL with grouping to find the duplicates and non-duplicates. At least you stay with parallel operations.
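
  For illustration, a minimal sketch of that approach; the staging table, target table, and column names are all hypothetical:

      /* Rows that occur more than once in the staging table */
      SELECT col1, col2, col3, COUNT(*) AS occurrences
      FROM   stage_tbl
      GROUP  BY col1, col2, col3
      HAVING COUNT(*) > 1;

      /* Carry one copy of every row forward to the target */
      INSERT INTO target_tbl (col1, col2, col3)
      SELECT DISTINCT col1, col2, col3
      FROM   stage_tbl;

  Both statements run as all-AMP, set-level operations, so the duplicate handling stays parallel instead of row-at-a-time.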



Does finding duplicate rows on Teradata slow Teradata down? Usually data validation is done before loading data into Teradata. I am assuming that loading duplicate rows into a table (with MULTISET) will slow Teradata down; that is why we are separating out the duplicate rows in the flat file itself.

Please correct me if I am wrong.
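
For reference on the SET/MULTISET point: a SET table checks every inserted row against existing rows with the same row hash (the duplicate-row check), while a MULTISET table skips that check and accepts duplicates. A minimal sketch of such a staging table, with hypothetical names:

    /* MULTISET: no duplicate-row check on insert */
    CREATE MULTISET TABLE stage_tbl
    ( col1  INTEGER
    , col2  VARCHAR(50)
    , col3  DATE
    )
    PRIMARY INDEX (col1);

With a non-unique primary index as here, duplicate rows load without the per-row comparison; a unique index would reject them regardless.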



     
 
 
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 15 Jun 2023