Archives of the TeradataForum

Message Posted: Mon, 30 Jan 2006 @ 09:51:29 GMT


     

Subj:   Re: Use of Multiset for Large Table
 
From:   Joseph V D'silva

Hard to give proper comments without knowing your exact requirements, but this is what I would say. When you are inserting into a MULTISET table, Teradata does not do duplicate row checking. If your table were a SET table, with a fairly high typical rows-per-value on the primary index and no unique indexes, then you are in for trouble.

One of my developers was recently trying to join two tables (same schema) and insert the result into a third table, an empty SET table. There were about 8 million rows in one table and about 4 million in the other. After the query had run for about 2 hours we aborted it, changed the destination to a MULTISET table and ran the union select/insert again, and bingo! The query finished in 2 minutes 32 seconds. Since I had done a UNION (not UNION ALL), I was sure there would be no duplicate rows even though the target table was MULTISET. And since the target table was MULTISET, Teradata did not do any duplicate row checking... and we lived happily ever after :)
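For reference, a rough sketch of that MULTISET approach (the table and column names below are made up just for illustration):

CREATE MULTISET TABLE target_tbl
  ( cust_id   INTEGER
  , txn_date  DATE
  , amount    DECIMAL(18,2)
  )
PRIMARY INDEX (cust_id);

/* UNION (not UNION ALL) removes duplicates in the spool,
   so the MULTISET target still ends up duplicate free,
   while Teradata skips the per-row duplicate check a SET table would do */
INSERT INTO target_tbl
SELECT cust_id, txn_date, amount FROM source_a
UNION
SELECT cust_id, txn_date, amount FROM source_b;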

Understanding your system is the key to choosing which way you want to go. If you know you can keep duplicate rows from going into that table, then you can make it MULTISET and you will see a great deal of improvement in performance.
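And if you ever want to double-check afterwards, a quick query along these lines will show any duplicates that slipped in (again, the names are just illustrative):

SELECT cust_id, txn_date, amount, COUNT(*)
FROM target_tbl
GROUP BY 1, 2, 3
HAVING COUNT(*) > 1;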


Good luck!

Joseph Vinish D'silva



     
 
 