Archives of the TeradataForum

Message Posted: Thu, 12 Apr 2007 @ 22:32:55 GMT



Subj:   Re: Merging large volumes of data into large history tables
 
From:   Dan Linstedt

Hi Todd, et al.:

Todd, thanks for clearing this up for me; I'm sorry about the misconception I put out to the forums. I'll check with you first on new improvements next time around. Also, we should be meeting up soon in RB...

By the way, a note about performance on Teradata and getting data IN fast... I recently had the pleasure/opportunity to work on a large multi-node system. All I can say is that MLOAD speeds were approaching 400,000 rows per second for 1.2 KB row sizes, loading to an empty table. FastLoad was approaching 600,000 rows per second to the same table.
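As a back-of-envelope check on those numbers (my own sketch, not part of the original benchmark), converting the rows-per-second rates to raw byte throughput at the stated 1.2 KB row size gives:

```python
ROW_BYTES = 1200  # the 1.2 KB row size cited above

def mb_per_sec(rows_per_sec, row_bytes=ROW_BYTES):
    """Convert a rows/sec load rate to MB/sec (decimal megabytes)."""
    return rows_per_sec * row_bytes / 1_000_000

# MLOAD at ~400,000 rows/s  -> 480.0 MB/s
# FastLoad at ~600,000 rows/s -> 720.0 MB/s
mload_rate = mb_per_sec(400_000)
fastload_rate = mb_per_sec(600_000)
```

So the FastLoad figure works out to roughly 720 MB/s of row data into the table.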

Internal SQL performance (moving data from table to table, with no constraints and empty target tables) was nearing 150,000 to 200,000 rows per second, including a forced re-partitioning based on a new key. These SQL statements were extremely complex, with all kinds of nested sub-queries, OLAP functions, and transformations.

I'm very impressed with the speed/throughput we achieved; of course, we had 1.5 billion rows per table to deal with, so performance was paramount.
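To put those rates against the 1.5 billion-row table size (again, my own rough arithmetic, not measured wall-clock times from the job):

```python
ROWS = 1_500_000_000  # 1.5 billion rows per table, as cited above

def hours_to_load(rows_per_sec, total_rows=ROWS):
    """Estimated hours to move total_rows at a sustained rows/sec rate."""
    return total_rows / rows_per_sec / 3600

# FastLoad at ~600,000 rows/s      -> ~0.69 hours per table
# INSERT-SELECT at ~150,000 rows/s -> ~2.78 hours per table
fastload_hours = hours_to_load(600_000)
sql_hours = hours_to_load(150_000)
```

In other words, even at the slower SQL rate, a full 1.5 billion-row copy lands in under three hours, which helps explain why the throughput mattered so much here.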


Thanks,

Dan Linstedt
Friends of Teradata Network



Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 15 Jun 2023