Archives of the TeradataForum

Message Posted: Mon, 10 Apr 2007 @ 01:28:40 GMT


     

Subj:   Re: Merging large volumes of data into large history tables
 
From:   Ureta, Cesar

Hi all:

Maybe I'm missing something, but I always thought that the parallel load tools (MultiLoad & FastLoad) were much faster at loading large volumes of data, mainly because they work at the block level.
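
For reference, here is the kind of block-level load I mean: FastLoad filling an empty staging table straight from a flat file. This is only a minimal sketch, and the table, column, and file names are made up:

   LOGON tdpid/loaduser,password;

   /* With VARTEXT input every DEFINEd field must be VARCHAR */
   SET RECORD VARTEXT ",";
   DEFINE sale_id (VARCHAR(18)),
          sale_dt (VARCHAR(10)),
          amount  (VARCHAR(18))
      FILE = sales.csv;

   /* Target must be an empty table with no secondary indexes */
   BEGIN LOADING stage.sales_load
      ERRORFILES stage.sales_err1, stage.sales_err2;

   INSERT INTO stage.sales_load (sale_id, sale_dt, amount)
      VALUES (:sale_id, :sale_dt, :amount);

   END LOADING;
   LOGOFF;

FastLoad assembles whole data blocks and writes them directly instead of inserting one row at a time, which is where the speed comes from.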

So when you say "Insert/update operations have been made performant enough", do you actually mean that the times we get now are approaching the times we could get from the parallel tools?

I have a question: if SQL operates at the row level and has to go through a log (we don't want unlogged operations on our DW tables, do we?), what has changed now, other than hardware being much faster than it was a few years ago?
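
To make the comparison concrete, the SQL path I have in mind is the usual staged merge into the history table, something along these lines (names again hypothetical; on releases without full MERGE support the same step is written as a separate UPDATE and INSERT ... SELECT):

   /* Row-level alternative: merge staged rows into the history
      table; every row change is logged (names hypothetical)   */
   MERGE INTO dw.sales_history AS tgt
   USING stage.sales_load AS src
      ON tgt.sale_id = src.sale_id
   WHEN MATCHED THEN
      UPDATE SET sale_dt = src.sale_dt,
                 amount  = src.amount
   WHEN NOT MATCHED THEN
      INSERT (sale_id, sale_dt, amount)
      VALUES (src.sale_id, src.sale_dt, src.amount);

Every row this statement touches goes through the log, which is exactly the per-row overhead the block-level utilities avoid.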

I strongly think that ETL operations should be designed to run as fast as possible; fast enough is not sufficient, as my current project has painfully proved. So here's the moral: if you think you are fast enough, time will prove you wrong...


Cheers,

Cesar



     