Archives of the TeradataForum

Message Posted: Mon, 22 Oct 2001 @ 17:42:06 GMT


     
Subj:   Re: INSERT INTO big table
 
From:   Eric J. Kohut

Petr,

There are still a lot of assumptions to make which could affect performance, for example:

Speed of chips (most node models have shipped with at least two, and sometimes three, different chip speeds as performance has improved)

I/O bandwidth (where the configuration sits in the range of disk-storage-to-CPU ratios)

Fallback or no fallback on the table (also permanent journaling, though that is doubtful here)

Many general tunable options in addition to the performance options you provided, such as the FSG cache setting

Table's data block size (both source and target)

Table's row size (both source and target)

Since all the e-mails have been trimmed, this is my understanding of your situation.

You are trying to insert into a non-empty table with 500 million rows, selecting from a source table of 1 million rows. That is 0.2% (1/500) of your data being changed. This percentage is very low, so there is minimal performance improvement from multiple hits (rows) per changed block, as compared with, say, 2% - 10%.
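As a rough illustration of why such a low change percentage hurts (my own back-of-envelope numbers, not your actual settings; I'm assuming roughly 64 KB data blocks, 100-byte rows, and inserts spread evenly across the table):

    # Back-of-envelope check; block and row sizes are assumed, not your actual values.
    source_rows = 1_000_000
    target_rows = 500_000_000
    block_bytes = 64 * 1024          # assumed target data block size
    row_bytes   = 100                # assumed target row size

    change_fraction = source_rows / target_rows
    rows_per_block  = block_bytes // row_bytes
    hits_per_touched_block = rows_per_block * change_fraction

    print(f"change: {change_fraction:.1%}")                              # 0.2%
    print(f"rows per block: ~{rows_per_block}")                          # ~655
    print(f"new rows per touched block: ~{hits_per_touched_block:.1f}")  # ~1.3

At roughly one new row per block touched, nearly every inserted row forces its own block to be read and rewritten; at 2% - 10% that cost would be amortized across several rows per block.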

As a result, I would imagine that you've seen overall performance decline as the target table grows while your incoming data volume stays constant. If possible, the best way to improve data timeliness with such a low data-change percentage is to get the data applied earlier (TPump, or your current method with smaller batches spread throughout the day). Faster hardware (CPU and I/O) and newer software would help as well, but a reduction in the rows per node (VAMP) would also be helpful (a lower disk-to-CPU ratio). Currently you have about 20,833,333 rows per VAMP.
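For what it's worth, here is how that rows-per-VAMP figure works out; the AMP count is my inference from the numbers quoted, not something you stated:

    # Rough check of the rows-per-VAMP figure; the ~24 AMP count is inferred, not confirmed.
    target_rows  = 500_000_000
    rows_per_amp = 20_833_333

    amps = round(target_rows / rows_per_amp)     # ~24 AMPs implied
    print(f"implied AMPs: {amps}")
    print(f"rows per AMP at {amps} AMPs: {target_rows // amps:,}")

Adding nodes/AMPs (or otherwise lowering the disk-to-CPU ratio) brings that per-AMP row count down proportionally.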

Also, have you checked whether your system is experiencing cylinder splits or mini-cylpacks to reclaim space? Either would make performance variable, and both tend to happen on very full systems.

That said, based upon the info that I have, you seem in general to be in the right performance range. Assuming a target row size of >= 100 bytes, a target table with no fallback, and 200 MHz chips, you are pretty close to what I would expect at 0.2% of the data being changed.

EJK
Eric J. Kohut
Senior Solutions Consultant - Teradata Solutions Group - Retail
Certified Teradata Master
NCR Corp.



     