 

Archives of the TeradataForum

Message Posted: Wed, 25 Jun 2003 @ 20:24:57 GMT


     


Subj:   Re: Capacity Planning for Ordinary People!
 
From:   LeBlanc, James

Part 2 - What is TPerf, inhibited, uninhibited, and why do ordinary people care?

This note builds on my previous posting and elaborates on a minor point from that note: TPerf.

Disclaimer ** this is not an official posting from Teradata. It is a posting of a practitioner **

Measuring anything requires a metric, like an inch or a meter. Measuring performance requires a speed metric, and Teradata calls its metric TPerf. Others can explain this better than I can, so, as always, I welcome contributions to this discussion.

TPerf comes from a test case that was run on a Teradata system; query execution times were recorded and retained. When the next generation of hardware was developed, it ran the same test case faster. The first recorded query times were given a speed rating of 1.0, and the faster hardware was described as being a percentage faster, hypothetically 50% faster, and therefore given a TPerf rating of 1.5. That's all I ever say and think about with respect to the definition of TPerf.
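
To make the arithmetic concrete, here is a minimal sketch in Python; the query times are hypothetical, not published ratings:

    # Hypothetical sketch: a TPerf rating as a normalized speed ratio.
    # baseline_seconds: test case time on the reference system (TPerf = 1.0)
    # new_seconds: the same test case on the newer hardware
    def tperf(baseline_seconds, new_seconds):
        return baseline_seconds / new_seconds

    # The new hardware runs the test case in 2/3 of the time, so it is
    # 50% faster and rates a TPerf of 1.5.
    print(tperf(300.0, 200.0))   # -> 1.5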


Reasons to care

1. TPerf is used as a factor in developing pricing algorithms

2. TPerf is used to describe speed improvements over current configurations

3. TPerf is cumulative and straight-line (linear). In the theme of this note, this is not an ordinary concept; it is a common sense approach to computing architecture (see the sketch after this list).

4. For performance work, TPerf is one of two ways of calculating the speed difference among nodes of varying speeds, which is the subject of this posting.
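
A minimal sketch of point 3, assuming (as I read the linearity claim) that a whole system's TPerf is just the sum of its node TPerfs; the node ratings below are made up for illustration:

    # Hypothetical sketch of the cumulative, straight-line property:
    # system TPerf taken as the simple sum of the node TPerfs.
    node_tperfs = [1.0, 1.0, 1.5, 1.5]   # made-up ratings for four nodes
    system_tperf = sum(node_tperfs)
    print(system_tperf)   # -> 5.0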


Elaborating on the use of TPerf

We are getting outside of ordinary when we consider that Teradata system nodes can run at different speeds. From my observations, most small and medium size configurations are built all at once, and as a result all nodes run at identical speeds. However, when systems are built in increments, year after year, the nodes run at varying speeds, and some simple arithmetic (in addition to the computations mentioned in my previous posting) is required to calculate the maximum number of CPU seconds. TPerf is one of the techniques; use it along with the other common sense approach described below, with each used to verify the other's results.


Slow Nodes and All-AMP Operations

A full table scan requires the participation of all units of parallelism: VProcs, vAMPs, AMPs, whatever ordinary people want to call them. Basically, the whole machine participates. The total speed of the operation depends on the speed of the slowest part, so it follows that the total number of CPU seconds available also depends on the speed of the slowest part. That slowest part can only run at 100%. So use TPerf to calculate the speed of the slowest node and prorate the CPU seconds of the faster nodes downward to keep from overrunning the slow node. Otherwise you will calculate that the system is operating at less than 100 percent capacity while the slow node is running flat out at 100 percent, and your system is an embarrassing mess.
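
Here is a minimal sketch of that proration, under the assumption that an all-AMP operation is paced by the slowest node; the node TPerfs, CPU counts, and time window are all hypothetical:

    # Hypothetical sketch: prorate CPU seconds so a mixed-speed system
    # is not planned past what the slowest node can deliver.
    node_tperfs = {"node1": 1.0, "node2": 1.0, "node3": 1.5}   # made up
    cpus_per_node = 4
    window_seconds = 3600                  # one hour of wall clock
    slowest = min(node_tperfs.values())

    for node, rating in node_tperfs.items():
        raw = cpus_per_node * window_seconds
        usable = raw * (slowest / rating)  # faster nodes capped downward
        print(node, usable)
    # node3 is 1.5x the speed of node1, so only 2/3 of its raw CPU
    # seconds count before node1 (the slow node) saturates.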


Common Sense TPerf and Observation

Now let's put TPerf aside and use common sense observation to prorate CPU seconds.

Watch ResUsage for the varying-speed nodes. You will notice that system utilization increases to 100 percent (where it should be). During the ramp-up period, the various-speed nodes will show different percentage utilizations. Be cautious in selecting a ramp-up period: once the slow nodes are already at 100%, the faster nodes are completing work faster than the slow nodes can process it, so the slow nodes may be oversaturated. I use a graphical representation of two or more representative nodes via a spreadsheet. Look for increases across all the varying-speed nodes and calibrate your speeds at that time. Using this technique you can develop your own replacement for the TPerf factor, and should you have TPerf available, use it to verify that computation.

For example, if your slow nodes are running at 100%, your faster nodes are running at 50%, and this is a regular observation, then your prorate factor is 50 percent. Use this number to diminish the CPU seconds on the fast nodes by 50 percent. Develop this factor for all varying-speed nodes and diminish the available CPU seconds for all faster nodes.
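
As a sketch of that arithmetic (the node readings and CPU counts are hypothetical), the prorate factor is just the ratio of observed utilizations during a clean ramp-up window:

    # Hypothetical sketch: derive a prorate factor from observed
    # ResUsage utilization during a clean ramp-up window.
    slow_node_util = 100.0                 # percent busy, observed
    fast_node_util = 50.0                  # percent busy, same interval

    prorate = fast_node_util / slow_node_util   # -> 0.5
    fast_node_raw = 4 * 3600                    # 4 CPUs over one hour
    usable = fast_node_raw * prorate
    print(prorate, usable)                      # -> 0.5 7200.0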


TPerf, Inhibited and Uninhibited

TPerf has two workload characterizations: CPU intensive (uninhibited TPerf) and IO intensive (inhibited TPerf), with the middle ground being plain TPerf. Maybe others can describe this better than I can. I tend to ignore these numbers, as in practice they have only been interesting for understanding how different storage options affect speed. I keep watching them, though, as they may become useful factors when 5.1 comes along and CPU intensive functions are developed and implemented.

Don, I hope you are still awake :-)

Craig, I still owe you some examples

Jim LeBlanc
NCR/Teradata Technical Account Manager



     
 
 
 
 
 
 
 
 
  
 
 