
Archives of the TeradataForum

Message Posted: Mon, 23 Jun 2003 @ 20:41:33 GMT




Subj:   Re: Capacity Planning for Ordinary People!
 
From:   LeBlanc, James

In response to Anom's posting: I find it to be well considered and most scientific.

Maybe the artistic part comes from interviewing the user community to collect and understand service-level requirements, written, spoken and otherwise. I find the rest of capacity planning to be scientific and well founded in simple arithmetic formulas. I will agree that the computations are less precise than those I use to balance my checkbook. Maybe they are as precise as watching my gas gauge and estimating the gas level based on what the gauge says. A half tank is not always half full; from experience I find that it is a little lower. I know this because I observe the fuel added and contrast that to the gauge reading. I use similar thinking for measuring CPU capacity.

Adding to the Anom posting, when measuring capacity one needs to measure both time and space; restated: CPU time and disk space. I find IO operations to be less important and more difficult to gain access to, so I settle for measuring logical IO operations and form a statistic of logical IO (LIO) per CPU second.
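As a concrete illustration, the LIO-per-CPU-second statistic is just a ratio. The sketch below (Python, with invented workload numbers) shows how it might be formed once both totals are in hand:

```python
def lio_per_cpu_second(logical_ios: float, cpu_seconds: float) -> float:
    """Logical IO operations performed per CPU second consumed."""
    if cpu_seconds <= 0:
        raise ValueError("CPU seconds must be positive")
    return logical_ios / cpu_seconds

# e.g. a workload that issued 1,200,000 logical IOs while using
# 400 CPU seconds (numbers invented for illustration)
print(lio_per_cpu_second(1_200_000, 400))  # 3000.0
```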

The notion of measuring disk space is nothing new, and most Teradata users know how to approach the task. The other part, measuring CPU time and allocating it to the various activities, is more complex.

Measuring CPU time for capacity planning starts with understanding a "CPU second" and how many CPU seconds per wall clock second are available in your system. Of the two dimensions, time and space, the CPU second is the time unit of measure, and CPU seconds per wall clock second is the system's capacity. CPU capacity planning requires this number.

For ordinary purposes, a single-CPU system has 1 CPU second per wall clock second. Systems that have multiple CPUs take a little more thinking. A single node with 4 CPUs has 4 CPU seconds per wall clock second. Adjust that number downward by 20% (the UNIX operating system takes about 20% of CPU time). So the answer is 0.8 * 4, or 3.2 CPU seconds per second.
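That arithmetic can be sketched in a few lines of Python. The 20% overhead figure is the rule of thumb stated above; the function name and parameters are my own:

```python
def cpu_seconds_per_second(cpus_per_node: int, nodes: int = 1,
                           os_overhead: float = 0.20) -> float:
    """Usable CPU seconds per wall-clock second, net of OS overhead.

    os_overhead defaults to 0.20 per the ~20% UNIX rule of thumb.
    """
    return cpus_per_node * nodes * (1.0 - os_overhead)

# Single 4-CPU node, 20% reserved for the OS: 0.8 * 4 = 3.2
print(cpu_seconds_per_second(4))             # 3.2
# A hypothetical 10-node system of 4-CPU nodes
print(cpu_seconds_per_second(4, nodes=10))   # 32.0
```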

Now you can calculate where 100% is and where you are by time period: second, hour, day, week, month. Use AMPUsage to calculate your consumption by userid and sum that together to get capacity estimates by time. Account String Extensions make the task easier. One needs to know what AMPUsage is and how to compare it to ResUsage to validate your math.
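A sketch of the percentage computation, assuming you have already summed the interval's CPU seconds from AMPUsage (the figures below are invented for illustration):

```python
def utilization_pct(consumed_cpu_seconds: float,
                    capacity_cpu_sec_per_sec: float,
                    interval_seconds: float) -> float:
    """Percent of available CPU capacity consumed during an interval."""
    available = capacity_cpu_sec_per_sec * interval_seconds
    return 100.0 * consumed_cpu_seconds / available

# One hour on a node offering 3.2 CPU seconds/second gives
# 3.2 * 3600 = 11,520 available CPU seconds. If the AMPUsage
# totals for that hour sum to 8,000 CPU seconds:
print(round(utilization_pct(8_000, 3.2, 3_600), 1))  # 69.4
```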

I am finding that ordinary systems have a mixed bag of CPUs. Some nodes run with 4 CPUs and some run with 2 CPUs. This computation is a variation of the above, and it is still easier than the case where you have a mixed bag of CPU counts and speeds. With that thought we are past discussing ordinary computations. For that condition I use a prorating of CPU seconds based on TPerf. I find that IO Inhibited TPerf works best. Drop me a note if you need more information on this.
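The posting names the technique (prorate CPU seconds by TPerf) without showing the formula. The sketch below is one plausible reading, not the author's actual method: each node type's CPU seconds per second are scaled by its TPerf relative to a reference node. The TPerf values and node counts are invented for illustration.

```python
def prorated_cpu_seconds(node_types: dict, reference_tperf: float,
                         base_cpu_sec_per_sec: float) -> float:
    """Total effective CPU seconds/second across a mixed fleet.

    node_types maps a node label -> (count, tperf); each node's
    contribution is scaled by tperf relative to the reference node,
    which is assumed to deliver base_cpu_sec_per_sec.
    """
    total = 0.0
    for count, tperf in node_types.values():
        total += count * base_cpu_sec_per_sec * (tperf / reference_tperf)
    return total

# e.g. 4 older nodes at TPerf 50 and 2 newer nodes at TPerf 100,
# against a reference node (TPerf 100) worth 3.2 CPU seconds/second
fleet = {"old": (4, 50.0), "new": (2, 100.0)}
print(prorated_cpu_seconds(fleet, 100.0, 3.2))  # 12.8
```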

With a full understanding of CPU seconds per second, one can calculate, model, measure and predict how applications will behave. The exercise is also a good use of time, as it gives insight into how TDQM performance regulator settings need to be used.

Jim LeBlanc
NCR/Teradata Technical Account Manager



 
 
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 15 Jun 2023