Archives of the TeradataForum
Message Posted: Wed, 30 Oct 2002 @ 22:22:21 GMT
Does anyone have a set of parameters that identify when a 2-node system should be scaled up to a 4-node system for improved performance (request throughput), based on trends in CPU utilization and I/O processing?
Our system was sized for 1.3 terabytes of space and is well under the threshold for expansion in that dimension; I believe we're at about 40% disk capacity. We have been experiencing significant contention for resources in the morning (when ETL runs), peaking at about 8:00 AM. We are just now starting to analyze the data using some of the ResUsage macros (such as ResNode) and some other custom queries.
I guess what I'm looking for is suggestions or guidance on what to look for: say, a target CPU utilization percentage sustained over some amount of time on a consistent (growth) basis. At what point do you say that running at 85 or 90% for 4 or 5 hours is justification for upgrading the number of UNIX nodes?
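To make the question concrete, here is a minimal sketch (in Python) of the kind of sustained-saturation check being asked about: flag any run where utilization stays at or above a threshold for some number of consecutive hourly samples. The 85% threshold and 4-hour window are just the figures from the question, not an established Teradata sizing rule, and the sample data is made up for illustration.

```python
# Sketch: flag sustained CPU saturation from hourly utilization samples.
# The threshold (85%) and window (4 hours) are assumptions taken from the
# question above, not an established sizing rule; the data is illustrative.

def sustained_saturation(samples, threshold=85.0, window=4):
    """Return (start, end) index pairs of runs where utilization stays
    at or above `threshold` for at least `window` consecutive samples."""
    runs = []
    start = None
    for i, util in enumerate(samples):
        if util >= threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= window:
                runs.append((start, i - 1))
            start = None
    if start is not None and len(samples) - start >= window:
        runs.append((start, len(samples) - 1))
    return runs

# Hourly node CPU % from midnight to noon (illustrative numbers only).
hourly_cpu = [20, 15, 18, 25, 40, 88, 93, 97, 99, 95, 90, 60, 35]
print(sustained_saturation(hourly_cpu))  # -> [(5, 10)]: saturated 05:00-10:00
```

In practice the hourly samples would come from the ResUsage data already being collected; the open question is what threshold and duration, trending over weeks or months, should trigger the node upgrade.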
I cannot give you query performance criteria, as I have none, but we do see many of our production Cognos cube builds running longer and longer, due both to more data (added daily) and to more requests occurring simultaneously because of added functionality to the DW. We have limited ETL (and database maintenance) windows because of a coast-to-coast operation, because of when system backups occur in the production systems, and because of when operational data becomes available for data loads. Yet the user community needs to have on-line access by 8:00 AM ET. It seems our box is running at 85-100% utilization from sometime just after 5:00 AM to a little after 10:00 AM, and we are finding it more and more difficult to move (or parallelize) requests within the short time frame allotted us.
Any ideas, comments, suggestions will be carefully considered and appreciated!
Michael E. McBride
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016