Archives of the TeradataForum
Message Posted: Sat, 03 Jan 2004 @ 09:09:49 GMT
Regarding your question about the Delta CPU from Session Information, that sounds about right to me. I would expect about 150 CPU seconds per 10 seconds. The variability you are seeing probably stems from the varying CPU intensity of the work being performed, as well as the imperfection of the logging interval (i.e., it is probably not always exactly 10 seconds). I would also guess there is some small amount of variability in which interval a given slice of CPU time gets logged.
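To see why 150 CPU seconds in a 10-second wall-clock interval is plausible, remember that Delta CPU is an aggregate across all CPUs, so its ceiling is interval length times CPU count. A rough sketch (the CPU count of 16 is just an assumed configuration for illustration, not taken from your system):

```python
def cpu_busy_pct(delta_cpu_secs, interval_secs, num_cpus):
    """Percent of total CPU capacity consumed in one logging interval.

    delta_cpu_secs: aggregate CPU seconds logged across all CPUs
    interval_secs:  actual interval length (may drift from the nominal 10s)
    num_cpus:       total CPUs across all nodes
    """
    capacity = interval_secs * num_cpus  # max CPU seconds available
    return 100.0 * delta_cpu_secs / capacity

# 150 CPU seconds over 10 seconds on an assumed 16-CPU configuration:
print(cpu_busy_pct(150, 10, 16))  # 93.75 (percent busy)
```

Note that if the interval actually ran 11 seconds instead of 10, the same Delta CPU figure reads as a lower busy percentage, which is one source of the variability mentioned above.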
In my experience, the number of active sessions typically has no correlation with how busy the system is. It's all about the type of activities being performed. I'm sure that most anyone on this list could write a single query that would push the system to 100%. Bad SQL is easy to write! :-)
As far as the sar ratio is concerned, I have seen the 65:35 (user:system) ratio on well-behaved systems. To be quite honest, though, I find that sar, and the corresponding ResUsage columns, provide little guidance in pinpointing query-related performance issues. I'm assuming that is the underlying issue prompting the note.
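For reference, the 65:35 figure is the split of busy time between user and system mode, normalized to exclude idle and wait time. A minimal sketch of that normalization (the sample percentages are hypothetical, not from any real sar output):

```python
def user_system_split(usr_pct, sys_pct):
    """Normalize sar-style %usr/%sys figures into a user:system busy ratio,
    ignoring idle and I/O-wait time."""
    busy = usr_pct + sys_pct
    return (round(100 * usr_pct / busy), round(100 * sys_pct / busy))

# Hypothetical sar -u sample: 52% user, 28% system, rest idle/wait
print(user_system_split(52, 28))  # (65, 35)
```

So a system can report the "healthy" 65:35 split at 30% busy or at 95% busy; the ratio alone says nothing about how loaded the box is, which is part of why it offers so little help with query-level problems.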
Based on your last paragraph, which mentioned your frustration, I would guess that the problem is one of two things:
1) The activities performed by the active sessions are less than optimal, meaning that performance tuning is needed.
2) The queries are well tuned (in general) but are system resource intensive, in which case the problem really becomes a workload management issue.
Of course that is only a guess. There are a host of possibilities, such as hardware problems, config issues, locking contention, unrealistic user expectations, etc.
I hope you find this helpful.
Thomas F. Stanek
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016