Archives of the TeradataForum

Message Posted: Sun, 24 Mar 2013 @ 20:37:20 GMT
<-- Anonymously Posted: Sunday, March 24, 2013 06:54 -->

Hello All,

We are running a two-node 2580 appliance on Linux, used for DEV, SIT and UAT. We are seeing 100% CPU utilization (per our standard query on dbc.ResUsageSpma) during most hours of the day. Users are complaining of poor response times, and the efficiency of our reporting queries is being called into question.

On checking DBQLogTbl, we fixed a couple of runaway queries that were consuming high CPU time, but by and large the queries seem OK. So we now have to establish that the hardware is insufficient for the kind of load it is being subjected to.

My plan is to count the reporting queries that started and completed within a certain interval, and to compare the CPU they consumed against the CPU time that was available.

So my question is: is there a way to find the number of CPU seconds available for user queries in any of the ResUsage objects? If not, can I use something like the following? The 2580 has dual quad-core CPUs per node, so the CPU seconds available in a 10-minute interval would be:

    CPU per 10-min interval = 2 (nodes) * 2 (sockets) * 4 (cores) * 600 (sec) = 9,600 CPU seconds

Of course, we would have to discount some percentage for the CPU required by system processes.

As a dev team we don't have access to any admin tools, no Viewpoint or Teradata Administrator.

Requesting your kind guidance here.
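For illustration, here is a rough sketch of the interval comparison described above, assuming the standard DBQLogTbl columns StartTime, AMPCPUTime and ParserCPUTime (reported in seconds) and the 9,600-second capacity figure from the post; the columns actually populated depend on your DBQL logging options and Teradata release, so treat this as a starting point rather than a validated report:

    SELECT
        CAST(StartTime AS DATE)                          AS LogDate,
        EXTRACT(HOUR FROM StartTime)                     AS LogHour,
        EXTRACT(MINUTE FROM StartTime) / 10              AS TenMinBucket,   -- integer division: 0..5
        COUNT(*)                                         AS QueryCnt,
        SUM(AMPCPUTime + ParserCPUTime)                  AS CPUSecsUsed,
        9600                                             AS CPUSecsAvail,   -- 16 cores * 600 sec, from the estimate above
        100 * SUM(AMPCPUTime + ParserCPUTime) / 9600.0   AS PctOfCapacity
    FROM DBC.DBQLogTbl
    WHERE CAST(StartTime AS DATE) = CURRENT_DATE - 1
    GROUP BY 1, 2, 3
    ORDER BY 1, 2, 3;

Note that this attributes a query's entire CPU time to the 10-minute bucket in which it started, so long-running queries will skew individual buckets. For the "available" side, dbc.ResUsageSpma also carries NCPUs and Secs alongside the CPUIdle, CPUIoWait, CPUUServ and CPUUExec columns, which can be used to derive node-level busy versus idle CPU directly; verify the units (some releases report centiseconds) against the ResUsage documentation for your release.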