Archives of the TeradataForum
Message Posted: Wed, 21 Jul 2004 @ 17:46:35 GMT
We've been taking a much closer look at resource utilization lately, and one question that has come up is the impact of a frequently repeated, very simple query: a SELECT * FROM a table of about a hundred rows, each roughly 1K in length.
How do we measure this?
One proposed metric suggests that such a query, run four hundred times a day, puts its user into our Top Ten list for CPU utilization.
Now, I don't doubt there's an impact, but that seems a little excessive. Having been burned before by clumsy benchmarking (my own), I'm curious how to measure this sort of query; it strikes me as the kind of measurement that's prone to lots of rounding error unless done carefully.
We're on V2R4.1.3.
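One way to get at per-user CPU without per-query rounding noise is to read accumulated totals from the DBC.AMPUsage view, which is available on V2R4 and records CPU seconds and disk I/Os per user and account string across all AMPs. A minimal sketch (the user name 'SUSPECT_USER' is a placeholder, and exact column behavior can vary by release):

```
SELECT  UserName,
        AccountName,
        SUM(CpuTime) AS TotalCpuSeconds,
        SUM(DiskIO)  AS TotalDiskIOs
FROM    DBC.AMPUsage
WHERE   UserName = 'SUSPECT_USER'
GROUP BY 1, 2
ORDER BY TotalCpuSeconds DESC;
```

To isolate the cost of one small query, a common approach is to run it a few hundred times under a dedicated user or account string and compare the AMPUsage sums before and after; dividing the delta by the run count averages out exactly the rounding error the single-query measurements suffer from.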
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016