Archives of the TeradataForum
Message Posted: Fri, 05 Feb 2010 @ 11:15:18 GMT
I'm trying to measure the execution time of a test query which has been executing once every second for the last few days.
The query is executed via a parameterised macro that performs a single-AMP retrieve, accessing the table through a row-level-locking view. TASM/TDWM changes have been made to optimise CPU and AMP Worker Task (AWT) availability.
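For context, the setup described above might look something like the sketch below. All object and column names here are hypothetical stand-ins, not the actual objects in question; the pattern is a view defined with a row-level ACCESS lock and a macro that retrieves by primary index.

```sql
-- Hypothetical objects for illustration only.
-- The LOCKING ROW FOR ACCESS modifier gives "dirty read" row-level access,
-- so the tactical query is not blocked by concurrent writes.
REPLACE VIEW TactDB.Customer_V AS
LOCKING ROW FOR ACCESS
SELECT cust_id, cust_name, balance
FROM TactDB.Customer;

-- Parameterised macro: the WHERE clause on the primary index column
-- yields a single-AMP retrieve in the EXPLAIN plan.
REPLACE MACRO TactDB.GetCustomer (p_cust_id INTEGER) AS (
  SELECT cust_id, cust_name, balance
  FROM TactDB.Customer_V
  WHERE cust_id = :p_cust_id;
);

-- Executed once per second by the test driver:
EXEC TactDB.GetCustomer (12345);
```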
We have detail-level query logging enabled for the logon, and I have been using the ElapsedTime field from DBC.QryLog to graph the query execution time. The overall performance of the system has been phenomenal (we've achieved what we set out to do in terms of demonstrating ADW capabilities to our IT colleagues).
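A minimal sketch of the kind of DBQL query used for this graphing might look as follows. The user name is a hypothetical placeholder; the columns shown (StartTime, ElapsedTime, AMPCPUTime, TotalIOCount) are standard DBQL detail fields exposed through DBC.QryLogV, though exact availability varies by release.

```sql
-- Pull per-execution timings for the test logon (name is illustrative).
SELECT StartTime,
       ElapsedTime,       -- start of parsing to end of query
       AMPCPUTime,        -- CPU consumed on the AMPs
       TotalIOCount       -- logical I/Os
FROM DBC.QryLogV
WHERE UserName = 'TACT_USER'
  AND StartTime >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
ORDER BY StartTime;
```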
What I have noticed from the DBQL query results is an increase in ElapsedTime at times when the Teradata system is busy with other workload.
So, my questions are:
1. Is using ElapsedTime from DBC.QryLog the best and most accurate way of measuring the actual execution time of the query, or should I be using something else? (I would like to measure this from Teradata if possible, rather than from the query-submitting application.)
2. Is the (small but consistent) increase in the ElapsedTime values during busy periods a true reflection of the macro execution time, or is it something else (such as the time required to generate the DBQL data itself, for instance)?
3. If the macro execution time is genuinely affected at these times, have we missed any Active Data Warehousing "tricks" or techniques we can employ to maximise the response time of the query at all times, irrespective of other factors such as system workload?
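One way to narrow down where the extra elapsed time goes is to break it into phases using the DBQL timestamp fields. The sketch below assumes the standard DBC.QryLogV columns (StartTime, FirstStepTime, FirstRespTime, DelayTime); DelayTime, where available, captures time the request spent queued by TASM/TDWM, and may be NULL when no delay occurred. The user name is again a hypothetical placeholder.

```sql
-- Decompose elapsed time: TASM delay vs. parse/dispatch vs. response.
-- Column availability varies by Teradata release.
SELECT StartTime,
       DelayTime,                                    -- workload-management queue time
       (FirstStepTime - StartTime) HOUR(2) TO SECOND(6)
         AS ParseDispatchTime,                       -- parsing + step dispatch
       (FirstRespTime - StartTime) HOUR(2) TO SECOND(6)
         AS TimeToFirstResponse,
       ElapsedTime
FROM DBC.QryLogV
WHERE UserName = 'TACT_USER'
ORDER BY StartTime;
```

If ParseDispatchTime grows during busy periods while the execution portion stays flat, the contention is in parsing or AWT availability rather than in the single-AMP retrieve itself.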
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016