Archives of the TeradataForum
Message Posted: Wed, 25 Jan 2012 @ 09:50:47 GMT
<-- Anonymously Posted: Tuesday, January 24, 2012 18:57 -->
I observed the CPU (C), Impact CPU (I), spool (S), and I/O (IO) consumed for a bunch of queries I tuned. Across the tuned queries I noticed something like this, relative to the original query:
             I               S               Time Taken
Tuned Q1 :   30% reduction   50% reduction   3 mins 45 secs (from 15 mins for the original query)
Tuned Q2 :   18% reduction   50% reduction   49 secs
Q1 and Q2 were run back to back, when there was hardly any workload on the dev box. If I profile the overall system availability and consumption, it remained almost the same. No jobs kicked off in the interim that would meaningfully change the environment (there was no one else on the system apart from idling sessions).
Now, if that 3 mins were, say, 1-2 mins, I could accept that it's just coincidental, but this is such a disparate difference (49 secs vs. 3+ mins), in spite of Tuned Q1 showing the lower 'I' consumption. This happened on repeated testing. It makes me think that elapsed time is what matters.
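To make the puzzle concrete, here is a toy model of how this can happen in a shared-nothing parallel system (the AMP counts and per-AMP CPU seconds below are made up purely for illustration, not taken from my actual queries): elapsed time tracks the busiest AMP, while total CPU consumed is the sum across all AMPs, so a query with lower total CPU but more skew can still run longer.

```python
# Toy model: in a shared-nothing parallel system, elapsed time is bounded
# below by the busiest AMP, while total CPU consumption is the sum across
# all AMPs. The per-AMP CPU-second figures here are invented for illustration.

def totals(per_amp_cpu):
    """Return (total CPU consumed, elapsed-time lower bound = busiest AMP)."""
    return sum(per_amp_cpu), max(per_amp_cpu)

# Query A: lower total CPU, but badly skewed (one hot AMP does most of the work).
query_a = [200, 5, 5, 5]     # CPU seconds on each of 4 AMPs

# Query B: higher total CPU, but evenly spread across the AMPs.
query_b = [60, 60, 60, 60]

a_total, a_elapsed = totals(query_a)
b_total, b_elapsed = totals(query_b)

print(a_total, a_elapsed)    # 215 CPU seconds consumed, ~200 s elapsed
print(b_total, b_elapsed)    # 240 CPU seconds consumed, ~60 s elapsed
```

So the "cheaper" query A consumes less CPU overall yet takes over three times as long to finish, which is exactly the shape of the disparity I am seeing.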
OK, my questions are:
- What would account for this disparity in time taken? Could someone explain, at a micro level, what must be happening to cause the lower-consuming query to take more time? Note that the spool consumed is almost the same.
- When we say a query has been made efficient, the business wants a 'faster' query, whereas from an appliance point of view Q1 is the more efficient one because it consumes fewer resources. So which would you call the more 'efficient' query: the one that is fast, or the one that is less of a resource hog? Technically speaking, I know it's the latter, but looking at the broad criteria, where would you give more weight? Consider that the query consuming less time also frees up the system sooner; for example, where there is a delay queue, that queue gets shorter.
Thank you in advance for your valuable input.
|Copyright 2016 - All Rights Reserved|
|Last Modified: 28 Jun 2020|