Archives of the TeradataForum

Message Posted: Fri, 05 Feb 2010 @ 19:35:08 GMT


     


Subj:   Re: Tactical query elapsed time analysis
 
From:   Diehl, Robert

Howard,

Looks like you are heading in the right direction.

You want to monitor the execution times from both the database and the application side. There may be issues with the application causing the latency. For instance, I have seen cases where everything looks great on the database, but there were network and/or application issues; if we had monitored only at the database, we would not have found the problems. It is best if you can monitor the timings between certain points. Some of these might be:

1) Query time from the database perspective (see the DBQL sketch after this list)

2) Query time from the application perspective

3) Time to format data

4) Time to render data
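
For the database-side piece (point 1), here is a minimal sketch against the DBQL log view, assuming DBQL logging is enabled; the user name is made up:

    -- Database-side timing breakdown from DBQL (sketch).
    -- Assumes DBQL logging is on; 'TACT_USER' is a hypothetical account.
    SELECT UserName,
           StartTime,
           (FirstStepTime - StartTime) DAY(4) TO SECOND AS ParseAndQueueTime,
           (FirstRespTime - FirstStepTime) DAY(4) TO SECOND AS RunToFirstRowTime
    FROM   DBC.QryLog
    WHERE  UserName = 'TACT_USER'
    AND    StartTime >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
    ORDER  BY StartTime;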


Beware that logging all of this will cause some overhead, so you may not want detailed logging active at all times.

Some things to look out for that might be causing your increase in times, or that will help keep response times consistent:

1) Dictionary activity. Try to avoid lots of table drops/creates all at once. Also avoid lots of permissions changes (GRANTs/REVOKEs).

2) Make sure other workload is spread evenly across the system. Try to avoid spikes in activity. Monitor your AMP worker tasks. TASM throttles and workload limits can help out.

3) Tactical work should be in its own Resource Partition. This partition should be weighted at least 4 times higher than the next resource partition.

4) Make sure tactical work is first in the TASM evaluation order.

5) Use simple TASM classification rules, like username. Simple rules carry less overhead than more complex classification criteria. This matters more if you want sub-second response times.

6) Don't frequently activate (change) TASM rule sets, as this causes a momentary spike in tactical query response times. This is not the same as switching Periods within a rule set, which does not cause this spike.

7) Use single-AMP, single-step queries. It sounds like you are doing this. You get the single step with the access lock (sketch below).
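
For illustration, a minimal sketch of the pattern with made-up table and column names; the equality constraint on the primary index plus the row-level access lock is what lets the optimizer build a one-AMP, one-step plan:

    -- Hypothetical tactical lookup: 'bookings' hashed on booking_id.
    -- The access lock avoids waiting on write locks, and the PI
    -- equality keeps the retrieve on a single AMP in one step.
    LOCKING ROW FOR ACCESS
    SELECT booking_id, status_cd
    FROM   trav_db.bookings
    WHERE  booking_id = 12345;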

8) Keep queries cached. Use prepared statements to get caching in JDBC; this is like the USING modifier in BTEQ. Monitor this in DBQL.

Hints on how to keep requests cached:

Use as few sessions as possible, since the request cache is at the Parsing Engine (PE) level (the aim is to keep 'like' requests on as few PEs as possible).

A session pool at the mid-tier is recommended.

An alternative is to add a LAN host with only a few PEs. You must have enough PEs to handle the number of sessions (120 sessions per PE).

Look at the PMON user info 'Request Cache hits' versus 'REQUEST COUNT' to see how often you are cached (you can use DBQL now that the earlier logging problems have been corrected). For example:
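
Something like this sketch, assuming a non-blank CacheFlag in DBC.QryLog means the plan came from the request cache (double-check the flag values on your release):

    -- Request-cache hit rate per user from DBQL (sketch).
    -- Assumption: CacheFlag is blank when the plan was not found
    -- in the request cache; verify the flag values on your version.
    SELECT UserName,
           COUNT(*) AS RequestCount,
           SUM(CASE WHEN CacheFlag <> ' ' THEN 1 ELSE 0 END) AS CacheHits,
           100.0 * SUM(CASE WHEN CacheFlag <> ' ' THEN 1 ELSE 0 END)
                 / COUNT(*) AS HitPct
    FROM   DBC.QryLog
    WHERE  StartTime >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
    GROUP  BY UserName;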


9) Use macros as opposed to stored procedures or plain SQL (a sketch follows this list).

Caches faster

Avoids the syntax check except on the macro EXEC itself

Tuning can be transparent to the application
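
A minimal sketch, reusing the made-up bookings lookup from point 7. The EXEC text stays constant (good for the request cache), and the body can be retuned without touching the application:

    -- Hypothetical tactical macro wrapping the single-AMP lookup.
    CREATE MACRO trav_db.get_booking (p_booking_id INTEGER) AS (
        LOCKING ROW FOR ACCESS
        SELECT booking_id, status_cd
        FROM   trav_db.bookings
        WHERE  booking_id = :p_booking_id;
    );

    -- Application call: only the parameter varies.
    EXEC trav_db.get_booking (12345);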


10) Avoid use of complex CASE statements and other functions; they cause quite a bit of overhead in sub-second queries. We have seen very erratic response times when using a CASE statement in a single-AMP, single-step query (sketch below).
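
To illustrate (made-up names again), the decode work can come out of the tactical query and move to the application:

    -- Pattern we saw cause erratic sub-second response:
    LOCKING ROW FOR ACCESS
    SELECT CASE status_cd
               WHEN 'B' THEN 'Booked'
               WHEN 'C' THEN 'Cancelled'
               ELSE 'Unknown'
           END
    FROM   trav_db.bookings
    WHERE  booking_id = 12345;

    -- Preferred: return the raw code; decode in the application.
    LOCKING ROW FOR ACCESS
    SELECT status_cd
    FROM   trav_db.bookings
    WHERE  booking_id = 12345;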

11) Too many join indexes or nested views make the plan take longer to resolve. This is not a problem once the plan is in the request cache. Use tables rather than views.

12) I found that roles versus explicit rights does not make a noticeable difference. I have not revisited this since V2R5.

13) Default database versus explicit database qualification does make a difference. Qualify with an explicit database (sketch below).
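
A quick illustration with the same made-up objects. Qualified names do not depend on the session's default database; my understanding is that the request cache matches on SQL text plus session context, so qualified statements also cache more predictably across sessions (verify on your release):

    -- Relies on the session default database (avoid):
    SELECT status_cd FROM bookings WHERE booking_id = 12345;

    -- Explicitly qualified (preferred):
    SELECT status_cd FROM trav_db.bookings WHERE booking_id = 12345;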


Note: Some of this may not matter as much now, as newer Teradata versions might have different behaviors and I don't always reevaluate what is working.

Read Carrie Ballinger's Orange Book on this topic.

I hope this helps.


Thanks,

Bob Diehl
Travelocity



     