|
Archives of the TeradataForum
Message Posted: Fri, 04 Nov 2005 @ 10:28:42 GMT
The replies from Bob Diehl and Eric contain some good information, so I won't repeat any of that except to underscore Priority Scheduler. Priority Scheduler can be crucial to getting decent response times, particularly in a mixed workload environment (your "tactical" queries, decision support queries and data loads). You might also want to look at TDWM (an older version was known as DBQM). TDWM can be used to regulate the incoming workload (e.g. decision support queries) and optionally defer or reject them.

You mentioned that the Teradata team does not use any sort of connection pooling? Is that because you don't want to, or because you don't believe it is supported? Connection pooling certainly is supported, and if you can multi-thread your application, you can submit the queries in parallel. I have done this with Java applications with substantial performance improvements. (A sketch of this approach appears at the end of this message.)

On another front, you might be able to "play" with, or "cheat" on, the response times. I once needed to build an application (in the days before Priority Scheduler et al.) whose first screen had to run 10 or so queries to get the information it needed. When the Teradata system was loaded with work (i.e. most of the business day), the response time for the queries would be around 5-10 minutes. However, some of the queries would respond in just a few seconds. So what I did was this: as each query completed, the associated part of the application would fill in with the results of that query. The user would start with a screen containing field labels but no data, and as each query completed, information would appear on the screen. By doing this, the screen would still take 5-10 minutes to paint fully, but the user started getting data in about 30 seconds. The users' perception was that the response time was about a minute or two.

Version 1 of the application was not responsive to user requests (i.e. it was locked until the screen fully displayed). Version 2 used a threaded model to obtain the data, so the user could navigate elsewhere before the screen fully painted. Back then I couldn't use multiple sessions; these days I could use multiple sessions courtesy of a connection pool, which could result in even more performance gains. My experience is that the elapsed time of any one query is not significantly increased if you run several in parallel. Obviously there are caveats (e.g. the queries are simple, they don't get blocked by write or exclusive locks, etc.).

Also, if there is any opportunity to cache data in the application, you could save some trips to the database. Typically, a round trip to the database in a decent environment for a small result set retrieved via an index (examine your explains) can be measured in the low tens of milliseconds, often less. But a lookup from a HashMap (Java) or similar can be measured in microseconds (i.e. thousands of times faster). (A small caching sketch also follows below.)

I hope some of these ideas help,

Glenn Mc
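For anyone wanting to try the parallel-query idea above, here is a minimal Java sketch, assuming a JDBC DataSource backed by a connection pool (HikariCP, DBCP, an app-server pool, or similar). The class name, thread count, SQL handling and the updateScreenSection method are all illustrative, not from the original application; the point is simply that each query borrows its own pooled connection and its section of the screen is populated as soon as that query finishes.

// Sketch: run several independent queries in parallel and consume each
// result as soon as it completes, rather than waiting for all of them.
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.sql.DataSource;

public class ParallelScreenLoader {

    private final DataSource pool;          // connection pool supplied by the application
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public ParallelScreenLoader(DataSource pool) {
        this.pool = pool;
    }

    /** Submit every query at once; update the screen as each one finishes. */
    public void loadScreen(List<String> queries) {
        for (String sql : queries) {
            CompletableFuture
                .supplyAsync(() -> runQuery(sql), workers)
                .thenAccept(this::updateScreenSection);   // fires when *this* query is done
        }
    }

    private String runQuery(String sql) {
        // Each task borrows its own connection (and therefore its own session) from the pool.
        try (Connection con = pool.getConnection();
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next() ? rs.getString(1) : "(no data)";
        } catch (Exception e) {
            return "error: " + e.getMessage();
        }
    }

    private void updateScreenSection(String value) {
        // Placeholder for the real UI update (normally marshalled onto the UI thread).
        System.out.println("Section populated with: " + value);
    }
}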
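And a small sketch of the application-side caching idea, again with illustrative names: a map hit costs microseconds, so the indexed database trip only happens on a miss. A ConcurrentHashMap is used here rather than a plain HashMap simply so the cache is safe to share between the worker threads above.

// Sketch: look up a small, stable reference value in memory first and only
// fall back to the database on a cache miss.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReferenceCache {

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    /** Microsecond-scale map hit; tens-of-milliseconds database trip only on a miss. */
    public String lookup(String key) {
        return cache.computeIfAbsent(key, this::fetchFromDatabase);
    }

    private String fetchFromDatabase(String key) {
        // Placeholder for the indexed single-row retrieval described above.
        return "value-for-" + key;
    }
}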