Archives of the TeradataForum
Message Posted: Wed, 09 Jan 2008 @ 13:09:44 GMT
Subj: Re: Impact of a Heartbeat Query
From: Ferry, Craig
We have a single heartbeat query set up on our database. IMHO, the heartbeat query should be just a quick query that verifies your database
is up and running. Its response time should be very quick, it should not impact the performance of the system, and it should run often enough
for your comfort. I run mine every 5 minutes. It can be as simple as select * from dbc.dbcinfo.
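A check along those lines can be sketched as below. This is illustrative only, not code from the post: run_query stands in for whatever client call executes SQL against the database, and the 30-second threshold is an assumed comfort limit.

```python
import time

# Heartbeat SQL from the post; everything else here is a hypothetical sketch.
HEARTBEAT_SQL = "select * from dbc.dbcinfo"
TIMEOUT_SECONDS = 30  # assumed threshold; pick whatever "quick" means for you


def check_heartbeat(run_query, timeout=TIMEOUT_SECONDS):
    """Run the heartbeat SQL and return (ok, elapsed_seconds).

    ok is False when the query raises an error or exceeds the timeout,
    which is when you would raise an alert.
    """
    start = time.monotonic()
    try:
        run_query(HEARTBEAT_SQL)
    except Exception:
        # Database unreachable or query failed: definitely alert.
        return False, time.monotonic() - start
    elapsed = time.monotonic() - start
    return elapsed <= timeout, elapsed


# Example with a stub that simulates a healthy, fast database.
ok, elapsed = check_heartbeat(lambda sql: [("Teradata", "Release info")])
```

Scheduling this every 5 minutes is then a matter of cron or whatever job scheduler you already use.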
In addition to what I consider our heartbeat query, I have several monitoring queries that run. These have been set up to mimic
critical SQLs that business users run routinely on our system (one being an SQL that the CEO runs, since I don't want a call from him). I
have benchmarked these SQLs when the system is idle to find a best-case response time. I then use the 'average' response time
collected over prior weeks to send out an alert if a query runs longer than what is considered its normal running time. I also believe that
these types of SQLs should not have an adverse impact on your system; like your production SQLs, they should be tuned to run efficiently. They
should be used more for spotting bottlenecks or locking issues on your system that are impacting day-to-day business.
They are also run frequently enough that we can be researching a suspected problem before it becomes a bigger issue for our end users.
Some run at 15-minute intervals, and one of the larger ones runs at 30-minute intervals.
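The alerting rule described above, comparing the latest runtime against the average collected over prior weeks, can be sketched like this. It is a minimal illustration, not the author's actual code; the function name and the 1.5x margin are assumptions.

```python
from statistics import mean


def should_alert(prior_runtimes, latest_runtime, factor=1.5):
    """Decide whether a monitoring query ran abnormally long.

    prior_runtimes: runtimes in seconds collected over prior weeks.
    factor: assumed margin over the historical average before alerting.
    """
    if not prior_runtimes:
        return False  # no baseline yet, nothing to compare against
    baseline = mean(prior_runtimes)
    return latest_runtime > baseline * factor


# Example: baseline average is 11s, so 30s trips the alert and 12s does not.
print(should_alert([10.0, 12.0, 11.0], 30.0))  # True
print(should_alert([10.0, 12.0, 11.0], 12.0))  # False
```

Using a multiplicative margin rather than the raw average avoids alert noise from normal run-to-run variation; the right factor depends on how stable your workload is.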