Archives of the TeradataForum
Message Posted: Wed, 06 Jun 2001 @ 08:00:14 GMT
I set up something similar to this in UNIX. I created a table of about 1 million randomly generated rows, hoping they would be distributed evenly across the AMPs (luckily, they were). I then set up a job that runs roughly every 10 minutes, performing a full-table scan and aggregating the results.
This table is not used by any other process or user - thus no contention.
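A minimal sketch of that kind of canned table and scan query, in Teradata SQL (all object names here are hypothetical; the post does not show the actual DDL):

```sql
-- Hypothetical heartbeat table: ~1 million random rows.
-- A unique, random PI value gives the even AMP distribution described above.
CREATE TABLE sandbox.heartbeat
( row_id  INTEGER NOT NULL
, filler  CHAR(100)
) PRIMARY INDEX (row_id);

-- The timed query: a full-table scan with an aggregate,
-- so every AMP must read all of its rows.
SELECT COUNT(*), MIN(row_id), MAX(row_id)
FROM sandbox.heartbeat;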
The query is run, and its start and stop times are logged, along with the number of sessions on the system and the spool currently in use. This information can then be combined with data from Teradata Manager to give a fuller picture of any problems, or of the current state of play.
At the end of each sample the data is logged to a file, and at the end of each day the file is loaded into a table, allowing whatever analysis is needed.
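Since the job ran under UNIX, the per-sample step might look something like this in outline (a sketch only; the function and file names are assumptions, and the stubbed parts would really come from bteq and the DBC views):

```shell
#!/bin/sh
# Sketch of one monitoring sample: time the canned query, then log the result.

# In the real job this would run the full-table-scan aggregate via bteq;
# stubbed here so the sketch is self-contained.
run_canned_query() {
    sleep 2    # stands in for the ~2-second scan on a quiet machine
}

LOG=/tmp/heartbeat.log
: > "$LOG"    # the real job appends all day; start fresh here for clarity

START=$(date +%s)
run_canned_query
STOP=$(date +%s)

# Session count and spool usage would be queried from the system
# (e.g. DBC views) in the real job; stubbed here.
SESSIONS=0
SPOOL=0

# One CSV line per sample: start, stop, elapsed, sessions, spool.
echo "$START,$STOP,$((STOP - START)),$SESSIONS,$SPOOL" >> "$LOG"
```

The daily file of these lines is then what gets loaded into the analysis table.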
We found that on a quiet machine the query takes about 2 seconds.
Hope this helps....
PS - I think you mean canned query!
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|