Archives of the TeradataForum
Message Posted: Fri, 10 Aug 2001 @ 11:48:02 GMT
All (or at least those interested),
I'm looking for a good algorithm to determine whether or not to dynamically create/drop NUSIs for columns in the fact tables of my star model.
I'm in a situation where we must use the same tables and views across multiple databases. In some databases, NUSIs on specific columns make sense; in other databases, NUSIs on those same columns make no sense.
Anyway, I've been playing with the data (empirically), and it seems that the record size, the number of records in the table, and (of course) the number of records returned by a specific selection determine whether the optimizer chooses to use a NUSI or perform a full table scan.
Prior to this experimentation, I had always been told two general rules of thumb for when a NUSI will more than likely be used:
1) Filtered selection represents <10% of the data
2) If (# of returned rows)/(# of datablocks) < 1, then NUSI used
Neither of these seems to be remotely accurate.
I'm finding (given my data) that a NUSI isn't being used unless the returned rows account for roughly 1-2% of the data; furthermore, I have examples of NUSIs being used when the ratio of (# of returned rows)/(# of datablocks) is as high as 15.
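For anyone who wants to try the same comparison against their own tables, here is a minimal sketch of the two rules of thumb above. All of the numbers (row size, block size, row counts) are illustrative assumptions you would replace with your own figures; this is back-of-envelope arithmetic, not a model of the Teradata optimizer's actual costing.

```python
# Sketch: evaluate the two NUSI rules of thumb for a hypothetical table.
# Block size and row counts below are assumptions for illustration only.

def estimate_datablocks(row_count, row_size_bytes, block_size_bytes=64 * 1024):
    """Rough data-block count, assuming rows pack densely into fixed blocks."""
    rows_per_block = max(1, block_size_bytes // row_size_bytes)
    return -(-row_count // rows_per_block)  # ceiling division

def nusi_rules(row_count, row_size_bytes, rows_returned, block_size_bytes=64 * 1024):
    blocks = estimate_datablocks(row_count, row_size_bytes, block_size_bytes)
    selectivity = rows_returned / row_count
    rows_per_datablock = rows_returned / blocks
    return {
        "selectivity": selectivity,
        "rule1_under_10pct": selectivity < 0.10,        # rule 1: <10% of the data
        "rows_per_datablock": rows_per_datablock,
        "rule2_under_1": rows_per_datablock < 1,        # rule 2: ratio < 1
    }

# Hypothetical example: 10M rows of 200 bytes each; the selection returns 150k rows.
result = nusi_rules(10_000_000, 200, 150_000)
```

With these made-up numbers the selection passes rule 1 (1.5% of the data) yet fails rule 2 (several returned rows per data block), which is consistent with the kind of disagreement between the two rules described above.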
Here are my questions:
1) Am I wasting my time trying to come up with an algorithm driven by record size, the number of records in the table, and the average number of records per distinct NUSI selection (assuming all tables share a common data block size)?
2) If not, am I leaving anything important out, such as a table distribution factor?
Any help would be greatly appreciated,
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|