Archives of the TeradataForum
Message Posted: Tue, 12 Sep 2000 @ 22:35:14 GMT
Also, it depends on the non-uniqueness of the secondary indexes. If you have only a couple of hundred distinct values across a couple of hundred million rows, for example, you could have a million rows per value, and retrieving those can take time. And if a single value accounts for most of the occurrences, you get hot AMPs and a loss of parallelism.
Dropping and recreating the index can help, but first collect statistics and look at the number of distinct values, then run some queries to determine row counts by value.
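As a minimal sketch of the kind of check described above (the table name MyTable and column idx_col are hypothetical, standing in for your table and its secondary-index column):

```sql
-- Collect statistics on the candidate NUSI column (hypothetical names).
COLLECT STATISTICS ON MyTable COLUMN idx_col;

-- Row counts per value, most skewed values first.
SELECT idx_col,
       COUNT(*) AS row_cnt
FROM   MyTable
GROUP  BY idx_col
ORDER  BY row_cnt DESC;
```

If the top few values dominate the counts, the index is badly skewed and the hot-AMP concern above applies.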
Hope this gives you some ideas.
Admin Comment: When last tried, the URL in this Post no longer functioned.
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|