Archives of the TeradataForum
Message Posted: Wed, 12 Oct 2005 @ 15:10:19 GMT
Subj: Re: Redistributing large tables instead of replicating a small table

From: Dieter Noeth
Victor Sokovin wrote:
| Just a related question: a similar feature (or the underlying algorithm) seems to be essential for such things as ROLLUP or CUBE.
I don't think it's similar, because the algorithm for ROLLUP is simpler: find the finest granularity of aggregation, calculate it and then
repeatedly aggregate the data:
rollup(a,b,c)
-> group by (a,b,c) into spool 1
-> group spool 1 by (a,b) into spool 2
-> group spool 2 by (a) into spool 3
-> group spool 3 into spool 4
The result is simply a union of those spools.
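The repeated-aggregation idea above can be sketched in a few lines of Python. This is only an illustration of the algorithm as described, not Teradata's actual implementation; the row representation and function names are my own:

```python
# Illustrative sketch (assumption: not Teradata internals) of the ROLLUP
# strategy: aggregate once at the finest granularity, then repeatedly
# re-aggregate the shrinking intermediate "spools".

def aggregate(rows, prefix_len):
    """Group (key_tuple, value) rows by the first prefix_len key columns,
    summing the values."""
    out = {}
    for key, value in rows:
        k = key[:prefix_len]
        out[k] = out.get(k, 0) + value
    return sorted(out.items())

def rollup(rows, num_keys):
    """rows: list of (key_tuple, value); num_keys: grouping columns."""
    # Spool 1: finest granularity -- the only pass over the base data.
    spool = aggregate(rows, num_keys)
    results = list(spool)
    # Each further level re-aggregates the previous spool, not the base table.
    for level in range(num_keys - 1, -1, -1):
        spool = aggregate(spool, level)
        results.extend(spool)
    # The final result is simply the union of all spools.
    return results
```

For three grouping columns this yields the four levels (a,b,c), (a,b), (a), and the grand total, exactly the spool chain sketched above.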
It's a bit more complex for GROUPING SETS, but I think even I could program that algorithm ;-)
| I must admit I don't see them used on large tables as yet. Could anybody share their experience with these relatively new features?
The performance for the lowest level is similar to a GROUP BY on those columns. This is the main overhead; each following aggregation works on a shrinking set.
At least it's faster than the traditional query: a UNION ALL of different aggregations...
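For comparison, the "traditional" query mentioned here might look like the following sketch (table and column names are hypothetical):

```
-- ROLLUP in one statement, on a hypothetical table sales(a, b, c, amt):
SELECT a, b, c, SUM(amt) FROM sales GROUP BY ROLLUP (a, b, c);

-- The traditional equivalent: one aggregation per level, glued together
-- with UNION ALL -- each branch scans the full base table again.
SELECT a, b, c, SUM(amt) FROM sales GROUP BY a, b, c
UNION ALL
SELECT a, b, NULL, SUM(amt) FROM sales GROUP BY a, b
UNION ALL
SELECT a, NULL, NULL, SUM(amt) FROM sales GROUP BY a
UNION ALL
SELECT NULL, NULL, NULL, SUM(amt) FROM sales;
```

The ROLLUP form reads the base table once and builds the coarser levels from the shrinking intermediate spools, which is where the speedup comes from.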
| In theory, they could have been programmed better than the "traditional" features, which have gone through a long series of fixes and upgrades (and a few generations of developers).
Hopefully...
In theory, there is no difference between theory and practice. But, in practice, there is ;-)
Dieter