Archives of the TeradataForum
Message Posted: Fri, 25 Jun 2004 @ 09:15:05 GMT
A couple of comments, which might help.
We use this approach a bit. We name the small table "Lt_Dimension_FakeStats" so that the faked statistics are obvious whenever the table is used, and so that nobody collects stats on it by accident...
In reality, duplicating the small table (followed by a product join) may not always be the best plan - consider the following example:
- 20-AMP system
- Small table having 200 rows (10 per AMP, assuming perfect distribution).
- Large table having 20 million rows (1 million per AMP, assuming perfect distribution).
- If the small table is duplicated, then on each AMP we have a product join of 1 million rows versus 200 rows.
- If the large table is redistributed (assuming perfect redistribution), then on each AMP we have a product join of 1 million rows versus only 10 rows.
- Furthermore, in any redistribution some rows will ("accidentally") already be in place - so the redistribution may be less costly than one would think at first glance.
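The per-AMP arithmetic above can be sketched as follows. This is only an illustration of the row-comparison counts in the example, not Teradata's actual cost model - it ignores the network cost of duplication/redistribution, I/O, and spool:

```python
# Compare per-AMP product-join work for the two plans in the example.
# "Cost" here is simply row-comparisons per AMP (a deliberate
# simplification of what the optimizer actually costs).

AMPS = 20
SMALL_ROWS = 200           # small (dimension) table
LARGE_ROWS = 20_000_000    # large table

large_per_amp = LARGE_ROWS // AMPS             # 1,000,000 rows per AMP

# Plan 1: duplicate the small table to every AMP;
# each AMP joins its large rows against the full small table.
dup_comparisons = large_per_amp * SMALL_ROWS

# Plan 2: redistribute the large table so it lines up with the small
# table's distribution; each AMP joins its large rows against only
# its local 10 small rows (assuming perfect distribution).
small_per_amp = SMALL_ROWS // AMPS             # 10 rows per AMP
redist_comparisons = large_per_amp * small_per_amp

print(dup_comparisons)      # comparisons per AMP with duplication
print(redist_comparisons)   # comparisons per AMP with redistribution
```

On these assumed numbers, duplication does 200,000,000 comparisons per AMP versus 10,000,000 for redistribution - a 20x difference, which is exactly the AMP count. Whether that outweighs the cost of moving 1 million rows per AMP across the BYNET is, of course, the real question.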
Nevertheless, I often think the optimizer costs redistributions very low - too low. Does anyone agree?
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|