Archives of the TeradataForum
Message Posted: Fri, 26 Apr 2002 @ 12:32:31 GMT
I hear talk of Teradata implementing the concept of 'soft' referential integrity with the release of V2R5. This would let us define RI between objects without suffering the overhead currently associated with RI (which can be very significant). This will be a huge 'win' for some of us, particularly those of us who maintain RI via application logic but cannot afford the overhead cost.
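For what it's worth, the syntax being discussed for declaring such an unenforced constraint is a REFERENCES WITH NO CHECK OPTION clause. A sketch, using hypothetical table and column names:

```sql
-- Hypothetical dimension table
CREATE TABLE store_dim
( store_id  INTEGER NOT NULL
, region    VARCHAR(30)
)
UNIQUE PRIMARY INDEX (store_id);

-- Hypothetical fact table declaring 'soft' RI: the optimizer
-- may assume the relationship holds, but the DBMS does not
-- enforce it at DML time -- that burden stays with the ETL/app.
CREATE TABLE sales_fact
( sale_id   INTEGER NOT NULL
, store_id  INTEGER NOT NULL
, sale_amt  DECIMAL(18,2)
, FOREIGN KEY (store_id)
    REFERENCES WITH NO CHECK OPTION store_dim (store_id)
)
PRIMARY INDEX (sale_id);
```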
With this new concept of 'soft' RI, the optimizer will be able to generate smarter execution strategies for those cases where RI exists. For example, I could define a much more flexible join index via left outer joins, which could be built using the same plan as a join index built with inner joins, since the optimizer would know that RI holds between the associated tables.
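To illustrate the kind of join index I have in mind (again with hypothetical names, continuing the sales example): with soft RI declared on store_id, every fact row is guaranteed a dimension match, so the outer join is logically equivalent to an inner join and the optimizer can plan it as such.

```sql
-- Sketch: an aggregate join index defined over a left outer
-- join. Declared RI on store_id means no fact rows are lost,
-- so the optimizer may treat this as an inner join.
CREATE JOIN INDEX sales_by_store AS
SELECT s.store_id, d.region, SUM(s.sale_amt) AS tot_amt
FROM   sales_fact s
LEFT OUTER JOIN store_dim d
       ON s.store_id = d.store_id
GROUP BY s.store_id, d.region;
```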
Taking this concept of shifting responsibility for data integrity from the RDBMS to the user one step further:
Currently, the optimizer doesn't look at the actual data when estimating the costs associated with various steps. Instead, it looks at statistics (metadata about the data) when building execution plans. I'll agree that in most cases this is a far more efficient means of deriving an optimized execution plan; however, in some cases it is not.
For instance, consider a star schema. From my point of view, dimension tables are nothing but metadata about the leaf-level data found within a fact table. Assuming our star schema is rebuilt only once a week, and that we collect a statistic which tells us exactly the number of occurrences of each dimensional component within the fact table (something which only needs to be built once a week), the optimizer could use this statistic in combination with the data within the dimension tables to derive the truly optimal plan (not just an estimated optimal plan).
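Collecting that per-dimension-key statistic is nothing exotic; it would just be part of the weekly load job (column and table names hypothetical, as above):

```sql
-- Sketch: refresh demographics on the fact table's dimension
-- key once per week, after the weekly load completes. This is
-- the statistic the optimizer could combine with the dimension
-- table's actual rows.
COLLECT STATISTICS ON sales_fact COLUMN (store_id);
```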
Any thoughts...(please, no religious remarks about my example of a star schema)