Archives of the TeradataForum
Message Posted: Thu, 25 May 2000 @ 17:30:47 GMT
The process seems efficient.
I am not sure what to compare it to, but it is much more efficient than the older process it replaced.
We have sort of a combined key strategy.
I personally prefer intelligent keys since you can derive meaning from the value.
I understand the concern with intelligent keys and the question of what to do when the value changes. In our case, when the value changes, the meaning changes too.
For example, we use policy number as the primary key on almost all our tables.
This is somewhat easier to track than assigned numeric values. We had a bad experience where a surrogate-key lookup table was scrambled and all the data attached to those keys became worthless.
Maintaining several lookup/reference tables was difficult. At least with the intelligent key, you do not have that dependency. The other benefit we realized was the way Teradata distributes the data: because related tables share the common key, rows for the same policy land on the same AMP, so joins on that key run locally without any row redistribution.
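As a minimal sketch of that distribution point (table and column names here are hypothetical, not our actual schema): when two tables declare the same primary index, Teradata hashes their rows to the same AMP, and a join on that column can be resolved AMP-locally.

```sql
-- Hypothetical tables; both use policy number as the primary index,
-- so Teradata hashes rows for the same policy to the same AMP.
CREATE TABLE policy
  ( policy_nbr  CHAR(10) NOT NULL
  , status_cd   CHAR(2)
  )
PRIMARY INDEX (policy_nbr);

CREATE TABLE policy_claim
  ( policy_nbr  CHAR(10) NOT NULL
  , claim_nbr   INTEGER  NOT NULL
  , claim_amt   DECIMAL(12,2)
  )
PRIMARY INDEX (policy_nbr);

-- Because both tables hash on policy_nbr, this join needs no
-- row redistribution; each AMP joins its own rows locally.
SELECT p.policy_nbr, c.claim_nbr, c.claim_amt
FROM   policy p
JOIN   policy_claim c
  ON   p.policy_nbr = c.policy_nbr;
```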
In some cases we store more rows than we have to, because some tables could be normalized below the policy level, but the largest table is only about 5 GB, so we aren't losing that much.
We do use a surrogate value for a qualifier in a compound key scenario. For example, policy number is not unique enough in some tables so we have what we termed a version number to denote different versions of the policy. The version number is just used to define when columns related to that policy have changed. For example, one row indicates the policyholder is married and another row indicates they are single. The first row might be version=1 and the second might be version=2.
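A sketch of that compound-key layout (again with hypothetical names): the version number simply distinguishes successive states of the same policy, so the pair (policy number, version number) is what is actually unique.

```sql
-- Hypothetical history table: policy_nbr alone is not unique here,
-- so (policy_nbr, version_nbr) forms the compound key.
CREATE TABLE policy_history
  ( policy_nbr   CHAR(10) NOT NULL
  , version_nbr  SMALLINT NOT NULL
  , marital_st   CHAR(1)
  )
UNIQUE PRIMARY INDEX (policy_nbr, version_nbr);

-- Two versions of the same policy: married, then single.
INSERT INTO policy_history VALUES ('P000012345', 1, 'M');
INSERT INTO policy_history VALUES ('P000012345', 2, 'S');
```

Keeping policy_nbr as the leading column of the index preserves the AMP-local distribution described above while the version qualifier supplies uniqueness.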
This version number is calculated using the method I described earlier. I would be very interested in hearing your (and anyone else's) thoughts and experiences with key choices.
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|