Archives of the TeradataForum
Message Posted: Mon, 08 Jan 2001 @ 23:29:23 GMT
This thread grew out of some of the discussions I have had with the Bank over the last few months. The main point here is not the implementation of a denormalised model as a whole, but the best way to implement the largest data area within the bank, which is the transactions.
This is not really like CDRs (where nearly all transactions carry the same data): banking events are more complex, and the differing transaction types have quite different data items. That causes problems and complexity, because there is often little data in common between the different types - debits, credits, credit card, mortgage, etc.
What Michael is trying to work out is how much of the data to put into the main transaction event table versus what to put into sub-type tables. Data that will be of use to the end user most of the time could be loaded into the prime transaction table (stored using COMPRESS when absent, of course) to make access easier and to avoid having to load rows into more than one transaction table. Since the transaction table is the largest of the stored data areas in a financial institution, this is the most critical decision (it can vary from 600 million to 3 billion rows depending on the customer base and the historical depth).
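To make the "fat table plus COMPRESS" idea concrete, here is a minimal sketch in Teradata DDL. The table and column names are invented for illustration; the point is that the type-specific columns are nullable and declared with COMPRESS, so rows for transaction types that do not use them carry very little storage overhead:

   CREATE TABLE txn_event
     ( txn_nbr        DECIMAL(18,0) NOT NULL
     , account_nbr    DECIMAL(18,0) NOT NULL
     , event_type     CHAR(2)       NOT NULL
     , event_date     DATE          NOT NULL
     , event_time     INTEGER       NOT NULL
     , txn_amount     DECIMAL(18,2) NOT NULL
     -- type-specific items kept in the main table; COMPRESS means a NULL
     -- ("absent for this transaction type") costs next to nothing to store
     , cheque_nbr     INTEGER       COMPRESS
     , merchant_code  CHAR(4)       COMPRESS
     , card_nbr       CHAR(16)      COMPRESS
     )
   PRIMARY INDEX ( account_nbr );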
We have seen customers implement the transaction area in different ways, as Mick suggests -
* very thin event table with just a transaction number, account number, event type, event date and event time, with all other transaction details in sub-type tables (see the sketch after this list).
* reasonably fat transaction table with the most used and most often populated data items (around 300 bytes across 50-80 columns), then sub-types holding the less important/less accessed data.
* And anything in between
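For comparison, here is the same event table in its thin form, plus one sub-type table per transaction family (again, the names are invented for illustration). Giving both tables the same primary index on the transaction number keeps the join back to the detail rows AMP-local:

   -- thin event table: one row per transaction, common columns only
   CREATE TABLE txn_event
     ( txn_nbr     DECIMAL(18,0) NOT NULL
     , account_nbr DECIMAL(18,0) NOT NULL
     , event_type  CHAR(2)       NOT NULL
     , event_date  DATE          NOT NULL
     , event_time  INTEGER       NOT NULL
     )
   PRIMARY INDEX ( txn_nbr );

   -- one sub-type table per transaction family, e.g. card events
   CREATE TABLE txn_card_detail
     ( txn_nbr       DECIMAL(18,0) NOT NULL
     , card_nbr      CHAR(16)      NOT NULL
     , merchant_code CHAR(4)
     , merchant_name CHAR(25)
     )
   PRIMARY INDEX ( txn_nbr );

   -- end users then pick up the detail with a join such as:
   -- SELECT e.account_nbr, e.event_date, c.merchant_name
   -- FROM   txn_event        e
   -- JOIN   txn_card_detail  c  ON  e.txn_nbr = c.txn_nbr
   -- WHERE  e.event_type = 'CC';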
Hope this helps