Archives of the TeradataForum
Message Posted: Tue, 20 Apr 2004 @ 18:50:49 GMT
We have an internally-built Unix routine (a C program) that unloads changed/inserted rows from 4 Oracle instances. Additionally, as records are deleted in Oracle, their keys/timestamps are inserted into a table, which is also unloaded. The rows unloaded from the tables are selected on a timestamp field: any row whose timestamp is later than the last unload (which runs every hour) is picked up. These rows are placed onto an MQ Series queue and eventually received into MVS datasets. From there, the data is fed into MultiLoad scripts to populate the Teradata warehouse.
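For context, the watermark selection our unload routine performs is roughly equivalent to the following sketch. This is not our actual code (which is C against Oracle); the row layout, field names, and function name here are purely illustrative.

```python
from datetime import datetime

def select_changed_rows(rows, last_unload_ts):
    """Return rows whose change timestamp is later than the last unload.

    Each row is a dict with a 'last_modified' datetime. Rows stamped at
    or before last_unload_ts are assumed to have been unloaded already
    and are skipped.
    """
    return [r for r in rows if r["last_modified"] > last_unload_ts]

# Illustrative data: hourly unload last ran at 10:00.
rows = [
    {"key": 1, "last_modified": datetime(2004, 4, 20, 9, 15)},
    {"key": 2, "last_modified": datetime(2004, 4, 20, 10, 5)},
]
changed = select_changed_rows(rows, datetime(2004, 4, 20, 10, 0))
# only key 2 falls after the 10:00 watermark
```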
Periodically, the warehouse misses some rows. It appears the internally-built unload routine occasionally misses a row. This is rare, but it happens. When it does, the application is forced to "turn back the clock" and resend all data from the point of the error; sometimes that takes the entire weekend.
We'd like to audit, on a daily basis, the rows in the warehouse against the rows in the source. This could be challenging. We could simply run a summary count on both systems for data older than the current day, but that won't be exact because of ongoing activity in the source.
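The summary-count idea above can be sketched as a per-day reconciliation: count rows on each side bucketed by day, ignore anything on or after the current day (where source activity makes counts unstable), and report the days that disagree. A minimal sketch, assuming each row carries a `ts` timestamp; the function names are illustrative, not a DataStage or Teradata feature.

```python
from collections import Counter
from datetime import date, datetime

def daily_counts(rows, cutoff):
    """Count rows per day, ignoring anything on or after the cutoff day."""
    return Counter(r["ts"].date() for r in rows if r["ts"].date() < cutoff)

def audit(source_rows, warehouse_rows, cutoff):
    """Return {day: (source_count, warehouse_count)} for days that differ."""
    src = daily_counts(source_rows, cutoff)
    wh = daily_counts(warehouse_rows, cutoff)
    return {d: (src.get(d, 0), wh.get(d, 0))
            for d in set(src) | set(wh)
            if src.get(d, 0) != wh.get(d, 0)}

# Illustrative data: the source has two rows for 19 Apr, the warehouse one,
# so the audit flags 19 Apr; the 20 Apr row is inside the cutoff and ignored.
src = [{"ts": datetime(2004, 4, 19, 8)},
       {"ts": datetime(2004, 4, 19, 9)},
       {"ts": datetime(2004, 4, 20, 23)}]
wh = [{"ts": datetime(2004, 4, 19, 8)}]
diffs = audit(src, wh, date(2004, 4, 20))
```

In practice the counts would come from a `SELECT COUNT(*) ... GROUP BY` on each system rather than in-memory rows; the value of bucketing by day is that a mismatch points at which day's unload to replay, instead of forcing a full "turn back the clock" resend.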
Does DataStage have some type of audit capability to ensure the warehouse and the source stay in sync?
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|