Archives of the TeradataForum

Message Posted: Tue, 20 Apr 2004 @ 18:50:49 GMT


     


Subj:   Keeping source in sync with the warehouse
 
From:   Hunt, Deborah J

We have an internally built Unix routine (a C program) that unloads changed and inserted rows from four Oracle instances. Additionally, as records are deleted in Oracle, their keys and timestamps are inserted into a table, which is also unloaded. Rows are selected for unload based on a timestamp field: any row whose timestamp is later than the time of the last unload (which runs every hour) is extracted. These rows are placed onto an MQ Series queue and eventually received into MVS datasets. From there, the data is fed into MultiLoad scripts to populate the Teradata warehouse.
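
For context, the hourly change-capture pull is conceptually something like the sketch below. The names (customer_orders, last_update_ts, customer_orders_deleted, :last_unload_ts) are illustrative only, not our actual schema:

    -- Changed/inserted rows: anything stamped after the previous unload.
    SELECT *
    FROM   customer_orders
    WHERE  last_update_ts > :last_unload_ts;

    -- Deletes are captured separately: the key and delete timestamp are
    -- written to a companion table, which is unloaded the same way.
    SELECT order_key, delete_ts
    FROM   customer_orders_deleted
    WHERE  delete_ts > :last_unload_ts;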

Periodically, the warehouse misses some rows. It appears the internally built unload routine occasionally skips a row. This is rare, but it happens. When it does, the application is forced to "turn back the clock" and resend all data from the point of the error, which sometimes takes an entire weekend.
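
Operationally, "turning back the clock" just means rewinding the stored high-water-mark timestamp so the next unload re-extracts everything from the error point forward. Something like this, assuming a (hypothetical) per-instance control table:

    -- Rewind the bookmark to just before the missed row; the next hourly
    -- run will re-unload everything from that point on.
    UPDATE unload_control
    SET    last_unload_ts = :error_point_ts
    WHERE  source_instance = 'ORCL1';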

Our target:

We'd like to audit, on a daily basis, the rows in the warehouse against the rows in the source. This could be challenging. We could simply run a summary count on both systems over data older than the current day, but that won't be exact because of ongoing activity in the source.
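
The naive version of that audit would be a pair of counts over rows older than today, roughly like the sketch below (names illustrative as before). The catch is that the two counts are taken at different moments, and deleted rows would have to be netted out of the source side using the delete-log table:

    -- On Oracle (source): rows last touched before today.
    SELECT COUNT(*)
    FROM   customer_orders
    WHERE  last_update_ts < TRUNC(SYSDATE);

    -- On Teradata (warehouse): the corresponding count.
    SELECT COUNT(*)
    FROM   dw.customer_orders
    WHERE  CAST(last_update_ts AS DATE) < CURRENT_DATE;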


Does DataStage have some type of audit capability to ensure the warehouse and the source are in sync?


Thanks,

Debbie Hunt



     