Archives of the TeradataForum

Message Posted: Tue, 18 Oct 2005 @ 08:37:26 GMT


     
Subj:   Re: Writing to file from a Teradata SQL statement
 
From:   McCall, Glenn David

If you had V2R6.1 (which is unlikely because it isn't released yet) you could use an "external table". An external table is kind of like a UDF, but for a table. Unfortunately, all the examples of external table use that I have seen are selects; I don't know if you can perform inserts into one of these.
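For reference, I would expect it to look something like the existing table-function invocation pattern - purely a sketch, and the database, function name and file path below are made up:

    /* hypothetical table function that exposes an external file as rows */
    SELECT *
    FROM   TABLE (mydb.read_extract_file('/path/to/extract.dat')) AS dt;

But again, that is only a select; I have not seen an insert form of it.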

Another V2R6.1 alternative would be queue tables, where you could accumulate records until the archive has completed, at which point the accumulated records could be processed against the real table - but I don't know whether this would constitute modifying the database.
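As a rough sketch of the queue-table idea (table and column names are made up; the first column of a queue table has to be the insertion-timestamp, or QITS, column):

    /* queue table to hold the deferred rows */
    CREATE TABLE mydb.pending_updates, QUEUE
    ( qits     TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
    , acct_id  INTEGER
    , amount   DECIMAL(12,2)
    );

    /* while the archive runs, the job inserts into the queue instead of the real table */
    INSERT INTO mydb.pending_updates (acct_id, amount) VALUES (1234, 99.95);

    /* afterwards, a consumer drains the queue and applies each row to the real table */
    SELECT AND CONSUME TOP 1 * FROM mydb.pending_updates;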

I am a little confused about your approach. Why would you want to cause an insert to write the content to a file only to have it reinserted at a later date? Why not simply defer the insert in the first place? There are a couple of options here:

1) Defer the job that is generating the updates until the backup is finished.

2) Use DBQM (Database Query Manager - now known as TDWM, Teradata Dynamic Workload Manager). I don't know if it is possible, but you might be able to establish a profile where no queries may be executed while the backup is running; instead DBQM/TDWM would defer the queries for submission after the backup.

3) Modify the job itself so that if the backup is running, it writes the updates to a file (e.g. in the form of a BTEQ script) which is simply replayed via bteq once the backup completes - see the sketch below.
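For example, the deferred file could just be a BTEQ script along these lines (the logon details, table and values are obviously made up):

    .LOGON tdpid/userid,password
    /* each deferred update becomes one statement in the script */
    INSERT INTO prod.sales (sale_id, amount) VALUES (1001, 49.50);
    INSERT INTO prod.sales (sale_id, amount) VALUES (1002, 12.75);
    .QUIT

which the job can replay after the backup with something like:

    bteq < deferred_updates.btq > deferred_updates.log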


I hope this gives you some ideas.


Regards

Glenn Mc



     