Archives of the TeradataForum

Message Posted: Tue, 22 Nov 2005 @ 18:15:40 GMT


     


Subj:   Queueing Jobs in a Disaster Recovery Scenario to Reduce Tape Contention
 
From:   Gregg, Bill

Has anyone designed (or used) solutions that reduce tape contention issues under disaster recovery?

Does anyone know of an existing queueing mechanism or model (e.g., the Groves-Clark mechanism) that fits the scenario below?


Here's the situation (a rough code sketch of the layout follows the list):

1) Approx a dozen large tables are EACH cluster archived across 8 jobs that run more or less in parallel. This means there are 12 waves of jobs producing about 96 archive files. Each job takes from 30 minutes to 3 hours depending on the size of the table, and all jobs for a given table run in the same wave and take about the same amount of time.

2) Approx 80 all-AMP archive jobs (some single-table, some multi-table) run after all of the big tables finish: 10 waves of 8 jobs. All jobs scheduled to run at the same time (in the same wave) take approximately the same amount of time, but the time per wave varies from 15 minutes to 2 hours.

3) Supporting infrastructure includes 4 dedicated servers per Teradata box and 2 tape drives per server; each tape holds 300 GB. The production and backup Teradata boxes have different numbers of AMPs.
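
To make that layout concrete, here is a rough Python model of the waves. The counts (12 tables x 8 jobs, 10 waves of 8 all-AMP jobs, 4 servers x 2 drives) come from the numbers above; the file names and the round-robin drive assignment are just illustrative assumptions:

# Rough model of the archive workload described above. The counts
# come from the post; everything else is a placeholder.

DRIVES = 4 * 2  # 4 dedicated servers, 2 tape drives each

# 12 waves of 8 cluster-archive jobs, then 10 waves of 8 all-AMP jobs
cluster_waves = [[f"bigtbl{t:02d}_part{j}" for j in range(8)]
                 for t in range(12)]
allamp_waves = [[f"allamp_w{w:02d}_j{j}" for j in range(8)]
                for w in range(10)]

def assign_to_drives(waves):
    """Round-robin each wave's jobs across the drives, roughly how
    parallel jobs land on whatever drive is free. Returns
    {drive: [archive files in write order]}."""
    drives = {d: [] for d in range(DRIVES)}
    for wave in waves:
        for i, job in enumerate(wave):
            drives[i % DRIVES].append(job)
    return drives

layout = assign_to_drives(cluster_waves + allamp_waves)
for d, files in layout.items():
    print(f"drive {d}: {len(files)} archive files, first few: {files[:3]}")

The point of the sketch: each drive's tape ends up holding files from many different tables and waves, which is exactly what bites during a restore on a differently configured machine.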


ISSUE: When data is recovered to a machine with a different configuration, cluster archives that were written in parallel have to be restored in sequence. This means that different recovery streams may need datasets that sit on the same tape.
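
To see why that hurts, here is a small check that flags tapes needed by more than one recovery stream. The dataset-to-tape mapping and the streams are invented for illustration; in practice they would come out of the archive catalog:

from collections import defaultdict

# dataset -> tape it was written to (made-up layout)
tape_of = {
    "bigtbl01_part0": "T01", "bigtbl01_part1": "T01",
    "bigtbl02_part0": "T01", "bigtbl02_part1": "T02",
    "allamp_w00_j0":  "T02", "allamp_w00_j1":  "T03",
}

# each recovery stream restores its datasets in sequence
streams = {
    "stream_A": ["bigtbl01_part0", "bigtbl02_part1"],
    "stream_B": ["bigtbl02_part0", "allamp_w00_j0"],
}

needed_by = defaultdict(set)
for stream, datasets in streams.items():
    for ds in datasets:
        needed_by[tape_of[ds]].add(stream)

contended = {tape: sorted(users)
             for tape, users in needed_by.items() if len(users) > 1}
print("tapes needed by more than one stream:", contended)

Here both T01 and T02 are wanted by both streams, so whichever stream loses the race sits idle while the other holds the tape.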


Some thoughts:

1) Try to keep all jobs to roughly the same duration

2) Keep all jobs short (~15 minutes) to reduce delays stemming from tape contention (a toy sketch of this follows)
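
To gauge how much idea 2 buys, here is a toy queueing sketch: jobs that share a tape run one after another, everything else runs in parallel, so with short uniform jobs the worst-case wait per conflict is bounded by one job length. The durations and the job-to-tape mapping are made up:

# Toy sketch: serialize restore jobs that need the same tape.
# (job, tape, duration in minutes) -- illustrative values only
jobs = [
    ("restore_bigtbl01", "T01", 15),
    ("restore_bigtbl02", "T01", 15),  # same tape: must queue
    ("restore_bigtbl03", "T02", 15),
    ("restore_allamp_1", "T02", 15),  # same tape: must queue
    ("restore_allamp_2", "T03", 15),
]

tape_free_at = {}  # tape -> time it becomes available
schedule = []
for job, tape, dur in jobs:
    start = tape_free_at.get(tape, 0)  # wait only for this tape
    tape_free_at[tape] = start + dur
    schedule.append((start, start + dur, job, tape))

for start, end, job, tape in sorted(schedule):
    print(f"{start:3d}-{end:3d} min  {job}  on {tape}")
print("makespan:", max(tape_free_at.values()), "minutes")

With 15-minute jobs each conflict above costs at most 15 minutes of waiting; with 3-hour jobs the same conflicts would cost up to 3 hours each.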


Your insights, experiences, and ideas are welcome.


Thanks,

Bill Gregg



     