
Archives of the TeradataForum

Message Posted: Tue, 08 Sep 2009 @ 18:07:55 GMT


     

Subj:   Re: Table/row locking
 
From:   Geoffrey Rommel

  I will have multiple jobs (BTEQ scripts) performing DELETEs and INSERTs into the same fact table.  


Hold on a minute. Is this perhaps a reworking of an older job that ran against Oracle or SQL Server? You may need to rethink this application in Teradata terms.

Row-by-row inserts are slow. The best option, if possible, would be a MultiLoad to delete the old rows followed by another MultiLoad to insert the new ones.
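To make that concrete (every database, table, column, and file name below is invented, and your primary index and data format will dictate the details), the delete step could be a MultiLoad DELETE task and the insert step an ordinary MultiLoad IMPORT task, something along these lines:

    /* Step 1: MultiLoad DELETE task to remove the old rows */
    .LOGTABLE work_db.fact_del_log;
    .LOGON tdpid/your_user,your_password;
    .BEGIN DELETE MLOAD TABLES prod_db.fact_table;
    DELETE FROM prod_db.fact_table
     WHERE load_date = DATE '2009-09-01';
    .END MLOAD;
    .LOGOFF;

    /* Step 2: MultiLoad IMPORT task to insert the new rows */
    .LOGTABLE work_db.fact_ins_log;
    .LOGON tdpid/your_user,your_password;
    .BEGIN MLOAD TABLES prod_db.fact_table;
    .LAYOUT fact_layout;
    .FIELD cust_id   * VARCHAR(10);
    .FIELD sale_date * VARCHAR(10);
    .FIELD amount    * VARCHAR(20);
    .DML LABEL ins_dml;
    INSERT INTO prod_db.fact_table (cust_id, sale_date, amount)
    VALUES (:cust_id, :sale_date, :amount);
    .IMPORT INFILE new_rows.csv
            FORMAT VARTEXT ','
            LAYOUT fact_layout
            APPLY ins_dml;
    .END MLOAD;
    .LOGOFF;

Keep in mind that a MultiLoad DELETE task scans the whole table, so its WHERE clause is not supposed to be a simple equality on a unique primary index; the MultiLoad manual spells out the exact rules.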

If that doesn't work, TPump is the next thing to try. You may encounter locking problems, so read the section on SERIALIZE in the TPump manual carefully.
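For what it's worth, the two knobs that matter here are the KEY option on the LAYOUT fields and SERIALIZE ON in .BEGIN LOAD: TPump then routes all operations for the same key through one session, in order. A rough sketch, with invented names, assuming the primary index is (cust_id, sale_date):

    .LOGTABLE work_db.fact_tp_log;
    .LOGON tdpid/your_user,your_password;
    .BEGIN LOAD
           SESSIONS 4
           PACK 20
           SERIALIZE ON
           ERRORTABLE work_db.fact_tp_err;
    .LAYOUT fact_layout;
    /* KEY marks the columns that SERIALIZE hashes on */
    .FIELD cust_id   * VARCHAR(10) KEY;
    .FIELD sale_date * VARCHAR(10) KEY;
    .FIELD amount    * VARCHAR(20);
    .DML LABEL ins_dml;
    INSERT INTO prod_db.fact_table (cust_id, sale_date, amount)
    VALUES (:cust_id, :sale_date, :amount);
    .IMPORT INFILE new_rows.csv
            FORMAT VARTEXT ','
            LAYOUT fact_layout
            APPLY ins_dml;
    .END LOAD;
    .LOGOFF;

With SERIALIZE ON, two operations against the same key are never in flight at the same time, which is what keeps the row-hash lock conflicts (and deadlocks) down.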

Another possibility, if the volume is relatively low, is to FastLoad the incoming rows to a staging table and then run a stored procedure that applies the deletes and inserts in the proper order. This might allow you to add, say, 1000 rows with one INSERT ... SELECT rather than executing 1000 single-row inserts.
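Roughly, and again with invented names: FastLoad into an empty staging table, then one set-based DELETE and one INSERT ... SELECT, which you could wrap in a stored procedure and call after each load:

    /* FastLoad the incoming file into an empty staging table */
    LOGON tdpid/your_user,your_password;
    DROP TABLE work_db.fact_stage;
    DROP TABLE work_db.fact_stage_e1;
    DROP TABLE work_db.fact_stage_e2;
    CREATE TABLE work_db.fact_stage (
      cust_id   INTEGER,
      sale_date DATE,
      amount    DECIMAL(18,2))
    PRIMARY INDEX (cust_id);
    SET RECORD VARTEXT ",";
    DEFINE cust_id   (VARCHAR(10)),
           sale_date (VARCHAR(10)),
           amount    (VARCHAR(20))
    FILE = new_rows.csv;
    BEGIN LOADING work_db.fact_stage
          ERRORFILES work_db.fact_stage_e1, work_db.fact_stage_e2;
    INSERT INTO work_db.fact_stage VALUES (:cust_id, :sale_date, :amount);
    END LOADING;
    LOGOFF;

    /* Apply the staged changes to the fact table as set operations */
    REPLACE PROCEDURE work_db.apply_fact_stage()
    BEGIN
      DELETE FROM prod_db.fact_table
       WHERE (cust_id, sale_date) IN
             (SELECT cust_id, sale_date FROM work_db.fact_stage);

      INSERT INTO prod_db.fact_table (cust_id, sale_date, amount)
      SELECT cust_id, sale_date, amount
        FROM work_db.fact_stage;
    END;

The point is that the deletes and the inserts each touch the fact table once, as set operations, rather than once per row.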

If all else fails, and you decide that you must have multiple BTEQ scripts running in parallel, consult the section on locking in the SQL Reference: Statement and Transaction Processing.
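One pattern that helps if you do end up there (sketch only; the names and values are made up): take an explicit WRITE lock on the table at the top of each transaction, so the parallel scripts queue behind one another instead of deadlocking on row-hash locks, and keep each delete/insert pair inside one BT/ET transaction:

    .LOGON tdpid/your_user,your_password;
    /* Locks acquired inside BT ... ET are held until the ET */
    BT;
    LOCKING TABLE prod_db.fact_table FOR WRITE
    DELETE FROM prod_db.fact_table
     WHERE cust_id   = 1234
       AND sale_date = DATE '2009-09-01';

    INSERT INTO prod_db.fact_table (cust_id, sale_date, amount)
    VALUES (1234, DATE '2009-09-01', 99.95);
    ET;
    .LOGOFF;

A table-level WRITE lock obviously serializes the scripts, so it only makes sense if each transaction is short; the manual also covers the finer-grained LOCKING ROW option.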

As you can see, I'm just throwing out ideas, and there are several things to consider here. You might want to consult with someone who has written applications like this before.



     
  
 
 