Archives of the TeradataForum

Message Posted: Tue, 16 Oct 2001 @ 10:24:26 GMT


Subj:   Re: INSERT INTO big table
From:   Petr Horsky


you asked for it...


INSERT INTO dwkb.Current_Account_Transaction
SELECT * FROM dwkb_aux.Current_Account_Transaction;

  1) First, we lock a distinct dwkb_aux."pseudo table" for read on a RowHash to prevent global deadlock for dwkb_aux.Current_Account_Transaction.
  2) Next, we lock a distinct dwkb."pseudo table" for write on a RowHash to prevent global deadlock for dwkb.Current_Account_Transaction.
  3) We lock dwkb_aux.Current_Account_Transaction for read, and we lock dwkb.Current_Account_Transaction for write.
  4) We do an all-AMPs RETRIEVE step from dwkb_aux.Current_Account_Transaction by way of an all-rows scan with no residual conditions into Spool 1, which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 1 is estimated with high confidence to be 1,451,355 rows. The estimated time for this step is 4 minutes and 9 seconds.
  5) We do a MERGE into dwkb.Current_Account_Transaction from Spool 1 (Last Use).
  6) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
  -> No rows are returned to the user as the result of statement 1.
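For anyone wanting to reproduce a plan like the one above: it is the output of Teradata's EXPLAIN modifier, which returns the optimizer's step list without actually executing the statement. A minimal sketch, using the table names from this thread:

```sql
-- Prefixing a statement with EXPLAIN asks the optimizer for its plan
-- (locking steps, retrieve/merge steps, row estimates) without running
-- the INSERT itself.
EXPLAIN
INSERT INTO dwkb.Current_Account_Transaction
SELECT * FROM dwkb_aux.Current_Account_Transaction;
```

The row counts and timings in the plan are optimizer estimates (here flagged "with high confidence"), not measured results.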

Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 28 Jun 2020