 

Archives of the TeradataForum

Message Posted: Mon, 07 Jul 2003 @ 15:18:27 GMT


     


Subj:   Re: Question Regarding Multi-Statement Transactions in BTEQ
 
From:   ulrich arndt

It has an impact on parallelism, as the explains below show. It is possible (but not guaranteed) that all three statements will be performed in parallel (see the examples).

Ulrich
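
As a reminder of the BTEQ mechanics involved (a minimal sketch, not from the original session): BTEQ ends a request only when a semicolon is the last non-blank character on an input line, so starting the next statement's line with the semicolon, as in the examples below, bundles all three INSERTs into a single multi-statement request.

-- One multi-statement request: no line ends in ";" until the last,
-- so BTEQ ships all three INSERTs to the parser together.
insert into a select id from udb000.sequence
;insert into b select id from udb000.sequence
;insert into c select id from udb000.sequence
;

-- Three single-statement requests: every line ends in ";", so each
-- INSERT is parsed, optimized, and executed on its own.
insert into a select id from udb000.sequence;
insert into b select id from udb000.sequence;
insert into c select id from udb000.sequence;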


Example 1 - all parallel:

create table a
( a integer);
*** Table has been created.
*** Time was 1 sec.       7/7/2003    5:23:53 PM

create table b
( a integer);
*** Table has been created.
*** Time was 1 sec.       7/7/2003    5:23:53 PM

create table c
( a integer);
*** Table has been created.
*** Time was 1 sec.       7/7/2003    5:23:54 PM

select *
from udb000.sequence
sample 10;
*** Query completed.  10 row(s) found.  1 field(s) returned.
*** Time was  4 secs.       7/7/2003    5:23:58 PM
          ID
============
         364
         904
         559
        -658
        -628
        -148
        -120
        -948
         308
         305

explain
insert into a
select id
from udb000.sequence
;insert into b
select id
from udb000.sequence
;insert into c
select id
from udb000.sequence
;

*** Help information has been returned.
*** Time was 1 sec.       7/7/2003    5:23:58 PM
Explanation
--------------------------------------------------
 
  1) First, we lock a distinct GPS_ARNDTU."pseudo table" for write on a
     RowHash to prevent global deadlock for GPS_ARNDTU.c.
  2) Next, we lock a distinct GPS_ARNDTU."pseudo table" for write on a
     RowHash to prevent global deadlock for GPS_ARNDTU.b.
  3) We lock a distinct GPS_ARNDTU."pseudo table" for write on a RowHash
     to prevent global deadlock for GPS_ARNDTU.a.
  4) We lock a distinct udb000."pseudo table" for read on a RowHash to
     prevent global deadlock for udb000.sequence.
  5) We lock GPS_ARNDTU.c for write, we lock GPS_ARNDTU.b for write, we
     lock GPS_ARNDTU.a for write, and we lock udb000.sequence for read.
  6) We execute the following steps in parallel.
       1) We do a MERGE into GPS_ARNDTU.a from udb000.sequence.
       2) We do a MERGE into GPS_ARNDTU.b from udb000.sequence.
       3) We do a MERGE into GPS_ARNDTU.c from udb000.sequence.
  7) We spoil the parser's dictionary cache for the table.
  8) We spoil the parser's dictionary cache for the table.
  9) We spoil the parser's dictionary cache for the table.
 10) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1.
     No rows are returned to the user as the result of statement 2.
     No rows are returned to the user as the result of statement 3.

drop table a;
*** Table has been dropped.
*** Time was 1 sec.       7/7/2003    5:23:59 PM

drop table b;
*** Table has been dropped.
*** Time was 1 sec.       7/7/2003    5:23:59 PM

drop table c;
*** Table has been dropped.
*** Time was 1 sec.       7/7/2003    5:24:00 PM

Example 2 - not fully parallel, but partially parallel. Here each INSERT first needs an all-AMPs RETRIEVE into a spool, so instead of one parallel step with all three MERGEs, the optimizer overlaps the MERGE for one statement with the RETRIEVE for the next:

explain
insert into a
select date-current_date
from sys_calendar.calendar
;insert into b
select date-current_date
from sys_calendar.calendar
;insert into c
select date-current_date
from sys_calendar.calendar
;

*** Help information has been returned.
*** Time was  2 secs.       7/7/2003    5:19:56 PM
Explanation
--------------------------------------------------
 
  1) First, we lock a distinct GPS_ARNDTU."pseudo table" for write on a
     RowHash to prevent global deadlock for GPS_ARNDTU.c.
  2) Next, we lock a distinct GPS_ARNDTU."pseudo table" for write on a
     RowHash to prevent global deadlock for GPS_ARNDTU.b.
  3) We lock a distinct GPS_ARNDTU."pseudo table" for write on a RowHash
     to prevent global deadlock for GPS_ARNDTU.a.
  4) We lock a distinct SYS_CALENDAR."pseudo table" for read on a RowHash
     to prevent global deadlock for SYS_CALENDAR.CALDATES.
  5) We lock GPS_ARNDTU.c for write, we lock GPS_ARNDTU.b for write, we
     lock GPS_ARNDTU.a for write, and we lock SYS_CALENDAR.CALDATES for
     read.
  6) We do an all-AMPs RETRIEVE step from SYS_CALENDAR.CALDATES by way of
     an all-rows scan with no residual conditions into Spool 1, which is
     redistributed by hash code to all AMPs. Then we do a SORT to order
     Spool 1 by row hash. The size of Spool 1 is estimated with high
     confidence to be 73,414 rows. The estimated time for this step is
     0.31 seconds.
  7) We execute the following steps in parallel.
       1) We do a MERGE into GPS_ARNDTU.a from Spool 1 (Last Use).
       2) We do an all-AMPs RETRIEVE step from SYS_CALENDAR.CALDATES by
          way of an all-rows scan with no residual conditions into Spool
          2, which is redistributed by hash code to all AMPs. Then we do
          a SORT to order Spool 2 by row hash. The size of Spool 2 is
          estimated with high confidence to be 73,414 rows. The estimated
          time for this step is 0.31 seconds.
  8) We execute the following steps in parallel.
       1) We do a MERGE into GPS_ARNDTU.b from Spool 2 (Last Use).
       2) We do an all-AMPs RETRIEVE step from SYS_CALENDAR.CALDATES by
          way of an all-rows scan with no residual conditions into Spool
          3, which is redistributed by hash code to all AMPs. Then we do
          a SORT to order Spool 3 by row hash. The size of Spool 3 is
          estimated with high confidence to be 73,414 rows. The estimated
          time for this step is 0.31 seconds.
  9) We do a MERGE into GPS_ARNDTU.c from Spool 3 (Last Use).
 10) We spoil the parser's dictionary cache for the table.
 11) We spoil the parser's dictionary cache for the table.
 12) We spoil the parser's dictionary cache for the table.
 13) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> No rows are returned to the user as the result of statement 1.
     No rows are returned to the user as the result of statement 2.
     No rows are returned to the user as the result of statement 3.
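
For contrast, a hedged sketch (not part of the original post): wrapping the same three INSERTs in an explicit BT/ET transaction gives you one transaction, but they are still three separate requests, each parsed and optimized on its own, so the cross-statement parallel steps seen in the plans above cannot appear.

BT;                                             -- begin explicit transaction

insert into a select id from udb000.sequence;   -- request 1, planned alone
insert into b select id from udb000.sequence;   -- request 2, planned alone
insert into c select id from udb000.sequence;   -- request 3, planned alone

ET;                                             -- commit

Only the multi-statement request hands the optimizer all three statements at once, which is what enables the combined lock steps and the parallel MERGEs shown above.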



     