Archives of the TeradataForum

Message Posted: Thu, 10 Feb 2005 @ 22:45:21 GMT




Subj:   Query Length Time Increase
 
From:   Ferry, Craig

I have a strange problem with a query I am running. If I add one more WHERE clause, the response time goes from 8 minutes to 22 minutes. I know this is rather vague, but I have included the query and both EXPLAIN plans below (I can send the DDL statements if anyone is really interested). Any suggestions as to why adding an additional limiting condition on the data would increase the response time? I would have expected just the opposite. Statistics are collected on all the referenced columns in all the tables.

The very last line of the query is the added line that increases the run time.

****************************************************************

     Select a.branch_id,
     b.group_id,
     b.region_id,
     extract(year From a.date_invoice) As year_invoice,
     Case
        When Substr(a.vendor_no,1,4)='TOR3'
           Then 'CA USD'
        When Substr(a.vendor_no,1,3)='TOR'
           Then 'CA CAD'
        Else   'US USD'
     End AS Currency,
     a.vendor_no,
     v.vendor_name,
     v.address_1,
     v.city,
     v.state,
     v.zip_code,
     a.invoice_no,
     a.date_invoice,
     a.lot_no,
     a.net_amount,
     Case
        Substring(a.lot_no From 3 For 1)
        When '-'
           Then 'Manual Claim'
        When '/'
           Then 'Negative Pmt'
        When 'D'
           Then 'HQ AP Input'
        When 'S'
           Then 'HQ AP Input'
        When '='
           Then 'SA5615 Claim'
        Else 'Unknown'
     End As trans_type,
     s.supplier_name,
     r.sequence_no,
     r.date_received,
     r.transaction_code

     From dss_tables.ap_open_paid_dly_tbl a

     Inner Join dss_tables.receipts r
        On r.branch_id=a.branch_id
        And r.vendor_no=a.vendor_no
        And r.invoice_no=a.invoice_no
        And r.date_invoice=a.date_invoice
        And r.lot_no=a.lot_no

     Left Outer Join dss_tables.supplier_dly_tbl s
        ON s.branch_id=r.branch_id
        AND s.supplier_no=r.supplier_no

     Inner Join dss_tables.vendor_master_dly_tbl v
        On v.vendor_no=a.vendor_no

     Inner Join dss_tables.branch b
        On b.branch_id=a.branch_id


     Where a.date_paid Is Null
     And a.vendor_no In (
        Select vendor_no
        From dss_tables.ap_open_paid_dly
        Where vendor_no Not In ('PHLPURGE', 'PGHPURGE', 'CHGPURGE', 'SFXPURGE',
           'HDQPURGE', 'TORPURGE', 'PHL00001', 'PGH00001',
           'CHG00001', 'SFX00001', 'HDQ00001', 'TOR00001', 'PHL00002', 'PGH00002',
           'CHG00002', 'SFX00002', 'HDQ00002', 'TOR00002',
           'PHL00003', 'PGH00003', 'CHG00003', 'SFX00003', 'HDQ00003', 'TOR00003',
           'PHL00004', 'PGH00004', 'TOR20002')
        And date_paid Is Null
        And date_invoice < '2004-11-12'
        And date_invoice >= '2003-01-01'
        And branch_id Not In (7147, 7528, 7529)
        And a.branch_id In (
           Select branch_id
           From branch
           Where REGION_ID = 8120)
        Group By vendor_no
        Having Sum(net_amount) < 0)

     And a.branch_id Not In (7147, 7528, 7529)
     And a.net_amount < 0
     And a.date_invoice < '2004-11-12'
     And a.date_invoice >= '2003-01-01'
     And b.region_id = 8120


****************************************************************
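One formulation that is sometimes suggested for this kind of IN (SELECT ... GROUP BY ... HAVING ...) predicate is to join to the subquery as a derived table, so the optimizer can treat it as an ordinary join rather than an inclusion join. This is only a sketch reusing the table and column names from the query above, untested against this schema; the correlated a.branch_id predicate from inside the subquery would stay in the outer WHERE clause:

     -- Sketch: replace the "a.vendor_no In (Select ...)" predicate with a
     -- derived-table join. Same filters as the original subquery; the long
     -- vendor_no exclusion list is elided here for brevity.
     Inner Join (
        Select vendor_no
        From dss_tables.ap_open_paid_dly
        Where vendor_no Not In ('PHLPURGE', 'PGHPURGE' /* ...full list as above... */)
        And date_paid Is Null
        And date_invoice < '2004-11-12'
        And date_invoice >= '2003-01-01'
        And branch_id Not In (7147, 7528, 7529)
        Group By vendor_no
        Having Sum(net_amount) < 0
     ) neg_vendors
        On neg_vendors.vendor_no = a.vendor_no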

FAST EXPLAIN (WITHOUT THE ADDED LAST LINE)

Explanation
--------------------------------------------------
 
  1)First, we lock dss_tables.s for access, we lock dss_tables.ap_open_paid_dly_tbl for access, we lock dss_tables.r for access, we lock dss_tables.branch_tbl for access, and we lock dss_tables.v for access.  
  2)Next, we do an all-AMPs RETRIEVE step from dss_tables.branch_tbl by way of an all-rows scan with a condition of ("dss_tables.branch_tbl.region_id = 8120") into Spool 5 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 5 by row hash. The size of Spool 5 is estimated with high confidence to be 980 rows. The estimated time for this step is 0.02 seconds.  
  3)We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of a traversal of index # 24 without accessing the base table extracting row ids only locking dss_tables.a for access. Spool 5 and dss_tables.a are joined using a nested join, with a join condition of ("dss_tables.a.branch_id = branch_id"). The result goes into Spool 6 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 6 by field Id 1. The size of Spool 6 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 2.03 seconds.  
  4)We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of an all-rows scan with a condition of ("(NOT (dss_tables.a.lot_no IS NULL )) AND ((NOT (dss_tables.a.date_invoice IS NULL )) AND ((NOT (dss_tables.a.invoice_no IS NULL )) AND ((NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((dss_tables.a.branch_id <> 7147) AND ((dss_tables.a.branch_id <> 7528) AND ((dss_tables.a.branch_id <> 7529) AND ((dss_tables.a.date_paid IS NULL) AND ((dss_tables.a.net_amount < 0.00) AND ((dss_tables.a.date_invoice < DATE '2004-11-12') AND ((dss_tables.a.date_invoice >= DATE '2003-01-01') AND ((NOT (dss_tables.a.branch_id IS NULL )) AND (NOT (dss_tables.a.vendor_no IS NULL ))))))))))))))") locking dss_tables.a for access. Spool 6 and dss_tables.a are joined using a row id join, with a join condition of ("(1=1)"). The result goes into Spool 7 (all_amps), which is duplicated on all AMPs. The size of Spool 7 is estimated with low confidence to be 220,696 rows. The estimated time for this step is 2.09 seconds.  
  5)We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an all-rows scan, which is joined to dss_tables.ap_open_paid_dly_tbl by way of an all-rows scan with a condition of ("(dss_tables.ap_open_paid_dly_tbl.branch_id <> 7147) AND ((dss_tables.ap_open_paid_dly_tbl.branch_id <> 7528) AND ((dss_tables.ap_open_paid_dly_tbl.branch_id <> 7529) AND ((dss_tables.ap_open_paid_dly_tbl.date_invoice >= DATE '2003-01-01') AND ((dss_tables.ap_open_paid_dly_tbl.date_invoice < DATE '2004-11-12') AND ((dss_tables.ap_open_paid_dly_tbl.date_paid IS NULL) AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR20002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00004 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00004 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TORPURGE ') AND 
((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFXPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHGPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGHPURGE ') AND (dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHLPURGE '))))))))))))))))))))))))))))))))"). Spool 7 and dss_tables.ap_open_paid_dly_tbl are joined using a product join, with a join condition of ("(1=1)"). The result goes into Spool 4 (all_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 4 is estimated with low confidence to be 1,285,041,870 rows. The estimated time for this step is 18 minutes and 34 seconds.  
  6)We do an all-AMPs SUM step to aggregate from Spool 4 (Last Use) by way of an all-rows scan, and the grouping identifier in field 1. Aggregate Intermediate Results are computed globally, then placed in Spool 8. The aggregate spool file will not be cached in memory. The size of Spool 8 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 3 hours and 23 minutes.  
  7)We execute the following steps in parallel.  
   
  1) We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by way of an all-rows scan with a condition of ("Field_4 < 0.00") into Spool 2 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 2 by row hash and the sort key in spool field1 eliminating duplicate rows. The result spool file will not be cached in memory. The size of Spool 2 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 1 hour and 12 minutes.
 
   
  2) We do an all-AMPs RETRIEVE step from dss_tables.a by way of an all-rows scan with a condition of ("(NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((dss_tables.a.date_invoice >= DATE '2003-01-01') AND ((dss_tables.a.date_invoice < DATE '2004-11-12') AND ((dss_tables.a.net_amount < 0.00) AND ((dss_tables.a.date_paid IS NULL) AND ((dss_tables.a.branch_id <> 7529) AND ((dss_tables.a.branch_id <> 7528) AND ((dss_tables.a.branch_id <> 7147) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.invoice_no IS NULL )) AND ((NOT (dss_tables.a.date_invoice IS NULL )) AND (NOT (dss_tables.a.lot_no IS NULL ))))))))))))))") locking for access into Spool 10 (all_amps), which is redistributed by hash code to all AMPs. The size of Spool 10 is estimated with high confidence to be 152,676 rows. The estimated time for this step is 4.17 seconds.
 
  8)We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of an all-rows scan, which is joined to dss_tables.v by way of an all-rows scan. Spool 10 and dss_tables.v are joined using a single partition hash join, with a join condition of ("dss_tables.v.vendor_no = vendor_no"). The result goes into Spool 11 (all_amps), which is duplicated on all AMPs into 20 hash join partitions. The size of Spool 11 is estimated with low confidence to be 4,274,928 rows. The estimated time for this step is 19.58 seconds.  
  9)We do an all-AMPs RETRIEVE step from dss_tables.r by way of an all-rows scan with a condition of ("(NOT (dss_tables.r.branch_id IS NULL )) AND ((NOT (dss_tables.r.vendor_no IS NULL )) AND ((NOT (dss_tables.r.invoice_no IS NULL )) AND ((NOT (dss_tables.r.date_invoice IS NULL )) AND (NOT (dss_tables.r.lot_no IS NULL )))))") into Spool 12 (all_amps) fanned out into 20 hash join partitions, which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The result spool file will not be cached in memory. The size of Spool 12 is estimated with low confidence to be 60,747,691 rows. The estimated time for this step is 6 minutes and 38 seconds.  
  10)We do an all-AMPs JOIN step from Spool 11 (Last Use) by way of an all-rows scan, which is joined to Spool 12 (Last Use) by way of an all- rows scan. Spool 11 and Spool 12 are joined using a hash join of 20 partitions, with a join condition of ("(branch_id = branch_id) AND ((vendor_no = vendor_no) AND ((invoice_no = invoice_no) AND ((date_invoice = date_invoice) AND (lot_no = lot_no ))))"). The result goes into Spool 13 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 13 by row hash. The size of Spool 13 is estimated with low confidence to be 152,676 rows. The estimated time for this step is 20.23 seconds.  
  11)We execute the following steps in parallel.  
   
  1) We do an all-AMPs JOIN step from Spool 13 (Last Use) by way of a RowHash match scan, which is joined to dss_tables.s by way of a RowHash match scan with a condition of ("NOT (dss_tables.s.supplier_no IS NULL)"). Spool 13 and dss_tables.s are left outer joined using a merge join, with a join condition of ("(dss_tables.s.supplier_no = supplier_no) AND (dss_tables.s.branch_id = branch_id)"). The result goes into Spool 14 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 14 by row hash. The size of Spool 14 is estimated with low confidence to be 4,274,928 rows. The estimated time for this step is 23.28 seconds.
 
   
  2) We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of an all-rows scan into Spool 15 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 15 by row hash. The result spool file will not be cached in memory. The size of Spool 15 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 1 hour and 8 minutes.
 
   
  3) We do an all-AMPs RETRIEVE step from dss_tables.branch_tbl by way of an all-rows scan with no residual conditions locking for access into Spool 16 (all_amps), which is duplicated on all AMPs. The size of Spool 16 is estimated with high confidence to be 25,200 rows. The estimated time for this step is 0.05 seconds.
 
  12)We do an all-AMPs JOIN step from Spool 14 (Last Use) by way of an all-rows scan, which is joined to Spool 15 (Last Use) by way of an all- rows scan. Spool 14 and Spool 15 are joined using an inclusion merge join, with a join condition of ("(Field_4 = Field_3) AND (vendor_no = vendor_no)"). The result goes into Spool 17 (all_amps), which is built locally on the AMPs. The size of Spool 17 is estimated with index join confidence to be 152,676 rows. The estimated time for this step is 3 minutes and 19 seconds.  
  13)We do an all-AMPs JOIN step from Spool 16 (Last Use) by way of an all-rows scan, which is joined to Spool 17 (Last Use) by way of an all- rows scan. Spool 16 and Spool 17 are joined using a single partition hash join, with a join condition of ("branch_id = branch_id"). The result goes into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with index join confidence to be 152,676 rows. The estimated time for this step is 0.25 seconds.  
  14)Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.  
  ->The contents of Spool 1 are sent back to the user as the result of statement 1.  


****************************************************************

SLOW EXPLAIN (WITH THE ADDED LAST LINE)

Explanation
--------------------------------------------------
 
  1)First, we lock dss_tables.s for access, we lock dss_tables.ap_open_paid_dly_tbl for access, we lock dss_tables.r for access, we lock dss_tables.branch_tbl for access, and we lock dss_tables.v for access.  
  2)Next, we do an all-AMPs RETRIEVE step from dss_tables.branch_tbl by way of an all-rows scan with a condition of ("dss_tables.branch_tbl.region_id = 8120") into Spool 5 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 5 by row hash. The size of Spool 5 is estimated with high confidence to be 980 rows. The estimated time for this step is 0.02 seconds.  
  3)We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of a traversal of index # 24 without accessing the base table extracting row ids only locking dss_tables.a for access. Spool 5 and dss_tables.a are joined using a nested join, with a join condition of ("dss_tables.a.branch_id = branch_id"). The result goes into Spool 6 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 6 by field Id 1. The size of Spool 6 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 2.03 seconds.  
  4)We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of an all-rows scan with a condition of ("(NOT (dss_tables.a.lot_no IS NULL )) AND ((NOT (dss_tables.a.date_invoice IS NULL )) AND ((NOT (dss_tables.a.invoice_no IS NULL )) AND ((NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((dss_tables.a.branch_id <> 7147) AND ((dss_tables.a.branch_id <> 7528) AND ((dss_tables.a.branch_id <> 7529) AND ((dss_tables.a.date_paid IS NULL) AND ((dss_tables.a.net_amount < 0.00) AND ((dss_tables.a.date_invoice < DATE '2004-11-12') AND ((dss_tables.a.date_invoice >= DATE '2003-01-01') AND ((NOT (dss_tables.a.branch_id IS NULL )) AND (NOT (dss_tables.a.vendor_no IS NULL ))))))))))))))") locking dss_tables.a for access. Spool 6 and dss_tables.a are joined using a row id join, with a join condition of ("(1=1)"). The result goes into Spool 7 (all_amps), which is duplicated on all AMPs. The size of Spool 7 is estimated with low confidence to be 220,696 rows. The estimated time for this step is 2.09 seconds.  
  5)We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an all-rows scan, which is joined to dss_tables.ap_open_paid_dly_tbl by way of an all-rows scan with a condition of ("(dss_tables.ap_open_paid_dly_tbl.branch_id <> 7147) AND ((dss_tables.ap_open_paid_dly_tbl.branch_id <> 7528) AND ((dss_tables.ap_open_paid_dly_tbl.branch_id <> 7529) AND ((dss_tables.ap_open_paid_dly_tbl.date_invoice >= DATE '2003-01-01') AND ((dss_tables.ap_open_paid_dly_tbl.date_invoice < DATE '2004-11-12') AND ((dss_tables.ap_open_paid_dly_tbl.date_paid IS NULL) AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR20002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00004 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00004 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00003 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00002 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TOR00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQ00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFX00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHG00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGH00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHL00001 ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'TORPURGE ') AND 
((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'HDQPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'SFXPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'CHGPURGE ') AND ((dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PGHPURGE ') AND (dss_tables.ap_open_paid_dly_tbl.vendor_no <> 'PHLPURGE '))))))))))))))))))))))))))))))))"). Spool 7 and dss_tables.ap_open_paid_dly_tbl are joined using a product join, with a join condition of ("(1=1)"). The result goes into Spool 4 (all_amps), which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 4 is estimated with low confidence to be 1,285,041,870 rows. The estimated time for this step is 18 minutes and 34 seconds.  
  6)We do an all-AMPs SUM step to aggregate from Spool 4 (Last Use) by way of an all-rows scan, and the grouping identifier in field 1. Aggregate Intermediate Results are computed globally, then placed in Spool 8. The aggregate spool file will not be cached in memory. The size of Spool 8 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 3 hours and 23 minutes.  
  7)We execute the following steps in parallel.  
   
  1) We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by way of an all-rows scan with a condition of ("Field_4 < 0.00") into Spool 2 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 2 by row hash and the sort key in spool field1 eliminating duplicate rows. The result spool file will not be cached in memory. The size of Spool 2 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 1 hour and 12 minutes.
 
   
  2) We do an all-AMPs RETRIEVE step from dss_tables.branch_tbl by way of an all-rows scan with a condition of ("dss_tables.branch_tbl.region_id = 8120") locking for access into Spool 10 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 10 by row hash. The size of Spool 10 is estimated with high confidence to be 980 rows. The estimated time for this step is 0.02 seconds.
 
  8)We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of a traversal of index # 24 without accessing the base table extracting row ids only locking dss_tables.a for access. Spool 10 and dss_tables.a are joined using a nested join, with a join condition of ("branch_id = dss_tables.a.branch_id"). The result goes into Spool 11 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 11 by field Id 1. The size of Spool 11 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 2.00 seconds.  
  9)We do an all-AMPs JOIN step from Spool 11 (Last Use) by way of an all-rows scan, which is joined to dss_tables.a by way of an all-rows scan with a condition of ("(NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((dss_tables.a.date_invoice >= DATE '2003-01-01') AND ((dss_tables.a.date_invoice < DATE '2004-11-12') AND ((dss_tables.a.net_amount < 0.00) AND ((dss_tables.a.date_paid IS NULL) AND ((dss_tables.a.branch_id <> 7529) AND ((dss_tables.a.branch_id <> 7528) AND ((dss_tables.a.branch_id <> 7147) AND ((NOT (dss_tables.a.branch_id IS NULL )) AND ((NOT (dss_tables.a.vendor_no IS NULL )) AND ((NOT (dss_tables.a.invoice_no IS NULL )) AND ((NOT (dss_tables.a.date_invoice IS NULL )) AND (NOT (dss_tables.a.lot_no IS NULL ))))))))))))))") locking dss_tables.a for access. Spool 11 and dss_tables.a are joined using a row id join, with a join condition of ("(1=1)"). The result goes into Spool 12 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 12 by row hash. The size of Spool 12 is estimated with low confidence to be 220,696 rows. The estimated time for this step is 2.38 seconds.  
  10)We do an all-AMPs JOIN step from Spool 12 (Last Use) by way of an all-rows scan, which is joined to dss_tables.r by way of a traversal of index # 8 without accessing the base table extracting row ids only. Spool 12 and dss_tables.r are joined using a nested join, with a join condition of ("dss_tables.r.vendor_no = vendor_no"). The input table dss_tables.r will not be cached in memory. The result goes into Spool 13 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 13 by field Id 1. The size of Spool 13 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 2 minutes and 47 seconds.  
  11)We do an all-AMPs JOIN step from Spool 13 (Last Use) by way of an all-rows scan, which is joined to dss_tables.r by way of an all-rows scan with a condition of ("(NOT (dss_tables.r.vendor_no IS NULL )) AND ((NOT (dss_tables.r.invoice_no IS NULL )) AND ((NOT (dss_tables.r.date_invoice IS NULL )) AND (NOT (dss_tables.r.lot_no IS NULL ))))"). Spool 13 and dss_tables.r are joined using a row id join, with a join condition of ("(dss_tables.r.lot_no = lot_no) AND ((dss_tables.r.date_invoice = date_invoice) AND ((dss_tables.r.invoice_no = invoice_no) AND ((dss_tables.r.vendor_no = vendor_no) AND (dss_tables.r.branch_id = branch_id ))))"). The input table dss_tables.r will not be cached in memory. The result goes into Spool 14 (all_amps), which is redistributed by hash code to all AMPs. The size of Spool 14 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 2 minutes and 47 seconds.  
  12)We do an all-AMPs JOIN step from Spool 14 (Last Use) by way of an all-rows scan, which is joined to dss_tables.v by way of an all-rows scan. Spool 14 and dss_tables.v are joined using a single partition hash join, with a join condition of ("dss_tables.v.vendor_no = vendor_no"). The result goes into Spool 15 (all_amps), which is duplicated on all AMPs. Then we do a SORT to order Spool 15 by row hash. The size of Spool 15 is estimated with low confidence to be 220,696 rows. The estimated time for this step is 0.91 seconds.  
  13)We do an all-AMPs RETRIEVE step from Spool 2 (Last Use) by way of an all-rows scan into Spool 16 (all_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 16 by row hash. The result spool file will not be cached in memory. The size of Spool 16 is estimated with no confidence to be 932,688,240 rows. The estimated time for this step is 1 hour and 8 minutes.  
  14)We do an all-AMPs JOIN step from Spool 15 (Last Use) by way of an all-rows scan, which is joined to Spool 16 (Last Use) by way of an all- rows scan. Spool 15 and Spool 16 are joined using an inclusion merge join, with a join condition of ("(Field_4 = Field_3) AND (vendor_no = vendor_no)"). The result goes into Spool 17 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 17 by row hash. The size of Spool 17 is estimated with index join confidence to be 7,882 rows. The estimated time for this step is 1 minute and 2 seconds.  
  15)We do an all-AMPs JOIN step from Spool 17 (Last Use) by way of a RowHash match scan, which is joined to dss_tables.s by way of a RowHash match scan with a condition of ("NOT (dss_tables.s.supplier_no IS NULL)"). Spool 17 and dss_tables.s are left outer joined using a merge join, with a join condition of ("(dss_tables.s.supplier_no = supplier_no) AND (dss_tables.s.branch_id = branch_id)"). The result goes into Spool 1 (group_amps), which is built locally on the AMPs. The size of Spool 1 is estimated with low confidence to be 7,882 rows. The estimated time for this step is 0.61 seconds.  
  16)Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.  
  ->The contents of Spool 1 are sent back to the user as the result of statement 1.  
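When one extra predicate flips the join plan like this, it is often a statistics question. Teradata can report which additional statistics the optimizer would have liked to see; the following is a sketch using standard Teradata diagnostic syntax (verify availability on your release):

     -- Ask the optimizer to append COLLECT STATISTICS recommendations
     -- to the EXPLAIN output for the rest of this session.
     DIAGNOSTIC HELPSTATS ON FOR SESSION;

     -- Then re-run:  EXPLAIN <the query above>;
     -- The plan text will end with a list of suggested statistics to collect.

Comparing the recommendations produced for the fast and slow versions may show which estimate (for example, the "no confidence" 932 million row aggregate spool) is driving the plan change.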


________________________________

Craig Ferry
Sr Database Administrator
WESCO Distribution, Inc.


