 

Archives of the TeradataForum

Message Posted: Wed, 06 Feb 2002 @ 16:40:10 GMT


     


Subj:   Re: Erratic results via Business Objects
 
From:   Lee Vickers

OK John, you asked for it... it usually runs in approx. 1.5 minutes.

SELECT
  Store.Store_No,
  ST_Stock_Take_Period.Stock_Take_Period,
  Product_Group.Product_Group_Code,
  Product_Group.Product_Group_Desc,
  Product_SubGroup.Product_Subgroup_Code,
  Product_SubGroup.Product_Subgroup_Desc,
  Product_Parent.Product_Parent_PLU,
  Product_Parent.Parent_Product_Desc,
  Product_Variant.Brand_Code,
  max(Product_Variant_Case.Retail_Units_per_Cas) AS Max_Case_Size,
  min(Product_Variant_Case.Retail_Units_per_Cas) AS Min_Case_Size,
  count(Product_Variant_Case.Retail_Units_per_Cas) AS Num_Case_Sizes,
  avg(Product_Variant_Case.Retail_Units_per_Cas) AS Avg_Case_Size,
  max(ST_Stock_Take_Line.Average_Sell_Price) AS Max_Sell_Price,
  min(ST_Stock_Take_Line.Average_Sell_Price) AS Min_Sell_Price,
  count(ST_Stock_Take_Line.Average_Sell_Price) AS num_Sell_Prices,
  avg(ST_Stock_Take_Line.Average_Sell_Price) AS Avg_Sell_Price,
  sum(ST_Stock_Take_Line.Theoretical_Unit_Count),
  sum(ST_Stock_Take_Line.Actual_Unit_Count),
  sum(ST_Stock_Take_Line.Opening_Unit_Count),
  sum(ST_Stock_Take_Line.Unit_Loss) AS Unit_Loss,
  sum(ST_Stock_Take_Line.ST_Period_Delivery_Val),
  sum(ST_Stock_Take_Line.ST_Period_Deliveries),
  sum(ST_Stock_Take_Line.ST_Period_Stock_Adj),
  sum(ST_Stock_Take_Line.ST_Period_Stock_Adj_Val),
  sum(ST_Stock_Take_Line.ST_Period_Sales),
  sum(ST_Stock_Take_Line.ST_Period_Sales_Val) AS Sales_Val,
  sum(ST_Stock_Take_Line.ST_Period_Reduced),
  sum(ST_Stock_Take_Line.ST_Period_Wastage01 +
      ST_Stock_Take_Line.ST_Period_Wastage02 +
      ST_Stock_Take_Line.ST_Period_Wastage03 +
      ST_Stock_Take_Line.ST_Period_Wastage04 +
      ST_Stock_Take_Line.ST_Period_Wastage05 +
      ST_Stock_Take_Line.ST_Period_Wastage06),
  sum(ST_Stock_Take_Line.ST_Period_Wast_Val01 +
      ST_Stock_Take_Line.ST_Period_Wast_Val02 +
      ST_Stock_Take_Line.ST_Period_Wast_Val03 +
      ST_Stock_Take_Line.ST_Period_Wast_Val04 +
      ST_Stock_Take_Line.ST_Period_Wast_Val05 +
      ST_Stock_Take_Line.ST_Period_Wast_Val06),
  sum(ST_Stock_Take_Line.ST_Period_Wastage01),
  sum(ST_Stock_Take_Line.ST_Period_Wastage02),
  sum(ST_Stock_Take_Line.ST_Period_Wastage03),
  sum(ST_Stock_Take_Line.ST_Period_Wastage04),
  sum(ST_Stock_Take_Line.ST_Period_Wast_Val01),
  sum(ST_Stock_Take_Line.ST_Period_Wast_Val02),
  sum(ST_Stock_Take_Line.ST_Period_Wast_Val03),
  sum(ST_Stock_Take_Line.ST_Period_Wast_Val04),
  count(ST_Stock_Take_Line.Product_Variant_PLU),
  (CASE WHEN (num_Sell_Prices > 1 AND Sales_Val > 0) THEN
(
  (sum(CASE WHEN ST_Stock_Take_Line.ST_Period_Sales = 0
        THEN 0 ELSE ST_Stock_Take_Line.ST_Period_Sales_Val END)/
  sum(CASE WHEN ST_Stock_Take_Line.ST_Period_Sales = 0
        THEN 1 ELSE ST_Stock_Take_Line.ST_Period_Sales END))
) ELSE Max_Sell_Price END) AS Best_Sell_Price,
  sum(ST_Stock_Take_Line.Unit_Loss)* Best_Sell_Price AS Stock_Loss_Value,
  sum(Opening_Unit_Val),
  sum(Actual_Unit_Val),
  sum(Theoretical_Unit_Val),
  max(Product_Variant_Desc)
FROM
  Store,
  Product_Group,
  ST_Stock_Take_Period,
  ST_Stock_Take_Line,
  Product_Parent,
  Product_Variant,
  Product_Variant_Case,
  Product_SubGroup,
  ST_Stock_Take_Header
WHERE
  ( Product_SubGroup.Product_Group_Code=Product_Group.Product_Group_Code  )
  AND  (Product_Variant.Product_Variant_PLU=ST_Stock_Take_Line.Product_Variant_PLU)
  AND  (Product_Parent.Product_SubGroup_Code=Product_SubGroup.Product_Subgroup_Code)
  AND  (Product_Variant.Product_Parent_PLU=Product_Parent.Product_Parent_PLU  )
  AND  (ST_Stock_Take_Header.Stock_Take_Period=ST_Stock_Take_Line.Stock_Take_Period)
  AND  ( ST_Stock_Take_Header.Store_No=Store.Store_No  )
  AND  (ST_Stock_Take_Header.Stock_Take_Period=ST_Stock_Take_Period.Stock_Take_Period  )
  AND  (Product_Variant_Case.Product_Variant_Case_PLU=Product_Variant.Product_Variant_PLU  )
  AND  (  ST_Stock_Take_Period.Stock_Take_Period  =  @variable('Stock Take Period:')
  AND  Product_Group.Product_Group_Code  ^=  'YC '
  AND  Product_Variant.Catchweight_Ind  =  'N'
  AND  Product_Variant.Product_Variant_PLU  NOT IN
(3565,5226,6115,9901,5185,6651,18596,5176,22054,3365,6001,6002,7540,8883,8884,8885,8886,8887)
  )
GROUP BY
1,2,3,4,5,6,7,8,9
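
The Best_Sell_Price expression above computes a quantity-weighted average price: total sales value divided by total units sold, with the inner CASEs guarding against a divide-by-zero when a line sold nothing (such a line contributes 0 to the numerator and 1 to the denominator). Note the query leans on Teradata's named-expression extension, reusing the aliases num_Sell_Prices, Sales_Val and Max_Sell_Price defined earlier in the select list. A minimal sketch of the same pattern (table and column names here are made up for illustration):

```sql
-- Hypothetical table sales_line: one row per stock-take line.
-- Weighted average price = total value sold / total units sold,
-- with zero-unit lines mapped to 0/1 so the division cannot fail.
SELECT
  plu,
  SUM(CASE WHEN units_sold = 0 THEN 0 ELSE sales_value END) /
  SUM(CASE WHEN units_sold = 0 THEN 1 ELSE units_sold END) AS weighted_avg_price
FROM sales_line
GROUP BY plu;
```

One caveat with this guard: every zero-unit line still adds 1 to the denominator, so the average is slightly understated whenever such lines exist.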
Explanation
--------------------------------------------------
 
  1)First, we lock a distinct proddb03."pseudo table" for read on a RowHash to prevent global deadlock for proddb03.store.  
  2)Next, we lock PRODDB05.ST_Stock_Take_Line for access, we lock PRODDB05.ST_Stock_Take_Period for access, we lock PRODDB05.ST_Stock_Take_Header for access, we lock PRODDB03.Product_Variant for access, we lock PRODDB03.Product_SubGroup for access, we lock proddb03.store for read, we lock PRODDB03.Product_Group for access, we lock PRODDB03.Product_Parent for access, and we lock PRODDB03.product_variant_case for access.  
  3)We execute the following steps in parallel.  
   
  1) We do an all-AMPs RETRIEVE step from PRODDB03.Product_SubGroup by way of an all-rows scan with no residual conditions into Spool 3, which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 3 by row hash. The size of Spool 3 is estimated with high confidence to be 402 rows. The estimated time for this step is 0.05 seconds.
 
   
  2) We do an all-AMPs RETRIEVE step from PRODDB05.ST_Stock_Take_Header by way of an all-rows scan with a condition of ("PRODDB05.ST_Stock_Take_Header.Stock_Take_Period = 238200104") into Spool 4, which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 4 by row hash. The size of Spool 4 is estimated with no confidence to be 418 rows. The estimated time for this step is 0.05 seconds.
 
   
  3) We do an all-AMPs JOIN step from PRODDB03.product_variant_case by way of a RowHash match scan with no residual conditions, which is joined to PRODDB03.Product_Variant with a condition of ("(PRODDB03.Product_Variant.Catchweight_Ind = 'N') AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 8887.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 8886.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 8885.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 8884.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 8883.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 7540.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 6002.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 6001.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 3365.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 22054.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 5176.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 18596.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 6651.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 5185.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 9901.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 6115.) AND ((PRODDB03.Product_Variant.Product_Variant_PLU <> 5226.) AND (PRODDB03.Product_Variant.Product_Variant_PLU <> 3565.))))))))))))))))))"). PRODDB03.product_variant_case and PRODDB03.Product_Variant are joined using a merge join, with a join condition of ("PRODDB03.product_variant_case.Product_Variant_Case_PLU = PRODDB03.Product_Variant.Product_Variant_PLU"). The result goes into Spool 5, which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 5 by row hash. The size of Spool 5 is estimated with no confidence to be 2,383 rows. The estimated time for this step is 0.59 seconds.
 
  4)We execute the following steps in parallel.  
   
  1) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of a RowHash match scan, which is joined to PRODDB03.Product_Parent. Spool 5 and PRODDB03.Product_Parent are joined using a merge join, with a join condition of ("Spool_5.Product_Parent_PLU = PRODDB03.Product_Parent.Product_Parent_PLU"). The result goes into Spool 6, which is built locally on the AMPs. Then we do a SORT to order Spool 6 by row hash. The size of Spool 6 is estimated with no confidence to be 2,383 rows. The estimated time for this step is 0.19 seconds.
 
   
  2) We do an all-AMPs JOIN step from PRODDB03.Product_Group by way of a RowHash match scan with a condition of ("PRODDB03.Product_Group.Product_Group_Code <> 'YC '"), which is joined to Spool 3 (Last Use). PRODDB03.Product_Group and Spool 3 are joined using a merge join, with a join condition of ("Spool_3.Product_Group_Code = PRODDB03.Product_Group.Product_Group_Code"). The result goes into Spool 7, which is duplicated on all AMPs. Then we do a SORT to order Spool 7 by row hash. The size of Spool 7 is estimated with no confidence to be 14,720 rows. The estimated time for this step is 0.09 seconds.
 
   
  3) We do a single-AMP JOIN step from PRODDB05.ST_Stock_Take_Period by way of the unique primary index "Stock_Take_Period = 238200104" with no residual conditions, which is joined to Spool 4 (Last Use). PRODDB05.ST_Stock_Take_Period and Spool 4 are joined using a merge join, with a join condition of ( "Spool_4.Stock_Take_Period = PRODDB05.ST_Stock_Take_Period.Stock_Take_Period"). The result goes into Spool 8, which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 8 by row hash. The size of Spool 8 is estimated with no confidence to be 418 rows. The estimated time for this step is 0.10 seconds.
 
  5)We execute the following steps in parallel.  
   
  1) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of a RowHash match scan, which is joined to Spool 7 (Last Use). Spool 6 and Spool 7 are joined using a merge join, with a join condition of ("Spool_6.Product_SubGroup_Code = Spool_7.Product_Subgroup_Code"). The result goes into Spool 9, which is redistributed by hash code to all AMPs. The size of Spool 9 is estimated with no confidence to be 2,383 rows. The estimated time for this step is 0.34 seconds.
 
   
  2) We do an all-AMPs RETRIEVE step from PRODDB05.ST_Stock_Take_Line by way of an all-rows scan with a condition of ("PRODDB05.ST_Stock_Take_Line.Stock_Take_Period = 238200104") into Spool 10, which is redistributed by hash code to all AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The size of Spool 10 is estimated with no confidence to be 2,260,894 rows. The estimated time for this step is 2 minutes and 33 seconds.
 
   
  3) We do an all-AMPs JOIN step from proddb03.store by way of a RowHash match scan with no residual conditions, which is joined to Spool 8 (Last Use). proddb03.store and Spool 8 are joined using a merge join, with a join condition of ("Spool_8.Store_No = proddb03.store.Store_No"). The result goes into Spool 11, which is duplicated on all AMPs. The size of Spool 11 is estimated with no confidence to be 16,720 rows. The estimated time for this step is 0.64 seconds.
 
  6)We do an all-AMPs JOIN step from Spool 9 (Last Use) by way of an all-rows scan, which is joined to Spool 10 (Last Use). Spool 9 and Spool 10 are joined using a product join, with a join condition of ("(Spool_9.Product_Variant_PLU = Spool_10.Product_Variant_PLU) AND (Spool_10.Product_Variant_PLU = Spool_9.Product_Variant_Case_PLU)"). The result goes into Spool 12, which is built locally on the AMPs. The size of Spool 12 is estimated with no confidence to be 169,580 rows. The estimated time for this step is 55.54 seconds.  
  7)We do an all-AMPs JOIN step from Spool 11 (Last Use) by way of an all-rows scan, which is joined to Spool 12 (Last Use). Spool 11 and Spool 12 are joined using a product join, with a join condition of ("(Spool_12.Stock_Take_Period = Spool_11.Stock_Take_Period) AND (Spool_11.Stock_Take_Period = Spool_12.Stock_Take_Period)"). The result goes into Spool 2, which is built locally on the AMPs. The size of Spool 2 is estimated with no confidence to be 70,884,440 rows. The estimated time for this step is 1 minute and 16 seconds.  
  8)We do a SUM step to aggregate from Spool 2 (Last Use) by way of an all-rows scan, and the grouping identifier in field 1. Aggregate Intermediate Results are computed globally, then placed in Spool 13. The aggregate spool file will not be cached in memory. The size of Spool 13 is estimated to be 70,884,440 rows.  
  9)We do an all-AMPs RETRIEVE step from Spool 13 (Last Use) by way of an all-rows scan into Spool 1, which is built locally on the AMPs. The result spool file will not be cached in memory. The size of Spool 1 is estimated with low confidence to be 70,884,440 rows. The estimated time for this step is 1 hour and 1 minute.  
  10)Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.  
  ->The contents of Spool 1 are sent back to the user as the result of statement 1.  
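
Two things stand out in this plan. First, nearly every spool size is "estimated with no confidence", which in Teradata EXPLAIN output usually means no statistics have been collected on the join and selection columns involved. Second, and more seriously, the only join condition in step 7 between Spool 11 (stores for the period) and Spool 12 (stock-take lines joined to products) is Stock_Take_Period, stated twice; since both spools contain only rows for period 238200104, every store row pairs with every line row: 418 * 169,580 = 70,884,440, exactly the Spool 2 estimate, all of which then has to be aggregated. If ST_Stock_Take_Line carries a store column (an assumption; the table DDL is not shown), the usual fix would be an extra equality predicate, sketched below:

```sql
-- Sketch only: Store_No on ST_Stock_Take_Line is an assumption (DDL not shown).
-- Extra predicate for the WHERE clause, so each line joins only to its own store:
--   AND (ST_Stock_Take_Header.Store_No = ST_Stock_Take_Line.Store_No)

-- Statistics on the main join/selection columns, so the optimizer can replace
-- the "no confidence" estimates with real column demographics:
COLLECT STATISTICS ON PRODDB05.ST_Stock_Take_Line COLUMN Stock_Take_Period;
COLLECT STATISTICS ON PRODDB05.ST_Stock_Take_Line COLUMN Product_Variant_PLU;
COLLECT STATISTICS ON PRODDB05.ST_Stock_Take_Header COLUMN Stock_Take_Period;
COLLECT STATISTICS ON PRODDB03.Product_Variant COLUMN Product_Parent_PLU;
COLLECT STATISTICS ON PRODDB03.product_variant_case COLUMN Product_Variant_Case_PLU;
```

Collecting statistics would firm up the row estimates, but on its own it will not remove the product join; only an additional join predicate can do that.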



     
 
 
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 27 Dec 2016