Home Page for the TeradataForum
 

Archives of the TeradataForum

Message Posted: Tue, 17 Jul 2007 @ 10:19:23 GMT


     
 


Subj:   Slow Hash Join
 
From:   Ureta, Cesar

Hi all,

I have a query that runs very quickly on the development server (about 1 second) but takes several minutes on the production server.

This is the query:

     sel tablename, sum(currentperm) from dbc.tablesize where
     databasename like 'edw0dev_%'
     group by 1
     order by 2 desc

And here's the explain on the development server:

Explanation
--------------------------------------------------
 
  1)First, we lock DBC.DataBaseSpace for access, we lock DBC.TVM for access, and we lock DBC.Dbase for access.  
  2)Next, we flush the DISKSPACE and AMPUSAGE caches.  
  3)We do an all-AMPs RETRIEVE step from DBC.Dbase by way of an all-rows scan with a condition of ("DBC.Dbase.DatabaseName LIKE 'edw0dev_%'") into Spool 4 (all_amps), which is duplicated on all AMPs. The size of Spool 4 is estimated with no confidence to be 5,824 rows. The estimated time for this step is 0.01 seconds.  
  4)We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an all-rows scan, which is joined to DBC.TVM by way of an all-rows scan with no residual conditions. Spool 4 and DBC.TVM are joined using a single partition hash join, with a join condition of ("DBC.TVM.DatabaseId = DatabaseId"). The result goes into Spool 5 (all_amps), which is redistributed by hash code to all AMPs into 4 hash join partitions. The size of Spool 5 is estimated with no confidence to be 44,790 rows. The estimated time for this step is 0.30 seconds.  
  5)We do an all-AMPs RETRIEVE step from DBC.DataBaseSpace by way of an all-rows scan with a condition of ("DBC.DataBaseSpace.TableId <> '000000000000'XB") into Spool 6 (all_amps) fanned out into 4 hash join partitions, which is redistributed by hash code to all AMPs. The size of Spool 6 is estimated with no confidence to be 529,667 rows. The estimated time for this step is 0.62 seconds.  
  6)We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of an all-rows scan, which is joined to Spool 6 (Last Use) by way of an all-rows scan. Spool 5 and Spool 6 are joined using a hash join of 4 partitions, with a join condition of ("(DatabaseId = DatabaseId) AND ((DatabaseId = DatabaseId) AND (TableId = TVMId ))"). The result goes into Spool 3 (all_amps), which is built locally on the AMPs. The size of Spool 3 is estimated with no confidence to be 476,930 rows. The estimated time for this step is 0.41 seconds.  
  7)We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by way of an all-rows scan, and the grouping identifier in field 3. Aggregate Intermediate Results are computed globally, then placed in Spool 7. The size of Spool 7 is estimated with no confidence to be 357,698 rows. The estimated time for this step is 1.78 seconds.  
  8)We do an all-AMPs RETRIEVE step from Spool 7 (Last Use) by way of an all-rows scan into Spool 1 (group_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 1 by the sort key in spool field1. The size of Spool 1 is estimated with no confidence to be 357,698 rows. The estimated time for this step is 0.26 seconds.  
  9)Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.  
  ->The contents of Spool 1 are sent back to the user as the result of statement 1.  


Here's the EXPLAIN from the production server (the database name in the predicate changes, but the number of tables is the same):

Explanation
--------------------------------------------------
 
  1)First, we lock DBC.DataBaseSpace for access, we lock DBC.TVM for access, and we lock DBC.Dbase for access.  
  2)Next, we flush the DISKSPACE and AMPUSAGE caches.  
  3)We execute the following steps in parallel.  
   
  1) We do an all-AMPs RETRIEVE step from DBC.Dbase by way of an all-rows scan with a condition of ("DBC.Dbase.DatabaseName LIKE 'prod_%'") into Spool 4 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 4 by row hash. The size of Spool 4 is estimated with no confidence to be 479 rows. The estimated time for this step is 0.01 seconds.
 
   
  2) We do an all-AMPs RETRIEVE step from DBC.TVM by way of an all-rows scan with no residual conditions into Spool 5 (all_amps), which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 5 by row hash. The size of Spool 5 is estimated with low confidence to be 16,872 rows. The estimated time for this step is 0.08 seconds.
  4)We execute the following steps in parallel.  
   
  1) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of a RowHash match scan, which is joined to Spool 5 (Last Use) by way of a RowHash match scan. Spool 4 and Spool 5 are joined using a merge join, with a join condition of ("DatabaseId = DatabaseId"). The result goes into Spool 6 (all_amps), which is built locally on the AMPs. The size of Spool 6 is estimated with no confidence to be 15,192 rows. The estimated time for this step is 0.02 seconds.
 
   
  2) We do an all-AMPs RETRIEVE step from DBC.DataBaseSpace by way of an all-rows scan with a condition of ("DBC.DataBaseSpace.TableId <> '000000000000'XB") into Spool 7 (all_amps), which is redistributed by hash code to all AMPs. The size of Spool 7 is estimated with no confidence to be 1,037,286 rows. The estimated time for this step is 0.36 seconds.
 
  5)We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an all-rows scan, which is joined to Spool 7 (Last Use) by way of an all-rows scan. Spool 6 and Spool 7 are joined using a single partition hash join, with a join condition of ("(DatabaseId = DatabaseId) AND ((DatabaseId = DatabaseId) AND (TableId = TVMId ))"). The result goes into Spool 3 (all_amps), which is built locally on the AMPs. The size of Spool 3 is estimated with no confidence to be 933,948 rows. The estimated time for this step is 0.23 seconds.  
  6)We do an all-AMPs SUM step to aggregate from Spool 3 (Last Use) by way of an all-rows scan, and the grouping identifier in field 3. Aggregate Intermediate Results are computed globally, then placed in Spool 8. The size of Spool 8 is estimated with no confidence to be 145,882 rows. The estimated time for this step is 0.44 seconds.  
  7)We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by way of an all-rows scan into Spool 1 (group_amps), which is built locally on the AMPs. Then we do a SORT to order Spool 1 by the sort key in spool field1. The size of Spool 1 is estimated with no confidence to be 145,882 rows. The estimated time for this step is 0.03 seconds.  
  8)Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.  
  ->The contents of Spool 1 are sent back to the user as the result of statement 1.  


The slow step is the hash join in step 5. I can make the query run sub-second by slightly reducing the number of rows (perhaps the small "build" table then fits into memory). The main difference between the two plans is that the development server does the hash join in four partitions, while the production server uses a single partition.
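One way to take pressure off that join (just a sketch, and it assumes you have SELECT access on the underlying dictionary tables DBC.Dbase, DBC.TVM and DBC.DataBaseSpace that the DBC.TableSize view is built on, and that the column names CurrentPermSpace and TVMName match your release) is to pre-aggregate the per-AMP space rows to one row per table before the name join, so the large probe-side spool collapses first:

```sql
-- Hypothetical rewrite: aggregate DBC.DataBaseSpace down to one row
-- per table before joining to the name tables, so the hash join's
-- probe side is a few thousand rows instead of ~1M per-AMP rows.
SELECT t.TVMName AS TableName,
       s.SumPerm
FROM   DBC.Dbase d
JOIN   DBC.TVM   t
       ON  t.DatabaseId = d.DatabaseId
JOIN  (SELECT DatabaseId, TableId,
              SUM(CurrentPermSpace) AS SumPerm
       FROM   DBC.DataBaseSpace
       WHERE  TableId <> '000000000000'XB   -- skip the database-level row
       GROUP  BY 1, 2) s
       ON  s.DatabaseId = d.DatabaseId
       AND s.TableId    = t.TVMId
WHERE  d.DatabaseName LIKE 'edw0dev_%'
ORDER  BY 2 DESC;
```

Whether this actually changes the plan depends on the optimizer, but it removes the need to carry every per-AMP row through the join.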

HTMemAlloc and SkewAllowance are at their default values on both servers. I experimented with them a little, but the documentation warns against increasing HTMemAlloc too much anyway, so that is of no use to me.
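For reference, both of these are DBS Control performance fields and are changed with the dbscontrol console utility (a sketch only; the field number for HTMemAlloc varies by release, so check the DISPLAY output first, and this kind of change should go through your DBA or Teradata support):

```
dbscontrol
display performance        /* list the Performance group fields        */
modify performance <n> = 2 /* <n> = HTMemAlloc's field number on your  */
                           /* system; value is percent of memory       */
write                      /* commit the change                        */
```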

Is there anything I can do to avoid being at the mercy of whether the hash table fits into memory?
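One thing worth checking first (a sketch): every spool estimate in both plans above is "no confidence", so asking the optimizer which statistics it is missing may be enough to get the fanned-out plan on production too:

```sql
-- Ask the optimizer to list the statistics it would like to have;
-- its recommendations appear as extra lines at the end of the EXPLAIN.
DIAGNOSTIC HELPSTATS ON FOR SESSION;

EXPLAIN
SELECT TableName, SUM(CurrentPerm)
FROM   DBC.TableSize
WHERE  DatabaseName LIKE 'edw0dev_%'
GROUP  BY 1
ORDER  BY 2 DESC;
```

Note that collecting statistics on DBC dictionary tables is restricted on some releases, so not every recommendation it prints may be actionable.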


Cheers,

Cesar



     
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 27 Dec 2016