Archives of the TeradataForum

Message Posted: Thu, 18 Oct 2001 @ 14:29:49 GMT


     


Subj:   Insert select with sample
 
From:   Ruth Fenwick

Hi All,

I was trying to build a small sample set of a 106G table as: insert into little_table select * from big_table sample .05;
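Spelled out with the actual object names that appear in the explain below, the statement was roughly:

  -- copy roughly 5% of the rows into the (NUSI-less) copy of the table
  INSERT INTO hbc_zellers.z_mb_detail_ruth2
  SELECT *
  FROM hbc_zellers.z_mb_detail
  SAMPLE .05;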

My spool got up over 132G, while the original table is 106G including the indexes. The tables have an identical PI; in fact, the only difference is that the copy doesn't have the NUSIs defined on it.

I've attached the piece of the EXPLAIN that indicates it will read the whole table into spool. Does this make sense? I would have expected it to load only the sampled amount into spool, but it SAMPLES the spool, not the table.

  4) We do an all-AMPs RETRIEVE step from hbc_zellers.z_mb_detail by
     way of an all-rows scan with no residual conditions into Spool 2,
     which is built locally on the AMPs.  The input table will not be
     cached in memory, but it is eligible for synchronized scanning.
     The result spool file will not be cached in memory.  The size of
     Spool 2 is estimated with high confidence to be 779,947,617 rows.
     The estimated time for this step is 21 hours and 39 minutes.

  5) We do an all-AMPs SAMPLING step from Spool 2 (Last Use) by way of
     an all-rows scan into Spool 1, which is built locally on the AMPs.
     Then we do a SORT to order Spool 1 by row hash.  Samples are
     specified as a fraction of total rows.

  6) We do a MERGE into hbc_zellers.z_mb_detail_ruth2 from Spool 1
     (Last Use).


     