Archives of the TeradataForum

Message Posted: Thu, 07 Dec 2006 @ 23:34:39 GMT


     

Subj:   Partition elimination using substitution
 
From:   Lee, James

I read the following post, which indicates you can get partition elimination when using an import file in BTEQ:

www.teradataforum.com/teradata/20061013_083308.htm


But I can't seem to get it to work.

In the following example, the table global_omniture_hit is partitioned by day on cast(omniture_date_time as date).

What I need to do is dynamically determine the date to select from the table at run time. There is only one date in the table at a time. The table is partitioned because it improves insert performance into another table that is also partitioned by day.
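For context, the partitioning looks roughly like this (a hypothetical DDL sketch; the real column list and primary index are abbreviated, and the date range shown is made up):

```sql
-- Hypothetical sketch of the staging table; columns and ranges are illustrative.
CREATE TABLE staging.global_omniture_hit
     (
       omniture_date_time TIMESTAMP(0) NOT NULL
       -- ... remaining hit columns ...
     )
PRIMARY INDEX ( ... )
PARTITION BY RANGE_N (
    CAST(omniture_date_time AS DATE)
        BETWEEN DATE '2006-01-01' AND DATE '2007-12-31'
        EACH INTERVAL '1' DAY );
```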

Here's what I tried:

     -------------------------------------------------------------
     Teradata BTEQ 08.02.03.03 for WIN32.
     Copyright 1984-2006, NCR Corporation. ALL RIGHTS RESERVED.
     Enter your logon or BTEQ command:
     .logon 192.168.10.240/jklee

     .logon 192.168.10.240/jklee
     Password:

     *** Logon successfully completed.
     *** Teradata Database Release is V2R.06.01.00.02
     *** Teradata Database Version is 06.01.00.39
     *** Transaction Semantics are BTET.
     *** Character Set Name is 'ASCII'.

     *** Total elapsed time was 1 second.

     BTEQ -- Enter your DBC/SQL request or BTEQ command:


     .export data file="test.dat"

     .export data file="test.dat"
     *** To reset export, type .EXPORT RESET
     BTEQ -- Enter your DBC/SQL request or BTEQ command:
     select current_date;

     select current_date;

     *** Success, Stmt# 1 ActivityCount = 1
     *** Query completed. One row found. One column returned.
     *** Total elapsed time was 1 second.


     BTEQ -- Enter your DBC/SQL request or BTEQ command:
     .export reset;

     .export reset;
     *** Output returned to console.
     BTEQ -- Enter your DBC/SQL request or BTEQ command:
     .import data file="test.dat";

     .import data file="test.dat";
     BTEQ -- Enter your DBC/SQL request or BTEQ command:

     explain
     using (d1 date)
     select *
     from staging.global_omniture_hit
     where cast(omniture_date_time as date) = :d1
     ;

     *** Help information returned. 17 rows.
     *** Total elapsed time was 1 second.

     Explanation
     ------------------------------------------------------------------------
     ---
     1) First, we lock a distinct staging."pseudo table" for read on a
     RowHash to prevent global deadlock for staging.global_omniture_hit.
     2) Next, we lock staging.global_omniture_hit for read.
     3) We do an all-AMPs RETRIEVE step from staging.global_omniture_hit
     by way of an all-rows scan with a condition of (
     "(CAST((staging.global_omniture_hit.omniture_date_time) AS DATE))=
     :d1") into Spool 1 (group_amps), which is built locally on the
     AMPs. The input table will not be cached in memory, but it is
     eligible for synchronized scanning. The result spool file will
     not be cached in memory. The size of Spool 1 is estimated with no
     confidence to be 1,368,069 rows. The estimated time for this step
     is 25 minutes and 16 seconds.
     4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
     -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 25 minutes and 16
     seconds.

     BTEQ -- Enter your DBC/SQL request or BTEQ command:

     -----------------

If I hardcode the date, I get partition elimination:

     ------------------------------------------------
     explain
     select *
     from staging.global_omniture_hit
     where cast(omniture_date_time as date ) = '2006-12-06'
     ;

     explain
     select *
     from staging.global_omniture_hit
     where cast(omniture_date_time as date ) = '2006-12-06'
     ;

     *** Help information returned. 16 rows.
     *** Total elapsed time was 1 second.

     Explanation
     ------------------------------------------------------------------------
     ---
     1) First, we lock a distinct staging."pseudo table" for read on a
     RowHash to prevent global deadlock for staging.global_omniture_hit.
     2) Next, we lock staging.global_omniture_hit for read.
     3) We do an all-AMPs RETRIEVE step from a single partition of
     staging.global_omniture_hit with a condition of (
     "(CAST((staging.global_omniture_hit.omniture_date_time) AS DATE))=
     DATE '2006-12-06'") into Spool 1 (group_amps), which is built
     locally on the AMPs. The input table will not be cached in memory,
     but it is eligible for synchronized scanning. The result spool
     file will not be cached in memory. The size of Spool 1 is
     estimated with no confidence to be 1,368,069 rows. The estimated
     time for this step is 4 minutes and 34 seconds.
     4) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
     -> The contents of Spool 1 are sent back to the user as the result of
     statement 1. The total estimated time is 4 minutes and 34 seconds.

     BTEQ -- Enter your DBC/SQL request or BTEQ command:
     --------------

Any tips on what is going on and how I can achieve what I want to do?
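In case it helps, the fallback I'm considering is to have BTEQ generate a script with the date spliced in as a literal and then run it (a sketch only; the filename is made up, and the generated file may need title/format settings cleaned up). I'd prefer a parameterized approach if one exists:

```sql
/* Write a complete SELECT, with today's date as a literal, to a script file. */
.export report file="runme.btq"

select 'select * from staging.global_omniture_hit '
    || 'where cast(omniture_date_time as date) = DATE '''
    || (current_date (format 'yyyy-mm-dd')) (char(10))
    || ''';' (title '');

.export reset

/* Execute the generated script; the optimizer now sees a literal date
   and can do static partition elimination. */
.run file="runme.btq"
```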


James K. Lee
Data Architect
WebMD Health Services



 
 
 
 
 
 
 
 
 
 
Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 27 Dec 2016