Archives of the TeradataForum

Message Posted: Wed, 04 Apr 2007 @ 16:27:04 GMT


     


Subj:   Re: Summarizing dbc.qrylog table
 
From:   Barner Eric

Here are some helpful suggestions.

1) I would suggest keeping only about 3-6 months of DBQL data in the DBC tables.

2) Write a nightly UNIX cron job (if MP-RAS) to extract the current day's DBQL data from the DBC tables, insert it into a cloned DBQL table in another database, and then delete the oldest rows from the back end of the DBC tables (see the sketch after the PPI example below).

3) For your new tables (clones of the DBQL tables), I would suggest implementing Multivalue Compression, as you will gain substantial savings on some of the DBQL tables.

4) Create a new derived CollectDate column from CollectTimeStamp, StartTime, or whichever timestamp column you prefer, and use that column in a partitioned primary index.


This will help avoid full-table scans when querying for specific slices of time, especially in tables holding 3 years of data.

Example PPI:

     PRIMARY INDEX XPI_DBQLOGTBL ( ProcID, CollectTimeStamp )
     PARTITION BY RANGE_N( CollectDate BETWEEN DATE '2004-01-01' AND DATE '2010-01-01'
                           EACH INTERVAL '1' DAY )
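
To make (3) and (4) concrete, the cloned history table could look roughly like the one below. The database name DBQL_HIST, the table name DBQLogTbl_Hst, the trimmed-down column list, the data types and the COMPRESS value lists are all illustrative guesses to adjust to your release and your data, not the real DBC definitions.

     CREATE TABLE DBQL_HIST.DBQLogTbl_Hst
          ( ProcID            DECIMAL(5,0)  NOT NULL
          , CollectTimeStamp  TIMESTAMP(2)  NOT NULL
          , CollectDate       DATE          NOT NULL              -- derived at load time (item 4)
          , UserName          CHAR(30)      COMPRESS ('DBC', 'SYSDBA')  -- illustrative value list
          , ErrorCode         INTEGER       COMPRESS (0)
          , QueryText         VARCHAR(6000)
          )
     PRIMARY INDEX XPI_DBQLOGTBL ( ProcID, CollectTimeStamp )
     PARTITION BY RANGE_N( CollectDate BETWEEN DATE '2004-01-01' AND DATE '2010-01-01'
                           EACH INTERVAL '1' DAY );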
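
And the nightly job from (2) could then be a BTEQ script kicked off by cron that runs something along these lines. The one-day lag on the insert, the roughly 6-month (180-day) retention on the delete and the column list are again assumptions to adjust for your site.

     /* Copy yesterday's rows into the history clone */
     INSERT INTO DBQL_HIST.DBQLogTbl_Hst
     SELECT  ProcID
           , CollectTimeStamp
           , CAST(CollectTimeStamp AS DATE)          /* the derived CollectDate */
           , UserName
           , ErrorCode
           , QueryText
     FROM    DBC.DBQLogTbl
     WHERE   CAST(CollectTimeStamp AS DATE) = CURRENT_DATE - 1;

     /* Trim the back end of the dbc table, keeping about 6 months (item 1) */
     DELETE FROM DBC.DBQLogTbl
     WHERE   CAST(CollectTimeStamp AS DATE) < CURRENT_DATE - 180;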

Hope this helps.

Eric Barner



     