Archives of the TeradataForum

Message Posted: Mon, 26 Aug 2013 @ 13:29:51 GMT


     


Subj:   Re: CPU consumption on compression
 
From:   Dave Wellman

Hi,

Historically, I have always found that compression gives an 'overall' reduction in CPU usage - provided, of course, that you get enough compression. However, with the introduction of the new compression options in TD 13.10 (I'll assume that you're on this release or higher) it is not so straightforward.

Multi-Value Compression (MVC): this often gives compression of 25-30% and in my experience has always been a 'win' in terms of CPU usage. I think the comment you referred to, about 'the increase in CPU is more than compensated by the decrease in I/O', was originally written in relation to MVC. Bear in mind that it takes CPU cycles to issue an I/O. By compressing the data you need fewer I/Os to read 'x' amount of data, so you save the CPU cycles required to issue those I/Os. With MVC the cost of decompressing is trivial; it is basically an indexed lookup into an in-core array (in the table header). So MVC is truly brilliant for SELECT processing. Obviously, where you're doing maintenance against the table (INS and UPD) there is a slightly bigger overhead in building the new row, which may involve moving bytes of data if, for example, an UPD changes a value from a compressed one to a non-compressed one.
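For illustration, MVC is declared per column in the DDL, with the value list held in the table header. A minimal sketch (the table and column names here are my own, not from your system):

    CREATE TABLE Sales_Fact
    ( Txn_Id      INTEGER NOT NULL
    , Store_State CHAR(2)       COMPRESS ('CA','NY','TX') -- frequent values stored once in the table header
    , Amount      DECIMAL(10,2) COMPRESS (0.00)           -- NULLs in this column are compressed as well
    )
    PRIMARY INDEX (Txn_Id);

Decompression on SELECT is then just the indexed lookup described above, which is why the CPU cost is so low.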

Algorithmic Compression (ALC): this is much more dependent on the algorithm being used. Also remember that it requires CPU cycles for every operation against the column. An INS requires that the data value is compressed, and any operation that reads the column value (a SELECT of the column, or anything where the column appears in the selection or join clauses) requires the data value to be decompressed. I always tell students to think of a Word document being zipped: if you want to check the content of the document, you have to unzip it.
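By way of example, ALC attaches a compress/decompress UDF pair to the column. A sketch using two of the Teradata-supplied functions that ship with TD 13.10 (again, the table and column names are just for illustration):

    CREATE TABLE Msg_Log
    ( Msg_Id   INTEGER NOT NULL
    , Msg_Text VARCHAR(1000) CHARACTER SET UNICODE
               COMPRESS   USING TD_SYSFNLIB.TransUnicodeToUTF8 -- runs on every INS
               DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode -- runs whenever the column value is read
    )
    PRIMARY INDEX (Msg_Id);

Every read of Msg_Text pays the decompress cost - the 'unzip the document' effect described above.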

Block Level Compression (BLC): this always uses the same routine, but I thought that on the 25xx appliances the compression/decompression work was offloaded to a separate CPU, so I'm not sure you'd see anything in the DBQL data (maybe it was the 26xx?).
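One way to check is to run the same query against compressed and uncompressed copies of a table and compare what DBQL records for each. A rough sketch, assuming DBQL query logging is enabled (the user name is a placeholder):

    SELECT QueryID
         , AMPCPUTime   -- total AMP CPU seconds charged to the query
         , TotalIOCount -- logical I/Os issued by the query
    FROM   DBC.DBQLogTbl
    WHERE  UserName = 'TEST_USER' -- placeholder
      AND  CAST(StartTime AS DATE) = CURRENT_DATE
    ORDER  BY StartTime;

If the 25xx really does offload the BLC work, the CPU difference may not show up here.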

If you're using ALC, which routine are you using? Is it your own, or one of the Teradata-supplied ones?

What version of TD are you using?

Are your SELECTs full table scans with selection against the compressed columns? If so, are these representative of 'real' queries?

Just some thoughts.


Dave

Ward Analytics Ltd - Information in motion (www.ward-analytics.com)



     