
Archives of the TeradataForum

Message Posted: Mon, 26 Aug 2013 @ 17:08:10 GMT


Subj:   Re: CPU consumption on compression
From:   Lévai Ákos

Hi Anonym,

We investigated MVC thoroughly, so I can share some details about our findings in a production environment.

To summarize our experiences (measured on V13 and V13.10; earlier versions, e.g. V2R5, may behave significantly differently):

- If we examine a given SELECT query in isolation, its CPU usage is typically similar to that on the uncompressed table (an increase or decrease of a few percent or more, depending on the operation type)
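For reference, MVC is declared per column in the table DDL by listing the most frequent values to be compressed. A minimal sketch, with hypothetical table and column names:

```sql
-- Hypothetical example: MVC stores the listed values once in the table
-- header, so rows containing them carry only a few presence bits.
CREATE TABLE customer
( customer_id INTEGER NOT NULL
, city        VARCHAR(30) COMPRESS ('Budapest', 'London', 'Paris')
, status      CHAR(1)     COMPRESS ('A', 'I')
)
PRIMARY INDEX (customer_id);
```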

- If we look at the system level, something surprising happens... In a production environment, after compressing most of the tables (70% of the occupied storage space), we experienced a total CPU usage DECREASE of more than 10% globally. This seems strange, but the explanation lies in caching. If the tables are compressed, significantly fewer data blocks must be loaded from disk and moved around in memory during query processing. (Caching and data movement are also CPU-intensive operations, and they happen several times during a query execution.) Having less data to cache also reduces cache contention.

- INSERT/UPDATE operations will require more CPU, by as much as 20-30%. However, the processing cost of a data load is dominated by the transformation phase, so the increase in the final step is not a serious issue; the savings from the many subsequent data accesses will more than compensate for it.

- If you spend some of the reclaimed space on properly chosen indexes (typically hash/join indexes), that will dramatically reduce your CPU usage.
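As an illustration, a sketch of an aggregate join index that trades some of the reclaimed space for CPU savings on a frequently run aggregation (table and column names are hypothetical):

```sql
-- Hypothetical example: the optimizer can answer the grouped query
-- from this join index instead of scanning the base table.
CREATE JOIN INDEX ji_sales_by_store AS
SELECT store_id
     , SUM(amount) AS total_amount
     , COUNT(*)    AS sales_cnt
FROM sales
GROUP BY store_id;
```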

- Do not forget the reduced backup windows (smaller tables) and, when it matters, shorter disaster recovery times.

- In conclusion, MVC is well worth implementing not only from a storage standpoint, but also from a performance (CPU & I/O) point of view.

- ALC (algorithmic) compression typically consumes significant CPU resources (depending on the chosen algorithm) in both the compression and decompression phases, and is really effective only in special cases (e.g. URLs)
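ALC is declared in the DDL with a compress/decompress function pair. A minimal sketch using the standard Unicode-to-UTF8 pair shipped in TD_SYSFNLIB; the table and column names are hypothetical:

```sql
-- Hypothetical example: ALC invokes the named UDF pair on write and read.
-- Most effective for long, mostly-ASCII Unicode strings such as URLs.
CREATE TABLE web_click
( click_id INTEGER NOT NULL
, url      VARCHAR(2000) CHARACTER SET UNICODE
           COMPRESS USING TD_SYSFNLIB.TransUnicodeToUTF8
           DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
)
PRIMARY INDEX (click_id);
```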

- BLC is really CPU-intensive; however, it is often handled by a dedicated hardware element (if available), which will not affect the 'regular' CPU usage. Since it operates 'between' the storage and the SQL-processing parts of the system, it does not provide the in-memory advantages of MVC.
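On releases that support it, BLC can also be requested for newly created tables via a query band. This is a sketch under the assumption that the BLOCKCOMPRESSION query band option is available on your release; verify the exact option name against your version's documentation:

```sql
-- Assumption: the BLOCKCOMPRESSION query band option exists on this
-- release (system-level BLC settings in DBS Control still apply).
SET QUERY_BAND = 'BLOCKCOMPRESSION=YES;' FOR SESSION;

-- Tables created in this session are candidates for block-level compression.
CREATE TABLE sales_archive AS sales WITH DATA;
```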

Akos Levai

Teradata Master, OCP

Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 24 Jul 2020