Archives of the TeradataForum
Message Posted: Fri, 02 Feb 2007 @ 17:48:33 GMT
Subj: Re: How Do You Measure/Validate Compression Savings?
From: Judge, James
A few replies on this topic, based on some observed measurements:
> I think MVC might actually lead to an *increase* in, say, CPU usage if MVC is liberally applied to "small" tables on a large scale. If there is compressing, then there must be uncompressing going on somewhere down the line, no? It is hard to measure all the effects without access to a good test lab, though.
There was a Teradata PARTNERS user conference presentation on this topic a few years back (R5.x timeframe) which quantified different scenarios. But in my observations, unless the workload is PI singleton-row inserts/retrievals, a set process that gets the benefit of 30% compression, and hence 30% fewer I/Os for that object, always benefits from reduced CPU consumption as well.
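For anyone who hasn't applied MVC, a minimal sketch of what this looks like in DDL; the table, column, and value lists below are hypothetical, and the DBC.TableSize query is the usual way to validate the space actually saved:

    -- Hypothetical sketch: MVC lists the most frequent values for a
    -- column; those values are stored once in the table header rather
    -- than in every row, shrinking the rows that contain them.
    CREATE TABLE sandbox.sales_fact
    ( sale_id     INTEGER NOT NULL
    , region_code CHAR(2)       COMPRESS ('NE','SE','MW','SW')
    , status      VARCHAR(10)   COMPRESS ('OPEN','CLOSED')
    , amount      DECIMAL(12,2) COMPRESS (0.00)
    )
    PRIMARY INDEX (sale_id);

    -- Validate the savings by comparing CurrentPerm before and after:
    SELECT DatabaseName, TableName, SUM(CurrentPerm) AS PermBytes
    FROM   DBC.TableSize
    WHERE  DatabaseName = 'sandbox'      -- hypothetical database
    AND    TableName    = 'sales_fact'
    GROUP BY 1, 2;

The percentage drop in CurrentPerm is the compression figure (the "30%" in the discussion above) that you would then expect to see reflected in fewer I/Os for full-table scans of that object.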
> The idea of having a tool for MVC never occurred to me before I read the thread, probably because of the fear of compressing "small" tables and causing weird adverse effects elsewhere.
I haven't come across that scenario.
> Granted, on 'large' systems (more than 100 AMPs), you generally try to avoid redistribution of tables around 1 GB if at all possible.
If the 1 GB table is being joined to tables of hundreds of GBs or TBs, then I have seen this as not uncommon in the R6.x releases.
> But how would you calculate queries that touch many objects? What percent of spool reduction is due to compression, once natural data growth over time is netted out?
Granted, the benefit is diluted across an "n-way" join, but a simple capture of AMP CPU and AMP I/O before and after the test shows the benefit. Why couldn't a tool do that (or at least estimate it from the explain plan's I/O requirements, on the assumption that 30% compression = 30% less I/O for that object)?
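A sketch of that before/after capture, assuming DBQL query logging has been enabled (BEGIN QUERY LOGGING) for the test user; the user name and query-text filter are hypothetical:

    -- Sketch: pull AMP CPU and I/O totals for the test query runs from
    -- DBQL, then compare the pre- and post-compression executions.
    SELECT QueryID
         , StartTime
         , AMPCPUTime        -- total AMP CPU seconds for the query
         , TotalIOCount      -- total logical I/Os issued by the query
    FROM   DBC.DBQLogTbl
    WHERE  UserName  = 'TESTUSER'            -- hypothetical test user
    AND    QueryText LIKE '%sales_fact%'     -- hypothetical filter
    ORDER BY StartTime;

Running the same test query against the uncompressed and compressed versions of the table and comparing these two columns gives the per-object CPU and I/O benefit directly, without having to untangle it from the rest of the workload.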