Archives of the TeradataForum

Message Posted: Wed, 31 Jan 2007 @ 21:18:41 GMT
To add more "discussion" (would that be straw?) to the "strawman": I agree with the thoughts given in this thread, but it actually made me think about why a tool may be of more value than we are assuming. Keeping with the premise of "cost/benefit" as the measure of the tool's value, say, in the quantifiable value of deferring the next system upgrade for a few months, we also need to consider compute resources along with storage space. And let me further offer for your review that, in standard DW query work, "compute resources" can generally be equated to I/O in the form of the spool files required for the unit of work.

As we know, in R6.1.xxx compression is now carried into the spool files, and IMO we are seeing more often than before (correct me if this is not what other people are seeing) that "small" tables (to quantify, let's use 1GB as an example) are being duplicated across the AMPs to facilitate joins to "large" tables. While I don't know what the "costing" thresholds are for this, if we use the 1GB table as an example and say there are 100 AMPs, then we have created a 100GB spool image (obviously assuming all columns in the table are spooled) that could benefit from compression. And if this query is run often and concurrently, that could easily become 1TB of spool. So, using a 30% MVC saving (a not uncommon value thrown around), we could reduce the spool requirement for this one scenario by 300GB of spool and "spool processing".

So, knowing there is still no "Compression Wizard" for the DBA, how valuable would that tool be if the compression savings could be given a "workload" (like the other "Wizard" tools) and estimate spool savings?
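For what it's worth, here is the back-of-envelope arithmetic behind that scenario as a small Python sketch. The table size, AMP count, and 30% MVC figure are just the illustrative numbers from above, and the concurrency of 10 is an assumed value that gets from 100GB to the 1TB mentioned; none of these are measurements.

    # Rough spool estimate for a small table duplicated to all AMPs for a join.
    # All inputs are illustrative assumptions, not measured values.
    table_size_gb = 1.0        # "small" table duplicated to every AMP
    amp_count = 100            # AMPs in the system
    concurrent_runs = 10       # assumed concurrent/repeated executions
    mvc_saving = 0.30          # assumed MVC compression saving (30%)

    # Duplicating the full table builds one spool copy per AMP.
    spool_per_run_gb = table_size_gb * amp_count           # 100 GB
    total_spool_gb = spool_per_run_gb * concurrent_runs    # 1,000 GB (~1 TB)
    saved_gb = total_spool_gb * mvc_saving                 # ~300 GB avoided

    print(f"Spool per run: {spool_per_run_gb:,.0f} GB")
    print(f"Total spool:   {total_spool_gb:,.0f} GB")
    print(f"MVC saving:    {saved_gb:,.0f} GB")

The point of the sketch is only that the saving scales with both the AMP count and the concurrency, which is why even a 1GB table can translate into hundreds of GB of avoided spool and "spool processing".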