Archives of the TeradataForum
Message Posted: Wed, 31 Jan 2007 @ 22:42:46 GMT
I would find that useful. Granted, on 'large' systems (more than 100 AMPs) you generally try to avoid redistributing tables of around 1 GB or more if at all possible. But how would you calculate the benefit for queries that touch many objects? What percentage of the spool reduction is due to compression, net of natural data growth over time? Also, if the data is predicated, as in WHERE date BETWEEN X AND Y, how would you measure it accurately? I found this extremely difficult to measure for complex queries. But reduced load times, reduced I/O for collecting statistics, and index recreation should be easy and useful to quantify. -Eric
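
As a rough illustration of the "easy to quantify" side, here is a minimal sketch of how one might compare table footprint and workload I/O before and after applying compression, using the standard DBC views. It assumes DBQL logging is enabled, and the database, table, and date-range values are placeholders only:

   -- Table footprint (compare the result before and after compression)
   SELECT  DataBaseName,
           TableName,
           SUM(CurrentPerm) / 1024**3 AS CurrentPermGB
   FROM    DBC.TableSizeV
   WHERE   DataBaseName = 'MyDB'       -- placeholder database
   AND     TableName    = 'MyTable'    -- placeholder table
   GROUP BY 1, 2;

   -- Average logical I/O and peak spool for the workload, from DBQL
   -- (run once for a window before compression, once for a window after)
   SELECT  CAST(StartTime AS DATE)     AS RunDate,
           AVG(TotalIOCount)           AS AvgIOCount,
           AVG(SpoolUsage) / 1024**3   AS AvgSpoolGB
   FROM    DBC.DBQLogTbl
   WHERE   QueryText LIKE '%MyTable%'  -- crude filter; a query band is more precise
   AND     StartTime BETWEEN DATE '2007-01-01' AND DATE '2007-01-31'
   GROUP BY 1
   ORDER BY 1;

This only isolates the compression effect if the data volume and the queries are held roughly constant across the two windows, which is exactly the difficulty raised above for complex, predicated queries.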