Archives of the TeradataForum

Message Posted: Fri, 05 Jan 2001 @ 12:31:23 GMT
Jeremy,

John's right in that the calculations need to be looked at carefully. Some specific things to check (some in this area, some not):

- When sizing individual columns, remember that the number of bytes for a column is based on the data type on the client, not on Teradata. This is especially true of DECIMAL data types being sent back to an IBM mainframe. In a Teradata table, a DECIMAL(2,x) column requires 1 byte, whereas the same column on an IBM mainframe requires 2 bytes; DECIMAL(18,x) on Teradata is 8 bytes, but on a mainframe is 10 bytes. The spool file contains the column formatted/converted for the client environment. (There is a rough worked example below.)

- The same thing may happen due to data type changes. A common one is DATE columns being formatted and converted to CHAR columns: a date formatted as 'yyyy-mm-dd' requires 10 bytes in the spool file, but only 4 in the table. (See the second sketch below.)

You said that you ran the two selects independently and that the second one used about 25 GB. Was this test run using the same program? I'm thinking here of the difference between FastExport and BTEQ when preparing the final spool file for returning to the user. FastExport builds the final spool file (as seen in the Explain output) but then breaks it up into blocks and redistributes these blocks prior to returning any data. I haven't specifically checked this in the past, but I'm willing to believe that these extra copies of the output require extra spool space (maybe doubling it?).

Also, were the test and the real query both run using the same export mode? Switching between (for instance) Record mode and Field mode can change the spool file sizes dramatically. (The third sketch below shows the two modes in BTEQ.)

Cheers,
Dave
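To put some numbers on the DECIMAL point above, here is a rough worked example. The table name, column name, and row count are made up for illustration; the byte sizes are the Teradata internal size versus the packed-decimal size on the mainframe, as described above:

    /* Hypothetical table with one DECIMAL(18,2) column */
    CREATE TABLE sales_fact
    ( sale_amt DECIMAL(18,2) )   -- 8 bytes per row inside Teradata
    ;

    /* Sizing this column for an export of 100,000,000 rows:
       in the Teradata table: 100,000,000 * 8 bytes  = ~0.8 GB
       in the client-format spool/export (packed decimal on the
       mainframe):            100,000,000 * 10 bytes = ~1.0 GB
       i.e. roughly 25% bigger than the table copy suggests */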
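On the DATE-to-CHAR point, the effect is easy to see in the select itself. A sketch, using a hypothetical orders table; the old-style Teradata FORMAT cast is one common way this conversion gets introduced:

    /* Returned as DATE: 4 bytes per row in the spool file */
    SELECT order_date
    FROM   orders;

    /* Formatted and converted to CHAR(10) ('yyyy-mm-dd'):
       10 bytes per row in the spool file, 2.5x the size */
    SELECT order_date (FORMAT 'YYYY-MM-DD') (CHAR(10))
    FROM   orders;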
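And on export modes, a minimal BTEQ sketch of the two modes mentioned (the logon, file names, and query are placeholders). Record mode (.EXPORT DATA) returns rows in the client's internal record form, while Field mode (.EXPORT REPORT) converts every column to formatted display characters, which usually makes the rows, and hence the spool, much wider:

    .LOGON tdpid/user,password

    /* Record mode: rows in internal/binary client format */
    .EXPORT DATA FILE = extract_rec.dat
    SELECT order_date, sale_amt FROM orders;
    .EXPORT RESET

    /* Field mode: every column converted to formatted characters */
    .EXPORT REPORT FILE = extract_fld.txt
    SELECT order_date, sale_amt FROM orders;
    .EXPORT RESET

    .LOGOFF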
Copyright 2016 - All Rights Reserved
Last Modified: 15 Jun 2023