Archives of the TeradataForum
Message Posted: Wed, 28 Mar 2001 @ 11:51:31 GMT
A few thoughts on this...
If you've got a row length that large, then I'm guessing that you've got one or two columns which are big VARCHARs. Teradata will only store the necessary amount of data - i.e. 100 bytes of data in a VARCHAR(20000) will only take up 102 bytes (the data plus a 2-byte length indicator). So you may not have that much of a problem anyway.
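A rough sketch of that arithmetic (the 2-byte length indicator per VARCHAR value matches the behaviour described above; the helper name and function are mine, for illustration only):

```python
# Estimate bytes actually stored for a Teradata VARCHAR value.
# Only the data present is stored, plus a 2-byte length indicator,
# regardless of the declared VARCHAR(n) maximum.

VARCHAR_LENGTH_INDICATOR = 2  # bytes per VARCHAR value

def stored_bytes(actual_data_bytes: int) -> int:
    """Bytes consumed, independent of the declared maximum length."""
    return actual_data_bytes + VARCHAR_LENGTH_INDICATOR

# 100 bytes of data in a VARCHAR(20000) column:
print(stored_bytes(100))  # → 102
```

So a wide declared maximum costs nothing for rows whose actual values are short.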
Spool space usage does not depend directly on the defined row length, only on the size of the columns from the row that Teradata needs in the spool file to answer the query (i.e. it doesn't just copy the entire row).
If you've got rows which really are that long, then you'll be doing a lot of physical I/O, because (I think) Teradata will put each row into a separate block, since it is treated as a large row.
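A back-of-the-envelope sketch of why that hurts I/O, assuming an illustrative 64 KB data block (actual Teradata block sizes vary by release and configuration; the function names are mine):

```python
import math

BLOCK_BYTES = 65_536  # illustrative block size, not a Teradata constant

def rows_per_block(row_bytes: int, block_bytes: int = BLOCK_BYTES) -> int:
    """Whole rows that fit in one data block (at least one per block)."""
    return max(1, block_bytes // row_bytes)

def blocks_needed(row_count: int, row_bytes: int,
                  block_bytes: int = BLOCK_BYTES) -> int:
    """Blocks (and hence physical reads for a full scan) for a table."""
    return math.ceil(row_count / rows_per_block(row_bytes, block_bytes))

# A 60,000-byte row fills a 64 KB block on its own, so 1,000 such rows
# need ~1,000 blocks; 100-byte rows pack hundreds per block instead.
print(blocks_needed(1_000, 60_000))
print(blocks_needed(1_000, 100))
```

One row per block means a full-table scan does one physical read per row, instead of amortising each read across hundreds of rows.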
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|