Archives of the TeradataForum
Message Posted: Mon, 06 Jul 2004 @ 00:34:22 GMT
I opened an incident some years ago on a similar topic and then closed it after talking to NCR. The problem is how to detect when the result of a join or concatenation will exceed an NCR size limit, and there are basically three choices:

1. Sum the sizes of the column *definitions* and abort if the sum predicts an overflow.
2. Let the processing proceed and abort if an error is detected during processing.
3. Pre-scan all the objects in the query to compute sizes, pre-execute the join plan using sizes only, and abort if a problem is detected.

NCR chose #1 because it is quick and easy and wastes no resources. #2 was rejected because of the potential for unhappy users, where a single row could abort a long run, and #3 was rejected because of the huge overhead.

A test case is below, along with the errors I get at V2R5.1.0.9. Systems using smaller row sizes will need to change the 64K (64000) to 32K (32000).

/dave hough

-----

create table t1 (upi integer not null, dat varchar(64000)) unique primary index (upi);
create table t2 (upi integer not null, dat varchar(64000)) unique primary index (upi);

insert into t1 values (1, 'A');
insert into t2 values (1, 'B');

select * from t1;
select * from t2;

select t1.upi, t1.dat, t2.dat
from t1 inner join t2 on t1.upi = t2.upi;
-- Error 3577: Row size or Sort Key size overflow.

select t1.upi, t1.dat || t2.dat
from t1 inner join t2 on t1.upi = t2.upi;
-- Error 3798: A column or character expression is larger than the max size.

select t1.upi, trim(t1.dat) || trim(t2.dat)
from t1 inner join t2 on t1.upi = t2.upi;
-- Error 3798: A column or character expression is larger than the max size.
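Because the check in choice #1 works from the declared column sizes, you can run the same arithmetic yourself before submitting the query. A rough sketch only, assuming the tables live in your default database and that DBC.Columns.ColumnLength holds the declared byte length; it counts every column in both tables (including both upi columns), so treat the result as an upper-bound estimate against the roughly 64K row limit:

-- Rough pre-check: sum the declared column lengths for the tables in the join
-- and compare the total against the row size limit.
select sum(c.ColumnLength) as declared_bytes
from DBC.Columns c
where c.DatabaseName = database
  and c.TableName in ('t1', 't2');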
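One workaround, if the real data is known to be far smaller than the declared varchar(64000), is to cast the operands down so that the declared sizes the parser sums stay within the limit. This is a sketch under that assumption; the 32000 sizes are arbitrary, must still be large enough for the actual values, and truncation behavior on the cast depends on session mode:

-- Casting the operands down keeps the declared result size at 64000,
-- which the parser accepts; the data itself must fit the smaller sizes.
select t1.upi,
       cast(t1.dat as varchar(32000)) || cast(t2.dat as varchar(32000))
from t1 inner join t2 on t1.upi = t2.upi;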