Archives of the TeradataForum
Message Posted: Fri, 28 Jul 2006 @ 09:29:22 GMT
Subj:   Re: Oracle to Teradata Migration and Max Row Size
From:   Chakrapani, Praveen
I think that if the row size is more than 64 KB, Teradata splits and stores the row internally, so there is no need to split up the table. I just created a table with a row size of more than 64 KB, inserted a few rows, and it worked fine.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
create table t4(a int, b int, c char(32000), d char(32000));
*** Table has been created.
*** Total elapsed time was 2 seconds.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
show table t4;
*** Text of DDL statement returned.
*** Total elapsed time was 1 second.
---------------------------------------------------------------
CREATE SET TABLE TEST.t4 ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(
a INTEGER,
b INTEGER,
c CHAR(32000) CHARACTER SET LATIN NOT CASESPECIFIC,
d CHAR(32000) CHARACTER SET LATIN NOT CASESPECIFIC)
PRIMARY INDEX ( a );
BTEQ -- Enter your DBC/SQL request or BTEQ command:
ins into t4(1,2,'abc','def');
*** Insert completed. One row added.
*** Total elapsed time was 1 second.
BTEQ -- Enter your DBC/SQL request or BTEQ command:
sel * from t4;
*** Query completed. One row found. 4 columns returned.
*** Total elapsed time was 1 second.
          a            b  c
-----------  -----------  ------
          1            2  abc
If the row size is larger than the default data block size, you have to increase the data block size accordingly while creating the table. We can increase DATABLOCKSIZE to a maximum of 127.5 KB. This way you can store at least one row in each data block. But having many data blocks for a table may also affect query performance.
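For example, DATABLOCKSIZE can be specified in the CREATE TABLE option list, alongside FALLBACK and journaling options. A sketch (the table name t5 and the exact byte value are illustrative; 130560 bytes corresponds to the 127.5 KB maximum mentioned above):

CREATE SET TABLE TEST.t5 ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     DATABLOCKSIZE = 130560 BYTES
     (
      a INTEGER,
      b INTEGER,
      c CHAR(32000) CHARACTER SET LATIN NOT CASESPECIFIC,
      d CHAR(32000) CHARACTER SET LATIN NOT CASESPECIFIC)
PRIMARY INDEX ( a );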
Thanks,
Praveen