Archives of the TeradataForum
Message Posted: Mon, 24 Feb 2003 @ 22:03:50 GMT
Subj: Re: Counting duplicate rows thru multi load
From: Venkat
> If you have a lot of duplicates (>50) and you load duplicates into a SET
> table, it will slow down the load a lot without a Unique Index to use for
> the duplicate row check. If you have a Unique Index, then it will slow
> down only a little.
We may not have more than 10% duplicate rows, and we have a unique primary index.
> If you use a MULTISET table, the load will not slow down as long as you
> throw away the duplicate rows (automatic in FastLoad). MultiLoad has an
> option to ignore duplicate rows; otherwise this may be slow due to the
> row-by-row insert into the UV table.
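For reference, the MultiLoad option mentioned above is specified on the DML label. A minimal sketch, assuming a hypothetical two-column target table (table, column, and file names are made up, not from the original post):

```
.LOGTABLE mydb.mld_log;
.LOGON tdpid/user,password;
.BEGIN MLOAD TABLES mydb.target_tbl;
.LAYOUT inlayout;
  .FIELD col1 * VARCHAR(10);
  .FIELD col2 * VARCHAR(20);
.DML LABEL insdml
  IGNORE DUPLICATE INSERT ROWS;   /* duplicates are discarded instead of
                                     being inserted row by row into the
                                     UV error table */
INSERT INTO mydb.target_tbl (col1, col2)
VALUES (:col1, :col2);
.IMPORT INFILE datafile
  LAYOUT inlayout
  APPLY insdml;
.END MLOAD;
.LOGOFF;
```

Note that with IGNORE the duplicates are simply thrown away, so this option alone does not give you a count of them.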
Actually, we need to count the duplicate rows as well. Basically, we are planning to put all rows (including duplicates) into a temporary MULTISET table, then separate the duplicate rows from that multiset table.

Does that slow down the load process?
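In case it is useful, one way to do the count-and-separate step once everything has been loaded into the MULTISET staging table is sketched below. This is only an illustration: the staging table, target table, and column names are assumptions, and the GROUP BY list would need to cover every column that defines a duplicate.

```
/* Which rows are duplicated, and how many copies of each exist */
SELECT col1, col2, COUNT(*) AS row_cnt
FROM   mydb.stg_multiset
GROUP  BY col1, col2
HAVING COUNT(*) > 1;

/* Total number of surplus (duplicate) rows in the staging table */
SELECT SUM(row_cnt - 1)
FROM   (SELECT col1, col2, COUNT(*) AS row_cnt
        FROM   mydb.stg_multiset
        GROUP  BY col1, col2
        HAVING COUNT(*) > 1) AS dups (col1, col2, row_cnt);

/* Move the de-duplicated rows into the final table; since the SELECT
   already returns distinct rows, the target's duplicate check has
   nothing to reject */
INSERT INTO mydb.target_tbl (col1, col2)
SELECT DISTINCT col1, col2
FROM   mydb.stg_multiset;
```

Counting this way happens after the load, so the bulk load into the MULTISET staging table itself avoids the row-at-a-time duplicate checking described above.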