Archives of the TeradataForum
Message Posted: Mon, 24 Feb 2003 @ 18:48:34 GMT
Subj: Re: Counting duplicate rows thru MultiLoad
From: Dwight Etheridge
My experience has been that writes to that UV table may be performed row by row rather than in blocks, so it can be slow.
Why not MultiLoad the 25 million rows into a MULTISET staging table? Then use SQL and grouping to find the duplicates and non-duplicates. At least you stay with parallel operations.
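A minimal sketch of the grouping step, assuming a staging table and key columns named stage_tbl, key_col1, and key_col2 (hypothetical names -- substitute your own):

    /* Duplicates: key values that loaded more than once into the MULTISET staging table */
    SELECT key_col1, key_col2, COUNT(*) AS dup_count
    FROM   stage_tbl
    GROUP  BY key_col1, key_col2
    HAVING COUNT(*) > 1;

    /* Non-duplicates: key values that loaded exactly once */
    SELECT key_col1, key_col2
    FROM   stage_tbl
    GROUP  BY key_col1, key_col2
    HAVING COUNT(*) = 1;

Both queries run as ordinary parallel set operations across the AMPs, which is the point of staging first instead of relying on the row-at-a-time UV table.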
Dwight Etheridge
Teradata Certified Master