Archives of the TeradataForum
Message Posted: Thu, 02 Aug 2012 @ 19:12:18 GMT
Dump the duplicate rows to another table, using either GROUP BY on all columns with HAVING COUNT(*) > 1, or QUALIFY ROW_NUMBER() OVER (PARTITION BY all columns) = 2. Either method gives you exactly one row for each row in the source table that has at least one duplicate.
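A sketch of both approaches, assuming a hypothetical table big_tab with columns c1, c2, c3 (adjust names to your schema; pick one of the two CREATE statements):

```sql
-- Method 1: GROUP BY on all columns, keeping only groups that occur more than once.
CREATE TABLE dup_rows AS (
    SELECT c1, c2, c3
    FROM big_tab
    GROUP BY c1, c2, c3
    HAVING COUNT(*) > 1
) WITH DATA;

-- Method 2: QUALIFY ROW_NUMBER(), keeping the second copy from each duplicate set.
CREATE TABLE dup_rows AS (
    SELECT c1, c2, c3
    FROM big_tab
    QUALIFY ROW_NUMBER()
        OVER (PARTITION BY c1, c2, c3 ORDER BY c1) = 2
) WITH DATA;
```

Either way, dup_rows ends up holding one representative row per duplicated value, which is typically a tiny fraction of the source table.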
Then delete any rows in the main table that match the new table, and finally insert the rows from the new table back into the main table.
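The delete-and-reinsert step might look like this (same hypothetical big_tab/dup_rows names as above; a correlated EXISTS is used for the multi-column match):

```sql
-- Remove every copy of each duplicated row from the main table...
DELETE FROM big_tab
WHERE EXISTS (
    SELECT 1
    FROM dup_rows d
    WHERE d.c1 = big_tab.c1
      AND d.c2 = big_tab.c2
      AND d.c3 = big_tab.c3
);

-- ...then put a single copy of each back.
INSERT INTO big_tab
SELECT c1, c2, c3 FROM dup_rows;
```

Run the two statements inside one transaction (or back-to-back in a controlled batch) so no one queries the table between the delete and the reinsert.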
That may or may not be faster than a multiset-to-multiset copy of the whole table using GROUP BY or QUALIFY ROW_NUMBER() = 1, but it avoids holding two full copies of a big table at once.
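For comparison, the full-copy alternative is a single pass that builds a deduplicated MULTISET copy of the whole table (again with hypothetical names):

```sql
-- One pass: copy the whole table, keeping the first row of each duplicate set.
CREATE MULTISET TABLE big_tab_dedup AS (
    SELECT c1, c2, c3
    FROM big_tab
    QUALIFY ROW_NUMBER()
        OVER (PARTITION BY c1, c2, c3 ORDER BY c1) = 1
) WITH DATA;
-- Equivalently: SELECT c1, c2, c3 FROM big_tab GROUP BY 1, 2, 3.
```

This is simpler, but big_tab and big_tab_dedup coexist until you drop or rename one of them, so you need roughly double the perm space for the duration.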
You could also FastExport the unique rows, truncate the table, and reload from the export file, although that will require rebuilding any secondary indexes.
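A rough FastExport script for that route might look like the sketch below. All names (log table, logon string, file name, table and columns) are placeholders, and the exact .EXPORT options should be checked against the FastExport reference for your release:

```
.LOGTABLE mydb.fexp_log;            /* hypothetical restart-log table */
.LOGON tdpid/user,password;         /* placeholder credentials        */
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE dedup.dat MODE RECORD FORMAT FASTLOAD;
SELECT c1, c2, c3
FROM mydb.big_tab
QUALIFY ROW_NUMBER()
    OVER (PARTITION BY c1, c2, c3 ORDER BY c1) = 1;
.END EXPORT;
.LOGOFF;
```

After the export completes, empty the table (DELETE FROM mydb.big_tab ALL;) and reload dedup.dat with FastLoad, then recreate the secondary indexes, since FastLoad targets must be empty and without secondary indexes.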
Copyright 2016 - All Rights Reserved
Last Modified: 28 Jun 2020