Archives of the TeradataForum

Message Posted: Fri, 10 Oct 2008 @ 16:16:40 GMT
Can anyone please advise the best way (without much impact on system performance) to delete duplicate rows from a table larger than 1 terabyte?

A) FastExport all rows, then FastLoad/MultiLoad them back in (with duplicate rows ignored).
B) MultiLoad DELETE - not sure what the SQL would look like, since we cannot use row IDs.
C) SELECT DISTINCT (or SELECT ... with GROUP BY) and INSERT into a SET table.

Any other suggested options? Ideas, suggestions, and thoughts are welcome and appreciated. Thanks in advance.

Regards,
Raj
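For illustration only, here is a minimal sketch of option C in Teradata SQL. The table and column names (source_tbl, target_tbl) are placeholders, not from the original post, and the exact DDL would need to match your table's definition:

    -- 1. Create an empty copy of the table as a SET table, so that
    --    duplicate rows are silently discarded by INSERT ... SELECT.
    CREATE SET TABLE target_tbl AS source_tbl WITH NO DATA;

    -- 2. Copy the data; duplicate rows collapse to a single row because
    --    the target is a SET table.
    --    (Alternatively, keep a MULTISET target and use
    --     INSERT INTO target_tbl SELECT DISTINCT * FROM source_tbl;)
    INSERT INTO target_tbl
    SELECT * FROM source_tbl;

    -- 3. After verifying row counts, swap the tables.
    DROP TABLE source_tbl;
    RENAME TABLE target_tbl TO source_tbl;

Note that the duplicate-row check on a SET table can be expensive when many rows share the same primary index value, so on a table of this size the SELECT DISTINCT variant may behave quite differently; this is only a sketch of the approach, not a recommendation.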