|
Archives of the TeradataForum
Message Posted: Mon, 24 Feb 2003 @ 18:08:36 GMT
Hi, I have a question about loading duplicate rows into Teradata using MultiLoad. We are loading 25 million records into a table with MultiLoad. We need to validate the data before loading it into Teradata; the validation includes separating out duplicate records, counting them, and so on. The problem is that our validation program takes too long to separate the duplicate rows in the flat file.

What we are thinking is: load the data (duplicates included) directly into Teradata through MultiLoad, let MultiLoad put all the duplicates into the UV table (one of the error tables), and then take the count from the UV table. That way we would not need to separate the duplicate rows in the flat file ourselves. The total number of records in the flat file is 25 million.

Is it a good idea to separate duplicate rows through MultiLoad? Is it an expensive operation on Teradata?

I appreciate your comments and suggestions. Thanks
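For reference, the client-side validation the poster describes (separating and counting duplicate rows in a flat file before the load) can be sketched in a few lines of Python. This is only an illustration of that pre-load step, not anything MultiLoad does internally; the pipe-delimited row format and the function name `split_duplicates` are assumptions for the example.

```python
from collections import Counter

def split_duplicates(rows):
    """Partition rows into first occurrences and duplicates.

    Returns (uniques, dupes, dup_count), where dup_count is the number
    of rows beyond the first occurrence of each distinct row -- the
    same rows MultiLoad would divert to the UV error table on an
    INSERT into a table with a unique primary index.
    """
    counts = Counter(rows)
    seen = set()
    uniques, dupes = [], []
    for row in rows:
        if row in seen:
            dupes.append(row)      # repeat occurrence: a duplicate
        else:
            seen.add(row)
            uniques.append(row)    # first occurrence: keep for the load
    dup_count = sum(c - 1 for c in counts.values() if c > 1)
    return uniques, dupes, dup_count

# Small illustrative input; a real run would stream the 25M-row file.
rows = ["a|1", "b|2", "a|1", "c|3", "a|1"]
uniques, dupes, n = split_duplicates(rows)
# uniques -> ["a|1", "b|2", "c|3"], dupes -> ["a|1", "a|1"], n -> 2
```

For 25 million rows this single pass with a hash set is O(n) and usually far cheaper than a sort-based external dedup, which may be why shifting the work to MultiLoad's UV table looked attractive in the first place.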
Copyright 2016 - All Rights Reserved
Last Modified: 15 Jun 2023