Archives of the TeradataForum
Message Posted: Fri, 16 Nov 2012 @ 09:22:16 GMT
I have a TPUMP script that's inserting/updating records in the target table. The file has approximately 2 million rows. The Informatica job has been running for almost 30 hours and is still running; the throughput is 8 records per second. When I look at the EXPLAIN plan, this is what I see.
The three steps below repeat roughly 20 times (60 steps in the EXPLAIN):
We do an All-AMPs RETRIEVE step from table ABC by way of an all-rows scan into Spool 52843, which is redistributed by hash code to all AMPs.
We do an All-AMPs MERGE DELETE to table ABC from Spool 52843 via the row id. New updated rows are built and the result goes into Spool 52844, which is redistributed by hash code to all AMPs. Then we do a SORT to order Spool 52844 by row hash.
We do a MERGE into table O_RESV_DAILY_ELEMENT_NAME from Spool 52844.
I do understand that the all-AMP scan performed for every insert/update is what is slowing the performance down. However, I am unable to figure out what needs to be modified in the script to make it run faster.
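One thing worth verifying first is whether the WHERE clause of each UPDATE/DELETE qualifies on the target table's full primary index; if it does not, Teradata has no choice but to do an all-rows scan. The PI columns can be read from the data dictionary (DBC.IndicesV on newer releases, DBC.Indices on older ones); substitute your own database name:

```sql
SELECT ColumnName, ColumnPosition
FROM   DBC.IndicesV
WHERE  DatabaseName = 'MYDB'                 /* hypothetical database name */
  AND  TableName    = 'O_RESV_DAILY_ELEMENT_NAME'
  AND  IndexType IN ('P','Q')                /* P = primary, Q = partitioned primary */
ORDER  BY ColumnPosition;
```

Every column this returns should appear, fully qualified with an equality condition, in the WHERE clause of the TPUMP DML.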
Any suggestions or tips for debugging this would be really helpful.
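For illustration, a minimal TPUMP upsert sketch in which each statement qualifies on the full primary index, so each row is applied as a single-row-hash operation rather than an all-AMP step. The column names resv_id (assumed to be the PI) and element_name are hypothetical; replace them with the table's actual PI and data columns, and adjust the layout to match your file:

```sql
.LOGTABLE upsert_log;
.LOGON tdpid/user,password;

.BEGIN LOAD SESSIONS 8 PACK 20 SERIALIZE ON;

.LAYOUT file_layout;
.FIELD resv_id      * VARCHAR(10);
.FIELD element_name * VARCHAR(30);

/* DO INSERT FOR MISSING UPDATE ROWS turns this DML pair into an upsert */
.DML LABEL upsert_dml DO INSERT FOR MISSING UPDATE ROWS;
UPDATE O_RESV_DAILY_ELEMENT_NAME
   SET element_name = :element_name
 WHERE resv_id = :resv_id;        /* full primary index in the WHERE clause */
INSERT INTO O_RESV_DAILY_ELEMENT_NAME (resv_id, element_name)
VALUES (:resv_id, :element_name);

.IMPORT INFILE datafile.txt
 LAYOUT file_layout
 APPLY upsert_dml;

.END LOAD;
.LOGOFF;
```

If the Informatica-generated script cannot be qualified on the PI this way, the EXPLAIN will keep showing the spool/redistribute/MERGE pattern quoted above regardless of session or pack settings.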
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|