Archives of the TeradataForum
Message Posted: Thu, 28 Sep 2001 @ 03:50:31 GMT
Since you are using MultiLoad, I suggest you try to leverage the potential increase in throughput by loading more than one week at a time, assuming the historical data is all available.
If you are adding 1 week to 14 weeks of existing data, you are changing about 7% of the rows; 1 week added to 25 weeks is about 4%. The hits per block keep going down, which is why the elapsed time keeps going up even though you are adding the same number of rows each run.
Since MultiLoad tries to touch each data block only once, as you increase the number of weeks per load, the hits (rows applied) per block should go up substantially. As the hits per block go up, the throughput should go up as well, so more data gets loaded in the same amount of time.
You can either pre-concatenate the files or simply run multiple files through a single job using multiple IMPORT/APPLY statements. The throughput will continue to increase as the number of weeks per job increases.
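A single MultiLoad job with several IMPORT statements might look like the following sketch. Table, layout, column, and file names here are placeholders for illustration, not from the original post:

```
.LOGTABLE work.sales_log;
.LOGON tdpid/user,password;

.BEGIN MLOAD TABLES sales_hist;

.LAYOUT sales_layout;
.FIELD sale_date  * DATE;
.FIELD store_id   * INTEGER;
.FIELD amount     * DECIMAL(12,2);

.DML LABEL ins_sales;
INSERT INTO sales_hist (sale_date, store_id, amount)
VALUES (:sale_date, :store_id, :amount);

/* One IMPORT per weekly file -- all rows are applied in a single
   acquisition/application pass over the target table's blocks */
.IMPORT INFILE week_01.dat LAYOUT sales_layout APPLY ins_sales;
.IMPORT INFILE week_02.dat LAYOUT sales_layout APPLY ins_sales;
.IMPORT INFILE week_03.dat LAYOUT sales_layout APPLY ins_sales;

.END MLOAD;
.LOGOFF;
```

Because all the IMPORTs feed one application phase, each data block is updated once for the combined set of weeks, which is where the per-block throughput gain comes from.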
Using this approach, I would try to run at least 25-33% of your remaining weeks through at once, and drop all of the NUSIs before you run the MultiLoad.
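Dropping the NUSIs before the load and recreating them afterward might look like this. The index column is a placeholder assumption:

```
/* Before the MultiLoad run */
DROP INDEX (store_id) ON sales_hist;

/* After the MultiLoad completes */
CREATE INDEX (store_id) ON sales_hist;
```

Rebuilding the index once at the end is generally cheaper than having MultiLoad maintain it row by row during the load.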
The other attractive approach is to FastLoad all of the remaining rows into an empty table (t2) by running multiple Phase 1 FastLoads, withholding END LOADING until the final week. Then, as you stated, do a multi-statement INSERT ... SELECT from the original table and t2 into a third table (t3). This will prevent transient journaling. If you do this with at least 50% of the data, the overall throughput increase (roughly 2x MultiLoad at 30% of rows changed, 3x at 10% changed, 20x at 1% changed) should be enough to keep the run time from growing too long.
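A sketch of that second approach, with placeholder names and a comma-delimited text file assumed. The weekly script is rerun per file; only the final run issues END LOADING, which triggers Phase 2:

```
/* Weekly FastLoad -- Phase 1 only: END LOADING is omitted, so the
   job pauses and the next week's file can be added to t2 later */
LOGON tdpid/user,password;
SET RECORD VARTEXT ",";
BEGIN LOADING t2 ERRORFILES t2_err1, t2_err2;
DEFINE sale_date (VARCHAR(10)),
       store_id  (VARCHAR(10)),
       amount    (VARCHAR(20))
FILE = week_nn.dat;
INSERT INTO t2 VALUES (:sale_date, :store_id, :amount);
LOGOFF;

/* Final week's script is the same, but finishes the load: */
END LOADING;
LOGOFF;

/* Then combine into the empty third table in one multi-statement
   request, which avoids transient journaling on t3 */
INSERT INTO t3 SELECT * FROM t1
;INSERT INTO t3 SELECT * FROM t2;
```

The key points are that FastLoad into an empty table does no journaling at all, and a multi-statement INSERT ... SELECT into an empty target also takes the fast, unjournaled path.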
|Copyright 2016 - All Rights Reserved|
|Last Modified: 27 Dec 2016|