Archives of the TeradataForum
Message Posted: Wed, 13 Nov 2002 @ 09:47:38 GMT
Interestingly, in my experience (and I stress that this is my experience) most sites have traditionally gone the 4-step route and are now starting to switch to the 2-step approach.
The reason for this has usually been that in the past the performance of an SQL transaction which was inserting or updating a 'few million' rows was abysmal, due mainly to Transient Journal processing and row-at-a-time I/O. And if your transaction failed and went into rollback, the pain doubled: every journalled row had to be backed out again.
Using an export and MultiLoad approach avoided TJ processing and took advantage of a single I/O per block, and although you possibly had to drop/create NUSIs, this process often turned out to be faster. Obviously each load process should be looked at individually to determine the best approach.
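For anyone who hasn't seen it, the 4-step route looks roughly like this. A minimal sketch, with all logon details, table names, index names, and the record layout purely hypothetical:

```sql
/* Step 1: FastExport the rows to be applied out to a flat file */
.LOGTABLE work_db.fexp_log;
.LOGON tdpid/user,password;
.BEGIN EXPORT SESSIONS 4;
.EXPORT OUTFILE staged_rows.dat;
SELECT cust_id, txn_amt FROM src_db.transactions;
.END EXPORT;
.LOGOFF;

/* Step 2: drop NUSIs on the target (via BTEQ, before the load)
DROP INDEX idx_amt ON tgt_db.transactions; */

/* Step 3: MultiLoad the file into the target, block at a time,
   with no Transient Journal entries per row */
.LOGTABLE work_db.mload_log;
.LOGON tdpid/user,password;
.BEGIN MLOAD TABLES tgt_db.transactions;
.LAYOUT txn_layout;
.FIELD cust_id * INTEGER;
.FIELD txn_amt * DECIMAL(18,2);
.DML LABEL ins_txn;
INSERT INTO tgt_db.transactions (cust_id, txn_amt)
VALUES (:cust_id, :txn_amt);
.IMPORT INFILE staged_rows.dat LAYOUT txn_layout APPLY ins_txn;
.END MLOAD;
.LOGOFF;

/* Step 4: recreate the NUSIs you dropped in step 2 */
```

The block-at-a-time apply phase and the absence of TJ logging are exactly where the speed advantage over a plain SQL transaction used to come from.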
Over the last few releases of Teradata, many features have been added which mean that (as far as I'm concerned) it is now viable to do a LOT more processing using SQL (i.e. BTEQ scripts) without having to resort to an export and MultiLoad approach.
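By way of contrast, the 2-step SQL equivalent collapses to a single BTEQ step (again, all names are hypothetical). One reason this is now viable: if I remember rightly, an INSERT...SELECT into an empty target table is treated as a fast path and avoids row-level Transient Journal entries:

```sql
.LOGON tdpid/user,password;

INSERT INTO tgt_db.transactions (cust_id, txn_amt)
SELECT cust_id, txn_amt
FROM src_db.staging_transactions;

.IF ERRORCODE <> 0 THEN .QUIT 8;
.LOGOFF;
```

No export file, no dropping and recreating NUSIs, and the whole thing is restartable as an ordinary SQL step.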
Where might you still want to use the 4-step approach? Off the top of my head I'm not sure! I can see it happening where you've got huge volumes relative to the size of machine you're running on.