Archives of the TeradataForum
Message Posted: Thu, 29 May 2003 @ 10:03:00 GMT
Implementing journals incurs operational overhead on table maintenance, so any insert/update/delete operation on a journalled table (local or non-local) will be slowed down by the journalling process. Bizarrely though, in tests I ran on this functionality quite a while ago (which, admittedly, were on V2R3 during 2002Q3), non-local journalling suffered less than local journalling. This is counter-intuitive, as the 'AMP-path' is theoretically longer for a non-local journal - and non-local is the safer option if you're not employing FALLBACK. I was only using a few million records during testing, but straightforward BTEQ operations performed as follows for non-local after-image journals:
i) update of c. 2.5 million rows, updating the primary key and forcing AMP redistribution: overhead approx. 3.2%
ii) insert of c. 2.5 million rows into an empty table: overhead approx. 52% (!!)
iii) delete of c. 2.5 million rows from a table, no predicates: you don't want to go there... I don't want to be blamed for a drop in the share price!
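For anyone who hasn't set this up before, the journalling in question is declared at table (and database) level. A minimal sketch of the kind of DDL involved - all database, table and journal names here are invented for illustration, and the journal table must already exist (e.g. via DEFAULT JOURNAL TABLE on the owning database):

```sql
/* Hypothetical names throughout - sketch only.
   Assumes the journal table sandbox_jrnl.jrnl already exists, e.g. created via
   CREATE/MODIFY DATABASE ... DEFAULT JOURNAL TABLE = sandbox_jrnl.jrnl;      */

CREATE TABLE sandbox.txn
, NO FALLBACK
, WITH JOURNAL TABLE = sandbox_jrnl.jrnl  -- non-local: journal held in another database
, NO BEFORE JOURNAL
, AFTER JOURNAL                           -- single after-image, as tested above
     ( txn_id  INTEGER NOT NULL
     , amount  DECIMAL(18,2)
     )
UNIQUE PRIMARY INDEX (txn_id);
```

Whether the journal is 'local' or not simply comes down to whether the journal table lives in the same database as the data table.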
However, running the same operations via MultiLoad resulted in almost zero degradation in performance, which I'm assuming is down to the block-level operations the MLoad utility uses. So, if you're going to use journals, I'd recommend moving maintenance to MultiLoad where possible (obviously, there are other factors to take into account before you start changing reams of your application code!)
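To make that concrete, the insert case above (the 52% overhead under BTEQ) would go through MultiLoad as an import task along these lines - a sketch only, with the logon string, file name and layout all invented:

```
.LOGTABLE sandbox.ml_log;            /* hypothetical restart-log table        */
.LOGON tdpid/user,password;          /* placeholder logon - substitute yours  */
.BEGIN IMPORT MLOAD TABLES sandbox.txn;
.LAYOUT txn_layout;
.FIELD txn_id  *  INTEGER;
.FIELD amount  *  DECIMAL(18,2);
.DML LABEL ins_txn;
INSERT INTO sandbox.txn (txn_id, amount) VALUES (:txn_id, :amount);
.IMPORT INFILE txn.dat               /* hypothetical input file               */
        LAYOUT txn_layout APPLY ins_txn;
.END MLOAD;
.LOGOFF;
```

The point being that MLoad applies changes in blocks during its application phase, rather than row-at-a-time as BTEQ does, which is presumably why the journal overhead all but disappears.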
Once we migrated to V2R4.1, we came across a few more problems in the software with the more advanced arcmain/journal functionality. In the end, migrating beyond e-fix 30 (i.e. DBS version V2R4.01.03.30) seemed to cover all situations with a satisfactory outcome (we're now running V2R4.01.03.45).
The journalling we're now employing within the production environment seems very reliable, with no failures to date. Performance, as with more or less everything else, depends on your system config (resource contention, CPU load, I/O channel throughput, etc.), but I'm seeing journals of around 7.5GB being applied to around 21 target tables in a couple of hours. A number of these targets are multi-billion-row tables, so I think that's a pretty good turnaround time.
Obviously, this is all part of our production schedule, and the machine is not dedicated to this one task during this time, so you may see better or worse performance - as mentioned, and as always, your system config is probably the determining factor. I have some reservations about multiple-target-table rollforwards in a single operation: although the functionality is provided, I've noticed some 'quirks' with clearing down the journal table (now being progressed under DR62987) and potential data loss (experienced during initial tests a month or two back, although I have to find the time to replicate this before progressing it). Single-target-table rollforwards, however, work a treat.
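For the single-target case, the rollforward is just a short arcmain script along these general lines (names and logon are invented placeholders; if you're applying a saved rather than current journal, you'd need a RESTORE JOURNAL TABLE step first):

```
LOGON tdpid/user,password;     /* placeholder logon                    */
ROLLFORWARD (sandbox.txn)      /* hypothetical single target table     */
, RELEASE LOCK
, USE CURRENT JOURNAL;
LOGOFF;
```

It's the multi-table form of this - listing several targets in one ROLLFORWARD - where I've seen the quirks mentioned above.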
Bear in mind that V2R5 cannot rollforward a V2R4 format journal, so if you're planning an upgrade soon, you'll need to introduce alternative and more traditional procedures to cover any transition phase.