Archives of the TeradataForum
Message Posted: Mon, 26 May 2008 @ 11:10:13 GMT
I have a number of identical jobs which run nearly parallel on different databases (for different countries) on one Teradata system.
They use mainly the tables in their "own" database.
But unfortunately, they share one small table.
Therefore, we sometimes encounter deadlock situations when two jobs try to update this table at the same time. Apparently the jobs access the individual AMPs in a different order, and that is how the deadlocks arise. Since these jobs are very critical, we cannot afford such deadlock situations.
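To make the failure mode concrete, the interleaving we suspect looks roughly like this (the session labels, AMP numbers, and the table name shared_tab are illustrative only, not our real objects):

```sql
-- Hypothetical deadlock interleaving on the shared table:
--
-- Session A: UPDATE shared_tab SET ...   -- acquires row-hash write lock on AMP 3
-- Session B: UPDATE shared_tab SET ...   -- acquires row-hash write lock on AMP 7
-- Session A: next needs the row hash on AMP 7 -> blocked by Session B
-- Session B: next needs the row hash on AMP 3 -> blocked by Session A
--                                        -> deadlock; Teradata aborts one session
```

Because each session already holds a lock the other one needs, neither can proceed until the deadlock detector aborts one of them.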
We first tried to minimize the risk by COMMITting the entire job right before the critical statement, in order to shorten the time during which the critical resource is locked.
Nevertheless, the deadlocks still happen.
We cannot serialize the complete jobs because they take too long when run one after another. Nor can we combine all these identical jobs into one big job, because they have different dependencies on preceding jobs.
Now I have devised a small lock table like this:

CREATE TABLE mf_locktable
  (col1 BYTEINT NOT NULL,
   col2 BYTEINT NOT NULL)
PRIMARY INDEX pi_mf_locktable (col1);

INSERT INTO mf_locktable VALUES (1,1);
We will now place the following statement right before the critical statement:
LOCKING ROW FOR WRITE
SELECT col2 FROM mf_locktable WHERE col1 = 1;
Since we are using ANSI mode, we hope that the following will happen:
- the first job locks the single row in mf_locktable. Since only one AMP is accessed, no deadlock should occur.
- a second job cannot get the write lock and has to wait
- the first job performs the critical update and COMMITs
- the second job gets the write lock without the risk of deadlocks
- and so on...
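Putting it together, the critical section of each job would look roughly like this in ANSI mode (shared_tab again stands in for the real shared table, and the actual critical statement is elided; the lock table and its single row are created once, as shown above):

```sql
-- Sketch of the serialized critical section (ANSI mode):

COMMIT;                                   -- end preceding work; release all locks held so far

LOCKING ROW FOR WRITE                     -- single-AMP row-hash lock: acts as a mutex
SELECT col2 FROM mf_locktable WHERE col1 = 1;

-- <the critical UPDATE against shared_tab goes here>

COMMIT;                                   -- releases the row-hash lock; the next waiting job proceeds
```

Since every job funnels through the same single row hash on the same AMP before touching shared_tab, there is only one lock to contend for and no ordering in which two jobs can each hold something the other needs.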
Does anyone have experience with this kind of serialization?
Do you think it will work?
Does anyone have a better idea? A more elegant one?
Mit freundlichen Grüßen / With best regards
i.A. Martin Fuchs