Archives of the TeradataForum
Message Posted: Mon, 26 May 2008 @ 16:19:42 GMT
Subj: Re: Serialization of statements to avoid deadlocks
From: Dieter Noeth
martin.fuchs wrote:
| Therefore, we sometimes encounter deadlock situations when two jobs try to update this table. Obviously, the jobs access the individual
| AMPs in a different order, and thus the deadlocks happen. Since this job is very critical, we cannot afford those deadlock situations.
| We first tried to minimize the risk by COMMITting the entire job right before the critical statement, in order to shorten the time during
| which the critical resource is locked.
This is the first step...
| Notwithstanding, the deadlocks still happen.
This might happen if you have several sequential updates mixing table-level and row-hash locks, or several multistatement requests using
row-hash locks. Adding a LOCK TABLE ... FOR WRITE before the first update should prevent that.
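As a sketch of that suggestion (the database and table names are made up, since the original DDL was not posted), the LOCKING request modifier escalates to a single table-level write lock up front, so all AMPs are locked in one step instead of row hash by row hash:

```sql
-- Hypothetical names: sandbox.big_table stands in for the contended table.
-- The modifier takes one table-level WRITE lock before the update starts,
-- so two jobs can no longer acquire row-hash locks in conflicting orders.
LOCKING TABLE sandbox.big_table FOR WRITE
UPDATE sandbox.big_table
SET    amount = amount + 1
WHERE  trans_date = DATE '2008-05-26';
```

The trade-off is reduced concurrency: the table-level lock blocks all other writers (and readers without an ACCESS lock) for the duration of the request.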
| Since we are using ANSI mode, we hope that the following will happen: the first job locks the mf_locktable. Since only one AMP is
| accessed, no deadlock should occur.
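The pattern described above might look like the following sketch (mf_locktable is from the original post; the lock_id and last_locked columns are assumptions, as the real DDL was not shown):

```sql
-- The single-row update takes a row-hash WRITE lock on one AMP. In ANSI
-- mode that lock is held until COMMIT, so a second job issuing the same
-- update simply queues behind the first instead of deadlocking on the
-- big table's row-hash locks.
UPDATE mf_locktable
SET    last_locked = CURRENT_TIMESTAMP(0)
WHERE  lock_id = 1;

-- ... the critical updates against the contended table run here,
--     now effectively serialized across jobs ...

COMMIT;
```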
This looks like a manual implementation of the existing Pseudo-Table-Locks in Teradata:
1) First, we lock a distinct db."pseudo table" for write on a RowHash to prevent global deadlock for db.tablename.
| Does anyone have a better idea? A more elegant one?
Try to find out what causes the deadlocks and remove that cause :-) It would help if you could post the DDL & updates...
Dieter