Archives of the TeradataForum

Message Posted: Thu, 22 Aug 2002 @ 17:48:18 GMT


     


Subj:   Re: Question on Deadlock..
 
From:   Thomas Stanek

Deadlocks come in many flavors. Unfortunately, the 2631 deadlock error message covers a lot of territory.

In the traditional sense, they result from multiple users coming into contention over the same object, either within the dictionary or a user table. In the V2R4.0 messages manual, three reasons are cited:

1) an excessive number of transactions were queued waiting for locks, and the user's request would have blocked on the lock queue.

2) the request conflicted with another request, causing a deadlock.

3) a software problem (for example, a lost message) occurred, causing the request to "time out".

It sounds like your problem might be related to #3. One situation that I have seen often is a 2631 occurring as a result of contention on a system resource, not a table. In this scenario, the contention was occurring on the "mailboxes" that are used to send messages between the AMPs within a node. This typically shows up when a high number of sessions are producing a lot of data that has to pass among the AMPs; row re-distributions are the primary culprit.
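
If it helps to picture it, here is a rough sketch (the table and column names are made up purely for illustration): a join on columns that are not the primary index of either table forces Teradata to redistribute rows by the hash of the join column before it can join them AMP-locally, and that redistribution is exactly the kind of AMP-to-AMP mailbox traffic I am describing. The EXPLAIN text shows it:

    /* Hypothetical tables: orders has PRIMARY INDEX (order_id) and
       customers has PRIMARY INDEX (customer_id). Joining on a non-PI
       column on both sides forces a row re-distribution. */
    EXPLAIN
    SELECT o.order_id
         , c.customer_name
    FROM   orders    o
    JOIN   customers c
      ON   o.customer_code = c.customer_code;

    /* In the EXPLAIN output, look for a step along the lines of
       "redistributed by the hash code of (orders.customer_code)";
       each such step is rows being shipped between AMPs through
       those mailboxes. */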

If you are confident that you are not contending on a specific resource such as a dictionary table (as suggested by Dave W.) or some user table, I would begin to investigate this avenue. The way to "see" this is through the use of the puma -m command. Interestingly, there used to be a discussion of the puma command in the Performance Optimization guide. However, in the V2R4.0 manuals, there is only a passing mention with a reference to the RDBMS Utilities, Vol. 2 manual. Unfortunately, there is no meaningful discussion of puma in that manual. Hmmm.... I guess there's always the MAN pages.

The solution to the problem was a review of the SQL being submitted and subsequent performance tuning, either through model changes or SQL changes. We found that a lot of excess row re-distribution was occurring. Once the changes were made, row re-distributions were significantly reduced, along with the number of 2631s.
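
As a made-up illustration of the kind of model change I mean (the names are hypothetical, not taken from the actual system): if the table that keeps getting redistributed is re-created with its primary index on the join column, matching rows already hash to the same AMP and the redistribution step drops out of the EXPLAIN.

    /* Hypothetical model change: choose the primary index to match the
       column the large joins are done on, so that joining rows already
       co-reside on the same AMP and no re-distribution is needed. */
    CREATE TABLE orders_by_cust
    ( order_id       INTEGER
    , customer_code  INTEGER
    , order_date     DATE
    )
    PRIMARY INDEX ( customer_code );

    INSERT INTO orders_by_cust
    SELECT order_id
         , customer_code
         , order_date
    FROM   orders;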

As a side note, with the advent of the DBQL log that is coming, it should be much easier to identify sessions that experience 2631s and to pinpoint the root cause(s) of the problem.
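
For example, something along these lines should do it once DBQL is available (just a sketch: I am assuming query logging is enabled and that the DBC.DBQLogTbl view exposes an ErrorCode column in your release, so check the names against your documentation):

    /* Sketch only: find the sessions and requests that ended in a 2631,
       assuming DBQL is enabled and these columns exist in your release. */
    SELECT   SessionID
           , UserName
           , StartTime
           , ErrorCode
    FROM     DBC.DBQLogTbl
    WHERE    ErrorCode = 2631
    ORDER BY StartTime;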

Hope this was helpful. If you think this is your problem and need more info, I may be able to find some examples of the puma command that I have run in the past.


Regards,

Thomas F. Stanek
TFS Consulting
www.tfsconsulting.com



     