Archives of the TeradataForum
Message Posted: Tue, 02 Jul 2013 @ 17:04:44 GMT
Subj: Re: Rowhash Blocking
From: Dieter Noeth
Roopalini Bakthavachalam wrote:
| The tables are loaded using ODBC load. I looked into the DBQLOGTBL and found that for a single session there are thousands of plain simple
| insert statements like INSERT INTO DB_1.TABLE (A,B,C,D,E,F,G,H,I) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?);
|
| So this session is blocking the other session, which again is inserting thousands of records into the same table.
This is probably inserting all the rows of a session within a single transaction, or in multiple batches of thousands of rows each (a Teradata-mode session using BT/ET, or an ANSI-mode session). The row-hash locks are held until the transaction commits, so two sessions inserting rows that hash to the same value will block each other.
A maximum of 2 hash collisions is low, but there *are* collisions, and you probably hit them :-)
If this were a TPump job, you could use the SERIALIZE option (see the sketch below), but in your case you might have to lower the number of sessions to 1.
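A minimal TPump sketch of SERIALIZE, assuming made-up logon details, a two-column table MyDB.T, and a comma-delimited input file; none of these come from the original job. SERIALIZE ON routes all rows with the same KEY value (normally the primary index columns) through the same session, so they cannot deadlock each other:

.LOGTABLE MyDB.LoadLog;
.LOGON tdpid/user,password;
.BEGIN LOAD SESSIONS 4
   PACK 100
   SERIALIZE ON;                    /* rows with equal KEY values use one session */
.LAYOUT InRec;
.FIELD A * VARCHAR(20) KEY;         /* KEY = primary index column(s) */
.FIELD B * VARCHAR(20);
.DML LABEL InsDml;
INSERT INTO MyDB.T (A,B) VALUES (:A,:B);
.IMPORT INFILE datafile FORMAT VARTEXT ','
   LAYOUT InRec APPLY InsDml;
.END LOAD;
.LOGOFF;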
Of course this will slow down the load. To speed it up again, check whether the INSERTs are sent as single rows or as multiple rows using "array requests" (the same as PACK in BTEQ and ARRAYSUPPORT in TPump); a sketch follows below. You should be able to pack a few hundred rows into one request, which greatly improves speed: one of those sessions will probably be faster than more than ten single-row sessions.
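As a rough illustration of what an array request looks like at the ODBC level, here is a minimal C sketch using the standard ODBC parameter-array attributes. The DSN "tdprod", the credentials, and the three-column demo table MyDB.T are placeholders for illustration (the original job has nine columns), and error checking is omitted for brevity:

/* Array-request INSERT via ODBC parameter arrays (sketch, not the original job) */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define BATCH 200   /* parameter sets shipped per request */
#define LEN   20

int main(void)
{
    SQLHENV env; SQLHDBC dbc; SQLHSTMT stmt;
    SQLCHAR a[BATCH][LEN], b[BATCH][LEN], c[BATCH][LEN];
    SQLLEN  la[BATCH], lb[BATCH], lc[BATCH];
    SQLULEN done = 0;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR *)"tdprod", SQL_NTS,
               (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"pass", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    /* BATCH parameter sets per execute: the driver sends one
       multi-row request instead of BATCH single-row requests */
    SQLSetStmtAttr(stmt, SQL_ATTR_PARAMSET_SIZE,
                   (SQLPOINTER)(SQLULEN)BATCH, 0);
    SQLSetStmtAttr(stmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &done, 0);

    /* column-wise binding: each parameter points at an array of BATCH values */
    SQLBindParameter(stmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     LEN, 0, a, LEN, la);
    SQLBindParameter(stmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     LEN, 0, b, LEN, lb);
    SQLBindParameter(stmt, 3, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     LEN, 0, c, LEN, lc);

    for (int i = 0; i < BATCH; i++) {   /* fill the batch with demo data */
        snprintf((char *)a[i], LEN, "a%d", i); la[i] = SQL_NTS;
        snprintf((char *)b[i], LEN, "b%d", i); lb[i] = SQL_NTS;
        snprintf((char *)c[i], LEN, "c%d", i); lc[i] = SQL_NTS;
    }

    /* one round trip inserts all BATCH rows */
    SQLExecDirect(stmt, (SQLCHAR *)
        "INSERT INTO MyDB.T (A,B,C) VALUES (?,?,?)", SQL_NTS);
    printf("%lu parameter sets processed\n", (unsigned long)done);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}

The essential part is SQL_ATTR_PARAMSET_SIZE: one execute ships the whole batch of bound rows in a single request instead of one network round trip per row.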
Btw, one of the sessions should automatically be rolled back by the deadlock detection mechanism and then has to redo all its work a second time...
Dieter