Archives of the TeradataForum
Message Posted: Wed, 12 Mar 2008 @ 12:56:02 GMT
Subj: Re: Stored Procedure Error Handling
From: Dieter Noeth
David Clough wrote:
| Because I'm using a Cursor to read from a Table and then, depending upon the data in the row, Inserting into TEMP_TAB or writing into a
| more permanent table:
> SELECT RN1.OLD_ACCT_ID AS OLD_ACC
> , RN1.NEW_ACCT_ID AS CURR_ACC
> , RS1.NEW_ACCT_ID AS NEW_ACC
> FROM
> RENUM_CAND RN1
> LEFT JOIN
> RENUM_SET RS1
> ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
> DO
> IF A_csr.NEW_ACC IS NULL
> THEN INSERT INTO FINAL_SET
> VALUES (A_csr.OLD_ACC,COALESCE(A_csr.CURR_ACC,A_csr.OLD_ACC));
> ELSE INSERT INTO TEMP_SET
> VALUES (A_csr.OLD_ACC,A_csr.NEW_ACC);
> SET ACCT_CT = ACCT_CT + 1;
> END IF;
| This works but is horribly slow! This, I guess, is because every Insert is committing, one at a time, into a non-empty table.
No, it's because of single row inserts.
There's one basic rule: Cursors are evil if used to process large data sets. This is also true in other DBMSs, but it's especially evil in a
parallel system: a cursor in Teradata runs on a single node (your session's PE) and it's running serially, fetch next, fetch next, fetch
next...
| If I compiled in ANSI mode, would I then be able to control the Commits by only doing a Commit every 1000 rows or so?
Probably, but this will hardly help.
| Would this help or am I just stuck with the performance as I've currently got it?
| Would it be the same if I transferred the Insert statements into a Macro and then invoked the Macro instead? Just thinking aloud.
Just replace the cursor with SQL:
insert into FINAL_SET
select ...
where NEW_ACC IS NULL
;insert into TEMP_SET
select ...
where NEW_ACC IS NOT NULL
;
If it's submitted as a MultiStatement Request, it probably materializes the select once and then reuses it for both inserts.
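For reference, a fleshed-out sketch of those two inserts, reusing the tables and columns from the cursor's SELECT above (it assumes FINAL_SET and TEMP_SET each have exactly the two columns the procedure's VALUES lists supply):

INSERT INTO FINAL_SET
SELECT RN1.OLD_ACCT_ID
     , COALESCE(RN1.NEW_ACCT_ID, RN1.OLD_ACCT_ID)
FROM RENUM_CAND RN1
LEFT JOIN RENUM_SET RS1
  ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
WHERE RS1.NEW_ACCT_ID IS NULL           -- the cursor's "NEW_ACC IS NULL" branch
;INSERT INTO TEMP_SET
SELECT RN1.OLD_ACCT_ID
     , RS1.NEW_ACCT_ID
FROM RENUM_CAND RN1
LEFT JOIN RENUM_SET RS1
  ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
WHERE RS1.NEW_ACCT_ID IS NOT NULL       -- the cursor's ELSE branch
;

Keeping both selects identical apart from the WHERE condition should give the optimizer the best chance to build the join spool once and reuse it for both inserts.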
That "SET ACCT_CT = ACCT_CT + 1;" might be replaced with a ROW_NUMBER.
Maybe you don't even need that volatile table at all.
Dieter