Archives of the TeradataForum
Message Posted: Wed, 12 Mar 2008 @ 12:17:54 GMT
Right, well, I've got the error handling working based on the comments I've had, so thanks for that.
For the interest of others, here's one of the working excerpts:
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLSTATE '42000'
  BEGIN
    CREATE VOLATILE TABLE TEMP_SET
      (OLD_ACCT_ID CHAR(10) NOT NULL,
       NEW_ACCT_ID CHAR(10))
    UNIQUE PRIMARY INDEX (OLD_ACCT_ID)
    ON COMMIT PRESERVE ROWS;
  END;

  DELETE FROM TEMP_SET;
END;
This quite nicely creates the table, but only if the DELETE statement fails because TEMP_SET doesn't exist.
Anyway, last question - and it's about performance.
I'm using a cursor to read from a table and then, depending on the data in the row, inserting into TEMP_SET or writing into a more permanent table:
SELECT RN1.OLD_ACCT_ID AS OLD_ACC
     , RN1.NEW_ACCT_ID AS CURR_ACC
     , RS1.NEW_ACCT_ID AS NEW_ACC
FROM RENUM_CAND RN1
LEFT JOIN RENUM_SET RS1
  ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
DO
  IF A_csr.NEW_ACC IS NULL THEN
    INSERT INTO FINAL_SET
    VALUES (A_csr.OLD_ACC, COALESCE(A_csr.CURR_ACC, A_csr.OLD_ACC));
  ELSE
    INSERT INTO TEMP_SET
    VALUES (A_csr.OLD_ACC, A_csr.NEW_ACC);
    SET ACCT_CT = ACCT_CT + 1;
  END IF;
This works, but is horribly slow! I guess this is because every INSERT is committing, one row at a time, into a non-empty table.
If I compiled it in ANSI mode, would I then be able to control the commits by only committing every 1,000 rows or so?
Would this help, or am I just stuck with the performance as I've currently got it?
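For anyone pondering the same thing, the batched-commit idea would look roughly like the sketch below inside the cursor loop. This is only a sketch, and it assumes the procedure is compiled in an ANSI-mode session and that issuing a COMMIT while the cursor is open is permitted on your release (the interaction between COMMIT and open cursors is exactly the kind of thing to verify in the Teradata manuals before relying on it):

```sql
    SET ACCT_CT = ACCT_CT + 1;
    IF ACCT_CT MOD 1000 = 0 THEN
      COMMIT;  -- ANSI mode only; assumption: the open cursor survives this
    END IF;
```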
Would it be the same if I transferred the Insert statements into a Macro and then invoked the Macro instead ? Just thinking aloud.
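Another angle worth considering: the IF test in the loop is really just a filter on the join, so the cursor could in principle be dropped entirely in favour of two set-based INSERT ... SELECT statements. A sketch using the same tables and columns as above (the ACCT_CT update via the ACTIVITY_COUNT status variable is an assumption about how you want the count, not part of the original code):

```sql
-- Rows with no usable new account number go to the permanent table
INSERT INTO FINAL_SET
SELECT RN1.OLD_ACCT_ID
     , COALESCE(RN1.NEW_ACCT_ID, RN1.OLD_ACCT_ID)
FROM RENUM_CAND RN1
LEFT JOIN RENUM_SET RS1
  ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
WHERE RS1.NEW_ACCT_ID IS NULL;

-- Rows that did resolve to a new account number go to the volatile table
INSERT INTO TEMP_SET
SELECT RN1.OLD_ACCT_ID
     , RS1.NEW_ACCT_ID
FROM RENUM_CAND RN1
JOIN RENUM_SET RS1
  ON RN1.NEW_ACCT_ID = RS1.OLD_ACCT_ID
WHERE RS1.NEW_ACCT_ID IS NOT NULL;

SET ACCT_CT = ACTIVITY_COUNT;  -- row count of the statement just executed
```

Two all-AMP inserts like this would normally run far faster than a row-at-a-time cursor, regardless of session mode.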
I'm sure this must be of interest to others who are considering using Stored Procedures.
Thanks for your help so far.
Copyright 2016 - All Rights Reserved
Last Modified: 28 Jun 2020