Archives of the TeradataForum

Message Posted: Thu, 09 Apr 2015 @ 11:28:53 GMT
Hi,

This question (or something very similar) has come up multiple times over the last week or so. I'm assuming that all these questions are from the same person or the same site. Let's try and sort this out for anyone who is thinking about doing this.

Teradata does NOT allow multiple concurrent insert/selects or update-joins. As noted by Todd Walter yesterday, if you run multiple such queries concurrently then Teradata will start the first one and any others will be made to wait until the first has completed. If the source data is on the Teradata system, this is probably the most efficient way of handling this processing.

If you want multiple concurrent inserts or updates then you have to export your data to an external file and run TPump / TPT Stream. These can take advantage of row hash locks (which, as pointed out previously, may lock a single row or may lock multiple rows: all those with the same row hash value). It is likely that this process will take a longer elapsed time and use more CPU and I/O than the above method. On a recent customer engagement I took their process from 36 minutes for 660K rows (using the export/import approach) to 48 seconds for 7.4M rows (using insert/select).

Using "lock row for access" does not affect the above information. Using PPI on the target table does not affect the above information.

A question that was asked at some point was "what is the need for concurrent insert/select processing?". I haven't seen any answer to that.

Regards,
Dave

Ward Analytics Ltd - Information in motion
(www.ward-analytics.com)
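For anyone weighing the two approaches, here is a minimal sketch of the single insert/select path discussed above. The table and column names (sales_fact, sales_stg, sale_id, sale_dt, store_id, amount) are invented for the illustration and are not from the original question; the LOCKING modifier on the source is optional and, as the post says, does nothing to change the locking on the target.

    -- An ACCESS lock on the source lets other readers of sales_stg carry on,
    -- but the target sales_fact still takes a table-level WRITE lock, so a
    -- second concurrent insert/select into sales_fact queues behind this one.
    LOCKING TABLE sales_stg FOR ACCESS
    INSERT INTO sales_fact (sale_id, sale_dt, store_id, amount)
    SELECT sale_id, sale_dt, store_id, amount
    FROM   sales_stg
    WHERE  sale_dt = DATE '2015-04-08';

Running two copies of a statement like this from separate sessions against the same target simply means the second waits on the first session's write lock, which is the serialisation behaviour described above.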