Archives of the TeradataForum
Message Posted: Wed, 26 Aug 2015 @ 14:18:27 GMT
An increased execution time of 6x seems very high. There were many enhancements between those releases, but the only one I can think of that might noticeably impact query execution time is the use of worker threads. That would add some time, since a new thread is created for each statement, and you have 55,000 of them. Threads are good for ensuring that the UI remains responsive while a query is running, but creating 55,000 of them is not so good. However, I would not expect that to add more than a few minutes to the execution time. If this was a single test run, could there have been locking or network issues during that run?

Executing 55,000 insert statements in a single query is definitely not what the product is designed for. There are two alternatives you could try:

The recommended way is to use Import mode: a single parameterized Insert statement with an import file containing the data. Using the maximum batch size with 4-column, 70-byte rows, it can load up to 4,000 rows/sec. (I don't know your number of columns or average row size; larger will be slower.)

The other way is to use 'Execute Parallel', in which the entire query is submitted as a single request. However, unless these are very small insert statements, the query would have to be broken into multiple pieces, each piece less than 1 MB in size.

Both of these alternatives will be considerably faster, maybe 1 minute for Import. (If you already have the data in the table, you can create the Import file using 'Export', and you can create the parameterized Insert statement using the Generate SQL > Insert (Import) context menu on the table node of the Explorer Tree.)

Mike Dempsey
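To illustrate the Import-mode idea (one parameterized Insert executed once per data row, rather than 55,000 separate Insert statements), here is a minimal sketch. It uses Python's built-in sqlite3 as a stand-in database, since the concept is the same; the table name, columns, and sample data are made up for illustration. SQL Assistant's Import mode likewise uses `?` parameter markers.

```python
import csv
import io
import sqlite3

# Hypothetical 4-column data, standing in for an Import file (CSV).
csv_text = "a,1,x,10\nb,2,y,20\nc,3,z,30\n"
rows = list(csv.reader(io.StringIO(csv_text)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (c1 TEXT, c2 INTEGER, c3 TEXT, c4 INTEGER)")

# A single parameterized INSERT, executed once per data row.
# This is the same pattern as Import mode: one statement, many rows.
conn.executemany("INSERT INTO t VALUES (?, ?, ?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 3
```

The database driver prepares the statement once and binds each row's values, which is why this pattern is so much faster than submitting thousands of literal Insert statements.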
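For the Execute Parallel route, the splitting into sub-1 MB pieces would have to be done by hand or by a small script. A sketch of that splitting step, assuming the statements are already available individually without trailing semicolons (the function name and the exact byte accounting are mine, not part of SQL Assistant):

```python
def split_script(statements, max_bytes):
    """Group SQL statements into pieces, each at most max_bytes when
    joined with ';' and given a trailing ';', never splitting a
    single statement. A lone statement larger than max_bytes still
    becomes its own (oversized) piece.
    """
    pieces, current, size = [], [], 0
    for stmt in statements:
        # +1 accounts for the ';' that follows each statement.
        stmt_len = len(stmt.encode("utf-8")) + 1
        if current and size + stmt_len > max_bytes:
            pieces.append(";".join(current) + ";")
            current, size = [], 0
        current.append(stmt)
        size += stmt_len
    if current:
        pieces.append(";".join(current) + ";")
    return pieces

# Example: 1,000 small inserts split into pieces of at most 5,000 bytes.
stmts = [f"INSERT INTO t VALUES ({i})" for i in range(1000)]
pieces = split_script(stmts, 5000)
print(all(len(p.encode("utf-8")) <= 5000 for p in pieces))  # True
```

Each piece could then be submitted as its own request; with a 1 MB limit you would pass `max_bytes=1_000_000`.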
Copyright 2016 - All Rights Reserved
Last Modified: 15 Jun 2023