Archives of the TeradataForum
Message Posted: Wed, 23 Jun 2011 @ 00:56:18 GMT
I'm not sure about this, but here is my guess.

On the first system, something is indicating that the table has a significant number of rows with COLUMN2 = 2. This could be because a statistic has been collected on COLUMN2, or, if there is no statistic, because random AMP sampling indicates that a significant number of rows would be deleted. So, in the first explain, the optimizer wants to spoil the parser cache for the table in case any other cached plans reference it, since those plans were generated against data that would be "significantly" changed by the delete.

In the second explain, the optimizer believes the table does not have a significant number of rows where COLUMN2 = 2. Therefore, it doesn't need to spoil the parser cache, because any other cached plans for the table would not be affected by the delete.

If you changed the delete to a "SELECT *" and did an explain on that, you could see how many rows the optimizer thinks it will be deleting. I would bet that the estimated row count is higher in the first explain than in the second.

In either case, the spoiling of the parser's dictionary cache should not significantly impact the response time for the query.

Thanks,
Barry
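
PS: a minimal sketch of the checks described above, assuming a hypothetical table name MyTable since the actual table name isn't shown in the thread; COLUMN2 comes from the original query.

    -- How many rows the optimizer estimates the delete would touch:
    EXPLAIN
    SELECT *
    FROM MyTable
    WHERE COLUMN2 = 2;

    -- The plan for the delete itself, for comparing the two explains:
    EXPLAIN
    DELETE FROM MyTable
    WHERE COLUMN2 = 2;

    -- If the estimate is coming from a collected statistic rather than
    -- random AMP sampling, the statistic would have been gathered with
    -- something like:
    COLLECT STATISTICS ON MyTable COLUMN COLUMN2;

Comparing the estimated row counts in the two SELECT explains should show whether the first system really expects a "significant" fraction of the table to be deleted.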