Archives of the TeradataForum
Message Posted: Wed, 03 May 2006 @ 17:12:36 GMT
Actually, when you create and use macros, the query plan is not produced until the macro is executed. What is stored in the database is the SQL text that makes up the macro. The advantages of macros are that you don't have to rely on the end user re-coding the SQL, you can more easily enforce security if needed, and if the SQL is lengthy you avoid the network overhead of passing it to the database over and over. Also, the SQL is executed as a multi-statement request, so you may get more parallelism in the execution of the SQL statements.

The first time a macro is executed, its plan may (or may not) remain in the statement cache in the PE. If the plan has been cached (usually because parameterized SQL has been used), then that plan can be re-used by other executions of the same macro on that same PE. There are several reasons why the statement cache may not contain the plan for a particular macro, causing the macro to be re-optimized when it is re-executed:

1. The macro has not been run recently and the plan has been aged out.

2. Previous executions of the same macro originated from a different PE, so the macro is new to this PE.

3. A DDL or a COLLECT STATISTICS statement that references objects contained within the macro has been issued since that macro's plan was placed in that PE's statement cache, causing the plan to be flushed from the cache.

So, yes, macros offer the advantage of not having to re-parse the plan, some of the time, if designed appropriately. But it doesn't work exactly like compiling a Cobol program.

Thanks,

-Carrie
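As a minimal sketch of the parameterized-SQL point above: the macro, table, and column names below (GetRegionSales, Sales, region, sale_amt) are hypothetical and only illustrate how a parameterized Teradata macro is defined once and then executed with short EXEC requests.

    CREATE MACRO GetRegionSales (region_id INTEGER) AS (
        /* The SQL text below is what is stored in the database;
           the plan is not produced until the macro is executed. */
        SELECT SUM(sale_amt)
        FROM   Sales
        WHERE  region = :region_id;
    );

    /* Each execution sends only this short request over the network.
       Because the macro is parameterized, the plan cached on the PE
       can be re-used for executions with different parameter values. */
    EXEC GetRegionSales (7);
    EXEC GetRegionSales (12);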