Archives of the TeradataForum
Message Posted: Thu, 17 Jul 2003 @ 17:12:24 GMT
Subj: Re: Stats on v2r5
From: Ballinger, Carrie
I wanted to chime in on an old thread from June 26, concerning the recollection of statistics at the table level in V2R5.
Below is an explain, run on V2R5, that illustrates what the optimizer does when you refresh statistics on a table that previously had
stats collected on multiple columns/indexes.
Although all stats on the table are refreshed within the same single request, a separate scan or index search is performed for
each set of stats. As the explain shows, the individual stats are collected serially, not in parallel, but they share a common
final spool, and a single update is made to the dictionary tables.
The effort to produce stats is different from performing an aggregation, with different resource characteristics and trade-offs.
Consequently, performing the collection of stats independently (with some overlap in final processing) has proven to be a more balanced
approach in Teradata than trying to build all of them at one time.
Thanks, --Carrie
---------
Statistics had previously been collected on 1 index (the NUPI) and 2 separate columns.
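For illustration, the original collections might have looked something like the following sketch; the index and column names here
are invented for the example (only the table name comes from the explain below):

-- hypothetical original collections (basket_id, item_id, sale_dt are assumed names)
collect statistics on mkt_basket_dtl index (basket_id);   -- the NUPI
collect statistics on mkt_basket_dtl column item_id;
collect statistics on mkt_basket_dtl column sale_dt;

The single table-level request, shown here with EXPLAIN, then re-collects all three: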
explain collect statistics on mkt_basket_dtl;
Explanation
--------------------------------------------------
  1) First, we lock CAB.mkt_basket_dtl for access.
  2) Next, we do a COLLECT STATISTICS step from CAB.mkt_basket_dtl by way of an all-rows scan into Spool 1 (all_amps), which is built
     locally on the AMPs.
  3) Then we save the UPDATED STATISTICS from Spool 1 (Last Use) into Spool 3, which is built locally on the AMP derived from DBC.TVFields
     by way of the primary index.
  4) We do a COLLECT STATISTICS step from CAB.mkt_basket_dtl by way of a traversal of index # 4 without accessing the base table into
     Spool 6 (all_amps), which is built locally on the AMPs.
  5) Then we save the UPDATED STATISTICS from Spool 6 (Last Use) into Spool 3, which is built locally on the AMP derived from DBC.TVFields
     by way of the primary index.
  6) We do a COLLECT STATISTICS step from CAB.mkt_basket_dtl by way of an all-rows scan into Spool 9 (all_amps), which is built locally on
     the AMPs.
  7) Then we save the UPDATED STATISTICS from Spool 9 (Last Use) into Spool 3, which is built locally on the AMP derived from DBC.TVFields
     by way of the primary index.
  8) We lock DBC.TVFields for write on a RowHash.
  9) We do a single-AMP MERGE DELETE to DBC.TVFields from Spool 3 (Last Use) by way of a RowHash match scan. New updated rows are built
     and the result goes into Spool 10 (one-amp), which is built locally on the AMPs.
 10) We do a single-AMP MERGE into DBC.TVFields from Spool 10 (Last Use).
 11) We spoil the parser's dictionary cache for the table.
 12) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
 -> No rows are returned to the user as the result of statement 1.
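As a follow-up, one way to confirm that all the stats were refreshed together in that single request is HELP STATISTICS, whose
output includes the date and time each statistic was last collected (a minimal sketch, using the database and table names from
the explain above):

help statistics CAB.mkt_basket_dtl;

After the table-level refresh, the collection date and time reported for the index and both columns should all match.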