If you run an EXPLAIN on a table-level statistics collection, you can see each separate pass over the table. Each column or index is collected in its own sequential pass. There are some savings at the end because the passes share a common spool file, so only a single update is made to the dictionary tables.
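A plan like the one below can be produced by explaining a table-level recollection. This is a minimal sketch, assuming statistics were previously defined on CAB.customer; the statement recollects every statistic already defined on the table:

    EXPLAIN
    COLLECT STATISTICS ON CAB.customer;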
Explanation
-----------------------------------------------------------------------
  1) First, we lock CAB.customer for access.
  2) Next, we do a COLLECT STATISTICS step from CAB.customer by way of
     an all-rows scan into Spool 1 (all_amps), which is built locally
     on the AMPs.
  3) Then we save the UPDATED STATISTICS from Spool 1 (Last Use) into
     Spool 3, which is built locally on the AMP derived from
     DBC.TVFields by way of the primary index.
  4) We do a COLLECT STATISTICS step from CAB.customer by way of a
     traversal of index # 8 without accessing the base table into
     Spool 4 (all_amps), which is built locally on the AMPs.
  5) Then we save the UPDATED STATISTICS from Spool 4 (Last Use) into
     Spool 3, which is built locally on the AMP derived from
     DBC.TVFields by way of the primary index.
  6) We do a COLLECT STATISTICS step from CAB.customer by way of a
     traversal of index # 4 without accessing the base table into
     Spool 8 (all_amps), which is built locally on the AMPs.
  7) Then we save the UPDATED STATISTICS from Spool 8 (Last Use) into
     Spool 3, which is built locally on the AMP derived from
     DBC.TVFields by way of the primary index.
  8) We lock DBC.TVFields for write on a RowHash.
  9) We do a single-AMP MERGE DELETE to DBC.TVFields from Spool 3
     (Last Use) by way of a RowHash match scan. New updated rows are
     built and the result goes into Spool 9 (one-amp), which is built
     locally on the AMPs.
 10) We do a single-AMP MERGE into DBC.TVFields from Spool 9 (Last
     Use).
 11) We spoil the parser's dictionary cache for the table.
 12) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
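By contrast, recollecting a single statistic produces only one COLLECT STATISTICS pass over the table. A minimal sketch for comparison; the column name customer_number is assumed for illustration and does not appear in the plan above:

    EXPLAIN
    COLLECT STATISTICS ON CAB.customer
    COLUMN customer_number;   -- hypothetical column name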