Archives of the TeradataForum
Message Posted: Tue, 20 Nov 2012 @ 17:53:09 GMT
Not just "not helpful", they may be harmful.
Statistics should be collected on a case-by-case basis, meaning that, in theory, you should check the execution plan after each statistic you collect to verify whether its impact is positive, neutral or, in the case of columns hitting the histogram limit, negative. Unfortunately, you have to drop undesired statistics the same way: case by case. There's no easy solution, just a lot of work.
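That case-by-case workflow might look like the sketch below. The table and column names are made up for illustration, and the syntax is the classic pre-TD14 form; adjust for your release:

```sql
-- Baseline: capture the plan before collecting the statistic.
EXPLAIN SELECT * FROM Sales.Orders WHERE order_status = 'OPEN';

-- Collect the candidate statistic.
COLLECT STATISTICS COLUMN (order_status) ON Sales.Orders;

-- Re-check: did the estimates and join plan improve?
EXPLAIN SELECT * FROM Sales.Orders WHERE order_status = 'OPEN';

-- If the plan got worse (e.g. a skewed column hitting the histogram
-- interval limit), drop the statistic again, one at a time.
DROP STATISTICS COLUMN (order_status) ON Sales.Orders;
```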
I don't know your exact situation, but it almost looks like you have collected every statistic possible. If that's the case, I would first take the worst-performing queries and check whether their performance is degraded by these "misleading" statistics.
Then, once you have put out the biggest fires, you can look for statistics that are collected but probably unused. Use the DBC tables and the DBQL logs to isolate the columns that are not relevant for statistics collection (not used in joins or predicates, not indexed, etc.).
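One possible starting point for that DBQL check is below. It assumes object-level query logging has been enabled (BEGIN QUERY LOGGING WITH OBJECTS ON ...) for long enough to be representative; exact view and column names vary by Teradata release, so treat this as a sketch:

```sql
-- Count how often each column is referenced by logged queries.
-- Columns that have collected statistics but never show up here
-- are candidates for dropping those statistics.
SELECT  ObjectDatabaseName,
        ObjectTableName,
        ObjectColumnName,
        COUNT(*) AS RefCount
FROM    DBC.DBQLObjTbl
WHERE   ObjectType = 'Col'
GROUP BY 1, 2, 3
ORDER BY RefCount DESC;
```

Cross-reference the result against HELP STATISTICS output (or the DBC statistics views on newer releases) for each table before dropping anything.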
Or just drop everything and start from scratch. But consider all the downsides of this approach and make a clear action plan before you take any radical step. A slow query is still better than a query that never ends.
You have a long way to go, good luck!
|Copyright 2016 - All Rights Reserved|
|Last Modified: 23 Jun 2019|