
Archives of the TeradataForum

Message Posted: Thu, 20 Jul 2000 @ 20:19:34 GMT


Subj:   Re: Saturation
From:   Rolf Hanusa

Saturation is a fact of life regardless of the processing platform or DBMS you use. We currently have 68 nodes and 680 AMPs, and we can "saturate" the machine when the right combination of things exists. Teradata is more forgiving than other systems I have worked with, but it is not invulnerable to poor performance conditions.

Before I give you some suggestions, let me first discuss my definition of "saturation," which may be different from yours. Saturation, in my opinion, is not seeing 100% CPU utilization. Teradata will take advantage of all available cycles whether it is processing one request or thousands. Saturation means that for a given workload, there is nothing that can be done to meet client requirements and expectations short of upgrading the machine. Client expectations, of course, must be managed, and this can be very difficult at times. But it is not realistic for a client to expect the same level of service at 3pm, when the system is busiest, as at 4am, when usage is lightest (on our system). That is their expectation, but it is not realistic. You can offer to reduce their performance at 4am to match their 3pm levels if they would like more consistency (I suspect that this offer won't be accepted).

We have a very user-friendly system; that is, it is designed to give our clients maximum flexibility, availability, and performance. We do not use query governors, and we give our users an average of 25-30GB of spool. Most of the time, our users are quite happy with the system's performance (they remember what it was like on other systems), but at times an extremely heavy workload (usually batch utilities processing in the online day) or an occasional bad query will severely degrade performance. We have a "canary" program that monitors system performance by executing a specific set of pre-planned queries every 5 minutes or so. If performance begins to degrade, our DBAs and system administrators are paged so that they can immediately take corrective action.
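The post doesn't show the canary program's internals, but the basic idea can be sketched in a few lines of Python. Everything here (function names, thresholds, the stub query runner) is illustrative, not taken from the original system; a real version would execute the pre-planned queries against the DBMS and hook the result into a paging system:

```python
import time

def check_canary(run_query, queries, baselines, factor=3.0):
    """Run each pre-planned canary query and return the names of those
    whose elapsed time exceeds `factor` times the recorded baseline.

    run_query : callable that executes one SQL string against the DBMS
    queries   : dict of {name: sql_text}
    baselines : dict of {name: baseline_seconds}
    """
    degraded = []
    for name, sql in queries.items():
        start = time.monotonic()
        run_query(sql)                       # execute against the database
        elapsed = time.monotonic() - start
        if elapsed > factor * baselines[name]:
            degraded.append(name)
    return degraded
```

In production this would sit in a loop that sleeps 5 minutes between passes and pages the on-call DBA whenever the returned list is non-empty.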

Here are a few of the things we look for:

- Batch jobs (utilities) running in the online day. We do control this, but sometimes they exceed our limits.

- Missing or stale STATISTICS, especially on small reference tables.

- High levels of blocking and deadlocks. This is usually indicative of high volumes of GRANTs and DDL in the online day. Avoid PUBLIC grants and keep your AccessRights table cleaned up.

- Using Teradata Manager and PMON, we look for jobs with excessive CPU, I/O, and high transaction volumes. These are candidates for immediate review. We can take corrective actions such as canceling queries or changing priorities, but we usually try to contact the offending user/developer before doing so. We will review (EXPLAIN) client queries and suggest changes when necessary. Poorly designed applications can be required to undergo a design review and to implement suggested changes.

- System parameters can have a major impact on system performance. Have NCR perform a system audit to adjust these parameters to your specific usage patterns.
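Several of these checks lend themselves to scripting. As one illustrative sketch of the stale-statistics check (the function, the 30-day threshold, and the input shape are my assumptions, not from the post): given rows of table, column, and last collection date (e.g. gathered from HELP STATISTICS output), emit the COLLECT STATISTICS statements needed to refresh anything too old:

```python
from datetime import date

def stale_stats_ddl(rows, today, max_age_days=30):
    """Return COLLECT STATISTICS statements for stats that are missing
    (collected date is None) or older than `max_age_days`.

    rows : iterable of (table_name, column_name, last_collected_date)
    """
    stale = []
    for table, column, collected in rows:
        if collected is None or (today - collected).days > max_age_days:
            # Teradata syntax for recollecting single-column statistics
            stale.append(f"COLLECT STATISTICS ON {table} COLUMN ({column});")
    return stale
```

The generated statements could then be reviewed and run in an off-peak batch window rather than during the online day.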

The best offense against system "abuse" (intentional or not) is to establish system standards and guidelines, and to make sure clients, and especially developers, are properly trained.

Hope that helps. Maintaining maximum performance is an ongoing challenge.

Copyright for the TeradataForum (TDATA-L), Manta BlueSky    
Copyright 2016 - All Rights Reserved    
Last Modified: 15 Jun 2023