Archives of the TeradataForum
Message Posted: Sat, 24 Apr 2004 @ 08:41:48 GMT
The issue of V2R5 and memory requirements sometimes gets misunderstood. Below is some information that I hope is helpful to everyone. In a nutshell, upgrading to V2R5 alone does not require adding more memory. Each system is different and must be looked at independently to determine if additional memory can benefit the performance of the system. This is independent of the Teradata database version, as there are V2R4.x systems out there that can benefit from additional memory.
Note in the text below that when referring to memory we say 'recommended' not 'required.'
Teradata can run on a system with 1GB of memory per MPP node, but we do not recommend this because experience tells us that such a system would be unlikely to provide adequate performance. Configurations of 2, 3, or 4GB per node are also available, and these are what is normally installed.
Existing customers upgrading to V2R5.x from V2R4.x
Before upgrading from V2R4.x to V2R5.x, review memory usage under V2R4.x, especially for performance-critical applications at peak times. If there is 150MB or more of free memory available per node, there should be no need to add memory before upgrading. If there is less than 150MB of free memory per node, upgrading to V2R5.x may degrade performance, even if none of the optional new features in V2R5.x are to be implemented. In this case, there are two options to explore to avoid significant performance degradation: either reduce FSG cache enough to leave 150MB of free memory, or add memory to the system. It would be prudent to investigate the impact of the reduced FSG cache on overall system performance while still running V2R4.x, before implementing the upgrade to V2R5.x. If the V2R4.x system continues to provide the required overall performance with the reduced FSG cache, then there is no need to add memory. However, if reducing the cache degrades performance by an unacceptable amount, we recommend adding 1GB of memory to each node.
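The pre-upgrade decision procedure above can be sketched as a small function. This is purely illustrative: the 150MB threshold and 1GB recommendation come from the text, but the function name and inputs are assumptions, not any Teradata tool or API.

```python
# Minimal sketch of the V2R4.x -> V2R5.x pre-upgrade memory check.
# Hypothetical helper; inputs would come from your own memory monitoring.

FREE_MEMORY_THRESHOLD_MB = 150  # minimum free memory per node before upgrading


def pre_upgrade_recommendation(free_mb_per_node, fsg_reduction_acceptable):
    """Return the recommended action before upgrading to V2R5.x.

    free_mb_per_node: observed free memory per node at peak times (MB).
    fsg_reduction_acceptable: True if a trial FSG cache reduction on the
        V2R4.x system still delivered acceptable overall performance.
    """
    if free_mb_per_node >= FREE_MEMORY_THRESHOLD_MB:
        return "no additional memory needed"
    if fsg_reduction_acceptable:
        # Reducing FSG cache frees the needed 150MB without an
        # unacceptable performance hit on V2R4.x.
        return "reduce FSG cache"
    # Reduced FSG cache hurt performance too much: add memory instead.
    return "add 1GB memory per node"
```

For example, a node showing only 100MB free at peak, where a trial FSG cache reduction proved unacceptable, would fall into the "add 1GB memory per node" case.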
Existing customer with V2R5.x introducing use of additional memory consuming features
Whether you intend to use new features of V2R5.x or you are just now introducing the use of features already available prior to V2R5.x, be aware that certain features may require more memory in order to show their optimal performance benefit. Of particular note are:
* LOBs and UDFs (which will become available in V2R5.1)
* PPI and Value-List Compression (first available with V2R5.0)
* Join Index, Hash-Join, Stored Procedures and 128K datablocks (available prior to V2R5.0)
While each of these features will function, and in most instances even show a performance gain, without additional memory, the gain might be countered by the impacts of working within a fixed-size memory. In turn, you may experience more segment swaps and incur additional swap physical disk I/O. To counter this you can lower the FSG cache percent. However, this in turn may cause fewer cache hits on table data and so cause a different type of additional physical disk I/O. Generally, additional I/O on table data is not as severe a performance issue as swapping I/O, but it can still have a measurable impact. Talk to your local support team for help in monitoring FSG cache memory.
New customers or new footprints in existing customers
For systems that will be deployed to support a traditional data warehousing workload, i.e. strategic decision support with bulk loading of data, 3GB is the recommended memory per node. However, you might want to take the following into account: upgrading memory to 4GB later would require system downtime and potentially part swaps, both of which could be more costly than simply purchasing 4GB initially and never needing to worry about memory size again. If you can afford to take a long-term view, we recommend installing 4GB.
For systems that will be deployed to support an active data warehouse, which by definition includes both traditional data warehousing with bulk data loading and tactical/event-driven decision making with continuous feeding of data, 4GB is the recommended memory per node.
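The sizing guidance for new systems can be summarized in a short sketch. The gigabyte figures and workload categories come from the text above; the function itself and its parameter names are illustrative assumptions, not a vendor sizing tool.

```python
# Hypothetical sketch of the per-node memory sizing guidance for new
# systems or new footprints. Figures are from the guidance above.

def recommended_memory_gb(workload, long_term_view=False):
    """Recommended memory per MPP node, in GB, for a new deployment.

    workload: "traditional" (strategic DSS with bulk loading) or
              "active" (adds tactical/event-driven work with continuous feeds).
    long_term_view: for traditional workloads, buy 4GB up front to avoid
        a later upgrade's downtime and part swaps.
    """
    if workload == "active":
        return 4
    if workload == "traditional":
        return 4 if long_term_view else 3
    raise ValueError("unknown workload: " + workload)
```

So a traditional footprint sized conservatively gets 3GB per node, while either an active data warehouse or a traditional one planned with the long view gets 4GB.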
|Copyright 2016 - All Rights Reserved|
|Last Modified: 28 Jun 2020|