Archives of the TeradataForum

Message Posted: Fri, 28 Feb 2003 @ 23:19:24 GMT


     
 


Subj:   Re: Need help on Teradata Database design
 
From:   Kohut, Eric J

You leave out a lot of things, like what you want those 500 connections to do, etc.

Also, with 5 or 6 PEs on a single node you might expect some contention if these sessions are doing much parsing-intensive work.

Also, the very large disks that you have may be a little counterproductive depending on your workload. Very large disks, while cheap, are not always the best for throughput.

There are also additional questions, like: What version of Teradata do you have? How much data do you have? What applications do you intend to run? How many applications? What are the workload types?

You'll have a lot of challenges. We have an entire team of people in our development area who work on balancing all of the various potential system bottlenecks to produce a series of configurations that deliver reasonable performance results. If I were you, I would try to work with someone to help you figure this out. Unfortunately, that will cost you money.

That said, here are some guidelines.

On a single-node system, we would normally configure only about 600 GB of space usable by the RDBMS. As a result, I would expect this to be about 1.2 TB of space before RAID. We would normally have this in a few arrays (or likely in a much larger disk subsystem with a lot of I/O capacity) to spread the I/O very thin across a few different array subsystems. Also, by increasing the number of disks and keeping their size low, you'd have more spindles to spread the work across.
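
To make the arithmetic concrete, here is a small sketch. The 600 GB usable figure is from above; the 2x raw-to-usable factor is only what the 600 GB -> 1.2 TB numbers imply (i.e., mirroring), and the candidate drive sizes are illustrative only:

    # Sizing sketch. The 600 GB usable figure comes from the paragraph above;
    # the 2x raw-to-usable factor is implied by the 600 GB -> 1.2 TB numbers
    # (i.e., mirroring), and the candidate drive sizes are illustrative only.
    import math

    def raw_and_spindles(usable_gb, disk_gb, raid_factor=2.0):
        """Return (raw GB before RAID, number of physical drives needed)."""
        raw_gb = usable_gb * raid_factor
        return raw_gb, math.ceil(raw_gb / disk_gb)

    usable = 600  # GB usable by the RDBMS on a single node
    for disk in (36, 72, 146):  # hypothetical drive sizes in GB
        raw, spindles = raw_and_spindles(usable, disk)
        print(f"{disk:>4} GB drives: {raw/1000:.1f} TB raw, {spindles} spindles")

Smaller drives obviously mean more spindles for the same usable capacity, which is the point about spreading the work.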

As a result, you should probably use at least both of your arrays, and ideally more if you want to max out your system. However, you probably don't need all of the disk capacity that you have available. If you use the single-array approach, you'll likely be I/O bound, depending on the bandwidth of your array and array controllers and what your users are doing. To support one of our full nodes you'd probably need about 160 MB/sec (at 50% reads) to 260 MB/sec (at 100% reads), depending on the mix of reads vs. writes.
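
As a rough illustration of those numbers (the two endpoints are the figures above; treating the requirement as a straight line between them is just my simplification):

    # Bandwidth sketch. The 160 MB/s (50% reads) and 260 MB/s (100% reads)
    # endpoints come from the estimate above; linear interpolation between
    # them is only an assumption for illustration.
    def required_mb_per_sec(read_fraction):
        """Per-node I/O bandwidth estimate for a read fraction between 0.5 and 1.0."""
        t = (read_fraction - 0.5) / (1.0 - 0.5)
        return 160.0 + t * (260.0 - 160.0)

    for frac in (0.5, 0.75, 0.9, 1.0):
        print(f"{frac:.0%} reads -> ~{required_mb_per_sec(frac):.0f} MB/sec")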

One solution may be to go ahead and define everything, even filling up the disks in your 2nd array, but reserve a lot of the space (possibly 1/2 for 15,000 RPM drives, or more for slower drives, depending on disk rotational speed) so that it is never used by the RDBMS. (You can't guarantee it won't be used for spool, but at least you won't store user data in it.)

NCR is starting to use 15,000 RPM disks as opposed to 10,000 RPM disks. This will help us support somewhat larger drives, probably 72 GB, though I'm not sure about 146 GB. Also, the drives we use need to be very robust. We are very hard on our drives, since we rarely access data in disk order but rather in random order.

Finally, we are using only 2 of the newest CPUs in our latest nodes due to other potential bottlenecks (memory, bus, etc.) in the rest of the hardware when using 4 of the very high-speed CPUs. The I/O numbers that I gave you are for 2 (2.4 - 2.8 GHz) CPUs. It will be hard to suggest how you should balance this, since the reason we went with 2 is that beyond that point only some proportion of each additional CPU can be fully utilized. This has been an SMP issue for many years. The CPUs are now fast enough for the limitation to be felt in systems with as few as 4 CPUs.
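
To illustrate the diminishing-returns point (the 70% figure for the extra CPUs is made up purely to show the shape of the argument, not a measured NCR or Teradata number):

    # SMP scaling illustration. The idea that CPUs beyond 2 are only partly
    # usable comes from the paragraph above; the 0.7 utilization factor is
    # an invented number for illustration, not a measured figure.
    def effective_cpus(n_cpus, extra_utilization=0.7):
        """Effective CPU capacity if CPUs beyond the first 2 are only partly usable."""
        return min(n_cpus, 2) + max(n_cpus - 2, 0) * extra_utilization

    for n in (2, 4, 8):
        print(f"{n} CPUs -> ~{effective_cpus(n):.1f} effective CPUs")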

There is a lot of information here and a lot more to deal with if you expect this to be an optimized system.


Good Luck,

Eric

EJK
Eric J. Kohut
Senior Solutions Consultant - Teradata Solutions Group - Retail
Certified Teradata Master
NCR Corp.



     