Archives of the TeradataForum
Message Posted: Sun, 27 Feb 2000 @ 17:54:29 GMT
Subj: Re: Teradata OS
From: John Grass
David Nasser wrote:
| I'm going to expand on this a bit more.
| Teradata installations are finely balanced between the CPU, memory, interconnect bandwidth, DASD connect bandwidth, and DASD I/O
| throughput. In essence, a well-tuned (normal) Teradata installation will not be constrained on any of these components.
| With all due respect, all computer systems have potential and/or actual bottlenecks. Just hand a system over to a group of users
| ...
True. Aggregate queries will use more CPU than I/O. Joins between two fact tables or Cartesian product joins will impact the
interconnect. However, the database will handle these workloads without bringing the entire system to its knees. The idea is that all the
hardware is working at peak capacity: that no major components are idle, effectively wasting money. This is especially true for
mainframes, where a well-managed CEC will almost always run at 95% utilization or higher.
Also remember that if one has a very heavy CPU-intensive workload, one can always change the CPU-to-DASD ratio (by adding nodes) to
accommodate this workload "bottleneck."
| underlying OS and the underlying CPU architecture have little effect on the performance of the system. Rather, they have an
| effect on the COST of the entire solution platform.
| You take the position that performance is independent of the OS?? This po' boy has been trained otherwise.
Actually, performance is dependent on a good match between the OS, the hardware, and the application software running on it.
Mainframes are great at OLTP. Would you run multiple full-table-scan DB2 queries on a heavily loaded OLTP mainframe? Is this the fault of
the OS?
As long as the OS is adequate to ensure the hardware is well used by the application software, we are fine. NCR's UNIX variant -
SVR4 - has been around for a long time and is very robust. It is not, however, widely used in the industry and thus would not be a good
choice for an enterprise solution across all platforms.
Tandem's NonStop OS is great (the best?) for financial OLTP but, again, is not widely used in the industry and thus would not be a good
choice for an enterprise solution across all platforms.
The proof is in the pudding. When it comes to competitive benchmarks for DSS, Teradata and NCR's SVR4 MP-RAS UNIX usually come out on
top.
| From the mainframe, all one really sees is the performance of the SQL and batch maintenance jobs (both fully parallelized)
| submitted across the channel. One does not see UNIX, NT, Solaris, 32-bit, 64-bit, or whatever.
I'm not really concerned with the mainframe. It is "familiar territory."
| Just this week, we had a query blow up, eat 160 GB of spool space. First thing we did was forget the mainframe, test with
| NT/Queryman. Jury is still out ... hope it's something silly ...
Probably. Take a look at the EXPLAIN output (it is in English and is produced
without having to execute the query) and look for a "product join" or a
join between two fact tables.
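
For example, a quick check from Queryman might look like the sketch below. The table and column names (daily_sales, store_dim, store_id, sales_amt) are made up for illustration; only the EXPLAIN prefix is actual Teradata syntax.

    -- Prefix the suspect query with EXPLAIN: Teradata returns the English
    -- plan text without running the query or consuming spool.
    EXPLAIN
    SELECT   s.store_id
           , SUM(s.sales_amt)
    FROM     daily_sales s
    JOIN     store_dim d
      ON     s.store_id = d.store_id     -- drop or mistype this ON clause and the
    GROUP BY s.store_id;                 -- plan will show a "product join" step

If the plan text contains "product join" with an enormous estimated row count, that step is almost certainly what ate the spool.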