Archives of the TeradataForum
Message Posted: Sun, 25 Jul 1999 @ 19:24:59 GMT
This situation is close to what Bob describes with his use of a NUSI (?), which is an all-AMP operation.
I have a few questions.
First, is single-query, sub-second performance the real requirement? How scalable does the performance need to be? Is this one user, or will it be hundreds or thousands? Also, will this system be OLTP only, or will it mix OLTP with DSS or OLAP? I know that other RDBMSs are good at OLTP, but I've not seen any that can mix OLTP with even remotely complex OLAP/DSS, and they also have their scalability limits. You note that this runs in .5 seconds on Informix. How many will run in .5 seconds: 1, 5, 10, 25, 50? Teradata is a throughput system for this type of work. At least 30-45 of these should run in 2-3 seconds on Teradata, and possibly many, many more.
Also, you should not try to position Teradata as an OLTP engine by itself. However, if you are trying to mix OLTP with OLAP/DSS, or if you need a very high level of scalability, then you might try to position Teradata for this application.
I would also be interested in more information, such as whether statistics have been collected on all of the join columns, PIs, etc. The explain looks reasonable, but the optimizer requires information to take a better path.
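As a minimal sketch of what I mean, something like the following would give the optimizer demographics on the join path (the table and column names here are hypothetical; substitute your own):

```sql
-- Collect demographics on the PI, the join columns, and any indexed access path.
COLLECT STATISTICS ON customer COLUMN (c_custkey);   -- Primary Index column
COLLECT STATISTICS ON orders   COLUMN (o_custkey);   -- join column
COLLECT STATISTICS ON orders   INDEX  (o_orderkey);  -- indexed join path
```

Without these, the optimizer falls back on heuristics and may pick a redistribution plan when a cheaper path exists.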
There are two ways to perform OLTP on Teradata:
1. The direct use of a Primary Index to drive other PI joins.
2. The use of a Unique Secondary Index (USI) to locate a NUPI (possibly UPI) row.
Although I don't have much information on the query (Primary Index and exact user requirements), it doesn't look like you could turn this into a Unique Secondary Index lookup to find a NUPI row as written. That would be an additional way to provide fast response and high scalability: a USI access is at most a two-AMP operation and as a result is very fast and very scalable. You can execute 40-60 of these per AMP per second, assuming that no other operations require an all-AMP operation. Is this index unique, or could it be made unique with other information such as c_status or other known columns?
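If the index could be made unique by adding a known column, the pattern would look something like this (table and column names are assumptions for illustration, except c_status from the question):

```sql
-- Hypothetical: make the secondary index unique by including c_status,
-- so a single-row lookup touches at most two AMPs
-- (one for the USI subtable, one for the base row).
CREATE UNIQUE INDEX (c_custkey, c_status) ON customer;

-- A query qualifying on the full USI avoids an all-AMP operation:
SELECT c_name
FROM   customer
WHERE  c_custkey = 1001
AND    c_status  = 'A';
```

The key point is that every column of the USI must appear as an equality condition in the WHERE clause for the two-AMP path to be used.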
Also, if the current explain is the best you can do, you are going to need to remove all or most of the redistributions to make this perform at the speed you desire. To do this you'll probably need to build a join index or denormalize the tables. I'd advise the join index if you don't plan on a high volume of inserts/updates to these tables and/or you need scalability of more than 50 simultaneous transactions.
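A join index along these lines would pre-join the tables so the optimizer can answer the query without redistributing rows at run time (again, the table and column names are hypothetical):

```sql
-- Pre-joined, co-located copy of the two tables, hash-distributed on the
-- lookup key so the covered query becomes a single-AMP (or few-AMP) access.
CREATE JOIN INDEX cust_ord_ji AS
SELECT c.c_custkey, c.c_name, o.o_orderkey, o.o_orderdate
FROM   customer c
JOIN   orders   o ON c.c_custkey = o.o_custkey
PRIMARY INDEX (c_custkey);
```

The trade-off is that Teradata maintains the join index automatically on every insert/update to the base tables, which is why I'd avoid it under a high update volume.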
There are a few more options, but they are only worth pursuing once you've exhausted these.