Archives of the TeradataForum
Message Posted: Sat, 12 Jun 2004 @ 15:29:29 GMT
I have a calculation in SQL Server like this:
SUM(lu.Universe / NU.Universe) as National_Allocation
where Universe has 2 decimal places of accuracy and National_Allocation (the resulting field) is DECIMAL(38,18), i.e. 18 decimal places of accuracy.
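For illustration, here is a minimal sketch (with made-up sample values, not the poster's data) of what an exact DECIMAL(38,18) division looks like, using Python's decimal module as a stand-in for SQL Server's decimal arithmetic:

```python
from decimal import Decimal, getcontext

# Work at 38 significant digits, matching DECIMAL(38,18)'s precision.
getcontext().prec = 38

# Hypothetical sample values with 2 decimal places, as described above.
lu_universe = Decimal("14.23")
nu_universe = Decimal("21.10")

ratio = lu_universe / nu_universe
print(ratio)  # quotient carried to 38 significant digits

# Round to 18 decimal places, as a DECIMAL(38,18) result column would.
print(ratio.quantize(Decimal("1.000000000000000000")))
```

All 18 decimal places here are exact decimal digits, which is precisely what a binary FLOAT cannot guarantee.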
I have implemented this calculation in Teradata as
SUM(CAST(lu.Universe+.000000000000000000 as FLOAT) / NU.Universe) as National_Allocation
but the results in the two cases are not the same. Typical values are:

SQL Server: 0.674403981042654005
Teradata:   0.674403981042654026
I tried the following options:
1. Converting the numerator to FLOAT with 18 decimal places of accuracy
2. Converting the denominator to FLOAT with 18 decimal places of accuracy
3. Converting both to FLOAT with 18 decimal places of accuracy
One more option I tried is

SUM(CAST(lu.Universe as FLOAT) / NU.Universe) as National_Allocation

but this gives a result accurate only to about 15 decimal digits. I need all the decimal digits with absolute accuracy: since this is a summation, even a mismatch in the 17th or 18th decimal digit makes a big difference in the total.
Why does Teradata behave this way? Please share your thoughts and suggestions.
|Copyright 2016 - All Rights Reserved|
|Last Modified: 15 Jun 2023|