Archives of the TeradataForum
Message Posted: Fri, 13 Apr 2007 @ 15:09:40 GMT
Subj: Re: Merging large volumes of data into large history tables
From: Victor Sokovin
| Todd, thanks for clearing this up for me; I'm sorry about the misconception I put out to the forums. I will check with you first on new improvements next time around. Also, we should be meeting up soon in RB...
Why check with anybody before expressing your opinions? They might be interesting in their own right even if they turn out to be incorrect.
| By the way, a note about performance on Teradata and getting data IN fast... I recently had the pleasure/opportunity to work on a large multi-node system. All I can say is that MLOAD speeds were approaching 400,000 rows per second for 1.2 KB row sizes, loading into an empty table. FastLoad was approaching 600,000 rows per second into the same table.
| Internal SQL statement performance (moving data from table to table, with no constraints and empty target tables) was nearing 150,000 to 200,000 rows per second, including forced re-partitioning (based on a new key). These SQL statements were extremely complex, with all kinds of nested sub-queries, OLAP functionality, and transformations.
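For a rough sense of scale, those rates can be turned into raw byte throughput. A minimal back-of-the-envelope sketch, using only the rates and the 1.2 KB row size quoted above (the 1 KB = 1024 bytes convention is my assumption):

```python
# Rough throughput implied by the figures quoted above.
# Rates and row size come from the post; 1 KB = 1024 bytes is assumed.

ROW_BYTES = 1.2 * 1024  # ~1.2 KB per row

def mb_per_sec(rows_per_sec, row_bytes=ROW_BYTES):
    """Convert a rows-per-second rate into megabytes per second."""
    return rows_per_sec * row_bytes / (1024 * 1024)

for label, rate in [("MLOAD", 400_000),
                    ("FastLoad", 600_000),
                    ("INSERT...SELECT (low end)", 150_000),
                    ("INSERT...SELECT (high end)", 200_000)]:
    print(f"{label:>26}: {mb_per_sec(rate):7.1f} MB/s")
```

So even the "slow" INSERT...SELECT path is moving on the order of 175-235 MB/s while also re-partitioning and transforming the data, which is why comparing it directly against the load utilities is misleading.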
Right; so it sounds like comparing ML/FL and INSERT ... SELECT in this case is like comparing apples and oranges?
| I'm very impressed with the speed/throughput we had. Of course, we had 1.5 billion rows per table to deal with... so performance was paramount.
Yes, that goes without saying.
| Friends of Teradata Network
Is this a new initiative?
Regards,
Victor