Archives of the TeradataForum
Message Posted: Fri, 12 Mar 2010 @ 17:21:44 GMT
Are you using all possible BTEQ optimizations? Multiple sessions, parameter arrays, a higher PACK factor, and perhaps a NoPI target table if that makes sense for your case. For example:
    .SET SESSIONS n
    .LOGON dbc/user,pass
    .IMPORT DATA FILE=filename
    .REPEAT * PACK n
    USING (c1 INTEGER, c2 VARCHAR(n))
    INSERT INTO t1 (c1, c2) VALUES (:c1, :c2);
    .LOGOFF
    .QUIT
For smaller loads, BTEQ can have less overhead on Teradata than TPump or FastLoad, and unlike FastLoad it is not subject to the concurrent utility job limit, so more jobs can run at once. FastLoad creates and drops error tables, performs some DML against them, and uses 3 AWTs per AMP.
TPump creates and drops an error table, a log table, and macros (the error table and macros can be reused to avoid that create/drop overhead).
TPump and fastload have very different characteristics.
One of the biggest differences is that TPump issues individual SQL DML statements in parallel. The concurrency per session is driven by the PACK factor: without parameter arrays it is limited by the parallel step limit, and with parameter arrays by the AMP count. TPump can also apply DML to already-populated tables.
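A minimal TPump script along those lines might look like the sketch below. The database, table, file, and credential names are placeholders; the LOGTABLE, ERRORTABLE, and MACRODB clauses name the persistent objects that can be kept and reused across runs to avoid the create/drop overhead mentioned above.

    .LOGTABLE mydb.t1_tlog;
    .LOGON dbc/user,pass;
    .BEGIN LOAD SESSIONS n PACK p
           ERRORTABLE mydb.t1_err
           MACRODB mydb;          /* keep generated macros here for reuse */
    .LAYOUT lay1;
    .FIELD c1 * INTEGER;
    .FIELD c2 * VARCHAR(n);
    .DML LABEL ins1;
    INSERT INTO t1 (c1, c2) VALUES (:c1, :c2);
    .IMPORT INFILE filename LAYOUT lay1 APPLY ins1;
    .END LOAD;
    .LOGOFF;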
FastLoad has two phases. In phase 1, blocks of rows are sent on each session to the AMP assigned to that session at logon; that AMP converts the fields as necessary, determines the row hash, and performs buffered redistribution, with the redistribution buffer size tunable via the ReDistBufSize field in DBSCONTROL. In phase 2, the receiving AMPs sort and write out the data blocks.
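A bare-bones FastLoad script showing where the two phases fall might look like this (table, error table, and file names are placeholders): phase 1 runs while rows are being sent after BEGIN LOADING, and END LOADING triggers phase 2.

    SESSIONS n;
    LOGON dbc/user,pass;
    BEGIN LOADING t1 ERRORFILES t1_err1, t1_err2
          CHECKPOINT 1000000;
    DEFINE c1 (INTEGER), c2 (VARCHAR(n))
    FILE=filename;
    INSERT INTO t1 (c1, c2) VALUES (:c1, :c2);
    END LOADING;       /* phase 2: sort and write the data blocks */
    LOGOFF;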
There is a lot more that could be said about these. FastLoad is faster and uses fewer resources on Teradata. Both TPump and FastLoad support checkpoint restart (BTEQ does not).
|Copyright 2016 - All Rights Reserved|
|Last Modified: 23 Jun 2019|