Archives of the TeradataForum

Message Posted: Thu, 13 Jul 2006 @ 21:12:58 GMT
I assume the goal is to optimize throughput. First note that throughput is limited by the effective bottleneck, which might be Teradata capacity/contention, LAN or channel bandwidth, or client CPU or I/O capacity/contention. Once any of these bottlenecks is hit, additional sessions won't improve throughput. On each session the utility sends a block of rows and then waits for a response from Teradata. With multiple sessions, the utility sends a block on each session and then waits for any to complete. You have the optimal number of sessions when the utility never waits; that is, by the time it sends a request on the last session, some other session has completed. To put it another way, whenever the utility has a request ready to send, there is an available session to send it on.

All utilities use all amps in the data transfer phase. For FDL and MDL (FastLoad and MultiLoad), each session's requests are sent directly to an amp that was chosen for that session at logon; the PE is not involved at all after logon. These receiving amps extract individual data rows from the buffer, compute the rowhash to determine the owning amp, and do buffered redistribution. So every amp processes rows it receives from the receiving amps, and the receiving amps do the extra work. I think in terms of sessions per node. It is a sliding scale: more sessions per node on smaller systems and fewer sessions per node on larger systems. Sessions should get assigned across nodes (but I am not sure of this; I hope so). So for FastLoad, if there were one session per node there would be good balance: each node would have one receiving amp, with the other amps processing buffers sent to them by the receiving amps.

It is good to experiment. I usually start with 8 sessions and run jobs of about one minute's duration, increasing the session count by 4 each time. Generally you will see throughput increase with sessions at a decreasing rate, until more sessions don't provide more throughput. After a few tests like this you get a sense for how your system behaves and can predict the right session count for jobs you create in the future.

Note that session count can also be used to limit throughput. For example, if you run an MDL during the query window and want to limit its impact on the workload to no more than some percentage of the system, limiting the sessions is effective in limiting the throughput and thus the resource consumption rate.

TPump uses regular SQL and so delivers multi-statement requests to PEs (after startup the steps should be in the statement cache). Again, CLI should balance across gateways, and the session-assignment task should balance across PEs and, hopefully, nodes. The dispatcher then sends the requests to the owning amp. As with the other utilities, throughput increases with sessions and then levels off. If there is workload contention, it is appropriate to use high PSF settings: each TPump DML doesn't take long, so it doesn't affect other work too much. TPump can be throttled if necessary by reducing the session count (preferred) or by using the sessionstatustbl. With low/default PSF settings on a busy system, there will be so much run-to-run variation due to workload contention that subsequent runs with the same session count can vary wildly in throughput, making it difficult to figure out the optimal session count.
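To make the session experiment concrete, here is a minimal FastLoad sketch; the logon string, database, table, and file names are all hypothetical, and the only line you change between test runs is SESSIONS (which must appear before LOGON):

    SESSIONS 8;                           /* vary: 8, 12, 16, ... per test run */
    LOGON tdpid/loaduser,password;
    SET RECORD VARTEXT "|";
    DEFINE sale_id (VARCHAR(10)),
           amount  (VARCHAR(12))
        FILE = sales.dat;
    BEGIN LOADING mydb.sales_stage
        ERRORFILES mydb.sales_err1, mydb.sales_err2;
    INSERT INTO mydb.sales_stage (sale_id, amount)
        VALUES (:sale_id, :amount);
    END LOADING;
    LOGOFF;

Run each variant against the same roughly one-minute input and compare rows per second; stop adding sessions when throughput stops improving.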
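For TPump, the same knob is the SESSIONS parameter of .BEGIN LOAD, alongside PACK (the number of statements packed into each multi-statement request). A sketch, again with hypothetical names; lowering SESSIONS is also how you throttle it:

    .LOGTABLE mydb.tpump_log;
    .LOGON tdpid/loaduser,password;
    .BEGIN LOAD
        SESSIONS 4                        /* reduce to throttle, raise for throughput */
        PACK 40                           /* statements per multi-statement request */
        ERRORTABLE mydb.tpump_err;
    .LAYOUT sales_layout;
    .FIELD sale_id * VARCHAR(10);
    .FIELD amount  * VARCHAR(12);
    .DML LABEL ins_sale;
    INSERT INTO mydb.sales (sale_id, amount)
        VALUES (:sale_id, :amount);
    .IMPORT INFILE sales.dat
        LAYOUT sales_layout
        FORMAT VARTEXT '|'
        APPLY ins_sale;
    .END LOAD;
    .LOGOFF;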
Note that parameter arrays, introduced with Teradata V2R6 and the TTU 8.1 TPump, can change things: they allow a much higher pack, send all rows from a particular request to the owning amp in a single step, and can amortize or avoid transient journal and data block I/O. This can enable equivalent throughput with a lower session count. A higher pack means fewer requests in total and per session, which means fewer round trips and increases the likelihood of I/O avoidance when more than one row in a request ends up on the same amp or in the same data block.

ARC has similar considerations. The connection is directly with the amp (no PE involvement). Besides returning its own data, the amp that the session is connected to obtains blocks from the other amps via the message subsystem. The same principles apply: better throughput with more sessions up to a point, and then it levels off.

BTEQ also now exploits parameter arrays; this is mentioned in the SQL Fundamentals manual, but not explicitly in the BTEQ manual. Use of the new 'pack' keyword causes it to use parameter arrays.
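In BTEQ that looks something like the sketch below (hypothetical table and file, and an import file assumed to be in FastLoad/DATA record format); the PACK clause on .REPEAT (there is also a .SET PACK command) is what turns on parameter arrays:

    .LOGON tdpid/loaduser,password
    .IMPORT DATA FILE = sales.dat
    .REPEAT * PACK 100
    USING (sale_id CHAR(10), amount DECIMAL(8,2))
    INSERT INTO mydb.sales (sale_id, amount)
        VALUES (:sale_id, :amount);
    .LOGOFF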