Archives of the TeradataForum
Message Posted: Fri, 23 Feb 2001 @ 11:54:35 GMT
Yonina,

You say that you have two IFPs. I assume you mean two START IFP commands in your TDP startup parms, and therefore two connections (i.e. CPs) defined at the Teradata end. For a 4-node 5200 this is almost certainly going to be your bottleneck, so increasing this will undoubtedly improve the situation.

You probably need to look at your configuration in terms of the connectivity defined between the 5200 and your m/f. I'm assuming that you're using ESCON adaptors on the nodes (if not, why not?) and it sounds like they're only configured with a single CP per adaptor. You can have up to 8 CPs per adaptor (each CP requires a START IFP command in the TDP startup parms), but typically each ESCON adaptor will have two (sometimes three) CPs defined to it to provide for high throughput. That may be one way to drive more throughput (but this may also increase the MVS CPU cycles being used for a shorter period of time - you win some, you lose some!).

Other ways to check what's happening or to try and speed it up:

- Use the ResUsage data, specifically the ResHostByLink macro, to check what throughput you're getting on the channel connects. BTW, I always create a modified version of this macro to aggregate the numbers up to LogicalHostId (you need to join to dbc.logonoff to do this) - I find this more useful than having figures at the link (i.e. CP) level. There's a rough sketch of this in the P.S. below.

- Do you have a second m/f that can be connected (I realise this may not be a quick solution!)? If your ultimate bottleneck is on the m/f, then connecting a second one (not just a second TDP) may allow you to increase the overall throughput.

- Check the throughput to your tape drives (although from what I remember, Magstars are pretty quick). If that becomes your bottleneck, then try increasing the number of arcmain jobs you run (but you'll need to watch the MVS CPU utilisation again).

I hope that starts to make some sense; if not, let me know.

Cheers,
Dave
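
P.S. For the modified ResHostByLink roll-up I mentioned, the shape of the query is roughly as below. I'm writing this from memory, so treat the table and column names (dbc.ResUsageShst, the read/write KB columns, and the link-to-host join key) as placeholders and check them against the ResUsage views and dbc.logonoff columns on your release before running it:

    /* Rough sketch only - table and column names below are placeholders   */
    /* from memory, not the exact ResUsage schema; adjust to your release. */
    SELECT   hst.TheDate
           , hst.TheTime
           , map.LogicalHostId
           , SUM(hst.HostReadKB)  AS ReadKB    /* channel reads per interval   */
           , SUM(hst.HostWriteKB) AS WriteKB   /* channel writes per interval  */
    FROM     dbc.ResUsageShst hst              /* hypothetical host/link table */
    JOIN     (SELECT DISTINCT LogicalHostId,
                     IFPNo                     /* hypothetical link/CP column  */
              FROM   dbc.LogOnOff) map
      ON     hst.IFPNo = map.IFPNo             /* hypothetical link-to-host key */
    GROUP BY 1, 2, 3
    ORDER BY 1, 2, 3;

The point is simply the GROUP BY on LogicalHostId rather than on the individual link, so you see throughput per host connection instead of per CP.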
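
P.P.S. As a quick sanity check of how your sessions are spreading across the host connections, you can count logons per LogicalHostId from dbc.logonoff - something along these lines (again, the column names and the 'Logon' event literal are from memory, so verify them on your system):

    /* Quick check of logons per LogicalHostId for today - column names and */
    /* the 'Logon' event value are from memory, so verify on your system.   */
    SELECT   LogicalHostId
           , COUNT(*) AS Logons
    FROM     dbc.LogOnOff
    WHERE    LogDate = CURRENT_DATE            /* today's activity only      */
    AND      Event   = 'Logon'                 /* count logon events only    */
    GROUP BY LogicalHostId
    ORDER BY LogicalHostId;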