Archives of the TeradataForum
Message Posted: Wed, 09 Nov 2005 @ 14:18:05 GMT
Subj: Re: ARCMAIN throughput - limiting factors
From: Anomy Anom
<-- Anonymously Posted: Wednesday, November 09, 2005 09:09 -->
John Graas:
"I think the previous poster was trying to point out the single path limit between the mainframe and the backup device being written to by the backup job. That is, a single backup job can only write to one backup device. If this backup device's (e.g. tape) capacity is less than the aggregate throughput of the interconnect between the Teradata configuration and the mainframe, then this will be the limiting factor."
Was I erroneously substituting what I called a "single ESCON connection" between the mainframe and the backup device for what you describe as
the "single path limit" between the two?
Anyway, I understand the cluster backup scenario described in this and Michael McBride's most recent post. In fact, as I originally stated,
we have seen 2x the data transfer speed from Teradata when we have two backups running to different devices, as a cluster backup would do.
However, our scenario is a little different from the parallel Unix-to-dedicated-tape model Michael described. Since our backups currently go to a
mainframe, we are competing for various resources there with other users. While we could achieve modest improvement via cluster backups,
Michael's system does the job much faster than the best rate we could ever hope to achieve, even with clusters. We are probably somewhat limited
in the degree we can parallelize without impacting other mainframe workloads or tape backups. That is what is so frustrating about our architecture.
FICON does promise some improvement in throughput, but probably nothing revolutionary, given the competition for mainframe resources. BTW, thanks,
Michael, for your elaboration on parallel cluster restores. In our discussions we had noticed your earlier remark about that.
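For anyone following along, the cluster backup idea above can be sketched as two ARCMAIN jobs run concurrently, each archiving a disjoint group of AMP clusters to its own device. This is only an illustration: the database name, cluster ranges, file DDNAMEs, and logon details are hypothetical, and the exact keyword and range syntax (CLUSTER vs. CLUSTERS) should be checked against the ARC reference for your Teradata release.

```
/* Job 1: archive the first group of clusters to the first device      */
/* (ProdDb, cluster range, and ARC1 are hypothetical placeholders)     */
LOGON tdpid/backupuser,password;
ARCHIVE DATA TABLES (ProdDb) ALL,
  RELEASE LOCK,
  CLUSTERS = 0 TO 9,
  FILE = ARC1;
LOGOFF;

/* Job 2: submitted at the same time, archives the remaining clusters  */
/* to a second device (ARC2), so the two streams run in parallel       */
LOGON tdpid/backupuser,password;
ARCHIVE DATA TABLES (ProdDb) ALL,
  RELEASE LOCK,
  CLUSTERS = 10 TO 19,
  FILE = ARC2;
LOGOFF;
```

Because each job writes to its own device, the aggregate rate approaches the sum of the two device rates, which is consistent with the roughly 2x transfer speed we observed with two backups running.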
As always, thanks for your replies.