Archives of the TeradataForum
Message Posted: Tue, 11 Aug 2015 @ 16:28:00 GMT
I'm glad you got the job running again; that was probably the immediate requirement, but let's think about this a bit.
A spool requirement of more than 5 TB is a lot. (Yes, ultimately it is driven by the volume of source data, but it is still a big number.)
I'm assuming from this thread that you haven't just started using PDCR. If so, the question has to be: what has changed so that this processing now fails by running out of spool?
And if the original issue was that the PDCR database ran out of permanent space, what changed so that you suddenly hit a spool space issue? These are two different things. Of course, it may be that the spool space problem was just waiting to happen and the database space error simply hit you first.
I would suggest that a more appropriate course of action would be to:
- reset the PeakSpool figures for user PDCRAdmin (set them = CurrentSpool)
- after each of the next few runs take a snapshot of the PeakSpool for that user.
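The two steps above can be sketched against the standard DBC dictionary views. This is only a sketch: it assumes you have EXECUTE on the DBC.ClearPeakDisk macro (note it resets peak figures for all databases, not just PDCRAdmin) and SELECT on DBC.DiskSpaceV.

```sql
/* Reset peak figures (clears PeakSpool and PeakPerm for ALL databases) */
EXEC DBC.ClearPeakDisk;

/* After each of the next few runs, snapshot per-AMP spool for PDCRAdmin */
SELECT  Vproc,
        MaxSpool,
        CurrentSpool,
        PeakSpool
FROM    DBC.DiskSpaceV
WHERE   DatabaseName = 'PDCRAdmin'
ORDER BY PeakSpool DESC;
```

Keeping the top few rows of each snapshot is enough to see which AMPs, if any, are carrying the load.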
You need to see if the 'out of spool' error is:
- due to skewed processing - one or more AMPs will have a PeakSpool at the MaxSpool for the AMP, OR
- due to excessive spool usage - all/most AMPs will have high PeakSpool values.
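One way to separate the two cases with a single query (again assuming SELECT on DBC.DiskSpaceV): a MaxPeakSpool far above AvgPeakSpool points to skewed processing, while MaxPeakSpool close to AvgPeakSpool with both near the per-AMP limit points to plain excessive spool usage.

```sql
/* Compare the worst AMP against the average AMP for PDCRAdmin */
SELECT  MAX(PeakSpool)  AS MaxPeakSpool,   /* worst single AMP            */
        AVG(PeakSpool)  AS AvgPeakSpool,   /* typical AMP                 */
        MAX(MaxSpool)   AS AmpSpoolLimit,  /* per-AMP share of spool limit */
        MAX(PeakSpool) / NULLIF(AVG(PeakSpool), 0) AS SkewRatio
FROM    DBC.DiskSpaceV
WHERE   DatabaseName = 'PDCRAdmin';
```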
As per one earlier suggestion, just before you run this process, run a query to get the sizes of each DBQL table in database DBC.
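A minimal sketch of that sizing query against DBC.TableSizeV. The LIKE pattern is an assumption based on the standard DBQL table naming (DBQLogTbl, DBQLSqlTbl, etc.); adjust it to whatever your site actually logs.

```sql
/* Size of each DBQL table in DBC, largest first */
SELECT  TableName,
        SUM(CurrentPerm) / 1024**3 AS CurrentGB
FROM    DBC.TableSizeV
WHERE   DatabaseName = 'DBC'
AND     TableName LIKE 'DBQL%'
GROUP BY TableName
ORDER BY CurrentGB DESC;
```

Running this just before each PDCR run gives you the source-data volume to set against the PeakSpool snapshots.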
Ward Analytics Ltd - Information in motion (www.ward-analytics.com)