Archives of the TeradataForum
Message Posted: Fri, 24 Aug 2001 @ 06:22:13 GMT
We encountered the same problem with files over 2 GB under Unix, but are trying a different approach:
Instead of writing directly to a file, we have FastExport write to /dev/stdout and pipe it into a small Perl script that separates the log information from the data (data lines carry a special prefix) and, if necessary, splits the data across several files.
All of our Unix software can handle these multi-part files as if they were a single file.
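The original poster's splitter was written in Perl and is not shown in the post; the sketch below reimplements the same idea in Python under stated assumptions. The data-line prefix (`DATA|` here), the output naming scheme (`basename.001`, `.002`, ...), and the size cap are all hypothetical, since the post only says data is "characterized by a special prefix" and split "in several files if necessary".

```python
import sys

# Hypothetical prefix marking data lines; the post only says data
# carries "a special prefix" -- substitute whatever the export uses.
DATA_PREFIX = b"DATA|"

# Stay under the 2 GB per-file limit that motivated the workaround.
MAX_BYTES = 2 * 1024 ** 3 - 1


def split_stream(stream, out_basename, log, max_bytes=MAX_BYTES):
    """Route a FastExport stdout stream: prefixed data lines go to
    size-limited part files (out_basename.001, .002, ...); everything
    else is treated as log output and written to `log`.
    Returns the number of part files created."""
    part = 0
    written = 0
    out = None
    try:
        for line in stream:
            if line.startswith(DATA_PREFIX):
                payload = line[len(DATA_PREFIX):]
                # Open the next part file when none is open yet or the
                # current one would exceed the size cap.
                if out is None or written + len(payload) > max_bytes:
                    if out is not None:
                        out.close()
                    part += 1
                    out = open(f"{out_basename}.{part:03d}", "wb")
                    written = 0
                out.write(payload)
                written += len(payload)
            else:
                # Non-data lines are FastExport log/status messages.
                log.write(line)
    finally:
        if out is not None:
            out.close()
    return part


if __name__ == "__main__":
    # e.g.  fexp < export_script.fx | python splitter.py export_out
    split_stream(sys.stdin.buffer, sys.argv[1], sys.stderr.buffer)
```

Downstream tools can then treat the part files as one stream, e.g. `cat export_out.* | sort ...`, which matches the "multi-files as one single file" handling described above.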
We are still experimenting with this solution, and it is not entirely stable yet, but we are well on the way to getting it fixed.
Copyright 2016 - All Rights Reserved
Last Modified: 27 Dec 2016