To conserve space when backing up projects on our third-party, Oracle-based data interpretation system, I create a named pipe, hook it up to compress, and feed the backup into the pipe:
mknod <project_name>.pipe p
cat <project_name>.pipe | compress > <project_name>.dmp.Z &
owbackup <project_name> <project_name>.pipe
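For reference, here is a minimal, self-contained sketch of the same named-pipe technique, with gzip and generated data standing in for compress and the owbackup export (the file names here are illustrative, not our actual project names):

```shell
# Create the named pipe (mkfifo is equivalent to: mknod demo.pipe p)
mkfifo demo.pipe

# Background reader: compress whatever arrives on the pipe
gzip < demo.pipe > demo.dmp.gz &

# Foreground writer: stands in for owbackup writing its dump to the pipe
seq 1 100000 > demo.pipe

# Wait for gzip to drain the pipe and finish writing the archive
wait

# Verify the round trip: the last line should be 100000
gunzip -c demo.dmp.gz | tail -n 1

rm -f demo.pipe demo.dmp.gz
```

The background reader must be started before (or concurrently with) the writer; opening a FIFO for writing blocks until a reader opens it, which is why the compress side is backgrounded first in the original commands.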
This works for our smaller projects, and it works for huge schemas on our HP-UX systems, but we have problems with a project that creates a 65 GB backup file. Using the above, we get a 45 GB file that uncompresses to a 46 GB file, and the import utility reports an Oracle error indicating the file is probably from an aborted export. I can FTP the 65 GB file to another server with plenty of space and compress and uncompress it there with no ill effect on its usability.
Are there any limits on Red Hat Linux named pipes (mknod) or on compress that might be causing this?