Slackware: This forum is for the discussion of Slackware Linux.
I have just installed Slackware 8.1 with ReiserFS. I tried to back up data from our Sun workstations using cpio, but there seems to be a 2 GB limit on the file size.
Is there a maximum file size limit for cpio? Is there a way to get around this limit? The kernel installed is 2.4.18.
I also experienced this problem with Slackware 8.0, where I had installed the 2.2.19 kernel with ext2. I was hoping that upgrading to Slackware 8.1 with the 2.4.18 kernel and ReiserFS would solve this problem.
I have experienced similar problems. Even though a file system is supposed to support files larger than 2 GB, most programs do not. And even if you find an app that does, most operations you attempt with it will also fail. Your mileage may vary.
The only solution I was able to work with consistently and reliably was to make smaller files.
Cpio may well have a file size limitation imposed through bash if your output goes to a file, or it may have an internal limit if a file being backed up is larger than 2 GB. I am sure I have been able to far exceed 2 GB of total output using tar to a device (not a file), such as a hard disk, DVD, or tape. My daily backup on my workstation exceeds 3 GB on DVD+RW, using tar piped to sdd to handle the write operation. But tar is purely sequential, so restoring a single file from such a backup takes ages.

You might want to investigate a program called "taper". If I recall correctly, it allows file selection through an index list, though I think its output was designed for tape drives. If you have the disk space and can set it up, for example on a network, you may also want to look at rsync. With rsync, though, I do not know whether a single file over 2 GB would be a problem or not.
I have not used cpio directly to a device, but I think it supports tape drives, so you could try the following: on, for example, a 20 GB backup hard disk, create four primary partitions of 5 GB each, then perform your backup using the device names directly, with no file system. Reference them as /dev/hdd1, /dev/hdd2, /dev/hdd3, /dev/hdd4. Modify for your needs as required.
If you are interested in a backup medium faster than tape, you may want to investigate DVD+RW. Last night my system backed up 3.9 GB in 20 minutes (about 200 MB per minute). The capacity without compression is 4.7 GB.
Thank you very much for your suggestion. I tried using tar, but it did not work either; it also stopped at 2 GB.
With regard to the partitions you suggested: the backup drives are physically not at the said machine. They are mounted from our NT-based stations using smbmount. Could it be that smbmount has a problem with file sizes exceeding 2 GB?
I wish we had a tape drive but we cannot afford it.
Anyway, I also have a Red Hat Linux machine with a 2.4.2 kernel using ext2. It can tar files larger than 2 GB.
Zelgadis, thank you for the suse.de link. It was most informative, but it appears we are still not quite there yet.
I was able to duplicate the 2 GB limit using cpio from local source files to local output. However, when I used tar to perform the same backup, it created a file that the "ls" command would not even display in the directory, and "rm" could not delete it. I had to use tar again to truncate it to an acceptable size, then delete it.
For the cpio backup limitation, I can only recommend breaking the backup into smaller files, or performing data-only backups. You can also consider piping the output through gzip to compress it, but that is only a limited workaround until the output reaches the 2 GB limit again, because gzip has the same limitation. Consider also the network usage and time constraints for these massive complete-backup transfers. For a data-only backup of a Linux box, consider the directories /etc, /home, and /root. I also include /var/spool/mail and /var/spool/mqueue if it is a mail server, and the Apache files if it is a web server.
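Putting those suggestions together, a data-only backup pipeline might look like this sketch: tar the chosen directories, compress with gzip, then split the compressed stream so no single piece hits the limit. The directory and the 1M chunk size are demo stand-ins; in real use the directories would be /etc /home /root and the chunks something like 1000M.

```shell
# Sketch of a data-only backup: tar the chosen directories, compress
# with gzip, and split the result into pieces (1M here for the demo).
# In real use the tar argument would be /etc /home /root.
mkdir -p /tmp/databackup-demo/etc /tmp/databackup-demo/out /tmp/databackup-demo/restore
echo "config" > /tmp/databackup-demo/etc/app.conf

cd /tmp/databackup-demo
tar cf - etc 2>/dev/null | gzip | split -b 1M - out/data.tar.gz.

# Restore: concatenate the pieces, decompress, untar.
cat out/data.tar.gz.* | gzip -d | (cd restore && tar xf -)
```

Splitting after gzip (rather than gzipping each piece) is what keeps the individual output files, not just the compressed total, under the limit.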
Regarding my suggestion about the hard disk partitions, I would think it best for the disk to be physically attached where the tar/cpio command is executed. But I do not see why it couldn't be remote on a Linux box; just use HOSTNAME:/dev/hddx to reference it the way you would a remote tape drive. If your backup destination is on an NT workstation using NTFS, it should not have a 2/4 GB limit. If it is something else, such as Win2K Pro using FAT32, there is a high possibility of a 2/4 GB limit.
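Another way to reach a remote device, as a hedged sketch: pipe the tar stream over ssh into dd on the far end, for example tar cf - /home | ssh HOST "dd of=/dev/hddx". Since that needs a real remote host, dd writes to a local stand-in file here so the pipeline itself can run; the host name and device are illustrative only.

```shell
# Sketch of pushing a tar stream to a remote "device". Over the network
# it would be:  tar cf - /home | ssh HOST "dd of=/dev/hddx"
# Here dd writes to a local file so the pipeline can run anywhere.
DEVICE=/tmp/remote-dev-demo.img      # stand-in for HOST:/dev/hddx
mkdir -p /tmp/remote-src /tmp/remote-restore
echo "over the wire" > /tmp/remote-src/notes.txt

(cd /tmp/remote-src && tar cf - .) | dd of="$DEVICE" 2>/dev/null

# Restore: read the device back into tar.
dd if="$DEVICE" 2>/dev/null | (cd /tmp/remote-restore && tar xf -)
```

Because the data only ever exists as a stream on the sending side, the sender's file system never has to hold a file over 2 GB.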