Linux - General: This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Has a reliable & stable utility been developed yet that allows one to image only the used portion of a Linux partition?
Just wondering if there's anything out there now that saves one from spending storage and time on an entire partition (or drive) when oftentimes 80% of it is free space there's no need to back up.
Any suggestions?
Thanks,
cc.
I don't know if such a utility has been developed (or even exists), but I do know that what you're asking for is possible using rsync to mirror data locally or remotely. That way you copy just the used portion of the Linux disk, and the rest of the disk (if space is left) can be used for whatever you want. It's not difficult, if you're interested:
Keep in mind that if you want to sync two different servers, you will need a dedicated NIC for them with at least 1Gb speed; if it's local disks, you need to consider how to limit buffer-cache I/O requests so as not to overload your server. Some Google searching can help you with that.
To do this the imager needs to be filesystem-aware - so why not use a "proper" tool yourself? partimage is a good example, but it doesn't support ext4 or btrfs, so it's no good to me, or likely to a large proportion of current Linux users.
A reason to make separate partitions at install time is also a reason one might wish to make quick data backups: a separate partition for /home makes it easier to copy. Whatever tool you pick, the choices are all the same: either file by file or bit by bit. You can easily copy mount points that are not separate partitions too, with a large number of tools. tar, rsync, cpio, and many others can do file by file; generally one pipes those through some compression to save transfer time and final size.
clonezilla is often used for that. It supports a wide variety of filesystems. clonezilla uses partclone internally, and you can use partclone directly if you don't want/like the clonezilla wrapper.
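For reference, direct partclone usage looks roughly like this (a sketch only; the device name is a placeholder and both commands need root, so run them from a live/rescue environment with the partition unmounted):

```shell
# Back up only the used blocks of an ext4 partition to a compressed image.
# /dev/sdXN is hypothetical; the partclone binary must match the
# filesystem type (partclone.ext4, partclone.btrfs, partclone.xfs, ...).
# -c = clone used blocks, -s = source, -o = output.
partclone.ext4 -c -s /dev/sdXN -o - | gzip > sdXN.img.gz

# Restore the image onto a partition of at least the same size
# (-r = restore mode):
gzip -dc sdXN.img.gz | partclone.ext4 -r -s - -o /dev/sdXN
```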
I got fed up with Clonezilla. It never seemed to improve between versions; really obvious things, like options that go nowhere and leave the user in endless loops, were never tidied up. I went back to dd>tar in the end out of sheer exasperation. It gets the job done (eventually), but you end up with the wasted-space problem. Arghh!!!
you don't save the entire partition, you only save the file system contents.
for example,
/dev/sda1 = fat, 212MB, this is the boot partition if you are using EFI / ELILO and not GRUB.
/dev/sda2 = ext3, 200GB, this is mounted as root file system /.
/dev/sda3 = ext3, 100GB, just for kicks you have a separate partition mounted as /home, can be whatever.
this list goes on and on.
take any file system - say your /home file system on sda3, which is a 100GB partition but only holds 3GB of data.
why save the entire 100GB partition when only 3% of it has anything useful?
you don't.
you use the 'tar' command to pack the filesystem contents into a single file, and save just that, like so:
"cd /home"
"tar -cf /root/myhome.tar ."
(using '.' rather than '*' ensures hidden dot-files get included too.)
this will create a file that's 3GB in size called myhome.tar located in the /root folder on a different partition.
the catch with this, obviously, is that if you want to save everything on a partition (the data, not every byte including free space), you can't write the archive file to that same partition.
and when you want to archive the root file system, you cannot do it on a running system, so you need to slave the disk to another running system and mount it.
once you have your archive file, which has just the contents from whatever partition,
you can also gzip that tar file to compress it,
when you are ready to restore that information all you do is make a new partition somewhere or reformat one, then do:
"cd /home"
"tar -xf /root/myhome.tar" {going from my previous tar example}
the other catch is, if you created your tar file on an ext3 file system, then I believe you need to restore it onto an ext3 file system - that is, the file system you restore the archive to has to be the same type as the one you ran the tar command on to begin with.
this is especially true if you're cloning disks and writing the root file system to a new partition: the file system types (ext3, ext4, xfs, whatever) have to match. you can't tar a root file system that was ext3 and then untar it onto a mounted file system that is xfs; it would have to be ext3.
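The create/compress/restore cycle described above can be sketched end to end; here temporary directories stand in for the real /home, /root, and the freshly formatted target partition:

```shell
# Stand-ins for the real locations used in the posts above.
HOME_DIR=$(mktemp -d)          # plays the role of /home
ARCHIVE="$(mktemp -u).tar.gz"  # plays the role of /root/myhome.tar(.gz)
RESTORE=$(mktemp -d)           # plays the role of the new partition's mount point

echo "user data" > "$HOME_DIR/notes.txt"

# Create a compressed archive of the filesystem contents
# ('z' gzips on the fly; '.' also picks up hidden dot-files).
tar -czf "$ARCHIVE" -C "$HOME_DIR" .

# Restore into the new location.
tar -xzf "$ARCHIVE" -C "$RESTORE"
```

Using `-C` saves the separate `cd` step and keeps the archive's paths relative, so it can be unpacked anywhere.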
and if your file system is XFS, it comes with xfsdump and xfsrestore, which do the same thing I described, but I believe they let you create the dump file on the same partition you are archiving, even the root file system of a running system.
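For completeness, the xfsdump/xfsrestore pair mentioned above is used roughly like this (a sketch; the paths are placeholders and both commands need root on an XFS filesystem):

```shell
# Full (level-0) dump of a mounted XFS filesystem to a file.
# -l 0 = dump level, -L/-M = session and media labels xfsdump asks for,
# -f = destination file.
xfsdump -l 0 -L homedump -M homedump -f /root/home.xfsdump /home

# Restore that dump into an (empty, mounted) XFS filesystem:
xfsrestore -f /root/home.xfsdump /mnt/newhome
```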
Thank you, Ron, but I've already hit on the solution of using dd to write a huge file full of zeroes, which maxes out the unused space on the drive (or partition), then deleting this file. So although I'm still backing up the entire drive (or partition), if I pipe it through gzip (or some other compression utility) all those GBs of contiguous zeroes compress to next-to-nothing. Using this method, I find a 122GB partition with 3GB of system on it comes out at about 2.5GB of archive. I'm totally happy with that. And it's independent of file systems!
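The zero-fill trick can be demonstrated in miniature with an ordinary file standing in for the partition (the device names in the comments are hypothetical):

```shell
# In real use you would first zero the free space on the mounted filesystem:
#   dd if=/dev/zero of=/mnt/part/zero.fill bs=1M; sync; rm /mnt/part/zero.fill
# and then image the whole device through gzip:
#   dd if=/dev/sdXN bs=1M | gzip > /backup/sdXN.img.gz

# Miniature demo: 8 MB of zeros compresses to almost nothing.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=8 2>/dev/null
gzip -c "$IMG" > "$IMG.gz"
ls -l "$IMG" "$IMG.gz"   # compare raw vs. compressed size
```

The compressed image is a tiny fraction of the raw size, which is exactly why zeroing the free space first pays off.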
Glad you got there. In reading the thread I kept thinking why not just use dd; only you'd have to be well versed not just in the partitions, but also the blocks assigned to them. Looks like you beat me to the punchline.