Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
Or am I missing something?
See man gzip: gzip -# (where # is a digit) sets the compression level (1 is fastest, 9 gives the best compression).
Hopefully I am not mistaken, but my understanding is that it will pipe dd's stdout (using the specified input file and block size) to gzip's stdin for compression at the fastest but least effective level.
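For what it's worth, here is that pipeline written out as a minimal sketch with the fastest level given explicitly (the device name, block size and output path are just the ones used elsewhere in this thread):
Code:
sudo dd if=/dev/sda bs=4M | gzip -1 > ~/RPi2a.img.gz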
Just executed the following 982 seconds ago, and am 11GB out of 32GB complete.
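If you'd rather have dd report progress itself than estimate it, GNU dd (coreutils 8.24 or newer) accepts status=progress; a sketch, assuming the same device and output as above:
Code:
sudo dd if=/dev/sda bs=4M status=progress | gzip -1 > ~/RPi2a.img.gz
pv can be dropped into the pipeline for the same effect if it is installed.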
rknichols, per your recommended command, the image was 31914983424 bytes. I thought it would have been the same size, since the file was created from a disk of the same size.
Code:
sudo dd bs=4M if=/dev/sda | gzip > ~/RPi2a.img.gz
Last edited by NotionCommotion; 12-25-2017 at 07:46 AM.
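The recommended command itself isn't quoted in this excerpt, but for anyone following along, one common way to check how large a gzipped image will be once decompressed (not necessarily the command referred to above) is:
Code:
# quick listing; note the "uncompressed" column is stored modulo 4 GiB, so it is misleading for a ~32 GB image
gzip -l ~/RPi2a.img.gz
# exact byte count; slower because it reads and decompresses the whole file
zcat ~/RPi2a.img.gz | wc -c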
Although it's probably an assumed default (until something changes). If you're not that worried about encryption and there's an open port, you can use netcat instead of ssh, each with its own set of caveats.
$ nc -l -w 300 -p 5900 > got_it.file
$ nc 192.168.2.99 5900 < sent_it.file
Where the -l one listens (run it first), and the sent_it.file input could instead come from a pipe, something like dd ..... | nc. The caveat is that you won't know when to stop the sender or receiver until you have the whole file and it hasn't updated for a while. File size, date/time stamp, md5sums, and other checks can be done manually. Or give it its own interface and watch the traffic with something like speedometer; when the traffic stops you likely have the whole file. Some versions of netcat have an option to quit when done, but it's not the default, and not always there.
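Putting those pieces together, here is a hedged sketch of a one-shot device transfer straight through dd, gzip and netcat, with a checksum comparison afterwards (the address, port and file names are placeholders, and option syntax differs between netcat variants):
Code:
# receiver (run first): listen, decompress, and save the raw image
nc -l -p 5900 | gunzip > got_it.img
# sender: read the device, compress, ship it across
sudo dd if=/dev/sda bs=4M | gzip -1 | nc 192.168.2.99 5900
# verify: these should match if the whole device made it over intact
sudo md5sum /dev/sda        # on the sender
md5sum got_it.img           # on the receiver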
It looks like you made /dev/sda just large enough to hold the compressed image file, and that's way too small for the decompressed image.
Quote:
Originally Posted by NotionCommotion
rknichols, per your recommended command, the image was 31914983424 bytes. I thought it would have been the same size, since the file was created from a disk of the same size.
Code:
sudo dd bs=4M if=/dev/sda | gzip > ~/RPi2a.img.gz
That is nothing like my recommended command, which would have reported how large the decompressed file would be.
I have to ask, what is the point of compression if the resulting file is the same size as the original? That's just absurd.
dd does not really care about the filesystem, just the content of the partition. Sometimes that is better (especially when you want to save a damaged/corrupted filesystem).
Quote:
Originally Posted by rknichols
That is nothing like my recommended command, which would have reported how large the decompressed file would be.
I have to ask, what is the point of compression if the resulting file is the same size as the original? That's just absurd.
Sorry, my response was unclear. I followed your recommended command to determine the size which I then documented in my response, and then showed the command which I came up with to create the image. Two very different commands!
The point? After reading your response, I tend to believe I am missing something. I assumed that if I take the bits from a device using dd, compress them, send them on their way, and decompress them when they arrive, this is a good thing. Maybe I am assuming without knowing?
Personally, I find rsync to be a good way of cloning a device, although tar & scp work perfectly well. Tar gives you the option of compressing with a simple command-line option.
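For concreteness, a sketch of both approaches with made-up paths and host names (the rsync flags are the usual archive set plus ACLs, xattrs and hard links):
Code:
# rsync clone of a mounted filesystem, preserving permissions, owners, timestamps, ACLs, xattrs and hard links
sudo rsync -aAXH --numeric-ids /mnt/source/ /mnt/clone/
# or tar with gzip compression built in (-z), then copy the archive with scp
sudo tar -czpf backup.tar.gz -C /mnt/source .
scp backup.tar.gz user@backuphost:/backups/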
What dd gives you extra is the boot record, the partition table, and the directory structure throughout the drive, which in most cases is disk-specific and useless or even unwanted if you restore to a different disk.
Let's say you have a disk crash and mark some bad sectors, but the rest is fine. Are you sure the disk won't write those sectors anyhow? dd doesn't check anything, or even mount the disk. Even if it skips them, won't you then run out of space?
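On the bad-sector point: plain dd stops at the first read error, so if the disk really is failing the usual hedge is conv=noerror,sync, or better, GNU ddrescue; a sketch with placeholder file names:
Code:
# keep going past read errors, padding unreadable blocks with zeros
# (a smaller bs limits how much gets zero-padded per error)
sudo dd if=/dev/sda of=rescue.img bs=64K conv=noerror,sync
# GNU ddrescue, if installed, retries failed areas and logs what it could not read
sudo ddrescue -d /dev/sda rescue.img rescue.map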
Quote:
Originally Posted by NotionCommotion
Sorry, my response was unclear. I followed your recommended command to determine the size which I then documented in my response, and then showed the command which I came up with to create the image. Two very different commands!
OK. That size (31914983424) was similar enough to the point where the dd command ran out of space (31104958464) that I didn't notice the difference.
But /dev/sda is indeed more than 800 MB smaller than the uncompressed source image. It just isn't going to fit.
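Spelling out the arithmetic behind that "more than 800 MB":
Code:
echo $((31914983424 - 31104958464))    # 810024960 bytes, i.e. about 810 MB (772 MiB)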
dd can copy the WHOLE device, with partx and other routes to treat the resulting file like a device with mountable partitions.
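As a sketch of that partx/loop-device route (the image name is the one from earlier in the thread, decompressed first; loop device numbers will differ):
Code:
# attach the raw image to a free loop device and scan its partition table
sudo losetup -fP --show RPi2a.img     # prints something like /dev/loop0
# partitions then appear as /dev/loop0p1, /dev/loop0p2, ...
sudo mount /dev/loop0p2 /mnt
# or add the partition mappings explicitly with partx
sudo partx -a /dev/loop0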
I've had issues with tar on 32-bit systems with resulting files in excess of 4GB. I generally rsync my installs for backing up, because it saves a lot of space and doesn't need a lot of know-how to access individual files. Various cp routes with the wrong options can otherwise remove permissions, change date/time stamps, and leave you with something that isn't quite a backup of a usable system. In short, if you're not well versed and have the space, dd is the simpler and safer option IMO. Especially if you just want an undo option and are otherwise dealing with quirky OSes like Windows.