I want to clone a 16GB microSD card that has 2 partitions containing the file system of a Raspberry Pi. Only about 3.3 GB is actually used, but when I use dd to copy /dev/sdf to a file it makes a 16GB copy. Partition 1 is only 60MB. I thought that if I shrank Partition 2 down to 3.5GB, that would reduce the size of the output file (and therefore also take only 1/4 as much time). But even after shrinking it with gparted, dd still makes a 16GB copy.
I'm puzzled. I don't want to copy each partition individually. But isn't there a way to reduce this to a smaller single image file, to not only speed up the copying process but also produce an image that can be written to a smaller SD card?
I think you're right. I guess it's fdisk that uses 1024 as block size -- so it showed 7014400 sectors==3507200 blocks.
That suggests that I only copied 1/2 of the card -- except that df and gparted seem to show the same amount of used space on the copy vs the original card. And it booted. And I haven't yet found anything missing.
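If it helps to double-check the arithmetic: fdisk's partition listing counts 512-byte sectors, while its legacy "Blocks" column counts 1024-byte units, so the two figures differ by exactly a factor of two. A quick sketch using the 7014400-sector figure from above:

```shell
# fdisk lists partition sizes in 512-byte sectors; its "Blocks"
# column uses 1024-byte units, hence the factor-of-two difference
sectors=7014400                         # value taken from the fdisk output above
echo "bytes:     $(( sectors * 512 ))"
echo "1K blocks: $(( sectors / 2 ))"    # matches fdisk's 3507200
```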
Well, it's clear from the GNU Coreutils manual that dd's default block size is 512 bytes, so I made a new copy from the original card, this time specifying 1024 as the block size. I know that's small, and it was agonizingly slow, especially in the next step when I wrote it to the second card, but I wanted to be sure there would be no ambiguity. So this time it looked like this:
Code:
sudo dd if=/dev/sdf bs=1024 count=3572736 | pv | dd of=raspian_copy_20160221.img bs=1024
3572736+0 records in
3572736+0 records out
3658481664 bytes (3.7 GB) copied, 169.525 s, 21.6 MB/s
3.41GB 0:02:49 [20.6MB/s] [ <=> ]
3572736+0 records in
3572736+0 records out
3658481664 bytes (3.7 GB) copied, 169.633 s, 21.6 MB/s
and now the total bytes copied is more in line with what was expected.
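One way to gain confidence in a copy like this is to compare the image byte-for-byte against the source over the copied range. This is just a sketch using two throwaway files in place of /dev/sdf and the real image file:

```shell
# stand-in "card" of 1 MiB; with a real card you would compare the
# image against the same number of bytes read from /dev/sdf instead
head -c 1048576 /dev/urandom > source.bin
dd if=source.bin of=copy.img bs=1024 count=1024 status=none
cmp source.bin copy.img && echo "images match"
```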
So then I copied the .img file to the second sd card with
Code:
sudo dd if=raspian_copy_20160221.img bs=1024 | pv | sudo dd of=/dev/sdf bs=1024
3572736+0 records in
3572736+0 records out
3658481664 bytes (3.7 GB) copied, 1026.28 s, 3.6 MB/s
3.41GB 0:17:06 [ 3.4MB/s] [ <=> ]
3572736+0 records in
3572736+0 records out
3658481664 bytes (3.7 GB) copied, 1026.32 s, 3.6 MB/s
I checked the new card with fdisk and gparted, and interestingly, the output from fdisk and the info displayed by gparted were absolutely identical to what they showed last time, so obviously those programs aren't showing what is actually on the disk, just what the partition table says is supposed to be there. I still don't know what was missing from the disk before, but presumably I would eventually have found some files missing or truncated.
Next time I'll raise the bs higher and adjust count accordingly. That should speed things up some, but this was a good lesson.
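For what it's worth, when raising bs the count just has to shrink so that bs*count still covers the same total bytes; rounding the count up is safe, since a few extra bytes past the end of the data are harmless. A sketch using the byte total from above and a hypothetical 4MiB block size:

```shell
bytes=3658481664                     # total copied above (3572736 x 1024)
bs=$(( 4 * 1024 * 1024 ))            # hypothetical bs=4M
count=$(( (bytes + bs - 1) / bs ))   # round up so nothing is cut off
echo "sudo dd if=/dev/sdf bs=4M count=$count of=card.img"
```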
Now I'm wondering if I actually have a complete, valid copy of the original SD card. We treat the SD card as if it's a physical disk, but it isn't. It's flash memory, with a controller that does wear-leveling, so presumably the data can be physically scattered anyplace on that chip, right?
So, I started out with a 16GB microSD card with less than 3GB of actual data on it, but I don't know where that data is located on the card. I used gparted to shrink Partition 2 (which takes up almost all of the card) down to about 3.5GB, and then used dd to copy the "first" 7329792 512-byte sectors from that card. But can I be sure that all the data was actually located in those first 7329792 sectors? (I don't even know what a "sector" is on a flash memory chip.)
Put another way, when gparted shrinks a partition on an SD card, does all the active data end up in the shrunken partition? Does it defragment files and relocate them to a specific part of the chip, or does the SD card's controller do that, or does it happen at all?
Could I be winding up with an unknown number of files, or parts of files, missing from the shrunken partition and missing from the cloned copy?
Last edited by r.stiltskin; 02-21-2016 at 09:12 AM.
You may try dcfldd instead of dd.
I think you can rely on gparted; if it could shrink that partition, it should work. You may try to mount the modified filesystem and check the number of files if you want.
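To mount a partition out of the image for that kind of file count, the usual trick is a loop mount with a byte offset: multiply the partition's start sector (from running fdisk -l on the image) by the 512-byte sector size. The start sector below is only a placeholder; the real value comes from your own image:

```shell
start_sector=122880                  # placeholder; take the real value from `fdisk -l` on the image
offset=$(( start_sector * 512 ))     # loop mounts want a byte offset, not a sector
echo "offset=$offset"
# then, as root, something like:
#   mount -o loop,offset=$offset raspian_copy_20160221.img /mnt
#   find /mnt -type f | wc -l       # compare against the original card
```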
Quote:
Now I'm wondering if I actually have a complete, valid copy of the original SD card. We treat the SD card as if it's a physical disk, but it isn't. It's flash memory, with a controller that does wear-leveling, so presumably the data can be physically scattered anyplace on that chip, right?
Correct, and the logic on the SD card takes care of that; regardless of where the data is actually located, it will be presented as if it were on a physical disk. In other words, don't worry about it.
I think you're overcomplicating this -- use dcfldd (as pan64 points out, because it's likely to be more reliable), then just use tar to compress the image.
When you're taking a disk image using dd (or dcfldd), it isn't looking at the partitions, so the partition size is irrelevant, and messing with partition size will likely only end in tears.
I'm compressing a 32GB image from my own Pi as I type (it does take a long time, I'll admit) and will get back to you with the size saving, if any to confirm.
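dcfldd takes the same if=/of= options as dd, and either tool can pipe straight into a compressor so that no uncompressed 16GB image ever hits the disk. A sketch with plain dd and a throwaway file standing in for the card (gzip here just as an example; bzip2 would work the same way, only slower):

```shell
# 1 MiB of zeros standing in for /dev/sdf; zeros compress extremely well,
# which is roughly what the empty space on a freshly imaged card looks like
head -c 1048576 /dev/zero > fake_card.bin
dd if=fake_card.bin bs=64K status=none | gzip > card.img.gz
gzip -l card.img.gz        # show compressed vs uncompressed sizes
```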
I'm not worried about dd though. I'm only focusing on the resizing operation.
What made me suspicious is that it took almost no time for gparted to resize the ~16GB partition down to ~3.5GB. On a real disk that can take a significant amount of time while data is actually copied and rewritten to different physical sectors. Does the SD card's internal controller accomplish that just by rearranging entries in some sort of index file that it maintains for itself?
Or taking that a step further, does it generally present a "logically defragmented" file system to the OS, so that all it has to do is truncate it when called upon to shrink?
Resizing ought not to take much time at all, since the data, in theory, is "at the beginning of the disk"; plus, as you point out, an SD card isn't a spinning drive. Again, though, I should state that the partition size has no effect whatsoever upon the disk image size: the disk image is an image of the disk, not a copy of the partitions. It's full of whatever random clutter was on the disk as well as your files. My compression of an image file is still going on, though; BZ2 takes a long, long time, it seems.
@273:
I realize that dd isn't looking at partitions per se. I only wanted to shrink the partition to have the filesystem in a contiguous region at the beginning of the "disk" so I could dd only the relevant part and not the entire 16GB, most of which is empty.
I think it's worth the trouble. The time dd takes to image 16GB is bad but maybe tolerable, but if I happen to pop in a 64GB card I certainly don't want to have to wait all night making an image of 50 or 60 GB of garbage.
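For the record, the count for that kind of partial image can be read straight off the partition table: copy up to and including the last partition's "End" sector. The end-sector value below is a placeholder chosen to match the 7329792-sector figure mentioned earlier:

```shell
last_end_sector=7329791              # "End" column of the last partition in `fdisk -l`
count=$(( last_end_sector + 1 ))     # sectors are 0-based, so copy end+1 of them
echo "sudo dd if=/dev/sdf bs=512 count=$count of=card.img"
```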