How should I utilize clonezilla?
1. In my environment I often have to ready new PCs (with Fedora 14), so I want to make a clone of one PC's entire disk after installing Fedora 14. Then, every time I need to prepare a new system, I can apply that image. Is that possible? If yes, what parameters should I take into consideration? If I make an image of a 1TB hard drive containing only Fedora (nothing on it except Fedora), is it possible to restore that image onto a 350GB hard disk?
2. I am not willing to make a backup of every PC on my network because I don't have media to store such large backups. So I want to back up only the bootloader partition, which is sda1 in our case. Is it possible to make an image of one PC's sda1 partition and then apply it to every PC's sda1 (bootloader partition)? Or should I make a separate image of each PC's sda1?
I'm glad to hear that you're doing this; I did this sort of thing a lot as a computer technician, and have found that few people are clever enough to consider it. :-)
That said, I don't recall whether Clonezilla will automatically resize large partitions to fit onto smaller devices. I do know that CZ uses algorithms to cut a given partition image down to only the written data, so I think it's theoretically possible--and is probably doable by (step 1) either editing the MBR image (all 512 bytes) or creating a partition table from scratch on the new drive first, then (step 2) using resize2fs on the existing image to make the filesystem smaller. I'm pretty sure these steps can be done automatically by Clonezilla, but if not, I can almost certainly teach you how to do it manually. (I'm not sure how Fedora's LVM will handle this, so it'll take some experimentation.)
Since I find this interesting, I'm in the process of installing Fedora 17 (since I don't have a 14 disc handy) in a virtual machine, so I can test this out. I'll let you know how it works.
Just a general word of advice, though: it's almost always better to make a partition image of the smallest partition that will hold the OS data, so that you can expand it to the new size upon installation. This is almost always easier and more reliable (i.e. less prone to problems) than shrinking a partition. Both are possible, but enlarging is generally a lot less hassle.
So, I've just created a "fixed-size" .vdi image for use with the Fedora VM. I'm using VirtualBox. Due to hardware constraints, I made the image 100GB, which should be much more than enough to house a "stock" Fedora installation. (I'd be surprised if a "stock" installation took more than about 10GB.) Please note that making a 100GB fixed image took about 15 minutes on my quad-core Core 2 system, and utterly dominated it during that time. (You might want to find something else to do for a while, once you start it...) I'm now booting from the F17 AMD64 LiveCD. i386 should have the same effect, so don't worry about architecture differences between the two. Once booted, I'm installing to the hard drive using all the "default" options.
Edit: Make sure to use the "use all space" option, instead of the "replace existing linux systems" option--just to be sure.
If you need something other than the defaults, please tell me, as this might have an impact on things to follow. GPT partition tables and encrypted drives will almost certainly complicate things.
Ultimately, I intend to image the 100GB virtual drive using CZ, then shrink its volumes down to 30GB or so. This should act as a "proof of concept" for how you can shrink your 1TB volume image to install on a 350GB drive. In the end, the final test is whether this newly-resized image will work properly once installed.
I've installed Fedora, and completed the "first boot" dialog, after which I logged in once, just to make sure that all appropriate files got auto-generated. If you mean to install in a more "OEM" fashion, you should probably skip these steps in future attempts. Also, there are, I believe, special installation options for installing Fedora in a way such that all system information is requested after the install, instead of during it; this is ideal for mass installs, depending on use case.
I'm now downloading the latest "Stable" version of CloneZilla. The images I'm getting are:
I doubt that there's a substantial difference between the two, aside from accessing the extended capabilities of 64-bit CPUs--which may or may not affect performance in this case. The PAE extension will mean that you get to use all your RAM (above ~3GB), even if you're using an x86 CPU. (If you're using an old CPU--like before Pentium 4 with Hyperthreading--you should use another image, since those CPUs don't have PAE extensions.)
Shortly, I'll boot from one of those CDs and decide which options to use.
uk.engr, that CZ image should be fine. If anything I mention below specifies a specific architecture, etc., just know to change it to what you're using.
The CloneZilla process:
Note: I've tried to install VirtualBox Guest Additions on the live CD, but since the version of the kernel headers available in the apt repository is different from the kernel version used by CZ at present, it's not possible. This will result in slow USB transfers. :-(
Once the CD is booted, here are the options I'm choosing:
Don't Touch Keymap
(At this point, go to Device > USB > your device to mount your USB drive, if you have one. Press the up and down arrow keys to recover menu text.)
Select the drive you want to STORE THE BACKUP IMAGE ON.
Choose a directory on the above-selected drive in which store your backup image.
Choose the disk you want to back-up.
Priority: -q2 partclone > partimage > dd
Choose these options: -c, -j2 (Optional: -gm, -gs; these last two are only for verifying the data integrity of the original image. They're best used on an image that you're not going to mess with afterward, e.g. by resizing it. If you decide to make an image from scratch for production, you should use these two options. They take time to execute, so for now, I'm skipping them.)
Compression option: If size on the backup drive isn't an issue, use -z1 (gzip); if it's a big problem, then use -z2 (bzip2) or -z4 (lzma). Lzma output is about the same size as bzip2's (sometimes smaller) and decompresses faster; however, it requires the p7zip software to "mess with" the image (as we're going to), and p7zip isn't included by default in most distributions. (You can install it manually without much fuss.) For compatibility, I recommend gzip or bzip2; for best size with decent speed, I recommend lzma. For now, I'm choosing gzip.
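If you want a feel for the tradeoff before committing, you can compare the three compressors on any sample data (a rough sketch; xz is used here as a stand-in for lzma, and the real ratios depend entirely on your data):

```shell
# Generate some repetitive sample data, then compare compressor output sizes.
# (Filenames here are throwaway examples, not Clonezilla's.)
seq 1 200000 > sample.txt
gzip  -c sample.txt > sample.txt.gz
bzip2 -c sample.txt > sample.txt.bz2
xz    -c sample.txt > sample.txt.xz
ls -l sample.txt sample.txt.gz sample.txt.bz2 sample.txt.xz
```

On highly repetitive data like this, all three shrink the file dramatically; time each one too if backup speed matters to you.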
Size: 9999999999 (That's 10 times "9") (This will prevent splitting the volume--that is, unless the resulting image is larger than 9.9TB--so that we can more easily mess with it. If you need to put it on a CD or DVD series, you should change this value accordingly.)
Check/repair: I recommend an automated check/repair for your "production" image (fsck-src-part-y); for now, we'll skip it.
Check to see if it's restorable: Yes. (This is a good idea, even for testing, though it'll make it take longer.)
After finished cloning: Do nothing. (This will let you go to a command prompt and check things out if you want to. If you don't care about that, pick a different option.) This is the resulting command (I hope I don't make any typos...)
You'll note that you can find this command on the Live system (until you reboot) at: /tmp/ocs-2012-07-13-12-img-2012-07-13-13-07. (This, and the numbers above, will change each time you run CZ.)
Now, press ENTER.
Finally, confirm that you want to do this by pressing "y" then ENTER. Now wait...
You'll note that the ensuing CloneZilla output will tell you that it's saving the MBR and every partition separately, into individual files--one per partition (or MBR/metadata). This will allow us to decompress/extract the partitions and mess with them, later.
Note: since Fedora uses separate volumes for /boot, /, /home, and so forth, this is going to prove a bit challenging when we go to make the new filesystem/partition layout. I think it's possible, though, so we'll deal with problems as they arise.
Last edited by DaneM; 07-13-2012 at 09:24 AM.
Reason: More information about partition files.
Sounds good; you're welcome. (We'll see how much I'm helping when we're done, I suspect...) I need to get some rest, so I'll get back to you later (tomorrow?), once I've had a chance to puzzle this out further. I'm hoping that the next post will be the final solution for your problem.
OK, this is done, and the image is checked (restorable). I have followed your steps and made an image of the 100GB fedora.vdi. One thing I am unable to understand: I stored that image via ssh (not USB) in /home/f14image/2012-07-14-06-img, so my used space in the ssh server's /home before cloning was:
Does that mean the cloned image of the 100GB .vdi occupied only 1GB of space in /home?
Is it the compression effect, or am I observing something wrong? If it is right, then a 1TB sda's image will occupy only 10GB (meaning 1% of the total sda size). If this is a fact, then it resolves my problem concerning the availability of large media: backing up 50-odd 1TB sda(s) would require only 50GB of media for backup?
Please guide me further about this. Thanks a lot for giving me step by step instructions.
For the instructions later on you might want to have a live CD with GParted and lvm tools on it at the ready. If you don't have such a disk, you should start downloading/burning it now. :-) (I suspect that most Fedora CDs will suffice for this.)
The size is tiny for two reasons:
1) We chose to use partclone instead of dd in the "priority" section. Partclone is an excellent tool for backing-up partitions, since instead of copying every sector, bit-for-bit--even empty ones--it copies only used sectors of the filesystem. This results in an image that's only as big as the data on the filesystem, and takes a tiny fraction of the time that dd (or cat--not recommended) would take to do the same job. dd images, by contrast, are "raw," in that they copy the filesystem into a file that is exactly like the original partition in every respect--including size. This leads to a problem I found, that I'll describe shortly.
2) Gzip takes out all the data that can be losslessly "aliased," and replaces it with a "short hand" version of that data; this is basically how compression works. So, if the file has a string of 53 "1" bits, gzip might make a short note saying that, "53 '1' bits follow." (I don't know the details of how the algorithm works, so this is just a rough summary. It's a lot more sophisticated than what I've described, surely.) This means that the 3GB of data on your partition becomes about 1GB of data in the compressed file.
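The effect is easy to see with a throwaway file (sizes below are arbitrary demo values, not your image's): a file that is mostly zero bytes--like the free space partclone skips over--compresses to a tiny fraction of its original size.

```shell
# Build a 10 MB file of zeros (standing in for empty disk space),
# then gzip it and compare the sizes.
dd if=/dev/zero of=mostly-empty.img bs=1M count=10 2>/dev/null
gzip -c mostly-empty.img > mostly-empty.img.gz
ls -l mostly-empty.img mostly-empty.img.gz
```

The gzipped copy comes out a few kilobytes, because the whole file is one long run of identical bytes--the same reason your 100GB virtual disk became a ~1GB image.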
Why is this a problem? Well, the way Linux/Unix works is to see literally EVERYTHING as a file. This includes disks, partitions, video cameras, printers, mice, directories, and everything else. Check in /dev/ for a list of devices pretending to be files. This works in our favor because it means that a tool like resize2fs doesn't ever really know whether you're messing with a partition on a physical disk, or if it's just the image of that partition in a file in your /home directory. Neat, right? Unfortunately, in order to use a utility that isn't supposed to know the difference between a file and a partition, we have to have our file be EXACTLY like a real partition in every respect, except for where it's located. This means that we have to use the ultra-slow and massive-file-generating dd utility to create our image. This will take at least about a half hour (probably several hours--or even days--depending on partition size and hardware speed) and generate a partition file of exactly the same size as the original partition. So, a 1TB partition becomes a 1TB file--and takes forever to make. Additionally, if we compress this image, or don't decompress it after it's made (which will take even more space...), we STILL can't use utilities on it! Bummer!
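The "tools can't tell a file from a partition" point is easy to verify with a scratch image file, no real disk required (this is a demo sketch with made-up filenames; mkfs.ext4, e2fsck, and resize2fs all ship with e2fsprogs):

```shell
# Create a 64 MB file, format it as ext4, then shrink the filesystem --
# the same commands you would run against a real /dev/sdXN.
dd if=/dev/zero of=fake-part.img bs=1M count=64 2>/dev/null
mkfs.ext4 -q -F fake-part.img   # -F: don't balk at a non-block-device
e2fsck -f -p fake-part.img      # resize2fs requires a clean fsck first
resize2fs fake-part.img 32M     # shrink the filesystem inside the file
```

This is exactly why the image has to be a raw, uncompressed dd copy: resize2fs only cooperates when the file is byte-for-byte identical to a real partition.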
So, I screwed up: in order to make an image that we can mess with, we have to let CZ ONLY use dd--which can be done, but it's not pleasant. Note: you can still decompress the image using zcat or gunzip (depending on filename), like this:
Note that the second command is only relevant if the image is split into multiple pieces, such as for fitting onto a set of DVDs. gunzip won't work with a file ending in ".aa"--which CZ uses to denote pieces of the image--so I'm using zcat to output it into a file, using the > (overwrite) and >> (append) operators.
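Since Clonezilla splits one continuous gzip stream into pieces, the pieces need to be joined back together in order before decompressing. Here's a self-contained sketch of the idea (real Clonezilla pieces end in .aa, .ab, and so on; every filename below is made up for the demo):

```shell
# Simulate a split image: compress some data and cut it into pieces
# named image.gz.aa, image.gz.ab, ... just like a split CZ image.
head -c 100000 /dev/urandom > original.img
gzip -c original.img | split -b 20000 - image.gz.
# Rejoin the pieces in alphabetical order and decompress into one file:
cat image.gz.* | zcat > restored.img
cmp original.img restored.img && echo "images match"
```

The shell glob expands image.gz.* in alphabetical order, which matches the .aa, .ab, ... sequence, so the stream is reassembled correctly.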
In any case, this would be manageable were it not for the second problem:
2) In order to use lvm tools like lvresize to shrink a volume, we have to first be able to access the LVM's "mapper" device, which normally lives in /dev/mapper/...but the mapper device for this set of images is, itself, an image! (D'oh!) So far, I have yet to figure out how to get lvm tools to let me specify the location of the mapper as a file, rather than something in /dev/mapper. Additionally, running these commands as root (don't do this if you don't want to screw with your ACTUAL computer's data!) would result in attempting to resize the LVM of the system that you're using to type these commands--probably with catastrophic results! So, unless I/we can figure out how to tell the tools where to find the mapper device (in a data file, instead of a udev device file), we're "dead in the water" with regard to resizing existing images. Also, note that these mapper and LVM images would all have to be un-compressed, and generated using only dd.
So, here's the solution--and I'm sorry that it's not exactly what you seemed to have in mind at the outset, though it will work, since I've used something very similar, before.
Note: do the following on a machine very similar to the one you want to install these images on.
1) Create a Fedora installation on a drive of absolute minimal size. I recommend no bigger than what you can guarantee will fit on any system you install to (i.e. if your smallest system has a 100GB drive, make your installation's total 100GB or smaller). To do this, using the layout of your working Fedora computer's partitions, decide just how large a space you need for /, /boot, /etc, /usr, and whatever else you want on its own partition. (Don't get too crazy about separating things unless you really know what you're doing; just follow what's already working for you on production machines.) Alternatively, you can use the Fedora installer's "use entire disk" function, but check the "review partition layout" box at the bottom of that page--then edit it on the next screen. Here's the trick, though: make /home VERY small, so that it won't take up a noteworthy amount of space on the "model machine's" hard drive. I recommend no more than about 1GB for this partition. Please keep in mind that this isn't how big your actual /home partition is going to be; we'll resize it later. Now, finish the installation to your liking (including updates, if desired) and reboot with the CloneZilla CD of your choosing. (If you prefer, you can make /home the size of the smallest PC's /home partition, so that you only have to resize it later on larger machines. The key, here, is that the whole system's partition and LVM tables be very small, so that you can grow them, and never have to shrink them.) Please note that I'm assuming that /home is the only partition that will really need to vary in size from one machine to another. We can still resize the other partitions, but I suspect that it's not needed, since all user data should go in /home.
2) Back up the installation using the steps I mentioned above, in the previous posts. Be sure to enable filesystem checking (e2fsck), MD5summing, etc. to ensure that you get good images and can test them for faults, later. These are your production images, so choose your options accordingly.
3) Now that you have your images, plug your drive with the images on it into your new computer and boot up CloneZilla. Choose the menu options for restoring your images to a drive in expert mode. Here are the options I'm using:
-g auto Reinstall grub in client disk MBR
-c Client waits for confirmation before cloning
-j2 Clone the hidden data between MBR and 1st partition
-cm check image by MD5 checksum
-cs check image by SHA1 checksum
Use the partition table from the image
The resulting command (barring any typos) is this:
A note about backing-up production machines. This is really only useful for saving customizations after the user has started using the computer (or customizations you did for that person's particular use case). Don't do this for making "stock" installs on new systems, since it'll only compound any user-made problems. This is, however, quite good for saving work data and configuration edits.
In your original question, you asked about backing up sda1's system partition. Please note that this is complicated by the fact that, by default, Fedora separates /boot from /, and sometimes other "system" partitions, too. Additionally, /home/<username> tends to hold a lot of files and directories (usually "hidden" ones, starting with a period) that will determine how the GUI and other important features work--including encryption keys (if that's important to you). So, I don't think it's really feasible to store just one partition and get everything you need to make the computer work decently. Therefore, the above (and soon to be the below, as well) centers around backing-up the entire drive. If you truly only want a partition, you can choose that option instead of "disk backup" in CloneZilla. Some switches/options won't apply, but the rest should work the same.
Depending on the amount of data each user has, this might not be an issue--and it could be further possible to simply backup PARTS of the user's home directory, including all "dot" (hidden) files, plus any important documents that the user needs to do his/her job. This would ideally have to utilize a strict set of instructions for where to put "work stuff," versus other stuff--e.g. /home/user/Documents, /home/user/Desktop are nothing but work data, but nowhere else is, and therefore won't be backed-up. With this method, you could use the above CZ info, plus the following command from the computer in question or a Live CD (such as in the event of software failure):
cp -vpfr * /mnt/usbdrive/workstation1/ 2>/mnt/usbdrive/workstation1/errors.log
This is done as root so as to preempt any permissions problems.
In this example, all directories in /home (for all users) will be copied to the USB drive, which is mounted at /mnt/usbdrive. The data will be saved under the directory named after the machine--in this case, "workstation1". The copy command will give verbose output; preserve permissions (though you'll have to modify ownership of some things if you reinstall the OS or move it to another machine); overwrite any files/directories by the same name on the destination; copy all files/directories recursively; and redirect all error output to the file "/mnt/usbdrive/workstation1/errors.log", so that you can check it later for indications that something didn't copy properly. I used this command every day when I was doing technician work, and it proved quite reliable, so long as the hardware wasn't broken. It works equally well from a live installation (for Linux, but not Windows, because of locked files--assuming you install Cygwin's cp command or some such, which I don't recommend), a Live CD, or another computer into which you plug the old computer's hard drive. It's also fairly fast, given a decent computer. You can also name the destination ("target") directory up front by using the "-t" switch, like this:
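For reference, -t simply moves the destination to the front of the argument list, which is handy in loops and with xargs. A quick self-contained demo (all paths below are made up for illustration):

```shell
# Demo of cp with -t: the destination directory comes first, then any
# number of source files follow.
mkdir -p demo-src demo-backup
echo "quarterly report" > demo-src/report.txt
echo "meeting notes"    > demo-src/notes.txt
cp -vp -t demo-backup demo-src/report.txt demo-src/notes.txt
ls demo-backup
```

Note that -t is a GNU coreutils extension, so it's fine on Linux but may not exist on BSD or busybox cp.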
To get everything in the user's home directory that starts with a period--and nothing else, this is very useful (and took a long time to figure out):
for i in `ls -A | grep -e '^\.[[:alnum:][:punct:][:space:]]\+'` ; do cp -vpfr "$i" /mnt/usbdrive/workstation1/username/ 2>>/mnt/usbdrive/workstation1/username/errors.log ; done
This needs further testing (since I haven't tried it in a while and might have forgotten something), but it should grab any "hidden" configuration files in the user's directory and put them in the user's backup folder and make a log of errors.
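If you'd rather avoid parsing ls output, a glob gets the same result (a sketch under my own assumptions, with demo paths: the pattern .[!.]* matches names starting with a dot, excluding "." and ".." themselves, though it misses the rare name starting with two dots):

```shell
# Copy only the "hidden" entries of a directory, using a glob instead of
# ls | grep. The demo-home/demo-dotbackup paths are invented for the demo.
mkdir -p demo-home demo-dotbackup
touch demo-home/.bashrc demo-home/.profile demo-home/visible.txt
cd demo-home
cp -vpr .[!.]* ../demo-dotbackup/ 2>> ../demo-dotbackup/errors.log
cd ..
ls -A demo-dotbackup
```

Note the 2>> (append) on the log, so that errors from earlier runs aren't silently overwritten.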
If you need to compress the contents of that folder once you've copied the data, you can do this:
tar -zcvf directory.tar.gz directory #for gzip compression--faster
tar -jcvf directory.tar.bz2 directory #for bzip2 compression--smaller but slower
7z a -t7z -m0=lzma -mx=9 -mfb=64 -md=32m -ms=on directory.7z directory #"Ultra" compression with 7zip--smallest but slowest
Again, using root to get around any permissions problems. Keep in mind that 7zip isn't always installed on Linux by default, so if you need a very compatible compression algorithm, use tar and bzip2/gzip. P7zip is available for free (open-source) on Windows and Linux, if that matters.
OK...I'll try to complete the solution once I get my USB problem worked-out.
Update: I used badblocks on my USB drive ("badblocks -svb 4096 /dev/sde") and discovered over 300,000 bad sectors. :-( Fortunately, it's still under its Samsung warranty (through Seagate), so I'll be getting a new-and-functional HD to replace it. Maybe it'll even be bigger! :-D For now, I'm using a smaller drive with my USB rig.
Last edited by DaneM; 07-16-2012 at 12:15 PM.
Reason: USB situation update