LinuxQuestions.org > Forums > Linux Forums > Linux - Hardware
Linux - Hardware: This forum is for hardware issues.
Old 05-20-2015, 04:49 PM   #16
dt64
Member
 
Registered: Sep 2012
Distribution: RHEL5/6, CentOS5/6
Posts: 218

Rep: Reputation: 38

Quote:
Originally Posted by Shadow_7 View Post
dd works, but it takes a long time and wastes a lot of drive space if the source device/partition is huge and only partially filled with non-filesystem data. But it's simple and thorough and predictable and capable of dealing with certain hardware issues.
Fully agreed.
Hence the combination with gzip or pbzip2, though this trades processing time for crunching in exchange for less storage space used.
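That pipeline is worth seeing concretely. A minimal sketch, using a small throwaway file in place of a real device (substitute your own if= source, and double-check it before running):

```shell
# Image a source and compress on the fly, so the uncompressed image never
# touches the disk. A small test file stands in for /dev/sdX here.
printf 'hello disk' > /tmp/fake_disk
dd if=/tmp/fake_disk conv=sync,noerror bs=64K | gzip -c > /tmp/fake_disk.img.gz
# For multi-core compression, swap gzip for pbzip2:  ... | pbzip2 -c > img.bz2
```

Note that conv=sync pads the final block with zeros up to bs, so the image decompresses to a multiple of 64K rather than the exact source size.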

Quote:
tar is alright, but I've had issues using it on 32 bit systems if the resulting .tar is > 4GB. It doesn't seem to complain, but if you try to restore from the > 4GB file it fails to be anything bootable. And it gets a bit tedious if you have to tar each individual directory under / and restore them in a particular order to avoid large files. With special considerations for /home and /usr/share/doc/ and others to avoid girth. YMMV

rsync is what I tend to use these days. Although I don't do incremental. And I typically use it to clone a system that is not currently booted. Change the fstab, update the grub, and boot the rsync'd copy. When not rsyncing a live system you can omit the long list of --excludes and use simple flags like -aRXv. It's pretty fast and flexible if you can afford the down time to not be syncing a live system. You can use it on live systems too, but it's slower and more complex, with more potential for undesirable results.
Fully agreed.

@ohmster:
For your thermal issues, even though they don't quite fit this topic: take the cooler off, clean the processor and the cooler, apply new paste (not too much, just the right amount), put it all back in place and give it a retry.
Mounting coolers is not about Arctic Silver, Cooler Master or any other fancy name (even these brands have been known to sell high-priced average quality, so nothing wrong with using them apart from the cost), but about how it's done, clean and neat. Make sure the fan on the cooler runs well and gets fresh air, and all should be OK.
 
Old 05-20-2015, 05:09 PM   #17
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,978

Rep: Reputation: 3624
Any time you clone a system you can expect issues. A clone only tends to work as a means to image a system so that you can replicate it back to exact same hardware as it came from.

The two apps you tried correctly warned you of issues. LVM is very popular (I don't use it normally, however).
 
Old 05-20-2015, 08:53 PM   #18
ohmster
Member
 
Registered: May 2005
Location: South Florida
Distribution: CentOS 7
Posts: 39

Original Poster
Rep: Reputation: 2
Quote:
Originally Posted by Shadow_7 View Post
dd works, but it takes a long time and wastes a lot of drive space if the source device/partition is huge and only partially filled with non-filesystem data. But it's simple and thorough and predictable and capable of dealing with certain hardware issues.

tar is alright, but I've had issues using it on 32 bit systems if the resulting .tar is > 4GB. It doesn't seem to complain, but if you try to restore from the > 4GB file it fails to be anything bootable. And it gets a bit tedious if you have to tar each individual directory under / and restore them in a particular order to avoid large files. With special considerations for /home and /usr/share/doc/ and others to avoid girth. YMMV

rsync is what I tend to use these days. Although I don't do incremental. And I typically use it to clone a system that is not currently booted. Change the fstab, update the grub, and boot the rsync'd copy. When not rsyncing a live system you can omit the long list of --excludes and use simple flags like -aRXv. It's pretty fast and flexible if you can afford the down time to not be syncing a live system. You can use it on live systems too, but it's slower and more complex, with more potential for undesirable results.
Oh yeah man, dd will back up empty space. I've got a 1 TB drive to back up and am only using about 200 MB of it right now. I need compression with dd, and the examples in the links showed how to do it. BUT, does dd NEED the full 1 TB to make the backup and THEN gzip it down smaller, or is that done on the fly, so I can back up to a smaller external hard drive?

Hey Shadow, those are some good tips, but how about backing them up with some samples of what you use, in a code box, for me to see? I am starting to understand it enough to get 'of=output file' and 'if=input file' (DO NOT mix them up!). And some of the other flags too. Can I see some samples of what you use, please?

Back to my overheating issue. I think I will idle this thing from a different live CD and see if I still get the high temps. I cannot use a machine that runs within 5 degrees of shutdown or self-destruction. I have done electronics all my life, so I know how to mount a heat sink. Maybe it is not seating properly; I will check for that, but this CPU should NOT run at 65 C (149 F) "just idling".

Last edited by ohmster; 05-20-2015 at 08:54 PM.
 
Old 05-20-2015, 09:07 PM   #19
ohmster
Member
 
Registered: May 2005
Location: South Florida
Distribution: CentOS 7
Posts: 39

Original Poster
Rep: Reputation: 2
Quote:
Originally Posted by dt64 View Post
fully agreed.
hence the combination with gzip or pbzip2, but this takes processing time for crunching in exchange for less storage space used.


Fully agreed.

Quote:
Originally Posted by ohmster
Yeah, none of this is perfect but it should get the job done. More in the message at the bottom.


@ohmster:
for your thermal issues, even they don't quite fit in this topic, try to take your cooler away, clean the proc and cooler, use new paste (not too much, just about right amount), put it all back in place and give it a retry.
mounting coolers is not about artic silver, cooler master or any other fancy name (even these brand were known to sell high-priced average quality, so nothing wrong with using it apart from the money but), but about how it's done, clean and nice. Make sure the fan on the cooler runs well and gets fresh air and all should be ok.
Yes, I am WAY off topic in this thread with the thermal issues and need to make a new thread for them. I only reported the thermal problems so that you guys would not wonder "why is he not taking our advice and getting this done?". It is a courtesy to you, and I will not drag it out here. I graduated top of my class in electronics school in 1982 and have been a consumer-electronics component-level tech my whole life, hence I know all about thermal paste, cleanliness, proper application of compound, properly seating the heat sink, unrestricted air flow, and that brand names alone do not make a cooling system work. What I have should be sufficient to run this machine. I did purchase a P4 with the 64-bit instruction set, clocked 600 MHz higher than the one it had, on eBay for $24. Thus all of the cleaning, application of new paste, and mounting of the cooler has already been done. Perhaps I overlooked something? Not seated properly? I repaired the Windows 7 on this machine and ran it for 2 weeks without issues before replacing it with CentOS 7; that is when I noticed the thermal issues. (If you check the link you'll see that this CPU is a "used" part. Could it be defective?)

The Cooler Master and Arctic Silver went into my Windows 7 x64 "super desktop PC" because that is the best I have got. I spent $2K on parts from Newegg for that thing, and man oh man does it perform! SSD, 16 GB DDR3 RAM (I was going for 32 but could not justify the extra couple hundred dollars), video card, the works! But for Linux I generally re-purpose older machines, and this one should be perfect. I built this "bare bones" box together with my friend, and looking at my file list on it, the oldest dated file is from 2006, which would make this machine 9 years old. I have to green-light or dump this machine, and I hate to dump all this powerful hardware that would be perfect for Linux. What I am currently using is close to the same age and has never given me problems.

Thank you very much for the advice and for sharing your vast experience with me. And for taking the time and interest. I keep copies of all of it for reference. Thank you very much!

Last edited by ohmster; 05-20-2015 at 09:19 PM.
 
Old 05-21-2015, 02:48 AM   #20
dt64
Member
 
Registered: Sep 2012
Distribution: RHEL5/6, CentOS5/6
Posts: 218

Rep: Reputation: 38
Quote:
Originally Posted by ohmster View Post
Oh yeah man, dd will back up empty space. I've got a 1 TB drive to back up and am only using about 200 MB of it right now. I need compression with dd, and the examples in the links showed how to do it. BUT, does dd NEED the full 1 TB to make the backup and THEN gzip it down smaller, or is that done on the fly, so I can back up to a smaller external hard drive?
If you use dd as in my example above (e.g. dd if=/dev/hda conv=sync,noerror bs=64K | gzip -c > /mnt/sda1/hda.img.gz), it will do the compression "on the fly", so you don't need the extra space; you save space in exchange for extra processor resources used while copying/compressing.
Of course you could do the copy first and the compression later, but then you need at least exactly as much space on the destination as on the source.
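The restore direction works the same way, decompressing on the fly with no intermediate file. A sketch using the example names from above; triple-check of= before pointing this at a real device:

```shell
# Decompress the image and write it straight back to the device.
# /dev/hda and the image path are the example names from this thread --
# substitute your own.
gunzip -c /mnt/sda1/hda.img.gz | dd of=/dev/hda bs=64K
```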

Quote:
I am starting to understand it enough to get 'of=output file' and 'if=input file' (DO NOT mix them up!)
Well spotted. It wouldn't be the first HDD blanked by "PEBKAC".
 
Old 05-21-2015, 03:34 AM   #21
ohmster
Member
 
Registered: May 2005
Location: South Florida
Distribution: CentOS 7
Posts: 39

Original Poster
Rep: Reputation: 2
Quote:
Originally Posted by dt64 View Post
If you use dd as in my example above (e.g. dd if=/dev/hda conv=sync,noerror bs=64K | gzip -c > /mnt/sda1/hda.img.gz), it will do the compression "on the fly", so you don't need the extra space; you save space in exchange for extra processor resources used while copying/compressing.
Of course you could do the copy first and the compression later, but then you need at least exactly as much space on the destination as on the source.
Well spotted. Wouldn't be the first HDD blanked by "PEBKAC"
Thank you, thank you, thank you, DT! That did the trick! I swapped the 1 TB backup USB drive for the 750 GB one and it works fine. Same label, and it shows up as the same /dev/sdb1, so everything works! I want to use the 1 TB USB drive to image the entire drive and cut rsync back to just my /home directory, but I am not quite sure how to edit your script. Can you please show me how to modify this backup script to back up only /home and nothing else?

# Script to backup hard drive to USB drive run by cron. By jlinkels
# http://www.linuxquestions.org/questi...9/#post5363825
Code:
#!/bin/bash

mount_point='/mnt/backup'
echo_flag=''

# Find if the device is mounted
df -h | grep $mount_point > /dev/null
if [ $? -eq 0 ]
then
        $echo_flag rsync -uav --exclude='/mnt' --exclude='/proc' --exclude='/sys' --delete / /mnt/backup > /var/log/rsync_daily
        echo "mount point $mount_point exists, rsync started"
else
        echo "Error: mount point $mount_point does not exist, rsync operation skipped"
fi
I *think* I change the first slash in this line to my desired backup directory, and then delete some or all of the excludes, yes?

$echo_flag rsync -uav --exclude='/mnt' --exclude='/proc' --exclude='/sys' --delete / /mnt/backup > /var/log/rsync_daily
echo "mount point $mount_point exists, rsync started"

Change that first slash (the source path) to '/home' and then remove ALL of the excludes, because those directories will not be part of the backup, no?

Would you mind showing me the properly edited version, please? I do not want to make a mistake. When you mentioned what would happen if this ran without the drive mounted, I could visualize the consequences and how bad they would be, so you understand why I need your help to be sure. Thanks, buddy!
 
Old 05-25-2015, 05:15 PM   #22
Shadow_7
Senior Member
 
Registered: Feb 2003
Distribution: debian
Posts: 4,137
Blog Entries: 1

Rep: Reputation: 874
I've had heat issues before. Once, when the fan on the power supply failed, opening the case and having a $10 Walmart fan blowing on the whole box kept that system alive for a couple more years. And once when a graphics card was failing; the fan on said card had died a year-plus before the card itself did. Once it failed, I pulled it and used the integrated graphics that came with the motherboard. It was still a warm system, but not nearly as warm once the graphics card was pulled. These days I tend to run the lower-power fanless options. Fewer moving parts, fewer failures. Plus low heat and low power, so I can safely run several of them off one plug in a 70s-era dwelling. If you have an old system, consider one of the newer Atom or similar low-power ones. They'll pay for themselves in what you don't spend on electricity or air conditioning to counter the heat output. Plus they're so quiet, and my beer stays colder longer.

My clone process of sorts. Boot a random or older distro for your system so the system you want to clone is not currently in use.

Code:
# mount /dev/existing_install /mnt/OLD
# mount /dev/newly_formatted_partition /mnt/NEW
# cd /mnt/OLD
# rsync -aRXv ./* /mnt/NEW/
# cd /mnt/NEW
# mount -t proc none /mnt/NEW/proc
# mount --rbind /dev /mnt/NEW/dev
# export LANG=C; chroot /mnt/NEW /bin/bash
(chroot)# nano etc/fstab
(chroot)# grub-install --force /dev/MBR_of_new_drive
(chroot)# update-grub
(chroot)# nano /boot/grub/grub.cfg
(chroot)# exit
# shutdown -h -P now
Swap out the drives, push the power button, carry on. I tend to use UUID= and set a custom UUID when I mkfs. The UUID I set contains a YYYYMMDD of the date I created it, which helps with maintenance and peace of mind.

The /etc/fstab edit is to update the UUID of the new filesystem. The /boot/grub/grub.cfg edit is to verify that the root=UUID=... matches the new system and is otherwise NOT root=/dev/... The shutdown at the end is to allow hardware changes, and because unmounting the chroot can be difficult depending on what you did while inside it.

I am skipping the steps of partitioning and mkfs here. I tend to use SDHC cards a lot, so: cfdisk to change the FAT partition type to ext4 or xfs, then mkfs for whichever. For xfs, you have to use xfs_admin to set the UUID after making the filesystem. I also tend to do debootstrap installs to a subdirectory of my live system and then rsync them in a similar manner to a bootable location. That's quite useful if you need network drivers to finish a network install, or other firmware that tends to get omitted by various installers.
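The date-in-the-UUID trick is easy to sketch: YYYYMMDD is eight hex-legal digits, so it fits the first UUID field exactly. The filler digits below are arbitrary, and the device path is an example:

```shell
# Build a UUID whose first field is today's date.
TODAY=$(date +%Y%m%d)
UUID="${TODAY}-0000-0000-0000-000000000000"
echo "$UUID"
# ext4: set it at mkfs time, or later with tune2fs:
#   mkfs.ext4 -U "$UUID" /dev/sdX1
#   tune2fs -U "$UUID" /dev/sdX1
# xfs: the UUID can only be set after mkfs, on an unmounted filesystem:
#   xfs_admin -U "$UUID" /dev/sdX1
```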
 
Old 05-25-2015, 06:16 PM   #23
jlinkels
LQ Guru
 
Registered: Oct 2003
Location: Bonaire, Leeuwarden
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195

Rep: Reputation: 1043
Quote:
Originally Posted by Shadow_7 View Post
Boot a random or older distro for your system so the system you want to clone is not currently in use.
You are using pretty much the same approach as I do. But you say "Boot a random or older distro", and then you chroot. I remembered that you need to run the same kernel as the kernel in your chroot system; now that I read this, I am not sure. Does it work for you when the running kernel and the kernel in your chroot are different?
Quote:
Originally Posted by Shadow_7 View Post
The UUID that I set contains a YYYYMMDD of the date that I created it
That is a smart trick I should remember!

jlinkels
 
Old 05-26-2015, 02:55 AM   #24
dt64
Member
 
Registered: Sep 2012
Distribution: RHEL5/6, CentOS5/6
Posts: 218

Rep: Reputation: 38
Quote:
Originally Posted by jlinkels View Post
You are using pretty much the same approach as I do. But you say Boot a random or older distro. Then you chroot. I remember that you need to run the same kernel as the kernel in your chroot system. Now when I read this I am not sure. Does it work for you when the running kernel and the kernel in your chroot are different?
You can boot any kernel you want, as long as it supports your hardware.
 
Old 05-26-2015, 11:44 AM   #25
jlinkels
LQ Guru
 
Registered: Oct 2003
Location: Bonaire, Leeuwarden
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195

Rep: Reputation: 1043
Quote:
Originally Posted by dt64 View Post
you can boot any kernel you want as long as it supports your hardware.
In that case I was mistaken. I should try it again, as it considerably increases flexibility during a restore.

jlinkels
 
Old 05-26-2015, 06:08 PM   #26
Shadow_7
Senior Member
 
Registered: Feb 2003
Distribution: debian
Posts: 4,137
Blog Entries: 1

Rep: Reputation: 874
Quote:
Originally Posted by jlinkels View Post
You are using pretty much the same approach as I do. But you say Boot a random or older distro. Then you chroot. I remember that you need to run the same kernel as the kernel in your chroot system. Now when I read this I am not sure. Does it work for you when the running kernel and the kernel in your chroot are different?
That is a smart trick I should remember!

jlinkels
There is one caveat: you need to be running a 64-bit kernel to chroot into a 64-bit installation. That doesn't mean the host distro has to be 64-bit; only the kernel does. If you're going to do more than just editing configs and installing grub, you may want to make special kernel considerations. But most things load at boot, so it's not like you need to change much, and you still have access to the host system. With QEMU you can do non-native chroots of sorts, but I've never ventured that way myself. You may want to rerun update-grub after booting the cloned system, so it has access to all filesystems (kernel modules) when it runs and beefs up the menu.
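A quick way to compare word sizes before chroot-ing (a sketch; /bin/ls of the running system stands in for the target's /mnt/NEW/bin/ls):

```shell
# Compare the running kernel's arch against a binary from the target install.
uname -m        # e.g. x86_64 (can enter 32- or 64-bit chroots) or i686
file /bin/ls    # reports "ELF 64-bit ..." or "ELF 32-bit ..."
```

If the kernel is i686 and the chroot's binaries are 64-bit ELF, the chroot's shell simply won't execute.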

Last edited by Shadow_7; 05-26-2015 at 06:10 PM.
 
Old 05-26-2015, 07:51 PM   #27
jlinkels
LQ Guru
 
Registered: Oct 2003
Location: Bonaire, Leeuwarden
Distribution: Debian /Jessie/Stretch/Sid, Linux Mint DE
Posts: 5,195

Rep: Reputation: 1043
Quote:
Originally Posted by Shadow_7 View Post
There is one caveat: you need to be running a 64-bit kernel to chroot into a 64-bit installation. That doesn't mean the host distro has to be 64-bit; only the kernel does. If you're going to do more than just editing configs and installing grub, you may want to make special kernel considerations. But most things load at boot, so it's not like you need to change much, and you still have access to the host system. With QEMU you can do non-native chroots of sorts, but I've never ventured that way myself. You may want to rerun update-grub after booting the cloned system, so it has access to all filesystems (kernel modules) when it runs and beefs up the menu.
That is no problem. As long as I know I have to chroot into a 64-bit system and can boot an arbitrary 64-bit kernel, that is fine. And no, there is nothing more exotic I want to do than running a grub-install on a /dev/sd*. Like I said, I have to try this out; given my current priority list, that will be November 2018 or so.

jlinkels
 
  

