Here are the details of the 4 modes available:
Code:
mode1 blocksize=0 density=0x51 compression=1 # DLT-V4 density, compression on
mode2 blocksize=0 density=0x51 compression=0 # DLT-V4 density, compression off
mode3 blocksize=0 density=0x50 compression=1 # VS160 density, compression on
mode4 blocksize=0 density=0x40 compression=1 # VS80 density, compression on
The unit is a Quantum DLT-V4 and the tapes are HP DLT Tape VS1 (160 GB with 2:1 compression). I guess I should use mode 1 or mode 3. Not sure. When I tar the data, should I also zip it, or rely on the tape compression? |
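For reference, settings like those modes can usually also be applied by hand on the no-rewind device before writing. A rough sketch, assuming FreeBSD's mt(1) syntax and the /dev/nsa0 device that shows up later in this thread:

```shell
# Set DLT-V4 native density with hardware compression (roughly "mode 1").
# Syntax assumed from FreeBSD mt(1); check your own man page first.
mt -f /dev/nsa0 density 0x51   # DLT-V4 density
mt -f /dev/nsa0 comp on        # hardware compression on
mt -f /dev/nsa0 blocksize 0    # variable block size
```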
OK, mode 1 is what I need for sure, or mode 2 if I want to disable tape compression.
I went ahead and started the backup *crosses fingers* Will users notice a serious drop in performance as the tar program reads every single file on the server? CPU usage looks minor, but I'm worried about disk access (I guess that's a good reason to do it during low-usage times).

I decided I'm not worried if a few files get corrupted; it's reasonably likely that none will anyway. Each file (unless it happens to be very big) is written to the tape very quickly, so it's not *too* likely that a file will be in the process of being modified when the tar program picks it up (unless the file is modified very frequently). |
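On the performance worry: one cheap mitigation is to run the backup under nice. A sketch, not a guarantee; it only lowers CPU scheduling priority, and heavy disk I/O can still slow other users down:

```shell
# The real invocation would just be prefixed, e.g.:
#   nice -n 19 tar cvf /dev/nsa0 /home
# Harmless demonstration that the prefix works:
nice -n 19 echo "running at lowest CPU priority"
```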
I think I'm getting the hang of this. The big server needs two tapes. I think the data will fit on the tapes using the configuration below. I test-compressed 100 MB of mail and it was neatly cut in half, so I know it will all fit. Same with the second tape.
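That 100 MB spot check generalizes; a small sketch (the directory is a placeholder, point it at whatever you plan to back up) that estimates the compressed size without touching the tape:

```shell
#!/bin/sh
# Estimate how well a directory compresses before committing it to tape.
# DIR is a placeholder -- set it to the data you plan to back up.
DIR=${DIR:-/etc}
orig_kb=$(du -sk "$DIR" 2>/dev/null | awk '{print $1}')
comp_kb=$(tar cf - "$DIR" 2>/dev/null | gzip -c | wc -c | awk '{print int($1/1024)}')
echo "original: ${orig_kb} KB  gzipped: ${comp_kb} KB"
```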
Any reason the following scripts wouldn't work? Any suggestions to improve them?
Code:
#!/bin/csh
Code:
#!/bin/csh |
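The script bodies didn't survive the quote above, so here's a hypothetical minimal sketch of the kind of script under discussion (plain sh rather than the original csh; /dev/nsa0 and the paths are assumptions, and TAPE defaults to a file so the sketch can be dry-run):

```shell
#!/bin/sh
# Hypothetical backup sketch -- NOT the poster's actual (csh) script.
# On the real box TAPE would be the no-rewind device, e.g. /dev/nsa0.
TAPE=${TAPE:-/tmp/backup-test.tar}
SRC=${SRC:-/etc}

tar cf "$TAPE" "$SRC" 2>/dev/null      # write the archive (errors often just unreadable files)
tar tf "$TAPE" > /dev/null 2>&1 \
    && echo "archive on $TAPE reads back OK"
```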
OK, I ran a test script:
Code:
#!/bin/csh
Then I attempted to check the tape to verify that everything was alright, but I got errors:
Code:
srv# tar tvzf /dev/nsa0 |
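Those errors are consistent with a flag mismatch (an assumption, since the write command isn't shown): if the tape was written with plain `tar cvf`, i.e. no `z`, then reading it back with `tvzf` asks gzip to decompress data that was never gzipped. A quick way to check, demonstrated with a file standing in for the tape:

```shell
# Was the archive actually gzipped?  gzip -t says no for a plain tar.
tmp=$(mktemp -d)
echo hello > "$tmp/file"
tar cf "$tmp/arch.tar" -C "$tmp" file      # written WITHOUT z

if gzip -t "$tmp/arch.tar" 2>/dev/null; then
    echo "gzipped: list it with tar tvzf"
else
    echo "not gzipped: list it with plain tar tvf"
fi
```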
err..
Maybe running ... tar cvf /dev/rst12 / ... on the other server wasn't such a good idea. The /proc directory is causing all kinds of trouble, and after looking up info about it, it's probably a bad idea to include it. Is there any way to tell the system to tar / but skip /proc? If so, is there anything else I should skip? Is this going to ruin the backup? It's giving a lot of errors (arg list too long, is not a file, file changed size, invalid argument), although it keeps on going. It's also seriously degrading system performance, although thankfully no one is using this system but me at the moment.

EDIT: Well, it made it through finally. I'm coming to work tomorrow to back up the other server if I can get some good intel from you guys. Thanks :) Otherwise it might have to wait until Monday night, though things are going to get busier and busier... |
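To answer the skip-/proc question directly: GNU tar can do it with --exclude (an assumption that GNU tar is what's installed; some older vendor tars lack the option). Demonstrated on a scratch tree standing in for /:

```shell
# On the real box this would be something like:
#   tar cvf /dev/rst12 --exclude=/proc --exclude=/dev /
# Demonstration on a scratch tree standing in for /:
root=$(mktemp -d)
mkdir -p "$root/proc" "$root/home"
echo data > "$root/home/file"
echo junk > "$root/proc/junk"

tar cf "$root.tar" -C "$root" --exclude='./proc' .
tar tf "$root.tar"    # ./home/file is listed; nothing under ./proc is
```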
Yeah, don't back up /proc or /dev; they're really pseudo-type filesystems. They don't actually exist when you turn the machine off and are created when the machine is turned on.
I also usually exclude /tmp, and some of /var where I don't need to back up logs, since I usually have a centralized loghost anyway. And if you don't have a lot of customized applications in /usr, you can usually omit that as well, since you can do a bare-metal recovery by installing the base OS first and then applying the backed-up data, etc. |
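Once the skip list grows (/proc, /dev, /tmp, parts of /var), it's tidier to keep it in a file. A sketch assuming GNU tar's -X/--exclude-from, again on a scratch tree:

```shell
# Keep the skip list in a file and pass it with -X (GNU tar --exclude-from).
root=$(mktemp -d)
mkdir -p "$root/home" "$root/var/log" "$root/tmp"
echo keep > "$root/home/file"
echo logs > "$root/var/log/messages"

cat > "$root/excludes" <<'EOF'
./var/log
./tmp
./excludes
EOF

# The real run would be something like:
#   tar cvf /dev/nsa0 -X /path/to/excludes -C / .
tar cf "$root.tar" -X "$root/excludes" -C "$root" .
tar tf "$root.tar"     # home/file survives; var/log contents do not
```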
Quote:
/var/adm/lists
And there are some important scripts in it. For now I can't skip anything except /proc and /dev, unless there is a really large chunk of data somewhere that I can be sure is not vital (as I could use a bit of space). |