Slackware: This forum is for the discussion of Slackware Linux.
I discovered that the plug had been pulled out of my external USB disk.
The prospects for recovery are not good: it will not let me run e2fsck, saying that the superblock shows the device is still mounted (it is not) or in use by another application (it is not).
I can get the files from elsewhere.
But I now want to build the disk with a file system that gives me the best chance of recovering my data after a mishap.
I use ext3. Since I live in Oklahoma (Tornado Alley), we get a lot of severe thunderstorms that cause occasional power outages. My PC has gone down many times from the power going out, and ext3 always recovers nicely...
Did you try unmounting the device with the -l option of umount?
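One low-risk way to rehearse these steps is on a loopback image rather than the real USB disk. A minimal sketch, assuming e2fsprogs is installed; the /tmp file name is made up:

```shell
# Create a small image file and put an ext3 filesystem on it
dd if=/dev/zero of=/tmp/ext3.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F /tmp/ext3.img          # -F: it's a file, not a block device
# -f forces a full check even if the filesystem is marked clean,
# -p (preen) fixes safe problems automatically
e2fsck -f -p /tmp/ext3.img >/dev/null
echo "e2fsck exit: $?"                 # 0 means the filesystem is clean
```

On the real disk you would first confirm with `mount` that nothing has it mounted, try `umount -l /dev/sdXN` to clear any stale mount, then run `e2fsck -f` against the partition.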
For my part, I did one successful data recovery with reiserfs: I deleted a partition by mistake and was able to recover all my files with reiserfsck --rebuild-tree -S.
Well if Okie feels comfortable with ext3 after many power outages, that's a good recommendation.
Being able to rebuild the tree after deleting a partition is also good - though deleting a partition is not as destructive as it sounds: for my internal disks I use Ranish Partition Manager, and with that one routinely "deletes" partitions simply in order to hide them.
As long as one knows the start and end addresses one can recover the contents - provided one does not format the partition, of course.
I shall try both and provoke a corrupted file system and see which handles it better.
Thanks for the recommendations.
Before that I'll try umount -l, but I'll have to wait as I am now running "testdisk", which is trying to extract files from the corrupted file system.
Really any of them will recover well after a crash, except maybe XFS, which speeds things up by caching a lot of data in RAM before writing it to the drive. ext3 tends to be a good choice mostly because it is easier to recover from corruption, not because it becomes corrupt less easily; I think they're all about equal in that respect.
You should probably add the sync option to your mount options for the device so that writes will be done right away. That will help cut down on the chances of having unwritten data left in the buffer.
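To get a feel for what synchronous writes mean, dd's oflag=sync opens its output file with O_SYNC, which is roughly what the sync mount option imposes on every write to the filesystem: each write call returns only after the data has been pushed to the device. A small sketch; the /tmp path is arbitrary:

```shell
# Write 4 blocks of 4 KB synchronously; each write completes on disk
# before dd moves on, just as it would on a filesystem mounted with sync
dd if=/dev/zero of=/tmp/sync_demo bs=4k count=4 oflag=sync 2>/dev/null
stat -c %s /tmp/sync_demo   # 16384 bytes, already flushed to the device
```

The trade-off is speed: synchronous writes are much slower than buffered ones, which is usually acceptable for a removable backup disk but not for a system partition.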
add the sync option to your mount options for the device
I've added "sync" after the defaults options in fstab:
Code:
/dev/sda1 /mnt/sda1 ext3 defaults,sync 0 0
The partitions are 100 GB each - can I just rely on the mkfs.ext3 defaults?
Should I have an external journal? From the man page:
Quote:
device=external-journal
Attach the filesystem to the journal block device
located on external-journal. The external journal
must already have been created using the command
mke2fs -O journal_dev external-journal
Note that external-journal must have been created
with the same block size as the new filesystem. In
addition, while there is support for attaching multi-
ple filesystems to a single external journal, the
Linux kernel and e2fsck(8) do not currently support
shared external journals yet.
Really any of them will recover well after a crash, except maybe for XFS as it speeds things up by caching a lot of stuff in RAM and then writing to the drive.
That is one of the main reasons I chose not to go with XFS. It seems to me that XFS arbitrarily chooses when to actually write data to the disk. Sure, you may issue a command to copy something over, but I don't want to spend time worrying whether it is actually written to the disk or just stored in volatile memory. To me XFS just doesn't seem as reliable. Some might prefer the 'delayed allocation' feature, but not me.
You can go with the defaults option until you see that something needs to be changed. Options that come after the word defaults override the defaults, so be sure defaults is always the first option listed.
Using an external journal may be more dependable, but I don't do that - it sounds over the top to me for most users.
I have had very good luck with reiserfs for 5-6 years now, but have recently been trying out jfs and also using at least one ext3 partition. jfs uses fewer CPU cycles and mounts much faster than reiserfs. ext3 is very stable and is the default for most distros. reiserfs is especially good if you have lots of small files.
XFS isn't all that bad. It does not use nobarrier by default. If it did, it would be considerably faster but at the expense of far greater file system integrity risks.