Slackware: This Forum is for the discussion of Slackware Linux.
[EDIT] Also, you can't install LILO to the boot sector of an XFS partition, only to the MBR (the XFS superblock sits at the very start of the partition, leaving no room for boot code). That would be a problem for me, as I want to preserve the Windows boot loader (yes, I do launch it every now and then) on my Lenovo ThinkPad T61. So, with /dev/sda4 as my Linux root partition, I have "boot = /dev/sda4" in lilo.conf and I made /dev/sda4 bootable.
Last edited by Didier Spaier; 12-05-2009 at 07:53 AM.
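For reference, a minimal lilo.conf along those lines might look like the sketch below (the kernel path and label are illustrative; adapt them to your own setup):

```
# /etc/lilo.conf - install LILO to the root partition's boot sector,
# leaving the Windows boot loader in the MBR untouched.
boot = /dev/sda4          # partition boot sector, NOT /dev/sda (the MBR)
prompt
timeout = 50

image = /boot/vmlinuz     # illustrative kernel path
  root = /dev/sda4
  label = Slackware
  read-only
```

After editing, run /sbin/lilo as root to install the loader, and mark the partition active (e.g. with fdisk's `a` toggle) so the MBR code chain-loads it.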
I have to say, I had JFS on my external 1 TB USB drive, but I had problems with the partition table after LUKS-encrypting it. I don't know whether the problems were caused by the file system or by something else, though...
Does anyone have experience with JFS on LUKS-encrypted external USB drives? Does it work? It should, but see above...
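For comparison, a typical LUKS-plus-JFS setup on a USB partition looks like the sketch below. The device name /dev/sdX1 is a placeholder; double-check it before running anything, since luksFormat destroys all existing data on its target:

```
# Create a LUKS container on the USB partition (DESTROYS existing data there).
cryptsetup luksFormat /dev/sdX1

# Open it; the decrypted device appears under /dev/mapper.
cryptsetup luksOpen /dev/sdX1 usbcrypt

# Make a JFS filesystem inside the container and mount it.
mkfs.jfs -q /dev/mapper/usbcrypt
mount /dev/mapper/usbcrypt /mnt/usb

# ... use the drive ...

# Clean up: unmount and close the container.
umount /mnt/usb
cryptsetup luksClose usbcrypt
```

Note that the LUKS header lives inside the partition, so the encryption step itself never touches the partition table; corrupting the table is more likely if luksFormat is accidentally run on the whole disk (/dev/sdX) instead of the partition (/dev/sdX1).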
and much more was posted on the topic here on LQ a while ago; just search for it. I don't think anything was done about it, except perhaps to warn users.
Either way, if you want a thoroughly tested fs, choose ext3. If you want performance with reasonably good reliability (though not quite as extensively tested), choose XFS or JFS.
The "problem" you quoted does not exist. The write delay can be tuned, and it's not specific to either filesystem. The most likely reason to tune the write delay is switching between battery and mains power (laptop users).
BTW, in recent kernels the default mount options of ext3 also changed to more dangerous values to improve performance.
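For example, the kernel's write-back delay can be inspected and tuned at runtime via sysctl; the values below are illustrative and defaults vary by kernel version:

```shell
# Current write-back settings (values are in centiseconds).
cat /proc/sys/vm/dirty_writeback_centisecs   # how often the flusher thread wakes up (default 500 = 5 s)
cat /proc/sys/vm/dirty_expire_centisecs      # how old dirty data must be before it is flushed

# To lengthen the delay (e.g. on battery, letting the disk spin down), as root:
#   sysctl -w vm.dirty_writeback_centisecs=1500
```

The per-filesystem analogue for ext3/ext4 is the commit=N mount option, which sets the journal commit interval in seconds (e.g. commit=60 in /etc/fstab).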
Please expand on your solution rather than making an arbitrary or just plain ambiguous statement. I've yet to experience problems with ext3, and I'm curious where you're getting this information.
The "problem" you quoted does not exist. The delay time can be tuned. It's not specific to either filesystem.
There was a long thread (titled something like "slow application startup") where this issue with ext3 came up. There was a kernel bug known as the Linux I/O wait bug, with long discussions on the kernel Bugzilla. In the end it was closed as unresolved, since there appeared to be multiple factors causing high wait times. Some of these were filesystem-independent and likely related to the scheduler, but there was disagreement over whether it was a bug or just the expected behaviour under load. I don't know if they made any changes to the scheduler code for this. If what you're referring to above is that part of the problem, AFAIK you can't tune the delay time (unless you tinker with the kernel, maybe?).
However, there were also cases that were tracked down to the ext3 code (more specifically, to the way data=ordered mode behaves). There were so many complaints that they ended up switching the default mount option to data=writeback in ext3 and ext4. This is the unsafe choice; even Torvalds was openly against the idea. There were reported cases of data loss, too. I think they made some small improvements to reduce the risk. Some ext4 developers even said "it's not a bug, it's a feature" and blamed poorly written applications for not doing the necessary checks and for assuming certain behaviour. Anyway, writeback is still considered the unsafe default. As I said above, they were working on an intermediate solution, but I haven't followed up on that.
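The "poorly written applications" complaint refers mainly to programs that rewrite a file in place and assume the data reaches the disk before the metadata. The defensive pattern the ext4 developers recommended is write-then-rename with an explicit flush; a rough shell sketch (file name and contents are illustrative):

```shell
#!/bin/sh
# Crash-safe rewrite of a file: write the new contents to a temporary
# file first, flush it to disk, then rename it over the original.
# rename() is atomic on POSIX filesystems, so a crash leaves either the
# old file or the complete new one - never a truncated, zero-length file.
f="settings.conf"
printf 'option = new-value\n' > "$f.tmp"
sync                      # flush dirty data to disk before the rename
mv "$f.tmp" "$f"
```

Applications that skip the flush step are the ones that ended up with zero-length files after a crash under data=writeback.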
I've been happy with JFS. I use ext2 for my boot partition, and ext3 or JFS for the remaining partitions. With Slackware I use JFS; with Debian and CentOS I use ext3.
I had two crashes running Ubuntu on ext4 partitions. I was not able to do any recovery after the crashes. One of the machines would not even boot.
I'm using ext4 with 'noatime,barrier=0,data=writeback,nobh,commit=100'. It's been solid so far, but my backups are synced every night ;]
The other machine has been running for years with ReiserFS, abused really hard, without problems. I'm just waiting for my vacation to format an old spare rig and install Slack with JFS, to satisfy an old desire to try this fs.