Linux - News: This forum is for original Linux News. If you'd like to write content for LQ, feel free to contact us.
Nevertheless, as a workaround, he very quickly wrote patches for Ext4 that recognise the rename() situation and ensure it behaves like Ext3, and a second procedure that uses ftruncate(). Now, however, he has provided a "proper" solution. The new ext4 mounting option "alloc_on_commit" gives Ext4 a mechanism analogous to "data=ordered" in Ext3, whereby metadata is not committed in the journal until after blocks have been allocated and the data has been written. However, this change probably won't arrive until version 2.6.30 of the kernel at the earliest.
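To make the rename() situation concrete: the applications at risk are those that rewrite a file by writing a temporary copy and renaming it over the original, without calling fsync() first. Under ext3's data=ordered that pattern happened to be safe; under ext4's delayed allocation a crash could leave a zero-length file, which is what the patches guard against. A minimal sketch of the safe version of the idiom (the function name safe_replace is just illustrative):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Atomically replace `path` with new contents; returns 0 on success.
   This is the write-temp-then-rename() idiom the ext4 patches detect. */
static int safe_replace(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    /* Force the data blocks to disk *before* the rename commits the
       metadata; without this fsync(), delayed allocation can leave a
       zero-length file if the system crashes at the wrong moment. */
    if (fsync(fd) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* rename() swaps the new file in atomically. */
    return rename(tmp, path);
}
```

An application that does this is safe on ext4 regardless of mount options; the kernel-side workarounds exist for the many programs that skip the fsync() step.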
I am planning to install Mandriva-2009.1 with the Ext4 fs when it becomes available. Mandriva will probably not have kernel-2.6.30 because it isn't even available yet. So, to protect against data loss, I was going to mount my Ext4 partitions with the -o nodelalloc option. I thought this was the option that enabled Ext4 to behave like Ext3 & write data to the disk without the Delayed Allocation feature.
I am now uncertain what option I should use.
1.) mount -t ext4 /dev/sdb1 / -o nodelalloc
2.) mount -t ext4 /dev/sdb1 / -o alloc_on_commit
Can anyone explain the difference?
I could be wrong about the "nodelalloc" option because I found out about it by reading some pretty technical Ext4 internet postings. The discussions there were way above my head, so maybe I just didn't comprehend what was said properly.
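For what it's worth, if nodelalloc does turn out to be the right choice, it can be made persistent in /etc/fstab instead of being passed to mount by hand on every boot. A sketch of such an entry, with the device and mount point taken from the example commands above (adjust both for the real system):

```
# /etc/fstab entry (device and mount point are illustrative)
/dev/sdb1  /  ext4  defaults,nodelalloc  1  1
```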
Perhaps the designer / developer of ext4 should have thought a bit more about the problem.
I suspect he probably did - seeing as how he is also responsible for ext2 (as we know it) and ext3 (go to the bottom of "man mkfs.ext3" and see if the name looks familiar).
New features expose new problems - nothing remarkable about that. Unfortunately this came up so late in the release cycle. Personally I can't understand why people rush into the "latest and greatest" filesystem - it's what holds all your data.
Bragging rights about being on the "bleeding edge" lose their gloss when your data is trashed.
Yet another solution would be to mount ext4 volumes with the nodelalloc mount option. This will cause a significant performance hit, but apparently some Ubuntu users are happy using proprietary Nvidia drivers, even if it means that when they are done playing World of Goo, quitting the game causes the system to hang and they must hard-reset the system. For those users, it may be that nodelalloc is the right solution for now — personally, I would consider that kind of system instability to be completely unacceptable, but I guess gamers have very different priorities than I do.
Personally I can't understand why people rush into the "latest and greatest" filesystem - it's what holds all your data.
Bragging rights about being on the "bleeding edge" lose their gloss when your data is trashed.
My $AU0.02
Exactly.
What I can't understand is that back in the good old days (anyone remember the Apple II, Commodore 64, or IBM PCjr?), the system wrote the data when the user gave the command, and nobody was concerned about waiting a few seconds. Now, with CPUs running millions of times faster and drives accessing data hundreds of thousands of times faster, people are willing to trade data integrity for performance!
Is it just me, or has the market push for "bigger and faster" over the past quarter-century warped everyone's minds?