MS Windows less vulnerable to file system errors than Linux?
Hi.
Is it so that the consequences of a Linux server crashing, with regard to file system errors, are greater than those of an MS Windows server crashing? By crashing I mean, for example, losing connectivity to the hard drive, or someone pulling the plug. Regards, kenneho
In my experience, quite the contrary. (Depending, of course, on which "Linux" file system you're using. There are, after all, many different ones.)
The latest default file system (ext4) and the prior default one (ext3) both include file system journals, so any file changes are "double-entered," and, in the event of a "crash," any incomplete file operations are automatically completed (or rolled back) when the system is rebooted. The older Windows FAT file systems have no journal, so a "crash" there will, almost inevitably, result in some data loss (NTFS does journal its metadata, though). Most Linux file systems are also designed in such a way that files are rarely "fragmented," so you seldom need to run a defragmentation program on a Linux file system. Of course, if you really want to use the old-fashioned Windows file system(s) on your Linux system, those file systems can be used. But few Linux users are that silly.
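To illustrate the "double-entry" idea, here is a toy Python sketch of write-ahead journaling. This is purely illustrative: real ext3/NTFS journals operate on disk blocks and metadata, not key/value pairs, and the `journal.log` file is an invented stand-in for the on-disk journal area.

```python
# Toy sketch of write-ahead journaling (illustration only; real file system
# journals work at the block/metadata level, not on key/value pairs).
import json
import os

JOURNAL = "journal.log"

def journaled_write(store: dict, key: str, value: str) -> None:
    # 1. Record the intended change in the journal first, and force it to disk.
    with open(JOURNAL, "w") as j:
        json.dump({"key": key, "value": value}, j)
        j.flush()
        os.fsync(j.fileno())
    # 2. Then apply it to the "real" data.
    store[key] = value
    # 3. Finally, clear the journal entry.
    os.remove(JOURNAL)

def recover(store: dict) -> None:
    # On reboot after a crash, replay any entry left behind in the journal.
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as j:
            entry = json.load(j)
        store[entry["key"]] = entry["value"]
        os.remove(JOURNAL)

store = {}
journaled_write(store, "a", "1")
# Simulate a crash between steps 1 and 2: journal written, data not applied.
with open(JOURNAL, "w") as j:
    json.dump({"key": "b", "value": "2"}, j)
recover(store)
print(store)  # {'a': '1', 'b': '2'}  - the interrupted write was replayed
```

Because the intent is durably recorded before the data is touched, a crash at any point leaves either a replayable journal entry or a completed write, never a silent half-update.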
I think it's about the same. Using ext2 is like using FAT32: pulling the plug can mess up a lot, including corrupting files and directories that were not even being written to at the time of the crash. NTFS is like the journaling filesystems (ext3, reiserfs, etc.), where only files being written to at the time of the crash can run into problems.
The problems you get have more to do with the apps you use. An app like a game will hardly write at all; you can lose your new high score if you're unlucky, but usually not more. But a database, or a BitTorrent client, will have bigger problems, so many such programs have mechanisms to recover from corrupted files.
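Those app-level recovery mechanisms often boil down to atomic replacement: write the new version to a temporary file, flush it to disk, then rename it over the old one, so a crash mid-write never leaves a half-written file behind. A minimal Python sketch (the filename `highscore.txt` is just an example):

```python
# Minimal sketch of crash-safe saving via atomic rename: readers see either
# the complete old file or the complete new file, never a partial write.
import os
import tempfile

def atomic_save(path: str, data: bytes) -> None:
    # Write to a temp file in the same directory (so the rename stays on
    # one filesystem), force it to disk, then rename over the target.
    d = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # rename over the target (atomic on POSIX)
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise

atomic_save("highscore.txt", b"9001")
print(open("highscore.txt", "rb").read())  # b'9001'
```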
Depends on the filesystem. But, in general, M$ is the worse choice.
Thanks for the many good replies. So to conclude: a Linux crash and a Windows crash can potentially cause the same amount of file system problems, since the technology is in many regards quite alike. The only difference is that on Linux one is notified at startup about unclean file systems, while on Windows one often is not.
Simply use modern, journaling filesystems like NTFS (Windows) or ext3 (Linux), and you should be fine. As far as I know, all of these systems perform some amount of disk-checking at startup and recognize whether or not the system was shut down "cleanly." Both systems are durable and designed for continuous service.
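That startup check hinges on a "clean shutdown" flag. Here is a toy Python sketch of the idea; the `volume.dirty` marker file is an invented stand-in for the real on-disk flag that fsck and chkdsk inspect.

```python
# Toy version of the "clean shutdown" flag: mark the volume dirty while
# mounted, clean on unmount; a surviving dirty flag at startup means the
# last session crashed and a filesystem check is needed.
import os

DIRTY_FLAG = "volume.dirty"

def mount() -> bool:
    # Returns True if the previous session shut down cleanly.
    was_clean = not os.path.exists(DIRTY_FLAG)
    open(DIRTY_FLAG, "w").close()  # mark dirty while "mounted"
    return was_clean

def unmount() -> None:
    os.remove(DIRTY_FLAG)  # a clean shutdown clears the flag

print(mount())   # True  - first mount, nothing crashed before
unmount()
print(mount())   # True  - previous unmount was clean
# Simulate a crash: no unmount() call, so the flag stays on disk.
print(mount())   # False - dirty flag survived; run a filesystem check
```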
So, comparing NTFS with ext4, the only fundamental difference you see is that NTFS allocates files in blocks so that the disk tends to fill the inner part before the outer part, at the expense of fragmenting files, while ext4 keeps files as a set of contiguous blocks (i.e., unfragmented) at the expense of leaving "empty" gaps on the drive.
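The two strategies described above can be sketched with a toy block allocator in Python. This is a deliberate simplification (real NTFS and ext4 allocators are far more involved): "first fit" grabs the earliest free blocks even if that splits the file, while "contiguous" looks for one run large enough to hold the whole file.

```python
# Toy comparison of first-fit vs contiguous block allocation.
# A disk is a list of blocks: None means free, a string is a file ID.

def first_fit(disk: list, size: int, fid: str) -> None:
    # Fill the earliest free blocks, even if that splits the file.
    placed = 0
    for i, block in enumerate(disk):
        if block is None and placed < size:
            disk[i] = fid
            placed += 1

def contiguous(disk: list, size: int, fid: str) -> None:
    # Find the first run of free blocks big enough: keeps the file whole
    # at the cost of leaving gaps earlier on the disk.
    run = 0
    for i, block in enumerate(disk):
        run = run + 1 if block is None else 0
        if run == size:
            for j in range(i - size + 1, i + 1):
                disk[j] = fid
            return

disk = ["A", "A", None, None, "B", None, None, None, None, None]
d1 = list(disk)
d2 = list(disk)
first_fit(d1, 3, "C")   # file C lands in blocks 2, 3, 5: fragmented
contiguous(d2, 3, "C")  # file C lands in blocks 5-7: whole, gap at 2-3
print(d1)  # ['A', 'A', 'C', 'C', 'B', 'C', None, None, None, None]
print(d2)  # ['A', 'A', None, None, 'B', 'C', 'C', 'C', None, None]
```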
Quote:
http://geekblog.oneandoneis2.org/ind..._defragmenting Now, I don't know specifically about NTFS, but FAT* certainly has this fragmentation problem. And since NTFS still needs defragmenting, it seems that it has the problem too. Some interesting reads: http://bbs.archlinux.org/viewtopic.php?id=41532 http://www.sabi.co.uk/Notes/linuxFS.html One other thing to note is that ext3/4 is NOT the only filesystem available for Linux; in fact, if you check the sites above, you'll notice that these fragment the most of all the Linux filesystems. I personally use JFS, and have not had any major issues with fragmentation.
My experience is anecdotal, not statistical or theoretical, so I don't know if it carries any real weight, but I've seen many filesystem failures and data loss with NTFS and none with ext3.
Most NTFS failures I have seen have been on systems "protected" by some version of fake RAID 1 from Dell. In one case the data was actually recoverable from the other drive; in most cases, not. It is hard to be certain (especially with closed source), but I think most of the "hardware" failures were imagined by the fake RAID software. Otherwise it is hard to explain why disk hardware failures are so much more common on the systems with fake RAID than on the systems with no RAID, and why the failing drives seem to be perfect after the failure when tested in other systems.

It is also hard to be certain about the obviously software-caused NTFS failures. Those also look like the result of bugs in the fake RAID software (rather than bugs in the NTFS filesystem itself or in Windows XP itself). But with closed source, and non-reproducible rare errors, how could you ever find out?

I expect NTFS is as immune as ext3 to ordinary corruption from simple OS crashes and power failures. Some failures trash the MBR, which stops you from even getting to the parts of the filesystem that would be immune. If that had ever happened to me on a Linux system, I think I would have had an easier time repairing the MBR than I have had with Windows; the tools and the repair environments are much easier in Linux.

When a RAID 1 gets out of sync with itself and doesn't know it, the occasional reads of filesystem info from the second copy can mix badly with the majority of reads from the first copy and spew new corruption onto both. The journaled filesystems can't protect you from that, nor clean up well after it. So if that had ever happened to me with ext3 (and Linux software RAID), I expect the resulting mess would have been just as bad as it has been with NTFS and Dell fake RAID.