LinuxQuestions.org (/questions/)
-   Linux - Desktop (https://www.linuxquestions.org/questions/linux-desktop-74/)
-   -   The fastest File System. (https://www.linuxquestions.org/questions/linux-desktop-74/the-fastest-file-system-4175598044/)

hack3rcon 01-23-2017 12:25 AM

The fastest File System.
 
Hello.
I use Debian 8.6 amd64 and my PC does not have an SSD. I would like to use a high-speed file system. Any ideas?

Thank you.

pan64 01-23-2017 01:57 AM

As has already been said:
How about making at least some effort to find out for yourself?

But here, as an example, is a site I found within a minute:

http://unix.stackexchange.com/questi...ot-of-small-fi

hack3rcon 01-23-2017 03:25 AM

Quote:

Originally Posted by pan64 (Post 5658882)
As has already been said:
How about making at least some effort to find out for yourself?

But here, as an example, is a site I found within a minute:

http://unix.stackexchange.com/questi...ot-of-small-fi

To be honest, I had already seen that link, and I also found another one, "http://www.linux-magazine.com/Online/Features/Filesystems-Benchmarked", but I need users' experiences.

DavidMcCann 01-23-2017 11:20 AM

Users' experiences will depend on what they use their computers for and what storage media they have. Just to quote from memory, xfs is incredibly fast for huge files but very bad for lots of small ones: it's used in TV studios but not in data-centres.

As you can see in the stackexchange post, there's not a great deal of difference. For the user of a desktop or laptop, as opposed to a server, the time spent fetching files is usually going to be small compared to the time spent dealing with them, or waiting for your input.
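If you want a rough feel for that on your own machine, a quick micro-benchmark is easy enough to knock together. The sketch below is only illustrative (the file count, file size and scratch directory are arbitrary assumptions of mine); a proper tool such as fio or bonnie++, with caches dropped between runs, will give far more trustworthy numbers. Run it from a directory on the filesystem you want to test:

Code:

import os
import time
import tempfile
import shutil

# Rough small-file micro-benchmark: create, sync, read back and delete N small
# files in a scratch directory. N and SIZE are arbitrary; results are dominated
# by the page cache unless you fsync on write and drop caches before reading.
N = 5000
SIZE = 4096  # 4 KiB per file

testdir = tempfile.mkdtemp(prefix="fsbench-", dir=".")
payload = os.urandom(SIZE)

start = time.time()
for i in range(N):
    with open(os.path.join(testdir, "f%d" % i), "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # force the data out so the filesystem does real work
create_s = time.time() - start

start = time.time()
for i in range(N):
    with open(os.path.join(testdir, "f%d" % i), "rb") as f:
        f.read()
read_s = time.time() - start

shutil.rmtree(testdir)
print("create: %.2fs  read: %.2fs  (%d files of %d bytes)" % (create_s, read_s, N, SIZE))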

jailbait 01-23-2017 11:52 AM

The fastest file system is ext2, which does not do journaling. All of the journaling file systems are a little slower than ext2. However, other considerations such as file system size, speed of recovery from crashes, etc. mean that a journaling file system is usually a better choice than ext2.

I would stay away from reiserfs. Maintenance of reiserfs has been poor since Hans Reiser went to jail for murdering his wife. reiserfs is outdated.

As other posters have noted there is some difference in performance based on extreme examples of large files or a large number of small files. If your files are nothing out of the ordinary I would recommend that you use ext4.

--------------------
Steve Stites

szboardstretcher 01-23-2017 11:58 AM

Quote:

The fastest file system is ext2 which does not do journaling. All of the journaling file systems are a little slower than ext2. However other considerations such as file system size, speed of recovery from crashes, etc. mean that a journaling file system is usually a better choice than ext2.
Just a cursory google search will turn up evidence that ext2 is not the fastest file system.

http://www.linux-magazine.com/Online...ms-Benchmarked

Especially for reads:

http://media.community.dell.com/en/d...gru31q6508.png

273 01-23-2017 12:29 PM

Just thinking out loud, but isn't the filesystem largely irrelevant once a file has been opened? By that I mean that once the data has been located it's read in as fast as the disk can manage (fragmentation aside). So what you're really looking for is quick lookup (isn't that what the B-tree in btrfs is about?), but that lookup depends on more than just the FS.
Is the FS really all that relevant (really badly set-up ones aside)?

jefro 01-23-2017 03:56 PM

I used to use ext2, believing from web pages that it was faster, but in my own testing on modern systems and modern distros I've generally found ext4 to be the fastest (for SOHO use), though each kernel version and filesystem version makes any measurement difficult.

Fastest is not a good term. The overall rating of a filesystem is measured using many metrics and under various conditions, so your mileage may vary.

You pick a filesystem based on many features, not just one single test.

salasi 01-23-2017 04:59 PM

Quote:

Originally Posted by jailbait (Post 5659099)
I would stay away from reiserfs. Maintenance of reiserfs has been poor since Hans Reiser went to jail for murdering his wife. reiserfs is outdated.

Well, that might be a reasonable conclusion, but be a bit careful about the reasoning. The team behind ReiserFS discontinued support (as in 'we're not supporting it, use Reiser4') when it was superseded by Reiser4. As far as I remember, at that time Hans Reiser was still at large, and still difficult to deal with. Back in the day Reiser4 may have been a legitimate choice, but I can't think of any use case in which I would prefer it over the competition these days.

@273
Quote:

Just thinking out loud, but isn't the filesystem largely irrelevant once a file has been opened?
No, or emphatically no, depending on the use case. The layout of a file on disk is varied in order to avoid the worst of the effects of fragmentation, so the amount of head movement a hard disk has to go through to get you all your data can be quite different between filesystems, particularly when the disk gets full.

Metadata operations can also vary a great deal between filesystems; in practice an end-user computer probably rarely sees that in a big way, but a fileserver has the potential to be quite different.
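By "metadata operations" I mean things like creating, stat-ing and deleting files, as opposed to reading and writing their contents. If you want to see what that kind of load looks like, here is a very rough sketch (the file count is an arbitrary assumption of mine, and a real benchmark tool will do this far more carefully); run it from a directory on the filesystem in question:

Code:

import os
import time
import tempfile

# Rough metadata-only micro-benchmark: no file data at all, just create, stat
# and unlink a pile of empty files. N is an arbitrary assumption.
N = 20000
d = tempfile.mkdtemp(prefix="metabench-", dir=".")

t0 = time.time()
for i in range(N):
    open(os.path.join(d, "e%d" % i), "w").close()
t1 = time.time()
for i in range(N):
    os.stat(os.path.join(d, "e%d" % i))
t2 = time.time()
for i in range(N):
    os.unlink(os.path.join(d, "e%d" % i))
t3 = time.time()
os.rmdir(d)

print("create %.2fs  stat %.2fs  unlink %.2fs  (%d empty files)" % (t1 - t0, t2 - t1, t3 - t2, N))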

@szboardstretcher

Given the lack of development in ext2, your conclusion is probably perfectly correct, but that article is probably too old to give much solid evidence about the performance of current filesystems. XFS has had a big rewrite since then, BTRFS is under strong continuous development (don't ask about the RAID modes, such as 5 or 6, which, last I heard, needed a total re-write; that might have happened by now, but it certainly hadn't by the time of that earlier reference), and even the relatively stable ext4 gets significant and frequent, if smaller, changes.

273 01-24-2017 12:52 AM

Quote:

Originally Posted by salasi (Post 5659198)
@273


No, or emphatically no, depending on the use case. The layout of a file on disk is varied in order to avoid the worst of the effects of fragmentation, so the amount of head movement a hard disk has to go through to get you all your data can be quite different between filesystems, particularly when the disk gets full.

Metadata operations can also vary a great deal between filesystems; in practice an end-user computer probably rarely sees that in a big way, but a fileserver has the potential to be quite different.

Right, so how, exactly, does BTRFS stop fragmentation while ext2 does not, or vice versa? I'll grant you that more modern filesystems, using extents, are more efficient in this regard, but I'm doubtful that once a disk hits this issue there's much any filesystem can do to stop fragmentation.
I'm not sure what you mean by "metadata operations". If you are referring to journaling and the like then, yes, I can believe that different approaches are better for different kinds of files (lots of small ones versus a few large ones, for example), but that's not what is being asked here -- this is about the OP's PC, and I'm guessing there's a mix of file sizes and types.
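For what it's worth, it is easy enough to check how fragmented your own files actually are: filefrag, from e2fsprogs, reports how many extents a file occupies. A rough wrapper, assuming its usual "N extents found" output (older setups that fall back to the FIBMAP ioctl may need root), could look like this; run it as, say, python3 fragcheck.py <some files> (the script name is just an example):

Code:

import subprocess
import sys

# Report the extent count per file using filefrag (from e2fsprogs). One extent
# means the file is stored contiguously; large counts mean more seeking on a
# spinning disk. Assumes output of the form "<path>: N extents found".
def extent_count(path):
    try:
        out = subprocess.check_output(["filefrag", path]).decode()
    except (subprocess.CalledProcessError, OSError):
        return None
    for token in out.split(":", 1)[-1].split():
        if token.isdigit():
            return int(token)
    return None

for p in sys.argv[1:]:
    n = extent_count(p)
    if n is None:
        print("%s: could not check" % p)
    else:
        print("%s: %d extent(s)" % (p, n))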

hack3rcon 01-24-2017 09:42 AM

Thus, ext4 is better than the others, but an SSD is mandatory.

Shadow_7 01-24-2017 10:16 AM

I tend towards xfs if speed is a concern. But most of my machines are slow and my apps are small, so it's not really an issue for me. With the right hardware, zfs can be fast. But if you're not willing to pay the difference between an SSD and an HDD, good luck. There are M.2 and other hardware options that make the choice of filesystem kind of moot, and that make HDDs kind of obsolete beyond archival $-per-GB, or trust issues with known-good ways of disposing of the drives without launching them into space aimed at a gas giant.

salasi 01-24-2017 02:10 PM

Quote:

Originally Posted by 273 (Post 5659316)
Right, so how, exactly, does BTRFS stop fragmentation while ext2 does not, or vice versa? I'll grant you that more modern filesystems, using extents, are more efficient in this regard, but I'm doubtful that once a disk hits this issue there's much any filesystem can do to stop fragmentation.

Well that's not what I said happened, so you are jumping to conclusions.

File allocation policies are a heavily debated subject, and there are trade-offs between how soon the filesystem has to start manoeuvring around this problem and how bad it gets once it does set in. Since you mention BTRFS: being a CoW system, the problem it faces when re-writing a file (say, storing changes after a modification in an editor) is rather different from the problem faced by more traditional filesystems.
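To be concrete about what "re-writing a file" can mean to the filesystem, here is a small sketch (purely an illustration of mine, not any particular editor's behaviour) of two common ways an application saves changes. A traditional filesystem only allocates new blocks in the second case, while a CoW filesystem such as BTRFS allocates new blocks either way:

Code:

import os
import tempfile

# Two common ways an application "saves" a modified file. The write pattern
# the filesystem sees is quite different in each case.

def overwrite_in_place(path, data):
    # Rewrite the existing file's contents. A traditional filesystem updates
    # the existing blocks; a CoW filesystem allocates new blocks anyway.
    with open(path, "r+b") as f:
        f.write(data)
        f.truncate()
        f.flush()
        os.fsync(f.fileno())

def replace_via_rename(path, data):
    # What many editors do: write a complete new copy, then atomically rename
    # it over the old name, so a crash never leaves a half-written file.
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)
    except BaseException:
        os.unlink(tmp)
        raise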

Quote:

Originally Posted by 273 (Post 5659316)
I'm not sure what you mean by "metadata operations". If you are referring to journaling and the like then, yes, I can believe that different approaches are better for different kinds of files (lots of small ones versus a few large ones, for example), but that's not what is being asked here -- this is about the OP's PC, and I'm guessing there's a mix of file sizes and types.

Clearly.
http://www.linux-mag.com/cache/7525/1.html
http://www.linux-mag.com/id/7518/
http://www.linux-mag.com/cache/7497/1.html
http://www.linux-mag.com/cache/7742/1.html
http://www.linux-mag.com/cache/7642/1.html

The fact that there is a mix of file types and sizes really does nothing to help your argument: what would help would be if there were no mix of file operations, but for the reasons you mention that is unlikely to be the case for the OP.

273 01-24-2017 02:47 PM

Quote:

Originally Posted by salasi (Post 5659615)
Well that's not what I said happened, so you are jumping to conclusions.

No, I'm suggesting that the amount of head movement that has to happen depends more on fragmentation than on the file system, and that once a file system is fragmented it's an issue whichever one you use.
Quote:

Originally Posted by salasi (Post 5659615)
Clearly.
http://www.linux-mag.com/cache/7525/1.html
http://www.linux-mag.com/id/7518/
http://www.linux-mag.com/cache/7497/1.html
http://www.linux-mag.com/cache/7742/1.html
http://www.linux-mag.com/cache/7642/1.html

The fact that there is a mix of file types and sizes really does nothing to help your argument: what would help would be if there was no mix of file operations but for the reasons that you mention that is unlikely to be the case for the OP.

When I read things like "For small files, btrfs has good file creation performance but file removal performance is not as good as ext3 and ext4 at this time" and "For larger files, btrfs has both excellent file creation and removal performance relative to the other 3 file systems", I don't know about you, but to me that says that if one has mainly large files then btrfs is likely the better choice? OK, a simplification, but I think you're supplying a load of data to disprove something I'm not making a strong argument for.
I still don't see much evidence that the file system choice makes any appreciable difference to day-to-day desktop PC use in a way which can be quantified enough to dictate a "fastest file system".
I agree with Shadow_7 that a faster medium's a good choice.
Whether or not the file size, type, or read:write ratio makes a difference on servers I'll leave you to dictate should you wish.

jefro 01-24-2017 08:06 PM

"Thus, Ext4 is better than others but a SSD hard disk is mandatory."

I don't get it. Do you mean you have a mandatory need for an SSD, or are you saying that only an SSD with ext4 would be "fastest"?

I'll agree that xfs is a great choice for some of the most modern server uses. It may eventually be better than ext4 for general use, as it is being actively worked on again.

There have been some efforts to make filesystems that target SSDs, but I don't know how the metrics on those stack up today.

