Help figuring out: NAS, FreeNas, Linux "NAS", or Linux server
I have 3 WD drives in my array (RAID 0), 2 greens and 1 black. I don't have any issues with them spinning down. Once in a great while there is a split second delay loading a large directory. I feel the array has a decent speed to it:
Code:
# hdparm -Tt /dev/md9000
/dev/md9000:
Timing cached reads: 35866 MB in 2.00 seconds = 17951.43 MB/sec
Timing buffered disk reads: 1082 MB in 3.00 seconds = 360.20 MB/sec
If you feel your drives spin down too often, you can use a tool called WDIDLE to change the idle timer. WD states that the Green drives vary their spindle speed between 5400 and 7200 RPM, but people have measured them running barely above 5400 RPM. Platter count and platter density also affect performance.
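WDIDLE (wdidle3) has to be run from DOS/FreeDOS; on a running Linux system a similar effect can often be had through hdparm's standby timer. A sketch (the device name is a placeholder, and not all drives honor these settings):

```shell
# Placeholder device; check the current power state first
hdparm -C /dev/sdX

# Set the standby (spin-down) timeout. Values 1-240 are multiples of
# 5 seconds; values 241-251 are multiples of 30 minutes, so 242 means
# roughly 1 hour. A value of 0 disables spin-down entirely.
hdparm -S 242 /dev/sdX
```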
I had considered using BTRFS on my array, but since I'm using encryption, I was getting very high I/O compared to EXT4.
You'll then have to experiment with different raid- and mount-settings.
I personally tried JFS, XFS, and ext4; ext4 was the one that gave me the best results for both small and big files.
I'm currently mounting ext4 with the parameters...
...but different kernel versions/distributions might not accept the same set of ext4 parameters (or may ignore some) - that happened to me.
Once you have the raid ready, write a very big file to it. (If you choose ext4, don't forget to pass "-E lazy_itable_init=0,lazy_journal_init=0" to mkfs.ext4, to avoid that your device is still being initialized WHILE you perform your tests, which will definitely degrade the results.) Then clear the cache ("echo 1 > /proc/sys/vm/drop_caches") and read the file back with e.g. "cat bigfile > /dev/null" => with 4 HDDs you should get at least 400 MB/s. If not, then something's wrong in the fs-format or fs-mount options, or in the raid geometry.
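The test procedure above can be sketched as a small script. MNT and SIZE_MB are placeholders: point MNT at the mounted array and use a file of several GB there (the tiny default is only so the script runs anywhere).

```shell
# Sequential-read throughput sketch for a freshly formatted array.
MNT=${MNT:-/tmp}        # mount point of the array under test (placeholder)
SIZE_MB=${SIZE_MB:-8}   # use several thousand MB on real hardware

# Write the big test file and flush it to disk
dd if=/dev/zero of="$MNT/bigfile" bs=1M count="$SIZE_MB" conv=fsync 2>/dev/null

# Dropping caches needs root; without it the read below mostly measures
# the page cache rather than the disks.
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 1 > /proc/sys/vm/drop_caches
fi

# Time a sequential read of the whole file
start=$(date +%s.%N)
cat "$MNT/bigfile" > /dev/null
end=$(date +%s.%N)
awk -v s="$start" -v e="$end" -v mb="$SIZE_MB" \
    'BEGIN { d = e - s; if (d <= 0) d = 0.001;
             printf "read %d MB in %.2f s (%.1f MB/s)\n", mb, d, mb / d }'
rm -f "$MNT/bigfile"
```

With 4 HDDs in the array, the reported figure should be at least in the 400 MB/s range once caches are actually dropped.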
For what replica9000 mentioned...
Quote:
I had considered using BTRFS on my array, but since I'm using encryption, I was getting very high I/O compared to EXT4.
...if you want to directly store encrypted data on the raid (whichever filesystem you're using), you can still use "encfs" (additional layer sitting on top of the defined directories) - bad performance when writing (~20MB/s), acceptable (150MB/s) when reading.
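As a sketch of how encfs layers on top of whatever filesystem the raid uses (the directory names here are made up, and the first run prompts interactively to create the volume):

```shell
# Hypothetical paths: ciphertext is stored in ~/.crypt on the raid,
# and the decrypted view appears at ~/clear via FUSE.
mkdir -p ~/.crypt ~/clear
encfs ~/.crypt ~/clear

# ...read and write files under ~/clear as usual...

# Unmount the decrypted view when done
fusermount -u ~/clear
```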
I haven't tried encfs, but I've used ecryptfs. It must be something with the combination of mdadm/lvm/dm-crypt/btrfs causing the high I/O, because btrfs on top of dm-crypt, even with compression, wasn't causing high I/O when used without mdadm or lvm.
Sorry, I don't understand about which high i/o you're speaking.
Filesystem input/output. With btrfs on top of dm-crypt and mdadm, writing lots of data at once was maxing out my CPU, which at the time was an i7-2600k. With ext4 in that same setup, the CPU sees roughly a third of the load writing the same data.
I use LVM by default, everywhere. I have had to re-organize space during production time too often to ever want to do without.
I recently ran some database reorganization trials for EXT4 and XFS, both encrypted and unencrypted. The encryption overhead for EXT4 ran from 2 to nearly 10%. XFS, to my surprise, took up to three times longer for the same procedure, and the overhead of encryption was higher.
I did not test BTRFS, but that is something I look forward to VERY much. From the readings I have done, it should replace the combination of LVM, mdadm, and EXT4 for most applications. To date, it does not look like the performance is up to EXT4/XFS levels.
I am impressed with the performance of EXT4. The degree to which it destroyed XFS for this specific use was a huge surprise. The efficiency you noticed in your testing may have directly influenced the effects I observed.
PS: I would expect the BTRFS efficiency to improve as it matures. This might be a test to rerun in a year or two. By then the I/O characteristics may have changed markedly.
I have bought NAS devices before. As always, they seem to become "old" way too soon. I was going to buy a Synology or QNAP; their ranges are quite broad and include secondary apps that may suit you. A cheaper solution would be the WD NAS, but you lose the apps.
I am kind of shocked that this system is overworked, but drives do go out. I have an older Dell server with enterprise-level hardware throughout; that is the only way to protect data. They aren't cheap: a real hardware RAID runs on expensive RAM and drives.
If you want, you can also look at ZFS. The BSD systems are usually set up to run ZFS, and it includes its own RAID.
I was looking forward to using BTRFS, and that was the plan until I noticed its performance on top of my setup. I'm also hoping that in the future I can simply use 'df' instead of 'btrfs fi df' to see disk space usage.
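For reference, a minimal sketch of the two commands (the mount point is a placeholder). Plain df can be misleading on btrfs because raid profiles and separate data/metadata chunks complicate what "free" means, which is why the btrfs-specific view exists:

```shell
# Generic view: one line per mounted filesystem
df -h /mnt/array

# Btrfs-aware view: space broken down per chunk type and raid profile
btrfs filesystem df /mnt/array    # 'btrfs fi df' is the short form
```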
Quote:
PS: I would expect the BTRFS efficiency to improve as it matures. This might be a test to rerun in a year or two. By then the I/O characteristics may have changed markedly.
Aahh, for ages they have been repeating that performance won't be a priority until all features have been implemented... but in my personal opinion, filesystem features go hand-in-hand with performance.
Concerning Synology, I had a look at their 4-drive systems, but at least at that time they were terribly expensive => in the end I went for a small 4-drive HP MicroServer (to have an offline backup of my primary RAID 5).
I recently looked at HP MicroServers again, to add another one (my old one has so far worked perfectly, even if the CPU is a bit weak), but people were complaining about the fan making quite a lot of noise with non-HP HDDs: when the server's firmware sees that it is not hosting the official HP drives, it spins the fan up.