Old 04-10-2015, 02:59 PM   #16
replica9000
Senior Member
 
Registered: Jul 2006
Distribution: Debian Unstable
Posts: 1,130
Blog Entries: 2

Rep: Reputation: 260

I have 3 WD drives in my array (RAID 0), 2 Greens and 1 Black. I don't have any issues with them spinning down. Once in a great while there is a split-second delay loading a large directory. I feel the array has decent speed:
Code:
# hdparm -Tt /dev/md9000

/dev/md9000:
 Timing cached reads:   35866 MB in  2.00 seconds = 17951.43 MB/sec
 Timing buffered disk reads: 1082 MB in  3.00 seconds = 360.20 MB/sec
If you feel your drives spin down too often, you can use a tool called WDIDLE to change the drive's idle timer. WD states that the Green drives vary their RPM between 5400 and 7200, but people have found them to operate barely above 5400 RPM. The number of platters and the platter density also affect performance.
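If you just want the drives to spin down less aggressively, the standby timer can also be set from Linux with hdparm; this is a different mechanism than the WDIDLE idle timer, and /dev/sdb below is only a placeholder for one of the array members:
Code:
# query the drive's current power state (active/idle/standby)
hdparm -C /dev/sdb

# set the standby (spin-down) timeout; 242 = 1 hour, 0 = never spin down
hdparm -S 242 /dev/sdb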

I had considered using BTRFS on my array, but since I'm using encryption, I was getting very high I/O compared to EXT4.
 
Old 04-10-2015, 04:19 PM   #17
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Great!

You'll then have to experiment with different RAID and mount settings.
I personally tried jfs, xfs and ext4, and ext4 was the one that gave me the best results for both small and big files.

I'm currently mounting ext4 with the parameters...
Code:
"noatime,async,barrier=1,nodiscard,journal_async_commit,nodelalloc,data=writeback"
...and the "mount" command reports...
Code:
"rw,noatime,barrier=1,nodiscard,journal_async_commit,nodelalloc,data=writeback"
...but different kernel versions/distributions might not accept the same set of ext4 parameters (or might silently ignore some) - that has happened to me.
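For illustration, applying those options by hand would look something like this (assuming the array is /dev/md0 and the mount point is /mnt/array):
Code:
mount -o noatime,async,barrier=1,nodiscard,journal_async_commit,nodelalloc,data=writeback /dev/md0 /mnt/array

# check which options the kernel actually accepted
mount | grep /mnt/array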

Once you have the raid ready, write a very big file into it, clear the cache ("echo 1 > /proc/sys/vm/drop_caches") and read it back with e.g. "cat bigfile > /dev/null" => with 4 HDDs you should get at least 400MB/s. (If you choose ext4, don't forget to pass "-E lazy_itable_init=0,lazy_journal_init=0" to mkfs, so that the filesystem isn't still being initialized in the background WHILE you perform your tests, which would definitely degrade the results.) If you get less than that, something's wrong in the fs format, the fs mount options or the raid geometry.
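Roughly, the whole sequence looks like this (just a sketch, assuming the array is /dev/md0 and gets mounted at /mnt/array):
Code:
# format without lazy initialization so background writes don't skew the test
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
mount /dev/md0 /mnt/array

# write a big test file; conv=fdatasync makes dd wait until the data has hit the disks
dd if=/dev/zero of=/mnt/array/bigfile bs=1M count=10240 conv=fdatasync

# drop the page cache, then time a sequential read
echo 1 > /proc/sys/vm/drop_caches
time cat /mnt/array/bigfile > /dev/null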


Regarding what replica9000 mentioned...
Quote:
I had considered using BTRFS on my array, but since I'm using encryption, I was getting very high I/O compared to EXT4.
...if you want to store encrypted data directly on the raid (whichever filesystem you're using), you can still use "encfs" (an additional layer sitting on top of the chosen directories) - bad performance when writing (~20MB/s), acceptable (~150MB/s) when reading.
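For illustration, basic encfs usage looks roughly like this (just a sketch; the paths are placeholders):
Code:
# the first run asks for the encryption settings and a password, then mounts the plaintext view
encfs /mnt/array/.encrypted /mnt/array/clear

# work with files under /mnt/array/clear as usual, then unmount the plaintext view
fusermount -u /mnt/array/clear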
 
Old 04-10-2015, 04:56 PM   #18
replica9000
Senior Member
 
Registered: Jul 2006
Distribution: Debian Unstable
Posts: 1,130
Blog Entries: 2

Rep: Reputation: 260
Quote:
Originally Posted by Pearlseattle View Post
Regarding what replica9000 mentioned...

...if you want to store encrypted data directly on the raid (whichever filesystem you're using), you can still use "encfs" (an additional layer sitting on top of the chosen directories) - bad performance when writing (~20MB/s), acceptable (~150MB/s) when reading.
I haven't tried encfs, but I've used ecryptfs. It must be something about the combination of mdadm/lvm/dm-crypt/btrfs causing the high I/O, because btrfs on top of dm-crypt, even with compression, wasn't causing high I/O when used without mdadm or lvm.
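For comparison, that kind of stack would be built roughly like this (only a sketch: device names, RAID level and mount point are placeholders, the LVM layer is left out for brevity, and these commands destroy whatever is on the named disks):
Code:
# mdadm array from the member disks
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# dm-crypt/LUKS on top of the array
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cryptarray

# btrfs (here with lzo compression) on top of the crypt mapping
mkfs.btrfs /dev/mapper/cryptarray
mount -o compress=lzo /dev/mapper/cryptarray /mnt/array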
 
Old 04-15-2015, 12:55 PM   #19
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Quote:
causing the high I/O
Sorry, I don't understand which high I/O you're speaking of.
 
Old 04-15-2015, 08:19 PM   #20
replica9000
Senior Member
 
Registered: Jul 2006
Distribution: Debian Unstable
Posts: 1,130
Blog Entries: 2

Rep: Reputation: 260
Quote:
Originally Posted by Pearlseattle View Post
Sorry, I don't understand which high I/O you're speaking of.
Filesystem input/output. With btrfs on top of dm-crypt and mdadm, writing lots of data at a time was maxing out my CPU, which at the time was an i7-2600K. With ext4 in the same setup, the CPU probably sees about a third of that load when writing the same data.
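A rough way to see that difference is to watch the CPU while a large write runs (a sketch, assuming the filesystem under test is mounted at /mnt/array and the sysstat package is installed):
Code:
# terminal 1: CPU utilisation and per-device I/O statistics every 2 seconds
iostat -xz 2

# terminal 2: push a few GB through the stack
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=4096 conv=fdatasync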
 
Old 04-16-2015, 05:56 AM   #21
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS, Manjaro
Posts: 5,679

Rep: Reputation: 2713
Performance

I use LVM by default, everywhere. I have had to re-organize space during production time too often to ever want to do without.

I recently ran some database reorganization trials for EXT4 and XFS, both encrypted and unencrypted. The encryption overhead for EXT4 ran from 2 to nearly 10%. XFS, to my surprise, took up to three times longer for the same procedure, and the overhead of encryption was higher.

I did not test BTRFS, but that is something I look forward to VERY much. From the readings I have done, it should replace the combination of LVM, mdadm, and EXT4 for most applications. To date, it does not look like the performance is up to EXT4/XFS levels.

I am impressed with the performance of EXT4. The degree to which it destroyed XFS for this specific use was a huge surprise. Perhaps the efficiency you noticed in your testing directly influenced the effects I observed.

PS: I would expect the BTRFS efficiency to improve as it matures. This might be a test to rerun in a year or two. By then the I/O characteristics may have changed markedly.

Last edited by wpeckham; 04-16-2015 at 05:59 AM.
 
Old 04-16-2015, 04:33 PM   #22
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
I have bought NAS devices before. As always, they seem to become "old" way too soon. I was going to buy a Synology or QNAP. The range of models is quite broad, and they have secondary apps that may suit you. A cheaper solution would be the WD NAS, but you lose the apps.

I am kind of shocked that this system is overworked, but drives do go out. I have an older Dell server with enterprise-level hardware throughout. It is the only way to protect data. It isn't cheap, though: a real hardware RAID runs on expensive RAM and drives.

If you want, you can also look at ZFS. The BSDs are usually set up to run ZFS, and it includes its own RAID (raidz) as well.
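For example, a raidz pool (single parity, roughly comparable to RAID5) over four disks is created in one step; the pool and device names below are placeholders, and on Linux this assumes the ZFS modules are installed:
Code:
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool status tank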

http://www.phoronix.com/scan.php?pag...raid_fs4&num=1
 
Old 04-16-2015, 05:28 PM   #23
replica9000
Senior Member
 
Registered: Jul 2006
Distribution: Debian Unstable
Posts: 1,130
Blog Entries: 2

Rep: Reputation: 260
Quote:
Originally Posted by wpeckham View Post
I use LVM by default, everywhere. I have had to re-organize space during production time too often to ever want to do without.

I recently ran some database reorganization trials for EXT4 and XFS, both encrypted and unencrypted. The encryption overhead for EXT4 ran from 2 to nearly 10%. XFS, to my surprise, took up to three times longer for the same procedure, and the overhead of encryption was higher.

I did not test BTRFS, but that is something I look forward to VERY much. From the readings I have done, it should replace the combination of LVM, mdadm, and EXT4 for most applications. To date, it does not look like the performance is up to EXT4/XFS levels.

I am impressed with the performance of EXT4. The degree to which it destroyed XFS for this specific use was a huge surprise. Perhaps the efficiency you noticed in your testing directly influenced the effects I observed.

PS: I would expect the BTRFS efficiency to improve as it matures. This might be a test to rerun in a year or two. By then the I/O characteristics may have changed markedly.
I was looking forward to using BTRFS, and that was the plan until I noticed its performance on top of my setup. I'm also hoping that in the future I can simply use 'df' instead of 'btrfs fi df' to see disk space usage.
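For reference, the two commands in question (assuming the filesystem is mounted at /mnt/array):
Code:
# generic view; on btrfs the free-space figure can be misleading, especially with multiple devices
df -h /mnt/array

# btrfs-specific breakdown of data/metadata/system allocation
btrfs filesystem df /mnt/array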
 
1 member found this post helpful.
Old 04-16-2015, 06:26 PM   #24
Lnthink
Member
 
Registered: May 2010
Location: Lafayette, LA
Distribution: Ubuntu, RH, Fedora
Posts: 44

Rep: Reputation: 11
I would second the vote for Synology. However, it's 'enterprise cheap' rather than 'home user'/'home server' cheap, IMHO.
 
Old 04-18-2015, 10:16 AM   #25
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Quote:
I'm also hoping that in the future I can simply use 'df' instead of 'btrfs fi df' to see disk space usage


Quote:
PS: I would expect the BTRFS efficiency to improve as it matures. This might be a test to rerun in a year or two. By then the I/O characteristics may have changed markedly.
Aahh, they have been saying for ages that performance won't be a priority until all the features have been implemented... but in my personal opinion filesystem features go hand-in-hand with performance.

Concerning Synology, I had a look at their 4-drive systems, but at least at that time they were terribly expensive => in the end I went for a small 4-drive HP MicroServer (to have an offline backup of my primary raid5).
I recently had another look at HP MicroServers to add an additional one (my old one has so far worked perfectly, even if the CPU is a bit weak), but people were complaining about the fan making quite a lot of noise when non-HP HDDs were used (apparently when the firmware of the MicroServer saw that the server was not hosting official HP HDDs, it started spinning up the fan).
 
  

