Linux - Desktop: This forum is for the discussion of all Linux software used in a desktop context.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I'm experimenting with ZFS but encountering performance issues.
Ubuntu 12.10 AMD64 live CD with zfs-fuse installed; 32 GB DDR3-1600 RAM; Intel Core i3-2100; four 3 TB disks of the same make/model, arranged as two striped pools of 6 TB each. I'm copying the contents of pool 1 to pool 2. The disks of pool 1 are connected to SATA 3 Gb/s ports, pool 2 to SATA 6 Gb/s. The software I use for the copy is Midnight Commander (mc).
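For reference, two striped pools like the ones described could be created roughly like this (the device names sda..sdd are hypothetical, and zfs-fuse must already be running):

```shell
# Two 2-disk stripes, ~6 TB usable each (hypothetical device names).
zpool create pool1 /dev/sda /dev/sdb   # source, on the SATA 3 Gb/s ports
zpool create pool2 /dev/sdc /dev/sdd   # destination, on the SATA 6 Gb/s ports
zpool status                           # verify both pools are ONLINE
```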
In mc I see the progress bar advancing but no write activity on pool 2; then the progress bar stalls, I see a burst of writes on pool 2, and finally the progress bar continues again.
I see the same kind of speeds/behaviour when copying from a RAID array on an Areca RAID controller to my mainboard, so I guess it's not a case of both pools/devices sharing the available bandwidth. I know the CPU isn't the best, but I only rarely see peaks of 50% load, so I guess that's not the bottleneck either.
I don't know whether this is a hardware issue or something that needs a ZFS tweak somewhere.
Even if you don't know anything about ZFS, you might still be of help by telling me what method you use for spotting bottlenecks in systems.
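As a generic (not ZFS-specific) bottleneck-spotting method, you can watch per-disk write throughput straight from `/proc/diskstats`; `iostat -x 2` from the sysstat package does the same job better, but this minimal sketch shows what it's reading:

```python
import time

SECTOR = 512  # /proc/diskstats counts in 512-byte sectors


def sectors_written(text):
    """Map device name -> total sectors written, parsed from /proc/diskstats text."""
    out = {}
    for line in text.splitlines():
        f = line.split()
        if len(f) >= 10:
            out[f[2]] = int(f[9])  # field 10 after major/minor: sectors written
    return out


def watch(interval=2.0):
    """Sample /proc/diskstats twice and print MB/s written per device."""
    with open("/proc/diskstats") as fh:
        before = sectors_written(fh.read())
    time.sleep(interval)
    with open("/proc/diskstats") as fh:
        after = sectors_written(fh.read())
    for dev, sectors in sorted(after.items()):
        mb_s = (sectors - before.get(dev, sectors)) * SECTOR / interval / 1e6
        if mb_s > 0:
            print(f"{dev:>8}: {mb_s:7.1f} MB/s written")

# Run watch() in a loop during the copy; the device that sits idle and then
# bursts is the one absorbing the cached writes.
```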
Isn't this "normal"?
I think the default kernel behaviour is roughly this: "whenever dirty buffers (data that hasn't yet been written to the HDD) show up, start flushing them to the HDD only once they reach at least 10% of the available RAM or their age is older than 30 seconds".
This behaviour can be configured via the vm.dirty_* kernel parameters (sysctl).
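On any Linux box you can inspect the current writeback thresholds like this (and, as root, make flushing more eager; the value 2 below is just an illustrative choice, not a recommendation):

```shell
# Show the current writeback thresholds from /proc/sys/vm.
for p in dirty_background_ratio dirty_ratio dirty_expire_centisecs dirty_writeback_centisecs; do
    printf '%-28s %s\n' "vm.$p" "$(cat /proc/sys/vm/$p)"
done

# As root, start background flushing earlier, e.g.:
#   sysctl -w vm.dirty_background_ratio=2
```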
Ideally I would want the following added to the above: "...or flush them now if the HDD is doing nothing" — but of course the problem is that "now" is already the future by the time the HDD starts writing, and you have no knowledge of what the HDD will be doing in the future.
So, I think your ZFS setup is OK and your "source" disks simply fill the cache faster than the kernel's writeback settings drain it.
Even if you lower those settings in your kernel, I think you will end up in the same situation, just with shorter timespans before "that" happens.
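Back-of-the-envelope numbers show why the stalls are long enough to notice on a 32 GB machine (the throughput figures are assumptions for illustration, not measurements from your system):

```python
ram_bytes = 32 * 2**30        # 32 GiB of RAM
dirty_ratio = 0.10            # ~10% dirty threshold, as described above
threshold = ram_bytes * dirty_ratio  # dirty bytes before writeback throttles

read_rate = 300e6    # assumed striped read throughput, bytes/s (hypothetical)
write_rate = 250e6   # assumed striped write throughput, bytes/s (hypothetical)

print(f"dirty threshold: {threshold / 1e9:.1f} GB")
print(f"fill time:  {threshold / read_rate:.0f} s of 'progress' before the stall")
print(f"drain time: {threshold / write_rate:.0f} s of pure writing before mc resumes")
```

So roughly ten seconds of the progress bar racing ahead, then a comparable pause while the cache drains — which matches the burst pattern described above.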
Please correct me if I'm wrong or if I misinterpreted your question.