Originally Posted by Enyo
I'm using a pair of 750 GB SATA disks in LVM to make a 1.5 TB partition, and while running rtorrent (which runs at about 700Kb down) my iowait spikes so high that SSHing into the machine becomes impossible. This is on Ubuntu Server 7.10, and the only other things running are uShare and Samba. The machine itself is a 1.5 GHz VIA with 512 MB RAM (56% of which is in use by rtorrent).
A high amount of iowait generally means there are more outstanding disk requests than your current setup can handle. If the high iowait occurs only while running this app, my first hunch would be to check the number of torrents (and clients) you're serving: while the application itself may be small on disk, its memory use grows with the number of torrents and connected peers. Maintaining threads and sockets requires memory; swapping out parts of an application, reading and writing torrent data, and logging all require disk I/O. So if you temporarily suspend half of your torrents and see an improvement, that's quick confirmation.
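If you want a hands-off way to confirm the iowait hypothesis first, the cumulative iowait counter in /proc/stat is enough. This is just a sketch; vmstat (if installed) shows the same thing live:

```shell
# The first line of /proc/stat is: cpu user nice system idle iowait irq softirq ...
# so field 6 (counting the "cpu" label) is cumulative iowait in clock ticks.
grep '^cpu ' /proc/stat | awk '{print "iowait ticks since boot:", $6}'

# Watch it live while you suspend half the torrents, then compare:
#   vmstat 5 12     # the 'wa' column is iowait as a percentage
```

If the counter barely moves with half the torrents paused, you have your answer without touching anything else.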
Broadening the scope, there's much that can influence a system's performance. The Linux kernel pretty much takes care of everything on its own: it has matured well, it's performant, reasonably fault tolerant, and usable out of the box for generic, all-purpose work. But sometimes human decisions can help it perform better.

Level one: how the kernel recognises and drives hardware. Some controllers are more equal than others. It wouldn't be the first time a controller card seems flaky because the kernel developers can't wrap their heads around another vendor's partial implementation of a standard. This may or may not be the case here; looking up chipset support in the kernel's changelog or on the LKML can reveal things (also see: driver options, boot command-line arguments). Performance can also be influenced by tuning hardware latency (setpci?) and by sysctls, like those for the VM (for instance caching, how flushes to disk are scheduled relative to the amount of memory available, how writes are queued) and the network (IPv4 settings and timings, the number of ports that can be opened, the amount of memory sockets may consume, and so on), and by the choice of I/O scheduler (if it's CFQ, for instance, you have access to ionice). But to tweak without losing touch with reality, you first want to set up baseline data (dstat, atop, SAR), preferably on an idle system.

One level up there's the choice of filesystem. FS type (ext2/ext3, XFS, ReiserFS) matters, because not all are equally fast (or rugged) under pressure. FS options can matter as well: how (or whether) it updates access timestamps, for instance, or how it journals (tune2fs: ordered vs. writeback). Something that strikes me as odd is the 1.5 TB LVM volume. Unless you really need a single filesystem that size, direct hardware access beats any abstraction layer; right now you're looking at a filesystem on top of LVM on top of hardware. I've got no numbers, but ditching LVM and partitioning carefully should improve performance, because you'd then have two independent devices for parallel reads and writes.

Then there's the kernel's process handling. While it's obviously a trade-off, you could reschedule (nice/ionice) the application so other processes get faster access to CPU cycles and the disk. Finally, to address rtorrent vs. SSH directly: you could shape traffic to combat bandwidth saturation, so there's always a portion left free. That doesn't alleviate performance problems at a lower level, though.

I'm sorry if this is too terse or hard to grasp. Each system is unique and has unique requirements; while a lot of tweaks make sense in a lot of situations, it's up to you to get things right.
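Traffic shaping proper (tc) is a rabbit hole of its own; a gentler first step is rtorrent's built-in rate limiting. A hypothetical ~/.rtorrent.rc fragment (rates in KB/s — pick values below your link's capacity):

```
# Cap transfer rates so the link (and the disk behind it) never saturates
download_rate = 600
upload_rate = 40

# Fewer peers means fewer sockets and less random I/O
max_peers = 40
```

If SSH stays responsive with these caps in place, you've confirmed saturation was part of the problem before ever touching tc.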
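To make the sysctl, scheduler, and filesystem talk concrete, here's a sketch of the knobs I mean. The numbers and paths are illustrative, not recommendations, and everything except the first two reads needs root:

```shell
# Current VM writeback thresholds (percent of RAM dirty pages may reach):
cat /proc/sys/vm/dirty_ratio
cat /proc/sys/vm/dirty_background_ratio

# On a 512 MB box, lowering these (as root) gives smaller, more frequent
# flushes instead of huge bursts that starve everything else:
#   sysctl -w vm.dirty_ratio=10
#   sysctl -w vm.dirty_background_ratio=5

# If the active scheduler is CFQ, demote rtorrent's disk priority:
#   cat /sys/block/sda/queue/scheduler       # active scheduler shown in [brackets]
#   ionice -c3 -p "$(pidof rtorrent)"        # class 3 = idle

# Skip access-time updates so reads stop generating metadata writes
# (the mount point is hypothetical):
#   mount -o remount,noatime /mnt/torrents
# More invasive: ext3 writeback journalling, on an unmounted fs, as root
# (device name is hypothetical too):
#   tune2fs -o journal_data_writeback /dev/mapper/vg0-data
```

Change one thing at a time and re-measure, or you won't know which knob did what.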
I hope the approach makes sense: set up a baseline (so you know what you're starting from), read the docs (so you know which least-invasive improvements boost performance the most), then tweak and compare against the data.
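For the baseline itself, even something this simple, run once idle and once under load, gives you comparable numbers (dstat and sar are shown commented since they may not be installed on a stock server):

```shell
# Load averages over 1/5/15 minutes, no extra packages needed:
cat /proc/loadavg

# Richer, if available:
#   dstat -cdnm 5        # CPU, disk, net, memory, every 5 seconds
#   sar -u 5 12          # CPU (incl. %iowait) for a minute, from sysstat
```

Save the output with a timestamp each time; "it feels slower" is much weaker evidence than two files you can diff.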