LinuxQuestions.org


kevin0tech 06-08-2010 11:43 AM

Linux VirtualBox, RAID5, iscsitarget, and performance.
 
Hi,

I broke down and spent some $$ on a new server for home use. I mostly do technical research and testing, plus store movies and music. My main interest here is IET iscsitarget performance.

The server is an AMD Phenom II 550 with 8GB RAM, 1x 80GB system partition, and an LVM volume group (vg0) on software RAID 5, running Ubuntu 10.04 server x64.

vg0 sits on 3 x 500GB 7200RPM SATA drives in an mdadm RAID 5, sliced up into a 100GB logical volume for VBox VMs, one LV exported as an iscsitarget for a Windows 2k3 server, and another exported as an iscsitarget for a desktop.
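For anyone curious, a layout like this goes together roughly as follows (a sketch only; the member device names /dev/sd[bcd] and the sizes of the iSCSI slices are assumptions, not my exact values):

Code:

# assemble the 3-disk software RAID 5 (assumed member devices)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# layer LVM on top and carve out the slices
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 100G -n vbox vg0            # ext4 slice holding the VBox .vdi images
lvcreate -L 150G -n iSCSI-Server vg0    # exported to the win2k3 server (size assumed)
lvcreate -L 150G -n iSCSI-Vista vg0     # exported to the desktop (size assumed)
mkfs.ext4 /dev/vg0/vbox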

With this setup, the win2k3 server boots from a .vdi image stored on the ext4 LVM slice of the RAID 5 vg0. Here are the DiskTT stats:

Code:

2.2GB file
Write: 75MB/s
Read: 210MB/s

However, going from the same Win2k3 server through the MS iSCSI initiator to its iscsitarget LUN, DiskTT gives:

Code:

2.2GB file
Write: 8.2MB/s
Read: 33.7MB/s

The iSCSI speeds are incredibly slow: a gigabit link should be good for roughly 100MB/s, and the non-iSCSI path is screaming fast in comparison. Any ideas?

All NICs are gigabit.
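To rule out the network itself, link speed and raw TCP throughput can be checked first (a sketch; eth0 is an assumption, and the target IP is a placeholder):

Code:

# confirm the NIC on the target actually negotiated 1000Mb/s full duplex
ethtool eth0 | grep -E 'Speed|Duplex'

# raw TCP throughput: run "iperf -s" on the target,
# then from the initiator (Windows builds of iperf exist):
iperf -c <target-ip>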

Here is my ietd.conf

Code:

MaxConnections           1           # Number of connections/session
                                     # We only support 1
MaxSessions              0           # Number of sessions/target
                                     # 0 = no explicit limit
InitialR2T               Yes         # Wait first for R2T
                                     # Yes = no unsolicited data
ImmediateData            Yes         # Data can accompany command
                                     # Yes = cmnd/data in same PDU
MaxRecvDataSegmentLength 8192        # Max data per PDU to receive
MaxXmitDataSegmentLength 8192        # Max data per PDU to transmit
MaxBurstLength           262144      # Max data per sequence (R2T)
FirstBurstLength         65536       # Max unsolicited data sequence
DefaultTime2Wait         2           # Secs wait for ini to log out
                                     # Not used
#DefaultTime2Retain      20          # Secs keep cmnds after log out
                                     # Not used
#MaxOutstandingR2T       1           # Max outstanding R2Ts per cmnd
#DataPDUInOrder          Yes         # Data in PDUs is ordered
                                     # We only support ordered
#DataSequenceInOrder     Yes         # PDUs in sequence are ordered
                                     # We only support ordered
#ErrorRecoveryLevel      0           # We only support level 0
#HeaderDigest            None,CRC32C # PDU header checksum algo list
                                     # None or CRC32C
                                     # If only one is set then the
                                     # initiator must agree to it
                                     # or the connection will fail
#DataDigest              None,CRC32C # PDU data checksum algo list
                                     # Same as above
#MaxSessions             0           # Maximum number of sessions to
                                     # this target - 0 = unlimited
#NOPInterval             0           # Send a NOP-In ping each after
                                     # that many seconds if the conn
                                     # is otherwise idle - 0 = off
#NOPTimeout              0           # Wait that many seconds for a
                                     # response on a NOP-In ping
                                     # If 0 or > NOPInterval, NOPInterval
                                     # is used!
#
# Various target parameters
#
#Wthreads                8           # Number of IO threads
#QueuedCommands          32          # Number of queued commands

Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=fileio
        Alias LUN1
        #MaxConnections  6

Target iqn.2010-06.lan.local:vg0.iscsi-vista
        Lun 0 Path=/dev/vg0/iSCSI-Vista,Type=fileio
        Alias LUN1
        #MaxConnections  6
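After any change to ietd.conf the target daemon has to be restarted, and the exported LUNs and logged-in initiators show up under IET's proc interface (paths as shipped by the stock Ubuntu iscsitarget package):

Code:

# reload the target with the new config
/etc/init.d/iscsitarget restart

# list exported volumes and active sessions
cat /proc/net/iet/volume
cat /proc/net/iet/session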


kevin0tech 06-10-2010 01:45 PM

Did some googling and found this article.

http://misterd77.blogspot.com/2007/1...ts-part-1.html

By tweaking the ietd.conf target sections as follows...

Code:

Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=fileio,IOMode=wb
        Alias LUN1
        #MaxConnections  6

Target iqn.2010-06.lan.local:vg0.iscsi-vista
        Lun 0 Path=/dev/vg0/iSCSI-Vista,Type=fileio,IOMode=wb
        Alias LUN1
        #MaxConnections  6

Now when I run DiskTT, I see a drastic difference. My understanding is that by adding IOMode=wb (writeback), I am allowing the OS to use available RAM as a page cache in front of the software RAID, because fileio goes through the filesystem layer instead of just serving up the raw device.

Code:

2.2GB file
Write: 65.75s = 33.5MB/s
Read: 46.16s = 47.7MB/s
Random: 23.51s = 93.6MB/s
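The writeback cache is easy to see at work on the target side while the benchmark runs (a sketch; dropping the page cache between runs gives honest cold numbers):

Code:

# watch the "cached" column grow on the target during the DiskTT run
watch -n 1 free -m

# before re-testing, flush and drop the page cache (as root)
sync
echo 3 > /proc/sys/vm/drop_caches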

I also tested Type=blockio in ietd.conf; however, this was even slower. From reading several posts on the matter, I came to understand that blockio does direct I/O to the device and bypasses the page cache, so it suits hardware RAID controllers with their own cache rather than software RAID setups like mine.
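For reference, the blockio variant I tested only differs in the Type parameter (same target definition as above):

Code:

Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=blockio
        Alias LUN1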

