Old 06-08-2010, 11:43 AM   #1
kevin0tech — LQ Newbie (Registered: Jun 2010, Posts: 2, Rep: 0)
Linux VirtualBox, RAID5, iscsitarget, and performance.


I broke down and spent some $$ on a new server for home use. I mostly do technical research and testing, plus store movies and music. My main interest here is IET (iscsitarget) performance.

The server consists of an AMD Phenom II 550, 8GB RAM, one 80GB system partition, and an LVM volume group (vg0) on software RAID5, running Ubuntu 10.04 Server x64.

vg0 sits on 3 x 500GB 7200RPM SATA drives in an mdadm RAID5, sliced into logical volumes: 100GB for VirtualBox VMs, one volume exported as an iSCSI target for a Windows 2k3 server, and another exported as an iSCSI target for a desktop.
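For context, a layout like this is typically built roughly as follows. This is an illustrative reconstruction, not taken from the thread — the device names and the two 200G volume sizes are assumptions, and the commands are destructive, so they are a sketch rather than something to run verbatim:

```
# Hypothetical device names; sizes for the iSCSI volumes are assumed.
# 3 x 500GB drives into a software RAID5 array:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put LVM on top of the array:
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# Slice the volume group as described above:
lvcreate -L 100G -n vbox vg0            # VirtualBox VM storage (ext4)
lvcreate -L 200G -n iSCSI-Server vg0    # exported via iscsitarget
lvcreate -L 200G -n iSCSI-Vista vg0     # exported via iscsitarget
mkfs.ext4 /dev/vg0/vbox
```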

With this setup, the win2k3 server boots from a .vdi image stored on the ext4 filesystem on the LVM/RAID5 vg0. Here are the DiskTT stats:

2.2GB file
Write: 75MB/s
Read: 210MB/s

However, running DiskTT on the Win2k3 server over the MS iSCSI initiator, I get:

2.2GB file
Write: 8.2MB/s
Read: 33.7MB/s

These speeds are incredibly slow considering the non-iSCSI path is screaming fast in comparison. Any ideas?

All NICs are gigabit.
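A rough link-budget check makes the gap concrete. A 1Gb/s NIC carries 125MB/s raw, and after Ethernet/IP/TCP/iSCSI overhead something around 110-115MB/s is a realistic single-link ceiling, so the iSCSI numbers above are using only a small fraction of the wire:

```python
# Compare the measured iSCSI throughput against the raw gigabit wire rate.
link_mb_s = 1000 / 8          # 1 Gb/s = 125.0 MB/s raw

write_mb_s = 8.2              # measured iSCSI write (DiskTT)
read_mb_s = 33.7              # measured iSCSI read (DiskTT)

write_pct = 100 * write_mb_s / link_mb_s
read_pct = 100 * read_mb_s / link_mb_s

print(f"write uses {write_pct:.0f}% of the raw link")   # -> 7%
print(f"read  uses {read_pct:.0f}% of the raw link")    # -> 27%
```

So even the read path is leaving roughly three quarters of the link idle, which points at protocol/configuration overhead rather than the disks.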

Here is my ietd.conf:

	MaxConnections		1		# Number of connections/session
						# We only support 1
	MaxSessions		0		# Number of sessions/target
						# 0 = no explicit limit
	InitialR2T		Yes		# Wait first for R2T
						# Yes = no unsolicited data
	ImmediateData		Yes		# Data can accompany command
						# Yes = cmnd/data in same PDU
	MaxRecvDataSegmentLength 8192		# Max data per PDU to receive
	MaxXmitDataSegmentLength 8192		# Max data per PDU to transmit
	MaxBurstLength		262144		# Max data per sequence (R2T)
	FirstBurstLength	65536		# Max unsolicited data sequence
	DefaultTime2Wait	2		# Secs wait for ini to log out
						# Not used
	#DefaultTime2Retain	20		# Secs keep cmnds after log out
						# Not used
	#MaxOutstandingR2T	1		# Max outstanding R2Ts per cmnd
	#DataPDUInOrder		Yes		# Data in PDUs is ordered
						# We only support ordered
	#DataSequenceInOrder	Yes		# PDUs in sequence are ordered
						# We only support ordered
	#ErrorRecoveryLevel	0		# We only support level 0
	#HeaderDigest		None,CRC32C	# PDU header checksum algo list
						# None or CRC32C
						# If only one is set then the
						# initiator must agree to it
						# or the connection will fail
	#DataDigest		None,CRC32C	# PDU data checksum algo list
						# Same as above
	#MaxSessions		0		# Maximum number of sessions to
						# this target - 0 = unlimited
	#NOPInterval		0		# Send a NOP-In ping each after
						# that many seconds if the conn
						# is otherwise idle - 0 = off
	#NOPTimeout		0 		# Wait that many seconds for a
						# response on a NOP-In ping
						# If 0 or > NOPInterval, NOPInterval
						# is used!
	# Various target parameters
	#Wthreads		8		# Number of IO threads
	#QueuedCommands		32		# Number of queued commands

Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=fileio
        Alias LUN1
        #MaxConnections  6

Target iqn.2010-06.lan.local:vg0.iscsi-vista
        Lun 0 Path=/dev/vg0/iSCSI-Vista,Type=fileio
        Alias LUN1
        #MaxConnections  6
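One thing that stands out in the config above is the 8192-byte data segment lengths, which force a lot of small PDUs. A common tuning suggestion for IET (not something tested in this thread — the values below are illustrative, though the parameter names all appear in the config above) is to raise the segment lengths and enable the worker-thread/queue settings:

```
	MaxRecvDataSegmentLength 262144		# Larger PDUs cut per-PDU overhead
	MaxXmitDataSegmentLength 262144
	Wthreads		8		# Uncomment: parallel IO threads
	QueuedCommands		32		# Uncomment: deeper command queue
```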

Last edited by kevin0tech; 06-08-2010 at 11:47 AM.
Old 06-10-2010, 01:45 PM   #2
kevin0tech — LQ Newbie, Original Poster
Did some Googling and found this article.

By tweaking the ietd.conf target section as follows...

Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=fileio,IOMode=wb
        Alias LUN1
        #MaxConnections  6

Target iqn.2010-06.lan.local:vg0.iscsi-vista
        Lun 0 Path=/dev/vg0/iSCSI-Vista,Type=fileio,IOMode=wb
        Alias LUN1
        #MaxConnections  6
Now when I run DiskTT, I see a drastic difference. My understanding is that by adding IOMode=wb (writeback), I allow the OS to use available RAM as a cache in front of the software RAID, because fileio goes through the filesystem's page cache instead of just serving up the raw device.

2.2GB file
Write: 65.75s = 33.5MB/s
Read: 46.16s = 47.7MB/s
Random: 23.51s = 93.6MB/s
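The MB/s figures follow directly from the file size and the elapsed times, which is worth verifying. This assumes DiskTT's "2.2GB" means 2200 decimal MB:

```python
# Recompute throughput from the reported DiskTT timings: MB/s = size / seconds.
size_mb = 2200  # "2.2GB" test file, assumed decimal MB

for label, secs in [("write", 65.75), ("read", 46.16), ("random", 23.51)]:
    print(f"{label}: {size_mb / secs:.1f} MB/s")
# -> write: 33.5 MB/s, read: 47.7 MB/s, random: 93.6 MB/s
```

All three match the reported numbers, so the write path went from 8.2MB/s to 33.5MB/s with the one-word config change.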
I also tested Type=blockio in ietd.conf; however, this was even slower. From reading several posts on the matter, I gather that blockio is better suited to hardware RAID controllers than to software RAID setups like mine.
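For reference, the blockio variant would look like this, mirroring the fileio target syntax above (blockio bypasses the page cache, so IOMode=wb does not apply):

```
Target iqn.2010-06.lan.local:vg0.iscsi-server
        Lun 0 Path=/dev/vg0/iSCSI-Server,Type=blockio
        Alias LUN1
```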


Tags: lvm, raid5, virtualbox


