I am testing the performance differences between SMB 1.0, 2.1, and 3.0.
I am interested in how fast my client (CentOS 7) can read from enterprise-class storage over SMB. I am using mount.cifs with the vers= option (e.g. vers=3.0) to force each mount to negotiate a specific protocol version.
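For reference, the mounts look something like this (the server name, share, and credentials file are placeholders; my real values differ):
Code:
```shell
# Placeholder server and credentials -- substitute your own
mount.cifs //fileserver/mystorage /mnt/smb1.0 -o vers=1.0,credentials=/root/.smbcreds
mount.cifs //fileserver/mystorage /mnt/smb2.1 -o vers=2.1,credentials=/root/.smbcreds
mount.cifs //fileserver/mystorage /mnt/smb3.0 -o vers=3.0,credentials=/root/.smbcreds
```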
I mount using all three versions of SMB, then I do tests like so:
Code:
dd if=/mnt/smb3.0/mystorage/testfile of=/dev/zero bs=1024k count=8000
dd if=/mnt/smb2.1/mystorage/testfile of=/dev/zero bs=1024k count=8000
dd if=/mnt/smb1.0/mystorage/testfile of=/dev/zero bs=1024k count=8000
This works great and provides useful information. But now I want to test in other ways, for example with rsync. I don't want the speed of the client's local storage to become a bottleneck, so I want to transfer the file into a black hole via /dev/null, just as I did with /dev/zero in the dd tests.
Code:
rsync -av /mnt/smb3.0/mystorage/testfile /dev/null
The problem is that after about 15 seconds it comes back with:
Code:
rsync: write failed on "/dev/null": No space left on device
What? How can /dev/null run out of space? The same thing happens if I use /dev/zero. Interestingly, if I then do an
ls /dev, the null (or zero) device node is gone, which has some nasty side effects on the OS and requires a reboot to recover from (after the reboot /dev/null is recreated).
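(From what I've read since, the device node can apparently be recreated by hand instead of rebooting, though I haven't tried this myself:)
Code:
```shell
# Recreate the /dev/null character device (major 1, minor 3); run as root
mknod -m 666 /dev/null c 1 3
```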
Can anyone suggest how I can get rsync to transfer into /dev/null, or something similar, just so I can see the speeds it gets? Are there any other useful tests I can run to measure the speed of the protocol? (I am not interested in the speed of the storage on either end and don't want it interfering.)
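One thing I came across while searching is rsync's --inplace option, which (as I understand it) writes directly to the destination instead of building a temporary file next to it first. I haven't confirmed it yet, but something like this might avoid the problem:
Code:
```shell
# --inplace should write straight into /dev/null rather than a temp file in /dev
rsync -v --inplace /mnt/smb3.0/mystorage/testfile /dev/null
```
Is that the right approach, or is there a better way?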