What is the best way to implement iSCSI on multiple KVM hosts to support live migration? I have a Synology NAS and originally configured iSCSI with a separate target per host. However, it seems I am unable to do live migration with that approach. I then enabled "Allow multiple sessions from one or more iSCSI initiators" on the Synology NAS and pointed all the KVM hosts at the same target, and this seems to work. But I am worried about performance issues or potential data corruption on the LUN. Does anyone have experience with this? Thanks in advance.
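For reference, here is roughly how each host is attached to the shared target (the portal address and target IQN below are example values, not my real ones):

# Run on every KVM host; 192.168.1.50 and the IQN are placeholders.
# Discover the targets exported by the Synology portal:
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260
# Log each host into the *same* target (multiple sessions enabled on the NAS):
iscsiadm -m node -T iqn.2000-01.com.synology:nas.Target-1 -p 192.168.1.50:3260 --login
# Make the session persistent across reboots:
iscsiadm -m node -T iqn.2000-01.com.synology:nas.Target-1 -p 192.168.1.50:3260 -o update -n node.startup -v automatic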
I use a similar configuration, except with Linux LIO iSCSI targets backed by large hardware RAIDs. There's no performance penalty. As long as you don't start two or more VMs on different hosts accessing the same iSCSI target at the same time, you'll be fine. Of course, this excludes the case of starting QEMU in incoming migration mode: in that case it connects to the iSCSI target but won't read or write any blocks. Once migration begins, QEMU on the source no longer writes to the iSCSI target, and it goes dormant after the migration completes, so there's no worry of corruption.
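To make that concrete, a typical live migration between two such hosts looks something like this (the VM and destination host names are placeholders); libvirt starts QEMU on the destination in incoming migration mode for you:

# Run on the source hypervisor; "myvm" and "kvm6" are example names.
# Only RAM and device state are copied; the shared iSCSI disk is not.
virsh migrate --live --persistent --undefinesource myvm qemu+ssh://kvm6/system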
All of the above assumes you have set the correct cache configuration for KVM images over iSCSI; this generally means a cache setting of "directsync". I prefer to use QEMU's built-in initiator via libiscsi, since that method seems to prevent configurations that would cause corruption. If you're using the host's iSCSI initiator, you need to be sure to set the directsync option yourself, since QEMU cannot detect that the block device employs the iSCSI protocol. Otherwise the host could cache data that was synced by the source VM but not flushed to the iSCSI target before the destination VM begins to access it.
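As a sketch, a libvirt disk definition using the built-in initiator would look something like this (the portal address, target IQN and LUN number are placeholders; adjust them to match your Synology export):

<!-- Example values only: host, IQN and LUN are placeholders -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='directsync'/>
  <source protocol='iscsi' name='iqn.2000-01.com.synology:nas.Target-1/1'>
    <host name='192.168.1.50' port='3260'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>

If you use the host's initiator instead, the same cache='directsync' attribute goes on a <disk type='block'> that points at the /dev/disk/by-path/... device for the LUN.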
Thank you very much for the info.
May I confirm a few things?
"As long as you don't start 2 or more VMs on different hosts accessing the same iSCSI target at the same time you'll be fine"
Currently, my configuration uses the same iSCSI target for all the hosts while running multiple VMs. In the Synology NAS I can see the connection on the target flipping between hosts, which I think is not good.
In some cases I can see iSCSI connection errors in dmesg, which I think indicate contention for the connection within the iSCSI target.
However, assigning a different target to each host on the Synology does not allow me to perform live migration. Noted on the cache aspect. How do I utilize multiple hosts, each running a different set of VMs, while still enabling live migration? Thanks in advance.
I assume by "same iSCSI targets" you mean the same target and LUN? Which dmesg is producing the error messages, the target or the initiator? Can you post those error messages?
Hi, yes. I am using the same iSCSI target and LUN, but only one VM is running at a time on each of the hosts. I noticed the error below in /var/log/syslog; it is seen on all hosts connecting to the target:
Mar 8 10:26:20 kvm5 kernel: [11453.189502] connection1:0: detected conn error (1020)
Mar 8 10:26:20 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:23 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:24 kvm5 kernel: [11457.550478] connection1:0: detected conn error (1020)
Mar 8 10:26:25 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:27 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:28 kvm5 kernel: [11461.912619] connection1:0: detected conn error (1020)
Mar 8 10:26:29 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:31 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:33 kvm5 kernel: [11466.276394] connection1:0: detected conn error (1020)
Mar 8 10:26:34 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Does this mean you are specifying a different initiator name for each VM, even though it is technically the same VM?
Are those dmesg messages from the VM's dmesg or from the dmesg of the hypervisor hosting them? This looks to me like it's from the hypervisor. Please clarify.
Yes, the dmesg output is from the hypervisor itself. I changed the IQN on each hypervisor; initially I had just used the default, and I could see on the SAN that the connection kept flipping to reflect a different hypervisor, which may have been causing that issue. After changing it, I can see in Synology that the hypervisors are named appropriately on each connection.
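For anyone hitting the same thing, this is roughly the change involved (the IQN is an example; the file path applies to Debian/Ubuntu-style hosts running open-iscsi):

# /etc/iscsi/initiatorname.iscsi -- give each hypervisor a unique IQN
# (cloned hosts sharing the default name cause session flapping):
InitiatorName=iqn.1993-08.org.debian:01:kvm5

# Then restart the initiator and re-establish the sessions:
systemctl restart iscsid
iscsiadm -m node --logoutall=all
iscsiadm -m node --loginall=automatic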