LinuxQuestions.org > Forums > Linux Forums > Linux - Virtualization and Cloud
Old 02-16-2022, 06:36 AM   #1
depam
Member
 
Registered: Sep 2005
Posts: 861

Rep: Reputation: 30
Best practice for iSCSI on multiple KVM hosts


What is the best way to implement iSCSI on multiple KVM hosts to support live migration? I have a Synology NAS and configured iSCSI with a separate target per host; however, it seems I am unable to do live migration with that approach. I then enabled "Allow multiple sessions from one or more iSCSI initiators" on the Synology and pointed all KVM hosts at the same target, and this seems to work. But I am worried about performance issues or potential data corruption on the LUN. Does anyone have experience with this? Thanks in advance.
 
Old 02-20-2022, 09:34 AM   #2
slackwhere
LQ Newbie
 
Registered: Jul 2018
Distribution: c'mon.... Slackware!
Posts: 25

Rep: Reputation: Disabled
I use a similar configuration, except with Linux LIO iSCSI targets backed by large hardware RAIDs. There's no performance penalty. As long as you don't start two or more VMs on different hosts accessing the same iSCSI target at the same time, you'll be fine. Of course, this excludes the case of starting QEMU in incoming-migration mode: in that case the destination QEMU connects to the iSCSI target but won't read or write any blocks. When migration begins, QEMU on the source will no longer write to the iSCSI target and will go dormant after the migration completes, so there's no worry of corruption.
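For context, the migration handoff described above is typically driven from the source host with virsh. A minimal sketch, where the VM name ("guest1") and destination host ("kvm2") are hypothetical placeholders:

```shell
# Minimal sketch: live-migrate a VM between two KVM hosts that both
# see the same iSCSI LUN. "guest1" and "kvm2" are placeholder names.
SRC_VM=guest1
DEST_HOST=kvm2
# The destination libvirtd starts QEMU in incoming-migration mode; it
# connects to the iSCSI target but does not touch blocks until handover.
MIGRATE_CMD="virsh migrate --live --persistent ${SRC_VM} qemu+ssh://${DEST_HOST}/system"
echo "${MIGRATE_CMD}"
```

The `--persistent` flag keeps the domain defined on the destination after the migration completes.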

All of the above assumes you have set the correct cache configuration for KVM images over iSCSI, which generally means a cache setting of "directsync". I prefer to use QEMU's builtin initiator via libiscsi; this method seems to prevent configurations that would cause corruption. If you're using the host's iSCSI initiator, you need to be sure to set the directsync option yourself, since QEMU cannot detect that the block device employs the iSCSI protocol. Otherwise the host could cache data that was synced by the source VM but not flushed to the iSCSI target before the destination VM begins to access it.
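As a sketch of what this looks like in practice, a libvirt disk element can use QEMU's builtin libiscsi initiator (disk type 'network', protocol 'iscsi') with cache='directsync'. The target IQN, portal address, and LUN number here are made-up placeholders, not from the original post:

```shell
# Hypothetical libvirt <disk> definition using QEMU's builtin libiscsi
# initiator with cache='directsync'. The IQN, portal IP, and LUN are
# placeholders; substitute your own Synology target details.
cat > /tmp/iscsi-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='directsync'/>
  <source protocol='iscsi' name='iqn.2000-01.com.synology:nas.Target-1/1'>
    <host name='192.168.1.10' port='3260'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
EOF
grep "cache=" /tmp/iscsi-disk.xml   # confirm the directsync setting
```

If you use the host's initiator instead, the disk would be type='block' pointing at the /dev/disk/by-path device, and cache='directsync' must still be set explicitly in the driver element.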

Hope this helps.
 
1 member found this post helpful.
Old 03-01-2022, 07:04 PM   #3
depam
Member
 
Registered: Sep 2005
Posts: 861

Original Poster
Rep: Reputation: 30
Hi,

Thank you very much for the info.
May I confirm a few things?
"As long as you don't start 2 or more VMs on different hosts accessing the same iSCSI target at the same time you'll be fine"
Currently my configuration uses the same iSCSI target for all hosts, with multiple VMs running. On the Synology NAS I can see the connection flipping between initiators, which I suspect is not good.
In some cases I also see iSCSI connection errors in dmesg, which I think indicates contention for the connection within the iSCSI target.
However, assigning a different target per host on the Synology does not let me perform live migration. Noted on the cache aspect. How do I utilize multiple hosts running different sets of VMs while still enabling live migration? Thanks in advance.
 
Old 03-05-2022, 11:48 AM   #4
slackwhere
LQ Newbie
 
Registered: Jul 2018
Distribution: c'mon.... Slackware!
Posts: 25

Rep: Reputation: Disabled
Sorry for the late response.

I assume by "same iSCSI targets" you mean same target and LUN? Which dmesg is producing the error message, target or initiator? Can you post these error messages?
 
Old 03-07-2022, 02:55 PM   #5
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,982

Rep: Reputation: 3626
Yes, iSCSI is not able to handle concurrent use the way SMB does.
 
Old 03-07-2022, 08:28 PM   #6
depam
Member
 
Registered: Sep 2005
Posts: 861

Original Poster
Rep: Reputation: 30
Hi, yes. I am using the same iSCSI target and LUN, but only one VM is running at a time on each host. I noticed the error below in /var/log/syslog; it is seen on all hosts connecting to the target:

Mar 8 10:26:20 kvm5 kernel: [11453.189502] connection1:0: detected conn error (1020)
Mar 8 10:26:20 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:23 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:24 kvm5 kernel: [11457.550478] connection1:0: detected conn error (1020)
Mar 8 10:26:25 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:27 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:28 kvm5 kernel: [11461.912619] connection1:0: detected conn error (1020)
Mar 8 10:26:29 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
Mar 8 10:26:31 kvm5 iscsid: connection1:0 is operational after recovery (1 attempts)
Mar 8 10:26:33 kvm5 kernel: [11466.276394] connection1:0: detected conn error (1020)
Mar 8 10:26:34 kvm5 iscsid: Kernel reported iSCSI connection 1:0 error (1020 - ISCSI_ERR_TCP_CONN_CLOSE: TCP connection closed) state (3)
 
Old 03-07-2022, 08:42 PM   #7
jefro
Moderator
 
Registered: Mar 2008
Posts: 21,982

Rep: Reputation: 3626
https://support.hpe.com/hpesc/public...mmr_kc-0118122
 
Old 03-08-2022, 08:46 AM   #8
depam
Member
 
Registered: Sep 2005
Posts: 861

Original Poster
Rep: Reputation: 30
Thanks, I managed to resolve that issue by specifying the initiator IQN.
 
Old 03-12-2022, 10:31 AM   #9
slackwhere
LQ Newbie
 
Registered: Jul 2018
Distribution: c'mon.... Slackware!
Posts: 25

Rep: Reputation: Disabled
Does this mean you are specifying a different initiator name for each VM even though they are technically the same VM?

Are those dmesg messages from the VM's dmesg or from the dmesg of the hypervisor hosting them? It looks to me like they're from the hypervisor. Please clarify.

Last edited by slackwhere; 03-12-2022 at 07:45 PM.
 
Old 05-17-2022, 01:59 AM   #10
depam
Member
 
Registered: Sep 2005
Posts: 861

Original Poster
Rep: Reputation: 30
Yes, the dmesg output is from the hypervisor itself. I changed the IQN on each hypervisor; initially I had just used the default, and I could see on the SAN that connections from different hypervisors appeared under the same name, which may have been causing the issue. After changing it, I can see in Synology that the hypervisors are named appropriately, each with its own connection.
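For anyone landing here later, the fix amounts to giving each hypervisor its own initiator IQN in open-iscsi's configuration. A rough sketch, writing to /tmp for illustration rather than the real /etc/iscsi/initiatorname.iscsi, and with a placeholder IQN:

```shell
# Give each KVM host a distinct, descriptive initiator IQN so the SAN
# can tell the connections apart. Real file: /etc/iscsi/initiatorname.iscsi
CONF=/tmp/initiatorname.iscsi
echo "InitiatorName=iqn.2004-10.com.example:kvm5" > "$CONF"
cat "$CONF"
# Then restart the initiator daemon and re-login to the target:
#   systemctl restart iscsid
#   iscsiadm -m node --logoutall=all && iscsiadm -m node --loginall=all
```

The IQN only needs to be unique per host and stable across reboots; a scheme like iqn.YYYY-MM.reversed.domain:hostname keeps the Synology connection list readable.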
 
  

