LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 07-09-2015, 05:21 AM   #31
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513

Quote:
Originally Posted by voleg View Post
Definitely hardware error for sda, replace it.
Other option: replace SATA cable (is it SATA?).
It is a virtual machine that is failing... No cables to replace. No physical disk to replace.
 
Old 08-10-2015, 01:06 AM   #32
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Quote:
Originally Posted by jpollard View Post
It is a virtual machine that is failing... No cables to replace. No physical disk to replace.
Yeah exactly, and if it were a disk failure at the RAID level I should only be getting errors on the file server; the VMs should not even be aware of the disk failure.

The following may have fixed it, but it's too early to tell:

Quote:
echo 300 >/sys/class/block/sda/device/timeout
I made it part of my official doc for new server deployments. I need to streamline that process, actually; I'll make a basic script I can download that generates all the startup scripts/settings when I set up a new server. There's a lot of repetitious stuff I could automate. Whole other topic, though.

I went about a month without errors. Then I had to restart all my VMs for some preventative maintenance I had never done (imaging them for a backup of the OS config), forgot to add the setting back to my startup, and started getting all the errors again. So I don't know if it's a coincidence or if that setting is really helping, but I've reapplied it to all the VMs now and will just have to wait and see. I'll run some backup jobs manually to push the I/O harder and see if it happens again.
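
Something like this in rc.local (or an init script) is the sort of startup snippet I mean; just a sketch, assuming the VM's disks all show up as sd* (the 300s value matches the echo above):

Code:
# apply the longer SCSI command timeout to every sd* disk the VM sees at boot
for t in /sys/class/block/sd*/device/timeout; do
    echo 300 > "$t"
done

A udev rule that runs the same echo on device add would be the other common way to make it stick across reboots and hot-added disks.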
 
Old 08-10-2015, 04:47 AM   #33
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
I still think you would get better throughput if you switched the VMs to either NFS or, better, iSCSI. The advantage of NFS is the shared storage; the advantage of iSCSI would be reduced overhead.

But with the pressure off, you have more time to experiment.
 
Old 08-11-2015, 09:26 PM   #34
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
I am using NFS, but I do want to look at some kind of iSCSI solution, as it would probably cut some overhead. I also want the ability to live migrate VMs and such if I ever switch to a solution that supports that and get more than one host, so I'm not sure how that works. I know VMware can do it, but that gets super expensive once you get into a setup that supports migrations and such. I was thinking, though: is there a way in KVM that, instead of having a virtual disk stored locally on the VM server or on network storage, I assign an iSCSI target to the VM and it boots off that? Basically each VM would get its own iSCSI target, and it would not matter which server I boot the VM from, since the target would only be for that one VM. Technically I should even be able to live migrate; I can't see why not, anyway. Basically the VM server itself would not even store VM images; the VMs would simply boot from their respective iSCSI targets directly. Is that doable?

My goal is to eventually move to KVM. I just find it's a lot of work to set up and has a lot of quirks I need to figure out, so I set up VMware since I just needed something that works out of the box. Once I liberate one of my old servers that has VT on the CPU, I'll be able to experiment with KVM without feeling rushed.

Worst case scenario, I can always go with iSCSI targets on the VM server and just deal with the fact that I can't share the storage, as chances are good I'll have only one VM host for a while. There's Gluster as well, but from reading up on it, it looks way too complicated to set up; too many different parts that can go wrong, IMO. I want to simplify my setup, not make it more complex.
 
Old 08-12-2015, 02:58 AM   #35
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513Reputation: 1513
You still want iSCSI. It is connected via the IP address, not the host. Migrate the VM and the IP goes with it.

The same applies to NFS. Instead of being tied to the host, it is tied to the VM. Migrate the VM and the NFS mounts move with it.
 
Old 08-12-2015, 12:19 PM   #36
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Quote:
Originally Posted by jpollard View Post
You still want iscsi. It is connected via the IP number, not the host. Migrate the VM, the IP goes with it.

The same applies to NFS. Instead of being tied to the host, it is tied to the VM. Migrate the VM, the NFS mounts move with it.
Yeah, but if you have multiple VMs on one LUN you have to have the LUN set up on each host, which means you need some kind of cluster-aware file system, probably Gluster, which I want to avoid if I can due to its complexity. What I'm asking is: does KVM have a way to do away with a virtual disk altogether and instead boot from iSCSI? Then I would just make one LUN per VM. It would be a break from a traditional setup, though. Does KVM have this capability? I eventually want to switch to it. I'd also change my file storage architecture, since each VM would now store its own data locally, as opposed to on the file server, which would become a pure SAN. I'd set it on a separate VLAN/switch as well.

Last edited by Red Squirrel; 08-12-2015 at 12:20 PM.
 
Old 08-12-2015, 02:56 PM   #37
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
It isn't KVM that controls the use of iSCSI - it is the kernel in the VM. As long as KVM provides the network access, iSCSI will work.

If the VM is using iSCSI, then the data is stored on the iSCSI volume - which lives on the iSCSI server.

You would not have multiple VMs on one LUN - as that would require some form of distributed filesystem. Each LUN would be associated with a specific disk image file. Thus no sharing of storage (which is the advantage of using NFS).

I suggest starting small - first a single VM using iSCSI for a data disk. Then you can work out how to use iSCSI for the system disk (I think it takes a PXE boot, with the initrd initializing the iSCSI LUNs, then using that for the real root).
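
Inside the VM, the data-disk case looks something like this with open-iscsi (the portal address and IQN below are placeholders for whatever your storage server exports):

Code:
# discover the targets the storage server exports, then log in to one
iscsiadm -m discovery -t sendtargets -p 192.168.10.5:3260
iscsiadm -m node -T iqn.2015-08.local.san:vm01-data -p 192.168.10.5:3260 --login
# the LUN then appears as an ordinary block device (check lsblk/dmesg)
# and can be partitioned/formatted like any local disk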
 
Old 08-13-2015, 12:39 AM   #38
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
OK, so are you saying it's possible to use iSCSI as the system disk of a VM in KVM, instead of the regular raw file I would normally have to store on the VM server? Because if the VM can use iSCSI directly and boot from it as if it were a regular disk, this could definitely work: I can create one LUN per VM, and it would only need to be mounted when that VM is turned on, from whichever host. That would actually be very ideal, as it removes a lot of extra storage I/O layers. It needs to work at a level where the guest OS has no idea it's going on, though, as I don't want anything that will restrict what OS I can use as a VM. One thing, though: I still need a place to store the actual VM config file/folder. Where would that go? If I put it on the VM server then it's not shared. Or do I just use NFS for that?

Right now I'm on VMware, though, but I do want to look at switching to KVM if I can get it going. I had a lot of issues originally when I tried, and I was in a hurry, so I went with VMware.
 
Old 08-13-2015, 05:37 AM   #39
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
The thing causing the overhead is having the VM go through the host's virtual disk layer, which then uses NFS for file service. Using iSCSI (or NFS from the VM) becomes the alternative to using Fibre Channel for storage access. In neither case does the host get involved with files.

Communications, yes. But no file handling.

Since the host is now acting only as a router for the VM, migration of a VM becomes simpler, as only routing tables need to be updated. As far as migration goes, it becomes a matter of the virtual machine support on the host being able to create checkpoints and reload checkpoints.

Using a storage network becomes a benefit as well, since you can separate the general message traffic from the storage traffic, which reduces the latency of virtual disk I/O.

From my reading, it appears that iSCSI is faster than NFS. The advantage NFS has is that it will be more familiar to set up, and it allows better use of disk space on the storage server. In either case, the initial boot uses PXE so that the network can be initialized for storage access.
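
For the system-disk case, the PXE config ends up passing the iSCSI parameters on the kernel command line so the initramfs can bring the LUN up before switching to the real root. On a dracut-based initramfs it looks roughly like this (addresses, IQNs and the label are placeholders, and the exact syntax varies by distro/initramfs):

Code:
# one APPEND line in the PXE config (shown wrapped here, but it is a single line)
APPEND initrd=initramfs.img ip=dhcp root=LABEL=vmroot netroot=iscsi:@192.168.10.5::3260::iqn.2015-08.local.san:vm01-root rd.iscsi.initiator=iqn.2015-08.local.san:init:vm01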
 
Old 08-14-2015, 06:31 AM   #40
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
If the objective is high availability, a minimum of 2 hosts is required.
Live migration of VMs between the hosts requires that both hosts have access to the VMs' virtual hard drives and definition files.

option 1)
Use a third server to act as a file server and export the virtual hard drives to the hosts (I think this is what you have) by NFS or iscsi.
Please note that this is the least efficient solution, as this server is a single point of failure (if it dies, all VMs die with it).
It also introduces extra latency (file server --> host --> VM). Although you can expose iSCSI targets directly to the VMs, as jpollard told you, you still have to go through the host and still have a single server handling the I/O load of all VMs.

option 2)
Use 2 hosts, with a partition using a shared file system (I recommend OCFS2, for ease of setup).
In this partition, you must put the VMs' virtual hard drives and the VMs' definition files (the .xml files used by libvirt to define the VM).
This way, each host will handle half the VMs.
In case one of the hosts dies, the other can take all VMs with a click of a mouse (performance will drop, but they will not stop).
There will be less latency, as the I/O is divided among the hosts (two is the minimum, but you can scale up easily with more hosts).
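
The OCFS2 side is mostly one config file, /etc/ocfs2/cluster.conf, identical on both hosts; a minimal two-node sketch (host names and IPs are placeholders):

Code:
cluster:
        node_count = 2
        name = vmcluster

node:
        ip_port = 7777
        ip_address = 192.168.10.21
        number = 0
        name = host1
        cluster = vmcluster

node:
        ip_port = 7777
        ip_address = 192.168.10.22
        number = 1
        name = host2
        cluster = vmcluster

Then format the shared LUN/partition once (e.g. mkfs.ocfs2 -N 2 -L vmstore /dev/sdb1), mount it on both hosts, and keep the VM images and .xml definition files on it.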
 
Old 08-17-2015, 10:27 PM   #41
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
OK, I know all that. What I'm asking is: in KVM, when setting up a VM, is there a way that, instead of using a virtual disk stored on a file system (whether local, networked, iSCSI, etc.), I can tell it to boot from an iSCSI LUN directly? Basically the VM would talk block storage to the SAN directly, without even having to read/write an image file; it would see the LUN as a local drive. If this is possible, it would eliminate the need to even worry about setting up any kind of shared storage and remove a lot of overhead. My current setup would probably allow live migrations and such if I added a secondary VM host, but NFS has a lot of overhead as explained, and simply replacing NFS with iSCSI would be problematic, as it would no longer allow any kind of shared storage; iSCSI on its own is not meant to be shared, it is block storage. So if KVM has an option to completely eliminate the image file and instead have each VM boot directly from its own dedicated LUN, it would be the best of both worlds. Is this possible?

I don't have a system with VT-d right now that I can experiment with, so I want to know whether KVM has such an option.

My future goal is to set up a real SAN-like environment with only block storage and have two VM servers split the load, set up as failover so that if one fails the other takes the load, or at the very least so that I can start a VM on any host and just manually restart the VMs that crashed. If there is a way in KVM to boot from iSCSI, then it would not matter which host I start the VM on, as it would not depend on any iSCSI mount on the host; it would talk to the SAN directly.

It would be a different approach to the typical VM setup, but the idea is that I would just carve out LUNs, one per VM, on fully redundant RAID storage; each individual VM would not know any better and would just see a single local disk. Heck, in the future I could even get iSCSI cards for my physical servers and do the same thing, completely removing the need for local hard disks. The only downside is that the SAN becomes a single point of failure for the entire network, but I could look into some kind of HA storage solution at a later point.

PXE boot was mentioned, so is there a distro I can install that acts as some kind of PXE server and redirects a host to boot from an iSCSI target? How does this server know which host is which? Does it go by the MAC address of the NIC? This could definitely be interesting to do if someone can provide more info.

Last edited by Red Squirrel; 08-18-2015 at 02:39 PM.
 
Old 08-18-2015, 06:40 AM   #42
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by Red Squirrel View Post
Ok I know all that, what I'm asking is, in KVM, when setting up a VM, is there a way that instead of using a virtual disk stored on a file system in / (whether it's local, networked, iSCSI etc) is there a way to tell it to actually boot off of an NFS LUN directly. Basically it would communicate block storage to the SAN directly without even having to read/write from/to an image file. The VM would see the lun as a local drive. If this is possible it would eliminate the need for having to even worry about setting up any kind of shared storage and remove a lot of overhead. My current setup would probably allow to do live migrations and such if I added a secondary VM host, but NFS has lot of overhead as explained, simply replacing NFS with iSCSI would be problematic as it would no longer allow me to do any kind of shared storage as iSCSI on it's own is not meant to be shared, it is block storage. So if KVM has an option to completely eliminate needing to create an image file but instead have each VM boot directly off it's own dedicated LUN it would be the best of both worlds. Is this possible?

I don't have a system that has VT-D right now which I can experiment with, so I want to know if KVM has such an option.

My future goal is to setup a real SAN like environment with only block storage and have two VM servers split the load but have them setup as failover so if one fails the other takes the load, or at very least allows me to start a VM on any host so I just manually go and restart the VMs that crashed. If there is a way in KVM to boot to iSCSI then it would not matter which host I turn the VM on as it does not depend on any iSCSI mount on the host, as it talks directly.
That is what PXE boot does. No local disk required. The PXE server provides the kernel and the initrd/initramfs, and it also identifies the root NFS server. Once the kernel is initialized, the initrd/initramfs can then mount the NFS root... and normal operations continue. This is usually combined with DHCP to facilitate configuration.
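
As a sketch, a PXE menu entry for an NFS root looks roughly like this on the TFTP server (server address and export path are placeholders; the exact root= arguments depend on the distro's kernel/initramfs):

Code:
# pxelinux.cfg/default (or a per-MAC file)
DEFAULT nfsroot
LABEL nfsroot
    KERNEL vmlinuz
    APPEND initrd=initramfs.img ip=dhcp root=/dev/nfs nfsroot=192.168.10.5:/exports/vm01 rw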
Quote:
It would be a different approach to the typical VM setup but the idea is that I would just carve out LUNs, 1 per VM and it would be fully redundant raid storage but each individual VM would not know any better and just see it as a single local disk. Heck in the future I could even get iSCSI cards for my physical servers and do the same thing. Completely remove the need for local hard disks. Only downside of this is the SAN does become a single point of failure for the entire network but I could look into some kind of HA storage solution at some later point.
Which is one reason NFS has failover servers. Also available are GFS/Gluster, as they are designed for cluster service where NFS was not. (NFS requires some actions to be taken on the backup server side when a server fails.)
Quote:

PXE boot is mentioned, so is there a distro I can install that acts as some kind of PXE server that will redirect a host to boot off an iSCSI target? How does this server know which host is which? Does it go by the mac address of the nic? This could definitely be interesting to do if someone can provide more info.
An introduction: https://en.wikipedia.org/wiki/Preboo...on_Environment

How to for RH/CentOS: http://www.tecmint.com/install-pxe-n...r-in-centos-7/

Usually there are two such servers (a primary and a backup), but these do not have to be big servers (I've seen write-ups on using a Raspberry Pi as a PXE server...)

Last edited by jpollard; 08-18-2015 at 06:44 AM.
 
Old 08-18-2015, 09:27 AM   #43
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
Quote:
Originally Posted by Red Squirrel View Post
Ok I know all that, what I'm asking is, in KVM, when setting up a VM, is there a way that instead of using a virtual disk stored on a file system in / (whether it's local, networked, iSCSI etc) is there a way to tell it to actually boot off of an NFS LUN directly. Basically it would communicate block storage to the SAN directly without even having to read/write from/to an image file. The VM would see the lun as a local drive. If this is possible it would eliminate the need for having to even worry about setting up any kind of shared storage and remove a lot of overhead.
This is confusing.
Are you talking about booting from nfs? Are you aware that nfs IS shared storage and that NO overhead will be removed at all?
Are you talking about iscsi, perhaps?

Quote:
Originally Posted by Red Squirrel View Post
My current setup would probably allow to do live migrations and such if I added a secondary VM host, but NFS has lot of overhead as explained, simply replacing NFS with iSCSI would be problematic as it would no longer allow me to do any kind of shared storage as iSCSI on it's own is not meant to be shared, it is block storage. So if KVM has an option to completely eliminate needing to create an image file but instead have each VM boot directly off it's own dedicated LUN it would be the best of both worlds. Is this possible?
To boot from iscsi you use pxe.
No need for KVM to support it directly.
Actually, you don't even need pxe, just an initrd which then sets up iscsi and continues booting from there.

Quote:
Originally Posted by Red Squirrel View Post
I don't have a system that has VT-D right now which I can experiment with
You don't need it to use iscsi or nfs as the boot medium for your VMs.

Quote:
Originally Posted by Red Squirrel View Post
My future goal is to setup a real SAN like environment with only block storage and have two VM servers split the load but have them setup as failover so if one fails the other takes the load
Needless complexity and overhead using a SAN, but a nice plan.
Just watch out for single points of failure.


Quote:
Originally Posted by Red Squirrel View Post
If there is a way in KVM to boot to iSCSI then it would not matter which host I turn the VM on as it does not depend on any iSCSI mount on the host, as it talks directly.
That is not KVM's job.
You can expose iscsi targets to the VMs themselves.
You then have the problem of iSCSI target migration...
You need some sort of virtual IP address to use as the iSCSI target address, one that must jump to another server in case the SAN fails and you migrate to a backup.
Also, the VM definition file must be known to both hosts, so you need shared storage for that.

Quote:
Originally Posted by Red Squirrel View Post
It would be a different approach to the typical VM setup but the idea is that I would just carve out LUNs, 1 per VM and it would be fully redundant raid storage but each individual VM would not know any better and just see it as a single local disk. Heck in the future I could even get iSCSI cards for my physical servers and do the same thing. Completely remove the need for local hard disks.
Not a new concept.
Some people do that, but it adds complexity and overhead.

Quote:
Originally Posted by Red Squirrel View Post
Only downside of this is the SAN does become a single point of failure for the entire network
Yep, that's right.
So... don't do that ;-)

Quote:
Originally Posted by Red Squirrel View Post
PXE boot is mentioned, so is there a distro I can install that acts as some kind of PXE server that will redirect a host to boot off an iSCSI target?
Any distro can implement pxe (most of them out of the box).
Quote:
Originally Posted by Red Squirrel View Post
How does this server know which host is which? Does it go by the mac address of the nic?
Yes.
Quote:
Originally Posted by Red Squirrel View Post
This could definitely be interesting to do if someone can provide more info.
Think of it as a kind of networked initrd.
It loads an initial ramdisk through the network and boots from it. The ramdisk has the drivers that allow the system to continue booting from a different medium, like iscsi, nfs or even some shared filesystem like OCFS2 or GFS2.
I'm sure that if you bother to read the pxe documentation provided by your distro you'll get it (it's not that hard to implement).
If you configured dhcp servers before, you know how to configure pxe.
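
For the "which host is which" part, it is just dhcp config; a sketch of an ISC dhcpd host entry that pins one machine (by NIC MAC) to a fixed address and a specific PXE loader (names and addresses are placeholders):

Code:
host vm01 {
    hardware ethernet 52:54:00:aa:bb:01;
    fixed-address 192.168.10.101;
    next-server 192.168.10.5;       # TFTP/PXE server
    filename "pxelinux.0";
}

pxelinux then also looks for a config file named after the MAC (e.g. pxelinux.cfg/01-52-54-00-aa-bb-01), which is how each host can be pointed at its own root or LUN.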
 
Old 08-18-2015, 02:33 PM   #44
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Quote:
Originally Posted by jpollard View Post
That is what the PXE boot does. No local disk required. The PXE server provides the kernel, it provides the initrd/initramfs, it also provides the identification of the root NFS servers. Once the kernel is initialized, the initrd/initramfs can then mount the NFS root... and normal operations continue. This is usually combined with DHCP to facilitate configuration.

Which is one reason NFS has failover servers.. Also available is gfs/gluster as as they are designed for cluster service where NFS was not. (NFS requires some actions to be taken on the backup server side when a server fails)


An introduction: https://en.wikipedia.org/wiki/Preboo...on_Environment

How to for RH/CentOS: http://www.tecmint.com/install-pxe-n...r-in-centos-7/

Usually there are two such servers (a primary and a backup), but these do not have to be big servers (I've seen items on using a Raspberry Pi as a PXE server...)

OK, good to know it can be done with PXE; nothing in there seems to talk about iSCSI, though, but I'll google that further. I always figured PXE was a legacy thing that was no longer really used. So would I run that on the SAN network? Each VM would have two NICs, one for the SAN and one for regular network traffic.

As for the SAN being a single point of failure, I'm not in any different boat right now if my file server goes down anyway. Eventually I can look into adding redundancy for that.
 
Old 08-18-2015, 02:38 PM   #45
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Original Poster
Rep: Reputation: 54
Quote:
Originally Posted by Slax-Dude View Post
This is confusing.
Are you talking about booting from nfs? Are you aware that nfs IS shared storage and that NO overhead will be removed at all?
Are you talking about iscsi, perhaps?
No, I want to boot from iSCSI and not use NFS anymore; I meant to say iSCSI LUN. I want to use only block storage for the VMs to decrease overhead.

Quote:
Originally Posted by Slax-Dude View Post
Yep, that's right.
So... don't do that ;-)
I have a single point of failure right now too, so it would not change much. Eventually I want to look into what it takes for redundant/failover storage, but I'll spend that money on extra VM hosts first, since that also adds running VM capacity. I might just deal with the fact that my storage is a single point of failure and hope I don't get any sporadic motherboard, RAM, or CPU failure; the hard drives are all RAIDed and I have redundant PSUs, so at least there's that. The more big machines I add, the less UPS runtime I get and the more power it uses (and our hydro rates keep going up), so I need to find the best balance of redundancy and power usage. With the VM hosts, at least I can turn off the secondary host when the power goes out. I imagine any kind of redundant storage would have to run at all times or it would quickly go out of sync and have to rebuild.

Quote:
Originally Posted by Slax-Dude View Post
Any distro can implement pxe (most of them out of the box).
Yes.
Think of it as a kind of networked initrd.
It loads an initial ramdisk through the network and boots from it. The ramdisk has the drivers that allow the system to continue booting from a different medium, like iscsi, nfs or even some shared filesystem like OCFS2 or GFS2.
I'm sure that if you bother to read the pxe documentation provided by your distro you'll get it (its not that hard to implement).
If you configured dhcp servers before, you know how to configure pxe.

That sounds pretty interesting, then; so really, if I wanted to, I could do this on physical hardware too without needing to buy an iSCSI card (those are expensive). I'll have to read up further on how to do that.
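
From what I gather, the software route is iPXE chainloaded from the regular PXE server, which can then boot an iSCSI LUN with its built-in software initiator (no iSCSI HBA needed); something like this iPXE script, where the portal and IQN are placeholders:

Code:
#!ipxe
dhcp
sanboot iscsi:192.168.10.5::::iqn.2015-08.local.san:host1-boot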

Last edited by Red Squirrel; 08-18-2015 at 03:45 PM.
 
  

