Quote:
Originally Posted by jpollard
It also depends on the server. Most enterprise servers have dual power supplies, dual network connections...
|
It does indeed, but an OS failure on that server will leave you dead in the water, no matter how many PSUs or NICs it has.
Also, it has only one motherboard.
Sure: it has a much lower probability of going down than commodity servers... but if it does go down, it takes your whole system with it.
I have a client that, despite our advice, went with a single enterprise-grade server (my company had recommended two of them).
And although the server had two PSUs, both were plugged into a single UPS...
Can you guess what happened one stormy night over the weekend?
If your answer was "the UPS fried", you win a cookie.
Quote:
Originally Posted by Red Squirrel
I just thought of something: if I go the PXE route it means it's being done in software and not hardware (whether it's virtual or physical). This means the OS will see the NIC that goes to the SAN. This is a HUGE security risk, especially for VMs that are considered untrusted (ex: internet facing stuff, testing spyware/viruses etc).
|
That is a good point.
Yet one more reason not to expose the iSCSI targets to the VMs.
I was only considering performance.
As I told you before, the network will always be your bottleneck (gigabit Ethernet tops out around 125 MB/s in theory, before protocol overhead), so I'd avoid using things like iSCSI and NFS as much as possible on the VMs themselves. You pretty much have to use them on the hosts if you want HA and live migrations.
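To make the exposure risk easy to check, here is a minimal Python sketch; the portal address in it is a hypothetical placeholder, and 3260 is the standard iSCSI TCP port. Run it from inside a guest that is supposed to have no path to the storage network: if the connection succeeds, that VM can see your SAN.

Code:
#!/usr/bin/env python3
"""Probe whether this machine can reach an iSCSI portal.

Meant to be run from inside a VM that should be isolated from the
storage network. The portal address below is a placeholder; replace
it with your SAN's actual portal IP.
"""
import socket
import sys

PORTAL_HOST = "192.168.10.5"   # hypothetical SAN portal address
PORTAL_PORT = 3260             # well-known iSCSI target port
TIMEOUT_S = 3.0

def portal_reachable(host: str, port: int, timeout: float) -> bool:
    """Return True if a plain TCP connection to the portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if portal_reachable(PORTAL_HOST, PORTAL_PORT, TIMEOUT_S):
        print("WARNING: %s:%d is reachable; this guest can see the "
              "storage network." % (PORTAL_HOST, PORTAL_PORT))
        sys.exit(1)
    print("iSCSI portal unreachable from here; isolation looks OK.")

If a supposedly untrusted guest prints the warning, fix the bridging/VLAN setup on the host before anything else.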
Quote:
Originally Posted by Red Squirrel
For a home setup I'll probably go as far as having multiple VM hosts and 1 file server.
|
For a home test lab, you can have two hosts (skip the file server / SAN) for about 250-300€. That is the price of a decent smartphone nowadays...
Sure, it will be commodity hardware that can run two or three VMs per host at most... but that is still plenty for testing things out before you deploy the system on production servers.