If the application spawns discrete jobs that can be processed on different computers (e.g. scientific computing), then a combination like GPFS + LoadLeveler lets you build clusters with fast shared disk and dispatch jobs to different servers based on almost any metric you want.
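To make that concrete, here is a minimal LoadLeveler job command file of the sort you would hand to `llsubmit`. The class name and paths are placeholders, and the exact keywords available depend on your LoadLeveler version and site configuration:

```
# @ job_name   = mysim
# @ job_type   = serial
# @ executable = /gpfs/apps/mysim/run_sim
# @ output     = mysim.$(jobid).out
# @ error      = mysim.$(jobid).err
# @ class      = batch
# @ queue
```

The scheduler then picks a node for the job according to whatever requirements and class rules you have defined; because the working data sits on GPFS, any node in the cluster can run it.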
If the application is a more standard commercial one, it will have been written to run on a single server and won't be easy to spread across several - a limitation of the application rather than of the clustering software. In that case you need to restructure the application so that different components can run on different servers.
For example, a SAP Central Instance can only run on one server, but you can still use a distributed architecture by running SAP application servers elsewhere. With two nodes you could then protect both: if either node fails, the CI or the app server fails over to the surviving node. If your app can be made to look like this, you can take advantage of clustering.
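The two-node "mutual takeover" idea above can be sketched in a few lines. This is purely illustrative (not any cluster product's API); the role and node names are made up:

```python
def place_roles(nodes_up, preferred):
    """Assign each role to its home node, or fail it over to a survivor.

    nodes_up:  set of currently live node names
    preferred: dict mapping role -> preferred (home) node
    """
    placement = {}
    for role, home in preferred.items():
        if home in nodes_up:
            placement[role] = home          # home node is alive: run there
        elif nodes_up:
            placement[role] = sorted(nodes_up)[0]  # fail over to a survivor
        # if no node is up, the role simply stays down
    return placement


preferred = {"sap_ci": "node1", "sap_app": "node2"}

# Normal operation: each role runs on its own node.
print(place_roles({"node1", "node2"}, preferred))

# node2 fails: the app server joins the CI on node1.
print(place_roles({"node1"}, preferred))
```

Real cluster software (HACMP, Serviceguard, etc.) adds heartbeats, resource dependencies and storage takeover on top of this, but the placement decision is essentially the same.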
If you were running AIX 5.2 on Power4 servers, you could do really clever things with Dynamic LPARs, but since neither of those is the case, I'll leave it there.