There are certainly more sophisticated ways of doing this, but a crude and simple approach using PHP might work:
1. Load distribution
You can set up a "dispatcher server", i.e. one machine running a simple PHP script with an associated database. The dispatcher stores a table listing the IP addresses of the "actual" servers; each time the script serves a hit, it increments a counter and redirects the incoming browser to the server at the list position indicated by that counter. When the counter reaches the end of the list, it wraps back to the beginning. I.e. something like:
    For each hit:
        Get the target server IP according to the index
        Redirect the visitor to that server IP, along with a token
        Increment the index (wrapping to 0 at the end of the list)
Since all the dispatcher does is the above, and the script and database it needs are very simple, even a modest machine should be able to efficiently handle 50 or more simultaneous users, sending them off in turn to an "actual" server.
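The round-robin logic above can be sketched in a few lines of PHP. Everything here is illustrative: the server IPs are examples, and the commented-out helper names (`fetch_counter_from_db`, `store_counter_in_db`) stand in for whatever database access you use on the dispatcher.

```php
<?php
// Pick the next target server by wrapping the hit counter around the
// server list (round-robin).
function next_server(array $servers, int $counter): string {
    return $servers[$counter % count($servers)];
}

$servers = ['192.0.2.10', '192.0.2.11', '192.0.2.12']; // example IPs

// On each hit the dispatcher would do roughly:
// $counter = fetch_counter_from_db();                  // assumed helper
// header('Location: http://' . next_server($servers, $counter) . '/');
// store_counter_in_db($counter + 1);                   // assumed helper
```

The modulo keeps the index valid no matter how large the counter grows, so the counter itself never needs to be reset.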
To prevent misuse or direct hits from "outside" on servers in your server farm, you can tokenise accesses: when the dispatcher "sends" a visitor off to a server in the farm, it generates a token and keeps it for a set time limit. Once the "target" server has "received" the visitor, it checks the token back with the dispatcher; if the dispatcher does hold such a token, the target server sets a session variable indicating that the user is a valid visitor. An MD5 hash of the visitor's IP coupled with the system microtime might suffice as a token, for example. This is simple and of course hackable (the session cookie, after all, lives on the client side), but for casual needs it should suffice.
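A minimal sketch of the token scheme, assuming the IP-plus-microtime approach described above. The commented-out calls (`store_token`, `dispatcher_confirms`) are hypothetical names for the dispatcher-side storage and the target server's call-back check, not a real API.

```php
<?php
// Mint a token from the visitor's IP and the current microtime.
function make_token(string $visitor_ip): string {
    return md5($visitor_ip . microtime(true));
}

// Dispatcher side:
// $token = make_token($_SERVER['REMOTE_ADDR']);
// store_token($token, time() + 60);            // keep for, say, 60 seconds
// header("Location: http://{$target}/?token={$token}");

// Target-server side: verify the token with the dispatcher, then mark
// the session as belonging to a valid visitor.
// if (dispatcher_confirms($_GET['token'])) {   // assumed HTTP call back
//     $_SESSION['valid_visitor'] = true;
// }
```

Because the token includes the microtime, two visitors arriving from the same IP still get distinct tokens.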
If your setup involves logins, you can load balance relatively easily: store user credentials on the dispatcher, logically bind each user to "his" server (the one holding his data in its database), and send him to "his" server whenever he logs in. You can then, for example, distribute users evenly over the available servers as they register in your system, or assign them by rule, e.g. all users with surnames starting with A-K go to Server #1, L-O to Server #2, and so on.
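The surname rule is trivial to express in PHP. This is just a sketch of the example buckets mentioned above (A-K, L-O, everything else); real assignments would of course come from your own registration logic.

```php
<?php
// Map a surname to a server number: A-K -> 1, L-O -> 2, rest -> 3.
function server_for_surname(string $surname): int {
    $first = strtoupper(substr($surname, 0, 1));
    if ($first >= 'A' && $first <= 'K') return 1;
    if ($first >= 'L' && $first <= 'O') return 2;
    return 3;
}
```

On login, the dispatcher looks up (or computes) the user's server and redirects him there, exactly as in the round-robin case but with a deterministic target.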
2. Database sharing
AFAIK MySQL does offer automatic replication and mirroring, which you can probably use in this scenario. The obvious setup is two database servers (primary and standby) and many HTTP servers, which might work in situations where you serve lots of static HTML but little dynamically generated, database-backed PHP content.
Another technique is to have each server write its database to a large-disk backup server at set intervals. You can have one backup server for everything (dangerous, but less expensive) or one backup server per "frontline" server (safe, but expensive).
3. Backup & redundancy
This should be obvious: if a primary fails for some reason, you can simply note that on the dispatcher and redirect people to the backup server. You can even pre-store backup server IPs on the dispatcher, with a flag per server entry in its database that tells it to "send to primary" or "send to backup". Then, if you have the space, you can for example set up a cron job that copies the entire database to peers every 24 hours, so each day every server receives a complete copy of the whole dataset in the system. Note that this may be very expensive in disk space and network load, but it gives you a very high degree of survivability: if 9 of your 10 servers fail together, you still have, on Server #10, copies of all 9 other databases.
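The failover lookup on the dispatcher can be sketched as below. The entry layout (primary IP, backup IP, a "use backup" flag) follows the description above; the field names themselves are just illustrative.

```php
<?php
// Resolve a dispatcher entry to the IP the visitor should be sent to,
// honouring the "send to backup" flag.
function target_for(array $entry): string {
    return $entry['use_backup'] ? $entry['backup_ip'] : $entry['primary_ip'];
}

$entry = [
    'primary_ip' => '192.0.2.10',
    'backup_ip'  => '192.0.2.20',
    'use_backup' => false,        // flipped when the primary fails
];
```

Flipping one boolean per entry is all the dispatcher needs to do to fail a server over; nothing on the visitor's side changes.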
Actually, there are VERY many scenarios you can set up, and relatively simple logic plus very simple PHP code should get you quite a long way without resorting to very expensive, very specialised, or dedicated hardware/router setups.