It's not hard, really. Generally, you need to connect both systems to the same network (usually wired Ethernet, but it can really be anything you like -- many clusters use InfiniBand for MPI traffic, but you certainly don't have to). Make sure that no firewalls block access between the two machines. Then install the MPICH2 packages on both machines and try to run a test program (the distribution ships with several). You will use the mpdboot command to set up communication between the nodes, and then mpirun or mpiexec to launch the program (see http://www.dartmouth.edu/~rc/classes...mpich2_ex.html for some further explanation). The MPICH2 development team provides excellent documentation, which I suggest you read.
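As a rough sketch of those steps (the hostnames `node1`/`node2` and the path to the bundled `cpi` example are placeholders -- adjust for your own machines and install location), a two-node MPICH2 session looks something like this:

```shell
# List one hostname per line in an mpd.hosts file (placeholder names)
cat > mpd.hosts <<EOF
node1
node2
EOF

# Start the MPD daemon ring on both machines
mpdboot -n 2 -f mpd.hosts

# Verify that both nodes joined the ring
mpdtrace

# Run the bundled cpi test program across 4 processes
mpiexec -n 4 ./examples/cpi

# Shut the ring down when you're done
mpdallexit
```

You'll need passwordless SSH from the node where you run mpdboot to the other node, since that's how the daemons get started remotely.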
Note that MPI has no concept of "master" and "slave" -- all ranks are created equal. Of course this does not stop one from writing an MPI program using a master/worker paradigm, in which one rank (usually 0) is the master and doles out work to the "workers", but this is a function of how the code divides up its workload and not anything inherent to MPI.
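To make that concrete, here is a minimal master/worker sketch (my own illustration, not anything from the MPICH2 distribution): rank 0 hands one integer task to each worker, the workers square their task, and rank 0 collects the results. Compile with mpicc and run with mpiexec as above.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total ranks */

    if (rank == 0) {
        /* Rank 0 is the "master" purely by convention. */
        for (int w = 1; w < size; w++) {
            int task = w * 10;  /* placeholder work item */
            MPI_Send(&task, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("result from worker %d: %d\n", w, result);
        }
    } else {
        /* Every other rank acts as a worker. */
        int task, result;
        MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        result = task * task;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Nothing in the MPI calls themselves distinguishes master from worker; the `if (rank == 0)` branch is the entire "master/worker" structure.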
It's nice to have a shared filesystem accessible on both nodes (use NFS for starters; if you need high IOPS, look at something like Lustre or Ceph, but it sounds like you're just playing around, so vanilla NFS will probably work just fine). If you want to designate one of your nodes the "master", you can put a big disk in it and export it via NFS to the other system. This saves you the bother of having to copy files to both systems.
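For the NFS route, a minimal sketch of the export and mount (the path `/export/shared`, the subnet, and the hostname `master` are all placeholders for your own setup):

```shell
# On the "master" node, add a line like this to /etc/exports
# (path and subnet are placeholders):
#   /export/shared  192.168.1.0/24(rw,sync,no_subtree_check)

# Then re-read the exports table
sudo exportfs -ra

# On the other node, mount the share
sudo mkdir -p /mnt/shared
sudo mount -t nfs master:/export/shared /mnt/shared
```

If you mount the share at the same path on both nodes (or mount the master's export on itself via a bind mount or symlink), your MPI programs can refer to input and output files by one path everywhere, which is exactly what saves you the file copying.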