We are using an IBM xSeries 440 server. This host has three NICs: one built into the motherboard (a Broadcom) and two Intel PRO/100 cards in PCI slots.
The server originally ran RHEL AS 3 U6 (32-bit), and we wanted to install RHEL AS 4 U3 on it remotely, without any manual intervention.
So we copied the vmlinuz file and initrd image for RHEL AS 4 U3 (32-bit) onto the existing system so that GRUB could boot them.
The installation ISOs and kickstart file were shared via NFS.
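For reference, the exports on the NFS server looked roughly like this (the paths and the open wildcard are illustrative, not our exact entries):

# /etc/exports on the NFS server (paths are illustrative)
/export/rhel4u3/isos   *(ro,sync)
/export/rhel4u3/ks     *(ro,sync)
# then "exportfs -ra" to publish the entries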
This is the entry we added to GRUB's menu.lst:
title Remote Install
kernel /vmlinuz_remote ks=nfs:nfs_server_ip:/path/to/ks.cfg vnc vncpassword=mypassword
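(plus the corresponding initrd line pointing at the image we copied alongside the kernel; the file name here is illustrative:)

initrd /initrd_remote.img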
After making this the default entry and saving menu.lst, we rebooted the box, expecting it to start the install.
The problem: after the reboot, the installer could not find the kickstart file on the NFS share. Further investigation suggested that the installer was not treating the onboard Broadcom NIC as eth0; instead, one of the Intel PRO/100 cards in the PCI slots was being named eth0. Since neither Intel PRO/100 had an Ethernet cable connected, the installer could not reach ks.cfg over NFS.
Obviously, we could remove the two Intel PRO/100 NICs from the PCI slots and go ahead with the install, but the whole idea was to install remotely without any manual intervention at all.
Hence we were wondering what could be done to achieve that.
How do we make sure that the installer treats the onboard Ethernet port (the Broadcom) as eth0, and not one of the cards in the PCI slots (the Intel PRO/100s)?
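One option we came across is the ksdevice= boot parameter, which tells anaconda which interface to use when fetching the kickstart file. Would appending it to the kernel line help here? A sketch (we are not sure whether our RHEL 4 anaconda supports the ksdevice=link form, which reportedly picks the first interface with a live link):

kernel /vmlinuz_remote ks=nfs:nfs_server_ip:/path/to/ks.cfg ksdevice=link vnc vncpassword=mypassword

Since the onboard Broadcom is the only NIC with a cable plugged in, selecting the interface by link state (rather than by an ethN name we cannot predict) looks attractive, if it is supported.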
We would appreciate any pointers/suggestions.