The physical NIC in the server can be thought of as one of the ports on a network switch,
and the hypervisor configuration interface is just a virtual NIC "plugged" into another port on that same switch.
I find the ESXi diagram a good representation of what is going on: the vertical grey bar is a vSwitch.
Usually the hypervisor OS does some handy load balancing and failover if the physical server has more than one NIC, so you can plug all of the cables into the physical switch for higher aggregate bandwidth. In this situation all of the physical cables in that group can be thought of as a single "fatter" cable.
This way your physical server has a private IP address, like 192.168.42.1, on a /24 subnet.
The WAN interface vNIC of your virtual NAT machine has the public IP address of 188.8.131.52.
The LAN interface vNIC of your virtual NAT machine has a private IP address of, say, 10.0.0.1, on a /24, /16, or /8 subnet, depending on how many student virtual machines there are.
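To get a feel for which prefix length you need, Python's standard `ipaddress` module can count the usable host addresses per subnet (a quick sketch; the 10.0.0.0 network is just the example address from above):

```python
import ipaddress

# Usable host addresses per prefix length
# (the network and broadcast addresses are excluded).
for prefix in (24, 16, 8):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    print(f"/{prefix}: {net.num_addresses - 2} usable hosts")
```

A /24 gives 254 usable addresses, which is plenty for a typical class; /16 and /8 only matter at much larger scale.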
Each student's VM has a fixed private IP on the same subnet as the LAN interface of the virtual NAT machine, and a port forward on the virtual NAT machine mapped to its SSH port.
Then, if the students want to access other ports on their server, they can use SSH tunnelling through that forwarded port.
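As a concrete sketch of that scheme (the subnet, base port, and names here are assumptions for illustration, not part of the original setup): give each student VM a fixed LAN IP and map a unique WAN-side port to that VM's SSH port, then tunnel any other service over the SSH session.

```python
import ipaddress

def student_port_map(lan_subnet="10.0.0.0/24", base_port=2200, students=3):
    """Assign each student VM a fixed private IP and a forwarded SSH port.

    lan_subnet and base_port are hypothetical example values.
    """
    hosts = ipaddress.ip_network(lan_subnet).hosts()
    next(hosts)  # skip 10.0.0.1, reserved for the NAT VM's LAN interface
    return {i: {"vm_ip": str(next(hosts)), "wan_port": base_port + i}
            for i in range(1, students + 1)}

mapping = student_port_map()
# Student 1 gets the VM at 10.0.0.2, reachable via port 2201 on the public IP.
# To reach, say, a web server on that VM, they tunnel it over the SSH session:
#   ssh -p 2201 student1@<public-ip> -L 8080:localhost:80
print(mapping[1])  # → {'vm_ip': '10.0.0.2', 'wan_port': 2201}
```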
For security I would recommend at least three virtual switches. These would contain:
1 - the hypervisor config vNIC, with its own dedicated physical NIC(s) connected to a switch you can perform configuration from
2 - the WAN interface vNIC, with its own dedicated physical NIC(s) connected to the DMZ network
3 - the LAN interface vNIC and the student VMs' vNICs
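The layout above can be sketched as a simple data structure (all of the vSwitch and vNIC names here are made up for illustration) to make the isolation property explicit: student vNICs only ever share a switch with the NAT VM's LAN interface, never with management or the WAN.

```python
# Sketch of the three-vSwitch layout described above (names are illustrative).
vswitches = {
    "vSwitch-mgmt": ["hypervisor-config-vnic"],   # + dedicated physical NIC(s)
    "vSwitch-dmz":  ["natvm-wan-vnic"],           # + dedicated physical NIC(s) to the DMZ
    "vSwitch-lan":  ["natvm-lan-vnic",            # no physical uplink needed
                     "student-vm-1-vnic",
                     "student-vm-2-vnic"],
}

# Student traffic can only leave via the NAT VM: the LAN switch shares
# no vNICs with the management or DMZ switches.
lan = set(vswitches["vSwitch-lan"])
print(lan & set(vswitches["vSwitch-mgmt"]))  # → set()
print(lan & set(vswitches["vSwitch-dmz"]))   # → set()
```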