Network File System Administration Guide
Editor: Shi Yaoqiang <email@example.com>
Table of contents
1. NFS Server Configuration and Operation
1.1 Required Packages
1.2 Configuring NFS to Start
1.3 Configuring NFS for Basic Operation
1.4 Wildcards and Globbing
1.5 Activating the List of Exports
2. NFS Client Configuration and Operation
2.1 Mounting an NFS Directory from the Command Line
2.2 NFS and /etc/fstab
2.3 Client-Side Helper Processes
3. Quirks and Limitations of NFS
3.1 Absolute and Relative Symbolic Links
3.2 Root Squash
3.3 NFS Hangs
3.4 Soft Mounting
3.5 Inverse DNS Pointers
3.6 File Locking
3.7 Filesystem Nesting
4. Performance Tips
5. NFS Security
5.1 Shortcomings and Risks
5.2 Security Tips
The Network File System (NFS) is the standard for sharing directories over a network among Linux and Unix computers. It was originally developed by Sun Microsystems in the mid-1980s. Linux has supported NFS (both as a client and a server) for years, and NFS remains popular in organizations with Unix- or Linux-based networks.
You can create shared NFS directories directly by editing the /etc/exports configuration file. As you need an NFS server before you can configure an NFS client, I describe how to create NFS servers first.
1. NFS Server Configuration and Operation
NFS servers are relatively easy to configure. All that is required is to export a filesystem, either generally or to a specific host, and then mount that filesystem from a remote client. I've shown you how to configure an NFS server to install RHEL 3 over a network. In this chapter, you'll learn the basics of NFS server configuration and operation.
Required Packages
Two RPM packages are closely associated with NFS: portmap and nfs-utils. They should be installed by default in RHEL 3. Just in case, you can use the rpm -q packagename command to make sure these packages are installed. The rpm -ql packagename command provides a list of files installed from that package. The nfs-utils package includes a number of key files. The following is not a complete list:
• /etc/rc.d/init.d/nfs (control script for NFS)
• /etc/rc.d/init.d/nfslock (control script for lockd and statd)
• /usr/share/doc/nfs-utils-1.0.5 (documentation, mostly in HTML format)
• Server daemons in /usr/sbin: rpc.mountd, rpc.nfsd
• Server daemons in /sbin: rpc.lockd, rpc.statd
• Control programs in /usr/sbin: exportfs, nfsstat, nhfsgraph, nhfsnums, nhfsrun, nhfsstone, showmount
• Status files in /var/lib/nfs: etab, rmtab, statd, state, xtab
The portmap RPM package includes the following key files (also not a complete list):
• /etc/rc.d/init.d/portmap (control script)
• /usr/share/doc/portmap-4.0 (documentation)
• Server daemon in /sbin: portmap
• Control programs in /usr/sbin: pmap_dump, pmap_set
Configuring NFS to Start
Once configured, you can set up NFS to start during the Linux boot process, or you can start it yourself with the service nfs start command. NFS also depends on the portmap daemon, which maps RPC services such as NFS to the ports on which they listen. Because of this dependency, make sure to start the portmap daemon before starting NFS, and don't stop it until after stopping NFS.
The nfs service script starts the following processes:
• rpc.mountd Handles mount requests
• nfsd Starts the kernel nfsd processes (eight by default) that serve file requests from clients
• rpc.rquotad Reports disk quota statistics to clients
If any of these processes are not running, NFS won't work. Fortunately, it's easy to check for these processes. Just run the rpcinfo -p command. As with other service scripts, if you want it to start when RHEL 3 boots, you'll need to run a command such as:
# chkconfig --level 35 nfs on
Configuring NFS for Basic Operation
NFS is fairly simple to configure. The only major NFS configuration file is /etc/exports. Once configured, you can export the listed directories with the exportfs -a command. Each line in this file lists the directory to be exported, the hosts it will be exported to, and the options that apply to each export. You can export a given directory only once. Take the following examples from an /etc/exports file:
/pub                 (ro,sync) someone.mylocaldomain.com(rw,sync)
/home                *.mylocaldomain.com(rw,sync)
/opt/diskless-root   diskless.mylocaldomain.com(rw,no_root_squash,sync)
In the preceding example, the /pub directory is exported to all users as read-only. It is also exported to one specific computer with read/write privileges. The /home directory is exported, with read/write privileges, to any computer on the .mylocaldomain.com network. Finally, the /opt/diskless-root directory is exported with full read/write privileges (even for root users) on the diskless.mylocaldomain.com computer.
All of these options include the sync flag, which requires all changes to be written to disk before a command such as a file copy completes. This is a fairly recent change, first implemented in Red Hat Linux 8.0.
Be very careful with /etc/exports; one common cause of problems is an extra space between expressions. For example, if you type in a space after the comma in (ro,sync), your directory won't get exported, and you'll get an error message.
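As a sketch of these pitfalls, compare a valid /etc/exports entry with two broken variants (the hostname is illustrative):

```
# Correct: the options bind directly to the host, with no stray spaces
/pub   someone.mylocaldomain.com(rw,sync)

# Broken: the space after the comma is a syntax error; /pub is not exported
/pub   someone.mylocaldomain.com(rw, sync)

# Subtly wrong: the space before the parentheses detaches the options
# from the host, so /pub is exported read/write to every host
/pub   someone.mylocaldomain.com (rw,sync)
```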
Wildcards and Globbing
In Linux network configuration files, you can specify a group of computers with the right wildcard. This process in Linux is also known as globbing. What you do for a wildcard depends on the configuration file. The NFS /etc/exports file uses 'conventional' wildcards; for example, *.mydomain.com specifies all computers within the mydomain.com domain. In contrast, /etc/hosts.deny is less conventional; .mydomain.com, with the leading dot, specifies all computers in that same domain.
For IPv4 networks, wildcards often require some form of the subnet mask. For example, 192.168.0.0/255.255.255.0 specifies the 192.168.0.0 network of computers with IP addresses that range from 192.168.0.1 to 192.168.0.254. Some services support the use of CIDR (Classless Inter-Domain Routing) notation. In CIDR, since 255.255.255.0 masks 24 bits, CIDR represents this with the number 24. If you're configuring a network in CIDR notation, you can represent this network as 192.168.0.0/24.
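The netmask-to-prefix conversion is simple bit counting; here is a quick shell sketch (mask_to_prefix is a made-up helper for illustration, not a standard command):

```shell
#!/bin/bash
# Count the set bits in a dotted-quad netmask to get the CIDR prefix length.
mask_to_prefix() {
    local prefix=0 octet o1 o2 o3 o4
    IFS=. read -r o1 o2 o3 o4 <<< "$1"
    for octet in "$o1" "$o2" "$o3" "$o4"; do
        while [ "$octet" -gt 0 ]; do
            prefix=$(( prefix + (octet & 1) ))
            octet=$(( octet >> 1 ))
        done
    done
    echo "$prefix"
}

mask_to_prefix 255.255.255.0    # prints 24
mask_to_prefix 255.255.0.0      # prints 16
```

So 192.168.0.0/255.255.255.0 and 192.168.0.0/24 describe the same network.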
Activating the List of Exports
Once you've modified /etc/exports, you need to do more. First, this file is simply the default set of exported directories. You need to activate them with the exportfs -a command. The next time you boot RHEL 3, if you've activated nfs at the appropriate runlevels, the nfs start script automatically runs the exportfs -r command, which synchronizes exported directories. You can see this for yourself in the /etc/rc.d/init.d/nfs script.
When you add a share to /etc/exports, the exportfs -r command adds the new directories. However, if you're modifying, moving, or deleting a share, it is safest to first temporarily unexport all filesystems with the exportfs -ua command before reexporting the shares with the exportfs -a command.
Once exports are active, they're easy to check. Just run the showmount -e command on the server. If you're looking for the export list for a remote NFS server, just add the name of the NFS server. For example, the showmount -e verylinux command looks for the list of exported NFS directories from the verylinux computer. If this command doesn't work, you may have blocked NFS messages with a firewall.
2. NFS Client Configuration and Operation
Now you can mount a shared NFS directory from a client computer. The commands and configuration files are similar to those used for any local filesystem.
Mounting an NFS Directory from the Command Line
Before doing anything elaborate, you should test the shared NFS directory from a Linux or Unix client computer. But first, you should check for the list of shared NFS directories. If you're on an NFS server computer named verylinux, the command is easy:
# showmount -e
This command assumes that the NFS server is local. If you don't see a list of shared directories, review the steps described earlier in this chapter. Make sure you've configured your /etc/exports file properly. Remember to export the shared directories. And your NFS server can't work if you haven't started the NFS daemon on your computer.
If you're on a remote NFS client computer and want to see the list of shared directories from the verylinux computer, run the following command:
# showmount -e verylinux
If it doesn't work, there are a couple more things you'll need to check: firewalls, and your /etc/hosts file or DNS server. If name resolution is the problem, you can substitute the IP address of the NFS server for its hostname. You'll see output similar to the following:
Export list for verylinux:
Now if you want to mount this directory locally, you'll need an empty local directory. Create a directory such as /mnt/remote if required. You can then mount the shared directory from the verylinux computer with the following command:
# mount -t nfs verylinux:/mnt/inst /mnt/remote
This command mounts the /mnt/inst directory from the computer named verylinux. This command specifies the use of the NFS protocol (-t nfs), and mounts the share on the local /mnt/remote directory. Depending on traffic on your network, this command may take a few seconds. Be patient! When it works, you'll be able to access files on /mnt/inst as if it were a local directory.
NFS and /etc/fstab
You can also configure an NFS client to mount a remote NFS directory during the boot process, as defined in /etc/fstab. For example, the following entry in a client /etc/fstab mounts the /homenfs share from the computer named nfsserv, on the local /nfs/home directory:
## Server: Directory Mount Point Type Mount Options Dump Fsckorder
nfsserv:/homenfs /nfs/home nfs soft,timeo=100 0 0
Alternatively, an automounter, such as autofs or amd, can be used to dynamically mount NFS filesystems as required by the client computer. The automounter can also unmount these remote filesystems after a period of inactivity.
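For example, autofs could handle the nfsserv share above with a pair of map entries like the following; the map file name (/etc/auto.nfs) and the timeout value are assumptions for illustration:

```
# /etc/auto.master: delegate the /nfs directory to the auto.nfs map,
# unmounting entries after 60 seconds of inactivity
/nfs    /etc/auto.nfs   --timeout=60

# /etc/auto.nfs: key (subdirectory of /nfs), mount options, remote share
home    -fstype=nfs,soft,timeo=100    nfsserv:/homenfs
```

With this in place, the first access to /nfs/home triggers the mount automatically.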
Client-Side Helper Processes
When you start NFS as a client, it adds a few new system processes, including:
• rpc.statd Tracks the status of servers, for use by rpc.lockd in recovering locks after a server crash
• rpc.lockd Manages the client side of file locking
3. Quirks and Limitations of NFS
NFS does have its problems. An administrator who controls shared NFS directories would be wise to take note of these limitations.
NFS is a 'stateless' protocol. In other words, you don't need to log in separately to access a shared NFS directory. Instead, the NFS client normally contacts rpc.mountd on the server. The rpc.mountd daemon handles mount requests. It checks the request against currently exported filesystems. If the request is valid, rpc.mountd provides an NFS file handle (a 'magic cookie'), which is then used for further client/server communication for this share.
The stateless protocol allows the NFS client to wait if the NFS server ever has to be rebooted. The software waits, and waits, and waits. This can cause the NFS client to hang as discussed later.
This can also lead to problems with insecure single-user clients. When a file is opened through a share, it may be 'locked out' from other users. When an NFS server is rebooted, handling the locked file can be difficult. The security problems can be so severe that NFS communication is blocked even by the default Red Hat Enterprise Linux firewall.
In theory, the recent change to NFS, setting up sync as the default for file transfers, should help address this problem. In theory, locked-out users should not lose any data that they've written with the appropriate commands.
Absolute and Relative Symbolic Links
If you have any symbolic links on an exported directory, be careful. The client interprets a symbolically linked file with respect to its own local filesystem. Unless the mount point and filesystem structures are identical, the linked file can point to an unexpected location, which may lead to unpredictable consequences.
You have a couple of ways to address this issue. You can take care to limit the use of symbolic links within an exported directory. Alternatively, NFS offers a server-side export option (link_relative) that converts absolute links to relative links; however, this can have counter-intuitive results if the client mounts a subdirectory of the exported directory.
Root Squash
By default, NFS is set up to root_squash, which prevents root users on an NFS client from gaining root access to a share on an NFS server. Specifically, the root user on a client (with a user ID of 0) is mapped to the nfsnobody unprivileged account.
This behavior can be disabled via the no_root_squash server export option in /etc/exports. In that case, root users who connect from a client gain root privileges on the shared NFS directory.
NFS Hangs
Because NFS is stateless, NFS clients may wait up to several minutes for a server. In some cases, an NFS client may wait indefinitely if a server goes down. During the wait, any process that looks for a file on the mounted NFS share will hang. Once this happens, it is generally difficult or impossible to unmount the offending filesystems. You can do several things to reduce the impact of this problem:
• Take great care to ensure the reliability of NFS servers and the network.
• Avoid mounting many different NFS servers at once. If several computers mount each other's NFS directories, this could cause problems throughout the network.
• Mount infrequently used NFS exports only when needed, and unmount them after use.
• Set up NFS shares with the sync option, which should at least reduce the incidence of lost files.
• Don't configure a mission-critical computer as an NFS client, if at all possible.
• Keep NFS mounted directories out of the search path for users, especially that of root.
• Keep NFS mounted directories out of the root (/) directory; instead, segregate them to a less frequently used filesystem such as /nfs/home or /nfs/share.
Soft Mounting
Consider using the soft option when mounting NFS filesystems. When an NFS server fails, a soft-mounted NFS filesystem will fail rather than hang. However, this risks the failure of long-running processes due to temporary network outages.
In addition, you can use the timeo option to set a timeout interval, in tenths of a second. For example, the following command would mount /nfs/home with a timeout of 30 seconds:
# mount -o soft,timeo=300 myserver:/home /nfs/home
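Since timeo is expressed in tenths of a second, the units are easy to get wrong; a trivial shell helper (secs_to_timeo is my own name, not a standard tool) makes the conversion explicit:

```shell
#!/bin/bash
# timeo is specified in tenths of a second; convert whole seconds to it.
secs_to_timeo() {
    echo $(( $1 * 10 ))
}

echo "soft,timeo=$(secs_to_timeo 30)"    # prints soft,timeo=300
```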
Inverse DNS Pointers
An NFS server daemon checks mount requests. First, it looks at the current list of exports, based on /etc/exports. Then, it looks up the client's IP address to find its hostname. This requires a reverse DNS lookup.
This hostname is then finally checked against the list of exports. If NFS can't find a hostname, rpc.mountd will deny access to that client. For security reasons, it also adds a 'request from unknown host' entry in /var/log/messages.
File Locking
Multiple NFS clients can be set up to mount the same exported directory from the same server. It's quite possible that people on different computers end up trying to use the same shared file. This is addressed by the file locking daemon service.
NFS has historically had serious problems making file locking work. If you have an application that depends on file locking over NFS, test it thoroughly before putting it into production.
Filesystem Nesting
It is impossible to export two directories in the same filesystem if one is inside the other. For example, /usr and /usr/local cannot both be exported unless /usr/local is mounted on a separate partition from /usr.
4. Performance Tips
You can do several things to keep NFS running in a stable and reliable manner. As you gain experience with NFS, you might monitor or even experiment with the following:
• The default of eight kernel NFS daemons is generally sufficient for good performance, even under fairly heavy loads. If your NFS server is busy, you can raise the number of daemons started by the /etc/rc.d/init.d/nfs script. Just keep in mind that the extra kernel processes consume valuable kernel resources.
• NFS write performance can be extremely slow, particularly with NFS v2 clients, as the client waits for each block of data to be written to disk.
• You may try specialized hardware with nonvolatile RAM. Data that is stored on such RAM isn't lost if you have trouble with network connectivity or a power failure.
• In applications where data loss is not a big concern, you may try the async export option. This makes NFS faster because the server acknowledges write requests before the data is actually committed to disk. However, a loss of power or network connectivity can result in a loss of data.
• Hostname lookups are performed frequently by the NFS server; you can start the Name Switch Cache Daemon (nscd) to speed lookup performance.
5. NFS Security
NFS includes a number of serious security problems and should never be used in hostile environments (such as on a server directly exposed to the Internet), at least not without strong precautions.
Shortcomings and Risks
NFS is an easy-to-use yet powerful file-sharing system. However, it is not without its problems. The following are a few security issues to keep in mind:
• Authentication NFS relies on the client host to report user and group IDs; the server simply trusts them. A root user on another computer can therefore switch to an arbitrary user ID and access that user's files on your NFS shares. In other words, data that is accessible via NFS to any user can potentially be accessed by any other user.
• Privacy NFS does not encrypt its network traffic; even Secure NFS protects only the authentication exchange, not the file data on the wire.
• portmap infrastructure Both the NFS client and server depend on the RPC portmap daemon. The portmap daemon has historically had a number of serious security holes. For this reason, portmap is not recommended for use on computers that are directly connected to the Internet or other potentially hostile networks.
Security Tips
If NFS must be used in or near a hostile environment, you can do some things to reduce the security risks:
• Educate yourself in detail about NFS security. If you do not clearly understand the risks, you should restrict your NFS use to friendly, internal networks behind a good firewall.
• Export as little data as possible, and export filesystems as read-only if possible.
• Use root squash to prevent clients from having root access to exported filesystems.
• If an NFS client has a direct connection to the Internet, use separate network adapters for the Internet connection and the LAN, and use the right firewall commands (iptables or ipchains) to block the TCP and UDP ports associated with the portmapper, mountd, and nfsd on the Internet-facing interface.
• Use a firewall system such as iptables or ipchains to deny access to the portmapper, mountd, and nfsd ports, except from explicitly trusted hosts or networks. The ports are typically
111 TCP/UDP portmapper (server and client)
745 UDP mountd (server)
747 TCP mountd (server)
2049 TCP/UDP nfsd (server)
Note that mountd's ports are assigned dynamically by the portmapper, so run rpcinfo -p on the server to confirm the numbers actually in use.
Use a port scanner to verify that these ports are blocked for untrusted network(s).
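As a sketch of such a ruleset, in iptables-save format, assuming the trusted LAN is 192.168.0.0/24 and the mountd ports listed earlier (both the addresses and the mountd port numbers are illustrative; mountd's ports vary from system to system):

```
# Accept NFS-related traffic from the trusted LAN only
-A INPUT -s 192.168.0.0/24 -p tcp -m multiport --dports 111,747,2049 -j ACCEPT
-A INPUT -s 192.168.0.0/24 -p udp -m multiport --dports 111,745,2049 -j ACCEPT
# Drop the same ports from everywhere else
-A INPUT -p tcp -m multiport --dports 111,747,2049 -j DROP
-A INPUT -p udp -m multiport --dports 111,745,2049 -j DROP
```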