Network File System (NFS) is a system that allows a filesystem hierarchy located on a remote host to be mounted on a local host, allowing directories and files stored on the remote host to be accessed as if they were on the local host. NFS, originally developed for Unix over thirty years ago, remains useful on Linux, for example in automated Red Hat Enterprise Linux installations where the installation image is stored on a remote host.
This article provides an overview of its architecture and its mechanism for specifying directories to be accessible to remote hosts, and gives a simple example of its use.
Network File System (NFS) is a protocol created by Sun Microsystems in 1984 for SunOS that allows a host to share directories -- export directories, in NFS terminology -- with remote hosts such that a remote host can mount the exported directories -- import the directories -- as if they were on local storage. The system is a network client/server architecture, where the exporting host is the server and the importing hosts are clients. It relies on Remote Procedure Calls (RPCs) between the server and clients, as well as on various services that work with the RPCs to provide the NFS functionality.
After installation and configuration, with appropriate services running and necessary ports open, the essence of the system is the specification in the file /etc/exports, on the server, of the directories to share and the remote hosts which can access the shared directories on the server, along with NFS options. The specification in /etc/exports is in a format such as
/exported/directory remote-host(option1,option2,optionN)
For example, the /etc/exports file could contain the line:
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
which exports the directory /home/brook/DataEXT4/SoftwareDownloads/RockyLinux allowing only the remote host at 192.168.56.103 to import the exported directory, further specifying that NFS should operate for this particular remote host with the NFS options listed in the parentheses.
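The shape of an exports entry can be illustrated with a short shell sketch that splits a line of this form into its three parts. The path, host, and options below are invented purely for illustration:

```shell
# Hypothetical exports entry, split into its parts to illustrate
# the "directory host(options)" format; the path and host are examples.
entry='/srv/share 192.168.56.103(ro,sync,no_subtree_check)'
dir=${entry%% *}          # exported directory: text before the first space
rest=${entry#* }
host=${rest%%\(*}         # client host or network allowed to import
opts=${rest#*\(}          # NFS options between the parentheses
opts=${opts%\)}
echo "dir=$dir host=$host opts=$opts"
# prints: dir=/srv/share host=192.168.56.103 opts=ro,sync,no_subtree_check
```

Each field plays the role described above: the directory to export, the host permitted to import it, and the comma-separated NFS options.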
The NFS client can mount the directory exported by the server with a mount command in the form of
mount server-identification:/path/of/exported/directory /path/of/local/mount/point
For example,
sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/
mounts the directory /home/brook/DataEXT4/SoftwareDownloads/RockyLinux exported by the server identified by the FQDN ARCH-16ITH6.localdomain at the local filesystem hierarchy path /mnt/hostnfs/.
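To make such an import persistent across reboots, an equivalent entry can be placed in /etc/fstab on the client. The following is a sketch using the names from the example above; the _netdev option, which delays mounting until the network is up, is an assumption about the desired behavior, not part of the original example:

```
# /etc/fstab entry (sketch) corresponding to the mount command above
ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux  /mnt/hostnfs  nfs  defaults,_netdev  0  0
```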
NFS has been useful for a very long time: at a site like a university with networks of UNIX computers, for example, it allowed users to log in to any UNIX workstation and have their home directory -- actually stored on a remote server -- always available on the local host as an NFS import. It remains useful today, for example in automated installations of Red Hat Enterprise Linux, where the installation image and the installation configuration can be stored on an NFS server.
This article describes the installation and configuration of an NFS server, using an Arch Linux and an openSUSE host for demonstration. The installation of NFS client components and the methods of access are also described.
Although configuring an NFS server can be as straightforward as editing the exports file and starting the main NFS service for basic functionality, the system can become complex when considering the various versions of the protocol that are supported by Linux distributions, the services associated with NFS -- which vary with the version -- the available operational parameters, firewall considerations, and the various security mechanisms that the system supports. Below is a description of some aspects of NFS, but the relevant man pages should be consulted for a complete understanding, among them nfs(5), nfsd(8), exportfs(8), exports(5), nfsstat(8), mountd(8), and nfs.conf(5).
Although NFS was created nearly forty years ago for UNIX, the system lives on in Linux (and UNIX). The protocol has evolved through four major versions -- each standardized in IETF Requests for Comments. The latest major version, NFSv4, has itself gone through three minor versions. Most distributions currently support NFSv3 and NFSv4, and some, with additional configuration, also support NFSv2. The version used depends on the particular Linux implementation. Current Red Hat documentation, as of July 2022, states:
The default NFS version in Red Hat Enterprise Linux 8 is 4.2. NFS clients attempt to mount using NFSv4.2 by default, and fall back to NFSv4.1 when the server does not support NFSv4.2. The mount later falls back to NFSv4.0 and then to NFSv3.
This seems to be the general behavior of NFS operation on many distributions. Aside from the kernel version itself, NFS version support on a particular installation is primarily governed by the values of the configuration parameters vers2, vers3, vers4, vers4.0, vers4.1, and vers4.2 in /etc/nfs.conf, the main configuration file of the NFS system, described later in this article.
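For example, a server could be restricted to NFSv4 only with a fragment like the following in /etc/nfs.conf. This is a sketch; exact defaults vary by distribution, so nfs.conf(5) should be consulted:

```
[nfsd]
# Disable NFSv3, leave the NFSv4 minor versions enabled
# (sketch -- verify parameter names and defaults against nfs.conf(5))
vers3=n
vers4=y
vers4.1=y
vers4.2=y
```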
Among the differences between NFSv3 and NFSv4 are the RPC-related services each version requires. NFSv4 does not use three of the services required by NFSv3, and, related to this evolution, NFSv4 uses only the well-known NFS TCP port, 2049, as opposed to the TCP/UDP ports 2049, 111, and 20048 used by NFSv3. RFC 7530, which standardizes the NFSv4 protocol, states:
The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.
An extension to NFSv4 also exists called Parallel NFS or pNFS. This extension improves performance by allowing clients to concurrently access data from multiple servers.
The NFS architecture consists of a kernel module and a main user-space RPC process. The main user-space RPC process works together with other user-space RPC service processes to provide the various components of NFS functionality. In modern Linux distributions, the main user-space RPC process is started by a primary systemd service, which starts secondary systemd services, which in turn start the other user-space RPC processes. The primary components of the NFS system are described and illustrated below.
The RPC processes that are necessary vary with the NFS protocol version supported by a particular host. As stated in Red Hat documentation:
The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind, lockd, and rpc-statd services. The nfs-mountd service is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.
However, the NFS versions supported on a host can be specified in the NFS configuration file /etc/nfs.conf such that multiple versions are enabled, and if both NFSv3 and NFSv4 are enabled, all RPC components (and their associated ports) are needed.
The relationships between the various components on the NFS server are shown in the diagram below.
As is evident from the various outputs of the ss command below, for openSUSE and Arch hosts acting as NFS servers, the main NFS kernel program and each NFS-related RPC program runs as a single process with a single PID, even when providing its part of NFS functionality on both IP address families and both the TCP and UDP transport-layer protocols -- and, in the case of rpc.statd, on multiple ports per protocol.
Security has also evolved since the earlier versions, but basic security is provided by allowing only specified remote-host clients to access the NFS server. Basic security also matches the UID and/or GID of the user reported by the client against a UID and GID on the server, so that the client user acquires the permissions of the matching user on the server.
58% 16:23:02 USER: brook HOST: ARCH-16ITH6 PCD: 12s ~ ❯$ sudo cat /var/lib/nfs/etab
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.101(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
58% 16:24:15 USER: brook HOST: ARCH-16ITH6 ~ ❯$ sudo systemctl restart nfs-server.service
58% 16:24:31 USER: brook HOST: ARCH-16ITH6 ~ ❯$ sudo cat /var/lib/nfs/etab
58% 16:24:41 USER: brook HOST: ARCH-16ITH6 ~ ❯$
To demonstrate this, consider the case of an NFS export configuration following the numbered procedure in the section Configuring NFS on Host (NFS Server) -> Arch Linux, below, where the directory /home/brook/DataEXT4/SoftwareDownloads is exported on the NFS server. A subdirectory, /home/brook/DataEXT4/SoftwareDownloads/RockyLinux contains files with the user:group ownership of brook:brook with UID:GID 1000:1000. In the following listing, where commands are executed in the NFS client after the exports are mounted, the first command by user brook with UID:GID 1000:1000 on the client is able to list the contents of the directory. In the second command, an su is executed to switch to a different user test with UID:GID 1001:1001 and in the third command, this user attempts to list the contents of the directory, but permission is denied, per the basic NFS security.
[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/VMhostNFS/softwaredownloads/RockyLinux/
total 10950668
-rw-r-----. 1 brook brook         450 Jul 10 00:28 CHECKSUM
-rw-r-----. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
-rw-r-----. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
[brook@Rocky16ITH6-VM1 ~]$ su -l test
Password:
[test@Rocky16ITH6-VM1 ~]$ ls -l /mnt/VMhostNFS/softwaredownloads/RockyLinux/
ls: cannot open directory '/mnt/VMhostNFS/softwaredownloads/RockyLinux/': Permission denied
[test@Rocky16ITH6-VM1 ~]$ exit
logout
[brook@Rocky16ITH6-VM1 ~]$ id brook
uid=1000(brook) gid=1000(brook) groups=1000(brook),10(wheel)
[brook@Rocky16ITH6-VM1 ~]$ sudo id test
uid=1001(test) gid=1001(test) groups=1001(test)
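The UID-based check, together with the root_squash option seen in the export options earlier, can be sketched as follows. This is an illustration of the mapping idea, not the kernel's actual implementation; 65534 is the conventional anonuid:

```shell
# Sketch of root_squash: the server maps a client's root (UID 0) to the
# anonymous UID before checking file permissions; other UIDs pass through
# unchanged and are matched against file ownership on the server.
squash_uid() {
  if [ "$1" -eq 0 ]; then
    echo 65534          # anonuid: the conventional "nobody" UID
  else
    echo "$1"           # ordinary users keep their client-reported UID
  fi
}
squash_uid 0      # prints 65534: root is squashed
squash_uid 1000   # prints 1000: ordinary user passes through
```

With no_root_squash in the export options, the UID 0 mapping above would be skipped, which is one reason that option is considered dangerous.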
This basic security mechanism can be thwarted by a misconfigured or malicious client. However, other security mechanisms are available, as described below, the most secure -- but most complicated -- being Kerberos network authentication.
NFS functionality is provided by a kernel module, nfsd, working together with various user-space remote procedure call (RPC) and other programs. Because the kernel module ships with the kernel, only the user-space components need to be installed to provide an NFS server.
In some distributions, the user-space components are in a package named nfs-utils; this is the case in Red Hat Enterprise Linux as well as Arch Linux. In these distributions, installing this package completes the installation of an NFS server.
In other distributions, the package that contains the NFS user-space components is nfs-kernel-server; this is the case in openSUSE and Ubuntu. In these distributions, installing this package completes the installation of an NFS server.
Below are two examples of the installation and configuration of an NFS server, one in Arch Linux, which provides an nfs-utils package, and another in openSUSE Tumbleweed, which provides the nfs-kernel-server package. These distributions are used as NFS servers in the demonstration of using NFS to share directories with Rocky Linux (a RHEL clone) and Ubuntu hosts acting as NFS clients.
The openSUSE Reference manual suggests installing an NFS server (the user-space components) by using the Software Management component of YaST, selecting the Patterns tab, activating the checkbox for File Server in the left pane, then initiating the package management transaction by clicking the "Accept" button (see the following set of images). This installs not only the package required for NFS server functionality, nfs-kernel-server, but also other file-sharing server applications such as ATFTP, TFTP, and VSFTP, in addition to Samba, which is installed by default in openSUSE.
Instead, if these other capabilities are not required, NFS server functionality can be enabled by installing the nfs-kernel-server package and, to configure NFS with YaST, the yast2-nfs-server package. This can be done by selecting the File Server option -- but NOT activating its checkbox -- in the left pane of the Patterns view, selecting only the nfs-kernel-server and yast2-nfs-server packages, then clicking Accept. These packages can also be installed using zypper with
zypper in nfs-kernel-server yast2-nfs-server
The first image of the following set, which depicts the YaST Software Management module's Patterns view when the File Server option is selected (not activated), shows the packages that are components of the "File Server" pattern. The second image shows that the netcfg and nfs-client packages are dependencies of nfs-kernel-server (these are listed in the bottom-right pane, in the "Dependencies" tab). netcfg is installed by default in an openSUSE system and provides the NFS server configuration file /etc/exports in addition to all network-related configuration files, such as /etc/hosts, as shown in the fourth image. nfs-client provides NFS client capabilities. That /etc/exports is owned by a general package providing all network configuration files, rather than by the package that provides the network service, is a notable difference compared to other distributions.
In Arch Linux the NFS server user-space components are in the package nfs-utils. Installing this package with the following command completes the installation of an NFS server.
pacman -S nfs-utils
Installation of this package provides the primary configuration file, /etc/exports, other configuration files, the user-space programs, and manual pages, as well as other files, as shown in the listing below.
51% 21:36:18 USER: brook HOST: ARCH-16ITH6 ~ ❯$ pacman -Qlq nfs-utils
/etc/
/etc/exports
/etc/exports.d/
/etc/nfs.conf
/etc/nfsmount.conf
/etc/request-key.d/
/etc/request-key.d/id_resolver.conf
/usr/
/usr/bin/
/usr/bin/blkmapd
/usr/bin/exportfs
/usr/bin/mount.nfs
/usr/bin/mount.nfs4
/usr/bin/mountstats
/usr/bin/nfsconf
/usr/bin/nfsdcld
/usr/bin/nfsdclddb
/usr/bin/nfsdclnts
/usr/bin/nfsdcltrack
/usr/bin/nfsidmap
/usr/bin/nfsiostat
/usr/bin/nfsstat
/usr/bin/nfsv4.exportd
/usr/bin/rpc.gssd
/usr/bin/rpc.idmapd
/usr/bin/rpc.mountd
/usr/bin/rpc.nfsd
/usr/bin/rpc.statd
/usr/bin/rpcdebug
/usr/bin/showmount
/usr/bin/sm-notify
/usr/bin/start-statd
/usr/bin/umount.nfs
/usr/bin/umount.nfs4
/usr/lib/
/usr/lib/systemd/
/usr/lib/systemd/system-generators/
/usr/lib/systemd/system-generators/nfs-server-generator
/usr/lib/systemd/system-generators/rpc-pipefs-generator
/usr/lib/systemd/system/
/usr/lib/systemd/system/auth-rpcgss-module.service
/usr/lib/systemd/system/nfs-blkmap.service
/usr/lib/systemd/system/nfs-client.target
/usr/lib/systemd/system/nfs-idmapd.service
/usr/lib/systemd/system/nfs-mountd.service
/usr/lib/systemd/system/nfs-server.service
/usr/lib/systemd/system/nfs-utils.service
/usr/lib/systemd/system/nfsdcld.service
/usr/lib/systemd/system/nfsv4-exportd.service
/usr/lib/systemd/system/nfsv4-server.service
/usr/lib/systemd/system/proc-fs-nfsd.mount
/usr/lib/systemd/system/rpc-gssd.service
/usr/lib/systemd/system/rpc-statd-notify.service
/usr/lib/systemd/system/rpc-statd.service
/usr/lib/systemd/system/rpc_pipefs.target
/usr/lib/systemd/system/var-lib-nfs-rpc_pipefs.mount
/usr/share/
/usr/share/doc/
/usr/share/doc/nfs-utils/
/usr/share/doc/nfs-utils/NEWS
/usr/share/doc/nfs-utils/README
/usr/share/doc/nfs-utils/README.systemd
/usr/share/man/
/usr/share/man/man5/
/usr/share/man/man5/exports.5.gz
/usr/share/man/man5/nfs.5.gz
/usr/share/man/man5/nfs.conf.5.gz
/usr/share/man/man5/nfsmount.conf.5.gz
/usr/share/man/man7/
/usr/share/man/man7/nfs.systemd.7.gz
/usr/share/man/man7/nfsd.7.gz
/usr/share/man/man8/
/usr/share/man/man8/blkmapd.8.gz
/usr/share/man/man8/exportd.8.gz
/usr/share/man/man8/exportfs.8.gz
/usr/share/man/man8/gssd.8.gz
/usr/share/man/man8/idmapd.8.gz
/usr/share/man/man8/mount.nfs.8.gz
/usr/share/man/man8/mountd.8.gz
/usr/share/man/man8/mountstats.8.gz
/usr/share/man/man8/nfsconf.8.gz
/usr/share/man/man8/nfsd.8.gz
/usr/share/man/man8/nfsdcld.8.gz
/usr/share/man/man8/nfsdclddb.8.gz
/usr/share/man/man8/nfsdclnts.8.gz
/usr/share/man/man8/nfsdcltrack.8.gz
/usr/share/man/man8/nfsidmap.8.gz
/usr/share/man/man8/nfsiostat.8.gz
/usr/share/man/man8/nfsstat.8.gz
/usr/share/man/man8/nfsv4.exportd.8.gz
/usr/share/man/man8/rpc.gssd.8.gz
/usr/share/man/man8/rpc.idmapd.8.gz
/usr/share/man/man8/rpc.mountd.8.gz
/usr/share/man/man8/rpc.nfsd.8.gz
/usr/share/man/man8/rpc.sm-notify.8.gz
/usr/share/man/man8/rpc.statd.8.gz
/usr/share/man/man8/rpcdebug.8.gz
/usr/share/man/man8/showmount.8.gz
/usr/share/man/man8/sm-notify.8.gz
/usr/share/man/man8/statd.8.gz
/usr/share/man/man8/umount.nfs.8.gz
/var/
/var/lib/
/var/lib/nfs/
/var/lib/nfs/etab
/var/lib/nfs/rmtab
/var/lib/nfs/rpc_pipefs/
/var/lib/nfs/sm.bak/
/var/lib/nfs/sm/
/var/lib/nfs/state
/var/lib/nfs/v4recovery/
51% 21:36:21 USER: brook HOST: ARCH-16ITH6 ~ ❯$
Before configuring the NFS server and clients, certain issues which affect access and reliability, described in this section, must be considered.
Before installing NFS servers and clients, as a good practice suggested by Arch Linux, a time-synchronization service such as the Network Time Protocol daemon or Chrony should be enabled on all NFS server and client computers.
When mounting the exported filesystems on the NFS client with the mount command, the NFS server must be identified. This can be done using either the NFS server host's IP address or its fully qualified domain name (FQDN). In some cases -- as described in the Arch Linux Wiki page on NFS -- the FQDN must be used instead of the IP address; otherwise the mount command will hang.
For cases where the basic NFS security mode is sufficient, such as when all hosts involved are on a trusted local network -- which is true of the hosts used in this demonstration -- the /etc/hosts file can simply be used to give the NFS server an FQDN. The following listing shows /etc/hosts for the Arch Linux instance used in this article as an NFS server.
100% 17:01:18 USER: brook HOST: ARCH-16ITH6 ~ ❯$ head -n 25 /etc/hosts
# Generated with hBlock 3.4.0 (https://github.com/hectorm/hblock)
# Blocked domains: 244157
# Date: Sat Jul 9 20:42:27 EDT 2022

# BEGIN HEADER
127.0.0.1 localhost.localdomain localhost
127.0.1.1 ARCH-16ITH6.localdomain ARCH-16ITH6
255.255.255.255 broadcasthost
::1 localhost ARCH-16ITH6
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# END HEADER

# BEGIN BLOCKLIST
0.0.0.0 0--e.info
0.0.0.0 0-0.fr
0.0.0.0 0-gkm-portal.net.daraz.com
0.0.0.0 0-owazo4.net.zooplus.de
0.0.0.0 0.0.0.0.beeglivesex.com
0.0.0.0 0.0.0.0.creative.hpyrdr.com
0.0.0.0 0.0.0.0.hpyrdr.com
100% 17:02:14 USER: brook HOST: ARCH-16ITH6 ~ ❯$
This file was originally generated by hblock, but then edited after generation to include the line
127.0.1.1 ARCH-16ITH6.localdomain ARCH-16ITH6
making the FQDN of the host ARCH-16ITH6.localdomain.
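The lookup this enables can be illustrated with a small sketch that resolves the FQDN from hosts(5)-style data. The sample data below mirrors the listing above; a real lookup would consult /etc/hosts itself, for example via getent hosts:

```shell
# Sketch: find the address for an FQDN in hosts(5)-format data,
# as the resolver does with /etc/hosts (sample data inlined here).
hosts_sample='127.0.0.1 localhost.localdomain localhost
127.0.1.1 ARCH-16ITH6.localdomain ARCH-16ITH6'
ip=$(printf '%s\n' "$hosts_sample" |
  awk '$2 == "ARCH-16ITH6.localdomain" { print $1 }')
echo "$ip"
# prints: 127.0.1.1
```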
If a firewall protects the NFS server, the ports necessary for NFS operation must be made accessible to incoming connections before configuring NFS. The necessary ports and the services associated with them are shown in the following table.
Service | NFSv3 | NFSv4 |
---|---|---|
nfs | 2049 TCP/UDP | 2049 TCP |
rpcbind | 111 TCP/UDP | N/A |
mountd | 20048 TCP/UDP | N/A |
For NFSv4, only TCP port 2049 is required to be open. If the NFS server or clients do not support this version, if the NFS server is configured not to use NFSv4 -- which is not the case in the examples used in this article -- or if NFSv3 is also to be supported, the other ports, namely port 111, used by rpcbind, and port 20048, used by rpc.mountd, must also be open.
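The table above amounts to a simple mapping from protocol version to required server ports, which can be expressed as a small shell sketch (the function name is invented for illustration; only the three well-known ports are considered):

```shell
# Sketch: ports that must be open on the server per NFS version,
# per the table above (2049 = nfs, 111 = rpcbind, 20048 = mountd).
nfs_ports() {
  case $1 in
    v3) echo "2049 111 20048" ;;  # NFSv3 also needs rpcbind and mountd
    v4) echo "2049" ;;            # NFSv4 needs only the nfs port
  esac
}
nfs_ports v3
nfs_ports v4
```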
The following listing shows the output of ss, with options specified to show listening UDP and TCP ports and the processes using them, in openSUSE Tumbleweed after using its YaST Firewall module -- which interfaces with firewalld -- to open the ports necessary for both NFSv3 and NFSv4 operation (the process is shown below). As a result of these actions in YaST Firewall, the firewall module opens only TCP port 2049, for IPv4 and IPv6, for use by the kernel NFS process (signified by the lack of process information in the "Process" column of the ss output). It also opens IPv4 and IPv6 ports 111 and 20048 -- in this case both UDP and TCP -- for the rpcbind and rpc.mountd processes. We can also see that rpcbind makes numerous other ports available for the rpc.statd processes.
97% 19:41:18 USER: brook HOST: 16ITH6-openSUSE PCD: 7m17s ~ ❯$ sudo ss -tupnl
[sudo] password for root:
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp   UNCONN 0      0      127.0.0.1:732      0.0.0.0:*         users:(("rpc.statd",pid=6492,fd=5))
udp   UNCONN 0      0      0.0.0.0:20048      0.0.0.0:*         users:(("rpc.mountd",pid=6488,fd=4))
udp   UNCONN 0      0      0.0.0.0:37800      0.0.0.0:*         users:(("VBoxHeadless",pid=10665,fd=25))
udp   UNCONN 0      0      0.0.0.0:5353       0.0.0.0:*         users:(("avahi-daemon",pid=1177,fd=11))
udp   UNCONN 0      0      0.0.0.0:38297      0.0.0.0:*         users:(("VBoxHeadless",pid=10665,fd=24))
udp   UNCONN 0      0      0.0.0.0:55648      0.0.0.0:*         users:(("VBoxHeadless",pid=10665,fd=27))
udp   UNCONN 0      0      0.0.0.0:42030      0.0.0.0:*         users:(("rpc.statd",pid=6492,fd=8))
udp   UNCONN 0      0      0.0.0.0:45204      0.0.0.0:*         users:(("VBoxHeadless",pid=10665,fd=26))
udp   UNCONN 0      0      0.0.0.0:45511      0.0.0.0:*         users:(("avahi-daemon",pid=1177,fd=13))
udp   UNCONN 0      0      0.0.0.0:48867      0.0.0.0:*
udp   UNCONN 0      0      0.0.0.0:68         0.0.0.0:*         users:(("dhclient",pid=8521,fd=6))
udp   UNCONN 0      0      0.0.0.0:111        0.0.0.0:*         users:(("rpcbind",pid=1227,fd=5),("systemd",pid=1,fd=43))
udp   UNCONN 0      0      127.0.0.1:323      0.0.0.0:*         users:(("chronyd",pid=1564,fd=5))
udp   UNCONN 0      0      [::]:49982         [::]:*
udp   UNCONN 0      0      [::]:51846         [::]:*            users:(("rpc.statd",pid=6492,fd=10))
udp   UNCONN 0      0      [::]:52144         [::]:*            users:(("avahi-daemon",pid=1177,fd=14))
udp   UNCONN 0      0      [::]:20048         [::]:*            users:(("rpc.mountd",pid=6488,fd=6))
udp   UNCONN 0      0      [::]:5353          [::]:*            users:(("avahi-daemon",pid=1177,fd=12))
udp   UNCONN 0      0      [::]:111           [::]:*            users:(("rpcbind",pid=1227,fd=7),("systemd",pid=1,fd=45))
udp   UNCONN 0      0      [::1]:323          [::]:*            users:(("chronyd",pid=1564,fd=6))
tcp   LISTEN 0      128    127.0.0.1:631      0.0.0.0:*         users:(("cupsd",pid=1552,fd=7))
tcp   LISTEN 0      100    127.0.0.1:25       0.0.0.0:*         users:(("master",pid=1820,fd=13))
tcp   LISTEN 0      4096   0.0.0.0:36477      0.0.0.0:*         users:(("rpc.statd",pid=6492,fd=9))
tcp   LISTEN 0      64     0.0.0.0:2049       0.0.0.0:*
tcp   LISTEN 0      64     0.0.0.0:35107      0.0.0.0:*
tcp   LISTEN 0      5      127.0.0.1:6600     0.0.0.0:*         users:(("mpd",pid=6708,fd=10))
tcp   LISTEN 0      4096   0.0.0.0:111        0.0.0.0:*         users:(("rpcbind",pid=1227,fd=4),("systemd",pid=1,fd=41))
tcp   LISTEN 0      4096   0.0.0.0:20048      0.0.0.0:*         users:(("rpc.mountd",pid=6488,fd=5))
tcp   LISTEN 0      128    [::1]:631          [::]:*            users:(("cupsd",pid=1552,fd=6))
tcp   LISTEN 0      100    [::1]:25           [::]:*            users:(("master",pid=1820,fd=14))
tcp   LISTEN 0      511    *:36027            *:*               users:(("code",pid=10106,fd=41))
tcp   LISTEN 0      64     [::]:2049          [::]:*
tcp   LISTEN 0      4096   [::]:39337         [::]:*            users:(("rpc.statd",pid=6492,fd=11))
tcp   LISTEN 0      64     [::]:38027         [::]:*
tcp   LISTEN 0      4096   [::]:111           [::]:*            users:(("rpcbind",pid=1227,fd=6),("systemd",pid=1,fd=44))
tcp   LISTEN 0      4096   [::]:20048         [::]:*            users:(("rpc.mountd",pid=6488,fd=7))
97% 19:41:37 USER: brook HOST: 16ITH6-openSUSE PCD: 3s ~ ❯$
The mapping between well-known ports and their associated service names by network-related programs such as ss is typically based on a process (see this Stack Exchange entry for the details) that eventually results in a lookup of the standard file /etc/services, which in the case of openSUSE -- for reasons found in this Stack Exchange post -- has been relocated to /usr/etc/services. firewalld, however, unlike network programs such as ss, uses its own XML files in /usr/lib/firewalld/services/ to generate the mappings, for example to display the service names in the outputs of the firewall-cmd command shown below, in the section openSUSE Firewall Configuration for NFS.
The mappings are ultimately managed by the Internet Assigned Numbers Authority (see RFC 6335: Internet Assigned Numbers Authority (IANA) Procedures for the Management of the Service Name and Transport Protocol Port Number Registry). Unfortunately, as seen in a query for port 2049 in the Service Name and Transport Protocol Port Number Registry, there is a known conflict in the mapping of port 2049, which is assigned to multiple services: nfs and the obscure shilp, apparently a service associated with a CAD program. This conflict has been propagated to the /etc/services file of Arch Linux -- and of other Linux distributions, if not modified by the distribution. The conflict in /etc/services causes outputs of ss, when the command is executed with the option to resolve port numbers to well-known service names, to misleadingly display the service name associated with port 2049 as "shilp" instead of "nfs" in Arch Linux and other distributions.
This misleading information is shown below in the output of ss on Arch Linux, similar to the one produced for openSUSE, above. Unlike in the previous output, the -n option is not used with ss, so the well-known service names associated with the ports -- where a mapping is assigned -- are shown instead of the port numbers. The service name associated with the kernel NFS process is shown as "shilp" instead of "nfs". This could be a source of confusion and frustration for new users of NFS on Arch or other distributions. (openSUSE has modified its /usr/etc/services file to comment out the entry that maps port 2049 to shilp.)
Also interesting, besides the unfortunate mapping conflict, are the network service names associated with rpcbind. The rpcbind process is associated with the service name sunrpc, which seems appropriate considering that the NFS protocol was created by Sun and relies on RPC processes.
73% 18:15:51 USER: brook HOST: ARCH-16ITH6 ~ ❯$ sudo ss -tupl
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp   UNCONN 0      0      0.0.0.0:48396      0.0.0.0:*         users:(("VBoxHeadless",pid=5787,fd=27))
udp   UNCONN 0      0      0.0.0.0:sunrpc     0.0.0.0:*         users:(("rpcbind",pid=1169,fd=5),("systemd",pid=1,fd=36))
udp   UNCONN 0      0      172.16.224.38:ntp  0.0.0.0:*         users:(("ntpd",pid=794,fd=23))
udp   UNCONN 0      0      192.168.56.1:ntp   0.0.0.0:*         users:(("ntpd",pid=794,fd=25))
udp   UNCONN 0      0      127.0.0.1:ntp      0.0.0.0:*         users:(("ntpd",pid=794,fd=18))
udp   UNCONN 0      0      0.0.0.0:ntp        0.0.0.0:*         users:(("ntpd",pid=794,fd=17))
udp   UNCONN 0      0      127.0.0.1:922      0.0.0.0:*         users:(("rpc.statd",pid=1170,fd=5))
udp   UNCONN 0      0      0.0.0.0:34500      0.0.0.0:*         users:(("VBoxHeadless",pid=5918,fd=25))
udp   UNCONN 0      0      0.0.0.0:51140      0.0.0.0:*         users:(("VBoxHeadless",pid=5918,fd=28))
udp   UNCONN 0      0      0.0.0.0:51763      0.0.0.0:*         users:(("VBoxHeadless",pid=5918,fd=27))
udp   UNCONN 0      0      0.0.0.0:52383      0.0.0.0:*         users:(("rpcbind",pid=1169,fd=10))
udp   UNCONN 0      0      0.0.0.0:mountd     0.0.0.0:*         users:(("rpc.mountd",pid=1171,fd=4))
udp   UNCONN 0      0      0.0.0.0:52926      0.0.0.0:*         users:(("VBoxHeadless",pid=5918,fd=24))
udp   UNCONN 0      0      0.0.0.0:53654      0.0.0.0:*         users:(("VBoxHeadless",pid=5918,fd=26))
udp   UNCONN 0      0      0.0.0.0:40010      0.0.0.0:*         users:(("rpc.statd",pid=1170,fd=8))
udp   UNCONN 0      0      0.0.0.0:57475      0.0.0.0:*
udp   UNCONN 0      0      [::]:49233         [::]:*            users:(("rpcbind",pid=1169,fd=11))
udp   UNCONN 0      0      [::]:sunrpc        [::]:*            users:(("rpcbind",pid=1169,fd=7),("systemd",pid=1,fd=38))
udp   UNCONN 0      0      [fe80::64f2:24e1:cecf:a491]%wlp0s20f3:ntp [::]:* users:(("ntpd",pid=794,fd=24))
udp   UNCONN 0      0      [fe80::800:27ff:fe00:0]%vboxnet0:ntp [::]:*      users:(("ntpd",pid=794,fd=26))
udp   UNCONN 0      0      [::1]:ntp          [::]:*            users:(("ntpd",pid=794,fd=19))
udp   UNCONN 0      0      [::]:ntp           [::]:*            users:(("ntpd",pid=794,fd=16))
udp   UNCONN 0      0      *:xmsg             *:*               users:(("kdeconnectd",pid=1786,fd=8))
udp   UNCONN 0      0      [::]:51467         [::]:*
udp   UNCONN 0      0      [::]:mountd        [::]:*            users:(("rpc.mountd",pid=1171,fd=6))
udp   UNCONN 0      0      [::]:41497         [::]:*            users:(("rpc.statd",pid=1170,fd=10))
tcp   LISTEN 0      64     0.0.0.0:shilp      0.0.0.0:*
tcp   LISTEN 0      64     0.0.0.0:38627      0.0.0.0:*
tcp   LISTEN 0      5      127.0.0.1:mshvlm   0.0.0.0:*         users:(("mpd",pid=1376,fd=10))
tcp   LISTEN 0      4096   0.0.0.0:51691      0.0.0.0:*         users:(("rpc.statd",pid=1170,fd=9))
tcp   LISTEN 0      4096   0.0.0.0:sunrpc     0.0.0.0:*         users:(("rpcbind",pid=1169,fd=4),("systemd",pid=1,fd=35))
tcp   LISTEN 0      4096   0.0.0.0:mountd     0.0.0.0:*         users:(("rpc.mountd",pid=1171,fd=5))
tcp   LISTEN 0      64     [::]:40191         [::]:*
tcp   LISTEN 0      64     [::]:shilp         [::]:*
tcp   LISTEN 0      4096   [::]:sunrpc        [::]:*            users:(("rpcbind",pid=1169,fd=6),("systemd",pid=1,fd=37))
tcp   LISTEN 0      4096   [::]:mountd        [::]:*            users:(("rpc.mountd",pid=1171,fd=7))
tcp   LISTEN 0      4096   [::]:50419         [::]:*            users:(("rpc.statd",pid=1170,fd=11))
tcp   LISTEN 0      50     *:xmsg             *:*               users:(("kdeconnectd",pid=1786,fd=9))
73% 18:16:17 USER: brook HOST: ARCH-16ITH6 ~ ❯$
Recent installations of openSUSE use firewalld by default instead of the previously used SUSEFirewall2. YaST has a Firewall module that simplifies making changes to firewalld, although the command-line tool firewall-cmd could be used.
The interactions with the YaST's Firewall component are shown in the following set of images.
We can verify the changes to the public zone with the firewall-cmd command, as shown below in the output of firewall-cmd --list-all. It shows that the public zone is active and that the services dhcpv6-client, mountd, nfs, and rpc-bind -- or rather the ports associated with them -- are accessible.
16ITH6-openSUSE:~ # firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: wlp0s20f3
  sources:
  services: dhcpv6-client mountd nfs rpc-bind
  ports:
  protocols:
  forward: yes
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
16ITH6-openSUSE:~ #
We can also verify that the needed ports have been opened by using the nmap command on the IP address of the openSUSE NFS server on the ports needed to support both NFSv3 and NFSv4, as shown in the following listing.
16ITH6-openSUSE:~ # nmap -p 111 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:30 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000082s latency).

PORT    STATE SERVICE
111/tcp open  rpcbind

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
16ITH6-openSUSE:~ # nmap -p 20048 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:31 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000055s latency).

PORT      STATE SERVICE
20048/tcp open  mountd

Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
16ITH6-openSUSE:~ # nmap -p 2049 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:31 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000067s latency).

PORT     STATE SERVICE
2049/tcp open  nfs

Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
16ITH6-openSUSE:~ # firewall-cmd --list-services
dhcpv6-client mountd nfs rpc-bind
16ITH6-openSUSE:~ # firewall-cmd --list-services --permanent
dhcpv6-client mountd nfs rpc-bind
16ITH6-openSUSE:~ #
Unlike openSUSE, which has a pre-configured firewall installed by default, Arch Linux, as a DIY distribution, does not. The following shows the user-added firewall rules in an Arch Linux installation that uses UFW to manage the firewall.
100% 17:46:45 USER: brook HOST: ARCH-16ITH6 PCD: 3s
~ ❯$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To         Action      From
--         ------      ----
2049/tcp   ALLOW IN    192.168.56.0/24
111/tcp    ALLOW IN    192.168.56.0/24
20048/tcp  ALLOW IN    192.168.56.0/24
20048/udp  ALLOW IN    192.168.56.0/24
111/udp    ALLOW IN    192.168.56.0/24
2049/udp   ALLOW IN    192.168.56.0/24
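For reference, rules like those listed above can be generated with ufw commands of the following form. This sketch only prints the commands (pipe the output to a root shell to apply them), and it assumes the VirtualBox host-only subnet used in this article:

```shell
# Print one "ufw allow" command per NFS-related port (rpcbind 111,
# nfs 2049, mountd 20048) and protocol, restricted to the trusted subnet.
for port in 111 2049 20048; do
    for proto in tcp udp; do
        echo "ufw allow from 192.168.56.0/24 to any port $port proto $proto"
    done
done
```

Restricting the source to the host-only subnet, rather than allowing the ports from anywhere, mirrors the rules shown in the ufw status output above.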
UFW adds these rules to the iptables ufw-user-input chain, as shown in the output of iptables -S. If these rules were used directly with iptables, they would be in the INPUT chain.
100% 17:50:24 USER: brook HOST: ARCH-16ITH6
~ ❯$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
... truncated ...
-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 20048 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 20048 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A ufw-user-limit -m limit --limit 3/min -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT
100% 17:50:30 USER: brook HOST: ARCH-16ITH6
~ ❯$
As mentioned previously in the section NFS Security, the basic security of NFS -- where the sec NFS option has the value sys, the default if it is not overridden -- relies on matching the UID/GID of the owner of the shared files on the NFS server to the UID/GID of the user on the NFS client attempting to access files in exported directories. In the NFS use case of this article, where clients and server are on the same trusted local network, this security mode is sufficient, but the UID/GID of the client user must match the UID/GID of the shared files' owner on the server.
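The UID-matching rule can be illustrated with a small shell sketch. Here /etc/passwd merely stands in for a file in an exported directory (the comparison, not the particular file, is the point):

```shell
# sec=sys sketch: compare this user's numeric UID with a file's numeric owner.
# /etc/passwd stands in for a file on an NFS export; on a real client the
# same numeric comparison decides which permission bits apply to the user.
client_uid=$(id -u)
file_uid=$(stat -c %u /etc/passwd)

if [ "$client_uid" -eq "$file_uid" ]; then
    echo "UID match: access is checked against the owner permission bits"
else
    echo "UID mismatch: access falls through to group/other permission bits"
fi
```

This is why, with sec=sys, a client user named brook only gets "owner" access to the server's files if brook has the same numeric UID on both hosts; the names themselves are never compared.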
The essence of configuring an NFS server is specifying, in the NFS configuration file /etc/exports, the directories to be shared -- or exported, in NFS terminology -- the remote hosts or networks that will be allowed to import the shared directories, and, optionally, a set of options that determine the operating parameters of NFS for the export.
Entries in /etc/exports have the basic format shown previously and reproduced here:

/exported/directory remote-host(option1,option2,optionN)

Each entry consists of the following whitespace-separated elements: the path of the directory to be exported, followed by one or more host identifications, each optionally followed by a parenthesized, comma-separated list of NFS options.

It should be noted that while the path and the host identifications are separated by whitespace, there MUST NOT be any whitespace between a host identification and the parentheses that contain its NFS options, if options are specified.
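The field structure can be illustrated with a small shell sketch that splits a hypothetical entry into its whitespace-separated parts (the path and hosts here are invented for illustration):

```shell
# A hypothetical /etc/exports entry: a path followed by host(options) clauses.
entry='/srv/nfs 192.168.56.104(rw,sync) 192.168.56.0/24(ro)'

set -- $entry              # unquoted expansion splits the entry on whitespace
echo "exported path: $1"
shift
for clause in "$@"; do
    host=${clause%%\(*}    # everything before '(' is the host identification
    opts=${clause#*\(}     # strip up to and including '('
    opts=${opts%)}         # drop the trailing ')'
    echo "host: $host  options: $opts"
done
```

Because whitespace separates fields, a stray space before the parentheses would make the options appear as a separate field -- which NFS interprets as a world-open host clause -- hence the warning above.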
The same directory can be exported to multiple hosts by listing each host, along with its NFS options, after the initial string that specifies the exported directory. Alternatively, the exported directory can be listed on multiple lines, each with a single host identification.
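For example, the following hypothetical /etc/exports fragment shows both layouts; the first line exports one directory to two hosts with different options, and the commented pair expresses the same exports one host per line:

```
/srv/nfs 192.168.56.103(ro) 192.168.56.104(rw)

# Equivalent, with the exported directory listed once per host:
# /srv/nfs 192.168.56.103(ro)
# /srv/nfs 192.168.56.104(rw)
```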
Remote hosts are identified by: a single host name or IP address; an IP network, in CIDR or address/netmask form; host name wildcards using the * and ? characters; or an NIS netgroup, written as @group.
If options are not specified for a host, a default set of options is applied. According to the exportfs(8) man page, the default export options for the exportfs command (see the section nfs-server.service and exportfs, below) are sync, ro, root_squash, and wdelay. However, the exports(5) man page specifies a fuller set of default options, which define the default behavior of NFS exports when /etc/exports is processed by the exportfs command: sync, ro, root_squash, wdelay, hide, no_subtree_check, sec=sys, secure, and no_all_squash. This fuller set of options is applied in both openSUSE and Arch Linux when NFS export options are not specified in /etc/exports.
It should be noted that even if some options are specified in /etc/exports, the default options that are not overridden by explicitly specified options will still be applied. This means that if behavior other than the default is desired, an option overriding the relevant default must be supplied. For example, the default read-only behavior due to the ro option must be overridden by the read-write option, rw, by including it inside the parentheses that immediately follow the host identification. These default options and some other important options are described below (see the exports(5) man page for complete details).
The sec= option, which selects the security flavor for an export, takes the following values:
krb5
This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server.
krb5i
This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server, and additionally provide integrity protection of data in NFS operations using checksums.
krb5p
This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server, additionally providing integrity protection of data in NFS operations using checksums and privacy protection of data transmitted in NFS operations using encryption.
sys
This option parameter specifies only the basic NFS security which relies on matching the UID/GID of users on client hosts with the UID/GID of file owners on the NFS server. This is the default if the sec= option is not specified.
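Putting the sec option to use, a hypothetical /etc/exports line requiring Kerberos authentication with both integrity and privacy protection for an entire subnet could be written as:

```
/srv/nfs 192.168.56.0/24(rw,sec=krb5p)
```

Note that the Kerberos flavors require a working Kerberos infrastructure; in the trusted-local-network use case of this article, the default sec=sys suffices.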
After specification of directories to be exported in /etc/exports, the actual exporting of the specified directories is initiated by the exportfs program. This command is typically started by the systemd service nfs-server.service, which also starts NFS related services which in turn start the various other NFS user-space components. Enabling this service will ensure that the NFS server will be available at boot. If the nfs-server.service is not enabled, it must be started manually whenever the NFS server is needed.
The output of the first command in the following listing shows that nfs-server.service first executes exportfs -r, reading the /etc/exports file and included files in /etc/exports.d/ to recreate the entries in /var/lib/nfs/etab. It then starts the nfsd RPC program. The warnings in the output of systemctl status, referring to the fact that desirable NFS options have not been set in the /etc/exports file, are irrelevant, as the default options are applied anyway, as shown in the output of the second command in the listing.
18:57:04 USER: brook HOST: ARCH-16ITH6
~ ❯$ sudo systemctl status nfs-server
[sudo] password for brook:
● nfs-server.service - NFS server and services
     Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; preset: disabled)
    Drop-In: /run/systemd/generator/nfs-server.service.d
             └─order-with-mounts.conf
     Active: active (exited) since Mon 2022-09-12 18:06:59 EDT; 50min ago
    Process: 1467 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 1468 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
   Main PID: 1468 (code=exited, status=0/SUCCESS)
        CPU: 3ms

Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: No options for /home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103: suggest 192.168.56.103(sync) to avoid warning
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux".
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   Assuming default behaviour ('no_subtree_check').
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: No options for /home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.101: suggest 192.168.56.101(sync) to avoid warning
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux".
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   Assuming default behaviour ('no_subtree_check').
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 systemd[1]: Finished NFS server and services.
18:57:20 USER: brook HOST: ARCH-16ITH6 PCD: 3s
~ ❯$ sudo exportfs -v
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.101(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
18:58:25 USER: brook HOST: ARCH-16ITH6
~ ❯$
Whenever the /etc/exports file is modified while NFS is active, either nfs-server.service must be restarted or exportfs -r must be executed, the -r option re-exporting all directories listed in /etc/exports.
Based on the introduction to NFS configuration above, the details of configuring an NFS server can be complicated, but all that is typically necessary, depending on the use case, is to edit /etc/exports and add a line such as
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104 192.168.56.103 192.168.56.101
to allow the remote hosts at 192.168.56.104, 192.168.56.103, and 192.168.56.101 to import, or access, the exported directory with default NFS options. The NFS service is then started with elevated privileges:
systemctl start nfs-server.service
This basic exports configuration was used in Sharing Host Computer Directories with VirtualBox Guests Using NFS in which a VirtualBox VM host was configured as an NFS server to share a directory with VM guests.
As an alternative to listing individual host IP addresses in /etc/exports, a subnetwork that includes these IP addresses could be used. The line in /etc/exports, using one possible expression for the range of IP addresses 192.168.56.0 to 192.168.56.255 in CIDR notation, would be
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/24
or, using dotted-decimal subnet mask notation, the line would be
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/255.255.255.0
permitting all hosts with addresses in the range to access the NFS server. (Note that the VirtualBox host-only network discussed in the referenced article only assigns guests addresses in the range 192.168.56.101 through 192.168.56.254.)
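The equivalence of the two notations can be checked with a short shell computation that converts a prefix length to its dotted-decimal mask:

```shell
# Convert a CIDR prefix length to a dotted-decimal netmask using
# shell arithmetic: set the top `prefix` bits of a 32-bit value.
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
printf '%d.%d.%d.%d\n' \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $(( mask & 255 ))
# prints 255.255.255.0
```

Either form in /etc/exports selects the same 256-address block, of which the host-only network uses only .101 through .254 for guests.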
The above lines in /etc/exports will apply default options, as discussed above, one of which is the ro option, allowing only read-only access. If read-write access is required, the line would take one of the following forms:
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104(rw) 192.168.56.103(rw) 192.168.56.101(rw)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux -rw 192.168.56.104 192.168.56.103 192.168.56.101
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/24(rw)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/255.255.255.0(rw)
Methods for accessing the exported directories on a client are shown later in this article.
The Arch Linux documentation, however, recommends a more complicated exports configuration in which an NFS root is exported as well as the specific directories to be exported under the NFS root. The wiki also recommends that the actual shared directory be bind-mounted to the exported directory. Following the recommendation, if the directory /home/brook/DataEXT4/SoftwareDownloads/ is the directory to be shared by the NFS server:
100% 18:12:28 USER: brook HOST: ARCH-16ITH6 ~ ❯$ sudo mkdir -p /srv/nfs
100% 18:14:08 USER: brook HOST: ARCH-16ITH6 PCD: 5s ~ ❯$ sudo mkdir -p /srv/nfs/softwaredownloads
100% 18:16:06 USER: brook HOST: ARCH-16ITH6 ~ ❯$ sudo mount --bind /home/brook/DataEXT4/SoftwareDownloads /srv/nfs/softwaredownloads
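The bind mount created above lasts only until reboot; to make it persistent, a line using the standard fstab bind-mount syntax can be added to /etc/fstab (a sketch based on the paths used in this article):

```
/home/brook/DataEXT4/SoftwareDownloads /srv/nfs/softwaredownloads none bind 0 0
```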
The corresponding entries in /etc/exports are:

/srv/nfs -fsid=0,crossmnt 192.168.56.104 192.168.56.103 192.168.56.101
/srv/nfs/softwaredownloads 192.168.56.104 192.168.56.103 192.168.56.101
sudo exportfs -arv

The output of this command will first show warnings regarding missing options and then list the exported directories and their hosts, as shown below.
100% 16:36:41 USER: brook HOST: ARCH-16ITH6 PCD: 3s
~ ❯$ sudo exportfs -arv
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.104:/srv/nfs".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/srv/nfs".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/srv/nfs".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.104: suggest 192.168.56.104(sync) to avoid warning
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.104:/srv/nfs/softwaredownloads".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.103: suggest 192.168.56.103(sync) to avoid warning
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/srv/nfs/softwaredownloads".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.101: suggest 192.168.56.101(sync) to avoid warning
exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/srv/nfs/softwaredownloads".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x
exporting 192.168.56.104:/srv/nfs/softwaredownloads
exporting 192.168.56.103:/srv/nfs/softwaredownloads
exporting 192.168.56.101:/srv/nfs/softwaredownloads
exporting 192.168.56.104:/srv/nfs
exporting 192.168.56.103:/srv/nfs
exporting 192.168.56.101:/srv/nfs
100% 16:36:43 USER: brook HOST: ARCH-16ITH6
~ ❯$

The warnings can be disregarded; as we will see below, the default NFS options are applied automatically.
100% 19:48:24 USER: brook HOST: ARCH-16ITH6
~ ❯$ sudo exportfs -v
/srv/nfs 192.168.56.104(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs 192.168.56.103(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs 192.168.56.101(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs/softwaredownloads 192.168.56.104(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs/softwaredownloads 192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/srv/nfs/softwaredownloads 192.168.56.101(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
100% 19:48:27 USER: brook HOST: ARCH-16ITH6
~ ❯$
[brook@Rocky16ITH6-VM1 ~]$ showmount -e ARCH-16ITH6.localdomain
Export list for ARCH-16ITH6.localdomain:
/srv/nfs/softwaredownloads 192.168.56.101,192.168.56.103,192.168.56.104
/srv/nfs                   192.168.56.101,192.168.56.103,192.168.56.104
[brook@Rocky16ITH6-VM1 ~]$ sudo mkdir /mnt/VMhostNFS/
[brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/
[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/VMhostNFS from ARCH-16ITH6.localdomain:/
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1
[brook@Rocky16ITH6-VM1 ~]$ tree -L 3 /mnt/VMhostNFS/
/mnt/VMhostNFS/
└── softwaredownloads
    ├── Arch
    │   ├── aibs.sh
    │   ├── archlinux-bootstrap-2022.01.01-x86_64.tar.gz
    │   ├── archlinux-bootstrap-2022.01.01-x86_64.tar.gz.sig
    │   ├── root.x86_64
    │   └── sha1sums.txt
    ├── Mozilla
    ├── RockyLinux
    │   ├── CHECKSUM
    │   ├── Rocky-8.6-x86_64-dvd1.iso
    │   └── rockyvm2-ks.cfg
    ├── texlive
    │   ├── opentype-info.log
    │   ├── opentype-info.pdf
    │   ├── sample2e.aux
    │   ├── sample2e.dvi
    │   ├── sample2e.log
    │   ├── sample2e.pdf
    │   ├── sample2e.ps
    │   ├── texlive2020.iso
    │   ├── texlive2020.iso.sha512
    │   ├── texlive2020.iso.sha512.asc
    │   └── texput.log
    ├── VirtualBox
    │   └── VBoxGuestAdditions_6.1.34.iso
    └── vivaldi
        └── vivaldi-stable-5.3.2679.61-1.x86_64.rpm

8 directories, 20 files
[brook@Rocky16ITH6-VM1 ~]$
[brook@Rocky16ITH6-VM1 ~]$ ls -la /mnt/VMhostNFS/softwaredownloads/RockyLinux
total 10950680
drwxr-xr-x. 2 brook brook        4096 Jul 12 18:27 .
drwxr-xr-x. 8 brook brook        4096 Jul 17 15:30 ..
-rw-r--r--. 1 brook brook         450 Jul 10 00:28 CHECKSUM
-rw-------. 1 brook brook          63 Jul 10 20:56 .directory
-rw-r--r--. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
-rw-r--r--. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
[brook@Rocky16ITH6-VM1 ~]$

The output of the ls command shows that the user has read and write permissions on the contents of this directory, but because we didn't specify the rw NFS option, we are not able to write to the imported directories, as indicated in the output when attempting to copy a file. Read-only is one of the default export options applied when it is not explicitly overridden with the rw option in the /etc/exports file.
[brook@Rocky16ITH6-VM1 ~]$ cp /mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg /mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg.copy cp: cannot create regular file '/mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg.copy': Read-only file system
An important item to note in the process above: the export must be mounted on the client as

[brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/

and not

[brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/srv/nfs /mnt/VMhostNFS/

otherwise, while the exports will be accessible by the client, the service will fall back to NFSv3, as shown in the following output of nfsstat -m after mounting using the second of the above commands (/srv/nfs instead of /):

[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/VMhostNFS from ARCH-16ITH6.localdomain:/srv/nfs
 Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.56.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.56.1
Configuration of an NFS server is easily done with the YaST NFS module as shown in the following set of images.
The /etc/exports file generated by the above YaST NFS Server configuration actions is shown in the listing below.
100% 18:44:22 USER: brook HOST: 16ITH6-openSUSE
~ ❯$ cat /etc/exports
/home/brook/DataEXT4/SoftwareDownloads 192.168.56.104(ro,root_squash,sync,no_subtree_check) 192.168.56.103(ro,root_squash,sync,no_subtree_check) 192.168.56.101(ro,root_squash,sync,no_subtree_check) 192.168.56.102(ro,root_squash,sync,no_subtree_check)
With this configuration on the openSUSE physical host acting as an NFS server, we are ready to import the exports in the Rocky Linux VM. On the client we can verify the exported directories on the server, mount the exports on a mountpoint in the client's filesystem hierarchy, verify the NFS mount with nfsstat -m, and access the imported filesystem, as shown in the following listing.
[brook@Rocky16ITH6-VM1 ~]$ showmount -e 192.168.56.1
Export list for 192.168.56.1:
/home/brook/DataEXT4/SoftwareDownloads 192.168.56.102,192.168.56.101,192.168.56.103,192.168.56.104
[brook@Rocky16ITH6-VM1 ~]$ sudo mount 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads /mnt/hostnfs
[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/hostnfs from 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1
[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/hostnfs
total 24
drwxr-x--x. 3 brook brook 4096 Feb 12 01:17 Arch
drwxr-x--x. 2 brook brook 4096 Jun  1 16:02 Mozilla
drwxr-x--x. 2 brook brook 4096 Jul 23 18:53 RockyLinux
drwxr-x--x. 2 brook brook 4096 Dec 11  2020 texlive
drwxr-x--x. 2 brook users 4096 Jul 17 15:30 VirtualBox
drwxr-x--x. 2 brook users 4096 Jul  1 00:34 vivaldi
[brook@Rocky16ITH6-VM1 ~]$
Configuration of an NFS client host only requires installation of the package that contains the appropriate client components. After the necessary components are installed, only a mount command is needed to access the directories exported by the server, as shown previously. To make the mount of imported directories persistent, a line can be added to /etc/fstab.
On some distributions the package that contains the NFS client components is the same as the package that contains the NFS server components, typically nfs-utils. This is the case with Arch Linux and Red Hat Enterprise Linux (and clones); with these distributions, installing nfs-utils installs both NFS server and client components.
In other distributions, the NFS client components are in a separate package from the NFS server components. In openSUSE, the package nfs-client contains the necessary client components, while the optional package yast2-nfs-client enables a YaST module which allows configuration of a persistent mount of imported directories by adding a line in /etc/fstab through a GUI.
Ubuntu also packages the NFS client separately from the NFS server components. The only necessary package for enabling an NFS client is nfs-common.
After ensuring the necessary client components are installed, all that is necessary is to mount imported directories with a mount command such as
sudo mount 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads /mnt/hostnfs
or, as discussed in Configuring NFS on Host (NFS Server) -> Arch
sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/
where the second form imports the NFS root (the filesystem hierarchy location identified as the NFS root in the exports file) and the first imports the actual directory.
To make the mount persistent after reboots, the details of the imported directory in the mount command can instead be placed in /etc/fstab with a line similar to that of a line that specifies local storage, i.e.,
server:path /mountpoint fstype option1,option2,...,optionN 0 0
server:path identifies the NFS server and the exported directory, where "server" can be an IP address or a fully qualified domain name. For example, the following line in /etc/fstab is equivalent to the second mount command above.
192.168.56.1:/ /mnt/VMhostNFS nfs defaults 0 0
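The defaults keyword can be replaced with explicit mount options. For example, a variant using standard options that mark the filesystem as network-dependent and mount it on first access, rather than at boot, might look like the following (whether these options suit a given setup is a judgment call):

```
192.168.56.1:/ /mnt/VMhostNFS nfs _netdev,noauto,x-systemd.automount 0 0
```

The _netdev option tells the system the filesystem requires the network, and noauto with x-systemd.automount defers mounting until the mountpoint is first accessed, avoiding boot delays when the server is unreachable.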
Using this exports configuration on the server, a client can mount the exported directory with:
sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/
This configuration on the Arch Linux VirtualBox host was adequate to allow the Rocky Linux VM -- labeled as VM3 - Rocky Linux 8.6 (RHEL 8) in the diagram at the top of the article -- to import the exported directory. The following listing shows the mount command executed on the NFS client to mount the export in the local filesystem hierarchy. The listing also shows the client accessing the export with the ls command. The third command shown in the listing provides statistics on the NFS mount, indicating details such as an identification of the NFS server (as an FQDN and as an IP address), the client's IP address, the mount location, the NFS options, and the version of the NFS protocol in use -- in this case version 4.2.
[brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/
[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/hostnfs
total 10950668
-rw-r--r--. 1 brook brook         450 Jul 10 00:28 CHECKSUM
-rw-r--r--. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
-rw-r--r--. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/hostnfs from ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1
[brook@Rocky16ITH6-VM1 ~]$