Introduction to Network File System (NFS)

Sept. 14, 2022, 9 p.m.

Network File System (NFS) is a system that allows a filesystem hierarchy located on a remote host to be mounted on a local host, so that directories and files stored on the remote host can be accessed as if they were on the local host. NFS, originally developed for Unix over thirty years ago, remains useful on Linux, for example in automated Red Hat Enterprise Linux installations where the installation image is stored on a remote host.

This article provides an overview of the NFS architecture and of its mechanism for specifying which directories are to be accessible to remote hosts, along with a simple example of its use.

Introduction

Network File System (NFS) is a protocol created by Sun Microsystems in 1984 for SunOS that allows a host to share directories -- export directories, in NFS terminology -- with remote hosts such that a remote host can mount the exported directories -- import the directories -- as if they were on local storage. The system is a network client/server architecture, where the exporting host is the server and the importing hosts are clients. It relies on Remote Procedure Calls (RPCs) between the server and clients, as well as on various services that work with the RPCs to provide the NFS functionality.

After installation and configuration, with the appropriate services running and the necessary ports open, the essence of the system is the specification, in the file /etc/exports on the server, of the directories to share and the remote hosts which can access them, along with NFS options. The specification in /etc/exports is in a format such as

/exported/directory remote-host(option1,option2,optionN)

For example, the /etc/exports file could contain the line:

/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)

which exports the directory /home/brook/DataEXT4/SoftwareDownloads/RockyLinux allowing only the remote host at 192.168.56.103 to import the exported directory, further specifying that NFS should operate for this particular remote host with the NFS options listed in the parentheses.

The NFS client can mount the directory exported by the server with a mount command in the form of

mount server-identification:/path/of/exported/directory /path/of/local/mount/point

For example,

sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/

mounts the directory /home/brook/DataEXT4/SoftwareDownloads/RockyLinux exported by the server identified by the FQDN ARCH-16ITH6.localdomain at the local filesystem hierarchy path /mnt/hostnfs/.

NFS has been useful for a very long time. At a site such as a university with networks of UNIX computers, for example, it allowed users to log in to any UNIX workstation and have their home directory -- which would actually be on a remote server -- always available on the local host as an NFS import. It remains useful today, for example in automated installations of Red Hat Enterprise Linux, where the installation image and the installation configuration can be stored on an NFS server.

This article describes the installation and configuration of an NFS server, using an Arch Linux and an openSUSE host for demonstration. The installation of NFS client components and the methods of access are also described.

NFS

Although configuring an NFS server can be as straightforward as editing the exports file and starting the main NFS service for basic functionality, the system can become complex when considering the various versions of the protocol supported by Linux distributions, the associated services required by NFS -- which vary with the version -- the available operational parameters, firewall considerations, and the various security mechanisms that the system supports. Below is a description of some aspects of NFS, but the relevant man pages should be consulted for a complete understanding, among them nfs(5), nfsd(8), exportfs(8), exports(5), nfsstat(8), mountd(8), and nfs.conf(5).

NFS Versions

Although NFS was created nearly forty years ago for UNIX, the system lives on in Linux (and UNIX). The protocol has evolved through several major versions, standardized in IETF Requests for Comments. The latest major version, NFSv4, has itself gone through three minor versions. Most distributions currently support NFSv3 and NFSv4, and some, with additional configuration, also support NFSv2. The version used depends on the particular Linux implementation. Current Red Hat documentation, as of July 2022, states:

The default NFS version in Red Hat Enterprise Linux 8 is 4.2. NFS clients attempt to mount using NFSv4.2 by default, and fall back to NFSv4.1 when the server does not support NFSv4.2. The mount later falls back to NFSv4.0 and then to NFSv3.

This seems to be the general behavior of NFS operation on many distributions. Aside from the kernel version itself, NFS version support on a particular installation is primarily governed by the values of the configuration parameters vers2, vers3, vers4, vers4.0, vers4.1, and vers4.2 in /etc/nfs.conf, the main configuration file of the NFS system, described later in this article.
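
As a sketch of how these parameters appear, the following hypothetical excerpt from /etc/nfs.conf would restrict a server to NFSv4 only (the parameter names are those documented in nfs.conf(5); the values shown are illustrative rather than any particular distribution's defaults):

[nfsd]
# Disable NFSv3 support
vers3=n
# Enable NFSv4, specifically minor versions 4.1 and 4.2
vers4=y
vers4.0=n
vers4.1=y
vers4.2=y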

Among the differences between NFSv3 and NFSv4 are the RPC-related services required by each version. NFSv4 does not use three of the services required by NFSv3, and, related to this evolution, NFSv4 uses only the well-known NFS TCP port 2049, as opposed to the TCP/UDP ports 2049, 111, and 20048 used by NFSv3. RFC 7530, which standardizes the NFSv4 protocol, states:

The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment.

An extension to NFSv4, called Parallel NFS (pNFS), also exists; it improves performance by allowing clients to access data concurrently from multiple servers.

NFS Architecture

The NFS architecture consists of a kernel module and a main user-space RPC process. The main user-space RPC process works together with other user-space RPC service processes to provide the various components of NFS functionality. In modern Linux distributions, the main user-space RPC process is started by a primary systemd service, which starts secondary systemd services, which in turn start the other user-space RPC processes. The primary components of the NFS system are described and illustrated below.

nfsd
This is the Linux kernel module that provides the core of NFS functionality.
Remote Procedure Call Processes
RPC is a computing concept in which a program on one computer causes the execution of a program on a remote computer as if it were being executed locally. NFS uses various RPC processes of its own to implement its functionality. These processes make use of a main RPC library and other RPC libraries for specific transport and security modes.
rpcbind
This is a program that manages ports for local RPC processes. It connects remote RPC clients to the ports used by local RPC services.
rpc.nfsd
This is the main user space RPC program and the counterpart to the kernel module nfsd. It specifies to the kernel module the ports and internet transport layer protocols to use, the NFS versions to support, the network interfaces (hostname or IP address) to use, logging parameters, and the number of threads to use, among other parameters. The desired operational parameters are primarily obtained from /etc/nfs.conf.
rpc.mountd
This is a user space RPC program that processes requests from remote clients to mount filesystems (filesystem hierarchy sub-trees) exported by the NFS server, after first verifying that the requested objects are in fact exported and that the client is authorized to access them.
rpc.idmapd
This is a user space RPC program that translates UIDs and GIDs to names and vice versa.
lockd
This is a kernel thread that runs on both NFS servers and clients and allows clients to lock files on the server.
rpc.statd
This is a user space RPC program that is part of the Network Status Monitor (NSM) RPC protocol. It communicates file lock status between clients and servers.
rpc.rquotad
This is a user space RPC program that provides local filesystem user quota information on a host acting as an NFS server to remote NFS clients. It is also used by external utilities to set quotas on the remote host.
nfs fstab Format
This is a format for specifying an NFS filesystem with the mount command on an NFS client, or in the client's /etc/fstab file for persistent mounts. It allows identification of the server and the exported directory, as well as mount and NFS options, in a mount command or in /etc/fstab (see the example /etc/fstab entry following this list). Mount options can also be specified globally, per server, or per mount point in a configuration file, /etc/nfsmount.conf, instead of in the mount command or in the /etc/fstab file.
/etc/nfs.conf
This file is the main configuration file for the NFS system where parameters that affect NFS functionality are specified. This file is primarily accessed by rpc.nfsd, but other NFS component processes also access the file.
/etc/exports
This file defines the directories to be shared by the NFS server -- the exports -- as well as the remote clients allowed to access the exports and the options that govern the NFS operating parameters for each particular export. Exports can also be defined in files in the directory /etc/exports.d/.
/var/lib/nfs/etab
This file is a table of the current exports provided by an NFS server. It is initialized from the contents of /etc/exports, or alternatively from options to the exportfs command, and it is accessible to the kernel's nfsd module through rpc.mountd. Its entries track the shared directory, the identifier of the NFS clients permitted to access the export, and the NFS options that govern the NFS operating parameters for the export.
/var/lib/nfs/rmtab
This file contains entries of remote NFS clients currently accessing exports.
exportfs
This program initiates the actual sharing of exports by initializing the /var/lib/nfs/etab file (see above). It is typically executed by the systemd service nfs-server.service on modern Linux systems, but it can also be executed manually.
nfs-server.service
This is the primary NFS systemd service. It starts the primary user-space RPC program rpc.nfsd and other systemd services, which in turn execute the other RPC processes. All of the associated systemd services are unified under nfs-utils.service so that they can all be restarted by restarting this one service.
/etc/nfsmount.conf
Options for mounting filesystems of the nfs format in /etc/fstab can be specified in this file.
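
As an illustration of the nfs fstab format described above, a persistent client-side mount of the export used earlier in this article could be specified in /etc/fstab with an entry such as the following (a sketch; the server name and paths are those of the demonstration hosts, and the options are illustrative):

# Mount the export read-only; _netdev delays the mount until the network is up
ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs nfs ro,_netdev 0 0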

The RPC processes that are necessary vary with the NFS protocol version supported by a particular host. As stated in Red Hat documentation:

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind, lockd, and rpc-statd services. The nfs-mountd service is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.

However, the supported NFS versions on a host can be specified in the NFS configuration file /etc/nfs.conf such that multiple NFS versions are supported; if both NFSv3 and NFSv4 are enabled, all RPC components (and their associated ports) are needed.

The relationships between the various components on the NFS server are shown in the diagram below.

The NFS Server Architecture

As is evident from the outputs of the ss command below for the openSUSE and Arch hosts acting as NFS servers, the main NFS kernel program and each NFS-related RPC program runs as a single process with a single PID, even when providing its part of NFS functionality on both IP address families and both TCP and UDP transport layer protocols -- and, in the case of rpc.statd, on multiple ports per protocol.

NFS Security

Security has also evolved since the earlier versions of the protocol, but basic security is provided by allowing only specified remote hosts to access the NFS server. Basic security also matches the UID and/or GID of the user reported by the client against a UID and GID on the server, so that the client user acquires the permissions of the matching user on the server.

 58%  16:23:02  USER: brook HOST: ARCH-16ITH6   
PCD: 12s ~  ❯$ sudo cat /var/lib/nfs/etab
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux       192.168.56.104(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux       192.168.56.103(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux       192.168.56.101(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)

 58%  16:24:15  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ sudo systemctl restart nfs-server.service

 58%  16:24:31  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ sudo cat /var/lib/nfs/etab

 58%  16:24:41  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ 

To demonstrate this, consider the case of an NFS export configuration following the numbered procedure in the section Configuring NFS on Host (NFS Server) -> Arch Linux, below, where the directory /home/brook/DataEXT4/SoftwareDownloads is exported on the NFS server. A subdirectory, /home/brook/DataEXT4/SoftwareDownloads/RockyLinux, contains files with user:group ownership brook:brook, i.e., UID:GID 1000:1000. In the following listing, where commands are executed on the NFS client after the exports are mounted, the first command, by user brook with UID:GID 1000:1000 on the client, is able to list the contents of the directory. In the second command, su is executed to switch to a different user, test, with UID:GID 1001:1001, and in the third command this user attempts to list the contents of the directory, but permission is denied, per the basic NFS security.

[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/VMhostNFS/softwaredownloads/RockyLinux/
total 10950668
-rw-r-----. 1 brook brook         450 Jul 10 00:28 CHECKSUM
-rw-r-----. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
-rw-r-----. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
[brook@Rocky16ITH6-VM1 ~]$ su -l test
Password: 
[test@Rocky16ITH6-VM1 ~]$ ls -l /mnt/VMhostNFS/softwaredownloads/RockyLinux/
ls: cannot open directory '/mnt/VMhostNFS/softwaredownloads/RockyLinux/': Permission denied
[test@Rocky16ITH6-VM1 ~]$ exit
logout
[brook@Rocky16ITH6-VM1 ~]$ id brook
uid=1000(brook) gid=1000(brook) groups=1000(brook),10(wheel)
[brook@Rocky16ITH6-VM1 ~]$ sudo id test
uid=1001(test) gid=1001(test) groups=1001(test)

This basic security mechanism can be thwarted by a misconfigured or malicious client. However, other security mechanisms are available, as described below, the most secure -- but most complicated -- being Kerberos network authentication.

Access Control Lists
ACLs provide a way to extend the basic UNIX permissions of read, write, and execute for each of user, group, and others. One example of the capability of ACLs over standard UNIX permissions is that specific users can be assigned permissions for a file that differ from the file's UNIX permissions. NFSv3 supports ACLs based on the draft POSIX ACLs, which depend on a separate RPC program. NFSv4 includes built-in support for ACLs based on the ACLs used in Windows, which include a larger set of attributes. Since the filesystems that can be exported from Linux generally support POSIX ACLs rather than NFSv4 ACLs, the POSIX ACLs which Linux supports are mapped to NFSv4 ACLs.

In order to use ACLs with NFS, the filesystem containing the exported directory must be mounted on the server with ACL support enabled. ACLs are generally manipulated with the setfacl and getfacl commands (a brief example follows the Kerberos entry below). openSUSE provides the specific commands nfs4-setfacl, nfs4-getfacl, and nfs4-editfacl to support NFSv4 ACLs.
Kerberos network authentication system and Generic Security Service
Instead of depending on the correct representation of the user by the client, this mechanism relies on cryptography to authenticate users. Kerberos is used not only with NFS but with other network services to authenticate users -- which can be actual users or services associated with a program -- as legitimate users of network services; it also authenticates servers and clients to each other. A requirement for using Kerberos to secure NFS is the GSS API, which gives applications a service-independent interface to a security service such as Kerberos, instead of an interface built in to a particular security service.
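
Returning to ACLs, here is a minimal sketch, assuming a hypothetical exported directory /srv/nfs/shared on a filesystem mounted with ACL support: a specific user is granted read access with setfacl, and the resulting ACL is inspected with getfacl.

# Grant the user 'test' read permission (capital X adds execute for directories only)
setfacl -m u:test:rX /srv/nfs/shared
# Display the ACL now attached to the directory
getfacl /srv/nfs/shared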

Installing NFS Server

NFS functionality is provided by a kernel module, nfsd, working together with various user-space remote procedure call (RPC) and other programs. Since the kernel module ships with distribution kernels, only the user-space components need to be installed to provide an NFS server.

In some distributions, these user-space components are contained in a package named nfs-utils; this is the case in Red Hat Enterprise Linux as well as Arch Linux. In these distributions, installing this package will complete the installation of an NFS server.

In other distributions, the package that contains the NFS user-space components is nfs-kernel-server; this is the case in openSUSE and Ubuntu. In these distributions, installing this package will complete the installation of an NFS server.

Below are two examples of the installation and configuration of an NFS server, one on Arch Linux, which provides an nfs-utils package, and another on openSUSE Tumbleweed, which provides the nfs-kernel-server package. These distributions act as NFS servers in the demonstration of using NFS to share directories with Rocky Linux (a RHEL clone) and Ubuntu hosts acting as NFS clients.

openSUSE

The openSUSE Reference manual suggests installing an NFS server (the user-space components) by using the Software Management component of YaST, selecting the Patterns tab, activating the checkbox for File Server in the left pane, then initiating the package management transaction by clicking the "Accept" button (see the following set of images). This installs not only the package required for NFS server functionality, nfs-kernel-server, but also other file-sharing server applications such as ATFTP, TFTP, and VSFTP, in addition to Samba, which is installed by default in openSUSE.

If these other capabilities are not required, NFS server functionality can instead be enabled by installing just the nfs-kernel-server package and, to configure NFS with YaST, the yast2-nfs-server package. This can be done by selecting the File Server option -- but NOT activating its checkbox -- in the left pane of the Patterns view, selecting only the nfs-kernel-server and yast2-nfs-server packages, and then clicking Accept. These packages can also be installed using zypper with

zypper in nfs-kernel-server yast2-nfs-server

The first image of the following set, which depicts the YaST Software Management module's Patterns view when the File Server option is selected (not activated), shows the packages that are components of the "File Server" pattern. The second image shows that the netcfg and nfs-client packages are dependencies of nfs-kernel-server (these are listed in the bottom-right pane, in the "Dependencies" tab). netcfg is installed by default in an openSUSE system and provides the NFS server configuration file /etc/exports in addition to all network-related configuration files, such as /etc/hosts, as shown in the fourth image. nfs-client provides NFS client capabilities. That /etc/exports is owned by a general package providing all network configuration files, instead of by the package that provides the network service, is a notable difference compared to other distributions.

NFS Related Packages in openSUSE


Arch

In Arch Linux, the NFS server user-space components are in the package nfs-utils. Installing this package with the following command will complete the installation of an NFS server.

pacman -S nfs-utils

Installation of this package provides the primary configuration file, /etc/exports, other configuration files, the user-space programs, and manual pages, as well as other files, as shown in the listing below.

 51%  21:36:18  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ pacman -Qlq nfs-utils
/etc/
/etc/exports
/etc/exports.d/
/etc/nfs.conf
/etc/nfsmount.conf
/etc/request-key.d/
/etc/request-key.d/id_resolver.conf
/usr/
/usr/bin/
/usr/bin/blkmapd
/usr/bin/exportfs
/usr/bin/mount.nfs
/usr/bin/mount.nfs4
/usr/bin/mountstats
/usr/bin/nfsconf
/usr/bin/nfsdcld
/usr/bin/nfsdclddb
/usr/bin/nfsdclnts
/usr/bin/nfsdcltrack
/usr/bin/nfsidmap
/usr/bin/nfsiostat
/usr/bin/nfsstat
/usr/bin/nfsv4.exportd
/usr/bin/rpc.gssd
/usr/bin/rpc.idmapd
/usr/bin/rpc.mountd
/usr/bin/rpc.nfsd
/usr/bin/rpc.statd
/usr/bin/rpcdebug
/usr/bin/showmount
/usr/bin/sm-notify
/usr/bin/start-statd
/usr/bin/umount.nfs
/usr/bin/umount.nfs4
/usr/lib/
/usr/lib/systemd/
/usr/lib/systemd/system-generators/
/usr/lib/systemd/system-generators/nfs-server-generator
/usr/lib/systemd/system-generators/rpc-pipefs-generator
/usr/lib/systemd/system/
/usr/lib/systemd/system/auth-rpcgss-module.service
/usr/lib/systemd/system/nfs-blkmap.service
/usr/lib/systemd/system/nfs-client.target
/usr/lib/systemd/system/nfs-idmapd.service
/usr/lib/systemd/system/nfs-mountd.service
/usr/lib/systemd/system/nfs-server.service
/usr/lib/systemd/system/nfs-utils.service
/usr/lib/systemd/system/nfsdcld.service
/usr/lib/systemd/system/nfsv4-exportd.service
/usr/lib/systemd/system/nfsv4-server.service
/usr/lib/systemd/system/proc-fs-nfsd.mount
/usr/lib/systemd/system/rpc-gssd.service
/usr/lib/systemd/system/rpc-statd-notify.service
/usr/lib/systemd/system/rpc-statd.service
/usr/lib/systemd/system/rpc_pipefs.target
/usr/lib/systemd/system/var-lib-nfs-rpc_pipefs.mount
/usr/share/
/usr/share/doc/
/usr/share/doc/nfs-utils/
/usr/share/doc/nfs-utils/NEWS
/usr/share/doc/nfs-utils/README
/usr/share/doc/nfs-utils/README.systemd
/usr/share/man/
/usr/share/man/man5/
/usr/share/man/man5/exports.5.gz
/usr/share/man/man5/nfs.5.gz
/usr/share/man/man5/nfs.conf.5.gz
/usr/share/man/man5/nfsmount.conf.5.gz
/usr/share/man/man7/
/usr/share/man/man7/nfs.systemd.7.gz
/usr/share/man/man7/nfsd.7.gz
/usr/share/man/man8/
/usr/share/man/man8/blkmapd.8.gz
/usr/share/man/man8/exportd.8.gz
/usr/share/man/man8/exportfs.8.gz
/usr/share/man/man8/gssd.8.gz
/usr/share/man/man8/idmapd.8.gz
/usr/share/man/man8/mount.nfs.8.gz
/usr/share/man/man8/mountd.8.gz
/usr/share/man/man8/mountstats.8.gz
/usr/share/man/man8/nfsconf.8.gz
/usr/share/man/man8/nfsd.8.gz
/usr/share/man/man8/nfsdcld.8.gz
/usr/share/man/man8/nfsdclddb.8.gz
/usr/share/man/man8/nfsdclnts.8.gz
/usr/share/man/man8/nfsdcltrack.8.gz
/usr/share/man/man8/nfsidmap.8.gz
/usr/share/man/man8/nfsiostat.8.gz
/usr/share/man/man8/nfsstat.8.gz
/usr/share/man/man8/nfsv4.exportd.8.gz
/usr/share/man/man8/rpc.gssd.8.gz
/usr/share/man/man8/rpc.idmapd.8.gz
/usr/share/man/man8/rpc.mountd.8.gz
/usr/share/man/man8/rpc.nfsd.8.gz
/usr/share/man/man8/rpc.sm-notify.8.gz
/usr/share/man/man8/rpc.statd.8.gz
/usr/share/man/man8/rpcdebug.8.gz
/usr/share/man/man8/showmount.8.gz
/usr/share/man/man8/sm-notify.8.gz
/usr/share/man/man8/statd.8.gz
/usr/share/man/man8/umount.nfs.8.gz
/var/
/var/lib/
/var/lib/nfs/
/var/lib/nfs/etab
/var/lib/nfs/rmtab
/var/lib/nfs/rpc_pipefs/
/var/lib/nfs/sm.bak/
/var/lib/nfs/sm/
/var/lib/nfs/state
/var/lib/nfs/v4recovery/

 51%  21:36:21  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ 

Prerequisites to Configuration

Before configuring the NFS server and clients, certain issues which affect access and reliability, described in this section, must be considered.

NTP Synchronization of NFS Clients and Servers

Before installing NFS servers and clients, as a good practice suggested by the Arch Linux documentation, a time synchronization service such as the Network Time Protocol daemon or Chrony should be enabled on all NFS server and client computers.
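
On a systemd-based distribution this typically amounts to enabling the appropriate service on each host, for example (assuming Chrony is installed; the unit name can vary by distribution, e.g. chrony.service on Debian-family systems):

sudo systemctl enable --now chronyd.service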

FQDN of NFS Server and Clients

When mounting the exported filesystems on the NFS client with the mount command, the NFS server must be identified. This can be done using either the NFS server host's IP address or its fully qualified domain name (FQDN). In some cases -- as described in the Arch Linux Wiki page on NFS -- the FQDN must be used instead of the IP address; otherwise the mount command will hang.

For cases where the basic security mode of NFS is sufficient, such as where all hosts involved in NFS are on a trusted local network -- which is true of the hosts used in this demonstration -- the /etc/hosts file can simply be used to provide the NFS server an FQDN. The following listing shows the /etc/hosts for the Arch Linux instance used in this article as an NFS server.

 100%  17:01:18  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ head -n 25 /etc/hosts
# Generated with hBlock 3.4.0 (https://github.com/hectorm/hblock)
# Blocked domains: 244157
# Date: Sat Jul  9 20:42:27 EDT 2022

# BEGIN HEADER
127.0.0.1       localhost.localdomain localhost
127.0.1.1       ARCH-16ITH6.localdomain ARCH-16ITH6
255.255.255.255 broadcasthost
::1             localhost ARCH-16ITH6
::1             ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
ff02::3         ip6-allhosts
# END HEADER

# BEGIN BLOCKLIST
0.0.0.0 0--e.info
0.0.0.0 0-0.fr
0.0.0.0 0-gkm-portal.net.daraz.com
0.0.0.0 0-owazo4.net.zooplus.de
0.0.0.0 0.0.0.0.beeglivesex.com
0.0.0.0 0.0.0.0.creative.hpyrdr.com
0.0.0.0 0.0.0.0.hpyrdr.com

 100%  17:02:14  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$

This file was originally generated by hblock, but then edited after generation to include the line

127.0.1.1 ARCH-16ITH6.localdomain ARCH-16ITH6

making the FQDN of the host ARCH-16ITH6.localdomain.

NFS and Firewall

If using a firewall to protect the NFS server, the ports necessary for NFS operation must be made accessible to incoming connections before configuring NFS. The necessary ports and the services associated with them are shown in the following table.

Service   NFSv3           NFSv4
nfs       2049 TCP/UDP    2049 TCP
rpcbind   111 TCP/UDP     N/A
mountd    20048 TCP/UDP   N/A

For NFSv4, only TCP port 2049 is required to be open. If the NFS server or clients do not support this version, if the NFS server is configured not to use NFSv4 -- which is not the case in the examples used in this article -- or if NFSv3 is also to be supported, the other ports, namely port 111, used by rpcbind, and port 20048, used by rpc.mountd, must also be open.
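
On a distribution using firewalld, the equivalent of the YaST procedure shown later in this article is to allow the predefined nfs, rpc-bind, and mountd services in the active zone, for example:

# Allow the services needed for both NFSv3 and NFSv4 in the permanent configuration
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
# Reload so the permanent configuration takes effect immediately
sudo firewall-cmd --reload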

The following listing shows the output of ss, with options specified to show listening UDP and TCP ports and the processes using them, on openSUSE Tumbleweed after using its YaST Firewall module -- which interfaces to firewalld -- to open the ports necessary for both NFSv3 and NFSv4 operation (the procedure is shown below). As a result of these actions in YaST Firewall, TCP port 2049 is open on IPv4 and IPv6 for use by the kernel NFS process (signified by the lack of process information in the "Process" column of the ss output). IPv4 and IPv6 ports 111 and 20048 -- in this case both UDP and TCP -- are also open for the rpcbind and rpc.mountd processes. We can also see that rpcbind makes numerous other ports available for the rpc.statd processes.

 97%  19:41:18  USER: brook HOST: 16ITH6-openSUSE   
PCD: 7m17s ~  ❯$ sudo ss -tupnl
[sudo] password for root: 
Netid        State         Recv-Q         Send-Q                 Local Address:Port                  Peer Address:Port        Process                                                           
udp          UNCONN        0              0                          127.0.0.1:732                        0.0.0.0:*            users:(("rpc.statd",pid=6492,fd=5))                              
udp          UNCONN        0              0                            0.0.0.0:20048                      0.0.0.0:*            users:(("rpc.mountd",pid=6488,fd=4))                             
udp          UNCONN        0              0                            0.0.0.0:37800                      0.0.0.0:*            users:(("VBoxHeadless",pid=10665,fd=25))                         
udp          UNCONN        0              0                            0.0.0.0:5353                       0.0.0.0:*            users:(("avahi-daemon",pid=1177,fd=11))                          
udp          UNCONN        0              0                            0.0.0.0:38297                      0.0.0.0:*            users:(("VBoxHeadless",pid=10665,fd=24))                         
udp          UNCONN        0              0                            0.0.0.0:55648                      0.0.0.0:*            users:(("VBoxHeadless",pid=10665,fd=27))                         
udp          UNCONN        0              0                            0.0.0.0:42030                      0.0.0.0:*            users:(("rpc.statd",pid=6492,fd=8))                              
udp          UNCONN        0              0                            0.0.0.0:45204                      0.0.0.0:*            users:(("VBoxHeadless",pid=10665,fd=26))                         
udp          UNCONN        0              0                            0.0.0.0:45511                      0.0.0.0:*            users:(("avahi-daemon",pid=1177,fd=13))                          
udp          UNCONN        0              0                            0.0.0.0:48867                      0.0.0.0:*                                                                             
udp          UNCONN        0              0                            0.0.0.0:68                         0.0.0.0:*            users:(("dhclient",pid=8521,fd=6))                               
udp          UNCONN        0              0                            0.0.0.0:111                        0.0.0.0:*            users:(("rpcbind",pid=1227,fd=5),("systemd",pid=1,fd=43))        
udp          UNCONN        0              0                          127.0.0.1:323                        0.0.0.0:*            users:(("chronyd",pid=1564,fd=5))                                
udp          UNCONN        0              0                               [::]:49982                         [::]:*                                                                             
udp          UNCONN        0              0                               [::]:51846                         [::]:*            users:(("rpc.statd",pid=6492,fd=10))                             
udp          UNCONN        0              0                               [::]:52144                         [::]:*            users:(("avahi-daemon",pid=1177,fd=14))                          
udp          UNCONN        0              0                               [::]:20048                         [::]:*            users:(("rpc.mountd",pid=6488,fd=6))                             
udp          UNCONN        0              0                               [::]:5353                          [::]:*            users:(("avahi-daemon",pid=1177,fd=12))                          
udp          UNCONN        0              0                               [::]:111                           [::]:*            users:(("rpcbind",pid=1227,fd=7),("systemd",pid=1,fd=45))        
udp          UNCONN        0              0                              [::1]:323                           [::]:*            users:(("chronyd",pid=1564,fd=6))                                
tcp          LISTEN        0              128                        127.0.0.1:631                        0.0.0.0:*            users:(("cupsd",pid=1552,fd=7))                                  
tcp          LISTEN        0              100                        127.0.0.1:25                         0.0.0.0:*            users:(("master",pid=1820,fd=13))                                
tcp          LISTEN        0              4096                         0.0.0.0:36477                      0.0.0.0:*            users:(("rpc.statd",pid=6492,fd=9))                              
tcp          LISTEN        0              64                           0.0.0.0:2049                       0.0.0.0:*                                                                             
tcp          LISTEN        0              64                           0.0.0.0:35107                      0.0.0.0:*                                                                             
tcp          LISTEN        0              5                          127.0.0.1:6600                       0.0.0.0:*            users:(("mpd",pid=6708,fd=10))                                   
tcp          LISTEN        0              4096                         0.0.0.0:111                        0.0.0.0:*            users:(("rpcbind",pid=1227,fd=4),("systemd",pid=1,fd=41))        
tcp          LISTEN        0              4096                         0.0.0.0:20048                      0.0.0.0:*            users:(("rpc.mountd",pid=6488,fd=5))                             
tcp          LISTEN        0              128                            [::1]:631                           [::]:*            users:(("cupsd",pid=1552,fd=6))                                  
tcp          LISTEN        0              100                            [::1]:25                            [::]:*            users:(("master",pid=1820,fd=14))                                
tcp          LISTEN        0              511                                *:36027                            *:*            users:(("code",pid=10106,fd=41))                                 
tcp          LISTEN        0              64                              [::]:2049                          [::]:*                                                                             
tcp          LISTEN        0              4096                            [::]:39337                         [::]:*            users:(("rpc.statd",pid=6492,fd=11))                             
tcp          LISTEN        0              64                              [::]:38027                         [::]:*                                                                             
tcp          LISTEN        0              4096                            [::]:111                           [::]:*            users:(("rpcbind",pid=1227,fd=6),("systemd",pid=1,fd=44))        
tcp          LISTEN        0              4096                            [::]:20048                         [::]:*            users:(("rpc.mountd",pid=6488,fd=7))                             

 97%  19:41:37  USER: brook HOST: 16ITH6-openSUSE   
PCD: 3s ~  ❯$

The mapping between well-known ports and their associated service names by network-related programs such as ss is typically based on a process (see this Stack Exchange entry for the details) that eventually results in a lookup of the standard file /etc/services, which in the case of openSUSE -- for reasons found in this Stack Exchange post -- has been relocated to /usr/etc/services. firewalld, however, unlike network programs such as ss, uses its own XML files in /usr/lib/firewalld/services/ to generate the mappings, for example to display the service names in the outputs of the firewall-cmd command shown below, in the section openSUSE Firewall Configuration for NFS.

The mappings are ultimately managed by the Internet Assigned Numbers Authority (see RFC 6335: Internet Assigned Numbers Authority (IANA) Procedures for the Management of the Service Name and Transport Protocol Port Number Registry). Unfortunately, as seen in a query for port 2049 in the Service Name and Transport Protocol Port Number Registry, there is a known conflict in the mapping of port 2049, which is assigned to multiple services: nfs and the obscure shilp, apparently a service associated with a CAD program. This conflict has been propagated to the /etc/services file of Arch Linux -- and of other Linux distributions, if not modified by the distribution. The conflict in /etc/services causes ss, when executed with the option to resolve port numbers to well-known service names, to misleadingly display the service name associated with port 2049 as "shilp" instead of nfs on Arch Linux and other distributions.
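
The names a port maps to on a particular system can be checked directly against the services file, for example:

grep -w 2049 /etc/services

On an unmodified Arch Linux installation, this can be expected to show entries for both nfs and shilp on port 2049.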

This misleading information is shown below in the output of ss on Arch Linux, similar to the one produced for openSUSE above. Unlike in the previous output, the -n option is not used with ss, so the well-known service names associated with the ports -- where a mapping is assigned -- are shown instead of the port numbers. The service name associated with the kernel NFS process is shown as "shilp" instead of "nfs". This could be a source of confusion and frustration for new users of NFS on Arch or other distributions. (openSUSE has modified its /usr/etc/services file to comment out the entry that maps port 2049 to shilp.)

Also interesting, besides the unfortunate mapping conflict, is the network service name associated with rpcbind. The rpcbind process is associated with the service name sunrpc, which seems appropriate considering that the NFS protocol was created by Sun and relies on RPC processes.

 73%  18:15:51  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ sudo ss -tupl
Netid State  Recv-Q Send-Q                         Local Address:Port   Peer Address:Port Process                                                    
udp   UNCONN 0      0                                    0.0.0.0:48396       0.0.0.0:*     users:(("VBoxHeadless",pid=5787,fd=27))                   
udp   UNCONN 0      0                                    0.0.0.0:sunrpc      0.0.0.0:*     users:(("rpcbind",pid=1169,fd=5),("systemd",pid=1,fd=36)) 
udp   UNCONN 0      0                              172.16.224.38:ntp         0.0.0.0:*     users:(("ntpd",pid=794,fd=23))                            
udp   UNCONN 0      0                               192.168.56.1:ntp         0.0.0.0:*     users:(("ntpd",pid=794,fd=25))                            
udp   UNCONN 0      0                                  127.0.0.1:ntp         0.0.0.0:*     users:(("ntpd",pid=794,fd=18))                            
udp   UNCONN 0      0                                    0.0.0.0:ntp         0.0.0.0:*     users:(("ntpd",pid=794,fd=17))                            
udp   UNCONN 0      0                                  127.0.0.1:922         0.0.0.0:*     users:(("rpc.statd",pid=1170,fd=5))                       
udp   UNCONN 0      0                                    0.0.0.0:34500       0.0.0.0:*     users:(("VBoxHeadless",pid=5918,fd=25))                   
udp   UNCONN 0      0                                    0.0.0.0:51140       0.0.0.0:*     users:(("VBoxHeadless",pid=5918,fd=28))                   
udp   UNCONN 0      0                                    0.0.0.0:51763       0.0.0.0:*     users:(("VBoxHeadless",pid=5918,fd=27))                   
udp   UNCONN 0      0                                    0.0.0.0:52383       0.0.0.0:*     users:(("rpcbind",pid=1169,fd=10))                        
udp   UNCONN 0      0                                    0.0.0.0:mountd      0.0.0.0:*     users:(("rpc.mountd",pid=1171,fd=4))                      
udp   UNCONN 0      0                                    0.0.0.0:52926       0.0.0.0:*     users:(("VBoxHeadless",pid=5918,fd=24))                   
udp   UNCONN 0      0                                    0.0.0.0:53654       0.0.0.0:*     users:(("VBoxHeadless",pid=5918,fd=26))                   
udp   UNCONN 0      0                                    0.0.0.0:40010       0.0.0.0:*     users:(("rpc.statd",pid=1170,fd=8))                       
udp   UNCONN 0      0                                    0.0.0.0:57475       0.0.0.0:*                                                               
udp   UNCONN 0      0                                       [::]:49233          [::]:*     users:(("rpcbind",pid=1169,fd=11))                        
udp   UNCONN 0      0                                       [::]:sunrpc         [::]:*     users:(("rpcbind",pid=1169,fd=7),("systemd",pid=1,fd=38)) 
udp   UNCONN 0      0      [fe80::64f2:24e1:cecf:a491]%wlp0s20f3:ntp            [::]:*     users:(("ntpd",pid=794,fd=24))                            
udp   UNCONN 0      0           [fe80::800:27ff:fe00:0]%vboxnet0:ntp            [::]:*     users:(("ntpd",pid=794,fd=26))                            
udp   UNCONN 0      0                                      [::1]:ntp            [::]:*     users:(("ntpd",pid=794,fd=19))                            
udp   UNCONN 0      0                                       [::]:ntp            [::]:*     users:(("ntpd",pid=794,fd=16))                            
udp   UNCONN 0      0                                          *:xmsg              *:*     users:(("kdeconnectd",pid=1786,fd=8))                     
udp   UNCONN 0      0                                       [::]:51467          [::]:*                                                               
udp   UNCONN 0      0                                       [::]:mountd         [::]:*     users:(("rpc.mountd",pid=1171,fd=6))                      
udp   UNCONN 0      0                                       [::]:41497          [::]:*     users:(("rpc.statd",pid=1170,fd=10))                      
tcp   LISTEN 0      64                                   0.0.0.0:shilp       0.0.0.0:*                                                               
tcp   LISTEN 0      64                                   0.0.0.0:38627       0.0.0.0:*                                                               
tcp   LISTEN 0      5                                  127.0.0.1:mshvlm      0.0.0.0:*     users:(("mpd",pid=1376,fd=10))                            
tcp   LISTEN 0      4096                                 0.0.0.0:51691       0.0.0.0:*     users:(("rpc.statd",pid=1170,fd=9))                       
tcp   LISTEN 0      4096                                 0.0.0.0:sunrpc      0.0.0.0:*     users:(("rpcbind",pid=1169,fd=4),("systemd",pid=1,fd=35)) 
tcp   LISTEN 0      4096                                 0.0.0.0:mountd      0.0.0.0:*     users:(("rpc.mountd",pid=1171,fd=5))                      
tcp   LISTEN 0      64                                      [::]:40191          [::]:*                                                               
tcp   LISTEN 0      64                                      [::]:shilp          [::]:*                                                               
tcp   LISTEN 0      4096                                    [::]:sunrpc         [::]:*     users:(("rpcbind",pid=1169,fd=6),("systemd",pid=1,fd=37)) 
tcp   LISTEN 0      4096                                    [::]:mountd         [::]:*     users:(("rpc.mountd",pid=1171,fd=7))                      
tcp   LISTEN 0      4096                                    [::]:50419          [::]:*     users:(("rpc.statd",pid=1170,fd=11))                      
tcp   LISTEN 0      50                                         *:xmsg              *:*     users:(("kdeconnectd",pid=1786,fd=9))                     

 73%  18:16:17  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$

openSUSE Firewall Configuration for NFS

Recent installations of openSUSE use firewalld by default instead of the previously used SuSEfirewall2. YaST has a Firewall module that simplifies making changes to firewalld, although the command-line tool firewall-cmd could also be used.

The interactions with YaST's Firewall component are shown in the following set of images.

  • Image 1 shows the main YaST screen with the Firewall module's launcher highlighted.
  • Image 2 shows the screen that is opened after launching the Firewall module. This screen is divided into two panes, where the selection in the left pane opens corresponding controls in the right pane. Upon activation of this screen, the Start-Up item is selected, with "Reload" selected as the action to perform upon accepting firewall configuration changes; the "Reload" setting makes changes take effect immediately and makes them persistent. The left pane shows the available firewall zones, among which is the public zone, the default active zone in a standard openSUSE installation. Selecting any of these zones allows the addition of firewall rules for that zone in the right pane. Note that adding rules to a zone other than the active one will have no effect unless that zone is made the active zone.
  • Image 3 shows the public zone selected. The right pane shows the open ports, by well-known service name, under the Allowed list. The left side of the pane lists ports by well-known service name if the Services tab is activated, or by port number if the Ports tab is activated. To allow a service or port to be accessible, the service or port is selected on the left and the Add button activated. Image 3 shows the nfs service already added to the list of allowed services, which by default includes only the dhcpv6-client service.
  • Image 4 shows the mountd and rpc-bind services added to the allowed services.
YaST firewalld Configuration for NFS


We can verify the changes to the public zone with the firewall-cmd command, as shown below in the output of firewall-cmd --list-all. It shows that the public zone is active and that the services dhcpv6-client, mountd, nfs, and rpc-bind -- or rather the ports associated with them -- are accessible.

16ITH6-openSUSE:~ # firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: wlp0s20f3
  sources: 
  services: dhcpv6-client mountd nfs rpc-bind
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
16ITH6-openSUSE:~ #

We can also verify that the needed ports have been opened by running the nmap command against the IP address of the openSUSE NFS server, on the ports required to support both NFSv3 and NFSv4, as shown in the following listing.

16ITH6-openSUSE:~ # nmap -p 111 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:30 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000082s latency).

PORT    STATE SERVICE
111/tcp open  rpcbind

Nmap done: 1 IP address (1 host up) scanned in 0.14 seconds
16ITH6-openSUSE:~ # nmap -p 20048 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:31 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000055s latency).

PORT      STATE SERVICE
20048/tcp open  mountd

Nmap done: 1 IP address (1 host up) scanned in 0.13 seconds
16ITH6-openSUSE:~ # nmap -p 2049 192.168.56.1
Starting Nmap 7.92 ( https://nmap.org ) at 2022-07-26 20:31 EDT
Nmap scan report for 192.168.56.1
Host is up (0.000067s latency).

PORT     STATE SERVICE
2049/tcp open  nfs

Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
16ITH6-openSUSE:~ # firewall-cmd --list-services
dhcpv6-client mountd nfs rpc-bind
16ITH6-openSUSE:~ # firewall-cmd --list-services --permanent
dhcpv6-client mountd nfs rpc-bind
16ITH6-openSUSE:~ #

Example Arch Linux Firewall Configuration

Unlike openSUSE, which has a pre-configured firewall installed by default, Arch Linux, as a DIY distribution, does not. The following shows the user-added firewall rules in an Arch Linux installation that uses UFW to manage the firewall.

 100%  17:46:45  USER: brook HOST: ARCH-16ITH6   
PCD: 3s ~  ❯$ sudo ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
2049/tcp                   ALLOW IN    192.168.56.0/24           
111/tcp                    ALLOW IN    192.168.56.0/24           
20048/tcp                  ALLOW IN    192.168.56.0/24           
20048/udp                  ALLOW IN    192.168.56.0/24           
111/udp                    ALLOW IN    192.168.56.0/24           
2049/udp                   ALLOW IN    192.168.56.0/24 
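
The rules shown above could have been created with ufw commands along the following lines (a sketch; 192.168.56.0/24 is the local network used in this demonstration):

sudo ufw allow from 192.168.56.0/24 to any port 2049 proto tcp
sudo ufw allow from 192.168.56.0/24 to any port 111 proto tcp
sudo ufw allow from 192.168.56.0/24 to any port 20048 proto tcp
sudo ufw allow from 192.168.56.0/24 to any port 2049 proto udp
sudo ufw allow from 192.168.56.0/24 to any port 111 proto udp
sudo ufw allow from 192.168.56.0/24 to any port 20048 proto udp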

UFW adds these rules to the iptables chain ufw-user-input, as shown in the output of iptables -S. If using iptables directly, the equivalent rules would go in the INPUT chain.

 100%  17:50:24  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT

.. truncated ...

-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 2049 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 111 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p tcp -m tcp --dport 20048 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 20048 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 111 -j ACCEPT
-A ufw-user-input -s 192.168.56.0/24 -p udp -m udp --dport 2049 -j ACCEPT
-A ufw-user-limit -m limit --limit 3/min -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT

 100%  17:50:30  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ 

UID and GID Matching Between NFS Server User and NFS Client User

As mentioned previously in the section NFS Security, the basic security of NFS -- where the sec NFS option has the value sys, the default if it is not overridden -- relies on matching the UID/GID of the owner of the shared files on the NFS server with the UID/GID of the user on the NFS client attempting to access files in exported directories. In the NFS use case of this article, where clients and servers are on the same trusted local network, this security mode is sufficient, but the UID/GID of the NFS client user must match that of the file owner on the NFS server.

Configuring NFS Server

The essence of configuring an NFS server is specifying, in the NFS configuration file /etc/exports, the directories to be shared -- or exported, in NFS terminology -- the remote hosts or networks that will be allowed to import the shared directories, and, optionally, a set of options that determine the operating parameters of NFS for the export.

Entries in /etc/exports have the basic format shown previously and reproduced here:

/exported/directory remote-host(option1,option2,optionN)

Each entry consists of the following elements, separated by whitespace:

  • the filesystem hierarchy path of the directory to be exported
  • an identification of the remote hosts that are allowed to import the exported directory, either as individual hosts, as members of a (sub-)network, or as members of an NIS netgroup
  • an optional set of NFS options inside parentheses that immediately follow the host(s) identification, without any whitespace in between the host identification and the parentheses

It should be noted that the path and the host identification are separated by whitespace, but there MUST NOT be any whitespace between the host identification and the parentheses that contain the NFS options, if options are specified.

The same directory can be exported to multiple hosts by listing each host, along with its NFS options, after the initial string that specifies the exported directory. Alternatively, the exported directory can be listed on multiple lines, each with a single host identification, as shown in the example below.
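
For example, the following two specifications are equivalent (the exported path is hypothetical):

/srv/nfs/shared 192.168.56.103(ro) 192.168.56.104(rw)

/srv/nfs/shared 192.168.56.103(ro)
/srv/nfs/shared 192.168.56.104(rw)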

NFS Client Identification

Remote hosts are identified by:

single host
  • a fully-qualified domain name
  • a host name that is able to be resolved within the local network
  • either an IPv4 or an IPv6 address
multiple hosts
  • a sub-network in IPv4 CIDR notation, in the form a.b.c.d/e where a.b.c.d is the network and e is the netmask bits
  • a sub-network in IPv4 subnet mask notation, in the form a.b.c.d/e.f.g.h, where a.b.c.d is the network and e.f.g.h is the netmask
  • a sub-network in IPv6 CIDR notation, similar to IPv4 CIDR notation, in the form a:b:c:d:e:f:g:h/i, where a:b:c:d:e:f:g:h identifies the network and i is the number of netmask bits
  • an NIS netgroup with the format @group-name
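
To illustrate these forms, the following hypothetical /etc/exports lines use, in order, an FQDN, an IPv4 CIDR sub-network, an IPv4 subnet mask notation, an IPv6 CIDR sub-network, and an NIS netgroup (all names and addresses are illustrative):

/srv/nfs/shared client1.localdomain(ro)
/srv/nfs/shared 192.168.56.0/24(ro)
/srv/nfs/shared 192.168.56.0/255.255.255.0(ro)
/srv/nfs/shared 2001:db8:0:1::/64(ro)
/srv/nfs/shared @trusted-clients(ro)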

NFS Options

If options are not specified for a host, a default set of options is applied. According to the exportfs(8) man page, the default export options for the exportfs command (see the section nfs-server.service and exportfs, below) are sync, ro, root_squash, and wdelay. However, the exports(5) man page specifies a fuller set of default options, which define the default behavior of NFS exports when /etc/exports is sourced by the exportfs command: sync, ro, root_squash, wdelay, hide, no_subtree_check, sec=sys, secure, and no_all_squash. This fuller set of options is applied in both openSUSE and Arch Linux when NFS export options are not specified in /etc/exports.

It should be noted that even if some options are specified in /etc/exports, the options from the default set that are not overridden by explicitly specified options will still be applied. This means that if behavior other than that determined by the default set of NFS options is desired, an option that overrides the relevant default option must be supplied. For example, the default read-only behavior due to the default ro option must be overridden by the read-write option, rw, by including it inside the parentheses that immediately follow the host identification. These default options and some other important options are described below (see the exports(5) man page for complete details); an example entry using several of them follows the list.

sync
This default option causes the NFS server to reply to requests only after any changes made by those requests have been committed to stable storage. This default behavior can be overridden with the async option.
async
This option overrides the sync default option, allowing the NFS server to reply to requests before changes made by those requests have been committed to stable storage.
ro
This default option disallows any NFS request that modifies files. The rw option is used to override this default behavior.
rw
This option overrides the ro option, allowing NFS requests that modify files.
root_squash
This default option enables "root squashing", an NFS operating mode in which the root privileges of a root user on an NFS client are removed when this user accesses the NFS server. In this mode, the UID of the root user on an NFS client accessing the NFS server is mapped to the UID of the nobody user.
no_root_squash
This option overrides the default option root_squash, disabling the "root squashing" operating mode of the NFS export. When "root squashing" is disabled, the privileges of a root user on an NFS client are preserved on the NFS server when accessing exports, because the user's UID is not mapped to the nobody user's UID as in the "root squashing" mode.
all_squash
Similar to "root squashing", this option removes the privileges of all users on an NFS client when accessing the NFS server. Similar to "root squashing" the UIDs of all NFS client users are mapped to the UID of the noboldy user on the NFS server.
no_all_squash
This default option overrides the all_squash option, preserving the native privileges of users on NFS client hosts when accessing the NFS server. With this option, the UIDs of users on NFS client hosts are not mapped to the UID of the nobody user on the NFS server.
wdelay
This default option causes the NFS server to delay committing a write request if related write requests are in progress or if it anticipates further related requests, so that all related write requests can be committed at the same time, increasing performance. This mode only improves performance when write requests are related, and will decrease performance if write requests are unrelated. The option has no effect if the async option is used. It can be overridden with the no_wdelay option.
no_wdelay
This option overrides the default option wdelay. It disables the NFS server's delay before committing related write requests, increasing performance when write requests occurring about the same time are not related.
hide
This default option applies only to NFSv2 and NFSv3, in situations where two filesystem hierarchy paths, one of which is mounted within the other, are both exported. In this case, the option causes the filesystem mounted within the higher-level export to be hidden when the higher-level export is accessed.
nohide
This option overrides the hide option. In situations where two filesystem hierarchy paths, one of which is mounted on the other, are exported, nohide allows the filesystem mounted under the higher-level path to be visible to clients that mount only the higher-level export. Like hide, this option is only relevant in NFSv2 and NFSv3.
no_subtree_check
This default option disables "subtree checking" which refers to the NFS operation mode in which, when a subdirectory of a filesystem is exported but the entire filesystem is not, the server verifies not only that a file accessed from an NFS client is part of the whole filesystem, but also that it is in the exported tree. no_subtree_check increases reliability in certain rare scenarios, but decreases security to a limited extent.
subtree_check
This option enables "subtree checking", which refers to the NFS operation mode in which, when a subdirectory of a filesystem is exported but the entire filesystem is not, the server verifies not only that a file accessed from an NFS client is part of the whole filesystem, but also that it is in the exported tree. subtree_check decreases reliability in certain rare scenarios, but increases security to a limited extent. Subtree checking is also used to ensure that files inside directories to which only root has access can be accessed only if the filesystem is exported with no_root_squash, even if the file itself allows more general access -- maintaining the security of these directories.
crossmnt
This option makes child filesystems of an exported filesystem accessible from an NFS client without their being specifically exported. The nocrossmnt option disables this option if it had been previously set.
sec=sys
The sec option specifies the security mode to apply to a particular export identified by a filesystem-hierarchy-path/remote-host combination. The security modes to be applied to the export are specified as a colon-separated list, in order of preference, after the =. The available security modes are:
  • krb5 This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server.
  • krb5i This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server, and additionally provide integrity protection of data in NFS operations using checksums.
  • krb5p This option parameter specifies that the Kerberos security service is to be used to authenticate users accessing the NFS server, to provide integrity protection of data in NFS operations using checksums, and to protect data transmitted in NFS operations using encryption.
  • sys This option parameter specifies only the basic NFS security which relies on matching the UID/GID of users on client hosts with the UID/GID of file owners on the NFS server. This is the default if the sec= option is not specified.
secure
This option specifies, for an NFS configuration that does not use GSS (see section NFS -> NFS Security, above), that only NFS requests originating from port numbers less than 1024 be allowed.
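
The following sketch of an /etc/exports file illustrates several of the options above in combination. The paths, hosts, and UID/GID values here are hypothetical, not taken from the configuration used elsewhere in this article:

# Read-write, asynchronous replies, root privileges preserved for one admin host.
/srv/admin 192.168.56.10(rw,async,no_root_squash)
# A separate filesystem mounted under /srv/admin, visible to the client without
# an explicit mount of the child export (relevant to NFSv2/NFSv3 only).
/srv/admin/data 192.168.56.10(rw,nohide)
# Read-only, with every client user squashed to a single anonymous account;
# anonuid and anongid (see exports(5)) set the UID/GID used for squashed requests.
/srv/public 192.168.56.0/24(ro,all_squash,anonuid=1000,anongid=1000)
# Kerberos with integrity protection preferred, basic sys security as fallback.
/srv/secure 192.168.56.0/24(rw,sec=krb5i:sys)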

nfs-server.service and exportfs

After specification of directories to be exported in /etc/exports, the actual exporting of the specified directories is initiated by the exportfs program. This command is typically started by the systemd service nfs-server.service, which also starts NFS-related services which in turn start the various other NFS user-space components. Enabling this service will ensure that the NFS server will be available at boot; if nfs-server.service is not enabled, it must be started manually whenever the NFS server is needed.
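
For example, the standard systemd invocation to both enable the service at boot and start it immediately is:

sudo systemctl enable --now nfs-server.service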

The output of the first command in the following listing shows that nfs-server.service first executes exportfs -r, reading the /etc/exports file and included files in /etc/exports.d/, to recreate the entries in /var/lib/nfs/etab. It then starts the nfsd RPC program. The warnings in the output of systemctl status, referring to the fact that desirable NFS options have not been set in the /etc/exports file, can be disregarded, as the default options are applied anyway, as shown in the output of the second command in the listing.

 18:57:04  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$ sudo systemctl status nfs-server
[sudo] password for brook: 
● nfs-server.service - NFS server and services
     Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; preset: disabled)
    Drop-In: /run/systemd/generator/nfs-server.service.d
             └─order-with-mounts.conf
     Active: active (exited) since Mon 2022-09-12 18:06:59 EDT; 50min ago
    Process: 1467 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 1468 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
   Main PID: 1468 (code=exited, status=0/SUCCESS)
        CPU: 3ms

Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: No options for /home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.103: suggest 192.168.56.103(sync) to avoid warning
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux".
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   Assuming default behaviour ('no_subtree_check').
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: No options for /home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.101: suggest 192.168.56.101(sync) to avoid warning
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]: exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux".
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   Assuming default behaviour ('no_subtree_check').
Sep 12 18:06:59 ARCH-16ITH6 exportfs[1467]:   NOTE: this default has changed since nfs-utils version 1.0.x
Sep 12 18:06:59 ARCH-16ITH6 systemd[1]: Finished NFS server and services.

 18:57:20  USER: brook HOST: ARCH-16ITH6   
PCD: 3s ~  ❯$ sudo exportfs -v
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux
                192.168.56.104(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux
                192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux
                192.168.56.101(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)

 18:58:25  USER: brook HOST: ARCH-16ITH6   
 ~  ❯$

Whenever the /etc/exports file is modified while NFS is active, either the nfs-server.service must be restarted or exportfs -r executed, the -r option "reexporting" all directories.
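
That is, after modifying /etc/exports, either of the following commands, executed with elevated privileges, will activate the changes:

sudo exportfs -r
sudo systemctl restart nfs-server.service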

NFS Server Configuration Examples

As the introduction to NFS configuration above suggests, the details of configuring an NFS server can be complicated, but all that is typically necessary, depending on the use case, is to edit /etc/exports and add a line such as

/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104 192.168.56.103 192.168.56.101

to allow the remote hosts at 192.168.56.104, 192.168.56.103, and 192.168.56.101 to import, or access, the exported directory with default NFS options. The NFS service is then started with elevated privileges:

systemctl start nfs-server.service

This basic exports configuration was used in Sharing Host Computer Directories with VirtualBox Guests Using NFS in which a VirtualBox VM host was configured as an NFS server to share a directory with VM guests.

As an alternative to listing individual host IP addresses in /etc/exports, a subnetwork that includes these IP addresses could be used. The line in /etc/exports, using one possible expression for the range of IP addresses 192.168.56.0 to 192.168.56.255 in CIDR notation, would be

/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/24

or, using dotted-decimal subnet mask notation, the line would be

/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/255.255.255.0

permitting all hosts with addresses in the range to access the NFS server. (Note that the VirtualBox host-only network discussed in the referenced article only assigns guests addresses in the range 192.168.56.101 through 192.168.56.254.)

The above lines in /etc/exports will apply the default options, as discussed above, one of which is the ro option allowing only read-only access. If read-write access is required, any of the following equivalent forms could be used instead:

/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.104(rw) 192.168.56.103(rw) 192.168.56.101(rw)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux -rw 192.168.56.104 192.168.56.103 192.168.56.101
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/24(rw)
/home/brook/DataEXT4/SoftwareDownloads/RockyLinux 192.168.56.0/255.255.255.0(rw)

Methods for accessing the exported directories on a client are shown later in this article.

Arch

The Arch Linux documentation, however, recommends a more complicated exports configuration in which an NFS root is exported as well as the specific directories to be exported under the NFS root. The wiki also recommends that the actual shared directory be bind mounted to the exported directory. Following the recommendation, if the directory /home/brook/DataEXT4/SoftwareDownloads/ is the directory to be shared by the NFS server:

  1. Create the directory to be the NFS root:
     100%  18:12:28  USER: brook HOST: ARCH-16ITH6   
     ~  ❯$ sudo mkdir -p /srv/nfs
  2. Create the directory under the NFS root to which the shared directory will be bind mounted:
     100%  18:14:08  USER: brook HOST: ARCH-16ITH6   
    PCD: 5s ~  ❯$ sudo mkdir -p /srv/nfs/softwaredownloads
    
  3. Bind mount the directory to be shared to the exported directory under the NFS root (a note on making this bind mount persistent follows this list):
     100%  18:16:06  USER: brook HOST: ARCH-16ITH6   
     ~  ❯$ sudo mount --bind /home/brook/DataEXT4/SoftwareDownloads /srv/nfs/softwaredownloads
    
  4. Edit the /etc/exports file and include the following lines to export the directory to be shared (indirectly through the bind mount) with the remote hosts 192.168.56.104, 192.168.56.103, and 192.168.56.101. In this exports configuration, /srv/nfs is exported first and identified as the NFS root through the use of the option fsid=0, or alternatively, fsid=root. (See exports(5).)
    /srv/nfs -fsid=0,crossmnt 192.168.56.104 192.168.56.103 192.168.56.101
    /srv/nfs/softwaredownloads 192.168.56.104 192.168.56.103 192.168.56.101
  5. If the nfs-server.service systemd service has not been started yet, start the service to export the specified filesystems. If the service had already been started, reexport the filesystems with
    sudo exportfs -arv
    The output of this command will first show warnings regarding missing options and then list the exported directories and their hosts, as shown below.
     100%  16:36:41  USER: brook HOST: ARCH-16ITH6   
    PCD: 3s ~  ❯$ sudo exportfs -arv
    exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.104:/srv/nfs".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/srv/nfs".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exportfs: /etc/exports [3]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/srv/nfs".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.104: suggest 192.168.56.104(sync) to avoid warning
    exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.104:/srv/nfs/softwaredownloads".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.103: suggest 192.168.56.103(sync) to avoid warning
    exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.103:/srv/nfs/softwaredownloads".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exportfs: No options for /srv/nfs/softwaredownloads 192.168.56.101: suggest 192.168.56.101(sync) to avoid warning
    exportfs: /etc/exports [4]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.56.101:/srv/nfs/softwaredownloads".
      Assuming default behaviour ('no_subtree_check').
      NOTE: this default has changed since nfs-utils version 1.0.x
    
    exporting 192.168.56.104:/srv/nfs/softwaredownloads
    exporting 192.168.56.103:/srv/nfs/softwaredownloads
    exporting 192.168.56.101:/srv/nfs/softwaredownloads
    exporting 192.168.56.104:/srv/nfs
    exporting 192.168.56.103:/srv/nfs
    exporting 192.168.56.101:/srv/nfs
    
     100%  16:36:43  USER: brook HOST: ARCH-16ITH6   
     ~  ❯$
    The warnings can be disregarded; as we will see below, the default NFS options will be applied automatically.
  6. The defaults applied to the exports can be viewed by using the exportfs command again, this time with only the option -v. The output shows the options applied to the exports. Note how the NFS server retained the options explicitly supplied for the NFS root, fsid=0 and crossmnt, while applying the default options.
     100%  19:48:24  USER: brook HOST: ARCH-16ITH6   
     ~  ❯$ sudo exportfs -v
    /srv/nfs        192.168.56.104(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
    /srv/nfs        192.168.56.103(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
    /srv/nfs        192.168.56.101(sync,wdelay,hide,crossmnt,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
    /srv/nfs/softwaredownloads
                    192.168.56.104(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
    /srv/nfs/softwaredownloads
                    192.168.56.103(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
    /srv/nfs/softwaredownloads
                    192.168.56.101(sync,wdelay,hide,no_subtree_check,sec=sys,ro,secure,root_squash,no_all_squash)
    
     100%  19:48:27  USER: brook HOST: ARCH-16ITH6   
     ~  ❯$
  7. On the NFS client, labeled VM3 - Rocky Linux 8.6 (RHEL 8.6), verify the available exports from the server with the showmount command, shown below with its output:
    [brook@Rocky16ITH6-VM1 ~]$ showmount -e ARCH-16ITH6.localdomain
    Export list for ARCH-16ITH6.localdomain:
    /srv/nfs/softwaredownloads 192.168.56.101,192.168.56.103,192.168.56.104
    /srv/nfs                   192.168.56.101,192.168.56.103,192.168.56.104
  8. On the NFS client, create a mount point for the export.
    [brook@Rocky16ITH6-VM1 ~]$ sudo mkdir /mnt/VMhostNFS/
  9. On the NFS client, mount the filesystems exported by the NFS server at the mount point.
    [brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/
  10. On the NFS client, use the nfsstat command to view the statistics of the NFS mount:
    [brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
    /mnt/VMhostNFS from ARCH-16ITH6.localdomain:/
     Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1
    
    
  11. On the NFS client, work with the imported directories as if they were local, for example by viewing the files with the tree command.
    [brook@Rocky16ITH6-VM1 ~]$ tree -L 3 /mnt/VMhostNFS/
    /mnt/VMhostNFS/
    └── softwaredownloads
        ├── Arch
        │   ├── aibs.sh
        │   ├── archlinux-bootstrap-2022.01.01-x86_64.tar.gz
        │   ├── archlinux-bootstrap-2022.01.01-x86_64.tar.gz.sig
        │   ├── root.x86_64
        │   └── sha1sums.txt
        ├── Mozilla
        ├── RockyLinux
        │   ├── CHECKSUM
        │   ├── Rocky-8.6-x86_64-dvd1.iso
        │   └── rockyvm2-ks.cfg
        ├── texlive
        │   ├── opentype-info.log
        │   ├── opentype-info.pdf
        │   ├── sample2e.aux
        │   ├── sample2e.dvi
        │   ├── sample2e.log
        │   ├── sample2e.pdf
        │   ├── sample2e.ps
        │   ├── texlive2020.iso
        │   ├── texlive2020.iso.sha512
        │   ├── texlive2020.iso.sha512.asc
        │   └── texput.log
        ├── VirtualBox
        │   └── VBoxGuestAdditions_6.1.34.iso
        └── vivaldi
            └── vivaldi-stable-5.3.2679.61-1.x86_64.rpm
    
    8 directories, 20 files
    [brook@Rocky16ITH6-VM1 ~]$
  12. We can use the ls command to view the contents of one of the shared directories:
    [brook@Rocky16ITH6-VM1 ~]$ ls -la /mnt/VMhostNFS/softwaredownloads/RockyLinux
    total 10950680
    drwxr-xr-x. 2 brook brook        4096 Jul 12 18:27 .
    drwxr-xr-x. 8 brook brook        4096 Jul 17 15:30 ..
    -rw-r--r--. 1 brook brook         450 Jul 10 00:28 CHECKSUM
    -rw-------. 1 brook brook          63 Jul 10 20:56 .directory
    -rw-r--r--. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
    -rw-r--r--. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
    [brook@Rocky16ITH6-VM1 ~]$
    The output of the ls command shows that the user has read and write permissions on the contents of this directory, but because we didn't specify the rw NFS option, we are not able to write to the imported directories, as indicated in the output below when attempting to copy a file. Read-only is one of the default export options when it is not explicitly overridden with the rw option in the /etc/exports file.
    [brook@Rocky16ITH6-VM1 ~]$ cp /mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg /mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg.copy
    cp: cannot create regular file '/mnt/VMhostNFS/softwaredownloads/RockyLinux/rockyvm2-ks.cfg.copy': Read-only file system
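
Note that the bind mount created in step 3 does not persist across reboots. As a minimal sketch (an addition here, not part of the steps above), a line like the following in /etc/fstab on the server would recreate the bind mount at boot:

/home/brook/DataEXT4/SoftwareDownloads /srv/nfs/softwaredownloads none bind 0 0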

Some items to note in the process above:

  1. When mounting the exported filesystem, in order for the mount to be NFSv4, the root of the filesystem (identified by the fsid= option) must be mounted and not one of the child exports, i.e.,
    [brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/
    and not
    [brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/srv/nfs /mnt/VMhostNFS/
    otherwise, while the exports will be accessible by the client, the service will fall back to NFSv3, as shown in the following output of nfsstat -m after mounting using the second of the above commands (/srv/nfs instead of /):
    [brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
    /mnt/VMhostNFS from ARCH-16ITH6.localdomain:/srv/nfs
     Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.56.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=192.168.56.1
    
    

openSUSE

Configuration of an NFS server is easily done with the YaST NFS module as shown in the following set of images.

  • Image 1 shows the main YaST interface with the NFS Server launcher highlighted.
  • Clicking the launcher activates the main screen of the YaST NFS module, shown in the featured image and in Image 2. On this screen, the main NFS service can be set to be enabled such that it is started automatically by ensuring the "Start" radio button is activated under NFS Server; NFSv4 can be explicitly enabled by ensuring the "Enable NFSv4" checkbox is activated, otherwise only NFSv3 will be available; and NFS can be set to use Kerberos authentication, if it is already configured on the system, by activating the "Enable GSS Security" checkbox. According to openSUSE documentation, the message "Firewall not configurable" -- which appears where, presumably, firewall parameters relevant to NFS would be set -- is a cosmetic defect due to the fact that openSUSE only recently switched to firewalld from SuSE Firewall2 and the new firewall is not yet fully implemented in YaST. The firewall can be configured separately in the YaST Firewall component, as shown previously in this article, to ensure that the necessary port(s) are open.
  • Clicking the "Next" button at the bottom of the main YaST NFS Server screen opens the screen shown in Image 3. Here, directories that are already configured to be exported are shown in the top pane, and hosts that are configured to access the exported directory selected in the top pane are shown in the bottom pane. The controls on this screen are very intuitive: clicking the "Add Directory" button opens a dialog box in which a directory to be exported is added, and the "Add Host" button opens a dialog in which client hosts that can import the directory can be specified. Modifications entered using these two buttons make the appropriate additions to the /etc/exports file. The image shows that a directory has already been added and two hosts already specified as able to access the directory.
  • Image 4 shows the dialog that is opened when clicking the "Add Directory" button. A file path can be entered in the text box, or with the "Browse" button a file manager window can be opened to select the directory to export.
  • Image 5 illustrates the dialog that is opened when clicking the "Add Host" button. Any of the possible methods to identify a host or a group of hosts in the /etc/exports file can be entered in the "Host Wild Card" box. The NFS options for the host -- options that would appear in the parentheses immediately following the host identification in the exports file -- can be entered in the "Options" text box. Leaving this box blank will cause default options to be automatically applied in the /etc/exports file.
  • Image 6 shows the addition of another NFS client host as being able to access the selected exported directory.
  • And Image 7 shows the change to the previous screen after the new client has been added.
Configuring NFS Server in openSUSE's YaST
Configuration of an NFS Server in openSUSE is Simple and Automated in YaST

The /etc/exports file generated by the above YaST NFS Server configuration actions is shown in the listing below.

 100%  18:44:22  USER: brook HOST: 16ITH6-openSUSE   
 ~  ❯$ cat /etc/exports
/home/brook/DataEXT4/SoftwareDownloads  192.168.56.104(ro,root_squash,sync,no_subtree_check) 192.168.56.103(ro,root_squash,sync,no_subtree_check) 192.168.56.101(ro,root_squash,sync,no_subtree_check) 192.168.56.102(ro,root_squash,sync,no_subtree_check)

With this configuration on the openSUSE physical host acting as an NFS server, we are ready to import the exports in the Rocky Linux VM. On the client we can verify the exported directories on the server, mount the exports on a mountpoint in the client's filesystem hierarchy, verify the NFS mount with nfsstat -m, and access the imported filesystem, as shown in the following listing.

[brook@Rocky16ITH6-VM1 ~]$ showmount -e 192.168.56.1
Export list for 192.168.56.1:
/home/brook/DataEXT4/SoftwareDownloads 192.168.56.102,192.168.56.101,192.168.56.103,192.168.56.104
[brook@Rocky16ITH6-VM1 ~]$ sudo mount 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads /mnt/hostnfs
[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/hostnfs from 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1

[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/hostnfs
total 24
drwxr-x--x. 3 brook brook 4096 Feb 12 01:17 Arch
drwxr-x--x. 2 brook brook 4096 Jun  1 16:02 Mozilla
drwxr-x--x. 2 brook brook 4096 Jul 23 18:53 RockyLinux
drwxr-x--x. 2 brook brook 4096 Dec 11  2020 texlive
drwxr-x--x. 2 brook users 4096 Jul 17 15:30 VirtualBox
drwxr-x--x. 2 brook users 4096 Jul  1 00:34 vivaldi
[brook@Rocky16ITH6-VM1 ~]$
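
When the imported filesystem is no longer needed, it can be unmounted on the client like any other mount, for example:

sudo umount /mnt/hostnfs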

Configuring NFS Client

Configuration of an NFS client host only requires installation of the package that contains the appropriate client components. After the necessary components are installed, only a mount command is necessary to access the directories exported by the server, as shown previously. To make the mount of imported directories persistent, a line can be added to /etc/fstab.

Installation of NFS Client Components

On some distributions the NFS client components are in the same package as the NFS server components, typically nfs-utils. This is the case with Arch Linux and Red Hat Enterprise Linux (and clones); with these distributions, installing nfs-utils installs both NFS server and client components.

In other distributions, the NFS client components are in a separate package from the NFS server components. In openSUSE, the package nfs-client contains the necessary client components, while the optional package yast2-nfs-client enables a YaST module which allows configuration of a persistent mount of imported directories by adding a line in /etc/fstab through a GUI.

Ubuntu also packages the NFS client separately from the NFS server components. The only necessary package for enabling an NFS client is nfs-common.
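
For reference, typical installation commands for the distributions mentioned would be along the following lines (the package names are those given above; the invocations are the standard ones for each package manager):

sudo pacman -S nfs-utils        # Arch Linux: server and client components
sudo dnf install nfs-utils      # RHEL and clones: server and client components
sudo zypper install nfs-client  # openSUSE: client components
sudo apt install nfs-common     # Ubuntu: client components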

Mounting Imported Directories

After ensuring the necessary client components are installed, all that remains is to mount the imported directories with a mount command such as

sudo mount 192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads /mnt/hostnfs

or, as discussed in Configuring NFS on Host (NFS Server) -> Arch

sudo mount ARCH-16ITH6.localdomain:/ /mnt/VMhostNFS/

where the second form imports the NFS root (the filesystem hierarchy location identified as the NFS root in the exports file) and the first imports the actual directory.

To make the mount persistent after reboots, the details of the imported directory given in the mount command can instead be placed in /etc/fstab with a line similar to one that specifies local storage, i.e.,

server:path	/mountpoint	fstype option1,option2,...,optionN	0	0

server:path identifies the NFS server and the exported directory, where "server" can be an IP address or a fully qualified domain name. For example, the following line in /etc/fstab is equivalent to the second mount command above.

192.168.56.1:/ /mnt/VMhostNFS nfs defaults 0 0
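
Similarly, an /etc/fstab entry corresponding to the first mount command above might take the following form; the _netdev option, an addition here, marks the filesystem as requiring network access so that mounting is deferred until the network is available:

192.168.56.1:/home/brook/DataEXT4/SoftwareDownloads /mnt/hostnfs nfs defaults,_netdev 0 0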

Using the basic exports configuration shown earlier, with the Arch Linux VirtualBox host acting as the server, a client can mount the exported directory with:

sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/

This command was adequate to allow the Rocky Linux VM -- labeled as VM3 - Rocky Linux 8.6 (RHEL 8) in the diagram at the top of the article -- to import the exported directory. The following listing shows the mount command executed in the NFS client to mount the export in the local filesystem hierarchy. The listing also shows the client accessing the export with the ls command. The third command shown in the listing provides statistics on the NFS mount, indicating details such as the identification of the NFS server (as an FQDN and as an IP address), the client's IP address, the mount location, the NFS options, and the version of the NFS protocol in use -- in this case version 4.2.

[brook@Rocky16ITH6-VM1 ~]$ sudo mount ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux /mnt/hostnfs/
[brook@Rocky16ITH6-VM1 ~]$ ls -l /mnt/hostnfs
total 10950668
-rw-r--r--. 1 brook brook         450 Jul 10 00:28 CHECKSUM
-rw-r--r--. 1 brook brook 11213471744 Jul 10 00:12 Rocky-8.6-x86_64-dvd1.iso
-rw-r--r--. 1 brook brook        2776 Jul 12 18:27 rockyvm2-ks.cfg
[brook@Rocky16ITH6-VM1 ~]$ nfsstat -m
/mnt/hostnfs from ARCH-16ITH6.localdomain:/home/brook/DataEXT4/SoftwareDownloads/RockyLinux
 Flags: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.103,local_lock=none,addr=192.168.56.1

[brook@Rocky16ITH6-VM1 ~]$

References