The required packages differ depending on whether the system is a client or a server. In this Howto, the server is the host that has the files you want to share and the client is the host that will be mounting the NFS share.

NFSv4 client

apt-get install nfs-common 

NFSv4 server

apt-get install nfs-kernel-server 

After installing nfs-kernel-server, the service may fail to start because /etc/exports contains no valid entries yet. Remember to restart the service once you have finished configuring it.

NFSv4 without Kerberos
NFSv4 Server

NFSv4 exports exist in a single pseudo filesystem, into which the real directories are mounted with the --bind option.

Let’s say we want to export our users’ home directories in /home/users. First we create the export filesystem:

mkdir /export
mkdir /export/users 

and mount the real users directory with:

mount --bind /home/users /export/users

To save us from retyping this after every reboot we add the following
line to /etc/fstab

/home/users    /export/users   none    bind  0  0

In /etc/default/nfs-kernel-server we set:

NEED_SVCGSSD=no # no is default

because we are not activating NFSv4 security this time.
In /etc/default/nfs-common we set:

NEED_GSSD=no # no is default

To export our directories to a local network 192.168.1.0/24
we add the following two lines to /etc/exports

/export       192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/users 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)

Be aware of the following points:
Setting the crossmnt option on the main pseudo mountpoint has the same effect as setting nohide on the sub-exports: it allows the client to map the sub-exports within the pseudo filesystem. These two options are mutually exclusive.
Note that locking down which clients can map an export by specifying an IP address together with an arbitrary subnet mask does not work. Either do not set any subnet or use /24 as shown. Can someone please provide a reason for this behaviour?
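As a sketch of the crossmnt alternative mentioned above (the network address and paths are illustrative and must match your own setup):

```shell
# /etc/exports — crossmnt on the pseudo-root replaces nohide on each sub-export
/export        192.168.1.0/24(rw,fsid=0,crossmnt,insecure,no_subtree_check,async)
/export/users  192.168.1.0/24(rw,insecure,no_subtree_check,async)
```

Run exportfs -ra after editing /etc/exports so the server picks up the changes.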
Restart the service

/etc/init.d/nfs-kernel-server restart

NFSv4 Client
On the client we can mount the complete export tree with one command:

mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/ /mnt

We can also mount an exported subtree with:

mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/users /home/users

To save us from retyping this after every reboot we add the following
line to /etc/fstab:

nfs-server:/   /mnt   nfs4    _netdev,auto  0  0

where the auto option mounts on startup and the _netdev option can be used by scripts to mount the filesystem when the network is available. Under NFSv3 (type nfs) the _netdev option will tell the system to wait to mount until the network is available. With a type of nfs4 this option is ignored, but can be used with mount -O _netdev in scripts later. Currently Ubuntu Server does not come with the scripts needed to auto-mount nfs4 entries in /etc/fstab after the network is up.
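The _netdev trick mentioned above can be sketched as a small network hook script (the path /etc/network/if-up.d/mount-nfs4 is illustrative; it assumes the fstab entry shown above is in place):

```shell
#!/bin/sh
# Illustrative /etc/network/if-up.d/mount-nfs4 hook:
# mount every fstab entry carrying the _netdev option
# once the network interface has come up.
mount -a -O _netdev
```

Make the script executable with chmod +x so ifupdown runs it.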
Note on remote NFS paths
They do not work the way they did in NFSv3. NFSv4 has a global root directory, and all exported directories are children of it. So what would have been nfs-server:/export/users under NFSv3 is nfs-server:/users under NFSv4, because /export is the root directory.
Note regarding UID/GID permissions on NFSv4 without Kerberos
They do not work. Can someone please help investigate? Following this guide will result in generic UID/GID values on the export despite the client and server having the same UIDs. Mounting the same share over NFSv3 works correctly with regard to UID/GID. Does this need Kerberos to work fully?
It is not clear what is meant by the UID/GID on the export being generic. This guide does not explicitly state that idmapd must also run on the client side, i.e. /etc/default/nfs-common needs the same settings as described in the server section. If idmapd is running, the UID/GID are mapped correctly. Check with ps ax | grep rpc that rpc.idmapd is running.

If all directory listings show just "nobody" and "nogroup" instead of real user and group names, then you might want to check the Domain parameter set in /etc/idmapd.conf. NFSv4 client and server should be in the same domain. Other operating systems might derive the NFSv4 domain name from the domain name mentioned in /etc/resolv.conf (e.g. Solaris 10).
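A minimal /etc/idmapd.conf sketch (the domain name example.com is illustrative; the value must be identical on client and server):

```shell
# /etc/idmapd.conf — Domain must match on the NFSv4 client and server
[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```

Restart idmapd (or nfs-common) after changing this file.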
If you have a slow network connection and the mount is not being established at boot, you can change the line in /etc/fstab:

nfs-server:/    /mnt   nfs4    noauto  0  0

and execute this mount after a short pause once all devices are loaded. Add the following lines to /etc/rc.local

sleep 5
mount /mnt

If you experience problems like these:
Warning: rpc.idmapd appears not to be running.
All uids will be mapped to the nobody uid.
mount: unknown filesystem type 'nfs4'
(all directories and files on the client are owned by uid/gid 4294967294:4294967294), then you need to set in /etc/default/nfs-common:

NEED_IDMAPD=yes

and restart nfs-common

/etc/init.d/nfs-common restart

The "unknown filesystem" error will disappear as well.

NFSv4 and Autofs
Automount (or autofs) can be used in combination with NFSv4. Details on the configuration of autofs can be found in the AutofsHowto. The configuration is identical to NFSv2 and NFSv3 except that you have to specify -fstype=nfs4 as an option. Automount supports NFSv4's ability to mount all file systems exported by a server at once. The exports are then treated as a single entity, i.e. they are all mounted as soon as you step into one directory on the NFS server's file systems. When auto-mounting each file system separately the behavior is slightly different: in that case you would have to step into each file system to make it show up on the NFS client.
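A sketch of the corresponding autofs configuration (the mount point /nfs, the map file name, and the server name are illustrative):

```shell
# /etc/auto.master — hand the /nfs mount point over to the map below
/nfs  /etc/auto.nfs

# /etc/auto.nfs — mount the server's whole NFSv4 export tree at /nfs/server
server  -fstype=nfs4  nfs-server:/
```

Restart autofs after editing the maps, then step into /nfs/server to trigger the mount.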

NFSv4 and NFSv3 simultaneously
NFSv4 and NFSv3 can be used simultaneously on an NFS server as well as on an NFS client. You have to set up NFSv3 on your NFS server (see SettingUpNFSHowTo). You can then export a file system over NFSv4 and NFSv3 simultaneously; just put the appropriate export statements into /etc/exports and you are done. You might want to do this when you have NFS clients that don't support NFSv4, e.g. Mac OS X and Windows clients. But don't forget about the security risks of NFS with clients that cannot be trusted.
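A sketch of an /etc/exports that serves the same directory over both protocol versions (the network address and paths are illustrative):

```shell
# NFSv4: pseudo-root and sub-export (clients mount nfs-server:/users)
/export        192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/users  192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
# NFSv3: export the real directory directly (clients mount nfs-server:/home/users)
/home/users    192.168.1.0/24(rw,insecure,no_subtree_check,async)
```

Run exportfs -ra afterwards; NFSv3 clients use the real path, NFSv4 clients the pseudo-filesystem path.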

