
NFS server apparently stops

asked 2013-10-19 16:19:03 -0600

digitalwiz

I'm teaching an online embedded Linux class for UC San Diego extension. Two of my students seem to be having the same problem that I've never seen before. Both are running F17 under VMWare.

They can't mount the target board's root file system over NFS. Executing service nfs-server status yields:

[root@localhost linux-3.5.3]# service nfs-server status

Redirecting to /bin/systemctl status nfs-server.service

nfs-server.service - NFS Server

      Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled)
      Active: active (exited) since Sat, 19 Oct 2013 14:59:27 -0600; 7s ago
     Process: 2090 ExecStartPost=/usr/lib/nfs-utils/scripts/nfs-server.postconfig (code=exited, status=0/SUCCESS)
     Process: 2088 ExecStartPost=/usr/sbin/rpc.idmapd $RPCIDMAPDARGS (code=exited, status=0/SUCCESS)
     Process: 2086 ExecStartPost=/usr/sbin/rpc.mountd $RPCMOUNTDOPTS (code=exited, status=0/SUCCESS)
     Process: 2074 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS ${RPCNFSDCOUNT} (code=exited, status=0/SUCCESS)
     Process: 2073 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
     Process: 2071 ExecStartPre=/usr/sbin/rpc.rquotad $RPCRQUOTADOPTS (code=exited, status=0/SUCCESS)
     Process: 2068 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-server.preconfig (code=exited, status=0/SUCCESS)
    Main PID: 2072 (rpc.rquotad)
      CGroup: name=systemd:/system/nfs-server.service
              ├ 2072 /usr/sbin/rpc.rquotad
              ├ 2087 /usr/sbin/rpc.mountd
              └ 2089 /usr/sbin/rpc.idmapd

Oct 19 14:59:26 localhost.localdomain rpc.mountd[2087]: Version 1.2.5 starting

where the key word is exited. On my system, which is F17 running under VirtualBox, it says running. Looking at the graphical services dialog, on my system nfs-server shows as running, while on theirs it shows as "finished".
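A sanity check that doesn't depend on how systemd displays the unit (a sketch: if present, /proc/fs/nfsd/threads shows the kernel nfsd thread count, and it only appears once the nfsd filesystem is mounted, i.e. after rpc.nfsd has run):

```shell
# Check whether the kernel NFS server threads are actually up,
# independent of what "systemctl status" prints.
if [ -r /proc/fs/nfsd/threads ]; then
    msg="nfsd threads running: $(cat /proc/fs/nfsd/threads)"
else
    msg="nfsd threads not started (no /proc/fs/nfsd/threads)"
fi
echo "$msg"
```

If this reports 0 threads (or no file at all) on the students' VMs but a positive count on yours, the difference is real and not just a display quirk.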

What's going on?


Comments

You're running this on each of the systems that is trying to mount NFS shares? If so, then the status of their own NFS server is independent of each other. The only one that counts is the NFS server service on the server you're mounting from. Is there something else stopping the clients from communicating with the NFS server? Perhaps the systems are firewalled?
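One way to test reachability from a client (a sketch; SERVER below is a hypothetical placeholder for the NFS server's address, and rpcinfo comes from rpcbind/nfs-utils):

```shell
# From the client, check whether the server's nfs and mountd RPC
# services are registered and reachable.
SERVER=${SERVER:-localhost}   # placeholder: set to the NFS server's address
if command -v rpcinfo >/dev/null 2>&1; then
    rpc_msg=$(rpcinfo -p "$SERVER" 2>/dev/null | grep -E 'nfs|mountd')
    [ -n "$rpc_msg" ] || rpc_msg="no nfs/mountd registered on $SERVER (or unreachable)"
else
    rpc_msg="rpcinfo not installed"
fi
echo "$rpc_msg"
```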

cobra ( 2013-10-20 09:59:36 -0600 )

We're talking about three separate, independent systems: the two students who are having problems, and mine, which runs correctly. They are trying to mount a root file system on an ARM-based target board using NFS. The difference is that their systems show nfs-server as "exited" or "finished" while mine shows it as running. What we're trying to get at is why the difference.

digitalwiz ( 2013-10-20 12:57:53 -0600 )

What does each /etc/exports look like?

randomuser ( 2013-10-20 19:36:52 -0600 )

Ah yes, now I understand. As randomuser says, what's in the /etc/exports file for each nfs server? Check the firewall settings on each VM too. The other file to check on each virtual machine is /etc/sysconfig/nfs, which allows you to define many of the parameters of the NFS server. Check for any differences there, and put them up here too if you're not sure.

cobra ( 2013-10-21 03:39:14 -0600 )

Here are the two exports files. They look OK to me.

First student's /etc/exports:

# Replace <your_home_directory> with the name of your home directory
# You must be root to edit this file
/home/acharya/ *(rw,no_root_squash,sync,no_subtree_check)

Second student's /etc/exports:

# Replace <your_home_directory> with the name of your home directory
# You must be root to edit this file
/home/dolbonics/ *(rw,no_root_squash,sync,no_subtree_check)
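A quick way to double-check what the kernel is actually exporting, beyond eyeballing the file (a sketch; exportfs ships with nfs-utils and typically needs root, so this falls back to a message when it can't run):

```shell
# Compare the live export table against /etc/exports.
if command -v exportfs >/dev/null 2>&1; then
    exports_msg="current exports: $(exportfs -v 2>/dev/null)"
    [ "$exports_msg" != "current exports: " ] || exports_msg="current exports: none (or not root)"
else
    exports_msg="current exports: exportfs not installed"
fi
echo "$exports_msg"
```

If `exportfs -v` prints nothing even though /etc/exports has entries, run `exportfs -ra` (as root) to re-read the file.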

digitalwiz ( 2013-10-21 11:03:44 -0600 )

1 Answer


answered 2013-10-22 00:55:57 -0600

Check SELinux contexts and booleans. The policycoreutils-devel package contains manpages to help, in particular man nfsd_selinux. Here's my suggestion:

On the server:

semanage fcontext -a -t public_content_rw_t "/exports/user(/.*)?"
restorecon -F -R -v /exports/user

You'll want to check /var/log/audit/audit.log on both the client and the server for SELinux denials. There's a use_nfs_home_dirs boolean, for example. If it is an SELinux problem, a boolean change - or a relabel as above - is almost always enough to solve it.
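For instance, the boolean mentioned above can be inspected like this (a sketch; getsebool comes from the SELinux policycoreutils, so this guards against it being absent or SELinux being disabled):

```shell
# Inspect the use_nfs_home_dirs SELinux boolean.
if command -v getsebool >/dev/null 2>&1; then
    sel_msg=$(getsebool use_nfs_home_dirs 2>/dev/null) \
        || sel_msg="boolean query failed (SELinux disabled?)"
else
    sel_msg="SELinux tools not installed"
fi
echo "$sel_msg"
```

To turn it on persistently you'd run `setsebool -P use_nfs_home_dirs on` as root.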

If it doesn't look like an SELinux problem, speak up and we'll find a better answer :)



Stats

Asked: 2013-10-19 16:19:03 -0600

Seen: 937 times

Last updated: Oct 22 '13