Dear Managers,
I'm having the following problem:
I have two ES40 systems (sys1 and sys2) in an ASE 1.6 configuration. For the
past few days they have been sharing an RA7000 cabinet partitioned into four
RAIDsets (3x90 GB + 1x54 GB).
On each RAIDset an AdvFS domain is built (dom1, dom2, dom3, dom4).
Each AdvFS domain contains some filesets.
Each domain (all filesets of the domain) is associated with an NFS service
(svc1, svc2, svc3, svc4).
Each system is in charge of two NFS services. All filesets of those two
services are mounted locally on that system; the other system mounts them
via NFS (through the start/stop scripts).
Last week I removed one of the services (svc4) from sys1. The system
sys2 still mounts one of that service's filesets (its mount point is /ns1)
from svc4, even though the service no longer exists. When I try to
"umount -f" the NFS filesystem, sys2 tells me "/ns1: Device busy". The
command "fuser -k /ns1" reports "NFS3 Server svc4 not responding still
trying". The df command hangs with the same "NFS3 Server svc4 not
responding still trying" message, and so does every other command that
scans the mounted filesystems. I don't know how to get rid of this stale
NFS mount. I tried recreating the service svc4 and re-exporting the /ns1
filesystem from it, but nothing changed. I can't reboot sys2, nor can I
do anything that would suspend the other services (killing daemons,
etc.), because sys2 is now in production.
If anyone has a solution, please let me know.
Many thanks in advance
Massimo
Received on Thu Dec 02 1999 - 15:34:15 NZDT