SUMMARY: NIS/NFS/Digital UNIX 3.2c questions

From: Richard L Jackson Jr <rjackson_at_osf1.gmu.edu>
Date: Wed, 6 Sep 1995 10:16:15 -0400 (EDT)

Hello,

The following is a summary of my 8 questions. I folded the answers in
with the questions to (hopefully) make it easier to read, and appended
the full text of all the answers at the end.

-- 
Regards,
Richard Jackson                                George Mason University
Computer Systems Engineer                      UCIS / ISO
                                               Computer Systems Engineering
========================== Question intro =================================
I have a grab bag of questions related to NIS, NFS, and 3.2c issues.  I
have two AlphaServer 2100s (a 4/200 and a 4/275), both running Digital
UNIX 3.2c.
========================== Responses From =================================
David St. Romain <stromain_at_alf.dec.com>
Alan alan_at_nabeth.cxo.dec.com
Ron Menner USG/Kernel Networking <rtm_at_zk3.dec.com>
========================== Question #1 =====================================
1. A 3.2c vdump of root attempts to back up /proc.  Why?  I restored a 3.2
vdump and received the same results.  I never noticed this prior to 3.2c.
Is this a proc problem?
path     : /
dev/fset : root_domain#root
type     : advfs
advfs id : 0x2e5e15a7.000ecda0.1
vdump: Date of last level 0 dump: the start of the epoch
vdump: Dumping directories
vdump: unable to get info for file <./proc/02610>; [2] No such file or directory
vdump: unable to get info for file <./proc/02610>; [2] No such file or directory
vdump: Dumping 5202746737 bytes, 113 directories, 20847 files
vdump: Dumping regular files
vdump: unable to read file <./proc/00000>; [22] Invalid argument
vdump: unable to read file <./proc/00001>; [22] Invalid argument
vdump: unable to read file <./proc/00003>; [22] Invalid argument
... for pages of errors.
========================== Answer #1 =====================================
This problem has been reported to DEC and has been elevated to
Engineering.  The workaround, per David St. Romain of the CSC, is to
(1) umount /proc, (2) dump the root filesystem, (3) mount /proc.  This
workaround may impact someone using the /proc capability (e.g., someone
debugging code).  Of course, you could simply leave /proc mounted
and ignore the errors.
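A minimal sketch of that workaround (the dump device /dev/nrmt0h below is
just an example -- substitute your own -- and 'mount /proc' assumes /proc
is listed in /etc/fstab):

        # umount /proc
        # vdump -0 -u -f /dev/nrmt0h /
        # mount /proc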
========================== Question #2 =====================================
2. One of the reasons I upgraded to 3.2c was for DE500 (PCI 10/100
Ethernet card) support.  Unfortunately, the update process does not
supply a genvmunix kernel -- I had to install the LSM/ATM subsets, build
the kernel, then remove the unwanted subsets (roughly the sequence
sketched below), and it took some time to determine which subsets were
required.  Does anyone know how DEC decides which releases get a
genvmunix?  I recommend a genvmunix always be supplied, since new
hardware support is normally introduced with each release.
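For reference, the subset juggling went roughly like this (the subset
names and distribution path below are placeholders; running setld -l on
the distribution directory with no subset names presents a menu of what
is actually on the kit):

        # setld -l /cdrom/ALPHA/BASE OSFLSMBINxxx OSFATMBINxxx
        # doconfig -c MASON
        # cp /sys/MASON/vmunix /vmunix
        # setld -d OSFLSMBINxxx OSFATMBINxxx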
========================== Answer #2 =====================================
This is more of a comment than a question -- no reply received.
========================== Question #3 =====================================
3. One of my 2100's has its system disk on RAID 0+1 via the SWXCR-EB
(aka KZESC-BA).  I noticed that if I run doconfig and then reboot the
system, /sys/MASON/vmunix becomes corrupted.  Note that the SWXCR board
has its battery disabled and uses write-thru cache.  The RAID device is
8 RZ28-VA's with AdvFS for the filesystem.  Here is my evidence...
 TEST /sys/MASON2/vmunix corruption.
  BEFORE BOOT:
        what /sys/MASON2/vmunix|wc      -> 520      5675     43075
        sum /sys/MASON2/vmunix          -> 30520   7334 /sys/MASON2/vmunix
        ls -l /sys/MASON2/vmunix        ->
-rwxr-xr-x   3 root     system   7509616 Aug 26 15:45 /sys/MASON2/vmunix
  AFTER BOOT:
        what /sys/MASON2/vmunix|wc      -> 182      1978     14995
        sum /sys/MASON2/vmunix          -> 29430   7334 /sys/MASON2/vmunix
        ls -l /sys/MASON2/vmunix        ->
-rwxr-xr-x   3 root     system   7509616 Aug 26 15:45 /sys/MASON2/vmunix
Note that I get the same results whether I 'shutdown -h' or
'shutdown -r'.  SWXCR firmware is 2.16, 2100 firmware is SRM 4.1, and I
use ECU 1.8 and RCU 3.11.  I discovered this because /, /var, and /usr
are on the same filesystem and I USED to use 'mv' to move
/sys/MASON2/vmunix to /vmunix.  Now I must use 'cp' to get a good copy
before the reboot (see the sketch below).  With the copy, /vmunix is OK,
but /sys/MASON2/[vmunix, vmunix.OSF1, vmunix.swap] become corrupted
after the reboot.
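Concretely, the pre-reboot sequence is now just the copy, plus (only as
a suggested sanity check) a checksum comparison:

        # cp /sys/MASON2/vmunix /vmunix
        # sum /sys/MASON2/vmunix /vmunix        <- sums should match here
        # shutdown -r now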
========================== Answer #3 =====================================
No response for this question.
========================== Question #4 =====================================
4. I use the above-mentioned DE500 network card as a private (autobahn)
network over which the two 2100's NFS-mount each other's disks.  I mount
/pub, /var/spool/mail, user space, and apps with rw,hard,intr.  I also
do NOT mount the NFS filesystems directly in root; instead I use
symbolic links for things like /pub.  That is, I NFS-mount /pub on
/nfs/pub and have a symbolic link /pub pointing to /nfs/pub on the NFS
client (sketched below).  The NFS client system has its own local /,
/usr, /var, and swap (system space).  I notice the client system hangs
while the NFS server is unavailable.  How do I determine the process
that hangs the system and kill it?  In general, how do I determine which
processes are hung on NFS?  If intr gives me the chance to
kill/interrupt an NFS-related operation, I need to know which one to
kill.
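For reference, the client-side layout looks roughly like this (the
server name 'alpha1' is a placeholder):

        /etc/fstab entry on the client:
                alpha1:/pub   /nfs/pub   nfs   rw,hard,intr  0  0

        and the link that keeps the old path working:
                # ln -s /nfs/pub /pub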
========================== Answer #4 =====================================
No response for this question.
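A sketch of one general way to narrow it down, given that a process
blocked on a hard NFS mount sits in uninterruptible sleep (the mount
point and PID are placeholders, fuser may not be on every release, and
fuser itself can block while the server is unreachable):

        # ps alxww                <- look for processes stuck sleeping in the kernel
        # fuser -c /nfs/pub       <- PIDs with files open on that NFS file system
        # kill -INT <pid>         <- with the intr mount option this should abort
                                     the stalled NFS operation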
========================== Question #5 =====================================
5. Has anyone noticed that 3.2c replaces 'pseudo-device rpty nnn' with
'OPTION RPTY' in the kernel config?  I had rpty set to 512 because I
have up to 400 concurrent sessions; I think the default was 255.  Under
3.2c it appears the limit is 255 again -- my users received 'all network
ports in use' after about 255 or so sessions.  How do I change this?  I
took a guess and modified the running kernel, via kdbx, with nptys=512.
Is this correct?  How do I specify this in the kernel?  The 3.2c
BOOKREADER documents don't talk about RPTY or nptys.  YIKES!
========================== Answer #5 =====================================
Per Ron Menner of USG/Kernel Networking,
        nptys is now a sysconfigtab variable (the subsystem is pts:).
So in order to set nptys, add the following lines to /etc/sysconfigtab:

pts:
        nptys=512

Then just reboot (no kernel config/build needed).  He believes this is
documented in both the release notes and the Kernel Tuning Guide.
COMMENT: Unfortunately, I failed to locate any mention of nptys in
either the release notes or the tuning guide.  The release notes only
mention that 'pseudo-device rpty nnn' is replaced with 'OPTION RPTY'.
Of course, I could have missed the golden nugget that discusses nptys.
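For anyone else making this change, the stanza and a quick post-reboot
check look like this (exact sysconfig output formatting may differ):

        /etc/sysconfigtab:
                pts:
                        nptys=512

        # sysconfig -q pts
        pts:
        nptys = 512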
========================== Question #6 =====================================
6. I have 16,000+ users on the system and currently have 5 RZ28's
mounted as /usr/u1, /usr/u2, ..., /usr/u5, which contain their home
areas.  I plan to RAID 0+1 several RZ29's and make one large /usr/home
area.  Will I take a performance hit since the /usr/home directory will
be large?  Is it better to have a RAID 0+1 logical drive partitioned
into several (say 5) partitions, rather than one large RAID 0+1 device,
and mount the home areas on the /usr/u1, ..., /usr/u5 mount points?  The
main question is whether one large directory would be a bottleneck.  Of
course, I assume a partitioned RAID 0+1 device does not spread (i.e.,
stripe) the load as well as one large RAID 0+1 device, so it appears I
may have a trade-off.
========================== Answer #6 =====================================
Per Alan, it is better to have one large RAID 0+1 set with one large
home directory than to have the RAID device partitioned with a home area
on each partition.  I also like Alan's alternative idea of making one
large RAID device and using multiple directories with symbolic links,
sketched below.
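Sketched, that alternative looks like this (after the old per-disk
/usr/u1 ... /usr/u5 file systems are unmounted and their mount-point
directories removed):

        (create the one large RAID 0+1 file system and mount it on /usr/users)
        # mkdir /usr/users/u1 /usr/users/u2 /usr/users/u3 /usr/users/u4 /usr/users/u5
        # ln -s /usr/users/u1 /usr/u1
        # ln -s /usr/users/u2 /usr/u2
        (...and so on, so the existing /usr/u1 ... /usr/u5 paths keep working)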
========================== Question #7 =====================================
7. This question may be related to #6.  I have noticed that 'pwd' in
/usr/u1/jblow takes about 10 seconds to complete on the NFS client.
However, 'pwd' in /, /usr, and /usr/u1 (/usr/u1 is NFS mounted) takes
the usual fraction of a second, and 'pwd' on the NFS server in
/usr/u1/jblow also takes only a fraction of a second.  Is the problem a
large directory that is NFS exported?  Note that 'ls' in /usr/u1/jblow,
on both the NFS client and the server, takes only a fraction of a second
to complete.  Where is the bottleneck?  Note: /usr/u1 has 3600+ entries
(user directories).
========================== Answer #7 =====================================
Per Alan, this is to be expected for directories with many entries.
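Alan's full reply below points at NFS lookups timing out on a path
component.  A quick way to see where the time goes is to time the real
pwd binary (a shell built-in pwd may just print $PWD and hide the cost)
and watch the client's NFS counters:

        client# cd /usr/u1/jblow
        client# time /bin/pwd     <- getcwd walks back up the tree, reading and
                                     stat'ing the large directory over NFS
        client# nfsstat -c        <- look for climbing timeout/retransmit counts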
========================== Question #8 =====================================
8. Is anyone using NIS/C2 with a large user base (5000+)?  If so, how
much of a performance hit did you take with NIS on?  I currently have
only C2 enabled but want to use NIS in our pseudo-cluster environment
(RAID, DECsafe, HSZ40, NFS, NIS, home-grown software, etc.) -- at least
until DEC provides that capability.  If anyone at DEC is reading this,
don't let the engineers working on this take sick and vacation time.
However, permit weekly conjugal visits.  ;-)
========================== Answer #8 =====================================
No response for this question.
FULL TEXT OF RESPONSES:
===============================================================
alan_at_nabeth.cxo.dec.com:
        1.  Appears to be a bug.  If you have a support contract, report
            it to the CSC so they can push it up to engineering.
            Otherwise, check the CSC Web pages from time to time to see
            if a patch has shown up.
        6.  Very large directories can have non-linear lookup times, but
            they aren't necessarily slower.  Assuming a typical 8-byte
            directory name, the typical directory entry will use 20-24
            bytes of space.  For 16,000 directories, this is 320,000 to
            384,000 bytes (around 48 8 KB blocks).  This isn't very
            large, and if the single large directory is accessed that
            frequently, reads should hit the cache.
            It wouldn't be hard to construct a test that times the
            "lookup" time for names at the beginning, end and middle
            of the directory.
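            (A sketch of such a test, with made-up user names -- run it
            twice so the second pass shows the cached cost:)

                # time ls -ld /usr/u1/aaanders
                # time ls -ld /usr/u1/mmmiddle
                # time ls -ld /usr/u1/zzzimmer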
            The more interesting question would be around MP locking
            of the directory file.  As long as the directory is mostly
            read access, the MP code should be able to use shared read
            locks and therefore allow multiple readers.  UFS is MP safe
            (but AdvFS isn't).  I don't know if that applies to your
            system.
            If multiple accesses can run in parallel, then using a stripe
            set should help the throughput.
            An alternative to one large "home" directory is to create
            the large stripe set, mount it under /usr/users and then
            have /usr/u1 be a symbolic link to /usr/users/u1, /usr/u2
            be a link to /usr/users/u2, etc.
            If the I/O load isn't distributed evenly between the disks
            today, then using striping may help balance the I/O load.
            re: Partitioned 0+1 vs. large 0+1.
            Hard call.  By partitioning, you effectively short-stroke
            the disk for that subset of users.  If only that set of
            users is busy, then they may see slightly better throughput.
            A given user will see shorter logical seeks, but if all the
            users are equally active, then all the capacity will be in
            use.
            A clear disadvantage of partitioning is that it is more
            "things" to manage.
        7.  Actually, it sounds like you're getting NFS timeouts on
            some file name component in the path.  Higher up in the
            path you never try to go off system, so a slow or dead
            NFS server isn't noticed.  This could just be a general
            NFS performance problem.
===============================================================
Ron Menner USG/Kernel Networking <rtm_at_zk3.dec.com>
        nptys is now a sysconfigtab variable (the subsystem is pts:)
So in order to set nptys, add the following lines to /etc/sysconfigtab:
pts:
        nptys=512
        And then just reboot (no kernel config/build needed).  I believe this
is documented in both the release notes and the Kernel Tuning Guide.
===============================================================