Hello Digital UNIX managers,
I have made a data collector available on
ftp.digital.com:/pub/DEC/collect_1.04.tar.gz. The README follows.
Please ignore the part about looking for a system with a RAID
controller.
Rob Urban
-----------------------------------------------------------------------------
[README]
This is a data collector for Digital UNIX V3.x and V4. You will need
LSM (or at least the LSM includes) in order to compile it. (Or, you
can undefine LSM in the Makefile.)
This is TOTALLY UNSUPPORTED by Digital. I have done my best to make it
useful and accurate, but it is in no way guaranteed to do anything except
take up some space on your hard disk :-). On the other hand, if you find
a bug, or think something is incorrect, send me a mail and, time permitting,
I'll have a look.
If you don't have GNU make, use Makefile.simple, but first edit it
and set OV to the major version of Digital UNIX you're running ('3' or '4').
If you would prefer ready-made executables, copy
collect_<version>_bin.tar.gz from the same directory where you got the
sources.
I haven't tested the RAID (swxcr) subsystem under V4.0, so I
can't comment on it. If you have a system running V4.0 with a RAID
controller (SWXCR), either EISA or PCI, on which I could test my
collector, I'd appreciate hearing from you!
I've just finished debugging LSM support under V4.0 *and* V3.2. Apparently
it didn't work under V3 either; now it's happy. I'm not completely sure
about the average service time reported, even though it agrees with
volstat's; I'm looking into it.
If you use it, and like it, or don't like it, or have comments about
it, or want to suggest improvements... send me a mail.
As far as performance is concerned, the collector will only have a
significant impact if you collect everything every second (i.e.
collect[34] -i1 -fcollect.out). If you leave out the process subsystem,
or give it a longer interval (-i1:10), the collector should
not consume a significant portion of the CPU. I intend to
improve the collection of process statistics when I have the chance.
Run 'collect -h' for more information about the various switches.
I have recently written some perl[5] scripts for extracting data to be
imported into gnuplot, or Excel, if you insist. 'cfilt' slurps in the
ASCII output of the collector and can be given arbitrary expressions to
extract whatever data one wants. Run 'cfilt -h' for more info.
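To give a rough picture of the kind of extraction 'cfilt' does, here is a
tiny awk sketch that pulls named columns out of whitespace-separated data.
The input layout below (a header line naming the columns) is an invented
stand-in, not collect's actual ASCII format, and this is not cfilt's real
expression syntax:

```shell
# Hypothetical input: a header line naming the columns, then one
# sample per line (this layout is an assumption for illustration).
printf 'idle free\n90 1200\n85 1100\n' | awk -v cols='idle free' '
NR == 1 {                       # header: map column names to positions
    n = split(cols, want, " ")
    for (i = 1; i <= NF; i++) pos[$i] = i
    next
}
{                               # data: print only the requested columns
    for (i = 1; i <= n; i++)
        printf "%s%s", $pos[want[i]], (i < n ? " " : "\n")
}'
```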
'cavg' takes the output of 'cfilt' and averages X lines (default 5)
into one line. Here is an example of their use:
collect -i1 -fcoll.bin # collect into 'coll.bin'
...wait a bit :-) ...
^C # terminate
collect -fcoll.bin -p|\ # read binary file (output ascii)
cfilt cpu:idle mem:free|\ # grab cpu idle and free memory
cavg > coll.data # average over 5 samples
Of course, I could have just specified an interval of 5 seconds in the
original collection stage, and then I would not have needed 'cavg',
but who knows what one might need? I think it's better to collect more
than I need, just in case...
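The averaging that 'cavg' performs can be sketched in a few lines of awk.
This is only an illustration of the idea, not the actual perl script; the
group size of 5 matches the default mentioned above:

```shell
# Average every 5 input lines of numeric columns into one output line
# (a sketch of what 'cavg' does; the real tool is a perl script).
printf '1 10\n2 20\n3 30\n4 40\n5 50\n' | awk '
{
    for (i = 1; i <= NF; i++) sum[i] += $i
    if (NR % 5 == 0) {                 # emit one averaged line per group
        for (i = 1; i <= NF; i++)
            printf "%g%s", sum[i] / 5, (i < NF ? " " : "\n")
        split("", sum)                 # portable way to clear the array
    }
}'
```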
If there is enough (any!) interest, I might write a perl-tk/gnuplot front
end for the interpretation of data...
Also, someday the collector might run as a daemon and allow socket
connections over which clients grab the collected data....
Rob Urban (urban_at_rto.dec.com)
Received on Fri Sep 06 1996 - 12:14:46 NZST