Hello,
I didn't receive too many responses. I think a performance analysis is always
so specific to the server and the application it runs that it can hardly be
discussed on a forum like this one.
Thanks to Joe Fletcher, who pointed me to the alt man page:
See man alt for details on setting the gigabit card. Essentially you need
# lan_config -ialt0 -a0
to disable autonegotiation. Substitute the name of your card accordingly.
Cheers
Joe
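As a quick sanity check after running that (assuming the interface really is
named alt0, as in my case), the standard commands are enough:
# lan_config -ialt0 -a0
# ifconfig alt0
ifconfig should still report the interface up and running, and the driver's
"Link up" line on the console or in syslog shows which settings it came up with.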
And thanks to Alan, who made some remarks about wait I/O and performance
analysis:
I'm not familiar with the data presented by collect, so I
can't comment on whether a particular value of something
is good or not. Internally, the system tracks five states
of CPU time: user mode, user mode with an elevated "nice",
kernel mode, idle time, and idle but waiting on I/O. Inside
the kernel these are counts of the small fractions of time
spent in each mode since boot, but most tools present each
as a percentage of the time since a previous sample.
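For watching those percentages over an interval, the stock tools already do
the delta arithmetic for you (column layouts differ a bit between Digital
UNIX versions, so take the exact headings with a grain of salt):
# vmstat 5
# iostat 5
Both print a new sample every 5 seconds: vmstat with the CPU state
percentages, iostat with per-device transfer statistics over the same kind
of interval.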
In a perfect world there would be no waiting for I/O, but
devices are rarely able to get and accept data as fast as
modern CPUs can request it. In a perfect multi-processing
world, other processes (not waiting on a particular I/O)
would be able to use that idle time. But the world is rarely
perfect. A small percentage of time spent waiting for an
I/O to complete isn't uncommon.
You could benchmark the devices being used to determine their
maximum I/O capacity under real loads. Such loads have to
be specially constructed to eliminate the usual sources of
waiting. In general, having multiple I/Os to/from a device
waiting for the device is the way to maximize the data and
request rates. For testing an individual device, not making
it wait on another device is good. So generally, multiple
I/O requests out of or into memory are how you maximize the
performance of an individual subsystem.
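A rough sketch of that, purely as an illustration (the device name /dev/rrz8c
is only a placeholder for whatever raw device you want to exercise, and the
counts just size the test): three readers over different parts of the device
keep several requests outstanding at once.
# time dd if=/dev/rrz8c of=/dev/null bs=64k count=8192 &
# time dd if=/dev/rrz8c of=/dev/null bs=64k count=8192 skip=8192 &
# time dd if=/dev/rrz8c of=/dev/null bs=64k count=8192 skip=16384 &
# wait
Each dd moves 512 MB; dividing the total by the longest elapsed time gives the
device's sustained rate with multiple I/Os queued against it.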
For a file system, sequential reading of a large file does a
pretty good job of this with minimal need for custom tools,
because the file systems generally support read-ahead. They
have the next I/O waiting for the last one to complete. For
well allocated files, they also support very long contiguous
transfers that reduce the time spent generating an I/O and
allow the subsystem to just move data.
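A minimal version of that sequential-read test (the file name and block size
are only examples):
# time dd if=/backup/bigfile of=/dev/null bs=64k
The file's size in MB divided by the reported real time gives the sequential
read rate the file system and device can sustain.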
Getting multiple I/Os to tape is harder, but check the manual
page for the tapex command. I think it has a performance test
built into it.
Such performance tests of the components will give you an idea
of what kind of performance you can actually expect out of a
subsystem. Then you can compare this to the actual figures to see
how far away they are. I can't offer a clue what to expect, but
for writing to a DLT 7000, 3-10 MB/sec is reasonable. The actual
capacity depends on how well the data compresses. Better
compression of the data supports a higher data rate.
My understanding is that the difference between running in
start-stop mode and streaming on the DLT 7000 will be pretty
obvious: data rates will be very poor if the drive is not
streaming.
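One rough way to see which side of that line a drive is on (the tape device
name below is a guess at the usual V4.0E no-rewind naming, so substitute your
own): write a large file straight to the drive with a big block size and work
out the MB/sec.
# time dd if=/backup/bigfile of=/dev/nrmt0h bs=64k
A steady rate somewhere in the 3-10 MB/sec range suggests the drive is
streaming; a rate far below that suggests it is stopping and starting.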
Thanks
Samier.
---------------------- Forwarded by Samier Kesou/Extern/11/BfA-Berlin
on 19.09.2001 11:57 ---------------------------
Samier.Kesou_at_bfa-berlin.de on 17.09.2001 14:18:36
To: tru64-unix-managers_at_sws1.ctd.ornl.gov
Cc: (Bcc: Samier Kesou/Extern/11/BfA-Berlin)
Subject: Performance Analysis, GB interface
Hello,
I'm in the middle of a performance analysis for a backup system. I'm using
collect, version 111.
The system is an AlphaServer 4100 with a StorageTek library (6 DLT 7000 drives
on three buses) running Digital UNIX V4.0E PK1.
The CPU sometimes goes into wait I/O (1-12 idle ticks); otherwise the CPU is in
total use.
Questions:
Is 1-12 WIO acceptable or not? (Could we gain performance if that did not
occur?)
What does the process "kernel idle" mean?
How can I check whether the tapes are in "streaming mode" while writing to
them? (At least the write process seems to be steady.)
How do I configure a gigabit interface manually (not using autonegotiation)
on Digital UNIX V4.0E? lan_config doesn't seem to have any gigabit options.
I'm not sure whether the console supports gigabit options. The syslog message
says:
vmunix: alt0: Link up Autonegotiated ReceiveFlowControl
Isn't flow control disabled during full duplex?
Thanks
Samier.