Summary Part 2: DEGPA performance issues

From: O'Brien, Pat <pobrien_at_mitidata.com>
Date: Fri, 12 Jan 2001 08:10:59 -0500

I continue to push DEC to give me the performance expected of the DEGPA gigabit
network cards. I have had a DEC support call open now for almost two
months, and I have shared most of the suggestions it contained with all of
you. I feel the need to share with you the latest information provided to
me. In summary, their own testing can only push 400 Mbps through the pipe
in the best scenario, and they want me to accept this.
 

                Pat,

                I have done some testing with our Alpha 1200s, both of which have
                DEGPA network cards which are directly connected to each other.

                I have tried various tests with a program called netperf. It is a
                public-domain test program that can send a variety of packet types
                and sizes back and forth between hosts.

                I used this package to test between the Alphas over the gigabit
                connection and found that with the default system parameter settings
                and the use of the default ipmtu of 1500 for the cards, I consistently
                got about 260-270 Mbps. This is simply bits being sent to the
                interface, then pushed to a corresponding server program at the other
                end. No files, no disks.

                As a side note, to accomplish this, the netperf program and its
                corresponding netserver program on the other end used a considerable
                amount of CPU. I witnessed sustained CPU loads of about 67% during
                these tests.
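One way to see why the CPU load is so high is to estimate how many packets per second the host has to handle at that throughput. A rough back-of-the-envelope calculation (my own illustration, assuming 40 bytes of TCP/IP headers per 1500-byte frame and ignoring ACK traffic):

```python
# Rough packet-rate estimate for the ~267 Mbps result at the default 1500-byte MTU.
# Assumes 40 bytes of TCP/IP headers per frame, so 1460 bytes of payload; ACKs ignored.
throughput_bps = 267e6        # observed throughput, bits per second
payload_bytes = 1500 - 40     # application payload per frame at ipmtu 1500

packets_per_sec = throughput_bps / 8 / payload_bytes
print(f"{packets_per_sec:,.0f} packets/s")  # roughly 22,860 packets/s
```

Tens of thousands of per-packet interrupts and protocol operations per second is a plausible source of the sustained 67% CPU load reported above.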

                If I adjusted the buffer sizes (tcp_sendspace & tcp_recvspace) on
                one system only, then the throughput was drastically reduced. Even if
                these parameters were adjusted as I suggested in a previous note to
                larger values of 131072 from the default of 32768, I witnessed a
                reduction in throughput.
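The warning about changing the buffers on only one side generalizes: sender and receiver socket buffers should be raised together. tcp_sendspace and tcp_recvspace are Tru64 kernel-wide defaults; in the portable socket API the per-connection analogues are SO_SNDBUF and SO_RCVBUF. A minimal loopback sketch showing both ends configured with the same 131072-byte buffers (an illustration of setting both sides, not a benchmark; the value is taken from the note):

```python
import socket
import threading

BUF = 131072  # matches the tcp_sendspace/tcp_recvspace value suggested in the note

def serve(lsock, result):
    """Accept one connection and count the bytes received."""
    conn, _ = lsock.accept()
    total = 0
    while True:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    conn.close()
    result.append(total)

# Receiver: set SO_RCVBUF before listen so accepted sockets inherit it.
lsock = socket.socket()
lsock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
lsock.bind(("127.0.0.1", 0))
lsock.listen(1)
result = []
t = threading.Thread(target=serve, args=(lsock, result))
t.start()

# Sender: set SO_SNDBUF to the matching value before connecting.
c = socket.socket()
c.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
c.connect(lsock.getsockname())
c.sendall(b"x" * (4 * 1024 * 1024))  # push 4 MB through the connection
c.close()
t.join()
lsock.close()
print(result[0])  # 4194304 bytes received
```

Note that kernels may round or clamp the requested buffer sizes, which is one reason a mismatch between the two ends can behave unpredictably.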

                The best throughput using netperf was achieved when I increased the
                sysconfig parameters and used the jumbo frame size (ipmtu = 9000)
                on both systems. With these settings, I was able to get about 430 Mbps
                between the systems.
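The jumbo-frame gain is consistent with simple framing arithmetic. Assuming 40 bytes of TCP/IP headers per frame (a simplification; link-layer overhead is ignored), raising ipmtu from 1500 to 9000 improves payload efficiency only slightly, but cuts the number of packets the host must process by a factor of six:

```python
# Compare payload efficiency and packet counts at ipmtu 1500 vs 9000.
# Assumes 40 bytes of TCP/IP headers per frame; link-layer overhead ignored.
HEADERS = 40

def efficiency(mtu):
    """Fraction of each IP packet that is application payload."""
    return (mtu - HEADERS) / mtu

def packets_for(nbytes, mtu):
    """Packets needed to carry nbytes of payload at a given MTU."""
    payload = mtu - HEADERS
    return -(-nbytes // payload)  # ceiling division

for mtu in (1500, 9000):
    print(f"ipmtu {mtu}: {efficiency(mtu):.1%} efficient, "
          f"{packets_for(10**6, mtu)} packets per MB")
```

That works out to about 97.3% efficiency and 685 packets per MB at 1500, versus 99.6% and 112 packets per MB at 9000, which fits the observation that jumbo frames helped mainly by reducing per-packet CPU work.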

                However, I then created a large binary file (161 Mbytes) and used ftp
                to transfer this file between the systems over the gigabit interfaces
                and found that I got about 30 Mbps. I then transferred the same file
                in the same direction using the 100 Mbps ethernet interface and the
                timing differed by less than a second.

                So we can pretty much conclude that while we are able to push data
                across the gigabit interface at about 5 times the rate of the 100 Mbps
                ethernet pipe, when we introduce disk I/O, much of the advantage of
                the larger network path is eliminated.
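The "about 30 Mbps" figure can be checked directly against the ftp log included below. Converting the reported transfer (my own arithmetic):

```python
# Convert the ftp transfer report (161108224 bytes in 43.48 s) to Mbps.
nbytes = 161_108_224  # size of the test file, from the ftp log
secs = 43.48          # elapsed time, from the ftp log

mbps = nbytes * 8 / secs / 1e6
print(f"{mbps:.1f} Mbps")  # about 29.6 Mbps, far below the 430 Mbps netperf peak
```

The disk-bound transfer runs at well under a tenth of the memory-to-memory netperf result on the same link.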

                This is why it is so difficult to guarantee or even give a general
                guideline for the sort of throughput that one can expect when using
                the DEGPA. As I have said before, all of our reports indicate that
                under ideal conditions, with the fastest Alphas and disks, the best
                you can hope to get is about 400 Mbps on a DEGPA. And this will not
                be through single-stream applications such as ftp.

                I will include some of my test data below. You may wish to grab a
                copy of the netperf utility from the Internet and try your own
                throughput testing.

                I don't believe an escalation to engineering at this time would be
                of any benefit. However, I can send this referral to your local
                area offices if you prefer.

                Al


                # netperf -H mountain
                TCP STREAM TEST to mountain
                Recv Send Send
                Socket Socket Message Elapsed
                Size Size Size Time Throughput
                bytes bytes bytes secs. 10^6bits/sec

                 32768 32768 32768 10.00 92.06


                # netperf -H 192.168.80.19
                TCP STREAM TEST to 192.168.80.19
                Recv Send Send
                Socket Socket Message Elapsed
                Size Size Size Time Throughput
                bytes bytes bytes secs. 10^6bits/sec

                 32768 32768 32768 10.01 257.93

                On both systems:

                # ifconfig alt0 ipmtu 9000

                # netperf -H 192.168.80.19
                TCP STREAM TEST to 192.168.80.19
                Recv Send Send
                Socket Socket Message Elapsed
                Size Size Size Time Throughput
                bytes bytes bytes secs. 10^6bits/sec

                 32768 32768 32768 10.00 271.66


                On both systems:

                # sysconfig -r inet tcp_recvspace=131072
                tcp_recvspace: reconfigured
                # sysconfig -r inet tcp_sendspace=131072
                tcp_sendspace: reconfigured

                # netperf -H 192.168.80.19
                TCP STREAM TEST to 192.168.80.19
                Recv Send Send
                Socket Socket Message Elapsed
                Size Size Size Time Throughput
                bytes bytes bytes secs. 10^6bits/sec

                131072 131072 131072 10.00 432.15

                On both systems:
                # ifconfig alt0 ipmtu 1500

                # netperf -H 192.168.80.19
                TCP STREAM TEST to 192.168.80.19
                Recv Send Send
                Socket Socket Message Elapsed
                Size Size Size Time Throughput
                bytes bytes bytes secs. 10^6bits/sec

                131072 131072 131072 10.21 27.88



                ftp> put vmunixtest23x4 bigtest
                200 PORT command successful.
                150 Opening BINARY mode data connection for bigtest (192.168.80.17,1529).
                226 Transfer complete.
                161108224 bytes sent in 43.48 secs (3618.46 Kbytes/s)

                ftp> put vmunixtest23x4 bigtest
                200 PORT command successful.
                150 Opening BINARY mode data connection for bigtest (16.66.80.18,1530).
                226 Transfer complete.
                161108224 bytes sent in 43.77 secs (3594.85 Kbytes/s)
Received on Fri Jan 12 2001 - 13:12:57 NZDT
