[SUMMARY] Implementing a backbone net for a set of Digital UNIX machines

From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
Date: 01 Jul 1996 13:03:00 +0200

Hi'

I got a couple of comments on my layout and on the use of FDDI instead
of 100BaseT, but no comments on the configuration of the involved UNIX
systems, so I take it that nobody found any flaws in my proposal :-)

Thanks to
    Ray Bellis <Ray.Bellis_at_psy.ox.ac.uk>
    Dave Newbold <phdmn_at_siva.bris.ac.uk>
    sarasin_at_yosemite.cop.dec.com
    David Warren <warren_at_atmos.washington.edu>
    "Dave Golden" <golden_at_falcon.invincible.com>
    Andre Paige <apaige_at_idulab.gov>

I have added my original mail and the communication I had with the
people above as a digest.

----------------------------------------------------------------------

Date: 27 Jun 1996 13:39:13 +0200
From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
To: alpha-osf-managers_at_ornl.gov
Subject: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <rrspbho4ri.fsf_at_teapot.netman.dk>

Hi',

We are in the process of implementing a backbone network for our
company, and just before we start doing anything, I would like to hear
if any of you have any comments or ideas...

We have 4 Digital UNIX machines of varying size that serve nearly 50
users (programmers) on PC's running MS Windows and X servers. All
data, including the home directories of the users and sources for all
projects, is distributed among the servers.

We (consequently) have a great deal of NFS traffic between the
servers, especially when we compile. In order to lighten the load on
the global company network we want to implement an FDDI backbone
network between the servers.

This will look like the following. Note that we also have printers,
routers, terminal concentrators and all sorts of other equipment on the
10Mb net.

            +--+ +--+ +--+ +--+ +--+ +--+ +--+
            |PC| |PC| |PC| |PC| |PC| |PC| |PC|
            +--+ +--+ +--+ +--+ +--+ +--+ +--+
             | | | | | | |
10Mb -------------------------------------
               | | |
          +---------+ +---------+ +---------+
          | OSF | | OSF | ... | OSF |
          +---------+ +---------+ +---------+
               | | |
100Mb +----------------------------+


Now, we think we can accomplish what we want by doing the
following.

The servers currently have addresses of the form a.b.c.d where
1<=d<=14, and this will remain true for a long time yet.

- we make a small subnetwork in our current class C address space
  a.b.c.xx: a.b.c.240-a.b.c.254

- for each of the servers, we configure the new FDDI cards with
  these addresses using

    ifconfig fta0 inet a.b.c.(d+240) netmask 255.255.255.240 metric 0

- for each of the servers, we reconfigure the old ethernet cards by
  adding "metric 10" to the IFCONFIG_0 definition in /etc/rc.config

- for each of the servers, we will start to use the routed daemon
  until we know the new backbone network works as expected. When this
  happens we will probably want to add some static routes (though this
  is not certain)
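
Put together, the per-server changes might look like the sketch below
(a dry run only: the interface names fta0/tu0 and the a.b.c prefix are
placeholders, and the commands are echoed rather than executed):

```shell
# Dry-run sketch of the per-server reconfiguration (hypothetical names).
# fta0 = new FDDI interface, tu0 = old ethernet interface,
# a.b.c = our class C prefix, d = this server's host octet (1 <= d <= 14).
d=3
fddi_octet=$((d + 240))

# FDDI interface in the new subnet, preferred route (metric 0):
echo "ifconfig fta0 inet a.b.c.$fddi_octet netmask 255.255.255.240 metric 0"

# Old ethernet interface demoted via "metric 10" (set in the IFCONFIG_0
# definition in /etc/rc.config):
echo "ifconfig tu0 inet a.b.c.$d netmask 255.255.255.0 metric 10"

# Sanity check: the new address really falls inside the a.b.c.240 subnet.
echo "subnet octet: $((fddi_octet & 240))"
```

For d=3 this prints the two ifconfig lines with a.b.c.243 and confirms
the subnet octet is 240.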

Have we forgotten anything serious here? Will it work as expected? Any
other comments or ideas?

/tonny
--
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
------------------------------
Date: Thu, 27 Jun 96 08:08:49 -0400
From: "Dave Golden" <golden_at_falcon.invincible.com>
To: Tonny Madsen <tmadsen_at_netman.dk>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines  
Message-Id: <9606271208.AA16360_at_falcon.invincible.com>
If I were you I would buy an ethernet switch, move the Alphas over to
100 mb/sec ethernet cards and segment your pc population on separate
10 mb/sec segments (not subnets).  This would allow you to view the
network as one subnet and would give you the performance you desire.
(to use your diagram)
	    +--+ +--+ +--+  +--+ +--+ +--+ +--+ 
	    |PC| |PC| |PC|  |PC| |PC| |PC| |PC| 
	    +--+ +--+ +--+  +--+ +--+ +--+ +--+ 
	     |    |    |     |    |    |    |
10Mb	  ---------------   -------------------
                      |       |
                     +==========+
                     |switch    |
                     +==========+
                           |
100Mb          +-----------+----------------+
	       |            |               |
	  +---------+  +---------+     +---------+
	  |   OSF   |  |   OSF   | ... |   OSF   |
	  +---------+  +---------+     +---------+
Note that you can get switches with various numbers of 10mb ports and
100 mb ports to suit your needs.
Good luck,
Dave
--
Dave Golden				golden_at_invincible.com
Invincible Technologies Corporation
------------------------------
Date: 27 Jun 1996 16:02:48 +0200
From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
To: "Dave Golden" <golden_at_falcon.invincible.com>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <rrpw6lny47.fsf_at_teapot.netman.dk>
References: <9606271208.AA16360_at_falcon.invincible.com>
Hi' Dave,
From the answers I have seen so far, I can see that I didn't supply
enough information about our current network and the reasons for the
new network.
You wrote:
> 
> If I were you I would buy an ethernet switch, move the Alphas over to
> 100 mb/sec ethernet cards and segment your pc population on separate
> 10 mb/sec segments (not subnets).  This would allow you to view the
> network as one subnet and would give you the performance you desire.
> 
> (to use your diagram)
> 
> 	    +--+ +--+ +--+  +--+ +--+ +--+ +--+ 
> 	    |PC| |PC| |PC|  |PC| |PC| |PC| |PC| 
> 	    +--+ +--+ +--+  +--+ +--+ +--+ +--+ 
> 	     |    |    |     |    |    |    |
> 10Mb	  ---------------   -------------------
>                       |       |
>                      +==========+
>                      |switch    |
>                      +==========+
>                            |
> 100Mb          +-----------+----------------+
> 	       |            |               |
> 	  +---------+  +---------+     +---------+
> 	  |   OSF   |  |   OSF   | ... |   OSF   |
> 	  +---------+  +---------+     +---------+
> 
> 
> Note that you can get switches with various numbers of 10mb ports and
> 100 mb ports to suit your needs.
That's correct, but I want to keep the old setup in order to have a
more fail-safe net.  If the FDDI fails, we still have the ethernet,
though with a much lower bandwidth. And it's cheaper, though not by
much, as I save the cost of the 10Mb access to the switch.
Thanks for your answer.
/tonny
-- 
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk 
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
------------------------------
Date: Thu, 27 Jun 1996 07:26:40 -0700 (PDT)
From: David Warren <warren_at_atmos.washington.edu>
To: tmadsen_at_netman.dk
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines
Message-Id: <199606271426.HAA26346_at_dry.atmos.washington.edu>
You probably will be better off just using static routes. The only
reason to run routed or gated is if you want to route traffic from
other machines through them and broadcast the routing info.
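For instance, such static routes might be sketched like this (purely
hypothetical addresses; the exact route(8) syntax varies by system, so
the commands are only echoed here):

```shell
# Dry-run sketch (hypothetical addresses; check route(8) for exact syntax):
# one static host route per peer server's FDDI address, pinning the
# inter-server traffic to the backbone without running routed.
for host_octet in 241 242 243 244; do
  echo "route add -host a.b.c.$host_octet -interface fta0"
done
```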
David Warren 		INTERNET: warren_at_atmos.washington.edu
(206) 543-0945		Fax: (206) 543-0308
University of Washington
Dept of Atmospheric Sciences, Box 351640
Seattle, WA 98195-1640
-------------------------------------------------------------------------------
DECUS E-PUBS Library Committee representative
SeaLUG DECUS Chair
------------------------------
Date: Thu, 27 Jun 96 17:17:02 -0400
From: sarasin_at_yosemite.cop.dec.com
To: Tonny Madsen <tmadsen_at_netman.dk>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines 
Message-Id: <9606272117.AA00901_at_yosemite.cop.dec.com>
Hello, 
Are you using DEC FDDI cards? If so, make sure your hub supports
full-duplex FDDI; that way you can greatly increase the traffic
between the systems. With full-duplex FDDI you get ~100Mb/s each way.
Sam.
------------------------------
Date: Thu, 27 Jun 1996 14:05:51 BST
From: Dave Newbold <phdmn_at_siva.bris.ac.uk>
To: tmadsen_at_netman.dk
CC: phdmn_at_siva.bris.ac.uk
Subject: RE: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <009A47BA.3663B214.21_at_siva.bris.ac.uk>
Hi,
Two comments:
a) The subnet a.b.c.240-a.b.c.254 is illegal... you aren't supposed to have a
subnet address of all ones.
b) Wouldn't it be easier to have two different names for the two different
interfaces on each server (eg server1 and server1-fddi), and tell the NFS
system to use the FDDI hosts? This would require some static routes, but it
might be easier than messing about with routed. This is somewhat subjective.
  Dave
"Surrealism... could only account for the complete state of distraction which
we hope to attain here below. Kant's absentmindedness about women, Pasteur's
absentmindedness about grapes, Curie's absentmindedness about vehicles, are in
this respect, deeply symptomatic."
                                  Andre Breton, 1924, Manifeste du Surrealism.
	dave.newbold_at_bristol.ac.uk / http://www.phy.bris.ac.uk/~phdmn
------------------------------
Date: 27 Jun 1996 16:18:53 +0200
From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
To: dave.newbold_at_bristol.ac.uk
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <rrohm5nxde.fsf_at_teapot.netman.dk>
References: <009A47BA.3663B214.21_at_siva.bris.ac.uk>
Hi Dave,
From the answers I have seen so far, I can see that I didn't supply
enough information about our current network and the reasons for the
new network.
You wrote:
> Two comments:
> 
> a) The subnet a.b.c.240-a.b.c.254 is illegal... you aren't supposed to have a
> subnet address of all ones.
That's true, but the range stops at 254, one short of all ones.
> b) Wouldn't it be easier to have two different names for the two different
> interfaces on each server (eg server1 and server1-fddi), and tell the NFS
> system to use the FDDI hosts? This would require some static routes, but it
> might be easier than messing about with routed. This is somewhat subjective.
I would prefer to do away with routed, but there are several good
reasons for not using two names for the servers.
It is not only the servers that use NFS: we also have a few test
machines and some Linux boxes on the 10Mb ethernet. Today all NFS
mounts are handled through amd (an automounter), whose maps are
distributed as NIS maps (very handy and very easy to reconfigure). The
amd maps would become much more complicated than today if we had two
different names for the servers: the -fddi versions are only usable
between the servers; the rest of the machines would have to use the
old plain names. It's doable, but I prefer not to.
Having two names for the same host can also give us some problems with
the configuration of the servers and the software on them. Some of the
software we use has a licence scheme where the software can only be
used on a limited set of machines at the same time. If a machine has
two names and two different addresses, the software will probably
believe it is two different machines - and we can't afford extra
licences, as they are very, very expensive :-/
Thanks for your comments.
/tonny
-- 
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk 
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
------------------------------
Date: Thu, 27 Jun 1996 15:27:01 BST
From: Dave Newbold <phdmn_at_siva.bris.ac.uk>
To: tmadsen_at_netman.dk
CC: phdmn_at_siva.bris.ac.uk
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <009A47C5.8D58BE0C.146_at_siva.bris.ac.uk>
Hi,
Yes, as I said, the -fddi scheme is subjective! We have a network of 20 alphas
here using this system, though, and it works.
Regarding subnetting, if you just treat the addresses as part of the
class C address space, then they are legal. However, you will need to
use that range as a subnet to get routing to work... and the subnet
address portion is certainly all ones. This isn't a big problem if you
have addresses to spare...
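The "all ones" point is easy to check with a little mask arithmetic:
under netmask 255.255.255.240 the subnet field is the top four bits of
the last octet, and for .240 that field is binary 1111 - the all-ones
value that the pre-CIDR subnetting rules (RFC 950) said to avoid:

```shell
# With netmask 255.255.255.240, the subnet field of the last octet is its
# top four bits. For a.b.c.240 that field is 1111 binary -- the maximum,
# all-ones value.
octet=240
subnet_field=$(( (octet & 240) >> 4 ))
echo "subnet field: $subnet_field (all ones would be 15)"
```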
Hope you solve the problem,
  Dave
"Surrealism... could only account for the complete state of distraction which
we hope to attain here below. Kant's absentmindedness about women, Pasteur's
absentmindedness about grapes, Curie's absentmindedness about vehicles, are in
this respect, deeply symptomatic."
                                  Andre Breton, 1924, Manifeste du Surrealism.
	dave.newbold_at_bristol.ac.uk / http://www.phy.bris.ac.uk/~phdmn
------------------------------
Date: Thu, 27 Jun 96 14:23:51 +0100
From: Ray Bellis <Ray.Bellis_at_psy.ox.ac.uk>
To: Tonny Madsen <tmadsen_at_netman.dk>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines 
Message-Id: <199606271323.OAA00496_at_axp01.mrc-bbc.ox.ac.uk>
If you're not already committed to FDDI then might I suggest the following:
               +-+-+-+-+-+-+
               |  FMS 100  |
               +-+-+-+-+-+-+
                 | | | | |
               100BaseT * 12
                 | | | | |
                 | | | | +-----------------------+
    +------------+ | | +-------------+           |
    |          +---+ +----+          |           |
    |          |          |          |           |
+---+---+  +---+---+  +---+---+  +---+---+  +----+---+     +--------+
| Alpha |  | Alpha |  | Alpha |  | Alpha |  | LS1000 | ... | LS1000 |
+-------+  +-------+  +-------+  +-------+  ++++++++++     +--------+
                                             ||....||
                                             ||....||
                                         +---+|....|+---+
                                         |    |    |    |
                                       
                                            10BaseT * 24 
                                     for PCs and 10Mb equipment
In the UK, the 3Com FMS100 (100BaseT hub) goes for around 2000 pounds,
and the 3Com LinkSwitch 1000 goes for around 3000 pounds.  The
advantage of this system is that you don't have to subnet everything,
and therefore there's no routing to worry about!
The LinkSwitch 1000 has a 100Mb/s downlink port and 24 (!) dedicated
10Mb/s (UTP) lines.  In theory, you could have up to 10 of your PCs
simultaneously using their full 10Mb/s bandwidth, without having your
Alphas bogged down forwarding packets between the two networks.  You
can also connect a normal repeater on a fan-out from one of the link-
switch ports if you have systems that don't justify 10Mb of bandwidth
to themselves.
The whole system would run fine on Category 5 UTP cabling, and thus
you wouldn't even have to have the Alphas closely coupled for an
FDDI ring.
Ray.
--
  __  __     Computing Officer, MRC Research Centre in Brain and Behaviour,
 /_/ /_/ \/     Department of Experimental Psychology, Oxford University
/ \ / /  /  <http://www.mrc-bbc.ox.ac.uk/~rpb> <mailto:Ray.Bellis_at_psy.ox.ac.uk>
------------------------------
Date: 27 Jun 1996 16:37:21 +0200
From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
To: Ray Bellis <Ray.Bellis_at_psy.ox.ac.uk>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines
Message-ID: <rrn31pnwim.fsf_at_teapot.netman.dk>
References: <199606271323.OAA00496_at_axp01.mrc-bbc.ox.ac.uk>
Hi' Ray,
From the answers I have seen so far, I can see that I didn't supply
enough information about our current network and the reasons for the
new network.
You wrote:
> 
> If you're not already committed to FDDI then might I suggest the following:
> 
> 
>                +-+-+-+-+-+-+
>                |  FMS 100  |
>                +-+-+-+-+-+-+
>                  | | | | |
>                100BaseT * 12
>                  | | | | |
>                  | | | | +-----------------------+
>     +------------+ | | +-------------+           |
>     |          +---+ +----+          |           |
>     |          |          |          |           |
> +---+---+  +---+---+  +---+---+  +---+---+  +----+---+     +--------+
> | Alpha |  | Alpha |  | Alpha |  | Alpha |  | LS1000 | ... | LS1000 |
> +-------+  +-------+  +-------+  +-------+  ++++++++++     +--------+
>                                              ||....||
>                                              ||....||
>                                          +---+|....|+---+
>                                          |    |    |    |
>                                        
>                                             10BaseT * 24 
>                                      for PCs and 10Mb equipment
> 
> In the UK, the 3Com FMS100 (100BaseT hub) goes for around 2000 pounds,
> and the 3Com LinkSwitch 1000 goes for around 3000 pounds.  The
> advantage of this system is that you don't have to subnet everything,
> and therefore there's no routing to worry about!
That is true, but I have two reasons to keep the old ethernet
connected as shown in my original mail.
One of the servers is a DEC3000 (rather old and slow by today's
standards, but all right for the things we want it to do). For this
box we can only get FDDI, not 100BaseT. As it is an important server
for some of the projects, we have chosen FDDI. FDDI also seems to be
slightly cheaper - at least for now :->.
My setup also gives us a more fail-safe net.  If the FDDI fails
(seldom, but it will happen, according to Murphy), we still have the
ethernet, though with a much lower bandwidth.
> The LinkSwitch 1000 has a 100Mb/s downlink port and 24 (!) dedicated
> 10Mb/s (UTP) lines.  In theory, you could have up to 10 of your PCs
> simultaneously using their full 10Mb/s bandwidth, without having your
> Alphas bogged down forwarding packets between the two networks.  You
> can also connect a normal repeater on a fan-out from one of the link-
> switch ports if you have systems that don't justify 10Mb of bandwidth
> to themselves.
Also true, but not a problem in my current net (and probably not for
the next couple of years, unless we have a serious change of
business), as the PC's (and other equipment on the 10Mb net) mainly
use the servers for three services: NFS, HTTP and X. These three
services combined account for 15-20% of the current load on the net
(the inter-server NFS traffic accounts for 60-90% of the net load).
If I expect the PC's to use 10-20% more bandwidth than today, the 10Mb
will be able to support the company for 2 more years with the current
growth. At that time we will have to find new premises in order to
grow further, so I don't consider this a serious problem for the time
being. An increase of 10-20% in PC net traffic seems quite reasonable,
as we develop software that uses very little graphics.
> 
> The whole system would run fine on Category 5 UTP cabling, and thus
> you wouldn't even have to have the Alpha's closely coupled for an
> FDDI ring.
Right again, but we have a single room with the cooling necessary for
the servers, and we would like to keep all the machines there if
possible.
Thanks for your comments.
/tonny
-- 
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk 
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
------------------------------
Date: Thu, 27 Jun 96 15:42:58 +0100
From: Ray Bellis <Ray.Bellis_at_psy.ox.ac.uk>
To: Tonny Madsen <tmadsen_at_netman.dk>
Subject: Re: Implementing a backbone net for a set of Digital UNIX machines 
Message-Id: <199606271442.PAA05078_at_axp01.mrc-bbc.ox.ac.uk>
All good reasons to go with your original solution!
Those DEC3000's are a right pain in the backside, I've got
a 3000/500 under my desk which is my main server for HTTP
and NFS but I just can't get a 100BaseT TurboChannel card :-(
Good luck with your network, but do pay particular attention
to which network port each Alpha uses to connect to the
network.  I haven't tried it myself but raising the metric
for the 10 Mb/s ports looks like a good idea.  You'll
also need to make sure that the PCs only ever try to connect
to `their' side of the Alphas, since (as I'm sure you're
aware) the Alphas will try to route packets if you send
them to the FDDI port's IP address.
Ray.
------------------------------
Date: Thu, 27 Jun 1996 09:19:00 -0400
From: Andre Paige <apaige_at_idulab.gov>
To: Tonny Madsen <tmadsen_at_netman.dk>
Subject: RE: Implementing a backbone net for a se
Message-Id: <199606271220.IAA00284_at_relay3.smtp.psi.net>
References: <rrspbho4ri.fsf_at_teapot.netman.dk>
Tonny,
(IMHO)
I've glanced over your graphic of how you want to implement your
network.  I don't see where your X servers are located, but I'll
assume they would be labeled as PC's in your graphic.
Your backbone design is decent. In designing your backbone you need to
answer these questions:
*Who needs access to your OSF systems?
Who needs access to your X servers?
What services do they need?
Then you lay out your backbone accordingly.
For example, if I had a company with three departments (accounting,
developers, administration), I would have a server for each department
and have my users connect directly to that server.  In your case I
would group my programmers by the projects they are working on (if
they spend most of their time on a project, group them there, and
penalise them for having to go outside their assigned server to access
another project or server).
Then I would have my servers attached to the backbone.  What this does
is isolate related traffic to the server segment, keeping it off the
rest of the backbone (10Mb).
The issue of who needs OSF access can then be addressed with
routers/bridges/etc, allowing you to further limit traffic on the net.
If you are putting in an FDDI backbone for your OSF servers, I would
recommend that you also put one in for the 10Mb network. Even if you
don't need it today, or can't afford the pieces for each workstation
to connect, you could purchase the connections for the X servers
first, and when you upgrade workstations or network connections you
could purchase the correct connections.
Your company will also save money by laying the second cable: 1) you
won't have to do it all over again later; 2) in case the first cable
goes out you already have a spare in place (I know FDDI doesn't
normally fail, but you have to plan for the worst case - besides, who
knows who may be up in your ceiling and might accidentally cut your
cable); and 3) it is practically cost-effective to have two runs done
at the same time.
I hope this helps.  If you have anymore questions feel free to email.
Andre' A.B. Paige
*WHO = users, groups, world wide customers(internet access), etc, etc.
 ----------
[original deleted]
------------------------------
Date: 27 Jun 1996 15:59:00 +0200
From: Tonny Madsen <Tonny.Madsen_at_netman.dk>
To: Andre Paige <apaige_at_idulab.gov>
Subject: Re: Implementing a backbone net for a se
Message-ID: <rrrar1nyaj.fsf_at_teapot.netman.dk>
References: <199606271220.IAA00284_at_relay3.smtp.psi.net>
Hi Andre,
From the answers I have seen so far, I can see that I didn't supply
enough information about our current network and the reasons for the
new network.
You wrote:
> For example, if I had a company with three departments (accounting,
> developers, administration), I would have a server for each
> department and have my users connect directly to that server.  In
> your case I would group my programmers by the projects they are
> working on (if they spend most of their time on a project, group
> them there, and penalise them for having to go outside their
> assigned server to access another project or server).
I'm afraid that is impossible in our current setup. We have far too
many projects, and people usually work on more than one project at a
time. Which machine is allocated to which project is usually a matter
of the software/products required for the project. Different projects
need different versions and different setups, and thus different
machines.
> The issue of who needs OSF access can then be addressed with
> routers/bridges/etc allowing you to further limit traffic on the
> net.
If only I can limit most of the NFS traffic to the fast backbone, I
don't have any problems with the rest of the traffic (at least for the
next couple of years). Currently NFS traffic accounts for 60-90% of
the load on the network, of which the inter-server traffic is 50-80%
(we have some Linux machines too, and the PC's mount most of the
network drives through PC-NFS).
> If you are putting a FDDI backbone for your OSF servers I would
> recommend that you also put one in for the 10MB network even though
> you might not need it today or can't afford the pieces for each
> workstation to connect you could purchase the connections for the
> x-servers first and when you upgrade workstations or network
> connections you could purchase the correct connections.
At least for now that won't be necessary. X traffic accounts for only
5-8% of the network load.
> Your company will also save money by laying the second cable by 1)
> not having to do it all over again later,
That is correct. This is not a problem for us, though, as our servers
are all placed in one location.
> 2) in case the first cable goes out you already have a spare in
> place *I know FDDI doesn't normally fail, but you have to plan for
> worst case, besides who knows who may be up in your ceiling and
> might accidently cut your cable, and
Again correct, and that is one of the reasons I let the old ethernet
reach all the servers instead of going through the FDDI concentrator.
If the FDDI fails, we still have the ethernet, though with a much
lower bandwidth.
> I hope this helps.  If you have anymore questions feel free to
> email.
Thanks for your comments...
/tonny
-- 
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk 
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
------------------------------
-- 
Tonny Madsen, NetMan a/s, Vandtaarnsvej 77, DK-2860 Soeborg, Denmark
E-mail: Tonny.Madsen_at_netman.dk 
Telephone: +45 39 66 40 20 Fax: +45 39 66 06 75
Received on Mon Jul 01 1996 - 14:10:22 NZST

This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:46 NZDT