SUMMARY: Advice: SAN, Cluster?

From: Tru64 User <tru64user_at_yahoo.com>
Date: Wed, 26 Sep 2001 12:27:05 -0700 (PDT)

Thanks to Tom Webster, Alan_at_nabeth, Alan Davis, and Raul
Sossa for their opinions.
In brief, I will need to upgrade to HSG80 controllers and an
ESA12000. Most likely my disks will work (but check the SPD
for HSG80 firmware). Backups on a SAN also need
consideration (Veritas backup, $$$!!).

Tom answered it step by step below.
Great list!!!! Thanks everybody again. I have a much
clearer knowledge of SAN now.....

--- Tom Webster <webster_at_ssdpdc.lgb.cal.boeing.com>
wrote:
> Richard,
>
> > Once again trying to get answers to long term questions:
> > Current Env: 2 4100's, 2 2100's
> > 2 esa10000-hsz70 hooked onto 4100's
> > 2 sw300(??)-hsz50 hooked onto 2100
> > 2 machines have been taken to 5.1, others still on 4.0g
> >
> > 36gb disks on hsz70, dual controllers, top & bottom shelf
> > (Just upgraded!! the disks)
> > 18gb disks on the hsz50's (just upgraded disks)
> >
> > Also currently:
> > All filesystems on hsz50's and hsz70's are cross mounted
> > all over!! (have fddi links, but still!!)
> >
> > In long term planning, while trying to utilize current
> > hardware as much as possible, what is the way to go? I
> > have read about SAN, have an idea on how it works, but not
> > sure what it would take to move there (how much of current
> > hardware can be re-utilized).
>
> In terms of a traditional SAN environment (using Fibre
> Channel), you are going to have to make a fair-sized
> investment. You will need to:
>
> 1. Add Fibre Channel host bus adapters (1 or 2, depending
>    on your redundancy requirements) to each host. I don't
>    know if they are certified on the 2100s. If you aren't
>    using 2100As, you might have a problem getting enough
>    PCI slots to support adding the HBAs.
>
> 2. Add fibre switches. Depending on your redundancy
>    requirements, you may need to run a redundant switching
>    fabric.
>
> 3. Swap out the HSZ70s on the esa10000s for HSG80s. I
>    don't know about the older storage, but it may just be
>    a controller swap as well.
>
> 4. Consider upgrading your systems to 5.1. I think the SAN
>    stuff may be somewhat supported under 4.0g, but not as
>    well as under 5.1.
>
> You should be able to reuse the existing disks, at least on
> the esa10000s, and most likely with whatever you do with
> the HSZ50 storage.
>
> > Another avenue, wondering if it would work better in
> > this scenario is clustering.
>
> Not 100% sure what you are asking here. SAN storage works
> great in a clustered environment. It's really the only sane
> way to do it with more than a couple of nodes.
>
> TruCluster 5.x does provide some built-in device
> redirection which allows other cluster members to 'see'
> devices that are local to another cluster member (and are
> not otherwise attached to the cluster). The performance is
> not nearly as good as having it SAN-attached, and you are
> in trouble if the node hosting the device goes down.
>
> In terms of cross-mounting, TruCluster 5.x is a wonderful
> solution -- at least for the cluster members. With TC5.x,
> you cluster-mount AdvFS filesystems. Once you mount them on
> a node, they are mounted on all of the nodes, just like a
> local filesystem. No more of the TC1.x NFS mounts and local
> NFS loopback mounts.
>
> One thing that you haven't mentioned is backups and/or tape
> storage. The 'Holy Grail' of SAN implementations is usually
> SAN (AKA serverless) backups. This takes software
> (Legato/Veritas), licenses ($$$), and additional hardware
> to connect the tape libraries to the SAN as
> semi-independent devices ($$$$).
>
> > Extra question:
> > On those taken to 5.1, any idea about the new format for
> > some files (e.g. below), even though this machine is not
> > part of a cluster?
> > lrwxrwxrwx  1 root  system  26 Sep 18 10:16 tmp@ -> cluster/members/{memb}/tmp/
> > lrwxrwxrwx  1 root  system  26 Sep 18 10:16 dev@ -> cluster/members/{memb}/dev/
> > lrwxr-xr-x  1 root  system  41 Sep 18 09:52 syslog.conf@ -> ../cluster/members/{memb}/etc/syslog.conf
> > strsetup.conf@ -> ../cluster/members/{memb}/etc/strsetup.conf*
>
> That's because in 5.1, to paraphrase the US Army's
> recruiting slogan: every system is a cluster of one. The
> idea was to always configure the base system with the
> cluster-aware symbolic links in place. Other than being a
> little different for the SysAdmins to look at, it doesn't
> hurt anything. Then, when you decide to use that system as
> the initial node for a cluster, there is very little work
> that needs to be done to convert it to a TruCluster 5.x
> node.
>
> Hope this helped,
>
> Tom
> --
>
> +-----------------------------------+---------------------------------+
> | Tom Webster                       | "Funny, I've never seen it      |
> | SysAdmin MDA-SSD ISS-IS-HB-S&O    |  do THAT before...."            |
> | webster_at_ssdpdc.lgb.cal.boeing.com | - Any user support person    |
> +-----------------------------------+---------------------------------+
> | Unless clearly stated otherwise, all opinions are my own.           |
> +---------------------------------------------------------------------+
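The {memb} component in those links is what makes them cluster-aware:
each cluster member expands it to its own member directory under
cluster/members/, so a shared root can still give every node (or a
standalone 5.1 box, which counts as a cluster of one) its own /tmp,
/dev, and per-member config files. A minimal Python sketch of the
idea -- the "memberN" directory naming here is an assumption for
illustration, not the authoritative Tru64 layout:

```python
# Sketch of how a context-dependent symbolic link resolves: the link
# target literally contains "{memb}", and each cluster member
# substitutes its own member directory when following the link.
# The "memberN" naming below is illustrative, not authoritative.

def resolve_cdsl(link_target: str, member_id: int) -> str:
    """Expand the {memb} placeholder the way a given member would."""
    return link_target.replace("{memb}", f"member{member_id}")

# The same /tmp link points somewhere different on each member:
print(resolve_cdsl("cluster/members/{memb}/tmp/", 1))
# -> cluster/members/member1/tmp/
print(resolve_cdsl("cluster/members/{memb}/tmp/", 2))
# -> cluster/members/member2/tmp/
# On a standalone 5.1 system there is only one member, so the links
# are still valid -- they just always resolve to that member's tree.
```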


Received on Wed Sep 26 2001 - 19:27:47 NZST
