Summary: TruCluster 5.1 Configuration

From: Seel, John <John.Seel_at_US.Faulding.com>
Date: Mon, 16 Oct 2000 10:40:27 -0400

My original questions are at the end. I'll summarize the answers I received.

1) The UNIX boot disk is only used to configure the cluster; after that it
is needed only in an emergency. This disk can be on either a local bus or a
shared bus. There does not seem to be a preference or recommendation either
way. It need only be large enough to hold the UNIX OS.

2) The cluster root, usr, and var filesystems are shared by all members, so
they must be located on a shared bus. They do not need to be on the same
disk or logical disk; for performance, it may help to put usr and/or var on
different disks. These filesystems need to be at least as large as the
regular root, usr, and var. However, a cluster rolling upgrade
(clu_upgrade) requires at least TWICE that space on root, usr, and var,
because both the old and the new OS are on disk at the same time. It was
therefore suggested to allocate a generous amount of space for each of
these shared filesystems. A quick space check before a roll is sketched
below.
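
A minimal sketch of that space check (the clu_upgrade "check" invocation is
from memory, so verify the exact syntax against man clu_upgrade on your
release):

    # Confirm the clusterwide root, usr, and var filesystems have room
    # for a second copy of the OS before starting a rolling upgrade.
    df -k / /usr /var

    # The rolling-upgrade tool can reportedly verify space for the
    # setup stage itself (syntax assumed; check the reference page):
    clu_upgrade -v check setup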

3) Primary swap for each member system is on that member's boot disk, and
it does not appear that primary swap can be moved off the member boot disk.
(Several people thought it was possible, but I didn't hear from anyone who
had done it.) Secondary swap can go on a local disk. The member boot
partition itself is not heavily accessed, but if a system will swap a lot,
it would be advisable to spread the member boot disks across several disks
or logical disks rather than put them all on the same disk or logical disk.
That said, several people with similar clusters indicated they had no
performance problems even with the member boot partitions on the same
logical disk. A sketch of adding local secondary swap follows.
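
A minimal sketch of adding local secondary swap, assuming V5-style device
names (dsk2b here is a hypothetical local partition; substitute your own):

    # Enable a local-disk partition as secondary swap on this member.
    swapon /dev/disk/dsk2b

    # To make the change permanent, list the device in the vm stanza of
    # /etc/sysconfigtab. That file is a CDSL, so each member keeps its
    # own member-specific copy:
    #
    #   vm:
    #       swapdevice=/dev/disk/dsk1b,/dev/disk/dsk2b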

4) Several people who were running similar clusters had left /tmp alone and
had no performance issues. However, if /tmp is moved to member-local
storage, you have to set up a CDSL pointing at each member's local disk
partition, roughly as sketched below.
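
A minimal sketch of that setup; mkcdsl and cdslinvchk are the TruCluster
tools involved, but the exact options here are assumptions from memory, so
check the reference pages first:

    # Mount each member's local partition at its member-specific
    # directory (member1 shown; repeat on each member, and dsk3g is
    # a hypothetical device name):
    mkdir -p /cluster/members/member1/tmp
    mount /dev/disk/dsk3g /cluster/members/member1/tmp

    # Replace the clusterwide /tmp with a context-dependent symbolic
    # link that each member resolves to its own member directory
    # (see man mkcdsl for options to copy or move existing contents):
    mkcdsl /tmp

    # Verify the CDSL inventory afterwards:
    cdslinvchk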

I still need to discuss the overall disk requirements with the Oracle and
Veritas people in order to finalize the configuration, however, I now have a
much better understanding of the requirements and issues surrounding the
system disk setups.


Thanks to all who replied:
Paul Henderson
Gert-Jan Hilbrands
Scooter Morris
William H. Magill
Wayne Blom
Dan Goetzman
Larry Clegg
Howard Arnold
Jim Jones

----------------------------------------------------------------------------
----------------------------------------------------------------------------
Original Posting:

Hello All,

I am in the process of planning the configuration of a TruCluster version
5.1 cluster and was hoping for a little guidance on the basic disk layout,
cluster tuning, etc.

The cluster will run Oracle 8i. At this point, I am not concerned about the
configuration of Oracle or the disks for the Oracle apps or databases, only
about the T64 and TCR configurations.

The basic hardware will consist of three GS80s and two DS20s.
Each GS80 has three 9 GB local disks, four CPUs, 8 GB of memory, one Memory
Channel adapter, and two KGPSA Fibre Channel adapters.
(I don't have any specifics yet on the DS20s, only that they will, of
course, have Memory Channel and two KGPSA Fibre Channel adapters.)

Shared storage is an EMA 12000, which consists of two HSG80 Fibre Channel
controllers and approximately 1 TB of disk space using 36 GB disks. The
Fibre Channel connections will be configured for multipath.

There will also be a TL895 tape storage unit connected to the SAN; it will
be used by the Veritas backup software.

I'd like to get opinions and recommendations on the following:

1) A UNIX disk is used to configure the first cluster member and create the
cluster root disk, and then, as I understand it, is only used again in an
emergency. Should this UNIX disk be located on a local disk of the first
member, or on the shared bus? If on the shared bus, can it be a SAN
partition? For example, could I use a pair of 36 GB disks in a mirrorset,
partition that mirrorset down into smaller units, use one of the smaller
units as the UNIX boot disk, and then use the remaining partitions for,
perhaps, the member boot disks? How big should this UNIX disk be?

2) The cluster root, usr, and var filesystems are on the shared bus. Given
that I have 36 GB disks, is it advisable to create a single 36 GB mirrorset
and use the entire 36 GB for root, usr, and var? If so, how large should
each filesystem be? For performance, it would seem that placing these
filesystems on separate mirrorsets would be optimal; however, that would
result in either some HUGE root, usr, and var filesystems, utilizing the
additional space (through SAN partitions) for other uses, or wasted space.

3) Each cluster member requires a dedicated member boot disk on the shared
bus. This disk contains the member root filesystem, the member's primary
swap, and a small CNX binary partition. Once the member is booted, it
accesses the cluster root filesystem. For performance, is it advisable,
again, to use small SAN partitions of the same 36 GB mirrorset for each
member's boot/swap/CNX disk, or should these also be spread across several
mirrorsets? I know that I can create secondary swap on either the member's
local disks (preferred) or on other shared disks; can I move the primary
swap to a local disk?

4) For performance, I'd like to put each member's /tmp on a local disk. Are
there any drawbacks to this?


I have been to the Compaq website and browsed the documentation, but I
didn't really find anything that addressed performance issues. If there is
such a document, would someone please point me to it?

Thanks in advance for any other advice, tips, warnings, etc.

John Seel


----------------------------------------------------
John Seel
UNIX Systems Administrator
Faulding, Inc.
john.seel_at_us.faulding.com
(908) 659-2398
-----------------------------------------------------