Thanks to Rick Greene, and especially Dave Brady, for pointing me in the right
direction. Abbreviated versions of their responses follow the annotated
summary. Here is the annotated summary for configuring LSM on raw disks.
Sorry for the length, but the subject was complex.
The original question was:
Subject: Configuring LSM on raw disks for Oracle Parallel Server
To: tru64-unix-managers_at_ornl.gov
I need some basic help with configuring LSM on DRDs for use with Oracle
Parallel Server.
I have two 4100s with 4.0D/patch 3, Memory Channel hardware, Production Server
1.5, and Oracle Parallel Server 8.0.4. There are six 4 GB physical disks, but I
will configure only three for the cluster. The other three will be used as
mirror disks configured in the HSZ controller. I need to create locations for
52 files ranging in size from 2 MB to 760 MB.
My questions range from when do you use a striped volume, to should I create
multiple volumes to correspond with each file, to how do I refer to the
tablespace when I do the create: /dev/rvol/vol01, /dev/rdrd/drd1, or something
else? Yes, these questions may not have anything to do with what I need to
accomplish, but that is how confused I am.
Any examples of volprint output, SQL scripts, setup information, or
configuration "cook books" you have will be greatly appreciated. The Oracle
side is in good shape; I just need to know how to get it created on the cluster
disks. I have figured out the LSM configuration commands but can't figure out
how DRD and LSM relate to each other.
SUMMARY OF IMPLEMENTATION:
Label disks:
disklabel -z rz24           (clear any existing label)
disklabel -W rz24           (allow writes to the label area)
disklabel -wr rz24 hsz50    (write a default label for the hsz50 disk type)
disklabel -r rz24           (read the on-disk label back to verify)
Initialize rootdg disk group with one disk: volsetup rz12
Initialize newdg disk group:
voldisksetup -i rz24 nconfig=2 privlen=1024
voldg init newdg newdg01=rz24h
voldg -g newdg adddisk newdg02=rzb24 (for each additional disk)
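(At this point it is worth sanity-checking the disk group with the standard LSM
listing commands; the exact output varies by patch level, so treat these as the
usual checks rather than a required step:)
voldisk list                (each disk should show as online and in the right group)
voldg list                  (newdg should appear with its disk count)
volprint -g newdg -ht       (hierarchical listing of the disk group's records)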
Create subdisk:
volmake -g rootdg sd rz12-01 dm_name=rz12 dm_offset=0 len=4190040
volmake -g newdg sd newdg01-01 dm_name=newdg01 dm_offset=16 len=247808
volmake -g newdg sd newdg01-02 dm_name=newdg01 dm_offset=247824 len=18432, etc.
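(The dm_offset of each subsequent subdisk is simply the previous subdisk's
offset plus its length; for the two newdg subdisks above:
dm_offset 16 + len 247808 = 247824, the dm_offset given to newdg01-02.)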
Create plex:
volmake -g rootdg plex pl-01 sd=rz12-01
volmake -g newdg plex pl-01 layout=stripe st_width=128 sd=newdg01-01
volmake -g newdg plex pl-02 layout=stripe st_width=128 sd=newdg01-02, etc.
volsd -g newdg assoc pl-01 newdg02-01 (for each additional disk)
Create volume:
volassist -g newdg -U gen make vol01 750336 alloc=0 align=0 layout=stripe nstripe=3 stripe_width=128
volume -g newdg set log_type=NONE vol01
(or build the volumes by hand from the plexes created above:)
volmake -U gen -g newdg vol vol01 read_pol=select user=dba group=adm mode=0644 log_type=none len=247808 plex=pl-01
volmake -U gen -g newdg vol vol02 read_pol=select user=dba group=adm mode=0644 log_type=none len=18432 plex=pl-02
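(Volumes built by hand with volmake have to be started before they can be used;
volassist-created volumes come up started. Something like the following should
do it; check volume(8) for the exact usage on your patch level:)
volume -g newdg start vol01
volume -g newdg start vol02
ls -l /dev/rvol/newdg       (the raw volume nodes the DRD service will be built on)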
Define cluster members with asemgr
Create DRD service with asemgr
Mumble #%!*??#!! and reboot the offending server that can't dd to the cluster,
at least three times (a dd sanity check like the one sketched below helps
confirm that each member can actually reach the DRD devices)
Create OPS database
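(For the dd test mentioned above, something as simple as the following, run on
the member currently serving the DRD service, shows whether that member can
actually read a DRD device; drd1 here is just the first device special file
from the examples further down:)
dd if=/dev/rdrd/drd1 of=/dev/null bs=64k count=16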
REPLY FROM RICK GREENE:
Create a new disk group
Add the disks to the disk group as full devices (don't partition them at all,
let LSM have the full disk)
Create logical volumes equivalent to the size of the tablespaces as defined by
your DBA. NOTE: you must make the logical volume 1 MB BIGGER than the DBA
requirement...something screwy with Oracle.
Create a Distributed Raw Disk service using the new disk group
Add all the logical volumes from the disk group to the DRD service
One other thing to note: you can't share a disk group between a DRD service
and an NFS or disk service, which means any *filesystem* space to be used by
the application or database (such as the Oracle software and admin directories)
needs to be created under a *different* disk group and a separate TruCluster
service.
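For example, ORACLE_HOME and the admin directories could sit in their own small
disk group with an AdvFS domain on an LSM volume, handled by a separate disk
service; roughly (the disk, group, and domain names here are made up for
illustration):
voldisksetup -i rz26
voldg init orafsdg orafs01=rz26
volassist -g orafsdg make orahome 4194304      (a 2 GB volume; lengths are in 512-byte sectors)
mkfdmn /dev/vol/orafsdg/orahome oracle_dmn     (AdvFS domain on the LSM volume)
mkfset oracle_dmn orahome                      (fileset to mount for ORACLE_HOME)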
As to striping, well, I did an awful lot of work here to try to optimize the
I/O channels to the maximum possible: I set up 8-disk disk groups, one disk per
I/O channel in each disk group, and striped all the logical volumes across all
the disks in each disk group. I haven't had anything prove that the work was
worth it, and we are now considering not only moving away from raw disk devices
(since we don't think we need Parallel Server, there's not as much requirement
for raw disk or DRD) to filesystem-based databases, but away from striping as
well.
REPLY FROM DAVE BRADY:
I've read that some well-respected Oracle experts say that if you have a very
fast RAID controller (and you aren't using RAID 5), go ahead and stripe
everything.
I would create a single stripe + mirror set of your six disks, behind the HSZ.
I suggest this because of the limited number of disks you have available; I
don't think it makes sense to break them up.
(Never use LSM for striping/mirroring unless you *have* to, as you'll pay a
20-30% CPU penalty.)
- create an LSM disk group containing the single "disk" controlled by the HSZ
- use LSM to create raw volumes for the sizes you want.
- create a single DRD service in ASE
- add each LSM volume to the service
- each LSM volume (/dev/rvol/...) will become one DRD Device Special File
(/dev/rdrd/drd...)
- use the appropriate DRD Device Special File in your Oracle configuration
Here's what part of my configuration looks like:
In ASE (I have three DRD services: one for control files and redo logs, one
for indexes and rollback, and one for data):
--- DRD Service #1 ---
DRD Device Special File: /dev/rdrd/drd1
Underlying Storage: /dev/rvol/oractlredodg/control1
oractlredodg rz33c
DRD Device Special File: /dev/rdrd/drd2
Underlying Storage: /dev/rvol/oractlredodg/control2
oractlredodg rz33c
DRD Device Special File: /dev/rdrd/drd4
Underlying Storage: /dev/rvol/oractlredodg/redo1
oractlredodg rz33c
DRD Device Special File: /dev/rdrd/drd5
Underlying Storage: /dev/rvol/oractlredodg/redo2
oractlredodg rz33c
--- DRD Service #2 ---
DRD Device Special File: /dev/rdrd/drd10
Underlying Storage: /dev/rvol/oraindrolldg/index1
oraindrolldg rz34c
DRD Device Special File: /dev/rdrd/drd11
Underlying Storage: /dev/rvol/oraindrolldg/rbs1
oraindrolldg rz34c
DRD Device Special File: /dev/rdrd/drd12
Underlying Storage: /dev/rvol/oraindrolldg/rbs2
oraindrolldg rz34c
--- DRD Service #3 ---
DRD Device Special File: /dev/rdrd/drd14
Underlying Storage: /dev/rvol/oradatadg/system1
oradatadg rz35c
DRD Device Special File: /dev/rdrd/drd15
Underlying Storage: /dev/rvol/oradatadg/hound1
oradatadg rz35c
DRD Device Special File: /dev/rdrd/drd16
Underlying Storage: /dev/rvol/oradatadg/temp1
oradatadg rz35c
I have an init_com.ora file, which is used by both instances for common
parameters, and which contains:
control_files = (/dev/rdrd/drd1,
/dev/rdrd/drd2)
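(The database creation itself refers to the DRD device special files the same
way; a rough sketch only, where the database name, the sizes, and the choice of
drd4/drd5 for the redo logs and drd14 for the SYSTEM tablespace are taken from
the mapping above rather than from a real script:)
create database hound
    controlfile reuse
    maxinstances 2
    logfile group 1 ('/dev/rdrd/drd4') size 20M,
            group 2 ('/dev/rdrd/drd5') size 20M
    datafile '/dev/rdrd/drd14' size 95M;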
When creating datafiles for tablespaces, you'd do something like:
create tablespace rbs1 datafile '/dev/rdrd/drd11' size 191M
default storage (maxextents unlimited);
create tablespace hound1 datafile '/dev/rdrd/drd15' size 47M
default storage (initial 128k next 128k
maxextents unlimited pctincrease 0);
(You'll need to back off 1 MB in the size of the Oracle file as compared to
the size of the corresponding LSM volume.)
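(The arithmetic, assuming the usual 512-byte sectors that LSM lengths are
expressed in, using the rbs1 example above:
191 MB datafile + 1 MB of slack = 192 MB minimum volume size
192 MB x 2048 sectors/MB = 393216 sectors as the len= value for volmake or volassist)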
One (of several) *annoying* Oracle quirks is that to change the
gc_files_to_locks init.ora parameter, *all* instances must be shut down. Every
instance must have the same number and kind of datafile locks.
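(For reference, gc_files_to_locks takes a list of datafile ranges and lock
counts and has to be identical in every instance's parameter file; the value
below only illustrates the shape of the setting, not a recommended allocation:)
gc_files_to_locks = "1=100:2-5=200"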
=====================
Wendy Fong
wfong_at_synacom.com
408.296.0303
UNIX is very user friendly,
it's just very particular about
who it makes friends with.
=====================