Cannot build 54GB stripe set successfully.

From: Darryl Milczarek <darryl.milczarek_at_emsusa.com>
Date: Wed, 25 Oct 2000 10:23:58 -0700

Hi Admins,

I have created a stripe set using three 18GB DS-RZ1ED-VW drives in our AS4100
box running Tru64 v4.0G patch 1.

The devices are rz19, rz20 & rz21. Then I created another stripe set with
identical drives on the next shelf, using devices rz27, rz28 & rz29. These all
belong to the disk group oracledg. The file system on top of the volume has
been named u03_oracle and is mounted as /u03.
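
For reference, the volume was built with commands along these lines (retyped
from memory rather than pasted from my command history, so treat the exact
flags as approximate; volassist takes lengths in 512-byte sectors):

# volassist -g oracledg make oracle 53128704 layout=stripe nstripe=3 \
    oracled01 oracled02 oracled03

with the AdvFS domain and fileset created on top of the volume:

# mkfdmn /dev/vol/oracledg/oracle u03_oracle
# mkfset u03_oracle u03
# mount u03_oracle#u03 /u03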

Here is the configuration of the oracledg disk group:

# volprint -g oracledg
TYPE  NAME          ASSOC      KSTATE   LENGTH    COMMENT
dg    oracledg      oracledg   -        -
dm    oracled01     rz19       -        35564040
dm    oracled02     rz20       -        35564040
dm    oracled03     rz21       -        35564040
dm    oracled04     rz27       -        35564040
dm    oracled05     rz28       -        35564040
dm    oracled06     rz29       -        35564040
sd    oracled01-01  oracle-01  -        17709568
sd    oracled02-01  oracle-01  -        17709568
sd    oracled03-01  oracle-01  -        17709568
sd    oracled04-01  oracle-02  -        17709568
sd    oracled05-01  oracle-02  -        17709568
sd    oracled06-01  oracle-02  -        17709568
plex  oracle-01     oracle     ENABLED  53128704
plex  oracle-02     oracle     ENABLED  53128704
vol   oracle        fsgen      ENABLED  53128704
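
Converting the LENGTH column to gigabytes (on the assumption that LSM reports
lengths in 512-byte sectors) gives:

# echo 'scale=1; 35564040*512/1024^3' | bc
16.9
# echo 'scale=1; 17709568*512/1024^3' | bc
8.4
# echo 'scale=1; 53128704*512/1024^3' | bc
25.3

So each dm (one 18GB disk) comes out near 17GB, each sd near 8.4GB, and each
plex, like the volume itself, near 25GB.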

These settings in fstab look correct to me (the last entry, /u03, is the
object of concern):

# cat fstab
root_domain#root / advfs rw 0 0
/proc /proc procfs rw 0 0
usr_domain#usr /usr advfs rw 0 0
/dev/vol/rootdg/swapvol swap1 ufs sw 0 2
/dev/vol/rootdg/swapvol02 swap2 ufs sw 0 2
u03_domain#u03 /u03/oradata advfs rw 0 0
u01_domain#u01 /u01/oradata advfs rw 0 0
u02_apps#u02 /u02/apps/logs advfs rw 0 0
usr_var#usr_var /usr/var/spool advfs rw 0 0
u05_domain#u05 /u05 advfs rw 0 0
u04_domain#u04 /u04 advfs rw 0 0
u02_domain#u02 /u02/oradata advfs rw 0 0
temp#temp /temp advfs rw 0 0
u01root#u01root /u01 advfs rw 0 0
u02root#u02root /u02 advfs rw 0 0
u03_oracle#u03 /u03 advfs rw 0 0


The goal was to have one 54GB array on the main system shelf and mirror it
to another identical set on the next shelf above.

All seemed to go as planned. When I created the first stripe set, all three
disk activity lights showed activity, and the same was true for the second
stripe set. When I mirrored the first to the second, all six lights were
active for several hours.
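
From memory, the mirror step was roughly:

# volassist -g oracledg mirror oracle layout=stripe nstripe=3 \
    oracled04 oracled05 oracled06

(again, the exact attributes here are a reconstruction, not a paste of the
real command).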

However, I am only able to see 26GB using df -k (it is the last entry):

# df -k
Filesystem         1024-blocks      Used  Available  Capacity  Mounted on
root_domain#root       1993584    105386    1880208        6%  /
/proc                        0         0          0      100%  /proc
usr_domain#usr         8032560   7697318     308448       97%  /usr
u03_domain#u03         1761056        16    1756592        1%  /u03/oradata
u01_domain#u01         1033168        16    1028792        1%  /u01/oradata
u02_apps#u02           1047720        16    1043336        1%  /u02/apps/logs
usr_var#usr_var        2048936       579    2042472        1%  /usr/var/spool
u05_domain#u05         4091896   1287288    2798848       32%  /u05
u04_domain#u04         9560800    368048    9154256        4%  /u04
u02_domain#u02         2022688        16    2018192        1%  /u02/oradata
temp#temp              2066344    164869    1895280        9%  /temp
u01root#u01root        4074488   1626649    2434864       41%  /u01
u02root#u02root        6563136   3557602    2986744       55%  /u02
u03_oracle#u03        26564352  25093670    1376568       95%  /u03

Does anyone have an idea as to why /u03 is only showing half the size it
should be?
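
In case it helps, I can also post the output of the following, which I
believe should show where the space went (assuming I have the 4.0G syntax
right):

# volprint -g oracledg -ht oracle
# showfdmn u03_oracle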

Darryl Milczarek
EMS 602 258-8545
darryl.milczarek_at_emsusa.com