Thanks very much to all who responded!!
idea:
The default stripe size with LSM should be fine. The only issue might be if
you do lots of writes - a non-write-back-cache write can take up to 70ms -
and it might be to your benefit to add the LVD RAID card (KZPCC), which will
increase performance and capacity through RAID 5. That is not bad if you
have WBC enabled.
idea:
From my 331 book (Designing and Implementing Compaq SAN Solutions):
This is for HSG controllers, but I assume it should apply to LSM.
You have 2 basic scenarios - High Request Rate and High Data Rate.
(With lots of small files, I assume you are in a High Request Rate
situation.)
Again from the book:
Stripe Width = Chunk Size = the amount of data placed on a single disk per stripe
For High Request Rate:
High Locality: Chunk Size = 10x Average Transfer Size
Low Locality: Chunk Size = 20x Average Transfer Size
Unknown Locality: Chunk Size = 15x Average Transfer Size
Chunk size is specified in sectors, 512 bytes/sector.
Chunk size should be a prime number to "reduce the sequential response
time variance". (I don't know what this means.)
For High Data Rate (i.e. large sequential I/O requests):
Chunk Size = 17 is the recommended value.
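To make the High Request Rate rule concrete, here is a rough sketch of my
own (not from the book or the other replies), assuming an average transfer
size of about 8 KB and unknown locality (the 15x multiplier). It converts
the rule into a chunk size in 512-byte sectors and rounds up to a prime:

    SECTOR = 512  # bytes per sector

    def is_prime(k):
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True

    def next_prime(n):
        # smallest prime >= n (trial division is fine for numbers this small)
        while not is_prime(n):
            n += 1
        return n

    def chunk_size_sectors(avg_transfer_bytes, multiplier=15):
        # multiplier: 10 = high locality, 15 = unknown, 20 = low locality
        sectors = (avg_transfer_bytes * multiplier + SECTOR - 1) // SECTOR
        return next_prime(sectors)

    # Assumed example: ~8 KB average transfer, unknown locality
    print(chunk_size_sectors(8 * 1024))   # -> 241 sectors, about 120 KB

If that arithmetic is right, the recommended chunk for a high request rate
workload comes out larger than the 64K default, not smaller; the small
17-sector chunk applies only to the High Data Rate (large sequential) case.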
>> Hello all,
>> I need advice on the most practical way to configure a stripe set to
>> achieve the greatest R/W performance. I have an ES45 with the extra
>> disk cage, and an external StorageWorks dual-bus Ultra3 LVD box. Both
>> controllers sit on 66 MHz internal buses. Now I realize that I should
>> stripe across each bus, but I'm not sure how wide the stripe width
>> should be. The data being written is very small, i.e. 1.5K to 10K, so
>> this would be many small files. The default stripe width is 64K, so it
>> seems that I would be losing most of the striping benefit unless I
>> lowered the stripe width. How low can (or should) I go with the width
>> before the sheer overhead of managing the set slows things down? ANY
>> ideas are welcomed.
>> Thanks,
>> Scott