SUMMARY: Stripe size for RAID 0+1

From: Andy Cohen <acohen_at_cognex.com>
Date: Wed, 01 Mar 2000 10:41:50 -0500

Hi -

I received some very helpful, if contradictory, replies, though I haven't yet
had a chance to try any of the suggestions.

Some people said I should make the stripe size equal to the block size of the
database -- in our case that is 4 KB. Others said that 256 KB is a good
'in-between' size. Others suggested 8-32 KB or, in one extreme case, 1 MB.
Several people, though, felt that with Oracle we wouldn't see much difference
in performance across stripe sizes.

I need to do some more research.

Thanks to everyone!

Andy Cohen
Oracle Database Administrator
Cognex Corporation
1 Vision Drive
Natick, MA 01760-2059
v: 508/650-3079
f: 508/650-3337
e: acohen_at_cognex.com

ORIGINAL QUESTION:
=================
> We are about to configure an RA3000 to have RAID 0+1. The primary purpose of
> this machine (a new DS20E running 4.0F w/3GB RAM) is as an Oracle 7 database
> server. We need to determine the strip (or is it stripe?) size for this.
> How do we go about determining whether we want a 'large' or 'small' size?
> Our primary tuning concern is month-end reporting, so we think we'd like this
> system 'tuned' or optimized for reads.


REPLY DETAILS:
==================================================
Make it as large as the block size of the database, but you probably won't
notice much of a difference. This is because of the tuning inside Oracle
itself: the main levers for tuning Oracle are memory allocation and the
amount of memory, and Oracle is designed to optimize its own disk accesses.
You could also use a RAID 5 set, which will probably be faster because more
disks can service reads at the same time.

Your limiting factor for the disks will be your single SCSI bus.

Remember that for very good performance the amount of memory needs to
be about 1 or 2 percent of the amount of space used inside the
database. Any memory inside the RAID controller won't really help
because of the disk cache inside Oracle.
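The 1-2 percent rule of thumb above is simple arithmetic. A quick sketch with
an assumed 100 GB database (the size is my example, not from the reply):

```python
# Illustrative arithmetic for the "memory ~ 1-2% of database size" rule.
# The database size below is an assumed example, not from the post.
GB = 1024 ** 3
db_space_used = 100 * GB

low = int(0.01 * db_space_used)   # 1% -> about 1 GB of memory
high = int(0.02 * db_space_used)  # 2% -> about 2 GB of memory
print(f"suggested memory: {low / GB:.0f}-{high / GB:.0f} GB")
```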
+++++++++++++++++++++++++++++
        For I/O loads where request rate is important and the
        application can generate multiple requests in parallel,
        a chunk size of 10 to 20 times the average I/O size is
        good. This increases the chances that random requests
        can be serviced by different disks and that a given
        request won't span more than one disk (a split I/O).

        For I/O loads where data rate is important, the opposite
        is true. Such applications will tend to use very large
        I/O requests and it becomes desirable to have one request
        serviced by multiple disks in parallel. However, the
        effectiveness of this depends on the I/O size. If you
        have an application doing multiple megabyte transfers
        then a smaller chunk size is desirable. However, I/O
        requests are expensive in themselves and making the
        chunk size too small will cause more time to be spent
        processing the I/O than transferring data. As an extreme
        example of this, a chunk size of 2 KB on a multi-member
        stripe set would split a single 8 KB I/O request across
        four disks.

        The documentation does make the general recommendation that
        a chunk size of 256 KB is good for many work-loads.
++++++++++++++++++++++++++++++++++
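The two regimes in the reply above can be put in rough numbers. This is my own
back-of-envelope sketch, not from the thread: it assumes request start offsets
are uniformly distributed, and the parameter values are examples.

```python
import math

KB = 1024

def split_io_probability(io_size: int, chunk_size: int) -> float:
    """Chance a randomly placed request crosses a chunk boundary,
    assuming uniformly distributed start offsets."""
    if io_size > chunk_size:
        return 1.0  # a request bigger than a chunk always spans disks
    # A split occurs when the request starts within the last
    # (io_size - 1) bytes of a chunk.
    return (io_size - 1) / chunk_size

def requests_per_io(io_size: int, chunk_size: int) -> int:
    """Per-disk requests an aligned I/O decomposes into."""
    return math.ceil(io_size / chunk_size)

# Request-rate regime: an 8 KB random read against various chunk sizes.
for chunk in (8 * KB, 128 * KB, 256 * KB):
    p = split_io_probability(8 * KB, chunk)
    print(f"chunk {chunk // KB:>3} KB -> split chance {p:.1%}")

# Data-rate extreme from the reply: 2 KB chunks shred an 8 KB request.
print(requests_per_io(8 * KB, 2 * KB), "per-disk requests")  # 4
```

With a 128 KB chunk (16x the 8 KB read) the split chance drops to about 6%,
which is the point of the 10-20x guideline.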
I usually take the defaults for the stripe size if using an HSZ-class
controller; however, I change the MAX_CACHED_TRANSFER_SIZE to 256 for the
units. Do not use LSM to do RAID 5, as it will waste a lot of CPU and bus
bandwidth.
++++++++++++++++++++++++++++++++++
The chunk size should be determined by your application. If, for instance,
you are going to use your RAID volumes for an Oracle database, it is a good
idea to make the chunk size and the Oracle block size equal (for instance,
between 8 KB and 32 KB). A smaller chunk size will cause more system calls
for read/write operations; a bigger chunk size will reduce the parallelism
effect of striping. So the optimal size is quite dependent on the
application.
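One way to turn this advice into a concrete number: Oracle's largest
sequential read is roughly db_block_size * db_file_multiblock_read_count, so
a chunk at or above that product keeps a single multiblock read on one disk.
A hedged sketch; the multiblock read count below is an assumed example, not
a value from the thread:

```python
# Assumed example values; the original poster's db_block_size was 4 KB,
# but db_file_multiblock_read_count here is hypothetical.
KB = 1024
db_block_size = 4 * KB
db_file_multiblock_read_count = 16

max_multiblock_read = db_block_size * db_file_multiblock_read_count
print(f"largest multiblock read: {max_multiblock_read // KB} KB")  # 64 KB
```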
Received on Wed Mar 01 2000 - 15:34:56 NZDT

This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:40 NZDT