LSM Perf. problems

From: Ohad Imer <ohad_at_comport.com>
Date: Fri, 2 Aug 1996 16:19:45 -0400 (EDT)

Hello,

   I have an AlphaServer 2100A with a single 5/250 CPU and two KZPSC
   3-port RAID controllers. I used LSM to mirror root, swap, usr, and
   two more partitions on the two internal drives with no problems!
   (The OS is 3.2F.)

   Then I used LSM to create a raw striped volume over three RAID 5
   "re" sets - write-back cache enabled - spread across the two
   controllers. When I used dd to write to or read from the
   /dev/rvol... device I got very poor numbers - only about 500 KB/sec
   with a very low I/O rate! I tried different LSM stripe widths, with
   no effect on performance! I then broke up the volume and tested
   throughput straight to /dev/rre0g - I got a write rate of about
   9 MB/sec and an I/O rate of about 600 I/Os per second.
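
   For reference, the dd tests were roughly along these lines (block
   size, count, and the rootdg disk group name are placeholders, not
   my exact invocation):

      # write test against the raw LSM volume
      dd if=/dev/zero of=/dev/rvol/rootdg/data-vol bs=64k count=4096

      # same write test straight at one raw RAID set partition
      # (destructive - only done after breaking up the volume)
      dd if=/dev/zero of=/dev/rre0g bs=64k count=4096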

   I then created the volume again using LSM and tried it with Sybase -
   initializing the 24 GB into tablespace took only 8 minutes! After
   loading the database, I tried a count(*) on a non-indexed column,
   which should do a full table scan. Monitoring I/Os, I only saw about
   30 I/Os per second, with total throughput around 500 KB/sec, once in
   a while shooting up to 4 MB/sec for a short time. It seems like LSM
   is causing a major bottleneck - I expected some overhead, but not
   that bad!
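
   The query itself was nothing fancy - just something like this from
   isql (the table name here is only a placeholder):

      select count(*) from big_table
      go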

   The performance was the same when I used only two re sets, one per
   controller.


   The volassist and voledit commands I used:

   volassist -U gen make data-vol 24g layout=stripe nstripe=3 \
   stripe_width=128 re0 re8 re9

   voledit set user=sybase mode=0600 data-vol
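
   The layout as LSM sees it can be listed with the standard volprint
   utility; I can send that output too if it helps:

   volprint -ht data-vol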

   Thanks a lot for your time!

   Ohad