SUMMARY: HSZ70 and target ID assignments for optimal I/O

From: Michael Grau <Michael.Grau_at_alphawest.com.au>
Date: Tue, 15 Dec 1998 08:25:57 +0800

My original question is at the base of this text.

Firstly, it seems the OS doesn't care what the target IDs are in terms of
managing an I/O queue to a device. So the investigation really starts
with the controller(s) and the layout of disks in the cabinets.

Most respondents felt there is some benefit to spreading the I/O load within
such a system, but not in the way I envisaged. I was thinking logically from
the OS's perspective, but the controller and cabinets handle assignments a
little differently. It seems the DXXX assignment is purely logical: it
identifies the disks but does not correspond to the physical path to them,
and therefore does not influence speed. Others have said it better than I,
so my thanks to Rodrigo, Joel, Alan and Peter, whose answers follow.

Result: I will not change DXXX allocations, but I will investigate one or two
settings in the controller and the physical disk layout. By the way, I was
looking at a system the other day and, by accessing the controller, found
70 GB of disk space that was configured in the controller but had either
never been assigned under UNIX or had been dropped in a subsequent boot (no
entries in fstab); see SHOW DEVICES for more information. I guess the lesson
is: if you don't use a disk straight away, the users won't miss it,
especially if it was installed along with a number of other disks.
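
To spot such orphans, a quick sanity check at the controller prompt (a
minimal sketch) is to compare what the controller knows about against what
it presents to the host:

   SHOW DEVICES
   SHOW UNITS

SHOW DEVICES lists every physical disk configured in the controller, while
SHOW UNITS lists the DXXX logical units actually presented to the host. Any
disk in the first list that backs no unit in the second has never been
offered to UNIX at all; the units that do exist can then be checked against
/etc/fstab on the host.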

The designations of the units inside the HSZ70 have no impact on
performance as seen from the OS
(i.e. it makes no difference in access time whether you name
a disk D206 or D506).
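
As I understand the naming convention, the unit number itself encodes the
host-side SCSI address: in Dxyy, x is the target ID and yy is the LUN, so
D206 is presented as target 2, LUN 6 and D506 as target 5, LUN 6. It is
purely an addressing label and says nothing about where the disk physically
sits in the cabinet.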

What is certainly of great importance is the physical layout
of the disks inside the RA7000 or ESA10000 cabinet.
The guidelines are:
1. Install the disks in the lowest positions, i.e. the slots that correspond
   to the lowest ID on each channel. The idea is that such a disk is
   physically closer to the channel input than a disk installed higher in
   the cabinet.
2. Prefer the lowest channels. Some analysts mention that disks physically
   installed on, for example, channel 0 are accessed more rapidly than disks
   installed on channel 6. I have not seen any real or consistent evidence
   of this, but it was a recommendation from the people who gave us a
   seminar on HSZ70 configuration.
3. Set maximum_cached_transfer_size to 1024, and set cache_flush_timer to
   at least 60 (seconds); a sketch of the commands follows this list.
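
From memory of the HSZ70 CLI, point 3 translates to something like the
following. This is a sketch only: D100 stands in for each of your units
(maximum_cached_transfer_size is, as I recall, a per-unit qualifier, while
cache_flush_timer is a controller-wide setting), so check the exact
spellings and limits against the HSZ70 CLI reference.

   SET D100 MAXIMUM_CACHED_TRANSFER_SIZE=1024
   SET THIS_CONTROLLER CACHE_FLUSH_TIMER=60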




We have a couple of HSZ70 controllers on site. For Oracle it may be
beneficial to arrange the disks so that they are on different channels from
the controller. The LUNs you assign have no real performance bearing on how
the controller will function, but the positioning of the disks within the
cabinet will affect performance. Unlike the older HSZ controllers with
discrete storage shelves, the HSZ70 SCSI channels run VERTICALLY up the
cabinet, not across. Therefore, disks on the same horizontal shelf are on
different controller SCSI channels. Digital (Compaq) are soon to bring out a
nine-port SCSI hub for use with HSZ70 controllers. On querying the
performance implications of this I was told that the disks are now the
limiting factor.
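
The layout matters most when a unit spans several disks. As a hedged sketch
of spreading a stripeset across channels (the storageset and unit names are
illustrative, and in the usual DISKptl naming the first digit encodes the
port/channel, so the three members below sit on channels 1, 2 and 3):

   ADD STRIPESET S1 DISK10000 DISK20000 DISK30000
   ADD UNIT D100 S1

Because the HSZ70 channels run vertically, those three members could sit
side by side on the same horizontal shelf and still be on three different
channels.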

In a dual controller setup, it is common to split the IDs between
the controllers. This is done with the command:

SET {THIS_CONTROLLER|OTHER_CONTROLLER} PREFERRED_ID=(x,y,...)

With both controllers operating normally, each controller will
service I/O requests only for the target IDs that it
prefers. If the logical units aren't evenly distributed
between the controllers, one may have more work to do than
the other.
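
For example, with six target IDs enabled, a dual-redundant pair might split
them like this (the particular IDs are illustrative):

   SET THIS_CONTROLLER PREFERRED_ID=(0,2,4)
   SET OTHER_CONTROLLER PREFERRED_ID=(1,3,5)

Units whose target digit is 0, 2 or 4 are then serviced by one controller,
and those on targets 1, 3 and 5 by the other.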

So, the next question is whether it is possible to unbalance
the load enough that one controller is saturated while
the other could still respond to requests. The only way I've seen
to saturate a controller is to produce a read-only I/O load of
single-sector transfers that are satisfied entirely from
the cache. I think such a load will saturate the shared
controller backplane before the SCSI bus saturates. Under that
load, the other controller can't buy you anything, because
it has to share the bandwidth available from the shared
backplane.

In the host device driver, some data structures are allocated
on a per-target basis. Logical units on the same ID have to
share the data structures for that ID. If a data structure
becomes a bottleneck, then having too many LUNs per ID may
limit performance. I'm told this is about as unlikely as the
previous example.

Still, future controllers may not have the limitations of the
current generation. It might be wise to get in the habit of
spreading units across the available target IDs before using
additional LUNs.
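
Concretely, instead of filling the LUNs on one target before moving on
(D100, D101, D102, ...), that habit means allocating one unit per available
target first and only wrapping around to a second LUN afterwards: D100,
D200, D300, then D101, D201, D301 (illustrative unit numbers, assuming
targets 1 to 3 are enabled).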



This should be a simple question to answer.

We have an Alpha 4100 server running Digital UNIX, with RAID cabinets
connected via HSZ70 controllers. When this was originally set up, the
procedure was to install disks using sequential numbers for SCSI target IDs.
That is, as each disk was added, a new LUN was generated on the current
target before moving to the next sequential SCSI target ID, as follows:
D104, D105, D106, D200, D201, etc.

Will there be significant gains in moving disks to different SCSI targets?
These disks are typically 9 GB, running Oracle across multiple disks. They
are not striped, but each Oracle database may straddle a number of disks,
and because these were installed sequentially, a given Oracle instance is
likely to straddle several disks that all share the same target ID. On that
basis I would think it beneficial to move them to different target IDs, but
I am unsure whether there would be any gains. Can anyone tell me the
relationship between SCSI targets in this setup? Is this just a token
assignment by the HSZ70 controller, or does it have an I/O impact as on
other systems?

I will summarise.

Michael Grau
Network Engineer

Enterprise Managed Services
AlphaWest Pty Ltd
Ph: (08) 9237 3041
Mobile: 041 331 5820
email: michael.grau_at_alphawest.com.au
visit our website : http://www.alphawest.com.au
