HP OpenVMS Systems: Ask the Wizard
The Question is:

The original system is a MicroVAX 3100 Model 90, with a Numonix upgrade to what I believe are Model 98 capabilities: OpenVMS V5.5-2, DECnet Phase IV, five RZ28s on the 'A' SCSI bus and five more RZ28s on the 'B' SCSI bus. Another system has been acquired: an AlphaStation 200 4/233 running OpenVMS V7.2-1 and DECnet Phase IV, with a 2 GB system disk and five 9 GB disks on the internal SCSI bus.

On the AlphaStation side, some changes to the PQL system parameters have been made to give the FAL processes more resources. At certain times of the day, BACKUP will be used to make image copies of some of the disks (2 GB RZ28s) on the VAX, with the target being savesets on two of the 9 GB disks on the Alpha. During this time nothing significant will be running on either system except the BACKUP(s); these backups are the major use of DECnet on the two systems. The goal is to perform the transfers as quickly as possible. Various changes to the user account running BACKUP have been made, per suggestions found in the system manager's guide and elsewhere.

With 10 Mbit per second Ethernet, which is all that is possible with the VAX, the transfer through DECnet appears to be the major bottleneck. The current Ethernet line settings on the VAX are the defaults:

  Line               = SVA-0
  Receive buffers    = 10
  Controller         = normal
  Protocol           = Ethernet
  Service timer      = 4000
  Hardware address   = 08-00-2B-BD-5C-DE
  Device buffer size = 1498

Of interest here are the receive buffers and the device buffer size. In particular, the 1498-byte buffer size is strange to one who is used to seeing things in powers of 2 (512, 1024, 16384, etc.). I am aware that there is overhead in DECnet packets, and probably other considerations I am not aware of and that are not in the network manual, which I have been reading. I have already set the EXECUTOR PIPELINE QUOTA to 10,000 and the MAXIMUM BUFFERS to 200; since these are quotas, this seemed harmless enough.
I hesitate to get into the SEGMENT BUFFER SIZE, the BUFFER SIZE, and the LINE DEVICE BUFFER SIZE, due to ignorance and the sure knowledge that tuning can work in both directions. There are also the saveset block sizes that BACKUP uses: the default for tape is 32256, and I have seen larger sizes specified. I am unaware of any considerations for the block size of savesets written to disk, or for this block size when transferring over DECnet.

Questions:

1) How is the LINE DEVICE BUFFER SIZE for the Ethernet line calculated, and are there any suggestions? Would a rather large line buffer size on both systems possibly yield significantly better performance for the task in question?
2) Any additional DECnet Phase IV tuning suggestions to speed the transfer of the backup savesets?
3) Any advice on the size of the backup saveset, knowing it will go to a disk over DECnet on Ethernet?
4) Anything pertinent that I have forgotten to include or ask?

Keep in mind that the major use of DECnet is for the backups, and that the systems will not be doing other work during the backups; tuning for optimum speed on this task, with possible detriment to other DECnet activity, is acceptable.

The Answer is:

With this many disks, you will want to replace the use of the one megabyte per second (theoretical peak bandwidth) network with a local tape drive. Consider archiving from disk to disk. Also consider consolidating to fewer and larger SCSI disks, given that the RZ28 disk drive has a formatted capacity of circa 2.688 GB.

You mention no measured data transfer rates. The (unattainable) theoretical data transfer time for these ten RZ28 disk drives is over seven hours. (Why unattainable? All disk, I/O bus, system, operator, and network overhead was ignored for this calculation. The math used: (2688 megabytes * 10 disks) / (1 megabyte per second * 60 sec per min * 60 min per hour).)

Also be aware that setting larger values can (somewhat surprisingly) occasionally reduce throughput.
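The arithmetic above can be checked with a short script. This is a sketch using only the figures quoted in the answer: the circa 2688 MB formatted RZ28 capacity and the roughly 1 MB/s theoretical peak of 10 Mbit/s Ethernet.

```python
# Theoretical (unattainable) time to move ten RZ28 disks over
# 10 Mbit/s Ethernet, ignoring all disk, I/O bus, system, and
# protocol overhead -- the same calculation as in the answer.

disk_mb = 2688           # formatted RZ28 capacity quoted above, in megabytes
disks = 10
rate_mb_per_sec = 1      # ~1 MB/s theoretical peak for 10 Mbit/s Ethernet

seconds = (disk_mb * disks) / rate_mb_per_sec
hours = seconds / (60 * 60)
print(f"{hours:.2f} hours")   # about 7.47 hours, i.e. over seven hours
```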
The line device buffer size of 1498 bytes comes from the Ethernet frame format, not from any power of two: the maximum Ethernet data field is 1500 bytes, and DECnet Phase IV uses two of those bytes for its own length field, leaving 1498 bytes of DECnet data per frame.
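To see how this frame size interacts with BACKUP's block size, consider (as a rough sketch that ignores per-segment protocol headers) how many 1498-byte frames a single default 32256-byte saveset block occupies on the wire:

```python
import math

frame_payload = 1498     # DECnet Phase IV line device buffer size (bytes)
backup_block = 32256     # default BACKUP saveset block size for tape (bytes)

# Number of Ethernet frames needed to carry one saveset block,
# assuming (simplistically) each frame carries a full payload.
frames = math.ceil(backup_block / frame_payload)
print(frames)            # 22 frames per saveset block
```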