HP OpenVMS Systems - Ask the Wizard
The Question is:

I recently asked some questions about memory residence and DECram, and have since learned more. What we want is to reduce the time it takes a third-party X Windows graphics application to load some graphics files. I tried DECram, and I also tried global buffers and running two instances of the application, but neither approach helped.

I tested by repeatedly alternating between opening the same two large graphics files in the application. The first open of the files takes a little longer while the process adjusts its working set and virtual address space, but subsequent opens are almost as slow, even though the process shows no page faults (I allowed the application an unusually large working set). So I figure there is no need to pursue memory residence, DECram, or DCL INSTALL.

A SHOW PROCESS/CONTINUOUS showed that the loading is very CPU-bound. It also showed a lot of direct and buffered I/O, in a ratio of about 2:1, but no page faults. I ran the application from the console, thereby using the local transport.

Is there a way to get a distribution of the CPU time spent in the images, modules, and functions executed by the process, in order to figure out where to optimize the code? The application is single-threaded. The Alpha is a DS20 with 1 GB of memory and one processor. Would adding another processor board help?

Is there a way to save, swap out, and reload the address space of the process, thereby reducing the reload of the files to a memory copy of already-loaded files? Bumping the MHz does not appear to be an option. Would a faster graphics board be an option? How can I tell how much time the CPU spends waiting for the graphics card? Are wait cycles for a slow graphics card counted as CPU time?

We could of course use a process and a window per graphic, but we are trying to avoid that. Any suggestions are much appreciated. Thank you very much.

The Answer is:

The vendor will need to review the program activity, potentially working with the Customer Support Center and OpenVMS Engineering and/or DECwindows Engineering to determine the particular performance bottleneck involved.
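As a rough first measurement before engaging the vendor, the process accounting counters available through the F$GETJPI lexical function can quantify how much CPU time, direct I/O, buffered I/O, and page faulting one run actually consumes. The following is a minimal DCL sketch, not a definitive procedure: the image name GRAPHAPP.EXE is a hypothetical placeholder, and the deltas cover the entire run of the image in the current process rather than a single file open.

$! Sketch: snapshot process counters before and after running the image.
$! GRAPHAPP.EXE is a hypothetical placeholder for the application image.
$ cpu0 = F$GETJPI("","CPUTIM")     ! CPU time consumed, in 10-millisecond ticks
$ dio0 = F$GETJPI("","DIRIO")      ! cumulative direct I/O count
$ bio0 = F$GETJPI("","BUFIO")      ! cumulative buffered I/O count
$ flt0 = F$GETJPI("","PAGEFLTS")   ! cumulative page fault count
$ RUN GRAPHAPP.EXE
$ dcpu = F$GETJPI("","CPUTIM") - cpu0
$ ddio = F$GETJPI("","DIRIO") - dio0
$ dbio = F$GETJPI("","BUFIO") - bio0
$ dflt = F$GETJPI("","PAGEFLTS") - flt0
$ WRITE SYS$OUTPUT "CPU time (10-ms ticks): ''dcpu'"
$ WRITE SYS$OUTPUT "Direct I/O operations:  ''ddio'"
$ WRITE SYS$OUTPUT "Buffered I/O operations: ''dbio'"
$ WRITE SYS$OUTPUT "Page faults:            ''dflt'"

If the CPU delta is large while the I/O deltas remain modest across repeated opens, the time is more likely being spent decoding or rendering inside the image (or in the X transport) than in file access, and a per-module breakdown would then require a code profiler or the vendor's own instrumentation, as the answer above suggests.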