kernel settings for large jobs on a 500au workstation

From: Neil B. Morley <morley_at_fusion.ucla.edu>
Date: Wed, 27 Jan 1999 17:58:58 -0800

I am running a MATLAB script called SPM for analysis of MRI images on a DEC
Alpha 500au with 1088 MB RAM, > 4 GB total swap space, and Digital Unix 4.0D.

As I increase the number of data files (around 2000) and the total size of
the process (around 2.5 GB), I reach a point where the analysis no longer
completes and the process goes into a kind of computational coma (using CPU
cycles but not progressing toward completion). I am trying to diagnose this
problem both from the SPM/MATLAB side and from the DEC Alpha/OSF side.
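
For concreteness, here is a minimal sketch of the kind of check I mean from
the OSF side, using the standard POSIX getrlimit() interface to print the
limits a process actually sees at run time. (I am assuming the usual
RLIMIT_DATA/RLIMIT_STACK/RLIMIT_NOFILE names from the manpages; I have not
confirmed all of them against the 4.0D headers.)

    #include <stdio.h>
    #include <sys/time.h>      /* some older systems want this first */
    #include <sys/resource.h>

    /* Print the soft and hard values of one resource limit. */
    static void show(const char *name, int resource)
    {
        struct rlimit rl;

        if (getrlimit(resource, &rl) != 0) {
            perror(name);
            return;
        }
        /* RLIM_INFINITY prints as a very large number. */
        printf("%-14s soft=%lu hard=%lu\n", name,
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    }

    int main(void)
    {
        show("RLIMIT_DATA",   RLIMIT_DATA);   /* data (heap) segment size */
        show("RLIMIT_STACK",  RLIMIT_STACK);  /* stack size */
        show("RLIMIT_NOFILE", RLIMIT_NOFILE); /* open file descriptors */
        return 0;
    }

If the soft data-size limit comes back near 2.5 GB, that would go a long way
toward explaining where the job stalls.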

Does anyone have suggestions for kernel settings for large data-crunching
jobs (VSZ > 2.5 GB) that also have a large number of open data files
(> 2000)? Are there critical kernel parameters that should be adjusted? Are
there any "secret" limits on the size (either total memory or open files) of
individual jobs on Alpha machines?
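
On the open-files side, a brute-force probe along these lines (just a
sketch, assuming /dev/null can be opened) would at least show the effective
per-process descriptor ceiling the job is running under:

    #include <stdio.h>
    #include <fcntl.h>

    int main(void)
    {
        int n = 0;

        /* stdin/stdout/stderr already hold 3 descriptors, so the true
           ceiling is roughly n + 3 when open() finally fails. */
        while (open("/dev/null", O_RDONLY) != -1)
            n++;

        perror("open");
        printf("opened %d additional descriptors before failure\n", n);
        return 0;
    }

If that number lands near 2000, the file-count side of the problem would be
confirmed independently of MATLAB.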

thanks to all...

neil morley - ucla



________________________________________
Neil B. Morley
43-133 Engineering-IV
Mechanical & Aerospace Engineering Dept.
UCLA, Box 951597
Los Angeles, CA 90095-1597

Tel: 310/206-1230
Fax: 310/825-1715
Email: morley_at_fusion.ucla.edu
________________________________________