I use clustered machines for coding and testing my Ph.D. thesis. My
thesis project uses autonomous agents to manage a Network File System (NFS)
server in a purely peer-to-peer organization. Logically, it deals with up to 9 exabytes per
user slice. Data replication and load balancing are handled
automatically, and backups are a thing of the past.
Control over what is stored where is totally abstracted from the
users, as is the storage itself, and the design allows systems to be
added and removed without any interruption to service.
The sample data it produces comes to around 700 megabytes per day,
which I store on a remote NFS server at the university. My
AlphaServer system mostly sits for 17 hours at a time doing discrete
cosine transforms (DCTs), inverse discrete cosine transforms (IDCTs),
and fast Fourier transforms (FFTs), looking for patterns in the data.
My beloved VAX system is a recent arrival and is much treasured. I
hope to save more cash soon and add to the cluster, which can only
cut processing and testing time further. Another 166 MHz AS200 would
possibly be a good goal.
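For a rough flavour of the sort of batch analysis these machines grind
through, here is a minimal Python sketch (not my actual thesis code;
the file name, sample rate, and result count are made-up placeholders)
that runs an FFT over a day's worth of samples and reports the
strongest periodic components:

    # Minimal sketch: scan a day's samples for dominant periodic components.
    # Assumptions (placeholders, not from the real project): samples.dat holds
    # float32 values, one reading per second, and we report the top five bins.
    import numpy as np

    def dominant_frequencies(path="samples.dat", sample_rate=1.0, top_n=5):
        data = np.fromfile(path, dtype=np.float32)
        data = data - data.mean()                 # remove the DC offset
        spectrum = np.abs(np.fft.rfft(data))      # magnitude of the real FFT
        freqs = np.fft.rfftfreq(data.size, d=1.0 / sample_rate)
        strongest = np.argsort(spectrum)[::-1][:top_n]
        return [(freqs[i], spectrum[i]) for i in strongest]

    if __name__ == "__main__":
        for freq, power in dominant_frequencies():
            print(f"{freq:.6f} Hz  power={power:.1f}")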
I am also trying to wave the HP OpenVMS flag further in my local university
computer club. They have recently acquired a slightly damaged but
since-repaired AlphaServer 1000 system with a pretty meaty 266 MHz CPU and 128 megabytes of RAM to
use as a shell and bash-around box at the club. I will be helping to set
this up and run it.
— Alastair Boyanich, Australia
Murdoch Information and Computing Society (MINCS)