-- 
Cameron Strom
syscrs_at_devetir.qld.gov.au
Brisbane, Queensland, Australia.

******************************************************************

From: Keith Lewis <keith_at_mukluk.cc.monash.edu.au>

I look after a DEC 2100 A500 with one CPU and 128 Mbytes. We normally run
100 users, mostly with no trouble, doing normal undergraduate computing:
C development, email, netscape and so on. The machine looks like it could
support more users, but the ones it has are developing an appetite for
more. Until recently they occupied a couple of severely limited Pyramid
systems; now that they have discovered the new system, their appetite is
growing.

Remember to create lots of ptys, set a high maxusers in your config file,
and have heaps of swap space.

Good luck,
Keith

*******************************************************

From: Paul A Sand <pas_at_keokuk.unh.edu>

We have a 2100 with 2 CPUs and 448 Meg. I would guess that you could
accommodate 200-250 concurrent users on your box. We've never been up
that high ourselves, however, so it really is just a guess.

-- 
-- Paul A. Sand                  | My list of what does not work is only
-- University of New Hampshire   | 1543662 bytes right now. It is not complete,
-- pas_at_unh.edu                | but does provide a good start.
-- http://pubpages.unh.edu/~pas  | (Mike Stump, in gnu.g++.bug)

**********************************************************************

From: "Michael A. Crowley" <mcrowley_at_mhc.mtholyoke.edu>

I'm running a 5000/260 with 320 Meg of memory for mail and such. We run
pine. It gets slow over 200 users; we've peaked at 231. We're also running
1500 logins/hr, which is one of the major correlates of slowness. When a
"last -22" shows a screen of users all logging in within the last minute
or two, the system shows a load of around 15-20.

I'm planning on moving to the machine you describe. A summary would be
really appreciated.
Mike

**************************************************************************

From: Selden E Ball Jr <SEB_at_LNS62.LNS.CORNELL.EDU>

Alex,

"Large" is relative and depends on what they're trying to do. We support
roughly 80 concurrent users on a DEC 3000-600 with 128MB. Most are
editing; people who run CPU-bound jobs are persuaded to use batch.

I hope this helps a little.
Selden
======
Selden E. Ball, Jr.                      Cornell University
Voice: +1-607-255-0688                   Laboratory of Nuclear Studies
FAX: +1-607-255-8062                     230A Wilson Synchrotron Lab
BITNET: n/a                              Judd Falls & Dryden Road
Internet: SEB_at_LNS62.LNS.CORNELL.EDU   Ithaca, NY, USA 14853-8001
HEPnet/SPAN: LNS62::SEB = 44284::SEB

*******************************************************************

From: Johnny Kwan <J.Kwan_at_utexas.edu>

We use a DEC AXP 3000/300 with 112Mb of memory to handle mail for more
than 30,000 users. The /var/spool/mail partition is even NFS-mounted on
6 client systems for users to access their mail files. During peak hours
there are more than 300 users accessing their mail files concurrently,
and our mail system can still handle it.

Johnny Kwan
Texas Education Network
kwan_at_utexas.edu

**********************************************************************

From: "Richard L Jackson Jr" <rjackson_at_portal.gmu.edu>

Our AlphaServer 2100 4/275 running OSF/1 3.2a in a multipurpose
environment supports about 350 concurrent users. We have 2GB of RAM and
13,000+ accounts.

-- 
Regards,
Richard Jackson                          George Mason University
UNIX Systems Engineer                    UCIS / ISO
Computer Systems Engineering

************************************************************************

From: "Scott Ruch - DTN 462-6082" <swr_at_unx.dec.com>

> There'll be some (very little) LAT access but mainly it will be TCP/IP -
> students coming across from other parts of the campus or from terminals
> on other systems run by the library.
I only know the LAT numbers off the top of my head - there's a document
(somewhere) that states how many users OSF/1 supports over each type of
transport. It all depends on how much memory/swap space you have. For
example, in V3.0 a DEC 3000-500 with 256Mb of memory and ~1Gb of swap was
able to sustain 1500 concurrent LAT users. I suggest you consult your
sales rep to determine the optimum configuration for your hardware.

Scott

************************************************************************

From: Hellebo Knut <bgk1142_at_bggfu2.nho.hydro.com>

Regards,

How many users are you planning to have ??

-- 
******************************************************************
* Knut Helleboe                             | DAMN GOOD COFFEE !! *
* Norsk Hydro a.s                           | (and hot too)       *
* Phone: +47 55 996870, Fax: +47 55 996342  |                     *
* Pager: +47 96 500718                      |                     *
* E-mail: Knut.Hellebo_at_nho.hydro.com     | Dale Cooper, FBI    *
******************************************************************

From: UTL2::ALEX 16-MAY-1995 08:04:57.54
To: MX%"Knut.Hellebo_at_nho.hydro.com"

I hope to be able to support 200-300 concurrent users; anything more than
that is gravy.

Alex

Hi again !

The 2100 should be able to support 300 users with no problems provided
you have sufficient resources (memory/disk OK).

******************************************************************
*********************************************************************

From: William H. Magill <magill_at_dccs.upenn.edu>

At the moment I have a 3000/600 with 268 Meg of memory running 3.0,
AdvFS/LSM. The user base is roughly 2500; I routinely see 100+
simultaneous ELM users, and some uncounted number of POP users. Load
factors are rarely above 1.
The Medical School "literally last week" brought up a 2100 which they
plan to use to support some 3-5000 users. Again ELM and POP, with access
to TIN, Lynx and "general Unix", but not "compute intensive" work. I
think they also have 512 Meg. It is a 1-CPU box with 2 RZ26s (system
stuff) and 5 RZ28s for user data.

T.T.F.N
William H. Magill
Manager, PennNet Computing Services
Data Communications and Computing Services (DCCS)
University of Pennsylvania
Internet: magill_at_dccs.upenn.edu
          magill_at_acm.org
          magill_at_upenn.edu
http://pobox.upenn.edu/~magill/

Received on Thu May 25 1995 - 00:32:52 NZST
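[Summarizer's note: the capacity figures quoted in the replies above can be
reduced to a rough RAM-per-user ratio. The sketch below is not from the
original thread; it simply tabulates a few of the reported configurations
(Lewis, Crowley, Ball, Jackson, and Ruch's LAT test) and divides RAM by
concurrent users. Treat it as a back-of-envelope estimate only: workloads
differ widely (editing vs. pine vs. idle LAT sessions) and swap, kernel,
and buffer-cache memory are all ignored.]

```python
# Back-of-envelope RAM-per-user figures for the configurations reported
# in this thread.  Rough indicators only: swap space, kernel memory and
# buffer cache are ignored, and the workloads are not comparable.

reports = [
    # (system, RAM in MB, concurrent users reported)
    ("DEC 2100 A500 (Lewis)",        128,  100),
    ("5000/260 (Crowley, peak)",     320,  231),
    ("DEC 3000-600 (Ball)",          128,   80),
    ("AlphaServer 2100 (Jackson)",  2048,  350),
    ("3000-500 LAT test (Ruch)",     256, 1500),
]

def mb_per_user(ram_mb, users):
    """Naive RAM-per-user ratio; ignores swap, kernel and buffer cache."""
    return ram_mb / users

for system, ram, users in reports:
    print(f"{system:28s} {ram:5d} MB / {users:4d} users"
          f" = {mb_per_user(ram, users):5.2f} MB/user")
```

The interactive-login reports cluster around 1-2 MB of RAM per user
(Jackson's 2GB box is comfortably over-provisioned), while mostly idle
LAT sessions get by on far less, which matches Scott's point that the
limit is really memory and swap rather than the transport.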
This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:45 NZDT