I had a rather off-topic question, so I didn't send my original
question to this list but to Alan (alan_at_nabeth.cxo.dec.com) and
Tom Blinn (tpb_at_doctor.zk3.dec.com) from Compaq. However, I was
encouraged to send a summary to all of you, because some of you
might be interested in this as well. My question was about the
internal system architecture of Compaq's AlphaServer machines.
Compaq seems to lack public documentation about this, but the
answers from Alan and Tom are good enough to give a general idea.
I would also like to take this opportunity to say that the presence
of Alan and Tom on this list is very useful.
The answers from Alan and Tom were (in that order):
------------------------ Alan ----------------------------------------
----------------------------------------------------------------------
First, understand that the CPU architecture is a very different
thing from the architecture of a specific system using an
Alpha CPU.
Once upon a time, there was a document, the System Reference
Manual, for the Alpha Chip series. It would have described
the basic architecture of the chip and how most consoles are
expected to behave. You can probably still find copies of
it on our corporate web and ftp sites somewhere, but I
wouldn't know where to start looking (well, start with the
.digital.com and .dec.com sites first).
System architectures are another matter entirely. A low
end workstation might be as simple as having an internal
bus that has a single CPU, a couple of memory arrays,
and a few PCI slots on it. Something like:
         CPU ---+--- Memory
                |
  --+           |           +--
    |           |           |
  --+--- EISA --+--- PCI ---+--
    |                       |
  --+                       +--
On the other end of things, a complex system will have
the capability of having multiple CPUs, large memory
modules that are themselves complex subsystems, and I/O
subsystems that are hierarchical in design.
From the logical Tru64 UNIX standpoint, the "nexus" is
where all the highest-level components connect: CPUs,
memory, and I/O subsystems. In the simplest systems,
one or a few PCI busses connect directly to the nexus.
In the more complex systems, components that support
multiple I/O busses connect to the nexus. The specific
names of these vary according to system, but they're
often just called "hoses" (the analogy being a large
fire hose that carries significant amounts of water).
In all current systems, PCI busses connect to the system
bus or to the I/O hose. First-generation Alpha systems
used a wide variety of end I/O busses; TURBOchannel,
XMI, VAXBI, and whatever private bus was used by the
DEC 4000 family.
So, you might have a complex system that looks like:
Nexus -+- Hose0 -+- PCI0 -+- Slot 0
       |         |        +- Slot 1
       |         |        +- Slot 2
       |         |        +- Slot 3
       |         |
       |         +- PCI1 -+- Slot 0
       |         |        +- Slot 1
       |         |        +- Slot 2
       |         |
       |         +- PCI2 -+- Slot 0
       |         |        +- Slot 1
       |         |        +- Slot 2
       |         |        +- Slot 3
       |         |
       |         +- PCI3 -+- Slot 0
       |                  +- Slot 1
       |                  +- Slot 2
       |
       +- Hose1 -+- PCI0 -+- Slot 0
                 |        +- Slot 1
                 |        +- Slot 2
                 |        +- Slot 3
                 |
                 +- PCI1 -+- Slot 0
                 |        +- Slot 1
                 |        +- Slot 2
                 |
                 +- PCI2 -+- Slot 0
                 |        +- Slot 1
                 |        +- Slot 2
                 |        +- Slot 3
                 |
                 +- PCI3 -+- Slot 0
                          +- Slot 1
                          +- Slot 2
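Purely as an illustration (not Tru64 kernel source; the names and
structure here are invented), a hierarchy like the one above maps
naturally onto a small tree of nodes. A minimal C sketch:

    /*
     * Illustrative only: a toy tree mirroring the nexus/hose/PCI/slot
     * hierarchy sketched above.  The names and layout are hypothetical
     * and are not taken from the Tru64 UNIX kernel sources.
     */
    #include <stdio.h>

    struct node {
        const char  *name;     /* e.g. "Nexus", "Hose0", "PCI1", "Slot 2" */
        struct node *child;    /* first component attached below this one */
        struct node *sibling;  /* next component at the same level        */
    };

    static void print_tree(const struct node *n, int depth)
    {
        for (; n != NULL; n = n->sibling) {
            printf("%*s%s\n", depth * 2, "", n->name);
            print_tree(n->child, depth + 1);
        }
    }

    int main(void)
    {
        struct node slot1 = { "Slot 1", NULL,   NULL   };
        struct node slot0 = { "Slot 0", NULL,   &slot1 };
        struct node pci0  = { "PCI0",   &slot0, NULL   };
        struct node hose0 = { "Hose0",  &pci0,  NULL   };
        struct node nexus = { "Nexus",  &hose0, NULL   };

        print_tree(&nexus, 0);
        return 0;
    }

A child/sibling representation keeps each node small no matter how many
busses or slots hang off it.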
The most current generation of GS systems goes a step further,
with each Quad Building Block (QBB) having a fairly complex
architecture, and systems being made of multiple QBBs.
Some system documentation will have high-level block
diagrams of the architecture for that system. You might
be able to find whitepapers or other documentation on
the website for the particular system that will have
such information.
-------------------------------------------------------------------------
------------------------ Tom -----------------------------------------
There is an Alpha Architecture book; I know it was originally put out by
what was then Digital Press, and I believe it's available from Prentice
Hall. When I'm back in the office next week I can probably get you the
ISBN, but you should be able to get your local university bookseller to
find the book for you. The author is Richard Sites. However, it's more
focused on the Alpha processor than on system design.
In addition to the Alpha processor, usually implemented in a collection
of chips, are the interfaces to memory and the I/O busses. A typical
system has one or more processors, one or more memory banks, and one or
more I/O busses. In the design of the UNIX kernel, there is a need to
control the hardware, so there is a need to develop a collection of
software data structures to contain things like state information and
topology information for the hardware.
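As a purely hypothetical sketch of that kind of bookkeeping (the field
names are invented and do not match the real Tru64 UNIX structures),
each hardware component might be described by a record holding its
driver name, instance number, state, and where it attaches:

    /*
     * Hypothetical sketch only: the sort of per-component record a
     * kernel might keep.  Field names are invented and are not the
     * real Tru64 UNIX data structures.
     */
    #include <stdio.h>

    enum hw_state { HW_DOWN, HW_PROBING, HW_ONLINE };

    struct hw_component {
        const char          *driver;   /* e.g. "mcbus", "pci", "psiop"    */
        int                  instance; /* 0 for pci0, 1 for pci1, ...     */
        enum hw_state        state;    /* current status of the component */
        struct hw_component *parent;   /* what it attaches to (topology)  */
    };

    int main(void)
    {
        struct hw_component mcbus0 = { "mcbus", 0, HW_ONLINE, NULL    };
        struct hw_component pci0   = { "pci",   0, HW_ONLINE, &mcbus0 };

        printf("%s%d attached to %s%d, state %d\n",
               pci0.driver, pci0.instance,
               pci0.parent->driver, pci0.parent->instance, pci0.state);
        return 0;
    }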
The term "nexus" (look it up in a good dictionary and compare) refers
to the central point in the system to which the CPUs, memories, and
I/O busses are attached. So it's a central control structure. In a
typical Alpha system, it's the software construct that represents the
"core logic" of the system, the chips that interface the CPUs, memories,
and I/O busses.
For each bus-to-bus bridge, or I/O adapter, there is a driver of some
sort (software) that has a unique name. In your system, there is a
bus interface (really implemented in the core logic chips, I suspect)
that's called an "mcbus" and the driver uses that name. Actually, I
would bet the letters "mc" are capitalized in the hardware manuals and
that they stand for something, but I don't know what off-hand. I'd
have to find the hardware documents (design documents, etc.) or look
at the software sources to see if it's explained there. Anyway, the
"mcbus" is interfaced to a PCI bus through "pci" software, and since
you have a "pci1" you probably also have a "pci0". The number are
just keeping track of how many of a given interface is present.
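The numbering itself is simple bookkeeping. A hypothetical C sketch
(not the actual Tru64 mechanism) that hands out 0, 1, 2, ... per driver
name, so the first "pci" found becomes pci0, the next pci1, and so on:

    /* Hypothetical sketch of per-name instance numbering; not the
     * actual Tru64 UNIX mechanism. */
    #include <stdio.h>
    #include <string.h>

    struct counter { const char *name; int next; };

    static struct counter counters[16];

    static int next_instance(const char *name)
    {
        for (int i = 0; i < 16; i++) {
            if (counters[i].name && strcmp(counters[i].name, name) == 0)
                return counters[i].next++;
            if (counters[i].name == NULL) {
                counters[i].name = name;
                return counters[i].next++;
            }
        }
        return -1;  /* table full */
    }

    int main(void)
    {
        printf("mcbus%d\n", next_instance("mcbus"));  /* mcbus0 */
        printf("pci%d\n",   next_instance("pci"));    /* pci0   */
        printf("pci%d\n",   next_instance("pci"));    /* pci1   */
        return 0;
    }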
The PCI (that's "Peripheral Component Interconnect") bus was designed
jointly by several companies to be a replacement for some of the older
busses. It is now widely used, but will be replaced over time by newer
designs. Two of the older busses that are still in use are ISA and EISA
(ISA stood for "Industry Standard Architecture" and was widely used in
PCs at a time when IBM was trying to force the MCA bus, or MicroChannel,
on the market; ISA pretty much won out. EISA is the Extended ISA bus).
It's not uncommon in Alpha systems to have one or more PCI busses, and
to have an ISA or EISA bus connected to one of the PCI busses, and have
low speed stuff like the keyboard, mouse, serial console, and floppy
disk interface connected to the ISA or EISA bus. (It's possible to get
a single commodity chip that incorporates all those functions, and that is
what's usually used.) Each of those device functions has a driver.
The "ace" driver is for the serial lines, the "gpc" ("graphics PC", I
think) is the keyboard and mouse, "fdi" is the floppy drive interface.
You can also have other device or bus adapters that interface directly
to the PCI bus. It's also possible to bridge a PCI bus to another PCI
bus (with a PCI-to-PCI bridge chip, or PPB). For instance, many network
adapters attach directly to the PCI bus, and each would have a driver that
depends on the interface chips on the card. One commonly used chip was
Digital's "tulip" as it was called, and the driver for that is named
"tu" (short for "tulip"). (The choice of the driver name is somewhat
arbitrary.)
The SCSI subsystem provides another level of abstraction. Since in most
cases, the person who wants to access a SCSI-connected device doesn't
really care which kind of hardware is used to interface the SCSI bus to
the system, there is a layer of software that is designed to separate
device access logic from the details of the bus interfaces that connect
to, for instance, the PCI bus. (It's possible to do it differently, and
most of the backplane RAID controllers do it differently.)
So, for each SCSI bus, there is some piece of hardware that interfaces
the bus (and its attached devices) to the system, and there is a driver
for that bus interface. One of the oldest for the PCI bus was based on
a Symbios 810 interface chip, and the driver is called "psiop", which I
think stands for "PCI Symbios I/O Processor" but I may be mistaken. It
is not always obvious which driver name goes with which card. There is
a database used by the PCI bus probing code to do the matching. I will
not even attempt to explain how that works.
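Conceptually, though, that matching amounts to a table lookup keyed on
the vendor and device IDs a PCI device reports about itself. A purely
illustrative C sketch (the entries and the table format are invented;
this is not the real database used by the probing code):

    /* Illustrative sketch of matching a PCI vendor/device ID pair to a
     * driver name.  The IDs and table below are invented examples. */
    #include <stdio.h>

    struct pci_match {
        unsigned short vendor;
        unsigned short device;
        const char    *driver;
    };

    static const struct pci_match table[] = {
        { 0x1011, 0x0009, "tu"    },  /* hypothetical entry for a "tulip" NIC */
        { 0x1000, 0x0001, "psiop" },  /* hypothetical entry for a SCSI chip   */
    };

    static const char *match_driver(unsigned short vendor, unsigned short device)
    {
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
            if (table[i].vendor == vendor && table[i].device == device)
                return table[i].driver;
        return NULL;  /* no driver claims this device */
    }

    int main(void)
    {
        const char *drv = match_driver(0x1011, 0x0009);
        printf("driver: %s\n", drv ? drv : "(none)");
        return 0;
    }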
There are also device class drivers in the SCSI subsystem, for things
like disks (rz) and tapes (tz). Accesses to devices through the SCSI
device names (and drivers) get redirected through the lower-level
code to a specific adapter interface driver, which sends commands out
on the SCSI bus and gets information back and passes the results back
up through the software layers to the originator of the request. I'll
not even attempt to explain how this really works.
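As a rough picture of that layering (again a made-up sketch, not the
real rz or psiop code), the class driver hands a request to whichever
adapter driver sits underneath it through an indirection such as a
function pointer:

    /* Made-up sketch of the layering described above: a disk "class"
     * routine hands a SCSI command to whichever adapter driver the bus
     * is attached through.  Not real Tru64 UNIX code. */
    #include <stdio.h>

    struct scsi_adapter {
        const char *name;                                  /* e.g. "psiop0" */
        int (*send)(struct scsi_adapter *, const char *);  /* issue command */
    };

    /* Pretend adapter driver: in reality this would talk to the chip. */
    static int fake_send(struct scsi_adapter *a, const char *cmd)
    {
        printf("%s: sending %s on the SCSI bus\n", a->name, cmd);
        return 0;
    }

    /* Pretend class-driver entry point: it does not care which adapter
     * hardware is underneath, only that it can send commands. */
    static int disk_read(struct scsi_adapter *a)
    {
        return a->send(a, "READ(10)");
    }

    int main(void)
    {
        struct scsi_adapter psiop0 = { "psiop0", fake_send };
        disk_read(&psiop0);  /* request flows class driver -> adapter driver */
        return 0;
    }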
-----------------------------------------------------------
---sram
"Don't listen to what I say; listen to what I mean!" --Feynman
Salvador Ramirez Flandes PROFC, Universidad de Concepcion, CHILE
http://www.profc.udec.cl/~sram mailto:sram_at_profc.udec.cl