Unibus

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

The Unibus was the earliest of several computer bus and backplane designs used with PDP-11 and early VAX systems manufactured by the Digital Equipment Corporation (DEC) of Maynard, Massachusetts. The Unibus was developed around 1969 by Gordon Bell and student Harold McFarland while at Carnegie Mellon University.


The name refers to the unified nature of the bus; Unibus was used both as a system bus, allowing the central processing unit to communicate with main memory, and as a peripheral bus, allowing peripherals to send and receive data. Unifying these formerly separate busses allowed external devices to easily perform direct memory access (DMA) and made the construction of device drivers easier, as control and data exchange

A stored program computer. The Report presented a general organization and theoretical model of the computer, but not the implementation of that model. Soon designs integrated the control unit and ALU into what became known as the central processing unit (CPU). Computers in the 1950s and 1960s were generally constructed in an ad-hoc fashion. For example, the CPU, memory, and input/output units were each one or more cabinets connected by cables. Engineers used

A CPU's internal datapath busses) connects the CPU to memory and I/O devices. Typically a system level bus is designed for use as a backplane. Many of the computers were based on the First Draft of a Report on the EDVAC report published in 1945. In what became known as the Von Neumann architecture, a central control unit and arithmetic logic unit (ALU, which he called the central arithmetic part) were combined with computer memory and input and output functions to form

A Report on the EDVAC, written by John von Neumann in 1945, describing designs discussed with John Mauchly and J. Presper Eckert at the University of Pennsylvania's Moore School of Electrical Engineering. The document describes a design architecture for an electronic digital computer with these components: The attribution of the invention of the architecture to von Neumann is controversial, not least because Eckert and Mauchly had done

A bidirectional bus, often implemented as a three-state bus. To prevent bus contention on the address bus, a bus arbiter selects which particular bus master is allowed to drive the address bus during this bus cycle. Intel has used the term Dual Independent Bus (DIB) for two different purposes. The first one came when Intel changed from a single local bus to the DIB, using the external front-side bus to
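
The arbitration step just described can be pictured with a small sketch. This is not DEC's or Intel's actual arbitration logic; it is a hypothetical fixed-priority arbiter in C, where a bit mask of requesting masters comes in and exactly one grant bit comes out, so only one master ever drives the address bus in a given cycle.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fixed-priority bus arbiter: among all masters currently
 * requesting the bus (one request bit per master), grant the
 * lowest-numbered one.  Exactly one grant bit is ever set, so only one
 * bus master drives the address bus during a given bus cycle.         */
static uint8_t arbitrate(uint8_t requests)
{
    for (int m = 0; m < 8; m++) {
        if (requests & (1u << m))
            return (uint8_t)(1u << m);    /* grant goes to this master */
    }
    return 0;                             /* no master wants the bus   */
}

int main(void)
{
    uint8_t req = 0x0A;                             /* masters 1 and 3 request */
    printf("grant mask: 0x%02X\n", arbitrate(req)); /* prints 0x02             */
    return 0;
}
```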

A computer system, combining the functions of a data bus to carry information, an address bus to determine where it should be sent or read from, and a control bus to determine its operation. The technique was developed to reduce costs and improve modularity, and although popular in the 1970s and 1980s, more modern computers use a variety of separate buses adapted to more specific needs. The system level bus (as distinct from
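
To make the three roles concrete, the hypothetical C declaration below folds an address, a data word, and a control field into a single bus transaction. The field widths are purely illustrative and do not correspond to any particular machine.

```c
#include <stdint.h>
#include <stdio.h>

/* One system-bus transaction, combining the three classic roles:
 *  - address bus: where the data should be sent to or read from
 *  - data bus:    the information being carried
 *  - control bus: what kind of operation this cycle performs        */
typedef enum { BUS_READ, BUS_WRITE } bus_op;

typedef struct {
    uint32_t address;   /* address-bus value (width is illustrative) */
    uint16_t data;      /* data-bus value                            */
    bus_op   op;        /* control-bus encoding of the operation     */
} bus_transaction;

int main(void)
{
    bus_transaction t = { 0x1000, 0x2A, BUS_WRITE };  /* write 0x2A to 0x1000 */
    printf("%s 0x%X -> address 0x%X\n",
           t.op == BUS_WRITE ? "write" : "read",
           (unsigned)t.data, (unsigned)t.address);
    return 0;
}
```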

A description titled First Draft of a Report on the EDVAC based on the work of Eckert and Mauchly. It was unfinished when his colleague Herman Goldstine circulated it, and bore only von Neumann's name (to the consternation of Eckert and Mauchly). The paper was read by dozens of von Neumann's colleagues in America and Europe, and influenced the next round of computer designs. Jack Copeland considers that it

A desk calculator (in principle) is a fixed program computer. It can do basic mathematics, but it cannot run a word processor or games. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as "designed" for a particular task. "Reprogramming"—when possible at all—was a laborious process that started with flowcharts and paper notes, followed by detailed engineering designs, and then

A facility was the need for a program to increment or otherwise modify the address portion of instructions, which operators had to do manually in early designs. This became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. On a large scale,

A lot of the required design work and claim to have had the idea for stored programs long before discussing the ideas with von Neumann and Herman Goldstine. The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits

A machine he called the Automatic Computing Engine (ACE). He presented this to the executive committee of the British National Physical Laboratory on February 19, 1946. Although Turing knew from his wartime experience at Bletchley Park that what he proposed was feasible, the secrecy surrounding Colossus, which was subsequently maintained for several decades, prevented him from saying so. Various successful implementations of

A machine was the development of suitable memory with instantaneously accessible contents. At first they suggested using a special vacuum tube—called the "Selectron"—which the Princeton Laboratories of RCA had invented. These tubes were expensive and difficult to make, so von Neumann subsequently decided to build a machine based on the Williams memory. This machine—completed in June 1952 in Princeton—has become popularly known as

A major influence. Modern functional programming and object-oriented programming are much less geared towards "pushing vast numbers of words back and forth" than earlier languages like FORTRAN were, but internally, that is still what computers spend much of their time doing, even highly parallel supercomputers. As of 1996, a database benchmark study found that three out of four CPU cycles were spent waiting for memory. Researchers expect that increasing

A mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units. The vast majority of modern computers use

A modular system with lower cost. This is sometimes called a "streamlining" of the architecture. In subsequent decades, simple microcontrollers would sometimes omit features of the model to lower cost and size. Larger computers added features for higher performance. The use of the same bus to fetch instructions and data leads to the von Neumann bottleneck, the limited throughput (data transfer rate) between

A single system bus, starting with the S-100 bus in the Altair 8800 computer system in about 1975. The IBM PC used the Industry Standard Architecture (ISA) bus as its system bus in 1981. The passive backplanes of early models were replaced with the standard of putting the CPU and RAM on a motherboard, with only optional daughterboards or expansion cards in system bus slots. The Multibus became

A standard of the Institute of Electrical and Electronics Engineers as IEEE standard 796 in 1983. Sun Microsystems developed the SBus in 1989 to support smaller expansion cards. The easiest way to implement symmetric multiprocessing was to plug more than one CPU into the shared system bus, which was used through the 1980s. However, the shared bus quickly became the bottleneck and more sophisticated connection techniques were explored. Even in very simple systems, at various times

A test program, some dates are the first time the computer was demonstrated or completed, and some dates are for the first delivery or installation. Through the decades of the 1960s and 1970s computers generally became both smaller and faster, which led to evolutions in their architecture. For example, memory-mapped I/O lets input and output devices be treated the same as memory. A single system bus could be used to provide
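
With memory-mapped I/O, a device register is reached by an ordinary load or store to a fixed address. The C sketch below shows the usual volatile-pointer idiom; the specific octal addresses are an assumption borrowed from the conventional PDP-11 console (DL11) registers in the I/O page, and the code is illustrative rather than a real driver.

```c
#include <stdint.h>

/* Memory-mapped console output in the PDP-11 style (illustrative).
 * Assumed register addresses in the I/O page at the top of the
 * 18-bit address space: octal 777564 = transmit status register,
 * octal 777566 = transmit data buffer.                              */
#define XCSR ((volatile uint16_t *)0777564)   /* transmit status     */
#define XBUF ((volatile uint16_t *)0777566)   /* transmit buffer     */
#define XCSR_READY 0200                       /* bit 7: ready flag   */

static void console_putc(char c)
{
    while ((*XCSR & XCSR_READY) == 0)
        ;                        /* wait until the device is ready   */
    *XBUF = (uint16_t)c;         /* an ordinary store starts the I/O */
}
```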

Is "historically inappropriate to refer to electronic stored-program digital computers as 'von Neumann machines'". His Los Alamos colleague Stan Frankel said of von Neumann's regard for Turing's ideas: I know that in or about 1943 or '44 von Neumann was well aware of the fundamental importance of Turing's paper of 1936.... Von Neumann introduced me to that paper and at his urging I studied it with care. Many people have acclaimed von Neumann as

Is allowed to drive the data bus during this bus cycle. In very simple systems, every instruction cycle starts with a READ memory cycle where program memory drives the instruction onto the data bus while the instruction register latches that instruction from the data bus. Some instructions continue with a WRITE memory cycle where the memory data register drives data onto the data bus into the chosen RAM or I/O device. Other instructions continue with another READ memory cycle where
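
The bus cycles just described can be mimicked in software. The following hypothetical C fragment models one instruction cycle of a toy accumulator machine: a READ cycle fetches the instruction, and depending on the opcode the cycle continues with either another READ (load) or a WRITE (store) on the same data bus. The opcodes and field layout are invented for this toy example.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy model of the bus cycles described above (not a real instruction
 * set).  mem[] stands in for everything reachable over the single bus:
 * program memory, RAM, and I/O all share the one address space.       */
static uint16_t mem[256];
static uint16_t pc, ir, acc;         /* program counter, instruction
                                        register, accumulator          */

enum { OP_LOAD = 1, OP_STORE = 2 };  /* hypothetical opcodes           */

static void step(void)
{
    ir = mem[pc++];                  /* READ cycle: fetch instruction  */
    uint16_t op   = ir >> 8;         /* high byte: opcode              */
    uint16_t addr = ir & 0xFFu;      /* low byte: operand address      */

    if (op == OP_LOAD)
        acc = mem[addr];             /* second READ cycle              */
    else if (op == OP_STORE)
        mem[addr] = acc;             /* WRITE cycle                    */
}

int main(void)
{
    mem[0x80] = 42;                       /* a data word               */
    mem[0] = (OP_LOAD  << 8) | 0x80;      /* LOAD  acc from 0x80       */
    mem[1] = (OP_STORE << 8) | 0x81;      /* STORE acc to 0x81         */
    step();
    step();
    printf("mem[0x81] = %u\n", (unsigned)mem[0x81]);  /* prints 42     */
    return 0;
}
```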

Is an example of a system bus (besides directly accessed PCIe lanes) implemented by Intel and known since at least 2004. It is primarily used to access memory-mapped I/O devices and for communication between the CPU and the chipset.

Von Neumann architecture

The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on the First Draft of

Is obsolete in modern personal and server computers, which instead use higher-performance interconnection technologies such as HyperTransport and Intel QuickPath Interconnect, while the system bus architecture continued to be used on simpler embedded microprocessors. The system bus can even be internal to a single integrated circuit, producing a system-on-a-chip. Examples include AMBA, CoreConnect, and Wishbone. Direct Media Interface

The ENIAC at the Moore School of Electrical Engineering of the University of Pennsylvania, wrote about the stored-program concept in December 1943. In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device, a mercury metal delay-line memory. This was the first time the construction of a practical stored-program machine

The Java virtual machine, or languages embedded in web browsers). On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders can be accelerated on general purpose processors with just-in-time compilation techniques. This is one use of self-modifying code that has remained popular. The mathematician Alan Turing, who had been alerted to a problem of mathematical logic by

The cache coherence of shared data located in different caches has to be sent in broadcast (snooped) to check the other FSB's CPUs' cache state, reducing the available bandwidth. To reduce the coherency traffic, a snoop filter was included in the higher-end chipsets, in order to have cache state information available on-chipset. In 2007 Intel extended the idea of multiple buses in the 7300 chipset with four independent FSBs, calling it dedicated high-speed interconnects (DHSI). The system bus approach

The central processing unit (CPU) and memory compared to the amount of memory. Because the single bus can only access one of the two classes of memory at a time, throughput is lower than the rate at which the CPU can work. This seriously limits the effective processing speed when the CPU is required to perform minimal processing on large amounts of data. The CPU is continually forced to wait for needed data to move to or from memory. Since CPU speed and memory size have increased much faster than

The "reduction to practice" of these concepts but I would not regard these as comparable in importance with the introduction and explication of the concept of a computer able to store in its memory its program of activities and of modifying that program in the course of these activities. At the time that the "First Draft" report was circulated, Turing was producing a report entitled Proposed Electronic Calculator. It described in engineering and programming detail his idea of

The "father of the computer" (in a modern sense of the term) but I am sure that he would never have made that mistake himself. He might well be called the midwife, perhaps, but he firmly emphasized to me, and to others I am sure, that the fundamental conception is owing to Turing—in so far as not anticipated by Babbage.... Both Turing and von Neumann, of course, also made substantial contributions to

The ACE design were produced. Both von Neumann's and Turing's papers described stored-program computers, but von Neumann's earlier paper achieved greater circulation and the computer architecture it outlined became known as the "von Neumann architecture". In the 1953 publication Faster than Thought: A Symposium on Digital Computing Machines (edited by B. V. Bowden), a section in the chapter on Computers in America reads as follows: The Machine of

The Automatic Computing Engine, but although comparatively small in bulk and containing only about 800 thermionic valves, as can be judged from Plates XII, XIII and XIV, it is an extremely rapid and versatile calculating machine. The basic concepts and abstract principles of computation by a machine were formulated by Dr. A. M. Turing, F.R.S., in a paper read before the London Mathematical Society in 1936, but work on such machines in Britain

The Institute For Advanced Study, Princeton

In 1945, Professor J. von Neumann, who was then working at the Moore School of Engineering in Philadelphia, where the E.N.I.A.C. had been built, issued on behalf of a group of his co-workers, a report on the logical design of digital computers. The report contained a detailed proposal for the design of the machine that has since become known as the E.D.V.A.C. (electronic discrete variable automatic computer). This machine has only recently been completed in America, but

The Maniac. The design of this machine inspired at least half a dozen machines now being built in America, all known affectionately as "Johniacs". In the same book, the first two paragraphs of a chapter on ACE read as follows:

Automatic Computation at the National Physical Laboratory

One of the most modern digital computers which embodies developments and improvements in the technique of automatic electronic computing

The Von Neumann performance bottleneck. For example, the following all can improve performance: The problem can also be sidestepped somewhat by using parallel computing, using for example the non-uniform memory access (NUMA) architecture—this approach is commonly employed by supercomputers. It is less clear whether the intellectual bottleneck that Backus criticized has changed much since 1977. Backus's proposed solution has not had

The ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible. It makes "programs that write programs" possible. This has made a sophisticated self-hosting computing ecosystem flourish around von Neumann architecture machines. Some high-level languages leverage the von Neumann architecture by providing an abstract, machine-independent way to manipulate executable code at runtime (e.g., LISP), or by using runtime information to tune just-in-time compilation (e.g. languages hosted on

The bus, typically on a terminator card. Type 2 lines are selectively propagated by each card to the next slot – if the card wants to keep the request grant, it will assert the SACK line and not propagate the request to the next slot. If a slot is empty, it is necessary to install a "grant continuity card" in the slot to propagate the four type 2 signals to the next card. Type 3 signals are generated by
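
The daisy-chained "type 2" behaviour described above can be sketched as follows. This is a hypothetical software model, not DEC's logic: the grant enters at slot 0 and each occupied card either claims it (asserting SACK) or passes it along, while an empty slot stands in for a grant continuity card that simply passes the signal through.

```c
#include <stdbool.h>
#include <stdio.h>

#define SLOTS 6

/* Hypothetical model of a daisy-chained Unibus grant line.
 * Each occupied slot either claims the grant (asserting SACK and
 * blocking further propagation) or passes it to the next slot.
 * An empty slot behaves like a grant continuity card: pass-through. */
struct card {
    bool present;    /* is a card plugged into this slot?            */
    bool wants_bus;  /* does it have a pending request?              */
};

static int propagate_grant(const struct card bp[SLOTS])
{
    for (int slot = 0; slot < SLOTS; slot++) {
        if (!bp[slot].present)
            continue;              /* continuity card: pass grant on  */
        if (bp[slot].wants_bus)
            return slot;           /* claims grant, asserts SACK      */
        /* present but idle: pass the grant to the next slot          */
    }
    return -1;                     /* grant reached the end unused    */
}

int main(void)
{
    struct card bp[SLOTS] = {
        { true, false }, { false, false },   /* slot 1 is empty        */
        { true, true  }, { true, true  },    /* slots 2 and 3 request  */
        { true, false }, { true, false },
    };
    printf("grant claimed by slot %d\n", propagate_grant(bp)); /* 2    */
    return 0;
}
```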

The chosen RAM, program memory, or I/O device drives data onto the data bus while the memory data register latches that data from the data bus. More complex systems have a multi-master bus—not only do they have many devices that each drive the data bus, but also have many bus masters that each drive the address bus. The address bus as well as the data bus in bus snooping systems is required to be

The common techniques of standardized bundles of wires and extended the concept as backplanes were used to hold printed circuit boards in these early machines. The name "bus" was already used for "bus bars" that carried electrical power to the various parts of electric machines, including early mechanical calculators. The advent of integrated circuits vastly reduced the size of each computer unit, and buses became more standardized. Standard modules could be interconnected in more uniform ways and were easier to develop and maintain. To provide even more modularity with reduced cost, memory and I/O buses (and

The complex logic required to implement asynchronous data transfers is forced into the relatively few master devices. For interrupts, only the interrupt-fielding processor needs to contain the complex timing logic. The result is that most I/O controllers can be implemented with simple logic, and most of the critical logic is implemented as a custom MSI IC.

The current bus master is still performing data transfers. The 18 address lines allow the addressing of a maximum of 256 KB. Typically, the top 8 KB is reserved for the registers of the memory-mapped I/O devices used in the PDP-11 architecture. The design deliberately minimizes the amount of redundant logic required in the system. For example, a system always contains more slave devices than master devices, so most of
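
The address layout described above follows from simple arithmetic: 18 address lines give 2^18 = 262,144 bytes (256 KB), and an 8 KB I/O page at the top therefore starts at 256 KB - 8 KB = octal 760000. The hypothetical helper below classifies an 18-bit Unibus address accordingly; it is an illustration, not code from any PDP-11 system.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 18 address lines -> 2^18 bytes = 256 KB of Unibus address space.
 * The top 8 KB (addresses octal 760000-777777) is conventionally the
 * I/O page holding memory-mapped device registers on the PDP-11.     */
#define UNIBUS_SPACE   (1u << 18)                     /* 262144 bytes */
#define IO_PAGE_SIZE   (8u * 1024)                    /* 8 KB         */
#define IO_PAGE_BASE   (UNIBUS_SPACE - IO_PAGE_SIZE)  /* octal 760000 */

static bool is_io_page(uint32_t addr)    /* hypothetical helper       */
{
    return addr >= IO_PAGE_BASE && addr < UNIBUS_SPACE;
}

int main(void)
{
    printf("I/O page base: %o (octal)\n", IO_PAGE_BASE);  /* 760000    */
    printf("0777566 -> %s\n", is_io_page(0777566) ? "I/O" : "memory");
    printf("0001000 -> %s\n", is_io_page(0001000) ? "I/O" : "memory");
    return 0;
}
```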

The data bus is driven by the program memory, by RAM, and by I/O devices. To prevent bus contention on the data bus, at any one instant only one device drives the data bus. In very simple systems, only the data bus is required to be a bidirectional bus. In very simple systems, the memory address register always drives the address bus, the control unit always drives the control bus, and an address decoder selects which particular device

The latter became the Electronics Section of the Laboratory, under the charge of Mr. F. M. Colebrook. The First Draft described a design that was used by many universities and corporations to construct their computers. Among these various computers, only ILLIAC and ORDVAC had compatible instruction sets. The date information in the following chronology is difficult to put into proper order. Some dates are for first running

The lectures of Max Newman at the University of Cambridge, wrote a paper in 1936 entitled On Computable Numbers, with an Application to the Entscheidungsproblem, which was published in the Proceedings of the London Mathematical Society. In it he described a hypothetical machine he called a universal computing machine, now known as the "Universal Turing machine". The hypothetical machine had an infinite store (memory in today's terminology) that contained both instructions and data. John von Neumann became acquainted with Turing while he

The main system memory and I/O devices, and the internal back-side bus to the L2 CPU cache. This was introduced in the Pentium Pro in 1995. In 2005 and 2006 Intel introduced the 8500 and 5000 chipsets, where DIB referred to the two front-side buses on a chipset, which doubles the system bandwidth compared to having just one FSB shared by all the CPUs. However, the information needed to guarantee

The number of simultaneous instruction streams with multithreading or single-chip multiprocessing will make this bottleneck even worse. In the context of multi-core processors, additional overhead is required to maintain cache coherence between processors and threads. Aside from the von Neumann bottleneck, program modifications can be quite harmful, either by accident or design. In some simple stored-program computer designs,

The often-arduous process of physically rewiring and rebuilding the machine. It could take three weeks to set up and debug a program on ENIAC. With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set, and can store in memory a set of instructions (a program) that details the computation. A stored-program design also allows for self-modifying code. One early motivation for such

The performance of the corresponding system. The von Neumann architecture is simpler than the Harvard architecture (which has one dedicated set of address and data buses for reading and writing to memory and another set of address and data buses to fetch instructions). A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use

The power and ground lines, it is usually referred to as a 56-line bus. It can exist within a backplane or on a cable. Up to 20 nodes (devices) can be connected to a single Unibus segment; additional segments can be connected via a bus repeater. The bus is completely asynchronous, allowing a mixture of fast and slow devices. It allows the overlapping of arbitration (selection of the next bus master) while
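
Because the bus is asynchronous, each transfer is paced by an interlocked handshake rather than a shared clock. In the usual Unibus documentation the handshake signals are called MSYN (master sync) and SSYN (slave sync): the master asserts MSYN once address and control are valid, the slave answers with SSYN when it has supplied or accepted the data, and each side only moves in response to the other. The C sketch below is a hypothetical software rendering of that interlock, not DEC's timing logic.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical rendering of the interlocked Unibus handshake.
 * Every transition happens only in response to the previous one,
 * so no common clock is needed and a slow slave simply delays the
 * cycle instead of breaking it.                                    */
static bool msyn, ssyn;               /* master sync, slave sync     */

static void slave(void)               /* the slave reacts at its own pace */
{
    if (msyn && !ssyn)      { ssyn = true;  puts("slave:  data ready, SSYN up");        }
    else if (!msyn && ssyn) { ssyn = false; puts("slave:  SSYN down, cycle complete");  }
}

int main(void)
{
    puts("master: address/control valid, MSYN up");
    msyn = true;  slave();            /* master waits here for SSYN       */
    puts("master: data taken, MSYN down");
    msyn = false; slave();            /* master waits for SSYN to drop    */
    return 0;
}
```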

The power supply and have only a single sender. They warn the devices on the bus when the power is about to fail, so those devices can execute an orderly shutdown, and disable operations to prevent spurious writes. The two control lines (C0 and C1) allowed the selection of four different data transfer cycles:

System bus

A system bus is a single computer bus that connects the major components of
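
For reference, the four data transfer cycles selected by C0 and C1 are commonly named DATI, DATIP, DATO, and DATOB in PDP-11 documentation. The C enum below is a hypothetical rendering; the cycle names are the conventional ones, but the specific numeric encodings shown are an illustrative assumption, not taken from this article.

```c
/* Hypothetical encoding of the two Unibus control lines (C1, C0).
 * The cycle names follow common PDP-11 documentation; the numeric
 * values are illustrative only.                                     */
enum unibus_cycle {
    DATI  = 0,  /* data in:  slave -> master (word read)             */
    DATIP = 1,  /* data in, pause: read to be followed by a write to
                   the same address (read-modify-write)              */
    DATO  = 2,  /* data out: master -> slave, full 16-bit word       */
    DATOB = 3,  /* data out, byte: master -> slave, single byte      */
};
```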

The required control and power buses) were sometimes combined into a single unified system bus. Modularity and cost became important as computers became small enough to fit in a single cabinet (and customers expected similar price reductions). Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped I/O into the memory bus, so that the devices appeared to be memory locations. This

The same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split-cache architecture). The earliest computing machines had fixed programs. Some very simple computers still use this design, either for simplicity or training purposes. For example,

The throughput between them, the bottleneck has become more of a problem, a problem whose severity increases with every new generation of CPU. The von Neumann bottleneck was described by John Backus in his 1977 ACM Turing Award lecture. According to Backus: Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through

The von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. There are several known methods for mitigating

The von Neumann report inspired the construction of the E.D.S.A.C. (electronic delay-storage automatic calculator) in Cambridge (see p. 130). In 1947, Burks, Goldstine and von Neumann published another report that outlined the design of another type of machine (a parallel machine this time) that would be exceedingly fast, capable perhaps of 20,000 operations per second. They pointed out that the outstanding problem in constructing such

Was a visiting professor at Cambridge in 1935, and also during Turing's PhD year at the Institute for Advanced Study in Princeton, New Jersey during 1936–1937. Whether he knew of Turing's paper of 1936 at that time is not clear. In 1936, Konrad Zuse also anticipated, in two patent applications, that machine instructions could be stored in the same storage used for data. Independently, J. Presper Eckert and John Mauchly, who were developing

Was all handled through memory-mapped I/O. Unibus was physically large, which led to the introduction of Q-bus, which multiplexed some signals to reduce pin count. Higher-performance PDP systems used Fastbus, essentially two Unibusses in one. The system was later supplanted by Massbus, a dedicated I/O bus introduced on the VAX and late-model PDP-11s. The Unibus consists of 72 signals, usually connected via two 36-way edge connectors on each printed circuit board. When not counting

Was delayed by the war. In 1945, however, an examination of the problems was made at the National Physical Laboratory by Mr. J. R. Womersley, then superintendent of the Mathematics Division of the Laboratory. He was joined by Dr. Turing and a small staff of specialists, and, by 1947, the preliminary planning was sufficiently advanced to warrant the establishment of the special group already mentioned. In April, 1948,

Was implemented in the Unibus of the PDP-11 around 1969, eliminating the need for a separate I/O bus. Even computers such as the PDP-8 without memory-mapped I/O were soon implemented with a system bus, which allowed modules to be plugged into any slot. Some authors called this a new streamlined "model" of computer architecture. Many early microcomputers (with a CPU generally on a single integrated circuit) were built with

Was proposed. At that time, he and Mauchly were not aware of Turing's work. Von Neumann was involved in the Manhattan Project at the Los Alamos National Laboratory. It required huge amounts of calculation, and thus drew him to the ENIAC project during the summer of 1944. There he joined the ongoing discussions on the design of this stored-program computer, the EDVAC. As part of that group, he wrote up

Was recently demonstrated at the National Physical Laboratory, Teddington, where it has been designed and built by a small team of mathematicians and electronics research engineers on the staff of the Laboratory, assisted by a number of production engineers from the English Electric Company, Limited. The equipment so far erected at the Laboratory is only the pilot model of a much larger installation which will be known as
