In computer science and computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation.
The Eckert–Mauchly Award recognizes contributions to digital systems and computer architecture. It is known as the computer architecture community's most prestigious award. First awarded in 1979, it was named for John Presper Eckert and John William Mauchly, who between 1943 and 1946 collaborated on the design and construction of the first large-scale electronic computing machine, known as ENIAC,
a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable and limited time period after the brake pedal is sensed, or else the brakes will fail. Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the sole basis for choosing
a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, "Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints." Brooks went on to help develop the IBM System/360 line of computers, in which "architecture" became
a basic ALU operation, such as "add", with the access of one or more operands in memory (using addressing modes such as direct, indirect, and indexed). Certain architectures may allow two or three operands (including the result) directly in memory or may be able to perform functions such as automatic pointer increment. Software-implemented instruction sets may have even more complex and powerful instructions. Reduced instruction set computers (RISC) were first widely implemented during
a cache line or virtual memory page boundary, for instance), and are therefore somewhat easier to optimize for speed. In early 1960s computers, main memory was expensive and very limited, even on mainframes. Minimizing the size of a program to make sure it would fit in the limited memory was often central. Thus the size of the instructions needed to perform a particular task, the code density,
a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost. Once an instruction set and microarchitecture have been designed, a practical machine must be developed. This design process is called the implementation. Implementation
a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex instruction sets enable programmers to write more space-efficient programs, since a single instruction can encode some higher-level abstraction (such as the x86 Loop instruction). However, longer and more complex instructions take longer for the processor to decode and can be more costly to implement effectively. The increased complexity from
a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks. Power efficiency
a conditional branch instruction will transfer control if the condition is true, so that execution proceeds to a different part of the program, and not transfer control if the condition is false, so that execution continues sequentially. Some instruction sets also have conditional moves, so that the move will be executed, and the data stored in the target location, if the condition is true, and not executed, and
a detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way. Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example,
a given instruction may specify: More complex operations are built up by combining these simple instructions, which are executed sequentially, or as otherwise directed by control flow instructions. Examples of operations common to many instruction sets include: Processors may include "complex" instructions in their instruction set. A single "complex" instruction does something that may take many instructions on other computers. Such instructions are typified by instructions that take multiple steps, control multiple functional units, or otherwise appear on
a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance. Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs. There are two main types of speed: latency and throughput. Latency is the time between
a large instruction set also creates more room for unreliability when instructions interact in unexpected ways. The implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with topics from compilers and operating systems to logic design and packaging. An instruction set architecture (ISA) is the interface between
a larger scale than the bulk of simple instructions implemented by the given processor. Some examples of "complex" instructions include: Complex instructions are more common in CISC instruction sets than in RISC instruction sets, but RISC instruction sets may include them as well. RISC instruction sets generally do not include ALU operations with memory operands, or instructions to move large blocks of memory, but most RISC instruction sets include SIMD or vector instructions that perform
a noun defining "what the user needs to know". The System/360 line was succeeded by several compatible lines of computers, including the current IBM Z line. Later, computer users came to use the term in many less explicit ways. The earliest computer architectures were designed on paper and then directly built into the final hardware form. Later, computer architecture prototypes were physically built in
a period of rapidly growing memory subsystems. They sacrifice code density to simplify implementation circuitry, and try to increase performance via higher clock frequencies and more registers. A single RISC instruction typically performs only a single operation, such as an "add" of registers or a "load" from a memory location into a register. A RISC instruction set normally has a fixed instruction length, whereas
a proposed instruction set. Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals. Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite
a single architecture for a series of five processors spanning a wide range of cost and performance. None of the five engineering design teams could count on being able to bring about adjustments in architectural specifications as a way of easing difficulties in achieving cost and performance objectives. Some virtual machines that support bytecode as their ISA, such as Smalltalk, the Java virtual machine, and Microsoft's Common Language Runtime, implement this by translating
a single chip as possible. In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency. Increases in clock frequency have grown more slowly over the past few years, compared to power reduction improvements. This has been driven by the end of Moore's Law and demand for longer battery life and reductions in size for mobile technology. This change in focus from higher clock rates to power consumption and miniaturization can be shown by
a single instruction. Some exotic instruction sets do not have an opcode field, such as transport triggered architectures (TTA); they have only operand(s). Most stack machines have "0-operand" instruction sets in which arithmetic and logical operations lack any operand specifier fields; only instructions that push operands onto the evaluation stack or that pop operands from the stack into variables have operand specifiers. The instruction set carries out most ALU actions with postfix (reverse Polish notation) operations that work only on
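To make the stack-machine model concrete, the following C sketch evaluates the postfix program for (2 + 3) * 4 on a small evaluation stack. The opcode names, encoding, and stack depth here are assumptions made up for illustration, not any real 0-operand ISA.

    #include <stdio.h>

    /* Hypothetical 0-operand stack machine: only PUSH carries an operand field;
       ADD and MUL pop two values from the evaluation stack and push the result. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

    int main(void) {
        /* Postfix program for (2 + 3) * 4:  2 3 ADD 4 MUL */
        int program[][2] = { {OP_PUSH, 2}, {OP_PUSH, 3}, {OP_ADD, 0},
                             {OP_PUSH, 4}, {OP_MUL, 0},  {OP_HALT, 0} };
        int stack[16], sp = 0;

        for (int pc = 0; program[pc][0] != OP_HALT; pc++) {
            int op = program[pc][0], arg = program[pc][1];
            if (op == OP_PUSH) {
                stack[sp++] = arg;
            } else {                           /* ADD or MUL: no operand fields */
                int b = stack[--sp], a = stack[--sp];
                stack[sp++] = (op == OP_ADD) ? a + b : a * b;
            }
        }
        printf("result = %d\n", stack[sp - 1]);  /* prints 20 */
        return 0;
    }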
a standard and compatible application binary interface (ABI) for a particular ISA, machine code will run on future implementations of that ISA and operating system. However, if an ISA supports running multiple operating systems, it does not guarantee that machine code for one operating system will run on another operating system, unless the first operating system supports running machine code built for
a typical CISC instruction set has instructions of widely varying length. However, as RISC computers normally require more and often longer instructions to implement a given task, they inherently make less optimal use of bus bandwidth and cache memories. Certain embedded RISC ISAs like Thumb and AVR32 typically exhibit very high density owing to a technique called code compression. This technique packs two 16-bit instructions into one 32-bit word, which
a writable control store use it to allow the instruction set to be changed (for example, the Rekursiv processor and the Imsys Cjip). CPUs designed for reconfigurable computing may use field-programmable gate arrays (FPGAs). An ISA can also be emulated in software by an interpreter. Naturally, due to the interpretation overhead, this is slower than directly running programs on the emulated hardware, unless
is 15 bytes (120 bits). Within an instruction set, different instructions may have different lengths. In some architectures, notably most reduced instruction set computers (RISC), instructions are a fixed length, typically corresponding with that architecture's word size. In other architectures, instructions have variable length, typically integral multiples of a byte or a halfword. Some, such as
Computer architecture
The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in
is a complex issue. There were two stages in history for the microprocessor. The first was the CISC (Complex Instruction Set Computer), which had many different instructions. In the 1970s, however, researchers at companies such as IBM found that many instructions in the set could be eliminated. The result was the RISC (Reduced Instruction Set Computer), an architecture that uses a smaller set of instructions. A simpler instruction set may offer
is an abstract model that generally defines how software controls the CPU in a computer or a family of computers. A device or program that executes instructions described by that ISA, such as a central processing unit (CPU), is called an implementation of that ISA. In general, an ISA defines the supported instructions, data types, registers, the hardware support for managing main memory, fundamental features (such as
is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W (millions of instructions per second per watt). Modern circuits require less power per transistor as the number of transistors per chip grows. This is because each transistor that
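As a back-of-the-envelope illustration of the MIPS/W figure (the numbers below are assumed for the example and are not measurements of any real processor), power efficiency is simply instruction throughput divided by power drawn:

    #include <stdio.h>

    int main(void) {
        /* Assumed example figures, not real measurements. */
        double instructions_per_second = 2.0e9;          /* 2,000 MIPS */
        double power_watts = 25.0;

        double mips = instructions_per_second / 1.0e6;
        double mips_per_watt = mips / power_watts;       /* 2000 / 25 = 80 */
        printf("%.1f MIPS/W\n", mips_per_watt);
        return 0;
    }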
is due to the many addressing modes and optimizations (such as sub-register addressing, memory operands in ALU instructions, absolute addressing, PC-relative addressing, and register-to-register spills) that CISC ISAs offer. The size or length of an instruction varies widely, from as little as four bits in some microcontrollers to many hundreds of bits in some VLIW systems. Processors used in personal computers, mainframes, and supercomputers have minimum instruction sizes between 8 and 64 bits. The longest possible instruction on x86
is put in a new chip requires its own power supply and requires new pathways to be built to power it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis, as they put more focus on power efficiency rather than cramming as many transistors into
is similar to the code density of RISC; the increased instruction density is offset by requiring more of the primitive instructions to do a task. There has been research into executable compression as a mechanism for improving code density. The mathematics of Kolmogorov complexity describes the challenges and limits of this. In practice, code density is also dependent on the compiler. Most optimizing compilers have options that control whether to optimize code generation for execution speed or for code density. For instance, GCC has
is the amount of time that it takes for information from one node to travel to the source) and throughput. Sometimes other considerations, such as features, size, weight, reliability, and expandability, are also factors. The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance. Modern computer performance is often described in instructions per cycle (IPC), which measures
is then unpacked at the decode stage and executed as two instructions. Minimal instruction set computers (MISC) are commonly a form of stack machine, where there are few separate instructions (8–32), so that multiple instructions can be fit into a single machine word. These types of cores often take little silicon to implement, so they can be easily realized in an FPGA or in a multi-core form. The code density of MISC
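As a rough sketch of the two-instructions-per-word packing described above (the field layout and example values are invented for illustration and are not the actual Thumb or AVR32 encoding), two 16-bit instruction halfwords can be packed into, and later unpacked from, one 32-bit word:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack two 16-bit instruction halfwords into one 32-bit word. */
    static uint32_t pack(uint16_t first, uint16_t second) {
        return (uint32_t)first | ((uint32_t)second << 16);
    }

    int main(void) {
        uint16_t insn_a = 0x1C40, insn_b = 0x4770;   /* arbitrary example values */
        uint32_t word = pack(insn_a, insn_b);

        /* At the decode stage the word is split back into two instructions. */
        uint16_t first  = (uint16_t)(word & 0xFFFF);
        uint16_t second = (uint16_t)(word >> 16);
        printf("%04X %04X\n", first, second);
        return 0;
    }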
is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps: For CPUs, the entire implementation process is organized differently and is often referred to as CPU design. The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency (latency
the ARM with Thumb extension, have mixed variable encoding, that is, two fixed encodings, usually 32-bit and 16-bit, where instructions cannot be mixed freely but must be switched between on a branch (or exception boundary in ARMv8). Fixed-length instructions are less complicated to handle than variable-length instructions for several reasons (not having to check whether an instruction straddles
the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of "system architecture", a term that seemed more useful than "machine organization". Subsequently, Brooks,
2627-411: The compiler responsible for instruction issue and scheduling. Architectures with even less complexity have been studied, such as the minimal instruction set computer (MISC) and one-instruction set computer (OISC). These are theoretically important types, but have not been commercialized. Machine language is built up from discrete statements or instructions . On the processing architecture,
2698-590: The memory consistency , addressing modes , virtual memory ), and the input/output model of implementations of the ISA. An ISA specifies the behavior of machine code running on implementations of that ISA in a fashion that does not depend on the characteristics of that implementation, providing binary compatibility between implementations. This enables multiple implementations of an ISA that differ in characteristics such as performance , physical size, and monetary cost (among other things), but that are capable of running
2769-529: The microarchitecture of a processor, engineers use blocks of "hard-wired" electronic circuitry (often designed separately) such as adders, multiplexers, counters, registers, ALUs, etc. Some kind of register transfer language is then often used to describe the decoding and sequencing of each instruction of an ISA using this physical microarchitecture. There are two basic ways to build a control unit to implement this description (although many designs use middle ways or compromises): Some microcoded CPU designs with
2840-457: The stack or in an implicit register. If some of the operands are given implicitly, fewer operands need be specified in the instruction. When a "destination operand" explicitly specifies the destination, an additional operand must be supplied. Consequently, the number of operands encoded in an instruction may differ from the mathematically necessary number of arguments for a logical or arithmetic operation (the arity ). Operands are either encoded in
2911-504: The x86 instruction set , but they have radically different internal designs. The concept of an architecture , distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360 . Prior to NPL [System/360], the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated
2982-431: The "opcode" representation of the instruction, or else are given as values or addresses following the opcode. Register pressure measures the availability of free registers at any point in time during the program execution. Register pressure is high when a large number of the available registers are in use; thus, the higher the register pressure, the more often the register contents must be spilled into memory. Increasing
3053-624: The Electronic Numerical Integrator and Computer. A certificate and $ 5,000 are awarded jointly by the Association for Computing Machinery (ACM) and the IEEE Computer Society for outstanding contributions to the field of computer and digital systems architecture. This science awards article is a stub . You can help Misplaced Pages by expanding it . This computer-engineering -related article
3124-595: The bytecode for commonly used code paths into native machine code. In addition, these virtual machines execute less frequently used code paths by interpretation (see: Just-in-time compilation ). Transmeta implemented the x86 instruction set atop VLIW processors in this fashion. An ISA may be classified in a number of different ways. A common classification is by architectural complexity . A complex instruction set computer (CISC) has many specialized instructions, some of which may only be rarely used in practical programs. A reduced instruction set computer (RISC) simplifies
3195-503: The code is to understand), size of the code (how much code is required to do a specific action), cost of the computer to interpret the instructions (more complexity means more hardware needed to decode and execute the instructions), and speed of the computer (with more complex decoding hardware comes longer decode time). Memory organization defines how instructions interact with the memory, and how memory interacts with itself. During design emulation , emulators can run programs written in
3266-425: The computer's software and hardware and also can be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java , C++ , or most programming languages used. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers . Software tools, such as compilers , translate those high level languages into instructions that
3337-448: The efficiency of the architecture at any clock frequency; a faster IPC rate means the computer is faster. Older computers had IPC counts as low as 0.1 while modern processors easily reach nearly 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle. Counting machine-language instructions would be misleading because they can do varying amounts of work in different ISAs. The "instruction" in
3408-406: The expression stack , not on data registers or arbitrary main memory cells. This can be very convenient for compiling high-level languages, because most arithmetic expressions can be easily translated into postfix notation. Conditional instructions often have a predicate field—a few bits that encode the specific condition to cause an operation to be performed rather than not performed. For example,
3479-406: The final hardware form. The discipline of computer architecture has three main subcategories: There are other technologies in computer architecture. The following technologies are used in bigger companies like Intel, and were estimated in 2002 to count for 1% of all of computer architecture: Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of
3550-475: The form of a transistor–transistor logic (TTL) computer—such as the prototypes of the 6800 and the PA-RISC —tested, and tweaked, before committing to the final hardware form. As of the 1990s, new computer architectures are typically "built", tested, and tweaked—inside some other computer architecture in a computer architecture simulator ; or inside a FPGA as a soft microprocessor ; or both—before committing to
3621-414: The hardware running the emulator is an order of magnitude faster. Today, it is common practice for vendors of new ISAs or microarchitectures to make software emulators available to software developers before the hardware implementation is ready. Often the details of the implementation have a strong influence on the particular instructions selected for the instruction set. For example, many implementations of
the instruction set includes support for something such as "fetch-and-add", "load-link/store-conditional" (LL/SC), or "atomic compare-and-swap". A given instruction set can be implemented in a variety of ways. All ways of implementing a particular instruction set provide the same programming model, and all implementations of that instruction set are able to run the same executables. The various ways of implementing an instruction set give different tradeoffs between cost, performance, power consumption, size, etc. When designing
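For instance, with C11 atomics a non-blocking counter increment can be written as a compare-and-swap retry loop. This is only a minimal sketch of the idea; on hardware with a native fetch-and-add, a compiler or library may use that instruction directly instead.

    #include <stdatomic.h>
    #include <stdio.h>

    static _Atomic long counter = 0;

    /* Non-blocking increment built from compare-and-swap: retry until no
       other processor has changed the value between the load and the swap. */
    static void increment(void) {
        long old = atomic_load(&counter);
        while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
            /* on failure, 'old' is reloaded with the current value; retry */
        }
    }

    int main(void) {
        increment();
        increment();
        printf("%ld\n", atomic_load(&counter));   /* prints 2 */
        return 0;
    }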
3763-474: The instructions. The names can be recognized by a software development tool called an assembler . An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers and software programs to isolate and correct malfunctions in binary computer programs. ISAs vary in quality and completeness. A good ISA compromises between programmer convenience (how easy
3834-663: The large number of bits needed to encode the three registers of a 3-operand instruction, RISC architectures that have 16-bit instructions are invariably 2-operand designs, such as the Atmel AVR, TI MSP430 , and some versions of ARM Thumb . RISC architectures that have 32-bit instructions are usually 3-operand designs, such as the ARM , AVR32 , MIPS , Power ISA , and SPARC architectures. Each instruction specifies some number of operands (registers, memory locations, or immediate values) explicitly . Some instructions give one or both operands implicitly, such as by being stored on top of
3905-543: The most fundamental abstractions in computing . An instruction set architecture is distinguished from a microarchitecture , which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set. For example, the Intel Pentium and the AMD Athlon implement nearly identical versions of
3976-401: The number of registers in an architecture decreases register pressure but increases the cost. While embedded instruction sets such as Thumb suffer from extremely high register pressure because they have small register sets, general-purpose RISC ISAs like MIPS and Alpha enjoy low register pressure. CISC ISAs like x86-64 offer low register pressure despite having smaller register sets. This
4047-439: The operation to perform, such as add contents of memory to register —and zero or more operand specifiers, which may specify registers , memory locations, or literal data. The operand specifiers may have addressing modes determining their meaning or may be in fixed fields. In very long instruction word (VLIW) architectures, which include many microcode architectures, multiple simultaneous opcodes and operands are specified in
4118-447: The option -Os to optimize for small machine code size, and -O3 to optimize for execution speed at the cost of larger machine code. The instructions constituting a program are rarely specified using their internal, numeric form ( machine code ); they may be specified by programmers using an assembly language or, more commonly, may be generated from high-level programming languages by compilers . The design of instruction sets
4189-451: The other operating system. An ISA can be extended by adding instructions or other capabilities, or adding support for larger addresses and data values; an implementation of the extended ISA will still be able to execute machine code for versions of the ISA without those extensions. Machine code using those extensions will only run on implementations that support those extensions. The binary compatibility that they provide makes ISAs one of
4260-476: The potential for higher speeds, reduced processor size, and reduced power consumption. However, a more complex set may optimize common operations, improve memory and cache efficiency, or simplify programming. Some instruction set designers reserve one or more opcodes for some kind of system call or software interrupt . For example, MOS Technology 6502 uses 00 H , Zilog Z80 uses the eight codes C7,CF,D7,DF,E7,EF,F7,FF H while Motorola 68000 use codes in
4331-570: The processor by efficiently implementing only the instructions that are frequently used in programs, while the less common operations are implemented as subroutines, having their resulting additional processor execution time offset by infrequent use. Other types include very long instruction word (VLIW) architectures, and the closely related long instruction word (LIW) and explicitly parallel instruction computing (EPIC) architectures. These architectures seek to exploit instruction-level parallelism with less hardware than RISC and CISC by making
the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program—e.g., data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes (or names) and memory addressing modes. The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. Also, it may define short, vaguely mnemonic names for
4473-479: The range A000..AFFF H . Fast virtual machines are much easier to implement if an instruction set meets the Popek and Goldberg virtualization requirements . The NOP slide used in immunity-aware programming is much easier to implement if the "unprogrammed" state of the memory is interpreted as a NOP . On systems with multiple processors, non-blocking synchronization algorithms are much easier to implement if
4544-488: The same arithmetic operation on multiple pieces of data at the same time. SIMD instructions have the ability of manipulating large vectors and matrices in minimal time. SIMD instructions allow easy parallelization of algorithms commonly involved in sound, image, and video processing. Various SIMD implementations have been brought to market under trade names such as MMX , 3DNow! , and AltiVec . On traditional architectures, an instruction includes an opcode that specifies
4615-433: The same machine code, so that a lower-performance, lower-cost machine can be replaced with a higher-cost, higher-performance machine without having to replace software. It also enables the evolution of the microarchitectures of the implementations of that ISA, so that a newer, higher-performance implementation of an ISA can run software that runs on previous generations of implementations. If an operating system maintains
4686-526: The same storage used for data, i.e., the stored-program concept. Two other early and important examples are: The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr. , members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about
4757-632: The significant reductions in power consumption, as much as 50%, that were reported by Intel in their release of the Haswell microarchitecture ; where they dropped their power consumption benchmark from 30–40 watts down to 10–20 watts. Comparing this to the processing speed increase of 3 GHz to 4 GHz (2002 to 2006), it can be seen that the focus in research and development is shifting away from clock frequency and moving towards consuming less power and taking up less space. Instruction set In computer science , an instruction set architecture ( ISA )
4828-498: The standard measurements is not a count of the ISA's machine-language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture. Many people used to measure a computer's speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU . However, this metric is somewhat misleading, as a machine with
4899-507: The start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (like when the disk drive finishes moving some data). Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse, but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in
4970-524: The target location not modified, if the condition is false. Similarly, IBM z/Architecture has a conditional store instruction. A few instruction sets include a predicate field in every instruction; this is called branch predication . Instruction sets may be categorized by the maximum number of operands explicitly specified in instructions. (In the examples that follow, a , b , and c are (direct or calculated) addresses referring to memory cells, while reg1 and so on refer to machine registers.) Due to
was an important characteristic of any instruction set. It remained important on the initially tiny memories of minicomputers and then microprocessors. Density remains important today, for smartphone applications, applications downloaded into browsers over slow Internet connections, and in ROMs for embedded applications. A more general advantage of increased density is improved effectiveness of caches and instruction prefetch. Computers with high code density often have complex instructions for procedure entry, parameterized returns, loops, etc. (therefore retroactively named Complex Instruction Set Computers, CISC). However, more typical, or frequent, "CISC" instructions merely combine