Tandem Computers, Inc. was the dominant manufacturer of fault-tolerant computer systems for ATM networks, banks, stock exchanges, telephone switching centers, 911 systems, and other similar commercial transaction processing applications requiring maximum uptime and no data loss. The company was founded by Jimmy Treybig in 1974 in Cupertino, California. It remained independent until 1997, when it became a server division within Compaq. It is now a server division within Hewlett Packard Enterprise, following Hewlett-Packard's acquisition of Compaq and the split of Hewlett-Packard into HP Inc. and Hewlett Packard Enterprise.
Tandem's NonStop systems use a number of independent identical processors, redundant storage devices, and redundant controllers to provide automatic high-speed "failover" in the case of a hardware or software failure. To contain the scope of failures and of corrupted data, these multi-computer systems have no shared central components, not even main memory. Conventional multi-computer systems all use shared memories and work directly on shared data objects. Instead, NonStop processors cooperate by exchanging messages across
A data warehouse and business intelligence server line, HP Neoview, based on the NonStop line. It acted as a database server, providing NonStop OS and NonStop SQL, but lacked the transaction processing functionality of the original NonStop systems. The line was retired, and no longer marketed, as of 24 January 2011. Computer cluster A computer cluster is a set of computers that work together so that they can be viewed as
a business campus; a cluster of clusters with a total of 224 CPUs. This allowed further scale-up for taking on the largest mainframe applications. Like the CPU modules within the computers, the Guardian operating system could fail over entire task sets to other machines in the network. Worldwide clusters of 4000 CPUs could also be built via conventional long-haul network links. In 1986, Tandem introduced
a challenge. In a heterogeneous CPU-GPU cluster with a complex application environment, the performance of each job depends on the characteristics of the underlying cluster. Therefore, mapping tasks onto CPU cores and GPU devices poses significant challenges. This is an area of ongoing research; algorithms that combine and extend MapReduce and Hadoop have been proposed and studied. When
a cluster requires parallel language primitives and suitable tools such as those discussed by the High Performance Debugging Forum (HPDF), which resulted in the HPD specifications. Tools such as TotalView were then developed to debug parallel implementations on computer clusters which use Message Passing Interface (MPI) or Parallel Virtual Machine (PVM) for message passing. The University of California, Berkeley Network of Workstations (NOW) system gathers cluster data and stores them in
a cluster was the Burroughs B5700 in the mid-1960s. This allowed up to four computers, each with either one or two processors, to be tightly coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, and using ARCnet as
a complex server farm. Most customers also have a backup server in a remote location for IT disaster recovery. There are standard products to keep the data of the production and the backup server in sync, for example HPE's Remote Database Facility (RDF), so takeover is fast and there is little to no data loss even in a disaster in which the production server is disabled or destroyed. HP also developed
a custom operating system which was significantly different from Unix or HP 3000's MPE. It was initially called T/TOS (Tandem Transactional Operating System) but was soon named Guardian for its ability to protect all data from machine faults and software faults. In contrast to all other commercial operating systems, Guardian was based on message passing as the basic way for all processes to interact, without shared memory, regardless of where
a database, while a system such as PARMON, developed in India, allows visually observing and managing large clusters. Application checkpointing can be used to restore a given state of the system when a node fails during a long multi-node computation. This is essential in large clusters, given that as the number of nodes increases, so does the likelihood of node failure under heavy computational loads. Checkpointing can restore
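As a rough illustration of the idea, a checkpoint can be as simple as periodically serializing the computation's state to stable storage so a restarted node can resume from it. This is a minimal sketch: the file name, state layout, and loop are invented for the example, and real cluster checkpointing systems coordinate snapshots across many nodes.

```python
import os
import pickle

def save_checkpoint(state, path):
    """Persist computation state; write to a temp file and rename so a
    crash mid-write never leaves a torn checkpoint behind."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX filesystems

def load_checkpoint(path, default):
    """Resume from the last checkpoint, or start fresh if none exists."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

# Hypothetical long-running job: checkpoint after every step so a
# restarted node repeats at most one step of work.
state = load_checkpoint("job.ckpt", {"step": 0, "total": 0})
while state["step"] < 5:
    state["total"] += state["step"]
    state["step"] += 1
    save_checkpoint(state, "job.ckpt")
```

The atomic-rename pattern matters: a node that dies while writing must not destroy the previous good checkpoint.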
a dedicated network, is densely located, and probably has homogeneous nodes. The other extreme is where a computer job uses one or few nodes, and needs little or no inter-node communication, approaching grid computing. In a Beowulf cluster, the application programs never see the computational nodes (also called slave computers) but only interact with the "Master", which is a specific computer handling
a failed part. Their fast clocks could not be synchronized as in strict lock stepping, so voting instead happened at each interrupt. Some other versions of Integrity used 4x "pair and spares" redundancy. Pairs of processors ran in lock-step to check each other. When they disagreed, both processors were marked untrusted, and their workload was taken over by a hot-spare pair of processors whose state
a high-availability approach, etc. "Load-balancing" clusters are configurations in which cluster-nodes share computational workload to provide better overall performance. For example, a web server cluster may assign different queries to different nodes, so the overall response time will be optimized. However, approaches to load-balancing may significantly differ among applications, e.g. a high-performance cluster used for scientific computations would balance load with different algorithms from
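The round-robin web-server assignment described here can be sketched in a few lines. This is a toy illustration only: the node names and dispatcher interface are invented for the example, and production balancers also track node health and load.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Assign each incoming request to the next node in a fixed rotation."""
    def __init__(self, nodes):
        self._rotation = cycle(nodes)

    def assign(self, request):
        # Round-robin ignores the request content; only arrival order matters.
        return next(self._rotation)

balancer = RoundRobinBalancer(["node-1", "node-2", "node-3"])
targets = [balancer.assign(f"GET /page/{i}") for i in range(4)]
# The fourth request wraps around to the first node again.
```

A scientific-computing cluster would instead weigh queue depth or data locality when placing work, which is why the two kinds of cluster use different algorithms.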
a large and shared file server that stores global persistent data, accessed by the slaves as needed. A special purpose 144-node DEGIMA cluster is tuned to running astrophysical N-body simulations using the Multiple-Walk parallel tree code, rather than general purpose scientific computations. Due to the increasing computing power of each generation of game consoles, a novel use has emerged where they are repurposed into high-performance computing (HPC) clusters. Some examples of game console clusters are Sony PlayStation clusters and Microsoft Xbox clusters. Another example of consumer game product
a large number of computers clustered together, this lends itself to the use of distributed file systems and RAID, both of which can increase the reliability and speed of a cluster. One of the issues in designing a cluster is how tightly coupled the individual nodes may be. For instance, a single computer job may require frequent communication among nodes: this implies that the cluster shares
a new way that was safe from all "single-point failures" yet would be only marginally more expensive than conventional non-fault-tolerant systems. They would be less expensive and support more throughput than existing ad-hoc toughened systems, which used redundant components that usually sat idle as "hot spares". Each engineer was confident they could quickly pull off their own part of this complex new design but doubted that the others' areas could be worked out. The parts of
a node in a cluster fails, strategies such as "fencing" may be employed to keep the rest of the system operational. Fencing is the process of isolating a node or protecting shared resources when a node appears to be malfunctioning. There are two classes of fencing methods: one disables a node itself, and the other disallows access to resources such as shared disks. The STONITH method stands for "Shoot The Other Node In The Head", meaning that
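A STONITH-style fence can be sketched as a monitor that powers off any node whose heartbeat has gone silent. This is a minimal sketch: `heartbeat_ok` and `power_off` stand in for a real health check and a real power-controller API, which this example does not model.

```python
def fence_unresponsive(nodes, heartbeat_ok, power_off):
    """STONITH-style fencing: forcibly power off nodes that fail their
    health check so they cannot keep writing to shared resources."""
    fenced = []
    for node in nodes:
        if not heartbeat_ok(node):
            power_off(node)  # hypothetical call to a power controller
            fenced.append(node)
    return fenced

# Toy demonstration with a faked health check and a recorded power-off call.
alive = {"node-a": True, "node-b": False, "node-c": True}
powered_off = []
fenced = fence_unresponsive(
    nodes=list(alive),
    heartbeat_ok=lambda n: alive[n],
    power_off=powered_off.append,
)
```

The point of powering the node off, rather than merely ignoring it, is that a half-dead node could otherwise still corrupt a shared disk.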
a novel way; the compiler, not the microcode, was responsible for deciding when full registers were spilled to the memory stack and when empty registers were re-filled from the memory stack. On the HP 3000, this decision took extra microcode cycles in every instruction. The HP 3000 supported COBOL with several instructions for calculating directly on arbitrary-length BCD (binary-coded decimal) strings of digits. The T/16 simplified this to single instructions for converting between BCD strings and 64-bit binary integers. In
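The effect of such conversion instructions can be illustrated in software. This is a sketch of packed-BCD conversion in general, not of the actual T/16 instruction encoding, which is not documented here.

```python
def bcd_to_int(digits: bytes) -> int:
    """Convert a packed-BCD string (two decimal digits per byte) to a binary integer."""
    value = 0
    for byte in digits:
        hi, lo = byte >> 4, byte & 0x0F
        if hi > 9 or lo > 9:
            raise ValueError("invalid BCD nibble")
        value = value * 100 + hi * 10 + lo
    return value

def int_to_bcd(value: int, length: int) -> bytes:
    """Convert a non-negative binary integer to a packed-BCD string of `length` bytes."""
    out = bytearray(length)
    for i in range(length - 1, -1, -1):
        value, pair = divmod(value, 100)
        out[i] = (pair // 10) << 4 | (pair % 10)
    if value:
        raise OverflowError("value does not fit in the BCD field")
    return bytes(out)
```

With this representation `bytes([0x12, 0x34])` encodes 1234; the T/16 approach is to convert a decimal field to binary once, do arithmetic in binary, and convert back, rather than computing digit-by-digit on BCD as the HP 3000 did.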
a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach usually (but not always) connects a number of readily available computing nodes (e.g. personal computers used as servers) via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop
a pair of CPUs, controllers, or buses, so that the system would keep running without loss of connections if one power supply failed. The careful, complex arrangement of parts and connections in customers' larger configurations was documented in a Mackie diagram, named after lead salesman David Mackie, who invented the notation. None of these duplicated parts were wasted "hot spares"; everything added to system throughput during normal operations. Besides recovering well from failed parts,
a process or CPU failure. Data integrity is maintained during those takeovers; no transactions or data are lost or corrupted. The operating system as a whole is branded NonStop OS; it includes the Guardian layer, a low-level component of the operating system, and the Open System Services (OSS) personality, which runs atop that layer and implements a Unix-like interface for other components of
a reliable fabric, and software takes periodic snapshots for possible rollback of program memory state. Besides masking failures, this "shared-nothing" messaging system design also scales to the largest commercial workloads. Each doubling of the total number of processors doubles system throughput, up to the maximum configuration of 4000 processors. In contrast, the performance of conventional multiprocessor systems
a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing. They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to
a run-time environment for message-passing, task and resource management, and fault notification. PVM can be used by user programs written in C, C++, Fortran, and other languages. MPI emerged in the early 1990s out of discussions among 40 organizations. The initial effort was supported by ARPA and the National Science Foundation. Rather than starting anew, the design of MPI drew on various features available in commercial systems of
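The point-to-point send/receive model that PVM and MPI expose can be illustrated with an in-process toy. This is a sketch only: real MPI ranks run as separate processes on separate nodes, and the `Comm` class here is invented for the example rather than taken from either library.

```python
from queue import Queue

class Comm:
    """Toy communicator: one mailbox per rank, delivering (source, payload) pairs."""
    def __init__(self, size):
        self.size = size
        self._mailboxes = [Queue() for _ in range(size)]

    def send(self, payload, dest, source):
        self._mailboxes[dest].put((source, payload))

    def recv(self, rank):
        return self._mailboxes[rank].get()

# Rank 0 scatters chunks of work; ranks 1..3 reply with partial sums.
comm = Comm(size=4)
for worker in (1, 2, 3):
    comm.send(list(range(worker * 10, worker * 10 + 3)), dest=worker, source=0)
total = 0
for worker in (1, 2, 3):
    src, chunk = comm.recv(worker)       # worker receives its chunk
    comm.send(sum(chunk), dest=0, source=worker)
for _ in (1, 2, 3):
    _, partial = comm.recv(0)            # rank 0 gathers partial results
    total += partial
```

The scatter/compute/gather shape above is the pattern that MPI generalizes with collectives such as scatter, gather, and reduce.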
a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer. The developers used Linux,
a simpler hardware-only memory-centric design where all recovery was done by switching between hot spares. The most successful competitor was Stratus Technologies, whose machines were re-marketed by IBM as "IBM System/88". In such systems, the spare processors do not contribute to system throughput between failures, but merely redundantly execute exactly the same data thread as the active processor at
a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The newest manifestation of cluster computing is cloud computing. The components of a cluster are usually connected to each other through fast local area networks, with each node (computer used as a server) running its own instance of an operating system. In most circumstances, all of
a small number of top-of-stack, 16-bit data registers plus some extra address registers for accessing the memory stack. Both used Huffman encoding of operand address offsets, to fit a large variety of address modes and offset sizes into the 16-bit instruction format with good code density. Both relied heavily on pools of indirect addresses to overcome the short instruction format. Both supported larger 32- and 64-bit operands via multiple ALU cycles, and memory-to-memory string operations. Both used "big-endian" addressing of long versus short memory operands. These features had all been inspired by Burroughs B5500–B6800 mainframe stack machines. The T/16 instruction set changed several features from
a small number of users need to take advantage of the parallel processing capabilities of the cluster and partition "the same computation" among several nodes. Automatic parallelization of programs remains a technical challenge, but parallel programming models can be used to effectuate a higher degree of parallelism via the simultaneous execution of separate portions of a program on different processors. Developing and debugging parallel programs on
a stronger foundation than its inherited HP 3000 traits. Rainbow's hardware was a 32-bit register-file machine that aimed to be better than a Digital Equipment Corporation VAX. For reliable programming, the main programming language was "TPL", a subset of Ada. At that time, programmers barely understood how to compile Ada even to unoptimized code. There was no migration path for existing NonStop system software coded in TAL. The OS, database, and Cobol compilers were entirely redesigned. Customers would see it as
a third generation CPU, the NonStop VLX. It had 32-bit data paths, wider microcode, 12 MHz cycle time, and a peak rate of one instruction per cycle. It was built from three boards of ECL gate array chips (with TTL pinout). It had a revised Dynabus with speed raised to 20 MB/s per link, 40 MB/s total. Later, FOX II increased the physical diameter of TNS clusters to 4 kilometers. Tandem's initial database support
a totally disjoint product line requiring all-new software from them. The software side of this project took much longer than planned. The hardware was already obsolete and outperformed by the TXP before its software was ready, resulting in the Rainbow project being abandoned. All subsequent efforts emphasized upward compatibility and easy migration paths. Development of Rainbow's advanced client/server application development framework, called "Crystal", continued awhile longer and
Tandem Computers - Misplaced Pages Continue
a web-server cluster which may just use a simple round-robin method by assigning each new request to a different node. Computer clusters are used for computation-intensive purposes, rather than handling IO-oriented operations such as web service or databases. For instance, a computer cluster might support computational simulations of vehicle crashes or weather. Very tightly coupled computer clusters are designed for work that may approach "supercomputing". "High-availability clusters" (also known as failover clusters, or HA clusters) improve
is a series of server computers introduced to market in 1976 by Tandem Computers Inc., beginning with the NonStop product line. It was followed by the Tandem Integrity NonStop line of lock-step fault-tolerant computers, now defunct (not to be confused with the later and much different Hewlett-Packard Integrity product line extension). The original NonStop product line has been offered by Hewlett Packard Enterprise since the Hewlett-Packard Company's split in 2015. Because NonStop systems are based on an integrated hardware/software stack, Tandem and later HPE also developed
is limited by the speed of some shared memory, bus, or switch. Adding more than 4–8 processors in that manner gives no further system speedup. NonStop systems have more often been bought to meet scaling requirements than for extreme fault tolerance. They compete against IBM's largest mainframes, despite being built from simpler minicomputer technology. Tandem Computers was founded in 1974 by James Treybig. Treybig first saw
is the Nvidia Tesla Personal Supercomputer workstation, which uses multiple graphics accelerator processor chips. Besides game consoles, high-end graphics cards can also be used instead. The use of graphics cards (or rather their GPUs) to do calculations for grid computing is vastly more economical than using CPUs, despite being less precise. However, when using double-precision values, they become as precise to work with as CPUs and are still much less costly in purchase price. Computer clusters have historically run on separate physical computers with
the Tandem NonStop (a 1976 high-availability commercial product) and the IBM S/390 Parallel Sysplex (circa 1994, primarily for business use). Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use parallelism within the same computer. Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976, and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of
the InfiniBand industry standard. All S-Series machines used MIPS processors, including the R4400, R10000, R12000, and R14000. The design of the later, faster MIPS cores was primarily funded by Silicon Graphics Inc. But Intel's sixth-generation Pentium Pro overtook the performance of RISC designs, and SGI's graphics business shrank. After the R10000, there was no investment in significant new MIPS core designs for high-end servers. So Tandem needed to move its NonStop product line to another microprocessor architecture with competitive fast chips. Jimmy Treybig remained CEO of
the Linux operating system. Clusters are primarily designed with performance in mind, but installations are based on many other factors. Fault tolerance (the ability of a system to continue operating despite a malfunctioning node) enables scalability, and in high-performance situations, allows for a low frequency of maintenance routines, resource consolidation (e.g., RAID), and centralized management. Advantages include enabling data recovery in
the Lockheed SR-71 Blackbird Mach 3 spy plane. Cyclone's name was supposed to represent its "unstoppable speed in roaring through OLTP workloads". Announcement day was October 17, 1989. That afternoon, the region was struck by the magnitude 6.9 Loma Prieta earthquake, causing freeway collapses in Oakland and major fires in San Francisco. Tandem offices were shaken, but no one was badly hurt on site. In 1980–1983, Tandem attempted to re-design its entire hardware and software stack to put its NonStop methods on
the NonStop TXP CPU was the first entirely new implementation of the TNS instruction set architecture. It was built from standard TTL chips and Programmable Array Logic chips, with four boards per CPU module. It had Tandem's first use of cache memory. It had a more direct implementation of 32-bit addressing, but still sent addresses through 16-bit adders. A wider microcode store allowed a major reduction in
the Oracle Cluster File System. Two widely used approaches for communication between cluster nodes are MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). PVM was developed at the Oak Ridge National Laboratory around 1989, before MPI was available. PVM must be directly installed on every cluster node and provides a set of software libraries that paint the node as a "parallel virtual machine". PVM provides
the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance. The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters, e.g.
the kernel that provide for automatic process migration among homogeneous nodes. OpenSSI, openMosix and Kerrighed are single-system image implementations. Microsoft's Windows Compute Cluster Server 2003, based on the Windows Server platform, provides pieces for high-performance computing, like the job scheduler, MSMPI library, and management tools. gLite is a set of middleware technologies created by
the 1980s, so were supercomputers. One of the elements that distinguished the three classes at that time was that the early supercomputers relied on shared memory. Clusters do not typically use physically shared memory, while many supercomputer architectures have also abandoned it. However, the use of a clustered file system is essential in modern computer clusters. Examples include the IBM General Parallel File System, Microsoft's Cluster Shared Volumes or
the Apache Trafodion project. In 1987, Tandem introduced the NonStop CLX, a low-cost, less-expandable minicomputer system. Its role was to grow the low end of the fault-tolerant market and to deploy on the remote edges of large Tandem networks. Its initial performance was roughly similar to the TXP; later versions improved to where they were about 20% slower than a VLX. Its small cabinet could be installed in any "copier room" office environment. A CLX CPU
the CPU core and shared a single bus and single bank of SRAM. As a result, the CLX required at least two machine cycles per instruction. In 1989, Tandem introduced the NonStop Cyclone, a fast but expensive system for the mainframe end of the market. Each self-checking CPU took three boards full of hot-running ECL gate array chips, plus memory boards. Despite being microprogrammed, the CPU was superscalar, often completing two instructions per cache cycle. This
the Cyclone/R, also known as CLX/R. This was a low-cost mid-range system based on CLX components but used R3000 microprocessors instead of the much slower CLX stack machine board. To minimize time to market, this machine was initially shipped without any MIPS native-mode software. Everything, including its NonStop Kernel (NSK) operating system (a follow-on to Guardian) and NonStop SQL database, was compiled to TNS stack machine code. That object code
the Dynamite to serving primarily as a smart terminal. It was quietly and quickly withdrawn from the market. Tandem's message-based NonStop operating system had advantages for scaling, extreme reliability, and efficiently using expensive "spare" resources. But many potential customers wanted just good-enough reliability in a small system, using a familiar Unix operating system and industry-standard programs. Tandem's various fault-tolerant competitors all adopted
the GNBD server. Load balancing clusters such as web servers use cluster architectures to support a large number of users, and typically each user request is routed to a specific node, achieving task parallelism without multi-node cooperation, given that the main goal of the system is providing rapid user access to shared data. However, "computer clusters" which perform complex computations for
the HP 3000 design. The T/16 supported paged virtual memory from the beginning. The HP 3000 series did not add paging until the PA-RISC generation, 10 years later (although via MPE V it had a form of paging using the APL firmware, in 1978). Tandem added support for 32-bit addressing in its second machine; the HP 3000 lacked this until its PA-RISC generation. Paging and long addresses were critical for supporting complex system software and large applications. The T/16 treated its top-of-stack registers in
the NonStop Himalaya S-Series, with a new top-level system architecture based on ServerNet connections. ServerNet replaced the Dynabus, FOX, and I/O buses. It was much faster, more general, and could be extended to more than just two-way redundancy via an arbitrary fabric of point-to-point connections. Tandem designed ServerNet for its own needs but then promoted its use by others; it evolved into
the NonStop OS operating system for them. NonStop systems are, to an extent, self-healing. To circumvent single points of failure, nearly all of their components are redundant. When a mainline component fails, the system automatically falls back to the backup. These systems can be used by banks, stock exchanges, payment applications, retail companies, energy and utility services, healthcare organizations, manufacturers, telecommunication providers, transportation, and other enterprises requiring extremely high uptime. Originally introduced in 1976 by Tandem Computers Inc.,
the OS to use. The operating system and application are both designed to support the fault-tolerant hardware. The operating system continually monitors the status of all components, switching control as necessary to maintain operations. There are also features designed into the software that allow programs to be written as continuously available programs. That is accomplished using a pair of processes where one process performs all
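The process-pair idea can be sketched abstractly: the primary checkpoints every state change to its backup, so the backup can answer immediately if the primary dies. This is a toy in-memory model; real NonStop process pairs exchange checkpoint messages between CPUs over the system fabric, which this sketch does not attempt to reproduce.

```python
class ProcessPair:
    """Toy model of a primary/backup process pair with state checkpointing."""
    def __init__(self):
        self._primary = {}
        self._backup = {}
        self._primary_alive = True

    def update(self, key, value):
        self._primary[key] = value
        self._backup[key] = value  # checkpoint each change to the backup

    def fail_primary(self):
        self._primary_alive = False  # simulate a process or CPU failure

    def read(self, key):
        # The backup takes over transparently when the primary is gone.
        state = self._primary if self._primary_alive else self._backup
        return state[key]

pair = ProcessPair()
pair.update("balance", 100)
pair.update("balance", 250)
pair.fail_primary()
survivor_value = pair.read("balance")
```

Because every committed change was checkpointed before the failure, the takeover loses no data, which is the property the surrounding text describes.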
the OS, and systems can be expanded up to over 4000 CPUs. This is a shared-nothing architecture, also known as loosely coupled multiprocessing. Due to the integrated hardware/software stack and a single system image for even the largest configurations, system management requirements for NonStop systems are rather low. In most deployments there is just a single production server, not
the T/16 was also designed to detect as many kinds of intermittent failures as possible, as soon as possible. This prompt detection is called "fail fast". The point was to find and isolate corrupted data before it was permanently written into databases and other disk files. In the T/16, error detection was provided by added custom circuits that contributed little to the cost of the total design; no major parts were duplicated just to get error detection. The T/16 CPU
the T/16, each CPU consisted of two boards of TTL logic and SRAMs, and ran at about 0.7 MIPS. At any instant, it could access only four virtual memory segments (System Data, System Code, User Data, User Code), each limited to 128 KB in size. The 16-bit address spaces were already small for major applications when it shipped. The first release of the T/16 had only a single programming language, Transaction Application Language (TAL). This
the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed; but the lower upfront cost of clusters, and increased speed of network fabric, has favoured the adoption of clusters. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but also have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating
the availability of the cluster approach. They operate by having redundant nodes, which are then used to provide service when system components fail. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure. There are commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for
the challenges in the use of a computer cluster is the cost of administrating it, which can at times be as high as the cost of administrating N independent machines if the cluster has N nodes. In some cases this provides an advantage to shared-memory architectures with lower administration costs. This has also made virtual machines popular, due to the ease of administration. When a large multi-user cluster needs to access very large amounts of data, task scheduling becomes
the chips must be designed to be fully deterministic. Any hidden internal state must be cleared by the chip's reset mechanism. Otherwise, the matched chips can go out of sync for no visible reason and without any faults, long after the chips are restarted. Chip designers agree that these are good principles because they help them test chips at manufacturing time. But all new microprocessor chips seemed to have bugs in this area and required months of shared work between MIPS (the third-party manufacturer used by Tandem) and Tandem to eliminate or work around
the choice to abandon its successful PA-RISC product lines in favor of Intel's Itanium microprocessors, which HP had helped to design. Shortly thereafter, Compaq and HP announced their plan to merge and consolidate their similar product lines. This contentious merger became official in May 2002. The consolidations were painful and destroyed the DEC and "HP Way" engineer-oriented cultures, but the combined company did know how to sell complex systems to enterprises and profit, so it
the cluster interface. Clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were
the cluster. This property of computer clusters can allow for larger computational loads to be executed by a larger number of lower-performing computers. When adding a new node to a cluster, reliability increases because the entire cluster does not need to be taken down. A single node can be taken down for maintenance, while the rest of the cluster takes on the load of that individual node. If you have
the company he founded until a downturn in 1996. The next CEO was Roel Pieper, who joined the company in 1996 as president and CEO. Re-branding to promote itself as a true Wintel (Windows/Intel) platform was conducted by the in-house brand and creative team led by Ronald May, who later went on to co-found the Silicon Valley Brand Forum in 1999. The concept worked, and shortly thereafter the company
the cycles executed per instruction; speed increased to 2.0 MIPS. It used the same rack packaging, controllers, backplane, and buses as before. The Dynabus and I/O buses had been overdesigned in the T/16 so they would work for several generations of upgrades. Up to 14 TXP and NonStop II systems could now be combined via FOX, a long-distance fault-tolerant fibre optic bus for connecting TNS clusters across
the details of this in a semi-portable way. In 1981, all T/16 CPUs were replaced by the NonStop II. Its main difference from the T/16 was support for occasional 32-bit addressing via a user-switchable "extended data segment". This supported the next ten years of growth in software and was an advantage over the T/16 or HP 3000. Visible registers remained 16-bit, and this unplanned addition to
7571-596: The duplicated parts are commodity single-chip microprocessors. Tandem's products for this market began with the Integrity line in 1989, using MIPS processors and a "NonStop UX" variant of Unix. It was developed in Austin, Texas. In 1991, the Integrity S2 used TMR, Triple Modular Redundancy, where each logical CPU used three MIPS R2000 microprocessors to execute the same data thread, with voting to find and lock out
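The TMR voting scheme can be sketched as follows. This is a minimal illustration of majority voting among three replicated results, not Tandem's implementation; the function name and return convention are assumptions.

```python
def tmr_vote(a, b, c):
    """Majority vote over three replicated results.

    Returns the agreed value and the index (0, 1, or 2) of a
    dissenting replica to be locked out, or None if all agree.
    """
    if a == b == c:
        return a, None
    if a == b:
        return a, 2   # replica c disagrees
    if a == c:
        return a, 1   # replica b disagrees
    if b == c:
        return b, 0   # replica a disagrees
    # With more than one faulty replica there is no majority.
    raise RuntimeError("no majority: multiple replicas faulty")
```

A single faulty replica is thus both masked (the majority value is used) and identified (its index is returned), which is what allows the S2 to keep running while locking out the bad processor.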
the era, including large mainframes, had a mean time between failures (MTBF) on the order of a few days, the NonStop system was designed for failure intervals 100 times longer, with uptimes measured in years. Nevertheless, the NonStop was designed to be price-competitive with conventional systems: a simple 2-CPU system was priced at just over twice that of a competing single-processor mainframe, as opposed to four or more times for other fault-tolerant solutions. The first system
the event of a disaster and providing parallel data processing and high processing capacity. In terms of scalability, clusters provide this through their ability to add nodes horizontally. This means that more computers may be added to the cluster, to improve its performance, redundancy and fault tolerance. This can be an inexpensive solution for a higher-performing cluster compared to scaling up a single node in
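The redundancy gain from adding nodes can be quantified with a simple model. This is a back-of-the-envelope sketch assuming independent node failures (a simplification real clusters rarely satisfy); the function name is illustrative.

```python
def cluster_availability(node_availability: float, nodes: int) -> float:
    """Probability that at least one node is up, assuming each node
    fails independently with the same availability."""
    return 1.0 - (1.0 - node_availability) ** nodes

# Two 99%-available nodes already give roughly 99.99% availability;
# adding a third pushes it to roughly 99.9999%.
pair = cluster_availability(0.99, 2)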
the fastest supercomputers (e.g. the K computer) relied on cluster architectures. Computer clusters may be configured for different purposes, ranging from general-purpose business needs such as web-service support to computation-intensive scientific calculations. In either case, the cluster may use a high-availability approach. Note that the attributes described below are not exclusive and a "computer cluster" may also use
the fastest-growing public company in America. By 1996, Tandem was a $2.3 billion company employing approximately 8,000 people worldwide. Over 40 years, Tandem's main NonStop product line grew and evolved in an upward-compatible way from the initial T/16 fault-tolerant system, with three major changes to its top-level modular architecture or its programming-level instruction set architecture. Within each series, there have been several major re-implementations as chip technology progressed. While conventional systems of
the final subtle bugs. In 1993, Tandem released the NonStop Himalaya K-series with the faster MIPS R4400, a native-mode NSK operating system, and fully expandable Cyclone system components. These were connected by Dynabus, Dynabus+, and the original I/O bus, which by now were all running out of performance headroom. In 1995, the NonStop Kernel was extended with a Unix-like POSIX environment called Open System Services. The original Guardian shell and ABI remained available. In 1997, Tandem introduced
the hardware and software design that did not have to be different were largely based on incremental improvements to the familiar hardware and software designs of the HP 3000. Many subsequent engineers and programmers also came from HP. Tandem headquarters in Cupertino, California, were a quarter mile away from the HP offices. Initial venture capital investment in Tandem Computers came from Tom Perkins, who
the instruction differences, even when debugging at machine-code level. These Cyclone/R machines were updated with a faster native-mode NSK operating system in a follow-up release. The R3000 and later microprocessors had only a typical amount of internal error checking, insufficient for Tandem's needs, so the Cyclone/R ran pairs of R3000 processors in lock step, running the same data thread. This
the instruction set required executing many instructions per memory reference compared to most 32-bit minicomputers. All subsequent TNS computers were hampered by this instruction-set inefficiency. As the NonStop II lacked wider internal data paths, it had to use additional microcode steps for 32-bit addresses. A NonStop II CPU had three boards, using chips and design similar to the T/16. The NonStop II also replaced core memory with battery-backed DRAM memory. In 1983,
the line was later owned by Compaq (from 1997), Hewlett-Packard Company (from 2003) and Hewlett Packard Enterprise (since 2015). In 2005, the HP Integrity "NonStop i" (or TNS/E) servers, based on Intel Itanium microprocessors, were introduced. In 2014, the first "NonStop X" (or TNS/X) systems, based on Intel x86-64 processors, were introduced. Sales of the Itanium-based systems ended in July 2020. Early NonStop applications had to be specifically coded for fault tolerance. That requirement
the market need for fault tolerance in OLTP (online transaction processing) systems while running a marketing team for Hewlett-Packard's HP 3000 computer division, but HP was not interested in developing for this niche. He then joined the venture capital firm Kleiner Perkins and developed the Tandem business plan there. Treybig pulled together a core engineering team hired away from the HP 3000 division: Mike Green, Jim Katzman, Dave Mackie and Jack Loustaunou. Their business plan called for ultra-reliable systems that never had outages and never lost or corrupted data. These were modular in
the master process ran into trouble. This allowed the application to survive failures in any CPU or its associated devices, without data loss. It further allowed recovery from some intermittent-style software failures. Between failures, the monitoring by the slave process added some performance overhead, but this was far less than the 100% duplication in other system designs. Some major early applications were directly coded in this checkpoint style, but most instead used various Tandem software layers which hid
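The checkpoint style described above can be sketched in miniature. This is a toy model of a primary/backup ("master/slave") process pair, not Tandem's API; the class and method names are assumptions for illustration.

```python
class ProcessPair:
    """Toy primary/backup process pair.

    The primary checkpoints its state to the backup at critical
    points; on failure, the backup resumes from the last checkpoint
    rather than restarting from scratch.
    """
    def __init__(self):
        self.primary_state = {"txn": 0}
        self.backup_state = {}          # last checkpointed snapshot

    def checkpoint(self):
        # Snapshot the primary's state into the backup's memory.
        self.backup_state = dict(self.primary_state)

    def run_transaction(self):
        self.primary_state["txn"] += 1
        self.checkpoint()               # critical point reached

    def failover(self):
        # Backup takes over from the last consistent snapshot.
        self.primary_state = dict(self.backup_state)
```

Because the backup only receives snapshots at critical points, the steady-state overhead is a fraction of the full duplication used in fully lock-stepped designs.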
the nodes and allows the users to treat the cluster as, by and large, one cohesive computing unit, e.g. via a single system image concept. Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature. A computer cluster may be
the nodes use the same hardware and the same operating system, although in some setups (e.g. using Open Source Cluster Application Resources (OSCAR)), different operating systems can be used on each computer, or different hardware. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as
the number of nodes added to the system, whereas most databases had performance that plateaued quite quickly, often after just two CPUs. A later version released in 1989 added transactions that could be spread over nodes, a feature that remained unique for some time. NonStop SQL continued to evolve, first as NonStop SQL/MP and then NonStop SQL/MX, which transitioned from Tandem to Compaq to HP. The code remains in use in HP's NonStop SQL/MP and NonStop SQL/MX and
the plants to fabricate the chips. Facing the challenges of this changing marketplace and manufacturing landscape, Tandem partnered with MIPS and adopted its R3000 and successor chipsets and their advanced optimizing compiler. Subsequent NonStop Guardian machines using the MIPS architecture were known to programmers as TNS/R machines and had a variety of marketing names. In 1991, Tandem released
the primary processing and the other serves as a "hot backup", receiving updates to data whenever the primary reaches a critical point in processing. Should the primary stop, the backup steps in to resume execution using the current transaction. The systems support relational database management systems like NonStop SQL and hierarchical databases such as Enscribe. Languages supported include Java, C, C++, COBOL, SCOBOL (Screen COBOL), Transaction Application Language (TAL), etc. It uses
the processes were running. This approach scaled easily to multiple-computer clusters and helped isolate corrupted data before it propagated. All file-system processes and all transactional application processes were structured as master/slave pairs of processes running in separate CPUs. The slave process periodically took snapshots of the master's memory state and took over the workload if and when
the same operating system. With the advent of virtualization, the cluster nodes may run on separate physical computers with different operating systems, overlaid with a virtualization layer so that they appear similar. The cluster may also be virtualized in various configurations as maintenance takes place; an example implementation is Xen as the virtualization manager with Linux-HA. As the computer clusters were appearing during
the same instant, in "lock step". Faults are detected by seeing when the cloned processors' outputs diverge. To detect failures, the system must have two physical processors for each logical, active processor. To also implement automatic failover recovery, the system must have three or four physical processors for each logical processor. The triple or quadruple cost of this sparing is practical when
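Divergence detection between lock-stepped replicas can be sketched as follows. This models the principle only (two replicas of the same deterministic computation, compared step by step); real lock-stepping compares hardware outputs each clock, and the function here is an illustrative assumption.

```python
def lockstep_check(step_fn, state_a, state_b, inputs):
    """Run two replicas of the same deterministic step function on
    identical inputs. Any divergence between the replicas' states
    signals a fault in one of them (though not which one -- that is
    why failover recovery needs a third replica to break the tie).
    """
    for x in inputs:
        state_a = step_fn(state_a, x)
        state_b = step_fn(state_b, x)
        if state_a != state_b:
            return False, (state_a, state_b)   # fault detected
    return True, state_a
```

With two replicas a fault is detectable but not attributable; with three, majority voting both detects and survives it, which is the cost trade-off the text describes.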
the scheduling and management of the slaves. In a typical implementation the master has two network interfaces: one that communicates with the private Beowulf network for the slaves, the other for the general-purpose network of the organization. The slave computers typically have their own version of the same operating system, and local memory and disk space. However, the private slave network may also have
the scripting and job-control language TACL (Tandem Advanced Command Language), and is written in TAL and C. The HPE Integrity NonStop computers are a line of fault-tolerant, message-based server computers based on the Intel Xeon processor platform, and optimized for transaction processing. Average availability levels of 99.999% have been observed. NonStop systems feature a massively parallel processing (MPP) architecture and provide linear scalability. Each CPU runs its own copy of
the suspected node is disabled or powered off. For instance, power fencing uses a power controller to turn off an inoperable node. The resource-fencing approach disallows access to resources without powering off the node. This may include persistent-reservation fencing via SCSI-3, Fibre Channel fencing to disable the Fibre Channel port, or global network block device (GNBD) fencing to disable access to
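The two fencing families (power fencing versus resource fencing) can be sketched as a dispatch over a fencing agent. The controller object and method names below are hypothetical stand-ins; real clusters invoke fencing agents such as networked power switches or SCSI-3 persistent-reservation commands.

```python
class FakeController:
    """Stand-in for a real fencing agent (power switch, SAN switch).
    Records actions instead of touching hardware."""
    def __init__(self):
        self.actions = []

    def power_off(self, node):
        self.actions.append(("power_off", node))

    def revoke_storage(self, node):
        self.actions.append(("revoke_storage", node))


def fence_node(node, method, controller):
    """Isolate a suspected-failed node before recovery proceeds."""
    if method == "power":
        controller.power_off(node)        # STONITH-style power fencing
    elif method == "storage":
        controller.revoke_storage(node)   # resource fencing: cut disk access
    else:
        raise ValueError(f"unknown fencing method: {method!r}")
```

Either way, the point is the same: the rest of the cluster must be certain the suspect node can no longer write to shared resources before its workload is taken over.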
the system to a stable state so that processing can resume without needing to recompute results. The Linux world supports various cluster software; for application clustering, there are distcc and MPICH. Linux Virtual Server and Linux-HA are director-based clusters that allow incoming requests for services to be distributed across multiple cluster nodes. MOSIX, LinuxPMI, Kerrighed, and OpenSSI are full-blown clusters integrated into
the time. The MPI specifications then gave rise to specific implementations. MPI implementations typically use TCP/IP and socket connections. MPI is now a widely available communications model that enables parallel programs to be written in languages such as C, Fortran, Python, etc. Thus, unlike PVM, which provides a concrete implementation, MPI is a specification which has been implemented in systems such as MPICH and Open MPI. One of
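The message-passing style that MPI standardizes can be illustrated without MPI itself. The sketch below uses Python's standard-library threads and queues as a stand-in: each "rank" receives its work and returns its partial result purely by messages, with no shared mutable state between workers.

```python
import threading
import queue

def worker(rank, q_in, q_out):
    # Each rank receives one chunk as a message, computes a partial
    # sum, and sends (rank, result) back -- no shared memory touched.
    data = q_in.get()
    q_out.put((rank, sum(data)))

def parallel_sum(values, nworkers=2):
    """Scatter chunks to workers, then gather and reduce the results."""
    chunks = [values[i::nworkers] for i in range(nworkers)]
    q_in, q_out = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(r, q_in, q_out))
               for r in range(nworkers)]
    for t in threads:
        t.start()
    for c in chunks:
        q_in.put(c)
    total = sum(q_out.get()[1] for _ in threads)
    for t in threads:
        t.join()
    return total
```

In real MPI the equivalent pattern would be scatter/reduce over processes on separate nodes; the queue-based version only mirrors the communication structure.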
the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort
was a proprietary design. It was greatly influenced by the HP 3000 minicomputer. They were both microprogrammed, 16-bit, stack-based machines with segmented, 16-bit virtual addressing. Both were intended to be programmed exclusively in high-level languages, with no use of assembler. Both were initially implemented via standard low-density TTL chips, each holding a 4-bit slice of the 16-bit ALU. Both had
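The bit-slice construction can be illustrated arithmetically: four 4-bit slice chips are chained, carry-out to carry-in, to behave as one 16-bit ALU. The sketch below models only the addition path of such a slice; it is an illustration of the technique, not a model of the actual TTL parts used.

```python
def add4(a, b, carry_in):
    """One 4-bit ALU slice: add two nibbles plus an incoming carry.
    Returns (sum nibble, carry out)."""
    total = a + b + carry_in
    return total & 0xF, total >> 4

def add16(a, b):
    """Ripple four 4-bit slices together, as four slice chips would
    be chained to form a 16-bit adder."""
    result, carry = 0, 0
    for i in range(4):
        nibble, carry = add4((a >> (4 * i)) & 0xF,
                             (b >> (4 * i)) & 0xF,
                             carry)
        result |= nibble << (4 * i)
    return result & 0xFFFF
```

The carry rippling between slices is exactly the inter-chip wiring that made wider ALUs buildable from narrow commodity parts.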
was accomplished by having a separate microcode routine for every common pair of instructions. That fused pair of stack instructions generally accomplished the same work as a single instruction of normal 32-bit minicomputers. Cyclone processors were packaged as sections of four CPUs each, with the sections joined by a fiber optic version of Dynabus. Like Tandem's prior high-end machines, Cyclone cabinets were styled with much angular black to suggest strength and power. Advertising videos directly compared Cyclone to
was acquired by Compaq. Compaq's x86-based server division was an early outside adopter of Tandem's ServerNet/InfiniBand interconnect technology. In 1997, Compaq acquired the Tandem Computers company and NonStop customer base to balance Compaq's heavy focus on personal computers (PCs). In 1998, Compaq also acquired the much larger Digital Equipment Corporation and inherited its DEC Alpha RISC servers with OpenVMS and Tru64 Unix customer bases. Tandem
was already current. In 1995, the Integrity S4000 was the first to use ServerNet (a networked "bus" structure) and moved toward sharing peripherals with the NonStop line. In 1995–1997, Tandem partnered with Microsoft to implement high-availability features and advanced SQL configurations in clusters of commodity Microsoft Windows NT machines. This project was codenamed "Wolfpack" and first shipped as Microsoft Cluster Server in 1997. Microsoft benefited greatly from this partnership; Tandem did not. When Tandem
was an efficient machine-dependent systems programming language (for operating systems, compilers, etc.) but could also be used for non-portable applications. It was derived from the HP 3000's System Programming Language (SPL). Both had semantics similar to C but a syntax based on Burroughs' ALGOL. Subsequent releases added support for Cobol74, Basic, Fortran, Java, C, C++, and MUMPS. The Tandem NonStop series ran
was an improvement for the surviving NonStop division and its customers. In some ways, Tandem's journey from HP-inspired start-up to HP-inspired competitor, and then to HP division, was "bringing Tandem back to its original roots", but this was not the same HP. The porting of the NSK-based NonStop product line from MIPS processors to Itanium-based processors was completed and branded as "HP Integrity NonStop Servers". (This NSK Integrity NonStop
was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as
was duplicated and had dual connections to both CPUs and devices. Each disk was mirrored, with separate connections to two independent disk controllers. If a disk failed, its data was still available from its mirrored copy. If a CPU, controller or bus failed, the disk was still reachable through an alternative CPU, controller, and/or bus. Each disk or network controller was connected to two independent CPUs. Power supplies were each wired to only one side of
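The fail-over behavior across redundant access paths can be sketched as follows. Each path here stands for one CPU/controller/bus combination leading to a mirrored copy; the names and callable convention are illustrative assumptions, not Tandem's I/O interface.

```python
def read_block(block_id, paths):
    """Try each redundant access path in turn. A path is a callable
    that returns the block, or raises IOError if any element along
    it (CPU, controller, bus, or disk) has failed."""
    for path in paths:
        try:
            return path(block_id)
        except IOError:
            continue                 # fail over to the next path
    raise IOError("block unreachable via every redundant path")
```

Because every component from CPU to disk had at least one alternate, a single failure anywhere along one path left at least one callable path intact.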
was for purposes of data integrity, not fault tolerance; fault tolerance was handled by the other mechanisms still in place. It used a variation of lock-stepping. The checker processor ran one cycle behind the primary processor. This allowed them to share a single copy of external code and data caches without putting excessive pinout load on the system bus and lowering the system clock rate. To successfully run microprocessors in lock step,
was formed in 1974, every computer company designed and built its CPUs from basic circuits, using its own proprietary instruction set, compilers, etc. With each year of semiconductor progress under Moore's Law, more of a CPU's core circuits could fit into single chips, running faster and cheaper as a result. However, it became increasingly expensive for a computer company to design those advanced custom chips or build
was formerly a general manager of the HP 3000 division. The business plan included detailed ideas for building a unique corporate culture reflecting Treybig's values. The design of the initial Tandem/16 hardware was completed in 1975, and the first system shipped to Citibank in May 1976. The company enjoyed uninterrupted exponential growth through 1983. Inc. magazine ranked Tandem as
was one board, containing six "compiled silicon" ASIC CMOS chips. The CPU core chip was duplicated and lock-stepped for maximal error detection. This added no additional fault tolerance but assured data integrity, as each CPU included checking logic that made certain the results of both CPU chips were identical. Other processors would provide fault tolerance. Pinout was a main limitation of this chip technology. Microcode, cache, and TLB were all external to
was only for hierarchical, non-relational databases via the ENSCRIBE file system. This was extended into a relational database called ENCOMPASS. In 1986, Tandem introduced the first fault-tolerant SQL database, NonStop SQL. Developed entirely in-house, NonStop SQL includes a number of features based on Guardian to ensure data validity across nodes. NonStop SQL is known for scaling linearly in performance with
was removed in 1983 with the introduction of the Transaction Monitoring Facility (TMF), along with the Pathway transaction management software and SCOBOL applications (or, later, the NonStop Tuxedo transaction management software), which handle the various aspects of fault tolerance at the system level. NonStop OS is a message-based operating system designed for fault tolerance. It works with process pairs and ensures that backup processes on redundant CPUs take over in case of
was sold to customers needing the utmost reliability. This new checking approach was called NSAA, the NonStop Advanced Architecture. As in the earlier migration from stack machines to MIPS microprocessors, all customer software was carried forward without source changes. "Native mode" source code compiled directly to MIPS machine code was simply recompiled for Itanium. Some older "non-native" software
was spun off as the "Ellipse" product of Cooperative Systems Incorporated. In 1985, Tandem attempted to grab a piece of the rapidly growing personal computer market with its introduction of the MS-DOS-based Dynamite PC/workstation. Numerous design compromises (including a unique 8086-based hardware platform incompatible with expansion cards of the day and extremely limited compatibility with IBM-based PCs) relegated
was still in TNS stack-machine form. These were automatically ported onto Itanium via object-code translation techniques. The next endeavor was to move from Itanium to the Intel x86 architecture. It was completed in 2014, with the first systems made commercially available. The inclusion of the fault-tolerant 4X FDR (Fourteen Data Rate) InfiniBand double-wide switches provided more than a 25-fold increase in system interconnect capacity. NonStop (server computers) NonStop
was the Tandem/16 or T/16, later re-branded the NonStop I. The machine consisted of between two and 16 CPUs, organized as a fault-tolerant computer cluster packaged in a single rack. Each CPU had its own private, unshared memory, its own I/O processor, its own private I/O bus to connect to I/O controllers, and dual connections to all the other CPUs over a custom inter-CPU backplane bus called Dynabus. Each disk controller or network controller
was then midway through porting its NonStop product line from MIPS R12000 microprocessors to Intel's new Itanium Merced microprocessors. This project was restarted with Alpha as the new target, to align NonStop with Compaq's other large server lines. But in 2001, Compaq terminated all Alpha engineering investment in favor of the Itanium microprocessors, before any new NonStop products were released on Alpha. In 2001, Hewlett-Packard similarly made
was then translated into equivalent, partially optimized MIPS instruction sequences at kernel install time by a tool called the Accelerator. Less-important programs could also be executed directly without pre-translation, via a TNS code interpreter. These migration techniques were successful and remain in use today. End-user software was brought over without extra work, performance was good enough for mid-range machines, and programmers could ignore
was unrelated to Tandem's original "Integrity" series for Unix.) Because it was not possible to run Itanium McKinley chips with clock-level lock-stepping, the Integrity NonStop machines instead lock-stepped using comparisons between chip states at longer time scales: at interrupt points and at various software synchronization points between interrupts. The intermediate synchronization points were automatically triggered at every nth taken branch instruction and were also explicitly inserted into long loop bodies by all NonStop compilers. The machine design supported both dual and triple redundancy, with either two or three physical microprocessors per logical Itanium processor. The triple version