
UnixWare NonStop Clusters


NonStop Clusters (NSC) was an add-on package for SCO UnixWare that allowed the creation of fault-tolerant, single-system image clusters of machines running UnixWare. NSC was one of the first commercially available high-availability clustering solutions for commodity hardware.


NSC provided a full single-system image cluster. The NSC system was designed for high availability: all system services were either redundant or would fail over from one node to another in the event of a node crash. The disk subsystem was either accessible from multiple nodes (using a Fibre Channel SAN or dual-ported SCSI) or used cross-node mirroring in a similar fashion to DRBD. NSC

A GPLed version of the NSC code, which eventually became OpenSSI.

Single-system image

In distributed computing, a single system image (SSI) cluster is a cluster of machines that appears to be one single system. The concept is often considered synonymous with that of a distributed operating system, but a single image may be presented for more limited purposes, just job scheduling for instance, which may be achieved by means of an additional layer of software over conventional operating system images running on each node. The interest in SSI clusters

A transport-layer protocol began increasing. While several vendors had already added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3. Using TCP as a transport made using NFS over a WAN more feasible, and allowed the use of larger read and write transfer sizes beyond the 8 KB limit imposed by User Datagram Protocol. WebNFS
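
As a concrete illustration of the transfer-size tuning described above, the sketch below builds a Linux NFS mount command that forces TCP and requests 32 KB read/write sizes. It is a minimal sketch under stated assumptions: the server name server.example.com, the export /export, and the mount point /mnt/nfs are hypothetical, and the option names assume the Linux nfs mount helper.

    import subprocess

    # Hypothetical server, export, and mount point; adjust for a real deployment.
    SERVER = "server.example.com"
    EXPORT = "/export"
    MOUNTPOINT = "/mnt/nfs"

    # proto=tcp selects TCP as the transport; rsize/wsize request 32 KB
    # transfers, four times the 8 KB ceiling of UDP-based NFS Version 2.
    options = "proto=tcp,vers=3,rsize=32768,wsize=32768"

    # Equivalent to: mount -t nfs -o <options> server.example.com:/export /mnt/nfs
    subprocess.run(
        ["mount", "-t", "nfs", "-o", options, f"{SERVER}:{EXPORT}", MOUNTPOINT],
        check=True,
    )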

A later date. Checkpointing can be seen as related to migration, as migrating a process from one node to another can be implemented by first checkpointing the process, then restarting it on another node. Alternatively, checkpointing can be considered as migration to disk. Some SSI systems provide the illusion that all processes are running on the same machine: the process management tools (e.g. "ps", "kill" on Unix-like systems) operate on all processes in
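
To make the checkpoint-then-restart idea concrete, here is a minimal sketch in Python: a long-running computation periodically pickles its state to disk, and a restarted process (possibly on another node sharing the file system) resumes from the saved state. All names are illustrative; real SSI checkpointing operates on whole processes at the kernel level, not on application-managed state like this.

    import os
    import pickle

    CHECKPOINT = "/shared/checkpoint.pkl"  # hypothetical path on cluster-wide storage

    def run(total=1_000_000):
        # Resume from the last checkpoint if one exists (e.g. after the
        # process was "migrated to disk" and restarted on another node).
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT, "rb") as f:
                state = pickle.load(f)
        else:
            state = {"i": 0, "acc": 0}

        while state["i"] < total:
            state["acc"] += state["i"]
            state["i"] += 1
            if state["i"] % 100_000 == 0:
                # Periodic checkpoint: save state so the computation can be
                # reloaded at a later date, or restarted on a different node.
                with open(CHECKPOINT, "wb") as f:
                    pickle.dump(state, f)
        return state["acc"]

    if __name__ == "__main__":
        print(run())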

A method of separating the filesystem meta-data from file data location; it goes beyond the simple name/data separation by striping the data amongst a set of data servers. This differs from the traditional NFS server, which holds the names of files and their data under the single umbrella of the server. Some products are multi-node NFS servers, but the participation of the client in separation of meta-data and data
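
The following sketch illustrates the name/data split that pNFS-style striping implies: a metadata service maps a file name to a stripe layout, and the client computes, per byte offset, which data server to contact. Everything here (the server names, the 64 KB stripe size, the round-robin placement) is a simplified assumption for illustration, not the actual NFSv4.1 layout protocol.

    STRIPE_SIZE = 64 * 1024  # bytes per stripe unit (illustrative)
    DATA_SERVERS = ["ds0.example.com", "ds1.example.com", "ds2.example.com"]

    # A toy "metadata server": maps a file name to its layout. In pNFS the
    # client obtains a layout like this from the metadata server, then talks
    # to the data servers directly for reads and writes.
    layouts = {"/export/bigfile": {"servers": DATA_SERVERS, "stripe": STRIPE_SIZE}}

    def locate(path, offset):
        """Return (data_server, offset_within_stripe) for a byte offset."""
        layout = layouts[path]
        stripe_index = offset // layout["stripe"]
        server = layout["servers"][stripe_index % len(layout["servers"])]
        return server, offset % layout["stripe"]

    # Byte 200000 of /export/bigfile lands on stripe 3, which wraps to ds0.
    print(locate("/export/bigfile", 200_000))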

A modular implementation, reflected in a simple protocol. By February 1986, implementations were demonstrated for operating systems such as System V release 2, DOS, and VAX/VMS using Eunice. NFSv2 only allows the first 2 GB of a file to be read because of its 32-bit file offsets, which cap addressable file data at about 2 GB. Version 3 (RFC 1813, June 1995) added: The first NFS Version 3 proposal within Sun Microsystems

A similar agreement to give ISOC change control over NFS, although writing the contract carefully to exclude NFS version 2 and version 3. Instead, ISOC gained the right to add new versions to the NFS protocol, which resulted in the IETF specifying NFS version 4 in 2003. By the 21st century, neither DFS nor AFS had achieved any major commercial success as compared to SMB or NFS. IBM, which had formerly acquired

A single system image.

Network File System (protocol)

Network File System (NFS) is a distributed file system protocol originally developed by Sun Microsystems (Sun) in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. NFS

Is an open IETF standard defined in a Request for Comments (RFC), allowing anyone to implement the protocol. Sun used version 1 only for in-house experimental purposes. When the development team added substantial changes to NFS version 1 and released it outside of Sun, they decided to release the new version as v2, so that version interoperation and RPC version fallback could be tested. Version 2 of

Is available in some enterprise solutions such as VMware ESXi. NFS version 4.2 (RFC 7862) was published in November 2016 with new features including: server-side clone and copy, application I/O advise, sparse files, space reservation, application data block (ADB), labeled NFS with sec_label that accommodates any MAC security system, and two new operations for pNFS (LAYOUTERROR and LAYOUTSTATS). One big advantage of NFSv4 over its predecessors

Is available on: During the development of the ONC protocol (called SunRPC at the time), only Apollo's Network Computing System (NCS) offered comparable functionality. Two competing groups formed over fundamental differences in the two remote procedure call systems. Arguments focused on the method for data encoding: ONC's External Data Representation (XDR) always rendered integers in big-endian order, even if both peers of
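
The endianness point is easy to see with Python's struct module: the sketch below encodes an integer the way XDR mandates (4-byte big-endian) regardless of the host's native byte order, which is exactly the always-swap behavior NCS tried to avoid between same-endian peers.

    import struct
    import sys

    value = 0x12345678

    # XDR always renders a 32-bit integer in big-endian (network) order...
    xdr_encoding = struct.pack(">i", value)

    # ...whereas the host's native order may differ (e.g. little-endian x86).
    native_encoding = struct.pack("=i", value)

    print(sys.byteorder)         # 'little' on x86-64
    print(xdr_encoding.hex())    # '12345678' on every machine
    print(native_encoding.hex()) # '78563412' on a little-endian host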


Is based on the perception that they may be simpler to use and administer than more specialized clusters. Different SSI systems may, depending on their intended usage, provide a more or less complete illusion of a single system, supporting only a subset of its features. Many SSI systems provide process migration: processes may start on one node and be moved to another node, possibly for resource balancing or administrative reasons. As processes are moved from one node to another, other associated resources (for example, IPC resources) may be moved with them. Some SSI systems allow checkpointing of running processes, allowing their current state to be saved and reloaded at

Is limited. The NFSv4.1 pNFS server is a set of server resources or components; these are assumed to be controlled by the meta-data server. The pNFS client still accesses one meta-data server for traversal or interaction with the namespace; when the client moves data to and from the server, it may directly interact with the set of data servers belonging to the pNFS server collection. The NFSv4.1 client can be enabled to be

Is that only one UDP or TCP port, 2049, is used to run the service, which simplifies using the protocol across firewalls. WebNFS, an extension to Version 2 and Version 3, allows NFS to integrate more easily into Web browsers and to enable operation through firewalls. In 2007, Sun Microsystems open-sourced their client-side WebNFS implementation. Various side-band protocols have become associated with NFS. Note: NFS

The Distributed Computing Environment (DCE) and the DCE Distributed File System (DFS) over Sun/ONC RPC and NFS. DFS used DCE as the RPC, and DFS derived from the Andrew File System (AFS); DCE itself derived from a suite of technologies, including Apollo's NCS and Kerberos. Sun Microsystems and the Internet Society (ISOC) reached an agreement to cede "change control" of ONC RPC so that

The Internet Engineering Task Force (IETF) after Sun Microsystems handed over the development of the NFS protocols. NFS version 4.1 (RFC 5661, January 2010; revised in RFC 8881, August 2020) aims to provide protocol support to take advantage of clustered server deployments, including the ability to provide scalable parallel access to files distributed among multiple servers (the pNFS extension). Version 4.1 includes a session trunking mechanism (also known as NFS multipathing) and

The Open Software Foundation (OSF) in 1988. Ironically, Sun and AT&T had formerly competed over Sun's NFS versus AT&T's Remote File System (RFS), and the quick adoption of NFS over RFS by Digital Equipment, HP, IBM, and many other computer vendors tipped the majority of users in favor of NFS. NFS interoperability was aided by events called "Connectathons" starting in 1986 that allowed vendor-neutral testing of implementations with each other. OSF adopted

The ISOC's engineering-standards body, the Internet Engineering Task Force (IETF), could publish standards documents (RFCs) related to ONC RPC protocols and could extend ONC RPC. OSF attempted to make DCE RPC an IETF standard, but ultimately proved unwilling to give up change control. Later, the IETF chose to extend ONC RPC by adding a new authentication flavor based on the Generic Security Services Application Program Interface (GSSAPI), RPCSEC GSS, to meet IETF requirements that protocol standards have adequate security. Later, Sun and ISOC reached

The cluster interconnect inter-node communication path. In this form, NSC was commercialized by the Tandem Computers division of Compaq and only supported on qualified hardware from Compaq and, later, Fujitsu-Siemens. In 2000, NSC was modified to allow standard Fast Ethernet and later Gigabit Ethernet switches as the cluster interconnect, and was commercialized by SCO as UnixWare NonStop Clusters 7.1.1+IP. This release of NSC

The cluster that can be used to contact the cluster as if it were one machine. This can be used for load balancing inbound calls to the cluster, directing them to lightly loaded nodes, or for redundancy, moving the cluster address from one machine to another as nodes join or leave the cluster. Examples here vary from commercial platforms with scaling capabilities, to packages/frameworks for creating distributed systems, as well as those that actually implement
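
A toy illustration of the cluster-address idea: the sketch below keeps a table of node loads and answers "which node services the cluster IP's next inbound call" with the least-loaded live node, so the address keeps working as nodes join or leave. The node names and loads are invented; real implementations do this with ARP takeover, virtual IPs, or load-balancing switches rather than application code.

    # Hypothetical view of the cluster: node name -> (alive, current load).
    nodes = {
        "node1": {"alive": True, "load": 0.70},
        "node2": {"alive": True, "load": 0.20},
        "node3": {"alive": False, "load": 0.00},  # crashed or left the cluster
    }

    def pick_node():
        """Choose the node that should service the next inbound call
        arriving on the cluster IP address: the least-loaded live node."""
        live = {name: n["load"] for name, n in nodes.items() if n["alive"]}
        if not live:
            raise RuntimeError("no live nodes: cluster address unreachable")
        return min(live, key=live.get)

    print(pick_node())               # node2: lightly loaded, so it gets the call

    nodes["node2"]["alive"] = False  # node2 leaves the cluster...
    print(pick_node())               # ...and the next call goes to node1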

The cluster. Most SSI systems provide a single view of the file system. This may be achieved by a simple NFS server, shared disk devices or even file replication. The advantage of a single root view is that processes may be run on any available node and access needed files with no special precautions. If the cluster implements process migration, a single root view enables direct accesses to


The connection had little-endian machine architectures, whereas NCS's method attempted to avoid byte-swapping whenever two peers shared a common endianness in their machine architectures. An industry group called the Network Computing Forum formed (March 1987) in an (ultimately unsuccessful) attempt to reconcile the two network-computing environments. In 1987, Sun and AT&T announced they would jointly develop AT&T's UNIX System V Release 4. This caused many of AT&T's other UNIX System licensees to become concerned that this would put Sun in an advantaged position, and ultimately led to Digital Equipment, HP, IBM, and others forming

The files from the node where the process is currently running. Some SSI systems provide a way of "breaking the illusion", having some node-specific files even in a single root. HP TruCluster provides a "context dependent symbolic link" (CDSL) which points to different files depending on the node that accesses it. HP VMScluster provides a search list logical name with node-specific files occluding cluster-shared files where necessary. This capability may be necessary to deal with heterogeneous clusters, where not all nodes have
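
The CDSL mechanism can be mimicked in a few lines: a shared path contains a placeholder that each node expands to its own name, so the "same" file name resolves to per-node contents. The placeholder syntax and paths below are made up for illustration and are not TruCluster's actual on-disk format.

    import socket

    # Hypothetical CDSL-style target stored in the shared root: the
    # {node} placeholder stands in for the member doing the lookup.
    CDSL_TARGET = "/cluster/members/{node}/etc/rc.config"

    def resolve_cdsl(target: str) -> str:
        """Expand the per-node placeholder the way a context-dependent
        symbolic link resolves differently on each accessing node."""
        return target.format(node=socket.gethostname())

    # On node "alpha" this yields /cluster/members/alpha/etc/rc.config;
    # on node "beta" the very same link yields .../beta/etc/rc.config.
    print(resolve_cdsl(CDSL_TARGET))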

The primary commercial vendor of DFS and AFS, Transarc, donated most of the AFS source code to the free software community in 2000. The OpenAFS project lives on. In early 2005, IBM announced end of sales for AFS and DFS. In January 2010, Panasas proposed an NFSv4.1 based on their Parallel NFS (pNFS) technology, claiming to improve data-access parallelism capability. The NFSv4.1 protocol defines

The protocol (defined in RFC 1094, March 1989) originally operated only over User Datagram Protocol (UDP). Its designers meant to keep the server side stateless, with locking (for example) implemented outside of the core protocol. People involved in the creation of NFS version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others. The Virtual File System interface allows

The same configuration. In more complex configurations, such as multiple nodes of multiple architectures over multiple sites, several local disks may combine to form the logical single root. Some SSI systems allow all nodes to access the I/O devices (e.g. tapes, disks, serial lines and so on) of other nodes. There may be some restrictions on the kinds of accesses allowed (for example, OpenSSI can't mount disk devices from one node on another node). Some SSI systems allow processes on different nodes to communicate using inter-process communication mechanisms as if they were running on

The same machine. On some SSI systems this can even include shared memory (which can be emulated in software with distributed shared memory). In most cases inter-node IPC will be slower than IPC on the same machine, possibly drastically slower for shared memory. Some SSI clusters include special hardware to reduce this slowdown. Some SSI systems provide a "cluster IP address", a single address visible from outside

The starting point. Both of those changes were later incorporated into NFSv4. Version 4 (RFC 3010, December 2000; revised in RFC 3530, April 2003, and again in RFC 7530, March 2015), influenced by the Andrew File System (AFS) and Server Message Block (SMB), includes performance improvements, mandates strong security, and introduces a stateful protocol. Version 4 became the first version developed with

Was an extension to NFSv2 and NFSv3 allowing it to function behind restrictive firewalls without the complexity of the Portmap and MOUNT protocols. WebNFS had a fixed TCP/UDP port number (2049), and instead of requiring the client to contact the MOUNT RPC service to determine the initial filehandle of every filesystem, it introduced the concept of a public filehandle (null for NFSv2, zero-length for NFSv3) which could be used as
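
To show what the public filehandle buys, the sketch below contrasts the two bootstrap styles: the classic flow needs Portmap and MOUNT RPCs to learn a filesystem's root handle, while a WebNFS-style client starts from a well-known public handle (all-zero 32 bytes for NFSv2, zero-length for NFSv3) and looks paths up relative to it. The handle sizes follow the protocols, but the lookup function itself is an invented stand-in, not a real RPC library.

    # Well-known public filehandles a WebNFS client can start from:
    PUBLIC_FH_V2 = bytes(32)   # NFSv2: fixed 32-byte handle, all zeros ("null")
    PUBLIC_FH_V3 = b""         # NFSv3: variable-length handle of length zero

    def webnfs_lookup(public_fh: bytes, path: str) -> bytes:
        """Invented stand-in for a lookup relative to the public filehandle.
        A real WebNFS client would send the lookup over port 2049 directly,
        never touching the Portmap or MOUNT services."""
        return b"<filehandle for %s>" % path.encode()

    # Classic NFS bootstrap (for contrast) would instead:
    #   1. ask Portmap where the MOUNT service lives,
    #   2. call MOUNT to obtain the export's root filehandle,
    #   3. only then issue lookups against that handle.
    print(webnfs_lookup(PUBLIC_FH_V3, "/export/docs/readme.txt"))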

Was available on commodity PC hardware, although SCO recommended that systems with more than two nodes use the ServerNet interconnect. After the sale of the SCO Unix business to Caldera Systems, it was announced that the long-term goal was to integrate the NSC product into the base UnixWare code, but this was not to be: Caldera Systems ceased distribution of NSC, replacing it with the Reliant HA clustering solution, and in May 2001 Compaq announced that it would release

Was created not long after the release of NFS Version 2. The principal motivation was an attempt to mitigate the performance issue of the synchronous write operation in NFS Version 2. By July 1992, implementation practice had solved many shortcomings of NFS Version 2, leaving only the lack of large file support (64-bit file sizes and offsets) as a pressing issue. At the time of introduction of Version 3, vendor support for TCP as


Was developed for Tandem Computers by Locus Computing Corporation, based on their Transparent Network Computing technology. During the lifetime of the project, Locus was acquired by Platinum Technology Inc. The NSC team and product were then transferred to Tandem. Initially, NSC was developed for the Compaq Integrity XC packaged cluster, consisting of between two and six Compaq ProLiant servers and one or two Compaq ServerNet switches to provide
