QFS (Quick File System) is a filesystem from Oracle. It is tightly integrated with SAM, the Storage and Archive Manager, and hence is often referred to as SAM-QFS. SAM provides the functionality of a hierarchical storage manager.
QFS supports some volume management capabilities, allowing many disks to be grouped together into a file system. File system metadata can be kept on a separate set of disks, which is useful for streaming applications where long disk seeks cannot be tolerated. SAM extends the QFS file system transparently to archival storage. A SAM-QFS file system may have a relatively small (gigabytes to terabytes) "disk cache" backed by petabytes of tape or other bulk storage. Files are copied to archival storage in
A centralized data storage model, but with its own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as the backup process. A SAN is a combination of hardware and software. It grew out of data-centric mainframe architectures, where clients in
A computer appliance – a purpose-built specialized computer. NAS systems are networked appliances that contain one or more storage drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage typically provides access to files using network file sharing protocols such as NFS, SMB, or AFP. From the mid-1990s, NAS devices began gaining popularity as
A heterogeneous group of clients. The term "NAS" can refer to both the technology and systems involved, or a specialized device built for such functionality (as, unlike tangentially related technologies such as local area networks, a NAS device is often a singular unit). A NAS device is optimised for serving files either by its hardware, software, or configuration. It is often manufactured as
A CIM object manager (CIMOM), to manage objects and interactions, and allows for the central management of SAN storage devices. Basic device management for SANs can also be achieved through the Storage Management Interface Specification (SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory. Management software applications are also available to configure SAN storage devices, allowing, for example,
A LUN, these would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software. File systems have been developed to work with SAN software to provide file-level access. These are known as shared-disk file systems. Video editing systems require very high data transfer rates and very low latency. SANs in media and entertainment are often referred to as serverless due to
A SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data. In SANs, disk arrays are organized with RAID, which makes many hard disks appear and perform as one large storage device. Every storage device, or even a partition on that storage device, has a logical unit number (LUN) assigned to it. This is a unique number within
A SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not consist only of a communication infrastructure; it also has a software management layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN does not use direct-attached storage (DAS), the storage devices in
A SAN provides only block-level access, file systems built on top of SANs do provide file-level access and are known as shared-disk file systems. Newer SAN configurations enable hybrid SANs, allowing traditional block storage that appears as local storage, but also object storage for web services through APIs. Storage area networks (SANs) are sometimes referred to as the "network behind the servers" and historically developed out of
A SAN-NAS hybrid, offering both file-level protocols (NAS) and block-level protocols (SAN) from the same system. A shared-disk file system can also be run on top of a SAN to provide filesystem services. In the early 1980s, the "Newcastle Connection" by Brian Randell and his colleagues at Newcastle University demonstrated and developed remote file access across a set of UNIX machines. Novell's NetWare server operating system and NCP protocol
A convenient method of sharing files among multiple computers, as well as to remove the responsibility of file serving from other servers on the network; by doing so, a NAS can provide faster data access, easier administration, and simpler configuration than using a general-purpose server to serve files. Accompanying a NAS are purpose-built hard disk drives, which are functionally similar to non-NAS drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays,
A few. In 2009, NAS vendors (notably CTERA Networks and Netgear) began to introduce online backup solutions integrated in their NAS appliances, for online disaster recovery. By 2021, three major types of NAS solutions are offered (all with hybrid cloud models where data can be stored both on-premises on the NAS and off-site on a separate NAS or through a public cloud service provider). The first type of NAS
A network can connect to several servers that store different types of data. To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed
A request to read or write data, it will check its access list to establish whether the node, identified by its LUN, is allowed to access the storage area, also identified by a LUN. LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted. In doing so, LUNs that should never be accessed by the server are masked. Another method to restrict server access to particular SAN storage devices
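As a rough illustration of the LUN masking idea described above, consider a toy access check. The mask table, host names, and function name below are hypothetical, invented for this sketch; real masking is enforced in HBA firmware and SAN software, not application code.

```python
# Hypothetical LUN mask: which LUNs each initiator (server) may address.
# All names here are illustrative, not from any real SAN product.
ALLOWED_LUNS = {
    "server-a": {0, 1},   # server-a may access LUNs 0 and 1
    "server-b": {2},      # server-b may access LUN 2 only
}

def accept_command(initiator: str, lun: int) -> bool:
    """Accept a SCSI command only if the initiator's mask includes the LUN."""
    return lun in ALLOWED_LUNS.get(initiator, set())

print(accept_command("server-a", 1))  # True
print(accept_command("server-b", 0))  # False: LUN 0 is masked for server-b
```

Commands addressed to masked LUNs are simply rejected, so a misconfigured or compromised host cannot reach storage areas outside its mask.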
A single drive can be recovered completely via the redundancy encoded across the RAID set. If a drive spends several seconds executing extensive retries, it might cause the RAID controller to flag the drive as "down", whereas if it simply replied promptly that the block of data had a checksum error, the RAID controller would use the redundant data on the other drives to correct the error and continue without any problem. A NAS unit
A single protocol. The key difference between direct-attached storage (DAS) and NAS is that DAS is simply an extension of an existing server and is not necessarily networked. As the name suggests, DAS is typically connected via a USB or Thunderbolt cable. NAS is designed as an easy, self-contained solution for sharing files over the network. Both DAS and NAS can potentially increase availability of data by using RAID or clustering. Both NAS and DAS can have varying amounts of cache memory, which greatly affects performance. When comparing use of NAS with use of local (non-networked) DAS,
A single-writer/multi-reader mode which can be used to share disks between hosts without the need for a network connection.) SAM-QFS was designed and implemented at Large Storage Systems (LSC). The lead architect of SAM-QFS was Harriet Coverston, the founder and VP of Technology at LSC. LSC and SAM-QFS were purchased by Sun in 2001. Sun released the SAM-QFS source code to the OpenSolaris project in March 2008. After Oracle acquired Sun, Oracle continued to develop
A switch provides a dedicated link to connect all its ports with one another. When SANs were first built, Fibre Channel had to be implemented over copper cables; these days, multimode optical fibre cables are used in SANs. SANs are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, allowing transmission of data across all attached wires at
A technology often used in NAS implementations. For example, some NAS versions of drives support a command extension to allow extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on
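The recovery behaviour described above can be illustrated with a toy XOR-parity example: rather than waiting on a drive's lengthy retries, the controller rebuilds the block from the surviving drives. Real RAID controllers do this over whole stripes in firmware; this is only a sketch of the parity arithmetic.

```python
# Toy RAID-style redundancy: XOR parity over equal-sized blocks lets any
# one block be reconstructed from the others plus the parity block.
def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks(d0, d1, d2)          # parity block on a fourth drive

# Drive 1 reports a checksum error; rebuild its block from the others.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

This is why a drive that promptly reports a checksum error is preferable in a RAID array: the controller can answer the read immediately from redundancy instead of stalling.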
A time-limited, cacheable capability for clients to access the storage objects. A file access from the client to the disks has the following sequence: A clustered NAS is a NAS that uses a distributed file system running simultaneously on multiple servers. The key difference between a clustered and a traditional NAS is the ability to distribute (e.g. stripe) data and metadata across the cluster nodes or storage devices. Clustered NAS, like
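The striping idea mentioned above can be sketched as a round-robin placement of fixed-size chunks across cluster nodes. The node names, chunk size, and placement policy here are illustrative assumptions; real clustered NAS systems use far more sophisticated layouts.

```python
# Minimal sketch of striping file data across cluster nodes (round-robin).
CHUNK = 4  # bytes per stripe unit (tiny, for illustration only)

def stripe(data: bytes, nodes: list[str]) -> dict[str, list[bytes]]:
    """Assign successive chunks of data to nodes in round-robin order."""
    placement: dict[str, list[bytes]] = {n: [] for n in nodes}
    for i in range(0, len(data), CHUNK):
        node = nodes[(i // CHUNK) % len(nodes)]
        placement[node].append(data[i:i + CHUNK])
    return placement

layout = stripe(b"abcdefghijkl", ["node1", "node2", "node3"])
# node1 holds b"abcd", node2 holds b"efgh", node3 holds b"ijkl"
```

Because each node holds only a slice of the data, reads and writes can proceed on several nodes in parallel, which is the performance rationale for clustering.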
A web browser. They can run from a virtual machine, Live CD, bootable USB flash drive (Live USB), or from one of the mounted hard drives. They run Samba (an SMB daemon), an NFS daemon, and FTP daemons, which are freely available for those operating systems. Network-attached secure disks (NASD) was a 1997–2001 research project of Carnegie Mellon University, with the goal of providing cost-effective scalable storage bandwidth. NASD reduces
is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to access data storage devices, such as disk arrays and tape libraries, from servers so that the devices appear to the operating system as direct-attached storage. A SAN typically is a dedicated network of storage devices not accessible through the local area network (LAN). Although
is a single point of failure, and a large part of the LAN bandwidth is used for accessing, storing, and backing up data. To solve the single-point-of-failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device. DAS was the first network storage system and is still widely used where data storage requirements are not very high. Out of it developed
is a computer connected to a network that provides only file-based data storage services to other devices on the network. Although it may technically be possible to run other software on a NAS unit, it is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser. A full-featured operating system
is fabric-based access control, or zoning, which is enforced by the SAN networking devices and servers. Under zoning, server access is restricted to storage devices that are in a particular SAN zone. A mapping layer to other protocols is used to form a network: Storage networks may also be built using Serial Attached SCSI (SAS) and Serial ATA (SATA) technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from Parallel ATA direct-attached storage. SAS and SATA devices can be networked using SAS expanders. The Storage Networking Industry Association (SNIA) defines
is focused on consumer needs, with lower-cost options that typically support 1–5 hot-plug hard drives. The second is focused on small-to-medium-sized businesses; these NAS solutions range from 2 to 24+ hard drives and are typically offered in tower or rackmount form factors. Pricing can vary greatly depending on the processor, components, and overall features supported. The last type is geared toward enterprises or large businesses and is offered with more advanced software capabilities. NAS solutions are typically sold without hard drives installed to allow
is focused solely on data storage, but capabilities can be available based on specific vendor options. NAS provides both storage and a file system. This is often contrasted with SAN (storage area network), which provides only block-based storage and leaves file system concerns on the "client" side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE) and HyperSCSI. One way to loosely conceptualize
is not needed on a NAS device, so often a stripped-down operating system is used. NAS systems contain one or more hard disk drives, often arranged into logical, redundant storage containers or RAID. NAS uses file-based protocols such as NFS (popular on UNIX systems), SMB (Server Message Block) (used with Microsoft Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). NAS units rarely limit clients to
is often used on top of the Fibre Channel switched fabric protocol in servers and SAN storage devices. The Internet Small Computer Systems Interface (iSCSI) over Ethernet and the InfiniBand protocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN. However, InfiniBand and iSCSI storage devices, in particular disk arrays, are available. The various storage devices in
is the consumer market, where there is a large amount of multimedia data. Such consumer-market appliances are now commonly available. Unlike their rackmounted counterparts, they are generally packaged in smaller form factors. The price of NAS appliances has fallen sharply in recent years, offering flexible network-based storage to the home consumer market for little more than the cost of a regular USB or FireWire external hard disk. Many of these home consumer devices are built around ARM, x86 or MIPS processors running an embedded Linux operating system. Open-source NAS-oriented distributions of Linux and FreeBSD are available. These are designed to be easy to set up on commodity PC hardware, and are typically configured using
is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor-proprietary technology. However,
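The location transparency described above can be sketched as a translation from a logical block address to a physical array and block. The extent model, class names, and array names below are illustrative assumptions, not any vendor's actual implementation.

```python
# Hedged sketch of storage virtualization: a virtual volume maps logical
# block addresses (LBAs) onto extents drawn from a pool of physical arrays.
from dataclasses import dataclass

@dataclass
class Extent:
    array: str    # which physical disk array backs this extent (illustrative)
    start: int    # first physical block of the extent
    length: int   # number of blocks in the extent

class VirtualVolume:
    def __init__(self, extents: list[Extent]):
        self.extents = extents

    def locate(self, lba: int) -> tuple[str, int]:
        """Translate a logical block address to (array, physical block)."""
        offset = lba
        for e in self.extents:
            if offset < e.length:
                return e.array, e.start + offset
            offset -= e.length
        raise ValueError("LBA beyond end of volume")

vol = VirtualVolume([Extent("array-A", 1000, 100), Extent("array-B", 0, 100)])
print(vol.locate(150))  # block 150 falls in the second extent, on array-B
```

The client sees one contiguous 200-block volume; only the mapping layer knows the blocks live on two different arrays, which is exactly what lets an administrator pool arrays from different vendors behind one device.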
the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available on a LAN. Therefore, the transfer of data, particularly for backup, still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck. Therefore, SANs were developed, where a dedicated storage network
the SAM-QFS product. Later, Oracle renamed SAM-QFS to Oracle HSM (Oracle Hierarchical Storage Manager). In 2014, Versity, a storage startup co-founded by SAM-QFS lead architect Coverston, released the Versity Storage Manager (VSM), a Linux-based HSM based on the SAM-QFS code. In 2019, Oracle announced end of life for the product in 2021; Versity offers a migration path.

Storage area network

A storage area network (SAN) or storage network
the SAN are not owned and managed by a server. A SAN allows a server to access a large data storage capacity, and this storage capacity may also be accessible by other servers. Moreover, SAN software must ensure that data is directly moved between storage devices within the SAN, with minimal server intervention. SAN management software is installed on one or more servers, and management clients on
the SAN devices from other vendors. In a SAN, data is transferred, stored, and accessed on a block level. As such, a SAN does not provide data file abstraction, only block-level storage and operations. Server operating systems maintain their own file systems on their own dedicated, non-shared LUNs on the SAN, as though they were local to themselves. If multiple systems were simply to attempt to share
the SAN, and they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables. Conversely, the GBIC converts incoming light impulses back into digital bits. The predecessor of the GBIC was called the gigabit link module (GLM). The fabric layer consists of SAN networking devices that include SAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within
the SAN, or between an initiator, such as an HBA port of a server, and a target, such as the port of a storage device. When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as
the SAN, so-called SAN servers are used, which in turn connect to SAN host adapters. Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs, and tape libraries. Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host adapters, which are cards that attach to slots on
the SAN. Every node in the SAN, be it a server or another storage device, can access the storage by referencing the LUN. The LUNs allow the storage capacity of a SAN to be segmented and access controls to be implemented. A particular server, or a group of servers, may, for example, only be given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives
the Windows SMB and UNIX NFS protocols and had superior scalability and ease of deployment. This started the market for proprietary NAS devices, now led by NetApp and EMC Celerra. Starting in the early 2000s, a series of startups emerged offering alternative solutions to single-filer solutions in the form of clustered NAS – Spinnaker Networks (acquired by NetApp in February 2004), Exanet (acquired by Dell in February 2010), Gluster (acquired by Red Hat in 2011), ONStor (acquired by LSI in 2009), IBRIX (acquired by HP), Isilon (acquired by EMC in November 2010), PolyServe (acquired by HP in 2007), and Panasas, to name
the background, and transparently retrieved to disk when accessed. SAM-QFS supports up to four archival copies, each of which can be on disk, tape, or optical media, or may be stored at a remote site also running SAM-QFS. Shared QFS adds a multi-writer global filesystem, allowing multiple machines to read from and write to the same disks concurrently through the use of multi-ported disks or a storage area network. (QFS also has
the buyer (or IT departments) to select the hard drive cost, size, and quality. The way manufacturers make NAS devices can be classified into three types: NAS is useful for more than just general centralized storage provided to client computers in environments with large amounts of data. NAS can enable simpler and lower-cost systems such as load-balancing and fault-tolerant email and web server systems by providing storage services. The potential emerging market for NAS
the configuration of zones and LUNs. Ultimately, SAN networking and storage devices are available from many vendors, and every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make the application programming interface (API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage
the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server), whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system and mounted. Despite their differences, SAN and NAS are not mutually exclusive and may be combined as
the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.

Network-attached storage

Network-attached storage (NAS) is a file-level (as opposed to block-level storage) computer data storage server connected to a computer network providing data access to
the host bus adapter (HBA). If a device is connected to the SAN, its WWN is registered in the SAN switch name server. In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21. The serialized Small Computer Systems Interface (SCSI) protocol
the nature of the configuration, which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching them to servers. Control of data flow is managed by a distributed file system. Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video editing, as it ensures fair and prioritized bandwidth usage across
the network. SAN storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device. Some factors that affect SAN QoS are: Alternatively, over-provisioning can be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation. Storage virtualization
the overhead on the file server (file manager) by allowing storage devices to transfer data directly to clients. Most of the file manager's work is offloaded to the storage disk without integrating the file system policy into the disk. Most client operations like read/write go directly to the disks; less frequent operations like authentication go to the file manager. Disks transfer variable-length objects instead of fixed-size blocks to clients. The file manager provides
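The time-limited capability scheme sketched above might look roughly as follows: the file manager signs a token that the disk can verify on its own, without consulting the manager on every request. The HMAC construction, field names, and token format here are assumptions for illustration, not the actual NASD wire protocol.

```python
# Illustrative NASD-style capability: the file manager issues a signed,
# expiring token; the disk verifies it locally using a shared secret.
import hashlib
import hmac
import time

SECRET = b"shared-between-manager-and-disk"  # illustrative shared key

def issue_capability(obj_id: str, rights: str, ttl: int, now=None) -> dict:
    """File manager side: sign (object, rights, expiry) with the shared key."""
    now = int(time.time()) if now is None else now
    expires = now + ttl
    msg = f"{obj_id}|{rights}|{expires}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"obj": obj_id, "rights": rights, "expires": expires, "tag": tag}

def disk_accepts(cap: dict, op: str, now=None) -> bool:
    """Disk side: recompute the tag and check rights and expiry locally."""
    now = int(time.time()) if now is None else now
    msg = f"{cap['obj']}|{cap['rights']}|{cap['expires']}".encode()
    ok_tag = hmac.compare_digest(
        cap["tag"], hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    return ok_tag and op in cap["rights"] and now < cap["expires"]

cap = issue_capability("obj-42", "rw", ttl=60, now=1000)
print(disk_accepts(cap, "r", now=1030))  # True while unexpired
print(disk_accepts(cap, "r", now=2000))  # False after expiry
```

Because verification needs only the shared secret, reads and writes bypass the file manager entirely, which is the point of the NASD design: the manager handles policy, the disks handle the data path.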
the performance of NAS depends mainly on the speed of, and congestion on, the network. NAS is generally not as customizable in terms of hardware (CPU, memory, storage components) or low-level software (extensions, plug-ins, additional protocols), but most NAS solutions will include the option to install a wide array of software applications to allow better configuration of the system or to include other capabilities outside of storage (like video surveillance, virtualization, media, etc.). DAS typically
the same time. For redundancy purposes, SAN switches are set up in a meshed topology. A single SAN switch can have as few as 8 ports and up to 32 ports with modular extensions. So-called director-class switches can have as many as 128 ports. In switched SANs, the Fibre Channel switched fabric protocol FC-SW-6 is used, under which every device in the SAN has a hardcoded World Wide Name (WWN) address in
the server motherboard (usually PCI slots) and run with corresponding firmware and device drivers. Through the host adapters, the operating system of the server can communicate with the storage devices in the SAN. In Fibre Channel deployments, a cable connects to the host adapter through the gigabit interface converter (GBIC). GBICs are also used on switches and storage devices within
the storage devices. Two approaches have developed in SAN management software: in-band and out-of-band management. In-band means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band means that management data is transmitted over dedicated links. SAN management software will collect management data from all storage devices in
the storage layer. This includes information on read and write failures, storage capacity bottlenecks, and failures of storage devices. SAN management software may integrate with the Simple Network Management Protocol (SNMP). In 1999, the Common Information Model (CIM), an open standard, was introduced for managing storage devices and providing interoperability. The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions. Use of these protocols involves
the success of file servers from Novell, IBM, and Sun, several firms developed dedicated file servers. While 3Com was among the first firms to build a dedicated NAS for desktop operating systems, Auspex Systems was one of the first to develop a dedicated NFS server for use in the UNIX market. A group of Auspex engineers split away in the early 1990s to create the integrated NetApp FAS, which supported both
was attached to the LAN, and terabytes of data were transferred over a dedicated high-speed, high-bandwidth network. Within the SAN, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent. In a NAS architecture, data is transferred using the TCP and IP protocols over Ethernet. Distinct protocols were developed for SANs, such as Fibre Channel, iSCSI, and InfiniBand. Therefore, SANs often have their own network and storage devices, which have to be bought, installed, and configured. This makes SANs inherently more expensive than NAS architectures. SANs have their own networking devices, such as SAN switches. To access
was released in 1983. Following the Newcastle Connection, Sun Microsystems' 1984 release of NFS allowed network servers to share their storage space with networked clients. 3Com and Microsoft would develop the LAN Manager software and protocol to further this new market. 3Com's 3Server and 3+Share software was the first purpose-built server (including proprietary hardware, software, and multiple disks) for open systems servers. Inspired by