InfiniBand (IB) is a computer networking communications standard used in high-performance computing that features very high throughput and very low latency. It is used for data interconnect both among and within computers. InfiniBand is also used as either a direct or switched interconnect between servers and storage systems, as well as an interconnect between storage systems. It is designed to be scalable and uses a switched fabric network topology. Between 2014 and June 2016, it was the most commonly used interconnect in the TOP500 list of supercomputers.
Mellanox (acquired by Nvidia) manufactures InfiniBand host bus adapters and network switches, which are used by large computer system and database vendors in their product lines. As a computer cluster interconnect, IB competes with Ethernet, Fibre Channel, and Intel Omni-Path. The technology is promoted by the InfiniBand Trade Association. InfiniBand originated in 1999 from the merger of two competing designs: Future I/O and Next Generation I/O (NGIO).
At the 2011 International Supercomputing Conference, links running at about 56 gigabits per second (known as FDR, see below) were announced and demonstrated by connecting booths in the trade show. In 2012, Intel acquired QLogic's InfiniBand technology, leaving only one independent supplier. By 2014, InfiniBand was the most popular internal connection technology for supercomputers, although within two years, 10 Gigabit Ethernet started displacing it. In 2016, it was reported that Oracle Corporation (an investor in Mellanox) might engineer its own InfiniBand hardware.
One typical application was a large database management system. Mellanox network adapters and switches supported remote direct memory access (RDMA) and RDMA over Converged Ethernet. By 2011, Mellanox's InfiniBand products for computer clusters had been deployed in many of the TOP500 lists of high-performance computers. Although originally associated with InfiniBand products, Mellanox was later able to use its technology for storage area networks (SANs), since 2011, for example to replace legacy Fibre Channel with the much more common Ethernet family of standards.
All transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service (QoS). InfiniBand transmits data in packets of up to 4 KB that are taken together to form a message.
It has been adopted by most of the InfiniBand vendors, for Linux, FreeBSD, and Microsoft Windows. IBM refers to a software library called libibverbs, for its AIX operating system, as well as "AIX InfiniBand verbs". The Linux kernel support was integrated in 2005, in kernel version 2.6.11. Ethernet over InfiniBand, abbreviated to EoIB, is an Ethernet implementation over the InfiniBand protocol and connector technology.
In July 2013, Mellanox acquired privately held IPtronics A/S, a designer of optical interconnect components for digital communications. In July 2014, Mellanox acquired privately held Integrity Project for its software connectivity, low-level development, real-time applications and security technology. In February 2016, Mellanox acquired publicly held EZchip Semiconductor, a provider of network processors and multi-core processors from EZchip's earlier acquisition of Tilera.
A message can be a remote direct memory access read or write, a channel send or receive, a multicast transmission, or an atomic operation. In addition to a board form factor connection, it can use both active and passive copper (up to 10 meters) and optical fiber cable (up to 10 km). QSFP connectors are used. The InfiniBand Association also specified the CXP connector system for speeds up to 120 Gbit/s over copper, active optical cables, and optical transceivers using parallel multi-mode fiber cables with 24-fiber MPO connectors. Mellanox operating system support is available for Solaris, FreeBSD, Red Hat Enterprise Linux, SUSE Linux Enterprise Server (SLES), Windows, HP-UX, VMware ESX, and AIX.
In 2016, Mellanox Technologies began to employ programmers in the Gaza Strip, in addition to its team of Israeli Arab programmers and programmers in Ramallah and Nablus. In 2016, Mellanox had revenues of $857 million. In December 2017, Mellanox announced it would launch a new startup accelerator. Over 2017, shares in the company rose by 55 percent.
This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters. RDMA supports zero-copy networking by enabling the network adapter to transfer data from the wire directly to application memory or from application memory directly to the wire, eliminating the need to copy data between application memory and the data buffers in the operating system.
InfiniBand has no specific standard application programming interface (API). The standard only lists a set of verbs such as ibv_open_device or ibv_post_send, which are abstract representations of functions or methods that must exist. The syntax of these functions is left to the vendors.
Links can be aggregated: most systems use a 4 link/lane connector (QSFP). HDR often makes use of 2x links (aka HDR100, a 100 Gb link using 2 lanes of HDR, while still using a QSFP connector). 8x is called for with NDR switch ports using OSFP (Octal Small Form Factor Pluggable) connectors. InfiniBand provides remote direct memory access (RDMA) capabilities for low CPU overhead. InfiniBand uses a switched fabric topology, as opposed to early shared-medium Ethernet.
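As a rough worked example of the lane arithmetic behind these names, the sketch below multiplies an assumed per-lane signalling rate by the lane count; the per-lane figures (25, 50 and 100 Gbit/s for EDR, HDR and NDR) are assumptions supplied for illustration rather than values quoted in this article, but they show why a 2x HDR link can be sold as HDR100 while a 4x HDR link reaches about 200 Gbit/s.

```c
/* Illustrative sketch: aggregate InfiniBand link rate = per-lane rate x lane count.
 * The per-lane rates below are assumed values commonly associated with the rate names. */
#include <stdio.h>

int main(void) {
    struct { const char *name; double lane_gbps; } rates[] = {
        { "EDR", 25.0 },   /* assumed per-lane rate */
        { "HDR", 50.0 },   /* assumed per-lane rate */
        { "NDR", 100.0 },  /* assumed per-lane rate */
    };
    int lanes[] = { 1, 2, 4, 8 };  /* 1x, 2x (e.g. HDR100), 4x (QSFP), 8x (OSFP) */

    for (size_t r = 0; r < sizeof rates / sizeof rates[0]; r++)
        for (size_t l = 0; l < sizeof lanes / sizeof lanes[0]; l++)
            printf("%s %dx ~= %.0f Gbit/s\n",
                   rates[r].name, lanes[l], rates[r].lane_gbps * lanes[l]);
    return 0;
}
```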
Sometimes, for reference, this is called the verbs API. The de facto standard software is developed by the OpenFabrics Alliance and called the Open Fabrics Enterprise Distribution (OFED). It is released under two licenses, GPL2 or BSD license, for Linux and FreeBSD, and as Mellanox OFED for Windows (product names: WinOF / WinOF-2; attributed as host controller driver for matching specific ConnectX 3 to 5 devices) under a choice of BSD license for Windows.
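As a minimal sketch of what programming against the verbs interface looks like in practice, the fragment below uses libibverbs to open the first available device and allocate a protection domain. It is illustrative only; a real OFED application would go on to register memory regions, create queue pairs and exchange connection details out of band.

```c
/* Minimal libibverbs sketch: open a device and allocate a protection domain. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return EXIT_FAILURE;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* the ibv_open_device verb */
    if (!ctx) { perror("ibv_open_device"); return EXIT_FAILURE; }

    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */
    if (!pd) { perror("ibv_alloc_pd"); return EXIT_FAILURE; }

    printf("opened %s\n", ibv_get_device_name(devs[0]));

    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return EXIT_SUCCESS;
}
```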
As of 2018, RDMA had achieved broader acceptance as a result of implementation enhancements that enable good performance over ordinary networking infrastructure. For example, RDMA over Converged Ethernet (RoCE) is now able to run over either lossy or lossless infrastructure. In addition, iWARP enables an Ethernet RDMA implementation at the physical layer using TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution.
The company was integrated into Nvidia's networking division in 2020, and Nvidia stopped using the brand name "Mellanox" for its new networking products. Mellanox was founded in May 1999 by former Israeli executives of Intel Corporation and Galileo Technology (which was acquired by Marvell Technology Group in October 2000 for $2.8 billion): Eyal Waldman, Shai Cohen, Roni Ashuri, Michael Kagan, Evelyn Landman, Eitan Zahavi, Shimon Rottenberg, Udi Katz and Alon Webman. Eyal Waldman founded Mellanox in the Israeli city of Yokne'am.
In February 2011, Mellanox acquired Voltaire Ltd., a provider of data center switches, for about $218 million. In November 2012, Mellanox was named one of the fastest-growing companies by a marketing firm. In 2013, Mellanox acquired assets of XLoom Communications Ltd., including opto-electric chip-scale packaging, and some of XLoom's technology personnel. In July 2013, Mellanox acquired privately held Kotura, Inc., a developer of silicon photonics optical interconnect technology for high-speed networking.
Applications access control structures using well-defined APIs originally designed for the InfiniBand protocol (although the APIs can be used for any of the underlying RDMA implementations). Using send and completion queues, applications perform RDMA operations by submitting work queue entries (WQEs) into the submission queue (SQ) and getting notified of responses from the completion queue (CQ).
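A hedged sketch of that queue mechanism, using the libibverbs API as one example implementation: a work queue entry is posted to the receive queue and its completion is then reaped from the completion queue. The queue pair `qp`, completion queue `cq` and registered memory region `mr` are assumed to have been created beforehand.

```c
/* Sketch: submit a receive WQE and reap its completion from the CQ.
 * Assumes an existing qp, cq, and a memory region mr registered over buf
 * with IBV_ACCESS_LOCAL_WRITE; illustrative only. */
#include <stdint.h>
#include <infiniband/verbs.h>

int post_recv_and_wait(struct ibv_qp *qp, struct ibv_cq *cq,
                       void *buf, uint32_t len, struct ibv_mr *mr)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,   /* where the adapter will place incoming data */
        .length = len,
        .lkey   = mr->lkey,
    };
    struct ibv_recv_wr wr = { .wr_id = 1, .sg_list = &sge, .num_sge = 1 };
    struct ibv_recv_wr *bad_wr = NULL;

    /* Submit the work queue entry to the receive queue. */
    if (ibv_post_recv(qp, &wr, &bad_wr))
        return -1;

    /* Poll the completion queue until the WQE completes. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;   /* a real application would usually block on a completion channel */
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : (int)wc.byte_len;
}
```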
In 2018, Waldman told a Tel Aviv conference hosted by Globes magazine that over 100 Palestinians were working on Mellanox projects. Waldman had previously talked about Mellanox's plans to build a research and development center in Ramallah, even though it is more expensive than outsourcing to Eastern Europe.
Remote direct memory access
In computing, remote direct memory access (RDMA) is a direct memory access from the memory of one computer into that of another without involving either one's operating system.
EoIB enables multiple Ethernet bandwidths varying with the InfiniBand (IB) version. Ethernet's implementation of the Internet Protocol Suite, usually referred to as TCP/IP, is different in some details compared to the direct InfiniBand protocol in IP over IB (IPoIB).
Mellanox
Mellanox Technologies Ltd. (Hebrew: מלאנוקס טכנולוגיות בע"מ) was an Israeli-American multinational supplier of computer networking products based on InfiniBand and Ethernet technology.
Financial offices were in Santa Clara, California, in the USA. In February 2002, a round of venture capital investment of about $56 million was announced, later extended to about $64 million; investors included Intel, IBM, Sequoia Capital and U.S. Venture Partners. Mellanox had its initial public offering in February 2007 on NASDAQ, which raised $102 million and valued the company at over half a billion dollars.
By 2002, Intel announced that instead of shipping IB integrated circuits ("chips"), it would focus on developing PCI Express, and Microsoft discontinued IB development in favor of extending Ethernet. Sun Microsystems and Hitachi continued to support IB. In 2003, the System X supercomputer built at Virginia Tech used InfiniBand in what was estimated to be the third largest computer in the world at the time.
Its shares were listed under the symbol MLNX. Created in 2009, Mellanox's investment fund was publicly announced in 2014. Initially founded as an integrated circuit (chip) manufacturer, the company evolved into a producer of complete network systems by 2009. In 2010, Oracle Corporation became a major investor in the company, holding around 10% of its stock. Oracle uses InfiniBand technology in its Exadata and Exalogic appliances. Stock shares were also listed on the Tel Aviv Stock Exchange until 2013, when the company de-listed itself but remained on NASDAQ.
That year, the company also made its largest acquisition with EZchip. The activist investor Starboard Value LP purchased a 10.7% stake in the company in November 2017. In January 2018, Starboard criticized the company's research and development spending and argued for short-term profits instead. The day after, on January 9, 2018, Mellanox announced it would immediately discontinue its 1550 nm silicon photonics development activities, with president and CEO Eyal Waldman saying the review of the silicon photonics business had started in May 2017.
In May 2018, stockholders approved the company's governance proposals related to the possibility of contested board elections. By June 2018, three board members agreed to step down and be replaced by two Starboard candidates and one candidate agreed upon by both sides. In 2019, Mellanox was acquired for $6.9 billion by Nvidia Corporation, making it one of the largest mergers and acquisitions in 2019.
Founder and long-term CEO Eyal Waldman left the company in November 2020. He made an estimated $240 million on the acquisition. Mellanox was a fabless semiconductor company that sold products based on the integrated circuits it designed. Starting from at least 2011, its chips were produced by Taiwan Semiconductor Manufacturing Corp (TSMC). Mellanox Technologies provided Ethernet and InfiniBand network adapters, switches and cables for servers and storage used in cloud and enterprise data centers, based on internally developed integrated circuits. Mellanox had two major customers, Hewlett-Packard and Dell EMC, which each contributed more than 10% of revenues in 2017, 2018, and 2019. Mellanox specialized in switched fabrics for enterprise data centers and high-performance computing, where high data rates and low latency are required, such as in a computer cluster.
Cisco successfully killed InfiniBand switching companies such as Topspin via acquisition. Of the top 500 supercomputers in 2009, Gigabit Ethernet was the internal interconnect technology in 259 installations, compared with 181 using InfiniBand. In 2010, market leaders Mellanox and Voltaire merged, leaving just one other IB vendor, QLogic, primarily a Fibre Channel vendor.
NGIO was led by Intel, with a specification released in 1998, and joined by Sun Microsystems and Dell. Future I/O was backed by Compaq, IBM, and Hewlett-Packard. This led to the formation of the InfiniBand Trade Association (IBTA), which included both sets of hardware vendors as well as software vendors such as Microsoft. At the time, it was thought that some of the more powerful computers were approaching the interconnect bottleneck of the PCI bus, in spite of upgrades like PCI-X.
The RDMA Consortium and the DAT Collaborative have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium. Hardware vendors have started working on higher-capacity RDMA-based network adapters, with rates of 100 Gbit/s reported. Software vendors, such as IBM, Red Hat and Oracle Corporation, support these APIs in their latest products, and as of 2013 engineers had started developing network adapters that implement RDMA over Ethernet. Both Red Hat Enterprise Linux and Red Hat Enterprise MRG have support for RDMA. Microsoft supports RDMA in Windows Server 2012 via SMB Direct. VMware ESXi also supports RDMA as of 2015. Common RDMA implementations include the Virtual Interface Architecture, RDMA over Converged Ethernet (RoCE), InfiniBand, Omni-Path and iWARP.
Mellanox also said it would lay off 100 employees, all in the US. At the time, the company employed 2,900 people, mostly in Israel. In a "board battle", Starboard sent a letter to shareholders asking them to replace the board of directors entirely. At the time, Mellanox had a $3.3 billion market value. Starboard said it would nominate nine candidates for election to the company's board, including Starboard head Jeffrey Smith.
RDMA can transport data reliably or unreliably over the Reliably Connected (RC) and Unreliable Datagram (UD) transport protocols, respectively. The former has the benefit of preserving requests (no requests are lost), while the latter requires fewer queue pairs when handling multiple connections: UD is connectionless, so a single queue pair can exchange datagrams with many peers, whereas RC requires a dedicated queue pair for each connection.
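As an illustration of that trade-off, the sketch below (again using libibverbs as one possible RDMA implementation) creates a completion queue and a queue pair whose transport is selected between RC and UD through the `qp_type` field. The context `ctx` and protection domain `pd` are assumed to exist already, and connection establishment is omitted.

```c
/* Sketch: create a CQ and a queue pair, choosing RC or UD transport. */
#include <infiniband/verbs.h>

struct ibv_qp *make_qp(struct ibv_context *ctx, struct ibv_pd *pd,
                       struct ibv_cq **cq_out, int reliable)
{
    /* One CQ shared by the send and receive queues of this QP. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
    if (!cq)
        return NULL;

    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        /* RC preserves and acknowledges every request; UD sends connectionless datagrams. */
        .qp_type = reliable ? IBV_QPT_RC : IBV_QPT_UD,
    };

    *cq_out = cq;
    return ibv_create_qp(pd, &attr);
}
```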
Version 1.0 of the InfiniBand Architecture Specification was released in 2000. Initially, the IBTA vision for IB was simultaneously a replacement for PCI in I/O, for Ethernet in the machine room, as a cluster interconnect, and for Fibre Channel. IBTA also envisaged decomposing server hardware on an IB fabric. Mellanox had been founded in 1999 to develop NGIO technology, but by 2001 it shipped an InfiniBand product line called InfiniBridge at 10 Gbit/second speeds. Following the burst of the dot-com bubble, there was hesitation in the industry to invest in such a far-reaching technology jump.
Such transfers require no work to be done by CPUs, caches, or context switches, and transfers continue in parallel with other system operations. This reduces latency in message transfer. However, this strategy presents several problems related to the fact that the target node is not notified of the completion of the request (single-sided communications).
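The single-sided nature of the operation is visible in how a write is posted with the verbs interface: the initiator supplies a remote virtual address and remote key it learned out of band, and the target CPU plays no part in completing the transfer. The sketch below is illustrative only; the queue pair `qp`, the registered local region `mr`, and the remote address and rkey are assumed to have been set up in advance.

```c
/* Sketch: post a one-sided RDMA write. The remote node's CPU is not involved and,
 * as the text notes, is not notified when the write completes. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_write(struct ibv_qp *qp, void *local_buf, size_t len,
                    struct ibv_mr *mr, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,  /* local source buffer */
        .length = (uint32_t)len,
        .lkey   = mr->lkey,              /* key of the registered local region */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof wr);
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided operation */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* generate a local completion */
    wr.wr.rdma.remote_addr = remote_addr;        /* target virtual address */
    wr.wr.rdma.rkey        = rkey;               /* remote memory key */

    /* The adapter moves the data directly from application memory to the wire. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```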
Mellanox offered adapters, switches, software, cables and silicon for markets including high-performance computing, data centers, cloud computing, computer data storage and financial services. On March 11, 2019, Nvidia announced its intent to acquire the company for $6.9 billion. Other companies interested in acquiring Mellanox included Intel, Xilinx and Microsoft. The deal closed on April 27, 2020, with approval from the EU, U.S. and Chinese antitrust authorities.
The OpenIB Alliance (later renamed the OpenFabrics Alliance) was founded in 2004 to develop an open set of software for the Linux kernel. By February 2005, the support was accepted into the 2.6.11 Linux kernel. In November 2005, storage devices using InfiniBand were finally released by vendors such as Engenio. Cisco, desiring to keep technology superior to Ethernet off the market, adopted a "buy to kill" strategy.
In addition to its headquarters in the US, Mellanox had offices in Israel, Denmark, China, Russia, Singapore, Taiwan, Japan and the United Kingdom. Mellanox outsourced some of its engineering to the West Bank. Rather than setting up offshore engineering centers in the Far East or Eastern Europe, Mellanox hired Palestinian engineers from Ramallah through a Palestinian outsourcing firm.
In 2019, Nvidia acquired Mellanox, the last independent supplier of InfiniBand products. Specifications are published by the InfiniBand Trade Association. Original names for speeds were single data rate (SDR), double data rate (DDR) and quad data rate (QDR), as given below. Subsequently, other three-letter acronyms were added for even higher data rates. Each link is duplex.