
POWER6

Article snapshot taken from Wikipedia, available under the Creative Commons Attribution-ShareAlike license.

The POWER6 is a microprocessor developed by IBM that implemented the Power ISA v2.05. When it became available in systems in 2007, it succeeded the POWER5+ as IBM's flagship Power microprocessor. It is claimed to be part of the eCLipz project, said to have a goal of converging IBM's server hardware where practical (hence "ipz" in the acronym: iSeries, pSeries, and zSeries).


POWER6 was described at the International Solid-State Circuits Conference (ISSCC) in February 2006, and additional details were added at the Microprocessor Forum in October 2006 and at the next ISSCC in February 2007. It was formally announced on May 21, 2007. It was released on June 8, 2007 at speeds of 3.5, 4.2 and 4.7 GHz, but IBM has noted that prototypes have reached 6 GHz. POWER6 reached first silicon in

A storage area network (SAN) allows for an entirely disk-free blade; an example of such an implementation is the Intel Modular Server System. Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of

A 5 GHz duty-cycle correction clock distribution network for the processor. In the network, the company implements a copper distribution wire that is 3 μm wide and 1.2 μm thick. The POWER6 design uses dual power supplies: a logic supply in the 0.8-to-1.2 volt range and an SRAM supply about 150 mV higher. The thermal characteristics of POWER6 are similar to those of the POWER5. Dr. Frank Soltis, an IBM chief scientist, said IBM had solved the power leakage problems associated with high frequency by using

A combination of 90 nm and 65 nm parts in the POWER6 design. The slightly enhanced POWER6+ was introduced in April 2009, but had been shipping in Power 560 and 570 systems since October 2008. It added more memory keys for secure memory partitioning, a feature taken from IBM's mainframe processors. As of 2008, the range of POWER6 systems includes "Express" models (the 520, 550 and 560) and Enterprise models (the 570 and 595). The various system models are designed to serve businesses of any size. For example,

A complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking due to

A novel decimal floating-point unit. The binary floating-point unit incorporates "many microarchitectures, logic, circuit, latch and integration techniques to achieve [a] 6-cycle, 13-FO4 pipeline", according to a company paper. Unlike the servers from IBM's competitors, the POWER6 has hardware support for IEEE 754 decimal arithmetic and includes the first decimal floating-point unit integrated in silicon. More than 50 new floating-point instructions handle
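To see why hardware decimal support matters, the classic rounding pitfall can be sketched in Python with the standard decimal module (a software analogue used purely for illustration; it does not exercise the POWER6 hardware unit):

    from decimal import Decimal, getcontext

    # Binary floating point cannot represent 0.1 exactly, so repeated
    # addition accumulates rounding error -- a hazard for financial code.
    print(sum(0.1 for _ in range(10)))             # 0.9999999999999999

    # Decimal arithmetic of the kind IEEE 754 standardizes keeps
    # decimal fractions exact at a chosen precision.
    getcontext().prec = 16                         # 16 digits, as in decimal64
    print(sum(Decimal("0.1") for _ in range(10)))  # 1.0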

A single function with a small real-time executive. The VMEbus architecture (c. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing. In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for

Is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. As of 2014, densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems. The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among
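The quoted density figures reduce to simple arithmetic; a minimal sketch in Python using only the numbers from the text:

    # Figures from the text (as of 2014); illustrative arithmetic only.
    servers_per_rack_1u = 42          # one 1U server per rack unit in a 42U rack
    servers_per_blade_system = 180
    servers_per_rack_blade = 1440

    # Blade systems needed to reach the quoted per-rack density.
    print(servers_per_rack_blade // servers_per_blade_system)   # 8

    # Density advantage over discrete 1U rack-mount servers.
    print(servers_per_rack_blade / servers_per_rack_1u)         # ~34x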

Is a global forum for presentation of advances in solid-state circuits and systems-on-a-chip. The conference is held every year in February at the San Francisco Marriott Marquis in downtown San Francisco. ISSCC is sponsored by the IEEE Solid-State Circuits Society. According to The Register, "The ISSCC event is the second event of each new year, following the Consumer Electronics Show, where new PC processors and sundry other computing gadgets are brought to market." Early participants in

Is because one can fit up to 128 blade servers in the same rack that will hold only 42 1U rack-mount servers. Blade servers generally include integrated or optional network interface controllers for Ethernet, host adapters for Fibre Channel storage systems, or a converged network adapter to combine storage and data via one Fibre Channel over Ethernet interface. In many blades, at least one interface

Is embedded on the motherboard and extra interfaces can be added using mezzanine cards. A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades. While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, eSATA, SCSI, SAS, DAS, FC and iSCSI) are readily moved outside


Is supported for blades running AIX, i, and Linux. The BladeCenter E, HT, and T chassis support blades running AIX and Linux but not i. At the SuperComputing 2007 (SC07) conference in Reno, a new water-cooled Power 575 was revealed. The 575 is composed of 2U "nodes", each with 32 POWER6 cores at 4.7 GHz and up to 256 GB of RAM. Up to 448 cores can be installed in a single frame.
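The node and frame figures above imply the following layout, worked out here as a quick sanity check (the per-frame RAM total is an extrapolation from the per-node maximum, not a figure stated in the text):

    cores_per_node = 32
    max_cores_per_frame = 448
    max_ram_per_node_gb = 256

    nodes_per_frame = max_cores_per_frame // cores_per_node
    print(nodes_per_frame)                          # 14 water-cooled 2U nodes
    print(nodes_per_frame * max_ram_per_node_gb)    # 3584 GB of RAM (extrapolated)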

The University of Pennsylvania. Registration was $4 (early registration was $3) and 601 people registered. International attendees arrived from Canada, England and Japan. With subsequent conferences came many more international participants, with the first international presentation in 1958. By 1965, the number of overseas program committee members had increased to 8, and in 1970 the overseas members began meeting separately in both Europe and Japan. Selected members of these regional program committees would attend

The heating, ventilation, and air conditioning problems that affect large conventional server farms. Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to

The 520 Express is marketed to small businesses while the Power 595 is marketed for large, multi-environment data centers. The main difference between the Express and Enterprise models is that the latter include Capacity Upgrade on Demand (CUoD) capabilities and hot-pluggable processor and memory "books". IBM also offers four POWER6-based blade servers. Specifications are shown in the table below. All blades support AIX, IBM i, and Linux. The BladeCenter S and H chassis

The Executive Committee. From its formative years through 1980, the Conference Chair was usually filled by the previous year's Program Chair. To provide needed continuity, the term of Conference Chair was extended to at least 5 years.

Blade server

A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all

The Networld+Interop show in May 2000. Patents were awarded for the Ketris blade server architecture. In October 2000, Ziatech was acquired by Intel Corp. and the Ketris blade server systems became a product of the Intel Network Products Group. PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification

The POWER6 still achieves significant performance improvements over the POWER5+ even with unmodified software, according to the lead engineer on the POWER6 project. POWER6 also takes advantage of ViVA-2, the Virtual Vector Architecture, which enables the combination of several POWER6 nodes to act as a single vector processor. Each core has two integer units, two binary floating-point units, an AltiVec unit, and

The ability to provision (power up, install operating systems and application software on) servers (e.g. web servers) remotely from a Network Operations Center (NOC). The system architecture when this system was announced was called Ketris, named after the Ketri sword, worn by nomads in such a way as to be drawn very quickly as needed. First envisioned by Dave Bottom and developed by an engineering team at Ziatech Corp in 1999, it was demonstrated at

The backplane (where server blades would plug in), eliminating more than 160 cables in a single 84-rack-unit-high 19-inch rack. For a large data center, tens of thousands of failure-prone Ethernet cables would be eliminated. Further, this architecture provided the capability to remotely inventory the modules installed in each system chassis without the blade servers operating. This architecture enabled

The blade computers, the overall utilization becomes higher. The specifics of which services are provided vary by vendor. Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect


The blade itself, and in the blade system as a whole. In a standard server-rack configuration, one rack unit or 1U (19 inches or 480 mm wide and 1.75 inches or 44 mm tall) defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor

The cache is assigned to a specific core, but the other core has fast access to it. The two cores share a 32 MiB L3 cache which is off-die, accessed over an 80 GB/s bus. POWER6 can connect to up to 31 other processors using two inter-node links (50 GB/s), and supports up to 10 logical partitions per core (up to a limit of 254 per system). There is an interface to a service processor that monitors and adjusts performance and power according to set parameters. IBM also makes use of
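A minimal sketch of how the per-core partition limit interacts with the system-wide cap, using the figures above (the two-cores-per-chip count comes from elsewhere in the article; the function name is ours):

    def max_lpars(chips: int, cores_per_chip: int = 2,
                  lpars_per_core: int = 10, system_cap: int = 254) -> int:
        # Effective logical-partition ceiling: per-core limit, clamped system-wide.
        return min(chips * cores_per_chip * lpars_per_core, system_cap)

    print(max_lpars(1))     # 20  -- a single dual-core POWER6 chip
    print(max_lpars(32))    # 254 -- the system-wide cap dominates at full scale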

The decimal math and conversions between binary and decimal. This feature was also added to the z10 microprocessor featured in the System z10. Each core has a 64 KB, four-way set-associative instruction cache and a 64 KB data cache of an eight-way set-associative design with a two-stage pipeline supporting two independent 32-bit reads or one 64-bit write per cycle. Each core has a semi-private 4 MiB unified L2 cache, where
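Given the stated sizes and associativities, the geometry of each L1 cache follows mechanically. The sketch below assumes a 128-byte cache line, a value the text does not state, chosen only for illustration:

    def cache_sets(size_bytes: int, ways: int, line_bytes: int) -> int:
        # Sets in a set-associative cache: total size / (ways * line size).
        return size_bytes // (ways * line_bytes)

    LINE_BYTES = 128  # assumed line size; not given in the text

    print(cache_sets(64 * 1024, ways=4, line_bytes=LINE_BYTES))   # 128 sets (I-cache)
    print(cache_sets(64 * 1024, ways=8, line_bytes=LINE_BYTES))   # 64 sets (D-cache)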

The emerging Internet data centers, where the manpower simply did not exist to keep pace, meant a new server architecture was needed. In 1998 and 1999 this new blade server architecture was developed at Ziatech, based on its CompactPCI platform, to house as many as 14 "blade servers" in a standard 19-inch, 9U-high rack-mount chassis, allowing in this configuration as many as 84 servers in a standard 84-rack-unit 19-inch rack. What this new architecture brought to

The enclosure. Systems administrators can use storage blades where a requirement exists for additional local storage. Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from

The final program meeting in America. The name of the 1954 conference appears in various publications and documents as "The Transistor Conference", "The Conference on Transistor Circuits", "The Philadelphia Conference", or "The National Conference on Transistor Circuits". The current name, "International Solid-State Circuits Conference", was settled on by the organizers in 1960. While ISSCC was founded in Philadelphia, in

The functional components to be considered a computer. Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in

The inaugural conference in 1954 belonged to the Institute of Radio Engineers (IRE) Circuit Theory Group and the IRE subcommittee on Transistor Circuits. The conference was held in Philadelphia, and local chapters of the IRE and the American Institute of Electrical Engineers (AIEE) were in attendance. Later, the AIEE and IRE would merge to become the present-day IEEE. The first conference consisted of papers from six organizations: Bell Telephone Laboratories, General Electric, RCA, Philco, the Massachusetts Institute of Technology and

The latter sold its x86 server business to Lenovo in 2014, after selling its consumer PC line to Lenovo in 2005. In 2009, Cisco announced blades in its Unified Computing System product line, consisting of a 6U-high chassis holding up to 8 blade servers. It had a heavily modified Nexus 5K switch, rebranded as a fabric interconnect, and management software for the whole system. HP's initial line consisted of two chassis models,

The mid-1960s, the center of semiconductor development in the United States was shifting west. In 1978, the conference was held on alternate coasts, with New York soon substituting for Philadelphia. In 1990, San Francisco became the conference's permanent home. In 2013, ISSCC celebrated its 60th anniversary and held several special programs to celebrate 60 years of circuit and SoC innovation. The Technical Program Committee (TPC) in early years


The middle of 2005, and was bumped to 5.0 GHz in May 2008 with the introduction of the P595. The POWER6 is a dual-core processor. Each core is capable of two-way simultaneous multithreading (SMT). The POWER6 has approximately 790 million transistors on a 341 mm² die fabricated on a 65 nm process. A notable difference from POWER5 is that the POWER6 executes instructions in-order instead of out-of-order. This change often requires software to be recompiled for optimal performance, but

The number of PSUs required to provide a resilient power supply. The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS). During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure

The operation of the computer, even entry-level servers often have redundant power supplies, again adding to the bulk and heat output of the design. The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. This setup reduces

The pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these services on a per-server-box basis. In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco, and Dell. Other companies selling blade servers include Supermicro and Hitachi. The prominent brands in the blade server market are Supermicro, Cisco Systems, HPE, Dell and IBM, though

The proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans. A frequently underestimated problem when designing high-performance computer systems is the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosures feature variable-speed fans and control logic, or even liquid cooling systems, that adjust to meet

The real-world implementation in Internet data centers, where thermal and other maintenance and operating costs had become prohibitively expensive, this blade server architecture, with remote automated provisioning and health and performance monitoring and management, would have a significantly lower operating cost. The first commercialized blade-server architecture was invented by Christopher Hipp and David Kirkeby, and their patent

The same vendor. Eventual standardization of the technology might result in more choices for consumers; as of 2009, increasing numbers of third-party software vendors have started to enter this growing field. Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, these can suffer even more acutely from

The server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades. The ability to boot the blade from

The system's cooling requirements. At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling with racks populated at over 50% capacity. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers. This

The table was a set of new interfaces to the hardware, specifically to provide the capability to remotely monitor the health and performance of all major replaceable modules that could be changed or replaced while the system was in operation. The ability to change, replace or add modules within the system while it is in operation is known as hot-swap. Unlike any other server system at the time, the Ketris blade servers routed Ethernet across


The telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, the operating costs (manpower to manage and maintain) are dramatically lower, and operating costs often dwarf the acquisition cost for traditional servers. AdvancedTCA is promoted for telecommunications customers; however, in

The then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. CompactPCI was invented by Ziatech Corp. of San Luis Obispo, CA, and developed into an industry standard. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there

Was adopted in September 2001. This provided the first open architecture for a multi-server chassis. The second generation of Ketris was developed at Intel as an architecture for the telecommunications industry to support the build-out of IP-based telecom services, in particular the LTE (Long Term Evolution) cellular network build-out. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting

Was always one master board in charge, or two redundant fail-over masters coordinating the operation of the entire system. Moreover, this system architecture provided management capabilities not present in typical rack-mount computers, much more like those in ultra-high-reliability systems, managing power supplies and cooling fans as well as monitoring the health of other internal components. Demands for managing hundreds and thousands of servers in

Was assigned to Houston-based RLX Technologies. RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001. RLX was acquired by Hewlett-Packard in 2005. The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package

Was extremely fluid in order to deal with the constantly changing topics in the industry. By 1968 the list of subcommittees had settled to Digital, Analog (Linear), Microwave and Other, where the subcommittee members in Other would address the one-of-a-kind papers. In the 1980s, the Microwave Subcommittee was dropped from the program as the overlap between the topics and attendees was diminishing. In addition, Digital split into Digital, Memory and Signal Processing subcommittees. In 1992, Emerging Technologies

Was launched and chartered to seek out the one-of-a-kind applications which may find a home in ISSCC. Today there are 10 subcommittees: Analog, Data Converters, Energy-Efficient Digital (EED), High-Performance Digital (HPD), Imagers, MEMS, Medical and Displays (IMMD), Memory, RF, Technology Directions (formerly Emerging Technologies), Wireless and Wireline. ISSCC is a strictly non-profit organization run by
