The Dell blade server products are built around the M1000e enclosure, which can hold server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.
The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg. The servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back together with the management module(s) (CMC, or chassis management controller) and
A road case approved by the Air Transport Association of America (ATA), sometimes also referred to as a flight case. Road cases typically have plywood sides laminated with polyvinyl chloride (PVC), extruded aluminum edges, steel corners, handles, and latches. Larger cases typically have wheels for easy transport. Road case racks come in different heights based on the 1U standard and different depths. Non-isolated cases simply mount 19-inch mounting posts inside
A rack-mounted system, a rack-mount chassis, subrack, rack cabinet, rack-mountable, or occasionally simply shelf. The height of the electronic modules is also standardized as multiples of 1.75 inches (44.45 mm), or one rack unit or U (less commonly RU). The industry-standard rack cabinet is 42U tall; however, many data centers have racks taller than this. The term relay rack appeared first in
A 1.5U server or devices that are just 22.5 or 15 cm in width, allowing for 2 or 3 such devices to be installed side by side, but these are much less common. The height of a rack can vary from a few inches, such as in a broadcast console, to a floor-mounted rack whose interior is 45 rack units (200.0 centimetres or 78.75 inches) high. 42U is a common configuration. Many wall-mounted enclosures for industrial equipment use 19-inch racks. Some telecommunications and networking equipment
A 19-inch rack. With the prevalence of 23-inch racks in the telecoms industry, the same practice is also common, but with equipment having both 19-inch and 23-inch brackets available, enabling it to be mounted in existing racks. A key structural weakness of front-mounted support is the bending stress placed on the mounting brackets of the equipment, and on the rack itself. As a result, 4-post racks have become common, featuring
A 40 Gbit/s switch-to-switch (stack) uplink or, with a break-out cable, 4 x 10 Gbit/s links. Dell offers direct-attach cables with a QSFP+ interface on one side and 4 x SFP+ on the other end, or a QSFP+ transceiver on one end and 4 fibre-optic pairs to be connected to SFP+ transceivers on the other side. Up to six MXL blade switches can be stacked into one logical switch. Besides the above 2x40 QSFP module
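The break-out arithmetic above is simple enough to sketch in a few lines of Python. This is an illustrative sketch, not a Dell tool, and the function name is invented for this example:

```python
# Illustrative sketch: how a QSFP+ port's capacity maps to links in
# native 40 Gbit/s mode versus 4 x 10 Gbit/s break-out mode.

def qsfp_links(ports: int, breakout: bool) -> tuple:
    """Return (number_of_links, speed_per_link_in_gbps) for QSFP+ ports."""
    return (ports * 4, 10) if breakout else (ports, 40)

# The base MXL has two fixed QSFP+ ports.
native = qsfp_links(2, breakout=False)   # two 40G stack/uplink ports
fanout = qsfp_links(2, breakout=True)    # eight 10G links via break-out
```

Either way the aggregate external bandwidth of the two fixed ports is 80 Gbit/s; break-out only changes how it is divided into links.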
A choice of 3 different on-board converged Ethernet adaptors for 10 Gbit/s Fibre Channel over Ethernet (FCoE) from Broadcom, Brocade or QLogic and up to two additional mezzanine cards for Ethernet, Fibre Channel or InfiniBand I/O. A full-height server of the 11th generation with up to 4 x 10-core Intel Xeon E7 CPUs, 4 x 8-core Xeon 7500-series or 2 x 8-core Xeon 6500-series CPUs, 512 GB or 1 TB DDR3 RAM and two hot-swappable 2.5" hard drives (spinning or SSD). It uses
A common system backbone. One limitation of mechanical KVM switches is that any computer not currently selected by the KVM switch does not 'see' a keyboard or mouse connected to it. In normal operation this is not a problem, but while the machine is booting up it will attempt to detect its keyboard and mouse and either fail to boot or boot with an unwanted (e.g. mouseless) configuration. Likewise,
A cubic meter. Newer server rack cabinets come with adjustable mounting rails allowing the user to place the rails at a shorter depth if needed. There are a multitude of specialty server racks including soundproof server racks, air-conditioned server racks, NEMA-rated, seismic-rated, open-frame, narrow, and even miniature 19-inch racks for smaller applications. Cabinets are generally sized to be no wider than
A dedicated micro-controller and potentially specialized video capture hardware to capture the video, keyboard, and mouse signals, compress and convert them into packets, and send them over an Ethernet link to a remote console application that unpacks and reconstitutes the dynamic graphical image. The KVM over IP subsystem is typically connected to a system's standby power plane so that it is available during
A different operating system. KVM switches offer different methods of connecting the computers. Depending on the product, the switch may present native connectors on the device where standard keyboard, monitor and mouse cables can be attached. Another method is to have a single DB25 or similar connector that aggregates the connections at the switch, with three independent keyboard, monitor and mouse cables to
A failure to detect the monitor may result in the computer falling back to a low resolution such as (typically) 640x480. Thus, mechanical KVM switches may be unsuitable for controlling machines which can reboot automatically (e.g. after a power failure). Another problem encountered with mechanical devices is the failure of one or more switch contacts to make firm, low-resistance electrical connections, often necessitating some wiggling or adjustment of
A front side and a back side, and thus all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides, where the front side is dedicated to server blades and the back to I/O modules. The midplane is completely passive. The server blades are inserted in the front side of the enclosure while all other components can be reached via
A full-size "sleeve" that holds up to four M420 blades. It also has consequences for the "normal" I/O NIC assignment: most (half-size) blades have two LOMs (LAN On Motherboard): one connecting to the switch in the A1 fabric, the other to the A2 fabric. The same applies to the mezzanine cards B and C. All available I/O modules (except for the PCM6348, MXL and MIOA) have 16 internal ports: one for each half-size blade. As an M420 has two 10 Gb LOM NICs,
A full-size sleeve to install. The list below covers the currently available 11G blades and the latest-generation 12G models. There are also older blades like the M605, M805 and M905 series. Released in 2012, the PE M420 is a "quarter-size" blade: where most servers are 'half-size', allowing 16 blades per M1000e enclosure, up to 32 of the new M420 blade servers can be installed in a single chassis. Implementing
A fully loaded chassis would require 2 × 32 internal switch ports for LOM and the same for mezzanine cards. An M420 server only supports a single mezzanine card (Mezzanine B or Mezzanine C depending on its location) whereas all half-height and full-height systems support two mezzanine cards. To support all on-board NICs one would need to deploy a 32-slot Ethernet switch such as the MXL or the Force10 I/O Aggregator. But for
A gap of 17.75 inches (450.85 mm), giving an overall rack width of 19 inches (482.60 mm). The posts have holes in them at regular intervals, with both posts matching, so that each hole is part of a horizontal pair with a center-to-center distance of 18.312 inches (465.12 mm). The holes in the posts are arranged vertically in repeating sets of three, with center-to-center separations of 0.5 inches (12.70 mm), 0.625 inches (15.88 mm), 0.625 inches (15.88 mm). The hole pattern thus repeats every 1.75 inches (44.45 mm). Holes so arranged can either be tapped (usually with a 10-32 UNF thread or, less often, 6 mm metric) or have square holes for cage nuts. Racks are vertically divided into regions 44.45 millimetres (1.75 in) in height. Each region has three complete hole pairs on each side. The holes are centered at 6.35 millimetres (0.25 in), 22.25 millimetres (0.88 in), and 38.15 millimetres (1.50 in) from
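The repeating hole pattern can be checked with a short Python sketch; the offsets are the within-unit hole centers quoted above (0.25", 0.875", 1.50"), and the helper name is invented for illustration:

```python
# Sketch: generate rack-hole center positions (inches) for n rack units
# and confirm the 0.625 / 0.625 / 0.5 inch spacing cycle.

U = 1.75                       # one rack unit, in inches
OFFSETS = (0.25, 0.875, 1.50)  # hole centers within each unit

def hole_centers(units: int) -> list:
    """Vertical positions of all hole centers over `units` rack units."""
    return [u * U + off for u in range(units) for off in OFFSETS]

holes = hole_centers(2)
gaps = [round(b - a, 3) for a, b in zip(holes, holes[1:])]
# The gap sequence cycles 0.625, 0.625, 0.5 — i.e. the pattern
# repeats every 1.75 inches, as the text states.
```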
A mirrored pair of rear mounting posts. Since the spacing between the front and rear mounting posts may differ between rack vendors and/or the configuration of the rack (some racks may incorporate front and rear rails that may be moved forwards and backwards, e.g. APC SX-range racks), it is common for equipment that features 4-post mounting brackets to have an adjustable rear bracket. Servers and deep pieces of equipment are often mounted using rails that are bolted to
A problem of some kind. This LCD display can also be used for the initial configuration wizard in a newly delivered (unconfigured) system, allowing the operator to configure the CMC IP address. All other parts and modules are placed at the rear of the M1000e. The rear side is divided into 3 sections: top: here one inserts the 3 management modules: one or two CMC modules and an optional iKVM module. At
A quad-core or six-core Intel 5500 or 5600 Xeon CPU and the Intel 5520 chipset. RAM options via 12 DIMM slots for up to 192 GB DDR3 RAM. A maximum of two on-blade hot-pluggable 2.5-inch hard disks or SSDs and a choice of built-in NICs for Ethernet or converged network adapter (CNA), Fibre Channel or InfiniBand. The server has the Intel 5520 chipset and a Matrox G200 video card. A full-height blade server that has
A quad-core or six-core Intel 5500 or 5600 Xeon CPU and up to 192 GB RAM. A maximum of four on-blade hot-pluggable 2.5" hard disks or SSDs and a choice of built-in NICs for Ethernet or converged network adapter, Fibre Channel or InfiniBand. The video card is a Matrox G200. The server has the Intel 5520 chipset. A two-socket version of the M710 but now in a half-height blade. The CPU can be two quad-core or six-core Xeon 5500 or 5600 with
A range of RAID controller options. Two external and one internal USB port and two SD card slots. The blades can come pre-installed with Windows 2008 R2 SP1, Windows 2012 R2, SuSE Linux Enterprise or RHEL. They can also be ordered with Citrix XenServer or VMware vSphere ESXi, or using Hyper-V which comes with W2K8 R2. According to the vendor, all Generation 12 servers are optimized to run as a virtualisation platform. Out-of-band management
A range of switches for blade systems from the main vendors. Besides the Dell M1000e enclosure, Cisco offers similar switches for HP, FSC and IBM blade enclosures. 19-inch rack A 19-inch rack is a standardized frame or enclosure for mounting multiple electronic equipment modules. Each module has a front panel that is 19 inches (482.6 mm) wide. The 19-inch dimension includes
A rotary-molded polyethylene outer shell are a lower-cost alternative to the more durable ATA-approved case. These cases are marketed to musicians and entertainers for equipment not subject to frequent transportation and rough handling. The polyethylene shell is not fiberglass reinforced and is not rigid. The shape of small cases is maintained by the rack rails and the cover seal extrusions alone. Larger cases are further reinforced with additional plywood or sheet metal. The outer shell
A total of 512 computers equally accessed by any given user console. While HDMI, DisplayPort, and DVI switches have been manufactured, VGA is still the most common video connector found with KVM switches for industrial and manufacturing applications, although many switches are now compatible with HDMI and DisplayPort connectors. Analogue switches can be built with varying capacities for video bandwidth, affecting
A virtual KVM switch. The rear offers 6 bays for I/O modules numbered in 3 pairs: A1/A2, B1/B2 and C1/C2. The A bays connect the on-motherboard NICs to external systems (and/or allow communication between the different blades within one enclosure). The Dell PowerConnect switches are modular switches for use in the Dell blade server enclosure M1000e. The M6220, M6348, M8024 and M8024-K are all switches in
A wider folded strip arranged around the corner of the rack. The posts are usually made of steel around 2 mm thick (the official standard recommends a minimum of 1.9 mm), or of slightly thicker aluminum. Racks, especially two-post racks, are often secured to the floor or adjacent building structure so as not to fall over. This is usually required by local building codes in seismic zones. According to Telcordia Technologies Generic Requirements document GR-63-CORE, during an earthquake, telecommunications equipment
Is h = 1.75n − 0.031 when calculating in inches, and h = 44.45n − 0.794 when calculating in millimeters. This gap allows a bit of room above and below an installed piece of equipment so it may be removed without binding on the adjacent equipment. Originally, the mounting holes were tapped with a particular screw thread. When rack rails are too thin to tap, rivet nuts or other threaded inserts can be used, and when
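As a quick check, both panel-height formulas can be evaluated side by side; this is a minimal sketch and the function names are illustrative only:

```python
# Sketch: ideal front-panel height for n rack units, per the formulas
# h = 1.75n - 0.031 (inches) and h = 44.45n - 0.794 (millimetres).

def panel_height_in(n: int) -> float:
    return 1.75 * n - 0.031

def panel_height_mm(n: int) -> float:
    return 44.45 * n - 0.794

# A "1U" panel is 1.719 in (about 43.66 mm) tall, not a full 1.75 in.
one_u_in = round(panel_height_in(1), 3)
one_u_mm = round(panel_height_mm(1), 3)
```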
Is industrial power, control, and automation hardware. Typically, a piece of equipment being installed has a front panel height 1⁄32 inch (0.031 in; 0.79 mm) less than the allotted number of Us. Thus, a 1U rackmount computer is not 1.750 inches (44.5 mm) tall but is 1.719 inches (43.7 mm) tall. If n is the number of rack units, the ideal formula for panel height
Is 'end of sales' since November 2011 and replaced by the PCM8024-k. Since firmware update 4.2 the PCM8024-k partially supports FCoE via FIP (FCoE Initialisation Protocol) and thus converged network adapters, but unlike the PCM8428-k it has no native Fibre Channel interfaces. Also since firmware 4.2 the PCM8024-k can be stacked using external 10 Gb Ethernet interfaces by assigning them as stacking ports. Although this new stacking option
Is a 48-port switch: 32 internal 1 Gb interfaces (two per server blade) and 16 external copper (RJ45) gigabit interfaces. There are also two SFP+ slots for 10 Gb uplinks and two CX4 slots that can either be used for two extra 10 Gb uplinks or to stack several M6348 blades into one logical switch. The M6348 offers four 1 Gb interfaces to each blade, which means that one can only utilize
Dell M1000e - Misplaced Pages Continue
Is a hardware device that allows a user to control multiple computers from one or more sets of keyboards, video monitors, and mice. Switches to connect multiple computers to one or more peripherals have had multiple names. The earliest name was Keyboard Video Switch (KVS). With the advent of the mouse, the Keyboard, Video and Mouse (KVM) switch became popular. The name was introduced by Remigius Shatas,
Is a hardware device used in data centers that allows the control of multiple computers from a single keyboard, monitor and mouse (KVM). The switch allows data center personnel to connect to any server in the rack. A common example of home use is to enable the use of the full-size keyboard, mouse and monitor of the home PC with a portable device such as a laptop, tablet PC or PDA, or a computer using
Is a large number of computers in a single rack, it is impractical for each one to have its own separate keyboard, mouse, and monitor. Instead, a KVM switch or LOM software is used to share a single keyboard/video/mouse set amongst many different computers. Since the mounting hole arrangement is vertically symmetric, it is possible to mount rack-mountable equipment upside-down. However, not all equipment
Is a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (the only blades supported), are not interchangeable between the VRTX and the M1000e. The blades differ in firmware and mezzanine connectors. In 2018, Dell introduced the Dell PE MX7000, a new MX enclosure model and the next generation of Dell enclosures. The M1000e enclosure has
Is a tendency for 4-post racks to be 600 mm (23.62 in) or 800 mm (31.50 in) wide, and for them to be 600 mm (23.62 in), 800 mm (31.50 in) or 1,010 mm (39.76 in) deep. This of course varies by manufacturer, the design of the rack and its purpose, but through common constraining factors (such as raised-floor tile dimensions), these dimensions have become quite common. The extra width and depth enable cabling to be routed with ease (also helping to maintain
Is also introduced in the same firmware release for the PCT8024 and PCT8024-f, one cannot stack blade (PCM) and rack (PCT) versions in a single stack. The new features are not available on the 'original' PCM8024. Firmware 4.2.x for the PCM8024 only corrected bugs: no new features or new functionality were added to 'end of sale' models. To use the PCM8024-k switches one will need the backplane that supports
Is available in a narrower 10-inch format with the same unit height as a standard 19-inch rack. Frames for holding rotary-dial telephone equipment such as step-by-step telephone switches were generally 11 feet 6 inches (3.51 m) high. A series of studies led to the adoption of frames 7 feet (2.1 m) high, with modular widths in multiples of 1 foot 1 inch (0.33 m), most often 2 feet 2 inches (0.66 m) wide. KVM switch A KVM switch (with KVM being an abbreviation for "keyboard, video, and mouse")
Is dependent on the analogue nature and state of the hardware. The same piece of equipment may require more bandwidth as it ages due to increased degradation of the source signal. Most conversion formulas attempt to approximate the amount of bandwidth needed, including a margin of safety. As a rule of thumb, switch circuitry should provide up to three times the bandwidth required by the original signal specification, as this allows most instances of signal loss to be contained outside
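A hedged sketch of this rule of thumb in Python: the 3x multiplier comes from the text, while the 1.35 blanking-overhead factor used to estimate the pixel rate is an assumption for illustration only:

```python
# Sketch: size analogue KVM switch bandwidth at roughly three times the
# source video signal's approximate pixel rate.

def required_switch_bandwidth_mhz(width: int, height: int,
                                  refresh_hz: int,
                                  blanking: float = 1.35) -> float:
    """Estimated switch bandwidth (MHz) for an analogue video source.
    `blanking` is an assumed overhead factor, not from the article."""
    pixel_rate_mhz = width * height * refresh_hz * blanking / 1e6
    return 3 * pixel_rate_mhz

# e.g. VGA 640x480 at 60 Hz comes out around 75 MHz of switch bandwidth.
vga_bw = required_switch_bandwidth_mhz(640, 480, 60)
```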
Is done via iDRAC 7 via the CMC. A half-height server with up to 2 x 22-core Intel Xeon E5-2600 v3/v4 CPUs, running the Intel C610 chipset and offering up to 768 GB RAM via 24 DIMM slots, or 640 GB RAM via 20 DIMM slots when using 145 W CPUs. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage and a choice of Intel or Broadcom LOM + 2 mezzanine slots for I/O. The M630 can also be used in
Is frequently embossed in a self-mating pattern to combat the tendency for stacked cases to deform slightly, creating a slope that encourages the upper case to slide off. The cases typically use extruded aluminum bands at the ends of the body with tongue-and-groove mating to like bands for the covers. End covers are typically secured with either a simple draw latch or a rotary cam butterfly latch, named for
Is mounted via rails (or slides). A pair of rails is mounted directly onto the rack, and the equipment then slides into the rack along the rails, which support it. When in place, the equipment may also then be bolted to the rack. The rails may also be able to fully support the equipment in a position where it has been slid clear of the rack; this is useful for inspection or maintenance of equipment which will then be slid back into
Is no option for direct communication between the server blades in the chassis and the M4110: it only allows a user to pack a complete mini-datacentre in a single enclosure (19" rack, 10 RU). Depending on the model and the disk drives used, the PS M4110 offers a system (raw) storage capacity between 4.5 TB (M4110XV with 14 × 146 GB, 15K SAS HDD) and 14 TB (M4110E with 14 x 1 TB, 7.2K SAS HDD). The M4110XS offers 7.4 TB using 9 HDDs and 5 SSDs. Each M4110 comes with one or two controllers and two 10-gigabit Ethernet interfaces for iSCSI. The management of
Is still used in legacy ILEC/CLEC facilities. Nineteen-inch racks in two-post or four-post form hold most equipment in enterprise data centers, ISP facilities, and professionally designed corporate server rooms, although hyperscale computing typically uses wider racks. They allow for dense hardware configurations without occupying excessive floor space or requiring shelving. Nineteen-inch racks are also often used to house professional audio and video equipment, including amplifiers, effects units, interfaces, headphone amplifiers, and even small-scale audio mixers. A third common use for rack-mounted equipment
Is subjected to motions that can over-stress equipment framework, circuit boards, and connectors. The amount of motion and resulting stress depends on the structural characteristics of the building and framework in which the equipment is contained and the severity of the earthquake. Seismic racks rated according to GR-63, NEBS Requirements: Physical Protection, are available, with Zone 4 representing
Is suitable for this type of mounting. For instance, most optical disc players will not work upside-down because the driving motor mechanism does not grip the disc. 19-inch server racks can vary in quality. A standard 19-inch server rack cabinet is typically 42U in height, 600 millimetres (24 in) wide, and 36 inches (914.40 mm) deep. This comprises a volume of 974 L, or just under
Is the fact that most interfaces are internal interfaces that connect to the blade servers via the midplane of the enclosure. Also, the M-series can't run outside the enclosure: it will only work when inserted in the enclosure. This is a 20-port switch: 16 internal and 4 external Gigabit Ethernet interfaces and the option to extend it with up to four 10 Gb external interfaces for uplinks, or two 10 Gb uplinks and two stacking ports to stack several PCM6220s into one large logical switch. This
Is typically used for the video protocol in IPMI and Intel AMT implementations. KVM switches are called KVM sharing devices because two or more computers can share a single set of KVM peripherals. Computer sharing devices function in reverse compared to KVM switches; that is, a single PC can be shared by multiple monitors, keyboards, and mice. A computer sharing device is sometimes referred to as
Is via the on-board Matrox G200eW with 8 MB memory. Each server comes with Ethernet NICs on the motherboard. These 'on-board' NICs connect to a switch or pass-through module inserted in the A1 or the A2 bay at the back of the enclosure. To allow more NICs or non-Ethernet I/O, each blade has two so-called mezzanine slots: slot B connecting to the switches/modules in bays B1 and B2, and slot C connecting to C1/C2. An M1000e chassis holds up to 6 switches or pass-through modules. For redundancy one would normally install switches in pairs:
The Print Screen keys). Hot-key switching is often complemented with an on-screen display system that displays a list of connected computers. KVM switches differ in the number of computers that can be connected. Traditional switching configurations range from 2 to 64 possible computers attached to a single device. Enterprise-grade devices interconnected via daisy-chained and/or cascaded methods can support
The 10GBASE-KR standard on fabric A (the 10GBASE-KR standard is supported on fabrics B&C). To have 10 Gb Ethernet on fabric A, or 16 Gb Fibre Channel or InfiniBand FDR (and faster) on fabrics B&C, midplane 1.1 is required. Current versions of the enclosure come with midplane 1.1 and it is possible to upgrade the midplane. Via the markings on the back side of the enclosure, just above
The 10GBASE-KR standard. The external interfaces are mainly meant to be used as uplinks or stacking interfaces but can also be used to connect non-blade servers to the network. At the link level PCM switches support link aggregation: both static LAGs as well as LACP. Like all PowerConnect switches, these switches run RSTP as the Spanning Tree Protocol, but it is also possible to run MSTP or Multiple Spanning Tree. The internal ports towards
The KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server. In June 2013, Dell introduced the PowerEdge VRTX, which
The PowerEdge VRTX system. Amulet HotKey offers a modified M630 server that can be fitted with a GPU or a Teradici PCoIP mezzanine module. A half-height server with up to 2 x 28-core Xeon Scalable CPUs. Supported in both the M1000e and PowerEdge VRTX chassis. The server can support up to 16 DDR4 RDIMM memory slots for up to 1024 GB RAM and 2 drive bays supporting SAS/SATA or NVMe drives (with an adapter). The server uses iDRAC 9. A full-height server with
The new PC-M8024-k switch, the switches need to run firmware version 4.2 or higher. In principle one can only stack switches of the same family, thus stacking multiple PCM6220s together or several PCM8024-ks. The only exception is the capability to stack the blade PCM6348 together with the rack switch PCT7024 or PCT7048. Stacks can contain multiple switches within one M1000e chassis, but one can also stack switches from different chassis to form one logical switch. At
The upper blades and 9-16 are directly beneath 1-8. When using full-height blades one uses slot n (where n = 1 to 8) and slot n+8. Integrated at the bottom of the front side is a connection option for 2 x USB, meant for a mouse and keyboard, as well as a standard VGA monitor connection (15-pin). Next to this is a power button with power indication. Next to this is a small LCD screen with navigation buttons which allows one to get system information without
The "whitelisting" or authority to connect to be implicitly enabled. Without the whitelist addition, the device will not work. This is by design and required to connect non-standard USB devices to KVMs. This is done by noting the device's ID (usually copied from the Device Manager in Windows), or from documentation from the manufacturer of the USB device. Generally all HID or consumer-grade USB peripherals are exempt, but more exotic devices like tablets or digitisers or USB toggles require manual addition to
The 2nd on-board NIC goes to interface 5 of fabric A2). I/O modules in fabric B1/B2 will connect to the (optional) Mezzanine card B or 2 in the server, and fabric C to Mezzanine C or 3. All modules can be inserted or removed on a running enclosure (hot swapping). An M1000e holds up to 32 quarter-height, 16 half-height blades or 8 full-height blades, or a mix of them (e.g. 2 full-height + 12 half-height). The 1/4-height blades require
The Dell Interop 2012 in Las Vegas, Dell announced the first FTOS-based blade switch: the Force10 MXL 10/40 Gbit/s blade switch, and later a 10/40 Gbit/s concentrator. The FTOS MXL 40 Gb was introduced on 19 July 2012. The MXL provides 32 internal 10 Gbit/s links (2 ports per blade in the chassis), two QSFP+ 40 Gbit/s ports and two empty expansion slots allowing a maximum of 4 additional QSFP+ 40 Gbit/s ports or 8 x 10 Gbit/s ports. Each QSFP+ port can be used for
The I/O modules: if an "arrow down" can be seen above the 6 I/O slots, the 1.0 midplane was installed in the factory; if there are 3 or 4 horizontal bars, midplane 1.1 was installed. As it is possible to upgrade the midplane, the outside markings are not decisive: via the CMC management interface the actually installed version of the midplane is visible. Each M1000e enclosure can hold up to 32 quarter-height, 16 half-height blades or 8 full-height, or combinations (e.g. 1 full-height + 14 half-height). The slots are numbered 1-16 where 1-8 are
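The slot-numbering rule for full-height blades (slot n plus the slot directly beneath it, n + 8) can be expressed as a tiny sketch; the helper name is hypothetical:

```python
# Sketch: M1000e slot numbering. Slots 1-8 form the upper row and 9-16
# sit directly beneath them, so a full-height blade starting in slot n
# also occupies slot n + 8.

def full_height_slots(n: int) -> tuple:
    """Slots occupied by a full-height blade inserted in upper slot n."""
    if not 1 <= n <= 8:
        raise ValueError("full-height blades start in upper slots 1-8")
    return (n, n + 8)
```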
The Intel 5520 chipset. Via 18 DIMM slots up to 288 GB DDR3 RAM can be put on this blade, with the standard choice of on-board Ethernet NICs based on Broadcom or Intel and one or two mezzanine cards for Ethernet, Fibre Channel or InfiniBand. A full-height server with 4 x 8-core Intel Xeon E5-4600 CPUs, running the Intel C600 chipset and offering up to 1.5 TB RAM via 48 DIMM slots. Up to four on-blade 2.5" SAS HDDs/SSDs or two PCIe flash SSDs are installable for local storage. The M820 offers
The Intel C600 chipset and offering up to 384 GB RAM via 12 DIMM slots. Two on-blade disks (2.5-inch PCIe SSD, SATA HDD or SAS HDD) are installable for local storage and a choice of Intel or Broadcom LOM + 2 mezzanine slots for I/O. The M520 can also be used in the PowerEdge VRTX system. A half-height server with a quad-core Intel Xeon and 8 DIMM slots for up to 64 GB RAM. A half-height server with
The Intel E7510 chipset. A choice of built-in NICs for Ethernet, Fibre Channel or InfiniBand. Also a full-height 11G server using the AMD Opteron 6100 or 6200 series CPU with the AMD SR5670 and SP5100 chipset. Memory via 32 DDR3 DIMM slots offering up to 512 GB RAM. On-board up to two 2.5-inch HDDs or SSDs. The blade comes with a choice of on-board NICs and up to two mezzanine cards for dual-port 10 Gb Ethernet, dual-port FCoE, dual-port 8 Gb Fibre Channel or dual-port Mellanox InfiniBand. Video
The KR or IEEE 802.3ap standards. All PowerConnect M-series ("PCM") switches are multi-layer switches, thus offering both layer 2 (Ethernet) options as well as layer 3 or IP routing options. Depending on the model, the switches offer internally 1 Gbit/s or 10 Gbit/s interfaces towards the blades in the chassis. The PowerConnect M-series switches with "-k" in the model name offer 10 Gb internal connections using
The M1000e chassis: this SAN takes the same space in the enclosure as two half-height blades next to each other. Apart from the form factor (the physical size, getting power from the enclosure system, etc.) it is a "normal" iSCSI SAN: the blades in the (same) chassis communicate via Ethernet and the system does require an accepted Ethernet blade switch in the back (or a pass-through module + rack switch): there
The M420 has some consequences for the system: many people have reserved 16 IP addresses per chassis to support the "automatic IP address assignment" for the iDRAC management card in a blade, but as it is now possible to run 32 blades per chassis, people might need to change their management IP assignment for the iDRACs. To support the M420 server one needs to run CMC firmware 4.1 or later, and one needs
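The management-IP sizing point above can be sketched with Python's `ipaddress` module; the subnet base and the helper are illustrative assumptions, not from the text:

```python
# Sketch: a 16-address block per chassis covers 16 half-height blades,
# but a chassis full of quarter-height M420s needs 32 iDRAC addresses.
import ipaddress

def idrac_block(base: str, blades_per_chassis: int):
    """Smallest power-of-two block holding one iDRAC address per blade."""
    prefix = 32 - (blades_per_chassis - 1).bit_length()
    return ipaddress.ip_network(f"{base}/{prefix}", strict=False)

half_height = idrac_block("10.0.0.0", 16)   # a /28: 16 addresses
quarter_height = idrac_block("10.0.0.0", 32)  # a /27: 32 addresses
```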
The MXL also supports a 4 x 10 Gb SFP+ and a 4 x 10GBaseT module. All Ethernet extension modules for the MXL can also be used for the rack-based N4000 series (formerly known as PowerConnect 8100). The MXL switches also support Fibre Channel over Ethernet so that server blades with a converged network adapter mezzanine card can be used for both data and storage using a Fibre Channel storage system. The MXL 10/40 Gbit/s blade switch runs FTOS and because of this will be
The mezzanine card it is different: the connections from Mezzanine B on the PE M420 are "load-balanced" between the B and C fabrics of the M1000e: the mezzanine card in "slot A" (the top slot in the sleeve) connects to fabric C while "slot B" (the second slot from the top) connects to fabric B, and that is then repeated for the C and D slots in the sleeve. A half-height server with up to 2 x 8-core Intel Xeon E5-2400 CPUs, running
The SAN goes via the chassis management interface (CMC). Because the iSCSI uses 10 Gb interfaces, the SAN should be used in combination with one of the 10G blade switches: the PCM8024-k or the Force10 MXL switch. The enclosure's midplane hardware version should be at least version 1.1 to support 10 Gb KR connectivity. At the rear side of the enclosure one will find the power supplies, fan trays, one or two chassis management modules (the CMCs) and
8470-539: The back. The original midplane 1.0 capabilities are Fabric A - Ethernet 1 Gb; Fabrics B&C - Ethernet 1 Gb, 10 Gb, 40 Gb - Fibre Channel 4 Gb, 8 Gb - IfiniBand DDR, QDR, FDR10. The enhanced midplane 1.1 capabilities are Fabric A - Ethernet 1 Gb, 10 Gb; Fabrics B&C - Ethernet 1 Gb, 10 Gb, 40 Gb - Fibre Channel 4 Gb, 8 Gb, 16 Gb - IfiniBand DDR, QDR, FDR10, FDR. The original M1000e enclosures came with midplane version 1.0 but that midplane did not support
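The midplane 1.0 versus 1.1 capabilities listed above can be encoded as data, so a capacity planner can check whether a given enclosure revision supports a desired fabric speed. The structure and function names below are illustrative, not a Dell API:

```python
# Fabric capabilities per midplane revision, as listed in the text.
MIDPLANE_CAPS = {
    "1.0": {
        "A": {"ethernet": {1}},
        "B": {"ethernet": {1, 10, 40}, "fc": {4, 8},
              "infiniband": {"DDR", "QDR", "FDR10"}},
    },
    "1.1": {
        "A": {"ethernet": {1, 10}},
        "B": {"ethernet": {1, 10, 40}, "fc": {4, 8, 16},
              "infiniband": {"DDR", "QDR", "FDR10", "FDR"}},
    },
}
# Fabrics B and C are identical, so reuse the "B" entry for "C".
for rev in MIDPLANE_CAPS:
    MIDPLANE_CAPS[rev]["C"] = MIDPLANE_CAPS[rev]["B"]

def supports(rev: str, fabric: str, tech: str, speed) -> bool:
    """True if the given midplane revision offers this speed on this fabric."""
    return speed in MIDPLANE_CAPS[rev][fabric].get(tech, set())

# 10 Gb Ethernet on fabric A (needed e.g. for the PS M4110 SAN)
# requires the enhanced midplane 1.1:
assert not supports("1.0", "A", "ethernet", 10)
assert supports("1.1", "A", "ethernet", 10)
```

The same lookup shows that 16 Gb Fibre Channel on fabrics B/C is also a midplane 1.1 feature.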
The blades (or even only an SD card with a boot OS like VMware ESX). It is also possible to use completely diskless blades that boot via PXE or external storage. But regardless of the local and boot storage: the majority of the data used by blades will be stored on a SAN or NAS external to the blade enclosure. Dell offers the EqualLogic PS M4110 models of iSCSI storage arrays, which are physically installed in
The blades and two standard 40 Gbit/s QSFP+ uplinks, and offers two extension slots. Depending on one's requirements one can get extension modules for 40 Gb QSFP+ ports, 10 Gb SFP+ or 1/10 GBase-T copper interfaces. One can assign up to 16 × 10 Gb uplinks to one's distribution or core layer. The I/O Aggregator supports FCoE and DCB (Data Center Bridging) features. Dell also offered some Cisco Catalyst switches for this blade enclosure. Cisco offers
The blades are by default set as edge or "portfast" ports. Another feature is link-dependency: one can, for example, configure the switch so that all internal ports to the blades are shut down when the switch becomes isolated because it loses its uplink to the rest of the network. All PCM switches can be configured as pure layer-2 switches or they can be configured to do all routing: both routing between
The bottom of the enclosure holds 6 bays for power-supply units. A standard M1000e operates with three PSUs. The area in between offers 3 × 3 bays for cooling fans (left - middle - right) and up to 6 I/O modules: three modules to the left of the middle fans and three to the right. The I/O modules on the left are numbered A1, B1 and C1, while the right-hand side has places for A2, B2 and C2. The A-fabric I/O modules connect to
The case. To protect equipment from shock and vibration, road rack cases use an inner and outer case. These cases can be isolated by thick layers of foam or may use spring-loaded shock mounting. Touring musicians, theatrical productions and sound and light companies use road case racks. In 1965, a durable fiber-reinforced plastic 19-inch rackmount case was patented by ECS Composites and became widely used in military and commercial applications for electronic deployment and operation. Rackmount cases are also constructed of thermo-stamped composite, carbon fiber, and DuPont's Kevlar for military and commercial uses. Portable rack cases using
The cheapest devices on the market still use this technology. Mechanical switches usually have a rotary knob to select between computers. KVMs typically allow sharing of two or four computers, with a practical limit of about twelve machines imposed by limitations on available switch configurations. Modern hardware designs use active electronics rather than physical switch contacts, with the potential to control many computers on
The computers. Subsequently, these were replaced by a special KVM cable which combined the keyboard, video and mouse cables in a single wrapped extension cable. The advantage of the latter approach is the reduction in the number of cables between the KVM switch and connected computers. The disadvantage is the cost of these cables. The method of switching from one computer to another depends on
The configured VLANs and external routing. Besides static routes, the switches also support OSPF and RIP routing. When using the switch as a routing switch one needs to configure VLAN interfaces and assign an IP address to each VLAN interface: it is not possible to assign an IP address directly to a physical interface. All PowerConnect blade switches, except for the original PC-M8024, can be stacked. To stack
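As a sketch of what assigning an IP address to a VLAN interface (rather than a physical port) looks like in a PowerConnect-style CLI; exact prompts and command syntax vary by model and firmware release, so treat this as illustrative only:

```
console# configure
console(config)# ip routing
console(config)# interface vlan 10
console(config-if-vlan10)# ip address 192.168.10.1 255.255.255.0
console(config-if-vlan10)# exit
```

The key point is that routing is configured per VLAN interface; a physical port is only made a member of the VLAN and never carries the IP address itself.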
The controlled computers. There are software alternatives to some of the functionality of a hardware KVM switch, such as Multiplicity, Synergy, and Barrier, which do the switching in software and forward input over standard network connections. This has the advantage of reducing the number of wires needed. Screen-edge switching allows the mouse to function over both monitors of two computers. There are two types of remote KVM devices that are best described as local remote and KVM over IP. Local remote KVM device design allows users to control computer equipment up to 1,000 feet (300 m) away from
The edges or ears that protrude from each side of the equipment, allowing the module to be fastened to the rack frame with screws or bolts. Common uses include computer servers, telecommunications equipment and networking hardware, audiovisual production gear, professional audio equipment, and scientific equipment. Equipment designed to be placed in a rack is typically described as rack-mount, rack-mount instrument,
The enclosure one can place extra mezzanine cards on the blade. The same applies to adding a Fibre Channel host bus adapter or a Fibre Channel over Ethernet (FCoE) converged network adapter interface. Dell offers the following (converged) Ethernet mezzanine cards for their PowerEdge blades: Apart from the above, the following mezzanine cards are available: In most setups the server-blades will use external storage (NAS using iSCSI, FCoE or Fibre Channel) in combination with local server storage on each blade via hard disk drives or SSDs on
The entire BIOS boot process. These devices allow multiple computers to be controlled locally or globally with the use of an IP connection. There are performance issues related to LAN/WAN hardware, standard protocols and network latency, so user management is commonly referred to as "near real time". Access to most remote or KVM-over-IP devices today uses a web browser, although many of the stand-alone viewer software applications provided by many manufacturers are also reliant on ActiveX or Java. Some KVM chipsets or manufacturers require
The fans. However, some rack equipment has been designed to make fan replacement easy, using quick-change fan trays that can be accessed without removing the cabling or the device from the rack, and in some cases without turning off the device so that operation is uninterrupted during replacement. The formal standards for a 19-inch (482.6 mm) rack are available from the following: A rack's mounting fixture consists of two parallel metal strips (also referred to as posts or panel mounts) standing vertically. The posts are each 0.625 inches (15.88 mm) wide, and are separated by
The first M1000e I/O product without a Web graphical user interface. The MXL can either forward the FCoE traffic to an upstream switch or, using a 4-port 8 Gb FC module, perform the FCF function, connecting the MXL to a full FC switch or directly to a FC SAN. In October 2012 Dell also launched the I/O Aggregator for the M1000e chassis running on FTOS. The I/O Aggregator offers 32 internal 10 Gb ports towards
The founder of Cybex (now Vertiv), a peripheral switch manufacturer, in 1995. Some companies call their switches Keyboard, Video, Mouse and Peripheral (KVMP). USB keyboards, mice, and I/O devices are the most common devices connected to a KVM switch. The classes of KVM switches discussed below are based on different types of core technologies, which vary in how the KVM switch handles USB I/O devices, including keyboards, mice, touchscreen displays, etc. (USB-HID = USB Human Interface Device). A KVM switch
The front and exhaust on the rear. This prevents circular airflows where hot exhaust air is recirculated through an adjacent device and causes overheating. Although open-frame racks are the least expensive, they also expose air-cooled equipment to dust, lint, and other environmental contamination. An enclosed sealed cabinet with forced-air fans permits air filtration to protect equipment from dust. Large server rooms will often group rack cabinets together so that racks on both sides of an aisle are either front-facing or rear-facing, which simplifies cooling by supplying cool air to
The front and rear posts (as above, it is common for such rails to have an adjustable depth), allowing the equipment to be supported by four posts, while also enabling it to be easily installed and removed. Although there is no standard for the depth of equipment, nor one specifying the outer width and depth of the rack enclosure itself (incorporating the structure, doors and panels that contain the mounting rails), there
The front of the racks and collecting hot air from the rear of the racks. These aisles may themselves be enclosed in a cold-air containment tunnel so that cooling air does not travel to other parts of the building where it is not needed or mix with hot air, making it less efficient. Raised or false-floor cooling in server rooms can serve a similar purpose; it permits cooling airflow to equipment through
The graphics memory buffer, and as a result it must know which graphics chip it is working with, and what graphics mode this chip is currently in, so that the contents of the buffer can be interpreted correctly as picture data. Newer techniques in OPMA management subsystem cards and other implementations get the video data directly from the DVI bus. Implementations can emulate either PS/2- or USB-based keyboards and mice. An embedded VNC server
The hardware also provides continuous support where computers require constant communication with the peripherals. Some types of active KVM switches do not emit signals that exactly match the physical keyboard, monitor, and mouse, which can result in unwanted behavior of the controlled machines. For example, the user of a multimedia keyboard connected to a KVM switch may find that the keyboard's multimedia keys have no effect on
The knob to correct patchy colors on screen or unreliable peripheral response. Gold-plated contacts improve that aspect of switch performance, but add cost to the device. Most active (electronic rather than mechanical) KVM devices provide peripheral emulation, sending signals to the computers that are not currently selected to simulate a keyboard, mouse and monitor being connected. These are used to control machines which may reboot in unattended operation. Peripheral emulation services embedded in
The minimum bend radius for fiber and copper cables) and deeper equipment to be utilized. A common feature in IT racks is mounting positions for zero-U accessories, such as power distribution units (PDUs) and vertical cable managers and ducts, that utilize the space between the rear rails and the side of the rack enclosure. The strength required of the mounting posts means they are invariably not merely flat strips but actually
The most demanding environment. GR-3108, Generic Requirements for Network Equipment in the Outside Plant (OSP), specifies the usable opening of seismic-compliant 19-inch racks. Heavy equipment, or equipment that is commonly accessed for servicing, for which attaching or detaching at all four corners simultaneously would pose a problem, is often not mounted directly onto the rack but instead
The need to access the CMC/management system of the enclosure. Basic status and configuration information is available via this display. To operate the display one can pull it towards oneself and tilt it for optimal viewing and access to the navigation buttons. For quick status checks, an indicator light sits alongside the LCD display and is always visible, with a blue LED indicating normal operation and an orange LED indicating
The on-board I/O controllers, which in most cases will be a dual 1 Gb or 10 Gb Ethernet NIC. When the blade has a dual-port on-board 1 Gb NIC, the first NIC will connect to the I/O module in fabric A1 and the second NIC will connect to fabric A2 (and the blade slot corresponds with the internal Ethernet interface: e.g. the on-board NIC in slot 5 will connect to interface 5 of fabric A1 and
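The slot-to-fabric wiring just described can be expressed as a small mapping function. The function name is illustrative, and the behaviour for more than two NICs is an assumption extrapolated from the dual-NIC case:

```python
def onboard_nic_mapping(blade_slot: int, nics: int = 2):
    """Map a blade's on-board NICs to A-fabric I/O module interfaces.

    Sketch of the wiring described above: NIC 1 of the blade in slot N
    lands on interface N of the module in bay A1, NIC 2 on interface N
    of bay A2. For >2 NICs we assume the odd/even alternation continues.
    """
    fabrics = ("A1", "A2")
    return {f"NIC{i + 1}": (fabrics[i % 2], blade_slot) for i in range(nics)}

# The on-board NICs of the blade in slot 5 land on port 5 of A1 and A2:
assert onboard_nic_mapping(5) == {"NIC1": ("A1", 5), "NIC2": ("A2", 5)}
```

This fixed slot-to-port correspondence is what makes cabling and switch-port documentation for a fully populated chassis predictable.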
The operating system loads. Modern KVM over IP appliances or switches typically use at least 128-bit data encryption to secure the KVM configuration over a WAN or LAN (using SSL). KVM over IP devices can be implemented in different ways. With regard to video, PCI KVM over IP cards use a form of screen scraping, where the PCI bus master KVM over IP card accesses and copies out the screen directly from
The particular class of equipment to be mounted is known in advance, some of the holes can be omitted from the mounting rails. Threaded mounting holes in racks where the equipment is frequently changed are problematic because the threads can be damaged or the mounting screws can break off; both problems render the mounting hole unusable. Tapping large numbers of holes that may never be used is expensive; nonetheless, tapped-hole racks are still in use, generally for hardware that rarely changes. Examples include telephone exchanges, network cabling panels, broadcast studios and some government and military applications. The tapped-hole rack
The posts and allow the rack to be securely attached to the floor and/or roof for seismic safety. Equipment can be mounted either close to its center of gravity (to minimize load on its front panel), or via the equipment's front panel holes. The name relay rack comes from early two-post racks which housed telephone relay and switching equipment. Two-post racks are most often used for telecommunication installations. 19-inch equipment that needs to be moved often or protected from harsh treatment can be housed in
The rack. Some rack slides even include a tilt mechanism allowing easy access to the top or bottom of rack-mounted equipment when it is fully extended from the rack. Slides or rails for computers and other data processing equipment such as disk arrays or routers often need to be purchased directly from the equipment manufacturer, as there is no standardization of such equipment's thickness (the measurement from
The range of the signal that is pertinent to picture quality. As CRT-based displays are dependent on refresh rate to prevent flickering, they generally require more bandwidth than comparable flat-panel displays. High-resolution, high-refresh-rate monitors have become standard setups for advanced high-end KVM switches (especially with gaming PCs). A monitor uses DDC and EDID, transmitted through specific pins, to identify itself to
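The EDID block a monitor sends over DDC has a simple, checkable structure: a base block is 128 bytes, begins with the fixed header 00 FF FF FF FF FF FF 00, and the last byte is a checksum chosen so all 128 bytes sum to 0 modulo 256. A minimal validity check, sketched here with an illustrative function name:

```python
def is_valid_edid(edid: bytes) -> bool:
    """Minimal sanity check for a base EDID block (as used in DDC).

    A base block is 128 bytes, starts with the fixed 8-byte header,
    and all 128 bytes must sum to 0 modulo 256 (byte 127 is a checksum).
    """
    header = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
    return len(edid) == 128 and edid[:8] == header and sum(edid) % 256 == 0

# Build a dummy block with a correct header and checksum:
block = bytearray(128)
block[:8] = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])
block[127] = (256 - sum(block[:127]) % 256) % 256
assert is_valid_edid(bytes(block))
assert not is_valid_edid(bytes(128))  # all-zero block lacks the header
```

A KVM switch that emulates a monitor toward the deselected computers must hand each host an EDID block that passes exactly this kind of check.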
The rapid growth of the toll network, the engineering department of AT&T undertook a systematic redesign, resulting in a family of modular factory-assembled panels all "designed to mount on vertical supports spaced 19 1⁄2 inches between centers. The height of the different panels will vary,... but... in all cases to be a whole multiple of 1 3⁄4 inches." By 1934, it
The same capabilities as the half-height M610 but offering an expansion module containing x16 PCI Express (PCIe) 2.0 expansion slots that can support up to two standard full-length/full-height PCIe cards. A half-height server with up to 2 × 12-core Intel Xeon E5-2600 or Xeon E5-2600 v2 CPUs, running the Intel C600 chipset and offering up to 768 GB of RAM via 24 DIMM slots. Two on-blade disks (2.5" PCIe SSD, SATA HDD or SAS HDD) are installable for local storage with
The same family, based on the same fabrics (Broadcom) and running the same firmware version. All the M-series switches are OSI layer-3 capable: one can also say that these devices are layer-2 Ethernet switches with built-in router or layer-3 functionality. The most important difference between the M-series switches and the Dell PowerConnect classic switches (e.g. the 8024 model) is
The shape of the twist handle. There is no standard for airflow and cooling of rack-mounted equipment. A variety of airflow patterns can be found, including front intakes and rear exhausts, as well as side intakes and exhausts. Low-wattage devices may not employ active cooling, but use only passive thermal radiation and convection to dissipate heat. For rack-mounted computer servers, devices generally intake air on
The side of the rack to the equipment) or the means for mounting to the rail. A rails kit may include a cable management arm (CMA), which folds the cables attached to the server and allows them to expand neatly when the server is slid out, without being disconnected. Computer servers designed for rack-mounting can include a number of extra features to make the server easy to use in the rack: When there
The square-hole rack. Square-hole racks allow boltless mounting, such that the rack-mount equipment only needs to insert through and hook down into the lip of the square hole. Installation and removal of hardware in a square-hole rack is very easy and boltless, where the weight of the equipment and small retention clips are all that is necessary to hold the equipment in place. Older equipment meant for round-hole or tapped-hole racks can still be used, with
The standard 24-inch-wide (610 mm) floor tiles used in most data centers. Racks carrying telecom equipment like routers and switches often have extra width to accommodate the many cables on the sides. Four-post racks allow for mounting rails to support the equipment at the front and rear. These racks may be open in construction without sides or doors, or may be enclosed by front and/or rear doors, side panels, and tops. Most data centers use four-post racks. Two-post racks provide two vertical posts. These posts are typically heavy-gauge metal or extruded aluminum. A top bar and wide foot connect
The switch in bay A2 is normally the same as the A1 switch and connects the blades' on-motherboard NICs to the data or storage network. Standard blade servers have one or more built-in NICs that connect to the 'default' switch slot (the A fabric) in the enclosure (often blade servers also offer one or more external NIC interfaces at the front of the blade), but if one wants the server to have more physical (internal) interfaces or to connect to different switch blades in
The switch to full capacity when using blades that offer 4 internal NICs on the A fabric (the internal/on-motherboard NICs). The M6348 can be stacked with other M6348 switches but also with the PCT7000 series rack switches. The M8024 and M8024-k offer 16 internal autosensing 1 or 10 Gb interfaces and up to 8 external ports via one or two I/O modules, each of which can offer: 4 × 10 Gb SFP+ slots, 3 × CX4 10 Gb (only) copper or 2 × 10GBase-T 1/10 Gb RJ-45 interfaces. The PCM8024
The switch. The original peripheral switches (Rose, circa 1988) used a rotary switch while active electronic switches (Cybex, circa 1990) used push buttons on the KVM device. In both cases, the KVM aligns operation between different computers and the user's keyboard, monitor and mouse (user console). In 1992–1993, Cybex Corporation engineered keyboard hot-key commands. Today, most KVMs are controlled through non-invasive hot-key commands (e.g. Ctrl + Ctrl, Scroll Lock + Scroll Lock and
The system. KVM switches may have different ways of handling these data transmissions: Microsoft guidelines recommend that KVM switches pass unaltered any I²C traffic between the monitor and the PC hosts, and do not generate HPD events upon switching to a different port, while maintaining a stable non-noise signal on inactive ports. KVM switches were originally passive, mechanical devices based on multi-pole switches, and some of
The technology that is mounted within it has changed considerably, and the set of fields to which racks are applied has greatly expanded. The 19-inch (482.6 mm) standard rack arrangement is widely used throughout the telecommunications, computing, audio, video, entertainment and other industries, though the Western Electric 23-inch standard, with holes on 1-inch (25.4 mm) centers,
The top or bottom of the region. Such a region is commonly known as a U, for unit; RU, for rack unit; or, in German, HE, for Höheneinheit. Heights within racks are measured by this unit. Rack-mountable equipment is usually designed to occupy some integer number of U. For example, an oscilloscope might be 4U high. Rack-mountable computers and servers are mostly between 1U and 4U high. A blade server enclosure might require 10U. Occasionally, one may see fractional-U devices such as
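Since 1U is 1.75 inches (44.45 mm), converting between rack units and physical height is a one-line calculation; this sketch (names illustrative) also checks whether a set of devices fits a cabinet:

```python
RACK_UNIT_MM = 44.45  # 1U = 1.75 in = 44.45 mm

def rack_height_mm(units: int) -> float:
    """Physical mounting height occupied by the given number of rack units."""
    return units * RACK_UNIT_MM

def fits(cabinet_units: int, *device_units: int) -> bool:
    """True if the devices' combined U count fits in the cabinet."""
    return sum(device_units) <= cabinet_units

# A common 42U cabinet offers roughly 1.87 m of mounting space:
assert abs(rack_height_mm(42) - 1866.9) < 1e-6
# A 10U blade enclosure, a 4U oscilloscope and a few 1U servers fit easily:
assert fits(42, 10, 4, 1, 1, 1)
```

The same arithmetic explains why vendors quote equipment height in U rather than millimetres: the integer U count is what determines placement.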
The underfloor space to the underside of enclosed rack cabinets. A difficulty with forced-air fan cooling in rack equipment is that fans can fail due to age or dust. The fans themselves can be difficult to replace. In the case of network equipment, it may be necessary to unplug 50 or more cables from the device, remove the device from the rack, and then disassemble the device chassis to replace
The unit's overall cost and quality. A typical consumer-grade switch provides up to 200 MHz of bandwidth, allowing for high-definition resolutions at 60 Hz. For analog video, resolution and refresh rate are the primary factors in determining the amount of bandwidth needed for the signal. The method of converting these factors into bandwidth requirements is a point of ambiguity, in part because it
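One common rule of thumb for that conversion (an assumption here, not a formal standard) is pixel clock ≈ visible pixels × refresh rate × an overhead factor for horizontal and vertical blanking; the ambiguity mentioned above comes largely from how that overhead factor is chosen:

```python
def approx_pixel_clock_mhz(h: int, v: int, refresh_hz: float,
                           blanking_overhead: float = 1.35) -> float:
    """Rough pixel-clock estimate for an analog video mode.

    blanking_overhead is an assumed fudge factor (~1.2 to 1.5 in
    practice) covering the non-visible blanking intervals.
    """
    return h * v * refresh_hz * blanking_overhead / 1e6

# 1280x1024 at 60 Hz needs on the order of 100 MHz of pixel clock,
# so a 200 MHz consumer-grade switch handles it with headroom:
assert 100 < approx_pixel_clock_mhz(1280, 1024, 60) < 120
```

Raising either resolution or refresh rate scales the requirement linearly, which is why CRTs at high refresh rates were the demanding case.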
The use of cage nuts made for square-hole racks. Rack-mountable equipment is traditionally mounted by bolting or clipping its front panel to the rack. Within the IT industry, it is common for network/communications equipment to have multiple mounting positions, including tabletop and wall mounting, so rack-mountable equipment will often feature L-brackets that must be screwed or bolted to the equipment prior to mounting in
The user consoles (keyboard, monitor and mouse). They always need a direct cable connection from the computer to the KVM switch to the console, and include support for standard category 5 cabling between computers and users interconnected by the switch device. In contrast, USB-powered KVM devices are able to control computer equipment using a combination of USB, keyboard, mouse and monitor cables of up to 5 metres (16 ft). KVM switch over IP devices use
The whitelist table of the KVM. In comparison to conventional methods of remote administration (for example in-band Virtual Network Computing or Terminal Services), a KVM switch has the advantage that it does not depend on a software component running on the remote computer, thus allowing remote interaction with base-level BIOS settings and monitoring of the entire booting process before, during, and after
The world of telephony. By 1911, the term was also being used in railroad signaling. There is little evidence that the dimensions of these early racks were standardized. The 19-inch rack format with rack units of 1.75 inches (44.45 mm) was established as a standard by AT&T around 1922 in order to reduce the space required for repeater and termination equipment in a telephone company central office. The earliest repeaters from 1914 were installed in an ad hoc fashion on shelves, in wooden boxes and cabinets. Once serial production started, they were built into custom-made racks, one per repeater. But in light of
was an established standard with holes tapped for 12-24 screws with alternating spacings of 1.25 inches (31.75 mm) and 0.5 inches (12.70 mm). The EIA standard was revised again in 1992 to comply with the 1988 public law 100-418, setting the standard U as 15.875 mm (0.625 in) + 15.875 mm (0.625 in) + 12.7 mm (0.500 in), making each U 44.45 millimetres (1.75 in). The 19-inch rack format has remained constant while
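The 1992 spacing pattern above (15.875 mm + 15.875 mm + 12.7 mm per U) fully determines where every mounting hole sits on a rail. This sketch, with illustrative names, generates hole centre positions for the first few U:

```python
# Gaps following holes 1, 2 and 3 of each U, per the 1992 EIA revision.
SPACINGS_MM = (15.875, 15.875, 12.7)

def hole_positions_mm(units: int):
    """Centre positions of mounting holes along the rail; first hole at 0.

    Three holes per U; the pattern repeats every 44.45 mm.
    """
    pos, out = 0.0, []
    for _ in range(units):
        for gap in SPACINGS_MM:
            out.append(round(pos, 3))
            pos += gap
    return out

holes = hole_positions_mm(2)
assert len(holes) == 6                              # three holes per U
assert abs((holes[3] - holes[0]) - 44.45) < 0.001   # period of one U
```

Note the asymmetry: the gap between the last hole of one U and the first hole of the next (12.7 mm) is smaller than the gaps inside a U, which is why mis-registering equipment by one hole leaves it straddling two units.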
was first replaced by clearance-hole (round-hole, round unthreaded holes, and Versa Rail) racks. The holes are large enough to permit a bolt to be freely inserted through without binding, and bolts are fastened in place using cage nuts. In the event of a nut being stripped out or a bolt breaking, the nut can be easily removed and replaced with a new one. Production of clearance-hole racks is less expensive. The next innovation in rack design has been