Infiniband: Difference between revisions
imported>Howard C. Berkowitz
{{subpages}}
'''InfiniBand''', also called System I/O, is a point-to-point bidirectional serial link that, in [[storage area network]]s, connects computers to fabric switches. In addition, it has been used as an interconnect inside computer chassis.


It supports signaling rates of 10, 20, and 40 Gbps, and, as with [[PCI Express]], links can be run in parallel for additional bandwidth. For physical transmission, it supports board-level connection, both active and passive copper (up to 30 meters, depending on speed), and fiber-optic cabling (up to 10 km)<ref>{{citation
| publisher = InfiniBand Trade Association
| url = http://www.infinibandta.org/content/pages.php?pg=technology_faq
| title = Specification FAQ}}</ref>
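
The 10, 20, and 40 Gbps figures are signaling rates for a four-lane (4x) link; the usable data rate is lower because early InfiniBand generations used 8b/10b line coding (10 signal bits carry 8 data bits). A minimal sketch of the arithmetic, assuming the classic 2.5 Gbps SDR lane rate and 8b/10b encoding:

```python
# Sketch: signaling vs. data rate for aggregated InfiniBand links.
# Assumes the 2.5 Gbps SDR lane rate and 8b/10b line coding used by
# the SDR/DDR/QDR generations; later generations changed both.

SDR_LANE_GBPS = 2.5           # signaling rate of one SDR lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 data bits per 10 signal bits

def link_rates(lanes, generation_multiplier):
    """Return (signaling_gbps, data_gbps) for a bonded link."""
    signaling = SDR_LANE_GBPS * generation_multiplier * lanes
    return signaling, signaling * ENCODING_EFFICIENCY

# 4x links at SDR (x1), DDR (x2), QDR (x4) -- the 10/20/40 Gbps rates
for name, mult in [("SDR", 1), ("DDR", 2), ("QDR", 4)]:
    sig, data = link_rates(lanes=4, generation_multiplier=mult)
    print(f"4x {name}: {sig:g} Gbps signaling, {data:g} Gbps data")
```

The same function applies to wider 12x links (lanes=12), which is how the bonding mentioned above raises aggregate bandwidth.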
 
The InfiniBand Trade Association sees it as complementary to [[IEEE 802.3|Ethernet]] and [[Fibre Channel]] technologies, which it regards as appropriate feeds into an InfiniBand core switching fabric. InfiniBand has lower latency than an Ethernet of the same signaling speed. Case-by-case analysis, however, is required to tell whether InfiniBand at a 10 Gbps signaling rate, for example, is more cost-effective than a 40 Gbps Ethernet fabric.


==History==
InfiniBand came from the merger of two technologies. Compaq, IBM, and Hewlett-Packard developed the first, Future I/O; Tandem's ServerNet was the ancestor of the Compaq contribution. The other half of the merger came from the Next Generation I/O team of Intel, Microsoft, and Sun.


InfiniBand was initially deployed as a [[high performance computing]] interconnect, but it was always envisioned as a "system area network", interconnecting computers, network devices, and storage arrays in data centers.
==Software==
The [[Open Fabrics Alliance]] has specified software stacks for InfiniBand. Some vendors say they can achieve better performance in the highly specialized supercomputing environment by using proprietary extensions.<ref>{{citation
| url = http://www.supercomputingonline.com/latest/3-questions-david-smith-on-infiniband
| journal=Supercomputing Online
| title = 3 Questions: David Smith on InfiniBand}}</ref>
==Topologies==
For [[high performance computing]] (HPC) clusters, [[Fat Tree]] is the most common topology, but some deployments use torus or mesh topologies, especially when interconnecting thousands of processors.
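
As a rough illustration of how fat-tree fabrics are sized, a non-blocking fat tree built from identical k-port switches supports about k²/2 hosts with two switch levels and k³/4 with three. These are textbook formulas, not InfiniBand-specific limits; the 36-port radix below is a common figure for InfiniBand switch silicon, used here only as an example:

```python
# Sketch: host counts for a non-blocking (full-bisection) fat tree
# built from identical k-port switches, a common way to size HPC
# fabrics. Two levels: k leaf switches with k/2 host ports each
# -> k*k/2 hosts. Three levels: k**3/4 hosts.

def fat_tree_hosts(k, levels):
    """Max hosts in a full-bisection fat tree of k-port switches."""
    if levels == 2:
        return k * k // 2
    if levels == 3:
        return k ** 3 // 4
    raise ValueError("sketch handles 2 or 3 levels only")

# Example: 36-port switches
print(fat_tree_hosts(36, 2))  # 648 hosts
print(fat_tree_hosts(36, 3))  # 11664 hosts
```

The jump from hundreds to tens of thousands of hosts between two and three levels is why very large clusters either add switch tiers or move to torus/mesh topologies, trading bisection bandwidth for cheaper scaling.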
==References==
{{reflist}}

Revision as of 13:04, 28 July 2010