Thursday, November 18, 2010
SC10 and Optical Transceivers
It’s a beautiful time of year in New Orleans for the top supercomputing companies to show their wares. While I wouldn’t consider SC10 exactly the place to sell optical components, there were a few new developments there. SCinet – the network that is always built at the top HPC conference – boasted 100G Ethernet as well as OTU4. Alcatel-Lucent, Ciena, Cisco, Force10 and Juniper, among others, donated equipment to build this network. Module vendors Avago Technologies, Finisar and Reflex Photonics contributed QSFP and CFP 40G and 100G devices to the cause.
Meanwhile, the Ethernet Alliance was showing two demonstrations in its booth – a converged network running FCoE and RoCE over 40GigE and 100GigE. Nineteen different vendors participated in this demo, which was run by the University of New Hampshire InterOperability Laboratory. Both CFPs and QSFPs were used in the demo.
Some of you may wonder why I would attend SC10. I keep my eye on the HPC market because it usually indicates where the broader data center market will be in a few years. In fact, the data centers of even medium-sized businesses with higher computational needs are starting to resemble small HPC centers, with server clusters using top-of-rack switching.
Most of the top optical transceiver vendors, and even some of the smaller ones, see this market as an opportunity as well. While InfiniBand still uses a lot of copper interconnects, at 40G and 120G this is changing. QSFP was the standout form factor in the 40G InfiniBand displays, and CXP active optical cables (AOCs) were shown for 120G as well. Avago Technologies was the first to announce a CXP module at the show.
Some believe that the CXP will be short-lived because progress is being made on 4x25G technologies – Luxtera announced 25G receivers to go with the 25G transmitter it announced earlier this year. But it will still be a few years before all of the components for 25G are ready for system developers to spec in. Tyco Electronics had a demonstration at its booth showing that it is possible to run 28G over eight inches of a PCB, but this was still a prototype. And Xilinx has announced a chip with 28G electrical transceivers that can be used with this board design. But none of these devices are even being tested by equipment manufacturers yet, and the CXP has already been adopted by a few. So I think the CXP may have more life in it than some people expect.
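To put the lane-count argument in numbers, here is a minimal sketch comparing the 12-lane, 10G-per-lane CXP with a hypothetical 4x25G module. The per-lane rates and lane counts are the published nominal figures; the fiber counts assume one fiber per lane per direction, and the comparison is illustrative only.

```python
# Rough lane-count comparison: 12 x 10G CXP vs. a hypothetical 4 x 25G module.
# Figures are nominal line rates; encoding and protocol overhead are ignored.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Total raw bandwidth of a parallel interface."""
    return lanes * gbps_per_lane

cxp_ib = aggregate_gbps(lanes=12, gbps_per_lane=10)      # 120G InfiniBand CXP
cxp_100ge = aggregate_gbps(lanes=10, gbps_per_lane=10)   # 100GBASE-SR10 uses 10 of the 12 lanes
future_4x25 = aggregate_gbps(lanes=4, gbps_per_lane=25)  # proposed 4x25G approach

print(f"CXP, 12x10G InfiniBand: {cxp_ib:.0f}G over 24 fibers (12 each direction)")
print(f"CXP, 100GBASE-SR10:     {cxp_100ge:.0f}G over 20 fibers (10 each direction)")
print(f"4x25G module:           {future_4x25:.0f}G over 8 fibers (4 each direction)")
```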
Labels:
Avago Technologies,
Cisco,
Finisar,
Luxtera,
optical transceivers,
Reflex Photonics,
SC10
Tuesday, November 9, 2010
PCIe – An I/O Optical Interconnect Soon?
The Peripheral Component Interconnect (PCI) is that bus in computers that connects everything back to the processor. It has been around for as long as I can remember having workstations. But in recent years, it has been morphing in response to the need to connect to the processor at higher data rates.
PCIe over cable was first defined for PCI Express (PCIe) GEN1 in 2007. Molex was instrumental in helping to define it, and up until now it has been a purely copper solution using Molex’s iPass™ connection system. This system has been used mainly for “inside-the-box” applications, first at 2.5G (GEN1) and then at 5G (GEN2). The adoption rate for PCIe over cable has been slow; it is used mainly for high-end multi-chassis applications including I/O expansion, disk array subsystems, high-speed video and audio editing equipment, and medical imaging systems.
PCIe GEN3 runs at 8G per lane, and some physical-layer component vendors are looking to use an optical solution instead of the current copper cable, while also trying to move PCIe into a true I/O technology for data center connections – servers to switches, and eventually storage. While component vendors are excited about these applications, mainstream OEMs do not seem interested in supporting it. I believe that is because they see it as a threat to their Ethernet equipment revenue.
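For context, the jump from GEN2 to GEN3 is larger than the raw line rates suggest, because GEN3 also replaces the 8b/10b encoding of GEN1/GEN2 with 128b/130b. A minimal sketch of the per-lane and multi-lane arithmetic, using the published line rates and encoding overheads (protocol overhead above the encoding layer is ignored):

```python
# Effective PCIe throughput per lane and per link width.
# GEN1/GEN2 use 8b/10b encoding; GEN3 uses 128b/130b.

GENERATIONS = {
    "GEN1": (2.5, 8 / 10),     # 2.5 GT/s, 8b/10b
    "GEN2": (5.0, 8 / 10),     # 5 GT/s, 8b/10b
    "GEN3": (8.0, 128 / 130),  # 8 GT/s, 128b/130b
}

for gen, (gts, efficiency) in GENERATIONS.items():
    per_lane_gbps = gts * efficiency
    for width in (1, 4, 8):
        print(f"{gen} x{width}: {per_lane_gbps * width:5.1f} Gb/s usable")
```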
CXP AOCs would seem to be a perfect fit for this GEN3 version of PCIe, but neither equipment manufacturers nor component suppliers believe they will reach the price level needed for this low-cost system. The expectation is that the optical interconnect should cost tens of dollars, not hundreds. CXP AOCs may still be used for PCIe GEN3 prototype testing as a proof of concept, but if the first demonstrations are any indication, even that will not be the case. PCIe GEN3 over optical cable was recently shown by PLX Technology using just a one-channel optical engine next to its GEN3 chip with standard LC jumpers. PLX and other vendors are looking toward using optical engines with standard MPO patch cords to extend this to 4x and 8x implementations.
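Assuming the usual one-fiber-per-lane-per-direction mapping for a parallel optical engine (my assumption for illustration, not anything PLX or the PCI-SIG has specified), the fiber counts for the wider link widths work out as in this quick sketch; it shows why a 12-fiber MPO covers a 4x link while an 8x link needs a larger ferrule or a second MPO.

```python
# Fiber count for cabled PCIe over a parallel optical engine,
# assuming one fiber per lane per direction (illustrative assumption only).

MPO12_FIBERS = 12  # standard 12-fiber MPO ferrule

for lanes in (1, 4, 8):
    fibers = lanes * 2                       # transmit + receive
    ferrules = -(-fibers // MPO12_FIBERS)    # ceiling division
    print(f"x{lanes}: {fibers:2d} fibers -> {ferrules} x MPO-12 (or one larger ferrule)")
```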
Columbia University and McGill University also demonstrated PCIe GEN3, but with eight lanes over a WDM optical interconnect. This is obviously much more expensive than even the CXP AOCs and is not expected to get any traction in real networks.
Another factor working against PCIe as an I/O is the end user. In a data center, there are typically three types of support personnel – networking (switches), storage/server managers and data center managers. While the server managers are familiar with PCIe from an “inside-the-box” perspective, I’m not sure they are ready to replace their Ethernet connections outside the box. The others may have heard of PCIe, but probably aren’t open to changing their Ethernet connections either. They can run 10-Gigabit over their Ethernet connections today, so they really don’t see a need to learn an entirely new type of interconnect in their data center. In fact, they are all leaning toward consolidation instead – getting to one network throughout the data center, like FCoE. But, as I've stated in previous posts, this won’t happen until it is shown to be more cost effective than just using 10GigE to connect their LANs and SANs. The fact that PCIe I/O could be cheaper than Ethernet may not be enough, because Ethernet is pretty cost-effective itself and has the luxury of being the installed base.
Labels:
10G,
40/100G Ethernet,
CXP,
Molex,
PCIe,
PLX Technology
Thursday, November 4, 2010
Opportunities for CXP AOCs and Transceivers
For those of you who believe that 100-Gigabit Ethernet is just around the corner, I have a bridge I want to sell you. But seriously, we haven’t even seen the height of 10-Gigabit Ethernet adoption yet, and there are some equipment companies saying they will sell hundreds of thousands of CXP 100GBASE-SR10 ports in 2011. Are you kidding? What is the application and where is the need?
First, 10GE has taken more than eight years to get to a million ports – we believe it will take 40G and 100G even longer. Second, even for clustering applications, which could potentially drive demand faster, 100GE port adoption won’t be that quick. The Ethernet architecture is different from the InfiniBand (IB) one – an IB director-class switch provides over a terabit per second of capacity, whereas the newly released 40GE switches are around 250G (due to both the slower data rate and fewer ports). IB is also based on a Clos architecture, where you have equal bandwidth everywhere, while Ethernet is more often used in an aggregated network and so ends up with far fewer higher-speed ports than lower-speed ones. This is further supported by clustering applications that use ToR switches, which currently have Gigabit connections to the servers and 10G uplinks to the network core. These will be upgraded to 10G downlinks and 40G uplinks first, and that won’t happen quickly.
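To make the aggregation point concrete, here is a small sketch of the uplink-to-downlink arithmetic for an oversubscribed ToR switch; the port counts are illustrative assumptions, not figures from any particular product.

```python
# Oversubscription arithmetic for a top-of-rack (ToR) switch in an aggregated
# Ethernet network. Port counts below are illustrative assumptions only.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Return server-facing bandwidth, core-facing bandwidth, and their ratio."""
    down = down_ports * down_gbps
    up = up_ports * up_gbps
    return down, up, down / up

configs = {
    "Today: 48 x 1GE down, 4 x 10GE up": (48, 1, 4, 10),
    "Next step: 48 x 10GE down, 4 x 40GE up": (48, 10, 4, 40),
}

for name, cfg in configs.items():
    down, up, ratio = oversubscription(*cfg)
    print(f"{name}: {down}G down / {up}G up = {ratio:.1f}:1 oversubscribed")
```

The high-speed ports are only the handful of uplinks, which is why an aggregated Ethernet network buys far fewer 40G/100G ports than an equal-bandwidth Clos fabric would.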
While several IP router manufacturers claim a need for hundreds of thousands of 100GBASE-SR10 CXP ports in 2011, I have found no evidence of this. Who are their customers? In fact, even the companies that could use 100G ports today – e.g., Google, Facebook, IXCs – would need six months to a year to evaluate IP router products before they would deploy them. Since these devices do not yet exist, the reality is that the market will not begin to materialize until at least 2012. Right now, the majority of router connections are still Gigabit Ethernet, OC-48 (2.5G) or below, with OC-192 (10G) or 10GE being implemented on an as-needed basis. Until routers transition through 10G, and then probably 40G, 100G installations will be few and far between.
But there is a market for CXP AOCs today – InfiniBand. This is becoming a volume market now and will continue to be the best opportunity for CXP AOCs for at least the next few years, and probably over the lifetime of the CXP products. In fact, we expect the volume of InfiniBand CXP AOCs to reach about six million by 2015. By comparison, the total volume of Ethernet CXP AOCs is expected to be less than 100 thousand. While 100G Ethernet clustering applications will initially use CXP AOCs, customers in these markets prefer pluggable modules, mainly because they are used to structured cabling solutions and like their flexibility and ease of use, so AOCs will quickly give way to pluggable modules as they are developed. 100GE CXP ports may eventually eclipse InfiniBand once 100GE permeates most data center distribution and core networks, but I think this will take longer than any of these equipment vendors anticipate.
Labels:
100G,
40/100G Ethernet,
AOC,
clustering,
CXP,
InfiniBand,
IP Routers