Wednesday, December 15, 2010

The 10X10 MSA: Niche, Distraction or the Right Answer? (Continued)

While Vipul has a point that this new MSA is probably a distraction, it is difficult to deny that there is a market for cost-effective devices with optical reaches between 100m and 10km. In fact, 100m to 300m is the market that multi-mode fiber has served so well for the last 20 years. And, 300m to 2km has been a niche for lower-cost 1310nm single mode products like 1000BASE-LX. So I have a slightly different opinion about this 10x10 MSA and whether it’s a niche, distraction or the right answer.

In a recent article on Optical Reflection, Pauline Rigby quotes Google’s senior network architect, Bikash Koley. About 100GBASE-SR10, he says 100m isn’t long enough for Google – it won’t even cover room-to-room connections – and that “ribbon fibres are hard to deploy, hard to manage, hard to terminate and hard to connect. We don’t like them.” There is an answer to the ribbon-fiber problem – don’t use it. Many optical fiber manufacturers now provide round multi-fiber cables that are only “ribbonized” at the ends for use with the 12-position MPO connector and are much easier to install – Berk-Tek, a Nexans Company, AFL and even Corning have released products that address this concern. But the 100m optical reach is another matter.

I have to agree with Google about one other thing – 4x25G QSFP+ solutions are at least four years away from reality (and I would say probably even longer). This solution will eventually have the low cost, low power and high density Google requires, but not quickly enough. I think something needs to be done to address Google’s and others’ requirements between 300m and 2km in the short term, but I also believe that it needs to be standardized. There is no IEEE variant that currently covers a 10x10G single mode device. However, there is an effort under way in the IEEE for 40G over SMF up to 2km. Perhaps the members of the MSA should work with this group to expand its scope, or start a new related project, to cover 100G at 2km as well. I know this was thrown out of the IEEE before, but so were 1000BASE-T and 10GBASE-T initially.

So what I'm saying is that the market is more than a niche - hundreds of millions of dollars of LOMF sales at 1G and 10G would attest to that. And it's more than a distraction because there is a need. But I don't think it's entirely the right answer without an IEEE variant to back it up.

Let us know what you think.

Monday, December 13, 2010

The 10X10 MSA: Niche, Distraction or the Right Answer?

{For today’s blog, our guest author is Vipul Bhatt. I have known Vipul for several years, going back to when he was the Director of High Speed Optical Subsystems at Finisar. He has served as the Chair of the Optical PMD Subgroup of IEEE 802.3ah Ethernet in the First Mile (EFM), and the Chair of the Equalization Ad Hoc of IEEE 802.3ae 10G Ethernet. He can be reached at vjb@SignalOptics.com.}

Last week, Google, JDSU, Brocade and Santur Corp announced the 10X10 Multi-Source Agreement (MSA) to establish sources of 100G transceivers. It will have 10 optical lanes of 10G each. Their focus is on using single mode fiber to achieve a link length of up to 2 km. The key idea is that a transceiver based on 10 lanes of 10G will have lower power consumption and cost because it doesn’t need the 10:4 gearbox and 25G components. But is this a good idea? What is the tradeoff? Based on my conversations with colleagues in the industry, it seems there are three different opinions emerging about how this will play out. I will label them as niche, distraction, or the right answer. Here is a paraphrasing of those three opinions.

It’s a niche: It’s a solution optimized for giant data centers – we’re talking about a minority of data centers (a) that are [already] rich in single mode fiber, (b) where the 100-meter reach of multi-mode 100GBASE-SR10 is inadequate, and (c) where the need for enormous bandwidth is so urgent that the density of 10G ports is not enough, and 100G ports can be consumed in respectable quantities in 2011.

It’s a distraction: Why create another MSA that is less comprehensive in scope than CFP, when the CFP has sufficient support and momentum already? Ethernet addresses various needs – large campuses, metro links, etc. – with specifications like the LR4 that need to support link lengths of well beyond 2 km over one pair of fiber. We [do] need an MSA that implements LR4, and the SR10 meets the needs of a vast majority of data centers, so why not go with CFP that can implement both LR4 and SR10? As for reducing power consumption and cost, the CFP folks are already working on it. And it’s not like we don’t have time – the 10G volume curve hasn’t peaked yet, and may not even peak in 2011. Question: What is the surest way to slow down the decisions of Ethernet switch vendors? Answer: Have one MSA too many.

It’s the right answer: What is the point of having a standard if we can’t implement it for two years? The CFP just isn’t at the right price-performance point today. The 10X10 MSA can be the “here and now” solution because it will be built with 10G components that have already traversed the experience curve. It can be built with power, density and cost figures that will excite the switch vendors, which may accelerate the adoption of 100G Ethernet, not distract it. As for 1-pair vs. 10-pairs of fiber, the first swelling of 100G demand will be in data centers where it’s easier to lay more fiber, if there isn’t plenty installed already. The 2-km length is sufficient to serve small campuses and large urban buildings as well.

Okay, so what do I think? I think the distraction argument is the most persuasive. An implementation that is neither SR10-compliant nor LR4-compliant is going to have a tough time winning the commitment of Ethernet switch vendors, even if it’s cheaper and cooler than the CFP in the short term.

Thursday, December 9, 2010

SFP+ - The New Optical RJ45?

For those of you who have been in the industry for what seems to be 100 years, but is really about 25, you know that the one “connector” that hasn’t changed much is the RJ45. While there have been improvements by adding compensation for the error that was made way back when AT&T developed the wiring pattern (splitting the pair, causing major crosstalk issues), the connector itself has remained intact. By contrast, optical connectors for datacom applications have changed several times – ST to SC to MT-RJ to LC. The industry finally seems to have settled on the LC and perhaps on a transceiver form factor – the SFP+. The SFP was originally introduced at 1G, was used for 2G and 4G, and with slight improvements has become the SFP+, the dominant form factor now used for 10G. Well, it is in the process of getting some slight improvements again and promises to make it all the way to 32G. That’s six generations of data rates – pretty impressive. But how?

The INCITS T11.2 Committee's Fibre Channel Physical Layer – 5 (FC-PI-5) standard was ratified in September. It specifies 16G Fibre Channel. Meanwhile, the top transceiver manufacturers have been demonstrating pre-standard 16G SFP+ SW devices. But wait a minute – short-wavelength VCSELs were supposed to be very unstable when modulated at data rates above 10G, right? Well, it seems that at least Avago and Finisar have figured this out. New microcontrollers and the addition of at least one clock and data recovery (CDR) device in the module to help clean up the signals have proven to be key. Both vendors believe it is possible to do this without adding too much cost to the modules. In fact, both also think that by adding electronic dispersion compensation (EDC) they can possibly push the SFP+ to 32G as well – the next step for Fibre Channel – perhaps stopping at 20G and 25G along the way to cover developments in Ethernet and InfiniBand.

And what about long wavelength devices? It has always been a challenge fitting the components needed to drive long distances into such a small package mainly because the lasers need to be cooled. But not anymore – Opnext has figured it out. In fact, it was showing its 10km 16G FC SFP+ devices long before any of the SW ones were out (March 2010). Of course, this isn't surprising considering Opnext has already figured out 100G long haul as well.

These developments are important to datacom optical networking for a few reasons:
  1. They show that Fibre Channel is not dead.
  2. The optical connector and form factor "wars" seem to have subsided, so transceiver manufacturers and optical components vendors can focus on cooperation instead of positioning.
  3. They will impact the path other networking technologies are taking – Ethernet and InfiniBand are using parallel optics for speeds above 10G – will they switch back to serial?
Stay tuned for more on these points later.


Thursday, November 18, 2010

SC10 and Optical Transceivers

It’s a beautiful time of year in New Orleans for the top supercomputing companies to show their wares. While I wouldn’t consider SC10 exactly the place to sell optical components, there were a few new developments there. SCinet – the network that is always built at the top HPC conference – boasted 100G Ethernet as well as OTU4. Alcatel-Lucent, Ciena, Cisco, Force10 and Juniper, among others, donated equipment to build this network. Module vendors Avago Technologies, Finisar and Reflex Photonics contributed QSFP and CFP 40G and 100G devices to the cause.

Meanwhile, the Ethernet Alliance was showing two demonstrations in its booth – a converged network running FCoE and RoCE over 40GigE and 100GigE. Nineteen different vendors participated in this demo, which was run by the University of New Hampshire Interoperability Lab. Both CFPs and QSFPs were used in this demo.

Some of you may wonder why I would attend SC10. I keep my eye on the HPC market because it usually indicates where the broader data center market will be in a few years. And, in fact, even medium-sized businesses’ data centers with higher computational needs are starting to resemble small HPC centers with their server clusters using top-of-rack switching.

Most of the top optical transceiver vendors and even some of the smaller ones see this market as an opportunity as well. While InfiniBand still uses a lot of copper interconnects, for 40G and 120G, this is changing. QSFP was the standout for 40G IB displays and CXP AOCs were shown for 120G as well. Avago Technologies was the first to announce a CXP module at the show.

Some believe that the CXP will be short-lived because there is progress being made on 4x25 technologies – Luxtera announced its 25G receivers to go with the 25G transmitter it announced earlier this year. But it will still be a few years before all of the components for 25G are ready for systems developers to spec in. Tyco Electronics had a demonstration at its booth showing it is possible to run 28G over eight inches of a PCB, but this was still a prototype. And Xilinx has announced a chip for 28G electrical transceivers that can be used with this board design. But none of these devices are even being tested by equipment manufacturers yet, and the CXP has already been adopted by a few. So I think the CXP may have more life in it than some people may think.

Tuesday, November 9, 2010

PCIe – An I/O Optical Interconnect Soon?

The Peripheral Component Interconnect (PCI) is that bus in computers that connects everything back to the processor. It has been around for as long as I can remember having workstations. But in recent years, it has been morphing in response to the need to connect to the processor at higher data rates.

PCIe-over-cable implementations were first defined for PCI Express (PCIe) GEN1 in 2007. Molex was instrumental in helping to define this, and until now it has been a purely copper solution using its iPass™ connection system. The system has been used mainly for “inside-the-box” applications, first at 2.5G (GEN1) and then at 5G (GEN2). The adoption rate for PCIe over cable has been slow. It is mainly used for high-end multi-chassis applications including I/O expansion, disk array subsystems, high-speed video and audio editing equipment and medical imaging systems.

PCIe GEN3 runs at 8G, and some physical-layer component vendors are looking to use an optical solution instead of the current copper cable, while also trying to move PCIe toward being a true I/O technology for data center connections – servers to switches and, eventually, storage. While component vendors are excited about these applications, mainstream OEMs do not seem interested in supporting it. I believe that is because they see it as a threat to their Ethernet equipment revenue.

CXP AOCs seem to be a perfect fit for this GEN3 version of PCIe, but neither equipment manufacturers nor component suppliers believe they will reach the price level needed for this low-cost system. The expectation is that the optical interconnect should cost tens of dollars, not hundreds. CXP AOCs may still be used for PCIe GEN3 prototype testing as a proof of concept, but if the first demonstrations are any indication, even that may not be the case. PCIe GEN3 over optical cable was recently shown by PLX Technology using just a one-channel optical engine next to its GEN3 chip with standard LC jumpers. PLX and other vendors are looking toward using optical engines with standard MPO patch cords to extend this to 4x and 8x implementations.

Columbia University and McGill University also demonstrated PCIe GEN3, but with eight lanes over a WDM optical interconnect. This is obviously much more expensive than even the CXP AOCs and is not expected to get any traction in real networks.

Another factor against PCIe as an I/O is the end user. In a data center, there are typically three types of support personnel – networking (switches), storage/server managers and data center managers. While the server managers are familiar with PCIe from an “inside-the-box” perspective, I’m not sure they are ready to replace their Ethernet connections outside the box. And, the others may have heard of PCIe, but probably aren’t open to changing their Ethernet connections either. They can run 10-Gigabit on their Ethernet connections today so really don’t see a need to learn an entirely new type of interconnect in their data center. In fact, they are all leaning towards consolidation instead – getting to one network throughout their data center like FCoE. But, as I've stated in previous posts, this won’t happen until it is shown to be more cost effective than just using 10GigE to connect their LANs and SANs. The fact that PCIe I/O could be cheaper than Ethernet may not be enough because Ethernet is pretty cost-effective itself and has the luxury of being the installed base.

Thursday, November 4, 2010

Opportunities for CXP AOCs and Transceivers

For those of you who believe that 100-Gigabit Ethernet is just around the corner, I have a bridge I want to sell you. But seriously, we haven’t even seen the height of 10-Gigabit Ethernet adoption yet, and there are some equipment companies saying they will sell hundreds of thousands of CXP 100GBASE-SR10 ports in 2011. Are you kidding? What is the application and where is the need?

First, 10GE has taken more than eight years to get to a million ports – we believe it will take 40G and 100G even longer. Second, even for clustering applications, which could potentially drive demand faster, 100GE port adoption won’t be that quick. The Ethernet architecture is different from the InfiniBand (IB) one – an IB director-class switch provides over a terabit per second of capacity, whereas the newly released 40GE switches are around 250G (due to both the slower data rate and fewer ports). IB is also based on a Clos architecture where you have equal bandwidth everywhere, while Ethernet is more often used in an aggregated network, so it ends up having far fewer higher-speed ports than lower-speed ones. This is further supported by clustering applications that use ToR switches, which currently have Gigabit connections to the servers and 10G uplinks to the network core. These will be upgraded to 10G downlinks and 40G uplinks first, and this won’t happen quickly.

While several IP router manufacturers claim to need hundreds of thousands of 100GBASE-SR10 CXP ports in 2011, I have found no evidence of this. Who are their customers? In fact, even the companies that could use 100G ports today – Google, Facebook, IXCs, etc. – would need six months to a year to evaluate IP router products before they would deploy them. Since these devices do not yet exist, the reality is that the market will not begin to materialize until at least 2012. Right now, the majority of router connections are still Gigabit Ethernet or OC-48 (2.5G) or below, with OC-192 (10G) or 10GE being implemented on an as-needed basis. Until routers transition through 10G, and then probably 40G, 100G installations will be few and far between.

But there is a market for CXP AOCs today – InfiniBand. This is becoming a volume market now and will continue to be the best opportunity for CXP AOCs for at least the next few years and probably over the lifetime of the CXP products. In fact, we expect the volume of InfiniBand CXP AOCs to reach about six million by 2015. By comparison, the total volume of Ethernet CXP AOCs is expected to be less than 100,000. While 100G Ethernet clustering applications will initially use CXP AOCs, customers in these markets prefer pluggable modules, mainly because they are used to structured cabling solutions and like their flexibility and ease of use, so AOCs will quickly give way to pluggable modules as they are developed. 100GE CXP ports may eventually eclipse InfiniBand once 100GE permeates most data center distribution and core networks, but I think this will take longer than any of these equipment vendors anticipate.

Monday, October 25, 2010

Interop, New York - Software Show Now

I hadn’t been to Interop since 2003 and I’d never been to the New York show so I decided this year I would give it a shot. I was woefully disappointed. I was expecting to see great product demonstrations from all the top equipment manufacturers, but instead received inquiries for meetings from software vendors who didn’t even bother to see that I cover data centers, optical components, structured cabling and interconnects.

The exhibit hall only had eight rows. Cisco was there, but half of its booth was taken up by its channel partners and the other half had virtual demos, not actual equipment running. Brocade was there, but had a much smaller booth and pretty much legacy equipment on a tabletop display. Most telling of course were the companies that didn’t participate in the exhibition – Extreme Networks and IBM to name just two.

Some of the programming was interesting, though, and maybe made it worth the travel costs. I sat in on the second day of the Enterprise Cloud Summit so actually got to meet some of the gurus in the industry of cloud computing. I also sat in on the “Evaluating New Data Center LAN Architectures” technical session which was a panel of equipment manufacturers that responded to an RFI from Boston Scientific for a data center expansion project. Interesting to note is that while Cisco was asked to respond, it did not. The panel consisted of Alcatel-Lucent, Extreme Networks, Force10 and Hewlett Packard. It is also interesting to note that the vendors responded with different architectures – some including top-of-rack solutions and others with end-of-row.

All-in-all, I think my time would have been better spent staying home and working on my optical components research…

Thursday, October 14, 2010

SFP+ Markets

SFP+ is primarily being used in two markets currently – 10G Ethernet and 8G Fibre Channel. The Fibre Channel market is almost exclusively optical SFP+ modules, while the 10G Ethernet market is expected to see its largest growth in SFP+ direct-attach copper (DAC) products. SFP+ DAC will mainly be used in top-of-rack (ToR) switches that connect to servers within a rack. The multi-million-port forecasts predicted for the next few years are predicated on an anticipated high rate of adoption of this architecture in the data center. Even if this topology is used, once 10GBASE-T becomes more readily available, SFP+ DAC will be sharing its market with that less costly variant. Some OEMs believe quantities of each will be about the same, but I have my doubts. If history tells us anything, which it usually does, once the less expensive BASE-T variant takes hold, it usually pushes the more costly alternatives to much smaller volumes.

But right now it is a matter of timing. I am skeptical that 10GBASE-T adoption in switches will be able to keep pace with the port-density needs of the data center in the short term. Both products will be needed in the long term – 10GBASE-T for inexpensive connections and 10GBASE-CR (SFP+ DAC) for lower latency, lower power consumption and more flexibility. Currently, if you want both copper and fiber ports on your 10G switch, you need to use SFP+ because there are no switches that offer both 10GBASE-T and optical ports.

Friday, October 8, 2010

DARPA's Chip-to-Chip Optical Interconnects (C2OI) Program

The C2OI DARPA program is funding ongoing optical component projects. Its end goal is to “demonstrate optical interconnections between multiple silicon chips that will enable data communications between chips to be as seamless as data communication within a chip.”

This program grew out of work initially done by Agilent (now Avago Technologies) under the DARPA Parallel Optical Network Interconnect (PONI) project. Agilent/Avago developed a 30 Gbps transmitter (2.5 Gbps/lane) that was eventually standardized as the SNAP-12.

IBM (with help from Avago) extended the work originally done by Agilent/Avago into inter-chip connections and in 2009 achieved optical interconnection with 16 parallel lanes of 10G. By early 2010, IBM was extending this work into board-to-board applications which resulted in the new Avago MicroPOD™ product that was specifically designed for IBM’s POWER7™ supercomputer.

While it was designed for HPC server interconnects, the MicroPOD could be used for on-board or chip-to-chip interconnects as well. As mentioned in previous posts, the devices use a newly designed miniature detachable connector from US CONEC called PRIZM™ LightTurn™. The system has separate transmitter and receiver modules that are connected through a 12-fiber ribbon. Each lane supports up to 12.5 Gbps. It uses 850nm VCSEL and PIN diode arrays. The embedded modules can be used for any board-level or I/O-level application by either using two PRIZM LightTurn connectors or one PRIZM LightTurn and one MPO.

While MicroPOD is targeted at high-density HPC environments, a natural expansion of its market reach would be into switches and routers in high-density Ethernet data center environments. Though this may not happen in the next few years, to me it looks like it could be a more cost-effective solution than, say, a 40G serial one.

Tuesday, October 5, 2010

DARPA's Ultraperformance Nanophotonic Intrachip Communications (UNIC)

UNIC is a DARPA-funded project that started in 2008 and is slated to run for about five years. Sun/Oracle, Kotura and Luxtera are working to develop this chip-to-chip "high-performance, CMOS-compatible photonic technology for high-throughput, non-blocking and power-efficient intrachip photonic communications networks."


The first application of such technology is targeted for optical interconnects for microprocessors. The goal is to replace high-performance computing clusters with computers that consist of these arrays of microprocessors interconnected by optics. Another goal of the project is to make sure the new devices are "compatible" with CMOS processes in order to also integrate the associated electronic devices. Using its now proven Silicon CMOS Photonics technology, Luxtera has developed transmitters and receivers for the project. Kotura supported the project with new low-power, high-speed modulators made of silicon photonics.


Potential new products are expected by the end of 2012. While these will be initial products, commercialization is not expected until quite some time later, perhaps not until 2016 or so. Meanwhile, Luxtera will continue to use its technology to sell 10, 40 and 100G transceivers and AOCs.


With data rates increasing beyond 10G, chip-to-chip, on-board and board-to-board optical interconnects will become progressively more significant. Even at 10G, traditional printed-circuit boards cannot support transmission beyond about 12 inches without re-timers. As I’ve mentioned in previous posts, instead of spending money to develop more exotic PCBs using complicated digital signal processing (DSP), it may be time to embrace optical interconnects at both the board level and the chip level.

Wednesday, September 29, 2010

DARPA and NIST Funded Optical Interconnect Research Projects (Part 1)

While many private sector investments have been made in optical communications over the years, none have contributed as much ongoing support as the US federal government has. The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) have been at the forefront of funding for decades and are continuing to ante up now.

Here is a review of some of these projects and how they might impact short-reach optics:

TERAPIC™: The TERAPIC (Terabit Photonic Integrated Circuits) project that is funded by NIST has as its goal to "develop technology for optical components that can transmit and receive up to one terabit of data per second over one single-mode fiber, greatly reducing complexity and cost in high-capacity fiber-optic networks."

CyOptics and Kotura have partnered on this project to bring Terabit connectivity to data centers and HPC centers by reducing hundreds of individual components "to less than 10." The project is expected to produce an integrated component that can "transmit and receive up to one Tbps of data over one single-mode fiber across transmission distances of up to two-kilometers." The project team has been successful with 100G and 500G prototype devices that transmit up to two kilometers and continues to work toward Terabit devices.

The PICs will be monolithic arrays of CWDM lasers and detectors that will be automatically assembled in CyOptics' manufacturing facility in Breinigsville, PA. Kotura has developed two integrated silicon photonics chips – one for data multiplexing/transmission and one for data demultiplexing/detection – that are integrated into the overall transmit optical sub-assemblies (TOSAs) and receive optical sub-assemblies (ROSAs) from CyOptics.

The TERAPIC project is set to be completed in 2010, so products are expected to be released in 2011, but early-adopter customers have yet to be identified, so actual revenue-generating opportunities may still be several years off – especially since the devices are targeted at Terabit Ethernet, which has not even started down its standardization track yet. However, the 100G and 500G parts that were developed could be made production-worthy in the interim.

More on DARPA and NIST-funded projects will be discussed in the coming days.

Monday, September 27, 2010

Transitioning Your Data Center from Copper to Fiber

Companies like Corning like to tell data center managers that with the advent of 10G, they should be transitioning their network from mostly copper connections to all fiber-optic. But, as many of you probably know, this is easier said than done. There are many things to consider:
  1. How much more is it going to cost me to install fiber instead of copper?
  2. Do I change my network architecture to ToR in order to facilitate using more fiber? What are the down sides of this?
  3. Is it really cost-effective to go all-optical?
  4. What is my ROI if I do go all-optical?
  5. Will it really help my power and cooling overload if I go all-optical?
It is very difficult to get to specific answers for a particular data center because each one is different. And guidelines from industry vendors may be skewed based on what they are trying to sell you. OFC/NFOEC management has recognized this and asked me to teach a new short course at its 2011 conference – Data Center Cabling: Transitioning from Copper to Fiber – which will be part of the special symposium Meeting the Computercom Challenge: Components and Architectures for Computational Systems and Data Centers. I invite your ideas on specifics you would like to see covered in this new short course.


Thursday, September 23, 2010

25G/40G VCSELs Driving Short-reach Optical Interconnects

Just a few years ago, laser designers were struggling with stability of their 10G VCSELs. But now, at least one, VI Systems GmbH, claims it will have production-ready 40G VCSELs within the next few years. The German start-up has developed two products it believes will take VCSELs beyond 10G applications - a directly-modulated (DM) device and an electro-optic modulated (EOM) DBR VCSEL. Both are short-wavelength (850nm) lasers.

In a recent press release, VI Systems explains that it “developed the VCSEL products at a wavelength of 850 nm along with a range of extremely fast integrated circuits based on the SiGe BiCMOS (silicon-germanium bipolar junction transistors in complementary metal-oxide-semiconductor) technology. The company uses a patent pending micro-assembly platform for the integration of the opto-electrical components and for alignment to a standard high performance multi-mode glass-based fiber.” The start-up has been presenting data supporting its claims of highly stable devices for more than a year now. It gets there by changing the laser active region material and structure to InAs quantum dot (QD).

Not only is VI Systems working on innovative laser structures, it has also developed new electro-optic integration methods to further reduce the cost of these devices.

I’ve noted in previous posts how VCSELs are the key to low-cost optical networks in the data center. These new VCSELs and packaging methods would bring an even more cost-effective “serial” solution for 40/100G. They could also be used for very short-reach optical connections like chip-to-chip, on-board or board-to-board. Perhaps these inventive products will rival Avago’s MicroPOD and Luxtera’s OptoPHY (also covered in previous posts). Based on the presentations that VI Systems has released, it sure appears that its management completely understands the needs of both the data center and optical interconnect markets, so it could very well give incumbents in the industry some competition.

Tuesday, September 21, 2010

Intel’s Light Peak OR USB 3.0?

After Intel’s Developer Forum last week, there is renewed interest in Light Peak. For those of you who don’t remember one of my first blog posts, Light Peak is an Intel-developed optical interconnect technology that uses a new controller chip with LOMF and a 10G 850nm VCSEL-based module with a new optical interface connector that looks very similar to the one used in Avago Technologies’ MicroPOD transceiver. Light Peak is aimed at replacing all of the external connectors on your PC, including USB, IEEE 1394, HDMI, DP, PCIe, etc. It is also targeted at other consumer electronic devices like smart phones and MP3 players.

Many in the industry think Light Peak is intended to replace USB 3.0 even before USB 3.0 is finished being standardized. I tend to disagree. USB 3.0 is a 5G data rate and to me, will bridge the gap between existing USB 2.0 (480 Mbps max) and the 10G that Light Peak can provide. Just because they are being developed at the same time, doesn’t mean they will make it to production simultaneously.

While Intel is now saying that 2011 will be the year for Light Peak to take off, I’m still skeptical. There may be some really high-end applications like video editing that need this bandwidth, but your run-of-the-mill PC user isn’t going to want to pay the extra money for it when you probably won’t be able to actually detect the improvement. And, what might be more important – what kind of power consumption difference is there and how does this affect battery life? Or is this technology not meant for laptops? I’m not sure these questions have been sufficiently answered yet.

Thursday, September 16, 2010

10GBASE-T versus 10GBASE-SR – Tradeoffs in Costs Make the Difference

The 10GBASE-T standard was finalized some four years ago now, but, as I’ve mentioned before, equipment using these ports is really just starting to be deployed in actual networks. The main reason is that you couldn’t get a switch with these ports on it. So early implementations of 10G are 10GBASE-SR, 10GBASE-LR or 10GBASE-LRM, with the vast majority now being SR. But now that switch manufacturers the likes of Blade Networks, Cisco and Extreme Networks have products offering up to 24 ports of 10GBASE-T, the market dynamics may change.

With Ethernet, history tends to repeat itself so let’s take a minute to review what happened at Gigabit. Early products were 1000BASE-CX, SX and LX because the 100m 1000BASE-T had not yet been standardized. But, as soon as it was and the switch manufacturers started adopting it, it took over the shorter-reach Gigabit Ethernet market. In fact, it still dominates today.

So, why would 10GBASE-T be any different? Well, my belief is that eventually, it won’t be. Even though data center managers’ concerns have shifted from space to power availability per rack and cooling hot spots, when they see the price difference between SR and T (still about 2:1 per installed port), it causes them to pause and rethink the T scenario. So although data center managers want to reduce the headaches of fat Category 6A cables, most are still not willing to pay that much more for the optical solution until distances force them to. So even though the T ports may push electricity bills up, for most, the increase isn’t significant enough to justify the up-front cost of SR.
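
To put rough numbers on that trade-off, here is a quick back-of-the-envelope sketch. The 2:1 installed-cost ratio comes from the paragraph above; every other figure (the absolute port costs, the roughly 1W I am assuming for an SR port, the electricity rate and the cooling multiplier) is an illustrative assumption you should replace with your own numbers:

```python
# Back-of-the-envelope payback estimate for 10GBASE-SR vs. 10GBASE-T ports.
# The ~2:1 installed-cost ratio comes from the discussion above; every other
# number here is an illustrative assumption - plug in your own figures.

SR_COST_PER_PORT = 400.0    # assumed installed cost of an SR (optical) port, $
T_COST_PER_PORT = 200.0     # assumed installed cost of a 10GBASE-T port, $

T_WATTS_PER_PORT = 3.0      # ~3W per port for current 10GBASE-T PHYs
SR_WATTS_PER_PORT = 1.0     # assumed draw of an SFP+ SR port
KWH_PRICE = 0.10            # assumed electricity price, $/kWh
COOLING_FACTOR = 2.0        # assumed multiplier for cooling overhead
HOURS_PER_YEAR = 24 * 365

extra_watts = T_WATTS_PER_PORT - SR_WATTS_PER_PORT
extra_cost_per_year = (extra_watts / 1000.0) * HOURS_PER_YEAR * KWH_PRICE * COOLING_FACTOR
upfront_premium = SR_COST_PER_PORT - T_COST_PER_PORT

print(f"Extra energy cost of a T port:  ${extra_cost_per_year:.2f} per year")
print(f"Up-front premium of an SR port: ${upfront_premium:.2f}")
print(f"Years for the power savings to pay back the premium: "
      f"{upfront_premium / extra_cost_per_year:.0f}")
```

With these assumptions the power savings take decades to pay back the optical premium, which is exactly why the up-front price, not the electricity bill, is still driving the decision.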

Friday, September 10, 2010

10G Copper versus Fiber – Is Power Consumption Really the Issue?

For decades now, fiber has been slated to take over the data networking world, but somehow, some way, copper keeps reinventing itself. But are the ways in which copper can compensate for its lower bandwidth capacity coming to an end at 10G due to what seem to be astronomical power consumption issues? Probably not. I have listened to the rhetoric from the fiber-optic companies for more than five years now, and have conducted my own research to see if what they say is true. Their argument was that at 8 to 14W per port, copper is just too costly. But now that the chips have reduced power consumption to less than 3W per port, 10GBASE-T is a viable data center networking solution. Actually, even at 14W per port, it was viable, just not practical for switch manufacturers to incorporate in their designs because they couldn’t get the port density they needed and still have room to cool the devices. Now that doesn’t seem to be an issue, as evidenced by the 24-port 10GBASE-T configurations that have been released by all the major players.

I believe decisions on copper versus fiber will be made around other parameters as well, such as latency. In a recent study we released at DataCenterStocks.com, Data Center Cabling Cost Analysis – Copper Still Has Its Place, we looked at the cost of 10G copper versus fiber and factored in the higher power consumption. Using a specific example focused on a rack of EoR Cisco switches, copper was still more cost-effective even when considering the higher electricity costs.

But our next area to study will be a rack of servers with a ToR switch. In this scenario, the power consumption difference may be enough to justify the cost of installing fiber over copper. The above referenced report and this next research are part of a series of research reports for our Data Center Network Operator Service.

Thursday, September 2, 2010

Why Polarity Matters (Part 2)

If you’ve read my previous posts on the subject, you know that polarity can be a tricky matter and it’s even more complicated when you try to choose it for your data center cabling. You really have to choose based on several factors:

  1. Patch cords – Method A has two different patch cords that you have to stock, but the upside is that it’s pretty simple to follow where the signal is going, and if you happen to be out of one type of patch cord, you can take the one you have and just flip the fibers as a temporary fix until you can get the other patch cords. Of course this isn’t recommended, but if you’re in a bind and need to get up and running right away, it will work. With Methods B and C you have the same patch cord on each end, so there is no need to worry about this; but if you happen to have the wrong cassettes or backbone, nothing will work and you'll have to wait to get the correct ones.
  2. Cassettes and backbone cables – you need to make sure you buy all of one method of polarity or your system won’t work. If you’re concerned about supply, all three polarity methods are available from multiple vendors, but Method A is “preferred” by most.
  3. Upgradability – this is where it can get dicey. Typically your pre-terminated assemblies are running Gigabit applications today, and a few may be running 10G. Any of the polarity methods will work at these data rates. But when you move to 40/100G, Methods A and B have straightforward paths, while C does not. Also, you’ll want to make sure you use the highest grade of LOMF available, which is OM4 – this will give you the best chance of being able to reuse your backbones up to 125m. If you need something longer, you’ll need to go to SMF.
If you are thinking about installing pre-terminated cassette-based assemblies now for 10G with an upgrade path to 40 and 100G, you need to consider the polarity method you use. Unlike today's 2-fiber configurations, with one send and one receive, the standards for 40G and 100G Ethernet implementations use multiple parallel 10G connections that are aggregated. While 40/100G equipment vendors will tell you that polarity is not an issue, care must be taken if you want to reuse this installed base.

40G will use four 10G fibers to send and four 10G fibers to receive, while 100G uses either four 25G fibers or ten 10G fibers in each direction. Because 40 and 100G will be using the MPO connector, if the polarity method is carefully chosen, you will be able to reuse your backbone cables. This is enabled by the fact that the IEEE took much care in specifying the system so that you can connect any transmit within a connection on one end of the channel to any receive on the other end.

Those selecting fiber to support 10G now and 40G in the near future need to understand what will be involved in transitioning and repurposing their cable plant. To upgrade using Method A, you can replace the cassettes with MPO-to-MPO patch panels and MPO-to-MPO patch cords, which enables the flexibility to address moves, adds and changes as well as promoting proper installation best practices. The polarity flip will need to be accomplished either in an A-to-A patch cord or possibly with a key-up/key-down patch panel.

Method B multimode backbone cables can also readily support 40G applications. For a structured cabling approach, Method B will still use a patch panel and patch cords, though as with current Method B channels, both patch cords can be of the A-to-B configuration. While Method C backbones could be used, they are not recommended for 40G because completing the channel involves complex patch cord configurations.

It appears that 100G will use either the 12-fiber (4x25G) or the 24-fiber (10x10G) MPO connector. With transmits in the top row and receives in the bottom row, the connection will still be best made using a standardized structured cabling approach as described above.
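
To make the reuse argument concrete, here is a minimal sketch of the three backbone mappings and a check of whether a 40G port's transmit lanes land on receive lanes at the far end. The lane layout (Tx on positions 1-4, Rx on positions 9-12 of the 12-fiber MPO) is my assumption based on the commonly published 40GBASE-SR4 arrangement – confirm it against your transceiver documentation before planning an upgrade:

```python
# Which backbone polarity methods let a 40G parallel link work as-is?
# Assumed lane layout (check your transceiver documentation): 40GBASE-SR4
# transmits on MPO positions 1-4 and receives on positions 9-12.

TX_POSITIONS = {1, 2, 3, 4}
RX_POSITIONS = {9, 10, 11, 12}

def method_a(pos):   # straight-through backbone: position preserved
    return pos

def method_b(pos):   # position-reversed backbone: 1<->12, 2<->11, ...
    return 13 - pos

def method_c(pos):   # pair-wise flipped backbone: 1<->2, 3<->4, ...
    return pos + 1 if pos % 2 else pos - 1

for name, mapping in (("A", method_a), ("B", method_b), ("C", method_c)):
    far_end = {mapping(p) for p in TX_POSITIONS}
    works = far_end <= RX_POSITIONS
    verdict = "Rx lanes - works as-is" if works else "not Rx lanes - needs an extra flip"
    print(f"Method {name}: Tx lanes arrive at positions {sorted(far_end)} -> {verdict}")
```

Run as written, Method B lands the transmit lanes squarely on receive positions, Method A leaves them on transmit positions (hence the extra A-to-A cord or key-up/key-down panel described above), and Method C's pair-wise flip never reaches the receive lanes, which is why it is not recommended for 40G.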

There are many suppliers of pre-terminated optical assemblies, including Belden, Berk-Tek, a Nexans Company, CommScope, Corning, Panduit, Siemon and Tyco Electronics NetConnect, as well as many smaller shops that provide quick-turn assemblies, like Cxtec CablExpress and Compulink.

Tuesday, August 31, 2010

Why Fiber Polarity Matters (Part 1)

Again, polarity is the term used in the TIA-568 standard to describe how to wire the fibers to make sure each transmitter is connected to a receiver on the other end of a multi-fiber cable.

Many data center managers are opting to use pre-terminated fiber assemblies due to their higher-quality factory termination, ease of use and quick installation. And many are using 12-fiber MPO backbone cables with cassettes and patch cords to transition to active equipment. When doing this, they choose a polarity method that makes sense for their operation.

Polarity Method A: This is the most straightforward method. It uses a straight-through patch cord (A-to-B) on one end that connects through a cassette (LC-to-MPO or SC-to-MPO, depending on what the equipment connector is), a straight-through MPO-key-up-to-MPO-key-down backbone cable, and a “cross-over” patch cord (A-to-A) at the other end.

Polarity Method B: The “cross-over” occurs in the cassette. The keys on the MPO cable connectors are in an up position at both ends, but the fiber that is at connector position 1 in one end is in position 12 at the opposite end, and the fiber that is in position 12 at the originating end is in position 1 at the opposing end. Only one type of patch cord is needed – A-to-B.

Polarity Method C: This is the most complicated. There is pair-wise “cross-over” in the backbone cable in this method. A-to-B patch cords are used on both ends, the cassette uses MPO-key-up-to-key-down and the backbone cable is pair-wise flipped so 1,2 connects to 2,1; 3,4 connects to 4,3; etc.
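
If it helps, here is a toy model of the three methods for today's duplex (2-fiber) channels. The only rule that matters is that a transmitter meets a receiver when the total number of Tx/Rx cross-overs along the channel is odd; the flip counts below simply restate the descriptions above (an A-to-B duplex cord counts as one flip, an A-to-A cord as zero, and the cross-over element of Methods B and C as one). Treat it as a mental model, not a wiring diagram:

```python
# Toy model of the three methods for today's duplex (2-fiber) channels.
# Rule of thumb: the channel works when the total number of Tx/Rx
# cross-overs from end to end is odd. Flip counts restate the text above.

CHANNELS = {
    "Method A": [("A-to-B patch cord", 1), ("cassette", 0),
                 ("straight-through backbone", 0), ("cassette", 0),
                 ("A-to-A patch cord", 0)],
    "Method B": [("A-to-B patch cord", 1), ("cassette", 0),
                 ("position-reversed backbone/cassette pair", 1),
                 ("A-to-B patch cord", 1)],
    "Method C": [("A-to-B patch cord", 1), ("cassette", 0),
                 ("pair-wise flipped backbone", 1), ("cassette", 0),
                 ("A-to-B patch cord", 1)],
}

for name, elements in CHANNELS.items():
    total = sum(flips for _, flips in elements)
    verdict = "Tx meets Rx" if total % 2 else "Tx meets Tx - broken channel"
    print(f"{name}: {total} cross-over(s) -> {verdict}")
```

All three methods come out odd, which is why any of them works for duplex links – but swap in the wrong patch cord or backbone and the count goes even, which is the "nothing will work" failure mode you hit when components from different methods are mixed.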

There is a fourth method that I won't go over here since it's proprietary and not standardized.

If end users do not get this correct and use all of the proper pieces together, their systems will not work. If you don’t understand what I’ve just explained above, you're not alone. There are diagrams in the TIA-568 standard as well as many white papers from leading structured cabling companies explaining fiber polarity in arrayed cabling systems. Here’s a link to Panduit’s white paper that may help. In the next post, I’ll explain how to upgrade to 40/100G and reuse your pre-terminated backbone.

Thursday, August 26, 2010

How the 40/100G Ethernet Shift to Parallel Optics Affects Data Center Cabling

Most data centers are cabled with at least Category 5e and some MMF. To upgrade to 10G, data center managers need to either test their entire installed base of Category 5e to make sure it is 10G-worthy or replace it with Category 6A or 7. And their MMF should be of at least the OM3 (2000 MHz·km) variety or the 300m optical reach is in question – unless they want to use 10GBASE-LX4 or LRM modules that are about 10x the price of 10GBASE-SR devices. But what happens when you want to look beyond 10G to the next upgrade?

Last month I talked about how at 40/100G there is a shift to parallel-optics. Unlike today’s two-fiber configurations, with one send and one receive, the standards for 40G and 100G Ethernet specify multiple parallel 10G connections that are aggregated. 40GBASE-SR4 will use four 10G fibers to send and four 10G fibers to receive, while 100GBASE-SR10 will use ten 10G fibers in each direction.

What this means to data center operators is that they may need to install new cable – unless they’ve started to install pre-terminated fiber assemblies using 12-position MPO connectors, which can be reused if the polarity is chosen carefully. Polarity is the term used in the TIA-568 standard to describe how to wire the fibers to make sure each transmitter is connected to a receiver on the other end of a multi-fiber cable.

There are three polarity methods defined in the TIA standard and each has its advantages and disadvantages, but only two of the three will allow you to easily reuse your installed pre-terminated assemblies for 40/100G – Methods A and B. I’ll explain why in my subsequent posts.

Monday, August 23, 2010

When a Standard Isn’t Exactly a Standard

I’ve noted in a couple of posts now that equipment manufacturers charge a lot more for optical modules they sell to end users than what they actually pay for them from transceiver suppliers. Considering the pains OEMs go through to “qualify” their vendors, a healthy markup in the early stages of a new product adoption can be warranted. But, I’m not so sure keeping it at more than 5x the price five years down the road can be justified. And is it sustainable? Some transceiver manufacturers sell products at gross margins in the 20-percent range, while their biggest customers (OEMs) enjoy more like 40 percent.

And guess what – there’s not much the suppliers can do. It is well known that Cisco, Brocade and others purchase modules, and now SFP+ direct-attach copper cables, from well-known suppliers and resell them at much higher prices. And if I’m an end user, I MUST buy these from the OEM or its designate or my equipment won’t work. These devices have EEPROMs that can be programmed with what some call a “magic key” that allows them to work only with specific equipment. So the OEM now has a captive market for modules and copper cables going into its equipment, and it can pretty much charge what it wants. If I try to use a “standard” module or cable assembly – one that is compliant to the specification – it will not work unless it has this “magic key.”
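
For those who haven't run into this, here is a hypothetical sketch of the kind of gate a switch's firmware could put on a module's ID EEPROM. The vendor-name offset follows the standard SFP ID map (SFF-8472, bytes 20-35 of the A0h page); the approved-vendor list and the policy of keying on the vendor name are purely illustrative – actual OEM lock-ins presumably use proprietary fields and checksums that aren't published:

```python
# Hypothetical illustration of an OEM "magic key" style check on an SFP's
# ID EEPROM. The vendor-name offset follows the SFF-8472 A0h ID map; the
# approved list and the gating policy are invented here to show the idea.

APPROVED_VENDORS = {"BIGSWITCHCO"}          # hypothetical OEM branding string

def vendor_name(eeprom_a0: bytes) -> str:
    """Return the ASCII vendor name stored in bytes 20-35 of the A0h page."""
    return eeprom_a0[20:36].decode("ascii", errors="replace").strip()

def port_accepts_module(eeprom_a0: bytes) -> bool:
    # A standards-compliant module from any vendor would work optically, but
    # firmware like this refuses to enable the port unless the EEPROM carries
    # the OEM's own branding (real lock-ins reportedly go further than this).
    return vendor_name(eeprom_a0) in APPROVED_VENDORS

# Example: a generic, fully compliant module gets rejected.
generic = bytearray(256)
generic[20:36] = b"ACME OPTICS     "        # 16-byte, space-padded vendor name
print(port_accepts_module(bytes(generic)))  # False
```

The point is that the module itself can be perfectly standards-compliant; it is the firmware policy wrapped around the EEPROM contents that creates the captive market.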

I’ve experienced this first hand. I had a brand new HP ProCurve Gigabit Ethernet switch that I wanted to use for some cable testing I was doing. I had dozens of SFP modules from all of the top transceiver manufacturers, but none of them would work in the switch. I called HP and they said, “You have to buy the HP mini-GBIC.” Well, I knew that wasn’t exactly true. I didn’t really want to pay the $400+ each for four more SFPs that I didn’t need so I tried to work through my contacts at HP to get a firmware patch so I could use my existing devices. Long story short, I never did get that patch and ended up doing my testing with SMC switches instead.

Prime example of when an open standard is not so open. Will data center managers be able to sustain this when they have to move equipment around and need different modules or cable assemblies? Are the OEMs thinking about the aftermarket and the fact that data center managers are used to going to distributors to get these items? And are OEMs going to continue to gouge end users and potentially cripple their suppliers?

One added note - there are at least two equipment manufacturers that I know of that support an open standard:  Blade Networks and Extreme Networks. While they will both supply the modules and cable assemblies, they don't lock out other standards-compliant parts that customers may want to use.

Thursday, August 19, 2010

AOCs (Part 3 and last for now)

The last installment of my summary of AOC implementations:

Reflex Photonics has gained an early customer base in InfiniBand and PCI Express extender applications with its SNAP 12 products, and is using the existing customer base to increase awareness of InterBoard products for data center customers. In developing InterBoard, Reflex Photonics moved into coarser channel implementations to meet industry AOC standards. The four-channel cables terminate in an array of 850nm VCSELs that use QSFP connectors suitable for both InfiniBand DDR and 40G Ethernet. What is also interesting about Reflex’s InterBoard is that it contains its optical engine technology, LightAble.

Zarlink (now part of Tyco) began its ZLynx product line with a CX4 interconnect, but quickly added QSFP as the module was standardized. Zarlink is unique in anticipating possible customer interest in dissimilar terminations by offering CX4-to-QSFP cables. Zarlink product developers say they will take the same attitude as CXP applications emerge. While most AOCs will use identical termination on both ends of the cable, the company will explore customer demand for hybrid connectors. Before it was acquired by Tyco, Zarlink was working on 40G implementations that were expected to be released this year. No announcements have been made as of yet, though. Tyco had its own QSFP AOC, namely the Paralight. It remains to be seen how Tyco will merge these product lines.

The first implementations of 40G Ethernet have indeed materialized as AOCs, but are expected to transition into actual optical modules as soon as transceiver manufacturers are ready with their products. What is nice for the end user is that if they want to implement 40G today, they can with AOCs and the same ports will then accept optical modules later if needed. InfiniBand AOC products are expected to stay as AOCs and not transition into optical modules, mainly because most of these connections are less than 30m so are easier to pull through pathways and spaces.

According to CIR, the market for AOCs is expected to be about $180 million (a rather small market for so many entrants) this year, most of which will be for data centers. However, by 2013, it is expected to grow to more than $1-billion – a steep climb and one that will need a lot of suppliers if it is actually going to happen.

Sunday, August 15, 2010

AOCs (Part 2)

Summary of a few more AOC Implementations:

Avago Technologies had a late entry into the AOC market with its 10GBASE-CX4 replacement and QSFP+ products. But it has a rich history in parallel optics, so it has quickly brought its products up to speed. While it may have been somewhat late to market, Avago has an existing customer base to peddle its wares to.

Finisar’s products include Quadwire and Cwire AOCs to address early adoption of 40G and 100G. Quadwire is Finisar’s mainstream product, both in terms of its use of the VCSEL arrays the company produces in volume at its Texas fab, and in terms of its use of the popular QSFP form factor.

The high end of the Finisar product line is designed to exploit anticipated interest in 100G Ethernet and 12-channel QDR InfiniBand. Cwire offers an aggregate data rate of 150 Gbps and a CXP interface. Not only does this represent the direction of high-end enterprise cluster design, but it allows Finisar to utilize the most integrated VCSEL arrays it manufactures. The 12-channel array also represents the most cost-effective per-laser manufacturing option, allowing Finisar to take advantage of its expertise in designing large VCSEL-arrays. The benefit in high channel count can also be seen in power dissipation. While the single serial channel of Laserwire dissipates 500mW per end, the 12-channel Cwire dissipates less than 3W per end – half the power dissipation per channel.

MergeOptics (now part of FCI) was born of the old Infineon, which was once a powerhouse in the optical transceiver markets – both telecom and datacom. It emerged in 2006 with its SFP and then SFP+ products and is now one of the first entrants for 40G and 100G AOCs. Unlike most of its competitors, it is focused on 10G-and-above products, so it can bring them to market rather quickly. Its technology is being leveraged for InfiniBand and Ethernet applications.

Stay tuned for the next post for just a little more on AOCs.

Thursday, August 12, 2010

Active Optical Cables (Part 1)

Active Optical Cables (AOCs) are typically defined as fiber subsystems intended for reaches of 3 to 300 meters, but Luxtera and Finisar both promise products stretching a kilometer or more for campus-network solutions. However, I don’t believe AOCs beyond 300 meters will get much traction in the market due to the difficulty of pulling these delicate transceiver ends through kilometers of pathways and spaces (conduit or tray), around all types of obstacles. AOCs’ main applications in high-speed networks are in the data center, including (and probably most relevant) high-performance computing (HPC) clusters.

Intel (its AOC business now part of Emcore) and Luxtera were among the first to promote AOCs for consumer and data-center markets. Zarlink (its optical products group is now part of Tyco Electronics) launched its AOC effort in 2007, Finisar introduced three varieties of vertical-market AOCs in 2009, and Avago announced its QSFP+ AOC in late 2009. Other participants include Lightwire, MergeOptics/FCI and Reflex Photonics. And, of late, we’ve even seen structured cabling companies like Siemon introduce some of these products, albeit, by the looks of it, by partnering with Luxtera to do so.

The QSFP+ form factor continues to be an enabler for 40G AOCs and, in fact, was the first “form factor” released for this data rate. Since the QSFP+ supports Ethernet, Fibre Channel, InfiniBand and SAS, it will be an economical solution for all of these protocols. A QSFP+ AOC combines the QSFP physical module with management interfaces extendable to 40G and common protocols that support multiple physical layers in a single module, and it operates at 10G per lane, producing a cost-effective solution. A significant ramp in quad-data-rate InfiniBand and 40G Ethernet will start to accelerate volume applications for these products. QSFP+ AOCs also give transceiver vendors an easier path to market, as they allow them to control both ends of the optical link, which is much easier to design – there are two fewer compliance points.

A summary of some of the product implementations of AOCs for high-data-rate networks:

Emcore has incorporated its existing technology into a pre-terminated active assembly using the current InfiniBand and 10GBASE-CX4 copper connector. So, what is presently being connected by copper can be replaced immediately by an active optical cable assembly. For 40G InfiniBand, this will turn into the CXP connection. The QDR 40 cable from Emcore was announced in mid-June 2008 and according to the company, has been shipping to select customers since early 2009. Yet, it does not seem to be a released product since the only reference to it on the Emcore Web site is its initial press release - no specifications are available there.

Luxtera is addressing the data center market with both InfiniBand- and Ethernet-compliant AOCs. It uses its CMOS photonics at 1490nm wavelength and a high-density QSFP+ directly attached to multi-fiber (ribbon or loose-tube) SMF. This is suitable for 40G applications and has proven a cost-effective solution for data centers that have discovered the difficulty with copper cables. Although the specifications for copper interconnects support 10m of cable, in reality there are both performance issues and mechanical problems with them.

To be continued in my next post...

Monday, August 9, 2010

Gigabit Transceivers

In our rush to discuss all the new technologies, it seems to me that analysts have forgotten that part of our job is also to point out ongoing trends in existing products. So while talking about Gigabit transceivers might not be as appealing as talking about Terabit Ethernet, it’s also a necessity – especially since, without these devices and the continuing revenue they produce, we wouldn’t have 40/100G or even 10G Ethernet. So what are the important points to make about Gigabit transceivers?
  • The market for Gigabit Ethernet transceivers (copper and optical) is expected to be about $2.5-billion in 2010 according to CIR, but it is also supposed to start declining in 2011 when more 10GigE will take its place.
  • Pricing for a 1000BASE-SX SFP module is now at about $20 for OEMs. End users still pay Cisco or Brocade or their agents about 8x that much (more about this later).
  • Low pricing makes it difficult on profit margins so transceiver vendors hope to make it up in volume.
  • While the SFP is certainly the preferred form factor, there is still a decent number of GBIC modules being sold.
  • SFP direct-attach copper cable assemblies have become an option for connecting top-of-rack switches to servers instead of using UTP Category patch or fiber cabling, although the majority of implementations today still use UTP patch cords, mainly because the connections within the rack are still 100M, with the uplink being Gigabit Ethernet of the 1000BASE-SX variety.
  • While 10/100/1000 ports are the norm for desktop and laptop computers, most of these devices are still connected back through standard Category 5e or 6 cabling to 100M switch ports in the telecom room.
  • Gigabit Fibre Channel business is pretty much non-existent now. It was quickly replaced by 2G and has progressed through 4G and 8G is expected to become the volume application this year. Look for more on Fibre Channel in future posts.
  • Avago Technologies and Finisar top the list of vendors for 1000BASE-SX devices. JDSU has all but disappeared from the scene, mainly because it has de-emphasized this business in favor of its telecom products. In fact, rumor has it that JDSU has been shopping its datacom transceiver business for some time.
A note on JDSU: It appears that the optical components giant has taken the technology that was developed at IBM, E2O and Picolight and thrown it away. Picolight was once a leader in parallel optics and, along with E2O, long-wavelength VCSELs. IBM pioneered the v-groove technology and the oxide layer that enabled the next leap in speed and improved reliability for 850nm VCSELs. All of these technologies now look destined to die a slow, painful death after being acquired by JDSU. The company’s attention is clearly focused on its tunable technology and telecom applications, which is where, of course, it started. JDSU has never had a good reputation for assimilating acquisitions, so none of this should be a surprise. I was optimistic when JDSU bought these companies, thinking that these emerging technologies would now be supported by a larger pocketbook. What is the reasoning for JDSU de-emphasizing the technologies it acquired? Is it trying to get rid of short-reach competition in hopes that all optical networking will move toward long-wavelength devices? That would be naïve; the likes of Finisar, Avago, MergeOptics and others would still be supporting 850nm optics, and there remains a healthy market for them in enterprise networks and data centers - albeit a very competitive one, as stated above.


Wednesday, August 4, 2010

Optical Engines

I was reviewing some research I recently conducted for the Optical Interconnect report I wrote for CIR and realized that I hadn’t yet “blogged” about what I would consider some exciting new product directions that many optical components suppliers are taking. We’ve been talking about optical integration for many years and some companies, like Infinera, have actually implemented it into their real-world products. But there are more cases of this than ever before and I think we’re on the brink of some true industry breakthroughs using what many have deemed “optical engines.”

Here is a summary of the component companies and their associated optical engine products:
  • BinOptics – it uses its InP PICs to build "custom integrated microphotonics solutions" for its customers
  • ColorChip – its silicon photonics is at the center of its 40G QSFP modules
  • Lightwire – its Opto-electronic Application Specific Integrated Subsystem (OASIS) promises low power and higher density
  • MergeOptics/FCI – OptoPack is at the center of its 10G and above transceiver designs
  • Reflex Photonics – LightAble is the building block for its transceiver modules
  • Santur – DFB/waveguide architecture has promise for not only tunable lasers, but many different optical interconnects
So what’s the big deal? In the past, optical integration was a science project looking for an application. Now, these companies are leveraging their research to create products such as QSFP modules or tunable transceivers that are selling today. So even though these technologies could make transceivers tiny, the companies package them in standard form factors to develop a revenue stream, in hopes that the technology can truly be used for miniature devices in the near future. A pretty smart business plan, I think – especially since we’ve already seen a glimpse of the miniaturized products with Avago’s MicroPOD, Intel’s Light Peak and Luxtera’s OptoPhy, which can also be considered optical engines and which are supposedly on the cusp of true adoption into active equipment.


Monday, August 2, 2010

When Does Passive Optical LAN Make Sense?

Are you purchasing a Transparent LAN service? Then you probably want to consider POL. Inter-building Transparent LANs often have distance limitations, which are currently overcome by installing significantly more expensive 1310 and 1550nm transceivers. In an active network, these higher-cost modules are needed on both ends of every connection, and where seven to eight buildings are involved, the dollars spent can add up quickly.

With one long-reach transceiver needed at the central office (or fed out of an enterprise data center), POLs can offer significant savings in multi-building campus environments. It is important to note how much more expensive 1550nm modules are compared to their 850nm counterparts. At 10-Gigabit, a 10GBASE-SR (850nm) optical module costs approximately $1300/port (switch + transceiver cost). A comparable 10GBASE-ER (1550nm) longer-reach device that is needed for an inter-building connection costs around $11,000/port (switch + transceiver), or nearly ten times as much. When connecting multiple buildings in a campus setting, these costs add up quickly, and a POL network can be a much more economical solution. The POL system uses 1310/1550nm diplexer optics and, while more expensive than 850nm, can still cover entire campuses at a fraction of the cost of the 1550nm Ethernet-based transceivers. And, since the signal from these devices can be split to as many as 64 users instead of one, the cost per end user is drastically reduced.
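To put rough numbers on that, here is a minimal back-of-the-envelope sketch (in Python) using the approximate per-port prices above. The POL long-reach port cost and the 1:64 split are illustrative assumptions for the comparison, not vendor pricing, and the ONU-side optics are left out for simplicity.

# Rough cost-per-end-user comparison for inter-building links.
# Prices are the approximate figures from the post; the POL long-reach
# port cost is a hypothetical placeholder, not a quoted price.

ER_PORT = 11_000               # 10GBASE-ER, switch + transceiver, approx. $
POL_LONG_REACH_PORT = 11_000   # assumed comparable long-reach POL head-end port
SPLIT_RATIO = 64               # one POL port can be split to as many as 64 users

# Active Ethernet: the long-reach optics are needed at both ends of each link.
ethernet_cost_per_user = 2 * ER_PORT

# POL: the head-end long-reach optics are shared across the split
# (ONU-side optics omitted for simplicity).
pol_cost_per_user = POL_LONG_REACH_PORT / SPLIT_RATIO

print(f"Point-to-point ER, per user:      ${ethernet_cost_per_user:,.0f}")
print(f"POL with 1:{SPLIT_RATIO} split, per user: ${pol_cost_per_user:,.0f}")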

Passive optical LANs are being touted by their equipment suppliers as THE most cost-effective solution for medium-to-large enterprises. According to Motorola, you can save more than 30-percent on your network infrastructure, and as your number of users increases, so do your savings.

In our recent research for our POL report, we found that there is a subset of vertical markets – specifically, not-for-profits – that may be ripe to implement this disruptive technology. But how does this affect the data center network?

We’ve done our own cost analysis, and the reason POL is so cost effective compared to a traditional switched-Ethernet network is that you can eliminate a lot of copper and MMF cabling as well as workgroup switches. But in the data center, you still need to connect to your WAN routers. With a POL, you could cover as many as 96 end users with one 4-port blade in an enterprise aggregation switch and ONE 10G uplink port to the WAN router. The equivalent switched-Ethernet network would need four workgroup switches connected to a core switch through 12 Gigabit Ethernet uplink ports and TWO 10G uplink ports from the core switch to the WAN router. So by installing POL, you may be able to cut your router uplink ports in half (a quick tally follows below). I wouldn’t mind saving tens of thousands of dollars on router ports – would you?
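As a quick sanity check on those port counts, the comparison can be tallied in a few lines; the figures simply restate the 96-user example in this post and are illustrative, not a full bill of materials.

# Uplink/port tallies for the 96-user example above (illustrative only).

USERS = 96

pol = {
    "aggregation_blade_ports": 4,   # one 4-port blade covers all 96 users
    "router_10g_uplinks": 1,
}
switched_ethernet = {
    "workgroup_switches": 4,
    "gige_uplinks_to_core": 12,
    "router_10g_uplinks": 2,
}

saved_router_ports = switched_ethernet["router_10g_uplinks"] - pol["router_10g_uplinks"]
print(f"10G WAN router uplink ports saved per {USERS} users: {saved_router_ports}")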

Of course, this is all assuming a totally non-blocking architecture, which, in reality, isn’t necessarily needed. A switched-Ethernet oversubscribed network covering a 132-user, 4-floor building is still less expensive than a POL. For the details, see our POL report.

Friday, July 30, 2010

Laser-optimized Multi-mode Fiber (LOMF)

It occurred to me as I was writing the last post that many of you may not be aware of the different grades of multi-mode fiber and that for the purposes of this blog, it would be good to present the differences so many of my points can be thoroughly understood.

Right now, there are three standardized types of LOMF as well as what most of us in the industry call FDDI-grade fiber, which is not laser optimized. So first, what does laser-optimized actually mean? In basic terms, it just means that the fiber was designed to be used with lasers - in the case of MMF, typically VCSELs. FDDI-grade fiber pre-dated the use of VCSELs, so it is not laser optimized; it was intended for use with LEDs. Lasers were adopted as the light source of choice when scientists and engineers realized that LEDs became very unstable when modulated at data rates beyond 100 Mbps. They originally tried to use the same lasers that were used in CD players, but these turned out to be unstable at Gigabit data rates as well. In the early 1990s, the development of the VCSEL enabled these higher data rates.

As the light sources evolved, the fiber progressed with them. So, for 850nm operation today we have four choices: 
  1. OM1 (FDDI): Minimum OFL Bandwidth of 200 MHz•km; 10G Minimum Optical Reach of 33m
  2. OM2: Minimum OFL Bandwidth of 500 MHz•km; 10G Minimum Optical Reach of 82m
  3. OM3: Minimum OFL Bandwidth of 1500 MHz•km; 10G Minimum Optical Reach of 300m
  4. OM4: Minimum OFL Bandwidth of 3500 MHz•km; 10G Minimum Optical Reach of 550m
As you can see, the bandwidth of the fiber is intimately tied to the type of light source used, and the optical reach depends on both bandwidth and data rate. And, while OM1 fiber wasn’t necessarily designed to be used with lasers, it works fine with them, albeit at shorter distances than LOMF. Of note as well is the fact that a few cable manufacturers also provide what I would call OM1+ cable, which is 62.5-micron but laser optimized, so it may have somewhat improved bandwidth and reach.
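For anyone who wants to fold those figures into a quick planning check, here is a short sketch that encodes the four grades above as a lookup table. The values are the nominal minimums from the list, and the helper function is only an illustration - it is no substitute for a proper link budget or the cabling standards themselves.

# The four 850nm fiber grades listed above as a small lookup table.
# Values are nominal minimums, not a design specification.

FIBER_GRADES = {
    "OM1": {"ofl_bw_mhz_km": 200,  "min_10g_reach_m": 33},
    "OM2": {"ofl_bw_mhz_km": 500,  "min_10g_reach_m": 82},
    "OM3": {"ofl_bw_mhz_km": 1500, "min_10g_reach_m": 300},
    "OM4": {"ofl_bw_mhz_km": 3500, "min_10g_reach_m": 550},
}

def grades_supporting_10g(link_length_m):
    """Return the grades whose minimum 10G reach covers the given link length."""
    return [grade for grade, specs in FIBER_GRADES.items()
            if specs["min_10g_reach_m"] >= link_length_m]

print(grades_supporting_10g(150))   # ['OM3', 'OM4']
print(grades_supporting_10g(400))   # ['OM4']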

All this leads to a very important point – when specifying a cabling system for your networks and data centers, it is important to understand not only the fiber you’re going to install, but also the equipment you’re trying to connect. Just because you're "only" installing Gigabit systems and you've used OM1 fiber for years, doesn't mean it's the best solution (or even the most economical) for today and tomorrow.

Wednesday, July 28, 2010

Other 10GigE Transceiver Markets – ER, LX4 and LRM

Some of you may have noticed that in my last post I neglected to talk about three other 10-Gigabit Ethernet variants – LX4, LRM and ER. That’s because the content was already long enough and I wanted to focus on the volume data center applications. Now, I’ll discuss the others.

10GBASE-ER runs over SMF at 1550nm for up to 40km. While there are a few service providers that might choose to do this, the vast majority of them choose WDM through the OTN instead. There may be some private networks that need this variant as well, which is what prompted the IEEE to include it in the standard. At over $4000, these transceivers are priced out of typical budgets for the average enterprise.

10GBASE-LX4 was originally introduced to address the installed base of FDDI-grade (not laser-optimized) MMF, but it can also be used with higher grades of LOMF as well as SMF. It uses CWDM to send four wavelengths (thus the X4) between 1269.0 and 1355.9nm, each running at a data rate of 2.5 Gbps. LX4 modules are available in both X2 and XENPAK form factors and from at least a couple of sources, such as Emcore and Hitachi Cable. As you can imagine, because these devices have four lasers and four detectors with their associated electronics, they cost appreciably more than either SR or LR transceivers. The module alone retails for about $2000, which means the per-port cost would probably be around $2500. But you most likely wouldn’t fill your switch with these modules, of course; you would only use them where you want to re-use existing installed fiber.

In response to the high-priced LX4, the 10GBASE-LRM variant was developed. It was enabled by chip companies such as Clariphy and Scintera with new electronic dispersion compensation (EDC) technology that could push 10G serial signals to longer distances on MMF. The standard took a while to develop, and in the meantime LX4 captured much of the market LRM was intended for. However, once LRM products were released and supported by the top-tier transceiver manufacturers (Avago, Finisar, Opnext), they took much of that business back from LX4. LRM modules are now available in SFP+ packages as well, which clearly indicates the vendors think there will be an ongoing market for them.

One note of caution – if you intend to use either LX4 or LRM modules, make sure both ends of your channel use the same variant; otherwise the link won’t work.

Monday, July 26, 2010

Cost Effective 10G Data Center Networks

For networks running Gigabit Ethernet, it’s a no-brainer to use Category 5e or 6 cabling with low-cost copper switches for less than 100m connections because they are very reliable and cost about 40-percent less per port than short-wavelength optical ones. But for 10G, there are other factors to consider.

While 10GBASE-T ports are now supposedly available on switches (at least the top Ethernet switch vendors, Brocade, Cisco and Extreme Networks say they have them), is it really more cost effective to use these instead of 10GBASE-SR, 10GBASE-CX4 or 10GBASE-CR (SFP+ direct attach copper)? Well, again, it depends on what your network looks like and how well your data center electrical power and cooling structures are designed.

First, 10GBASE-CX4 is really a legacy product and is only available on a limited number of switches, so you may want to rule it out right away. If you’re looking for higher density but can’t support too many high-power devices, I would opt for 10GBASE-SR because it has the lowest power consumption – usually less than 1W/port. In addition, the useful life of LOMF is longer, and the cabling is smaller, so it won’t take up as much space or block cooling airflow if installed under a raised floor.

If you don’t have a power or density problem and can make do with just a few 10G ports over a short distance, you may choose to use 10GBASE-CR (about $615/port). But if you need to go longer than about 10m, you’ll still need 10GBASE-SR; and if you need a reach of more than 300m, you’ll need to either install OM4 cable (which will get you up to 600m in some cases) for your SR devices or look at 10GBASE-LR modules ($2500/port), which will cost about twice as much as the SR transceivers (about $1300/port). If your reach is less than 100m, you can afford higher power, and you need the density, 10GBASE-T (<$500/port) may be your solution. If you have a mix of these requirements, you may want to opt for an SFP+-based switch so you can mix long and short reaches, and copper and fiber modules/cables, for maximum flexibility.
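Here is a minimal sketch of that decision flow, using the approximate per-port prices above; the thresholds and the helper function are my own simplification for illustration, not a formal selection guide.

# Rough 10G media selection mirroring the reasoning above. Prices and
# thresholds are the approximate figures from the post.

def pick_10g_port(reach_m, power_constrained=False, need_density=False):
    if power_constrained:
        return "10GBASE-SR (~$1300/port, usually <1W/port)"
    if reach_m <= 10 and not need_density:
        return "10GBASE-CR / SFP+ direct attach (~$615/port)"
    if reach_m <= 100 and need_density:
        return "10GBASE-T (<$500/port, but higher power)"
    if reach_m <= 300:
        return "10GBASE-SR (~$1300/port)"
    if reach_m <= 600:
        return "10GBASE-SR over OM4 (up to ~600m in some cases)"
    return "10GBASE-LR (~$2500/port)"

for distance in (5, 80, 250, 500, 2000):
    print(f"{distance:>5} m -> {pick_10g_port(distance)}")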

So what’s the bottom line? Do an assessment of your needs in your data center (and the rest of your network, for that matter) and plan them out in order to maximize the cost effectiveness of your 10G network. One more thing – if you can wait a few months, you may want to consider delaying your 10G implementation until later this year, when most of the 10GBASE-T chip manufacturers promise to have sub-2.5W devices commercially available, which would cut per-port power consumption roughly in half.

Thursday, July 22, 2010

SFP+ Marks a Shift in Data Center Cabling

With the advent of top-of-rack (ToR) switching and SFP+ direct attach copper cables, more data centers are able to quickly implement cost-effective 10G and beyond connections. ToR designs currently take one of two configurations:
  1. GigE Category cabling (CAT5e, 6, or 6A) connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the main distribution area (MDA)
  2. SFP direct attach cabling connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the MDA
Either way, SFP and SFP+ modules and cable assemblies are starting to make huge inroads where Category cabling used to be the norm. Consequently, structured cabling companies have taken their shot at offering the copper variants of these devices. Panduit was one of the first to offer an SFP direct-attach cable for the data center, but Siemon quickly followed suit and surpassed Panduit by offering both the copper and optical versions of the assemblies as well as the parallel-optics QSFP+ AOC. Others rumored to be working on entering this market are Belden and CommScope. This really marks a shift in philosophy for these companies, which traditionally have stayed away from what they considered “interconnect” products. There are a couple of notable exceptions in Tyco Electronics and Molex, however, which offer both types of products.

So what makes these companies believe they can compete with the likes of Amphenol Interconnect, Molex and Tyco Electronics? Well, it might not be that they think they can compete, but that they see some erosion of their patch cord businesses and view this as the only way to make sure the “interconnect” companies don’t get into certain customers. In other words, they are protecting their customer base by offering products they won’t necessarily make any money on – because, after all, many of them are actually private-labeled from the very companies they are trying to oust. Smart or risky? Smart, I think, because it seems to me that the future of the data center will be in short-reach copper and mid-range fiber in the form of laser-optimized multi-mode fiber (LOMF).


Tuesday, July 20, 2010

Common Electrical Interface for 25/28G – a Possibility or a Pipe-Dream?

Yesterday, I sat through a workshop hosted by the Optical Internetworking Forum (OIF) on its “Common Electrical Interface (CEI) project for 28G Very Short Reach (VSR).” What quickly became clear to me was that I was in a room of VERY optimistic engineers.

I sat through presentations characterized as “Needs of the Industry,” which consisted of content from the leaders of the IEEE 802.3ba (40/100G Ethernet), T11.2 (Fibre Channel) and InfiniBand standards groups. Yet all of these representatives were careful to state that they were presenting their own opinions and not those of their standards groups, which I found odd since most of what they showed came directly from the standards. Legalities, I guess. I also noticed that they never really cited any independent market research or analysis of what the “needs of the industry” actually are. For instance, one speaker said that InfiniBand needs 26AWG, 3-meter copper cable assemblies for 4x25G operation in order to keep costs down within a cabinet. Yet he did not present any data or even mention which customers are asking for this. Maybe the demand exists, but to me, unless it is presented, the case for it is weak. I do have evidence directly from some clustering folks that they are moving away from copper in favor of fiber for many reasons – lower power consumption, lighter cabling, higher density, and more room in cabinets, pathways and spaces.

Today, data center managers are really still just starting to realize the benefits of deploying 10G, which has yet to reach its market potential. I understand that standards groups must work on future data rates ahead of broad market demand, but this seems extremely premature. None of the current implementations for 40/100G that use 10G electrical signaling have even been deployed yet (except for maybe a few InfiniBand ones). And, from what I understand from at least one chip manufacturer that sells a lot of 10G repeaters to OEMs for their backplanes, it is difficult enough to push 10G across a backplane or PCB. Why wouldn’t the backplane and PCB experts solve this issue that is here today before they move on to trying to solve a “problem” that doesn’t even exist yet?

Maybe they need to revisit optical backplanes for 25G? It seems to me that 25G really won't be needed any time soon and that their time would be better spent developing something with relevancy beyond 25G. Designing some exotic DSP chip that allows 25G signals to be transmitted over four to 12 inches of PCB and maybe 3m of copper cable for a single generation of equipment may not be very productive. Maybe this is simpler than I anticipate, but then again, there was a similar though somewhat more complicated problem with 10GBASE-T, and look how that turned out...

Friday, July 16, 2010

40/100G – Paradigm Shift to Parallel Optics?

Parallel optics have been around for more than a decade – remember SNAP12 and POP4? These were small 12- and four-fiber parallel-optic modules developed for telecom VSR applications. They never really caught on for Ethernet networks, though. Other than a few CWDM solutions, volume applications for datacom transceivers have been serial short-wavelength ones. At 40G, this is changing.

High performance computing (HPC) centers have already adopted parallel optics at 40 and 120G using InfiniBand (IB) 4x and 12x DDR. And they are continuing this trend through their next data rate upgrades – 80 and 240G. While in the past I thought of HPC as a small, somewhat niche market, I now think this is shifting due to two major trends:

  • IB technology has crossed over into 40 and 100-Gigabit Ethernet in the form of active optical cable assemblies as well as CFP and CXP modules.
  • More and more medium-to-large enterprise data centers are starting to look like HPC clusters, with masses of parallel processing.
Many of the top transceiver manufacturers, including Avago Technologies and Finisar, as well as some startups, have released several products in the last year to support these variants with short-reach solutions. The initial offerings are AOC products using QSFP+ and CXP form factors, both of which use VCSEL and PIN arrays. At least one, Reflex Photonics, has released a CFP module that also uses these devices. To date, the only other transceiver product that seems to be available is the QSFP+ 40G module from MergeOptics, which is a natural extension of its QSFP AOCs. These products are already being deployed for IB systems and are planned for the initial 40G Ethernet networks as well.

Once parallel-optics based transceivers are deployed for 40/100G networks, will we ever return to serial transmission?