Wednesday, September 29, 2010

DARPA and NIST Funded Optical Interconnect Research Projects (Part 1)

While many private sector investments have been made in optical communications over the years, none have matched the ongoing support provided by the US federal government. The Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) have been at the forefront of funding for decades and are continuing to ante up now.

Here I review some of these projects and how they might impact short-reach optics:

TERAPIC™: The goal of the NIST-funded TERAPIC (Terabit Photonic Integrated Circuits) project is to "develop technology for optical components that can transmit and receive up to one terabit of data per second over one single-mode fiber, greatly reducing complexity and cost in high-capacity fiber-optic networks."

CyOptics and Kotura have partnered on this project to bring Terabit connectivity to data centers and HPC centers by reducing hundreds of individual components "to less than 10." The project is expected to produce an integrated component that can "transmit and receive up to one Tbps of data over one single-mode fiber across transmission distances of up to two-kilometers." The project team has been successful with 100G and 500G prototype devices that transmit up to two kilometers and continues to work toward Terabit devices.

The PICs will be monolithic arrays of CWDM lasers and detectors that will be automatically assembled in CyOptics' manufacturing facility in Breinigsville, PA. Kotura has developed two integrated silicon photonics chips (one for data multiplexing/transmission, one for data demultiplexing/detection) that are integrated into the overall transmit optical sub-assemblies (TOSAs) and receive optical sub-assemblies (ROSAs) from CyOptics.

The TERAPIC project is set to be completed in 2010, so products are expected to be released in 2011. However, early adopter customers have yet to be identified, so actual revenue-generating opportunities may still be several years off, especially since the devices are targeted for Terabit Ethernet, which has not even started on its standardization track yet. Still, the 100G and 500G parts that were developed could be made production-worthy in the interim.

More on DARPA and NIST-funded projects will be discussed in the coming days.

Monday, September 27, 2010

Transitioning Your Data Center from Copper to Fiber

Companies like Corning like to tell data center managers that with the advent of 10G, they should be transitioning their network from mostly copper connections to all fiber-optic. But, as many of you probably know, this is easier said than done. There are many things to consider:
  1. How much more is it going to cost me to install fiber instead of copper?
  2. Do I change my network architecture to ToR in order to facilitate using more fiber? What are the down sides of this?
  3. Is it really cost-effective to go all-optical?
  4. What is my ROI if I do go all-optical?
  5. Will it really help my power and cooling overload if I go all-optical?
It is very difficult to get specific answers for a particular data center because each one is different, and guidelines from industry vendors may be skewed based on what they are trying to sell you. OFC/NFOEC management has recognized this and asked me to teach a new short course for their 2011 conference. Data Center Cabling – Transitioning from Copper to Fiber will be part of the special symposium Meeting the Computercom Challenge: Components and Architectures for Computational Systems and Data Centers. I invite your ideas on specifics you would like to see covered in this new short course.


Thursday, September 23, 2010

25G/40G VCSELs Driving Short-reach Optical Interconnects

Just a few years ago, laser designers were struggling with the stability of their 10G VCSELs. But now at least one, VI Systems GmbH, claims it will have production-ready 40G VCSELs within the next few years. The German start-up has developed two products it believes will take VCSELs beyond 10G applications: a directly-modulated (DM) device and an electro-optic modulated (EOM) DBR VCSEL. Both are short-wavelength (850nm) lasers.

In a recent press release, VI Systems explains that it “developed the VCSEL products at a wavelength of 850 nm along with a range of extremely fast integrated circuits based on the SiGe BiCMOS (silicon-germanium bipolar junction transistors in complementary metal-oxide-semiconductor) technology. The company uses a patent pending micro-assembly platform for the integration of the opto-electrical components and for alignment to a standard high performance multi-mode glass-based fiber.” The start-up has been presenting data supporting its claims of highly stable devices for more than a year now. It gets there by changing the laser active region material and structure to InAs quantum dot (QD).

Not only is VI Systems working on innovative laser structures, it has also developed new electro-optic integration methods to further reduce the cost of these devices.

I’ve noted in previous posts how VCSELs are the key to low-cost optical networks in the data center. These new VCSELs and packaging methods would bring an even more cost-effective “serial” solution for 40/100G. They could also be used for very short-reach optical connections like chip-to-chip, on-board or board-to-board links. Perhaps these inventive products will rival Avago’s MicroPOD and Luxtera’s OptoPHY (also covered in previous posts). Based on the presentations that VI Systems has released, it sure appears that its management completely understands the needs of both the data center and optical interconnect markets, and so could very well give incumbents in the industry some competition.

Tuesday, September 21, 2010

Intel’s Light Peak OR USB 3.0?

After Intel’s Developer Forum last week, there is renewed interest in Light Peak. For those of you that don’t remember one of my first blog posts, Light Peak is an Intel-developed optical interconnect technology that uses a new controller chip with LOMF and a 10G 850nm VCSEL-based module with a new optical interface connector that looks very similar to the one used in Avago Technologies’ MicroPOD transceiver. Light Peak is aimed at replacing all of the external connectors on your PC, including USB, IEEE 1394, HDMI, DP, PCIe, etc. It is also targeted at other consumer electronic devices like smart phones and MP3 players.

Many in the industry think Light Peak is intended to replace USB 3.0 even before USB 3.0 is finished being standardized. I tend to disagree. USB 3.0 runs at a 5G data rate and, to me, will bridge the gap between existing USB 2.0 (480 Mbps max) and the 10G that Light Peak can provide. Just because they are being developed at the same time doesn’t mean they will make it to production simultaneously.
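To put those rate differences in perspective, here is a back-of-the-envelope sketch (my own illustration, not from Intel or the USB-IF) of best-case transfer times for a large file at each link's raw signaling rate; the file size is an assumption for illustration:

```python
# Rough, best-case transfer times at raw link rates. This ignores encoding
# and protocol overhead (e.g., 8b/10b), so real-world throughput is lower.
links_gbps = {
    "USB 2.0": 0.48,     # 480 Mbps
    "USB 3.0": 5.0,      # 5 Gbps
    "Light Peak": 10.0,  # 10 Gbps
}

file_gigabytes = 25.0  # assumed example: an HD video project

for name, gbps in links_gbps.items():
    seconds = file_gigabytes * 8 / gbps  # gigabytes -> gigabits, then divide by rate
    print(f"{name:>10}: {seconds:7.1f} s")
```

The gap between USB 2.0 and either 5G or 10G link is dramatic; the gap between 5G and 10G, a factor of two, is the kind of difference a video editor would notice but a typical PC user might not.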

While Intel is now saying that 2011 will be the year for Light Peak to take off, I’m still skeptical. There may be some really high-end applications like video editing that need this bandwidth, but your run-of-the-mill PC user isn’t going to want to pay extra for it when you probably won’t be able to actually detect the improvement. And, what might be more important: what kind of power consumption difference is there, and how does this affect battery life? Or is this technology not meant for laptops? I’m not sure these questions have been sufficiently answered yet.

Thursday, September 16, 2010

10GBASE-T versus 10GBASE-SR – Tradeoffs in Costs Make the Difference

The 10GBASE-T standard was finalized some four years ago now, but, as I’ve mentioned before, equipment using these ports is only just starting to be deployed in actual networks. The main reason is that you couldn’t get a switch with these ports on it. So early implementations of 10G are 10GBASE-SR, 10GBASE-LR or 10GBASE-LRM, with the vast majority now being SR. But now that switch manufacturers like Blade Networks, Cisco and Extreme Networks have products offering up to 24 ports of 10GBASE-T, the market dynamics may change.

With Ethernet, history tends to repeat itself so let’s take a minute to review what happened at Gigabit. Early products were 1000BASE-CX, SX and LX because the 100m 1000BASE-T had not yet been standardized. But, as soon as it was and the switch manufacturers started adopting it, it took over the shorter-reach Gigabit Ethernet market. In fact, it still dominates today.

So, why would 10GBASE-T be any different? Well, my belief is that eventually, it won’t be. Even though data center managers’ concerns have shifted from space to power availability per rack and cooling hot spots, when they see the price tag difference between SR and T (still about 2:1 per installed port), it causes them to pause and rethink the T scenario. So although data center managers want to reduce their headaches with fat CAT6A cables, most are still not willing to pay that much more for the optical solution until distances force them to. Even though the T ports may push electricity bills up, for most, the increase isn’t significant enough to justify the up-front cost of SR.
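A toy break-even model makes the point. All the numbers below are my own illustrative assumptions (port prices, T's extra draw, electricity rate, facility overhead), not figures from any vendor; plug in your own quotes to see how your data center compares:

```python
# Toy break-even model: how long does 10GBASE-T's extra electricity take
# to eat up 10GBASE-SR's up-front premium? All inputs are assumptions.
sr_cost_per_port = 1000.0   # assumed installed cost, USD (~2:1 vs T per the post)
t_cost_per_port = 500.0
t_extra_watts = 3.0         # assumed extra draw of a T port vs an SR port
kwh_price = 0.10            # assumed USD per kWh
pue = 2.0                   # assumed facility multiplier for cooling overhead

hours_per_year = 24 * 365
extra_kwh_per_year = t_extra_watts * pue * hours_per_year / 1000.0
extra_cost_per_year = extra_kwh_per_year * kwh_price  # ~ $5/port/year

capex_premium = sr_cost_per_port - t_cost_per_port
breakeven_years = capex_premium / extra_cost_per_year

print(f"T's extra electricity: ${extra_cost_per_year:.2f} per port per year")
print(f"Years for SR to pay back its premium: {breakeven_years:.0f}")
```

With these assumptions the payback period runs to decades, far beyond any equipment lifetime, which is exactly why the power argument alone doesn't move most managers to SR.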

Friday, September 10, 2010

10G Copper versus Fiber – Is Power Consumption Really the Issue?

For decades now, fiber has been slated to take over the data networking world, but somehow, some way, copper keeps reinventing itself. But are the ways in which copper can compensate for its lower bandwidth capacity coming to an end at 10G due to what seem to be astronomical power consumption issues? Probably not. I have listened to the rhetoric from the fiber-optic companies for more than five years now, and have conducted my own research to see if what they say is true. Their argument was that at 8 to 14W per port, copper is just too costly. But now that the chips have reduced power consumption to less than 3W per port, 10GBASE-T is a viable data center networking solution. Actually, even at 14W per port it was viable, just not practical for switch manufacturers to incorporate in their designs because they couldn’t get the port density they needed and still have room to cool the devices. That no longer seems to be an issue, as evidenced by the 24-port 10GBASE-T configurations that have been released by all the major players.
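The density argument is simple arithmetic. Here is a quick sketch (my own, using the per-port figures from the paragraph above and an assumed 24-port line card) of how much heat the switch designer has to remove per card at the old versus new PHY power levels:

```python
# Heat per 24-port 10GBASE-T line card at old vs new PHY power levels.
# Per-port wattages come from the post; the 24-port card is an assumption.
ports_per_card = 24

for watts_per_port in (14.0, 3.0):
    card_watts = ports_per_card * watts_per_port
    print(f"{watts_per_port:4.0f} W/port -> {card_watts:5.0f} W per {ports_per_card}-port card")
```

Dissipating hundreds of watts of PHY power in a single line card slot was the real blocker at 14W per port; at under 3W, a dense T card becomes an ordinary thermal design problem.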

I believe decisions on copper versus fiber will also be made around other parameters, such as latency. In a recent study that we released, Data Center Cabling Cost Analysis - Copper Still Has Its Place, we looked at the cost of 10G copper versus fiber and added in the higher power consumption. In a specific example focused on a rack of EoR Cisco switches, copper was still more cost-effective even when considering higher electricity costs.

But our next area to study will be a rack of servers with a ToR switch. In this scenario, the power consumption difference may be enough to justify the cost of installing fiber over copper. The above referenced report and this next research are part of a series of research reports for our Data Center Network Operator Service.

Thursday, September 2, 2010

Why Polarity Matters (Part 2)

If you’ve read my previous posts on the subject, you know that polarity can be a tricky matter and it’s even more complicated when you try to choose it for your data center cabling. You really have to choose based on several factors:

  1. Patch cords – method A has two different patch cords that you have to stock, but the upside is that it’s pretty simple to follow where the signal is going. If you happen to be out of one type of patch cord, you can take the one you have and just flip the fibers as a temporary fix until you can get the other patch cords. Of course this isn’t recommended, but if you’re in a bind and need to get up-and-running right away, it will work. With methods B and C you have the same patch cord on each end, so there’s no need to worry about this, but if you happen to have the wrong cassettes or backbone, nothing will work and you'll have to wait to get the correct ones.
  2. Cassettes and backbone cables – you need to make sure you buy all of one method of polarity or your system won’t work. If you’re concerned about supply, all three polarity methods are available from multiple vendors, but Method A is “preferred” by most.
  3. Upgradability – this is where it can get dicey. Typically your pre-terminated assemblies are running Gigabit applications today, and a few may be running 10G. Any of the polarities will work at these data rates. But when you move to 40/100G, methods A and B have straightforward paths, while C does not. Also, you’ll want to make sure you use the highest grade of LOMF available, which is OM4 – this will give you the best chance of being able to reuse your backbones up to 125m. If you need something longer, you’ll need to go to SMF.
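The three methods can be modeled as permutations of the 12 MPO fiber positions. The sketch below is my own simplified model, not the standard's wording: even positions carry transmit, odd carry receive, and a channel "works" when every transmit lands on a receive position at the far end. Consult TIA-568 for the authoritative rules:

```python
# Toy permutation model of the three polarity methods over a 12-fiber MPO
# channel: patch cord -> backbone -> patch cord. My simplification.
N = 12  # fibers in the MPO backbone

def straight(p):   # Method A backbone, or an A-to-A patch cord
    return p

def inverted(p):   # Method B backbone (key-up to key-up: position 1 -> 12)
    return N - 1 - p

def pair_swap(p):  # Method C backbone, or a crossed (A-to-B) duplex cord
    return p ^ 1   # swap within each duplex pair: 0<->1, 2<->3, ...

def channel_ok(cord1, backbone, cord2):
    """True if every Tx (even) position exits on an Rx (odd) position."""
    return all(cord2(backbone(cord1(p))) % 2 == 1 for p in range(0, N, 2))

# Method A: one crossed cord plus one straight cord completes the flip.
assert channel_ok(pair_swap, straight, straight)
# Method B: inverted backbone with crossed (A-to-B) cords on both ends.
assert channel_ok(pair_swap, inverted, pair_swap)
# Method C: pair-flipped backbone with crossed cords on both ends.
assert channel_ok(pair_swap, pair_swap, pair_swap)
# A mismatched combination (Method B backbone, Method A cords) fails.
assert not channel_ok(pair_swap, inverted, straight)
```

The last assertion is the practical takeaway from point 2 above: mixing components from different methods silently breaks the channel, which is why you buy all of one method.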
If you are thinking about installing pre-terminated cassette-based assemblies now for 10G with an upgrade path to 40 and 100G, you need to consider the polarity method you use. Unlike today's 2-fiber configurations, with one send and one receive, the standards for 40G and 100G Ethernet implementations use multiple parallel 10G connections. While 40/100G equipment vendors will tell you that polarity is not an issue, care must be taken if you want to reuse this installed base.

40G will use four 10G fibers to send and four 10G fibers to receive, while 100G uses either four 25G fibers or ten 10G fibers in each direction. Because 40 and 100G will be using the MPO connector, if the polarity method is carefully chosen, you will be able to reuse your backbone cables. This is enabled by the fact that the IEEE took much care in specifying the system so that you can connect any transmit within a connection on one end of the channel to any receive on the other end.
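The fiber counts above map directly onto MPO connector sizes. This small sketch (my own arithmetic, with lane counts as described in this post) shows why 40G and 4x25G 100G fit a 12-fiber MPO while 10x10G 100G needs the 24-fiber version:

```python
# Fiber counts for parallel-lane Ethernet variants over MPO connectors.
# Lane counts are from the post; the variant labels are for illustration.
def fibers_needed(lanes_per_direction):
    return 2 * lanes_per_direction  # one set of fibers to send, one to receive

variants = {
    "40G (4 x 10G lanes)": 4,
    "100G (4 x 25G lanes)": 4,
    "100G (10 x 10G lanes)": 10,
}

for name, lanes in variants.items():
    fibers = fibers_needed(lanes)
    mpo = "12-fiber MPO" if fibers <= 12 else "24-fiber MPO"
    print(f"{name}: {fibers} fibers -> {mpo}")
```

Note that the 8-fiber variants leave four positions of a 12-fiber MPO unused, which is part of why careful polarity planning lets existing 12-fiber backbones be reused.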

Those selecting fiber to support 10G now and 40G in the near future need to understand what will be involved in transitioning and repurposing their cable plant. To upgrade using method A, you can replace the cassettes with MPO-to-MPO patch panels and MPO-to-MPO patch cords; this enables the flexibility to address moves, adds and changes as well as promoting proper installation best practices. The polarity flip will need to be accomplished in either an A-to-A patch cord or possibly with a key-up/key-down patch panel.

Method B multimode backbone cables can also readily support 40G applications. For a structured cabling approach, method B will still use a patch panel and patch cords, though as with current method B, both patch cords could be A-to-B configuration. While Method C backbones could be used, they are not recommended for 40G as completing the channel involves complex patch cord configurations.

It appears that 100G will use either the 12-fiber (4x25G) or the 24-fiber (10x10G) MPO connector. With transmits in the top row and receives in the bottom row, the connection will still be best made using a standardized structured cabling approach as described above.

There are many suppliers of pre-terminated optical assemblies, including Belden, Berk-Tek, a Nexans Company, CommScope, Corning, Panduit, Siemon, and Tyco Electronics NetConnect, as well as many smaller shops that provide quick-turn assemblies, like Cxtec CablExpress and Compulink.