
Thursday, October 14, 2010

SFP+ Markets

SFP+ is currently used primarily in two markets – 10G Ethernet and 8G Fibre Channel. The Fibre Channel market is almost exclusively optical SFP+ modules, while the 10G Ethernet market is expected to see its largest growth in SFP+ direct-attach copper (DAC) products. SFP+ DAC will mainly be used in top-of-rack (ToR) switches that connect to servers within a rack. The multi-million-port forecasts for the next few years are predicated on a high rate of adoption of this architecture in the data center. Even if this topology takes hold, once 10GBASE-T becomes more readily available, SFP+ DAC will have to share the market with that less costly variant. Some OEMs believe quantities of each will be about the same, but I have my doubts. If history tells us anything – and it usually does – once the less expensive BASE-T variant takes hold, it pushes the more costly alternatives to much smaller volumes.

But right now, it is a matter of timing. I am skeptical that 10GBASE-T adoption in switches will be able to keep pace with the port-density needs of the data center in the short term. Both products will be needed in the long term – 10GBASE-T for inexpensive connections, and 10GBASE-CR (SFP+ DAC) for lower latency, lower power consumption and more flexibility. Currently, if you want both copper and fiber ports on your 10G switch, you need to use SFP+, because no switch yet offers both 10GBASE-T and optical ports.

Thursday, September 16, 2010

10GBASE-T versus 10GBASE-SR – Tradeoffs in Costs Make the Difference

The 10GBASE-T standard was finalized some four years ago now, but, as I’ve mentioned before, equipment using these ports is only just starting to be deployed in actual networks – mainly because you couldn’t get a switch with these ports on it. So early implementations of 10G have been 10GBASE-SR, 10GBASE-LR or 10GBASE-LRM, with the vast majority now being SR. But now that switch manufacturers like Blade Networks, Cisco and Extreme Networks have products offering up to 24 ports of 10GBASE-T, the market dynamics may change.

With Ethernet, history tends to repeat itself, so let’s take a minute to review what happened at Gigabit. Early products were 1000BASE-CX, SX and LX because the 100m 1000BASE-T had not yet been standardized. But as soon as it was, and switch manufacturers started adopting it, it took over the shorter-reach Gigabit Ethernet market. In fact, it still dominates today.

So, why would 10GBASE-T be any different? Well, my belief is that eventually, it won’t be. Data center managers’ concerns have shifted from space to power availability per rack and cooling hot spots, but when they see the price difference between SR and T (still about 2:1 per installed port), it makes them pause and take another look at T. So although data center managers want to reduce their headaches with fat CAT6A cables, most are still not willing to pay that much more for the optical solution until distance forces them to. And even though the T ports may push electricity bills up, for most, the increase isn’t significant enough to justify the up-front cost of SR.
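To put rough numbers on that, here’s a back-of-the-envelope sketch of the payback math. The 2:1 price ratio comes from above; the specific port costs, wattages and electricity rate are my own illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope: do 10GBASE-SR's power savings ever repay its
# up-front premium over 10GBASE-T? All inputs are rough assumptions.

SR_PORT_COST = 1000.0    # assumed installed cost per SR port, USD
T_PORT_COST = 500.0      # assumed installed cost per T port (~2:1 ratio)
SR_WATTS = 1.0           # SR transceivers typically draw under 1 W
T_WATTS = 5.0            # assumed draw of a first-generation T PHY
KWH_RATE = 0.10          # assumed electricity price, USD per kWh
HOURS_PER_YEAR = 8760

extra_watts = T_WATTS - SR_WATTS
extra_usd_per_year = extra_watts / 1000.0 * HOURS_PER_YEAR * KWH_RATE
capital_gap = SR_PORT_COST - T_PORT_COST

print(f"T's extra power costs ${extra_usd_per_year:.2f} per port per year")
print(f"SR's up-front premium is ${capital_gap:.0f} per port")
print(f"Payback from power savings alone: {capital_gap / extra_usd_per_year:.0f} years")
```

Even if you double the wattage gap to account for cooling overhead, the payback runs to decades – which is why the up-front price difference dominates the decision for runs under 100m.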

Monday, July 26, 2010

Cost Effective 10G Data Center Networks

For networks running Gigabit Ethernet, it’s a no-brainer to use Category 5e or 6 cabling with low-cost copper switches for connections under 100m: they are very reliable and cost about 40 percent less per port than short-wavelength optical ones. But for 10G, there are other factors to consider.

While 10GBASE-T ports are now supposedly available on switches (at least the top Ethernet switch vendors – Brocade, Cisco and Extreme Networks – say they have them), is it really more cost-effective to use these instead of 10GBASE-SR, 10GBASE-CX4 or 10GBASE-CR (SFP+ direct-attach copper)? Well, again, it depends on what your network looks like and how well your data center’s electrical power and cooling structures are designed.

First, 10GBASE-CX4 is really a legacy product and is only available on a limited number of switches, so you may want to rule it out right away. If you’re looking for higher density but can’t support many high-power devices, I would opt for 10GBASE-SR because it has the lowest power consumption – usually less than 1W per port. The useful life of laser-optimized multimode fiber (LOMF) is also longer, and the cable is smaller, so it won’t take up as much space or block cooling airflow if installed under a raised floor.

If you don’t have a power or density problem and can make do with just a few 10G ports over a short distance, you may choose 10GBASE-CR (about $615/port). But if you need to go longer than about 10m, you’ll still need 10GBASE-SR; and if you need a reach of more than 300m, you’ll need to either install OM4 cable (which will get you up to 600m in some cases) for your SR devices, or look at 10GBASE-LR modules ($2500/port), which cost about twice as much as SR transceivers (about $1300/port). If your reach is under 100m, you can afford the higher power, and you need the density, then 10GBASE-T (<$500/port) may be your solution. And if you have a mix of these requirements, opt for an SFP+-based switch so you can mix long and short reaches – and copper and fiber modules/cables – for maximum flexibility.
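Since that paragraph is essentially a decision tree, here is a minimal sketch of the selection logic in Python. The reach thresholds and per-port prices are the rough figures quoted above; the function and its inputs are purely illustrative:

```python
def pick_10g_phy(reach_m, power_constrained, need_density):
    """Rough 10G media selection following the tradeoffs above.
    Prices are the approximate per-port figures from the post;
    treat them as illustration, not a quote sheet."""
    # Beyond 300m you need better fiber or longer-reach optics.
    if reach_m > 300:
        return "10GBASE-SR over OM4 (up to ~600m), or 10GBASE-LR (~$2500/port)"
    # Power-limited racks favor SR's sub-1W-per-port draw.
    if power_constrained:
        return "10GBASE-SR (~$1300/port)"
    # A few short in-rack links: direct-attach copper is the cheap option.
    if reach_m <= 10 and not need_density:
        return "10GBASE-CR / SFP+ DAC (~$615/port)"
    # High density under 100m with power to spare: T wins on price.
    if reach_m <= 100 and need_density:
        return "10GBASE-T (<$500/port)"
    return "10GBASE-SR (~$1300/port)"

print(pick_10g_phy(7, power_constrained=False, need_density=False))    # DAC
print(pick_10g_phy(80, power_constrained=False, need_density=True))    # T
print(pick_10g_phy(250, power_constrained=True, need_density=True))    # SR
```

If your requirements span several of these branches, the SFP+-based switch is the flexible answer, since you can mix copper and optical modules port by port.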

So what’s the bottom line? Assess the needs of your data center (and the rest of your network, for that matter) and plan them out in order to maximize the cost-effectiveness of your 10G network. One more thing – if you can wait a few months, you may want to delay your 10G implementation until later this year, when most of the 10GBASE-T chip manufacturers promise to have sub-2.5W devices commercially available, roughly halving per-port power consumption.
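For a sense of what that halving would mean at the switch level, here is a quick illustrative calculation. The 5W-per-port starting point follows from the “about half” claim above; the electricity rate is my own assumption:

```python
# Annual PHY energy for a 24-port 10GBASE-T switch, before and after
# the promised sub-2.5W parts. Inputs are rough assumptions.
PORTS = 24
KWH_RATE = 0.10      # assumed USD per kWh
HOURS = 8760         # hours in a year

for watts_per_port in (5.0, 2.5):   # today's parts vs. promised parts
    kwh = PORTS * watts_per_port / 1000.0 * HOURS
    print(f"{watts_per_port} W/port: {kwh:.0f} kWh/yr (~${kwh * KWH_RATE:.0f}/yr)")
```

Small numbers per switch, but multiplied across a data center full of racks, the halving matters for the power and cooling budgets discussed above.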

Tuesday, July 20, 2010

Common Electrical Interface for 25/28G – a Possibility or a Pipe-Dream?

Yesterday, I sat through a workshop hosted by the Optical Internetworking Forum (OIF) on its “Common Electrical Interface (CEI) project for 28G Very Short Reach (VSR).” What quickly became clear to me was that I was in a room of VERY optimistic engineers.

I sat through presentations characterized as “Needs of the Industry,” with content from the leaders of the IEEE 802.3ba (40/100G Ethernet), T11.2 (Fibre Channel) and InfiniBand standards groups. Yet all of these representatives were careful to state that they were presenting their own opinions and not those of their standards groups, which I found odd since most of what they showed came directly from the standards. Legalities, I guess. I also noticed that they never really cited any kind of independent market research or analysis of what the “needs of the industry” actually were. For instance, one speaker said that InfiniBand needs 26AWG, 3-meter copper cable assemblies for 4x25G operation in order to keep costs down within a cabinet. Yet he did not present any data, or even mention which customers are asking for this. Maybe the demand exists, but unless it is presented, the case for it is weak. I do have evidence directly from some clustering folks that they are moving away from copper in favor of fiber for many reasons: lower power consumption, lighter cabling, higher density, and more room in cabinets, pathways and spaces.

Today, data center managers are still just starting to realize the benefits of deploying 10G, which has yet to reach its market potential. I understand that standards groups must work on future data rates ahead of broad market demand, but this seems extremely premature. None of the current 40/100G implementations that use 10G electrical signaling have even been deployed yet (except for maybe a few InfiniBand ones). And from what I understand from at least one chip manufacturer who sells a lot of 10G repeaters to OEMs for their backplanes, it is difficult enough to push 10G across a backplane or PCB. Why wouldn’t the backplane and PCB experts solve this issue, which is here today, before moving on to a “problem” that doesn’t even exist yet?

Maybe they need to revisit optical backplanes for 25G? It seems to me that 25G won’t really be needed any time soon, and that their time would be better spent developing something with relevance beyond 25G. Designing some exotic DSP chip that allows 25G signals to be transmitted over four to 12 inches of PCB and maybe 3m of copper cable, all for one generation of equipment, may not be very productive. Maybe this is simpler than I anticipate, but then again, there was a similar but slightly more complicated problem with 10GBASE-T, and look how that turned out...