Monday, October 25, 2010
Interop, New York - Software Show Now
I hadn’t been to Interop since 2003 and I’d never been to the New York show, so I decided this year I would give it a shot. I was woefully disappointed. I was expecting to see great product demonstrations from all the top equipment manufacturers, but instead got meeting requests from software vendors who hadn’t even bothered to check that I cover data centers, optical components, structured cabling and interconnects.
The exhibit hall had only eight rows. Cisco was there, but half of its booth was taken up by channel partners and the other half featured virtual demos rather than actual equipment running. Brocade was there too, but with a much smaller booth and pretty much legacy equipment on a tabletop display. Most telling, of course, were the companies that didn’t participate in the exhibition at all: Extreme Networks and IBM, to name just two.
Some of the programming was interesting, though, and maybe made it worth the travel costs. I sat in on the second day of the Enterprise Cloud Summit and actually got to meet some of the gurus of the cloud computing industry. I also sat in on the “Evaluating New Data Center LAN Architectures” technical session, a panel of equipment manufacturers that had responded to an RFI from Boston Scientific for a data center expansion project. Interesting to note: while Cisco was asked to respond, it did not. The panel consisted of Alcatel-Lucent, Extreme Networks, Force10 and Hewlett-Packard. It is also interesting that the vendors responded with different architectures, some including top-of-rack solutions and others end-of-row.
All-in-all, I think my time would have been better spent staying home and working on my optical components research…
Thursday, September 16, 2010
10GBASE-T versus 10GBASE-SR – Tradeoffs in Costs Make the Difference
The 10GBASE-T standard was finalized some four years ago now, but, as I’ve mentioned before, equipment using these ports is really just starting to be deployed in actual networks. The main reason is that, until recently, you simply couldn’t get a switch with these ports on it. So early implementations of 10G have been 10GBASE-SR, 10GBASE-LR or 10GBASE-LRM, with the vast majority now being SR. But now that switch manufacturers like Blade Networks, Cisco and Extreme Networks have products offering up to 24 ports of 10GBASE-T, the market dynamics may change.
With Ethernet, history tends to repeat itself, so let’s take a minute to review what happened at Gigabit. Early products were 1000BASE-CX, SX and LX because the 100m 1000BASE-T had not yet been standardized. But as soon as it was, and the switch manufacturers started adopting it, it took over the shorter-reach Gigabit Ethernet market. In fact, it still dominates today.
So, why would 10GBASE-T be any different? Well, my belief is that eventually, it won’t be. Even though data center managers’ concerns have shifted from space to power availability per rack and cooling hot spots, the price difference between SR and T (still about 2:1 per installed port) makes them pause and give the T scenario another look. So although data center managers would like to get rid of the headaches of fat CAT6A cables, most are still not willing to pay that much more for the optical solution until distance forces their hand. And even though the T ports may push electricity bills up, for most the increase isn’t significant enough to justify the up-front cost of SR.
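To put rough numbers on that tradeoff, here is a quick back-of-the-envelope sketch in Python. The port prices reflect the roughly 2:1 installed-cost ratio mentioned above, but the power figures, the electricity rate and the five-year horizon are my own illustrative assumptions, so plug in your own numbers.

# Rough per-port cost comparison: 10GBASE-SR vs 10GBASE-T.
# Prices reflect the ~2:1 installed-cost ratio discussed above;
# power draw, electricity rate and lifetime are illustrative assumptions.

def lifetime_cost(port_price, watts, years=5, dollars_per_kwh=0.10):
    """Up-front port cost plus electricity over the port's assumed life."""
    kwh = watts * 24 * 365 * years / 1000.0
    return port_price + kwh * dollars_per_kwh

sr = lifetime_cost(port_price=1000.0, watts=1.0)   # optical, under 1 W/port
t  = lifetime_cost(port_price=500.0,  watts=6.0)   # copper PHY, assumed ~6 W/port

print("10GBASE-SR: $%.0f per port over 5 years, 10GBASE-T: $%.0f" % (sr, t))
# Even with the higher electricity bill, T comes out well ahead on up-front
# cost, which is the pause-and-rethink moment described above.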
Labels: 10G, 10GBASE-SR, 10GBASE-T, Blade Networks, Cisco, data center, Extreme Networks
Monday, August 23, 2010
When a Standard Isn’t Exactly a Standard
I’ve noted in a couple of posts now that equipment manufacturers charge a lot more for the optical modules they sell to end users than they actually pay for them from transceiver suppliers. Considering the pains OEMs go through to “qualify” their vendors, a healthy markup in the early stages of a new product’s adoption can be warranted. But I’m not so sure keeping it at more than 5x what they pay, five years down the road, can be justified. And is it sustainable? Some transceiver manufacturers sell products at gross margins in the 20-percent range, while their biggest customers (the OEMs) enjoy more like 40 percent.
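A quick bit of arithmetic shows why that spread matters. The dollar figure below is invented for illustration; only the roughly 20-percent supplier margin and the 5x resale multiple come from the observations above.

# Illustrative markup math: the $100 cost figure is invented; only the ~20%
# supplier gross margin and the 5x resale multiple come from the post.
supplier_cost  = 100.0
supplier_price = supplier_cost / (1 - 0.20)   # ~20% gross margin, so ~$125 to the OEM
oem_resale     = 5 * supplier_price           # resold to the end user at 5x
oem_margin     = (oem_resale - supplier_price) / oem_resale

print("Supplier sells at $%.0f, OEM resells at $%.0f (%.0f%% gross margin on the part)"
      % (supplier_price, oem_resale, oem_margin * 100))
# Result: roughly 80% gross margin on that one part versus the ~40% corporate
# average; the supplier keeps $25 of gross profit while the OEM keeps $500.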
And guess what, there’s not much the suppliers can do about it. It is well known that Cisco, Brocade and others purchase modules, and now SFP+ direct-attach copper cables, from well-known suppliers and resell them at much higher prices. And if I’m an end user, I MUST buy these from the OEM or its designate or my equipment won’t work. These devices have EEPROMs that can be programmed with what some call a “magic key” that allows them to work only with specific equipment. So the OEM has a captive market for the modules and copper cables going into its equipment and can pretty much charge what it wants. If I try to use a “standard” module or cable assembly, one that is compliant with the specification, it will not work unless it has this “magic key.”
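For the curious, the identity fields a switch typically inspects live in the module’s ID EEPROM (the A0h page defined by the SFF-8472 spec). Below is a minimal sketch of pulling the vendor name, part number and serial number out of such a page; the byte offsets come from the published spec, but the fake page contents and the lock-out logic described in the comments are my own assumptions about how a vendor check might behave.

# Minimal sketch: read the identity fields from an SFP's A0h ID page
# (byte offsets per SFF-8472). The fake page below stands in for a real
# dump, which a switch would read from the module over I2C.

def read_id_fields(page_a0):
    return {
        "vendor_name": page_a0[20:36].decode("ascii", "replace").strip(),
        "vendor_pn":   page_a0[40:56].decode("ascii", "replace").strip(),
        "vendor_sn":   page_a0[68:84].decode("ascii", "replace").strip(),
    }

# Build a fake 256-byte page with made-up values in the vendor fields.
fake_page = bytearray(256)
fake_page[20:36] = b"ACME OPTICS".ljust(16)   # vendor name, 16 bytes, space padded
fake_page[40:56] = b"ACME-SR-85".ljust(16)    # vendor part number
fake_page[68:84] = b"SN0000001".ljust(16)     # vendor serial number

print(read_id_fields(bytes(fake_page)))
# A locked-down switch compares fields like these (and vendor-specific bytes
# elsewhere in the EEPROM) against an approved list, the "magic key" behavior
# described above, and refuses to bring up the port on a mismatch.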
I’ve experienced this firsthand. I had a brand-new HP ProCurve Gigabit Ethernet switch that I wanted to use for some cable testing. I had dozens of SFP modules from all of the top transceiver manufacturers, but none of them would work in the switch. I called HP and was told, “You have to buy the HP mini-GBIC.” Well, I knew that wasn’t exactly true. I didn’t really want to pay $400+ each for four more SFPs I didn’t need, so I tried to work through my contacts at HP to get a firmware patch that would let me use my existing devices. Long story short, I never did get that patch and ended up doing my testing with SMC switches instead.
A prime example of an open standard that isn’t so open. Will data center managers put up with this when they have to move equipment around and need different modules or cable assemblies? Are the OEMs thinking about the aftermarket, and the fact that data center managers are used to going to distributors for these items? And are OEMs going to continue to gouge end users and potentially cripple their suppliers?
One added note: there are at least two equipment manufacturers that I know of that keep the standard open: Blade Networks and Extreme Networks. While both will supply the modules and cable assemblies, they don’t lock out other standards-compliant parts that customers may want to use.
Labels: Blade Networks, Brocade, Cisco, Extreme Networks, Gigabit, HP, SFP, SFP+, SMC
Monday, July 26, 2010
Cost Effective 10G Data Center Networks
For networks running Gigabit Ethernet, it’s a no-brainer to use Category 5e or 6 cabling with low-cost copper switch ports for connections under 100m, because they are very reliable and cost about 40 percent less per port than short-wavelength optical ones. But for 10G, there are other factors to consider.
While 10GBASE-T ports are now supposedly available on switches (at least the top Ethernet switch vendors, Brocade, Cisco and Extreme Networks, say they have them), is it really more cost-effective to use them instead of 10GBASE-SR, 10GBASE-CX4 or 10GBASE-CR (SFP+ direct-attach copper)? Well, again, it depends on what your network looks like and how well your data center’s electrical power and cooling are designed.
First, 10GBASE-CX4 is really a legacy product and is only available on a limited number of switches, so you may want to rule it out right away. If you’re looking for higher density but can’t support too many higher-power devices, I would opt for 10GBASE-SR because it has the lowest power consumption, usually less than 1W/port. In addition, laser-optimized multimode fiber (LOMF) has a longer useful life, and it’s smaller, so it won’t take up as much space or block cooling airflow if installed under a raised floor.
If you don’t have a power or density problem and can make do with just a few 10G ports over a short distance, you may choose 10GBASE-CR (about $615/port). If you need to go longer than about 10m, you’ll still need 10GBASE-SR (about $1,300/port). And if you need a reach of more than 300m, you’ll either have to install OM4 cable (which will get you up to 600m in some cases) for use with your SR devices, or look at 10GBASE-LR modules (about $2,500/port), which cost roughly twice as much as the SR transceivers. If your reach is under 100m and you can afford the higher power but need the density, 10GBASE-T (<$500/port) may be your solution. And if you have a mix of these requirements, you may want to opt for an SFP+-based switch so you can mix long and short reaches, and copper and fiber modules/cables, for maximum flexibility. The sketch below pulls these cases together.
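To tie those cases together, here is a rough decision sketch using the ballpark per-port prices quoted above. The function, its thresholds and the example calls are just my own summary of the tradeoffs, not anyone’s product guidance.

# Rough 10G port-type chooser based on the tradeoffs above.
# Prices are the ballpark per-port figures quoted in the post;
# the thresholds and structure are my own simplification.

def pick_10g_port(reach_m, power_constrained, need_density):
    if reach_m > 300:
        return "10GBASE-LR (~$2,500/port), or OM4 + SR if ~400-600m is enough"
    if power_constrained or reach_m > 100:
        return "10GBASE-SR (~$1,300/port, under 1 W/port)"
    if need_density and reach_m <= 100:
        return "10GBASE-T (<$500/port, higher power)"
    if reach_m <= 10:
        return "10GBASE-CR / SFP+ direct-attach copper (~$615/port)"
    return "10GBASE-SR (~$1,300/port)"

# A few example scenarios: (reach in meters, power constrained?, need density?)
for args in [(5, False, False), (80, False, True), (250, True, False), (500, False, False)]:
    print(args, "->", pick_10g_port(*args))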
So what’s the bottom line? Assess your needs in your data center (and the rest of your network, for that matter) and plan them out to maximize the cost effectiveness of your 10G network. One more thing: if you can wait a few months, you may want to delay implementing 10G until later this year, when most of the 10GBASE-T chip manufacturers promise to have sub-2.5W devices commercially available, which should roughly halve per-port power consumption.
Labels: 10G, 10GBASE-LR, 10GBASE-SR, 10GBASE-T, Brocade, category 6, Cisco, Extreme Networks, fiber