Tuesday, August 31, 2010
Why Fiber Polarity Matters (Part 1)
Again, polarity is the term the TIA-568 standard uses to describe how to wire (fiber) a multi-fiber cable so that each transmitter is connected to a receiver at the other end.
Many data center managers are opting to use pre-terminated fiber assemblies due to their higher-quality factory termination, ease of use and quick installation. And many are using 12-fiber MPO backbone cables with cassettes and patch cords to transition to active equipment. When doing this, they choose a polarity method that makes sense for their operation.
Polarity Method A: This is the most straightforward method. It uses a straight-through patch cord (A-to-B) on one end that connects through a cassette (LC-to-MPO or SC-to-MPO, depending on the equipment connector), a straight-through MPO-key-up-to-MPO-key-down backbone cable and a “cross-over” patch cord (A-to-A) at the other end.
Polarity Method B: The “cross-over” occurs in the cassette. The keys on the MPO cable connectors are in an up position at both ends, but the fiber that is at connector position 1 in one end is in position 12 at the opposite end, and the fiber that is in position 12 at the originating end is in position 1 at the opposing end. Only one type of patch cord is needed – A-to-B.
Polarity Method C: This is the most complicated. There is pair-wise “cross-over” in the backbone cable in this method. A-to-B patch cords are used on both ends, the cassette uses MPO-key-up-to-key-down and the backbone cable is pair-wise flipped so 1,2 connects to 2,1; 3,4 connects to 4,3; etc.
There is also a fourth, proprietary method that I won't go over here since it isn't standardized.
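If it helps to see the three standardized methods side by side, here is a minimal Python sketch of the end-to-end fiber-position mapping each one implies for a 12-fiber trunk. The function names are mine, and this is just an illustration of the descriptions above, not a substitute for the TIA-568 diagrams.

```python
# Illustrative end-to-end fiber-position mapping for a 12-fiber MPO
# trunk under the three TIA-568 polarity methods described above.
# Function names are mine; see TIA-568 for the normative definitions.

def method_a(position: int) -> int:
    """Method A: straight-through trunk; fiber 1 lands on position 1."""
    return position

def method_b(position: int) -> int:
    """Method B: key-up-to-key-up trunk; fiber 1 lands on position 12."""
    return 13 - position

def method_c(position: int) -> int:
    """Method C: pair-wise flip; 1<->2, 3<->4, ..., 11<->12."""
    return position + 1 if position % 2 == 1 else position - 1

for p in range(1, 13):
    print(p, method_a(p), method_b(p), method_c(p))
```

Note that Method A's trunk leaves positions alone, which is why it needs the A-to-A cord at one end to perform the transmit-to-receive flip; B builds the flip into the trunk and cassette, and C does it pair-wise in the backbone.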
If the end user does not get this correct and use all of the proper pieces together, their systems will not work. If you don’t understand what I’ve just explained above, you're not alone. There are diagrams in the TIA-568 standard as well as many white papers from leading structured cabling companies explaining fiber polarity in arrayed cabling systems. Here’s a link to Panduit’s white paper that may help. In the next post, I’ll explain how to upgrade to 40/100G and reuse your pre-terminated backbone.
Thursday, August 26, 2010
How the 40/100G Ethernet Shift to Parallel Optics Affects Data Center Cabling
Most data centers are cabled with at least Category 5e and some MMF. To upgrade to 10G, data center managers need to either test their entire installed base of Category 5e to make sure it is 10G-worthy or replace it with Category 6A or 7. And their MMF should be of at least the OM3 (2000 MHz·km) variety or the 300m optical reach is in question – unless they want to use 10GBASE-LX4 or LRM modules that are about 10x the price of 10GBASE-SR devices. But what happens when you want to look beyond 10G to the next upgrade?
Last month I talked about how at 40/100G there is a shift to parallel optics. Unlike today’s two-fiber configurations, with one send and one receive, the standards for 40G and 100G Ethernet specify multiple parallel 10G connections that are aggregated. 40GBASE-SR4 will use four 10G fibers to send and four 10G fibers to receive, while 100GBASE-SR10 will use ten 10G fibers in each direction.
What this means to the data center operator is that they may need to install new cable – unless they’ve started to install pre-terminated fiber assemblies using 12-position MPO connectors, which can be re-used if polarity is chosen carefully. Polarity is the term the TIA-568 standard uses to describe how to wire (fiber) a multi-fiber cable so that each transmitter is connected to a receiver at the other end.
There are three polarity methods defined in the TIA standard and each has its advantages and disadvantages, but only two of the three will allow you to easily reuse your installed pre-term assemblies for 40/100G – methods A or B. I’ll explain why in my subsequent posts.
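For a picture of why the installed 12-position MPO plant maps so neatly onto 40G, here is a small sketch of the commonly described 40GBASE-SR4 fiber usage – transmit lanes on the four outer positions at one side, receive on the four at the other, middle four dark. The exact lane-to-position ordering here is my illustration; check the IEEE 802.3ba text before wiring anything.

```python
# Commonly described 40GBASE-SR4 fiber usage on a 12-position MPO:
# Tx lanes on positions 1-4, positions 5-8 unused, Rx lanes on 9-12.
# Lane ordering is illustrative; see IEEE 802.3ba for the spec.
SR4_POSITIONS = {
    1: "Tx0", 2: "Tx1", 3: "Tx2", 4: "Tx3",
    5: None,  6: None,  7: None,  8: None,
    9: "Rx3", 10: "Rx2", 11: "Rx1", 12: "Rx0",
}

# With a Method B trunk (position p lands on 13 - p), each transmitter
# lands on a receiver: Tx0 at position 1 arrives at position 12 (Rx0).
for p, lane in SR4_POSITIONS.items():
    if lane and lane.startswith("Tx"):
        print(lane, "->", SR4_POSITIONS[13 - p])
```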
Monday, August 23, 2010
When a Standard Isn’t Exactly a Standard
I’ve noted in a couple of posts now that equipment manufacturers charge a lot more for optical modules they sell to end users than what they actually pay for them from transceiver suppliers. Considering the pains OEMs go through to “qualify” their vendors, a healthy markup in the early stages of a new product adoption can be warranted. But, I’m not so sure keeping it at more than 5x the price five years down the road can be justified. And is it sustainable? Some transceiver manufacturers sell products at gross margins in the 20-percent range, while their biggest customers (OEMs) enjoy more like 40 percent.
And guess what, there’s not much the suppliers can do. It is well known that Cisco, Brocade and others purchase modules, and now SFP+ direct-attach copper cables, from well-known suppliers and resell them at much higher prices. And if I’m an end user, I MUST buy these from the OEM or its designate or my equipment won’t work. These devices have EEPROMs that can be programmed with what some call a “magic key” that allows them to work only with specific equipment. So the OEM now has a captive market for modules and copper cables going into its equipment, and it can pretty much charge what it wants. If I try to use a “standard” module or cable assembly – one that is compliant to the specification – it will not work unless it has this “magic key.”
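To make the mechanism concrete: SFP/SFP+ modules expose an identity EEPROM (the A0h page defined in SFF-8472), and switch firmware can simply refuse any module whose vendor fields don’t match an approved list. Below is a hypothetical Python sketch of such a check – the byte offsets are from SFF-8472, but the approved-vendor list is made up and any real “magic key” scheme is proprietary and vendor-specific.

```python
# Hypothetical vendor-lock check against an SFP's identity EEPROM
# (the A0h page of SFF-8472). Per that spec, bytes 20-35 hold the
# vendor name in ASCII, padded with spaces. The approved list below
# is purely illustrative.
APPROVED_VENDORS = {"BIGSWITCHCO"}  # made-up OEM brand

def module_allowed(eeprom_a0: bytes) -> bool:
    vendor_name = eeprom_a0[20:36].decode("ascii", "replace").strip()
    # A fully standards-compliant third-party module fails this check
    # even though it would interoperate on the wire just fine.
    return vendor_name.upper() in APPROVED_VENDORS
```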
I’ve experienced this first hand. I had a brand new HP ProCurve Gigabit Ethernet switch that I wanted to use for some cable testing I was doing. I had dozens of SFP modules from all of the top transceiver manufacturers, but none of them would work in the switch. I called HP and they said, “You have to buy the HP mini-GBIC.” Well, I knew that wasn’t exactly true. I didn’t really want to pay the $400+ each for four more SFPs that I didn’t need so I tried to work through my contacts at HP to get a firmware patch so I could use my existing devices. Long story short, I never did get that patch and ended up doing my testing with SMC switches instead.
Prime example of when an open standard is not so open. Will data center managers be able to sustain this when they have to move equipment around and need different modules or cable assemblies? Are the OEMs thinking about the aftermarket and the fact that data center managers are used to going to distributors to get these items? And are OEMs going to continue to gouge end users and potentially cripple their suppliers?
One added note - there are at least two equipment manufacturers that I know of that support an open standard: Blade Networks and Extreme Networks. While they will both supply the modules and cable assemblies, they don't lock out other standards-compliant parts that customers may want to use.
Labels: Blade Networks, Brocade, Cisco, Extreme Networks, Gigabit, HP, SFP, SFP+, SMC
Thursday, August 19, 2010
AOCs (Part 3 and last for now)
The last of my summary of AOC implementations:
Reflex Photonics has gained an early customer base in InfiniBand and PCI Express extender applications with its SNAP 12 products, and is using the existing customer base to increase awareness of InterBoard products for data center customers. In developing InterBoard, Reflex Photonics moved into coarser channel implementations to meet industry AOC standards. The four-channel cables terminate in an array of 850nm VCSELs that use QSFP connectors suitable for both InfiniBand DDR and 40G Ethernet. What is also interesting about Reflex’s InterBoard is that it contains its optical engine technology, LightAble.
Zarlink (now part of Tyco) began its ZLynx product line with a CX4 interconnect, but quickly added QSFP as the module was standardized. Zarlink is unique in anticipating possible customer interest in dissimilar terminations by offering CX4-to-QSFP cables. Zarlink product developers say they will take the same attitude as CXP applications emerge. While most AOCs will use identical termination on both ends of the cable, the company will explore customer demand for hybrid connectors. Before it was acquired by Tyco, Zarlink was working on 40G implementations that were expected to be released this year. No announcements have been made as of yet, though. Tyco had its own QSFP AOC, namely the Paralight. It remains to be seen how Tyco will merge these product lines.
The first implementations of 40G Ethernet have indeed materialized as AOCs, but are expected to transition into actual optical modules as soon as transceiver manufacturers are ready with their products. What is nice for the end user is that if they want to implement 40G today, they can with AOCs and the same ports will then accept optical modules later if needed. InfiniBand AOC products are expected to stay as AOCs and not transition into optical modules, mainly because most of these connections are less than 30m so are easier to pull through pathways and spaces.
According to CIR, the market for AOCs is expected to be about $180 million (a rather small market for so many entrants) this year, most of which will be for data centers. However, by 2013, it is expected to grow to more than $1-billion – a steep climb and one that will need a lot of suppliers if it is actually going to happen.
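For a sense of just how steep: going from roughly $180 million to $1 billion over three years implies a compound annual growth rate of about 77 percent. A quick back-of-the-envelope check (the dollar figures are CIR’s; the arithmetic is mine):

```python
# Implied CAGR for the AOC forecast: ~$180M (2010) to >$1B (2013),
# i.e., three years of growth. Figures from CIR; arithmetic only.
cagr = (1000 / 180) ** (1 / 3) - 1
print(f"{cagr:.0%}")  # -> 77%
```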
Labels: 40/100G Ethernet, active optical cables, AOC, InfiniBand, Reflex Photonics, Tyco, Zarlink
Sunday, August 15, 2010
AOCs (Part 2)
Summary of a few more AOC implementations:
Avago Technologies had a late entry into the AOC market with its 10GBASE-CX4 replacement and QSFP+ products. But it has a rich history in parallel optics, so it quickly brought its products up to speed. While it may have been somewhat late to market, Avago has an existing customer base to peddle its wares to.
Finisar’s products include Quadwire and Cwire AOCs to address early adoption of 40G and 100G. Quadwire is Finisar’s mainstream product, both in terms of its use of the VCSEL arrays the company produces in volume at its Texas fab, and in terms of its use of the popular QSFP form factor.
The high end of the Finisar product line is designed to exploit anticipated interest in 100G Ethernet and 12-channel QDR InfiniBand. Cwire offers an aggregate data rate of 150 Gbps and a CXP interface. Not only does this represent the direction of high-end enterprise cluster design, but it allows Finisar to utilize the most integrated VCSEL arrays it manufactures. The 12-channel array also represents the most cost-effective per-laser manufacturing option, allowing Finisar to take advantage of its expertise in designing large VCSEL-arrays. The benefit in high channel count can also be seen in power dissipation. While the single serial channel of Laserwire dissipates 500mW per end, the 12-channel Cwire dissipates less than 3W per end – half the power dissipation per channel.
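The per-channel arithmetic behind that last claim, spelled out (the power figures are the ones quoted above):

```python
# Power dissipation per channel, per cable end, from the figures above.
laserwire_mw_per_channel = 500        # single serial channel
cwire_mw_per_channel = 3000 / 12      # 12-channel Cwire at <3W per end
print(cwire_mw_per_channel)           # -> 250.0, half of Laserwire's 500
```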
MergeOptics (now part of FCI) was born of the old Infineon, which was once a powerhouse in the optical transceiver markets – both telecom and datacom. It emerged in 2006 with its SFP and then SFP+ products and is now one of the first entrants for 40G and 100G AOCs. Unlike most of its competitors, it is focused on 10G-and-above products, so it can bring them to market rather quickly. Its technology is being leveraged for InfiniBand and Ethernet applications.
Stay tuned for the next post for just a little more on AOCs.
Labels: active optical cables, AOC, Avago Technologies, Finisar, MergeOptics, VCSELs
Thursday, August 12, 2010
Active Optical Cables (Part 1)
Active Optical Cables (AOCs) are typically defined as a fiber subsystem intended for reaches of 3 to 300 meters, but Luxtera and Finisar both promise products stretching a kilometer or more for campus-network solutions. However, I don’t believe AOCs beyond 300 meters will get much traction in the market due to issues with trying to pull these delicate transceiver ends through kilometers of pathways and spaces (conduit or tray), around all types of obstacles. AOCs’ main applications in high-speed networks are in the data center, including (and probably most relevant) high-performance computing (HPC) clusters.
Intel (AOCs now part of Emcore) and Luxtera were among the first to promote AOCs for consumer and data-center markets. Zarlink (its optical products group is now part of Tyco Electronics) launched its AOC effort in 2007, Finisar introduced three varieties of vertical-market AOCs in 2009, and Avago announced its QSFP+ AOC in late 2009. Other participants include Lightwire, MergeOptics/FCI and Reflex Photonics. And, of late, we’ve even seen structured cabling companies like Siemon introduce some of these products, albeit, by the looks of it, by partnering with Luxtera to do so.
The QSFP+ form factor continues to be an enabler for 40G AOCs and, in fact, was the first “form factor” released for this data rate. Since QSFP+ supports Ethernet, Fibre Channel, InfiniBand and SAS, it will be an economic solution for all protocols. This AOC combines the QSFP physical module with management interfaces extendable to 40G and common protocols that support multiple physical layers in a single module, and it operates at 10G per lane, producing a cost-effective solution. A significant ramp in quad-data-rate InfiniBand and 40G Ethernet will start to accelerate volume applications for these products. QSFP+ AOCs also give transceiver vendors an easier path to market, as they let the vendor control both ends of the optical link, which is much easier to design – there are two fewer compliance points.
A summary of some of the product implementations of AOCs for high-data-rate networks:
Emcore has incorporated its existing technology into a pre-terminated active assembly using the current InfiniBand and 10GBASE-CX4 copper connector. So, what is presently being connected by copper can be replaced immediately by an active optical cable assembly. For 40G InfiniBand, this will turn into the CXP connection. The QDR 40 cable from Emcore was announced in mid-June 2008 and according to the company, has been shipping to select customers since early 2009. Yet, it does not seem to be a released product since the only reference to it on the Emcore Web site is its initial press release - no specifications are available there.
Luxtera is addressing the data center market with both InfiniBand- and Ethernet-compliant AOCs. It uses its CMOS photonics at 1490nm wavelength and a high-density QSFP+ directly attached to multi-fiber (ribbon or loose-tube) SMF. This is suitable for 40G applications and has proven a cost-effective solution for data centers that have discovered the difficulty with copper cables. Although the specifications for copper interconnects support 10m of cable, in reality there are both performance issues and mechanical problems with them.
To be continued in my next post...
Monday, August 9, 2010
Gigabit Transceivers
In our rush to discuss all the new technologies, it seems to me that analysts have forgotten that part of our job is also to point out ongoing trends in existing products. So while talking about Gigabit transceivers might not be as appealing as talking about Terabit Ethernet, it’s also a necessity – especially since, without these devices and the continuing revenue they produce, we wouldn’t have 40/100G or even 10G Ethernet. So what are the important points to make about Gigabit transceivers?
- The market for Gigabit Ethernet transceivers (copper and optical) is expected to be about $2.5-billion in 2010 according to CIR, but it is also supposed to start declining in 2011 when more 10GigE will take its place.
- Pricing for a 1000BASE-SX SFP module is now at about $20 for OEMs. End users still pay Cisco or Brocade or their agents about 8x that much (more about this later; a rough tally follows this list).
- Low pricing makes it difficult on profit margins so transceiver vendors hope to make it up in volume.
- While SFP is certainly the preferred form factor, there is still a decent amount of GBIC modules being sold.
- SFP direct-attach copper cable assemblies have become an option for connecting top-of-rack switches to servers instead of UTP category patch cords or fiber cabling. The majority of implementations today are still UTP patch cords, though, mainly because the connections within the rack are still 100M, with the uplink being Gigabit Ethernet of the 1000BASE-SX variety.
- While 10/100/1000 ports are the norm for desktop and laptop computers, most of these devices are still connected back through standard Category 5e or 6 cabling to 100M switch ports in the telecom room.
- Gigabit Fibre Channel business is pretty much non-existent now. It was quickly replaced by 2G and has progressed through 4G, and 8G is expected to become the volume application this year. Look for more on Fibre Channel in future posts.
- Avago Technologies and Finisar top the list of vendors for 1000BASE-SX devices. JDSU has all but disappeared from the scene, mainly because it has de-emphasized this business in favor of its telecom products. In fact, rumor has it that JDSU is shopping its datacom transceiver business and has been for some time.
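As promised above, a rough tally of what that Gigabit SFP markup means per module. The only inputs are the figures quoted in the list; everything else is arithmetic:

```python
# Markup math for a 1000BASE-SX SFP, using the figures quoted above.
oem_price = 20          # USD per module, roughly, for OEMs
end_user_multiple = 8   # end users pay about 8x through the OEM channel
print(oem_price * end_user_multiple)  # -> 160 USD per module
```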
Labels: Avago Technologies, E2O, Fibre Channel, Finisar, Gigabit, IBM, JDSU, Picolight, transceivers
Wednesday, August 4, 2010
Optical Engines
I was reviewing some research I recently conducted for the Optical Interconnect report I wrote for CIR and realized that I hadn’t yet “blogged” about what I would consider some exciting new product directions that many optical components suppliers are taking. We’ve been talking about optical integration for many years and some companies, like Infinera, have actually implemented it into their real-world products. But there are more cases of this than ever before and I think we’re on the brink of some true industry breakthroughs using what many have deemed “optical engines.”
Here is a summary of the component companies and their associated optical engine products:
- BinOptics – it uses its InP PICs to build "custom integrated microphotonics solutions" for its customers
- ColorChip – its silicon photonics is at the center of its 40G QSFP modules
- Lightwire – its Opto-electronic Application Specific Integrated Subsystem (OASIS) promises low power and higher density
- MergeOptics/FCI – OptoPack is at the center of its 10G and above transceiver designs
- Reflex Photonics – LightAble is the building block for its transceiver modules
- Santur – DFB/waveguide architecture has promise for not only tunable lasers, but many different optical interconnects
Labels: Avago Technologies, BinOptics, ColorChip, Intel, Lightwire, Luxtera, MergeOptics, optical engine, optical interconnect, Reflex Photonics, Santur
Monday, August 2, 2010
When Does Passive Optical LAN Make Sense?
Are you purchasing a Transparent LAN service? Then you probably want to consider POL. Inter-building Transparent LANs often have distance limitations, which are currently overcome by installing significantly more expensive 1310 and 1550nm transceivers. In an active network, these higher-cost modules are needed at both ends of every connection, and where seven or eight buildings are involved, the dollars spent can add up quickly.
With one long-reach transceiver needed at the central office (or fed out of an enterprise data center), POLs can offer significant savings in multi-building campus environments. It is important to note how much more expensive 1550nm modules are compared to their 850nm counterparts. At 10-Gigabit, a 10GBASE-SR (850nm) optical module costs approximately $1300/port (switch + transceiver cost). A comparable 10GBASE-ER (1550nm) longer-reach device that is needed for an inter-building connection costs around $11,000/port (switch + transceiver), or nearly ten times as much. When connecting multiple buildings in a campus setting, these costs add up quickly, and a POL network can be a much more economical solution. The POL system uses 1310/1550nm diplexer optics and, while more expensive than 850nm devices, can still cover entire campuses at a fraction of the cost of the 1550nm Ethernet-based transceivers. And, since the signal from these devices can be split to as many as 64 users instead of 1, the cost-per-end-user is drastically reduced.
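The per-user arithmetic is what makes the splitting argument so stark. A rough sketch using only the port costs quoted above – it deliberately ignores ONT, splitter and cabling costs, and even grants the POL port the same price as the ER port:

```python
# Per-user port cost, using the figures quoted above. Illustrative:
# ignores ONT, splitter and cabling costs.
er_cost_per_user = 11_000 / 1    # 10GBASE-ER link serves one connection
pol_cost_per_user = 11_000 / 64  # same assumed port cost, 1:64 split
print(er_cost_per_user, round(pol_cost_per_user))  # -> 11000.0 172
```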
Passive optical LANs are being touted by their equipment suppliers as THE most cost-effective solution for medium-to-large enterprises. According to Motorola, you can save more than 30-percent on your network infrastructure and, as your number of users increases, so do your savings.
In our recent research for our POL report, we found that there is a subset of vertical markets – specifically, not-for-profits – that may be ripe to implement this disruptive technology. But how does this affect the data center network?
We’ve done our own cost analysis, and the reason POL is so cost-effective compared to a traditional switched-Ethernet network is that you can eliminate lots of copper and MMF cabling as well as workgroup switches. But, in the data center, you still need to connect to your WAN routers. With a POL, you could cover as many as 96 end users with one 4-port blade in an enterprise aggregation switch and ONE 10G uplink port to the WAN router. The equivalent switched-Ethernet network would need four workgroup switches connected to a core switch through 12 uplink Gigabit Ethernet ports and TWO 10G uplink ports from the core switch to the WAN router. So by installing POL, you may be able to cut your router uplink ports in half. I wouldn’t mind saving tens of thousands of dollars on router ports – would you?
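Tallying that comparison (the port counts are the ones above; switch and router models are left abstract):

```python
# Uplink-port tally for the 96-user comparison above.
pol = {"10g_router_uplinks": 1, "1g_agg_uplinks": 0}        # one 4-port OLT blade
switched = {"10g_router_uplinks": 2, "1g_agg_uplinks": 12}  # 4 workgroup + core
print(switched["10g_router_uplinks"] - pol["10g_router_uplinks"])  # -> 1 of 2 saved
```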
Of course, this is all assuming a totally non-blocking architecture, which, in reality, isn’t necessarily needed. A switched-Ethernet oversubscribed network covering a 132-user, 4-floor building is still less expensive than a POL. For the details, see our POL report.