Friday, July 30, 2010

Laser-optimized Multi-mode Fiber (LOMF)

It occurred to me while writing the last post that many of you may not be aware of the different grades of multi-mode fiber. For the purposes of this blog, it is worth laying out the differences so that my points can be fully understood.

Right now, there are three standardized types of LOMF, plus what most of us in the industry call FDDI-grade fiber, which is not laser optimized. So first, what does laser-optimized actually mean? In basic terms, it means the fiber was designed to be used with lasers – in the case of MMF, typically VCSELs. FDDI-grade fiber pre-dates the use of VCSELs, so it is not laser-optimized; it was intended for use with LEDs. Lasers were adopted as the light source of choice when scientists and engineers realized that LEDs could not be modulated reliably at data rates beyond 100 Mbps. They originally tried the same lasers used in CD players, but these proved unstable at Gigabit data rates as well. In the early 1990s, the development of the VCSEL enabled these higher data rates.

As the light sources evolved, the fiber progressed with them. So, for 850nm operation today we have four choices: 
  1. OM1 (FDDI):  Minimum OFL Bandwidth of 200 MHz•km; 10G Minimum Optical Reach of 33m
  2. OM2:  Minimum OFL BW of 500 MHz•km; 10G Minimum Optical Reach of 82m
  3. OM3: Minimum OFL BW of 1500 MHz•km; 10G Minimum Optical Reach of 300m
  4. OM4: Minimum OFL BW of 3500 MHz•km; 10G Minimum Optical Reach of 550m
As you can see, the bandwidth of the fiber is intimately tied to the type of light source used, and the optical reach depends on both bandwidth and data rate. And while OM1 fiber wasn’t designed to be used with lasers, it works fine with them, albeit at shorter distances than LOMF. It is also worth noting that a few cable manufacturers offer what I would call OM1+ cable – 62.5-micron fiber that is laser-optimized – which may provide somewhat improved bandwidth and reach.
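
To make the trade-off concrete, here is a minimal sketch (in Python) that picks the lowest fiber grade meeting a required 10GBASE-SR link length, using the minimum-reach figures from the table above. The grade names and reach values come from the list; the function itself is just illustration.

```python
# Minimum 10GBASE-SR optical reach per fiber grade (metres), from the table above.
REACH_10G_M = {
    "OM1": 33,
    "OM2": 82,
    "OM3": 300,
    "OM4": 550,
}

def minimum_grade(link_length_m: float) -> str:
    """Return the lowest fiber grade whose 10G reach covers the link."""
    for grade, reach in REACH_10G_M.items():  # insertion order runs OM1 -> OM4
        if link_length_m <= reach:
            return grade
    raise ValueError(f"{link_length_m} m exceeds OM4 reach; consider single-mode fiber")

if __name__ == "__main__":
    for length in (25, 90, 280, 500):
        print(f"{length:>4} m -> {minimum_grade(length)}")
```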

All this leads to a very important point – when specifying a cabling system for your networks and data centers, it is important to understand not only the fiber you’re going to install, but also the equipment you’re trying to connect. Just because you're "only" installing Gigabit systems and you've used OM1 fiber for years doesn't mean it's the best solution (or even the most economical) for today and tomorrow.

Wednesday, July 28, 2010

Other 10GigE Transceiver Markets – ER, LX4 and LRM

Some of you may have noticed that in my last post I neglected to talk about three other 10-Gigabit Ethernet variants – LX4, LRM and ER. That’s because the content was already long enough and I wanted to focus on the volume data center applications. Now, I’ll discuss the others.

10GBASE-ER runs over SMF at 1550nm for up to 40km. While a few service providers might choose to do this, the vast majority choose WDM through the OTN instead. There may also be some private networks that need this variant, which is what prompted the IEEE to include it in the standard. At over $4000, these transceivers are priced out of typical budgets for the average enterprise.

10GBASE-LX4 was originally introduced to address the installed base of FDDI-grade (not laser-optimized) MMF, but it can also be used with higher grades of LOMF as well as SMF. It uses CWDM to send four wavelengths (thus the X4) between 1269.0 and 1355.9nm, each carrying data at 2.5 Gbps. LX4 modules are available in both X2 and XENPAK form factors from at least a couple of sources, such as Emcore and Hitachi Cable. As you can imagine, because these devices have four lasers and four detectors with their associated electronics, they cost appreciably more than either SR or LR transceivers. The module alone retails for about $2000, which means the per-port cost would probably be around $2500. Of course, you most likely wouldn’t fill your switch with these modules; you would only use them where you want to re-use existing installed fiber.
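
As a quick sanity check on the lane arithmetic, here is a small sketch showing how four 2.5 Gbps CWDM lanes add up to the 10G payload. Note that the 8b/10b line-rate figure is my own addition, not something stated in the post.

```python
# LX4 lane arithmetic: four CWDM lanes, each carrying 2.5 Gbps of payload.
LANES = 4
PAYLOAD_PER_LANE_GBPS = 2.5       # from the post
ENCODING_OVERHEAD = 10 / 8        # 8b/10b coding per lane (not stated in the post)

aggregate_payload = LANES * PAYLOAD_PER_LANE_GBPS               # 10.0 Gbps
line_rate_per_lane = PAYLOAD_PER_LANE_GBPS * ENCODING_OVERHEAD  # 3.125 GBd

print(f"Aggregate payload: {aggregate_payload} Gbps")
print(f"Line rate per lane: {line_rate_per_lane} GBd")
```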

In response to the high-priced LX4, the 10GBASE-LRM variant was developed. It was enabled by chip companies such as Clariphy and Scintera with a new technology, electronic dispersion compensation (EDC), that could push 10G serial signals to longer distances on MMF. The standard took a while to develop, and in the meantime LX4 captured much of the market LRM was intended for. However, once LRM products were released and supported by the top-tier transceiver manufacturers (Avago, Finisar, Opnext), they took much of that business back from LX4. LRM modules are now available in SFP+ packages as well, which clearly indicates the vendors think there will be an ongoing market for them.

One note of caution – if you intend to use either LX4 or LRM modules, make sure both ends of your channel use the same module type; otherwise the link won’t work.

Monday, July 26, 2010

Cost Effective 10G Data Center Networks

For networks running Gigabit Ethernet, it’s a no-brainer to use Category 5e or 6 cabling with low-cost copper switches for connections under 100m, because they are very reliable and cost about 40 percent less per port than short-wavelength optical ones. But for 10G, there are other factors to consider.

While 10GBASE-T ports are now supposedly available on switches (at least the top Ethernet switch vendors, Brocade, Cisco and Extreme Networks say they have them), is it really more cost effective to use these instead of 10GBASE-SR, 10GBASE-CX4 or 10GBASE-CR (SFP+ direct attach copper)? Well, again, it depends on what your network looks like and how well your data center electrical power and cooling structures are designed.

First, 10GBASE-CX4 is really a legacy product and is only available on a limited number of switches, so you may want to rule it out right away. If you’re looking for higher density but can’t support many higher-power devices, I would opt for 10GBASE-SR because it has the lowest power consumption – usually less than 1W/port. LOMF also has a longer useful life, and because the cable is smaller it takes up less space and is less likely to block cooling airflow if installed under a raised floor.

If you don’t have a power or density problem and can make do with just a few 10G ports over a short distance, you may choose 10GBASE-CR (about $615/port). If you need to go longer than about 10m, you’ll still need 10GBASE-SR (about $1300/port). If you need a reach of more than 300m, you’ll either need to install OM4 cable (which will get you up to 600m in some cases) for use with your SR devices, or look at 10GBASE-LR modules ($2500/port), which cost about twice as much as the SR transceivers. If your reach is less than 100m, you can afford the higher power, and you need the density, 10GBASE-T (<$500/port) may be your solution. If you have a mix of these requirements, you may want to opt for an SFP+-based switch so you can mix long and short reaches and copper and fiber modules/cables for maximum flexibility.
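
Here is a rough decision sketch (Python) that strings these rules of thumb together for a single link. The per-port prices come from the figures quoted above ($615 CR, under $500 for 10GBASE-T, $1300 SR, $2500 LR) and are purely illustrative assumptions; the function and thresholds are my own rough mapping, not a definitive selection tool.

```python
# Illustrative per-port prices from the post (USD); treat as rough 2010 figures.
PRICE = {"10GBASE-CR": 615, "10GBASE-T": 500, "10GBASE-SR": 1300, "10GBASE-LR": 2500}

def pick_10g_option(reach_m: float, power_constrained: bool, need_density: bool) -> str:
    """Very rough mapping of the post's rules of thumb onto a single link."""
    if reach_m <= 10 and not power_constrained and not need_density:
        return "10GBASE-CR"          # SFP+ direct attach copper
    if reach_m <= 100 and not power_constrained and need_density:
        return "10GBASE-T"           # cheapest port, but highest power
    if reach_m <= 300:
        return "10GBASE-SR"          # lowest power, within OM3 reach
    if reach_m <= 550:
        return "10GBASE-SR"          # needs OM4 cabling for this reach
    return "10GBASE-LR"              # single-mode for anything longer

if __name__ == "__main__":
    for reach in (7, 80, 250, 450, 2000):
        choice = pick_10g_option(reach, power_constrained=True, need_density=True)
        print(f"{reach:>5} m -> {choice} (~${PRICE[choice]}/port)")
```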

So what’s the bottom line? Do an assessment of your needs in your data center (and the rest of your network for that matter) and plan them out in order to maximize the cost effectiveness of your 10G networks. One more thing – if you can wait a few months, you may want to consider delaying your 10G implementation until later this year, when most of the 10GBASE-T chip manufacturers promise to have sub-2.5W devices commercially available, roughly halving per-port power consumption.

Thursday, July 22, 2010

SFP+ Marks a Shift in Data Center Cabling

With the advent of top-of-rack (ToR) switching and SFP+ direct attach copper cables, more data centers are able to quickly implement cost-effective 10G and beyond connections. ToR designs currently take one of two configurations:
  1. GigE Category cabling (CAT5e, 6, or 6A) connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the main distribution area (MDA)
  2. SFP direct attach cabling connection to each server with a 10G SFP+ or XFP uplink to either an EoR switch or back to a switch in the MDA
Either way, SFP and SFP+ modules and cable assemblies are making huge inroads where Category cabling used to be the norm. Consequently, structured cabling companies have taken their shot at offering the copper variants of these devices. Panduit was one of the first to offer an SFP direct-attach cable for the data center, but Siemon quickly followed suit and surpassed Panduit by offering both the copper and optical versions of the assemblies as well as the parallel-optics QSFP+ AOC. Others rumored to be working on entering this market are Belden and CommScope. This really marks a shift in philosophy for these companies, which have traditionally stayed away from what they considered “interconnect” products. There are a couple of notable exceptions in Tyco Electronics and Molex, however, which offer both types of products.

So what makes these companies believe they can compete with the likes of Amphenol Interconnect, Molex and Tyco Electronics? It might not be that they think they can compete, but that they see some erosion of their patch cord businesses and view this as the only way to keep the “interconnect” companies out of certain customers. In other words, they are protecting their customer base by offering products they won’t necessarily make any money on – after all, many of these assemblies are private-labeled from the very companies they are trying to oust. Smart or risky? Smart, I think, because it seems to me that the future of the data center will be in short-reach copper and mid-range fiber in the form of laser-optimized multi-mode fiber (LOMF).

 

Tuesday, July 20, 2010

Common Electrical Interface for 25/28G – a Possibility or a Pipe-Dream?

Yesterday, I sat through a workshop hosted by the Optical Internetworking Forum (OIF) on its “Common Electrical Interface (CEI) project for 28G Very Short Reach (VSR).” What quickly became clear to me was that I was in a room of VERY optimistic engineers.

I sat through presentations characterized as “Needs of the Industry,” which consisted of content from the leaders of the IEEE 802.3ba (40/100G Ethernet), T11.2 (Fibre Channel) and InfiniBand standards groups. Yet all of these representatives carefully stated that they were presenting their own opinions and not those of their standards groups, which I found odd since most of what they showed came directly from the standards. Legalities, I guess. I also noticed that they never cited any kind of independent market research or analysis of what the “needs of the industry” actually are. For instance, one speaker said that InfiniBand needs 26AWG, 3-meter copper cable assemblies for 4x25G operation in order to keep costs down within a cabinet. Yet he did not present any data or even mention which customers are asking for this. Maybe such demand exists, but unless it is presented, the case for it is weak. I do have evidence directly from some clustering folks that they are moving away from copper in favor of fiber for many reasons – lower power consumption, lighter cabling, higher density, and more room in cabinets, pathways and spaces.

Today, data center managers are really still just starting to realize the benefits of deploying 10G, which has yet to reach its market potential. I understand that standards groups must work on future data rates ahead of broad market demand, but this seems extremely premature. None of the current 40/100G implementations that use 10G electrical signaling have even been deployed yet (except for maybe a few InfiniBand ones). And from what I understand from at least one chip manufacturer that sells a lot of 10G repeaters to OEMs for their backplanes, it is difficult enough to push 10G across a backplane or PCB. Why wouldn’t the backplane and PCB experts solve this issue, which is here today, before they move on to a “problem” that doesn’t even exist yet?

Maybe they need to revisit optical backplanes for 25G? It seems to me that 25G won't be needed any time soon, and that their time would be better spent developing something with relevance beyond 25G. Designing some exotic DSP chip that allows 25G signals to be transmitted over four to 12 inches of PCB and maybe 3m of copper cable, for a single generation of equipment, may not be very productive. Maybe this is simpler than I anticipate, but then again, there was a similar, slightly more complicated problem with 10GBASE-T – and look how that turned out...

Friday, July 16, 2010

40/100G – Paradigm Shift to Parallel Optics?

Parallel optics have been around for more than a decade – remember SNAP12 and POP4? These were small 12- and four-fiber parallel-optics modules developed for telecom VSR applications. They never really caught on for Ethernet networks, though. Other than a few CWDM solutions, volume applications for datacom transceivers have been serial short-wavelength ones. At 40G, this is changing.

High performance computing (HPC) centers have already adopted parallel optics at 40 and 120G using InfiniBand (IB) 4x and 12x DDR. And they are continuing this trend through their next data-rate upgrades – 80 and 240G. While in the past I thought of HPC as a small, somewhat niche market, I now think this is shifting due to two major trends:

  • IB technology has crossed over into 40 and 100-Gigabit Ethernet in the form of active optical cable assemblies as well as CFP and CXP modules.
  • More and more medium-to-large enterprise data centers are starting to look like HPC clusters, with masses of parallel processing.
Many of the top transceiver manufacturers, including Avago Technologies and Finisar, as well as some startups, have released several products in the last year to support these variants with short-reach solutions. The initial offerings are AOC products using the QSFP+ and CXP form factors, both of which use VCSEL and PIN arrays. At least one company, Reflex Photonics, has released a CFP module that also uses these devices. The only other transceiver product that seems to be available so far is the QSFP+ 40G module from MergeOptics, which is a natural extension of its QSFP AOCs. These products are already being deployed for IB systems and are planned for the initial 40G Ethernet networks as well.

Once parallel-optics based transceivers are deployed for 40/100G networks, will we ever return to serial transmission?

Thursday, July 15, 2010

VCSELs – The Enabling Factor of Fiber in the Data Center

Optical technology proponents have argued for many years that fiber is about to take over all of networking. But time and time again we have seen copper technologies reinvent themselves to serve at least the last 100 meters in LANs. With the complicated digital signal processing needed to enable 100-meter operation over copper, though, comes a cost – power consumption. And in data center operations, power consumption may be the single most important issue still to be solved.

Before the EPA performed its study on data center power consumption and before the creation of the Green Grid, data center managers worried more about running out of space than running out of power. Now, with more complicated electronics and better utilization of server processing through virtualization, lowering power consumption has become imperative.

VCSEL-based short-wavelength fiber optic networks may be the answer. As mentioned in a previous post, on a port-by-port basis, 10GBASE-SR devices consume about one-quarter the power of 10GBASE-T ones. And when you have thousands of these ports within your data center, the total power consumption adds up quickly. Stay tuned for further quantitative analyses of copper versus fiber in the data center.
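
As a back-of-the-envelope illustration, here is a small sketch comparing aggregate port power for a few thousand ports. The ~1W/port figure for 10GBASE-SR comes from an earlier post; the 10GBASE-T figure is my own placeholder derived from the roughly four-to-one ratio mentioned above.

```python
# Back-of-the-envelope: aggregate power for N 10G ports.
SR_WATTS_PER_PORT = 1.0   # 10GBASE-SR, "usually less than 1W/port" per an earlier post
T_WATTS_PER_PORT = 4.0    # placeholder: ~4x the SR figure, per the ratio above

def aggregate_kw(ports: int, watts_per_port: float) -> float:
    return ports * watts_per_port / 1000.0

if __name__ == "__main__":
    for ports in (1_000, 5_000, 10_000):
        sr = aggregate_kw(ports, SR_WATTS_PER_PORT)
        t = aggregate_kw(ports, T_WATTS_PER_PORT)
        print(f"{ports:>6} ports: SR ~{sr:.1f} kW vs 10GBASE-T ~{t:.1f} kW "
              f"(savings ~{t - sr:.1f} kW before cooling overhead)")
```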

Finisar is one of the leading manufacturers of VCSELs, and it sells short-wavelength transceivers to LAN and SAN equipment providers including Brocade, Cisco, QLogic, HP and EMC. While transceivers generally sell at very low margins, they are an essential part of keeping costs down and power consumption low in data center networks.

Contributing Analyst - David Gross.

Wednesday, July 14, 2010

Fiber Versus Copper at 40/100G

I am reminded every now and then that fiber optics has been around in the U.S. since the early 1980s and that it has always been advertised as the end-all solution for networking. But somehow, copper seems to perpetually reinvent itself to be able to compete at least up to 100 meters. And it seems that this may happen again at 40 and 100 Gig.

The current copper spec within the IEEE 802.3ba standard covers up to 10 meters of twinax assemblies that may use active equalization in both the transmitter and receiver. But there is a movement afoot to develop a “call for interest” for a 40G/100G IEEE project over Category cables. This wouldn’t be the first time the copper standards lagged the optical ones – Gigabit (1000BASE-T) and 10-Gigabit (10GBASE-T) were both developed after the initial fiber-optic variants. Indeed, there is much R&D being conducted in universities (Penn State) and industry (Nexans, Inc., The Siemon Company, Broadcom) to determine what can be done. While I am skeptical (based on the lukewarm response to presentations at recent Ethernet Alliance events) that a project will materialize within the IEEE any time soon, I would not rule it out – especially since recent simulations based on real cabling data have shown that at least 40G twisted-pair copper systems are feasible.

What keeps me a cynic about a 40/100G Category cabling solution is the extremely slow adoption of even 10GBASE-T. While it was 1000BASE-T that really propelled the Gigabit Ethernet market in 1999, it has been 10GBASE-SR (the 850nm VCSEL-based variant) that has enabled 10G market growth. We have yet to see 10GBASE-T take off at all, mainly because its high power consumption has kept it out of Ethernet switches. As I stated in the market research report I wrote for CIR earlier this year, until this is solved (which may be a long time from now, according to some chip suppliers), 10GBASE-T will continue to see very slow uptake.

Tuesday, July 13, 2010

FCoE does NOT Mean the End of Fibre Channel and InfiniBand

Today, servers in the data center typically have two to three network cards. Each adapter attaches to a different element of the data center – one supports storage over Fibre Channel, a second handles Ethernet networking, and a third supports clustering, typically over InfiniBand. Data center managers must then deal with multiple networks. A single network that could address all of these applications would greatly simplify administration within the data center.

Fibre Channel over Ethernet (FCoE) is one approach that has been proposed to accomplish this goal. It is a mapping of Fibre Channel frames over full-duplex IEEE 802.3 Ethernet networks, allowing Fibre Channel to leverage 10-Gigabit Ethernet while preserving the Fibre Channel protocol. For this to work, Ethernet must first be modified so that it no longer drops or reorders frames – an outcome of the array of CEE standards in development within the IEEE 802.1 working group, which build on the priority mechanisms of IEEE 802.1p/802.1Q.
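
To make the “mapping” a bit more concrete, here is a minimal sketch of the encapsulation: a native Fibre Channel frame carried as the payload of an Ethernet frame with the FCoE Ethertype (0x8906). The field breakdown is simplified and the class names are my own; it is meant only to show the layering, not the exact on-the-wire format.

```python
from dataclasses import dataclass

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

@dataclass
class FibreChannelFrame:
    """Native FC frame, carried unchanged inside the Ethernet payload."""
    header: bytes    # 24-byte FC header (routing, exchange, sequence info)
    payload: bytes   # SCSI command/data, up to ~2112 bytes
    crc: bytes       # FC frame CRC

@dataclass
class FCoEFrame:
    """Simplified view of an FCoE frame on a lossless (CEE) Ethernet link."""
    dst_mac: bytes           # typically the FCoE forwarder's MAC
    src_mac: bytes
    ethertype: int           # FCOE_ETHERTYPE
    encapsulated_fc: FibreChannelFrame
    # SOF/EOF delimiters and the Ethernet FCS are omitted for brevity.

def encapsulate(fc: FibreChannelFrame, src: bytes, dst: bytes) -> FCoEFrame:
    return FCoEFrame(dst_mac=dst, src_mac=src, ethertype=FCOE_ETHERTYPE, encapsulated_fc=fc)
```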

With the implementation of FCoE, data centers would realize:
  • A reduced number of server network cards and interconnections
  • A simplified network
  • Leveraging of the best of Fibre Channel, Ethernet and the installed base of cabling
  • A minimum of a 10G network card on each network element

For this to work, the various applications would be collapsed onto one converged network adapter (CNA) in an FCoE/CEE environment.

While this would greatly simplify the data center environment, it still seems too costly to implement in every network element – one CNA with an SFP+ port costs upwards of $1200, while three separate one-, two- and 2.5-Gigabit ports together still cost less than $600. This is one of the main reasons FCoE/CEE will only be deployed where the flexibility it provides makes sense – most likely at servers on the edge of the SAN.
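
A quick arithmetic sketch of that trade-off per server, using only the two figures quoted above (both treated as rough list-price assumptions):

```python
# Rough per-server adapter cost comparison, using the figures quoted above.
CNA_COST = 1200          # converged network adapter with SFP+ port (upper bound quoted)
SEPARATE_ADAPTERS = 600  # 1G Ethernet + 2G FC + 2.5G IB adapters combined (upper bound)

def extra_cost_of_convergence(servers: int) -> int:
    """Added spend if every server gets a CNA instead of three separate adapters."""
    return servers * (CNA_COST - SEPARATE_ADAPTERS)

if __name__ == "__main__":
    for n in (10, 100, 500):
        print(f"{n:>4} servers: ~${extra_cost_of_convergence(n):,} more for CNAs "
              f"(before any savings from fewer cables and switch ports)")
```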

Monday, July 12, 2010

Datacom Transceiver Vendors Transitioning into New Businesses

Have you noticed that it seems like all of the top datacom transceiver suppliers are transitioning their businesses? I’ve already talked about Avago’s new venture with its MicroPOD technology – Avago seems to have supplanted Finisar as the technology leader in the space. Finisar has expanded its offerings into more telecom markets, and JDSU is all but gone from the scene, focusing more on telecom again.

Finisar seems to be enjoying what may well be short-term success with its Laserwire offering. Since it is not a standards-based solution, it is difficult to believe it will become a mainstream one. While Finisar offers other AOCs – C.wire (CXP-based) and Quadwire (QSFP-based) – it does not seem to be participating in what looks like a chip-to-chip optical interconnection revolution the way Avago and Luxtera are (see previous posts for details). Finisar used to be the technology leader in the optical transceiver space, but it has veered away from that strength in favor of market diversification, now covering telecom and HPC standards-based solutions instead. Perhaps this is the right move for Finisar, since it does not seem to have hurt its revenue position at all.

JDSU seems to be absent from the short-reach module market. It appears that the optical components giant has taken the technology developed at IBM, E2O and Picolight and thrown it away. Picolight was once a leader in parallel optics and, along with E2O, long-wavelength VCSELs. IBM pioneered v-groove technology and the oxide layer that enabled the next leap in speed and improved reliability for 850nm VCSELs. All of these technologies look destined to die a slow, painful death after being acquired by JDSU. The company’s attention is clearly focused on its tunable technology and telecom applications, which is, of course, where it started. JDSU has never had a good reputation for assimilating acquisitions, so none of this should be a surprise. I was optimistic when JDSU bought these companies, thinking these emerging technologies would now be supported by a larger pocketbook. What was the reasoning for deemphasizing the technologies it acquired? Was it trying to get rid of short-reach competition in hopes that all optical networking would move toward long-wavelength devices? That would have been naïve; the likes of Finisar, Avago, MergeOptics and others would still be supporting 850nm optics, and there remains a healthy market for them in enterprise networks and data centers – albeit a very competitive one.

According to JDSU, it is focusing on the LH and ULH versions of 40G and 100G first because it does not see the value in the CXP module for short-reach applications. For short reach, it is focusing on QSFP+ modules, but development of these will take longer. The company claims it is not de-emphasizing its 850nm technology, just focusing elsewhere first. I’m not so sure. Rumor has it – and I tend to believe it – that JDSU is looking for buyers for its short-wavelength business.

Thursday, July 8, 2010

Quantum Dot Lasers Now Reality

In recent research I conducted for the Optical Interconnect report I wrote for CIR, I found some encouraging news on quantum dot lasers. Measuring roughly 20nm in diameter, a quantum dot (QD) is a semiconductor structure whose electrons are confined in all three spatial dimensions. As a result, it has properties that lie somewhere between those of bulk semiconductors and those of discrete molecules. QDs have been studied for a wide range of applications such as transistors, solar cells, displays, medical imaging, optical amplifiers, sensors, drug delivery and light emitters (both LEDs and diode lasers). QDs could also be used as the physical "incarnation" of qubits in quantum computing R&D and in quantum encryption systems. All of these applications build on the fact that QDs are effectively zero-dimensional, which gives them superior transport and optical properties. They also need very little power.

QD Laser, a Japanese firm backed by Fujitsu Limited, Mitsui Ventures and Mizuho Capital Co., Ltd., announced what I believe was the first commercially viable QD laser in March 2009. Since then, it has added several products to its portfolio, including FP/DFB laser chips, TO-can and TOSA packages, and wide-band SOA butterfly components. The lasers can run at data rates up to 10 Gbps, making these devices well suited for datacom and telecom equipment.

It seems that QD lasers may have future applications in chip-to-chip optical connections. They may also have applications outside of telecom, in sensors and in future quantum encryption/quantum computing systems. In addition to QD Laser's devices, Taiwanese researchers have built tunable QD VCSELs. Also, VI Systems (VIS), a German start-up, is working on QD-enhanced VCSELs. This components company has recently received substantial funding, and at recent industry conferences it presented a paper on 25-Gbps VCSELs made temperature-insensitive with the use of QDs. VIS has recently released a product catalog of TOSAs, ROSAs, VCSELs, PINs, arrays, TIAs, VCSEL drivers and high-speed test boards that utilize its technology. Its products are suitable for 850nm 25G and 40G operation. This may be one of the only currently viable approaches for stable operation of VCSELs beyond 10G.

Wednesday, July 7, 2010

Luxtera’s Contribution to a Push for All-Optical Networks

Yesterday I wrote about Avago’s new miniaturized transmitters and receivers so today I’d like to introduce you to a similar product from Luxtera. Well known for its CMOS photonics technology, Luxtera actually introduced its OptoPHY transceivers first – in late 2009.

Luxtera took a different approach with its new high-density optical interconnect solution. It is a transceiver module based on LW (1490nm) optics. Just like Avago’s devices, the transceivers use 12-fiber ribbon cables provided by Luxtera, but that’s really where the similarities end. The entire 10G-per-lane module consumes only about 800mW, compared to Avago's 3W, and it is a true transceiver rather than separate transmitter and receiver modules. Luxtera is shipping the device to customers but has not announced which ones yet.
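
For a rough sense of what those numbers mean per lane, here is a small sketch comparing the two figures quoted above (about 800mW for the Luxtera module versus roughly 3W for an Avago transmitter-plus-receiver pair). The 12-lane, 10G-per-lane assumption is my own reading of the ribbon description; the actual number of active lanes per module may differ.

```python
# Rough power-per-lane comparison using the figures quoted above.
# Lane count is an assumption based on the 12-fiber ribbon description.
LUXTERA_MODULE_MW = 800      # OptoPHY transceiver module (~800 mW, per the post)
AVAGO_PAIR_MW = 3000         # MicroPOD transmitter + receiver pair (~3 W, per the post)
ASSUMED_LANES = 12
GBPS_PER_LANE = 10

for name, total_mw in (("Luxtera OptoPHY", LUXTERA_MODULE_MW),
                       ("Avago MicroPOD pair", AVAGO_PAIR_MW)):
    per_lane = total_mw / ASSUMED_LANES
    per_gbps = total_mw / (ASSUMED_LANES * GBPS_PER_LANE)
    print(f"{name}: ~{per_lane:.0f} mW/lane, ~{per_gbps:.1f} mW per Gbps")
```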

In addition to the projected low cost of these devices, what should also be noted is that all of the solutions mentioned in the last three entries – Intel’s Light Peak, Avago’s MicroPOD and Luxtera’s OptoPHY – have moved away from the pluggable-module theme to board-mounted devices. This in and of itself may not seem significant until you think about why pluggable products existed in the first place. The original intent was to give OEMs and end users flexibility in design, so they could use an electrical, SW optical or LW optical device in a port depending on the cable length that needed to be supported. You could also grow as needed – populate only the ports required at installation and add others when necessary. The need for this flexibility seems to have waned in recent years in favor of density, lower cost and lower power consumption. The majority of pluggable ports are now optical ones, so why not just move back to board-mounted products that can achieve the miniaturization, price points and lower power consumption?

Tuesday, July 6, 2010

Optical Interconnects for All-Optical Networking May be Closer to Reality than You Think

On-board interconnects have for some time been handled simply with copper traces, but with data rates now pushing beyond 10G, this is ripe for change. In fact, it is already changing, as evidenced by the big splash IBM and Avago Technologies made at this year's OFC/NFOEC conference. The computer giant and the transceiver manufacturer teamed up to develop what they are calling "the fastest, most energy-efficient embedded interconnect technology of its kind."

Dubbed MicroPOD™, the technology was developed by Avago for IBM's next-generation supercomputer, POWER7™. While it was designed for HPC server interconnects, it could be used for on-board or chip-to-chip interconnects as well. The devices use a newly designed miniature detachable connector from US CONEC called PRIZM™ LightTurn™. The system has separate transmitter and receiver modules connected through a 12-fiber ribbon, with each lane supporting up to 12.5 Gbps, and it uses 850nm VCSEL and PIN diode arrays. The embedded modules can be used for any board-level or I/O-level application by using either two PRIZM LightTurn connectors or one PRIZM LightTurn and one MPO.
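
A quick tally of what that adds up to per module, using the lane count and lane rate given above (line rate only; protocol overhead would reduce usable throughput):

```python
# Aggregate line rate of one MicroPOD transmit (or receive) module.
LANES = 12              # 12-fiber ribbon, one lane per fiber
GBPS_PER_LANE = 12.5    # "each lane supporting up to 12.5 Gbps"

aggregate_gbps = LANES * GBPS_PER_LANE
print(f"Aggregate line rate per module: {aggregate_gbps:.0f} Gbps "
      f"({aggregate_gbps / 8:.1f} GB/s before protocol overhead)")
```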

While these modules are currently aimed at the HPC market, Avago designed something very similar for Intel's Light Peak interconnect system (see previous post for details), which some are calling “optical USB.” MicroPOD is targeted at high-density environments, so a natural extension of its market reach would be into switches and routers. The market for such devices probably will not become huge in the next few years, but it is exciting to see that companies in this space have started to spend R&D dollars again and that there are at least a few customers willing to employ the technology right out of the gate. Of course, it must be noted that this project was partially funded by DARPA.

But this technology MUST be too expensive for the typical piece of network equipment, right? Not so, says Avago, because the manufacturing process is 100-percent automated and, with Avago's vertical integration, the prices (at volume) may actually rival those of today's transceivers. I’ll hold judgment until Avago proves it can win more than one big customer; however, I think MicroPOD holds the promise to change the paradigm for on-board, board-to-board and even network-element-to-network-element optical interconnects.

Thursday, July 1, 2010

Intel’s Light Peak Optical Interconnect System

Intel made a big splash at its developer forum in September 2009 by introducing its Light Peak technology. Light Peak uses a new controller chip with LOMF and a 10G 850nm VCSEL-based module with a new optical interface connector, which has yet to be determined. It is aimed at replacing all of the external connectors on your PC, including USB, IEEE 1394, HDMI, DP, PCIe, etc. It is also targeted at other consumer electronic devices like smart phones and MP3 players.


Intel designed both the controller chip and the optical module, but it will only supply the chip. It is working with a couple of top optical component manufacturers on the modules – Avago and TDK. The semiconductor giant expects to ship its first products this year but would not say who its initial customers will be. It did tell me that it has "received general support from the PC makers," and both SONY and Nokia have gone on record publicly supporting Light Peak. Both companies are willing to entertain a new standard centered on the Light Peak technology. These efforts are expected to pay off with formal standardization work starting this year.

According to Intel, Light Peak is expected to start shipping this year with several PC manufacturers evaluating it. Next year is anticipated to be a transitional year and by 2012, we should start seeing Light Peak commercially available on PCs. While I would never bet against Intel, this is quite an aggressive schedule. Even USB took longer than that to be adopted and it was a copper solution that was easily implemented by consumers. However, it sure would be nice to have just one connector for my PC!

While Intel says it has targeted this technology at consumers, with its 10G data rate and 100-meter optical reach, it could easily be extended to LAN and/or data center applications.