For decades now, fiber has been slated to take over the data networking world, but somehow, some way, copper keeps reinventing itself. Are the ways in which copper can compensate for its lower bandwidth capacity coming to an end at 10G because of seemingly astronomical power consumption? Probably not. I have listened to the rhetoric from the fiber-optic companies for more than five years now, and have conducted my own research to see whether their claims hold up. Their argument was that at 8 to 14 W per port, copper is simply too costly. But now that newer chips have cut power consumption to less than 3 W per port, 10GBASE-T is a viable data center networking solution. In fact, even at 14 W per port it was viable; it just wasn't practical for switch manufacturers to incorporate into their designs, because they couldn't achieve the port density they needed and still have room to cool the devices. That no longer appears to be an issue, as evidenced by the 24-port 10GBASE-T configurations released by all the major players.
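To put those per-port wattage figures in perspective, here is a back-of-the-envelope sketch of the annual electricity cost difference between the older ~14 W/port and newer ~3 W/port 10GBASE-T silicon. The electricity rate and PUE (cooling overhead) values below are illustrative assumptions of mine, not figures from any vendor or report:

```python
# Rough annual per-port electricity cost for 10GBASE-T,
# comparing older ~14 W/port silicon with newer ~3 W/port silicon.

HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.10  # assumed utility rate, varies widely by region
PUE = 2.0                # assumed Power Usage Effectiveness (cooling overhead)

def annual_cost_per_port(watts: float) -> float:
    """Annual electricity cost in USD for one port, including cooling via PUE."""
    kwh = watts * PUE * HOURS_PER_YEAR / 1000
    return kwh * RATE_USD_PER_KWH

for watts in (14, 3):
    print(f"{watts} W/port -> ${annual_cost_per_port(watts):.2f}/year")
```

Under these assumptions the gap is roughly $25 versus $5 per port per year; multiplied across a 24-port switch and a multi-year life, it is real money, but not necessarily decisive against copper's lower cabling cost.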
I believe copper-versus-fiber decisions will also turn on other parameters, such as latency. In a recent study we released at DataCenterStocks.com, Data Center Cabling Cost Analysis - Copper Still Has Its Place, we compared the cost of 10G copper and fiber, factoring in copper's higher power consumption. In the specific example we examined, a rack of end-of-row (EoR) Cisco switches, copper remained more cost-effective even after accounting for the higher electricity costs.
Our next area of study will be a rack of servers with a top-of-rack (ToR) switch. In that scenario, the power consumption difference may be enough to justify installing fiber over copper. The report referenced above and this upcoming research are part of a series of research reports for our Data Center Network Operator Service.