Fujitsu Demos Silicon Photonics Server Links
There is a growing consensus among system makers, driven in large part by work done by Intel and members of the Open Compute Project founded by Facebook, that the components on a system should be broken apart and linked by silicon photonics circuits paired with fiber optic pipes.
By doing so, elements of the system that would normally have to be placed on or plugged into a motherboard can be separated from one another. This would offer more flexible configuration options for systems, allowing for more peripheral expansion than is currently possible in a server today and also allowing hot CPUs and hot flash or coprocessor cards to be spread out so they can be cooled more easily.
Server maker Fujitsu has worked with Intel to prototype the first disaggregated server using the Optical PCI-Express (OPCIe) protocol, which uses silicon photonics chips and MXC optical cables developed by Intel. (In the graphic above, that is the green cable with the light streaming out of it, and it carries ten times the data as the copper cable to its right, which weighs roughly ten times as much.)
To test out the OPCIe protocol as a means of linking disparate parts of the server, Fujitsu grabbed two of its Primergy RX2200 rack servers and fitted them with Intel silicon photonics modules:
That module has a field programmable gate array that conditions the PCI-Express signals coming on and off the server bus so they can be passed over an optical MXC cable. The second machine was an expansion box packed with solid state disks and several Xeon Phi x86 coprocessors. And as far as the server knew, those SSDs and Xeon Phis were all local, attached to its motherboard.
Intel has demonstrated the use of silicon photonics connectors for server components with the Open Compute Project, and has shown that it can push the speed of the links up to 100 Gb/sec. In September, Intel demonstrated silicon photonics links running the Ethernet protocol, and announced a partnership with Corning, using MXC connectors and Corning's ClearCurve LX fiber optic cables to show signals traveling 300 meters at 25 Gb/sec. Intel has subsequently shown that it can send data at 25 Gb/sec over multimode fiber spanning 800 meters. In the Fujitsu demonstration, the ClearCurve optical cables were 10 meters long and could deliver 68 Gb/sec of bandwidth.
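The arithmetic behind these figures is simple to sketch. An MXC cable is a parallel bundle of fibers, so aggregate bandwidth is just fibers times per-fiber rate; the 64-fiber figure below is Intel's widely reported maximum for the MXC connector, not a detail of the Fujitsu demo itself.

```python
# Back-of-envelope link-capacity arithmetic for the numbers quoted above.
# Assumption: a fully populated MXC connector carries up to 64 fibers,
# each running at 25 Gb/sec (Intel's stated maximum, not the demo config).
import math


def aggregate_gbps(fibers: int, gbps_per_fiber: float) -> float:
    """Total raw bandwidth of a parallel optical cable."""
    return fibers * gbps_per_fiber


# A fully populated MXC cable: 64 fibers at 25 Gb/sec each.
print(aggregate_gbps(64, 25))  # 1600.0 Gb/sec, i.e. 1.6 Tb/sec

# The 68 Gb/sec seen in the Fujitsu demo would need only a few such fibers.
print(math.ceil(68 / 25))  # 3
```

This also shows why the quoted speeds are not contradictory: 100 Gb/sec links, 25 Gb/sec fibers, and a 68 Gb/sec demo are all consistent once you account for how many fibers are lit in a given cable.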
You can use copper cables to connect machines together, of course. But copper cables are subject to electromagnetic interference and need amplifiers and signal conditioners to carry signals beyond a few feet.
While Intel has been enthusiastic about using its silicon manufacturing skills to create less expensive and more compact silicon photonics interconnects, it has not given a timeframe for when such technologies will be commercialized. The company's Rack Scale effort to provide a commercial alternative to the disaggregated rack and server designs put forth by the Open Compute Project is in full swing, too. There will be, it seems, a few different ways that this light pipe connection for system components gets used.
Over the long haul, the ability to break CPUs not only from I/O but also from blocks of main memory would seem to be necessary to truly disaggregate the server. The reason is that the pace of change for processor technology is much faster than for memory technology, and right now, a system has the two tied together. If you want to use more memory or faster memory, you are limited to the memory that is supported by the on-chip memory controller and the slots on the motherboard. What Intel needs to do – and what it has hinted it knows how to do – is to create a more generic memory controller for the chip that can then talk to DDR3, DDR4, or DDR5 main memory, or any other new kind of memory that might be hooked into the server. Such a development, combined with silicon photonics, would truly make servers and racks more malleable.
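The decoupling idea behind such a generic memory controller can be sketched as a software abstraction. Nothing below comes from Intel; the class and method names are purely illustrative, showing how a CPU-side controller could program against one interface while memory generations are swapped behind it.

```python
# Illustrative sketch only: a hypothetical abstraction in the spirit of
# the generic memory controller described above. All names are invented.
from abc import ABC, abstractmethod


class MemoryChannel(ABC):
    """Common interface the CPU-side controller programs against."""

    @abstractmethod
    def read(self, addr: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, addr: int, data: bytes) -> None: ...


class DDR4Channel(MemoryChannel):
    """One concrete backend. DDR5 or a photonically attached memory pool
    would be another class implementing the same interface."""

    def __init__(self, size: int):
        self._mem = bytearray(size)

    def read(self, addr: int, length: int) -> bytes:
        return bytes(self._mem[addr:addr + length])

    def write(self, addr: int, data: bytes) -> None:
        self._mem[addr:addr + len(data)] = data


# The CPU side talks only to MemoryChannel, so upgrading memory means
# swapping the backend, not the processor and motherboard together.
channel: MemoryChannel = DDR4Channel(1024)
channel.write(0, b"hello")
print(channel.read(0, 5))  # b'hello'
```

The design point is the same one the paragraph makes: today the memory controller on the die hard-wires the CPU to one memory generation; an interface boundary like this, backed by silicon photonics links, is what would let the two evolve at their own pace.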