HPE Labs VIDEO | How Photonics Can Help Prevent a Digital Energy Crisis

Oct 19, 2017 9:45 AM ET

HPE Labs | Photonics

Data centers are the factories of the 21st century, processing the ever-expanding volumes of information that make the global economy go. But progress comes with a cost. In 2015, data centers used more electricity than the entire United Kingdom, about 400 terawatt-hours—a figure that could triple by 2020, according to a recent study in the journal Challenges.

As much as 40 percent of that electricity is spent just moving information inside data centers. There’s a more efficient alternative that could halve the amount of energy these transmissions use, the equivalent of decommissioning as many as 250 big power plants. So far, no one has figured out how to make it work at scale.

In a white-walled lab filled with racks of test equipment, bales of cable, and machines that shoot green and blue light, a team of researchers is close to pulling it off.

Over the past two years, they’ve developed a way to repurpose machinery used in semiconductor production to manufacture bundles of fast lasers—each about a tenth the size of a human hair—and components necessary to make them transmit data. It’s a breakthrough they predict will make it cost-effective for computers to send information over short distances using light, or photons, a technique that to date has only made economic sense for transmissions covering several kilometers or more.

Broad adoption of this new approach could help stabilize power consumption in data centers, allowing the tech sector to continue growing without triggering a full-blown energy crisis.

Three years ago, Bash and his team at Hewlett Packard Labs were developing software to predict and manage data center energy consumption. But given the rate at which new data centers are opening, they realized their efforts were unlikely to stop the upward trajectory of electricity use.

“We realized we had to reduce the energy consumption of the hardware itself,” Bash says.

For the first time, Labs researchers began to fundamentally rethink a computer’s architecture and underlying components.

That led to a broad and ambitious effort to build The Machine, a new kind of computer that combines photonic transmission with two other energy-efficient breakthroughs: nonvolatile memory that retains information even when it isn’t drawing power, and systems-on-a-chip that package processors and memory to greatly speed data processing.

When available, The Machine will use on the order of 1 percent of the energy per calculation that today’s computers require, with much of that improvement coming from photonic data transmission.

Decision point

For most of its history, the computing industry was all about performance, says Vinod Namboodiri, an associate professor at Wichita State University who has studied data center energy use. “It was not even a thought that energy has to be considered,” he adds.

Now, he says, the tech industry must finally decide whether incremental performance gains are worth the extra energy use that comes with them.

Computers today send signals as electrons over copper wire, and those electrical currents lose energy as they travel. To transmit data successfully, the signals must be amplified with additional power at the receiving end.

Photons don’t suffer those losses. Sending data as light over fiber-optic cable could increase throughput by a factor of 10 while using a fraction of the energy of traditional copper transmission.
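To make that comparison concrete, here is a rough back-of-the-envelope sketch in Python. The energy-per-bit figures and the traffic load are illustrative assumptions, not Labs measurements; only the broad copper-versus-optical gap reflects the claim above.

```python
# Rough, illustrative comparison of link energy per bit (assumed figures):
# copper serial links are often quoted in the tens of picojoules per bit,
# short-reach optical links considerably lower.

COPPER_PJ_PER_BIT = 15.0   # assumed energy cost of an electrical link
OPTICAL_PJ_PER_BIT = 3.0   # assumed energy cost of a photonic link

def annual_link_energy_kwh(traffic_tb_per_s: float, pj_per_bit: float) -> float:
    """Energy (kWh/year) to move a sustained traffic load over one link type."""
    bits_per_second = traffic_tb_per_s * 1e12 * 8        # terabytes/s -> bits/s
    joules_per_second = bits_per_second * pj_per_bit * 1e-12
    seconds_per_year = 365 * 24 * 3600
    return joules_per_second * seconds_per_year / 3.6e6  # joules -> kWh

if __name__ == "__main__":
    traffic = 100.0  # assumed sustained internal traffic, terabytes per second
    copper = annual_link_energy_kwh(traffic, COPPER_PJ_PER_BIT)
    optical = annual_link_energy_kwh(traffic, OPTICAL_PJ_PER_BIT)
    print(f"copper:  {copper:,.0f} kWh/year")
    print(f"optical: {optical:,.0f} kWh/year")
    print(f"savings: {100 * (1 - optical / copper):.0f}%")
```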

Photonic transmissions use tiny devices called transceivers that consist of an electronic chip, a laser, and a lens that focuses the light from the laser into the fiber. The packaging also includes an optical connector that a fiber cable can plug into, much like a charger plugs into a smartphone. A similar device on the other end includes a photodetector and electronics that decode the signals.

Telecommunications carriers already send data this way over their fiber-optic networks. Photonic transmission is not used widely for data center applications because making the transceivers is an expensive, manual process. For carriers, spending thousands of dollars on a transceiver is a small expense compared with the cost of laying down the hundreds of miles of optical fiber the signals will travel through.

Using a process similar to the standard method for manufacturing low-cost light-emitting diodes, or LEDs, the repurposed fabrication equipment turns a semiconductor wafer into approximately 80,000 individual high-speed lasers, each equipped with four metal pads that can conduct an electrical current.

Meanwhile, the team takes a different wafer containing the parts that will align the optical connector to the laser and bonds it to a piece of glass patterned with microscopic lenses and metal pads identical to those on the lasers.

A different machine attaches solder balls to each of the lasers and cuts each wafer of 80,000 lasers into bundles of 24. These newly cut bundles are then placed on the glass wafer. The wafers are heated, melting the solder and causing the metal pads on each piece to bond to one another.
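A quick sanity check on the numbers in that process, using only the per-wafer and per-bundle figures given above (the arithmetic itself is just illustrative bookkeeping):

```python
# Back-of-the-envelope yield from the wafer-level process described above.
lasers_per_wafer = 80_000    # high-speed lasers fabricated on one wafer
lasers_per_bundle = 24       # lasers in each cut bundle

bundles_per_wafer = lasers_per_wafer // lasers_per_bundle   # whole bundles
leftover_lasers = lasers_per_wafer % lasers_per_bundle      # lasers not bundled

print(f"bundles per wafer: {bundles_per_wafer:,}")   # 3,333
print(f"lasers left over:  {leftover_lasers}")       # 8
```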

This process is a huge step toward the inexpensive mass production of fiber optics, says Mike Tan, who leads the opto-electronics group at Hewlett Packard Labs.

The result: thousands of new lasers precisely aligned to lenses, all neatly bundled on the glass. These are cut into 5-millimeter-by-3-millimeter pieces the team calls “optical engines.”

“The work that we’re doing is very practical,” he says. “We’re addressing how we reduce costs.”


The path to market

Lasers send data through pulses. Right now, the Labs’ lasers pulse fast enough to transmit about 32 gigabits per second, more than enough for the most demanding data center applications into the next decade.

Tan and his team have spent the last year working on a next-generation optical engine capable of sending 100 gigabits of data per second while using approximately the same amount of energy as before, an achievement that would further cut the energy consumed per bit.

To pull this off, they are using four 25-gigabit-per-second lasers, each emitting a different wavelength, or color, of infrared light. Each laser has a lens fabricated directly on it.

The lenses direct the light from the four lasers into a small glass slab, which has reflective mirrors on one side and color filters on the other. The light zigzags within the glass, which focuses and refocuses the four beams, combining them into one. That combined beam is then directed into a single fiber.

On the other end, filters separate the different wavelengths back into individual signals. The resulting optical engine can send four times as much data as the prior generation, quadrupling the capacity of a data center’s fiber-optic infrastructure without adding a single fiber.
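The idea can be sketched as simple bookkeeping: four wavelength channels at 25 gigabits per second each share one fiber, for 100 gigabits per second in aggregate. The specific wavelengths below are assumptions for the sketch (a common coarse-WDM grid), not the team’s published values.

```python
# Minimal model of wavelength-division multiplexing: four lasers, each on its
# own wavelength, share a single fiber. Wavelengths here are assumed values.
channels_gbps = {
    "1270 nm": 25,
    "1290 nm": 25,
    "1310 nm": 25,
    "1330 nm": 25,
}

aggregate_gbps = sum(channels_gbps.values())
single_channel_gbps = 25  # capacity of one laser alone on the same fiber

print(f"aggregate per fiber: {aggregate_gbps} Gb/s")                     # 100 Gb/s
print(f"capacity multiplier: {aggregate_gbps // single_channel_gbps}x")  # 4x
```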

The team presented the prototype at a conference in November. “That’s the whole beauty of innovation,” Tan says. “You come up with an idea, you test it, change it, and in the end it works.”