Ahead of the Hot Chips conference this week, photonics chip startup Lightmatter revealed the first technical details about its upcoming test chip. Unlike conventional processors and graphics cards, the test chip uses light to send signals, promising orders of magnitude higher performance and efficiency.
The technology underpinning the test chip — photonic integrated circuits — stems from a 2017 paper coauthored by Lightmatter CEO and MIT alumnus Nicholas Harris that described a novel way to perform machine learning workloads using optical interference. Photonic chips like this one, which is on track for a fall 2021 release, require comparatively little energy because light produces less heat than electricity. They also benefit from reduced latency and are less susceptible to changes in temperature, electromagnetic fields, and noise.
Lightmatter makes remarkable claims about the test chip, asserting that its more than 1 billion transistors can process upwards of 2TB of data per second. The company also says — albeit without publishing benchmarks — that the test chip outperforms Nvidia graphics cards, Intel and AMD processors, and even special-purpose hardware like Google’s tensor processing units (TPUs) on state-of-the-art AI models. (Lightmatter says a test chip ran the ResNet-50 object classification model on the open source ImageNet data set at 99% of the accuracy achieved with single-precision floating point arithmetic.)
“Lightmatter’s chip and communications platform could power the entirety of Google Search on just one rack,” the company wrote in a press release.
Hyperbole aside, Lightmatter says its communications platform — project Wormhole, named after a flow-control scheme in computer networking known as “wormhole switching” — allows roughly 50 test chip processors to exchange data at rates exceeding 100Tbps without optical fibers. Communicating via a “wafer-scale” photonic platform, clusters of chips behave as though they’re one massive system, sending data far across the array.
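For readers curious about the name: wormhole switching splits a packet into small flow-control units (“flits”) and streams them through routers in a pipeline, so no router ever has to buffer the whole packet. The Python sketch below is purely illustrative; the function names, the toy router model, and the flit size are assumptions, not anything Lightmatter has described.

```python
# Toy sketch of wormhole switching (illustrative assumptions throughout):
# a packet is split into flits, and the flits stream through a pipeline of
# routers, each holding at most one flit at a time.
from collections import deque

def wormhole_send(packet: bytes, path: list[str], flit_size: int = 4) -> bytes:
    # Split the packet into flits (flow-control units).
    flits = [packet[i:i + flit_size] for i in range(0, len(packet), flit_size)]
    # One slot per router on the path; flits pipeline through the slots.
    pipeline = deque([None] * len(path))
    delivered = []
    for step, flit in enumerate(flits + [None] * len(path)):
        out = pipeline.pop()         # flit leaving the last router
        pipeline.appendleft(flit)    # next flit entering the first router
        if out is not None:
            delivered.append(out)
        print(f"cycle {step}: routers hold {list(pipeline)}")
    return b"".join(delivered)

# The packet arrives intact, one flit per cycle, without full-packet buffering.
assert wormhole_send(b"0123456789ab", ["r0", "r1", "r2"]) == b"0123456789ab"
```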
Like many-colored beams of light refracting through a prism, an individual test chip can perform calculations using different wavelengths of light simultaneously. It encodes data and sends it through the optics by modulating brightness in wire-like components called waveguides. Test chips communicate with other chips and the outside world much like standard electronics chips (i.e., by sending a series of electrical signals). When they finish performing calculations on light beams, the test chips use a photodetector, akin to a solar panel, to convert the light signals back into electrical signals that can be stored or read.
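To make that encode-and-detect loop concrete, here is a toy numerical sketch, assuming a value is encoded as light brightness and read back by a photodetector. Everything in it (the function names, the noise model, the scaling) is a simplifying assumption, not Lightmatter’s actual signal chain.

```python
# Toy model of the electrical -> optical -> electrical round trip.
import numpy as np

rng = np.random.default_rng(0)

def modulate(value: float, max_value: float = 1.0) -> float:
    """Encode a digital value as a normalized optical intensity in [0, 1]."""
    return float(np.clip(value / max_value, 0.0, 1.0))

def photodetect(intensity: float, max_value: float = 1.0) -> float:
    """Convert a detected optical intensity back into an electrical value,
    with a small amount of noise (an assumed model, for illustration)."""
    noisy = intensity + rng.normal(scale=1e-3)
    return noisy * max_value

x = 0.42
light = modulate(x)          # electrical -> optical (brightness in a waveguide)
x_hat = photodetect(light)   # optical -> electrical (photodetector)
print(f"sent {x:.4f}, received {x_hat:.4f}")  # analog round trip, small error
```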
So how does that benefit machine learning? When an object detection algorithm processes an image, it splits each pixel into three color channels — red, green, and blue — and converts the image line by line into a collection of values (a vector). Separate red, green, and blue vectors are passed through a processor, which runs an algorithm on the vectors to identify the objects in the image.
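That preprocessing step is easy to see in code. The NumPy sketch below splits a stand-in image into its red, green, and blue channels and flattens each one, row by row, into a vector:

```python
# Split an image into per-channel vectors, scanned line by line.
import numpy as np

# Random stand-in for a 224 x 224 RGB image.
image = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# One flattened vector per color channel (row-major order, i.e. line by line).
red, green, blue = (image[:, :, c].reshape(-1) for c in range(3))

print(red.shape)  # (50176,): one value per pixel
```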
To execute that algorithm, digital processors pass the vectors through arrays of multiply-accumulate units (MACs). A general-purpose silicon processor has a small number of MACs, while a GPU has entire arrays of them, making the latter far more performant at this kind of work. Optical chips like Lightmatter’s test chip take a different route: the entries of each vector pass through a bank of digital-to-analog converters, which turn the digital sequences into proportionally sized electrical signals.
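The sketch below illustrates both halves of that paragraph: a dot product built from repeated multiply-accumulate steps, and a simple digital-to-analog conversion that maps each digital code to a proportional signal level. The DAC model here is a textbook simplification, not Lightmatter’s converter design.

```python
# Multiply-accumulate (MAC) and a simple DAC model, for illustration only.
import numpy as np

def mac(acc: float, a: float, b: float) -> float:
    """One multiply-accumulate step: acc += a * b."""
    return acc + a * b

def dot_via_macs(x: np.ndarray, w: np.ndarray) -> float:
    """A dot product is just a chain of MAC operations."""
    acc = 0.0
    for a, b in zip(x, w):
        acc = mac(acc, a, b)
    return acc

def dac(codes: np.ndarray, bits: int = 8, v_ref: float = 1.0) -> np.ndarray:
    """Map digital codes (0 to 2**bits - 1) to proportional analog levels."""
    return codes.astype(float) / (2**bits - 1) * v_ref

codes = np.array([0, 64, 128, 255])
print(dac(codes))                            # approximately [0, 0.251, 0.502, 1.0]
print(dot_via_macs(dac(codes), np.ones(4)))  # sum of the analog levels
```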
An optical modulator within the test chip converts those signals into optical signals carried by laser beams within waveguides made of silicon. The vectors, now encoded in light and guided by waveguides, shine through a 2D array of optical devices that perform the same operations as a MAC. But in contrast to digital processor MACs, where each layer has to wait for the previous layer to finish, calculations in the test chip occur while the beam of light is “flying” (typically in about 80 picoseconds).
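Numerically, that in-flight computation amounts to a single matrix-vector product in which every output appears at once, rather than accumulating layer by layer. The sketch below models the 2D optical array as a plain weight matrix, which is a heavy simplification of the actual device array:

```python
# Simulate an optical matrix-vector product: the whole result emerges in
# one pass, the way all outputs of the optical array appear together.
import numpy as np

def optical_matvec(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    # In hardware this happens while the light is "flying"; numerically
    # it is just one matrix-vector multiplication.
    return weights @ x

W = np.random.default_rng(1).normal(size=(4, 4))  # stand-in for the optical array
x = np.array([0.1, 0.5, 0.2, 0.9])                # vector encoded as light
print(optical_matvec(W, x))                       # all four outputs at once
```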
Lightmatter’s hardware, which is designed to be plugged into a standard server or workstation, isn’t immune to the limitations of optical processing. Speedy photonic circuits require speedy memory, and then there’s the matter of packaging every component — including lasers, modulators, and optical combiners — onto a tiny chip wafer. Plus, questions remain about what kinds of nonlinear operations — basic building blocks of models that enable them to make predictions — can be executed in the optical domain.
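One common workaround, which the hybrid approaches described in the next paragraph take, is to keep the linear algebra in the optical domain and apply nonlinearities such as ReLU electronically between layers. Here is a minimal sketch of that pattern, with the optical part simulated as ordinary matrix math and all shapes and names chosen purely for illustration:

```python
# Hybrid pattern: "optical" linear layers (simulated), electronic nonlinearity.
import numpy as np

rng = np.random.default_rng(2)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))  # assumed shapes

def optical_linear(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    return W @ x                  # linear algebra in the photonic domain (simulated)

def electronic_relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)     # nonlinearity handled in the electronic domain

x = rng.normal(size=8)
y = optical_linear(W2, electronic_relu(optical_linear(W1, x)))
print(y.shape)  # (4,)
```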
That may be why companies like Intel and LightOn are pursuing hybrid approaches that combine silicon and optical circuits on the same die, such that parts of the model run optically and parts of it run electronically. These companies are not alone — startup Lightelligence has so far demonstrated MNIST, a benchmark machine learning task that involves recognizing handwritten digits, on its accelerator. And LightOn, Optalysys, and Fathom Computing, all vying for a slice of the budding optical chip market, have raised tens of millions in venture capital for their own chips. Not to be outdone, Boston-based Lightmatter has raised a total of $33 million from GV (Alphabet’s venture arm), Spark Capital, and Matrix Partners, among other investors. Lightmatter says its current focus beyond hardware is ensuring the test chip works with popular AI software, including Google’s TensorFlow machine learning framework.
Author: Kyle Wiggers.
Source: VentureBeat