
Diffractive Optical Network Enables Snapshot Multispectral Imaging

UCLA researchers led by Aydogan Ozcan and Mona Jarrahi have developed a multispectral imaging technology that turns a monochrome sensor into a multispectral one. Instead of the absorptive filters traditionally used for multispectral imaging, the technology uses a diffractive optical network that forms 16 unique spectral bands, periodically repeated across the output image field of view, to create a virtual multispectral pixel array.

Although spectral imagers such as the color camera are widely used across imaging applications, scaling up the absorptive spectral filter arrays found in traditional RGB cameras to collect richer spectral information from many distinct color bands poses various challenges, owing to their low power efficiency, high spectral cross-talk, and poor color representation quality.

To construct a diffractive optical network that can serve as an alternative to these absorptive filters, several transmissive layers, structured to compute through light-matter interactions, are fabricated to form a physical material stack. The network is designed with deep learning, and the fabricated transmissive layers act as an all-optical processor. As input light passes through these thin elements, computational tasks such as image classification or reconstruction are completed at the speed of light propagation, Ozcan said.
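To make the idea concrete, the sketch below shows, in simplified form, how such a diffractive network can be modeled and optimized with deep learning: learnable phase-only layers are separated by free-space propagation, and gradient descent adjusts the phases so that different wavelengths land on different sensor regions. This is not the UCLA team's code; the layer count, pixel grid, wavelengths, spacings, and the power-routing loss are all illustrative assumptions chosen for brevity.

```python
# Minimal sketch of a deep-learning-designed diffractive optical network.
# All physical parameters below are illustrative assumptions.
import torch
import torch.nn as nn


def angular_spectrum_propagate(field, wavelength, distance, pixel_pitch):
    """Propagate a complex optical field over `distance` of free space."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pixel_pitch)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * kz * distance) * (arg > 0)  # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)


class DiffractiveNetwork(nn.Module):
    """A stack of learnable phase-only transmissive layers."""

    def __init__(self, n_layers=3, n_pixels=100, pixel_pitch=1e-6, spacing=40e-6):
        super().__init__()
        self.phases = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(n_pixels, n_pixels))
             for _ in range(n_layers)]
        )
        self.pixel_pitch = pixel_pitch
        self.spacing = spacing

    def forward(self, field, wavelength):
        for phase in self.phases:
            field = field * torch.exp(1j * phase)  # thin transmissive layer
            field = angular_spectrum_propagate(
                field, wavelength, self.spacing, self.pixel_pitch
            )
        return field.abs() ** 2  # intensity seen by a monochrome sensor


# Toy design loop: learn phases that steer each wavelength to its own
# output region, mimicking the virtual spectral filter array the network
# forms (the actual design tiles 16 bands periodically, not 2).
net = DiffractiveNetwork()
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
bands = {
    450e-9: (slice(0, 50), slice(0, 50)),      # blue -> top-left quadrant
    550e-9: (slice(50, 100), slice(50, 100)),  # green -> bottom-right
}

for step in range(200):
    optimizer.zero_grad()
    loss = 0.0
    for wavelength, region in bands.items():
        intensity = net(torch.ones(100, 100, dtype=torch.complex64), wavelength)
        loss = loss - intensity[region].sum() / intensity.sum()  # routed power
    loss.backward()
    optimizer.step()
```

Once optimization converges, the learned phase patterns would be fabricated as physical transmissive layers, so that the spectral routing happens passively, at the speed of light, with no computation at inference time, consistent with the all-optical processing the article describes.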