Research

Our research spans optics, signal processing, optimization, and machine learning, which we combine to design and optimize the hardware and software of computational imaging systems.

Information theory for imaging system design

We apply information theory, a mathematical framework originally designed for noisy communication systems, to imaging systems, which “communicate” information about the objects being imaged.
We develop a practical method to measure how much information about the object is captured in the raw data from different imaging systems. This measure can be used to compare diverse systems or be directly optimized to design new information-optimal imaging systems through gradient-based methods.
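As a toy illustration of the idea (this is not the estimator from our papers, and the operators below are hypothetical stand-ins), the information captured by a linear system with Gaussian object and noise statistics has a closed form, so different designs can be compared directly:

```python
import numpy as np

def captured_info(A, object_cov, noise_var):
    # I(object; data) for data = A @ object + noise, with a Gaussian object
    # and i.i.d. Gaussian noise: 0.5 * logdet(I + A Cov A^T / noise_var), in nats
    m = A.shape[0]
    M = np.eye(m) + A @ object_cov @ A.T / noise_var
    _, logdet = np.linalg.slogdet(M)
    return 0.5 * logdet

n = 16
object_cov = np.eye(n)

A_identity = np.eye(n)   # hypothetical system that measures every pixel directly

# hypothetical blurry system: circulant Gaussian blur attenuates high frequencies
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 2.0)
kernel /= kernel.sum()
A_blur = np.array([np.roll(kernel, i - n // 2) for i in range(n)])

info_identity = captured_info(A_identity, object_cov, noise_var=0.01)
info_blur = captured_info(A_blur, object_cov, noise_var=0.01)
```

Because the expression is differentiable in A, the same quantity can in principle be maximized with gradient-based tools to search for information-optimal designs.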

Check out the project page for more information.

Space-time methods for dynamic reconstruction

While we think of imaging primarily as acquiring spatial information about an object, our world is inherently dynamic. Reconstruction algorithms that work with many time points jointly are able to combine information shared across time, improving reconstruction quality and making it possible to image dynamic objects with previously slow systems.

We have implemented space-time algorithms based on NeRF-like implicit neural representations, which can compactly represent large 2D/3D+time data cubes and impose flexible space-time priors while remaining tractable to optimize directly from captured measurements. We have applied these methods to super-resolution microscopy.
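Our implementations use NeRF-like MLPs; as a minimal runnable stand-in (all numbers here are toy choices), the same idea can be sketched with a linear model on random Fourier features of the (x, t) coordinates, fit to a synthetic moving scene:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic dynamic scene: a Gaussian bump drifting across a 1D field of view
def scene(x, t):
    return np.exp(-((x - 0.2 - 0.6 * t) ** 2) / 0.02)

xs, ts = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 32), indexing="ij")
coords = np.stack([xs.ravel(), ts.ravel()], axis=1)   # (x, t) query points
target = scene(coords[:, 0], coords[:, 1])

# random Fourier feature encoding of the space-time coordinates,
# followed by a linear fit (standing in for an MLP trained by gradient descent)
B = rng.normal(scale=10.0, size=(2, 256))
feats = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)
weights, *_ = np.linalg.lstsq(feats, target, rcond=None)

rmse = np.sqrt(np.mean((feats @ weights - target) ** 2))
```

The representation can be queried at arbitrary (x, t), which is part of what makes such models compact for large data cubes.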

Machine learning for computational imaging

Machine learning and neural networks are powerful tools for optimizing both the hardware and algorithms of computational imaging systems. We focus mainly on physics-based machine learning, which incorporates what we know about how the physics of a system works instead of relying only on large training datasets.

End-to-end design of optics and algorithms

Through the magic of gradient descent, we can directly optimize the design of optical components in an imaging system jointly with its reconstruction algorithm. All we need is a differentiable forward model of how the image is formed and a training dataset.

We applied this to learn LED array illumination patterns for Fourier ptychography (left), and to design phase masks for single-shot 3D microscopy (right).
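A minimal numerical sketch of that recipe, with hypothetical toy operators rather than a real optical model: jointly gradient-descend on the illumination weights and a linear reconstructor through a differentiable forward model:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, m = 12, 4, 8                  # object size, LED count, measurement size
A = rng.normal(size=(k, m, n))      # hypothetical per-LED measurement operators
X = rng.normal(size=(n, 100))       # training objects

w = np.ones(k) / k                       # illumination weights (to be learned)
W = rng.normal(scale=0.1, size=(n, m))   # linear reconstructor (to be learned)

def forward(w, X):
    # single shot: the measurement is the weighted sum of per-LED contributions
    return np.tensordot(w, A, axes=1) @ X

def loss(w, W, X):
    return np.mean((W @ forward(w, X) - X) ** 2)

lr, loss0 = 0.02, loss(w, W, X)
for _ in range(200):
    Y = forward(w, X)
    R = W @ Y - X                           # reconstruction residual
    gW = 2 * R @ Y.T / R.size               # gradient w.r.t. reconstructor
    gw = np.array([2 * np.sum(R * (W @ (A[j] @ X))) / R.size
                   for j in range(k)])      # gradient w.r.t. LED weights
    W -= lr * gW
    w -= lr * gw

final_loss = loss(w, W, X)
```

In practice the forward model is a physical simulation of image formation and the reconstructor is a neural network, but the training loop has this same shape.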

Data-driven adaptive microscopy

Using physics-based machine learning, microscopes can adaptively image samples based on feedback from the sample itself.

Single-shot autofocus

With the addition of just one or a few LEDs as an illumination source, a physics-based neural network can focus a microscope from a single image of the sample. [Tutorial]

H. Pinkard, Z. Phillips, A. Babakhani, D.A. Fletcher, and L. Waller, Optica (2019).
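The method itself trains a neural network on experimental microscope data; the physical signal it exploits is that defocus suppresses high spatial frequencies in a predictable way. A self-contained 1D toy version of that signal, with a cubic polynomial standing in for the network:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
sample = rng.random(n)                      # hypothetical 1D "image" of a sample

def defocused(sigma):
    # Gaussian defocus blur applied via its Fourier-domain transfer function
    f = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(sample)
                               * np.exp(-2 * (np.pi * f * sigma) ** 2)))

def hf_fraction(img):
    # fraction of (non-DC) spectral energy above a cutoff: drops as blur grows
    P = np.abs(np.fft.fft(img)) ** 2
    return P[n // 8 : n // 2].sum() / P[1 : n // 2].sum()

sigmas = np.linspace(0.5, 4.0, 12)          # defocus amounts (in pixels)
feats = np.array([hf_fraction(defocused(s)) for s in sigmas])

lf = np.log(feats)                          # log-feature linearizes the relation
coef = np.polyfit(lf, sigmas, 3)            # tiny regressor in place of the network
max_err = np.max(np.abs(np.polyval(coef, lf) - sigmas))
```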

Learned adaptive illumination multiphoton microscopy
Using a physics-based neural network, the shape of a sample can be used to predict the laser power needed to image a highly scattering sample with multiphoton microscopy. This enables physiologically accurate imaging of developing immune responses at an organ-wide level with cellular resolution. [Tutorial]

H. Pinkard, H. Baghdassarian, A. Mujal, et al., Nat. Commun. (2021).

Lensless imaging and DiffuserCam

Traditional lensed cameras use lenses to focus light into a sharp image. DiffuserCam is a lensless camera that replaces lenses with a piece of bumpy plastic, called a diffuser, placed in front of a bare image sensor (left). Since there are no other elements in the system, the camera is cheap, compact, and easy to build.

We’ve demonstrated that our simple system can be used for both 2D photography and 3D image reconstruction from a single acquisition (right).
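Conceptually, the diffuser maps each scene point to a pseudorandom caustic pattern, so the sensor image is approximately the scene convolved with the diffuser's point spread function, and that convolution can be inverted computationally. A toy 2D sketch, with a random multi-point PSF and plain Wiener deconvolution standing in for the real caustics and regularized solvers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# stand-in PSF: a spread of random bright points across the sensor
psf = np.zeros((n, n))
pts = rng.integers(0, n, size=(30, 2))
psf[pts[:, 0], pts[:, 1]] = 1.0
psf /= psf.sum()

scene = np.zeros((n, n))
scene[20:28, 30:38] = 1.0                   # a simple bright square

# forward model: circular convolution of the scene with the PSF
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

# reconstruction: Wiener deconvolution (regularized inverse filter)
H = np.fft.fft2(psf)
est = np.real(np.fft.ifft2(np.fft.fft2(meas) * np.conj(H)
                           / (np.abs(H) ** 2 + 1e-3)))

similarity = np.corrcoef(est.ravel(), scene.ravel())[0, 1]
```

The measurement looks nothing like the scene, yet the scene is recoverable because the PSF spreads information from every scene point across the whole sensor.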

Check out the project page or the Build Your Own DiffuserCam tutorial for more information.

N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller Optica (2018)
New Lensless Camera Creates Detailed 3D Images Without Scanning, OSA (2017).

Lensless single-shot video and hyperspectral imaging

We’ve extended DiffuserCam to two new dimensions, wavelength and time, using compressed sensing.

Single-shot Hyperspectral: Spectral DiffuserCam combines a spectral filter array and a diffuser to capture hyperspectral volumes with 64 spectral bands in a single image, at higher resolution than would be possible with a lens-based system (right).

Check out the project page for more information.

Single-shot Video: For time, the diffuser encodes different time points by leveraging the camera’s rolling shutter, enabling the recovery of 40 video frames at over 4,500 frames per second from a single image (left).
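A toy sketch of the rolling-shutter encoding (hypothetical sizes, no reconstruction shown): because each band of sensor rows is read out at a different time, one captured image interleaves measurements of many distinct time points, which compressed sensing can then untangle:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, T = 32, 32, 8
psf = rng.random((rows, cols))
psf /= psf.sum()                            # stand-in diffuser PSF

def diffused_frame(t):
    # scene at time t (a block moving right), convolved with the PSF
    x = np.zeros((rows, cols))
    x[:, 4 * t : 4 * t + 4] = 1.0
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf)))

# rolling shutter: the band of rows starting at 4*t is exposed while the
# scene is at time t, so this single image mixes T = 8 different time points
meas = np.zeros((rows, cols))
for t in range(T):
    meas[4 * t : 4 * (t + 1), :] = diffused_frame(t)[4 * t : 4 * (t + 1), :]
```

Because the diffuser spreads every scene point across many rows, each band still carries information about the whole scene at its time point, which is what makes recovery possible.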

N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, ICCP (2019).


DiffuserCam for microscopy

We applied the pseudorandom encoding principles of DiffuserCam to design a number of single-shot 3D fluorescence microscopy systems for different applications:

Miniscope3D is a microscope the size of a quarter that can be used for in vivo neuroscience (left top).


Another compact form factor system we built is a flat, on-chip microscope (left bottom).

For less compact but high-quality images, Fourier DiffuserScope is capable of capturing large, high-resolution volumes at video rates (phase mask design on right).

Structured illumination & imaging with scattering

We extend the existing framework for structured illumination (SI) super-resolution microscopy to SI imaging with unknown patterns. This allows super-resolution fluorescence reconstruction of biological samples illuminated with unknown and uncalibrated patterns, with applications to imaging through aberrations or turbid media. It also enables high-throughput fluorescence imaging at high resolution across a large field of view. We further develop new scattering models and apply them to reconstruct increasingly complex samples.
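The core mechanism behind SI super-resolution can be shown in a few lines (a 1D toy, not our reconstruction algorithm): multiplying the sample by a structured pattern aliases otherwise-unresolvable high frequencies down into the system's passband:

```python
import numpy as np

n = 256
x = np.linspace(0, 1, n, endpoint=False)
sample = np.sin(2 * np.pi * 90 * x)     # fine detail beyond the system passband
pattern = np.cos(2 * np.pi * 80 * x)    # structured illumination pattern

def lowpass(sig, cutoff=40):
    # ideal low-pass filter modeling the microscope's limited passband
    F = np.fft.fft(sig)
    F[np.abs(np.fft.fftfreq(n, d=1 / n)) > cutoff] = 0
    return np.real(np.fft.ifft(F))

direct = lowpass(sample)            # uniform illumination: the detail is lost
moire = lowpass(sample * pattern)   # 90-cycle detail aliases down to 10 cycles
```

The reconstruction problem is then to demodulate these moire fringes back to their true frequencies; with unknown patterns, the pattern itself must be estimated jointly.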

Phase imaging

Light is a wave, with both an amplitude and a phase. Our eyes and cameras, however, only measure intensity, so they cannot capture phase directly. Phase is especially important in biological imaging, where cells are typically transparent (i.e. invisible) yet impose phase delays. 3D phase imaging also reveals volumetric refractive index, which is helpful for studying thick transparent samples such as embryos. When we acquire quantitative phase information, we get back important shape and density maps. We develop phase imaging methods built on simple experimental setups and efficient algorithms, which can be implemented in optics, X-ray, neutron imaging, and more.
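A small simulation makes the point (toy units throughout): a purely transparent phase object shows no contrast in focus, but a short free-space propagation converts its phase into measurable intensity variations:

```python
import numpy as np

n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phase = np.exp(-(X ** 2 + Y ** 2) / 0.1)    # transparent "cell": phase-only bump
field = np.exp(1j * phase)                  # unit amplitude everywhere

in_focus = np.abs(field) ** 2               # uniform: the cell is invisible

# angular-spectrum propagation over a short distance z (toy wavelength/units)
wavelength, z = 0.5e-3, 20.0
fx = np.fft.fftfreq(n, d=x[1] - x[0])
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
defocused = np.abs(np.fft.ifft2(np.fft.fft2(field) * H)) ** 2

contrast = np.std(defocused)                # phase now appears as intensity
```

Defocus-based methods like those below invert this relationship to recover quantitative phase from such intensity images.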

Phase from chromatic aberrations: L. Waller, S. Kou, C. Sheppard, G. Barbastathis, Optics Express 18(22), 22817-22825 (2010).
Phase from through-focus: L. Waller, M. Tsang, S. Ponda, G. Barbastathis, Optics Express 19 (2011).
Z. Jingshan, J. Dauwels, M. A. Vasquez, L. Waller, Optics Express 21(15), 18125-18137 (2013).
Z. Jingshan, L. Tian, J. Dauwels, L. Waller, Biomedical Optics Express 6(1), 257-265 (2014).
Differential phase contrast: Z.F. Phillips, M. Chen, L. Waller, PLOS ONE 12(2), e0171228 (2017).
M. Kellman, M. Chen, Z.F. Phillips, M. Lustig, L. Waller, Biomedical Optics Express 9(11), 5456-5466 (2018).
M. Chen, Z.F. Phillips, L. Waller, Optics Express 26(25), 32888-32899 (2018).
3D phase imaging: L. Tian and L. Waller, Optica 2, 104-111 (2015).
M. Chen, L. Tian, and L. Waller, Biomedical Optics Express 7(10), 3940-3950 (2016).
Applications in lithography: A. Shanker, M. Sczyrba, B. Connolly, F. Kalk, A. Neureuther, L. Waller, in SPIE Photomask Technology (2013).
A. Shanker, M. Sczyrba, B. Connolly, F. Kalk, A. Neureuther, L. Waller, in SPIE Advanced Lithography, paper 9052-49, February 2014, San Jose, CA.
R. Claus, A. Neureuther, P. Naulleau, L. Waller, Optics Express 23(20), 26672-26682 (2015).

Fourier ptychography algorithms

The algorithms behind achieving large field-of-view and high resolution are rooted in phase retrieval. We use large-scale nonlinear, non-convex optimization, much like training neural networks in machine learning, but with new challenges specific to imaging applications.
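Fourier ptychography stitches many overlapping Fourier-magnitude constraints together; the phase-retrieval core can be illustrated with the classic Gerchberg-Saxton alternating projections on a single pair of magnitude constraints (a textbook sketch, not our algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
amp = np.ones((n, n))                        # known object-plane amplitude
truth = amp * np.exp(1j * rng.uniform(-1, 1, (n, n)))
meas_mag = np.abs(np.fft.fft2(truth))        # measured Fourier-plane magnitudes

def fourier_residual(est):
    return np.linalg.norm(np.abs(np.fft.fft2(est)) - meas_mag)

est = amp.astype(complex)                    # start from a flat phase guess
r0 = fourier_residual(est)
for _ in range(50):
    F = np.fft.fft2(est)
    F = meas_mag * np.exp(1j * np.angle(F))  # project onto Fourier magnitudes
    est = np.fft.ifft2(F)
    est = amp * np.exp(1j * np.angle(est))   # project onto object amplitudes
r_final = fourier_residual(est)
```

The error of this alternating projection is non-increasing; the papers below study robustness and convergence of much larger multi-measurement versions of this problem.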

L.-H. Yeh, J. Dong, Z. Jingshan, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, L. Waller, Optics Express 23(26), 33212-33238 (2015).

R. Eckert, Z.F. Phillips, L. Waller, Applied Optics 57(19), 5434-5442 (2018).


LED array microscope

We work on a microscope hack in which the lamp of a regular microscope is replaced with an LED array, enabling many new capabilities. We do brightfield, darkfield, phase contrast, super-resolution, and 3D phase imaging, all via computational illumination.
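One of those tricks, differential phase contrast (DPC), can be sketched in the weak-object approximation (a 1D toy with a made-up contrast constant): illuminating from opposite sides of the LED array gives images with opposite-signed phase-gradient contrast, and their normalized difference isolates the phase gradient:

```python
import numpy as np

n = 128
x = np.linspace(-1, 1, n)
phase = np.exp(-x ** 2 / 0.1)               # weak transparent phase object
dphase = np.gradient(phase, x)

# weak-object model: oblique illumination from left/right tilts the contrast
c = 0.05                                    # hypothetical transfer constant
I_left = 1 + c * dphase
I_right = 1 - c * dphase

dpc = (I_left - I_right) / (I_left + I_right)   # normalized difference image
```

In the real method, measured DPC images are deconvolved with a transfer function to recover quantitative phase.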

L. Tian, X. Li, K. Ramchandran, L. Waller, Biomedical Optics Express 5(7), 2376-2389 (2014).
L. Tian, J. Wang, L. Waller, Optics Letters, 39(5), 1326-1329 (2014).
L. Tian, Z. Liu, L. Yeh, M. Chen, Z. Jingshan, L. Waller, Optica 2(10), 904-911 (2015).
Z.F. Phillips, R. Eckert, L. Waller, OSA Imaging and Applied Optics; paper IW4E.5 (2017).

We used this system to create the Berkeley Single Cell Computational Microscopy (BSCCM) dataset (right), which contains over 12,000,000 images of 400,000 individual white blood cells under different LED array illumination patterns, paired with a variety of fluorescence measurements.

H. Pinkard, C. Liu, F. Nyatigo, D. Fletcher, L. Waller (2024).
