Our research draws on optics, signal processing, optimization, and machine learning to design and optimize the hardware and software of computational imaging systems.
Information theory for imaging system design
We apply information theory, a mathematical framework originally developed for noisy communication systems, to imaging systems, which "communicate" information about the objects being imaged. Check out the project page for more information.
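As a toy illustration of this framing (a minimal sketch, not the project's actual model): if the measurement is a noisy linear map y = Hx + n with a Gaussian object prior and Gaussian noise, the information a measurement carries about the object has a closed form, 0.5 · logdet(I + HΣHᵀ/σ²). The system matrix, prior, and noise level below are all assumed toy values.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n_pixels, n_meas = 64, 32                      # toy object / sensor sizes (assumed)
H = jax.random.normal(key, (n_meas, n_pixels)) / jnp.sqrt(n_pixels)  # toy system matrix
sigma_obj, sigma_noise = 1.0, 0.1              # object prior / noise stds (assumed)

# Gaussian-channel mutual information: 0.5 * logdet(I + H Sigma H^T / sigma^2).
cov_signal = sigma_obj ** 2 * (H @ H.T)        # object-induced measurement covariance
gram = jnp.eye(n_meas) + cov_signal / sigma_noise ** 2
info_nats = 0.5 * jnp.linalg.slogdet(gram)[1]
print(f"I(object; measurement) = {info_nats / jnp.log(2.0):.2f} bits")
```

Under a model like this, candidate designs (different H) can be ranked by how many bits they transmit about the object.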
Space-time methods for dynamic reconstruction
While we think of imaging primarily as acquiring spatial information about an object, our world is inherently dynamic. Reconstruction algorithms that work with many time points jointly can combine information shared across time, improving reconstruction quality and making it possible to image dynamic objects with previously slow systems. We have implemented space-time algorithms based on NeRF-like implicit neural representations (sketched below), which can compactly represent large 2D/3D+time data cubes and impose flexible space-time priors while remaining tractable to optimize directly from captured measurements. We have also applied these methods to lensless imaging: T. Chien, R. Cao, R.L. Liu, L.A. Kabuli, and L. Waller, Opt. Express (2024).
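Below is a minimal sketch of the idea under stated assumptions (a tiny tanh MLP, a synthetic moving-blob target, and plain full-batch gradient descent; the published systems use more elaborate architectures, forward models, and optimizers): a coordinate network f(x, y, t) is fit directly to space-time data.

```python
import jax
import jax.numpy as jnp

def init_params(key, sizes=(3, 64, 64, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(m), jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, coords):                       # coords: (N, 3) = (x, y, t)
    h = coords
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze(-1)

# Synthetic dynamic scene: a Gaussian blob translating over time (assumed).
x, y, t = [g.ravel() for g in jnp.meshgrid(*[jnp.linspace(-1, 1, 16)] * 3)]
coords = jnp.stack([x, y, t], axis=-1)
target = jnp.exp(-((x - 0.5 * t) ** 2 + y ** 2) / 0.1)

loss = lambda p: jnp.mean((mlp(p, coords) - target) ** 2)
params = init_params(jax.random.PRNGKey(0))
for _ in range(200):                           # plain gradient descent for brevity
    grads = jax.grad(loss)(params)
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
print("final loss:", float(loss(params)))
```

Because the network is queried at continuous (x, y, t) coordinates, a few weights compactly represent the whole data cube, and a measurement model can be composed in front of `mlp` when fitting to raw captures.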
Machine learning for computational imaging
Machine learning and neural networks are powerful tools for optimizing both the hardware and algorithms of computational imaging systems. We focus mainly on physics-based machine learning, which incorporates what we know about the physics of a system instead of relying only on large training datasets.
End-to-end design of optics and algorithms
Through the magic of gradient descent, we can directly optimize the design of optical components in an imaging system jointly with its reconstruction algorithm. All we need is a differentiable forward model of how the image is formed and a training dataset. We have applied this to learn LED array patterns for Fourier ptychography (left) and phase mask designs for single-shot 3D microscopy (right).
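The following is a hedged toy sketch of the recipe (not the models from either project): learnable per-measurement illumination weights `w` stand in for the hardware design, a learned linear map `R` stands in for the reconstruction algorithm, and both are updated jointly by differentiating the reconstruction error through the forward model. All sizes and the synthetic training set are assumptions.

```python
import jax
import jax.numpy as jnp

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
n_obj, n_meas, n_train = 32, 16, 64
A = jax.random.normal(k1, (n_meas, n_obj)) / jnp.sqrt(n_obj)  # fixed toy optics
X = jax.random.normal(k2, (n_train, n_obj))                   # synthetic objects

def forward(w, x):              # "hardware": per-measurement weights (e.g. LED powers)
    return w * (A @ x)          # differentiable image-formation model

def loss(params):
    w, R = params
    Y = jax.vmap(lambda x: forward(w, x))(X)       # simulate measurements
    Xhat = jax.vmap(lambda y: R @ y)(Y)            # "algorithm": linear reconstruction
    return jnp.mean((Xhat - X) ** 2)

params = (jnp.ones(n_meas), 0.01 * jax.random.normal(k3, (n_obj, n_meas)))
for _ in range(500):            # joint gradient descent on optics + reconstruction
    g = jax.grad(loss)(params)
    params = jax.tree_util.tree_map(lambda p, dp: p - 0.05 * dp, params, g)
print("trained loss:", float(loss(params)))
```

Swapping the toy pieces for a physical forward model (LED patterns, phase masks) and a deep reconstruction network gives end-to-end designs of the kind described above.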
Data-driven adaptive microscopy
Using physics-based machine learning, microscopes can adaptively image samples based on feedback from the sample itself.
Single-shot autofocus (a toy sketch follows below): With only one or a few added LEDs as an illumination source, a physics-based neural network can focus a microscope from a single image of the sample. [Tutorial] H. Pinkard, Z. Phillips, A. Babakhani, D.A. Fletcher, and L. Waller, Optica (2019).
Learned adaptive illumination multiphoton microscopy: H. Pinkard, H. Baghdassarian, A. Mujal, et al., Nat. Commun. (2021).
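Here is a minimal sketch in the spirit of the single-shot autofocus idea, with loud assumptions: a toy angular-spectrum defocus model, a single off-axis plane-wave "LED", hand-picked spectral-band features, and a least-squares regressor standing in for the paper's physics-based neural network.

```python
import jax
import jax.numpy as jnp

n = 64
fx = jnp.fft.fftfreq(n)
FX, FY = jnp.meshgrid(fx, fx)
F2 = FX ** 2 + FY ** 2

def measure(obj, dz):
    # Off-axis illumination makes the +dz and -dz images differ, so a single
    # image can encode the defocus direction (toy tilt and kernel, assumed).
    ramp = jnp.exp(2j * jnp.pi * 5 * jnp.arange(n) / n)
    field = obj * ramp[None, :]
    H = jnp.exp(-1j * 100.0 * dz * F2)              # toy defocus kernel
    return jnp.abs(jnp.fft.ifft2(jnp.fft.fft2(field) * H)) ** 2

def features(img):                                  # radial log-spectrum bands
    P = jnp.log1p(jnp.abs(jnp.fft.fft2(img)) ** 2)
    edges = jnp.linspace(0.0, float(F2.max()), 9)
    return jnp.array([P[(F2 >= lo) & (F2 < hi)].mean()
                      for lo, hi in zip(edges[:-1], edges[1:])])

obj = jax.random.uniform(jax.random.PRNGKey(0), (n, n))
dzs = jnp.linspace(-1.0, 1.0, 21)
Phi = jnp.stack([features(measure(obj, dz)) for dz in dzs])
Phi = jnp.concatenate([Phi, jnp.ones((21, 1))], axis=1)
w, *_ = jnp.linalg.lstsq(Phi, dzs)                  # defocus from one image's features
print("max fit residual:", float(jnp.abs(Phi @ w - dzs).max()))
```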
Lensless imaging and DiffuserCam
Traditional cameras use lenses to focus light into a sharp image. DiffuserCam is a lensless camera that replaces the lens with a piece of bumpy plastic, called a diffuser, placed in front of a bare image sensor (left). Since there are no other elements in the system, the camera is cheap, compact, and easy to build. We've demonstrated that this simple system can be used for both 2D photography and 3D image reconstruction from a single 2D acquisition (right).
Check out the project page for more information, or build your own DiffuserCam with our tutorial. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, Optica (2018).
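A hedged sketch of the core reconstruction step (synthetic PSF and scene, and simple Tikhonov-regularized FFT deconvolution in place of the papers' iterative solvers): the sensor image is modeled as the scene convolved with the diffuser's caustic PSF, then deconvolved.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n = 128
psf = jax.random.uniform(key, (n, n)) ** 8                 # toy caustic-like PSF
psf = psf / psf.sum()
scene = jnp.zeros((n, n)).at[40, 40].set(1.0).at[80, 90].set(0.5)  # two point sources

Hf = jnp.fft.fft2(jnp.fft.ifftshift(psf))
meas = jnp.real(jnp.fft.ifft2(Hf * jnp.fft.fft2(scene)))   # b = h * x (convolution)

lam = 1e-3                                                 # Tikhonov weight (assumed)
xhat = jnp.real(jnp.fft.ifft2(jnp.conj(Hf) * jnp.fft.fft2(meas)
                              / (jnp.abs(Hf) ** 2 + lam)))
print("brightest recovered pixel:", jnp.unravel_index(jnp.argmax(xhat), xhat.shape))
```

The published 3D reconstructions extend this idea with depth-dependent PSFs and sparsity-based iterative solvers rather than a single filtering step.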
Lensless single-shot video and hyperspectral imaging
Because DiffuserCam randomly multiplexes input light onto the measurement, we can use compressed sensing to solve underdetermined problems: from a single 2D camera capture, we can recover extra dimensions of information, such as wavelength and time.
Single-shot hyperspectral: Spectral DiffuserCam combines a spectral filter array and a diffuser to capture single-shot hyperspectral volumes with 64 spectral bands in a single image (right). Check out the project page for more information.
Single-shot video: For time, a diffuser encodes different time points by leveraging the camera's rolling shutter, enabling recovery of 40 video frames at over 4,500 frames per second from a single image (left). N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, ICCP (2019).
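The underdetermined recovery itself can be sketched in a few lines (assumed sizes, and a generic ISTA/l1 solver in place of the papers' reconstruction pipelines): a random multiplexing matrix stands in for the diffuser, and a sparse vector stands in for the spectral or temporal stack.

```python
import jax
import jax.numpy as jnp

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
n, m, k_sparse = 256, 96, 8                        # more unknowns than measurements
A = jax.random.normal(k1, (m, n)) / jnp.sqrt(m)    # random multiplexing (diffuser-like)
support = jax.random.choice(k2, n, (k_sparse,), replace=False)
x_true = jnp.zeros(n).at[support].set(1.0)         # sparse "hyperspectral/video" signal
b = A @ x_true                                     # analog of one multiplexed capture

soft = lambda v, t: jnp.sign(v) * jnp.maximum(jnp.abs(v) - t, 0.0)
step = 1.0 / jnp.linalg.norm(A, ord=2) ** 2        # gradient step from spectral norm
x = jnp.zeros(n)
for _ in range(300):                               # ISTA: gradient step + shrinkage
    x = soft(x - step * (A.T @ (A @ x - b)), 1e-3 * step)
print("support recovered:", bool(jnp.array_equal(x > 0.5, x_true > 0.5)))
```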
Computational imaging at non-visible wavelengths: electron microscopy, X-ray, EUV
While many of the systems we work on operate at visible wavelengths, the same principles can be applied to other imaging systems, expanding our reach to new scientific domains and applications. To work on these systems, which we don't build ourselves, we rely on close collaborations with institutions like Lawrence Berkeley National Laboratory.
Electron microscopy for materials science
Electron microscopy can achieve resolution on the scale of individual atoms, allowing precise determination of the atomic structure of unknown materials.
Multiple scattering models for highly scattering samples (right): D. Ren, C. Ophus, M. Chen, and L. Waller, Ultramicroscopy (2020).
Space-time reconstruction for unwanted dynamics: T. Chien, C. Ophus, and L. Waller, NeurIPS Workshop on Deep Learning and Inverse Problems (2023).
X-ray and extreme ultraviolet (EUV) for microscopy and lithography
Many applications are pushing toward shorter wavelengths because they allow higher resolution; in lithography, for example, EUV can pattern semiconductors at higher density. However, these systems pose significant engineering challenges in error tolerance, aberration correction, physical constraints on optical design, and more.
Imaging through scattering
Most microscopy relies on a single-scattering assumption: light traveling through the sample bounces off or interacts with just one part of the sample before being imaged onto the camera. This assumption breaks down when we image deeper into thicker, more three-dimensional samples, making it much more challenging to image deep into the brain and other biological tissues.
More accurate multiple scattering models (see the sketch below): 3D phase imaging of multiple-scattering samples (above right): S. Chowdhury, M. Chen, R. Eckert, D. Ren, F. Wu, N. Repina, and L. Waller, Optica (2019).
Combining 3D phase imaging with fluorescence to correct aberrations and scattering: Y. Xue, D. Ren, and L. Waller, Biomedical Optics Express (2022).
For optogenetics: Y. Xue, L. Waller, H. Adesnik, and N. Pegard, eLife (2022).
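One common way to move beyond single scattering is a multislice beam-propagation forward model, which alternates thin-slice scattering with free-space propagation. The sketch below uses assumed grid, wavelength, and index-contrast values rather than anything from the cited papers.

```python
import jax
import jax.numpy as jnp

n, n_slices = 128, 32
dx, dz, wavelength = 0.1, 0.2, 0.5                 # microns (toy values)
k0 = 2 * jnp.pi / wavelength

fx = jnp.fft.fftfreq(n, d=dx)
FX, FY = jnp.meshgrid(fx, fx)
kz2 = k0 ** 2 - (2 * jnp.pi) ** 2 * (FX ** 2 + FY ** 2)
kz = jnp.sqrt(jnp.maximum(kz2, 0.0))               # evanescent components clamped
prop = jnp.exp(1j * kz * dz)                       # angular-spectrum slice propagator

key = jax.random.PRNGKey(0)
delta_n = 0.02 * jax.random.uniform(key, (n_slices, n, n))  # toy index contrast

field = jnp.ones((n, n), dtype=jnp.complex64)      # incident plane wave
for s in range(n_slices):
    field = field * jnp.exp(1j * k0 * delta_n[s] * dz)   # scatter within slice s
    field = jnp.fft.ifft2(jnp.fft.fft2(field) * prop)    # propagate to next slice
intensity = jnp.abs(field) ** 2                    # camera records |field|^2
print("output intensity mean/std:", float(intensity.mean()), float(intensity.std()))
```

Because every step is differentiable, a model like this can sit inside a gradient-based solver that fits the 3D index `delta_n` to measured intensities.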
Phase imaging
Light is a wave with both amplitude and phase, but our eyes and cameras only measure intensity, so they cannot capture phase directly. However, phase is important, especially in biological imaging, where cells are typically transparent (i.e. invisible) but impose phase delays (left). 3D phase imaging also reveals volumetric refractive index, which is helpful for studying thick transparent samples such as embryos (right). Our phase imaging systems focus on simple experimental setups and efficient algorithms.
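One classic route to quantitative phase (a minimal sketch under toy assumptions, not a description of any single lab system) is the transport-of-intensity equation: for a pure-phase sample under uniform illumination, the through-focus intensity derivative satisfies dI/dz ≈ -(I0/k)∇²φ, so φ follows from a regularized FFT Poisson solve.

```python
import jax
import jax.numpy as jnp

n, k = 128, 2 * jnp.pi / 0.5                        # grid size; wavenumber (λ = 0.5, toy)
x = jnp.linspace(-1, 1, n)
X, Y = jnp.meshgrid(x, x)
phase_true = jnp.exp(-(X ** 2 + Y ** 2) / 0.1)      # transparent "cell" phase bump

f = jnp.fft.fftfreq(n, d=float(x[1] - x[0]))
FX, FY = jnp.meshgrid(f, f)
lap = -(2 * jnp.pi) ** 2 * (FX ** 2 + FY ** 2)      # Fourier symbol of the Laplacian

# Forward TIE (I0 = 1): intensity derivative produced by this phase.
dIdz = -(1.0 / k) * jnp.real(jnp.fft.ifft2(lap * jnp.fft.fft2(phase_true)))

# Inverse TIE: regularized Poisson solve recovers the phase from dI/dz.
eps = 1e-6                                          # keeps the DC term finite
phase_rec = jnp.real(jnp.fft.ifft2(jnp.fft.fft2(-k * dIdz) / (lap - eps)))
phase_rec = phase_rec - phase_rec.mean()            # phase is defined up to a constant
err = jnp.abs(phase_rec - (phase_true - phase_true.mean())).max()
print("max phase error:", float(err))
```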
LED array microscope
We work on a microscope hack in which the lamp of a regular microscope is replaced with an LED array, enabling many new capabilities: brightfield, darkfield, phase contrast, super-resolution, and 3D phase imaging, all through computational illumination tricks. L. Tian, X. Li, K. Ramchandran, and L. Waller, Biomedical Optics Express (2014).
We used this system to create the Berkeley Single Cell Computational Microscopy (BSCCM) dataset (right), which contains over 12,000,000 images of 400,000 individual white blood cells under different LED array illumination patterns, paired with a variety of fluorescence measurements. H. Pinkard, C. Liu, F. Nyatigo, D. Fletcher, and L. Waller (2024).
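A hedged sketch of the computational-illumination principle (toy grid, NA, and sample; not the calibrated models behind the papers above): each LED acts as a tilted plane wave, LEDs are mutually incoherent so their images add in intensity, and comparing left- versus right-half illumination yields a differential phase contrast (DPC) image that makes a transparent phase sample visible.

```python
import jax
import jax.numpy as jnp

n = 128
x = jnp.linspace(-1, 1, n)
X, Y = jnp.meshgrid(x, x)
sample = jnp.exp(1j * 0.5 * jnp.exp(-(X ** 2 + Y ** 2) / 0.05))  # pure-phase "cell"

f = jnp.fft.fftfreq(n)
FX, FY = jnp.meshgrid(f, f)
pupil = ((FX ** 2 + FY ** 2) < 0.15 ** 2).astype(jnp.float32)    # objective NA cutoff

xpix = jnp.arange(n)
def led_image(u0):                         # coherent image under one tilted "LED"
    field = sample * jnp.exp(2j * jnp.pi * u0 * xpix)[None, :]
    return jnp.abs(jnp.fft.ifft2(jnp.fft.fft2(field) * pupil)) ** 2

leds = jnp.linspace(0.02, 0.12, 4)         # oblique illumination angles inside the NA
I_right = sum(led_image(u) for u in leds)  # incoherent sum over LEDs
I_left = sum(led_image(-u) for u in leds)
dpc = (I_left - I_right) / (I_left + I_right)      # phase-gradient contrast image
print("DPC contrast range:", float(dpc.min()), float(dpc.max()))
```

Brightfield, darkfield, and the other modes fall out of the same model simply by choosing which LEDs are on.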
Funding: