Our research spans optics, signal processing, optimization, and machine learning, which we combine to design and optimize the hardware and software of computational imaging systems.
Information theory for imaging system design
We apply information theory, a mathematical framework originally designed for noisy communication systems, to imaging systems, which “communicate” information about the objects being imaged. Check out the project page for more information.
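As a toy illustration of this information-theoretic view (not the lab's actual estimator): for a Gaussian-distributed pixel value measured under additive Gaussian noise, the information a single measurement "communicates" has a familiar closed form. The function name and the variances below are illustrative.

```python
import numpy as np

def gaussian_mi_per_pixel(signal_var, noise_var):
    """Mutual information (bits) between a Gaussian-distributed pixel
    value and its measurement under additive Gaussian noise."""
    return 0.5 * np.log2(1.0 + signal_var / noise_var)

# Higher SNR communicates more bits, but with diminishing returns:
# doubling the SNR adds at most half a bit per measurement.
print(gaussian_mi_per_pixel(1.0, 1.0))   # SNR = 1  -> 0.5 bits
print(gaussian_mi_per_pixel(4.0, 1.0))   # SNR = 4  -> ~1.16 bits
```

Comparing such quantities across candidate designs is one way to rank imaging systems before building them.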
Space-time methods for dynamic reconstruction
While we think of imaging primarily as acquiring spatial information about an object, our world is inherently dynamic. Reconstruction algorithms that process many time points jointly can combine information shared across time, improving reconstruction quality and making it possible to image dynamic objects with previously slow systems. We have implemented space-time algorithms based on NeRF-like implicit neural representations, which can compactly represent large 2D/3D+time data cubes and impose flexible space-time priors while remaining tractable to optimize directly from captured measurements. For an application to lensless imaging, see T. Chien, R. Cao, R.L. Liu, L.A. Kabuli, and L. Waller, Opt. Express (2024).
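A minimal sketch of the implicit-representation idea, with hypothetical layer sizes and random (untrained) weights — in practice the weights are fit by gradient descent so that the representation, pushed through the imaging forward model, matches the captured measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier-feature projection: fixed once, shared by all queries.
N_FREQS = 64
B = rng.normal(0.0, 10.0, size=(3, N_FREQS))   # (x, y, t) -> 64 frequencies

def encode(coords):
    """Lift (x, y, t) coordinates so a small MLP can fit fine detail."""
    proj = 2 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)

# A tiny 2-layer MLP: features -> hidden -> scalar intensity.
W1 = rng.normal(0, 0.1, size=(2 * N_FREQS, 128))
W2 = rng.normal(0, 0.1, size=(128, 1))

def mlp(coords):
    h = np.maximum(encode(coords) @ W1, 0.0)   # ReLU hidden layer
    return h @ W2                              # predicted intensity

# The representation is continuous: query any space-time point directly,
# rather than storing the full 2D/3D+time data cube.
pts = rng.uniform(0, 1, size=(1000, 3))        # (x, y, t) in [0, 1]
vals = mlp(pts)
print(vals.shape)
```

The compactness comes from the weights being far smaller than the data cube they describe, and smoothness priors in space-time fall out of the architecture itself.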
Machine learning for computational imaging
Machine learning and neural networks are powerful tools for optimizing both the hardware and algorithms of computational imaging systems. We focus mainly on physics-based machine learning, which incorporates what we know about the physics of the system instead of relying only on large training datasets.
End-to-end design of optics and algorithms
Through the magic of gradient descent, we can directly optimize the design of optical components in an imaging system jointly with its reconstruction algorithm. All we need is a differentiable forward model of how the image is formed and a training dataset. We have applied this approach to learning LED array patterns for Fourier ptychography (left) and to phase mask design for single-shot 3D microscopy (right).
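The idea can be sketched with a purely linear toy system: a learnable element-wise "mask" w applied after a fixed optical operator A, trained jointly with a linear reconstructor R by gradient descent. All names, sizes, and step sizes below are illustrative, not the published models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 32, 16, 200                       # scene size, sensor size, # scenes

A = rng.normal(size=(m, n)) / np.sqrt(n)    # fixed optics (e.g. blur + sampling)
X = rng.normal(size=(n, k))                 # training scenes, one per column

w = np.ones(m)                              # learnable mask / illumination weights
R = rng.normal(size=(n, m)) * 0.01          # learnable linear reconstructor
lr = 0.5

def loss(w, R):
    E = X - R @ (w[:, None] * (A @ X))      # reconstruction error on the batch
    return np.mean(E ** 2)

loss0 = loss(w, R)
for _ in range(600):
    AX = A @ X
    MX = w[:, None] * AX                    # coded measurements
    E = X - R @ MX
    grad_R = -2 * E @ MX.T / (n * k)        # analytic gradients of the loss
    grad_w = -2 * np.sum((R.T @ E) * AX, axis=1) / (n * k)
    R -= lr * grad_R
    w -= lr * grad_w
print(loss0, loss(w, R))                    # loss drops as optics and algorithm co-adapt
```

In the real systems the forward model is a physical simulation (illumination, propagation, sensor), and automatic differentiation replaces the hand-derived gradients above.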
Data-driven adaptive microscopy
Using physics-based machine learning, microscopes can adaptively image samples based on feedback from the sample itself.
Single-shot autofocus: With only one or a few additional LEDs as an illumination source, a physics-based neural network can focus a microscope from a single image of the sample. [Tutorial] H. Pinkard, Z. Phillips, A. Babakhani, D.A. Fletcher, and L. Waller, Optica (2019).
Learned adaptive illumination multiphoton microscopy: H. Pinkard, H. Baghdassarian, A. Mujal, et al., Nat. Commun. (2021).
Lensless imaging and DiffuserCam
Traditional cameras use lenses to focus light into a sharp image. DiffuserCam is a lensless camera that replaces the lenses with a piece of bumpy plastic, called a diffuser, placed in front of a bare image sensor (left). Since there are no other elements in the system, the camera is cheap, compact, and easy to build. We’ve demonstrated that this simple system can perform both 2D photography and 3D image reconstruction from a single acquisition (right).
Check out the project page or the Build Your Own DiffuserCam tutorial for more information. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, Optica (2018).
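A minimal simulation of the lensless forward model and a simple reconstruction. The synthetic point-scatter PSF stands in for the caustic pattern a real diffuser casts on the sensor, and a Wiener (regularized inverse) filter stands in for the iterative solvers used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Caustic-like PSF: a scatter of bright points across the sensor.
psf = np.zeros((N, N))
psf[rng.integers(0, N, 80), rng.integers(0, N, 80)] = 1.0
psf /= psf.sum()

scene = np.zeros((N, N))
scene[20:28, 30:40] = 1.0                        # simple synthetic scene

# Forward model: the sensor records the scene convolved with the PSF,
# so the raw measurement looks nothing like the scene.
H = np.fft.fft2(psf)
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
meas += 1e-4 * rng.normal(size=meas.shape)       # sensor noise

# Wiener deconvolution: regularized inverse filter.
eps = 1e-3
recon = np.real(np.fft.ifft2(np.fft.fft2(meas) * np.conj(H) /
                             (np.abs(H) ** 2 + eps)))
print(np.linalg.norm(recon - scene) / np.linalg.norm(scene))
```

Because every scene point illuminates many sensor pixels, the same multiplexing that scrambles the measurement is what allows 3D and compressed recovery from a single shot.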
Lensless single-shot video and hyperspectral imaging
We’ve extended DiffuserCam to two new dimensions, wavelength and time, through compressed sensing.
Single-shot hyperspectral: Spectral DiffuserCam combines a spectral filter array with a diffuser to capture hyperspectral volumes with 64 spectral bands from a single image, at higher resolution than would be possible with a comparable lens-based system (right). Check out the project page for more information.
Single-shot video: For time, the diffuser encodes different time points by leveraging the camera’s rolling shutter, enabling recovery of 40 video frames at over 4,500 frames per second from a single image (left). N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, ICCP (2019).
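The shutter-timing part of the video forward model can be sketched as follows (all sizes are illustrative; the diffuser's spatial multiplexing, which is what makes the inversion possible via compressed sensing, is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_rows, n_cols, n_frames = 32, 32, 16

video = rng.uniform(size=(n_frames, n_rows, n_cols))   # dynamic scene

# Rolling shutter: each sensor row is exposed during a short time window
# that slides down the sensor, so one readout mixes many time points.
exposure = 4                                           # frames per row
capture = np.zeros((n_rows, n_cols))
for r in range(n_rows):
    t0 = int(r / n_rows * (n_frames - exposure))
    capture[r] = video[t0:t0 + exposure, r].mean(axis=0)

print(capture.shape)   # a single 2D image encoding 16 time points
```

Recovering the video then means inverting this row-time mixing jointly with the diffuser's PSF, using sparsity priors to make the underdetermined problem solvable.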
Structured illumination & imaging with scattering
We adapt the existing framework for structured-illumination (SI) super-resolution microscopy to SI imaging with unknown patterns. This enables super-resolution fluorescence reconstruction of biological samples illuminated with unknown, uncalibrated patterns, with applications to imaging through aberrations or turbid media. It also enables high-throughput fluorescence imaging: the image at right shows a high-resolution fluorescence image across a large field of view. We further develop new scattering models and apply them to reconstruct increasingly complex samples.
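The frequency mixing that structured illumination exploits is easy to demonstrate in one dimension: multiplying object detail at frequency f_obj by a pattern at f_pat creates a difference component at f_obj − f_pat, which can fall inside a low-pass system's passband even when f_obj itself does not. With unknown patterns, f_pat must be estimated jointly with the reconstruction.

```python
import numpy as np

n = 512
x = np.arange(n) / n
f_obj, f_pat = 60, 50                          # object and pattern frequencies

sample = 1 + np.cos(2 * np.pi * f_obj * x)     # fine object detail
pattern = 1 + np.cos(2 * np.pi * f_pat * x)    # structured illumination

# The product's spectrum contains the original frequencies plus the
# sum and difference components created by the multiplication.
spectrum = np.abs(np.fft.rfft(sample * pattern))
peaks = np.flatnonzero(spectrum > 0.05 * spectrum.max())
print(peaks)   # components at 0, 50, 60, and the mixed 10 and 110
```

A microscope whose cutoff sits between 10 and 60 still records the mixed component at 10, and demodulation shifts that information back to recover detail beyond the cutoff.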
Phase imaging
Light is a wave, having both an amplitude and a phase. Our eyes and cameras, however, only measure intensity, so they cannot capture phase directly. Phase is especially important in biological imaging, where cells are typically transparent (i.e. nearly invisible) yet impose phase delays. 3D phase imaging also reveals volumetric refractive index, which is helpful for studying thick transparent samples such as embryos. When we acquire quantitative phase information, we recover important shape and density maps. We develop phase imaging methods that pair simple experimental setups with efficient algorithms, and which can be implemented in visible-light, X-ray, and neutron imaging.
Phase from chromatic aberrations: L. Waller, S. Kou, C. Sheppard, G. Barbastathis, Opt. Express 18(22), 22817-22825 (2010).
Phase from through-focus: L. Waller, M. Tsang, S. Ponda, G. Barbastathis, Opt. Express 19 (2011). Z. Jingshan, J. Dauwels, M.A. Vasquez, L. Waller, Opt. Express 21(15), 18125-18137 (2013). Z. Jingshan, L. Tian, J. Dauwels, L. Waller, Biomed. Opt. Express 6(1), 257-265 (2014).
Differential phase contrast: Z.F. Phillips, M. Chen, L. Waller, PLOS ONE 12(2), e0171228 (2017). M. Kellman, M. Chen, Z.F. Phillips, M. Lustig, L. Waller, Biomed. Opt. Express 9(11), 5456-5466 (2018). M. Chen, Z.F. Phillips, L. Waller, Opt. Express 26(25), 32888-32899 (2018).
3D phase imaging: L. Tian and L. Waller, Optica 2, 104-111 (2015). M. Chen, L. Tian, and L. Waller, Biomed. Opt. Express 7(10), 3940-3950 (2016).
Applications in lithography: A. Shanker, M. Sczyrba, B. Connolly, F. Kalk, A. Neureuther, L. Waller, SPIE Photomask Technology (2013). A. Shanker, M. Sczyrba, B. Connolly, F. Kalk, A. Neureuther, L. Waller, SPIE Advanced Lithography, paper 9052-49 (2014). R. Claus, A. Neureuther, P. Naulleau, L. Waller, Opt. Express 23(20), 26672-26682 (2015).
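For the through-focus approach, the key relation is the transport-of-intensity equation: for a paraxial beam of wavelength $\lambda$ with intensity $I$ and phase $\phi$, propagation along the optical axis $z$ obeys

\[
\frac{\partial I}{\partial z} \;=\; -\frac{\lambda}{2\pi}\,\nabla_{\!\perp}\!\cdot\!\left(I\,\nabla_{\!\perp}\phi\right),
\]

so a finite-difference of two images captured at nearby focus settings estimates $\partial I/\partial z$, and for approximately uniform intensity $I \approx I_0$ the phase follows from a single Poisson solve:

\[
\phi \;=\; -\frac{2\pi}{\lambda I_0}\,\nabla_{\!\perp}^{-2}\,\frac{\partial I}{\partial z}.
\]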
Fourier ptychography algorithms
The algorithms behind achieving large field-of-view, high-resolution imaging are rooted in phase retrieval. We use large-scale nonlinear, non-convex optimization, much like training neural networks in machine learning, but with new challenges specific to imaging applications. R. Eckert, Z.F. Phillips, L. Waller, Applied Optics 57(19), 5434-5442 (2018).
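A minimal sketch of the phase-retrieval core, using random Gaussian measurements, spectral initialization, and a simple alternating-projection update. The published algorithms operate on Fourier ptychographic data and differ in detail; sizes and iteration counts here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 160                        # unknowns, intensity-only measurements

x_true = rng.normal(size=n) + 1j * rng.normal(size=n)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
y = np.abs(A @ x_true)                # phases of A @ x_true are lost

# Spectral initialization: leading eigenvector of A^H diag(y^2) A,
# scaled using E[y^2] ~ ||x||^2 for this measurement ensemble.
Y = (A.conj().T * (y ** 2)) @ A / m
vals, vecs = np.linalg.eigh(Y)
x = vecs[:, -1] * np.sqrt(np.mean(y ** 2))

Apinv = np.linalg.pinv(A)
for _ in range(200):
    # Keep measured magnitudes and current phase estimates,
    # then project back onto the range of A.
    x = Apinv @ (y * np.exp(1j * np.angle(A @ x)))

# Error up to the inherent global-phase ambiguity.
c = np.vdot(x, x_true)
err = np.linalg.norm(x * c / abs(c) - x_true) / np.linalg.norm(x_true)
print(err)
```

The non-convexity is why initialization matters: a poor starting point can stall in a spurious minimum, which is exactly the robustness question the lab's algorithmic work addresses.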
LED array microscope
We work on a microscope hack in which the lamp of a regular microscope is replaced with an LED array, enabling many new capabilities: brightfield, darkfield, phase contrast, super-resolution, and 3D phase imaging, all via computational illumination. L. Tian, X. Li, K. Ramchandran, L. Waller, Biomed. Opt. Express 5(7), 2376-2389 (2014). We used this system to create the Berkeley Single Cell Computational Microscopy (BSCCM) dataset (right), which contains over 12,000,000 images of 400,000 individual white blood cells under different LED array illumination patterns, paired with a variety of fluorescence measurements. H. Pinkard, C. Liu, F. Nyatigo, D. Fletcher, L. Waller, (2024).
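A sketch of the quantitative differential-phase-contrast (DPC) inversion that half-circle LED patterns enable, with made-up antisymmetric transfer functions standing in for the real ones (which are derived from the LED source and pupil shapes) and noise-free data:

```python
import numpy as np

N = 64
phase = np.zeros((N, N))
phase[24:40, 24:40] = 0.5                  # synthetic phase object (radians)

# Stand-in phase transfer functions for two illumination axes
# (top/bottom and left/right half-circles): antisymmetric, band-limited.
fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]
env = np.exp(-(fx ** 2 + fy ** 2) / (2 * 0.2 ** 2))
Hy, Hx = 1j * fy * env, 1j * fx * env

# DPC measurements: normalized differences of opposing half-circle images,
# modeled here as linear filterings of the phase.
P = np.fft.fft2(phase)
dpc_y = np.real(np.fft.ifft2(Hy * P))
dpc_x = np.real(np.fft.ifft2(Hx * P))

# One-step Tikhonov inversion combining both axes recovers quantitative phase.
beta = 1e-5
num = np.conj(Hy) * np.fft.fft2(dpc_y) + np.conj(Hx) * np.fft.fft2(dpc_x)
recon = np.real(np.fft.ifft2(num / (np.abs(Hy) ** 2 +
                                    np.abs(Hx) ** 2 + beta)))
```

Because each transfer function vanishes along one frequency axis, combining measurements from two illumination axes is what makes the inversion well posed (up to the unrecoverable DC offset).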