Publications by Grace Kuo
2021
Kristina Monakhova; Vi Tran; Grace Kuo; Laura Waller
Untrained networks for compressive lensless photography Journal Article
In: Opt. Express, vol. 29, no. 13, pp. 20913–20929, 2021.
@article{Monakhova:21,
title = {Untrained networks for compressive lensless photography},
author = {Kristina Monakhova and Vi Tran and Grace Kuo and Laura Waller},
url = {http://www.opticsexpress.org/abstract.cfm?URI=oe-29-13-20913},
doi = {10.1364/OE.424075},
year = {2021},
date = {2021-06-01},
journal = {Opt. Express},
volume = {29},
number = {13},
pages = {20913--20929},
publisher = {OSA},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2020
Grace Kuo
Exploiting Randomness in Computational Cameras and Displays PhD Thesis
EECS Department, University of California, Berkeley, 2020.
@phdthesis{Kuo:EECS-2020-218,
title = {Exploiting Randomness in Computational Cameras and Displays},
author = {Grace Kuo},
url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2020/EECS-2020-218.html},
year = {2020},
date = {2020-12-01},
number = {UCB/EECS-2020-218},
school = {EECS Department, University of California, Berkeley},
abstract = {Despite its desirability, capturing and displaying higher dimensional content is still a novelty since image sensors and display panels are inherently 2D. A popular option is to use scanning mechanisms to sequentially capture 3D data or display content at a variety of depths. This approach is akin to directly measuring (or displaying) the content of interest, which has low computational cost but sacrifices temporal resolution and requires complex physical hardware with moving parts. The exacting specifications on the hardware make it challenging to miniaturize these optical systems for demanding applications such as neural imaging in animals or head-mounted augmented reality displays.
In this dissertation, I propose moving the burden of 3D capture from hardware into computation by replacing the physical scanning mechanisms with a simple static diffuser (a transparent optical element with pseudorandom thickness) and formulating image recovery as an optimization problem. First, I highlight the versatility of the diffuser by showing that it can replace a lens to create an easy-to-assemble, compact camera that is robust to missing pixels; although the raw data is not intelligible by a human, it contains information that we extract with optimization using an efficient physically-based model of the optics. Next, I show that the randomness of the diffuser makes the system well-suited for compressed sensing; we leverage this to recover 3D volumes from a single acquisition of raw data. Additionally, I extend our lensless 3D imaging system to fluorescence microscopy and introduce a new diffuser design with improved noise performance. Finally, I show how incorporating the diffuser in a 3D holographic display expands the field-of-view, and I demonstrate state-of-the-art performance by using perceptually inspired loss functions when optimizing the display panel pattern. These results show how randomness in the optical system in conjunction with optimization-based algorithms can both improve the physical form factor and expand the capabilities of cameras, microscopes, and displays.},
keywords = {},
pubstate = {published},
tppubtype = {phdthesis}
}
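The dissertation abstract above repeatedly frames image recovery as a regularized optimization over a calibrated diffuser forward model. As a rough illustration of that idea (a sketch under simplifying assumptions, not the dissertation's released code), the 2D case can be modeled as convolution of the scene with the caustic PSF and solved with ISTA; the sensor crop used in the real systems is omitted here, and all file names and parameters are placeholders.

import numpy as np

def cconv(a, b):
    # Circular 2D convolution via FFT (a and b on the same pixel grid)
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def ccorr(a, b):
    # Adjoint of convolution with a (circular cross-correlation)
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

def ista_reconstruct(b, h, tau=1e-2, step=1e-1, n_iter=200):
    # Solve min_x 0.5*||h * x - b||_2^2 + tau*||x||_1 by proximal gradient (ISTA)
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x = x - step * ccorr(h, cconv(h, x) - b)                    # gradient step on the data term
        x = np.sign(x) * np.maximum(np.abs(x) - step * tau, 0.0)    # soft threshold (prox of the l1 term)
    return x

# Hypothetical usage: PSF and measurement captured on the same pixel grid
# h = np.load("diffuser_psf.npy"); b = np.load("raw_measurement.npy")
# scene = ista_reconstruct(b, h)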
Fanglin Linda Liu; Grace Kuo; Nick Antipa; Kyrollos Yanny; Laura Waller
Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser Journal Article
In: Opt. Express, vol. 28, no. 20, pp. 28969–28986, 2020.
@article{LindaLiu:20,
title = {Fourier DiffuserScope: single-shot 3D Fourier light field microscopy with a diffuser},
author = {Fanglin Linda Liu and Grace Kuo and Nick Antipa and Kyrollos Yanny and Laura Waller},
url = {http://www.opticsexpress.org/abstract.cfm?URI=oe-28-20-28969},
doi = {10.1364/OE.400876},
year = {2020},
date = {2020-09-01},
journal = {Opt. Express},
volume = {28},
number = {20},
pages = {28969--28986},
publisher = {OSA},
abstract = {Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate the artifacts and provide an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance based on diffuser parameters, we, for the first time, establish a theoretical framework and design guidelines, which are verified by numerical simulations, and then build an experimental system that achieves < 3 µm lateral and 4 µm axial resolution over a 1000 × 1000 × 280 µm³ volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
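A compact way to state the reconstruction step described in the abstract above (notation is mine, following the depth-wise convolutional forward model the abstract refers to) is:

b = C\!\Big( \sum_{z} h_z * x_z \Big),
\qquad
\hat{x} = \arg\min_{x \ge 0} \; \tfrac{1}{2}\, \Big\| b - C\!\Big( \sum_{z} h_z * x_z \Big) \Big\|_2^2 + \tau \, \| x \|_1

where b is the raw sensor image, x_z the sample intensity at depth plane z, h_z the calibrated diffuser PSF for that depth, * denotes 2D convolution, C is the sensor crop, and τ weights the sparsity prior; the non-negativity constraint reflects fluorescence intensities.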
Grace Kuo; Fanglin Linda Liu; Irene Grossrubatscher; Ren Ng; Laura Waller
On-chip fluorescence microscopy with a random microlens diffuser Journal Article
In: Optics Express, vol. 28, no. 6, pp. 8384–8399, 2020.
@article{kuo2020chip,
title = {On-chip fluorescence microscopy with a random microlens diffuser},
author = { Grace Kuo and Fanglin Linda Liu and Irene Grossrubatscher and Ren Ng and Laura Waller},
url = {https://doi.org/10.1364/OE.382055},
doi = {10.1364/OE.382055},
year = {2020},
date = {2020-03-09},
journal = {Optics Express},
volume = {28},
number = {6},
pages = {8384--8399},
publisher = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Grace Kuo; Kristina Monakhova; Kyrollos Yanny; Ren Ng; Laura Waller
Spatially-varying microscope calibration from unstructured sparse inputs Inproceedings
In: Imaging and Applied Optics Congress, pp. CF4C.4, Optical Society of America, 2020.
@inproceedings{Kuo:20,
title = {Spatially-varying microscope calibration from unstructured sparse inputs},
author = {Grace Kuo and Kristina Monakhova and Kyrollos Yanny and Ren Ng and Laura Waller},
url = {http://www.osapublishing.org/abstract.cfm?URI=COSI-2020-CF4C.4},
year = {2020},
date = {2020-01-01},
booktitle = {Imaging and Applied Optics Congress},
journal = {Imaging and Applied Optics Congress},
pages = {CF4C.4},
publisher = {Optical Society of America},
abstract = {We propose a method based on blind deconvolution to calibrate the spatially-varying point spread functions of a coded-aperture microscope system. From easy-to- acquire measurements of unstructured fluorescent beads, we recover a spatially-varying forward model that outperforms prior approaches.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
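As a loose sketch of the calibration idea in the abstract above (simplified to a single shift-invariant PSF, whereas the paper recovers a spatially-varying model; all names, initializations, and step sizes here are my own assumptions), blind deconvolution of a sparse bead measurement can alternate between a sparsity-regularized update of the bead map and a non-negative update of the PSF:

import numpy as np

def cconv(a, b):
    # Circular 2D convolution via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def ccorr(a, b):
    # Adjoint of convolution with a (circular cross-correlation)
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

def blind_calibrate(y, n_outer=20, n_inner=30, tau=1e-2, step=1e-1):
    h = np.zeros_like(y, dtype=float); h[0, 0] = 1.0   # PSF initialized to a delta (FFT-origin convention)
    x = np.maximum(y, 0.0).astype(float)               # bead map initialized from the measurement
    for _ in range(n_outer):
        for _ in range(n_inner):                        # sparse, non-negative update of the bead map x
            x = x - step * ccorr(h, cconv(h, x) - y)
            x = np.maximum(x - step * tau, 0.0)
        for _ in range(n_inner):                        # projected-gradient update of the PSF h
            h = np.maximum(h - step * ccorr(x, cconv(x, h) - y), 0.0)
    return h, x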
Grace Kuo; Laura Waller; Ren Ng; Andrew Maimone
High Resolution Étendue Expansion for Holographic Displays Journal Article
In: ACM Trans. Graph., vol. 39, no. 4, 2020, ISSN: 0730-0301.
@article{10.1145/3386569.3392414,
title = {High Resolution Étendue Expansion for Holographic Displays},
author = {Grace Kuo and Laura Waller and Ren Ng and Andrew Maimone},
url = {https://doi.org/10.1145/3386569.3392414},
doi = {10.1145/3386569.3392414},
issn = {0730-0301},
year = {2020},
date = {2020-01-01},
journal = {ACM Trans. Graph.},
volume = {39},
number = {4},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {Holographic displays can create high quality 3D images while maintaining a small form factor suitable for head-mounted virtual and augmented reality systems. However, holographic displays have limited étendue based on the number of pixels in their spatial light modulators, creating a tradeoff between the eyebox size and the field-of-view. Scattering-based étendue expansion, in which coherent light is focused into an image after being scattered by a static mask, is a promising avenue to break this tradeoff. However, to date, this approach has been limited to very sparse content consisting of, for example, only tens of spots. In this work, we introduce new algorithms to scattering-based étendue expansion that support dense, photorealistic imagery at the native resolution of the spatial light modulator, offering up to a 20 dB improvement in peak signal to noise ratio over baseline methods. We propose spatial and frequency constraints to optimize performance for human perception, and performance is characterized both through simulation and a preliminary benchtop prototype. We further demonstrate the ability to generate content at multiple depths, and we provide a path for the miniaturization of our benchtop prototype into a sunglasses-like form factor.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
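To make the scattering-based forward model in the abstract above concrete, here is a deliberately simplified sketch (Gerchberg-Saxton-style alternating projections, not the perceptually weighted gradient optimization the paper describes): light from a phase-only SLM is scattered by a known static phase mask and Fourier-transforms to the image plane, and the SLM phase is iterated toward a target intensity. All function and variable names are assumptions of this sketch.

import numpy as np

def optimize_slm_phase(target, mask_phase, n_iter=200, seed=0):
    # target: desired image-plane intensity; mask_phase: calibrated phase of the static scattering mask
    rng = np.random.default_rng(seed)
    m = np.exp(1j * mask_phase)                          # unit-amplitude scattering mask
    amp = np.sqrt(np.maximum(target, 0.0))               # target image-plane amplitude
    phi = rng.uniform(0.0, 2.0 * np.pi, target.shape)    # random initial SLM phase
    for _ in range(n_iter):
        field = np.fft.fft2(m * np.exp(1j * phi))        # SLM field, scattered by the mask, propagated (Fourier model)
        field = amp * np.exp(1j * np.angle(field))       # keep the phase, impose the target amplitude
        phi = np.angle(np.fft.ifft2(field) * np.conj(m)) # back-propagate, undo the mask, project to phase-only
    return phi

# Hypothetical usage:
# phi = optimize_slm_phase(target_image, calibrated_mask_phase)
# recon = np.abs(np.fft.fft2(np.exp(1j * calibrated_mask_phase) * np.exp(1j * phi))) ** 2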
2019
Kristina Monakhova; Joshua Yurtsever; Grace Kuo; Nick Antipa; Kyrollos Yanny; Laura Waller
Learned reconstructions for practical mask-based lensless imaging Journal Article
In: Optics Express, vol. 27, no. 20, pp. 28075–28090, 2019.
@article{monakhova2019learned,
title = {Learned reconstructions for practical mask-based lensless imaging},
author = { Kristina Monakhova and Joshua Yurtsever and Grace Kuo and Nick Antipa and Kyrollos Yanny and Laura Waller},
url = {https://doi.org/10.1364/OE.27.028075},
doi = {10.1364/OE.27.028075},
year = {2019},
date = {2019-09-30},
journal = {Optics Express},
volume = {27},
number = {20},
pages = {28075--28090},
publisher = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kristina Monakhova; Joshua Yurtsever; Grace Kuo; Nick Antipa; Kyrollos Yanny; Laura Waller
Unrolled, model-based networks for lensless imaging Journal Article
2019.
@article{monakhova2019unrolled,
title = {Unrolled, model-based networks for lensless imaging},
author = { Kristina Monakhova and Joshua Yurtsever and Grace Kuo and Nick Antipa and Kyrollos Yanny and Laura Waller},
url = {https://pdfs.semanticscholar.org/6a49/3ac2a0c8a3be888ece00b52bc1ec013df2bd.pdf},
year = {2019},
date = {2019-09-14},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Fanglin Linda Liu; Vaishnavi Madhavan; Nick Antipa; Grace Kuo; Saul Kato; Laura Waller
Single-shot 3D fluorescence microscopy with Fourier DiffuserCam Inproceedings
In: Novel Techniques in Microscopy, pp. NS2B.3, Optical Society of America, 2019.
@inproceedings{liu2019single,
title = {Single-shot 3D fluorescence microscopy with Fourier DiffuserCam},
author = { Fanglin Linda Liu and Vaishnavi Madhavan and Nick Antipa and Grace Kuo and Saul Kato and Laura Waller},
url = {https://doi.org/10.1364/NTM.2019.NS2B.3},
doi = {10.1364/NTM.2019.NS2B.3},
year = {2019},
date = {2019-04-14},
booktitle = {Novel Techniques in Microscopy},
pages = {NS2B.3},
organization = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
2018
Nick Antipa; Grace Kuo; Laura Waller
Lensless Cameras May Offer Detailed Imaging of Neural Circuitry Online
Photonics Media 2018, visited: 17.04.2020.
@online{antipalensless,
title = {Lensless Cameras May Offer Detailed Imaging of Neural Circuitry},
author = { Nick Antipa and Grace Kuo and Laura Waller},
url = {https://www.photonics.com/Articles/Lensless_Cameras_May_Offer_Detailed_Imaging_of/a63680},
year = {2018},
date = {2018-07-01},
urldate = {2020-04-17},
organization = {Photonics Media},
keywords = {},
pubstate = {published},
tppubtype = {online}
}
Grace Kuo; Nick Antipa; Ren Ng; Laura Waller
3D fluorescence microscopy with DiffuserCam Inproceedings
In: Computational Optical Sensing and Imaging, pp. CM3E.3, Optical Society of America, 2018.
@inproceedings{kuo20183d,
title = {3D fluorescence microscopy with DiffuserCam},
author = { Grace Kuo and Nick Antipa and Ren Ng and Laura Waller},
url = {https://doi.org/10.1364/COSI.2018.CM3E.3},
doi = {10.1364/COSI.2018.CM3E.3},
year = {2018},
date = {2018-06-25},
booktitle = {Computational Optical Sensing and Imaging},
pages = {CM3E.3},
organization = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Nick Antipa; Grace Kuo; Reinhard Heckel; Ben Mildenhall; Emrah Bostan; Ren Ng; Laura Waller
DiffuserCam: Lensless single-exposure 3D imaging Journal Article
In: Optica, vol. 5, no. 1, pp. 1–9, 2018.
@article{antipa2018diffusercam,
title = {DiffuserCam: Lensless single-exposure 3D imaging},
author = { Nick Antipa and Grace Kuo and Reinhard Heckel and Ben Mildenhall and Emrah Bostan and Ren Ng and Laura Waller},
url = {https://doi.org/10.1364/OPTICA.5.000001},
doi = {10.1364/OPTICA.5.000001},
year = {2018},
date = {2018-01-20},
journal = {Optica},
volume = {5},
number = {1},
pages = {1--9},
publisher = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
2017
Nick Antipa; Grace Kuo; Ren Ng; Laura Waller
3D DiffuserCam: Single-shot compressive lensless imaging Inproceedings
In: Computational Optical Sensing and Imaging, pp. CM2B.2, Optical Society of America, 2017.
@inproceedings{antipa20173d,
title = {3D DiffuserCam: Single-shot compressive lensless imaging},
author = { Nick Antipa and Grace Kuo and Ren Ng and Laura Waller},
url = {https://doi.org/10.1364/COSI.2017.CM2B.2},
doi = {10.1364/COSI.2017.CM2B.2},
year = {2017},
date = {2017-06-27},
booktitle = {Computational Optical Sensing and Imaging},
pages = {CM2B.2},
organization = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Grace Kuo; Nick Antipa; Ren Ng; Laura Waller
DiffuserCam: Diffuser-based lensless cameras Inproceedings
In: Computational Optical Sensing and Imaging, pp. CTu3B.2, Optical Society of America, 2017.
@inproceedings{kuo2017diffusercam,
title = {DiffuserCam: Diffuser-based lensless cameras},
author = { Grace Kuo and Nick Antipa and Ren Ng and Laura Waller},
url = {https://doi.org/10.1364/COSI.2017.CTu3B.2},
doi = {10.1364/COSI.2017.CTu3B.2},
year = {2017},
date = {2017-06-26},
booktitle = {Computational Optical Sensing and Imaging},
pages = {CTu3B.2},
organization = {Optical Society of America},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}