Publications
Tiffany Chien; Ruiming Cao; Fanglin Linda Liu; Leyla A. Kabuli; Laura Waller
Space-time reconstruction for lensless imaging using implicit neural representations Journal Article
In: Opt. Express, vol. 32, no. 20, pp. 35725–35732, 2024.
@article{Chien:24,
title = {Space-time reconstruction for lensless imaging using implicit neural representations},
author = {Tiffany Chien and Ruiming Cao and Fanglin Linda Liu and Leyla A. Kabuli and Laura Waller},
url = {https://opg.optica.org/oe/abstract.cfm?URI=oe-32-20-35725},
doi = {10.1364/OE.530480},
year = {2024},
date = {2024-09-01},
journal = {Opt. Express},
volume = {32},
number = {20},
pages = {35725--35732},
publisher = {Optica Publishing Group},
abstract = {Many computational imaging inverse problems are challenged by noise, model mismatch, and other imperfections that decrease reconstruction quality. For data taken sequentially in time, instead of reconstructing each frame independently, space-time algorithms simultaneously reconstruct multiple frames, thereby taking advantage of temporal redundancy through space-time priors. This helps with denoising and provides improved reconstruction quality, but often requires significant computational and memory resources. Designing effective but flexible temporal priors is also challenging. Here, we propose using an implicit neural representation to model dynamics and act as a computationally tractable and flexible space-time prior. We demonstrate this approach on video captured with a lensless imager, DiffuserCam, and show improved reconstruction results and robustness to noise compared to frame-by-frame methods.},
keywords = {Computational imaging; Imaging systems; Inverse design; Machine learning; Machine vision; Neural networks},
pubstate = {published},
tppubtype = {article}
}