
6548 Forest Park Pkwy, St. Louis, MO 63112, USA


Inference-Time Scaling for Robust and Adaptive Computational Imaging

Computational imaging has transformed fields such as medical imaging, computational photography, and remote sensing by combining physical measurement models with advanced image reconstruction algorithms. Two major paradigms have driven recent progress: model-based deep learning (MBDL), which integrates learned priors with physics-based data consistency, and generative models, especially diffusion models, which capture complex image distributions and enable posterior sampling for uncertainty quantification. Despite their success, most existing methods still operate under static inference: they rely on fixed pretrained priors, predetermined sampling procedures, and a fixed forward model during reconstruction. This rigidity limits their robustness in the presence of distribution shift, complex posterior geometry, and uncertainty in the measurement process.
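
For concreteness, assuming a linear forward operator A with additive noise e (notation introduced here for illustration, not taken from the abstract), the two paradigms can be summarized as

    y = A x + e,
    \hat{x}_{MBDL} = \arg\min_x (1/2) ||A x - y||_2^2 + \lambda R(x),
    \hat{x}_{gen} \sim p(x | y) \propto p(y | x) p(x),

where R(x) is a learned prior enforced alongside the physics-based data-consistency term, and the posterior p(x | y) is what diffusion-based methods sample from.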

This dissertation introduces Inference-Time Scaling for computational imaging, a framework that improves reconstruction by allocating additional computation during inference to adapt the reconstruction procedure to each measurement instance. Rather than applying the same fixed pipeline to every input, the framework spends extra test-time computation to strengthen the image prior, refine the sampling process, or adapt the measurement model to the observed data. The central thesis is that this instance-adaptive allocation of inference-time computation leads to more accurate, robust, and reliable solutions to imaging inverse problems.

The first part of this dissertation studies how to scale the prior at inference time to better handle structured degradations and distribution shift. In particular, it develops Deep Restoration Priors (DRP) and Stochastic Deep Restoration Priors (ShaRP), which generalize denoising-based priors by incorporating more powerful restoration operators and test-time ensembling, yielding stronger and more robust regularization for image reconstruction.
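
A minimal sketch of the idea follows (illustrative only; the function names and update rule are assumptions, not the actual DRP or ShaRP algorithms): a pretrained restoration operator, rather than a pure denoiser, plays the role of the prior inside an iterative reconstruction, and several stochastically degraded copies of the current estimate are restored and averaged at test time.

import numpy as np

def restoration_prior_step(x, y, A, At, restore, degrade, gamma=1.0,
                           n_ensemble=8, rng=None):
    """One hypothetical reconstruction iteration.

    A, At   : forward operator and its adjoint
    restore : pretrained restoration network used as the prior
    degrade : stochastic degradation applied before each restoration
    """
    rng = rng or np.random.default_rng()
    # Data-consistency gradient step on 0.5 * ||A x - y||^2.
    x = x - gamma * At(A(x) - y)
    # Test-time ensembling: restore several randomly degraded copies
    # of the current estimate and average them, strengthening the prior.
    restored = [restore(degrade(x, rng)) for _ in range(n_ensemble)]
    return np.mean(restored, axis=0)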

The second part studies how to scale the sampling process in generative reconstruction. It introduces several sampling-based methods, including Kernel Density Steering (KDS), DiffGEPCI, and DBCR, which improve inference through collaborative sampling, volumetric enhancement, and multimodal diffusion bridges. These approaches increase reconstruction fidelity and suppress stochastic artifacts and hallucinations in challenging inverse problems.
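
The collaborative-sampling idea can be sketched as follows (a simplified, hypothetical version, not the published KDS procedure): several diffusion "particles" are evolved in parallel, and each reverse step nudges every particle toward the mode of a kernel density estimate built from the ensemble, suppressing outlier samples that would otherwise surface as hallucinations.

import numpy as np

def kde_steered_step(particles, reverse_step, bandwidth=0.1, steer=0.1):
    """One collective reverse-diffusion step over N particles.

    particles    : array of shape (N, ...) holding the current samples
    reverse_step : ordinary single-particle reverse-diffusion update
    """
    # Advance every particle independently with the base sampler.
    particles = np.stack([reverse_step(p) for p in particles])
    flat = particles.reshape(len(particles), -1)
    # Mean-shift style steering: pull each particle toward the
    # kernel-density-weighted mean of the ensemble.
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    kde_mean = (w @ flat) / w.sum(axis=1, keepdims=True)
    flat = (1.0 - steer) * flat + steer * kde_mean
    return flat.reshape(particles.shape)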

The third part studies how to scale the likelihood model when the measurement process itself is uncertain. It develops methods such as ADOBI and SPICER, which address blind inverse problems by using additional test-time optimization to jointly estimate the image and unknown forward-model parameters from the measurements themselves, thereby improving reconstruction under model mismatch.
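
A minimal sketch of this test-time adaptation (hypothetical names and update rules, not the actual ADOBI or SPICER procedures): the image and the unknown forward-model parameters are refined jointly by alternating gradient steps on the same data-fidelity loss, with a learned prior regularizing the image update.

def joint_refine(x, theta, y, forward, grad_x, grad_theta, prior,
                 lr_x=1e-2, lr_theta=1e-3, n_iter=100):
    """Alternating refinement of the image x and forward-model parameters theta.

    forward(x, theta)              : simulates measurements
    grad_x(x, theta, residual)     : gradient of 0.5*||forward(x, theta) - y||^2 w.r.t. x
    grad_theta(x, theta, residual) : gradient of the same loss w.r.t. theta
    prior(x)                       : learned regularization step (e.g., a denoiser)
    """
    for _ in range(n_iter):
        residual = forward(x, theta) - y
        # Image update: data-consistency step followed by the learned prior.
        x = prior(x - lr_x * grad_x(x, theta, residual))
        # Likelihood update: adapt the measurement-model parameters to the data.
        theta = theta - lr_theta * grad_theta(x, theta, residual)
    return x, theta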

Taken together, these contributions show that inference in computational imaging should not be viewed as a fixed procedure, but as an adaptive process that can be improved through additional test-time computation. By scaling the prior, the sampler, and the likelihood during inference, this dissertation establishes a unified framework for building computational imaging systems that are more adaptive, robust, and reliable in real-world settings.
