By means of multiphoton laser scanning microscopy, neuroscientists can look deeper inside the brain than has ever been possible before. Multiphoton fluorescent images, like all optical images, suffer from degradation caused by a variety of sources (e.g., light dispersion and absorption in the tissue, laser fluctuations, spurious photodetection and staining deficiency). From a modelling perspective, such degradations can be considered the sum of stochastic noise and a background signal. Among the methods proposed in the literature to perform image deconvolution in either confocal or multiphoton fluorescent microscopy, Vicidomini et al. (2009) were the first to incorporate models for noise (a Poisson process) and background signal (spatially constant) in the context of regularized inverse problems. Unfortunately, the split-gradient method (SGM) they used did not account for possible spatial variations in the background signal. In this paper, we extend the SGM by adding a maximum-likelihood estimation step for the determination of a spatially varying background signal. We demonstrate that the assumption of a constant background is not always valid in multiphoton laser microscopy and, using synthetic and actual multiphoton fluorescent images, we evaluate the validity of the proposed method and compare its accuracy with that of the previously introduced SGM algorithm.
- Image deconvolution
- Maximum likelihood estimators
- Multiphoton laser scanning microscopy
- Split-gradient method
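The observation model summarized above (a blurred image corrupted by Poisson noise on top of a background term) can be illustrated with a minimal sketch. The code below is not the paper's SGM algorithm; it is a standard Richardson-Lucy iteration extended with an additive background term, assuming a symmetric Gaussian point-spread function, given here only to make the Poisson-plus-background model concrete.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def rl_deconv_background(y, b, sigma=2.0, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution under the model y ~ Poisson(H x + b).

    y     : observed image (nonnegative counts)
    b     : background, either a scalar or an array of the same shape as y
            (the paper's contribution is estimating a spatially varying b;
            here it is treated as known for simplicity)
    sigma : width of the assumed Gaussian PSF (hypothetical choice)
    """
    x = np.full(y.shape, float(y.mean()))  # flat positive initial guess
    for _ in range(n_iter):
        # Forward model: blur the current estimate and add the background.
        blurred = gaussian_filter(x, sigma) + b
        ratio = y / np.maximum(blurred, eps)
        # Multiplicative update; a Gaussian PSF is symmetric, so H^T = H.
        x *= gaussian_filter(ratio, sigma)
    return x
```

On a synthetic point source blurred with the same PSF and corrupted by Poisson noise, the iteration concentrates intensity back toward the source while the background term prevents the noise floor from being reassigned to the object. This sketch assumes the PSF and background are known; the method proposed in the paper instead estimates the spatially varying background by maximum likelihood.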