New methods for background estimation and denoising fight back
Mauro Silberberg
Departamento de Física, Universidad de Buenos Aires, Argentina
IFIBA-CONICET
AI has taken the field by storm
Publishing model-based methods in:
It promises to learn the model from data.
Is it the end for model-based science?
Model-based methods leverage prior knowledge of the measurement process.
Super-resolution achieved by modelling the illumination.
The diffraction limit applies only when we ignore that prior knowledge.
Measuring intensities in a fluorescence microscopy image.
Calculate the ratio:
Simple segmentation: \[\text{cell} \leftarrow \text{intensity} > \text{background}\]
AI-based segmentation:
Otsu's method splits the intensity histogram to minimize the intra-class variance.
Variants: Li, Yen, isodata, etc.
They all ignore spatial information.
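As an illustration of how these histogram methods work, here is a minimal NumPy sketch of the intra-class-variance criterion; the function name and binning are illustrative choices, not taken from any particular library:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    # Histogram of intensities.
    counts, edges = np.histogram(image, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = counts / counts.sum()
    # Cumulative class weight and cumulative class mean for every split point.
    w0 = np.cumsum(w)
    w1 = 1 - w0
    m = np.cumsum(w * centers)
    mT = m[-1]
    with np.errstate(invalid="ignore", divide="ignore"):
        # Between-class variance; maximizing it minimizes intra-class variance.
        var_between = (mT * w0 - m) ** 2 / (w0 * w1)
    return centers[np.nanargmax(var_between)]
```

Note that the threshold depends only on the histogram: shuffling the pixels spatially yields the same result, which is exactly the limitation noted above.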
“Rolls a ball” below the intensity profile to obtain a “local minimum”.
Default method in ImageJ/FIJI: Process > Subtract Background
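ImageJ's actual implementation has additional details (image shrinking, a paraboloid option), but the core idea can be sketched as a grayscale morphological opening with a ball-shaped structuring element; this is an illustrative approximation, not the ImageJ code:

```python
import numpy as np
from scipy import ndimage

def rolling_ball_background(image, radius=10):
    # Ball-shaped structuring element: the height profile of a sphere.
    r = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(r, r)
    mask = xx**2 + yy**2 <= radius**2
    ball = np.where(mask, np.sqrt(np.maximum(radius**2 - xx**2 - yy**2, 0)), 0)
    # Grayscale opening with the ball = rolling it below the intensity surface:
    # the background is the highest surface the ball can reach from below.
    return ndimage.grey_opening(image.astype(float), structure=ball, footprint=mask)
```

Subtracting the result (`image - rolling_ball_background(image)`) gives the background-corrected image, analogous to Process > Subtract Background.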
To be used as ground truth.
SMO provides a fair sample of the background,
not a segmentation.
Hypothesis: the background is flat compared to the noise.
SMO distinguishes “flat” from “non-flat” regions,
where flatness is measured relative to the noise.
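The idea behind SMO can be sketched from scratch: in a flat (noise-only) region the gradient directions are random and average out, while structure aligns them. This is an illustrative NumPy reimplementation of that principle, not the published `smo` package; the function name and defaults are assumptions:

```python
import numpy as np
from scipy import ndimage

def smo_map(image, sigma=1.0, size=7):
    smoothed = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma)
    gy, gx = np.gradient(smoothed)
    norm = np.hypot(gx, gy)
    # Keep only the gradient *direction* (unit vectors; zero where norm is 0).
    with np.errstate(invalid="ignore", divide="ignore"):
        ux = np.where(norm > 0, gx / norm, 0.0)
        uy = np.where(norm > 0, gy / norm, 0.0)
    # Average the directions over a local window and take the magnitude:
    # ~1 where gradients align (structure), ~0 where they cancel (flat noise).
    return np.hypot(ndimage.uniform_filter(ux, size),
                    ndimage.uniform_filter(uy, size))
```

Thresholding this map at a low value selects the flat regions, i.e. a sample of the background, independently of the absolute intensity level.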
“Cell” \(\rightarrow\) a Gaussian intensity profile.
Intensity thresholding recovers a biased distribution.
SMO thresholding recovers an unbiased distribution.
What happens when we apply it to “background” (flat-regions)?
Does SMO still work here?
SMO generates an unbiased local sampling
What can we do with that?
Example: a moving median over the SMO-selected pixels to estimate the background
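A minimal sketch of that step, assuming the SMO selection is available as a boolean mask (function names here are illustrative): mask the non-background pixels as NaN and take a NaN-aware moving median.

```python
import numpy as np
from scipy import ndimage

def masked_moving_median(image, bg_mask, size=7):
    # Keep only the background-selected pixels; mask the rest as NaN.
    bg = np.where(bg_mask, np.asarray(image, dtype=float), np.nan)
    # Moving median over each window, ignoring the masked (NaN) pixels.
    # Note: a window containing no background pixels yields NaN.
    return ndimage.generic_filter(bg, np.nanmedian, size=size, mode="nearest")
```

Because the selection is an unbiased sample of the background, the median tracks the local background level even under the cells, provided the window is large enough to always contain some selected pixels.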
Wavelet: Haar
Forward transform: \[ \begin{cases} a = (x + y) \,/\, 2 \\ d = (x - y) \,/\, 2 \end{cases} \]
Inverse transform: \[ \begin{cases} x = a + d \\ y = a - d \end{cases} \]
This process is recursively repeated with the \(a\) coefficients.
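The two transforms above, plus the recursion on the \(a\) coefficients, fit in a few lines (helper names are illustrative):

```python
import numpy as np

def haar_forward(signal):
    # a: pairwise averages (approximation); d: half-differences (detail).
    x, y = signal[0::2], signal[1::2]
    return (x + y) / 2, (x - y) / 2

def haar_inverse(a, d):
    # Interleave x = a + d and y = a - d back into one signal.
    out = np.empty(2 * a.size)
    out[0::2], out[1::2] = a + d, a - d
    return out

def haar_multilevel(signal, levels):
    # Recursively re-apply the forward transform to the a coefficients.
    a, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        a, d = haar_forward(a)
        details.append(d)
    return a, details
```

The signal length must be a multiple of \(2^{\text{levels}}\) for the pairing to work at every level.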
How to select a proper threshold?
Normalizations:
Mean-preserving (\(\div\, 2\)): \[ \begin{cases} a = (x + y) \,/\, 2 \\ d = (x - y) \,/\, 2 \end{cases} \]
Orthonormal (\(\div\, \sqrt{2}\)): \[ \begin{cases} a = (x + y) \,/\, \sqrt{2} \\ d = (x - y) \,/\, \sqrt{2} \end{cases} \]
Unnormalized: \[ \begin{cases} a = (x + y) \\ d = (x - y) \end{cases} \]
Useful when:
\[ x, y \sim N(\mu, \sigma) \quad \text{(Gaussian noise, orthonormal)} \]
\[ x, y \sim Poisson(\mu_{\{x,y\}}) \quad \text{(Poisson noise, unnormalized)} \]
We obtain:
\[ \begin{cases} a \sim N(\sqrt{2}\,\mu, \sigma) \\ d \sim N(0, \sigma) \end{cases} \]
\[ \begin{cases} a \sim Poisson(\mu_x + \mu_y) \\ d \sim \quad ? \end{cases} \]
The difference of two Poisson variables follows a Skellam distribution, which depends on both \(\mu_x\) and \(\mu_y\), so its spread is not captured by a single \(\sigma\).
Threshold:
Gaussian case: a single global threshold for all \(d\).
Poisson case: define a statistical test for \(x = y\).
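For the Gaussian case, a minimal sketch of multilevel Haar denoising with one global hard threshold on the detail coefficients (the orthonormal normalization keeps the noise \(\sigma\) constant across levels, which is what makes a single threshold valid; the function name and the signal-length assumption are illustrative):

```python
import numpy as np

def haar_denoise(signal, threshold, levels=3):
    # Signal length must be a multiple of 2**levels.
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        x, y = a[0::2], a[1::2]
        # Orthonormal Haar: d keeps the noise standard deviation sigma.
        a, d = (x + y) / np.sqrt(2), (x - y) / np.sqrt(2)
        # Hard threshold: zero the details consistent with pure noise.
        details.append(np.where(np.abs(d) > threshold, d, 0.0))
    for d in reversed(details):
        out = np.empty(2 * a.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a
```

A piecewise-constant signal is reconstructed exactly, since its only nonzero details (at the jumps) exceed the threshold, while small noise-only details are zeroed.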
Example: a two-channel signal alternating between four values.
Multichannel information prevents wrongly averaging the values near 0.
Example: ratio between two channels
Transformation information allows averaging “different” values.
A \(2\sigma\) bias in the nucleus or cytoplasm for Deep Learning.
This work was part of my PhD with Dr. Hernán E. Grecco.