Wiki
- white noise
- white refers to the way signal power is distributed: equally across all frequencies (flat power spectral density), i.e. samples are uncorrelated over time
- white noise vector: each component has a probability distribution with zero mean and finite variance, and the components are statistically independent
- gaussian white noise
- white noise vector where each component assumes a gaussian distribution
- the vector follows a multivariate Gaussian distribution
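A minimal numpy sketch of the definition above (vector length and seed are arbitrary choices for illustration): draw an i.i.d. Gaussian vector and check zero mean, finite variance, and whiteness (uncorrelated adjacent samples) empirically.

```python
import numpy as np

# i.i.d. N(0, 1) components => Gaussian white noise vector
rng = np.random.default_rng(0)
w = rng.normal(loc=0.0, scale=1.0, size=100_000)

# empirically: mean close to 0, variance finite (close to 1)
print(w.mean(), w.var())

# whiteness: adjacent samples are (nearly) uncorrelated
corr = np.corrcoef(w[:-1], w[1:])[0, 1]
print(corr)
```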
- nonlocal means: https://en.wikipedia.org/wiki/Non-local_means
- denoises a pixel as a weighted mean of all pixels, weighted by patch similarity (a non-local generalization of the local mean filter)
- bilateral filter: https://en.wikipedia.org/wiki/Bilateral_filter
- total variation: https://en.wikipedia.org/wiki/Total_variation_denoising
- denoising but edge-preserving
- the 2D problem is harder to solve (primal-dual: https://link.springer.com/content/pdf/10.1023%2FB%3AJMIV.0000011325.36760.1e.pdf)
Denoising
- 1992_nonlinear_total_variation_based_noise_removal_algorithms
- abstract
- denoising by minimizing total variation, with constraints handled via Lagrange multipliers
- solved by gradient-projection
- intro
- total variation norm
- L1 norms of derivatives
- nonlinear, computationally complex (compared to L2)
- problem formulation
- constrained optimization posed as a time-dependent nonlinear parabolic PDE
- derive the Euler-Lagrange equation for the optimization
- solved with a time-stepping algorithm
- results
- denoise in flat regions AND preserves edge details
- some follow-ups
- primal-dual method for minimizing total variation
- https://www.uni-muenster.de/AMM/num/Vorlesungen/MathemBV_SS16/literature/Chambolle2004.pdf
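A rough sketch of TV denoising in its unconstrained Lagrangian form, via gradient descent on a smoothed TV term (the ROF paper itself uses gradient projection / time stepping on the constrained problem). `lam`, `step`, `iters`, and the periodic boundary handling via `np.roll` are illustrative simplifications, not the paper's scheme.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.2, iters=300, eps=1e-6):
    """Gradient descent on 0.5*||x - y||^2 + lam * TV_eps(x), where TV_eps
    is a smoothed total variation (eps avoids division by zero)."""
    x = y.copy()
    for _ in range(iters):
        dx = np.roll(x, -1, axis=1) - x              # forward differences
        dy = np.roll(x, -1, axis=0) - x
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)       # smoothed gradient norm
        px, py = dx / mag, dy / mag
        # divergence of the normalized gradient field (backward differences);
        # -div is the gradient of the smoothed TV term
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x -= step * ((x - y) - lam * div)
    return x
```

On a noisy piecewise-constant image this smooths the flat regions while keeping the edge, matching the "denoise in flat regions AND preserve edge details" result noted above.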
- 1998_bilateral_filtering_for_gray_and_color_images
- bilateral filtering
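A compact sketch of the bilateral filter idea: each output pixel is a weighted mean over a window, with weights combining spatial closeness and intensity similarity. The window size and the `sigma_s` / `sigma_r` values are illustrative, not the paper's settings.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Weighted mean over a (2*radius+1)^2 window: spatial Gaussian
    (sigma_s) times intensity-similarity Gaussian (sigma_r)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            nb = pad[radius + di: radius + di + H,
                     radius + dj: radius + dj + W]
            w_sp = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
            w_rg = np.exp(-((nb - img) ** 2) / (2 * sigma_r ** 2))
            w = w_sp * w_rg
            num += w * nb
            den += w
    return num / den
```

The range term is what makes it edge-preserving: neighbors across a strong edge get near-zero weight, so the edge is not averaged away.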
- 2008_nonlocal_image_and_movie_denoising
- non-local means filtering
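A slow but readable sketch of non-local means: each pixel becomes a weighted average of pixels in a search window, with weights from the similarity of the patches around them. The `patch` / `search` / `h` values are illustrative choices, not the paper's.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Non-local means: weights exp(-||patch_i - patch_j||^2 / (h^2 * P^2))
    over a search window around each pixel."""
    pr, sr = patch // 2, search // 2
    m = pr + sr
    pad = np.pad(img, m, mode='reflect')
    H, W = img.shape
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    for di in range(-sr, sr + 1):
        for dj in range(-sr, sr + 1):
            # squared patch distance between (i, j) and (i+di, j+dj),
            # computed for all pixels at once via shifted slices
            d2 = np.zeros_like(img)
            for pi in range(-pr, pr + 1):
                for pj in range(-pr, pr + 1):
                    a = pad[m + pi: m + pi + H, m + pj: m + pj + W]
                    b = pad[m + di + pi: m + di + pi + H,
                            m + dj + pj: m + dj + pj + W]
                    d2 += (a - b) ** 2
            w = np.exp(-d2 / (h * h * patch * patch))
            nb = pad[m + di: m + di + H, m + dj: m + dj + W]
            num += w * nb
            den += w
    return num / den
```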
- 2010_fast_image_recovery_using_variable_splitting_and_constrained_optimization
- abstract
- using ADMM to solve an unconstrained problem whose objective includes
- L2 data-fidelity term
- non-smooth regularizer
- variable splitting
- equivalent to constrained optimization, addressed with augmented Lagrangian method
- problem formulation
- synthesis approach (e.g. with wavelets)
x = W\beta
where the columns of W are elements of a wavelet frame and \beta are the parameters to be estimated
- analysis approach
- based on regularizers that analyze the image x itself rather than its representation in the wavelet domain
- e.g. total variation regularizer
- unified view
min_x 1/2 || Ax - y ||_2^2 + \tau \phi(x)
where A = BW for the synthesis approach and A = B for the analysis approach
- solvers
- iterative shrinkage / thresholding (IST)
- relies on a denoising function \Psi(y)
- iteration: x_{k+1} = \Psi( x_k - (1/\gamma) A^T (A x_k - y) )
- problem: slow
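A minimal sketch of the IST iteration above for an L1 regularizer, where the denoising function \Psi is soft thresholding (the prox of the L1 norm); the fixed step 1/\gamma with \gamma = ||A||_2^2 is one standard choice, assumed here for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of t*||.||_1 -- the 'denoising function' Psi for an L1 prior."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, y, tau, iters=500):
    """IST for min_x 0.5*||Ax - y||^2 + tau*||x||_1:
    gradient step on the data term, then the denoising/shrinkage step."""
    gamma = np.linalg.norm(A, 2) ** 2     # spectral norm squared => valid step
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / gamma, tau / gamma)
    return x
```

Each iteration is one gradient step followed by one denoising step, and many such iterations are typically needed, which matches the "slow" note above.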
- proposed algo: split augmented Lagrangian shrinkage algorithm (SALSA)
- a variant of ADMM
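A compact sketch of the variable-splitting/ADMM structure behind SALSA: introduce v = x so the unconstrained problem becomes constrained, then alternate the three updates. The regularizer subproblem is served by a plugged-in denoiser (here an L1 prox as a stand-in), and `mu`, `iters`, and the direct matrix inversion are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def salsa(A, y, denoise, mu=1.0, iters=200):
    """ADMM on min 0.5*||Ax - y||^2 + tau*phi(v) s.t. x = v.
    The phi-step is any denoiser / proximal map; the x-step is a linear solve."""
    n = A.shape[1]
    x = np.zeros(n); v = np.zeros(n); d = np.zeros(n)
    M = np.linalg.inv(A.T @ A + mu * np.eye(n))   # precomputed (sketch only)
    Aty = A.T @ y
    for _ in range(iters):
        x = M @ (Aty + mu * (v - d))   # quadratic (data-fidelity) subproblem
        v = denoise(x + d)             # regularizer subproblem via denoiser
        d += x - v                     # scaled dual (Lagrange multiplier) update
    return x

# e.g. with an L1 regularizer tau*||v||_1 the denoiser is soft thresholding
tau, mu = 0.3, 1.0
A = np.eye(3); y = np.array([1.0, 0.2, -0.5])
x_hat = salsa(A, y, lambda z: soft_threshold(z, tau / mu), mu=mu)
```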
- 2014_progressive_image_denoising_through_hybrid_graph_laplacian_regularization
- abstract
- laplacian regularized image denoising
- semisupervised learning
- intro
- variational problem
- minimize data fidelity term and regularization term
- priors
- locally smooth (nearby pixels more likely to have same/similar intensity values)
- non-local self-similarity (pixels on same structure likely to have same or similar intensity)
- graph laplacian regularized regression
- graph laplacian regularizer
R(f) = \sum_{i,j} (f(x_i) - f(x_j))^2 w_{ij}
where w_{ij} is an edge weight reflecting the affinity between vertices x_i and x_j
- want to design filters that are edge-preserving (cf. bilateral filtering)
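The regularizer above can be computed via the graph Laplacian L = D - W, since for a symmetric weight matrix the double sum equals 2 f^T L f. A small sketch on a hypothetical 3-vertex graph (the weights and f values are made up for illustration):

```python
import numpy as np

def laplacian_regularizer(Wm, f):
    """R(f) = sum_{i,j} w_ij * (f_i - f_j)^2 = 2 * f^T (D - W) f
    for symmetric W, with D the diagonal degree matrix."""
    L = np.diag(Wm.sum(axis=1)) - Wm
    return 2.0 * f @ L @ f

# hypothetical graph: 3 vertices, symmetric affinities
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
f = np.array([0.0, 1.0, 2.0])
direct = sum(W[i, j] * (f[i] - f[j]) ** 2
             for i in range(3) for j in range(3))
# laplacian_regularizer(W, f) and direct both equal 6.4 here
```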
Plug-and-Play (P3) Prior
- 2013_plug_and_play_priors_for_model_based_reconstruction
- 2016_algorithm_induced_prior_for_image_restoration
- 2016_plug_and_play_admm_for_image_restoration_fixed_point_convergence_and_applications
- 2017_regularization_by_denoising
- intro
- plug-and-play (P3)
- use implicit priors for regularizing general inverse problems
- problems
- no clear objective function
- difficult parameter tuning in ADMM
- use of denoising engine for regularization of inverse problems
rho(x) = 1/2 x^T (x - f(x))
- proved to be convex under conditions on the denoiser f (guaranteed convergence)
- the goodness
- explicit objective
- gradient manageable: grad rho(x) = x - f(x) under RED's conditions on f
- any inverse problem handled by calling denoising engine iteratively
- applications
- single-image superresolution
- deblurring
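A minimal sketch of how the denoising-engine regularization is used: with rho(x) = 1/2 x^T (x - f(x)) and grad rho(x) = x - f(x) (under RED's conditions on f), each gradient step only needs one call to the denoiser. The denoiser here is a simple moving-average smoother and A, lam, step are illustrative stand-ins, not the paper's choices.

```python
import numpy as np

def red_gradient_step(x, y, A, f, lam=0.2, step=0.1):
    """One gradient step on 0.5*||Ax - y||^2 + lam * rho(x),
    where rho(x) = 0.5 * x^T (x - f(x)) and grad rho(x) = x - f(x)
    under RED's conditions on the denoiser f."""
    grad = A.T @ (A @ x - y) + lam * (x - f(x))
    return x - step * grad

# illustrative denoiser: a 1D moving-average smoother (not from the paper)
def box_smooth(x):
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, k, mode='same')

# denoise a noisy 1D step signal by iterating the gradient step (A = identity)
rng = np.random.default_rng(3)
signal = np.repeat([0.0, 1.0], 16)
y = signal + 0.1 * rng.normal(size=signal.size)
A = np.eye(signal.size)
x = y.copy()
for _ in range(200):
    x = red_gradient_step(x, y, A, box_smooth)
```

This illustrates the point above: any inverse problem is handled by calling the denoising engine once per iteration, with an explicit objective and a manageable gradient.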
Learning denoising prior for inverse problems
- 2017_learning_deep_cnn_denoiser_prior_for_image_restoration
- abstract
- use CNN to learn powerful denoiser priors, then plug into ADMM or HQS (half-quadratic splitting)
- 2017_learning_proximal_operators_using_denoising_networks_for_regularizing_inverse_image_problems
- abstract
- use CNN to learn proximal operators
- did not perform better than FlexISP for demosaicking
Review
- 2018_review_modern_regularization_methods_for_inverse_problems
- linear/nonlinear regularization for inverse problems
- pretty involved discussion