University of Southern California
USC Andrew and Erna Viterbi School of Engineering
USC Ming Hsieh Department of Electrical and Computer Engineering
USC Signal and Image Processing Institute

Technical Report USC-SIPI-136

“Parallel Deterministic and Stochastic Optimization Algorithms for Computer Vision”

by Tal Simchony

December 1988

We investigate two types of models for problems in computer vision: mechanical and stochastic models. Using the calculus of variations on the former and maximum a posteriori (MAP) techniques on the latter, we reframe the problems in an optimization framework. Gibbs distributions are an excellent class of stochastic models. The equivalence between the stochastic and deterministic formulations is then straightforward, since the Hamiltonian of the Gibbs distribution plays the role of the cost function in the deterministic setup. This equivalence allows us to show that the weak membrane (a mechanical model for interpolation) can be subsumed under a general class of compound Gauss-Markov random field (GMRF) models. Graduated Non-Convexity (GNC) algorithms, which originate in the deterministic formulation, are then extended to MAP image estimation in the presence of additive noise.
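
As a brief sketch of this identification (the notation below is generic and assumed here, not taken from the report): with a Gibbs prior over the field x and an observation y, maximizing the posterior is the same as minimizing an energy whose smoothness term is the Hamiltonian of the prior.

    % Minimal sketch of the MAP / energy-minimization equivalence (generic notation,
    % not taken from the report).  Gibbs prior p(x) = Z^{-1} \exp(-U(x)/T), data y:
    \hat{x}_{\mathrm{MAP}}
      = \arg\max_x \, p(x \mid y)
      = \arg\max_x \, p(y \mid x)\, \frac{1}{Z} \exp\!\bigl(-U(x)/T\bigr)
      = \arg\min_x \, \bigl[\, -T \log p(y \mid x) + U(x) \,\bigr],
    % so the Hamiltonian U plays exactly the role of the regularizing cost in the
    % deterministic formulation.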

Robust vision systems need to incorporate multiple sources of information. We suggest using Gibbs distributions to formulate a unified framework and then minimizing the corresponding non-quadratic Hamiltonian with conjugate gradient (CG) algorithms that use robust line searches. The special structure of the Hamiltonian is then used to derive a pyramid implementation of the algorithm.
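
The following is a minimal sketch of a nonlinear CG iteration with a backtracking line search applied to a generic non-quadratic energy; the function names and the toy penalty are illustrative assumptions, and the sketch does not include the pyramid implementation described in the report.

    import numpy as np

    def nonlinear_cg(energy, grad, x0, max_iter=200, tol=1e-6):
        # Polak-Ribiere nonlinear CG with a backtracking (Armijo) line search.
        # `energy` and `grad` stand for the Hamiltonian and its gradient; the
        # names are placeholders, not the report's notation.
        x = x0.copy()
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            # Backtracking line search along the current search direction.
            t, e0, slope = 1.0, energy(x), g.ravel() @ d.ravel()
            while energy(x + t * d) > e0 + 1e-4 * t * slope and t > 1e-12:
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            if np.linalg.norm(g_new) < tol:
                return x_new
            # Polak-Ribiere coefficient, clipped at zero to keep a descent direction.
            beta = max(0.0, (g_new.ravel() @ (g_new - g).ravel()) / (g.ravel() @ g.ravel()))
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Toy usage: a robust, non-quadratic penalty applied to a 2-D field.
    energy = lambda x: np.sum(np.sqrt(1.0 + x ** 2))
    grad = lambda x: x / np.sqrt(1.0 + x ** 2)
    x_min = nonlinear_cg(energy, grad, x0=np.random.randn(32, 32))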

The variational approach transforms the optimization problems that arise in many early vision tasks, such as shape from shading (SFS), optical flow, the lightness problem, and the reconstruction of surface depth from orientation, into sets of linear or nonlinear Poisson equations. Direct methods and embedding techniques are used to derive efficient Poisson solvers. We use this formulation to derive a new algorithm for SFS from stereo images.
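
As an illustration of a direct method, the sketch below solves the discrete Poisson equation on a rectangle with zero Dirichlet boundary values by diagonalizing the 5-point Laplacian with a discrete sine transform. It is a standard solver given for orientation only, not the specific embedding technique of the report.

    import numpy as np
    from scipy.fft import dstn, idstn

    def laplacian_5pt(u, h):
        # 5-point Laplacian of u, assuming zero values just outside the grid.
        f = -4.0 * u.copy()
        f[1:, :] += u[:-1, :];  f[:-1, :] += u[1:, :]
        f[:, 1:] += u[:, :-1];  f[:, :-1] += u[:, 1:]
        return f / h ** 2

    def poisson_dirichlet(f, h):
        # Direct solver for  lap(u) = f  with zero Dirichlet boundary values.
        # The sine transform (DST-I) diagonalizes the 5-point Laplacian, so the
        # solve reduces to a pointwise division in the transform domain.
        n, m = f.shape
        lam_x = 2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)) - 2.0
        lam_y = 2.0 * np.cos(np.pi * np.arange(1, m + 1) / (m + 1)) - 2.0
        lam = (lam_x[:, None] + lam_y[None, :]) / h ** 2
        return idstn(dstn(f, type=1) / lam, type=1)

    # Consistency check on a synthetic surface (interior grid of the unit square).
    n = 63
    h = 1.0 / (n + 1)
    coords = np.arange(1, n + 1) * h
    x, y = np.meshgrid(coords, coords, indexing="ij")
    u_true = np.sin(np.pi * x) * np.sin(2.0 * np.pi * y)   # vanishes on the boundary
    u_solved = poisson_dirichlet(laplacian_5pt(u_true, h), h)
    assert np.allclose(u_solved, u_true)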

In the Bayesian framework, stochastic models allow us to experiment with different optimality criteria. We use the MAP and the minimum probability of error per pixel (MPM) criteria for the texture segmentation problem. For comparison, the deterministic Iterated Conditional Modes (ICM) algorithm is also implemented. This allows us to observe that these labeling problems are highly non-convex (an observation supported by recent work on boundary detection with random neighborhoods and by the application of hill-climbing learning algorithms to texture segmentation). We also show that estimates that interpolate between MAP and MPM can be obtained.
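
For reference, a minimal sketch of ICM for a generic Potts-model labeling is given below; the data-cost interface, the 4-neighbourhood, and the toy two-class example are assumptions made for illustration and are not the texture-segmentation model used in the report.

    import numpy as np

    def icm_potts(data_cost, beta=1.0, n_sweeps=10):
        # Iterated Conditional Modes (Besag) for a Potts-model labeling.
        # data_cost[i, j, k] is the cost of giving pixel (i, j) the label k;
        # beta weights the Potts smoothness term over the 4-neighbourhood.
        H, W, K = data_cost.shape
        labels = data_cost.argmin(axis=2)          # start from the data-only labeling
        for _ in range(n_sweeps):
            for i in range(H):
                for j in range(W):
                    # Neighbouring labels (4-connectivity, image borders clipped).
                    neigh = [labels[i + di, j + dj]
                             for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= i + di < H and 0 <= j + dj < W]
                    # Potts penalty: number of neighbours disagreeing with each label.
                    smooth = np.array([sum(l != k for l in neigh) for k in range(K)])
                    # Local conditional mode: data cost plus weighted smoothness cost.
                    labels[i, j] = np.argmin(data_cost[i, j] + beta * smooth)
        return labels

    # Toy usage: a noisy two-class image with squared-error data costs.
    noisy = np.r_[np.zeros((16, 32)), np.ones((16, 32))] + 0.5 * np.random.randn(32, 32)
    costs = np.stack([(noisy - 0.0) ** 2, (noisy - 1.0) ** 2], axis=2)
    segmentation = icm_potts(costs, beta=1.0)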

Download the report in PDF format: USC-SIPI-136.pdf (6.4 MB)