Package wsh.opt
Class GaussNewtonSolver
java.lang.Object
  wsh.opt.GaussNewtonSolver
public class GaussNewtonSolver extends java.lang.Object
Solves the least-squares inverse of a non-linear Transform. See QuadraticSolver to solve the least-squares inverse of a linear Transform.
Method Summary
static void setExpensiveDebug(boolean debug)
Turn on expensive checking of transform and vector properties during the solving of equations.

static Vect solve(VectConst data, VectConst referenceModel, VectConst perturbModel, Transform transform, boolean dampOnlyPerturbation, int conjugateGradIterations, int lineSearchIterations, int linearizationIterations, double lineSearchError, Monitor monitor)
Solve a nonquadratic objective function with Gauss-Newton iterations.
Method Detail
solve
public static Vect solve(VectConst data, VectConst referenceModel, VectConst perturbModel, Transform transform, boolean dampOnlyPerturbation, int conjugateGradIterations, int lineSearchIterations, int linearizationIterations, double lineSearchError, Monitor monitor)
Solve a nonquadratic objective function with Gauss-Newton iterations. Minimizes

  [f(m+x)-data]'N[f(m+x)-data] + x'Mx

if dampOnlyPerturbation is true, and

  [f(m+x)-data]'N[f(m+x)-data] + (m+x)'M(m+x)

if dampOnlyPerturbation is false. Here m is the reference model and x is the perturbation of that model; the full solution m+x is returned. Iterative linearization of f(m+x) ~= f(m) + Fx makes the objective function quadratic in x:

  [f(m)+Fx-data]'N[f(m)+Fx-data] + (m+x)'M(m+x)

(or with x'Mx as the damping term, if damping only the perturbation). x is solved for with the specified number of conjugate-gradient iterations. This perturbation is then scaled by a line search of the nonquadratic objective function, using the specified number of line-search iterations. The scaled perturbation x is added to the previous reference model m to form the new reference model m. Relinearization is repeated for the specified number of linearization iterations. Cost is proportional to

  linearizationIterations*( 2*conjugateGradIterations + lineSearchIterations );
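The following is a minimal, self-contained Java sketch of the Gauss-Newton scheme just described, for a single model parameter and a hypothetical forward model f_i(m) = exp(-m*t_i). It is not code from this package and uses none of its classes; with one parameter the damped quadratic subproblem reduces to a scalar divide, where the package would instead run conjugate-gradient iterations.

  // Illustration of the Gauss-Newton loop described above: linearize,
  // solve the damped quadratic subproblem for x, line-search a scale for x,
  // then update the reference model m and relinearize.  Not wsh.opt code.
  public class GaussNewtonSketch {
    public static void main(String[] args) {
      double[] t = {0.0, 0.5, 1.0, 1.5, 2.0};
      double trueM = 1.3;
      double[] data = new double[t.length];
      for (int i = 0; i < t.length; ++i) data[i] = Math.exp(-trueM*t[i]);

      double m = 0.2;                  // starting reference model
      double mu = 1.e-6;               // damping weight, playing the role of M
      int linearizationIterations = 8;
      int lineSearchIterations = 20;

      for (int iter = 0; iter < linearizationIterations; ++iter) {
        // Linearize f(m+x) ~= f(m) + Jx and form the damped normal equation
        // (J'J + mu) x = J'(data - f(m)).
        double jtj = 0.0, jtr = 0.0;
        for (int i = 0; i < t.length; ++i) {
          double fi = Math.exp(-m*t[i]);
          double ji = -t[i]*fi;          // derivative df_i/dm
          jtj += ji*ji;
          jtr += ji*(data[i]-fi);
        }
        double x = jtr/(jtj+mu);         // perturbation of the reference model

        // Crude line search: pick the scale factor s in [0,1] that most
        // reduces the original nonquadratic objective.
        double bestScale = 1.0;
        double bestObjective = objective(m+x, x, data, t, mu);
        for (int k = 0; k <= lineSearchIterations; ++k) {
          double s = (double)k/lineSearchIterations;
          double obj = objective(m+s*x, s*x, data, t, mu);
          if (obj < bestObjective) { bestObjective = obj; bestScale = s; }
        }
        m += bestScale*x;                // update reference model, then relinearize
      }
      System.out.println("estimated m = "+m+"  (true m = "+trueM+")");
    }

    // [f(m+x)-data]'[f(m+x)-data] + mu*x*x, damping only the perturbation x.
    private static double objective(double model, double x,
                                    double[] data, double[] t, double mu) {
      double sum = 0.0;
      for (int i = 0; i < t.length; ++i) {
        double r = Math.exp(-model*t[i]) - data[i];
        sum += r*r;
      }
      return sum + mu*x*x;
    }
  }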
Hard constraints, if any, will be applied during line searches and to the final result. The "line search error" is an acceptable fraction of imprecision in the scale factor for the line search. A very small value will cause the maximum number of line-search iterations to be used.

Parameters:
data - The data to be fit.
referenceModel - The starting velocity model. The optimized model will be a revised instance of this class.
perturbModel - If non-null, then instances of this model are used to perturb the reference model. It must be possible to project between the perturbed and reference models. The initial state of this vector is ignored.
transform - Describes the linear or nonlinear transform.
dampOnlyPerturbation - If true, damp only perturbations to the model. If false, damp the reference model plus the perturbation.
linearizationIterations - Number of times to relinearize the non-linear transform. Set to 1 if the transform is already linear. (Anything less than 1 is treated as 1.)
lineSearchIterations - Number of iterations for a line search that scales a perturbation before adding it to the reference model. Recommend 20 or greater. Use 0 to disable the line search altogether and add the perturbation with a scale factor of 1.
conjugateGradIterations - The number of conjugate-gradient iterations.
lineSearchError - An acceptable fraction of imprecision in the scale factor for the line search. Recommend 0.001 or smaller.
monitor - Report progress here, if non-null.
Returns:
Result of optimization, using a cloned instance of referenceModel.
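As a hedged illustration of how these arguments fit together, the sketch below calls solve with the recommended settings. MyDataVect, MyModelVect, and MyTransform are hypothetical user-supplied implementations of the VectConst and Transform interfaces; they are not classes of this package.

  // Hypothetical data, model, and transform objects (placeholders, not part of wsh.opt).
  VectConst data = new MyDataVect(observedSamples);
  VectConst referenceModel = new MyModelVect(startingModel);
  VectConst perturbModel = null;            // perturb the reference model directly
  Transform transform = new MyTransform();  // user-supplied nonlinear transform
  Monitor monitor = null;                   // or an implementation that reports progress

  Vect result = GaussNewtonSolver.solve(
      data, referenceModel, perturbModel, transform,
      true,     // dampOnlyPerturbation: damp only the perturbation x
      20,       // conjugateGradIterations
      20,       // lineSearchIterations (recommend 20 or greater)
      3,        // linearizationIterations (use 1 if the transform is linear)
      0.001,    // lineSearchError (recommend 0.001 or smaller)
      monitor);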
setExpensiveDebug
public static void setExpensiveDebug(boolean debug)
Turn on expensive checking of transform and vector properties during the solving of equations.
Parameters:
debug - If true, then turn on expensive debugging.
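A brief sketch of how this flag might be toggled around a call to solve; the arguments are the hypothetical objects from the example above.

  GaussNewtonSolver.setExpensiveDebug(true);   // enable costly consistency checks
  Vect result = GaussNewtonSolver.solve(data, referenceModel, perturbModel, transform,
      true, 20, 20, 3, 0.001, monitor);
  GaussNewtonSolver.setExpensiveDebug(false);  // disable for production runs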