The title refers to the slow convergence of Newton's method.
This document gives a practical solution to minimizing with a penalty, along with an example of the failure of Newton's method.
The quantity to be minimized with respect to b is
(1.1)
(1.2)
(1.3)
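For orientation, a generic shape for such a penalized objective can be sketched. This is an assumption: the specific forms of (1.1)-(1.3) are not reproduced here, and the symbols y_k, f_k(b), and sigma_k below are illustrative rather than the document's own.

    % Assumed generic shape only, not the actual (1.1)-(1.3):
    F(b) = \chi^2(b) + P(b),
    \qquad
    \chi^2(b) = \sum_k \frac{\bigl(y_k - f_k(b)\bigr)^2}{\sigma_k^2}

Here P(b) stands for whatever penalty the omitted equations define.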
The penalty exists and may be enormous when b equals zero. In this case, with
(1.4)
the quantity to be minimized reduces to the approximation (1.5). The solution in this approximation is
(1.6)
Minimizing (1.5) with respect to b means finding a b at which the first derivative, (1.7), vanishes. This is solved iteratively.
Each estimate b_i is expanded in terms of its value and first derivative (which are the first and second derivatives of χ²), and one then solves for the b_{i+1} that makes this expansion zero:

0 = (dχ²/db)|b_i + (b_{i+1} - b_i) (d²χ²/db²)|b_i        (1.8)

Solving (1.8) for b_{i+1} gives the Newton step

b_{i+1} = b_i - (dχ²/db)|b_i / (d²χ²/db²)|b_i        (1.9)
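Since (1.9) is the standard Newton update, the loop can be sketched in a few lines of Python. The callables d_chi2 and dd_chi2 are hypothetical names for the first and second derivatives of χ², not names from this document.

    # Minimal Newton loop for (1.9); d_chi2 and dd_chi2 are assumed callables
    # for the first and second derivatives of chi-square.
    def newton_minimize(b, d_chi2, dd_chi2, tol=1e-12, max_iter=100):
        for _ in range(max_iter):
            step = d_chi2(b) / dd_chi2(b)   # ill-behaved where chi2'' -> 0
            b = b - step
            if abs(step) < tol:
                break
        return b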
The first derivative is given in (1.7); the second derivative is
(1.10)
A hint of the trouble can be seen in the fact that at the b where the first derivative is zero, the second derivative is zero as well. Proceeding blindly with (1.9) then yields an iteration of the form

b_{i+1} = A + (4/5)(b_i - A)        (1.11)

where A is the correct answer. Thus if the initial b is off, each iteration moves only 1/5 of the way toward the correct answer: if b_0 = A + d, then b_1 = A + (4/5)d, and in general b_i = A + (4/5)^i d. Iterative schemes such as (1.11) can be extrapolated to convergence with Aitken's method (Aitkins.doc).
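This behavior is easy to check numerically. Purely for the demonstration, assume a degenerate minimum of the form χ²(b) = (b - A)^6, the simplest shape whose Newton step moves exactly 1/5 of the remaining distance; the values A = 2 and d = 1 are arbitrary.

    # Demonstration only: chi2(b) = (b - A)**6 is an assumed form, chosen
    # because its Newton step moves exactly 1/5 of the way per iteration.
    A, d = 2.0, 1.0

    def newton_step(b):
        # chi2' = 6(b-A)^5, chi2'' = 30(b-A)^4, so the step is (b-A)/5
        return b - 6.0 * (b - A) ** 5 / (30.0 * (b - A) ** 4)

    def aitken(x0, x1, x2):
        # Aitken delta-squared extrapolation of three successive iterates
        return x2 - (x2 - x1) ** 2 / ((x2 - x1) - (x1 - x0))

    b = [A + d]
    for _ in range(3):
        b.append(newton_step(b[-1]))
    print(b)                         # 3.0, 2.8, 2.64, 2.512
    print(aitken(b[0], b[1], b[2]))  # 2.0: exact, since the error is geometric

Because the error sequence of (1.11) is exactly geometric, a single Aitken step recovers A to machine precision.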
The penalty problem is easier. Begin with b = A and let the minimization proceed from there. The first few steps will involve only the derivatives of chi-square; then a step will jump into the penalty region. There the penalty will be far larger than estimated, because it is not well approximated by a quadratic expansion. Either the minimizer of ..\amoeba\AMOEBA.doc or that of Robmin\Robmin.doc .htm will decide that this is not a region to move into, and the step size in this direction will be shrunk. Eventually the solution may settle on the boundary of the penalty, but at that point the first and second derivatives will involve both the penalty and chi-square.
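The step-shrinking behavior described above can be sketched generically. This is not the actual logic of AMOEBA.doc or Robmin.doc; every name below is an illustrative assumption.

    # Generic backtracking sketch (not the AMOEBA/Robmin implementation):
    # if the full step lands where the penalty is far worse than the
    # quadratic model predicted, halve the step until the objective improves.
    def damped_step(b, step, objective, shrink=0.5, max_tries=40):
        f0 = objective(b)
        for _ in range(max_tries):
            if objective(b + step) < f0:
                return b + step      # accepted (possibly shortened) step
            step *= shrink           # penalty region: shrink and retry
        return b                     # no improvement; stay put on the boundary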