In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function.
Using this approximation in place of the true derivative yields the secant method, whose convergence is slower than that of Newton's method.
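A minimal sketch of the secant iteration, in which the derivative in Newton's update is replaced by the slope of the line through the two most recent iterates (the function name, starting points, and tolerances here are illustrative, not from the source):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: like Newton's method, but the derivative f'(x_n)
    is approximated by the slope through the two latest iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # Secant update: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Note that two starting points are needed instead of one, since each step uses the two most recent iterates.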
It is only here that the Hessian matrix of the SSE is positive definite and the first derivative of the SSE is close to zero.
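As a toy illustration of this point, consider a hypothetical one-parameter sum of squared errors with two minima; Newton's method applied to its gradient converges to a minimum only from a start where the second derivative is positive (the example function and all names are illustrative, not from the source):

```python
def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=100):
    """Newton's method for optimization: find a zero of the gradient.
    The iterate is attracted to a minimum only where the second
    derivative (Hessian) is positive."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            return x
        x -= g / hess(x)
    return x

# Hypothetical objective sse(t) = (t^2 - 1)^2, with minima at t = -1 and
# t = +1 and a local maximum at t = 0, where the second derivative is negative.
grad = lambda t: 4 * t * (t * t - 1)
hess = lambda t: 12 * t * t - 4
t_good = newton_minimize(grad, hess, x0=0.9)  # starts where hess > 0
t_bad = newton_minimize(grad, hess, x0=0.1)   # starts where hess < 0
```

From the good start near t = 1 the iteration converges to that minimum; from the poor start near t = 0, where the second derivative is negative, it converges to the maximum at t = 0 instead.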
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root-finding method such as bisection.
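One way to combine these safeguards is sketched below, with bisection as the fallback: the root stays bracketed in an interval, and any Newton step that leaves the bracket is replaced by a bisection step (the function name, bracket handling, and tolerances are illustrative, not from the source):

```python
def safeguarded_newton(f, fprime, a, b, tol=1e-12, max_iter=100):
    """Newton's method safeguarded by bisection: the root is kept
    bracketed in [a, b], the iteration count is capped, and a Newton
    step that escapes the bracket (or divides by a zero derivative)
    falls back to bisection."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Shrink the bracket so it still contains a sign change.
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        d = fprime(x)
        step = x - fx / d if d != 0 else None
        # Accept the Newton step only if it stays inside the bracket;
        # otherwise bisect.
        x = step if step is not None and a < step < b else 0.5 * (a + b)
    return x

# Root of f(x) = x^3 - x - 2, bracketed by f(1) < 0 < f(2).
root = safeguarded_newton(lambda x: x**3 - x - 2,
                          lambda x: 3 * x * x - 1, 1.0, 2.0)
```

This retains Newton's fast local convergence while inheriting the guaranteed (if slow) progress of bisection when the Newton step misbehaves.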
In numerical analysis, Newton's method, also known as the Newton–Raphson method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function.
The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f; the iteration x1 = x0 − f(x0)/f′(x0) is then repeated, with each result used as the next starting point, until a sufficiently precise value is reached.
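The basic single-variable iteration xn+1 = xn − f(xn)/f′(xn) can be sketched as follows (the function name, tolerance, and iteration cap are illustrative, not from the source):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Basic Newton iteration: x_{n+1} = x_n - f(x_n) / f'(x_n),
    repeated until |f(x)| falls below the tolerance."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Starting from x0 = 1, the iterates 1.5, 1.4167, 1.4142, ... converge quadratically to sqrt(2).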
Arthur Cayley in 1879 in The Newton–Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values.
This opened the way to the study of the theory of iterations of rational functions.
To overcome this problem, one can often linearise the function being optimized, using calculus, logarithms, or differentials, or instead turn to evolutionary algorithms such as the stochastic funnel algorithm.
Good initial estimates lie close to the final globally optimal parameter estimate.