|Common Numerical Root Finding Methods
Common root-finding schemes for functions of one independent variable are briefly discussed in this section. They include the bisection method, the secant method, the false position method, and the Newton-Raphson method.
|Bisection Method
The bisection method is based on the fact that a continuous function changes sign as it passes through a zero. By evaluating the function at the midpoint of an interval and replacing whichever endpoint has the same sign as the midpoint value, the bisection method halves the interval in each iteration and eventually converges to the root.
When an interval brackets a root, the bisection method is guaranteed to converge; however, it is among the slowest methods. When the interval contains more than one root, bisection finds only one of them, and when the interval contains a singularity, bisection converges to the singularity instead of a root.
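As an illustration, a minimal bisection routine might look like the following sketch (the function name `bisect`, the tolerance, and the iteration cap are illustrative choices, not part of any particular library):

```python
def bisect(f, a, b, tol=1e-12, max_iter=100):
    """Find a root of f in [a, b] by bisection.

    f(a) and f(b) must have opposite signs so the interval brackets a root.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)          # midpoint of the current interval
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:            # sign change in [a, m]: root is on the left
            b, fb = m, fm
        else:                      # otherwise the root is in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)
```

Each iteration costs one function evaluation and halves the interval, so the error shrinks by a known, fixed factor regardless of how the function behaves inside the bracket.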
|Secant Method
To improve on the slow convergence of the bisection method, the secant method assumes that the function is approximately linear in the local region of interest and uses the zero crossing of the line connecting the two most recent estimates as the new reference point. The next iteration evaluates the function at the new reference point and forms another line, and the process is repeated until the root is found.
Mathematically, the secant method converges more rapidly near a root than the false position method (discussed below). However, since the secant method does not always bracket the root, the algorithm may not converge for functions that are not sufficiently smooth.
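The iteration described above can be sketched as follows (the function name `secant` and the stopping tolerances are illustrative assumptions):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Find a root of f starting from two initial estimates x0 and x1."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            # The secant line is horizontal; no zero crossing exists.
            raise ZeroDivisionError("flat secant line; cannot continue")
        # Zero crossing of the line through (x0, f0) and (x1, f1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1            # keep only the two most recent estimates
        x1, f1 = x2, f(x2)
    return x1
```

Note that `x0` and `x1` need not bracket a root, which is exactly why the method can diverge on insufficiently smooth functions.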
|False Position Method
Similar to the secant method, the false position method uses a straight line to approximate the function in the local region of interest. The only difference between the two methods is the choice of points: the secant method keeps the two most recent estimates, while the false position method keeps the most recent estimate together with the most recent earlier estimate whose function value has the opposite sign, so that the pair always brackets the root.
Because the false position method may retain an older reference point to maintain an opposite-sign bracket around the root, its convergence rate is lower than that of the secant method and harder to predict. The emphasis on bracketing can also restrict the false position method in difficult situations when solving highly nonlinear equations, since one endpoint may remain stuck for many iterations.
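A minimal sketch of the false position method is shown below; it differs from the secant sketch only in how the two retained points are updated (the name `false_position` and the tolerances are illustrative):

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Find a root of f in [a, b] by the false position (regula falsi) method.

    f(a) and f(b) must have opposite signs so the pair always brackets a root.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # Zero crossing of the chord through (a, fa) and (b, fb)
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:            # root is bracketed by [a, c]: replace b
            b, fb = c, fc
        else:                      # root is bracketed by [c, b]: replace a
            a, fa = c, fc
    return c
```

Unlike the secant method, an endpoint is only replaced when the sign condition allows it, which preserves the bracket but can leave one endpoint fixed and slow the convergence.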
|Newton-Raphson Method
The Newton-Raphson method computes the slope (the tangent line) of the function at the current point and uses the zero of the tangent line as the next reference point. The process is repeated until the root is found.
The Newton-Raphson method is much more efficient than other "simple" methods such as the bisection method. However, it requires the derivative of the function at each reference point, which is not always easy to obtain. Furthermore, the tangent line can shoot wildly away from the root, and the iteration can occasionally become trapped in a cycle, so the promised efficiency is not always realized. It is therefore recommended to monitor the step produced by the Newton-Raphson method; when the step is too large or the iterates oscillate, a more conservative method should take over.
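The tangent-line iteration can be sketched as follows; the caller supplies the derivative explicitly, and the name `newton` and the tolerances are illustrative assumptions:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Newton-Raphson iteration.

    df is the derivative of f; x0 is the initial estimate.
    """
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:
            # Horizontal tangent: the tangent line has no zero crossing.
            raise ZeroDivisionError("zero derivative; tangent is horizontal")
        step = fx / dfx            # zero of the tangent line at x is x - step
        x = x - step
        if abs(step) < tol:
            return x
    return x
```

In line with the caution above, a production version would typically bound `step` or fall back to a bracketing method when the step grows or the iterates start to oscillate.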