First, I was a bit too emphatic about the relationship between round-off error and the rate of convergence. In general, we want a fast rate of convergence to help limit the effect of accumulated round-off error, since fewer iterations mean fewer arithmetic operations that can introduce error. However, other factors matter as well. For example, a method with a faster rate of convergence could require significantly more computation per iteration and thus accumulate more round-off error overall.
The two root-finding methods I introduced are good examples of how different methods for solving the same problem can have quite different rates of convergence. However, they are not good examples of the effect of accumulated round-off error because, by construction, each iteration works from the current estimate and the original function, so errors do not build up from one iteration to the next when we are finding a single root.
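To make the difference in convergence rate concrete, here is a small sketch comparing the two methods on the stand-in problem f(x) = x^2 - 2 (the function, interval, starting guess, and tolerance are my illustrative choices, not the ones from class):

```python
# Sketch: compare convergence of bisection and Newton's Method on f(x) = x^2 - 2.

def f(x):
    return x * x - 2.0

def fprime(x):
    return 2.0 * x

def bisection(a, b, tol=1e-12):
    """Bisect [a, b] until the bracket is shorter than tol."""
    iterations = 0
    while (b - a) > tol:
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0.0:   # root lies in [a, m]
            b = m
        else:                    # root lies in [m, b]
            a = m
        iterations += 1
    return (a + b) / 2.0, iterations

def newton(x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is smaller than tol."""
    x = x0
    for i in range(1, max_iter + 1):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x, i
    return x, max_iter

root_b, n_b = bisection(1.0, 2.0)
root_n, n_n = newton(1.5)
print(f"bisection: {root_b:.12f} in {n_b} iterations")
print(f"newton:    {root_n:.12f} in {n_n} iterations")
```

On this example, bisection needs roughly 40 halvings to shrink the bracket below 10^(-12), while Newton's Method reaches the same accuracy in about five iterations.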
Accumulated error is still a serious problem with root-finding methods when we have to find multiple roots. In the homework problem, we divide off (deflate) each root as it is found, so the polynomial used to find each root depends on all of the previously found roots. The round-off and approximation errors introduced in finding one root are carried into every root we divide off after it. Thus we are less and less likely to converge to an accurate solution for the smaller roots found later in the process.
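As a rough illustration of that accumulation, here is a sketch that finds the roots of a toy polynomial (with known roots 6 down to 1, not the homework polynomial) by Newton's Method and deflates each root with synthetic division; each deflation uses the computed, slightly inaccurate root, so the later roots are found from an increasingly perturbed polynomial:

```python
# Sketch: Newton's Method plus deflation on a toy polynomial with known roots.
# Coefficients are stored highest degree first; the polynomial and starting
# guesses are illustrative choices.

def poly_eval(coeffs, x):
    """Evaluate the polynomial at x by Horner's rule."""
    value = 0.0
    for c in coeffs:
        value = value * x + c
    return value

def poly_deriv(coeffs):
    """Coefficients of the derivative polynomial."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def newton_root(coeffs, x0, tol=1e-14, max_iter=100):
    """Newton's Method on the polynomial, starting from x0."""
    dcoeffs = poly_deriv(coeffs)
    x = x0
    for _ in range(max_iter):
        step = poly_eval(coeffs, x) / poly_eval(dcoeffs, x)
        x -= step
        if abs(step) < tol:
            break
    return x

def deflate(coeffs, root):
    """Divide the polynomial by (x - root) using synthetic division."""
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(c + root * quotient[-1])
    return quotient

def multiply_by_linear(coeffs, r):
    """Multiply the polynomial by (x - r); used only to build the test case."""
    result = coeffs + [0.0]
    for i in range(len(coeffs)):
        result[i + 1] -= r * coeffs[i]
    return result

# Build (x - 6)(x - 5)...(x - 1) and find its roots largest-first.
true_roots = [6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
coeffs = [1.0]
for r in true_roots:
    coeffs = multiply_by_linear(coeffs, r)

work, x0 = coeffs, 7.0
for true_r in true_roots:
    r = newton_root(work, x0)
    print(f"true root {true_r:.0f}: computed {r:.15f}, error {abs(r - true_r):.2e}")
    # The deflation below uses the computed root, so its error is baked into
    # every root found after this one.
    work = deflate(work, r)
    x0 = r
```

For this tame, well-separated polynomial the accumulated error stays small, but the mechanism is the same one that hurts the homework problem: each root is computed from a polynomial already perturbed by the errors of the earlier roots and deflations.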
Another problem with root finding and round-off errors arises when there are double roots. Because the function only touches zero at a double root rather than crossing it, a perturbation on the order of round-off error can make the method find two slightly different roots there, or fail to find any root at all.
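To see how delicate a double root is, take (x - 1)^2 = x^2 - 2x + 1 and nudge the constant term by roughly the size of a round-off error; the numbers below are just illustrative:

```python
import math

def real_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c via the quadratic formula."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []                              # no real roots
    s = math.sqrt(disc)
    return [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]

eps = 1e-12                                    # roughly round-off sized
print(real_roots(1.0, -2.0, 1.0))              # the double root at 1.0, listed twice
print(real_roots(1.0, -2.0, 1.0 - eps))        # two distinct roots, about 1 +/- 1e-6
print(real_roots(1.0, -2.0, 1.0 + eps))        # no real roots at all
```

Notice that a perturbation of size 10^(-12) in a coefficient moves the roots by about 10^(-6), and since the function never changes sign at a double root, methods that rely on a sign change cannot bracket it at all.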
One other point concerns the initial guess used by Newton's Method. In our error analysis, we assumed that the initial guess had a relative error of no more than 10^(-1). I also demonstrated that Newton's Method is not guaranteed to converge to a root in all situations. Therefore, in real-world situations, Newton's Method is often combined with a slower method such as bisection, which is guaranteed to converge as long as we start with an interval that brackets the root. To find a root, bisection is used first until the estimate is close enough to the actual root that we can switch to Newton's Method and converge in just a few more iterations.
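Here is a minimal sketch of that hybrid approach, assuming we already have an interval [a, b] that brackets the root; the switching threshold and the test function are my own illustrative choices:

```python
import math

def hybrid_root(f, fprime, a, b, switch_tol=1e-2, tol=1e-14, max_newton=25):
    """Bisection until the bracket is small, then Newton's Method to finish.

    Assumes f(a) and f(b) have opposite signs; switch_tol is an illustrative
    threshold for handing off to Newton's Method, not a universal rule.
    """
    if f(a) * f(b) > 0.0:
        raise ValueError("f(a) and f(b) must bracket a root")

    # Phase 1: bisection, which is guaranteed to shrink the bracket.
    while (b - a) > switch_tol:
        m = (a + b) / 2.0
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m

    # Phase 2: Newton's Method from the midpoint; only a few steps are needed.
    x = (a + b) / 2.0
    for _ in range(max_newton):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve cos(x) = x on [0, 1].
root = hybrid_root(lambda x: math.cos(x) - x,
                   lambda x: -math.sin(x) - 1.0,
                   0.0, 1.0)
print(root)  # approximately 0.739085
```

A production routine would also check that each Newton step stays inside the bracket and fall back to bisection whenever it does not.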