Lecture Outline for Week 4

I. Stability of computations
- a method is stable if a small change in input yields a small change in output
- given f(x), how stable is it?
- if x has round-off error, how big will the error of f be?
- consider the relative change of f vs. the relative change of x:
      [(f(x+h) - f(x)) / f(x)] / [((x+h) - x) / x]
- as h -> 0, this converges to:
      x f'(x) / f(x)
- this is called the "condition number" of f at x
- a small condition number (in absolute value) means the function is "well conditioned", and thus stable

Example: f(x) = (x-1)^(1/2)
    x f'(x) / f(x) = x / [2(x-1)]
    as x -> 1, condition number -> inf.
    as x -> inf., condition number -> 1/2

Example: f(x) = cos(x)
    x f'(x) / f(x) = -x tan(x)
    - trouble near odd multiples of pi/2, where tan(x) blows up
    - trouble for very large x, because of the factor of x
    - this is why we always restrict x to [0, 2pi)

II. Numerical methods, example: root finding
- solve f(x) = 0 for x, i.e., find the roots of f
- example: solve f(x) = x^2 - A = 0 for A > 0
- used by computers and calculators to compute square roots
- many methods for finding roots; we will examine two
- assume that we can evaluate f(x)

III. First root finding method: the bisection method
- if a continuous function changes sign across an interval, it must take the value 0 somewhere in that interval
- assume f(a)f(b) < 0
- let c = (a + b)/2
- if f(c) = 0, we are done
- if f(a)f(c) < 0, let b = c and repeat
- if f(a)f(c) > 0, let a = c and repeat
- the approximation error is the difference between our guess and the correct answer
- let x_n be the current guess at the nth iteration
- let a_n, b_n be the values of a, b at the nth iteration
- let e_n be the approximation error at the nth iteration:
      e_n = x_n - r,  where r is the true root
- assume our method converges; what is the rate of convergence?
- it is the rate at which the error decreases with each iteration
- we want it to be a power of the current error
- since x_n is the midpoint of [a_(n-1), b_(n-1)], and each step halves the interval:
      |e_n| = |x_n - r| < (1/2)(b_(n-1) - a_(n-1)) = [1/2^n](b - a)
  so |e_(n+1)| "=" (1/2)|e_n|   ("=" is short for "approximately equals")
- linear convergence - each step decreases the error by about 1/2 of the current error
- in binary machines, we gain about one significant bit on each iteration
- how many steps are needed if we want 10^(-9) approximation error, assuming (b - a) = 1?
      1/2^n <= 10^(-9)  =>  n "=" 30
- thus about 30 iterations are needed (a runnable sketch follows section IV)

IV. Relation between rate of convergence and total error
- the total error is the actual error of the computed value
- it is made up of all types of error
- generally, round-off error starts very small and increases with each iteration (though not always; the bisection method is not much affected by round-off error)
- approximation error tends to start large and decrease with each iteration
- combining them, the total error tends to decrease with each iteration at first, and then increase
- the "point of diminishing returns" is the point where the total error stops decreasing
- there is no point in continuing the method past that iteration
- we generally want a fast rate of convergence so that round-off error does not become the dominating term
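As promised in section III, here is a minimal Python sketch of the bisection method. It is an illustration, not part of the lecture: the names bisect, tol, and max_iter and the test problem f(x) = x^2 - 2 on [1, 2] are my own choices. On this unit interval with tol = 10^(-9), it stops after about 30 halvings, matching the estimate above.

    import math

    def bisect(f, a, b, tol=1e-9, max_iter=100):
        """Illustrative sketch of section III: halve [a, b], keeping a sign change inside."""
        if f(a) * f(b) >= 0:
            raise ValueError("need f(a) and f(b) to have opposite signs")
        for n in range(max_iter):
            c = (a + b) / 2.0
            if f(c) == 0.0 or (b - a) / 2.0 < tol:
                return c, n + 1              # root (to within tol) and iteration count
            if f(a) * f(c) < 0:
                b = c                        # sign change lies in [a, c]
            else:
                a = c                        # sign change lies in [c, b]
        return (a + b) / 2.0, max_iter

    # The square-root example from section II: solve x^2 - 2 = 0 on [1, 2]
    root, steps = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
    print(root, steps, abs(root - math.sqrt(2.0)))   # steps "=" 30, error < 10^(-9)

Printing (b - a) on each pass would show the interval width halving every step - the linear convergence described in section III.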
V. Second root finding method: Newton's Method
- need f'(x) to exist and be continuous
- pick an initial guess x_0
- calculate f(x_0) and f'(x_0)
- f'(x_0) is the slope of the tangent line to f at x_0
- following that tangent line down to where it crosses the x-axis gives the next guess
- general formula:
      x_(n+1) = x_n - f(x_n)/f'(x_n)
- another derivation:
    - the Taylor polynomial gives an approximation of f(x+h) if we know f(x) and its derivatives:
          f(x+h) = f(x) + f'(x)h + f"(x)h^2/2 + ... + f^(k)(x)h^k/k! + remainder
    - find h so that f(x+h) = 0
    - truncate the series, so f(x+h) "=" f(x) + f'(x)h
          f(x) + f'(x)h = 0
          h = -f(x)/f'(x)
    - thus if x_0 is our current guess, x_0 - f(x_0)/f'(x_0) is the next guess
- for Newton's Method to work, we want to avoid situations where the first derivative is undefined or close to 0
- assume N.M. converges; how fast does it converge?
      e_(n+1) = x_(n+1) - r
              = x_n - f(x_n)/f'(x_n) - r
              = e_n - f(x_n)/f'(x_n)
              = [e_n f'(x_n) + f(r) - f(x_n)] / f'(x_n)    (we may add f(r), since f(r) = 0)
- look at the Taylor polynomial expanding f(r) about x_n:
      f(r) = f(x_n - e_n) = f(x_n) + f'(x_n)(-e_n) + f"(z_n)(-e_n)^2/2
      where z_n is between r and x_n
      f(r) - f(x_n) = -f'(x_n)e_n + f"(z_n)(e_n)^2/2
  thus,
      e_(n+1) = (1/2)[f"(z_n)/f'(x_n)](e_n)^2
- as n -> inf., the bracketed factor converges to a constant, f"(r)/f'(r)
  thus, e_(n+1) "=" k(e_n)^2
- so convergence is quadratic in e_n

Example: how many steps are needed if we want 10^(-9) approximation error?
    assume the initial error is 10^(-1)
    1 iteration -> 10^(-2), 2 iterations -> 10^(-4), 3 -> 10^(-8), 4 -> 10^(-16)
    so approximately 4-5 iterations are needed, vs. about 30 for bisection
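A companion Python sketch of Newton's Method, under the same caveat: newton, tol, and the starting guess x_0 = 1.5 are illustrative assumptions, not from the lecture. Applied to f(x) = x^2 - 2 (the square-root computation from section II), the initial error is about 10^(-1), and the iteration count comes out to the 4-5 steps estimated above.

    def newton(f, fprime, x0, tol=1e-9, max_iter=50):
        """Illustrative sketch of section V: x_(n+1) = x_n - f(x_n)/f'(x_n)."""
        x = x0
        for n in range(max_iter):
            dfx = fprime(x)
            if dfx == 0.0:
                raise ZeroDivisionError("f'(x) vanished; Newton step undefined")
            step = f(x) / dfx
            x -= step
            if abs(step) < tol:              # last correction fell below tolerance
                return x, n + 1
        return x, max_iter

    # sqrt(2) via f(x) = x^2 - 2, f'(x) = 2x; initial error "=" 10^(-1)
    root, steps = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.5)
    print(root, steps)                       # steps "=" 4, vs. about 30 for bisection

For f(x) = x^2 - A the update simplifies to x_(n+1) = (x_n + A/x_n)/2, the classic square-root iteration alluded to in section II.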