# Assignment 1

Due Date: January 21, 8:59pm

Please see the guidelines at https://www.cs.toronto.edu/~lczhang/338/homework.html

### What to Hand In

• A Python file containing all your code, named csc338_a1.py. If you are using a Jupyter Notebook to complete the work, your notebook can be exported as a .py file (File -> Download As -> Python). Your code will be auto-graded using Python 3.6, so please make sure that your code runs. There will be a 20% penalty if you need a remark due to small issues that render your code untestable.
• A PDF file named csc338_a1_written.pdf containing your solutions to the written parts of the assignment, which include Q2(a), Q3(c,d), Q5(e), Q6(b,d), and Q7. Your solutions can be hand-written, but must be legible. Graders may deduct marks for illegible or poorly presented solutions.

Submit the assignment on MarkUs by 9pm on the due date. See the syllabus for the course policy regarding late assignments. All assignments must be done individually.

In [ ]:
import math
import numpy as np
import matplotlib.pyplot as plt


## Question 1. Absolute and Relative Error

[6pt]

What are the absolute and relative errors in approximating $e$ by each of the following values?

1. 3
2. 2.72
3. 2.7183

You may assume that the "true" value of $e$ is given by math.e. Save the results in the variables q1_abs, q1_rel, q2_abs, q2_rel, q3_abs, and q3_rel.
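As an illustration of these definitions (a sketch using $\pi$ rather than $e$, so it does not answer the question itself):

```python
import math

# Approximate pi by 3.14. The absolute error is the magnitude of the
# difference; the relative error divides that by the true magnitude.
approx = 3.14
abs_err = abs(approx - math.pi)
rel_err = abs_err / abs(math.pi)
print(abs_err, rel_err)
```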

In [ ]:
q1_abs = 0
q1_rel = 0
q2_abs = 0
q2_rel = 0
q3_abs = 0
q3_rel = 0


## Question 2. Computation Error

[10pt]

Consider the following function, which is also implemented and plotted below:

$f(x) = \frac{1-\cos(x)}{x^2}$

In [ ]:
def f(x):
    return (1 - math.cos(x)) / pow(x, 2)

In [ ]:
xs = [x for x in np.arange(-3.0, 3.0, 0.05) if abs(x) > 0.05]
ys = [f(x) for x in xs]
plt.plot(xs, ys, 'bo')


### Part (a)

[3pt]

What is $f(0.00000001)$? Given that $f$ is continuous except at $x = 0$, what should $f(0.00000001)$ be?

Why does the Python statement below compute such an inaccurate value of $f(0.00000001)$?

In [ ]:
f(0.00000001)


### Part (b)

[5pt]

Define a Python function f2(x) that computes accurate values of $f(x)$ both for small positive values of $x$ and for larger values in the range $0 < x \le \frac{\pi}{2}$. The relative error should be no more than 1% for those values of $x$.
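A hint on the general technique, sketched on a different function (this is not the rewrite needed for f2): subtractive cancellation can often be removed by algebraically rewriting the expression so that two nearly equal quantities are never subtracted. For example, $\sqrt{x+1}-\sqrt{x}$ can be rewritten as $\frac{1}{\sqrt{x+1}+\sqrt{x}}$:

```python
import math

def naive(x):
    # Loses accuracy for large x: sqrt(x + 1) and sqrt(x) are nearly
    # equal, so the subtraction cancels most significant digits.
    return math.sqrt(x + 1) - math.sqrt(x)

def stable(x):
    # Algebraically equivalent form with no subtraction of close values.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e12
print(naive(x), stable(x))  # the second value is the accurate one
```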

### Part (c)

[2pt]

What is the condition number of f at $x=0.00000001$? Store this value in the variable CN_f.

What is the condition number of f2 at $x=0.00000001$? Store this value in the variable CN_f2.
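For reference, one standard formula (see, e.g., Heath, Chapter 1) for the relative condition number of a differentiable function $f$ at a point $x$ is

$\text{cond} \approx \left| \frac{x f'(x)}{f(x)} \right|$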

In [ ]:
CN_f  = 0
CN_f2 = 0


## Question 3. Sensitivity and Conditioning

[12pt]

Consider the function $g(x) = x^3 - 2x^2 + x$. Suppose there is a small error $h$ in the value of $x$.

### Part (a)

[3 pt]

What is the absolute error in $g(x)$? Implement a function g_abs_err(x, h) that computes this quantity.

In [ ]:
def g_abs_err(x, h):
    """Returns the absolute error of g at x when x is
    perturbed by a small value h.
    """
    return 0


### Part (b)

[3 pt]

What is the relative error in $g(x)$? Implement a function g_rel_err(x, h) that computes this quantity.

In [ ]:
def g_rel_err(x, h):
    """Returns the relative error of g at x when x is
    perturbed by a small value h.
    """
    return 0


### Part (c)

[3 pt]

Estimate the condition number for this problem, and simplify your answer. Include your solution in your PDF file.

### Part (d)

[3 pt]

For what value of the argument $x$ is the problem highly sensitive? Include your solution in your PDF file.

## Question 4. Forward and Backward Error

[12pt]

The sine function is given by the infinite series

$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$

Compute the absolute forward and backward errors if we approximate the sine function by the first two terms of the series, for each element of the array xs below.

Save your solutions in the arrays q4_forward and q4_backward, so that q4_forward[i] and q4_backward[i] are the absolute forward and backward errors corresponding to the input xs[i]. If an error value does not exist, enter the string "DNE".

You may find the functions math.sin and math.asin helpful.

In [ ]:
xs = [0.1, 0.5, 1.0, 2.0, 3.0, 4.0]

In [ ]:
q4_forward = []
q4_backward = []

In [ ]:
math.sin(0.2)

In [ ]:
math.asin(0.2) # arcsin
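To illustrate the two notions on a different problem (approximating $\sqrt{2}$ by the value $1.4$; these numbers are only an illustration, not part of the question): the forward error measures how far the approximate output is from the true output, while the backward error measures how far the input would have to move for the approximate output to be exactly correct.

```python
import math

y_true = math.sqrt(2.0)  # true output for the input x = 2
y_hat = 1.4              # approximate output

# Forward error: distance between the outputs.
forward_err = abs(y_hat - y_true)

# Backward error: 1.4 is the exact square root of 1.96, so ask how far
# 1.96 is from the original input 2.
x_hat = y_hat ** 2
backward_err = abs(x_hat - 2.0)
print(forward_err, backward_err)
```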


## Question 5. A Floating Point System

[30pt]

We'll be implementing a floating point system. For most of this question, assume that we are working with:

• base $\beta$ = 3,
• precision $p$ = 4, and
• exponent range [$L$, $U$] = [-3, 3]

Our floating point system will be normalized.

In [ ]:
# Constants we will use in our code.
# We will change this later on, so make sure your code uses these constants.
BASE = 3
PRECISION = 4
L = -3
U = +3


### Part (a)

[11pt]

The Python class FloatRep is our floating point representation. You will need to do two things:

1. [5pt] Write a set of assertions in the __init__ function to verify that the floating point number is normalized, and that the mantissa, exponent, and sign are all valid.
2. [6pt] Complete the function __float__, which returns this number's value as a Python floating point number (i.e., in decimal).
In [ ]:
class FloatRep:
    def __init__(self, mantissa, exponent, sign):
        """Return a floating point number.

        Arguments:
        mantissa -- a list of integers of length PRECISION
        exponent -- an integer between L and U (inclusive)
        sign     -- either 1 or -1
        """
        # TODO: more assertions

        self.mantissa = mantissa
        self.exponent = exponent
        self.sign = sign

    def __float__(self):
        """Return a Python floating point representation of
        this number. These examples are for your understanding.

        >>> float(FloatRep([0] * PRECISION, 0, 1))
        0.0
        >>> float(FloatRep([1, 0, 0, 1], 1, 1))
        3.111111111111111
        """
        # TODO

In [ ]:
float(FloatRep([1, 0, 0, 1], 1, 1))


### Part (b)

[5pt]

Construct a FloatRep object for each of the following:

1. The largest positive number representable in our floating point system. Store it in the variable largest_positive.
2. The largest negative number representable in our floating point system. Store it in the variable largest_negative.
3. The smallest positive number (greater than 0) in our floating point system. Store it in the variable smallest_positive.
4. The value of math.sqrt(11) represented in our floating point system. Assume chopping is used for rounding. Store it in the variable float_sqrt11.
5. The value of 32 represented in our floating point system. Assume chopping is used for rounding. Store it in the variable float_32.
In [ ]:
largest_positive  = FloatRep([1, 0, 0, 0], 0, 1)
largest_negative  = FloatRep([1, 0, 0, 0], 0, 1)
smallest_positive = FloatRep([1, 0, 0, 0], 0, 1)
float_sqrt11      = FloatRep([1, 0, 0, 0], 0, 1)
float_32          = FloatRep([1, 0, 0, 0], 0, 1)


### Part (c)

[4pt]

Create a variable all_floats that contains a list of all normalized floating point numbers in our floating point system.

Verify that len(all_floats) corresponds to the number we obtain from the formula discussed in class.
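For reference, one common closed-form count of the values in a normalized system (both signs, plus zero) is

$2(\beta - 1)\beta^{p-1}(U - L + 1) + 1$

but check this against the exact formula given in lecture, since conventions differ on whether zero is included.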

In [ ]:
all_floats = []


The code below plots all floating point numbers between -1 and 1. Run the code.

In [ ]:
plt.figure(None, figsize=(18, 1))

lt1 = [float(x) for x in all_floats]
lt1 = [x for x in lt1 if abs(x) <= 1]

plt.plot(lt1, len(lt1) * [1], ".")


### Part (d)

[6pt]

Write a function add_float that takes two non-negative FloatRep values x and y, and returns a FloatRep value representing their sum.

In [ ]:
def add_float(x, y):
    """Returns a FloatRep containing the sum (x + y),
    where x and y are both FloatRep.

    Precondition: float(x) >= 0, float(y) >= 0
    """


### Part (e)

[4pt]

Show that add_float(x,y) is not associative. Include this part of your work in your PDF file.

## Question 6. Cancellation

We are going to demonstrate catastrophic cancellation by computing $\frac{1}{1-x}$ and $e^x$ using their Taylor series expansions. Recall that:

$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \cdots = \sum_{n=0}^\infty x^n$ for $x \in (-1, 1)$, and

$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^\infty \frac{x^n}{n!}$

### Part (a)

[4pt] Complete the functions h1 and h2 according to their specifications below.

In [ ]:
def h1(x, n):
    """Returns a list of the first n terms of the Taylor series expansion
    of 1/(1-x). The examples are for illustrative purposes only, not for
    testing.

    >>> h1(0.5, 3)
    [1.0, 0.5, 0.25]
    >>> sum(h1(0.5, 1000))
    2.0
    >>> 1/(1-0.5)
    2.0
    """
    return [pow(x, i) for i in range(n)]

In [ ]:
def h2(x, n):
    """Returns a list of the first n terms of the Taylor series expansion
    of e^x. The examples are for illustrative purposes only, not for
    testing.

    >>> h2(0.5, 3)
    [1.0, 0.5, 0.125]
    >>> sum(h2(0.5, 100))
    1.6487212707001278
    >>> math.exp(0.5)
    1.6487212707001282
    """
    return [pow(x, i) / math.factorial(i) for i in range(n)]


### Part (b)

[3pt]

Does computing $\frac{1}{1-x}$ using sum(h1(x, n)) suffer from catastrophic cancellation? If so, show an example: choose an $x$ where catastrophic cancellation occurs and show the list h1(x, n). If not, briefly explain why not.

Include your solution in the PDF writeup.

### Part (c)

[1pt]

We already showed in class that computing $e^x$ in this way gives disastrous results for $x < 0$. For the rest of this question, set $x = -30$.

First, compute $e^x$ using math.exp.

Now, compute the estimates sum(h2(-30, n)) for n = 20, 40, 60, 80, 100, 120, 140, 160.

Save the result as a list, in order of ascending values of $n$, in the variable exp_estimates.

In [ ]:
ns = [20, 40, 60, 80, 100, 120, 140, 160]
exp_estimates = [sum(h2(-30, n)) for n in ns]


### Part (d)

[2pt]

Describe the values of exp_estimates. What do the first 4 values look like, and why? Do the estimates eventually converge? Why or why not?

Include your solution in the PDF writeup.

In [ ]:
exp_estimates


## Question 7. More Floating Point Cancellation

[5 pt]

Consider the function z(n) below.

In [ ]:
def z(n):
    a = pow(2.0, n) + 10.0
    b = (pow(2.0, n) + 5.0) + 5.0
    return a - b
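A generic illustration of the rounding behaviour at play (separate from z(n) itself; verify the details for yourself): IEEE double precision carries a 53-bit significand, so near $2^{53}$ the spacing between adjacent doubles is 2, and a small added term can be rounded away entirely.

```python
# Near 2**53 adjacent doubles are 2 apart, so adding 1.0 creates a tie
# that round-to-even resolves back to 2**53 ("absorption").
big = 2.0 ** 53
print(big + 1.0 == big)  # True: the added 1 is rounded away
print(big + 2.0 == big)  # False: 2**53 + 2 is exactly representable
```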


### Part (a)

[1 pt]

Using Python, find all positive integers n for which the value of z(n) is nonzero. Save the result in a list called nonzero_zn.

In [ ]:
nonzero_zn = []


### Part (b)

[4 pt]

Using the four rounding rules described in Section 1.3.4 of Heath, explain why z(n) takes on these particular non-zero values. Why is the expression zero for all other values of n? You may assume that Python uses IEEE double precision floating-point arithmetic (i.e., round-to-even in base 2 with a 53-bit mantissa). Hint: look at the rounding error produced by each of the three additions.