Numerical Analysis Technical Reports

Department of Computer Science
University of Toronto

This site provides access to the Technical Reports of the Numerical Analysis and Scientific Computing Group of the Department of Computer Science at the University of Toronto. The reports are listed from newest to oldest.

You may also wish to look at the main page of the Numerical Analysis and Scientific Computing Group.

Title: Analysis of high-order time-stepping schemes for nonsmooth initial conditions in financial pricing
Authors: Dawei Wang, Christina Christara and Kirill Serkh
Date: December 2023
Pages: 29
Keywords: high-order time-stepping methods, BDF4, BDF3, BDF, RK3, high-order finite differences, Fourier analysis, high-frequency error, low-frequency error, nonsmooth initial conditions, Dirac delta, Heaviside, ramp, butterfly spread, pricing of options
File: Time.pdf

Abstract: Nonsmooth payoff functions are common in financial contracts and pose difficulties in obtaining high-order solutions of the contract prices. In this work, we consider convection-diffusion equations with initial conditions involving various types of nonsmoothness. We apply a fourth-order finite difference (FD) discretization on a uniform grid in space, and BDF4 time stepping initialized with two steps of an explicit third-order Runge-Kutta (RK3) method and one step of BDF3. From the Fourier analysis of the discrete system, we prove that the low-order errors generated by RK3 for nonsmooth data in the high-frequency domain get damped away by BDF steps, while low-order errors in the low-frequency domain come from the low-order initial condition discretization. To achieve globally high-order convergence, we apply fourth-order smoothing to the initial conditions, and provide explicit formulas of the discretization. By combining initial condition smoothing with the proposed time-stepping scheme, we mathematically prove and numerically verify that fourth-order convergence is obtained. Numerical examples on the model PDE and various option pricing problems are also given to demonstrate the fourth-order convergence of our method.
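
The scheme is easy to prototype. Below is a minimal sketch (not the paper's code) of BDF4 applied to the 1-D heat equation u_t = u_xx with homogeneous Dirichlet conditions, initialized with two explicit SSP-RK3 steps and one BDF3 step as described above. For brevity it uses second-order central differences, a smooth initial condition, and illustrative grid parameters, whereas the paper uses fourth-order differences and smoothed nonsmooth data.

    import numpy as np

    N = 49                                   # interior grid points on (0, 1)
    x = np.linspace(0.0, 1.0, N + 2)[1:-1]
    dx = x[1] - x[0]
    A = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / dx**2   # Laplacian, Dirichlet BCs

    dt, nsteps = 2.0e-4, 250                 # dt small enough for the RK3 start

    def rk3(u):                              # explicit SSP-RK3 step for u' = A u
        k1 = u + dt * (A @ u)
        k2 = 0.75 * u + 0.25 * (k1 + dt * (A @ k1))
        return u / 3.0 + 2.0 / 3.0 * (k2 + dt * (A @ k2))

    I = np.eye(N)
    u0 = np.sin(np.pi * x)                   # smooth IC, for an easy exact check
    u1 = rk3(u0)                             # two RK3 steps ...
    u2 = rk3(u1)
    u3 = np.linalg.solve(11 * I - 6 * dt * A,            # ... one BDF3 step
                         18 * u2 - 9 * u1 + 2 * u0)

    hist = [u0, u1, u2, u3]
    for _ in range(nsteps - 3):              # BDF4 for the remaining steps
        un = np.linalg.solve(25 * I - 12 * dt * A,
                             48 * hist[-1] - 36 * hist[-2]
                             + 16 * hist[-3] - 3 * hist[-4])
        hist = hist[1:] + [un]

    t = nsteps * dt
    exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
    print("max error:", np.abs(hist[-1] - exact).max())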


Title: On the Approximation of Singular Functions by Series of Non-integer Powers
Authors: Mohan Zhao and Kirill Serkh
Date: August 2023, updated December 2023
Pages: 46
File: Approx_of_singular_functions_v2.pdf

Abstract:

In this paper, we describe an algorithm for approximating functions of the form \(f(x) = \langle\sigma(\mu), x^\mu\rangle\) over \([0,1] \subset \mathbb{R}\), where \(\sigma(\mu)\) is some distribution supported on \([a,b]\), with \(0 < a < b < \infty\). One example from this class of functions is \(x^c (\log{x})^m=(-1)^m \langle\delta^{(m)}(\mu-c), x^\mu\rangle\), where \(a\leq c \leq b\) and \(m \geq 0\) is an integer. Given the desired accuracy \(\epsilon\) and the values of \(a\) and \(b\), our method determines a priori a collection of non-integer powers \(t_1, t_2, \ldots, t_N\), so that the functions are approximated by series of the form \(f(x)\approx \sum_{j=1}^N c_j x^{t_j}\), and a set of collocation points \(x_1, x_2, \ldots, x_N\), such that the expansion coefficients can be found by collocating the function at these points. We prove that our method has a small uniform approximation error which is proportional to \(\epsilon\) multiplied by some small constants. We demonstrate the performance of our algorithm with several numerical experiments, and show that the number of singular powers and collocation points grows as \(N=O(\log{\frac{1}{\epsilon}})\).
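
A toy version of such an expansion may help fix ideas. The sketch below (not the paper's algorithm) fits \(x^{0.6}\log{x}\) by a series of non-integer powers; the exponents and nodes here are naive choices solved by least squares, whereas the paper determines both a priori from \(\epsilon\), \(a\) and \(b\) and collocates exactly.

    import numpy as np

    a, b, N = 0.25, 1.5, 24
    t = np.linspace(a, b, N)                     # naive non-integer powers
    k = np.arange(4 * N)
    x = (1.0 + np.cos(np.pi * (2 * k + 1) / (8 * N))) / 2.0  # nodes in (0, 1)
    f = lambda x: x**0.6 * np.log(x)             # c = 0.6 lies in [a, b]
    V = x[:, None] ** t[None, :]                 # generalized Vandermonde x^{t_j}
    c, *_ = np.linalg.lstsq(V, f(x), rcond=None) # fit the expansion coefficients
    xx = np.linspace(1e-6, 1.0, 1001)
    print("max error:", np.abs(xx[:, None]**t @ c - f(xx)).max())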


Title: A Continuation method for fitting a bandlimited curve to points in the plane
Authors: Mohan Zhao and Kirill Serkh
Date: January 2023, updated May 2023
Pages: 38
File: Continuation_method_for_fitting_a_bandlimited_curve_v2.pdf

Abstract:

In this paper, we describe an algorithm for fitting an analytic and bandlimited closed or open curve to interpolate an arbitrary collection of points in ℝ². The main idea is to smooth the parametrization of the curve by iteratively filtering the Fourier or Chebyshev coefficients of both the derivative of the arc length function and the tangential angle of the curve, and applying smooth perturbations, after each filtering step, until the curve is represented by a reasonably small number of coefficients. The algorithm produces a curve passing through the set of points to an accuracy of machine precision, after a limited number of iterations. It costs O(N log N) operations at each iteration, provided that the number of discretization nodes is N. The resulting curves are smooth and visually appealing, and do not exhibit any ringing artifacts. The bandwidths of the constructed curves are much smaller than those of curves constructed by previous methods. We demonstrate the performance of our algorithm with several numerical experiments.
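
The bandlimiting idea can be illustrated crudely as follows. This is not the paper's algorithm, which filters the derivative of the arc length function and the tangential angle and restores interpolation with smooth perturbations after each filtering step; the sketch merely low-pass filters the Fourier coefficients of the coordinates of a closed curve.

    import numpy as np

    M = 256
    s = 2 * np.pi * np.arange(M) / M
    rng = np.random.default_rng(0)
    z = (1 + 0.3 * np.cos(5 * s)) * np.exp(1j * s)    # smooth closed base curve
    z = z + 0.01 * (rng.standard_normal(M)            # jittered sample points
                    + 1j * rng.standard_normal(M))
    zh = np.fft.fft(z)                                # Fourier coefficients of z = x + iy
    k = np.fft.fftfreq(M, d=1.0 / M)                  # integer mode numbers
    zh[np.abs(k) > 20] = 0.0                          # keep modes |k| <= 20
    z_bandlimited = np.fft.ifft(zh)                   # smooth, but no longer
                                                      # interpolates the data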


Title: On polynomial interpolation in the monomial basis
Authors: Zewen Shen and Kirill Serkh
Date: December 2022, updated March 2023, December 2023
Pages: 26
File: monomial_v4.pdf

Abstract:

In this paper, we show that the monomial basis is generally as good as a well-conditioned polynomial basis for interpolation, provided that the condition number of the Vandermonde matrix is smaller than the reciprocal of machine epsilon.
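
The claim is easy to probe numerically. The sketch below (illustrative, not the paper's experiments) interpolates a smooth function at Chebyshev nodes in both the monomial and the Chebyshev basis; for these values of n, the condition number of the Vandermonde matrix stays far below the reciprocal of machine epsilon, and the two errors are comparable.

    import numpy as np

    f = lambda x: np.exp(x) * np.sin(5 * x)
    xx = np.linspace(-1, 1, 2001)
    for n in (10, 20, 30):
        x = np.cos(np.pi * (2 * np.arange(n) + 1) / (2 * n))  # Chebyshev nodes
        V = np.vander(x, increasing=True)                     # monomial basis
        c = np.linalg.solve(V, f(x))                          # interpolate
        err_mono = np.abs(np.polynomial.polynomial.polyval(xx, c) - f(xx)).max()
        p_cheb = np.polynomial.chebyshev.Chebyshev.fit(x, f(x), n - 1)
        err_cheb = np.abs(p_cheb(xx) - f(xx)).max()
        print(f"n={n:2d}  cond(V)={np.linalg.cond(V):.1e}  "
              f"monomial={err_mono:.1e}  Chebyshev={err_cheb:.1e}")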


Title: Bilateral XVA pricing under stochastic default intensity: PDE modelling and computation
Author: Yuwei Chen
Date: February 2023
Pages: 123
Note: Ph.D. thesis, Computer Science Department, University of Toronto
File: ywchen-23-phd.pdf

Abstract: Adjusting derivative prices to take into account default risk has attracted the attention of several researchers and practitioners, especially after the 2007-2008 financial crisis. The thesis is a study, via numerical Partial Differential Equation (PDE) approaches, of the modeling and computation of valuation adjustment problems in financial derivative pricing when bilateral counterparty default intensities are considered.
Under some assumptions, the valuation of financial derivatives, including a value adjustment to account for bilateral default risk (XVA), assuming constant default intensities gives rise to a nonlinear PDE (Burgard and Kjaer, 2011). We formulate a penalty iteration method to handle the nonlinearity in this PDE, study its convergence, and extend it to American derivatives, resulting in a double-penalty iteration. We also propose boundary conditions and their discretization for the XVA PDE problem in the cases of call options, put options, and forward contracts. Numerical results demonstrate the effectiveness of our methods.
Then, we derive a novel PDE for derivative pricing including XVA, assuming that the default risk of one of the counterparties follows a Cox-Ingersoll-Ross (CIR) process, while the other party has constant default risk. The time-dependent PDE derived is of Black-Scholes type and involves two "space" variables, namely the asset price and the buyer default intensity, as well as a nonlinear source term. We formulate boundary conditions appropriate for the default intensity variable. The numerical solution of the PDE uses a penalty-like iteration for handling the nonlinearity. We also develop a novel asymptotic approximation formula for the adjusted price of derivatives, resulting in a formula that is very efficient, accurate, and convenient for practitioners. Numerical results indicate stable second order convergence for the 2D PDE solution in terms of the discretization size, and convergence of order at least 1.5 for the asymptotic approximation in terms of the inverse of the mean-reversion rate.
We then extend the PDE model developed for pricing European derivatives including XVA under stochastic counterparty default intensity to American derivatives. We also extend the asymptotic approximation to the American put option problem. A key step is to derive the asymptotic approximation to the free boundary. We present numerical experiments in order to study the accuracy and effectiveness of the 2D PDE and asymptotic approximations.


Title: Bilateral XVA pricing under stochastic default intensity: PDE modelling and computation
Authors: Yuwei Chen and Christina Christara
Date: June 2022
Pages: 32
Note: A later version is published in Applied Numerical Mathematics, vol. 185, pp. 236-259, March 2023.
Keywords: Partial Differential Equations, Black-Scholes, Crank-Nicolson finite differences, mean reversion CIR process, stochastic default intensity, asymptotic approximation, nonlinear iteration, penalty iteration method.
File: sto_inten_xva_v3.pdf

Abstract: Adjusting derivative prices to take into account default risk has attracted the attention of several researchers and practitioners, especially after the 2007-2008 financial crisis. We derive a novel partial differential equation (PDE) model for derivative pricing including the adjustment for default risk, assuming that the default risk of one of the counterparties (the buyer) follows a Cox-Ingersoll-Ross (CIR) process, while the other party has constant default risk. The time-dependent PDE derived is of Black-Scholes type and involves two "space" variables, namely the asset price and the buyer default intensity, as well as a nonlinear source term. We formulate boundary conditions appropriate for the default intensity variable. The numerical solution of the PDE is based on standard finite differences, and a penalty-like iteration for handling the nonlinearity. We also develop and analyze a novel asymptotic approximation formula for the adjusted price of derivatives, resulting in a formula that is very efficient, accurate, and convenient for practitioners. We present numerical results that indicate stable second order convergence for the 2D PDE solution in terms of the discretization size. We compare the effectiveness of the 2D PDE and asymptotic approximations. We study the effect of various numerical and market parameters on the values of the adjusted prices and on the accuracy of the computed solutions.


Title: Rapid evaluation of Newtonian potentials on planar domains
Authors: Zewen Shen and Kirill Serkh
Date: August 2022, updated September 2022, June 2023, October 2023
Pages: 25
File: Rapid_evaluation_of_Newtonian_potentials_v4.pdf

Abstract:

The accurate and efficient evaluation of Newtonian potentials over general 2-D domains is important for the numerical solution of Poisson's equation and volume integral equations. In this paper, we present a simple and efficient high-order algorithm for computing the Newtonian potential over a planar domain discretized by an unstructured mesh. The algorithm is based on the use of Green's third identity for transforming the Newtonian potential into a collection of layer potentials over the boundaries of the mesh elements, which can be easily evaluated by the Helsing-Ojala method. One important component of our algorithm is the use of high-order (up to order 20) bivariate polynomial interpolation in the monomial basis, for which we provide extensive justification. The performance of our algorithm is illustrated through several numerical experiments.


Title: Penalty and Penalty-Like Methods for Nonlinear HJB PDEs
Authors: Christina Christara and Ruining Wu
Date: June 2022
Pages: 27
Note: A later version is published in Applied Mathematics and Computation, vol. 425, pp. 1-19, 2022.
Keywords: Partial Differential Equations, Black-Scholes, nonlinear iteration, finite differences, Crank-Nicolson, control problem, Hamilton-Jacobi-Bellman (HJB) equation, transaction costs, stock borrowing fees, penalty methods, policy iteration.
File: hjb-penalty.pdf

Abstract: There are numerous financial problems that can be posed as optimal control problems, leading to Hamilton-Jacobi-Bellman or Hamilton-Jacobi-Bellman-Isaacs equations. We reformulate these problems as nonlinear PDEs, involving max and/or min terms of the unknown function, and/or its first and second spatial derivatives. We suggest efficient numerical methods for handling the nonlinearity in the PDE through an adaptation of the discrete penalty method [Forsyth, Vetzal 2002] that gives rise to tridiagonal penalty matrices. We formulate a penalty-like method for use with European exercise rights, and extend this to American exercise rights resulting in a double-penalty method. We also use our findings to improve the policy iteration algorithms described in [Forsyth, Labahn 2007]. Numerical results are provided showing clear second-order convergence, and where applicable, we prove the convergence of our algorithms.
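
For readers unfamiliar with discrete penalty methods, the sketch below shows the basic idea of [Forsyth, Vetzal 2002] on the simplest instance: an American put under Black-Scholes with Crank-Nicolson timestepping. Parameters and the dense linear algebra are purely illustrative; the papers above use tridiagonal solves and handle more general nonlinearities.

    import numpy as np

    K, T, r, sig = 100.0, 1.0, 0.05, 0.3          # illustrative market data
    N, M, rho, tol = 200, 100, 1e6, 1e-8          # grid, penalty factor, tol
    S = np.linspace(0.0, 3 * K, N + 1)
    dS, dt = S[1] - S[0], T / M
    payoff = np.maximum(K - S, 0.0)

    L = np.zeros((N + 1, N + 1))                  # Black-Scholes operator,
    for i in range(1, N):                         # interior rows only
        a = 0.5 * sig**2 * S[i]**2 / dS**2
        b = 0.5 * r * S[i] / dS
        L[i, i - 1], L[i, i], L[i, i + 1] = a - b, -2 * a - r, a + b

    I = np.eye(N + 1)                             # rows 0, N stay identity:
    A, B = I - 0.5 * dt * L, I + 0.5 * dt * L     # Dirichlet BCs at 0 and Smax

    u = payoff.copy()
    for _ in range(M):                            # march backwards in time
        rhs = B @ u
        v = u.copy()
        for _ in range(20):                       # penalty iteration
            P = rho * (v < payoff)                # penalize constraint violation
            vn = np.linalg.solve(A + np.diag(P), rhs + P * payoff)
            if np.abs(vn - v).max() < tol:
                v = vn
                break
            v = vn
        u = v
    print("American put value at S = K:", np.interp(K, S, u))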


Title: Accelerating potential evaluation over unstructured meshes in two dimensions
Authors: Zewen Shen and Kirill Serkh
Date: May 2022, updated June 2022
Pages: 47
File: Accelerating_2d_potential_evaluation_over_unstructured_meshes_v2.pdf

Abstract:

The accurate and efficient evaluation of potentials with singular or weakly-singular kernels is of great importance for the numerical solution of partial differential equations. When the integration domain of the potential is irregular and is discretized by an unstructured mesh, the function spaces of near field and self-interactions are non-compact, and, thus, their computations cannot be easily accelerated. In this paper, we propose several novel and complementary techniques for accelerating the evaluation of potentials over unstructured meshes. Firstly, we observe that the standard approximation of the near field by a ball or a triangle often leads to an over-estimated near field. We rigorously characterize the geometry of the near field, and show that this analysis can be used to reduce the number of near field interaction computations dramatically. Secondly, as the near field can be made arbitrarily small by increasing the order of the far field quadrature rule, the expensive near field interaction computation can be efficiently offloaded onto the FMM-based far field interaction computation, which leverages the computational efficiency of highly optimized parallel FMM libraries. Finally, we observe that the usual arrangement in which the interpolation nodes are placed on the same mesh over which the potential is integrated results in an artificially large number of near field interaction calculations, since the discretization points tend to cluster near the boundaries of mesh elements. We show that the use of a separate staggered mesh for interpolation effectively reduces the cost of near field and self-interaction computations. Besides these contributions, we present a robust and extensible framework for the evaluation and interpolation of 2-D volume potentials over complicated geometries. We demonstrate the effectiveness of the techniques with several numerical experiments.


Title: A high-order deferred correction method for the solution of free boundary problems using penalty iteration, with an application to American option pricing
Authors: Dawei Wang, Kirill Serkh, and Christina Christara
Date: May 2022, updated November 2022, March 2023
Pages: 41
Note: A later version is published in Journal of Computational and Applied Mathematics, vol. 432, November 2023, article 115272.
Keywords: Green's functions; deferred correction; free boundary problems; immersed interface methods; finite differences; American options; Linear Complementarity Problem; penalty iteration; derivative jumps
File: HOFB.pdf

Abstract:

This paper presents a high-order deferred correction algorithm combined with penalty iteration for solving free and moving boundary problems, using a fourth-order finite difference method. Typically, when free boundary problems are solved on a fixed computational grid, the order of the solution is low due to the discontinuity in the solution at the free boundary, even if a high order method is used. Using a detailed error analysis, we observe that the order of convergence of the solution can be increased to fourth-order by solving successively corrected finite difference systems, where the corrections are derived from the previously computed lower order solutions. The corrections are applied solely to the right-hand side, and leave the finite difference matrix unchanged. The penalty iterations converge quickly given a good initial guess. We demonstrate the accuracy and efficiency of our algorithm using several examples. Numerical results show that our algorithm gives up to fourth-order convergence for both the solution and the free boundary location. We also test our algorithm on the challenging American put option pricing problem. Our algorithm gives the expected high-order convergence.


Title: On the Evaluation of the Eigendecomposition of the Airy Integral Operator
Authors: Zewen Shen and Kirill Serkh
Date: April 2021, updated June 2022
Pages: 48
Note: Published in Appl. Comput. Harmon. Anal. 57 (2022), pp. 105-150.
File: On_the_Evaluation_of_the_Eigendecomposition_v4.pdf

Abstract:

The distributions of the k-th largest level at the soft edge scaling limit of Gaussian ensembles are some of the most important distributions in random matrix theory, and their numerical evaluation is a subject of great practical importance. One numerical method for evaluating the distributions uses the fact that they can be represented as Fredholm determinants involving the so-called Airy integral operator. When the spectrum of the integral operator is computed by discretizing it directly, the eigenvalues are known to at most absolute precision. Remarkably, the Airy integral operator is an example of a so-called bispectral operator, which admits a commuting differential operator that shares the same eigenfunctions. In this report, we develop an efficient numerical algorithm for evaluating the eigendecomposition of the Airy integral operator to full relative precision, using the eigendecomposition of the commuting differential operator. This allows us to rapidly evaluate the distributions of the k-th largest level to full relative precision everywhere except in the left tail, where they are computed to absolute precision. In addition, we characterize the eigenfunctions of the Airy integral operator, and describe their extremal properties in relation to an uncertainty principle involving the Airy transform. We observe that the Airy integral operator is fairly universal, and we describe a separate application to Airy beams in optics. Using the eigenfunctions, we compute a finite-energy Airy beam that is optimal, in the sense that the beam is both maximally concentrated, and maximally non-diffracting and self-accelerating.
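
For contrast, the direct-discretization route mentioned above, which attains only absolute precision and which the eigendecomposition approach improves upon, fits in a few lines: the Tracy-Widom GUE distribution F_2(s) as a Fredholm determinant of the Airy integral operator, discretized by Gauss-Legendre quadrature on a truncated interval. The truncation point and quadrature size below are illustrative.

    import numpy as np
    from scipy.special import airy
    from numpy.polynomial.legendre import leggauss

    def F2(s, n=60, cut=12.0):
        # F_2(s) = det(I - K_Airy restricted to (s, infinity)), truncated at 'cut'
        z, w = leggauss(n)
        xq = 0.5 * (cut - s) * z + 0.5 * (cut + s)     # nodes on [s, cut]
        wq = 0.5 * (cut - s) * w
        Ai, Aip, _, _ = airy(xq)
        num = np.outer(Ai, Aip) - np.outer(Aip, Ai)    # Ai(x)Ai'(y) - Ai'(x)Ai(y)
        with np.errstate(divide="ignore", invalid="ignore"):
            K = num / (xq[:, None] - xq[None, :])
        np.fill_diagonal(K, Aip**2 - xq * Ai**2)       # diagonal limit of kernel
        sw = np.sqrt(wq)
        return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])

    print(F2(-2.0))   # P(largest scaled GUE eigenvalue <= -2)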


Title: On the Efficient Evaluation of the Azimuthal Fourier Components of the Green's Function for Helmholtz's Equation in Cylindrical Coordinates
Authors: James Garritano, Yuval Kluger, Vladimir Rokhlin, and Kirill Serkh
Date: April 2021
Pages: 58
Note: Published in J. Comput. Phys. 471 (2022), 111585.
File: On_the_Efficient_Evaluation_of_the_Azimuthal_Fourier_Components.pdf

Abstract:

In this report, we develop an efficient algorithm to evaluate the azimuthal Fourier components of the Green's function for the Helmholtz equation in cylindrical coordinates. A computationally efficient algorithm for this modal Green's function is essential for solvers for electromagnetic scattering from bodies of revolution (e.g., radar cross sections, antennas). Current algorithms to evaluate this modal Green's function become computationally intractable when the source and target are close or when the wavenumber is large. Furthermore, most state-of-the-art methods cannot be easily parallelized. In this report, we present an algorithm for evaluating the modal Green's function that has performance independent of both source-to-target proximity and wavenumber, and whose cost grows as O(m), where m is the Fourier mode. Furthermore, our algorithm is embarrassingly parallelizable.


Title: Penalty Methods for Nonlinear Problems in Financial Option Pricing
Authors: Ruining (Ray) Wu
Date: February 2021
Pages: 121
Note: MSc Research Paper, Computer Science Department, University of Toronto
File: rwu-21-msc.pdf

Abstract: We study the pricing of nonlinear problems in computational finance by numerical Partial Differential Equation (PDE) methods. We examine the numerical solution obtained via the use of Policy Iteration methods from Hamilton-Jacobi-Bellman equations and introduce new penalty-like iterative methods to solve the nonlinear equations. We consider the following problems: Unequal Borrowing and Lending rates, Stock Borrowing Fees, Stock Borrowing Fees with American exercise rights, Uncertain Volatility, and Transaction cost models. We demonstrate second-order convergence of the solution and of the Greeks, and for the nonlinear problems we study the number of nonlinear iterations taken as a proxy for the total computational cost. Furthermore, the effects of various details such as different boundary conditions and nonuniform discretization grids are studied. Finally, where applicable, we prove monotonicity of the new penalty-like methods that we have introduced and demonstrate that our numerical implementations uphold this property.


Title: A Provably Componentwise Backward Stable O(n²) QR Algorithm for the Diagonalization of Colleague Matrices
Authors: Kirill Serkh and Vladimir Rokhlin
Date: February 2021
Pages: 51
File: A_Provably_Componentwise_Backward_Stable_QR.pdf

Abstract:

The roots of a monic polynomial expressed in a Chebyshev basis are known to be the eigenvalues of the so-called colleague matrix, which is a Hessenberg matrix that is the sum of a symmetric tridiagonal matrix and a rank-1 matrix. The rootfinding problem is thus reformulated as an eigenproblem, making the computation of the eigenvalues of such matrices a subject of significant practical importance. In this report, we describe an O(n²) explicit structured QR algorithm for colleague matrices and prove that it is componentwise backward stable, in the sense that the backward error in the colleague matrix can be represented as relative perturbations to its components. A recent result of Noferini, Robol, and Vandebril shows that componentwise backward stability implies that the backward error δc in the vector c of Chebyshev expansion coefficients of the polynomial has the bound ‖δc‖ ≲ ‖c‖u, where u is machine precision. Thus, the algorithm we describe has both the optimal backward error in the coefficients and the optimal cost O(n²). We illustrate the performance of the algorithm with several numerical examples.
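
The rootfinding-as-eigenproblem reformulation itself is readily demonstrated with numpy's colleague (Chebyshev companion) matrix. Note that np.linalg.eigvals below is the generic unstructured O(n³) QR algorithm; the paper's contribution is a structured O(n²) componentwise backward stable variant.

    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Roots of p(x) = T_5(x) - 0.5*T_2(x) + 0.1*T_0(x) via the eigenvalues of
    # its colleague matrix.
    c = np.array([0.1, 0.0, -0.5, 0.0, 0.0, 1.0])   # Chebyshev coeffs, low->high
    A = C.chebcompanion(c)                          # colleague matrix of p
    print(np.sort_complex(np.linalg.eigvals(A)))
    print(np.sort_complex(C.chebroots(c)))          # numpy's own colleague roots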


Title: A Modified Simplex Method for Solving Ax = b, x ≥ 0, for Very Large Matrices A Arising from a Calibration Problem
Author: Zoe MacDonald
Date: July 2020
Pages: 45
Note: MSc Research Paper, Computer Science Department, University of Toronto
File: Zoe_MacDonald_MSc_Research_Paper.pdf

Abstract:

In the area of correlated multivariate discrete distributions, including, in particular, multivariate Poisson distributions, the calibration problem is the problem of finding a multidimensional discrete probability distribution with given marginals and a desired correlation matrix. This can be done by taking a convex combination of a set of "extreme" multivariate discrete distributions, each with the given marginals and an "extreme" correlation matrix. The problem of determining the coefficients for this convex combination can be solved by finding a solution to an underdetermined system of linear equations Ax = b, with the constraint x ≥ 0.

At smaller dimensions, a solution to this underdetermined system of linear equations Ax = b, with the constraint x ≥ 0, can be computed using phase 1 of the simplex method. However, the number of extreme distributions involved in such a problem increases exponentially as the dimensionality of the desired correlation matrix increases. Since each extreme distribution is associated with a column of the matrix A, the number of columns in the matrix A also increases exponentially as the dimensionality of the desired correlation matrix increases. This paper describes a variant of the simplex method for use on this specific problem, the cost of which, in most cases that we have tested, grows polynomially with the dimensionality of the desired correlation matrix (and polylogarithmically with the number of extreme distributions). This allows for solutions to be calculated with correlation matrices up to size 52 x 52 in a reasonable amount of time (approximately 20 minutes on a home desktop computer). This was the largest problem tested with this method, due to precision limits, but in principle there is no reason it would not work on even larger problems.
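
For orientation, the underlying feasibility problem can be posed to an off-the-shelf LP solver on a small instance, as below. This is a sketch only: the point of the paper is a simplex variant whose cost does not grow with the exponentially many columns of A, which general-purpose solvers must all touch.

    import numpy as np
    from scipy.optimize import linprog

    # Find x >= 0 with Ax = b (phase-1-style LP with zero objective).
    # Tiny made-up stand-in for the calibration system.
    rng = np.random.default_rng(1)
    m, n = 6, 40
    A = rng.random((m, n))
    x_true = rng.random(n)
    x_true[x_true < 0.7] = 0.0                    # sparse nonnegative solution
    b = A @ x_true
    res = linprog(c=np.zeros(n), A_eq=A, b_eq=b, bounds=(0, None),
                  method="highs")
    print(res.status, np.linalg.norm(A @ res.x - b))   # status 0 => feasible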


Title: Backward Simulation of Multivariate Mixed Poisson Processes
Authors: Michael Chiu, Kenneth R. Jackson and Alexander Kreinin
Date: July 2020
Pages: 22
Note: Submitted to INFORMS Journal on Computing
File: Link to the paper on arXiv

Abstract:
The Backward Simulation (BS) approach was developed to generate, simply and efficiently, sample paths of correlated multivariate Poisson processes with negative correlation coefficients between their components. In this paper, we extend the BS approach to model multivariate Mixed Poisson processes which have many important applications in Insurance, Finance, Geophysics and many other areas of Applied Probability. We also extend the Forward Continuation approach, introduced in our earlier work, to multivariate Mixed Poisson processes.
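
The flavour of backward simulation can be conveyed by the simplest bivariate example (a sketch: the papers construct the terminal coupling via extreme measures, whereas here the counts are merely coupled antithetically through a common uniform): draw the terminal counts first, then scatter the jump times as uniform order statistics.

    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(0)
    T, lam1, lam2, paths = 1.0, 5.0, 8.0, 100_000
    U = rng.random(paths)
    N1 = poisson.ppf(U, lam1 * T).astype(int)        # comonotone in U
    N2 = poisson.ppf(1.0 - U, lam2 * T).astype(int)  # antithetic in U
    print("corr(N1, N2) =", np.corrcoef(N1, N2)[0, 1])   # strongly negative
    times1 = np.sort(rng.random(N1[0]) * T)          # jump times of path 0,
                                                     # given the count N1[0]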


Title: Penalty Methods for Bilateral XVA Pricing in European and American Contingent Claims by a PDE Model
Authors: Yuwei Chen and Christina C. Christara
Date: Nov 2019, Rev. Feb 2020
Pages: 22
Keywords: Partial Differential Equations, Black-Scholes, credit risk, default risk, credit valuation adjustment, call option, put option, forward contract, nonlinear iteration, finite differences, Crank-Nicolson, control problem, Hamilton-Jacobi-Bellman (HJB) equation.
Note: A version of this paper will appear in the Journal of Computational Finance
File: ccc/xva_full.pdf

Abstract:
Taking into account default risk in the valuation of financial derivatives has become increasingly important, especially after the 2007-2008 financial crisis. Under some assumptions, the valuation of financial derivatives, including a value adjustment to account for default risk (the so-called XVA), gives rise to a nonlinear PDE (Burgard and Kjaer, 2011). We propose numerical methods for handling the nonlinearity in this PDE, the most efficient of which are discrete penalty iteration methods. We first formulate a penalty iteration method for the case of European contingent claims, and study its convergence. We then extend the method to the case of American contingent claims, resulting in a double-penalty iteration. We also propose boundary conditions and their discretization for the XVA PDE problem in the cases of call and put options, as well as the case of a forward contract. Numerical results demonstrate the effectiveness of our methods.


Title: An Investigation of Alternative Techniques for Parameter Estimation Problems for Systems of ODEs
Authors: Jienan Yao
Date: January 2020
Pages: 45
Note: MSc Research Paper, Computer Science Department, University of Toronto
File: jnyao-20-msc.pdf

Abstract:

Estimating parameters for a system of ODEs that best fit observed data can be challenging. The evaluation of the objective function for the parameter estimation usually involves numerically simulating the system of ODEs, which can be computationally expensive and may even suffer from simulation failure. A previous investigation suggested the use of a two-stage procedure: first determining promising candidate parameters using a related objective function that is easy to evaluate, before applying a gradient-based optimizer using the full simulation objective function.

In this investigation, techniques are considered for the first stage in the two-stage procedure for the parameter estimation problem. These techniques include physics-informed neural networks, Gaussian process regression, Koopman-based lifting methods and unscented Kalman filtering. Other modifications to these techniques that improve the initial guesses for a better chance of converging to the global optimum are also investigated. Experimental results for a set of test problems from the literature are included for comparison between these approaches.
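
The two-stage idea itself is simple to sketch (illustrative logistic model, not a problem from the paper): stage one fits the right-hand side to derivatives estimated from the data, requiring no ODE solves, and stage two refines the estimate by single shooting from that starting guess.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, y, th):                       # logistic: y' = th0*y - th1*y^2
        return th[0] * y - th[1] * y**2

    th_true = np.array([1.0, 0.1])
    t_obs = np.linspace(0.0, 10.0, 41)
    y_obs = solve_ivp(rhs, (0, 10), [0.5], t_eval=t_obs, args=(th_true,)).y[0]
    rng = np.random.default_rng(0)
    y_obs = y_obs + 0.05 * rng.standard_normal(y_obs.size)  # noisy data

    # Stage 1: gradient matching -- cheap, no ODE solves. The rhs is linear
    # in the parameters, so this is a linear least-squares fit.
    dydt = np.gradient(y_obs, t_obs)
    X = np.column_stack([y_obs, -y_obs**2])
    th0, *_ = np.linalg.lstsq(X, dydt, rcond=None)

    # Stage 2: single shooting with full simulation, from the stage-1 guess.
    def resid(th):
        sol = solve_ivp(rhs, (0, 10), [y_obs[0]], t_eval=t_obs, args=(th,))
        return sol.y[0] - y_obs

    th_hat = least_squares(resid, th0).x
    print("stage 1:", th0, "  stage 2:", th_hat, "  true:", th_true)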


Title: Parameter Estimation for Systems of Ordinary Differential Equations
Authors: Jonathan Calver
Date: January 2019
Pages: 127
Note: PhD Thesis, Department of Computer Science, University of Toronto
File: jcalver-19-phd.pdf

Abstract:

We consider the formulation and solution of the inverse problem that arises when fitting systems of ordinary differential equations (ODEs) to observed data. This parameter estimation task can be computationally intensive if the ODEs are complex and require the use of numerical methods to approximate their solution. The thesis focuses on addressing a common criticism of the single shooting approach, which is that it can have trouble converging to the globally optimal parameters if a sufficiently good initial guess for the parameters is unavailable. We demonstrate that it is often the case that a good initial guess can be obtained by solving a related inverse problem. We show that gradient based shooting approaches can be effective by investigating how they perform on some challenging test problems from the literature. We also discuss how the approach can be applied to systems of delay differential equations (DDEs). We make use of parallelism and the structure of the underlying ODE models to further improve the computational efficiency.

Some existing methods for the efficient computation of the model sensitivities required by a Levenberg-Marquardt least squares optimizer are compared and implemented in a parallel computing environment. The effectiveness of using the adjoint approach for the efficient computation of the gradient required by a BFGS optimizer is also investigated for both systems of ODEs and systems of DDEs. The case of unobserved components of the state vector is then considered and we demonstrate how the structure of the model and the observed data can sometimes be exploited to cope with this situation.


Title: Detecting Early Stage Lung Cancer using a Neural Network Trained with Patches from Synthetically Generated X-Rays
Authors: Abhishek Moturu and Alex Chang
Date: January 2019
Pages: 9
Note: Report on an Undergraduate Summer Research Project
File: Project_Report_Moturu_Chang_2.pdf

Abstract:

The aim of this research is to train a neural network to detect early stage lung cancer with high accuracy. Since X-rays are a relatively cheap and quick procedure that provide a preliminary look into a patient's lungs and because real X-rays are often difficult to obtain due to privacy concerns, a neural network can be trained using patches from synthetically generated frontal chest X-rays using Beer's Law on chest X-ray Computed Tomography (CT) scans with and without randomly inserted lung nodules. This provides a large, diverse training dataset that can be used to train a neural network to classify each patch of a given X-ray based on whether or not it contains a nodule. This research project involves bone removal to obtain soft-tissue dual-energy X-rays, lung segmentation to separate lungs within CT scans and randomize nodule placement, nodule generation to grow nodules of various shapes, sizes, and radiodensities, X-ray generation using Beer's Law, image processing to produce realistic and effective X-rays, preliminary neural network exploration to study the feasibility of detecting nodules, sliding windows to create heatmap visualizations of X-rays to highlight areas of high nodule probability, and analyzing these various methods and the results of the neural network to improve accuracy while reducing space and time complexity. This research may be helpful in detecting lung cancer at a very early stage. This report is an extension of our previous report, "Creation of Synthetic X-Rays to Train a Neural Network to Detect Lung Cancer."
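
The physics underlying the X-ray generation step is Beer's law, I = I0 exp(-∫ μ ds) along each ray. The sketch below shows only this core step, with a synthetic attenuation volume, parallel rays along one axis, and made-up coefficients; the project uses real CT volumes, point sources, and dual-energy bone removal.

    import numpy as np

    nx = ny = nz = 64
    dz = 0.2                                   # voxel depth along the ray (cm)
    mu = np.full((nz, ny, nx), 0.02)           # background "soft tissue" (1/cm)
    zz, yy, xx = np.mgrid[0:nz, 0:ny, 0:nx]
    nodule = (xx - 40)**2 + (yy - 30)**2 + (zz - 32)**2 < 5**2
    mu[nodule] = 0.2                           # denser spherical "nodule"
    I0 = 1.0
    xray = I0 * np.exp(-mu.sum(axis=0) * dz)   # Beer's law along z per pixel
    # 'xray' is an (ny, nx) image; the nodule appears as a darker disk.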


Title: Mixing Monte Carlo and Partial Differential Equation Methods for Multi-Dimensional Optimal Stopping Problems Under Stochastic Volatility
Authors: David Farahany
Date: November 2018
Pages: 124
Note: PhD Thesis, Department of Statistical Sciences, University of Toronto
File: Farahany_David_PhD_Thesis.pdf

Abstract:

In this thesis, we develop a numerical approach for solving multi-dimensional optimal stopping problems (OSPs) under stochastic volatility (SV) that combines least squares Monte Carlo (LSMC) with partial differential equation (PDE) techniques. The algorithm provides dimensional reduction from the PDE and regression perspective along with variance and dimensional reduction from the MC perspective.

In Chapter 2, we begin by laying the mathematical foundation of mixed MC-PDE techniques for OSPs. Next, we show the basic mechanics of the algorithm and, under certain mild assumptions, prove it converges almost surely. We apply the algorithm to the one dimensional Heston model and demonstrate that the hybrid algorithm outperforms traditional LSMC techniques in terms of both estimating prices and optimal exercise boundaries (OEBs).

In Chapter 3, we describe methods for reducing the complexity and run time of the algorithm along with techniques for computing sensitivities. To reduce the complexity, we apply two methods: clustering via sufficient statistics and multi-level Monte Carlo (mlMC)/multi-grids. While the clustering method allows us to reduce computational run times by a third for high dimensional problems, mlMC provides an order of magnitude reduction in complexity. To compute sensitivities, we employ a grid based method for derivatives with respect to the asset, S, and an MC method that uses initial dispersions for sensitivities with respect to variance, v. To test our approximations and computation of sensitivities, we revisit the one dimensional Heston model and find our approximations introduce little-to-no error and that our computation of sensitivities is highly accurate in comparison to standard LSMC. To demonstrate the utility of our new computational techniques, we apply the hybrid algorithm to the multi-dimensional Heston model and show that the algorithm is highly accurate in terms of estimating prices, OEBs, and sensitivities, especially in comparison to standard LSMC.

In Chapter 4, we highlight the importance of multi-factor SV models and apply our hybrid algorithm to two specific examples: the Double Heston model and a mean-reverting commodity model with jumps. Again, we were able to obtain low variance estimates of the prices, OEBs, and sensitivities.


Title: Creation of Synthetic X-Rays to Train a Neural Network to Detect Lung Cancer
Authors: Abhishek Moturu and Alex Chang
Date: August 2018
Pages: 16
Note: Report on an Undergraduate Summer Research Project
File: Project_Report_Moturu_Chang_1.pdf

Abstract:

The purpose of this research is to create effective training data for a neural network to detect lung cancer. Since X-rays are a relatively cheap and quick procedure that provide a preliminary look into a patient's lungs and real X-rays are often difficult to obtain due to privacy concerns, creating synthetic frontal chest X-rays using ray tracing and Beer's Law on several chest X-ray Computed Tomography (CT) scans with and without randomly inserted lung nodules can provide a large, diverse training dataset. This research project involves lung segmentation to separate lungs within CT scans and randomize nodule placement, nodule generation to grow nodules of random size and radiodensity, bone removal to obtain dual-energy X-rays, ray tracing to create X-rays from CT scans from several point sources using Beer's Law, image processing to produce realistic X-rays with uniform orientation, dimensions, and contrast, and analyzing these various methods and the results of the neural network to improve accuracy when compared to real X-rays, while reducing space complexity and time complexity. This research may be helpful in detecting lung cancer at a very early stage.


Title: Computation of Loss Distribution Based on the Structural Model for Credit Portfolios
Author: Meng Han
Date: February 2018
Pages: 275
Note: PhD Thesis, Computer Science Department, University of Toronto
File: Meng_Han_2018_PhD_Thesis.pdf

Abstract:

Credit risk analysis and management at the portfolio level is a challenging issue for financial institutions due to their portfolios' large size, heterogeneity and complex correlation structure. In this thesis, we propose several new asymptotic methods and exact methods to compute the distribution and VaR of a loan portfolio's loss in the CreditMetrics framework. For asymptotic methods, we give an approximation based on the Central Limit Theorem (CLT), which gives more accurate approximations to the conditional portfolio loss probabilities compared with existing approximations. For exact methods, we improve the efficiency by exploiting the sparsity that often arises in the obligors' conditional losses. A sparse convolution method and a sparse FFT method are proposed, which enjoy significant speedups compared with the straightforward convolution method. We also construct truncated versions of the sparse convolution method and the sparse FFT method to further improve their efficiency. To control the aliasing errors and roundoff errors incurred in the truncated sparse FFT method, an optimal exponential windowing approach is developed as well. For lumpy portfolios, we introduce hybrid methods which combine an asymptotic approximation with Monte Carlo simulation or one of the exact methods to achieve a good balance between efficiency and accuracy.
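
The convolution/FFT route for the conditional loss distribution is easy to sketch (a plain version without the sparsity, truncation, and windowing refinements developed in the thesis): each obligor contributes a small probability mass function over integer loss units, and the portfolio loss distribution is their convolution, computed as a product of FFTs.

    import numpy as np

    # Obligors default independently (conditional on the systematic factor)
    # with made-up probabilities and integer losses-given-default.
    rng = np.random.default_rng(0)
    n_obligors, max_unit_loss = 500, 5
    grid = 1 << 13                                # FFT length > max total loss
    F = np.ones(grid, dtype=complex)
    for _ in range(n_obligors):
        p = rng.uniform(0.005, 0.05)              # conditional default prob.
        loss = rng.integers(1, max_unit_loss + 1) # loss units on default
        pmf = np.zeros(grid)
        pmf[0], pmf[loss] = 1.0 - p, p
        F *= np.fft.fft(pmf)                      # convolution = product of FFTs
    loss_pmf = np.maximum(np.fft.ifft(F).real, 0.0)
    cdf = np.cumsum(loss_pmf)
    print("99% VaR =", np.searchsorted(cdf, 0.99), "loss units")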


Title: Numerical Methods for Pricing Multi-Asset Options
Author: Yuwei Chen
Date: January 2018
Pages: 75
Note: M.Sc. thesis, Computer Science Department, University of Toronto
File: ywchen-18-msc.pdf

Abstract:

We consider the pricing of two-asset European and American options by numerical Partial Differential Equation (PDE) methods, and compare the results with certain analytical formulae. Two cases of options are tested: exchange options and spread options. For exchange options, the analytical formula considered is the (exact) Margrabe formula. For spread options, we consider Kirk's formula and the formula by Li, Deng and Zhou. In pricing European two-asset options, the basic numerical PDE model is the two-dimensional Black-Scholes PDE. Different boundary conditions are considered, and their effect on the solution at various points of the grid is studied. Furthermore, various types of non-uniform grids are considered, aiming at reducing the error at certain areas of the grid. Moreover, the effect of the truncated domain on the PDE approximation is studied. We also discuss the effect of certain problem parameters, such as the length of maturity time, and the values of volatility and correlation, on the accuracy and convergence of the PDE approximations. The experiments indicate that the numerical PDE computed price and Greeks are second-order, for appropriately chosen grid discretizations. In the American pricing problem, the discrete penalty method is applied to the linear complementarity problem involving the two-dimensional Black-Scholes PDE and additional constraints. The convergence of the American Spread put option approximation computed with penalty iteration remains second-order, with the number of penalty iterations per timestep remaining small (2-3). We also consider an iterative method with preconditioning techniques for solving the arising large sparse linear system at each timestep, and show that this solution technique is asymptotically optimal.
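
The Margrabe benchmark mentioned above is a standard closed-form result and takes only a few lines (European option to exchange asset 2 for asset 1, no dividends; parameters below are illustrative).

    import numpy as np
    from scipy.stats import norm

    def margrabe(S1, S2, sig1, sig2, rho, T):
        sig = np.sqrt(sig1**2 + sig2**2 - 2 * rho * sig1 * sig2)  # vol of S1/S2
        d1 = (np.log(S1 / S2) + 0.5 * sig**2 * T) / (sig * np.sqrt(T))
        d2 = d1 - sig * np.sqrt(T)
        return S1 * norm.cdf(d1) - S2 * norm.cdf(d2)

    print(margrabe(S1=100.0, S2=95.0, sig1=0.3, sig2=0.2, rho=0.5, T=1.0))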


Title: An Importance Sampling Scheme For Multi-Credit-State Portfolios
Author: Adam Sturge
Date: January 2018
Pages: 66
Note: MSc Research Paper, Computer Science Department, University of Toronto
File: Sturge_MSc_Research_Paper_2018.pdf

Abstract:

Portfolio credit risk based on the Gaussian copula factor model is generally evaluated through Monte Carlo Integration. Glasserman and Li proposed a two-level importance sampling scheme to improve the accuracy of this numerical approximation. However, their result is limited to the binary credit state model. Zhe Wang developed a novel importance sampling technique for the outer level based on the work of Meng Han. While this method shows significant variance reduction, it introduces a bias away from the true answer. In this research paper we remove the bias from the outer level of Wang's method and extend the inner level of Glasserman and Li's method to the multi-credit-state model, combining them into a single powerful technique.


Title: Analysis of quantization error in financial pricing via finite difference methods
Authors: Christina C. Christara and Nat Chun-Ho Leung
Date: July 2017, rev. March 2018
Pages: 27
Keywords: non-smooth initial conditions, option pricing, numerical solution, partial differential equation, convection-diffusion equations, Fourier analysis, finite difference methods
Note: A version of this paper appeared in SIAM J. Numerical Analysis 56:3, 2018, pp. 1731-1757
File: ccc/paper-error.pdf

Abstract:
In this paper, we study the error of a second order finite difference scheme for the one-dimensional convection-diffusion equation. We consider non-smooth initial conditions commonly encountered in financial pricing applications. For these initial conditions, we establish the explicit expression of the quantization error, which is loosely defined as the error of the numerical solution due to the placement of the point of non-smoothness on the numerical grid. Based on our analysis, we study the issue of optimal placement of such non-smoothness points on the grid, and the effect of smoothing operators on quantization errors.


Title: Partial Differential Equation pricing of contingent claims under stochastic correlation
Authors: Nat Chun-Ho Leung, Christina C. Christara and Duy-Minh Dang
Date: October 2017
Pages: 30
Keywords: stochastic correlation, option pricing, numerical solution, asymptotic solution, partial differential equation
Note: A version of this paper appeared in SIAM J. Scientific Computing, Vol 40, Issue 1, pages B1-B31 (2018).
File: paper_sto_corrL.pdf

Abstract:
In this paper, we study a partial differential equation (PDE) framework for option pricing where the underlying factors exhibit stochastic correlation, with an emphasis on computation. We derive a multi-dimensional time-dependent PDE for the corresponding pricing problem, and present a numerical PDE solution. We prove a stability result, and study numerical issues regarding the boundary conditions used. Moreover, we develop and analyze an asymptotic analytical approximation to the solution, leading to a novel computational asymptotic approach based on quadrature with a perturbed transition density. Numerical results are presented to verify second order convergence of the numerical PDE solution and to demonstrate its agreement with the asymptotic approximation. The effect of certain problem parameters on the PDE solution, as well as on the asymptotic approximation solution, is also studied.


Title: Spread option pricing using ADI methods
Authors: Vida Heidarpour-Dehkordi and Christina C. Christara
Date: July 2017
Pages: 17
Keywords: Modified Craig-Sneyd, Alternating Direction Implicit method, two-dimensional Black-Scholes, American option, spread option, exchange option, analytical approximation, numerical PDE solution, penalty iteration
Note: Published in International J. Numerical Analysis and Modelling, 15:3, 2018, pp. 353-369.
File: spreadADI.pdf

Abstract:
Spread option contracts are becoming increasingly important, as they frequently arise in the energy derivative markets, e.g. exchange electricity for oil. In this paper, we study the pricing of European and American spread options. We consider the two-dimensional Black-Scholes Partial Differential Equation (PDE), use finite difference discretization in space and consider Crank-Nicolson (CN) and Modified Craig-Sneyd (MCS) Alternating Direction Implicit (ADI) methods for timestepping. In order to handle the early exercise feature arising in American options, we employ the discrete penalty iteration method, introduced and studied in Forsyth and Vetzal (2002), for one-dimensional PDEs discretized in time by the CN method. The main novelty of our work is the incorporation of the ADI method into the discrete penalty iteration method, in a highly efficient way, so that it can be used for two or higher-dimensional problems. The results from spread option pricing are compared with those obtained from the closed-form approximation formulae of Kirk (1995), Venkatramanan and Alexander (2011), Monte Carlo simulations, and the Brennan-Schwartz ADI Douglas-Rachford method, as implemented in MATLAB. In all spread option test cases we considered, including American ones, our ADI-MCS method, implemented on appropriate non-uniform grids, gives more accurate prices and Greeks than the MATLAB ADI method.


Title: PDE option pricing with variable correlations
Authors: Christina C. Christara and Nat Chun-Ho Leung
Date: May 2017
Pages: 17
Keywords: stochastic correlation, option pricing, regime switching, numerical solution, partial differential equation, system of PDEs, exchange option, basket option, Greeks.
Note: Proceedings of the International Conference on Computational and Informational Sciences and Engineering, honoring Professor Elias N. Houstis, University of Thessaly publications, September 2018, pp. 43-57.
File: paperRS.pdf

Abstract:
Modelling correlation between financial quantities is important in the accurate pricing of financial derivatives. In this paper, we introduce some stochasticity in correlation, by considering a regime-switching correlation model, in which the transition rates between regimes are given. We present a derivation of the associated Partial Differential Equation (PDE) problem. The problem involves a system of $L$ PDEs, where $L$ is the number of regimes. We formulate a finite difference method for the solution of the PDE system, and numerically demonstrate that it converges with second order. We study the effect of certain model parameters on the computed prices. We compare prices from this model, associated PDE and method with those from a stochastic correlation model, associated PDE and method in Emmerich (2006), Leung (2017), and Leung and Christara (2017), and discuss advantages and disadvantages.


Title: PDE option pricing: analysis and application to stochastic correlation
Author: Nat Chun-Ho Leung
Date: June 2017
Pages: 119
Note: Ph.D. thesis, Department of Computer Science, University of Toronto, 2017.
File: nat-17-phd.pdf
Keywords: quantization error, Fourier analysis, digital options, call and put options, stochastic correlation, spread options, quanto options, max options, partial differential equation, localization, boundary conditions, convergence, stability.

Abstract: This thesis is a study of numerical Partial Differential Equation (PDE) methods in financial derivatives pricing. The first part of the thesis is concerned with the behaviour of a numerical PDE solution when the initial condition is not smooth. The second part of the thesis develops computational PDE methods for option pricing problems with stochastic correlation.
In the first part of this thesis, we provide an analysis of the error arising from a non-smooth initial condition when solving a pricing problem with a finite difference method. We build our framework on the sharp error estimate in Giles and Carter (2006), and study three types of non-smoothness that are of financial interest. We show that the error of the numerical solution under Crank-Nicolson-Rannacher timestepping with central spatial differences can be decomposed into two components, respectively a second order error resulting from the approximation to the heat kernel by a discrete operator, and a quantization error that depends on the positioning of non-smoothness on the grid. We discuss how this positioning affects the quality of the numerical solution, and the possibility of an optimal placement of the non-smoothness in the mesh. We also study explicitly the effect of smoothing on the approximation error.
The second part of the thesis focuses on the pricing of European options using a stochastic correlation model. We derive a time-dependent PDE for the pricing problem under stochastic correlation, and develop computational approaches for its solution. The first approach is a finite difference scheme. We study such issues as localization of domain, boundary conditions and stability of the numerical scheme. The second approach is an asymptotic solution of the PDE, appropriate for cases when the correlation process exhibits fast mean reversion and when a numerical PDE solution is considered costly.
Numerical experiments demonstrate the effectiveness of our methods, and the agreement between the two solutions and Monte Carlo simulations. We also experimentally demonstrate the effect of smoothing on the numerical solution, and study the effect of certain problem parameters on the approximate solution.


Title: A Dimension and Variance Reduction Monte-Carlo Method For Option Pricing Under Jump-Diffusion Models
Authors: Duy-Minh Dang, Kenneth R. Jackson and Scott Sues
Date: September 2017
(An earlier draft of this paper appeared in October 2015.)
Pages: 35
Keywords: conditional Monte Carlo, variance reduction, dimension reduction, partial-integro-differential equations, jump diffusions, fast Fourier transform, normal, double-exponential
Note: To appear in the Journal of Applied Mathematical Finance.
File: Link to the paper on the SSRN

Abstract:

We develop a highly efficient MC method for computing plain vanilla European option prices and hedging parameters under a very general jump-diffusion option pricing model which includes stochastic variance and multi-factor Gaussian interest short rate(s). The focus of our MC approach is variance reduction via dimension reduction. More specifically, the option price is expressed as an expectation of a unique solution to a conditional Partial Integro-Differential Equation (PIDE), which is then solved using a Fourier transform technique. Important features of our approach are (i) the analytical tractability of the conditional PIDE is fully determined by that of the Black-Scholes-Merton model augmented with the same jump component as in our model, and (ii) the variances associated with all the interest rate factors are completely removed when evaluating the expectation via iterated conditioning applied to only the Brownian motion associated with the variance factor. For certain cases when numerical methods are either needed or preferred, we propose a discrete fast Fourier transform method to numerically solve the conditional PIDE efficiently. Our method can also effectively compute hedging parameters. Numerical results show that the proposed method is highly efficient.


Title: Correlated Multivariate Poisson Processes and Extreme Measures
Authors: Michael Chiu, Kenneth R. Jackson and Alexander Kreinin
Date: June 2017
Pages: 13
Note: Submitted to the Journal of Model Assisted Statistics and Applications (MASA).
File: Correlated_Poisson_Processes_1.pdf

Abstract:

Multivariate Poisson processes have many important applications in Insurance, Finance, and many other areas of Applied Probability. In this paper we study the backward simulation approach to modelling multivariate Poisson processes and analyze the connection to the extreme measures describing the joint distribution of the processes at the terminal simulation time.


Title: Option pricing in jump diffusion models with quadratic spline collocation
Authors: Christina C. Christara and Nat Chun-Ho Leung
Date: May 2014; revised May 2015
Pages: 15
Keywords: quadratic spline, collocation, finite difference, European option, American option, partial integro-differential equation, Merton model, Kou model, calculation of Greeks
Note: Published in Applied Mathematics and Computation, Vol 279, 10 April 2016, pp 28-42.
File: paper-jump.pdf

Abstract:
In this paper, we develop a robust numerical method for pricing options when the underlying asset follows a jump diffusion model. We demonstrate that, with the quadratic spline collocation method, the integral approximation in the pricing PIDE is intuitively simple, and comes down to the evaluation of the probabilistic moments of the jump density. The calculation of Greeks is also simple. When combined with a Picard iteration scheme, the pricing problem can be solved efficiently. We present the method and the numerical results from pricing European and American options with Merton's and Kou's models.


Title: Mixing LSMC and PDE Methods to Price Bermudan Options
Authors: David Farahany, Kenneth R. Jackson and Sebastian Jaimungal
Date: November 2016; revised August 2017
Pages: 33
Note: To be published in SIAM J. Financial Mathematics
File: Link to the paper on the SSRN

Abstract:

We develop a mixed least squares Monte Carlo-partial differential equation (LSMC-PDE) method for pricing Bermudan-style options with assets whose dynamics are driven by latent variables. The algorithm is formulated for an arbitrary number of assets and volatility processes, and its probabilistic convergence is established. Our numerical examples focus on the Heston model and we compare our hybrid algorithm with a full LSMC approach. Using Fourier methods we are able to derive efficient FFT based solutions, and we demonstrate that our algorithm greatly reduces the variance in the computed prices and optimal exercise boundaries. We also compare the early exercise boundaries and prices computed by our hybrid algorithm with those produced by finite difference methods and find excellent agreement.


Title: Numerical Methods for Computing Sensitivities for ODEs and DDEs
Authors: J. Calver and W.H. Enright
Date: September 2016
Pages: 17
Keywords: Ordinary differential equations, delay differential equations, adjoint method, variational equations, sensitivities, parameter estimation
Note: A revised version of this paper will appear in Numerical Algorithms
File: adjoint_paperNSERC.pdf

Abstract:

We investigate the performance of the adjoint approach and the variational approach for computing the sensitivities of the least squares objective function commonly used when fitting models to observations. We note that the discrete nature of the objective function makes the cost of the adjoint approach for computing the sensitivities dependent on the number of observations. In the case of ODEs, this dependence is due to having to interrupt the computation at each observation point during numerical solution of the adjoint equations. Each observation introduces a jump discontinuity in the solution of the adjoint differential equations. These discontinuities are propagated in the case of DDEs, making the performance of the adjoint approach even more sensitive to the number of observations for DDEs. We quantify this cost and suggest ways to make the adjoint approach scale better with the number of observations. In numerical experiments, we compare the adjoint approach with the variational approach for computing the sensitivities.
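
The variational approach, in its simplest scalar form: augment the state with the sensitivity S = dy/dtheta, which satisfies the variational equation obtained by differentiating the ODE with respect to theta, and integrate both together. A minimal sketch (the paper treats least-squares objectives, DDEs, and the adjoint alternative):

    import numpy as np
    from scipy.integrate import solve_ivp

    # y' = -theta*y, y(0) = 1; then S = dy/dtheta satisfies
    # S' = -y - theta*S, S(0) = 0.
    theta = 1.3

    def aug(t, z):
        y, S = z
        return [-theta * y, -y - theta * S]   # ODE plus variational equation

    sol = solve_ivp(aug, (0.0, 2.0), [1.0, 0.0],
                    t_eval=np.linspace(0, 2, 5), rtol=1e-10, atol=1e-12)
    print("computed dy/dtheta:   ", sol.y[1])
    print("exact -t*exp(-theta*t):", -sol.t * np.exp(-theta * sol.t))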


Title: A Neural Network Approach to Efficient Valuation of Large VA Portfolios
Author: Seyed Amir Hejazi
Date: August 2016
Pages: 162
Keywords: Variable annuity, Spatial interpolation, Neural network, Portfolio valuation, Solvency Capital Requirement (SCR)
Note: PhD Thesis
File: Seyed_Amir_Hejazi_2016_PhD_Thesis.pdf

Abstract:

Variable annuity (VA) products expose insurance companies to considerable risk because of the guarantees they provide to buyers of these products. Managing and hedging the risks associated with VA products requires intraday valuation of key risk metrics for these products. The complex structure of VA products and computational complexity of their accurate evaluation has compelled insurance companies to adopt Monte Carlo (MC) simulations to value their large portfolios of VA products. Because the MC simulations are computationally demanding, especially for intraday valuations, insurance companies need more efficient valuation techniques.

Existing academic methodologies focus on fair valuation of a single VA contract, exploiting ideas in option theory and regression. In most cases, the computational complexity of these methods surpasses the computational requirements of MC simulations. Recently, a framework based on Kriging has been proposed that can significantly decrease the computational complexity of MC simulation. Kriging methods are an important class of spatial interpolation techniques. In this thesis, we study the performance of prominent traditional spatial interpolation techniques. Our study shows that traditional interpolation techniques require the definition of a distance function that can significantly impact their accuracy. Moreover, none of the traditional spatial interpolation techniques provide all of the key properties of accuracy, efficiency, and granularity. Therefore, in this thesis, we present a neural network approach for the spatial interpolation framework that affords an efficient way to find an effective distance function. The proposed approach is accurate, efficient, and provides an accurate granular view of the input portfolio. Our numerical experiments illustrate the superior performance of the proposed neural network approach in estimating the delta value and the Solvency Capital Requirement for large portfolios of VA products, compared to traditional spatial interpolation schemes and MC simulations.


Title: Limiting Deviation Method for Coupling Codes with Adaptive Window-Size Control using Digital Filters
Author: Rohan Palaniappan
Date: July 2015
Pages: 75
Note:MSc Research Paper, Computer Science Department, University of Toronto
File: rohan-palaniappan-msc-research-paper.pdf

Abstract:

Multi-physics systems model multiple simultaneous physical phenomena that affect one another. Simulating such systems can be done using the Operator Splitting (OS) method, wherein separate simulations are run concurrently for each phenomenon and data is exchanged between the simulations periodically. In this work, we explore, in the context of the OS method, what we call the Limiting Deviation (LD) and Limiting Deviation with Interior Check (LDIC) methods, which indirectly limit the global solution error in an unconventional way. We test the LD and LDIC methods using "canned data" on five test cases with thirteen different adaptive step-size controllers. We identify the best controllers for two experiments, point out undesired behaviour in certain controllers and examine the effect of two key parameters. Finally, we discuss implementation options. While our results are quite positive, we consider them preliminary, because the tests do not factor in the coupling between the separate simulations.


Title: Efficient and Accurate Numerical PDE Methods for Pricing Financial Derivatives
Author: Mufan (Bill) Li
Date: April 2015
Pages: 35
Note:B.A.Sc. Thesis, Dept. of Electrical and Computer Eng., Univ. of Toronto, 2015.
File: mufanLi-15-basc.pdf
Keywords: American options, (implicit) penalty iteration, operator splitting (explicit penalty) method, convergence, accuracy, complexity, efficiency, Greek calculation.

Abstract: The main difficulty in pricing American options comes from the early exercise right, which creates a non-linear constraint on the Black-Scholes PDE. Under a finite difference discretization of the PDE, the price of an American option can be approximated, with several techniques available to properly handle the American constraint. While both an iterative penalty method and a direct operator splitting method are convergent, their relative efficiency and accuracy require a comprehensive study. Using Crank-Nicolson time stepping and non-uniform grids, the methods are compared in numerical experiments. The criteria include order of convergence, efficiency, and complexity.
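A minimal sketch of the penalty idea for an American put follows, using a fully implicit step on a uniform grid rather than the thesis's Crank-Nicolson on non-uniform grids; all parameters are illustrative:

    import numpy as np
    from scipy.linalg import solve_banded

    # American put under Black-Scholes, fully implicit timestepping with a
    # penalty term enforcing V >= payoff.  Uniform grid; parameters invented.
    r, sigma, K, T = 0.05, 0.3, 100.0, 1.0
    S_max, m, n, rho = 300.0, 300, 100, 1e6
    S = np.linspace(0.0, S_max, m + 1)
    dt = T / n
    payoff = np.maximum(K - S, 0.0)
    V = payoff.copy()

    i = np.arange(1, m)                       # interior nodes, S_i = i*dS
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)  # dt*L sub-diagonal
    b = -dt * (sigma**2 * i**2 + r)           # dt*L diagonal
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)  # dt*L super-diagonal

    for _ in range(n):
        V_old = V.copy()
        P = np.zeros(m - 1)                   # penalty: rho where V < payoff
        for _ in range(50):                   # penalty iteration
            ab = np.zeros((3, m - 1))         # banded form of (I - dt*L) + P
            ab[0, 1:] = -c[:-1]
            ab[1, :] = 1.0 - b + P
            ab[2, :-1] = -a[1:]
            rhs = V_old[1:m] + P * payoff[1:m]
            rhs[0] += a[0] * K                # boundary V(0,t) = K (discounting ignored)
            V_new = solve_banded((1, 1), ab, rhs)
            P_new = np.where(V_new < payoff[1:m], rho, 0.0)
            if np.array_equal(P_new, P):      # active set unchanged: converged
                break
            P = P_new
        V[1:m], V[0], V[-1] = V_new, K, 0.0

    print("American put at S=100:", np.interp(100.0, S, V))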


Title: Efficient Valuation of SCR via a Neural Network Approach
Author:Seyed Amir Hejazi and Kenneth R. Jackson
Date: First published September 2015; revised October 2016.
Pages: 29
Keywords:Variable annuity, Spatial interpolation, Neural network, Portfolio valuation, Solvency Capital Requirement (SCR)
Note: Accepted for publication in Journal of Computational and Applied Mathematics.
An earlier version of this paper can be found here.
File: CAM-Paper3.pdf

Abstract:

As part of the new regulatory framework of Solvency II, introduced by the European Union, insurance companies are required to monitor their solvency by computing a key risk metric called the Solvency Capital Requirement (SCR). The official description of the SCR is not rigorous and has led researchers to develop their own mathematical frameworks for calculation of the SCR. These frameworks are complex and are difficult to implement. Recently, Bauer et al. suggested a nested Monte Carlo (MC) simulation framework to calculate the SCR. But the proposed MC framework is computationally expensive even for a simple insurance product. In this paper, we propose incorporating a neural network approach into the nested simulation framework to significantly reduce the computational complexity in the calculation. We study the performance of our neural network approach in estimating the SCR for a large portfolio of an important class of insurance products called Variable Annuities (VAs). Our experiments show that the proposed neural network approach is both efficient and accurate.


Title: A Neural Network Approach to Efficient Valuation of Large Portfolios of Variable Annuities
Author:Seyed Amir Hejazi and Kenneth R. Jackson
Date: May 2016 (earlier drafts September 2015 and January 2016)
Pages: 37
Keywords:Variable annuity, Spatial interpolation, Neural Network, Portfolio valuation
Note: To appear in Insurance: Mathematics and Economics
File: IME-Paper2b.pdf

(Earlier versions of this paper can be found in draft1 and draft2.)

Abstract:

Managing and hedging the risks associated with Variable Annuity (VA) products require intraday valuation of key risk metrics for these products. The complex structure of VA products and computational complexity of their accurate evaluation have compelled insurance companies to adopt Monte Carlo (MC) simulations to value their large portfolios of VA products. Because the MC simulations are computationally demanding, especially for intraday valuations, insurance companies need more efficient valuation techniques. Recently, a framework based on traditional spatial interpolation techniques has been proposed that can significantly decrease the computational complexity of MC simulation (Gan and Lin, 2015). However, traditional interpolation techniques require the definition of a distance function that can significantly impact their accuracy. Moreover, none of the traditional spatial interpolation techniques provide all of the key properties of accuracy, efficiency, and granularity (Hejazi et al., 2015). In this paper, we present a neural network approach for the spatial interpolation framework that affords an efficient way to find an effective distance function. The proposed approach is accurate, efficient, and provides an accurate granular view of the input portfolio. Our numerical experiments illustrate the superiority of the performance of the proposed neural network approach compared to the traditional spatial interpolation schemes.


Title: Computational Modeling and Analysis of Complex Muscle Architecture
Author:Dongwoon Lee
Date: August 2015
Pages: 94
Keywords:skeletal muscle; muscle architecture; cadaveric specimen; PCSA; pennation angle
Note: PhD Thesis
File: Lee_Dongwoon_PhDThesis.pdf

Abstract:

Muscle architecture is a primary determinant of the muscle function associated with body movement. An assessment of muscle architecture is therefore of great importance, not only for investigating anatomical aspects of muscle but also for predicting its functional capacity. Most muscles have a variable complexity in their architectures, making it challenging to accurately assess them. Previous cadaveric approaches only take into account limited portions of architecture. On the other hand, conventional radiological approaches, such as ultrasonography and MRI, examine two-dimensional projected images. Neither of these approaches provides a thorough understanding of the entire muscle architecture. This may lead to under- or over-estimation of architectural parameters that are significant for both clinical and computational studies. Therefore, the purpose of this thesis is to develop a computational modeling approach to facilitate quantification and reconstruction of complex muscle architecture. Cadaveric specimen data are used to investigate muscle architecture and to reconstruct accurate models. Associated geometric complexity and variation are carefully examined to yield consistent estimation of architectural parameters. This method demonstrates robustness against non-uniformity in the data and consistency over various types of muscle architecture, with less than 10% error in PCSA estimation. By incorporating ultrasonographic assessment, this method is extended to approximate muscle architecture in living subjects, which enables estimation of PCSA for in vivo muscle in a more consistent manner. Validation experiments demonstrate 0.4% - 8.4% estimation errors between the original architecture and its approximation, depending on the anatomical complexity, which provides a practical insight into the quantification of PCSA for in vivo muscle.


Title: New Approaches to Importance Sampling for Portfolio Credit Risk Valuation
Author:Zhe (Robert) Wang
Date: July 2015
Pages: 57
Keywords:portfolio credit risk; Gaussian copula; Monte Carlo simulation; importance sampling; variance reduction
Note: MSc Research Paper
File: Zhe.Wang.MSC.Thesis.pdf

Abstract:

Portfolio credit risk based on the Gaussian Copula model has been widely studied and generally evaluated through Monte Carlo simulations. The two-level structure, namely systematic factors and individual factors, complicates the problem in such a way that traditional variance reduction techniques become very hard to apply. Glasserman and Li proposed a two-level importance sampling approach to tackle a simplified binary credit states problem. The inner level was approximated through a conditional importance sampling approach using an exponential twisting technique. In this research project, we propose an alternative importance sampling approach which uses the Central Limit Theorem for the inner level. Our approach can be easily generalized to multi-credit states. Based on this approximation, we then propose two novel approaches motivated from research in machine learning. Instead of finding the importance sampler through an optimization problem, the first approach approximates the zero variance function by learning from the samples which are generated from Markov Chain Monte Carlo. The second approach treats the problem as a Bayesian inference problem and evaluates the tail probability through Bayesian Monte Carlo. Compared to Glasserman and Li's method, numerical results show that these two new approaches have advantages in both accuracy and speed and are also more easily adapted to other applications.
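The CLT idea for the inner level can be sketched compactly: conditional on the systematic factor Z, defaults are independent Bernoulli trials, so the conditional loss is approximately normal and the inner tail probability has a closed form. A toy one-factor version (all parameters invented; the outer-level importance sampling is omitted for brevity):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    # One-factor Gaussian copula: obligor i defaults when
    # a*Z + sqrt(1-a^2)*eps_i < x.  All parameters synthetic.
    n_obl = 100
    a = 0.4                                  # common factor loading
    p_uncond = 0.01                          # unconditional default prob
    x = norm.ppf(p_uncond)
    c = rng.uniform(0.5, 1.5, n_obl)         # exposures
    loss_level = 10.0

    def tail_prob_clt(z):
        """P(L > loss_level | Z=z) via the CLT approximation of the
        conditional loss (the alternative proposed for the inner level,
        in place of exponential twisting)."""
        p_z = norm.cdf((x - a * z) / np.sqrt(1 - a**2))
        mu = np.sum(c * p_z)
        sd = np.sqrt(np.sum(c**2 * p_z * (1 - p_z)))
        return norm.sf((loss_level - mu) / sd)

    # Outer level: plain MC over the systematic factor Z.
    Z = rng.standard_normal(20000)
    est = np.mean([tail_prob_clt(z) for z in Z])
    print("P(L > %.0f) approx %.3e" % (loss_level, est))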


Title: A Practical Approach to in vivo Quantification of Physiological Cross-Sectional Area for Human Skeletal Muscle
Authors:Dongwoon Lee, Soo Kim, Ken Jackson, Eugene Fiume and Anne Agur
Date: May 2015
Pages: 9
Keywords:skeletal muscle; muscle architecture; PCSA; ultrasonography; digitization
Note: Submitted to the Journal of Biomechanics
File: Lee.JBM.2015.pdf

Abstract:

Physiological cross-sectional area (PCSA) is an important property used to predict the maximal force capacity of skeletal muscle. A common approach to estimate PCSA uses an algebraic formula based on muscle volume (MV), fascicle length (FL) and pennation angle (PA) that are measured by magnetic resonance imaging (MRI) and ultrasonographic assessments. Due to the limited capability of assessing architecturally complex muscles with these imaging modalities, the accuracy of measurements and ultimately PCSA estimation is compromised. On the other hand, cadaveric modeling provides a more accurate quantification by effectively dealing with the variable complexity of the muscle but it may not be straightforward to directly apply to in vivo studies. Thus, the purpose of our study is to provide a practical approach to PCSA estimation for in vivo muscle by integrating both cadaveric and ultrasound data. The muscle architecture in vivo is approximated by fitting 3D cadaveric data onto 2D ultrasound data of living subjects. The fitted architectural data is used for PCSA quantification. Validation experiments based on synthetic muscle and cadaveric data, respectively, demonstrate 0.4% - 8.4% errors between original architecture and its approximation, depending on the anatomical complexity. Furthermore, it is shown that, despite the large inter-subject variability of cadaveric data (standard deviation: ±153.2 mm2), their transformation toward 2D ultrasound data consistently yields a narrow distribution of PCSA estimation (standard deviation: ±24.6 to ±35.7 mm2), which provides a practical insight into accurate quantification of PCSA for in vivo muscle.
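For context, the algebraic formula referred to in the abstract is commonly written as \( \mathrm{PCSA} = \mathrm{MV}\cos(\mathrm{PA})/\mathrm{FL} \), so measurement errors in muscle volume, pennation angle, or fascicle length propagate directly into the PCSA estimate.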


Title:A Spatial Interpolation Framework for Efficient Valuation of Large Portfolios of Variable Annuities
Authors:Seyed Amir Hejazi, Kenneth R. Jackson and Guojun Gan
Date: April 2017
Pages: 27
Keywords:Variable annuity, Spatial interpolation, Kriging, Inverse distance weighting, Radial basis function, Portfolio valuation
Note: To appear in the special issue on Computational Finance and Insurance of the Journal of Quantitative Finance and Economics.
File: IME-Paper1.pdf

Abstract:

Variable Annuity (VA) products expose insurance companies to considerable risk because of the guarantees they provide to buyers of these products. Managing and hedging these risks requires insurers to find the value of key risk metrics for a large portfolio of VA products. In practice, many companies rely on nested Monte Carlo (MC) simulations to find key risk metrics. MC simulations are computationally demanding, forcing insurance companies to invest hundreds of thousands of dollars in computational infrastructure per year. Moreover, existing academic methodologies are focused on fair valuation of a single VA contract, exploiting ideas in option theory and regression. In most cases, the computational complexity of these methods surpasses the computational requirements of MC simulations. Therefore, academic methodologies cannot scale well to large portfolios of VA contracts. In this paper, we present a framework for valuing such portfolios based on spatial interpolation. We provide a comprehensive study of this framework with existing interpolation schemes. Our numerical results show superior performance, in terms of both computational efficiency and accuracy, for these methods compared to nested MC simulations. We also present insights into the challenge of finding an effective interpolation scheme in this framework, and suggest guidelines that help us build a fully automated scheme that is efficient and accurate.


Title:Dimension and Variance Reduction for Monte Carlo Methods for High-Dimensional Models in Finance
Authors:Duy-Minh Dang, Kenneth R. Jackson and Mohammadreza Mohammadi
Date: January 2015
Pages: 21
Keywords:conditional Monte Carlo, variance reduction, dimension reduction, cross-currency, Fourier transform, partial differential equations
JEL Classification: E40, E43, G12, G13, C61, C63
File: Link to the paper on the SSRN

Abstract:

One-way coupling often occurs in multi-dimensional models in finance. In this paper, we present a dimension reduction technique for Monte Carlo (MC) methods, referred to as drMC, that exploits this structure for pricing plain-vanilla European options under an N-dimensional one-way coupled model, where N is arbitrary. The dimension reduction also often produces a significant variance reduction.

The drMC method is a dimension reduction technique built upon (i) the conditional MC technique applied to one dimension and (ii) the derivation of a closed-form solution for the conditional Partial Differential Equation (PDE) that arises via Fourier transforms. In the drMC approach, the option price can be computed simply by taking the expectation of this closed-form solution. Hence, the approach results in a powerful dimension reduction from N to one, which often results in a significant variance reduction as well, since the variance associated with the other (N-1) factors in the original model is completely removed from the drMC simulation. Moreover, under the drMC framework, hedging parameters, or Greeks, can be computed in a much more efficient way than in traditional MC techniques.

A variance reduction analysis of the method is presented and numerical results illustrating the method's efficiency are provided.
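The conditioning step at the heart of this idea can be illustrated on a two-factor toy model (ours, not the paper's N-dimensional cross-currency model): if the conditional expectation of the payoff given the first factor is known in closed form, averaging that closed form over simulated values of the first factor removes all variance contributed by the remaining factor:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(2)
    S0, K, r, T = 100.0, 100.0, 0.02, 1.0
    s1, s2 = 0.2, 0.15        # vols attached to two independent factors

    def bs_call(S, K, r, sigma, T):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    n = 100_000
    X = rng.standard_normal(n)
    Y = rng.standard_normal(n)

    # Plain MC over both factors.
    ST = S0 * np.exp((r - 0.5 * (s1**2 + s2**2)) * T
                     + np.sqrt(T) * (s1 * X + s2 * Y))
    plain = np.exp(-r * T) * np.maximum(ST - K, 0.0)

    # Conditional MC: given X, the payoff's expectation is a closed-form
    # Black-Scholes price with an adjusted spot, so Y is never simulated.
    S_eff = S0 * np.exp(s1 * np.sqrt(T) * X - 0.5 * s1**2 * T)
    cond = bs_call(S_eff, K, r, s2, T)

    print("plain MC: %.4f +/- %.4f" % (plain.mean(), 1.96 * plain.std() / np.sqrt(n)))
    print("cond  MC: %.4f +/- %.4f" % (cond.mean(), 1.96 * cond.std() / np.sqrt(n)))

Both estimators are unbiased for the same price, but the conditional one has a visibly smaller confidence interval, which is the variance reduction the abstract refers to.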


Title:Improving a Cross Entropy Approach to Parameter Estimation for ODEs and DDEs
Author:Jonathan Calver
Date: February 2014
Pages: 30
Note: MSc Thesis
File: j_calver_MSc_paper.pdf

Abstract:

We investigate and extend the cross entropy (CE) approach for parameter estimation for ODE and DDE models introduced in [27]. Software is developed to allow models to be easily specified for use with an existing package for working with DDEs. Our software implements CE and is used to explore potential improvements to the method. A 3-stage optimization procedure is presented. First, an inexpensive initial guess for some of the parameters is obtained to reduce the size of the search space. Second, a global search phase is performed to obtain an estimate for the parameters that is ‘close’ to the optimal values. Third, a local optimizer, that can converge super-linearly, is used to obtain the final estimates for the parameters. The performance of this 3-stage procedure is demonstrated by estimating parameters on several test problems from the literature. Additionally, we look at ways to reduce the computational cost of simulating the ODEs and DDEs during the estimation procedure.
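A bare-bones version of the cross-entropy global search phase, applied to a toy ODE fitting problem (we use the ODE's exact solution in place of a numerical solver to keep the sketch short; all sample sizes and tolerances are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy inverse problem: recover (p1, p2) of y' = -p1*y + p2 from noisy
    # observations, using a basic cross-entropy (CE) search.
    def y_exact(t, p1, p2, y0=1.0):
        return (y0 - p2 / p1) * np.exp(-p1 * t) + p2 / p1

    t = np.linspace(0.0, 5.0, 20)
    data = y_exact(t, 1.5, 0.6) + 0.01 * rng.standard_normal(t.size)

    def loss(p):
        return np.sum((y_exact(t, p[0], p[1]) - data) ** 2)

    mu, sigma = np.array([1.0, 1.0]), np.array([2.0, 2.0])
    n_samp, n_elite = 200, 20
    for it in range(40):
        # Sample trial parameters, rank by loss, refit the distribution
        # to the elite samples -- the core CE update.
        P = rng.normal(mu, sigma, size=(n_samp, 2))
        P[:, 0] = np.abs(P[:, 0]) + 1e-6          # keep p1 > 0
        elite = P[np.argsort([loss(p) for p in P])[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
        if sigma.max() < 1e-6:
            break
    print("CE estimate:", mu, " true: [1.5, 0.6]")

In the paper's 3-stage procedure, an estimate like this would then be handed to a super-linearly convergent local optimizer for the final refinement.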


Title:A Survey of Numerical Methods for Lévy Markets
Author:Michael Chiu
Date: August 2014
Pages: 29
Note: M. Eng. Project, MIE Dept., University of Toronto
File: Michael.Chiu.MEng.Project.pdf

Abstract:

The modeling of financial markets by Lévy processes has become an active area of research during recent years. This has motivated at least as much research activity into the necessary numerical methods. Due to the large body of work in this area, we focus our survey on fast numerical methods for Lévy markets. Particular emphasis is placed on grid-based methods.


Title:A Three-Dimensional Approach to Pennation Angle Estimation for Human Skeletal Muscle
Authors:Dongwoon Lee, Zhi Li, Qazi Zain Sohail, Ken Jackson, Eugene Fiume and Anne Agur
Date: March 2014 (earlier draft July 2013)
Pages: 15
Keywords: pennation angle; line of action; ultrasonography; digitization; cadaveric specimen
Note: Published in the Journal of Computer Methods in Biomechanics and Biomedical Engineering
An earlier version of this paper can be found here.
File: Lee.PA.2014.pdf

Abstract:

Pennation angle (PA) is an important property of human skeletal muscle that plays a significant role in determining the force contribution of fascicles to skeletal movement. Two-dimensional (2D) ultrasonography is the most common approach to measure PA. However, in principle, it is challenging to infer knowledge of three-dimensional (3D) architecture from 2D assessment. Furthermore, architectural complexity and variation impose more difficulties on reliable and consistent quantification of PA. Thus, the purpose of our study is to provide accurate insight into the correspondence between 2D assessment and the underlying 3D architecture. To this end, a 3D method was developed to directly quantify PA based on 3D architectural data that were acquired from cadaveric specimens through dissection and digitization. Those data were then assessed two-dimensionally by simulating ultrasound imaging. To achieve consistency over intermuscular variation, our proposed 3D method is based on the geometric analysis of fascicle attachment. Comparative results show a wide range of differences (1.1% - 47.1%) between 2D and 3D measurements. That is, ultrasound can under- or over-estimate PA, depending on architecture.


Title:A Highly Efficient Implementation on GPU Clusters of PDE-Based Pricing Methods for Path-Dependent Foreign Exchange Interest Rate Derivatives
Authors:Duy Minh Dang, Christina Christara and Kenneth R. Jackson
Date: Submitted March 2013, revised April 2013.
Pages: 17
Keywords: Power Reverse Dual Currency (PRDC) Swaps, Bermudan Cancelable, Partial Differential Equation (PDE), Alternating Direction Implicit (ADI), Finite Differences, Graphics Processing Units (GPUs), GPU Clusters, MPI, Parallel Computing
Note: Published in Springer Lecture Notes in Computer Science, Volume 7975, 2013, pp. 107-126, Proceedings of The 13th International Conference on Computational Science and Its Applications (ICCSA 2013), Ho Chi Minh City, Vietnam, 24-27 June 2013.
File: Link to the paper on the SSRN

Abstract:

We present a highly efficient parallelization of the computation of the price of exotic cross-currency interest rate derivatives with path-dependent features via a Partial Differential Equation (PDE) approach. In particular, we focus on the parallel pricing on Graphics Processing Unit (GPU) clusters of long-dated foreign exchange (FX) interest rate derivatives, namely Power-Reverse Dual-Currency (PRDC) swaps with FX Target Redemption (FX-TARN) features under a three-factor model. Challenges in pricing these derivatives via a PDE approach arise from the high-dimensionality of the model PDE, as well as from path-dependency of the FX-TARN feature. The PDE pricing framework for FX-TARN PRDC swaps is based on partitioning the pricing problem into several independent pricing sub-problems over each time period of the swap's tenor structure, with possible communication at the end of the time period. Finite difference methods on non-uniform grids are used for the spatial discretization of the PDE, and the Alternating Direction Implicit (ADI) technique is employed for the time discretization. Our implementation of the pricing procedure on a GPU cluster involves (i) efficiently solving each independent sub-problem on a GPU via a parallelization of the ADI timestepping technique, and (ii) utilizing MPI for the communication between pricing processes at the end of the time period of the swap's tenor structure. Numerical results showing the efficiency of the parallel methods are provided.


Title:New Greedy Heuristics for Approximating Set Cover and Set Packing
Author:David Kordalewski
Date: April 2013
Pages: 58
Note: MSc Thesis, Computer Science Dept., Univ. of Toronto, 2013.
File: Kordalewski.MSc.Thesis.pdf

Abstract:

The Set Cover problem (SCP) and Set Packing problem (SPP) are standard NP-hard combinatorial optimization problems. Their decision problem versions are shown to be NP-Complete in Karp's 1972 paper. We specify a rough guide to constructing approximation heuristics that may have widespread applications and apply it to devise greedy approximation algorithms for SCP and SPP, where the selection heuristic is a variation of that in the standard greedy approximation algorithm. Our technique involves assigning to each input set a valuation and then selecting, in each round, the set whose valuation is highest. We prove that the technique we use for determining a valuation of the input sets yields a unique value for all Set Cover instances. For both SCP and SPP we give experimental evidence that the valuations we specify are unique and can be computed to high precision quickly by an iterative algorithm. Others have experimented with testing the observed approximation ratio of various algorithms over a variety of randomly generated instances, and we have extensive experimental evidence to show the quality of the new algorithm relative to greedy heuristics in common use. Our algorithms are somewhat more computationally intensive than the standard heuristics, though they are still practical for large instances. We discuss some ways to speed up our algorithms that do not significantly distort their effectiveness in practice on random instances.
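For reference, the standard greedy selection rule that the new valuations replace picks, in each round, the set covering the most still-uncovered elements. A sketch of this baseline (the thesis's valuation-based rule is not reproduced here):

    def greedy_set_cover(universe, sets):
        """Standard greedy: repeatedly pick the set covering the most
        still-uncovered elements.  The thesis swaps this selection rule
        for an iteratively computed valuation of each set."""
        uncovered = set(universe)
        chosen = []
        while uncovered:
            best = max(range(len(sets)), key=lambda j: len(sets[j] & uncovered))
            if not sets[best] & uncovered:
                raise ValueError("instance has no cover")
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    U = range(1, 11)
    S = [set(range(1, 6)), set(range(5, 11)), {1, 2}, {3, 4, 9, 10}]
    print(greedy_set_cover(U, S))   # picks set 1, then set 0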


Title:Iterative Reconstruction Algorithms for Polyenergetic X-Ray Computerized Tomography
Author:Nargol Rezvani
Date: August 2012
Pages: 127
Note: PhD Thesis, Computer Science Dept., Univ. of Toronto, 2012.
File: Nargol.Rezvani.PhD.Thesis.pdf

Abstract:

A reconstruction algorithm in computerized tomography is a procedure for reconstructing the attenuation coefficient, a real-valued function associated with the object of interest, from the measured projection data. Generally speaking, reconstruction algorithms in CT fall into two categories: direct, e.g., filtered back-projection (FBP), or iterative. In this thesis, we discuss a new fast matrix-free iterative reconstruction method based on a polyenergetic model.

While most modern x-ray CT scanners rely on the well-known filtered back-projection algorithm, the corresponding reconstructions can be corrupted by beam hardening artifacts. These artifacts arise from the unrealistic physical assumption of monoenergetic x-ray beams. In this thesis, to compensate, we use an alternative model that accounts for differential absorption of polyenergetic x-ray photons and discretize it directly. We do not assume any prior knowledge about the physical properties of the scanned object. We study and implement different solvers and nonlinear unconstrained optimization methods, such as a Newton-like method and an extension of the Levenberg-Marquardt-Fletcher algorithm. We explain how we can use the structure of the Radon matrix and the properties of FBP to make our method matrix-free and fast. Finally, we discuss how we regularize our problem by applying different regularization methods, such as Tikhonov regularization and regularization in the 1-norm. We present numerical reconstructions based on the associated nonlinear discrete formulation incorporating various iterative optimization methods.
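The beam-hardening nonlinearity arises directly from the polyenergetic form of Beer's law: the detector averages exponentials over the source spectrum, so the negative log of the measurement is no longer linear in material thickness. A two-energy toy illustration (all numbers invented):

    import numpy as np

    # Polyenergetic Beer's law: I = sum_E S(E) * exp(-mu(E) * t), versus the
    # monoenergetic model underlying FBP.
    energies = np.array([50.0, 80.0])          # keV (toy 2-bin spectrum)
    spectrum = np.array([0.6, 0.4])            # normalized source weights
    mu = np.array([0.40, 0.20])                # attenuation per cm at each E

    t = np.linspace(0.0, 10.0, 6)              # material thickness (cm)
    I_poly = (spectrum[None, :] * np.exp(-np.outer(t, mu))).sum(axis=1)

    # -log(I) is linear in t only for a monoenergetic beam; the growing gap
    # below is the beam-hardening nonlinearity the thesis models directly.
    mu_eff = spectrum @ mu                     # effective mu as t -> 0
    for ti, Ii in zip(t, I_poly):
        print("t=%4.1f  -log I=%6.3f   mono model=%6.3f"
              % (ti, -np.log(Ii), mu_eff * ti))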


Title:An Efficient Numerical PDE Approach for Pricing Foreign Exchange Interest Rate Hybrid Derivatives
Authors: Duy Minh Dang, Christina Christara, Kenneth R. Jackson and Asif Lakhany
Date: March 2012; revised May 2013
Pages: 36
Keywords: Power-Reverse Dual-Currency (PRDC) swaps, Target Redemption (TARN), knockout, Partial Differential Equation (PDE), finite differences, non-uniform grids.
Note: Accepted for publication by the Journal of Computational Finance, Vol 18, Issue 4, 2015, pp 39-93.
File: Link to the paper on the SSRN

Abstract:

We discuss efficient pricing methods via a Partial Differential Equation (PDE) approach for long dated foreign exchange (FX) interest rate hybrids under a three-factor multi-currency pricing model with FX volatility skew. The emphasis of the paper is on Power-Reverse Dual-Currency (PRDC) swaps with popular exotic features, namely knockout and FX Target Redemption (FX-TARN). Challenges in pricing these derivatives via a PDE approach arise from the high-dimensionality of the model PDE, as well as from the complexities in handling the exotic features, especially in the case of the FX-TARN provision, due to its path-dependency. Our proposed PDE pricing framework for FX-TARN PRDC swaps is based on partitioning the pricing problem into several independent pricing sub-problems over each time period of the swap's tenor structure, with possible communication at the end of the time period. Each of these pricing sub-problems can be viewed as equivalent to a knockout PRDC swap with a known time-dependent barrier, and requires a solution of the model PDE, which, in our case, is a time-dependent parabolic PDE in three space dimensions. Finite difference schemes on non-uniform grids are used for the spatial discretization of the model PDE, and the Alternating Direction Implicit (ADI) timestepping methods are employed for its time discretization. Numerical examples illustrating the convergence properties and efficiency of the numerical methods are provided.


Title:Accurate First-Order Sensitivity Analysis for Delay Differential Equations: Part 1: The Forward Approach
Authors: H. ZivariPiran and W.H. Enright
Date: 2011
Pages: 22
Note: A revised version of this paper will appear in SIAM J. Sci. Comp., 2012.
File: ZE_SISC.pdf

Abstract:

In this paper, we derive an equation governing the dynamics of first-order forward sensitivities for a general system of parametric neutral delay differential equations (NDDEs). We also derive a formula which identifies the size of jumps that appear at discontinuity points when the sensitivity equations are integrated. The formula leads to an algorithm which can compute sensitivities for various types of parameters very accurately and efficiently.


Title:Improved Implementation of Multiple Shooting for BVPs
Authors: Weidong Zhang
Date: January, 2012
Pages: 26
Note: MSc. Thesis, Computer Science Dept., Univ. of Toronto, 2012.
File: Weidong_Zhang_Thesis.pdf

Abstract:

Boundary value problems arise in many applications, and shooting methods are one approach to approximate the solution of such problems. A shooting method transforms a boundary value problem into a sequence of initial value problems, and takes advantage of the speed and adaptivity of initial value problem solvers. The implementation of continuous Runge-Kutta methods with defect control for initial value problems gives efficient and reliable solutions. In this paper, we design and implement a boundary value solver that is based on a shooting method using a continuous Runge-Kutta method to solve the associated initial value problems. Numerical tests on a selection of problems show that this approach achieves better performance than another widely used existing shooting method.
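As a minimal illustration of the shooting idea (single shooting on a scalar BVP, much simpler than the paper's multiple-shooting solver with defect control):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    # Single shooting for the BVP  y'' = -y,  y(0) = 0,  y(pi/2) = 1
    # (exact solution y = sin(t)): guess the missing slope s = y'(0),
    # integrate the resulting IVP, and root-find on the boundary mismatch.
    def boundary_mismatch(s):
        sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2),
                        [0.0, s], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1] - 1.0               # y(pi/2) - 1

    s_star = brentq(boundary_mismatch, 0.0, 5.0)
    print("recovered y'(0) = %.8f (exact 1)" % s_star)

Multiple shooting subdivides the interval and shoots over each piece, matching the pieces with continuity conditions, which improves robustness on longer intervals.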


Title:Reducing the Uncertainty when Approximating the Solution of ODEs
Authors: W.H. Enright
Date: 2012
Pages: 13
Note: A revised version of this paper will appear in proceedings of IFIP WG 2.5 Conference on Uncertainty Quantification, Boulder, August 2011.
File: E_Boulder.pdf

Abstract:

One can reduce the uncertainty in the quality of an approximate solution of an ordinary differential equation (ODE) by implementing methods which have a rigorous error control strategy and which deliver an approximate solution that is much more likely to satisfy the expectations of the user. We have developed such a class of ODE methods as well as a collection of software tools that will deliver a piecewise polynomial as the approximate solution and facilitate the investigation of various aspects of the problem that are often of as much interest as the approximate solution itself. We will introduce measures that can be used to quantify the reliability of an approximate solution and discuss how one can implement methods that, at some extra cost, can produce very reliable approximate solutions and therefore significantly reduce the uncertainty in the computed results.


Title:Superconvergent Interpolants for Collocation Methods applied to Volterra Integro-differential Equations with Delay
Authors: M. Shakourifar and W.H. Enright
Date: 2012
Pages: 16
Note: A revised version of this paper will appear in BIT 2012.
File: SE_BIT.pdf

Abstract:

Standard software based on the collocation method for differential equations delivers a continuous approximation (called the collocation solution) which augments the high order discrete approximate solution that is provided at mesh points. This continuous approximation is less accurate than the discrete approximation. For 'non-standard' Volterra integro-differential equations with constant delay, that often arise in modeling predator-prey systems in Ecology, the collocation solution is $C^0$ continuous. The accuracy is $O(h^{s+1})$ at off-mesh points and $O(h^{2 s})$ at mesh points where $s$ is the number of Gauss points used per subinterval and $h$ refers to the stepsize. We will show how to construct $C^1$ interpolants with an accuracy at off-mesh points and mesh points of the same order ($2 s$). This implies that even for coarse mesh selections we achieve an accurate and smooth approximate solution. Specific schemes are presented for $s = 2, 3$, and numerical results demonstrate the effectiveness of the new interpolants.


Title:Robust Estimation of PCSA and Geometric Reconstruction for Human Skeletal Muscle
Authors: Dongwoon Lee, Kajeandra Ravichandiran, Ken Jackson, Eugene Fiume and Anne Agur
Date: January, 2012
Pages: 10
Keywords: Skeletal muscle; Muscle Architecture; Computational Geometry; Digitization
Note: To appear in the Journal of Biomechanics under a slightly different title: "Robust estimation of physiological cross-sectional area and geometric reconstruction for human skeletal muscle".
File: Lee.JBM.2012.pdf

Abstract:

Understanding muscle architecture is crucial to determining the mechanical function of muscle during body movements, because architectural parameters directly correspond to muscle performance. Accurate parameters are thus essential for reliable simulation. Human cadaveric muscle specimen data provides the anatomical detail needed for in-depth understanding of muscle and accurate parameter estimation. However, as muscle generally has non-uniform architecture, parameter estimation, specifically, physiological cross-sectional area (PCSA), is rarely straightforward. To deal effectively with this non-uniformity, we propose a geometric approach in which a polygon is sought to best approximate the cross-sectional area of each fascicle by accounting for its three-dimensional trajectory and arrangement in the muscle. Those polygons are then aggregated to determine PCSA and volume of muscle. Experiments are run using both synthetic data and muscle specimen data. From comparison of PCSA using synthetic data, we conclude that the proposed method enhances the robustness of PCSA estimation against variation in muscle architecture. Furthermore, we suggest reconstruction methods to extract 3D muscle geometry directly from fascicle data and estimated parameters using the level set method.


Title:Parameter Estimation for ODEs using a Cross-Entropy Approach
Authors: Bo Wang
Date: January, 2012
Pages: 34
Note: MSc. Thesis, Computer Science Dept., Univ. of Toronto, 2012.
File: Bo_Wang_Thesis.pdf

Abstract:

Parameter estimation for ODEs and DDEs is an important topic in numerical analysis. In this paper, we present a novel approach to address this inverse problem. Cross-entropy algorithms are general algorithms that can be applied to solve global optimization problems. The main steps of cross-entropy methods are first to generate a set of trial samples from a certain distribution, then to update the distribution based on these generated trial samples. To overcome the prohibitive computation of standard cross-entropy algorithms, we develop a modification combining local search techniques. The modified cross-entropy algorithm can speed the convergence rate and improve the accuracy simultaneously. Two different coding schemes (continuous coding and discrete coding) are also introduced. Continuous coding uses a truncated multivariate Gaussian to generate trial samples, while discrete coding reduces the search space to a finite (but dense) subset of the feasible parameter values and uses a Bernoulli distribution to generate the trial samples (which are fixed-point approximations of the actual parameters). Extensive numerical and real experiments are conducted to illustrate the power and advantages of the proposed methods. Compared to other existing state-of-the-art approaches on some benchmark problems for parameter estimation, our methods have three main advantages: 1) they are robust to noise in the data to be fitted; 2) they are not sensitive to the number of observation points (in contrast to most existing approaches); 3) the modified versions exhibit faster convergence than the original versions, thus they are more efficient, without sacrificing accuracy.


Title:Modeling Multi-Factor Financial Derivatives by a Partial Differential Equation Approach with Efficient Implementation on Graphics Processing Units
Author:Duy Minh Dang
Date: August 2011
Pages: 229
Note: PhD Thesis, Computer Science Dept., Univ. of Toronto, 2011.
File: Duy_Minh_Dang_PhD_Thesis.pdf

Abstract:

This thesis develops efficient modeling frameworks via a Partial Differential Equation (PDE) approach for multi-factor financial derivatives, with emphasis on three-factor models, and studies highly efficient implementations of the numerical methods on novel high-performance computer architectures, with particular focus on Graphics Processing Units (GPUs) and multi-GPU platforms/clusters of GPUs. Two important classes of multi-factor financial instruments are considered: cross-currency/foreign exchange (FX) interest rate derivatives and multi-asset options.

For cross-currency interest rate derivatives, the focus of the thesis is on Power Reverse Dual Currency (PRDC) swaps with three of the most popular exotic features, namely Bermudan cancelability, knockout, and FX Target Redemption. The modeling of PRDC swaps using one-factor Gaussian models for the domestic and foreign interest short rates, and a one-factor skew model for the spot FX rate results in a time-dependent parabolic PDE in three space dimensions. Our proposed PDE pricing framework is based on partitioning the pricing problem into several independent pricing subproblems over each time period of the swap's tenor structure, with possible communication at the end of the time period. Each of these subproblems requires a solution of the model PDE. We then develop a highly efficient GPU-based parallelization of the Alternating Direction Implicit (ADI) timestepping methods for solving the model PDE. To further handle the substantially increased computational requirements due to the exotic features, we extend the pricing procedures to multi-GPU platforms/clusters of GPUs to solve each of these independent subproblems on a separate GPU. Numerical results indicate that the proposed GPU-based parallel numerical methods are highly efficient and provide a significant increase in performance over CPU-based methods when pricing PRDC swaps. An analysis of the impact of the FX volatility skew on the price of PRDC swaps is provided.

In the second part of the thesis, we develop efficient pricing algorithms for multi-asset options under the Black-Scholes-Merton framework, with strong emphasis on multi-asset American options. Our proposed pricing approach is built upon a combination of (i) a discrete penalty approach for the linear complementarity problem arising due to the free boundary and (ii) a GPU-based parallel ADI Approximate Factorization technique for the solution of the linear algebraic system arising from each penalty iteration. A timestep size selector implemented efficiently on GPUs is used to further increase the efficiency of the methods. We demonstrate the efficiency and accuracy of the proposed GPU-based parallel numerical methods by pricing American options written on three assets.


Title:Reliable Approximate Solution of Systems of Volterra Integro-differential Equations with Time-dependent Delays
Authors: M. Shakourifar and W.H. Enright
Date: 2011
Pages: 24
Note: A revised version of this paper appeared in SIAM J. Sci. Comp. 33, 1134-1158.
File: SE_SISC.pdf

Abstract:

Volterra integro-differential equations with time-dependent delay arguments (DVIDEs) can provide us with realistic models of many real-world phenomena. Delayed Lotka-Volterra predator-prey systems arise in Ecology and are well-known examples of DVIDEs first introduced by Volterra in 1928. We investigate the numerical solution of systems of DVIDEs using an adaptive stepsize selection strategy. We will present a generic variable stepsize approach for solving systems of neutral DVIDEs based on an explicit continuous Runge-Kutta method using defect error control and study the convergence of the resulting numerical method for various kinds of delay arguments. We will show that the global error of the numerical solution can be effectively and reliably controlled by monitoring the size of the defect of the approximate solution and adjusting the stepsize on each step of the integration. Numerical results will be presented to demonstrate the effectiveness of this approach.


Title:Adaptive Time-Stepping for the Strong Numerical Solution of Stochastic Differential Equations
Authors: Silvana Ilie, Kenneth R. Jackson and Wayne H. Enright
Date: January 2014.
Pages: 21
Keywords: stochastic differential equations, adaptive time-stepping, PI control, Milstein method, strong numerical approximation, commutative noise
Note: Submitted to Numerical Algorithms
File: SDE.Adaptive.Timestepping.NA.1.pdf

Abstract:

Models based on stochastic differential equations are of high interest today due to their many important practical applications. Hence there is a need for efficient and accurate numerical methods to approximate their solution. This paper focuses on the strong numerical solution of stochastic differential equations in Itô form, driven by multiple Wiener processes satisfying the commutativity condition. We introduce some adaptive time-stepping methods which allow arbitrary values of the stepsize. The explicit Milstein method is applied to approximate the solution of the problem and the adaptive implementations are based on a measure of the local error. Numerical tests on several models arising in applications show that our adaptive time-stepping schemes perform better than the fixed stepsize and the existing adaptive Brownian tree methods.
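For context, one fixed-step Milstein step for a scalar Itô SDE, the building block that the adaptive schemes wrap with step-size control (geometric Brownian motion here so the result can be checked pathwise against the exact solution):

    import numpy as np

    rng = np.random.default_rng(4)

    # Explicit Milstein step for dX = f(X) dt + g(X) dW, here geometric
    # Brownian motion with drift mu_ and volatility sig.
    mu_, sig = 0.05, 0.2
    f  = lambda x: mu_ * x
    g  = lambda x: sig * x
    gp = lambda x: sig                      # g'(x)

    def milstein_step(x, dt, dW):
        # Euler-Maruyama plus the 0.5*g*g'*(dW^2 - dt) correction,
        # which lifts the strong order from 0.5 to 1.
        return (x + f(x) * dt + g(x) * dW
                + 0.5 * g(x) * gp(x) * (dW**2 - dt))

    T, n, X, W = 1.0, 1000, 1.0, 0.0
    dt = T / n
    for _ in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        X = milstein_step(X, dt, dW)
        W += dW
    print("Milstein X(T) = %.6f   exact = %.6f"
          % (X, np.exp((mu_ - 0.5 * sig**2) * T + sig * W)))

An adaptive implementation would estimate the local error of each step and feed it to a controller (e.g., the PI controller named in the keywords) that grows or shrinks dt, which is the part this sketch omits.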


Title:Calibration of Multi-Period Single-Factor Gaussian Copula Models for CDO Pricing
Author: Max S. Kaznady
Date: April, 2011
Pages: 163
Note: MSc. Thesis, Computer Science Dept., Univ. of Toronto, 2011.
File: Max_S_Kaznady_MSc_thesis_April_2011.pdf

Abstract:

A Collateralized Debt Obligation (CDO) is a multi-name credit derivative, which redistributes the risk of defaults in a collection (also known as the basket or pool) of underlying assets, into fixed income securities, known as the tranches. Each tranche is associated with a certain fraction of first-to-default underlyings. Synthetic CDOs have a pool that consists of Credit Default Swaps (CDSs). If all CDSs have equal notionals, then the pool is termed homogeneous.

Single-period single-factor copula models approximate the probability of underlying defaults using a percentile to percentile transformation, and incorporate the underlying pool correlation structure for multi-name credit derivatives, such as CDOs. Currently, such models are static in time and do not calibrate consistently against market quotes. Recently Jackson, Kreinin and Zhang (JKZ) proposed a discrete-time Multi-period Single-factor Copula Model (MSCM), for which the default correlations are time-independent, allowing the model to systematically fit the market quotes. For homogeneous pools, the JKZ MSCM provides a chaining technique, which avoids expensive Monte Carlo simulation, previously used by other multi-period copula models.

However, even for homogeneous pools, the tree-based example of MSCM presented by JKZ has three drawbacks: derivatives are difficult to obtain for calibration, probabilities of the copula correlation parameter paths do not accurately represent its movements, and the model is not extremely parsimonious.

In this thesis, we develop an improved implementation of MSCM: we use an alternative multi-path parameterization of the copula correlation parameter paths and the corresponding probabilities. This allows us to calculate first-order derivatives for the MSCM in closed form for a reasonable range of parameter values, and to vary the number of parameters used by the model. We also develop and implement a practical error control heuristic for the error in the pool loss probabilities and their derivatives. We develop theoretical error bounds for the pool loss probabilities as well. We also explore a variety of optimization algorithms and demonstrate that the improved MSCM is inexpensive to calibrate. In addition, we show how MSCM calibrates to CDO data for periods before, during and after the September 2008 stock market crash.


Title:An Efficient GPU-Based Parallel Algorithm for Pricing Multi-Asset American Options
Authors:Duy Minh Dang, Christina C. Christara and Kenneth R. Jackson
Date:March 2011
Pages: 14
Keywords: American Option, Multi-Asset, Penalty Method, Alternating Direction Implicit Approximate Factorization (ADI-AF), Graphics Processing Units, GPUs, Parallel Computing, Finite Difference
Note: A revised version of this paper (with a slightly different title) appeared in the Journal of Concurrency and Computation: Practice and Experience (JCCPE) special issue on high-performance computational finance, Vol. 24, No. 8, June 2012, pp. 849-866.
File: Link to the paper on the SSRN

Abstract:

We develop highly-efficient parallel Partial Differential Equation (PDE) based pricing methods on Graphics Processing Units (GPUs) for multi-asset American options. Our pricing approach is built upon a combination of a discrete penalty approach for the linear complementarity problem arising due to the free boundary and a GPU-based parallel Alternating Direction Implicit Approximate Factorization technique with finite differences on uniform grids for the solution of the linear algebraic system arising from each penalty iteration. A timestep size selector implemented efficiently on GPUs is used to further increase the efficiency of the methods. We demonstrate the efficiency and accuracy of the parallel numerical methods by pricing American options written on three assets.


Title:Pricing Multi-Asset American Options on Graphics Processing Units Using a PDE Approach
Authors:Duy Minh Dang, Christina C. Christara and Kenneth R. Jackson
Date:September 2010
Pages: 8
Keywords: American Option, Multi-Asset, Penalty Method, Alternating Direction Implicit Approximate Factorization (ADI-AF), Graphics Processing Units, GPUs, Parallel Computing, Finite Difference
Note: Proceedings of the Third Workshop on High Performance Computational Finance at SC10, The International Conference for High Performance Computing, Networking, Storage, and Analysis, sponsored by ACM/IEEE, New Orleans, USA, 13-19 November 2010.
File: DOI:10.1109/WHPCF.2010.5671831
link to pdf file in ieeexplore

Abstract:

We develop highly efficient parallel pricing methods on Graphics Processing Units (GPUs) for multi-asset American options via a Partial Differential Equation (PDE) approach. The linear complementarity problem arising due to the free boundary is handled by a penalty method. Finite difference methods on uniform grids are considered for the space discretization of the PDE, while classical finite differences, such as Crank-Nicolson, are used for the time discretization. The discrete nonlinear penalized equations at each timestep are solved using a penalty iteration. A GPU-based parallel Alternating Direction Implicit Approximate Factorization technique is employed for the solution of the linear algebraic system arising from each penalty iteration. We demonstrate the efficiency and accuracy of the parallel numerical methods by pricing American options written on three assets.


Title:Energy-Based Error Control Strategies Suitable for Long MD Simulations
Authors: Kante Easley
Date: August, 2010
Pages: 51
Note: MSc. Thesis, Computer Science Dept., Univ. of Toronto, 2010.
File: Kante_Easley_Thesis.pdf

Abstract:

When evaluating integration schemes used in molecular dynamics (MD) simulations, energy conservation is often cited as the primary criterion by which the integrators should be compared. As a result, variable-stepsize Runge-Kutta methods are often ruled out of consideration due to their characteristic energy drift. We have shown that by appropriately modifying the stepsize selection strategy in a variable-stepsize RK method, it is possible for the MD practitioner to obtain substantial control over the energy drift during the course of a simulation. This ability has been previously unreported in the literature, and can be achieved without sacrificing computational efficiency under currently obtainable timescales. We present numerical examples to confirm this.


Title:A PDE pricing framework for cross-currency interest rate derivatives with Target Redemption features
Authors:Christina C. Christara, Duy Minh Dang, Kenneth R. Jackson, and Asif Lakhany
Date:July 2010
Pages: 4
Keywords: Alternating Direction Implicit, ADI, Partial Differential Equation, PDE, finite difference, Power Reverse Dual Currency swap, PRDC, Target Redemption, TARN, auxiliary state variable
Note: To appear in the proceedings of the International Conference of Numerical Analysis and Applied Mathematics 2010 (ICNAAM 2010), Symposium in Computational Finance, Rhodes, Greece, 19-25/09/2010.
File: Link to the paper on the SSRN

Abstract:

We propose a general framework for efficient pricing via a partial differential equation (PDE) approach for exotic cross-currency interest rate (IR) derivatives, with strong emphasis on long-dated foreign exchange (FX) IR hybrids, namely Power Reverse Dual Currency (PRDC) swaps with a FX Target Redemption (FX-TARN) provision. The FX-TARN provision provides a cap on the FX-linked PRDC coupon amounts, and once the accumulated coupon amount reaches this cap, the underlying PRDC swap terminates. Our PDE pricing framework is based on an auxiliary state variable to keep track of the total accumulated PRDC coupon amount. Finite differences on uniform grids and the Alternating Direction Implicit (ADI) method are used for the spatial and time discretizations, respectively, of the model-dependent PDE corresponding to each discretized value of the auxiliary variable. Numerical examples illustrating the convergence properties of the numerical methods are provided.


Title:Numerical Solution of Stochastic Models of Biochemical Kinetics
Author:Silvana Ilie, Wayne H. Enright and Kenneth R. Jackson
Date:Original draft June 2010; revised Aug. 2010
Pages: 20
Keywords: Chemical Master Equation, Chemical Langevin Equation, Gillespie algorithm, tau-leaping, stiff systems, multi-scale simulations, hybrid methods, stochastic biochemical modeling
Note: To appear in the Canadian Applied Mathematics Quarterly (CAMQ), Vol. 17, No. 3, 2009, pp. 523-554. An earlier version of this paper can be found here.
File: NA.Bio.Kin.Survey.rev1.pdf

Abstract:

Cellular processes are typically viewed as systems of chemical reactions. Often such processes involve some species with low population numbers, for which a traditional deterministic model of classical chemical kinetics fails to accurately capture the dynamics of the system. In this case, stochastic models are needed to account for the random fluctuations observed at the level of a single cell. We survey the stochastic models of well-stirred biochemical systems and discuss important recent advances in the development of numerical methods for simulating them. Finally, we identify some key topics for future research.


Title:A Parallel Implementation on GPUs of ADI Finite Difference Methods for Parabolic PDEs with Applications in Finance
Author:Duy Minh Dang, Christina C. Christara and Kenneth R. Jackson
Date:March 2010; revised December 2010
Pages: 21
Keywords: Alternating Direction Implicit, ADI, Partial Differential Equation, PDE, Graphics Processing Units, GPUs, parallel computing, finite difference, multi-asset options, multi-currency swaps
Note: To appear in the Canadian Applied Mathematics Quarterly (CAMQ), Vol. 17, No. 4, 2009, pp. 627-660.
File: Link to the paper on the SSRN

Abstract:

We study a parallel implementation on a Graphics Processing Unit (GPU) of Alternating Direction Implicit (ADI) time-discretization methods for solving time-dependent parabolic Partial Differential Equations (PDEs) in three spatial dimensions with mixed spatial derivatives in a variety of applications in computational finance. Finite differences on uniform grids are used for the spatial discretization of the PDEs. As examples, we apply the GPU-based parallel methods to price European rainbow and European basket options, each written on three assets. Numerical results showing the efficiency of the parallel methods are provided.
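The structure that makes ADI attractive on GPUs is visible even in a serial two-dimensional sketch: each half-step reduces to many independent tridiagonal solves, one per grid line. The sketch below uses the Peaceman-Rachford scheme for the heat equation on the unit square, not the paper's three-dimensional schemes with mixed derivatives:

    import numpy as np
    from scipy.linalg import solve_banded

    # Peaceman-Rachford ADI for u_t = u_xx + u_yy on the unit square with
    # homogeneous Dirichlet BCs.  Each half-step is a batch of independent
    # tridiagonal solves -- the parallelism a GPU implementation exploits.
    m, n_steps, T = 64, 50, 0.1
    h = 1.0 / (m + 1)
    dt = T / n_steps
    x = np.linspace(h, 1 - h, m)
    u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))   # u(x, y, 0)

    r = dt / (2 * h**2)
    ab = np.zeros((3, m))                # banded (I - (dt/2)*D2) for solve_banded
    ab[0, 1:] = -r
    ab[1, :]  = 1 + 2 * r
    ab[2, :-1] = -r

    def half_explicit(v):
        """(I + (dt/2)*D2) applied along the last axis (explicit direction)."""
        out = (1 - 2 * r) * v
        out[:, 1:]  += r * v[:, :-1]
        out[:, :-1] += r * v[:, 1:]
        return out

    for _ in range(n_steps):
        # Implicit in x, explicit in y: one tridiagonal solve per grid column.
        u = solve_banded((1, 1), ab, half_explicit(u))
        # Implicit in y, explicit in x: one tridiagonal solve per grid row.
        u = solve_banded((1, 1), ab, half_explicit(u.T)).T

    exact = np.exp(-2 * np.pi**2 * T) * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
    print("max error:", np.abs(u - exact).max())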


Title:GPU Pricing of Exotic Cross-Currency Interest Rate Derivatives with a Foreign Exchange Volatility Skew Model
Author:Duy Minh Dang, Christina Christara and Kenneth R. Jackson
Date:February 2010
Pages: 15
Keywords: Power Reverse Dual Currency (PRDC) Swaps; Bermudan Cancelable; Partial Differential Equation (PDE); Alternating Direction Implicit (ADI), Finite Differences; Graphics Processing Units (GPUs)
Note: To appear in the Special Issue on "Computational Finance" of the Journal of Concurrency and Computation: Practice and Experience (JCCPE), Volume 26, Issue 9, 2014, pp. 1609-1625.
File: Link to the paper on the SSRN

Abstract:

We present a Graphics Processing Unit (GPU) parallelization of the computation of the price of exotic cross-currency interest rate derivatives via a Partial Differential Equation (PDE) approach. In particular, we focus on the GPU-based parallel pricing of long-dated foreign exchange (FX) interest rate hybrids, namely Power Reverse Dual Currency (PRDC) swaps with Bermudan cancelable features. We consider a three-factor pricing model with FX volatility skew which results in a time-dependent parabolic PDE in three spatial dimensions. Finite difference methods on uniform grids are used for the spatial discretization of the PDE, and the Alternating Direction Implicit (ADI) technique is employed for the time discretization. We then exploit the parallel architectural features of GPUs together with the Compute Unified Device Architecture (CUDA) framework to design and implement an efficient parallel algorithm for pricing PRDC swaps. Over each period of the tenor structure, we divide the pricing of a Bermudan cancelable PRDC swap into two independent pricing subproblems, each of which can efficiently be solved on a GPU via a parallelization of the ADI timestepping technique. Numerical results indicate that GPUs can provide a significant increase in performance over CPUs when pricing PRDC swaps. An analysis of the impact of the FX skew on such derivatives is provided.


Title:Application of Generic Interpolants in the Investigation and Visualization of Approximate Solutions of PDEs on Coarse Unstructured Meshes
Authors: Hassan Goldani Moghaddam
Date: January, 2010
Pages: 140
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2010.
File: Hassan_Goldani_Thesis.pdf

Abstract:

In scientific computing, it is very common to visualize the approximate solution obtained by a numerical PDE solver by drawing surface or contour plots of all or some components of the associated approximate solutions. These plots are used to investigate the behavior of the solution and to display important properties or characteristics of the approximate solutions. In this thesis, we consider techniques for drawing such contour plots for the solution of two and three dimensional PDEs. We first present three fast contouring algorithms in two dimensions over an underlying unstructured mesh. Unlike standard contouring algorithms, our algorithms do not require a fine structured approximation. We assume that the underlying PDE solver generates approximations at some scattered data points in the domain of interest. We then generate a piecewise cubic polynomial interpolant (PCI) which approximates the solution of a PDE at off-mesh points based on the DEI (Differential Equation Interpolant) approach. The DEI approach assumes that accurate approximations to the solution and first-order derivatives exist at a set of discrete mesh points. The extra information required to uniquely define the associated piecewise polynomial is determined based on almost satisfying the PDE at a set of collocation points. In the process of generating contour plots, the PCI is used whenever we need an accurate approximation at a point inside the domain. The direct extension of both the DEI-based interpolant and the contouring algorithm to three dimensions is also investigated.

The DEI-based interpolant we introduce for visualization can also be used to develop effective Adaptive Mesh Refinement (AMR) techniques and global error estimates. In particular, we introduce and investigate four AMR techniques along with a hybrid mesh refinement technique. Our interest is in investigating how well such a 'generic' mesh selection strategy, based on properties of the problem alone, can perform compared with a special-purpose strategy that is designed for a specific PDE method. We also introduce an a posteriori global error estimator by introducing the solution of a companion PDE defined in terms of the associated PCI.


Title:On Computational Methods for the Valuation of Credit Derivatives
Author:Wanhe Zhang
Date:January 2010
Pages: 82
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2010.
File: Wanhe.Zhang.PhD.Thesis.pdf

Abstract:

A credit derivative is a financial instrument whose value depends on the credit risk of an underlying asset or assets. Credit risk is the possibility that the obligor fails to honor any payment obligation. This thesis proposes four new computational methods for the valuation of credit derivatives.

Compared with synthetic collateralized debt obligations (CDOs) or basket default swaps (BDS), the value of which depends on the defaults of a prescribed underlying portfolio, a forward-starting CDO or BDS has a random underlying portfolio, as some "names" may default before the CDO or BDS starts. We develop an approach to convert a forward product to an equivalent standard one. Therefore, we avoid having to consider the default combinations in the period between the start of the forward contract and the start of the associated CDO or BDS. In addition, we propose a hybrid method combining Monte Carlo simulation with an analytical method to obtain an effective method for pricing forward-starting BDS.

Current factor copula models are static and fail to calibrate consistently against market quotes. To overcome this deficiency, we develop a novel chaining technique to build a multi-period factor copula model from several one-period factor copula models. This allows the default correlations to be time-dependent, thereby allowing the model to fit market quotes consistently. Previously developed multi-period factor copula models require multi-dimensional integration, usually computed by Monte Carlo simulation, which makes the calibration extremely time-consuming. Our chaining method, on the other hand, possesses the Markov property. This allows us to compute the portfolio loss distribution of a completely homogeneous pool analytically.

The multi-period factor copula is a discrete-time dynamic model. As a first step towards developing a continuous-time dynamic model, we model the default of an underlying by the first hitting time of a Wiener process, which starts from a random initial state. We find an explicit relation between the default distribution and the initial state distribution of the Wiener process. Furthermore, conditions on the existence of such a relation are discussed. This approach allows us to match market quotes consistently.


Title:A Survey of Modeling and Simulation of Skeletal Muscle
Authors:Dongwoon Lee, Michael Glueck, Azam Khan, Eugene Fiume and Ken Jackson
Date:December 2009
Pages: 13
Keywords: Computer Graphics, Three-Dimensional Graphics, Animation, Physically Based Modeling, Human Modeling, Muscle Physiology, Anatomy, Biomechanics
Note: Submitted to Foundations and Trends in Computer Graphics and Vision
File: MuscleSimulationSurvey.pdf

Abstract:
Muscles perform the physiological functions that drive body movement and anatomically characterize body shape, making them a crucial component of modeling animated human figures. Substantial efforts have been expended on developing computational models of muscles for the purpose of increasing realism and accuracy in a broad range of applications, including computer graphics and biomechanics. We survey various approaches that have been employed to model and simulate muscles both morphologically and functionally. Modeling the realistic morphology of muscle requires that muscle deformations be accurately depicted. To this end, several methodologies have been presented, including geometrically-based, physically-based, and data-driven approaches. On the other hand, the simulation of physiological muscle functions aims to identify the biomechanical controls responsible for realistic human motion. Estimating these muscle controls has been pursued through static and dynamic simulations. We review and discuss all these approaches, and conclude with suggestions for future research.


Title:Adaptive and high-order methods for valuing American options
Authors:Christina Christara and Duy Minh Dang
Date:December 2009
Pages: 25
Keywords: adaptive mesh selection, error equidistribution, quadratic splines, collocation, finite differences, European option, American option, penalty method
Note: A later version of this paper is published in the Journal of Computational Finance, Vol 14, Issue 4, 2011, pages 73-113.
File: ccc/americanOption.pdf

Abstract:
We develop space-time adaptive and high-order methods for valuing American options using a partial differential equation (PDE) approach. The linear complementarity problem arising due to the free boundary is handled by a penalty method. Both finite difference and finite element methods are considered for the space discretization of the PDE, while classical finite differences, such as Crank-Nicolson, are used for the time discretization. The high-order discretization in space is based on an optimal finite element collocation method, the main computational requirements of which are the solution of one tridiagonal linear system at each time step, while the resulting errors at the gridpoints and midpoints of the space partition are fourth-order. To control the space error, we use adaptive gridpoint distribution based on an error equidistribution principle. A time stepsize selector is used to further increase the efficiency of the methods. Numerical examples show that our methods converge fast and provide highly accurate option prices, Greeks, and early exercise boundaries.
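
As an illustration of the error-equidistribution principle behind the adaptive gridpoint distribution, the following Python sketch redistributes a grid so that a given monitor function has equal integral over every subinterval. The monitor function and all parameter values are illustrative assumptions; the paper couples this step with spline collocation and a time stepsize selector, which are not reproduced here.

    # A minimal sketch of mesh redistribution by error equidistribution.
    import numpy as np

    def equidistribute(x, monitor):
        """Return a grid whose points equidistribute the monitor function.

        x       : current grid (increasing 1-D array)
        monitor : positive monitor-function values at the points of x
        """
        # Cumulative integral of the monitor via the trapezoidal rule.
        cum = np.concatenate(([0.0],
                              np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
        # Place the new gridpoints at equal increments of that integral.
        targets = np.linspace(0.0, cum[-1], len(x))
        return np.interp(targets, cum, x)

    # Example: a monitor peaked near x = 0.5 (say, near a strike or a steep
    # gradient) pulls gridpoints toward the peak.
    x = np.linspace(0.0, 1.0, 41)
    M = 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
    x_new = equidistribute(x, M)
    print(np.diff(x_new).min(), np.diff(x_new).max())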


Title:A PDE Pricing Framework for Cross-Currency Interest Rate Derivatives
Author:Duy Minh Dang, Christina Christara, Kenneth R. Jackson and Asif Lakhany
Date:November 2009
Pages: 10
Keywords: Power Reverse Dual Currency Swaps, Bermudan Cancelable, Partial Differential Equation, PDE, Alternating Direction Implicit, ADI, Generalized Minimal Residual, GMRES, Fast Fourier Transform, FFT
Note: Accepted for publication by the "2010 Workshop on Computational Finance and Business Intelligence", International Conference In Computational Science (ICCS), Amsterdam, May 31 to June 2, 2010.
File: Link to the paper on the SSRN

Abstract:
We propose a general framework for efficient pricing via a Partial Differential Equation (PDE) approach of cross-currency interest rate derivatives under the Hull-White model. In particular, we focus on pricing long-dated foreign exchange (FX) interest rate hybrids, namely Power Reverse Dual Currency (PRDC) swaps with Bermudan cancelable features. We formulate the problem in terms of three correlated processes that incorporate FX skew via a local volatility function. This formulation results in a parabolic PDE in three spatial dimensions. Finite difference methods are used for the spatial discretization of the PDE. The Crank-Nicolson method and the Alternating Direction Implicit (ADI) method are considered for the time discretization. In the former case, the preconditioned Generalized Minimal Residual (GMRES) method is employed for the solution of the resulting block banded linear system at each time step, with the preconditioner solved by Fast Fourier Transform (FFT) techniques. Numerical examples illustrating the convergence properties of the methods are provided.


Title:Pricing of Cross-Currency Interest Rate Derivatives on Graphics Processing Units
Author:Duy Minh Dang
Date:October 2009
Pages: 15
Keywords: Power Reverse Dual Currency Swaps, Bermudan Cancelable, Partial Differential Equation, PDE, Alternating Direction Implicit, Finite Differences, Graphics Processing Units, Parallel Computing
Note: Accepted for publication in the Third International Workshop on Parallel and Distributed Computing in Finance, IEEE International Parallel & Distributed Processing Symposium, Atlanta, USA, April 2010.
File: Link to the paper on the SSRN

Abstract:
We present a Graphics Processing Unit (GPU) parallelization of the computation of the price of cross-currency interest rate derivatives via a Partial Differential Equation (PDE) approach. In particular, we focus on the GPU-based parallel computation of the price of long-dated foreign exchange interest rate hybrids, namely Power Reverse Dual Currency (PRDC) swaps with Bermudan cancelable features. We consider a three-factor pricing model with foreign exchange skew which results in a parabolic PDE in three spatial dimensions plus time. Finite difference methods on uniform grids are used for the spatial discretization of the PDE, and the Alternating Direction Implicit technique is employed for the time discretization. We then exploit the parallel architectural features of GPUs together with the Compute Unified Device Architecture (CUDA) framework to design and implement an efficient parallel algorithm for pricing PRDC swaps. Over each period of the tenor structure, we divide the pricing of a Bermudan cancelable PRDC swap into two independent pricing subproblems, each of which can efficiently be solved on a GPU. Using this approach on two NVIDIA Tesla C870 GPUs of an NVIDIA 4-GPU Tesla S870 to price a Bermudan cancelable PRDC swap having a 30 year maturity and annual exchange of fund flows, we have achieved an asymptotic speedup by a factor of 44 relative to a single thread on a 2.0GHz Xeon processor.


Title: A study of a discrete element method based granular dynamics solver
Author: Tina Siu Yee
Date: October 2009
Pages: 43
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2009.
File: ccc/tyee-09-msc.pdf
Keywords: molecular dynamics, sand, powder, particle simulation, ODE solver, Runge-Kutta, friction, dampening, piling, stiction

Abstract: Granular dynamics is the dynamics of a large set of small particles (grains). Convincing simulation of natural granular phenomena (e.g. dust, sand or powders) is a challenging mathematical and computational problem. Our observation is that the more realistically the collection of grains approaches its static state, the more natural the simulation appears. This study focuses on the simulation of sets of grains as the set approaches its static state. The method begins with a discrete element (also referred to as molecular dynamics) model of the inter-particle contacts within the granular collection. Friction and damping terms are added to the basic contact model to facilitate static piling. An examination of the effect of the different contact models on the formation of the final static state and a discussion of the numerical consequences of each model are presented. The discrete element approach proves to be a straightforward and natural way to model many granular behaviors. Its versatility makes it possible to use it to build a general-purpose granular solver.
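
For a flavor of the discrete element contact modeling involved, the Python sketch below computes a linear spring-dashpot normal contact force between two spherical grains. The force law and all constants are generic illustrations, not the specific contact models examined in the thesis; the damping term is what drains kinetic energy so a pile can settle toward a static state.

    # A minimal spring-dashpot contact force between two spherical grains
    # (illustrative parameters; not the thesis's exact contact model).
    import numpy as np

    def contact_force(x1, x2, v1, v2, r1, r2, k=1.0e4, c=5.0):
        """Normal contact force on grain 1 due to grain 2 (zero unless touching)."""
        d = x1 - x2
        dist = np.linalg.norm(d)
        overlap = (r1 + r2) - dist
        if overlap <= 0.0:
            return np.zeros_like(d)
        normal = d / dist                    # unit normal from grain 2 to grain 1
        vn = np.dot(v1 - v2, normal)         # relative normal velocity
        # Elastic repulsion plus viscous damping; the damping removes
        # kinetic energy so that the collection can reach a static state.
        return (k * overlap - c * vn) * normal

    f = contact_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
                      np.zeros(2), np.zeros(2), 0.5, 0.5)
    print(f)        # pushes grain 1 away from grain 2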


Title: Quartic spline collocation for second-order boundary value problems
Authors:Christina C. Christara and Guohong Liu
Date:September 2009
Pages: 8
Keywords: sixth order convergence, quartic splines, second-order BVP, collocation
Note: Proceedings of the 9th HERCMA Conference on Computer Mathematics and Applications, Athens University of Economics and Business, September 23-26, 2009, Athens, Greece, pp. 1-8.
File: ccc/paper-qrtbvp2.pdf

Abstract:
Collocation methods based on quartic splines are presented for second-order two-point boundary value problems. In order to obtain a uniquely solvable linear system for the degrees of freedom of the quartic spline collocation approximation, in addition to the boundary conditions specified by the problem, extra boundary or near-boundary conditions are introduced. Non-optimal (fourth-order) and optimal (sixth-order) quartic-spline methods are considered. The theoretical behavior of the collocation methods is verified by numerical experiments. The extension of the methods to two-dimensional problems is briefly considered.


Title:Option Pricing Using Fourier Space Time-Stepping Framework
Author:Vladimir Surkov
Date:August 2009
Pages: 134
Keywords: Option pricing, Levy processes, partial integro-differential equation, Fourier transform
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2009.
File: Link to the thesis on the SSRN

Abstract:

This thesis develops a generic framework based on the Fourier transform for pricing and hedging of various options in equity, commodity, currency, and insurance markets. The pricing problem can be reduced to solving a partial integro-differential equation (PIDE). The Fourier Space Time-stepping (FST) framework developed in this thesis circumvents the problems associated with the existing finite difference methods by utilizing the Fourier transform to solve the PIDE. The FST framework-based methods are generic, highly efficient and rapidly convergent.

The Fourier transform can be applied to the pricing PIDE to obtain a linear system of ordinary differential equations that can be solved explicitly. Solving the PIDE in Fourier space allows for the integral term to be handled efficiently and avoids the asymmetrical treatment of diffusion and integral terms, common in the finite difference schemes found in the literature. For path-independent options, prices can be obtained for a range of stock prices in one iteration of the algorithm. For exotic, path-dependent options, a time-stepping methodology is developed to handle barriers, free boundaries, and exercise policies.
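
The Python sketch below illustrates this Fourier-space solve for a European call under geometric Brownian motion, the simplest exponential Lévy model, whose characteristic exponent is known in closed form. The grid and domain truncation are illustrative assumptions, and the thesis's careful treatment of boundaries and path dependence is not reproduced.

    # Price a European call in one pass: transform the payoff, multiply by
    # exp(psi * T) in Fourier space, and transform back.
    import numpy as np

    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
    N, x_max = 2**12, 5.0
    x = np.linspace(-x_max, x_max, N, endpoint=False)   # log-moneyness grid
    w = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])  # angular frequencies

    # Characteristic exponent of the risk-neutral log-price under GBM,
    # with the discount rate folded in.
    psi = 1j * (r - 0.5 * sigma**2) * w - 0.5 * (sigma * w)**2 - r

    payoff = np.maximum(S0 * np.exp(x) - K, 0.0)        # call payoff at maturity
    values = np.real(np.fft.ifft(np.fft.fft(payoff) * np.exp(psi * T)))
    print(np.interp(0.0, x, values))   # close to the Black-Scholes value (~10.45)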

The thesis includes applications of the FST framework-based methods to a wide range of option pricing problems. Pricing of single- and multi-asset, European and path-dependent options under independent-increment exponential Levy stock price models, common in equity and insurance markets, can be done efficiently via the cornerstone FST method. Mean-reverting Levy spot price models, common in commodity markets, are handled by introducing a frequency transformation, which can be readily computed via scaling of the option value function. Generating stochastic volatility, to match the long-term equity options market data, and stochastic skew, observed in currency markets, is addressed by introducing a non-stationary extension of multi-dimensional Levy processes using regime-switching. Finally, codependent jumps in multi-asset models are introduced through copulas.

The FST methods are computationally efficient, running in O(M N^d log2 N) time with M time steps and N space points in each dimension on a d-dimensional grid. The methods achieve second-order convergence in space; for American options, a penalty method is used to attain second-order convergence in time. Furthermore, graphics processing units are utilized to further reduce the computational time of FST methods.


Title:An Efficient Unified Approach for the Numerical Solution of Delay Differential Equations
Author:Hossein ZivariPiran and Wayne H. Enright
Date: September 2009
Pages: 20
Keywords: Delay differential equations, Discontinuous ordinary differential equations, Runge-Kutta methods, Linear multistep methods
Note: To appear in Numerical Algorithms
DOI: 10.1007/s11075-009-9331-y
Springer Link to article
File: hzpEnrightNA09Preprint

Abstract:

In this paper we propose a new framework for designing a delay differential equation (DDE) solver which works with any supplied initial value problem (IVP) solver that is based on a standard step-by-step approach, such as Runge-Kutta or linear multistep methods, and can provide dense output. This is done by treating a general DDE as a special example of a discontinuous IVP. Using this interpretation we develop an efficient technique to solve the resulting discontinuous IVP. We also give a clearer description of the numerical techniques used when solving the implicit equations that arise on a time step, such as when the underlying IVP solver is implicit or the delay vanishes.
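
As a toy illustration of this idea only (the paper's framework handles general step-by-step methods, implicit equations, vanishing delays, and discontinuity propagation), the Python sketch below steps a constant-delay DDE with classical RK4, evaluating the delayed argument from stored steps by linear interpolation in place of genuine dense output.

    import numpy as np

    def solve_dde(f, phi, tau, t_end, h):
        """Solve y'(t) = f(t, y(t), y(t - tau)), with y(t) = phi(t) for t <= 0."""
        ts, ys = [0.0], [phi(0.0)]

        def y_delayed(s):                      # crude stand-in for dense output
            return phi(s - tau) if s - tau <= 0.0 else np.interp(s - tau, ts, ys)

        while ts[-1] < t_end - 1e-12:
            t, y = ts[-1], ys[-1]
            g = lambda s, u: f(s, u, y_delayed(s))
            k1 = g(t, y)
            k2 = g(t + h / 2, y + h / 2 * k1)
            k3 = g(t + h / 2, y + h / 2 * k2)
            k4 = g(t + h, y + h * k3)
            ts.append(t + h)
            ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(ts), np.array(ys)

    # Classical test problem: y'(t) = -y(t - 1), y(t) = 1 for t <= 0.
    ts, ys = solve_dde(lambda t, y, yd: -yd, lambda t: 1.0, 1.0, 5.0, 0.01)
    print(ys[-1])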

The new modular design we introduce for the resulting simulator helps to accelerate the adoption of advances in the different components of an effective numerical method. Such components include the underlying discrete formula, the interpolant for dense output, the strategy for handling discontinuities and the iteration scheme for solving any implicit equations that arise.


Title:Efficient Simulation, Accurate Sensitivity Analysis and Reliable Parameter Estimation for Delay Differential Equations
Author:Hossein ZivariPiran
Date: September 2009
Pages: 103
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2009.
File: hzp_phd_thesis

Abstract:

Delay differential equations (DDEs) are a class of differential equations that have received considerable attention recently and have been shown to model many real-life problems, traditionally formulated as systems of ordinary differential equations (ODEs), more naturally and more accurately. Ideally a DDE modeling package should provide facilities for approximating the solution, performing a sensitivity analysis and estimating unknown parameters. In this thesis we propose new techniques for efficient simulation, accurate sensitivity analysis and reliable parameter estimation of DDEs.

We propose a new framework for designing a delay differential equation (DDE) solver which works with any supplied initial value problem (IVP) solver that is based on a general linear method (GLM) and can provide dense output. This is done by treating a general DDE as a special example of a discontinuous IVP. We identify a precise process for the numerical techniques used when solving the implicit equations that arise on a time step, such as when the underlying IVP solver is implicit or the delay vanishes.

We introduce an equation governing the dynamics of sensitivities for the most general system of parametric DDEs. Then, taking the same view as in the simulation (DDEs as discontinuous ODEs), we introduce a formula for finding the size of the jumps that appear at discontinuity points when the sensitivity equations are integrated. This leads to an algorithm which can compute sensitivities for various kinds of parameters very accurately.

We also develop an algorithm for reliable parameter identification of DDEs. We propose a method for adding extra constraints to the optimization problem, changing a possibly non-smooth optimization to a smooth problem. These constraints are effectively handled using information from the simulator and the sensitivity analyzer.

Finally, we discuss the structure of our evolving modeling package DDEM. We present a process that has been used for incorporating existing codes to reduce the implementation time. We discuss the object-oriented paradigm as a way of having a manageable design with reusable and customizable components. The package is programmed in C++ and provides user-friendly calling sequences. The numerical results are very encouraging and show the effectiveness of the techniques.


Title: Quadratic spline collocation for one-dimensional parabolic partial differential equations
Authors: Christina C. Christara, Tong Chen and Dang Duy Minh
Date: 2009
Pages: 43
Note:To appear in Numerical Algorithms.
DOI: 10.1007/s11075-009-9317-9
Journal page: http://www.springerlink.com/content/v1g8565702373446/
Earlier version: http://www.cs.toronto.edu/pub/reports/na/ccc/parab.ps.gz
Keywords: quadratic splines, collocation, parabolic PDEs, Crank-Nicolson, stability, optimal order of convergence, American options.

Abstract:

New methods for solving general linear parabolic partial differential equations (PDEs) in one space dimension are developed. The methods combine quadratic-spline collocation for the space discretization and classical finite differences, such as Crank-Nicolson, for the time discretization. The main computational requirements of the most efficient method are the solution of one tridiagonal linear system at each time step, while the resulting errors at the gridpoints and midpoints of the space partition are fourth order. The stability and convergence properties of some of the new methods are analyzed for a model problem. Numerical results demonstrate the stability and accuracy of the methods. Adaptive mesh techniques are introduced in the space dimension, and the resulting method is applied to the American put option pricing problem, giving very competitive results.
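
To show the computational structure highlighted above, one tridiagonal solve per time step, here is a minimal Crank-Nicolson sketch for the model heat equation. It uses standard second-order finite differences in space as a stand-in for the paper's quadratic-spline collocation, so it reproduces the cost structure, not the fourth-order accuracy.

    # Crank-Nicolson for u_t = u_xx on [0,1], u(0,t) = u(1,t) = 0:
    # one tridiagonal (banded) solve per time step.
    import numpy as np
    from scipy.linalg import solve_banded

    n, steps, T = 100, 100, 0.1
    h, dt = 1.0 / n, T / 100
    x = np.linspace(0.0, 1.0, n + 1)
    u = np.sin(np.pi * x)                     # initial condition

    r = dt / (2.0 * h * h)
    ab = np.zeros((3, n - 1))                 # banded storage of I - r*D2
    ab[0, 1:] = -r
    ab[1, :] = 1.0 + 2.0 * r
    ab[2, :-1] = -r

    for _ in range(steps):
        rhs = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        u[1:-1] = solve_banded((1, 1), ab, rhs)

    # Exact solution is exp(-pi^2 t) sin(pi x); compare at the midpoint.
    print(u[n // 2], np.exp(-np.pi**2 * T))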


Title:The Reliability/Cost Trade-off for a Class of ODE Solvers
Authors: W.H. Enright and L. Yan
Date: 2009
Pages: 22
Note: To appear in Numerical Algorithms
DOI: 10.1007/s11075-009-9288-x
Springer Link to article
File: enl09.pdf

Abstract:

In the numerical solution of ODEs, it is now possible to develop efficient techniques that will deliver approximate solutions that are piecewise polynomials. The resulting methods can be designed so that the piecewise polynomial will satisfy a perturbed ODE with an associated defect (or residual) that is directly controlled in a consistent fashion. We will investigate the reliability/cost trade-off that one faces when implementing and using such methods, when the methods are based on an underlying discrete Runge-Kutta formula.

In particular we will identify a new class of continuous Runge-Kutta methods with a very reliable defect estimator and a validity check that reflects the credibility of the estimate. We will introduce different measures of the "reliability" of an approximate solution that are based on the accuracy of the approximate solution; the maximum magnitude of the defect of the approximate solution; and how well the method is able to estimate the maximum magnitude of the defect of the approximate solution. We will also consider how methods can be implemented to detect and cope with special difficulties such as the effect of round-off error (on a single step) or the ability of a method to estimate the magnitude of the defect when the stepsize is large (as might happen when using a high-order method at relaxed accuracy requests).

Numerical results on a wide selection of problems will be summarized for methods of orders five, six and eight. It will be shown that a modest increase in the cost per step can lead to a significant improvement in the quality of the approximate solutions and the reliability of the method. For example, the numerical results demonstrate that, if one is willing to increase the cost per step by 50%, then a method can deliver approximate solutions where the reported estimated maximum defect is within 1% of its true value on 95% of the steps.


Title:Analytic Dynamic Factor Copula Models
Authors:Ken Jackson, Alex Kreinin and Wanhe Zhang
Date:Revised November 2011
Pages: 23
Note: To appear in the book Credit Portfolio Securitizations and Derivatives to be published by Wiley. Earlier versions of this paper can be found here and here.
File: Analytic.Dynamic.Factor.Copula.Models.2011a.pdf

Abstract:

The Gaussian factor copula model is the market standard model for multi-name credit derivatives. Its main drawback is that factor copula models exhibit correlation smiles when calibrated against market tranche quotes. To overcome the calibration deficiency, we introduce a multi-period factor copula model by chaining one-period factor copula models. The correlation coefficients in our model are allowed to be time-dependent, and hence to follow certain stochastic processes. Therefore, we can calibrate against market quotes more consistently. Usually, multi-period factor copula models require multi-dimensional integration, typically computed by Monte Carlo simulation, which makes calibration extremely time-consuming. In our model, the portfolio loss of a completely homogeneous pool possesses the Markov property, thus we can compute the portfolio loss distribution analytically without multi-dimensional integration. Numerical results demonstrate the efficiency and flexibility of our model to match market quotes.


Title:On the Blunting Method in the Verified Integration of ODEs
Authors:N. S. Nedialkov, K. R. Jackson and M. Neher
Date:July 2015; revised May 2016
Pages: 20
Note: Published in the Journal of Reliable Computing. The journal version can be found here. Earlier versions of this paper can be found here and here.
File: ned_jac_neh_blunting_2016.pdf

Abstract:

Verified methods for the integration of initial value problems (IVPs) for ODEs aim at computing guaranteed error bounds for the flow of an ODE while maintaining a low level of overestimation. This paper is concerned with one of the sources of overestimation: a matrix-vector product describing a parallelepiped in phase space.

We analyze the blunting method developed by Berz and Makino, which consists of a special choice of the matrix in this product. For the linear model problem
      u′ = Au,   u(0) = u0,
where u ∈ R^m, A ∈ R^(m×m), m ≥ 2, and u0 is a given interval vector, we compare the convergence behavior of the blunting method with that of the well-known QR interval method. In both methods, the amount of overestimation of the flow of the initial set depends on the spectral radius of some well-defined matrices. We show that under certain conditions, the spectral radii of the matrices that describe the excess propagation in the QR method and in the blunting method, respectively, have the same limits, so that the excess propagation in both methods is similar.


Title:New Interpolants for Asymptotically Correct Defect Control of BVODEs
Authors: W.H. Enright and P.H. Muir
Date: 2009
Pages: 20
Note: To appear in Numerical Algorithms
DOI: 10.1007/s11075-009-9266-3
Springer Link to article
File: em09.pdf

Abstract:

The defect of a continuous approximate solution to an ODE is the amount by which that approximation fails to satisfy the ODE. A number of studies have explored the use of {\it asymptotically correct} defect estimates in the numerical solution of initial value ODEs (IVODEs). By employing an appropriately constructed interpolant to an approximate discrete solution to the ODE, various researchers have shown that it is possible to obtain estimates of the local error and/or the maximum defect that are asymptotically correct on each step, as the stepsize $h \rightarrow 0$.
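
The Python sketch below shows the quantity in question: it takes one explicit Runge-Kutta step, builds a cubic Hermite interpolant over the step, and samples the defect delta(t) = u'(t) - f(t, u(t)). The ODE, method, and interpolant are illustrative only; they are not the specially constructed interpolants with asymptotically correct defect estimates studied in the paper.

    import numpy as np

    f = lambda t, y: -2.0 * y + np.exp(-t)          # a simple scalar ODE
    t0, y0, h = 0.0, 1.0, 0.2
    # One explicit midpoint (RK2) step to get y1, then slopes at both ends.
    y1 = y0 + h * f(t0 + h / 2, y0 + h / 2 * f(t0, y0))
    f0, f1 = f(t0, y0), f(t0 + h, y1)

    def u(t):                                        # cubic Hermite interpolant
        s = (t - t0) / h
        h00, h10 = 2*s**3 - 3*s**2 + 1, s**3 - 2*s**2 + s
        h01, h11 = -2*s**3 + 3*s**2, s**3 - s**2
        return h00*y0 + h*h10*f0 + h01*y1 + h*h11*f1

    def u_prime(t):
        s = (t - t0) / h
        return ((6*s**2 - 6*s)*y0 + h*(3*s**2 - 4*s + 1)*f0
                + (-6*s**2 + 6*s)*y1 + h*(3*s**2 - 2*s)*f1) / h

    ts = t0 + h * np.linspace(0.0, 1.0, 11)
    defect = np.array([u_prime(t) - f(t, u(t)) for t in ts])
    print(np.abs(defect).max())                      # maximum sampled defect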

In this paper, we investigate the usefulness of asymptotically correct defect estimates for defect control in boundary value ODE (BVODE) codes. In the BVODE context, for a sequence of meshes which partition the problem interval, one computes a discrete numerical solution, constructs an interpolant, and estimates the maximum defect. The estimates (typically obtained by sampling the defect at a small number of points on each subinterval of the mesh) are used in a redistribution process to determine the next mesh and thus the availability of these more reliable maximum defect estimates can lead to improved meshes. As well, when such estimates are available, the code can terminate with more confidence that the defect is bounded throughout the problem domain by the user-prescribed tolerance. In this paper we employ a boot-strapping approach to derive interpolants that allow asymptotically correct defect estimates. Numerical results are included to demonstrate the validity of this approach.


Title:Quartic spline collocation for fourth-order boundary value problems
Authors: Christina C. Christara, Ying Zhu and Jingrui Zhang
Date:September 2008
Pages: 6
Note: Proceedings of the 2008 Numerical Analysis conference, September 1-5, 2008, Kalamata, Greece, pp. 62-67.
File: ccc/paper-qrtbvp4.pdf

Abstract:
We consider the numerical solution of linear fourth-order boundary value problems (BVPs) in one and two dimensions, by methods based on quartic splines and the collocation methodology. The discretization error is sixth order on the gridpoints and midpoints of a uniform partition and fifth order globally in the uniform norm. For the linear systems arising from the discretization of certain biharmonic problems by quartic spline collocation, we develop fast solvers based on Fourier transforms, leading to asymptotically almost optimal solution techniques.


Title: Multigrid and Spline Collocation Methods on Non-uniform Grids
Author: Guoyu Wang
Date: September 2008
Pages: 98
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2008.
File: Please contact ccc SYMBOL_AT cs DOT toronto DOT edu
Keywords: quadratic splines, cubic splines, fourth order of convergence, collocation, multigrid, extension operator, restriction operator, preconditioning

Abstract: This thesis describes numerical methods for the solution of one-dimensional second-order differential equations on non-uniform grids. The methods combine the multigrid and spline collocation methodologies. Multigrid methods are presented. Appropriate restriction and extension operators are developed for quadratic and cubic splines on non-uniform grids. Multigrid methods are applied to quadratic and cubic spline collocation equations arising from the discretization of one-dimensional second-order differential equations. The convergence analysis of a two-grid method integrated with a damped Richardson relaxation scheme as the smoother shows that the error is reduced by a factor smaller than $\frac{1}{2}$ per iteration for a model second-order equation with homogeneous Dirichlet boundary conditions. An adaptive technique for estimating the minimal acceptable coarse grid size is proposed. Numerical experiments using full multigrid methods are performed on various one-dimensional second-order differential equations discretized on non-uniform grids, and the conditions under which they are convergent and efficient are studied. The numerical results confirm our analysis.
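
The Python sketch below gives a compact two-grid correction scheme with damped Richardson smoothing for the model problem -u'' = f with homogeneous Dirichlet boundary conditions. It uses a uniform grid and standard finite differences as a stand-in for the thesis's spline-collocation setting on non-uniform grids; the printed error drops rapidly to the level of the discretization error.

    import numpy as np

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
        return r

    def smooth(u, f, h, omega=0.5, sweeps=3):
        # Damped Richardson relaxation, scaled so that the highest-frequency
        # error mode is damped strongly.
        for _ in range(sweeps):
            u = u + omega * (h**2 / 2.0) * residual(u, f, h)
        return u

    def two_grid(u, f, h):
        u = smooth(u, f, h)                       # pre-smoothing
        r = residual(u, f, h)
        # Restriction by full weighting, exact coarse solve, linear prolongation.
        rc = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        nc, hc = (len(u) - 1) // 2, 2.0 * h
        Ac = (2.0 * np.eye(nc - 1) - np.eye(nc - 1, k=1) - np.eye(nc - 1, k=-1)) / hc**2
        ec = np.concatenate(([0.0], np.linalg.solve(Ac, rc), [0.0]))
        e = np.zeros_like(u)
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return smooth(u + e, f, h)                # post-smoothing

    n = 64
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)              # exact solution: sin(pi x)
    u = np.zeros(n + 1)
    for k in range(8):
        u = two_grid(u, f, h)
        print(k, np.abs(u - np.sin(np.pi * x)).max())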


Title:Randomization in the First Hitting Time Problem
Authors:Ken Jackson, Alex Kreinin and Wanhe Zhang
Date:February 2009
Pages: 10
Note: A slightly revised version of this paper will appear in Statistics and Probability Letters.
An earlier version of this paper, published in Feb. 2008 and entitled "Randomization in the Default Boundary Problem", is available from http://www.defaultrisk.com/ and as First.Hitting.Time.2008.pdf.
File: First.Hitting.Time.2009.pdf

Abstract:

In this paper we consider the following inverse problem for the first hitting time distribution: given a Wiener process with a random initial state, a probability distribution F(t), and a linear boundary b(t) = mu t, find a distribution of the initial state such that the distribution of the first hitting time is F(t). This problem has important applications in credit risk modeling, where the process represents the so-called distance to default of an obligor, the first hitting time represents a default event, and the boundary separates the healthy states of the obligor from the default state. We show that randomization of the initial state of the process makes the problem analytically tractable.
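
As a hedged Monte Carlo illustration of the forward map in this problem (the paper solves the inverse problem analytically), the Python sketch below samples the initial state from a candidate distribution, simulates the Wiener process, and records the first hitting times of the boundary b(t) = mu t. All distributions and parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, T, n_steps, n_paths = 0.1, 10.0, 1000, 5000
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)

    # Candidate initial-state distribution (positive: the obligor starts
    # in the healthy region above the boundary).
    X0 = rng.gamma(4.0, 0.5, size=n_paths)
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    X = np.concatenate((X0[:, None], X0[:, None] + W), axis=1)

    hit = X <= mu * t                         # crossing the boundary b(t) = mu*t
    first = np.argmax(hit, axis=1) * dt       # time of the first crossing
    defaulted = hit.any(axis=1)
    tau = first[defaulted]
    # Empirical hitting-time distribution; the inverse problem asks which
    # law of X0 makes this match a target F(t).
    print(defaulted.mean(), np.quantile(tau, [0.25, 0.5, 0.75]))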


Title: Bi-quartic Spline Collocation Methods for Fourth-order Boundary Value Problems with an Application to the Biharmonic Dirichlet Problem
Author: Jingrui Zhang
Date: January 2008
Pages: 160
Note:Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2008.
File: ccc/jingrui-08-phd.ps.gz
Keywords: sixth order of convergence, quartic splines, collocation, fast Fourier Transform (FFT), GMRES, preconditioning

Abstract: We present optimal bi-quartic spline collocation methods for the numerical solution of fourth-order boundary value problems on rectangular domains, and apply a particular instance of these methods to the biharmonic Dirichlet problem.

The methods are based on quartic splines and the collocation discretization methodology with midpoints of a uniform partition as the collocation points. While the standard quartic spline method usually provides second-order approximations, the two bi-quartic spline collocation methods developed in this thesis are observed to produce approximations which are of sixth order at grid points and midpoints, and of fifth order at other points. Both are based on high order perturbations of the differential and boundary conditions operators. The one-step (extrapolated) method forces the approximations to satisfy a perturbed problem, while the three-step (deferred-correction) method proceeds in three steps and perturbs the right sides of the linear equations appropriately.

The three-step bi-quartic spline collocation method is then applied to the biharmonic Dirichlet problem and a fast solver for the resulting linear systems is developed. The fast solver consists of solving an auxiliary biharmonic problem with Dirichlet and second derivative boundary conditions along the two opposite boundaries, and a one-dimensional problem related to the two opposite boundaries. The linear system arising from the bi-quartic spline collocation discretization of the auxiliary problem is solvable by fast Fourier transforms. The problem related to the two opposite boundaries is solved by preconditioned GMRES (PGMRES). We present a detailed analysis of the performance of PGMRES by studying bounds for the eigenvalues of the preconditioned matrix. We show that the number of PGMRES iterations is independent of the grid size $n$. As a result, the complexity of the fast solver is $O(n^2\log(n))$. Numerical experiments from a variety of problems, including practical applications and problems more general than the analysis assumes, verify the accuracy of the discretization scheme and the effectiveness of the fast solver.


Title:Fast Valuation of Forward Starting Basket Default Swaps
Authors: Ken Jackson, Alex Kreinin and Wanhe Zhang
Date: December 2007
Pages: 20
Note: Also available from http://www.defaultrisk.com/
A slightly revised version of this paper is published in the International Journal of Theoretical and Applied Finance, Vol. 13, No. 2 (2010), pp. 195-209.
File: fwdbds.2007a.pdf

Abstract:

A basket default swap (BDS) is a credit derivative with contingent payments that are triggered by a combination of default events of the reference entities. A forward-starting basket default swap (FBDS) is a BDS starting at a specified future time. Existing analytic or semi-analytic methods for pricing FBDS are time-consuming due to the large number of possible default combinations before the BDS starts. This paper develops a fast approximation method for FBDS based on the conditional independence framework. The method converts the pricing of an FBDS to an equivalent BDS pricing problem and combines Monte Carlo simulation with an analytic approach to achieve an effective method. This hybrid method is a novel technique which can be viewed either as a means to accelerate the convergence of Monte Carlo simulation or as a way to estimate parameters in an analytic method that are difficult to compute directly. Numerical results demonstrate the accuracy and efficiency of the proposed hybrid method.


Title:Analysis of the Blunting Anti-Wrapping Strategy
Authors:Kenneth R. Jackson, Ned S. Nedialkov and Markus Neher
Date: October 2007
Pages: 2
Note: This paper was presented at ICIAM 2007, Zurich, Switzerland, July 2007. The proceedings will be published by Proceedings in Applied Mathematics and Mechanics.
File: Blunting.iciam07.pdf

Abstract:

Interval methods for ODEs often face two obstacles in practical computations: the dependency problem and the wrapping effect. Taylor model methods, which have been developed by Berz and his group, have recently attracted attention. By combining interval arithmetic with symbolic calculations, these methods suffer far less from the dependency problem than traditional interval methods for ODEs. By allowing nonconvex enclosure sets for the flow of a given initial value problem, Taylor model methods have also a high potential for suppressing the wrapping effect.

Makino and Berz advocate the so-called blunting method. In this paper, we analyze the blunting method (as an interval method) for a linear model ODE. We compare its convergence behavior with that of the well-known QR interval method.


Title:Parallel Option Pricing With Fourier Space Time-Stepping Method on Graphics Processing Units
Author:Vladimir Surkov
Date: October 2007
Pages: 7
Note:Accepted by the First International Workshop on Parallel and Distributed Computing in Finance 2008.
The link below is to a copy of the paper on the Social Science Research Network (SSRN).
File: Link to the paper on the SSRN

Abstract:

With the evolution of Graphics Processing Units (GPUs) into powerful and cost-efficient computing architectures, their range of application has expanded tremendously, especially in the area of computational finance. Current research in the area, however, is limited in terms of the types of options priced and the complexity of the stock price models used. This paper presents algorithms, based on the Fourier Space Time-stepping (FST) method, for pricing single and multi-asset European and American options with Levy underliers on a GPU. Furthermore, the single-asset pricing algorithm is parallelized to attain greater efficiency.


Title:Numerical Methods for the Valuation of Synthetic Collateralized Debt Obligations
Author:Xiaofang Ma
Date: September 2007
Pages: 82
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2007.
File: ma-07-phd.pdf

Abstract:

A Collateralized Debt Obligation (CDO) is a credit derivative that creates fixed income securities, which are known as tranches. A CDO is called a synthetic CDO if the risky assets in the underlying pool are credit default swaps. An essential part of the valuation of a synthetic CDO tranche is how to estimate accurately and efficiently the expected value of the tranche loss function. It is known that the expected value of a function of one random variable is completely determined by the distribution of the random variable and the function itself. A standard approach to estimate the expected value of a function of one random variable is to estimate the distribution of the underlying random variable, the pool loss in our case, and then to evaluate the expected value of the given function, the tranche loss function for our problem. Following this approach, we introduce three methods for estimating the distribution of the pool loss: a stable recursive method for computing the distribution of the pool loss exactly, an improved compound Poisson approximation method and a normal power approximation method for approximating the distribution of the pool loss.

We also develop a new method that focuses on the tranche loss function directly. The tranche loss function is expressed simply in terms of two basis functions. Each of the two basis functions is a transformation of the hockey stick function h(x), where h(x) = 1 - x if 0 ≤ x < 1 and 0 if x ≥ 1. By approximating the hockey stick function by a sum of exponentials, the tranche loss function is approximated by a sum of exponentials. The main advantage of this method is that the distribution of the pool loss need not be estimated. A crucial part of this new method is the determination of the coefficients of an exponential approximation to the hockey stick function. We discuss both the numerical method for computing the exponential approximation to the hockey stick function and the theoretical properties of the approximation.
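
For concreteness, one way to write the tranche loss function, with attachment a and detachment d, in terms of two transformations of the hockey stick function is checked numerically in the Python sketch below; the thesis's exact choice of basis functions may differ.

    import numpy as np

    def h(x):                                  # the hockey stick function
        return np.where(x < 1.0, 1.0 - x, 0.0)

    def tranche_loss(x, a, d):                 # min(max(x - a, 0), d - a)
        return np.minimum(np.maximum(x - a, 0.0), d - a)

    a, d = 0.03, 0.07                          # e.g. a 3%-7% tranche
    x = np.linspace(0.0, 0.2, 2001)            # pool loss, as a fraction
    identity = a * h(x / a) - d * h(x / d) + (d - a)
    print(np.abs(identity - tranche_loss(x, a, d)).max())   # ~ 0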

Performance comparisons of the four new methods developed in this thesis and other standard methods for synthetic CDO valuation are presented.


Title: Adaptive Finite Difference Methods for Valuing American Options
Author: Duy Minh Dang
Date: September 2007
Pages: 128
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2007.
File: dmdang-07-msc.pdf
Keywords: adaptive mesh selection, monitor function, finite differences, Crank-Nicolson, European option, American option, penalty method, projected SOR

Abstract: We develop space-time adaptive methods for valuing American options with strong emphasis on American put options. We examine the application of adaptive techniques to the Black-Scholes partial differential equation problem associated with an American put option in the context of non-uniform second-order finite differences. At certain timesteps, we obtain a redistribution of the spatial points based on a monitor function that attempts to equidistribute the error. The proposed finite difference discretization on non-uniform grids and redistribution of the spatial points lead to linear complementarity problems with $M$-matrices. The Projected Successive Over-relaxation and a penalty method are considered to handle the free boundaries. We study and compare the accuracy and efficiency of the considered methods. A complete proof of convergence and uniqueness of the projected SOR method under certain conditions is also presented.
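
The Python sketch below shows a bare-bones penalty iteration for the American put on a uniform grid with implicit Euler time stepping (the thesis uses Crank-Nicolson on adaptively redistributed non-uniform grids). At each time step it solves (A + P)V = b + Pg, where g is the payoff and P is a large penalty on nodes currently violating V >= g, repeating until the active set settles. All parameter values are illustrative.

    import numpy as np

    K, r, sigma, T = 100.0, 0.05, 0.3, 1.0
    n, steps, rho = 200, 100, 1.0e6
    S = np.linspace(0.0, 300.0, n + 1)
    dS, dt = S[1] - S[0], T / steps
    payoff = np.maximum(K - S, 0.0)

    # Interior Black-Scholes operator, discretized with central differences.
    Si = S[1:-1]
    alpha = 0.5 * sigma**2 * Si**2 / dS**2 - 0.5 * r * Si / dS
    beta = -sigma**2 * Si**2 / dS**2 - r
    gamma = 0.5 * sigma**2 * Si**2 / dS**2 + 0.5 * r * Si / dS
    idx = np.arange(n - 1)
    L = np.zeros((n - 1, n - 1))
    L[idx, idx] = beta
    L[idx[1:], idx[:-1]] = alpha[1:]
    L[idx[:-1], idx[1:]] = gamma[:-1]
    A = np.eye(n - 1) - dt * L                # implicit Euler matrix

    V = payoff.copy()
    g = payoff[1:-1]
    for _ in range(steps):
        b = V[1:-1].copy()
        b[0] += dt * alpha[0] * K             # boundary: V(0, t) = K for a put
        Vi = V[1:-1].copy()
        for _ in range(50):                   # penalty iterations
            pen = rho * (Vi < g)
            Vi_new = np.linalg.solve(A + np.diag(pen), b + pen * g)
            done = np.max(np.abs(Vi_new - Vi)) < 1e-8
            Vi = Vi_new
            if done:
                break
        V[1:-1], V[0], V[-1] = Vi, K, 0.0

    print(np.interp(100.0, S, V))             # approximate American put value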


Title: Quartic Spline Collocation Methods For Second-Order Two-Point Boundary Value ODE Problems
Author: Guohong Liu
Date: August 2007
Pages: 130
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2007.
File: ghliu-07-msc.ps.gz
Keywords: sixth order of convergence, quartic splines, collocation, boundary conditions

Abstract: Collocation methods based on quartic splines are presented for second-order two-point boundary value problems. In addition to the boundary conditions specified by the problem, extra boundary conditions are introduced in order to uniquely determine the quartic spline approximation. The standard quartic spline collocation method gives fourth-order error bounds. Two optimal methods, namely the extrapolated (one-step) and the deferred-correction (two-step) methods, are formulated based on appropriate extra boundary conditions and an appropriate perturbation of the operators of the differential equation, boundary conditions and extra boundary conditions. The convergence analysis and error bounds are developed using a Green's function approach. The analysis shows that the maximum discrete error is of sixth order, and the maximum global error is of fifth order for the optimal methods.

The properties of the matrices arising from the optimal methods for a certain class of problems are studied. Non-optimal collocation methods based on different extra boundary conditions are also investigated. Problems with layers are also considered, and a grid refinement technique is presented. The theoretical behavior of the collocation methods is verified by numerical results.


Title:A High-Performance Method for the Biharmonic Dirichlet Problem on Rectangles
Authors:Christina C. Christara and Jingrui Zhang
Date: July 2007
Pages: 3
Note: Proceedings of the 2007 International Conference On Preconditioning Techniques for Large Sparse Matrix Problems in Scientific and Industrial Applications, July 9-12, 2007 Météopole, Toulouse, France, pp. 35-37.
File: Please contact ccc SYMBOL_AT cs DOT toronto DOT edu

Abstract:
We propose a fast solver for the linear system resulting from the application of a sixth-order Bi-Quartic Spline Collocation method to the biharmonic Dirichlet problem on a rectangle. The fast solver is based on Fast Fourier Transforms (FFTs) and preconditioned GMRES (PGMRES) and has complexity $O(n^2\log(n))$ on an $n \times n$ uniform partition. The FFTs are applied to two auxiliary problems with different boundary conditions on the two vertical sides of the domain, while the PGMRES is applied to a problem related to the two vertical boundaries. We show that the number of PGMRES iterations required to reduce the relative residual to a certain tolerance is independent of the gridsize $n$. Numerical experiments verify the effectiveness of the solver.


Title:Robust and Reliable Defect Control for Runge-Kutta Methods
Author:Li Yan
Date: July 2007
Pages: 179
File: LiYan-07-msc.ps.gz

Abstract:

The use of defect error control for a class of continuous Runge-Kutta methods for solving nonstiff problems has been studied over the last two decades. Recently a class of such methods, with an associated defect that can be reliably controlled, has been proposed (Enright, 2007). This approach is improved in this thesis by increasing the degree of the local interpolant by one and eliminating one of the terms contributing to the leading coefficient in the asymptotic expansion of the defect. We demonstrate this new, improved, robust and reliable defect-control approach on three continuous Runge-Kutta methods of order 5, 6, and 8. We also plot the shape of the defect curves and compare them with previous work. Numerical results on the 25 test problems of DETEST at a wide range of accuracy requests are presented as well.


Title:Option Pricing with Regime Switching Lévy Processes Using Fourier Space Time Stepping
Authors:Kenneth R. Jackson, Sebastian Jaimungal and Vladimir Surkov
Date: September 2007 (earlier draft July 2007)
Pages: 6
Note: Appears in the proceedings of The Fourth IASTED International Conference on Financial Engineering and Applications (FEA 2007)
File: option_pricing_regime_switching_levy_fst_07.pdf

Abstract:

Although jump-diffusion and Lévy models have been widely used in industry, the resulting pricing partial integro-differential equation poses various difficulties for valuation. Diverse finite-difference schemes for solving the problem have been introduced in the literature. Invariably, the integral and diffusive terms are treated asymmetrically, large jumps are truncated and the methods are difficult to extend to higher dimensions. We present a new efficient transform approach for regime-switching Lévy models which is applicable to a wide class of path-dependent options (such as Bermudan, barrier, and shout options) and options on multiple assets.


Title:Efficient Contouring of PDEs on Unstructured Meshes
Authors:H.G. Moghaddam and W.H. Enright
Date: 2007
Pages: 7
Note: ACM Trans. on Math. Soft., 34, Article 19, 25 pages.
File: enm07.pdf

Abstract:

We introduce three fast contouring algorithms for visualizing the solution of Partial Differential Equations based on the PCI (Pure Cubic Interpolant). The PCI is a particular piecewise bicubic polynomial interpolant defined over an unstructured mesh. Unlike standard contouring approaches, our contouring algorithms do not need a fine structured approximation and work efficiently with the original scattered data. The basic idea is to first identify the intersection points between contour curves and the sides of each triangle and then draw smooth contour segments connecting these points. We compare these contouring algorithms with the built-in Matlab procedure and with other contouring algorithms. We demonstrate that our algorithms are both more accurate and faster than the others.
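
The Python sketch below shows the elementary step such algorithms build on: locating where a contour level crosses the edges of one triangle by inverse linear interpolation of the vertex values. The paper's algorithms go well beyond this, using the PCI to trace smooth, accurate contour segments between such points.

    import numpy as np

    def edge_crossings(verts, vals, c):
        """Points where the level c crosses the edges of one triangle.

        verts : (3, 2) array of vertex coordinates
        vals  : (3,) array of the field's values at the vertices
        """
        pts = []
        for i, j in ((0, 1), (1, 2), (2, 0)):
            vi, vj = vals[i], vals[j]
            if (vi - c) * (vj - c) < 0.0:          # the edge straddles level c
                s = (c - vi) / (vj - vi)           # inverse linear interpolation
                pts.append((1.0 - s) * verts[i] + s * verts[j])
        return np.array(pts)       # ties at vertices need extra care in practice

    tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    vals = np.array([0.0, 1.0, 2.0])
    print(edge_crossings(tri, vals, 0.5))          # two crossing points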


Title: Physics-Based Simulations for Fluid Mixtures
Author: Dongwoon Lee
Date: March 2007
Pages: 58
Note:Research Paper, Computer Science Department, University of Toronto.
File:DWLee.Research.Paper.2007.pdf

Abstract:

We develop a physics-based particle method to simulate various fluid mixture effects. A particle method is chosen over a grid-based method because we believe a particle method is more intuitive and better suited for handling physical and chemical interactions between fluids. Since a particle carries the physical attributes of the fluid (e.g., density and velocity), transfer of these attributes between particles can be computed easily using a particle method. Also, arbitrary shapes of the fluid mixture can be modeled based on the particles' spatial configuration, which is updated throughout the simulation.


Title: Valuation of Forward Starting Basket Default Swaps
Author: Wanhe Zhang
Date: April 2007
Pages: 45
Note:Research Paper, Computer Science Department, University of Toronto.
File:fwdbds.2007.pdf

Abstract:

A basket default swap (BDS) is a credit derivative with contingent payments that are triggered by a combination of default events of the reference entities. A forward-starting BDS is a basket default swap starting at a specified future time. In this paper, we study valuation methods for a forward-starting BDS. We begin by reviewing the popular factor copula model. The widely used Monte Carlo method and associated variance reduction techniques are surveyed. An analytical solution based on a recursive algorithm is developed. Conditional on a specified common factor, we explore the possible combinations of defaults during the life of the forward contract; under each scenario, we evaluate the objective functions; we finish the valuation by computing the expectations of these objective functions. The possible combinations of defaults result in a large combinatorial problem. In order to overcome the inefficiency of the method outlined above, a more practical approximation method that omits or interpolates the unimportant scenarios is proposed. Numerical results compare the accuracy and performance of these methods and illustrate the effect of contract parameters.


Title: Fourier Space Time-stepping for Option Pricing with Lévy Models
Authors: Kenneth R. Jackson, Sebastian Jaimungal and Vladimir Surkov
Date: March 2007
Pages: 32
Note:Submitted to the Journal of Computational Finance.
The link below is to a copy of the paper on the Social Science Research Network (SSRN).
File: Link to the paper on the SSRN

Abstract:
Jump-diffusion and Lévy models have been widely used to partially alleviate some of the biases inherent in the classical Black-Scholes-Merton model. Unfortunately, the resulting pricing problem requires solving a more difficult partial integro-differential equation (PIDE) and although several approaches for solving the PIDE have been suggested in the literature, none are entirely satisfactory. All treat the integral and diffusive terms asymmetrically and are difficult to extend to higher dimensions. We present a new, efficient algorithm, based on transform methods, which symmetrically treats the diffusive and integral terms, is applicable to a wide class of path-dependent options (such as Bermudan, barrier, and shout options) and options on multiple assets, and naturally extends to regime-switching Lévy models.


Title: Valuation of Forward Starting CDOs
Authors: Ken Jackson and Wanhe Zhang
Date: February 2007
Pages: 15
Note:A slightly revised version of this paper will appear in the International Journal of Computer Mathematics
File:fwdcdo.2007.pdf

Abstract:
A forward starting CDO is a single tranche CDO with a specified premium starting at a specified future time. Pricing and hedging forward starting CDOs has become an active research topic. We present a method for pricing a forward starting CDO by converting it to an equivalent synthetic CDO. The value of the forward starting CDO can then be computed by the well developed methods for pricing the equivalent synthetic one. We illustrate our method using the industry-standard Gaussian-factor-copula model. Numerical results demonstrate the accuracy and efficiency of our method.


Title: Loss Distribution Evaluation for Synthetic CDOs
Authors: Ken Jackson, Alex Kreinin and Xiaofang Ma
Date: February 2007
Pages: 26
File:JKM.paper1.pdf

Abstract:
Efficient numerical methods for evaluating the loss distributions of synthetic CDOs are important for both pricing and risk management purposes. In this paper we discuss several methods for loss distribution evaluations. We first develop a stable recursive method. Then the improved compound Poisson approximations proposed by Hipp are introduced. Finally, the normal power approximation method that has been used in actuarial science is described. The recursive method computes the loss distribution exactly, whereas the other two methods compute the loss distribution approximately. Numerical results based on these three and some known methods for synthetic CDO pricing are given.
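
As a simple stand-in for the recursive method (the paper's version handles unequal losses and addresses numerical stability), the Python sketch below computes the conditional pool loss distribution exactly for unit losses and conditionally independent defaults by adding obligors one at a time.

    import numpy as np

    def pool_loss_distribution(p):
        """P(L = k), k = 0..n, for independent defaults with probabilities p."""
        dist = np.array([1.0])                    # empty pool: loss 0 w.p. 1
        for pi in p:
            new = np.zeros(len(dist) + 1)
            new[:-1] += (1.0 - pi) * dist         # obligor survives
            new[1:] += pi * dist                  # obligor defaults (+1 loss unit)
            dist = new
        return dist

    p = np.full(100, 0.02)                        # 100 names, 2% conditional PD
    dist = pool_loss_distribution(p)
    print(dist.sum(), dist.argmax())              # sums to 1; mode near 2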


Title: On Convergence of Numerical Methods for Pricing Convertible Bonds
Authors: Dongyi (Elena) Li
Date: January 2007
Pages: 144
File:dongyi-07-msc.pdf

Abstract:

In this thesis two frameworks are considered to value convertible bonds with credit risk: the TF model (Tsiveriotis and Fernandes) and the AFV model (Ayache, Forsyth and Vetzal). Both models are associated with a pair of coupled partial differential equations (PDEs) with free boundary constraints. In practice, the projected overrelaxation method and the penalty method are widely used to solve such free-boundary value problems, and the Crank-Nicolson time-stepping scheme is adopted as the underlying discretization method to achieve second-order (quadratic) accuracy. However, due to the complexity of the PDEs in these two models and the discontinuities often present in practice in the boundary conditions, only linear convergence is observed in many cases. The objective of this thesis is to investigate the difficulties related to convergence and stability when solving these coupled PDEs with the Crank-Nicolson scheme, and to suggest some modifications of the basic schemes to improve stability and restore quadratic convergence.


Title: An Exponential Approximation to the Hockey Stick Function
Authors: Ian Iscoe, Ken Jackson, Alex Kreinin and Xiaofang Ma
Date: March 2010 (replaces an earlier draft of January 2007)
Pages: 24
Note:Submitted to the Journal of Applied Numerical Mathematics.
An earlier version of this paper can be found here.
File:IJKM.paper2.revised.pdf

Abstract:

The hockey stick (HS) function plays an important role in pricing and risk management of many financial derivatives. This paper considers approximating the HS function by a sum of exponentials. This enables the efficient computation of an approximation to the expected value of the HS function applied to a sum of conditionally independent nonnegative random variables, a task that arises in pricing many financial derivatives, CDOs in particular. The algorithm proposed by Beylkin and Monzón is used to determine the parameters of the exponential approximation to the hockey stick function. Theoretical properties of the approximation are studied and numerical results are presented.
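
The Python sketch below uses a crude fixed-exponent least-squares fit as a stand-in for the Beylkin and Monzón construction (which chooses both the exponents and the weights near-optimally), and then shows the payoff of the exponential form: for a sum of independent losses, the expectation of the approximated hockey stick factorizes into per-name terms, so no pool loss distribution is needed. All parameter values are illustrative.

    import numpy as np

    # Fit h(x) ~ sum_j c_j * exp(-lam_j * x) on [0, X] by least squares,
    # with a fixed ladder of exponents (a crude stand-in for Beylkin-Monzon).
    X = 3.0
    lam = np.geomspace(0.1, 100.0, 25)
    x = np.linspace(0.0, X, 4001)
    h = np.where(x < 1.0, 1.0 - x, 0.0)
    B = np.exp(-np.outer(x, lam))
    c, *_ = np.linalg.lstsq(B, h, rcond=None)
    print(np.abs(B @ c - h).max())            # uniform error of the fit

    # E[h(L/scale)] for L a sum of independent unit losses l_i ~ Bernoulli(p_i):
    # E[exp(-t L)] factorizes, so no loss distribution is required.
    p = np.full(50, 0.02)
    scale = 30.0
    mgf = np.prod(1.0 - p + p * np.exp(-lam[:, None] / scale), axis=1)
    approx = c @ mgf

    # Check against the exact unit-loss distribution via convolution.
    dist = np.array([1.0])
    for pi in p:
        dist = np.convolve(dist, [1.0 - pi, pi])
    k = np.arange(len(dist))
    exact = np.sum(dist * np.where(k / scale < 1.0, 1.0 - k / scale, 0.0))
    print(approx, exact)                      # agree to about the fit error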


Title:Pricing Synthetic CDOs based on Exponential Approximations to the Payoff Function
Authors: Ian Iscoe, Ken Jackson, Alex Kreinin and Xiaofang Ma
Date: April 2011 (replaces an earlier draft of January 2007)
Pages: 28
Note:Submitted to the Journal of Computational Finance.
An earlier version of this paper can be found here.
File:IJKM.paper3.revised.pdf

Abstract:

Correlation-dependent derivatives, such as Asset-Backed Securities (ABS) and Collateralized Debt Obligations (CDO), are common tools for offsetting credit risk. Factor models in the conditional independence framework are widely used in practice to capture the correlated default events of the underlying obligors. An essential part of these models is the accurate and efficient evaluation of the expected loss of the specified tranche, conditional on a given value of a systematic factor (or values of a set of systematic factors). Unlike other papers that focus on how to evaluate the loss distribution of the underlying pool, in this paper we focus on the tranche loss function itself. It is approximated by a sum of exponentials so that the conditional expectation can be evaluated in closed form without having to evaluate the pool loss distribution. As an example, we apply this approach to synthetic CDO pricing. Numerical results show that it is efficient.


Title:A Scattered Data Interpolant for the Solution of Three-Dimensional PDEs
Authors:H.G. Moghaddam and W.H. Enright
Date: 2006
Pages: 7
Note: Proceedings of ECCOMAS CFD 2006, TU Delft, The Netherlands, September 2006.
File: enm06.pdf

Abstract:

Using a Differential Equation Interpolant (DEI), one can accurately approximate the solution of a Partial Differential Equation (PDE) at off-mesh points. The idea is to allocate a multi-variate polynomial to each mesh element; consequently, the collection of such polynomials over all mesh elements defines a piecewise polynomial approximation. In this paper we will investigate such interpolants on a three-dimensional unstructured mesh. As reported elsewhere, for a tetrahedral mesh in three dimensions, tensor product tri-quadratic and pure tri-quadratic interpolants are the most appropriate candidates. We will report on the effectiveness of these alternatives on some typical PDEs.


Title:Robust Defect Control for RK Methods Using Efficiently Computed Optimal-order Interpolants
Authors:W.H. Enright and W.B. Hayes
Date: 2006
Pages: 19
Note: ACM Trans. on Math. Soft., 33, pp. 1-19.
File: enh06.ps.gz

Abstract:

The quest for reliable integration of initial value problems (IVPs) for ordinary differential equations (ODEs) is a long-standing problem in numerical analysis. At one end of the reliability spectrum are fixed stepsize methods implemented using standard floating point, where the onus lies entirely with the user to ensure the stepsize chosen is adequate for the desired accuracy. At the other end of the reliability spectrum are rigorous interval-based methods that can provide provably correct bounds on the error of a numerical solution. This rigour comes at a price, however: interval methods are generally 2-3 orders of magnitude more expensive than fixed stepsize floating-point methods. Along the spectrum between these two extremes lie various methods of different expense that estimate and control some measure of the local errors and adjust the stepsize accordingly. In this paper, we continue previous investigations into a class of interpolants for use in Runge-Kutta methods that have a defect function whose qualitative behaviour is asymptotically independent of the problem being integrated. In particular the point, in a step, where the maximum defect occurs as $h\rightarrow 0$ is known a priori. This property allows the defect to be monitored and controlled in an efficient and robust manner even for modestly large stepsizes. Our interpolants also have a defect with the highest possible order given the constraints imposed by the order of the underlying discrete formula. We demonstrate the approach on three Runge-Kutta methods of order 5, 6, and 8, and provide Fortran and preliminary Matlab interfaces to these three new integrators. We also consider how sensitive such methods are to roundoff errors. Numerical results for four problems on a range of accuracy requests are presented.


Title: Verifying Approximate Solutions to Differential Equations
Authors:W.H. Enright
Date: 2006
Pages: 9
Note: Journal of Computational and Applied Mathematics (JCAM), 185, pp. 203-211.
File: enr06.ps.gz

Abstract:

It is now standard practice in computational science for large scale simulations to be implemented and investigated in a problem solving environment (PSE) such as MATLAB or MAPLE. In such an environment, a scientist or engineer will formulate a mathematical model, approximate its solution using an appropriate numerical method, visualize the approximate solution and verify (or validate) the quality of the approximate solution. Traditionally we have been most concerned with the development of effective numerical software for generating the approximate solution and several efficient and reliable numerical libraries are now available for use within the most widely used PSEs. On the other hand, the visualization and verification tasks have received little attention, even though each often requires as much computational effort as is involved in generating the approximate solution. In this paper we will investigate the effectiveness of a suite of tools that we have recently introduced in the MATLAB PSE to verify approximate solutions of ordinary differential equations. We will use the notion of `effectivity index', widely used by researchers in the adaptive mesh PDE community, to quantify the credibility of our verification tools. Numerical examples will be presented to illustrate the effectiveness of these tools when applied to a standard numerical method on two model test problems.
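
One of the simplest such verification tools is to re-solve at a sharper tolerance and treat the difference as an error estimate; the effectivity index is then the ratio of the estimated to the true error, with values near 1 indicating a credible estimate. A minimal sketch in SciPy (the paper's tools live in the MATLAB PSE):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model IVP y' = -y, y(0) = 1, whose exact solution is known for comparison.
sol_crude = solve_ivp(lambda t, y: -y, (0, 5), [1.0], rtol=1e-4, dense_output=True)
sol_fine  = solve_ivp(lambda t, y: -y, (0, 5), [1.0], rtol=1e-10, dense_output=True)

t = np.linspace(0, 5, 200)
est_err  = np.abs(sol_crude.sol(t) - sol_fine.sol(t)).max()   # error estimate
true_err = np.abs(sol_crude.sol(t) - np.exp(-t)).max()        # exact global error
print("effectivity index:", est_err / true_err)               # ~1 is credible
```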


Title: The Parallel Solution of ABD Systems Arising in Numerical Methods for BVPs for ODEs
Author: Richard N. Pancer
Date: August 2006
Pages: 330
Note:Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2006.
File:pancer-06-phd.pdf

Abstract:

Many numerical algorithms for the solution of Boundary Value Problems (BVPs) for Ordinary Differential Equations (ODEs) contain significant components that can be parallelized easily, and recent investigations have shown that substantial speedups are attainable on machines with a modest number of processors. An important step in most of these algorithms – and often the most computationally intensive step – is the numerical solution of an Almost Block Diagonal (ABD) system of linear algebraic equations. The parallelization of this step is not so straightforward, as certain characteristics of the problem make it difficult to apply standard divide-and-conquer techniques in order to arrive at a stable parallel algorithm. In the past, several algorithms have been proposed for the parallel solution of the ABD system, but many are potentially unstable or offer only limited parallelism. The proper treatment of the ABD system has proven to be a challenge in the design of parallel BVP codes.

In this thesis we present three parallel algorithms for the numerical solution of ABD systems. A parallel algorithm for this problem can be up to M / log M times faster than the fastest sequential algorithm, where the fastest sequential algorithm requires M steps. Each parallel algorithm we present attains this theoretically optimal speedup if enough processors are available, and each can be adapted for use on architectures with fewer than the required number of processors. Two of the algorithms, SLF-QR and SLF-LU, were discovered independently by us and by S.J. Wright in the 1990s. Wright presented these algorithms and analyzed their stability in two papers, proving that SLF-QR is stable and showing that SLF-LU is stable under certain assumptions. We provide some additional insight into the stability of SLF-LU, and extend the basic algorithms to make better use of available processors during the factorization stage in order to increase parallelism in the solution stage.

The third algorithm we propose, RSCALE, is based on a notably different numerical technique: eigenvalue rescaling. RSCALE uses fewer local operations and produces less fill-in than either of the other two algorithms. In addition, RSCALE is proven to be stable for a reasonable class of problems, and has been observed to be stable for a wide class of problems through extensive numerical testing.

RSCALE is approximately 2.2 times faster than SLF-QR. The efficiency of SLF-LU is dependent on its solution strategy and its speed can vary from problem to problem, but for most problems RSCALE is approximately 1.2 times faster than SLF-LU. Moreover, we show that a variant of SLF-LU is potentially unstable on a surprising number of randomly-generated, yet well-posed, linear problems, as well as on certain nonlinear problems commonly used to test BVP codes. Since these problems pose no difficulty for either of the other two algorithms, we believe that SLF-QR, not SLF-LU, may be RSCALE's nearest competitor in terms of both speed and reliability.
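
For readers unfamiliar with the structure, the following sketch (with hypothetical sizes) assembles a small ABD system of the kind described above and solves it densely as a reference; the thesis's algorithms replace this sequential elimination with numerically stable parallel reductions:

```python
import numpy as np

# Shape of an Almost Block Diagonal (ABD) system as it arises from multiple
# shooting or collocation for a BVP: interior block rows [S_i  T_i] couple
# neighbouring unknown vectors, closed by separated boundary-condition rows.
# Sizes are hypothetical: block size m = 2, one condition at each boundary.
rng = np.random.default_rng(0)
m, n = 2, 5                            # block size, number of subintervals
N = (n + 1) * m                        # total number of unknowns
A = np.zeros((N, N))
A[0, :m] = rng.standard_normal(m)      # left boundary condition row
A[-1, -m:] = rng.standard_normal(m)    # right boundary condition row
for i in range(n):                     # interior block row groups
    r = 1 + i * m
    A[r:r + m, i * m:(i + 2) * m] = rng.standard_normal((m, 2 * m))
b = rng.standard_normal(N)

# Dense reference solve; the thesis replaces this O(N) sequential elimination
# with stable O(log N) parallel reductions (SLF-QR, SLF-LU, RSCALE).
x = np.linalg.solve(A, b)
print(np.linalg.norm(A @ x - b))
```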


Title: Optimal Quadratic and Cubic Spline Collocation on Nonuniform Partitions
Authors: Christina C. Christara and Kit Sun Ng
Date: 2006
Pages: 31
Note:Appeared in Computing, 76:3-4, 2006, pp. 227-257.
File:SCnonuni.pdf, DOI
Keywords: spline collocation; second-order two-point boundary value problem; error bounds; optimal order of convergence; spline interpolation.

Abstract:

We develop optimal quadratic and cubic spline collocation methods for solving linear second-order two-point boundary value problems on non-uniform partitions. To develop optimal non-uniform partition methods, we use a mapping function from uniform to non-uniform partitions and develop expansions of the error at the non-uniform collocation points of some appropriately defined spline interpolants. The existence and uniqueness of the spline collocation approximations are shown, under some conditions. Optimal global and local orders of convergence of the spline collocation approximations and derivatives are derived, similar to those of the respective methods for uniform partitions. Numerical results on a variety of problems, including a boundary layer problem and a non-linear problem, verify the optimal convergence of the methods, even under more relaxed conditions than those assumed by theory.
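
The mapping-function idea can be illustrated in a few lines: a smooth map applied to a uniform partition produces a non-uniform partition graded toward a presumed boundary layer. The tanh grading below is one illustrative choice, not necessarily the mapping used in the paper:

```python
import numpy as np

def mapping(xi, alpha=4.0):
    """Map uniform xi in [0,1] to a partition clustered near x = 1,
    where a boundary layer is assumed; alpha controls the grading."""
    return np.tanh(alpha * xi) / np.tanh(alpha)

xi = np.linspace(0.0, 1.0, 11)      # uniform partition
x = mapping(xi)                     # non-uniform collocation partition
print(np.round(x, 4))               # spacing shrinks toward x = 1
```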


Title: Adaptive Techniques for Spline Collocation
Authors: Christina C. Christara and Kit Sun Ng
Date: 2006
Pages: 19
Note:Appeared in Computing, 76:3-4, 2006, pp. 259-277.
File:adapt.pdf, DOI
Keywords: spline collocation; second-order two-point boundary value problem; error bounds; optimal order of convergence; adaptive grid; grid size estimator; error estimator; spline interpolation.

Abstract:

We integrate optimal quadratic and cubic spline collocation methods for second-order two-point boundary value problems with adaptive grid techniques, and grid size and error estimators. Some adaptive grid techniques are based on the construction of a mapping function that maps uniform to non-uniform points, placed appropriately to minimize a certain norm of the error. One adaptive grid technique for cubic spline collocation is mapping-free and resembles the technique used in COLSYS (COLNEW). Numerical results on a variety of problems, including problems with boundary or interior layers, and singular perturbation problems indicate that, for most problems, the cubic spline collocation method requires less computational effort for the same error tolerance, and has equally reliable error estimators, when compared to Hermite piecewise cubic collocation. Comparison results with quadratic spline collocation are also presented.


Title: Pricing Convertible Bonds with Dividend Protection subject to Credit Risk Using a Numerical PDE Approach
Author: Qingkai Mo
Date: April 2006
Pages: 92
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2006.
File:ccc/moqk-06-msc.pdf
Keywords: convertible bond, dividend protection, credit risk, Conversion Ratio Adjustment, Dividend Pass-Thru, finite differences, Crank-Nicolson, Projected Successive Over-Relaxation (PSOR), penalty method.

Abstract: We develop a pricing model for convertible bonds with dividend protection subject to credit risk by extending the models developed by Tsiveriotis and Fernandes (TF), and by Ayache, Forsyth and Vetzal (AFV). We consider two techniques to incorporate the dividend protection feature: Conversion Ratio Adjustment and Dividend Pass-Thru. We apply finite difference methods to discretize the PDEs associated with our models, and study the Projected Successive Over-Relaxation (PSOR) and penalty methods to handle the free boundaries. We compare these two methods in terms of convergence rate, number of iterations per time step and computation time for pricing convertible bonds without dividends. Finally, we apply the penalty method, the better of the two methods, to solve the systems arising from our models for convertible bonds with dividend protection. We examine the convergence rates and discuss the difference between the results from the extended TF and AFV models, with both dividend protection techniques.
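
As a concrete reference point for one of the two methods compared, here is a minimal dense sketch of Projected SOR for the linear complementarity problem $x \ge \psi$, $Ax \ge b$, $(x-\psi)^T(Ax-b)=0$; a production pricer would exploit the tridiagonal structure of the time-stepping matrices:

```python
import numpy as np

def psor(A, b, psi, omega=1.2, tol=1e-10, max_iter=10_000):
    """Projected SOR for the LCP: x >= psi, A x >= b, (x - psi)'(A x - b) = 0."""
    x = psi.astype(float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))  # SOR step, project
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Tiny synthetic check: the unconstrained solution already satisfies x >= 0,
# so PSOR should reproduce A^{-1} b = [1, 1].
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
print(psor(A, b, psi=np.zeros(2)))
```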


Title:The PCI: A Scattered Data Interpolant for the Solution of Partial Differential Equations
Authors:H.G. Moghaddam and W.H. Enright
Date: 2005
Pages: 10
Note: Proceedings of Adaptive Modeling and Simulation 2005, (P. Diez and N.E. Wiberg eds.), pp. 243-252, Barcelona.
File: enhg05.ps.gz

Abstract:

Most Partial Differential Equations (PDEs) that arise in practical applications do not have a closed form solution. In these cases, numerical methods can be used to approximate the solution at a discrete set of mesh points in the domain associated with the problem definition. These approximations usually provide more accuracy at the mesh points than is available (using piecewise linear approximation, for example) at off-mesh points. Although many numerical methods have the capability of fixing the location of some or all of the mesh points, this can significantly affect the efficiency and accuracy of the computed solution. Since the best distribution of mesh points is usually non-uniform, when one wants to visualize the solution (one of the most common post-processing tasks), some extra data must be generated. Enright introduced the Differential Equation Interpolant (DEI) which approximates the solution of a PDE such that the approximations at off-mesh points have the same order of accuracy as those at mesh points.

For a two dimensional problem, the DEI is a piecewise bivariate polynomial interpolant. The number of unknown coefficients is determined by the degree and type of the interpolant (tensor product or pure) and might be greater than the number of independent linear constraints defined by the information provided by the PDE solver. Consequently, we might require extra independent constraints to uniquely determine the associated piecewise polynomial. With the DEI approach these extra constraints are based on almost satisfying the PDE at a prescribed set of 'collocation' points. The DEI generates more accurate but less smooth global approximations.

In the case that the underlying numerical method produces approximations on an unstructured mesh, the DEI can be considered to be a scattered data interpolant. Ramos and Enright introduced the Alternate Differential Equation Interpolant (ADEI), a piecewise polynomial interpolant associated with a scattered data set. Instead of choosing random points or fixed points, they developed an iterative algorithm to find suitable collocation points based on the magnitude of the coefficients of the resulting interpolant. Although the ADEI provides a relatively smooth surface, it can still suffer from the appearance of discontinuities. Moreover, finding a suitable set of collocation points may take a long time. We introduce the PCI to overcome these difficulties.


Title: Tools for the Verification of Approximate Solutions to Differential Equations
Authors:W.H. Enright
Date: 2005
Pages: 11
Note: in Handbook for Scientific Computation, (B. Einarsson ed.), SIAM Press, pp. 109-119.
File: enr05c.ps.gz

Abstract:

Scientific computation is often carried out in general purpose problem solving environments (PSEs) such as those associated with large scale, high performance simulations. When practitioners work in these environments it is essential that they have access to state-of-the-art numerical software and tools to visualize approximate solutions produced by such software. It is equally essential (although not yet standard practice) to have available tools that can be used to verify (or validate) that the approximate solution is consistent with the true solution of the underlying mathematical model. In this presentation we will identify a suite of tools that are being developed to address this need for the case where the underlying model is a system of ODEs or PDEs. The resulting suite of tools can be used to improve the practitioner's confidence in the results and allow them to quantify the inherent reliability/cost trade-offs that arise. The collection of generic tools is suitable for use with any numerical method and covers a range of cost/complexity. We will assume that the approximate solution is provided as a piecewise (possibly multivariate) polynomial on an arbitrary mesh. The tools we develop perform such tasks as measuring the size of the associated defect, estimating the global error, estimating the condition number and checking that a different method generates a consistent approximate solution. The application of these tools to two typical simulations is presented.


Title: GIA-induced secular variations in the Earth's long wavelength gravity field: Influence of 3-D viscosity variations (short communication),
Authors: Konstantin Latychev, Jerry X. Mitrovica, Mark E. Tamisiea, Jeroen Tromp, Christina C. Christara and Robert Moucha
Date: November 2005
Pages: 6
Note:Published in Earth and Planetary Science Letters, Vol. 240, Issue 2, Pages 322-327
File:DOI link , journal page
Keywords: geopotential harmonics, glacial isostatic adjustment, 3-D structure

Abstract:

Predictions of present day secular variations in the Earth's long wavelength geopotential driven by glacial isostatic adjustment (GIA) have previously been analyzed to infer the radial profile of mantle viscosity and to constrain ongoing cryospheric mass balance. These predictions have been based on spherically symmetric Earth models. We explore the impact of lateral variations in mantle viscosity using a new finite-volume formulation for computing the response of 3-D Maxwell viscoelastic Earth models. The geometry of the viscosity field is constrained from seismic tomographic images of mantle structure, while the amplitude of the lateral viscosity variations is tuned by a free parameter in the modeling. We focus on the zonal J_l harmonics for degrees l = 2, ..., 8 and demonstrate that large-scale lateral viscosity variations of two to three orders of magnitude have a modest, 5-10%, impact on predictions of J_2. In contrast, predictions of higher degree harmonics show a much greater sensitivity to lateral variation in viscosity structure. We conclude that future analyses of secular trends (for degree l > 2) estimated from ongoing (GRACE, CHAMP) satellite missions must incorporate GIA predictions based on 3-D viscoelastic Earth models.


Title: An Efficient Algorithm Based on Quadratic Spline Collocation and Finite Difference Methods for Parabolic Partial Differential Equations
Author: Tong Chen
Date: September 2005
Pages: 92
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:ccc/tchen-05-msc.pdf
Keywords: semi-explicit semi-implicit method, optimal order of convergence, Crank-Nicolson, relaxed conditions for stability

Abstract: An efficient algorithm which combines quadratic spline collocation methods (QSC) for the space discretization and classical finite difference methods (FDMs), such as Crank-Nicolson, for the time discretization to solve general linear parabolic partial differential equations has been studied. By combining QSC and finite differences, a form of the approximate solution of the problem at each time step can be obtained; thus the value of the approximate solution and its derivatives can be easily evaluated at any point of the space domain for each time step.
There are two typical approaches to solving this problem: (a) using QSC in its standard formulation, which has low accuracy $\mathcal{O}(h^2)$ but low computational work; more precisely, it requires the solution of a tridiagonal linear system at each time step; and (b) using optimal QSC, which has high accuracy $\mathcal{O}(h^4)$ but requires the solution of either two tridiagonal linear systems or an almost pentadiagonal linear system at each time step. A new technique is introduced here which combines the advantages of the above two techniques; more precisely, it has high accuracy $\mathcal{O}(h^4)$ and almost the same low computational work as the standard QSC.
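
The one-tridiagonal-solve-per-step pattern referred to above looks as follows. The sketch uses a second-order finite difference space discretization as a stand-in (the thesis's point being that the new QSC variant fits the same pattern at $\mathcal{O}(h^4)$ space accuracy):

```python
import numpy as np
from scipy.linalg import solve_banded

# Crank-Nicolson for u_t = u_xx on [0,1], homogeneous Dirichlet BCs.
n, dt, steps = 99, 1e-3, 100
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
u = np.sin(np.pi * x)                       # initial condition
r = dt / (2 * h * h)

# Banded storage of (I - r*D2); (I + r*D2) is applied on the right.
ab = np.zeros((3, n))
ab[0, 1:] = -r; ab[1, :] = 1 + 2 * r; ab[2, :-1] = -r
for _ in range(steps):
    rhs = (1 - 2 * r) * u
    rhs[1:] += r * u[:-1]
    rhs[:-1] += r * u[1:]
    u = solve_banded((1, 1), ab, rhs)       # one tridiagonal solve per step

# Compare with the exact solution exp(-pi^2 t) sin(pi x).
print(np.max(np.abs(u - np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x))))
```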


Title: Pricing Convertible Bonds using Partial Differential Equations
Author: Lucy Xingwen Li
Date: September 2005
Pages: 81
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:lucy-05-msc.pdf
Keywords: Coupon payments, call, put, convert, discontinuities, implicit methods, successive over-relaxation, penalty method

Abstract:

A Convertible Bond (CB) is a corporate debt security that gives the holder the right to exchange future coupon payments and principal repayment for a prescribed number of shares of equity. Thus, it has both an equity part and a fixed-income part, and may contain some additional features, such as callability and puttability.
In this paper, we study the model for valuing Convertible Bonds with credit risk originally developed by Kostas Tsiveriotis and Chris Fernandes (TF). The Convertible Bond is a derivative of the stock price, and the pricing model developed by TF is based on a free boundary value problem associated with a pair of parabolic Partial Differential Equations (PDEs) with discontinuities at the time points when there is a coupon payment, or when the bond is converted, or when it is called back (purchased) by the issuer, or when it is put (sold) to the issuer. We explore the possible derivation of the TF model and study the convergence of several numerical methods for solving the free boundary value problem associated with the TF model. In particular, we consider the Successive Over-Relaxation (SOR) iteration and a penalty method for solving the linear complementarity problem used to handle the free boundary. Special emphasis is given to the effectiveness of the numerical scheme as well as to the treatment of discontinuities.


Title: Neumaier's Method for the Solution of Initial Value Problems for Stiff Ordinary Differential Equations
Author: Annie Hsiao Chen Yuk
Date: August 2005
Pages: 57
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:Yuk.05.msc.pdf
Keywords: Stiff IVPs for ODEs, Validated Integration

Abstract:

Compared with standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods not only compute a numerical solution to a problem, but also generate a guaranteed bound on the global error associated with the numerical solution. There have been significant developments in the field of validated numerical methods of IVPs over the past few decades. However, none of the validated methods developed to date are suitable for stiff IVPs for ODEs.

This thesis investigates the potential of Neumaier's validated method for stiff IVPs for ODEs. We show that Neumaier's result is a special case of Dahlquist's result. Our findings show that this method has promise as an effective validated method for stiff IVPs for ODEs, for problems where there is no wrapping effect.


Title: On Taylor Model Based Integration of ODEs
Authors: M. Neher, K. R. Jackson and N. S. Nedialkov
Date: Original Draft August 2005; revised August 2006; accepted August 2006.
Pages: 21
Note:A slightly modified version of this paper is published in the SIAM Journal on Numerical Analysis, Vol. 45, No. 1, 2007, pp. 236-262.
File:Taylor_Models_ODES_2006.pdf
Keywords: Taylor model methods, verified integration, ODEs, IVPs

Abstract:

Interval methods for verified integration of initial value problems (IVPs) for ordinary differential equations (ODEs) have been used for more than 40 years. For many classes of IVPs, these methods are able to compute guaranteed error bounds for the flow, where traditional methods provide only approximations to a solution. Overestimation, however, is a potential drawback of verified methods. For some problems, the computed error bounds become overly pessimistic, or the integration even breaks down. The dependency problem and the wrapping effect are particular sources of overestimations in interval computations.

Berz and his co-workers have developed Taylor model methods, which extend interval arithmetic with symbolic computations. This extension is an effective tool for reducing both the dependency problem and the wrapping effect. By construction, Taylor model methods appear particularly suitable for integrating nonlinear ODEs. We analyze Taylor model based integration of ODEs and compare Taylor model methods with traditional enclosure methods for IVPs for ODEs.
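
A toy illustration of the Taylor model idea, in one variable and with plain floating point instead of the outward-rounded interval arithmetic a real implementation (e.g. COSY) requires: a degree-bounded polynomial carries the bulk of a function, and an interval remainder rigorously absorbs terms truncated during arithmetic:

```python
import numpy as np

class TaylorModel:
    """Toy univariate Taylor model on |x| <= H: a degree-<=K polynomial plus
    an interval remainder [lo, hi] absorbing what the polynomial part cannot
    represent. (Illustration only; no outward rounding is performed.)"""
    K, H = 3, 0.1

    def __init__(self, coeffs, lo=0.0, hi=0.0):
        self.c = np.zeros(self.K + 1)
        self.c[:len(coeffs)] = coeffs
        self.lo, self.hi = lo, hi

    @staticmethod
    def _bound(coeffs):
        # crude bound on |sum_i c_i x^i| over |x| <= H
        return sum(abs(c) * TaylorModel.H**i for i, c in enumerate(coeffs))

    def __mul__(self, other):
        full = np.convolve(self.c, other.c)              # degree up to 2K
        tail = self._bound(np.concatenate((np.zeros(self.K + 1),
                                           full[self.K + 1:])))
        r1 = max(abs(self.lo), abs(self.hi))
        r2 = max(abs(other.lo), abs(other.hi))
        slop = tail + self._bound(self.c) * r2 + self._bound(other.c) * r1 + r1 * r2
        return TaylorModel(full[:self.K + 1], -slop, slop)

# (1 + x)(1 - x) = 1 - x^2 fits within degree K: remainder stays zero.
p = TaylorModel([1.0, 1.0]) * TaylorModel([1.0, -1.0])
print(p.c, (p.lo, p.hi))

# x^2 * x^2 = x^4 exceeds degree K, so it moves into the remainder interval.
q = TaylorModel([0.0, 0.0, 1.0]) * TaylorModel([0.0, 0.0, 1.0])
print(q.c, (q.lo, q.hi))
```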


Title: Glacial isostatic adjustment on 3-D Earth models: a finite-volume formulation
Authors: Konstantin Latychev, Jerry X. Mitrovica, Jeroen Tromp, Mark E. Tamisiea, Dimitri Komatitsch, Christina C. Christara
Date: May 2005
Pages: 24
Note:Published in Geophysical Journal International, Vol. 161, Issue 2, Pages 421-444
File: DOI link , journal page
Keywords: crustal motions, finite volumes, glacial isostatic adjustment, 3-D viscoelastic Earth models, parallel computation, domain decomposition

Abstract:

We describe and present results from a finite-volume (FV) parallel computer code for forward modelling the Maxwell viscoelastic response of a 3-D, self-gravitating, elastically compressible Earth to an arbitrary surface load. We implement a conservative, control volume discretization of the governing equations using a tetrahedral grid in Cartesian geometry and a low-order, linear interpolation. The basic starting grid honours all major radial discontinuities in the Preliminary Reference Earth Model (PREM), and the models are permitted arbitrary spatial variations in viscosity and elastic parameters. These variations may be either continuous or discontinuous at a set of grid nodes forming a 3-D surface within the (regional or global) modelling domain. In the second part of the paper, we adopt the FV methodology and a spherically symmetric Earth model to generate a suite of predictions sampling a broad class of glacial isostatic adjustment (GIA) data types (3-D crustal motions, long-wavelength gravity anomalies). These calculations, based on either a simple disc load history or a global Late Pleistocene ice load reconstruction (ICE-3G), are benchmarked against predictions generated using the traditional normal-mode approach to GIA. The detailed comparison provides a guide for future analyses (e.g. what grid resolution is required to obtain a specific accuracy?) and it indicates that discrepancies in predictions of 3-D crustal velocities less than 0.1 mm yr^{-1} are generally obtainable for global grids with ~3 x 10^6 nodes; however, grids of higher resolution are required to predict large-amplitude (>1 cm yr^{-1}) radial velocities in zones of peak post-glacial uplift (e.g. James Bay) to the same level of absolute accuracy. We conclude the paper with a first application of the new formulation to a 3-D problem. Specifically, we consider the impact of mantle viscosity heterogeneity on predictions of present-day 3-D crustal motions in North America. In these tests, the lateral viscosity variation is constructed, with suitable scaling, from tomographic images of seismic S-wave heterogeneity, and it is characterized by approximately 2 orders of magnitude (peak-to-peak) lateral variations within the lower mantle and 1 order of magnitude variations in the bulk of the upper mantle (below the asthenosphere). We find that the introduction of 3-D viscosity structure has a profound impact on horizontal velocities; indeed, the magnitude of the perturbation (of order 1 mm yr^{-1}) is as large as the prediction generated from the underlying (1-D) radial reference model and it far exceeds observational uncertainties currently being obtained from space-geodetic surveying. The relative impact of lateral viscosity variations on predicted radial motions is significantly smaller.


Title: Spline Collocation on Adaptive Grids and Non-Rectangular Domains
Author:Kit Sun Ng
Date: June 2005
Pages: 187
Note:Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:ngkit-05-phd.ps.gz
Keywords: quadratic splines, cubic splines, optimal convergence, non-uniform grid, adaptive methods, gridsize estimator, error estimator, L-shaped domains, T-shaped domains, skipped grid, moving mesh PDE.

Abstract:

Optimal spline collocation is a computationally cost-effective finite element method that has been developed relatively recently. Although there are many important results for optimal spline collocation, we believe the method can be further extended to solve a series of more relevant and difficult problems.

We first consider using optimal spline collocation on non-uniform partitions to solve one-dimensional linear second-order Boundary Value Problems (BVPs) that have solutions with regions of high variations, such as boundary layers. Optimal quadratic spline collocation (QSC) is extended to non-uniform partitions by using a mapping function from uniform to non-uniform partitions. Optimal cubic spline collocation (CSC) on non-uniform partitions is also examined. The formulation and analysis of the CSC methods are based, as in the QSC case, on a mapping function from uniform to non-uniform partitions. However, this method is converted into a mapping-free method and is integrated with adaptive techniques that include error and grid size estimators.

Spline collocation is then considered for solving two-dimensional linear second-order BVPs. We extend optimal spline collocation to solving BVPs on L- and T-shaped domains, which occur often in practice. The CSC method can handle general boundary conditions, which had previously never been done, but our approach requires a third step to achieve optimal convergence. We also consider two adaptive approaches for solving problems on rectangular domains whose solutions have rapid variations. As in the one-dimensional case, such problems present a significant challenge to obtain accurate approximations at minimal computational cost. We first implement a static method, which involves adding collocation points to regions where the solution has sharp variations, with CSC that uses skipped grids, which are grids that have several levels of refinement in certain regions of the domain. An optimal skipped grid method is developed by converting the second-order BVP into a system of three first-order BVPs. We then implement a dynamic method, which involves moving the collocation points to regions where the solution has sharp variations, with CSC that incorporates the {\it Moving Mesh Partial Differential Equation} (MMPDE). The MMPDE method is further integrated into an adaptive method that can solve BVPs within a specific error tolerance.
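
The simplest relative of the static regridding described above is de Boor-style equidistribution of a monitor function; the arc-length monitor below is a common illustrative choice, not necessarily the estimator used in the thesis:

```python
import numpy as np

def equidistribute(x, u, n_new):
    """Place n_new+1 mesh points so that the arc-length monitor
    M = sqrt(1 + u_x^2) is equidistributed over the new cells."""
    ux = np.gradient(u, x)
    M = np.sqrt(1.0 + ux**2)
    # cumulative integral of the monitor via the trapezoidal rule
    s = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))))
    targets = np.linspace(0.0, s[-1], n_new + 1)
    return np.interp(targets, s, x)        # invert s(x) at equal increments

x = np.linspace(0, 1, 401)
u = np.tanh(50 * (x - 0.5))               # interior layer at x = 0.5
x_new = equidistribute(x, u, 20)
print(np.round(np.diff(x_new), 4))        # cells shrink near the layer
```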


Title: Multigrid and Cubic Spline Collocation Methods for Advection Equations
Author:Zheng Zeng
Date: April 2005
Pages: 111
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:ccc/zengz-05-msc.ps.gz
Keywords: Shallow water equations, semi-Lagrangian methods, algebraic elimination, analytic elimination, systems of PDEs

Abstract:

This thesis describes numerical methods for the solution of advection equations in one-dimensional space. The methods are based on combining the multigrid and cubic spline collocation (CSC) methodologies.
Multigrid methods for cubic splines are presented. Appropriate restriction and extension operators are developed for cubic splines that satisfy various boundary conditions (BCs), i.e., Dirichlet, Neumann, periodic and general BCs. The multigrid methods are applied to CSC equations arising from the discretization of one-dimensional second-order differential equations. The convergence analysis of a two-grid method integrated with a damped Richardson relaxation scheme as smoother shows that the rate of convergence is equal to or faster than $1/2$ for a model second-order equation with periodic BCs.
Multigrid methods for cubic splines are then extended to the solution of one-dimensional shallow water equations (SWEs). The SWEs are discretized in time with a semi-Lagrangian semi-implicit scheme and in space with CSC methods. At each time step, there are three different space discretization approaches: Analytic Elimination - Discretization (AED), Discretization - Algebraic Elimination (DAE), and Discretization - No Elimination (DNE). We discuss advantages and disadvantages of each, and develop new numerical methods based on multigrid and CSC for solving the SWEs discretized with either the AED or the DNE approaches. We compare our methods with Jacobi's iterative method applied to the same problems. Through comparison and convergence discussion, we show that our multigrid methods for CSC are convergent and efficient. The numerical results confirm our analysis.
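
The flavour of the two-grid building block can be conveyed with a finite-difference stand-in for the CSC discretization (the actual methods require the spline restriction and extension operators developed in the thesis):

```python
import numpy as np

def two_grid_poisson(n=64, nu=3, omega=2/3, cycles=10):
    """Two-grid iteration for -u'' = f on [0,1], u(0)=u(1)=0, with damped
    Jacobi (Richardson-type) smoothing, full-weighting restriction, linear
    interpolation, and an exact coarse-grid solve."""
    h, H = 1.0/n, 2.0/n
    x = np.linspace(0, 1, n + 1)
    f = np.pi**2 * np.sin(np.pi * x)            # exact solution sin(pi x)
    u = np.zeros(n + 1)

    nc = n//2 - 1                               # interior coarse points
    Ac = (np.diag(2*np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
          - np.diag(np.ones(nc - 1), -1)) / H**2

    for _ in range(cycles):
        for _ in range(nu):                     # pre-smoothing (damped Jacobi)
            u[1:-1] += omega * (h**2 * f[1:-1] - 2*u[1:-1] + u[:-2] + u[2:]) / 2
        r = np.zeros(n + 1)                     # residual on the fine grid
        r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
        rc = (r[1:-2:2] + 2*r[2:-1:2] + r[3::2]) / 4    # full weighting
        ec = np.linalg.solve(Ac, rc)            # coarse-grid correction
        e = np.zeros(n + 1)
        e[2:-1:2] = ec                          # inject at coincident points
        e[1:-1:2] = (e[0:-2:2] + e[2::2]) / 2   # linear interpolation between
        u += e
    return np.max(np.abs(u - np.sin(np.pi * x)))

print(two_grid_poisson())    # error reaches the discretization level quickly
```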


Title: DDVERK90: A User-Friendly Implementation of an Effective DDE Solver
Author:Hossein Zivaripiran
Date: April 2005
Pages: 70
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:hzp05.ps.gz
Keywords: delay differential equations, Fortran 90 solver, user-friendly solver, embedded event location, template driver.

Abstract:

Delay differential equations (DDEs) provide powerful and rich mathematical models that have been proven to be useful in simulating some real-life problems. The current publicly available DDE solvers are either very formidable to use (especially for new users) or designed to support only selected classes of problems.
In this thesis we have developed a new Fortran 90 DDE solver DDVERK90 that aims to combine the effectiveness of the Fortran 77 DDE solver DDVERK and the user-friendliness of the MATLAB ODE solvers. We have added the capability of embedded event location to facilitate many tasks that arise in applications. We have also introduced a new approach for classifying DDE problems that helps users to write driver programs to solve their problem very quickly. They do this by comparing their problem to other previously solved example problems in the same class and using the corresponding `template' driver as a guide.
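
The core mechanics such a solver automates can be sketched in a few lines: the delayed argument is served by interpolating the stored numerical solution (or by the history function for pre-initial times). This bare-bones fixed-step integrator omits everything that makes DDVERK90 effective, notably discontinuity tracking, error control and event location:

```python
import numpy as np

def solve_dde(f, history, tau, t_end, h=0.01):
    """Fixed-step RK2 for the constant-delay DDE y'(t) = f(t, y(t), y(t - tau)),
    with linear interpolation of the stored solution for the delayed value.
    Assumes tau >= h so delayed times never run ahead of the stored mesh."""
    ts, ys = [0.0], [history(0.0)]
    def y_delayed(t):
        return history(t) if t <= 0 else np.interp(t, ts, ys)
    t, y = 0.0, history(0.0)
    while t < t_end - 1e-12:
        k1 = f(t, y, y_delayed(t - tau))
        k2 = f(t + h, y + h*k1, y_delayed(t + h - tau))
        t, y = t + h, y + h*(k1 + k2)/2
        ts.append(t); ys.append(y)
    return np.array(ts), np.array(ys)

# Hutchinson-type test problem: y'(t) = -y(t - 1), y(t) = 1 for t <= 0.
ts, ys = solve_dde(lambda t, y, yd: -yd, lambda t: 1.0, tau=1.0, t_end=5.0)
print(ys[-1])
```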


Title: Software for Ordinary and Delay Differential Equations: Accurate Approximate Solutions are not Enough
Authors:W.H. Enright
Date: 2005
Pages: 15
Note: Applied Numerical Mathematics (APNUM), 56, pp. 459-471, 2006.
File: enr05b.pdf

Abstract:

Numerical methods for both ordinary differential equations (ODEs) and delay differential equations (DDEs) are traditionally developed and assessed on the basis of how well the accuracy of the approximate solution is related to the specified error tolerance on an adaptively-chosen, discrete mesh. This may not be appropriate in numerical investigations that require visualization of an approximate solution on a continuous interval of interest (rather than at a small set of discrete points) or in investigations that require the determination of the `average' values or the `extreme' values of some solution components. In this paper we will identify modest changes in the standard error-control and stepsize-selection strategies that make it easier to develop, assess and use methods which effectively deliver approximations to differential equations (both ODEs and DDEs) that are more appropriate for these types of investigations. The required changes will typically increase the cost per step by up to 40\%, but the improvements and advantages gained will be significant. Numerical results will be presented for these modified methods applied to two example investigations (one ODE and one DDE).


Title: Method of Lines Solutions of Boussinesq Equations
Authors:S. Hamdi, W.H. Enright, W.E. Schiesser and Y. Ouellet
Date: 2005
Pages: 17
Note: Journal of Computational and Applied Mathematics (JCAM), 183, pp. 327-342, 2005.
File: heso05.pdf

Abstract:

A numerical solution procedure based on the method of lines for solving Nwogu's one-dimensional extended Boussinesq equations is presented. The numerical scheme is accurate to fifth order in time and fourth order in space, thus reducing all truncation errors to a level smaller than the dispersive terms retained by most extended Boussinesq models. Exact solitary wave solutions and invariants of motion recently derived by the authors are used to specify initial data for the incident solitary waves in the numerical model of Nwogu and for the verification of the associated computed solutions. The invariants of motion and several error measures are monitored in order to assess the conservative properties and the accuracy of the numerical scheme. The proposed method of lines solution procedure is general and can be easily modified to solve a wide range of Boussinesq-like equations in coastal engineering.
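
The method-of-lines pattern itself is compact; the sketch below applies it to a much simpler model problem (periodic linear advection) rather than Nwogu's equations, pairing a fourth-order space discretization with a high-order adaptive integrator in time:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic linear advection u_t + u_x = 0: fourth-order central differences
# in space, eighth-order adaptive Runge-Kutta (DOP853) in time.
n = 200
x = np.linspace(0, 1, n, endpoint=False)
dx = x[1] - x[0]

def rhs(t, u):
    # fourth-order central first derivative on a periodic grid
    ux = (np.roll(u, 2) - 8*np.roll(u, 1)
          + 8*np.roll(u, -1) - np.roll(u, -2)) / (12*dx)
    return -ux

u0 = np.exp(-200 * (x - 0.5)**2)
sol = solve_ivp(rhs, (0.0, 1.0), u0, method="DOP853", rtol=1e-10, atol=1e-12)
print(np.max(np.abs(sol.y[:, -1] - u0)))   # one full period returns u to u0
```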


Title: Exact Solutions and Conservation Laws for Generalized Quintic Regularized Long Wave Equations
Authors:S. Hamdi, W.H. Enright, W.E. Schiesser and J.J. Gottlieb
Date: 2005
Pages: 10
Note: Journal of Nonlinear Analysis, 63, pp. 1425-1434, 2005. Also appeared in the Proceedings of the Fourth World Congress of Nonlinear Analysis, Orlando, June 2004.
File: hesg05.pdf

Abstract:

A new model equation is proposed for simulating unidirectional wave propagation in nonlinear media with dispersion. This model equation is a one-dimensional evolutionary partial differential equation that is obtained by coupling the generalized Korteweg-de Vries (gKdV) equation, which includes a nonlinear term of any order and cubic dispersion, with the quintic Regularized Long Wave (qRLW) equation, which includes fifth order dispersion. Exact solitary wave solutions are derived for this model equation. These analytical solutions are obtained for any order of the nonlinear term and for any given values of the coefficients of the cubic and quintic dispersive terms. Analytical expressions for three conservation laws and for three invariants of motion for solitary wave solutions of this new equation are also derived.


Title: On the use of `arc length' and `defect' for mesh selection for differential equations
Authors:W.H. Enright
Date: 2005
Pages: 7
Note: to appear (2005) in Computing Letters (COLE).
File: enr05.ps.gz

Abstract:

In this note we will consider mesh selection techniques for ordinary differential equations (ODEs) and for partial differential equations (PDEs) based on arc length and defect. We assume we are trying to approximate $y(x)$ for $x \in [0, T]$ in the ODE case and $u(x, y)$ for $(x, y) \in \Omega$ in the PDE case. The two specific areas of application that we have in mind are the numerical solution of boundary value problems (BVPs) in ODEs and the numerical solution of PDEs when a method of lines (MOL) approach is employed. The approach we develop is applicable to a wider class of PDEs including mixed-order problems and 3D problems.


Title: An Agent-Based Simulation of Double-Auction Markets
Author:Ted Xiao Guo
Date: March 2005
Pages: 75
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2005.
File:Guo-05.msc.pdf
Keywords: agent based simulation, financial markets, double-auction markets.

Abstract:

Agent-based simulation has emerged as a promising research alternative to the traditional analytical approach in studying financial markets. The idea behind agent-based simulation is to constructively model a market in a bottom-up fashion using artificial agents. This thesis aims to accomplish four objectives. The first objective is to present a high-quality software platform that is capable of carrying out agent-based financial market simulations. The second objective is to simulate a double-auction market with this software platform, using homogeneous zero-intelligence agents. The simulation is based on the artificial market model proposed by Smith, Farmer, Gillemot and Krishnamurthy (2003). The third objective is to study the microstructure properties of the simulated double-auction market. In particular, we investigate certain statistical properties for the mid-price, bid-ask spread and limit-order book. The fourth and last objective is to explore and evaluate trading strategies associated with the time-constrained asset liquidation problem through computational agents.
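
A zero-intelligence double-auction market reduces to a few lines of book-keeping; the following minimal sketch is in the spirit of the Smith et al. model (uniform random limit prices, unit sizes, no cancellations or price-time priority):

```python
import numpy as np

rng = np.random.default_rng(1)
bids, asks, trades = [], [], []

for _ in range(10_000):
    price = rng.integers(90, 111)                 # random limit price
    if rng.random() < 0.5:                        # buy order
        if asks and price >= min(asks):           # crosses: trade at best ask
            trades.append(min(asks)); asks.remove(min(asks))
        else:
            bids.append(price)                    # otherwise rest in the book
    else:                                         # sell order
        if bids and price <= max(bids):           # crosses: trade at best bid
            trades.append(max(bids)); bids.remove(max(bids))
        else:
            asks.append(price)

spread = (min(asks) - max(bids)) if bids and asks else None
print(len(trades), spread)                        # trade count, final spread
```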


Title: Efficient Contouring on Unstructured Meshes
Authors: Hassan Goldani Moghaddam
Date: April 2004
Pages: 66
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2004.
File:goldani04.ps.gz
Keywords: interpolation, scattered data, visualization, PDE, contour, unstructured mesh

Abstract:
A PCI (Pure Cubic Interpolant) is a piecewise bicubic polynomial interpolant over an unstructured mesh based on the DEI (Differential Equation Interpolant) approach. It can approximate the solution of a PDE (Partial Differential Equation) at off-mesh points with the same order of accuracy as those provided by an underlying numerical PDE solver at mesh points. It satisfies the continuity property, which is very important in visualization. In addition to continuity, the PCI is efficient in terms of time and accuracy. We compare the PCI with the ADEI (Alternate Differential Equation Interpolant) and some other DEI-based interpolants. Furthermore, we introduce three fast contouring algorithms based on the PCI and an underlying unstructured mesh. Unlike standard contouring functions, our contouring algorithms do not need a fine structured approximation and work efficiently with the original scattered data.


Title: Exact Solutions of Extended Boussinesq Equations
Authors:S. Hamdi, W.H. Enright, W.E. Schiesser and Y. Ouellet
Date: 2004
Pages: 10
Note: Numerical Algorithms 37, pp. 165-175.
File: heso04.ps.gz

Abstract:

The problem addressed in this paper is the verification of numerical solutions of nonlinear dispersive wave equations such as Boussinesq-like systems of equations. A practical verification tool is to compare the numerical solution to an exact solution if available. In this work, we derive some exact solitary wave solutions and several invariants of motion for a wide range of Boussinesq-like equations using Maple software. The exact solitary wave solutions can be used to specify initial data for the incident waves in the Boussinesq numerical model and for the verification of the associated computed solution. When exact solutions are not available, four invariants of motions can be used as verification tools for the conservation properties of the numerical model.


Title: Exact Solutions and Invariants of Motion for General types of Regularized Long Wave Equations
Authors:S. Hamdi, W.H. Enright, W.E. Schiesser and J.J. Gottlieb
Date: 2004
Pages: 10
Note: Mathematics and Computers in Simulation, IMACS MATCOM.
File: hesg04.ps.gz

Abstract:

New exact solitary wave solutions are derived for general types of the regularized long wave (RLW) equation and its simpler alternative, the generalized equal width wave (EW) equation, which are evolutionary partial differential equations for the simulation of one-dimensional wave propagation in nonlinear media with dispersion processes. New exact solitary wave solutions are also derived for the generalized EW-Burgers equation, which models the propagation of nonlinear and dispersive waves with certain dissipative effects. The analytical solutions for these model equations are obtained for any order of the nonlinear terms and for any given value of the coefficients of the nonlinear, dispersive and dissipative terms. Analytical expressions for three invariants of motion for solitary wave solutions of the generalized EW equation are also devised.


Title: A Fast Shadowing Algorithm for High Dimensional ODE Systems
Authors: Wayne Hayes and Kenneth R. Jackson
Date: November 2004
Pages: 18
Note:A revised version of this paper appeared in SIAM J. Sci. Comput., Vol. 29, No. 4, 2007, pp. 1738-1758.
File:DNA.pdf
Keywords: Nonlinear dynamical systems, shadowing, reliable simulation.

Abstract:

Numerical solutions to chaotic dynamical systems are suspect because the chaotic nature of the problem implies that numerical errors are magnified exponentially with time. To bolster confidence in the reliability of such solutions, we turn to the study of shadowing. An exact trajectory of a chaotic dynamical system lying near an approximate trajectory is called a shadow. Finding shadows of numerical trajectories of systems of ordinary differential equations is very compute-intensive, and until recently it has been infeasible to study shadows of higher-dimensional systems. This paper introduces several optimizations to previous algorithms. The optimizations collectively achieve an average speedup of almost 2 orders of magnitude with no detectable loss in effectiveness. The algorithm is tested on systems with up to 180 dimensions. Its application to large gravitational N-body integrations has already led to a deeper understanding of the reliability of galaxy simulations.


Title:Rigorous High-Dimensional Shadowing Using Containment: The General Case
Authors: Carmen Young, Wayne B. Hayes, Kenneth R. Jackson
Date: September 2004
Pages: 14
Note:Submitted to the AIMS Conference on Dynamical Systems and Differential Equations
File:YHJ.pdf
Keywords: Nonlinear dynamical systems, shadowing, reliable simulation.

Abstract:

A shadow is an exact solution to an iterated map that remains close to an approximate solution for a long time. An elegant geometric method for proving the existence of shadows is called containment, and it has been proven previously in two and three dimensions, and in some special cases in higher dimensions. This paper presents the general proof using tools from differential and algebraic topology and singular homology.


Title: Valuation of Mortgage-Backed Securities in a Distributed Environment
Authors: Vladimir Surkov
Date: March 2004
Pages: 51
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2004.
File:Surkov04msc.pdf
Keywords: Valuation of Mortgage-Backed Securities, computational finance, Monte Carlo method, Quasi-Monte Carlo method, low-discrepancy sequences, Brownian bridge discretization, distributed computing environment, Microsoft .NET.

Abstract:

Valuation of Mortgage-Backed Securities, regarded as integration in high-dimensional space, can be readily performed using the Monte Carlo method. The Quasi-Monte Carlo method, by utilizing low-discrepancy sequences, has been able to achieve better convergence rates for computational finance problems despite analysis suggesting that the improved convergence comes into effect only at sample sizes growing exponentially with dimension. This may be attributed to the fact that the integrands are of low effective dimension and quasi-random sequence's good equidistribution properties in low dimensions allow for the faster convergence rates to be attained at feasible sample sizes. The Brownian bridge discretization is traditionally used to reduce the effective dimension although an alternate choice of discretization can produce superior results. This paper examines the standard Brownian bridge representation and offers a reparametrization to further reduce dimension. The performance is compared both in terms of improvement in convergence and reduced effective dimensionality as computed using ANOVA decomposition. Also, porting of the valuation algorithm to a distributed environment using Microsoft .NET is presented.
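
The Brownian bridge discretization referred to above fixes the path's endpoint first and then fills in successive midpoints, so the coordinates with the largest variance come first, where Sobol' points are most uniform. A minimal sketch for a path dimension that is a power of two:

```python
import numpy as np
from scipy.stats import qmc, norm

d, T = 16, 1.0                                # d must be a power of two here
sobol = qmc.Sobol(d, scramble=True, seed=42)
z = norm.ppf(sobol.random(1)[0])              # one d-dimensional normal draw

t = np.linspace(0.0, T, d + 1)
w = np.zeros(d + 1)
w[d] = np.sqrt(T) * z[0]                      # endpoint first
k, idx = d, 1
while k > 1:                                  # then successive midpoints
    for left in range(0, d, k):
        right, mid = left + k, left + k // 2
        mean = 0.5 * (w[left] + w[right])     # bridge conditional mean
        var = (t[mid] - t[left]) * (t[right] - t[mid]) / (t[right] - t[left])
        w[mid] = mean + np.sqrt(var) * z[idx]
        idx += 1
    k //= 2
print(np.round(w, 3))                         # one Brownian path on t
```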


Title: Validated Numerical Bounds on the Global Error for Initial Value Problems for Stiff Ordinary Differential Equations
Authors: Chao Yu
Date: September 2004
Pages: 50
Note:M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2004.
File:chao_yu.msc.pdf
Keywords: validated numerical method, initial value problem, ordinary differential equation

Abstract:

There are many standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs). Compared with these methods, validated methods for IVPs for ODEs produce bounds that are guaranteed to contain the true solution of a problem, if the true solution exists and is unique.

The main result of this thesis is a formula to bound the global error associated with the numerical solution of a stiff IVP for an ODE. We give the complete proof of this result. Moreover, we derive Dahlquist's formula and Neumaier's formula from this formula. We also give alternative (and possibly simpler) proofs of some known related results.


Title: Towards a high-performance method for the biharmonic Dirichlet problem
Authors: Jingrui Zhang
Date: April 2004
Pages: 77
Note:Research paper (equivalent to M.Sc. Thesis), Computer Science Dept., Univ. of Toronto, 2004.
File:ccc/jingrui-04-msc.ps.gz
Keywords: spline collocation; fourth-order two-point boundary value problem; optimal order of convergence; FFT; GMRES; preconditioning; eigenvalues; eigenvectors

Abstract:
The biharmonic Dirichlet problem appears in many applications. Therefore, there is a lot of interest in developing high-performance methods for its solution. Two important components of performance are accuracy and efficiency. In this research, we combine a sixth order discretization method and a Fast Fourier Transform (FFT)-based solver for the solution of the biharmonic Dirichlet problem.

The discretization of the problem uses the collocation methodology and quartic splines. Quartic Spline Collocation (Quartic SC) methods that produce sixth order approximations have been recently developed for the solution of fourth order Boundary Value Problems (BVPs). In this paper, we apply an adjustment to the quartic spline basis functions so that the Quartic SC matrix has more favorable properties. Particular attention is given to the one-dimensional harmonic problem. We study the properties of two Quartic SC matrices, more specifically, one that corresponds to Dirichlet and Neumann conditions (the standard harmonic problem) and another that corresponds to Dirichlet and second derivative conditions. The latter is solvable by FFTs and is used as a preconditioner for the former. We develop formulae for the eigenvalues and eigenvectors of the preconditioned matrix. We show that three iterations suffice to obtain convergence for the preconditioned GMRES method. Numerical experiments verify the effectiveness of the solver and preconditioner. We also discuss the application of the preconditioned GMRES method to the two-dimensional biharmonic Dirichlet problem.
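
The structure of the solver can be conveyed on a one-dimensional model: precondition a perturbed Dirichlet second-derivative operator with the unperturbed one, whose sine eigenbasis makes each preconditioner application an $O(n \log n)$ fast-transform solve. This finite-difference sketch mirrors the spirit, not the quartic-spline details, of the paper:

```python
import numpy as np
from scipy.fft import dst
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 255
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], (n, n)) / h**2 + diags(1.0 + x)

# Eigenvalues of K = tridiag(-1, 2, -1)/h^2 in the DST-I (sine) basis.
lam = 4.0 * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1)))**2 / h**2

def apply_Kinv(r):
    # K^{-1} r via two fast sine transforms (DST-I is self-inverse when ortho)
    return dst(dst(r, type=1, norm="ortho") / lam, type=1, norm="ortho")

M = LinearOperator((n, n), matvec=apply_Kinv)
b = np.ones(n)
sol, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ sol - b))   # info == 0: converged
```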


Title: A Survey of Shadowing Methods for Numerical Solutions of Ordinary Differential Equations
Authors:W. B. Hayes and K. R. Jackson
Date: December 2003
Pages: 31
Note: A slightly modified version of this paper appears in the Journal Applied Numerical Mathematics, Vol. 53, 2005, pp. 299-321.
File: hayes-shadowing-survey.ps.gz

Abstract:

A shadow is an exact solution to a set of equations that remains close to a numerical solution for a long time. Shadowing is thus a form of backward error analysis for numerical solutions to ordinary differential equations. This survey introduces the reader to shadowing with a detailed tour of shadowing algorithms and practical results obtained over the last 15 years, using the gravitational n-body problem as a running example.


Title: Optimal Quadratic Spline Collocation on Non-Uniform Partitions
Authors: Christina C. Christara, and Kit Sun Ng
Date: June 2003, revised 2004
Pages: 28
Note:A revised version to appear in Computing
File: ccc/QSCnonuni.ps.gz
Keywords: spline collocation; second-order two-point boundary value problem; error bounds; optimal order of convergence; adaptive grid; grid size estimator; error estimator; spline interpolation.

Abstract:

Quadratic and Cubic Spline Collocation (QSC and CSC) methods of optimal orders of convergence have been developed for the solution of linear second-order two-point Boundary Value Problems (BVPs) discretized on uniform partitions. In this paper, we extend the optimal QSC methods to non-uniform partitions, and in a companion paper we do the same for CSC. To develop optimal non-uniform partition QSC methods, we use a mapping function from uniform to non-uniform partitions and develop expansions of the error at the non-uniform collocation points of some appropriately defined spline interpolants. The existence and uniqueness of the QSC approximations are shown, under some conditions. The $j$th derivative of the QSC approximation is $O(h^{3-j})$ globally, and $O(h^{4-j})$ locally on certain points, for $j \ge 0$.


Title: Optimal Cubic Spline Collocation on Non-Uniform Partitions
Authors: Christina C. Christara, and Kit Sun Ng
Date: June 2003, revised 2004
Pages: 17
Note:A revised version to appear in Computing
File: ccc/CSCnonuni.ps.gz
Keywords: spline collocation; second-order two-point boundary value problem; error bounds; optimal order of convergence; adaptive grid; grid size estimator; error estimator; spline interpolation.

Abstract:

In a recent paper, non-uniform partition Quadratic Spline Collocation (QSC) methods of optimal order of convergence were developed for second-order two-point Boundary Value Problems (BVPs). In this paper, we develop optimal non-uniform Cubic Spline Collocation (CSC) methods. The formulation and analysis of the methods are based, as in the QSC case, on a mapping function from uniform to non-uniform partitions. However, the CSC method is turned into a mapping-free method. The $j$th derivative of the CSC approximation is $O(h^{4-j})$ globally, for $j \ge 0$, and $O(h^{5-j})$ locally on certain points, for $j > 0$. The non-uniform partition CSC method is integrated with adaptive grid techniques, and grid size and error estimators. One adaptive grid technique is based on the construction of a mapping function. Another is mapping-free and resembles the technique used in COLSYS (COLNEW) \cite{AMR95}. Numerical results on a variety of problems, including problems with boundary or interior layers, and singular perturbation problems indicate that, for most problems, the CSC method requires less computational effort for the same error tolerance, and has equally reliable error estimators, when compared to Hermite piecewise cubic collocation (HPCC).


Title: Fast Contouring of Solutions to Partial Differential Equations
Authors:E.L. Bradbury and W.H. Enright
Date: May 2003
Pages: 25
Note: This manuscript has been submitted for publication in ACM Trans. on Math. Soft., (Revised May 2003).
File: be03.ps.gz

Abstract:

The application of Differential Equation Interpolants (DEIs) to the visualization of the solutions to Partial Differential Equations (PDEs) is investigated. In particular, we describe how a DEI can be used to generate a fine mesh approximation from a coarse mesh approximation; this fine mesh approximation can then be used by a standard contouring function to render an accurate contour plot of the surface. However, the standard approach has a time complexity equivalent to that of rendering a surface plot, $O(fm^2)$ (where $fm$ is the ratio of the width of the coarse mesh to that of the fine mesh). To address this concern three fast contouring algorithms are proposed which compute accurate contour lines directly from the DEI, and have time complexity at most $O(fm)$.


Title: Rigorous Shadowing of Numerical Solutions of Ordinary Differential Equations by Containment
Authors:W. B. Hayes and K. R. Jackson
Date: March 2003
Pages: 25
Note: Accepted March 2003 for publication by the SIAM Journal on Numerical Analysis. hayes-2001-shadowing is an earlier and longer version of this paper.
File: hayes-2003-shadowing.ps.gz

Abstract:

An exact trajectory of a dynamical system lying close to a numerical trajectory is called a shadow. We present a general-purpose method for proving the existence of finite-time shadows of numerical ODE integrations of arbitrary dimension in which some measure of hyperbolicity is present and there is either zero or one expanding mode, or zero or one contracting mode. Much of the rigor is provided automatically by interval arithmetic and validated ODE integration software that is freely available. The method is a generalization of a previously published containment process that was applicable only to two-dimensional maps. We extend it to handle maps of arbitrary dimension with the above restrictions, and finally to ODEs. The method involves building n-cubes around each point of the discrete numerical trajectory through which the shadow is guaranteed to pass at appropriate times. The proof consists of two steps: first, the rigorous computational verification of a simple geometric property we call the inductive containment property; and second, a simple geometric argument showing that this property implies the existence of a shadow. The computational step is almost entirely automated and easily adaptable to any ODE problem. The method allows for the rescaling of time, which is a necessary ingredient for successfully shadowing ODEs. Finally, the method is local, in the sense that it builds the shadow inductively, requiring information only from the most recent integration step, rather than more global information typical of several other methods. The method produces shadows of comparable length and distance to all currently published results.


Title: The cost/reliability trade-off in verifying approximate solutions to differential equations
Authors:W.H. Enright
Date: February 2003
Pages: 9
Note: Presented at Bari International workshop on computational codes:the technological aspects of mathematics, December 2002. (Also submitted for publication in special issue of the journal of computational methods in science and engineering -- JCMSE.)
File: ebari03.ps.gz

Abstract:

It is now standard practice in computational science for large scale simulations to be implemented and investigated in a problem solving environment (PSE) such as MATLAB or MAPLE. In such an environment, a scientist or engineer will formulate a mathematical model, approximate its solution using an appropriate numerical method, visualize the approximate solution and verify (or validate) the quality of the approximate solution. Traditionally we have been most concerned with the development of effective numerical software for generating the approximate solution and several efficient and reliable numerical libraries are now available for use within the most widely used PSEs. On the other hand, the visualization and verification tasks have received little attention, even though each often requires as much computational effort as is involved in generating the approximate solution. In this paper we will investigate the effectiveness of a suite of tools that we have recently introduced in the MATLAB PSE to verify approximate solutions of ordinary differential equations. In particular we will identify and illustrate the inherent trade-off between reliability and efficiency that arises. We will use the notion of `effectivity index', widely used by researchers in the adaptive mesh PDE community, to quantify the quality of our verification tools and illustrate the performance of these tools on two test problems.


Title: Exact Solutions of the Generalized Equal Width Wave Equation
Authors:S. Hamdi, W.H. Enright, W.E. Schiesser and J.J. Gottlieb
Date: March 2003
Pages: 10
Note: To appear in Proceedings of Workshop on Wave Phenomena in Physics and Engineering: New Models, Algorithms, and Applications, Montreal, May 2003.
File: hesg03.ps.gz

Abstract:

The equal width wave (EW) equation is a model partial differential equation for the simulation of one-dimensional wave propagation in nonlinear media with dispersion processes. The EW-Burgers equation models the propagation of nonlinear and dispersive waves with certain dissipative effects. In this work, we derive exact solitary wave solutions for the general form of the EW equation and the generalized EW-Burgers equation with nonlinear terms of any order. We also derive analytical expressions of three invariants of motion for solitary wave solutions of the generalized EW equation.


Title: The Design and Implementation of Usable ODE Software
Authors:W.H. Enright
Date: December 2002
Pages: 13
Note: Numerical Algorithms 31, 1-4, pp. 125-137, 2002.
File: enz02.ps.gz

Abstract:

In the last decade it has become standard for students and researchers to be introduced to state-of-the-art numerical software through a problem solving environment (PSE) rather than through the use of scientific libraries callable from a high level language such as Fortran or C. In this paper we will identify the constraints and implications that this imposes on the ODE software we investigate and develop. In particular the way a `numerical solution' is displayed and viewed by a user dictates that new measures of performance and quality must be adopted. We will use the MATLAB environment and ODE software for initial value problems, boundary value problems and delay problems to illustrate the issues that arise and the progress that has been made. One of the major implications is the expectation that accurate approximations at `off-mesh' points must be provided. Traditional numerical methods for ODEs have produced approximations to the underlying solution on an associated discrete, adaptively chosen mesh. In recent years it has become common for the ODE software to also deliver approximations at off-mesh values of the independent variable. Such a feature can be extremely valuable in applications and leads to new measures of quality and performance which are more meaningful to users and more consistently interpreted and implemented in contemporary ODE software. Numerical examples of the robust and reliable behaviour of such software will be presented and the cost/reliability trade-offs that arise will be quantified.
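
The off-mesh expectation described above is exactly what dense output provides in a modern PSE-style interface; a minimal SciPy illustration (standing in for the MATLAB solvers discussed in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic oscillator y'' = -y as a first-order system; the integrator picks
# its own adaptive mesh, and the continuous extension sol.sol supplies
# accurate values at arbitrary off-mesh points afterwards.
sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0, 10), [1.0, 0.0],
                rtol=1e-8, atol=1e-10, dense_output=True)
t_off = np.linspace(0, 10, 7)            # points not on the adaptive mesh
print(np.abs(sol.sol(t_off)[0] - np.cos(t_off)).max())
```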


Title: Quadratic Spline Galerkin Method for the Shallow Water Equations
Authors:Anita T. Layton, Christina C. Christara, and Kenneth R. Jackson
Date: November 2002
Pages: 30
File: sweqsg02.ps.gz
Note: Submitted to Mathematics and Computers in Simulation
Keywords: numerical weather prediction; finite element; semi-Lagrangian semi-implicit; Rossby stability; staggered grids

Abstract:

Currently in most global meteorological applications, the spectral transform method or low-order finite difference/finite element methods are used. The spectral transform method, which yields high-order approximations, requires Legendre transforms. The Legendre transforms have a computational complexity of $\mathcal O(N^3)$, where $N$ is the number of subintervals in one dimension, and thus render the spectral transform method unscalable. In this study, we present an alternative numerical method for solving the shallow water equations (SWEs) on a sphere in spherical coordinates. In this implementation, the SWEs are discretized in time using the two-level semi-Lagrangian semi-implicit method, and in space on staggered grids using the quadratic spline Galerkin method. We show that, when applied to a simplified version of the SWEs, the method yields a neutrally stable solution for the meteorologically significant Rossby waves. Moreover, we demonstrate that the Helmholtz equation arising from the discretization and solution of the SWEs should be derived algebraically rather than analytically, in order for the method to be stable with respect to the Rossby waves. These results are verified numerically using Boyd's equatorial wave equations \cite{boyd1} with initial conditions chosen to generate a soliton.


Title: Optimal Quadratic Spline Collocation Methods for the Shallow Water Equations
Authors:Anita T. Layton, Christina C. Christara, and Kenneth R. Jackson
Date: November 2002
Pages: 38
File: sweqsc02.ps.gz
Note: Submitted to Mathematics and Computers in Simulation
Keywords: numerical weather prediction; finite element; semi-Lagrangian semi-implicit; Rossby stability; staggered grids

Abstract:

In this study, we present numerical methods, based on the optimal quadratic spline collocation (OQSC) methods, for solving the shallow water equations (SWEs) in spherical coordinates. A quadratic spline collocation method approximates the solution of a differential problem by a quadratic spline. In the standard formulation, the quadratic spline is computed by making the residual of the differential equations zero at a set of collocation points; the resulting error is second order, while the error associated with quadratic spline interpolation is fourth order locally at certain points and third order globally. The OQSC methods generate approximations of the same order as quadratic spline interpolation. In the one-step OQSC method, the discrete differential operators are perturbed to eliminate low-order error terms, and a high-order approximation is computed using the perturbed operators. In the two-step OQSC method, a second-order approximation is generated first, using the standard formulation, and then a high-order approximation is computed in a second phase by perturbing the right sides of the equations appropriately.

In this implementation, the SWEs are discretized in time using the semi-Lagrangian semi-implicit scheme, which allows large timesteps while maintaining numerical stability, and in space using the OQSC methods. The resulting methods are efficient and yield stable and accurate representation of the meteorologically important Rossby waves. Moreover, by adopting the Arakawa C-type grid, the methods also faithfully capture the group velocity of inertia-gravity waves.


Title: Shadowing High-Dimensional Hamiltonian Systems: the Gravitational N-Body Problem
Author:Wayne B. Hayes
Date: November 2002
Pages: 4
Note: Physics Review Letters, Vol. 90, No. 5, 2003.
File: sh-n-body-M1.ps.gz

Abstract:

A shadow is an exact solution to a chaotic system of equations that remains close to a numerically computed solution for a long time. Using a variable-order, variable-timestep integrator, we numerically compute solutions to a gravitational $N$-body problem in which many particles move and interact in a fixed potential. We then search for shadows of these solutions with the longest possible duration. We find that in "softened" potentials, shadow durations are sufficiently long for significant evolution to occur. However, in unsoftened potentials, shadow durations are typically very short.


Title: Shadowing-Based Reliability Decay in Softened N-Body Simulations
Author:Wayne B. Hayes
Date: September 2002
Pages: 4
Note: Astrophysical Journal Letters, Vol. 587, No. 2, Part 2, 20 April 2003, pp. L59-L62.
File: shadow-decay.ps.gz

Abstract:

A shadow of a numerical solution to a chaotic system is an exact solution to the equations of motion that remains close to the numerical solution for a long time. In a collisionless $n$-body system, we know that particle motion is governed by the global potential rather than by inter-particle interactions. As a result, the trajectory of each individual particle in the system is independently shadowable. It is thus meaningful to measure the number of particles that have shadowable trajectories as a function of time. We find that the number of shadowable particles decays exponentially with time as $e^{-\mu t}$, and that for $\epsilon\in [\sim 0.2,1]$ (in units of the local mean inter-particle separation $\bar n$), there is an explicit relationship between the decay constant $\mu$, the timestep $h$ of the leapfrog integrator, the softening $\epsilon$, and the number of particles $N$ in the simulation. Thus, given $N$ and $\epsilon$, it is possible to pre-compute the timestep $h$ necessary to achieve a desired fraction of shadowable particles after a given length of simulation time. We demonstrate that a large fraction of particles remain shadowable over $\sim$100 crossing times even if particles travel up to about 1/3 of the softening length per timestep. However, a sharp decrease in the number of shadowable particles occurs if the timestep increases to allow particles to travel further than 1/3 the softening length in one timestep, or if the softening is decreased below $\sim 0.2\bar n$.
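
The integrator in question is the standard kick-drift-kick leapfrog with Plummer-softened forces. A minimal sketch (our own, with $G = 1$ and hypothetical variable names) makes the timestep rule concrete:

    import numpy as np

    def softened_accel(pos, masses, eps):
        # Plummer-softened accelerations (G = 1):
        # a_i = sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
        acc = np.zeros_like(pos)
        for i in range(len(masses)):
            d = pos - pos[i]
            r2 = (d**2).sum(axis=1) + eps**2
            w = masses / r2**1.5
            w[i] = 0.0                      # exclude self-interaction
            acc[i] = (w[:, None] * d).sum(axis=0)
        return acc

    def leapfrog_step(pos, vel, masses, h, eps):
        # One kick-drift-kick step; per the result above, shadowing
        # degrades sharply once max|v|*h exceeds about eps/3.
        vel = vel + 0.5*h*softened_accel(pos, masses, eps)
        pos = pos + h*vel
        vel = vel + 0.5*h*softened_accel(pos, masses, eps)
        return pos, vel

Choosing $h$ so that the fastest particle covers at most about a third of the softening length per step keeps the simulation in the well-shadowed regime reported above.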


Title: Computation of Value-at-Risk: the Fast Convolution Method, Dimension Reduction and Perturbation Theory
Author:Petter Wiberg
Date: September 2002
Pages: 88
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2002
File: wiberg.phd.ps.gz

Abstract:

Value-at-risk is a measure of market risk for a portfolio. Market risk is the chance that the portfolio declines in value due to changes in market variables. This thesis is about the computation of value-at-risk for portfolios with derivatives and for return models whose distributions have fat tails.

We introduce a new Fourier algorithm, the fast convolution method, for computing value-at-risk. The fast convolution method is different from other Fourier methods in that it does not require that the characteristic function of the portfolio returns be known explicitly. Our new method can therefore be used with more general return models. In the thesis we present experiments with three return models: the normal model, the asymmetric T model and a model using the non-parametric Parzen density estimator. We also discuss how the fast convolution method can be extended to compute the value-at-risk gradient, present a proof of convergence and illustrate the performance of the method with examples.
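
A rough flavor of the idea (our sketch, not the thesis' algorithm): when the portfolio return is a sum of independent components, the density of the sum is a convolution of the component densities, computable by FFT, and value-at-risk is then a quantile of the result. With two hypothetical marginals:

    import numpy as np
    from scipy.signal import fftconvolve

    x = np.linspace(-10, 10, 4097)                 # common return grid
    dx = x[1] - x[0]
    p1 = np.exp(-x**2/2) / np.sqrt(2*np.pi)        # a normal marginal
    p2 = np.exp(-np.abs(x)) / 2                    # a fat-tailed Laplace marginal

    p_sum = fftconvolve(p1, p2, mode='same') * dx  # density of the summed return
    cdf = np.cumsum(p_sum) * dx
    var_99 = -x[np.searchsorted(cdf, 0.01)]        # 99% value-at-risk
    print(var_99)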

We develop and compare two methods for dimension reduction in the computation of value-at-risk. The goal of dimension reduction is to reduce computation time by finding a small model that captures the main dynamics of the original model. We compare the two methods for an example problem and conclude that the method based on mean square error is superior. Finally, we present an optimization example that illustrates that dimension reduction may reduce the time to compute value-at-risk while maintaining good accuracy.

We develop a perturbation theory for value-at-risk with respect to changes in the return model. By considering variational properties, we derive a first-order error bound and find the condition number of value-at-risk. We argue that the sensitivity observed in empirical studies is an inherent limitation of value-at-risk.


Title: Quadratic Spline Collocation Revisited: Extension to Systems of Elliptic PDEs
Authors:Christina C. Christara and Kit Sun Ng
Date: July 2002
Pages: 32
Note: Dept. of Computer Science, University of Toronto, Technical Report 318/2001, rev. July 2002.
File: ccc/sys.ps.gz

Abstract:

We revisit the optimal Quadratic Spline Collocation (QSC) methods for single elliptic PDEs developed in \cite{Chri94} and prove some new results, including a discrete maximum principle. We extend QSC to systems of two coupled linear second-order PDEs in two dimensions. The system of PDEs is treated as one entity; no decoupling is applied.
We study the matrix properties of the linear system arising from the discretization of systems of two PDEs by QSC. We give sufficient conditions under which the QSC linear system is uniquely solvable and develop analytic formulae for the eigenvalues and eigenvectors of the QSC matrix arising from $2 \times 2$ Helmholtz operators with constant coefficients.
Numerical results demonstrate that the QSC methods are optimal, that is, fourth order locally at certain points and third order globally, even for some problems that do not satisfy the conditions of the analysis. The QSC methods are compared to conventional second-order discretization methods and are shown to produce smaller approximation errors in the same computation time, while they achieve the same accuracy in less time.
The formulation of the optimal two-step QSC method for a $2 \times 2$ system of PDEs can be extended in a straightforward way to an $n \times n$ system of PDEs.


Title: Computation of the Probability Density Function and the Cumulative Distribution Function of the Generalized Gamma Variance Model
Author: Xiaofang Ma
Date: April 2002
Pages: 77
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2002.
File: ma-02-msc.ps.gz

Abstract:

Numerical methods for computing the probability density function (pdf) (satisfying a relative accuracy requirement) and the cumulative distribution function (cdf) (satisfying an absolute accuracy requirement) of the generalized Gamma variance model are investigated.

A hybrid method is developed to calculate the pdf. This hybrid method chooses between several basic methods to evaluate the pdf depending on the value of the risk factor and the values of the shape parameters. Extensive numerical experiments are performed to verify the robustness of the proposed hybrid method. Comparison with some existing methods for special cases suggests that this hybrid method is accurate. To improve the performance of the hybrid method, a strategy suggested by Alex Levin is adopted in the present program.

A method for computing the cdf is also developed. Numerical comparisons for several special cases are carried out to verify the correctness and the accuracy of the proposed method.


Title: Exact Solutions of Extended Boussinesq Equations
Authors:S. Hamdi, W.H. Enright, Y. Ouellet and W.E. Schiesser
Date: December 2002
Pages: 10
Note: submitted to Numerical Algorithms, December 2002.
File: heos02.ps.gz

Abstract:

The problem addressed in this paper is the verification of numerical solutions of nonlinear dispersive wave equations such as Boussinesq-like systems of equations. A practical verification tool is to compare the numerical solution to an exact solution if available. In this work, we derive some exact solitary wave solutions and several invariants of motion for a wide range of Boussinesq-like equations using Maple software. The exact solitary wave solutions can be used to specify initial data for the incident waves in the Boussinesq numerical model and for the verification of the associated computed solution. When exact solutions are not available, four invariants of motion can be used as verification tools for the conservation properties of the numerical model.


Title: Interpolation of Surfaces over Scattered Data
Authors:G.A. Ramos and W.H. Enright
Date: September 2001
Pages: 7
Note: Proceedings of IASTED, (M.H. Hamza, ed.), pp.219-224, Marbella Spain, September 2001, ACTA Press.
File: er01.ps.gz

Abstract:

We investigate the performance of DEI, an approach [2] that computes off-mesh approximations of PDE solutions, and that can also be used as a technique for scattered data interpolation and surface representation. For the general case of unstructured meshes, we found it necessary to modify the original DEI. The resulting method, ADEI, adjusts the parameter of the interpolant, obtaining better performance. Finally, we measure ADEI's performance on different sets of scattered data and compare it against two methods from the Collected Algorithms of the ACM.
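
DEI and ADEI are not available in standard libraries; for orientation only, here is a generic scattered-data interpolation call of the kind being benchmarked (SciPy's griddata as a stand-in, not the ADEI method):

    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 1, size=(200, 2))             # scattered data sites
    vals = np.sin(4*pts[:, 0]) * np.cos(4*pts[:, 1])   # sampled surface values

    gx, gy = np.mgrid[0:1:50j, 0:1:50j]                # evaluation grid
    surface = griddata(pts, vals, (gx, gy), method='cubic')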


Title: Hierarchical Representation and Computation of Approximate Solutions in Scientific Simulations
Authors:W.H. Enright
Date: 2001
Pages: 17
Note: in `The Architecture of Scientific Software', (R. Boisvert and P. Tang eds.), pp. 301-316, Kluwer, Boston.
File: ebt01.ps.gz

Abstract:

In many applications involving large scale scientific computing, a massive amount of data is generated in a typical simulation. Such simulations generally require the numerical solution of systems of differential equations, where the results are often generated remotely using special high-performance software and computer systems. The visualization packages are usually run on local workstations and make use of colour, lighting, texture, sound and animation to highlight and reveal interesting characteristics or features of the approximate solution. This `interactive' viewing of the data is the `rendering' stage of the modeling process and it can be very selective and local in the sense that only a subset of the variables are rendered and then only in regions where something interesting is happening. The result is that, in many simulations, large amounts of data must be stored and shared in a distributed environment while only a very small fraction of this data will ever be viewed by anyone. In this paper we propose an approach, using a hierarchical representation of the approximate solution, which will avoid the generation and storage of data that is not required.


Title: Rigorous Shadowing of Numerical Solutions of Ordinary Differential Equations by Containment
Authors: W. B. Hayes and K. R. Jackson
Date: December 2001
Pages: 41
Note: hayes-2003-shadowing is a shorter, revised version of this paper.
File: hayes-2001-shadowing.ps.gz

Abstract:

An exact trajectory of a dynamical system lying close to a numerical trajectory is called a shadow. We present a general-purpose method for proving the existence of finite-time shadows of numerical ODE integrations of arbitrary dimension in which some measure of hyperbolicity is present and there is at most one expanding mode or at most one contracting mode. Much of the rigor is provided automatically by interval arithmetic and validated ODE integration software that is freely available. The method is a generalization of a previously published containment process that was applicable only to two-dimensional maps. We extend it to handle maps of arbitrary dimension with the above restrictions, and finally to ODEs. The method involves building n-cubes around each point of the discrete numerical trajectory through which the shadow is guaranteed to pass at appropriate times. The proof consists of two steps: first, the rigorous computational verification of a simple geometric property we call the inductive containment property; and second, a simple geometric argument showing that this property implies the existence of a shadow. The computational step is almost entirely automated and easily adaptable to any ODE problem. The method allows for the rescaling of time, which is a necessary ingredient for successfully shadowing ODEs. Finally, the method is local, in the sense that it builds the shadow inductively, requiring information only from the most recent integration step, rather than more global information typical of several other methods. The method produces shadows of comparable length and distance to all currently published results.


Title: Dimension Reduction in the Computation of Value-at-Risk
Authors:Claudio Albanese, Ken Jackson and Petter Wiberg
Date: November 2001
Pages: 19
Note: Accepted May 2002 for publication in the Journal of Risk Finance
File: dimred.ps.gz

Abstract:

Value-at-risk models can have many dimensions. We present two new algorithms for dimension reduction in value-at-risk algorithms with Delta-Gamma approximations. In the first method, we compute a reduced portfolio with a small mean square error for the residual and, in the second method, we use low rank approximations to find a reduced portfolio. The paper concludes with an example, estimating value-at-risk and hedging an option portfolio, that demonstrates that dimension reduction leads to large savings in computational time without sacrificing accuracy.
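
A generic sketch in the spirit of the second (low-rank) method, on a hypothetical Delta-Gamma model $dV = \delta^T dx + \frac{1}{2} dx^T \Gamma dx$ (our illustration; the paper's reduction criteria differ in detail):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.normal(size=(n, n))
    Gamma = (A + A.T) / 2                         # hypothetical Gamma matrix

    # Keep the k eigendirections of Gamma with largest |eigenvalue|.
    k = 5
    w, V = np.linalg.eigh(Gamma)
    idx = np.argsort(np.abs(w))[::-1][:k]
    Gamma_k = (V[:, idx] * w[idx]) @ V[:, idx].T  # rank-k reduced portfolio

    print(np.linalg.norm(Gamma - Gamma_k, 2))     # spectral-norm residual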


Title: The Design and Implementation of an Object-Oriented Validated ODE Solver
Authors:N. S. Nedialkov and K. R. Jackson
Date: Original draft October 2001; Revised June 2002.
Pages: 32
File: ned.software.01.ps.gz

Abstract:

Validated methods for initial value problems (IVPs) for ordinary differential equations (ODEs) produce bounds that are guaranteed to contain the true solution of a problem. These methods use interval techniques to enclose rounding errors, truncation errors, and uncertainties in the model.

We describe the design and implementation of VNODE, Validated Numerical ODE, which is an object-oriented solver for computing rigorous bounds on the solution of an IVP for an ODE. We have designed VNODE as a set of C++ classes, which encapsulate data structures and algorithms employed in validated methods for IVPs for ODEs.

In VNODE, new solvers can be constructed by choosing methods from sets of methods. In addition, part of a solver can be replaced by another one performing the same task. This enables the user to isolate and compare methods implementing the same part of a solver. Moreover, a new method can be added to VNODE by encapsulating it in a class and deriving this class from an existing one.

We illustrate how to integrate with VNODE, how to construct new solvers, and how to extend this package with new methods.


Title: Quartic-Spline Collocation Methods for Fourth-Order Two-Point Boundary Value Problems
Author:Ying Zhu
Date: April 2001
Pages: 73
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 2001.
File: ccc/yz-01-msc.ps.gz

Abstract:

This thesis presents numerical methods for the solution of general linear fourth-order boundary value problems in one dimension. The methods are based on quartic splines, that is, piecewise quartic polynomials with C^3 continuity, and the collocation discretization methodology with the midpoints of a uniform partition being the collocation points.
The standard quartic-spline collocation method is shown to be second order. Two sixth-order quartic-spline collocation methods are developed and analyzed. They are both based on a high order perturbation of the differential equation and boundary conditions operators. The one-step method forces the approximation to satisfy a perturbed problem, while the three-step method proceeds in three steps and perturbs the right sides of the equations appropriately.
The error analysis follows the Green's function approach and shows that both methods exhibit optimal order of convergence, that is, they are locally sixth order on the gridpoints and midpoints of the uniform partition, and fifth order globally. The properties of the matrices arising from a restricted class of problems are studied. Analytic formulae for the eigenvalues and eigenvectors are developed, and related to those arising from quadratic-spline collocation matrices. Numerical results verify the orders of convergence predicted by the analysis.


Title: High-Order Spatial Discretization Methods for the Shallow Water Equations
Author:Anita W. Tam
Date: February 2001
Pages: 160
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2001.
File: tam-01-phd.ps.gz

Abstract:

We present new numerical methods for the shallow water equations on a sphere in spherical coordinates. In our implementation, the equations are discretized in time with the two-level semi-Lagrangian semi-implicit (SLSI) method, and in space on a staggered grid with the quadratic spline Galerkin (QSG) and the optimal quadratic spline collocation (OQSC) methods. When discretized on a uniform spatial grid, the solutions are shown through numerical experiments to be fourth-order in space locally at the nodes and midpoints of the spatial grids, and third-order globally.

We also show that, when applied to a simplified version of the shallow water equations, each of our algorithms yields a neutrally stable solution for the meteorologically significant Rossby waves. Moreover, we demonstrate that the Helmholtz equation associated with the shallow water equations should be derived algebraically rather than analytically in order for the algorithms to be stable with respect to the Rossby waves. These results are verified numerically using Boyd's equatorial wave equations with initial conditions chosen to generate a soliton.

We then analyze the performance of our methods on various staggered grids --- the A-, B-, and C-grids. From an eigenvalue analysis of our simplified version of the shallow water equations, we conclude that, when discretized on the C-grid, our algorithms faithfully capture the group velocity of inertia-gravity waves. Our analysis suggests that neither the A- nor B-grids will produce such good results. Our theoretical results are supported by numerical experiments, in which we discretize Boyd's equatorial wave equations using different staggered grids and set the initial conditions for the problem to generate gravitation modes instead of a soliton. With both the A- and B-grids, some waves are observed to travel in the wrong direction, whereas, with the C-grid, gravity waves of all wavelengths propagate in the correct direction.


Title: Hedging with Value-At-Risk
Author:Claudio Albanese, Ken Jackson and Petter Wiberg
Date: January 2001
Pages: 17
File: vargrad.ps.gz

Abstract:

Methods for computing value-at-risk gradients with respect to portfolio positions have many applications. They include the calculation of capital/reward efficient frontiers, the hedging of derivative portfolios and optimal replication. We present a new algorithm for computing value-at-risk and its gradients. If the return can be decomposed as a sum of independent portfolio marginals, the pay-off distribution can be computed with multiple convolutions. This principal-component-type decomposition is also useful for calibrating fat-tailed distributions. We conclude with two applications of hedging with value-at-risk.


Title: Some Recent Advances in Validated Methods for IVPs for ODEs
Author:N. S. Nedialkov and K. R. Jackson
Date: January 2001
Pages: 7
Note: To appear in the Proceedings of the International Minisymposium on Mathematical Modelling and Scientific Computations, 8-11 April 2001, Borovets, Bulgaria.
File: ned.bg.01.ps.gz

Abstract:

We present an overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs, such as an interval Hermite-Obreschkoff (IHO) scheme, the stability of ITS and IHO methods, and a new perspective on the wrapping effect.


Title: Rigorous Shadowing of Numerical Solutions of Ordinary Differential Equations by Containment
Author:Wayne B. Hayes
Date: January 2001
Pages: 87
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 2001.
File: hayes-01-phd.ps.gz

Abstract:

An exact trajectory of a dynamical system lying close to a numerical trajectory is called a shadow. We present a general-purpose method for proving the existence of finite-time shadows of numerical ODE integrations of arbitrary dimension in which some measure of hyperbolicity is present and there is at most one expanding mode or at most one contracting mode. Much of the rigor is provided automatically by interval arithmetic and validated ODE integration software that is freely available. The method is a generalization of a previously published containment process that was applicable only to two-dimensional maps. We extend it to handle maps of arbitrary dimension with the above restrictions, and finally to ODEs. The method involves building n-cubes around each point of the discrete numerical trajectory through which the shadow is guaranteed to pass at appropriate times. The proof consists of two steps: first, the rigorous computational verification of an inductive containment property; and second, a simple geometric argument showing that this property implies the existence of a shadow. The computational step is almost entirely automated and easily adaptable to any ODE problem. The method allows for the rescaling of time, which is a necessary ingredient for successfully shadowing ODEs. Finally, the method is local, in the sense that it builds the shadow inductively, requiring information only from the most recent integration step, rather than more global information typical of several other methods. The method produces shadows of comparable length and distance to all currently published results.


Title: Superconvergent Interpolants for Collocation Methods applied to mixed order BVODEs
Authors:W.H. Enright and R. Sivasothinathan
Date: 2000
Pages: 34
Note: A revised version of this paper appeared in ACM Trans. on Math. Soft., 26, 3, pp. 323-35.
File: er00.ps.gz

Abstract:

Continuous approximations to boundary value problems in ordinary differential equations (BVODEs), constructed using collocation at Gauss points, are more accurate at the mesh points than at off-mesh points. From these approximations, it is possible to construct improved continuous approximations by extending the high accuracy that is available at the mesh points to off-mesh points. One possibility is the Bootstrap approach, which improves the accuracy of the approximate solution at the off-mesh points in a sequence of steps until the accuracy at the mesh points and off-mesh points is consistent. A Bootstrap approach for systems of mixed order BVODEs is developed to improve approximate solutions produced by COLNEW, a Gauss collocation based software package. An implementation of this approach is discussed and numerical results presented which confirm that the improved approximations satisfy the predicted error bounds and are relatively inexpensive to construct.


Title: Continuous Numerical Methods for ODEs with Defect Control
Authors:W.H. Enright
Date: 2000
Pages: 16
Note: Journal of Computational and Applied Mathematics, 125 (2000), pp. 159-170.
File: edc01.ps.gz

Abstract:

Over the last decade several general purpose numerical methods for ordinary differential equations (ODEs) have been developed which generate a continuous piecewise polynomial approximation that is defined for all values of the independent variable in the range of interest. For such methods it is possible to introduce measures of the quality of the approximate solution based on how well the piecewise polynomial satisfies the ODE. This leads naturally to selection strategies designed to control the magnitude of the associated defect; such strategies can be very effective, and methods of this type are now being widely used. In this paper we will review the advantages of this class of numerical methods and present examples of how they can be effectively applied. We will focus on numerical methods for initial value problems (IVPs) and boundary value problems (BVPs), where most of the developments have been introduced, but we will also discuss the implications and related developments for other classes of ODEs such as delay differential equations (DDEs) and differential algebraic equations (DAEs).
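
In defect control, one monitors $\delta(t) = P'(t) - f(t, P(t))$, where $P$ is the continuous approximate solution. A minimal sketch of sampling the defect (our illustration; production codes evaluate it directly from the interpolant rather than by differencing):

    import numpy as np
    from scipy.integrate import solve_ivp

    f = lambda t, y: np.cos(t) * y                # sample IVP: y' = cos(t) y
    sol = solve_ivp(f, (0.0, 10.0), [1.0], dense_output=True, rtol=1e-6)

    t = np.linspace(0.1, 9.9, 200)                # sample points
    h = 1e-6                                      # finite-difference stepsize
    P = sol.sol
    dPdt = (P(t + h) - P(t - h)) / (2*h)          # P'(t) by central differences
    defect = dPdt - f(t, P(t))
    print(np.max(np.abs(defect)))                 # maximum sampled defect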


Title: Fast Fourier Transform Solvers and Preconditioners for Quadratic Spline Collocation
Authors:Christina C. Christara and Kit Sun Ng
Date: November 2000 (rev. October 2001, February 2002)
Pages: 33
Note: Dept. of Computer Science, University of Toronto, Technical Report 317/2000. A revised version to appear in BIT, 42:4 (December 2002).
File: ccc/precond.pdf, ccc/precond.ps.gz
DOI: 10.1023/A:1021944218806

Abstract:

Quadratic Spline Collocation (QSC) methods of optimal order of convergence have been recently developed for the solution of elliptic Partial Differential Equations (PDEs). In this paper, linear solvers based on Fast Fourier Transforms (FFT) are developed for the solution of the QSC equations. The complexity of the FFT solvers is O(N^2 log N), where N is the gridsize in one dimension. These direct solvers can handle PDEs with coefficients in one variable or constant, and Dirichlet, Neumann and periodic boundary conditions, along at least one direction of a rectangular domain. General variable coefficient PDEs are handled by preconditioned iterative solvers. The preconditioner is the QSC matrix arising from a constant coefficient PDE. The convergence analysis of the preconditioner is presented. It is shown that, under certain conditions, the convergence rate is independent of the gridsize. The preconditioner is solved by FFT techniques, and integrated with one-step or acceleration methods, giving rise to asymptotically almost optimal linear solvers, with complexity O(N^2 log N). Numerical experiments verify the effectiveness of the solvers and preconditioners, even on problems more general than the analysis assumes. The development and analysis of FFT solvers and preconditioners is extended to QSC equations corresponding to systems of elliptic PDEs.
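
The mechanism behind such FFT solvers is easiest to see on a model problem where the operator is diagonalized by Fourier modes (our generic illustration, not the QSC discretization itself):

    import numpy as np

    # Model problem: -u'' + u = f on [0, 2*pi) with periodic boundary conditions.
    N = 256
    x = 2*np.pi*np.arange(N)/N
    u_exact = np.sin(3*x) + 0.5*np.cos(5*x)
    f = 9*np.sin(3*x) + 12.5*np.cos(5*x) + u_exact   # f = -u'' + u

    k = np.fft.fftfreq(N, d=1.0/N)                   # integer wavenumbers
    u = np.real(np.fft.ifft(np.fft.fft(f) / (k**2 + 1)))
    print(np.max(np.abs(u - u_exact)))               # ~1e-14: direct FFT solve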


Title: A New Perspective on the Wrapping Effect in Interval Methods for Initial Value Problems for Ordinary Differential Equations
Authors: Nedialko S. Nedialkov and Kenneth R. Jackson
Date: November 2000
Pages: 46
Note: A slightly modified version of this paper is published in the book "Perspectives on Enclosure Methods", Ulrich Kulisch, Rudolf Lohner and Axel Facius, editors, Springer-Verlag, 2001, pp. 219-264.
File: ned.scan00.ps.gz

Abstract:

The problem of reducing the wrapping effect in interval methods for initial value problems for ordinary differential equations has usually been studied from a geometric point of view. We develop a new perspective on this problem by linking the wrapping effect to the stability of the interval method. Thus, reducing the wrapping effect is related to finding a more stable scheme for advancing the solution. This allows us to exploit eigenvalue techniques and to avoid the complicated geometric arguments used previously.

We study the stability of several anti-wrapping schemes, including Lohner's QR-factorization method, which we show can be viewed as a simultaneous iteration for computing the eigenvalues of an associated matrix. Using this connection, we explain how Lohner's method improves the stability of an interval method and show that, for a large class of problems, its global error is not much bigger than that of the corresponding point method.
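
The wrapping effect itself is easy to demonstrate (our sketch): rotating a box by 45 degrees and re-enclosing the image in an axis-aligned box inflates the enclosure by a factor of $\sqrt{2}$ per step, even though rotation preserves size exactly:

    import numpy as np

    theta = np.pi/4
    Q = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    half_width = np.array([1.0, 1.0])     # the box [-1,1]^2
    for step in range(5):
        # Interval hull of the rotated box: |Q| applied to the half-widths.
        half_width = np.abs(Q) @ half_width
        print(step + 1, half_width)       # grows by sqrt(2) each step

Anti-wrapping schemes such as Lohner's QR method choose a better-adapted local coordinate system in which to enclose the set, which is the choice the stability analysis above makes precise.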


Title: Some Recent Advances in Validated Methods for IVPs for ODEs
Authors: Kenneth R. Jackson and Nedialko S. Nedialkov
Date: Revised August 2001 (original report November 2000)
Pages: 23
Note: Accepted December 2001 for publication in the Journal of Applied Numerical Mathematics.
File: numdiff9a.ps.gz

Abstract:

Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods (often called interval methods) for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced.

We present a brief overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs. In particular, we discuss an interval Hermite-Obreschkoff (IHO) scheme for computing rigorous bounds on the solution of an IVP for an ODE, the stability of ITS and IHO methods, and a new perspective on the wrapping effect, where we interpret the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution.


Title: PMIRKDC: a Parallel Mono-Implicit Runge-Kutta Code with Defect Control for Boundary Value ODEs
Authors: P.H. Muir, R.N. Pancer, K.R. Jackson
Date: Original Draft September 2000; Revised January 2003.
Pages: 34
Note: A slightly modified version of this paper appeared in Parallel Computing, Volume 29, Issue 6, June 2003, Pages 711-741.
The following report is a longer version of this paper.
File: parallel-mirkdc.ps.gz

Abstract:

We describe parallel software, PMIRKDC, for solving boundary value ordinary differential equations (BVODEs). This software is based on the package, MIRKDC, which employs mono-implicit Runge-Kutta schemes within a defect control algorithm. The primary computational costs involve the treatment of large, almost block diagonal (ABD) linear systems. The most significant feature of PMIRKDC is the replacement of sequential ABD software, COLROW, with new parallel ABD software, RSCALE, based on a parallel block eigenvalue rescaling algorithm. Other modifications involve parallelization of the setup of the ABD systems and solution interpolants, and defect estimation. Numerical results show almost linear speedups.


Title: Runge-Kutta Software for the Parallel Solution of Boundary Value ODEs
Authors: P.H. Muir, R.N. Pancer, K.R. Jackson
Date: August 2000
Pages: 46
Note: The previous paper is a shorter version of this technical report.
File: parallel-mirkdc-tr.ps.gz

Abstract:

In this paper we describe the development of parallel software for the numerical solution of boundary value ordinary differential equations (BVODEs). The software, implemented on two shared memory, parallel architectures, is based on a modification of the MIRKDC package, which employs discrete and continuous mono-implicit Runge-Kutta schemes within a defect control algorithm. The primary computational costs are associated with the almost block diagonal (ABD) linear systems representing the Newton matrices arising from the iterative solution of the nonlinear algebraic systems which result from the discretization of the ODEs. The most significant modification featured in the parallel version of the code is the replacement of the sequential ABD linear system software, COLROW, which employs alternating row and column elimination, with new parallel ABD linear system software, RSCALE, which is based on a recently developed parallel block eigenvalue rescaling algorithm. Other modifications are associated with the parallelization of the setup of the ABD systems, the setup of the approximate solution interpolants, and the estimation of the defect. The numerical results show that nearly optimal speedups can be obtained, and that substantial speedups in overall solution time are achieved, compared with the sequential version of MIRKDC.


Title: Quadratic Spline Collocation Methods for Systems of Elliptic PDEs
Author: Kit Sun Ng
Date: April 2000
Pages: 67
Note: M.Sc. Thesis, Computer Science Department, University of Toronto, 2000.
File: ccc/ngkit-00-msc.ps.gz

Abstract:
We consider Quadratic Spline Collocation (QSC) methods for solving systems of two linear second-order PDEs in two dimensions. Optimal order approximation to the solution is obtained, in the sense that the convergence order of the QSC approximation is the same as the order of the quadratic spline interpolant.

We study the matrix properties of the linear system arising from the discretization of systems of two PDEs by QSC. We give sufficient conditions under which the QSC linear system is uniquely solvable and the optimal order of convergence for the QSC approximation is guaranteed. We develop fast direct solvers based on Fast Fourier Transforms (FFTs) and iterative methods using multigrid or FFT preconditioners for solving the above linear system.

Numerical results demonstrate that the QSC methods are fourth order locally at certain points and third order globally, and that the computational complexity of the linear solvers developed is almost asymptotically optimal. The QSC methods are compared to conventional second order discretization methods and are shown to produce smaller approximation errors in the same computation time, while they achieve the same accuracy in less time.


Title: An Effective High-Order Interval Method for Validating Existence and Uniqueness of the Solution of an IVP for an ODE
Authors: N. S. Nedialkov, K. R. Jackson and J. D. Pryce
Date: October 2000. (This is a revision of our original draft of December 1999.)
Pages: 17
Note: Reliable Computing, Vol. 7, Issue 6, Nov. 2001, pp. 449-465.
File: stepsize.99a.ps.gz

Abstract:

Validated methods for initial value problems for ordinary differential equations produce bounds that are guaranteed to contain the true solution of a problem. When computing such bounds, these methods verify that a unique solution to the problem exists in the interval of integration and compute a priori bounds for the solution in this interval. A major difficulty in this verification phase is how to take as large a stepsize as possible, subject to some tolerance requirement. We propose a high-order enclosure method for proving existence and uniqueness of the solution and computing a priori bounds.


Title: Computing Rigorous Bounds on the Solution of an Initial Value Problem for an Ordinary Differential Equation
Authors: Nedialko (Ned) Stoyanov Nedialkov
Date: February 1999
Pages: 149
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 1999.
File: ned-99-phd.ps.gz

Abstract:

Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated (also called interval) methods for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced.

To date, the only effective approach for computing guaranteed enclosures of the solution of an IVP for an ODE has been interval methods based on Taylor series. This thesis derives a new approach, an interval Hermite-Obreschkoff (IHO) method, for computing such enclosures.

Compared to interval Taylor series (ITS) methods, for the same order and stepsize, our IHO scheme has a smaller truncation error and better stability. As a result, the IHO method allows larger stepsizes than the corresponding ITS methods, thus saving computation time. In addition, since fewer Taylor coefficients are required by IHO than ITS methods, the IHO method performs better than the ITS methods when the function for computing the right side contains many terms.

The stability properties of the ITS and IHO methods are investigated. We show as an important by-product of this analysis that the stability of an interval method is determined not only by the stability function of the underlying formula, as in a standard method for an IVP for an ODE, but also by the associated formula for the truncation error.

This thesis also proposes a Taylor series method for validating existence and uniqueness of the solution, a simple stepsize control, and a program structure appropriate for a large class of validated ODE solvers.


Title: An Efficient Transposition Algorithm for Distributed Memory Computers
Authors: C. C. Christara, X. Ding and K. R. Jackson
Date: October 1999. (This is a revision of our drafts of January and April 1999.)
Pages: 19
Note: Accepted for publication in the proceedings of the 13th Annual International Symposium on High Performance Computing Systems and Applications (HPCS'99), Queen's University, Kingston, Ontario, Canada, 13-16 June 1999. Appeared as Chapter 38 in High Performance Computing Systems and Applications 1999, A. Pollard, D. J. K. Mewhort and D. F. Weaver eds. pp. 349-368, Kluwer Academic Publishers.
File: transpose.99b.ps.gz

Abstract:
Data transposition is required in many numerical applications. When implemented on a distributed-memory computer, data transposition requires all-to-all communication, a time-consuming operation. The Direct Exchange algorithm, commonly used for this task, is inefficient if the number of processors is large. We investigate a series of more sophisticated techniques: the Ring Exchange, Mesh Exchange and Cube Exchange algorithms. These data transposition schemes were incorporated into a parallel solver for the shallow-water equations. We compare the performance of these schemes with that of the Direct Exchange algorithm and the MPI all-to-all communication routine, MPI_AllToAll. The numerical experiments were performed on a Cray T3E computer with 512 processors and on an ethernet-connected cluster of 36 Sun workstations. Both the analysis and the numerical results indicate that the more sophisticated Mesh and Cube Exchange algorithms perform better than either the simpler well-known Direct and Ring Exchange schemes or the MPI_AllToAll routine. We also generalize the Mesh and Cube Exchange algorithms to a d-dimensional mesh algorithm, which can be viewed as a generalization of the standard hypercube data transposition algorithm.
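
For orientation, the all-to-all exchange at the heart of these schemes can be expressed directly with MPI; a minimal mpi4py sketch of the transposition pattern (our illustration, corresponding to the MPI_AllToAll baseline, not the Mesh or Cube variants; run under an MPI launcher, e.g. mpiexec -n 4):

    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    p, rank = comm.Get_size(), comm.Get_rank()
    n = 8 * p                         # global n x n array, n divisible by p
    # This process owns n/p full rows of the global array.
    rows = np.arange(rank*(n//p)*n, (rank+1)*(n//p)*n,
                     dtype='d').reshape(n//p, n)

    # Split the local rows into p column blocks, one per destination process.
    send = np.ascontiguousarray(rows.reshape(n//p, p, n//p).swapaxes(0, 1))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)         # recv[j] holds this rank's column block
                                      # of process j's rows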


Title: ODE Software that Computes Guaranteed Bounds on the Solution
Authors: N. S. Nedialkov and K. R. Jackson
Date: Original draft December 1998; Revised March 1999.
Pages: 31
Note: Accepted as a chapter in the book "Advances in Software Tools for Scientific Computing", edited by H. P. Langtangen, A. M. Bruaset and E. Quak, Springer-Verlag, 1999.
The title of the original draft was "Software Issues in Validated ODE Solving".
File: SciTools.98.ps.gz

Abstract:
Validated methods for initial value problems (IVPs) for ordinary differential equations (ODEs) produce bounds that are guaranteed to contain the true solution of a problem. In this chapter, we briefly survey validated methods for IVPs for ODEs, discuss software issues related to the implementation of a validated ODE solver, and describe the structure of a package for computing rigorous bounds on the solution of an IVP for an ODE.


Title: An Interval Hermite-Obreschkoff Method for Computing Rigorous Bounds on the Solution of an Initial Value Problem for an Ordinary Differential Equation
Authors: N. S. Nedialkov and K. R. Jackson
Date: February 1999 (original draft October 1998)
Pages: 22
Note: Appeared in Reliable Computing, 5 (1999), pp. 289-310 and as a chapter in the book Developments in Reliable Computing, T. Csendes, editor, pages 289-310, Kluwer, Dordrecht, Netherlands, 1999.
File: ned.scan98.ps.gz

Abstract:
To date, the only effective approach for computing guaranteed bounds on the solution of an initial value problem (IVP) for an ordinary differential equation (ODE) has been interval methods based on Taylor series. This paper derives a new approach, an interval Hermite-Obreschkoff (IHO) method, for computing such enclosures. Compared to interval Taylor series (ITS) methods, for the same stepsize and order, our IHO scheme has a smaller truncation error, better stability, and requires fewer Taylor coefficients and high-order Jacobians.

The stability properties of the ITS and IHO methods are investigated. We show as an important by-product of this analysis that the stability of an interval method is determined not only by the stability function of the underlying formula, as in a standard method for an IVP for an ODE, but also by the associated formula for the truncation error.


Title: Numerical Solution of the Shallow-Water Equations on Distributed Memory Systems
Author: Xiaoliang (Lloyd) Ding
Date: October 1998
Pages: 73
Note: M.Sc. Thesis, Computer Science Department, University of Toronto, 1998.
File: ding.98.msc.ps.gz

Abstract:
The shallow-water equations are often used as a mathematical model when numerical methods for solving weather or climate prediction problems are tested. This thesis studies the performance and scalability of numerical methods for the shallow-water equations on distributed memory systems. Time integration of the numerical methods is based on a two-time-level semi-implicit semi-Lagrangian scheme. To solve the Helmholtz problem that arises at each time-step, a fast direct solver based on FFTs is used. This technique requires a data transposition, which is a time-consuming operation on a distributed memory system.

The data transposition requires an all-to-all type communication. The Direct Exchange algorithm, commonly used for this task, is inefficient if the number of processors is large. We investigate a series of more sophisticated techniques: the Ring Exchange, Mesh Exchange and Cube Exchange algorithms. We compare the performance of these techniques with that of the Message Passing Interface collective communication routine. We perform an isoefficiency analysis to measure the scalability of the parallel system. The numerical experiments are carried out on a Cray T3E with 512 processors and on an ethernet-connected cluster of 36 workstations.

Both the analysis and our numerical results indicate that the Cube Exchange and Mesh Exchange algorithms perform better than the other methods. However, even with the more sophisticated techniques, the parallel algorithm for the shallow-water equations remains essentially unscalable.


Title: A Numerical Study of One-Factor Interest Rate Models
Author: Zhong Ge
Date: January 1998
Pages: 95
Note: M.Sc. Thesis, Computer Science Department, University of Toronto, 1998.
File: ge.98.msc.ps.gz

Abstract:
This thesis is concerned with a numerical study of one-factor interest rate models. First, the basics of option pricing theory are outlined, together with a simple derivation of the Heath-Jarrow-Morton (HJM) models. Then we discuss the term-structure-consistent models and their relation to the HJM models. We show that adaptive tree methods, namely, tree methods that include adaptive branching rules to incorporate the mean-reverting properties of the interest rates, are very stable and efficient. We apply the tree methods to five models: the Black-Karasinski model, the Cox-Ingersoll-Ross (CIR) model, the cubic-variance model, the Ho-Lee model and the Hull-White model. Using market data from the US market near the end of June 1997, we test and compare the five models. Our results show that all the models except that of Ho-Lee produce similar prices for at-the-money or in-the-money options, but give very different prices for deep-out-of-the-money options. In particular, the more sensitive the volatility of a model is to the level of the interest rates, the higher the prices the model produces for deep-out-of-the-money put options. Finally, we look at the issue of negative interest rates in the Hull-White model; our results suggest that the effect is insignificant for today's US market.


Title: Distributed Parallel Shooting for BVODEs
Authors: Kwok L. Chow and Wayne H. Enright
Date: November 1997
Pages: 57
Note: A revised version of this paper will appear in the ASTC High Performance Computing 1998
File: cs-97-310.ps.gz

Abstract:
Many important scientific problems can be formulated as systems of ordinary differential equations with two-point boundary value constraints (BVODE). Multiple shooting is one of the most widely used numerical techniques for solving BVODE problems. In this work, we present a new distributed parallel numerical algorithm for BVODEs which is based on multiple shooting. We investigate the numerical stability of this new distributed algorithm and identify difficulties that can arise. We propose a new parallel iterative refinement scheme to cope with some specific numerical difficulties identified in our investigation. Computational experience is presented to demonstrate the potential effectiveness of our approach.
Keywords: Distributed parallel algorithm, boundary value problems, multiple shooting


Title: Fitting Random Stable Solar Systems to Titius-Bode Law
Authors: Wayne Hayes and Scott Tremaine
Date: October 1997
Pages: 12
Note: Submitted to ICARUS.
File: bode.97.ps.gz

Abstract:
Simple "solar systems" are generated with planetary orbital radii r distributed uniformly random in log(r) between 0.2 and 50 AU. A conservative stability criterion is imposed by requiring that adjacent planets are separated by a minimum distance of k Hill radii, for values of k ranging from 1 to 8. Least-squares fits of these systems to generalized Bode laws are performed, and compared to the fit of our own Solar System. We find that this stability criterion, and other "radius-exclusion" laws, generally produce approximately geometrically spaced planets that fit a Titius-Bode law about as well as our own Solar System. We then allow the random systems the same exceptions that have historically been applied to our own Solar System. Namely, one gap may be inserted, similar to the gap between Mars and Jupiter, and up to 3 planets may be "ignored", similar to how some forms of Bode's law ignore Mercury, Neptune, and Pluto. With these particular exceptions, we find that our Solar System fits significantly better than the random ones. However, we believe that this choice of exceptions, designed specifically to give our own Solar System a better fit, gives it an unfair advantage that would be lost if other exception rules were used. We conclude that the significance of Bode's law is simply that stable planetary systems tend to be regularly spaced; this conclusion could be strengthened by the use of more stringent methods of rejecting unstable solar systems, such as long-term orbit integrations.


Title: Superconvergent Interpolants for the Collocation Solution of BVODEs
Authors: W. H. Enright and P. H. Muir
Date: October 21, 1997
Pages: 42
Note: Address of P. H. Muir:
Department of Mathematics and Computing Science
Saint Mary's University, Halifax, Nova Scotia, Canada B3H 3C3
Paul.Muir@stmarys.ca
File: colintrp97.ps.gz

Abstract:
A long-standing open question associated with the use of collocation methods for boundary value ordinary differential equations is concerned with the development of a high order continuous solution approximation to augment the high order discrete solution approximation, obtained at the mesh points which subdivide the problem interval. It is well-known that the use of collocation at Gauss points leads to solution approximations at the mesh points for which the global error is O(h^{2k}), where k is the number of collocation points used per subinterval and h is the subinterval size. The collocation solution also yields a C^0 continuous solution approximation that has a global error of O(h^{k+1}). The discrete solution is thus said to be superconvergent. In this paper, we show how to augment the superconvergent discrete collocation solution to obtain efficient C^1 continuous "superconvergent" interpolants, whose global errors are O(h^{2k}). The key ideas are to use the theoretical framework of continuous Runge-Kutta schemes and to augment the collocation solution with inexpensive mono-implicit Runge-Kutta stages. Specific schemes are derived for k = 1,2,3, and 4. Numerical results are provided to support the theoretical analysis.
Keywords: Collocation, Runge-Kutta methods, boundary value ordinary differential equations, interpolation.
AMS Subject Classifications: 65L05, 65L10


Title: The Parallel Solution of Almost Block Diagonal Systems Arising in Numerical Methods for BVPs for ODEs
Authors: R. N. Pancer and K. R. Jackson
Date: March 1997
Pages: 6
Note: In the Proceedings of the Fifteenth IMACS World Congress, Berlin, Germany, 1997, Vol. 2, pp. 57-62.
File: imacs.97.ps.gz

Abstract:
Three parallel algorithms are presented for solving the Almost Block Diagonal (ABD) systems of linear algebraic equations that arise during the numerical solution of Boundary Value Problems (BVPs) for Ordinary Differential Equations (ODEs). Until recently, the solution of these systems has proven to be the execution-time bottleneck in the parallel implementation of most BVP codes. Several numerical examples are given comparing the three algorithms with respect to accuracy and speed.


Title: Multigrid and Multilevel Methods for Quadratic Spline Collocation
Authors: Christina C. Christara and Barry Smith
Date: February 1997
Pages: 21
Note: Appeared in BIT 37:4 (1997), pp. 781-803.
File: multign.pdf, multign.ps.gz
DOI: 10.1007/BF02510352

Abstract:
Multigrid methods are developed and analyzed for quadratic spline collocation equations arising from the discretization of one-dimensional second-order differential equations. The rate of convergence of the two-grid method integrated with a damped Richardson relaxation scheme as smoother is shown to be faster than 1/2, independently of the step-size. The additive multilevel versions of the algorithms are also analyzed. The development of quadratic spline collocation multigrid methods is extended to two-dimensional elliptic partial differential equations. Multigrid methods for quadratic spline collocation are not straightforward: because the basis functions used with quadratic spline collocation are not nodal basis functions, the design of efficient restriction and extension operators is nontrivial. Experimental results, with V-cycle and full multigrid, indicate that a suitably chosen multigrid iteration is a very efficient solver for the quadratic spline collocation equations.


Title: Validated Solutions of Initial Value Problems for Ordinary Differential Equations
Authors: N. S. Nedialkov, K. R. Jackson and G. F. Corliss
Date: February 1997
Pages: 40
Note: A slightly revised version of this paper appeared in the journal Applied Mathematics and Computation, 105 (1999), pp. 21-68.
File: vsode.97.ps.gz

Abstract:
Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced. The authors survey Taylor series methods for validated solutions of IVPs for ODEs, describe several such methods in a common framework, and identify areas for future research.


Title: Scientific Computing by Numerical Methods
Author(s): C. C. Christara and K. R. Jackson
Date: 1996
Pages: 121
File: survey.pdf
Note: A revised version of this article appeared in Encyclopaedia of Applied Physics, Vol 17, 1996, pp. 1-79, and in Mathematical Tools for Physicists, 2005, pp. 281-383.
Abstract:
Numerical methods are an indispensable tool in solving many problems that arise in applied physics. In this article, we briefly survey a few of the most common mathematical problems and review some numerical methods to solve them.
As can be seen from the table of contents, the topics covered in this survey are those that appear in most introductory numerical methods books. However, our discussion of each topic is more brief than is normally the case in such texts and we frequently provide references to more advanced topics that are not usually considered in introductory books.
We originally intended to include a section on the computation of eigenvalues and eigenvectors of matrices, but, because of time and space constraints, we omitted it. The interested reader should consult an introductory numerical methods book, such as those listed in the last section, or an advanced text, such as \cite{Golub-Van-Loan, Parlett, Saad, Stewart, Wilkinson}.


Title: Global Error Measures for Large N-body Simulations
Authors: Wayne B. Hayes and Kenneth R. Jackson
Date: October 1996
Pages: 3
Note: Accepted for publication in the Proceedings of the 12th `Kingston meeting' on Theoretical Astrophysics: Computational Astrophysics, ASP Conference Series, Vol. ??, eds. D. A. Clarke & M. J. West, (San Francisco: ASP).
File: shadowing-96b.ps.gz

Abstract:
Chaotic systems like the N-body problem display "sensitive dependence on initial conditions". This means that a small perturbation to a solution produces a new solution that diverges in phase space exponentially with time from the original solution. Since numerical errors introduce small perturbations, numerical solutions generally diverge exponentially from the true solution with the same initial conditions. However, sometimes there exists a true solution with slightly different initial conditions that remains close to the numerical solution. Such true solutions are called "shadows", and their existence implies that the numerical simulation may be dynamically valid in a very strong sense: the sense that the numerical solution closely follows some true solution. This method of global error analysis was initially studied in the context of gravitational N-body systems by Quinlan and Tremaine in 1992, who studied a simplified system with one particle moving amongst 99 fixed ones. The present work extends theirs by studying shadowing for more realistic systems with more moving particles. Preliminary results seem to indicate that the shadowing property may hold better for collisionless systems than collisional ones, but results are still far from conclusive.


Title: DIMSEMs -- Diagonally Implicit Single-Eigenvalue Methods for the Numerical Solution of Stiff ODEs on Parallel Computers
Authors: R. F. Enenkel and K. R. Jackson
Date: September 1996
Pages: 32
Note: A slightly revised version of this paper appeared in the journal Advances in Computational Mathematics, 7 (1997), pp. 97-133.
File: dimsem.96.ps.gz

Abstract:
Our new general linear methods (GLMs) have several properties suited for the solution of stiff ODEs on parallel computers.

  1. They are strictly diagonally implicit, allowing parallelism in the Newton iteration used to solve the nonlinear equations arising from the implicitness of the formula.
  2. Only one eigenvalue of the stability matrix is non-zero, resulting in a solution free from contamination from spurious solutions corresponding to non-dominant, non-zero eigenvalues.
  3. The methods have high stage order, avoiding the phenomenon of order reduction that occurs, for example, with some Runge-Kutta methods.
  4. The methods are L-stable, making them suitable for stiff problems.
The paper presents order bounds and derives DIMSEMs of orders 2-6, as well as a non-DIMSEM, L-stable class of diagonal methods of all orders. A fixed-order, variable-stepsize implementation of the DIMSEMs is described, including the derivation of local error estimators, and test results on both sequential and parallel computers are presented. The testing shows the DIMSEMs to be competitive with fixed-order versions of the popular solver LSODE on a practical test problem.
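
Background for properties 2 and 4, in standard GLM notation (not specific to this report): applied to the test equation $y' = \lambda y$ with $z = h\lambda$, a GLM with coefficient matrices $(A, U, B, V)$ propagates its vector of approximations by $y^{[n]} = M(z)\, y^{[n-1]}$, where the stability matrix is $M(z) = V + zB(I - zA)^{-1}U$. The single-eigenvalue property means $M(z)$ has exactly one nonzero eigenvalue, so no spurious solution components are propagated, and L-stability additionally requires that this eigenvalue remain bounded by one in the left half-plane and tend to zero as $z \to -\infty$.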


Title: Remarks on the optimal convolution kernel for CSOR waveform relaxation
Authors: Min Hu, Ken Jackson, Jan Janssen and Stefan Vandewalle
Date: August 1996
Pages: 24
Note: A slightly revised version of this paper appeared in the journal Advances in Computational Mathematics, 7 (1997), pp. 135-156.
File: csor.96.ps.gz

Abstract:
The convolution SOR waveform relaxation method is a numerical method for solving large-scale systems of ordinary differential equations on parallel computers. It is similar in spirit to the SOR acceleration method for solving linear systems of algebraic equations, but replaces the multiplication with an overrelaxation parameter by a convolution with a time-dependent overrelaxation function. Its convergence depends strongly on the particular choice of this function. In this paper, an analytic expression is presented for the optimal continuous-time convolution kernel and its relation to the optimal kernel for the discrete-time iteration is derived. It is investigated whether this analytic expression can be used in actual computations. Also, the validity of the formulae that are currently used to determine the optimal continuous-time and discrete-time kernels is extended towards a larger class of ODE systems.
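
In standard notation (for orientation; not the paper's formulas verbatim): pointwise SOR updates $x_i^{(k+1)} = x_i^{(k)} + \omega\,(\hat{x}_i^{(k+1)} - x_i^{(k)})$ with a scalar parameter $\omega$, whereas convolution SOR waveform relaxation updates entire waveforms, $x_i^{(k+1)}(t) = x_i^{(k)}(t) + \int_0^t \Omega(t - s)\,(\hat{x}_i^{(k+1)}(s) - x_i^{(k)}(s))\,ds$, where $\hat{x}_i^{(k+1)}$ denotes the Gauss-Seidel-type intermediate update. The kernel $\Omega$ plays the role of $\omega$, and the paper concerns its optimal choice.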


Title: Fast Fourier transform solvers for quadratic spline collocation
Author: A. Constas
Date: July 1996
Pages: 64
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 1996.
File: ccc/constas-96-msc.ps.gz

Abstract:
We study the solution, by solvers employing Fast Fourier Transforms (FFTs), of the linear systems arising from the discretisation of second order elliptic Boundary Value Problems (BVPs) defined on rectangular domains. The types of boundary conditions considered are homogeneous Dirichlet, periodic, homogeneous Neumann, or combinations of these. Two-dimensional and three-dimensional problems are considered. Quadratic spline collocation methods are used for the discretisation of the BVPs. The FFT solvers are first developed for differential operators with constant coefficients and even order derivatives. Diagonal scalings of these solvers are then used as preconditioners for the solution of linear systems arising from the discretisation of general second order linear elliptic BVPs. Several acceleration methods are tested. Results from numerical experiments that demonstrate the asymptotically optimal convergence and computational performance of the FFT solvers are presented.
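
The core mechanism can be shown in a few lines (a minimal sketch for the periodic, constant-coefficient case; not the thesis's code): a constant-coefficient discretisation on a periodic grid yields a circulant matrix, which the FFT diagonalises, so the system is solved in $O(n \log n)$ operations.

    # Minimal sketch (periodic, constant-coefficient case; not thesis code):
    # a circulant system C x = b is diagonalised by the FFT, so
    # x = ifft(fft(b) / fft(c)), where c is the first column of C.
    import numpy as np

    n = 64
    h = 1.0 / n
    c = np.zeros(n)                            # first column of C for -u'' (periodic)
    c[0], c[1], c[-1] = 2.0 / h**2, -1.0 / h**2, -1.0 / h**2
    b = np.sin(2 * np.pi * h * np.arange(n))   # zero-mean right-hand side
    lam = np.fft.fft(c)                        # eigenvalues of the circulant matrix
    lam[0] = 1.0                               # the constant mode has eigenvalue 0;
                                               # harmless here since b has zero mean
    x = np.fft.ifft(np.fft.fft(b) / lam).real  # solve C x = b in O(n log n)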


Title: DIMSEMs -- Diagonally Implicit Single-Eigenvalue Methods for the Numerical Solution of Stiff ODEs on Parallel Computers
Author: Robert F. Enenkel
Date: 1996
Pages: 199
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 1996.
File: enenkel-96-phd.ps.gz

Abstract:
Our new general linear methods (GLMs) have several properties suited for the solution of stiff ODEs on parallel computers.

  1. They are strictly diagonally implicit, allowing parallelism in the Newton iteration used to solve the nonlinear equations arising from the implicitness of the formula.
  2. Only one eigenvalue of the stability matrix is non-zero, resulting in a solution free from contamination from spurious solutions corresponding to non-dominant, non-zero eigenvalues.
  3. The methods have high stage order, avoiding the phenomenon of order reduction that occurs, for example, with some Runge-Kutta methods.
  4. The methods are L-stable, making them suitable for stiff problems.
The thesis presents order bounds and derives DIMSEMs of orders 2-6, as well as a non-DIMSEM, L-stable class of diagonal methods of all orders. A fixed-order, variable-stepsize implementation of the DIMSEMs is described, including the derivation of local error estimators, and test results on both sequential and parallel computers are presented. The testing shows the DIMSEMs to be competitive with fixed-order versions of the popular solver LSODE on a practical test problem.


Title: Fast Shadowing Algorithm for High Dimensional ODE Systems
Author: Wayne Hayes and Kenneth R. Jackson
Date: March 1996
Pages: 22
File: shadowing-96a.ps.gz

Abstract:
A true trajectory of a chaotic dynamical system lying near an approximate trajectory is called a shadow. Finding shadows of numerical trajectories of ODE systems is very compute-intensive, and until recently it has been infeasible to study shadows of higher-dimensional systems. We study the shadowing algorithm introduced by Grebogi, Hammel, Yorke and Sauer in 1990 and extended to arbitrary Hamiltonian systems by Quinlan and Tremaine in 1992, and introduce several major optimizations resulting in speedups of over 100. This algorithm is used to shadow gravitational N-body systems with up to 150 phase-space dimensions.


Conference: Computational Finance Workshop
Organizers: Peter Forsyth, Computer Science, University of Waterloo
Ken Jackson, Computer Science, University of Toronto
Date: Thursday, May 2, 1996
Note: Files associated with the Computational Finance Workshop held at the University of Toronto, May 2, 1996
Directory: comp.fin.workshop
File: comp.fin.workshop/agenda

Theme:
Modern models in finance rarely admit analytical solutions. This workshop focused on computational methods for obtaining approximate solutions to financial models, with applications to areas such as option pricing, portfolio selection, and risk management. Ten speakers from banks, consulting companies, software companies and universities presented their work on new models and numerical methods, including schemes based on Monte Carlo, lattices, and partial differential equations. The workshop was intended to increase awareness of advances in computational finance and to discover opportunities to improve practices or undertake further research.


Title: Modeling Path Dependent Options
Author: Yidong Liu
Date: May 1996
Pages: 20
Note: Computational Finance Workshop, University of Toronto, May 2, 1996
File: comp.fin.workshop/liu.ps.gz


Title: PDE Models of the Term Structure: Matching Yield and Volatility Curves
Author: Ken Vetzal
Date: May 1996
Pages: 17
Note: Computational Finance Workshop, University of Toronto, May 2, 1996
File: comp.fin.workshop/vetzal.ps.gz


Title: Asset Allocation with Transaction Costs
Author: Hailiang Yang
Date: May 1996
Pages: 22
Note: Computational Finance Workshop, University of Toronto, May 2, 1996
File: comp.fin.workshop/yang.ps.gz


Title: Robust Numerical Methods for PDE Models of Asian Options
Author: Rob Zvan
Date: May 1996
Pages: 14 (zvan.1.ps) and 8 (zvan.2.ps)
Note: Computational Finance Workshop, University of Toronto, May 2, 1996
Files: comp.fin.workshop/zvan.1.ps.gz and comp.fin.workshop/zvan.2.ps.gz


Title: Numerical Pricing of Path-Dependent Options
Author: Yidong Liu
Date: April 1996
Pages: 68
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 1996.
File: liu-96-msc.ps.gz

Abstract:
Option contracts have become increasingly important in the field of finance since they possess characteristics that are attractive to both speculators and hedgers. One important problem is determining the "fair value" of an option efficiently and accurately.

In this thesis we first review basic option pricing theory and several popular numerical methods for option pricing. After discussing the analytic Black-Scholes formula for option prices, we focus on numerical methods for option pricing, including Monte Carlo methods, Tree methods and Finite Difference methods.

The main result of this thesis is an extension of the Hull-White tree method for pricing Asian Options. The original Hull-White method is applicable to options in which the averaging period is taken from the beginning to the end of the life of the option. We relax this restriction to any averaging period that ends with the life of the option. The Hull-White tree method is a special case of our method.
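
For context, in standard notation (not the thesis's): an arithmetic-average Asian call pays $\max(A(T) - K,\, 0)$, where $A(T) = \frac{1}{T - t_s}\int_{t_s}^{T} S(u)\,du$ is the average of the asset price over the averaging window. The original Hull-White method assumes the window is the whole life of the option ($t_s = t_0$); the extension described here allows any window $[t_s, T]$ with $t_0 \leq t_s < T$, i.e., any averaging period that ends with the life of the option.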


Title: Runge-Kutta Research at Toronto
Authors: T. E. Hull, W. H. Enright and K. R. Jackson
Date: May 1996 (replaces an earlier draft of March 1996)
Pages: 16
Note: A slightly revised version of this paper appeared in the journal Applied Numerical Mathematics, 22 (1996), pp. 225-236.
File: rk.survey.96.ps.gz

Abstract:
The main purpose of this paper is to summarize the work on Runge-Kutta methods at the University of Toronto during the period 1963 to the present. To provide some background, brief mention is also made of related work on the numerical solution of ordinary differential equations, but, with just a few exceptions, specific references are given only if the referenced material has a direct bearing on Runge-Kutta methods and their application to a variety of problem areas.

There are several main themes. New Runge-Kutta formulas and new error control strategies are developed, leading, for example, to continuous methods and to new application areas such as delay, differential-algebraic and boundary value problems. Software design and implementation are also emphasized, as is the importance of careful testing and comparison. Other topics, such as the notion of effectiveness, taking advantage of parallelism, and the handling of discontinuities, are also discussed.


Title: Improving the Efficiency of Runge-Kutta Methods for the Solution of BVPs for Higher-Order ODEs
Author: Khalid Zuberi
Date: January 1996
Pages: 96
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 1996.
File: zuberi-96-msc.ps.gz

Abstract:
Runge-Kutta methods are often used to solve two-point boundary value problems (BVPs) for ordinary differential equations (ODEs). These methods are usually applied directly to first-order systems; to solve problems for higher-order ODEs, the differential equations are often reduced to an equivalent first-order system. We exploit the structure of the reduced system to construct an approximate Jacobian matrix that is significantly cheaper to form for general higher-order BVPs. Second-order ODEs are examined in detail, and a specialised linear system solver is suggested to provide additional computational savings. Numerical results on sample BVPs for second-order ODEs show significant computational savings with no loss of solution accuracy.
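
To see the structure being exploited (a standard reduction, not the thesis's notation): the second-order system $y'' = f(x, y, y')$ with $u_1 = y$ and $u_2 = y'$ becomes $u_1' = u_2$, $u_2' = f(x, u_1, u_2)$. The Jacobian of the right-hand side then has a fixed block pattern: a zero block and an identity block in its top block row, with only the bottom blocks $\partial f/\partial y$ and $\partial f/\partial y'$ depending on $f$, so much of the Jacobian never needs to be recomputed.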


Title: Numerical Solution of Retarded and Neutral Delay Differential Equations using Continuous Runge-Kutta Methods
Author: Hiroshi Hayashi
Date: 1996
Pages: 78
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 1996.
File: hayashi-96-phd.ps.gz

Abstract:
In this thesis, we present a numerical method for solving delay differential equations (DDEs), together with an analysis of the numerical solution. The method adapts recently developed techniques for initial value ordinary differential equations and introduces a new approach to handle vanishing delays. We determine convergence properties of the numerical solution associated with our method, first for retarded DDEs, and then extend the analysis to neutral DDEs. We have developed an experimental Fortran code as a modified version of DVERK, based on a 6th-order continuous Runge-Kutta formula. An implementation of our method is described and numerical results for various kinds of DDEs are presented.


Title: Interpolation and Error Control Schemes for Algebraic Differential Equations Using Continuous Implicit Runge-Kutta Methods
Author: Hung Nguyen
Date: 1995
Pages: 97
Note: Ph.D. Thesis, Computer Science Dept., Univ. of Toronto, 1995.
File: nguyen-95-phd.ps.gz

Abstract:
This thesis presents alternative interpolation and error control schemes for index-1, index-2 and index-3 algebraic differential equations. We develop three corresponding schemes with interpolants based on the "bootstrapping" procedure and error controls based on the defect and algebraic residual of an underlying perturbed initial value problem. The test problems we use to illustrate our approach are the classical pendulum problem, the seven-body problem and the driven cavity problem. The three implicit continuous Runge-Kutta formulas used as examples in our investigation are SDIRK(2), Gauss(2) and Gauss(3).


Title: The numerical solution of large systems of stiff IVPs for ODEs
Author: K. R. Jackson
Date: September 1995
Pages: 17
Note: A slightly revised version of this paper appeared in the journal Applied Numerical Mathematics, 20 (1996), pp. 5-20.
File: mol.95.ps.gz

Abstract:
The application of the method of lines to a system of time-dependent partial differential equations gives rise to a system of initial-value problems (IVPs) for ordinary differential equations (ODEs). Such systems are often stiff and very large. The need to solve problems of this kind has affected the development of both formulas and codes for IVPs for ODEs. We survey some of these developments in this article.
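
A one-line example of the mechanism (illustrative, not from the paper): semi-discretising the heat equation $u_t = u_{xx}$ on a spatial grid of spacing $h$ gives the IVP $\dot{U}_i = (U_{i-1} - 2U_i + U_{i+1})/h^2$, whose Jacobian has eigenvalues spread through $(-4/h^2, 0)$; refining the grid therefore makes the ODE system simultaneously larger and stiffer.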


Title: Improving performance when solving high order and mixed order boundary value problems in ODEs
Authors: Wayne H. Enright and Min Hu
Date: March 1995
Pages: 9
File: high-order-BVP.ps.gz

Abstract:
Solving high order or mixed order boundary value problems by general purpose software often requires the system to be first converted to a larger equivalent first-order system. The cost of solving such problems is generally $O(m^3)$ where $m$ is the dimension of the equivalent first-order system. In this paper, we show how to reduce this cost by exploiting the special structure the "equivalent" first-order system inherits from the original associated mixed-order system. This technique applies to a broad class of boundary value methods. We illustrate the potential benefits by considering in detail a general purpose Runge-Kutta method and a multiple shooting method.


Title: Efficient Shadowing of High Dimensional Chaotic Systems with the Large Astrophysical N-body Problem as an Example
Author: Wayne Hayes
Date: January 1995
Pages: 61 (thesis.ps) and 2 (errata.ps)
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 1995.
Directory: hayes-95-msc
Files: hayes-95-msc/thesis.ps.gz and hayes-95-msc/errata.ps.gz

Abstract:
N-body systems are chaotic. This means that numerical errors in their solution are magnified exponentially with time, perhaps making the solutions useless. Shadowing tries to show that numerical solutions of chaotic systems still have some validity. A previously published shadowing refinement algorithm is optimized to give speedups of order 60 for small problems and asymptotic speedups of O(N) on large problems. This optimized algorithm is used to shadow N-body systems with up to 25 moving particles. Preliminary results suggest that average shadow length scales roughly as 1/N, i.e., shadow lengths decrease rapidly as the number of phase-space dimensions of the system is increased. Some measures of simulation error for N-body systems that are less stringent than shadowing are discussed. Many areas of further research are discussed, both for high-dimensional shadowing and for N-body measures of error.


Title: DIMSEMs -- Diagonally Implicit Single-Eigenvalue Methods for the Numerical Solution of Stiff ODEs on Parallel Computers: A Preliminary Report
Author: R. F. Enenkel and K. R. Jackson
Date: January 1995
Pages: 138
File: dimsem.prelim.ps.gz

Abstract:
We describe a new class of General Linear Methods (GLMs) with the following properties: the methods are strictly diagonally implicit, allowing efficient parallel implementation; the stability matrix has no spurious eigenvalues (i.e., only one eigenvalue of the stability matrix is non-zero); the methods have high stage order, allowing them to retain their order on problems for which order reduction might otherwise be a problem; the methods are L-stable, making them suitable for the solution of stiff problems. As well as presenting some order barriers, we derive and test methods of orders 2-6.


Title: Parallel solvers for spline collocation equations
Author: C. C. Christara
Date: January 1995
Pages: 35
Note: Appeared in Advances in Engineering Software, 27:1/2, 1996, pp. 71-89
File: ccc/civil.ps.gz

Abstract:
A variety of solvers for the spline collocation equations arising from the discretisation of elliptic Partial Differential Equations (PDEs) are considered. The convergence properties of semi-iterative and Krylov subspace acceleration methods applied to the system of spline collocation equations are studied. The preconditioners tested include incomplete factorisation and SSOR for both the natural and multicolour orderings, domain decomposition based on Schur complement methods with nonoverlapping subdomains, or Schwarz methods with overlapping subdomains, and multigrid methods.

The parallelisation of some of the above iterative methods is studied and their advantages and disadvantages discussed. The communication requirements of the methods are discussed when the methods are implemented on distributed memory machines. Results which show that spline collocation methods are very competitive for the solution of PDEs are presented.


Title: Precision Control and Exception Handling in Scientific Computing
Author: K. R. Jackson and N. S. Nedialkov
Date: 1995
Pages: 8
File: prec.except.ps.gz

Abstract:
This paper describes convenient language facilities for precision control and exception handling. Nedialkov has developed a variable-precision and exception handling library, SciLib, implemented as a numerical class library in C++. A new scalar data type, real, is introduced, consisting of variable-precision floating-point numbers. Arithmetic, relational, and input and output operators of the language are overloaded for reals, so that mathematical expressions can be written without explicit function calls. Precision of computations can be changed during program execution. The exception handling mechanism treats only numerical exceptions and does not distinguish between different types of exceptions.

The proposed precision control and exception handling facilities are illustrated by sample SciLib programs.
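
SciLib itself is a C++ class library; as a language-neutral illustration of the idea of changing precision at run time (using Python's decimal module here, not SciLib's API):

    # Illustration of run-time precision control using Python's decimal
    # module (not SciLib's C++ API): the same cancellation-prone expression
    # evaluated at two working precisions.
    from decimal import Decimal, getcontext

    def tiny_difference():
        eps = Decimal(10) ** -30
        return (Decimal(1) + eps) - Decimal(1)   # cancellation-prone

    getcontext().prec = 20
    print(tiny_difference())   # prints 0: the tiny term is lost at 20 digits

    getcontext().prec = 40
    print(tiny_difference())   # prints 1E-30: recovered at higher precision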


Title: Precision Control and Exception Handling in Scientific Computing
Author: Nedialko (Ned) Stoyanov Nedialkov
Date: September 1994
Pages: 51
Note: M.Sc. Thesis, Computer Science Dept., Univ. of Toronto, 1994.
File: ned-94-msc.ps.gz

Abstract:
We describe language facilities for precision control and exception handling and show how they can help to construct better numerical algorithms.

Precision of computations can be changed during program execution. The first two precisions are IEEE single and double. Each increase in precision provides at least twice as many significant digits, and an exponent range significantly larger than double that of the previous precision.

The exception handling mechanism treats only numerical exceptions and does not distinguish between different types of numerical exceptions.

A variable-precision and exception handling library, SciLib, has been implemented in C++. A new scalar data type, real, is introduced, consisting of variable-precision floating-point numbers. Arithmetic, relational, and input and output operators of the language are overloaded for the real data type, so it can be used like any other scalar data type of C++.

The proposed precision control and exception handling facilities are illustrated using SciLib. The examples show the benefits of using precision control to handle exceptions and to avoid catastrophic cancellations.


Title: Interpolating Runge-Kutta methods for vanishing delay differential equations
Authors: Wayne H. Enright and Min Hu
Date: 1994
Pages: 34
Ref: Technical report 292/94, Dept. Computer Science, Univ. of Toronto
Note: The files int_ddrk6n.f.gz and int_ddvss6.f.gz contain FORTRAN format of the coefficients (in Appendix A and B of the report) of the special interpolants for the codes DDRK6N and DDVSS6 respectively.
Files: cs-94-292.ps.gz, int_ddrk6n.f.gz and int_ddvss6.f.gz

Abstract:
In the numerical solution of delay differential equations by a continuous explicit Runge-Kutta method, a difficulty arises when the delay vanishes or becomes smaller than the stepsize the method would like to use. In this situation the standard explicit sequential process of computing the Runge-Kutta stages becomes an implicit process, and an iteration scheme must be adopted. We consider several alternative iteration schemes and investigate their order.
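
To make the difficulty concrete (standard notation, not the report's): for $y'(t) = f(t, y(t), y(t - \tau))$, a continuous explicit Runge-Kutta step computes stages $K_i = f(t_n + c_i h,\, Y_i,\, \eta(t_n + c_i h - \tau))$, where $\eta$ is the method's interpolant. If $\tau \geq h$, the delayed argument lies in past steps where $\eta$ is already known, and the stages can be evaluated in the usual explicit sequence. If instead $\tau < c_i h$, the delayed argument falls inside the current step, $\eta$ depends on the very stages being computed, and the stage equations become implicit; the simplest remedy is to iterate, e.g. $K_i^{[m+1]} = f(t_n + c_i h,\, Y_i^{[m]},\, \eta^{[m]}(t_n + c_i h - \tau))$, and the report studies the order attained by such schemes.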


Title: An Application of Runge-Kutta Predictor-Corrector Methods to Two Systems of Hyperbolic Equations Arising in Optics
Author: K. R. Jackson
Date: July 1994
Pages: 3
Note: Appeared in the Proceedings of the Fourteenth IMACS World Congress, Atlanta GA, July, 1994, W. F. Ames ed., pp. 1329-1331
File: imacs.94.ps.gz


Title: A Survey of the Explicit Runge-Kutta Method
Authors: Wayne H. Enright, Desmond J. Higham, Brynjulf Owren and Philip W. Sharp
Date: April 1995
Pages: 36
Ref: Technical report 291/94, Dept. Computer Science, Univ. of Toronto
Note: This is a REVISED version, dated April 1995
File: cs-94-291.ps.gz

Abstract:
Research in explicit Runge-Kutta methods is producing continual improvements to the original algorithms, and the aim of this survey is to describe the current state of the art. In drawing attention to recent advances, we hope to provide useful information for those who apply numerical methods.

We describe recent work in the derivation of Runge-Kutta coefficients: "classical" general-purpose formulas, "special" formulas for high order and Hamiltonian problems, and "continuous" formulas for dense output. We also give a thorough review of implementation details. Modern techniques are described for the tasks of controlling the local error in a step-by-step integration, computing reliable estimates of the global error, detecting stiffness, and detecting and handling discontinuities and singularities. We also discuss some important software issues.


Title: High performance computing of elliptic partial differential equations with spline collocation
Author: C. C. Christara
Date: June 1994
Pages: 13
Note: Appeared in Proceedings of the 1994 Computational Structures Technology conference (CST94), August 30 - September 1, 1994, Athens, Greece, sponsored by Civil-Comp, Vol. A, pp. 23-34
File: ccc/highperf.ps.gz

Abstract:
We consider a variety of solvers for the spline collocation equations arising from the discretisation of elliptic Partial Differential Equations. We study the convergence properties of semi-iterative and conjugate gradient acceleration methods applied to the system of spline collocation equations. The preconditioners tested include incomplete factorisation and SSOR for both the natural and multicolour orderings, domain decomposition based on Schur complement methods with nonoverlapping subdomains, or Schwarz methods with overlapping subdomains, and multigrid methods. We study the parallelisation of some of the above iterative methods and discuss their advantages and disadvantages. The communication requirements of the methods are discussed when the methods are implemented on distributed memory machines. Results which show that spline collocation methods are very competitive for the solution of BVPs are presented.


Title: An Analysis of the Order of Runge-Kutta Methods That Use an Iterative Scheme To Compute Their Internal Stage Values
Authors: K. R. Jackson, A. Kvaerno and S. P. Norsett
Date: May 1994
Pages: 53
Note: A slightly revised version of this paper appeared in the journal BIT, 36 (1996), pp. 713-765.
File: tr-94-1.ps.gz

Abstract:
We employ B-series to analyze the order of Runge-Kutta (RK) methods that use simple iteration, modified Newton iteration or full Newton iteration to compute their internal stage values. Our assumptions about the initial guess for the internal stage values cover a wide range of practical schemes. Moreover, the analytical techniques developed in this paper can be applied in many other similar contexts.
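
For reference (standard forms of the three schemes, not the paper's notation): writing the stage equations as $Y = (e \otimes y_n) + h(A \otimes I)F(Y)$, simple iteration computes $Y^{[k+1]} = (e \otimes y_n) + h(A \otimes I)F(Y^{[k]})$, while the modified Newton iteration solves $(I - hA \otimes J)\,(Y^{[k+1]} - Y^{[k]}) = -\,Y^{[k]} + (e \otimes y_n) + h(A \otimes I)F(Y^{[k]})$ with a frozen Jacobian $J \approx \partial f/\partial y\,(t_n, y_n)$; full Newton re-evaluates the Jacobian at each iterate. The paper characterises, via B-series, the order attained after $k$ iterations of each scheme.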


Title: Quadratic spline collocation methods for elliptic partial differential equations
Author: C. C. Christara
Date: March 1994 (replaces Tech. Rep. DCS-135, 1990)
Pages: 34
Note: A slightly revised version of this paper appeared in BIT, 34:1, 1994, pp. 33-61
File: ccc/cs-90-135.ps.gz
DOI: 10.1007/BF01935015

Abstract:
We consider Quadratic Spline Collocation (QSC) methods for linear second order elliptic Partial Differential Equations (PDEs). The standard formulation of these methods leads to non-optimal approximations. In order to derive optimal QSC approximations, high order perturbations of the PDE problem are generated. These perturbations can be applied either to the PDE problem operators or to the right sides, thus leading to two different formulations of optimal QSC methods. The convergence properties of the QSC methods are studied. Optimal $O(h^{3-j})$ global error estimates for the $j$-th partial derivative are obtained for a certain class of problems. Moreover, $O(h^{4-j})$ error bounds for the $j$-th partial derivative are obtained at certain sets of points. Results from numerical experiments verify the theoretical behaviour of the QSC methods. Performance results also show that the QSC methods are very effective from the computational point of view. The QSC methods have been implemented efficiently on parallel machines.


Title: The Mathematical Basis for a New Polynomial Rootfinder with Quadratic Convergence
Authors: T. E. Hull and R. Mathon
Date: 1994
Pages: 16
File: hull-mathon-math-basis.ps.gz

Abstract:
Formulas developed originally by Weierstrass have been used since the 1960s by many others for the simultaneous determination of all the roots of a polynomial. Convergence to simple roots is quadratic, but individual approximations to a multiple root converge only linearly. However, it is shown here that the mean of such individual approximations converges quadratically to that root. This result, along with some detail about the behavior of such approximations in the neighborhood of the multiple root, suggests a new approach to the design of a polynomial rootfinder.

This paper first provides the relevant mathematical results: a short derivation of the formulas, convergence proofs, an indication of the behavior near a multiple root, and finally some error bounds. It then provides the outline of an algorithm based on these results, along with some preliminary experimental results to illustrate the major points, and to demonstrate some of the potential for a future software package. It should also be noted that the method is well suited to take advantage of a parallel environment.
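
For illustration, a minimal sketch of the underlying Weierstrass (Durand-Kerner) iteration and of the mean-of-approximations observation (a reconstruction from the abstract, not the authors' rootfinder):

    # Weierstrass (Durand-Kerner) iteration: all roots of a monic polynomial
    # are refined simultaneously; near a double root, the individual iterates
    # converge only linearly and straddle the root, but their mean is much
    # more accurate. (Reconstruction from the abstract, not the authors' code.)
    import numpy as np

    def weierstrass(coeffs, z, steps=30):
        p = np.poly1d(coeffs)                  # monic: leading coefficient 1
        z = np.array(z, dtype=complex)
        for _ in range(steps):
            for i in range(len(z)):
                z[i] -= p(z[i]) / np.prod(z[i] - np.delete(z, i))
        return z

    # p(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2 has a double root at x = 1
    z = weierstrass([1.0, 0.0, -3.0, 2.0], [0.9 + 0.1j, 1.1 - 0.1j, -2.5])
    print(z)                    # individual approximations straddle the double root
    print((z[0] + z[1]) / 2)    # their mean is far closer to it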


Title: Coupled Mode Equations with Free Carrier Effects: A Numerical Solution
Authors: N. G. R. Broderick, C. M. de Sterke and K. R. Jackson
Date: May 1993
Pages: 15
Note: A slightly revised version of this paper appeared in the J. Optical and Quantum Electronics, 26 (1994), pp. S219-S234.
File: joqe-1.ps.gz

Abstract:
We present a numerical method for solving a set of coupled mode equations describing light propagation through a medium with a grating and free carriers. The carriers, which are excited by the light and decay exponentially, lower the refractive index, thus rendering the system nonlinear. The method is fourth-order accurate, efficient, stable, easy to implement and well suited for vector and parallel computers. In addition, it can be extended to handle diffusive carrier effects.


Title: The Use of Butcher Series in the Analysis of Newton-Like Iterations in Runge-Kutta Formulas
Authors: K. R. Jackson, A. Kvaerno and S. P. Norsett
Date: April 1993
Pages: 17
Note: A slightly revised version of this paper appeared in the journal Applied Numerical Mathematics, 15 (1994), pp. 341-356.
File: anm-1.ps.gz

Abstract:
We consider the numerical solution of initial value problems for both ordinary differential equations and differential-algebraic equations by Runge-Kutta (RK) formulas. We assume that the internal stage values of the RK formula are computed by some iterative scheme for solving nonlinear equations, such as Newton's method. Using Butcher series and rooted trees, we give a complete characterization of the local error in the RK formula after k iterations of the scheme. Results are given for three specific schemes: simple iteration, the modified Newton iteration, and the full Newton iteration. The ideas developed in this paper, however, are easily applied to other iterative schemes of this kind.


Title: A Runge-Kutta Type Boundary Value ODE Solver with Defect Control
Authors: W.H. Enright and Paul Muir
Date: June 1993
Pages: 41
File: cs-93-267.ps.gz

Abstract:
A popular approach for the numerical solution of boundary value ODE problems involves the use of collocation methods. Such methods can be naturally implemented so as to provide a continuous approximation to the solution over the entire problem interval. On the other hand, several authors have suggested, as an alternative, certain subclasses of the implicit Runge-Kutta formulas, known as mono-implicit Runge-Kutta (MIRK) formulas, which can be implemented substantially more efficiently than the collocation methods. These latter formulas do not have a natural implementation that provides a continuous approximation to the solution; rather, only a discrete approximation at certain points within the problem interval is obtained. However, recent work in the area of initial value problems has demonstrated the possibility of generating inexpensive interpolants for any explicit Runge-Kutta formula. These ideas have recently been extended to develop interpolants for the MIRK formulas. In this paper, we describe our investigation of the use of continuous MIRK formulas in a code for the numerical solution of boundary value ODE problems. A primary thrust of this investigation is to consider defect control, based on these interpolants, as an alternative to the standard use of global error estimates as the basis for termination and mesh redistribution criteria.
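
In brief (standard definition, not the paper's notation): if $u(t)$ is the continuous approximation delivered by the interpolant, its defect is $\delta(t) = u'(t) - f(t, u(t))$, the residual by which $u$ fails to satisfy the ODE. Controlling an estimate of $\|\delta\|$ on each subinterval, rather than an estimate of the global error, is the alternative termination and mesh redistribution criterion investigated here.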


Title: Implementing Complex Elementary Functions Using Exception Handling
Authors: T. E. Hull, Thomas F. Fairgrieve and Ping Tak Peter Tang
Date: 1993
Pages: 31
File: cmplx-elem-fcns.ps.gz

Abstract:
We develop algorithms for reliable and accurate evaluations of the complex elementary functions required in Fortran 77 and Fortran 90, namely cabs, csqrt, cexp, clog, csin and ccos. The algorithms are presented in a pseudo-code which has convenient exception handling facilities. A tight error bound is derived for each algorithm. Corresponding Fortran programs for an IEEE environment have also been developed to illustrate the practicality of the algorithms, and these programs have been tested very carefully to help confirm the correctness of the algorithms and their error bounds -- the results of these tests are included in the paper, but the Fortran programs are not.
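
A tiny example of the kind of hazard such algorithms must avoid (illustrative only; the paper's programs are in Fortran, with algorithms given in pseudo-code):

    # The naive formula for cabs(z) = |x + iy| overflows in intermediate
    # arithmetic even when the result is representable; a scaled evaluation
    # (math.hypot here) avoids this. Illustrative, not the paper's code.
    import math

    x, y = 1e200, 1e200
    naive = math.sqrt(x * x + y * y)   # x*x overflows to inf, so naive == inf
    robust = math.hypot(x, y)          # scaled computation: about 1.414e200
    print(naive, robust)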


Title: The Practical Implications of Order Reduction on the Solution of Stiff Initial Value Problems using Runge-Kutta Formulas
Authors: C. MacDonald and W. Enright
Date: June 1993
Pages: 14
Ref: Technical report 261/92, Dept. Computer Science, Univ. of Toronto
File: cs-92-261.ps.gz

Abstract:
Stability analysis of Runge-Kutta (RK) formulas was originally limited to linear ordinary differential equations (ODEs). More recently such analysis has been extended to include the behaviour of solutions to non-linear problems. This extension led to additional stability requirements for RK methods. Although the class of problems has been widened, the analysis is still restricted to a fixed stepsize. In the case of differential algebraic equations (DAEs), additional order conditions must be satisfied to achieve full classical ODE order and avoid possible "order reduction". In this case too, a fixed stepsize analysis is employed. Such analysis may be of only limited use in quantifying the effectiveness of adaptive methods on stiff problems.

In this paper we examine the phenomenon of "order reduction" and its implications for variable-step algorithms. We introduce a global measure of order, referred to here as the "observed order", which is based on the average stepsize over the region of integration. This measure may be better suited to the study of stiff systems, where the stepsize selection algorithm will vary the stepsize considerably over the interval of integration. Observed order gives a better indication of the relationship between accuracy and cost. Using this measure, the "observed order reduction" will be seen to be less severe than that predicted by fixed stepsize order analysis.
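
One natural reading of the definition (a paraphrase; the paper's exact formula may differ): if two integrations of the same problem produce global errors $e_1$ and $e_2$ with average stepsizes $\bar{h}_1$ and $\bar{h}_2$ over the interval, an observed order can be extracted from $p \approx \log(e_1/e_2)\,/\,\log(\bar{h}_1/\bar{h}_2)$, i.e., the slope of the error-versus-average-stepsize curve on a log-log plot.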


Title: Parallel computation of partial differential equations on distributed memory machines
Author: C. C. Christara
Date: June 1992
Pages: 13
Note: Appeared in Advances in Computer Methods for Partial Differential Equations - VII, pp. 142-149 R. Vichnevetsky, D. Knight, G. Richter eds.
File: ccc/paral.ps.gz

Abstract:
We review a number of parallel methods for the solution of elliptic Boundary Value Problems (BVPs). The methods considered are of coarse or medium grain parallelism and fixed grid discretisation. The cases where the degree of parallelism grows linearly with the size of the problem in all its dimensions or in fewer dimensions are considered. We also consider the parallelism in both the discretisation of the BVP phase and the solution of the resulting linear system phase. We discuss the assignment of data to processors and the communication requirements of the methods, when implemented on distributed memory machines. Based on a general model for a distributed memory machine, we develop a framework in order to classify the communication requirements of the methods considered. We study the communication complexity of the methods on a general distributed machine model, and present performance results on the iPSC/2 hypercube.


Title: The Parallel Solution of ABD Systems Arising in Numerical Methods for BVPs for ODEs
Authors: K. R. Jackson and R. N. Pancer
Date: May 1992
Pages: 35
Ref: Technical report 255/91, Dept. Computer Science, Univ. of Toronto
File: cs-91-255.ps.gz

Abstract:
Many commonly used numerical methods for the solution of Boundary Value Problems (BVPs) for Ordinary Differential Equations (ODEs) contain significant components that can be parallelized easily, and recent investigations have shown that substantial speedups are attainable on machines with a modest number of processors. However, a stage in most of these methods is the numerical solution of an "Almost Block Diagonal" (ABD) system of linear algebraic equations. Several authors have proposed parallel algorithms for solving such systems; unfortunately most are potentially unstable or offer only limited parallelism. As a result, solving ABD systems has proven to be a bottleneck in the overall execution time of parallel BVP codes.

In recent papers, Wright presents new stable parallel algorithms for solving ABD systems. Each algorithm attains the theoretically optimal speedup for the problem if enough processors are available, and each can be adapted for use on architectures with fewer than the optimal number of processors. The algorithms in Section 3 of this paper were discovered independently and are similar to those investigated by Wright. Extensions to the basic algorithm are described which make better use of available processors during the decomposition stage in order to increase parallelism in the back-solve stage.

A new algorithm based on "eigenvalue rescaling" is presented in Section 4 which can be up to twice as fast as the fastest algorithm in Section 3. Numerical experiments show that this new algorithm does not exhibit the instability inherent in some earlier schemes. In addition, some insight supporting the numerical evidence is given.


Title: Order Barriers and Characterizations for Continuous Mono-Implicit Runge-Kutta Schemes
Authors: Paul Muir and Brynjulf Owren
Date: December 1991
Pages: 22
Ref: Technical report 258/91, Dept. Computer Science, Univ. of Toronto
File: cs-91-258.ps.gz

Abstract:
The mono-implicit Runge-Kutta (MIRK) schemes, a subset of the family of implicit Runge-Kutta (IRK) schemes, were originally proposed for the numerical solution of initial value ODEs more than fifteen years ago. During the last decade, there has been considerable investigation of the use of these schemes in the numerical solution of boundary value ODE problems, where their efficient implementation suggests that they may provide a worthwhile alternative to the widely used collocation schemes. In fact, recent work in this area has seen the development of some software packages for boundary value ODEs based on these schemes. Unfortunately, these schemes lead to algorithms which provide only a discrete solution approximation at a set of mesh points over the problem interval, while the collocation schemes provide a natural continuous solution approximation. The availability of a continuous solution is important not only to the user of the software but also within the code itself, for example, in estimation of errors, defect control, mesh selection, and the provision of initial solution estimates for new meshes. An approach for the construction of a continuous solution approximation based on the MIRK schemes is suggested by recent work in the area of continuous extensions for explicit Runge-Kutta schemes for initial value ODEs. In this paper, we describe our work in the investigation of continuous versions of the MIRK schemes:
(i) we give some lower bounds relating the stage order to the minimal number of stages for general IRK schemes,
(ii) we establish lower bounds on the number of stages needed to derive continuous MIRK schemes of orders 1 through 6,
(iii) we provide characterizations of such schemes having a minimal number of stages for each of these orders.


Title: Adaptive Linear Equation Solvers in Codes for Large Stiff Systems of ODEs
Authors: K. R. Jackson and W. L. Seward
Date: July 1991
Pages: 23
Ref: Research Report CS-91-33, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada, N2L 3G1
Note: A slightly revised version of this paper appeared in the SIAM J. Sci. Comput., 14 (1993), pp. 800-823.
File: uw-cs-91-33.ps.gz

Abstract:
Iterative linear equation solvers have been shown to be effective in codes for large systems of stiff initial-value problems for ordinary differential equations (ODEs). While preconditioned iterative methods are required in general for efficiency and robustness, unpreconditioned methods may be cheaper over some ranges of the interval of integration. In this paper, we develop a strategy for switching between unpreconditioned and preconditioned iterative methods depending on the amount of work being done in the iterative solver and properties of the matrix being solved. This strategy is combined with a "type-insensitive" approach to the choice of formula used in the ODE code to develop a method that makes a smooth transition between nonstiff and stiff regimes in the interval of integration. We find that, as expected, for some large systems of ODEs, there may be a considerable saving in execution time when the type-insensitive approach is used. If there is a region of the integration that is "mildly" stiff, switching between unpreconditioned and preconditioned iterative methods also increases the efficiency of the code significantly.


Title: Multicolour orderings and iterative methods for elliptic equations
Author: C. C. Christara
Date: June 1991
Pages: 11
Note: Appeared in Proceedings of the Supercomputing Symposium '91 (SS '91), June 3-5, 1991, Fredericton, NB, Canada, pp. 221-230
File: ccc/colour.ps.gz

Abstract:
In this paper we study the performance and parallel implementation of iterative methods applied to linear systems arising from the discretisation of linear elliptic Boundary Value Problems (BVPs) which are ordered according to multicolour orderings. We view the discretisation of a BVP as a process of generating a set of equations between stencils. These equations, when ordered according to some indexing of the stencil, result in a sparse linear system. We discuss the effect of indexing of the stencils on the sparsity pattern of the linear system, on the convergence of iterative methods applied to the system, as well as on the parallel implementation of these methods. We discuss which stencil indexings are preferable in order to minimise communication in message-passing parallel architectures.


Title: Domain decomposition and incomplete factorisation methods for partial differential equations
Author: C. C. Christara
Date: May 1991
Pages: 11
Note: Appeared in Proceedings of the sixth Distributed Memory Computing Conference (DMCC6), April 28 - May 1, 1991, Portland, OR, U.S.A., pp. 369-377
File: ccc/iccg.ps.gz

Abstract:
In this paper we develop and study a method which tries to combine the merits of Domain Decomposition (DD) and Incomplete Cholesky preconditioned Conjugate Gradient method (ICCG) for the parallel solution of linear elliptic Partial Differential Equations (PDEs) on rectangular domains. We first discretise the PDE problem, using Spline Collocation, a method of Finite Element type based on smooth splines. This gives rise to a sparse linear system of equations. The ICCG method provides us with a very efficient, but not straightforwardly parallelisable linear solver for such systems. On the other hand, DD methods are very effective for elliptic PDEs. A combination of DD and ICCG methods, in which the subdomain solves are carried out with ICCG, leads to efficient and highly parallelisable solvers. We implement this hybrid DD-ICCG method on a hypercube, discuss its parallel efficiency, and show results from experiments on configurations with up to 32 processors. We apply a totally local communication scheme and discuss its performance on the iPSC/2 hypercube. A similar approach can be used with other PDE discretisation methods.


Title: Nonlinear coupled mode equations on a finite interval: a numerical procedure
Authors: C. M. de Sterke, K. R. Jackson and B. D. Robert
Date: February 1991
Pages: 31
Note: A slightly revised version of this paper appeared in the J. Optical and Quantum Electronics, 26 (1994), pp. S219-S234.
File: osa-1.ps.gz

Abstract:
We present a numerical procedure to solve a set of nonlinear coupled mode equations on a finite interval. These equations originally arose in the study of the dynamics of gap solitons in nonlinear periodic media. Our procedure, which makes use of an implicit fourth-order Runge-Kutta method, is easy to implement, versatile, and very well suited for vectorization or parallelization.


Title: A Survey of Parallel Numerical Methods for Initial Value Problems for Ordinary Differential Equations
Author: K. R. Jackson
Date: 1991
Pages: 6
Note: A slightly revised version of this paper appeared in the IEEE Transactions on Magnetics, 27 (1991), pp. 3792-3797.
File: ieee-trans-mag-1.ps.gz

Abstract:
The parallel solution of Initial Value Problems for Ordinary Differential Equations has become an active area of research during the past few years. We briefly survey the recent developments in this area, with particular emphasis on traditional forward-step methods that offer the potential for effective small-scale parallelism on currently existing machines.


Title: Schur complement preconditioned conjugate gradient methods for spline collocation equations
Author: C. C. Christara
Date: May 1991
Pages: 14
Note: An earlier version of this paper appeared in Computer architecture news, 18:3, 1990, pp. 108-120, and Proceedings of the 1990 International Conference on Supercomputing, June 1990, Amsterdam, the Netherlands, sponsored by ACM
File: ccc/scmpcg.ps.gz

Abstract:
We are interested in the efficient solution of linear second order Partial Differential Equation (PDE) problems on rectangular domains. The PDE discretisation scheme used is of Finite Element type and is based on quadratic splines and the collocation methodology. We integrate the Quadratic Spline Collocation (QSC) discretisation scheme with a Domain Decomposition (DD) technique. We develop DD-motivated orderings of the QSC equations and unknowns and apply the Preconditioned Conjugate Gradient (PCG) method for the solution of the Schur Complement (SC) system. Our experiments show that the SC-PCG-QSC method in its sequential mode is very efficient compared to standard direct band solvers for the QSC equations. We have implemented the SC-PCG-QSC method on the iPSC/2 hypercube and present performance evaluation results for configurations with up to 32 processors. We discuss a type of nearest-neighbour communication scheme in which the amount of data transfer per processor does not grow with the number of processors. The estimated efficiencies are of the order of 90%.


Title: Conjugate gradient methods for spline collocation equations
Author: C. C. Christara
Date: May 1991
Pages: 9
Note: An earlier version of this paper appeared in Proceedings of the fifth Distributed Memory Computing Conference (DMCC5), April 8-12, 1990, Charleston, SC, U.S.A., pp. 550-558
File: ccc/pcgcoef.ps.gz

Abstract:
We study the parallel computation of linear second order elliptic Partial Differential Equation (PDE) problems in rectangular domains. We discuss the application of Conjugate Gradient (CG) and Preconditioned Conjugate Gradient (PCG) methods to the linear system arising from the discretisation of such problems using quadratic splines and the collocation discretisation methodology. Our experiments show that the number of iterations required for convergence of CG-QSC (Conjugate Gradient applied to Quadratic Spline Collocation equations) grows linearly with the square root of the number of equations. We implemented the CG and PCG methods for the solution of the Quadratic Spline Collocation (QSC) equations on the iPSC/2 hypercube and present performance evaluation results for configurations with up to 32 processors. Our experiments show efficiencies of the order of 90%, for both fixed and scaled speedups.


Title: Derivation of Efficient Continuous Explicit Runge-Kutta Methods
Authors: B. Owren and M. Zennaro
Date: 1990
Pages: 17
Ref: Technical report 240/90, Dept. Computer Science, Univ. of Toronto
File: cs-90-240.ps.gz

Abstract:
Continuous Explicit Runge-Kutta methods with the minimal number of stages are considered. These methods are continuously differentiable if and only if one of the stages is the FSAL evaluation. A characterization of a subclass of these methods is developed for orders 3, 4 and 5. It is shown how the free parameters of these methods can be used either to minimize the continuous truncation error coefficients or to maximize the stability region. As a representative of these methods, the 5th-order method with minimized error coefficients is chosen, supplied with an error estimation method, and analysed using the DETEST software. The results are compared with a similar implementation of the Dormand-Prince 5(4) pair with interpolant, showing a significant advantage to the new method for the chosen problems.


Title: The Potential for Parallelism in Runge-Kutta Methods -- Part 1: RK Formulas in Standard Form
Authors: K. R. Jackson and S. P. Norsett
Date: November 1990
Pages: 36
Ref: Technical report 239/90, Dept. Computer Science, Univ. of Toronto
Note: A slightly revised version of this paper appeared in the SIAM J. Numer. Anal., 32 (1995), pp. 49-82.
File: cs-90-239.ps.gz

Abstract:
We examine the potential for parallelism in Runge-Kutta (RK) methods based on formulas in standard one-step form. Both negative and positive results are presented. Many of the negative results are based on a theorem that bounds the order of a RK formula in terms of the minimal polynomial for its coefficient matrix. The positive results are largely examples of prototypical formulas which offer a potential for effective "coarse-grain" parallelism on machines with a few processors.


Title: Using the IMSL MATH/LIBRARY in Numerical Methods Courses
Authors: K. R. Jackson and T. E. Hull
Date: April 1990
Pages: 13
Note: A slightly revised version of this paper appeared in the IMSL Technical Report Series, No. 9005.
File: imsl-9005.ps.gz

Abstract:
This article describes our experience using the IMSL MATH/LIBRARY in five numerical methods classes taught to engineering students at the University of Toronto during the 1987-88 and 1988-89 academic years.