# Reports of Parallel software for implicit differential equations

Research within the project *Parallel software for implicit differential equations*.

*Oscillatory Störmer-Cowell methods*

P.J. van der Houwen, E. Messina, B.P. Sommeijer

1998, MAS-R9813, ISSN 1386-3703

We consider explicit methods for initial-value problems for special second-order ordinary differential equations where the righthand side does not contain the derivative of **y** and where the solution components are known to be periodic with frequencies ω_j lying in a given nonnegative interval [ω̲, ω̄]. The aim of the paper is to exploit this extra information and to modify a given integration method in such a way that the method parameters are "tuned" to the interval [ω̲, ω̄]. Such an approach has already been proposed by Gautschi in 1961 for linear multistep methods for first-order differential equations in which the dominant frequencies ω_j are a priori known. In this paper, we only assume that the interval [ω̲, ω̄] is known. Two "tuning" techniques, respectively based on a least squares and a minimax approximation, are considered and applied to the *classical* explicit Störmer-Cowell methods and the recently developed *parallel* explicit Störmer-Cowell methods.
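As a toy illustration of the least-squares tuning idea (not the report's actual construction), one can fit the free coefficient of the classical two-step Störmer formula to a given frequency band. All numerical values below (step size, interval) are assumptions for the sketch.

```python
import numpy as np

# Toy illustration (not the report's construction): "tune" the coefficient
# beta of the two-step Stormer formula
#     y_{n+1} - 2 y_n + y_{n-1} = h^2 * beta * f_n
# to a given frequency interval. For y'' = -w^2 y, exactness at frequency w
# requires (h^2 w^2) * beta = 2 * (1 - cos(w h)); sampling w over the
# interval and solving in the least-squares sense gives one tuned beta.
h = 0.1
w = np.linspace(5.0, 10.0, 200)       # assumed frequency interval [w_lo, w_hi]
A = (h**2 * w**2)[:, None]            # one linear condition per sampled frequency
b = 2.0 * (1.0 - np.cos(w * h))
beta = np.linalg.lstsq(A, b, rcond=None)[0][0]
# the classical choice corresponds to beta = 1; the tuned value trades
# small-h accuracy for accuracy in the target frequency band
```

The minimax variant mentioned in the abstract would replace the least-squares fit by a Chebyshev-type fit over the same interval.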

Download the pdf-file

*Parallel Störmer-Cowell methods for high-precision orbit computations*

P.J. van der Houwen, E. Messina, J.J.B. de Swart

1998, MAS-R9812, ISSN 1386-3703

Many orbit problems in celestial mechanics are described by (nonstiff) initial-value problems (IVPs) for second-order ordinary differential equations of the form **y**″ = **f**(**y**). The most successful integration methods are based on high-order Runge-Kutta-Nyström formulas. However, these methods were designed for sequential computers. In this paper, we consider high-order parallel methods that are not based on Runge-Kutta-Nyström formulas, but which fit into the class of general linear methods. In each step, these methods compute blocks of k approximate solution values (or stage values) at k different points, using the whole previous block of solution values.

Download the pdf-file

*Parallel methods for nonstiff VIDEs*

P.J. van der Houwen

1998, MAS-R9811, ISSN 1386-3703

We consider numerical methods for nonstiff initial-value problems for Volterra integro-differential equations. Such problems may be considered as initial-value problems for ordinary differential equations with expensive righthand side functions because each righthand side evaluation requires the application of a quadrature formula. The often considerable costs suggest the use of methods that require only one righthand side evaluation per step. One option is a conventional linear multistep method. However, if a parallel computer system is available, then one might also look for methods with more righthand sides per step, but such that they can all be evaluated in parallel. In this paper, we construct such parallel methods and we show that on parallel computers they are by far superior to the conventional linear multistep methods which do not have scope for parallelism. Moreover, the (real) stability interval is considerably larger.
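The cost structure described above can be made concrete with a minimal sketch (forward Euler with a trapezoidal quadrature; the test problem and step size are assumptions, and this is not one of the parallel methods of the report): every righthand-side evaluation requires a quadrature over the entire stored history.

```python
import numpy as np

# Minimal sketch (forward Euler + trapezoidal quadrature; NOT the parallel
# methods of the report). Each righthand-side evaluation is expensive
# because it needs a quadrature over the whole history so far.
# Assumed test problem: y'(t) = -int_0^t y(s) ds, y(0) = 1, solution cos(t).
h, N = 0.001, 1000
y = np.empty(N + 1)
y[0] = 1.0
for n in range(N):
    hist = y[: n + 1]
    if n == 0:
        integral = 0.0
    else:
        # trapezoidal rule over the stored history on [0, t_n]
        integral = h * (0.5 * hist[0] + hist[1:-1].sum() + 0.5 * hist[-1])
    y[n + 1] = y[n] + h * (-integral)     # one Euler step
# y[N] approximates cos(1.0)
```

With N steps the total work is O(N²) quadrature evaluations, which is exactly why methods needing only one righthand-side evaluation per step (or several evaluations done in parallel) are attractive here.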

Download the pdf-file

*Parallel Adams methods*

P.J. van der Houwen, E. Messina

1998, MAS-R9810, ISSN 1386-3703

In the literature, various types of parallel methods for integrating nonstiff initial-value problems for first-order ordinary differential equations have been proposed. Most of them are based on an implicit multistage method in which the implicit relations are solved by the predictor-corrector (or fixed point iteration) method. In the predictor-corrector approach the computation of the components of the stage vector iterate can be distributed over s processors, where s is the number of implicit stages of the corrector method.
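The fixed-point iteration that makes this parallelism possible can be sketched as follows (a hypothetical scalar example; the corrector here is the 2-stage Radau IIA method, chosen for illustration rather than taken from the report):

```python
import numpy as np

# Sketch of the predictor-corrector (fixed point) iteration for an implicit
# multistage corrector, applied to the scalar test problem y' = lam*y with
# |h*lam| small (nonstiff). In each iteration the s stage derivatives are
# mutually independent, so on s processors they can be evaluated in parallel.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])            # 2-stage Radau IIA coefficient matrix
b = np.array([3/4, 1/4])
lam, h, y0 = -1.0, 0.1, 1.0             # assumed test problem
Y = y0 * np.ones(2)                     # trivial predictor
for _ in range(20):
    F = lam * Y                         # the s stage evaluations: parallelizable
    Y = y0 + h * (A @ F)                # corrector (fixed point) update
y1 = y0 + h * (b @ (lam * Y))           # step result, close to exp(-0.1)
```

The iteration contracts because ‖hλA‖ < 1 in the nonstiff regime; for stiff problems this plain fixed-point scheme would diverge, which is what motivates the diagonal and triangular iteration schemes of the companion reports.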

Download the pdf-file

*Splitting methods for second-order initial value problems*

P.J. van der Houwen, E. Messina

1998, MAS-R9809, ISSN 1386-3703

We consider stiff initial-value problems for second-order differential equations of the special form y" = f(y). Stiff initial-value problem solvers are necessarily implicit, hence, we are faced with the problem of solving systems of implicit relations. This paper focuses on the construction and analysis of iterative solution methods which are effective in cases where the Jacobian of the righthand side of the differential equation can be split into a sum of matrices with a simple structure.

Download the pdf-file

*The use of approximate factorization in stiff ODE solvers*

P.J. van der Houwen, B.P. Sommeijer

1997, MAS-R9732, ISSN 1386-3703

We consider implicit integration methods for the numerical solution of stiff initial-value problems. In applying such methods, the implicit relations are usually solved by Newton iteration. However, it often happens that in subintervals of the integration interval the problem is nonstiff or mildly stiff with respect to the stepsize. In these nonstiff subintervals we do not need the (expensive) Newton iteration process. This motivated us to look for an iteration process that converges in mildly stiff situations and is less costly than Newton iteration.
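A generic sketch of the approximate-factorization idea named in the title (the splitting below is an assumed random example, not the report's construction): to solve (I − hJ)x = b with J = J1 + J2, the factored matrix (I − hJ1)(I − hJ2), which differs from I − hJ only by an O(h²) term, serves as a cheap preconditioner in a residual-correction iteration.

```python
import numpy as np

# Generic sketch of approximate factorization (assumed random splitting).
# Each factor I - h*J1, I - h*J2 is cheap to invert; their product equals
# I - h*J up to the O(h^2) term h^2*J1@J2, so the correction iteration
# converges rapidly when h is not too large.
rng = np.random.default_rng(0)
d, h = 6, 0.01
J1 = -np.diag(rng.uniform(1.0, 2.0, d))        # a "simple" (diagonal) part
J2 = 0.1 * rng.standard_normal((d, d))         # the remaining part
J = J1 + J2
I = np.eye(d)
M1, M2 = I - h * J1, I - h * J2                # the two factors
b = rng.standard_normal(d)
x = np.zeros(d)
for _ in range(10):
    r = b - (I - h * J) @ x                    # residual
    x += np.linalg.solve(M2, np.linalg.solve(M1, r))
# x now agrees with the direct solution of (I - h*J) x = b
```

In the mildly stiff subintervals described above, a few sweeps of such an iteration replace the full Newton/LU machinery.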

Download the pdf-file

*Computation of elliptic Fekete point sets*

J.D. Pinter, W.J.H. Stortelder, J.J.B. de Swart

1997, MAS-R9705, ISSN 1386-3703

The objective of this work is to provide a methodology for approximating globally optimal Fekete point configurations. This problem is of obvious interest in numerical mathematics and scientific modeling. Following a brief discussion of the pertinent analytical background, Lipschitz global optimization (LGO) is applied to determine (i.e., to numerically approximate) Fekete point configurations. In addition to the optimization approach, an alternative strategy based on formulating a set of differential-algebraic equations (DAEs) of index 2 is considered.
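For intuition only (a plain projected-gradient sketch, not the LGO or DAE approaches of the paper): elliptic Fekete configurations maximize the product of pairwise distances on the sphere, i.e. minimize the logarithmic energy. The point count, seed, and step size below are assumptions.

```python
import numpy as np

# Intuition sketch (projected gradient descent; NOT the paper's methods).
# Elliptic Fekete points maximize the product of pairwise distances on the
# sphere, equivalently minimize E(X) = -sum_{i<j} log ||x_i - x_j||.
def log_energy(X):
    n = len(X)
    return -sum(np.log(np.linalg.norm(X[i] - X[j]))
                for i in range(n) for j in range(i + 1, n))

rng = np.random.default_rng(3)
n = 4                                   # for n = 4 the optimum is a tetrahedron
X = rng.standard_normal((n, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # random points on the sphere
for _ in range(500):
    G = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = X[i] - X[j]
                G[i] -= d / (d @ d)     # gradient of the log-energy
    X -= 0.02 * G
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back to sphere
# X approaches a regular tetrahedron (log-energy about -2.94)
```

Local descent like this is exactly what LGO is meant to improve on: for larger n the energy has many local minima, so a global optimization strategy is needed.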

Download the pdf-file

*On the construction of error estimators for implicit Runge-Kutta methods*

J.J.B. de Swart, G. Söderlind

1997, MAS-R9704, ISSN 1386-3703

For implicit Runge-Kutta methods intended for stiff ODEs or DAEs, it is often difficult to embed a local error estimating method which gives realistic error estimates for stiff/algebraic components. If the embedded method's stability function is unbounded at *z* = ∞, stiff error components are grossly overestimated. In practice some codes "improve" such inadequate error estimates by premultiplying the estimate by a "filter" matrix which damps or removes the large, stiff error components. Although improving computational performance, this technique is somewhat arbitrary and lacks a sound theoretical backing. In this scientific note we resolve this problem by introducing an *implicit* error estimator.

Download the pdf-file

*Parallel iterative linear solvers for multistep Runge-Kutta methods*

E. Messina, J.J.B. de Swart, W.A. van der Veen

1996, NM-R9619, ISSN 0169-0388

This paper deals with solving stiff systems of differential equations by implicit multistep Runge-Kutta (MRK) methods. For this type of method, nonlinear systems of dimension *sd* arise, where *s* is the number of Runge-Kutta stages and *d* the dimension of the problem. Applying a Newton process leads to linear systems of the same dimension, which can be very expensive to solve in practice. As in [HS96], where the one-step RK methods were considered, we approximate these linear systems by *s* systems of dimension *d*, which can be solved in parallel on a computer with *s* processors.

Download the pdf-file

*Waveform relaxation methods for implicit differential equations*

P.J. van der Houwen, W.A. van der Veen

1996, NM-R9617, ISSN 0169-0388

We apply a Runge-Kutta-based waveform relaxation method to initial-value problems for implicit differential equations. In the implementation of such methods, a sequence of nonlinear systems has to be solved iteratively in each step of the integration process. The size of these systems increases linearly with the number of stages of the underlying Runge-Kutta method, resulting in high linear algebra costs in the iterative process for high-order Runge-Kutta methods. In our earlier investigations of iterative solvers for implicit initial-value problems, we designed an iteration method in which the linear algebra costs are almost independent of the number of stages when implemented on a parallel computer system. In this paper, we use this parallel iteration process in the Runge-Kutta waveform relaxation method. In particular, we analyse the convergence of the method. The theoretical results are illustrated by a few numerical examples.
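A minimal waveform relaxation sketch (Jacobi splitting with explicit Euler inside, whereas the paper uses Runge-Kutta-based WR; the test system below is assumed): each sweep integrates every component over the whole window while the other components are frozen at their previous waveforms, so the component integrations can run in parallel.

```python
import numpy as np

# Minimal Jacobi waveform relaxation sketch (explicit Euler inside; the
# paper's methods are Runge-Kutta based). Assumed test system:
#     y1' = -y1 + 0.5*y2,   y2' = 0.5*y1 - y2,   y(0) = (1, 0).
h, N = 0.01, 100
y1 = np.full(N + 1, 1.0)                # initial waveform guesses (constant)
y2 = np.zeros(N + 1)
for sweep in range(40):
    y1n = np.empty(N + 1); y1n[0] = 1.0
    y2n = np.empty(N + 1); y2n[0] = 0.0
    for n in range(N):
        # each component integrated with the OTHER waveform frozen
        y1n[n + 1] = y1n[n] + h * (-y1n[n] + 0.5 * y2[n])
        y2n[n + 1] = y2n[n] + h * (0.5 * y1[n] - y2n[n])
    y1, y2 = y1n, y2n
# reference: explicit Euler applied to the coupled system
z = np.array([1.0, 0.0])
for n in range(N):
    z = z + h * np.array([-z[0] + 0.5 * z[1], 0.5 * z[0] - z[1]])
```

On a finite window the sweeps converge superlinearly to the solution of the underlying (coupled) discretization, which is the convergence behaviour the paper analyses for its Runge-Kutta variant.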

Download the pdf-file

*Parallel linear system solvers for Runge-Kutta methods*

P.J. van der Houwen, J.J.B. de Swart

1996, NM-R9616, ISSN 0169-0388

If the nonlinear systems arising in implicit Runge-Kutta methods like the Radau IIA methods are iterated by (modified) Newton, then we have to solve linear systems whose matrix of coefficients is of the form *I* − *A* ⊗ *hJ*, with *A* the Runge-Kutta matrix and *J* an approximation to the Jacobian of the righthand side function of the system of differential equations. For larger systems of differential equations, the solution of these linear systems by a direct linear solver is very costly, mainly because of the *LU*-decompositions. We try to reduce these costs by solving the linear systems by a second (inner) iteration process. This inner iteration process is such that each inner iteration again requires the solution of a linear system.
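The structure of these linear systems can be sketched as follows (random stand-ins for *A* and *hJ* are assumed). The sketch contrasts the direct solve of the sd-dimensional system with the classical similarity-transformation route that decouples it into s systems of dimension d; the report's inner iteration is an alternative to both.

```python
import numpy as np

# Sketch of the linear-algebra structure (assumed random stand-ins for the
# RK matrix A and the scaled Jacobian hJ). The Newton systems have the
# matrix I - A (x) hJ of size s*d; an eigendecomposition of A decouples
# them into s independent d-dimensional (possibly complex) systems.
rng = np.random.default_rng(1)
s, d = 3, 5
A = 0.3 * rng.random((s, s)) + 0.5 * np.eye(s)   # stand-in "RK matrix"
hJ = -0.1 * (np.eye(d) + 0.2 * rng.standard_normal((d, d)))
b = rng.standard_normal(s * d)
# direct solve of the full (s*d) x (s*d) system: O((s*d)^3) work
x_full = np.linalg.solve(np.eye(s * d) - np.kron(A, hJ), b)
# decoupled solve: A = Q diag(lam) Q^{-1} gives s systems of dimension d,
# one per processor
lam, Q = np.linalg.eig(A)
C = np.linalg.solve(Q, b.reshape(s, d))          # transformed righthand sides
Y = np.stack([np.linalg.solve(np.eye(d) - lam[i] * hJ, C[i])
              for i in range(s)])
x_dec = (Q @ Y).reshape(-1)                      # transform back
```

The decoupled route still needs one *LU*-decomposition of a d × d matrix per stage, which is the cost the inner iteration of the report aims to reduce further.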

Download the pdf-file

*Test set for IVP solvers*

W.M. Lioen, J.J.B. de Swart, W.A. van der Veen

1996, NM-R9615, ISSN 0169-0388

In this paper a collection of Initial Value test Problems for systems of Ordinary Differential Equations, Implicit Differential Equations and Differential-Algebraic Equations is presented. This test set is maintained by the project group for Parallel IVP Solvers of CWI, department of Numerical Mathematics. This group invites everyone to contribute new test problems to this test set. The paper also describes how new problems can be submitted.

Download the pdf-file

*Experiences with sparse matrix solvers in parallel ODE software*

J.J.B. de Swart, J.G. Blom

1995, NM-R9520, ISSN 0169-0388

The use of implicit methods for numerically solving stiff systems of differential equations requires the solution of systems of non-linear equations. Normally these are solved by a Newton-type process, in which we have to solve systems of linear equations. The Jacobian of the derivative function determines the structure of the matrices of these linear systems. Since the components of the derivative function often depend on only a small number of variables, these matrices can be quite sparse. Hence, it can be worth the effort to use a sparse matrix solver instead of a dense LU-decomposition. This paper reports on experiences with the direct sparse matrix solvers MA28 by Duff, Y12M by Zlatev et al. and one special-purpose matrix solver, all embedded in the parallel ODE solver PSODE by Sommeijer.
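A tiny illustration of the payoff (a tridiagonal Thomas solve standing in for the "special-purpose" idea; MA28 and Y12M, the solvers actually tested in the report, handle general sparsity patterns): exploiting structure turns an O(d³) dense LU into an O(d) solve.

```python
import numpy as np

# Illustration of why exploiting sparsity pays off: a special-purpose
# tridiagonal solve (Thomas algorithm) costs O(d) versus O(d^3) for a
# dense LU-decomposition. (The report's MA28/Y12M handle general sparse
# matrices; this tridiagonal case just stands in for the idea.)
def thomas(lo, di, up, b):
    """Solve a tridiagonal system; lo/up are the sub-/super-diagonals."""
    n = len(di)
    c = np.zeros(n); g = np.zeros(n)
    c[0] = up[0] / di[0]
    g[0] = b[0] / di[0]
    for i in range(1, n):
        m = di[i] - lo[i - 1] * c[i - 1]     # forward elimination
        if i < n - 1:
            c[i] = up[i] / m
        g[i] = (b[i] - lo[i - 1] * g[i - 1]) / m
    x = np.zeros(n)
    x[-1] = g[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = g[i] - c[i] * x[i + 1]
    return x

d = 50
lo = np.ones(d - 1); up = np.ones(d - 1); di = -4.0 * np.ones(d)
b = np.ones(d)
x = thomas(lo, di, up, b)
```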

Download the pdf-file

*On the diagonal approximation of full matrices*

W.M. Lioen

1995, NM-R9518, ISSN 0169-0388

In this paper the construction of diagonal matrices, in some sense approximating the inverse of a given square matrix, is described. The matrices are constructed using the well-known computer algebra system Maple. The techniques we show are applicable to square matrices in general. Results are given for use in parallel diagonal-implicit Runge-Kutta (PDIRK) methods. For an *s*-stage Radau IIA corrector we conjecture *s*! possibilities for the diagonal matrices.
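The report's Maple-based construction is specific to PDIRK correctors; as a generic numerical illustration of "a diagonal matrix approximating the inverse", one can take the diagonal D minimizing ‖DA − I‖ in the Frobenius norm, for which the rows decouple and a closed form exists (the matrix A below is an assumed random example):

```python
import numpy as np

# Generic illustration only (NOT the report's symbolic construction):
# the diagonal D minimizing ||D A - I|| in the Frobenius norm. The rows
# decouple, giving the closed form  d_i = a_ii / ||row_i(A)||^2.
rng = np.random.default_rng(2)
s = 4
A = np.eye(s) + 0.1 * rng.standard_normal((s, s))   # assumed test matrix
D = np.diag(np.diag(A) / (A * A).sum(axis=1))
# D A is at least as close to the identity (in Frobenius norm) as the
# naive choice diag(1/a_ii) A, by optimality of D among diagonal matrices
```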

Download the pdf-file

*Approximating Runge-Kutta matrices by triangular matrices*

W. Hoffmann, J.J.B. de Swart

1995, NM-R9517, ISSN 0169-0388

The implementation of implicit Runge-Kutta methods requires the solution of large systems of non-linear equations. Normally these equations are solved by a modified Newton process, which can be very expensive for problems of high dimension. The recently proposed triangularly implicit iteration methods for ODE-IVP solvers [HSw95] replace the Runge-Kutta matrix A in the Newton process by a triangular matrix T that approximates A, thereby making the method suitable for parallel implementation. The matrix T is constructed according to a simple procedure, such that the stiff error components in the numerical solution are strongly damped. In this paper we prove for a large class of Runge-Kutta methods that this procedure can be carried out and that the diagonal entries of T are positive. This means that the linear systems to be solved have a non-singular matrix.

Download the pdf-file

*Triangularly implicit iteration methods for ODE-IVP solvers*

P.J. van der Houwen, J.J.B. de Swart

1995, NM-R9510, ISSN 0169-0388

It often happens that iteration processes used for solving the implicit relations arising in ODE-IVP methods only start to converge rapidly after a certain number of iterations. Fast convergence right from the beginning is particularly important if we want to use so-called step-parallel iterations in which the iteration method is concurrently applied at a number of step points. In this paper, we construct highly parallel iteration methods that do converge fast from the first iteration on. Our starting point is the PDIRK method (parallel, diagonal-implicit, iterated Runge-Kutta method), designed for solving implicit Runge-Kutta equations on parallel computers. The PDIRK method may be considered as a Newton-type iteration in which the Newton Jacobian is 'simplified' to block-diagonal form.
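On the scalar test problem y′ = λy the PDIRK-style iteration takes a particularly simple form. The sketch below uses D = diag(A) for a 2-stage Radau IIA corrector purely for illustration; the actual papers choose D to optimize convergence.

```python
import numpy as np

# PDIRK-style diagonal iteration on the scalar test problem y' = lam*y.
# The implicit Radau IIA stage equations  Y = y_n*e + h*lam*A @ Y  are
# iterated as  Y_m = (I - h*lam*D)^{-1} (y_n*e + h*lam*(A - D) @ Y_{m-1}).
# Because D is diagonal, the s implicit stage solves are mutually
# independent, hence parallel. D = diag(A) is an assumed simple choice.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])            # 2-stage Radau IIA
D = np.diag(np.diag(A))
h, lam, y_n = 0.1, -50.0, 1.0           # a stiff scalar problem
e = np.ones(2)
I2 = np.eye(2)
Y = y_n * e
for _ in range(60):
    Y = np.linalg.solve(I2 - h * lam * D, y_n * e + h * lam * (A - D) @ Y)
# the iterates converge to the exact implicit stage values
Y_exact = np.linalg.solve(I2 - h * lam * A, y_n * e)
```

Unlike plain fixed-point iteration, this scheme converges even for strongly negative hλ, which is what makes it usable for stiff problems; the "slow start" of such iterations is exactly the issue the paper addresses.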

Download the pdf-file

*Step-parallel algorithms for stiff initial value problems*

W.A. van der Veen

1995, NM-R9507, ISSN 0169-0388

For the parallel integration of stiff initial value problems, three types of parallelism can be employed: "parallelism across the problem", "parallelism across the method" and "parallelism across the steps". Recently, methods based on Runge-Kutta schemes that use parallelism across the method have been proposed in [5, 6]. These methods solve implicit Runge-Kutta schemes by means of the so-called diagonal iteration scheme and are called PDIRK methods. The experiments described in [5] show that the speedup factor of certain high-order PDIRK methods is about 2 with respect to a good sequential code. However, a disadvantage of the high-order PDIRK methods is that a relatively large number of iterations is needed for each step. This disadvantage can be compensated by employing step-parallelism.

Download the pdf-file

*Iteration of Runge-Kutta methods with block triangular Jacobians*

P.J. van der Houwen, B.P. Sommeijer

1995, NM-R9506, ISSN 0169-0388

We shall consider iteration processes for solving the implicit relations associated with implicit Runge-Kutta (RK) methods applied to stiff initial value problems (IVPs). The conventional approach for solving the RK equations uses Newton iteration employing the full righthand side Jacobian. For IVPs of large dimension, this approach is not attractive because of the high costs involved in the LU-decomposition of the Jacobian of the RK equations. Several proposals have been made to reduce these high costs. The most well-known remedy is the use of similarity transformations by which the RK Jacobian is transformed to a block-diagonal matrix whose blocks can be treated independently; alternatively, the RK Jacobian may be approximated by a block-triangular matrix. Such a grossly simplified Newton iteration process allows for a considerable amount of parallelism. However, the important issue is whether this block-triangular approach does converge. It is the aim of this paper to get insight into the effect on the convergence of block-triangular Jacobian approximations.

Download the pdf-file