GSoC 2014 Application: Thilina Rathnayake, Linear Algebra Module for CSymPy

Preliminaries

About Me

Name: Thilina Rathnayake

University: University of Moratuwa, Sri Lanka.

Email: [email protected]

Github: thilinarmtb

IRC: thilinarmtb on freenode

I am a third-year undergraduate in the Computer Science and Engineering Department, University of Moratuwa. Currently I am working as an intern for six months, which is a compulsory part of my degree. The internship ends in late April, leaving me enough time to get ready for my GSoC project if I am selected. I will be able to work 40 hours a week on the project.

Personal Background

I have been programming for about six years now. I started with C, then learned C++, and after playing around with a few languages, ended up with Python. At present I do most of my coding in C++ and Python. Very recently, I have found Cython really interesting.

Recently, I stopped using IDEs for development since they just make things complicated. I respect the KISS principle: Keep it simple, stupid. This made me switch to a simple gedit environment with some useful extensions, and I have found this arrangement extremely easy to work with. I have worked with three different Linux flavours during the past few years: I started with Ubuntu, then switched to Fedora and Debian respectively. Currently I am using Ubuntu again. I hardly ever use Windows, and run it in a virtual machine if I have to.

I participated in last year's GSoC and was accepted by SymPy to develop a Diophantine equation module, which I successfully completed. A lot of information about the project can be found on my blog [2]. I enjoy solving problems using C++ and Python. I also have some interest in data mining and have competed in a few competitions on Kaggle.

I have taken an elementary linear algebra course which covered most of the topics in this proposal. I think it provides me with a good foundation to successfully finish this project.

Patch Requirement

I have made several PRs that have been merged into the SymPy code base. #2168 is my first PR, implementing the Diophantine equation module. I have merged PRs in CSymPy as well; have a look at #99 and #103.

Project: Linear Algebra Module for CSymPy

Introduction

Linear algebra is a branch of mathematics dealing with vector spaces and linear mappings [1]. A vector space is a collection of vectors, and vectors are defined using elements from a field; these elements are known as scalars. A linear mapping T is a function from a vector space V to a vector space W such that for all vectors v1, v2, v belonging to V and any scalar a, we have T(v1 + v2) = T(v1) + T(v2) and T(a v) = a T(v). If the dimension of V is n and the dimension of W is m, this mapping can be represented by the equation T(v) = Av, where A is an m x n matrix. In addition, there are numerous concepts related to vectors and vector spaces, like inner product spaces, span and basis.

Linear algebra is widely used by many professionals like statisticians, physicists, biologists, computer scientists, engineers, and social scientists. It's also widely used in simulations and modeling. Perhaps the most interesting application of linear algebra is to solve a system of linear equations. A system of linear equations can be represented in Matrix notation as Ax = b where A is the coefficient Matrix and x and b are column matrices representing the variables and the constant terms respectively. Determinants, Cramer's rule and Gaussian elimination can be used to solve a system of linear equations and several other techniques are available depending on the domain of the solutions.

Philosophy

Currently there are various libraries specifically designed to perform numerical linear algebra algorithms. There are very fast numerical libraries like FLINT [3], which is written in C; C++ libraries like Linbox [4] and LELA [5] also exist. FLINT, Linbox and LELA support arbitrary-precision calculations as well. Reusing these libraries makes much more sense than re-implementing the algorithms ourselves, for several reasons. One serious advantage is that these libraries have been in use for a considerable time and have proven their capabilities; most of the bugs have been identified and fixed. Also, it is very unlikely that our own implementations (if we decide to write them) could beat them in performance, at least not in the near future. So by using these existing libraries, we can very quickly have a well-tested and high-speed numerical linear algebra module in CSymPy, and it will give us more time to focus on achieving the project's goals. Although there are very compelling advantages in using a library, we have to make sure that we don't sacrifice CSymPy's maintainability, extensibility or user-friendliness in doing so. To my knowledge, both FLINT and LELA are released under the GNU General Public License, which is incompatible with CSymPy's MIT license. We can use them as optional dependencies to avoid this conflict, or we can go with Linbox; but Linbox has hard dependencies on Givaro [6] and fflas-ffpack [7].

On the other hand, libraries for symbolic linear algebra algorithms are not that common. To my knowledge, Maxima and GiNaC are the only libraries which support this; Maxima is written in Lisp and GiNaC is GPL-licensed. So CSymPy will have to provide symbolic linear algebra algorithms itself. This project mainly hopes to address the issue of creating a symbolic linear algebra library for CSymPy. This will be very useful in PyDy and many other areas.

Also, since CSymPy is intended to be used by SymPy through Python wrappers, the idea is not to worry about the API at the CSymPy level and to just provide functions which do the required work.

Deliverables

Most of the algorithms found in SymPy can be re-implemented in C++ so that they perform faster. Here is a list of the things I wish to do during the summer.

  • Matrices (General)
    • Scalar multiplication and transpose
    • Addition and Multiplication of two matrices
    • Elementary row operations: Adding two rows, multiply a row by a scalar and interchange two rows.
    • Sub-matrix: Obtained by removing a set of the rows and columns of the original matrix
    • Rank of a matrix
  • Matrix decomposition
  • Square matrices
    • Trace, Determinant and Inverse
    • Checks for identifying Symmetric / Skew symmetric matrices, identity matrices, orthogonal matrices, triangular matrices
    • Characteristic polynomial
    • Eigenvalues
  • Solutions of the system Ax = b
  • Miscellaneous

Basic Architecture

I wish to follow an architecture similar to what SymPy has right now. There is a MatrixBase class that provides the deepest abstraction of a matrix. It will contain only two members, the number of rows and the number of columns, and will provide virtual methods for the operations and algorithms we wish to implement. Any class that inherits from MatrixBase should provide its own implementation of these algorithms and operations. This enables us to create separate classes for dense and sparse matrices and let each implement the algorithms to suit its structure. Further, each will have to select its own data representation. Following what SymPy has done, I wish to use a vector for storing the elements of a dense matrix and a map for sparse matrices. Only the non-zero values will be stored in the map, keyed by the pair (i, j), where i and j are the row number and column number of the element respectively; if the key (i, j) is not found in the map, the value of the (i, j)th element of the matrix is zero. There are several other representations available for sparse matrices, like an unordered map or CSR [7.5]. We can play around with this underlying representation as long as we provide a uniform way to access the elements of both dense and sparse matrices.
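The map-based sparse lookup described above can be sketched as follows. This is a minimal illustration only, using `double` in place of CSymPy's `RCP<const Basic>`; the function name `sparse_get` is hypothetical.

```cpp
#include <map>
#include <utility>

// Sketch: a sparse matrix stores only its non-zero entries, keyed by
// (row, col). A missing key means the entry is zero, so lookup falls
// back to 0 instead of inserting anything into the map.
double sparse_get(const std::map<std::pair<unsigned, unsigned>, double> &m,
                  unsigned i, unsigned j)
{
    auto it = m.find(std::make_pair(i, j));
    return (it == m.end()) ? 0.0 : it->second;
}
```

Note that using `find` rather than `operator[]` keeps the lookup read-only, which matters once `get` is a `const` method.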

In the proposal I have given more emphasis to the algorithms for dense matrices, but I hope to implement algorithms for sparse matrices as well. This won't be that hard, since SymPy currently has a set of algorithms for sparse matrices. Below is an approximate representation of the architecture.

class MatrixBase {
public:
    MatrixBase(unsigned row, unsigned col);

    // These functions create a new instance of either DenseMatrix or SparseMatrix
    // and return an RCP pointing to the result
    virtual RCP<const MatrixBase> add_matrix(const MatrixBase &other) const = 0;
    virtual RCP<const MatrixBase> mul_matrix(const MatrixBase &other) const = 0;
    virtual RCP<const MatrixBase> add_basic(const Basic &other) const = 0;
    virtual RCP<const MatrixBase> mul_basic(const Basic &other) const = 0;

    // To get the values of row and col
    unsigned nrows();
    unsigned ncols();

    // These should be implemented by the derived classes. If not applicable raise
    // an exception
    virtual RCP<const Basic> get(unsigned i, unsigned j) const = 0;
    virtual void set(unsigned i, unsigned j, const RCP<const Basic> &e) = 0;
    virtual unsigned rank() const = 0;
    virtual RCP<const Basic> det() const = 0;
    virtual RCP<const MatrixBase> inv() const = 0;
    // ... and many more

private:
    unsigned row;
    unsigned col;
};

class DenseMatrix: public MatrixBase {
public:
    // Constructors
    DenseMatrix(unsigned row, unsigned column);
    DenseMatrix(unsigned row, unsigned column, const std::vector<RCP<const Basic>> &l);

    // Should implement all the virtual methods from MatrixBase
    // and throw an exception if a method is not applicable.
    // add_matrix, mul_matrix will have to find the correct function
    // to call depending on the `other` argument.
private:
    std::vector<RCP<const Basic>> m;
};

class SparseMatrix: public MatrixBase {
public:
    // Constructors
    SparseMatrix(unsigned row, unsigned column);
    SparseMatrix(unsigned row, unsigned column, const std::map<std::pair<int, int>, RCP<const Basic>> &l);

    // Should implement all the virtual methods from MatrixBase
    // and throw an exception if a method is not applicable.
    // add_matrix and mul_matrix will have to find the correct function
    // to call depending on the `other` argument.
private:
    std::map<std::pair<int, int>, RCP<const Basic>> m;
};

RCP<const DenseMatrix> mul_dense_dense(const DenseMatrix &A, const DenseMatrix &B);
RCP<const DenseMatrix> add_dense_dense(const DenseMatrix &A, const DenseMatrix &B);
RCP<const SparseMatrix> mul_sparse_sparse(const SparseMatrix &A, const SparseMatrix &B);
RCP<const SparseMatrix> add_sparse_sparse(const SparseMatrix &A, const SparseMatrix &B);
RCP<const DenseMatrix> add_sparse_dense(const SparseMatrix &A, const DenseMatrix &B);

These functions are called by the add_matrix and mul_matrix implementations in DenseMatrix and SparseMatrix, according to the type (DenseMatrix or SparseMatrix) of other. There are several ways to do this dispatch; one can also do it with if statements by checking the type of other. I am going to simply use ifs until we figure out a better way.

Plan

I plan to implement the module in separate phases as listed below.

Phase #1

This phase lays the foundation for all the phases which follow. I wish to implement the basic structure of the matrix class and the most basic operations, which are at the heart of the module.

  • Matrix structure

Good design is as little design as possible, so we should concentrate on keeping things as simple as possible. When thinking about a representation for a matrix, we first need to think about how we are going to store the elements. Both SymPy and GiNaC [8] follow a similar method: SymPy stores the elements in a flat list and GiNaC stores them in a vector. In CSymPy we can likewise use a vector that stores objects of type Basic. The reason to go with Basic instead of Symbol is that we can then support Add, Mul, Function and all the other types derived from Basic. The other two parameters needed are the row size and the column size. If the number of rows is m and the number of columns is n, then the (i, j)th entry of the matrix will be the (i*n + j)th element of the vector (both i and j are indexed from zero).
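The row-major index mapping above is small enough to pin down in code. A sketch, with the helper name `flat_index` being hypothetical:

```cpp
// Row-major flat index for an m x n matrix: entry (i, j) lives at
// position i*n + j of the flat vector. Note that it is the column
// count n, not the row count, that multiplies i.
unsigned flat_index(unsigned n_cols, unsigned i, unsigned j)
{
    return i * n_cols + j;
}
```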

One might find that providing members to store the determinant, rank or other fixed properties of the matrix would speed up certain user requests, since we would not need to re-calculate them over and over again. In my opinion, that kind of architecture would only be beneficial if we know for sure that the matrix is immutable; in that case, we can derive an ImmutableMatrix class from this base class, adding the required members.

  • Matrix Transpose, Scalar multiplication, Addition and Multiplication.

The transpose of a matrix is obtained by swapping the (i, j)th element with the (j, i)th element. Scalar multiplication of a matrix by a scalar (here, any type derived from Basic) is performed by multiplying each element of the matrix by that scalar. Addition of two matrices is also a trivial task and can only be performed if the dimensions of the two matrices are equal: one performs the addition element-wise and creates a new matrix from the dimensions and the new vector resulting from the addition.

For matrix multiplication, GiNaC uses the naive algorithm [10], which has complexity O(mnp) when multiplying two matrices of dimensions m x p and p x n. There is a more efficient method (especially for matrices with larger dimensions) known as the Strassen algorithm [11]. Two implementations of the algorithm can be found, one in C [12] and the other in Java [13]. Since there is no considerable performance difference between the naive method and Strassen's method for smaller dimensions, this should be implemented only if we have time at the end of the summer.
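The naive O(mnp) multiplication on flat row-major vectors can be sketched as below. This is a numeric illustration with `double` standing in for `RCP<const Basic>`; the name `mul_naive` is hypothetical.

```cpp
#include <vector>

// Naive multiplication of an m x p matrix A by a p x n matrix B,
// both stored row-major in flat vectors. Complexity is O(m*n*p).
std::vector<double> mul_naive(const std::vector<double> &A,
                              const std::vector<double> &B,
                              unsigned m, unsigned p, unsigned n)
{
    std::vector<double> C(m * n, 0.0);
    for (unsigned i = 0; i < m; i++)
        for (unsigned k = 0; k < p; k++)      // i-k-j order walks B row-wise,
            for (unsigned j = 0; j < n; j++)  // which is friendlier to the cache
                C[i*n + j] += A[i*p + k] * B[k*n + j];
    return C;
}
```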

  • Rank of a matrix

The rank of a matrix can be calculated by converting the matrix into an upper triangular form and counting the non-zero rows. This is implemented in GiNaC [14].
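As a rough sketch of that idea, the following computes the rank of a numeric matrix by Gaussian elimination with partial pivoting. It uses `double` and an epsilon test for zero pivots; a symbolic version would instead test pivots for exact symbolic zero, and the name `rank_numeric` is hypothetical.

```cpp
#include <vector>
#include <cmath>
#include <utility>

// Rank via row reduction of a copy of the m x n row-major matrix:
// reduce to (upper) echelon form and count the pivot rows.
unsigned rank_numeric(std::vector<double> A, unsigned m, unsigned n)
{
    const double eps = 1e-12;
    unsigned r = 0;                           // number of pivots found so far
    for (unsigned col = 0; col < n && r < m; col++) {
        unsigned piv = r;                     // pick the largest pivot candidate
        for (unsigned i = r + 1; i < m; i++)
            if (std::fabs(A[i*n + col]) > std::fabs(A[piv*n + col]))
                piv = i;
        if (std::fabs(A[piv*n + col]) < eps)
            continue;                         // no pivot in this column
        for (unsigned j = 0; j < n; j++)      // swap the pivot row into place
            std::swap(A[r*n + j], A[piv*n + j]);
        for (unsigned i = r + 1; i < m; i++) {
            double f = A[i*n + col] / A[r*n + col];
            for (unsigned j = col; j < n; j++)
                A[i*n + j] -= f * A[r*n + j]; // eliminate below the pivot
        }
        r++;
    }
    return r;
}
```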

Phase #2

In phase two, I mainly wish to focus on matrix decomposition and square matrices related algorithms.

  • Matrix decompositions

In a matrix decomposition, a single matrix is represented as a product of two or more simpler forms which can be used to simplify certain calculations. For example, the system Ax = b can be easily solved by decomposing A into two matrices L and U, where A = LU, L is lower triangular and U is upper triangular. Then the system Ax = b is equivalent to the system L(Ux) = b, which can be solved by first solving Ly = b and then solving Ux = y. These two systems can be solved by forward- and back-substitution respectively, since L and U are triangular.
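The two triangular solves can be sketched as below, again with `double` in place of symbolic entries and hypothetical names `forward_sub` and `back_sub`; both assume non-zero diagonal entries.

```cpp
#include <vector>

// Solve Ly = b for lower triangular L (n x n, row-major).
std::vector<double> forward_sub(const std::vector<double> &L,
                                const std::vector<double> &b, unsigned n)
{
    std::vector<double> y(n);
    for (unsigned i = 0; i < n; i++) {
        double s = b[i];
        for (unsigned j = 0; j < i; j++)  // subtract already-known terms
            s -= L[i*n + j] * y[j];
        y[i] = s / L[i*n + i];
    }
    return y;
}

// Solve Ux = y for upper triangular U, working from the last row up.
std::vector<double> back_sub(const std::vector<double> &U,
                             const std::vector<double> &y, unsigned n)
{
    std::vector<double> x(n);
    for (unsigned i = n; i-- > 0; ) {
        double s = y[i];
        for (unsigned j = i + 1; j < n; j++)
            s -= U[i*n + j] * x[j];
        x[i] = s / U[i*n + i];
    }
    return x;
}
```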

During the summer, I wish to implement LU decomposition, LDL decomposition and Cholesky decomposition. LU decomposition can only be applied to square matrices. The simplest LU decomposition algorithm transforms a given square matrix A into an upper triangular matrix using elementary row operations, and obtains the lower triangular matrix by applying the same row operations, in the same order, to the identity matrix with the same dimensions as A. There is also another method, used by SymPy, called fraction-free LU decomposition [15]. The code is available in SymPy and the paper contains pseudo code for the algorithm, so this can be implemented without running into any serious problems.
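The "simplest" scheme described above amounts to recording the elimination multipliers, which a Doolittle-style sketch makes concrete. This is a numeric illustration only (no pivoting, all pivots assumed non-zero, `double` instead of symbolic entries); the function name `lu_simple` is hypothetical.

```cpp
#include <vector>

// Doolittle LU without pivoting for an n x n row-major matrix A.
// On return, L is unit lower triangular, U upper triangular, A = L*U.
void lu_simple(const std::vector<double> &A, std::vector<double> &L,
               std::vector<double> &U, unsigned n)
{
    L.assign(n * n, 0.0);
    U = A;                                    // U starts as a copy of A
    for (unsigned i = 0; i < n; i++)
        L[i*n + i] = 1.0;                     // unit diagonal of L
    for (unsigned k = 0; k < n; k++)
        for (unsigned i = k + 1; i < n; i++) {
            double f = U[i*n + k] / U[k*n + k];
            L[i*n + k] = f;                   // record the row-operation multiplier
            for (unsigned j = k; j < n; j++)
                U[i*n + j] -= f * U[k*n + j]; // eliminate below the pivot
        }
}
```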

Cholesky decomposition can only be performed on a symmetric positive definite matrix A. A is decomposed into two matrices L and L*, where L is a lower triangular matrix with positive real diagonal entries and L* is its conjugate transpose. Since complex numbers are not implemented in CSymPy, the conjugate transpose of a matrix is simply its transpose.
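For the real case, the standard Cholesky recurrence can be sketched as follows. This is a numeric sketch with `double` entries and a hypothetical name `cholesky_numeric`; a symbolic version would build `sqrt` expressions on Basic objects instead.

```cpp
#include <vector>
#include <cmath>

// Cholesky A = L * L^T for a symmetric positive definite n x n matrix
// (real entries, so the conjugate transpose is just the transpose).
std::vector<double> cholesky_numeric(const std::vector<double> &A, unsigned n)
{
    std::vector<double> L(n * n, 0.0);
    for (unsigned i = 0; i < n; i++)
        for (unsigned j = 0; j <= i; j++) {
            double s = A[i*n + j];
            for (unsigned k = 0; k < j; k++)
                s -= L[i*n + k] * L[j*n + k];   // subtract known products
            // diagonal entries take a square root; off-diagonal divide by it
            L[i*n + j] = (i == j) ? std::sqrt(s) : s / L[j*n + j];
        }
    return L;
}
```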

There are numerous decompositions defined for Matrices [16]. If I finish the proposed work early, this is one of the areas I can come back and improve.

  • Trace, Determinant and Inverse

The trace of a square matrix is the sum of its diagonal elements. So we have to sum up the elements at positions i*n + i of the flat vector, for i in [0, 1, ..., n-1].

There are several ways to calculate the determinant [17]. One way is the direct application of the algebraic definition, summing up the product over all possible permutations with the appropriate sign. A more efficient way is to calculate the determinant recursively using cofactors and minors. One can also calculate the determinant by transforming the matrix into a triangular matrix and taking the product of the diagonal elements. Currently SymPy uses three methods to calculate the determinant: the Bareiss algorithm, the Berkowitz algorithm and LU decomposition. Bareiss's method is an extension of Gaussian elimination. If we have the LU factorization of a matrix, its determinant is equal to the product of the determinants of L and U, each of which is the product of the diagonal entries. Code for these algorithms is already there in SymPy, so they can be ported easily. GiNaC calculates some heuristics to decide which algorithm to use; we can make use of this too.
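The triangularization approach can be sketched numerically as below, tracking the sign change from each row swap. This uses `double` entries and partial pivoting, so it is a floating-point illustration rather than the exact fraction-free scheme; the name `det_numeric` is hypothetical.

```cpp
#include <vector>
#include <cmath>
#include <utility>

// Determinant by reducing a copy of the n x n row-major matrix to upper
// triangular form; the determinant is the product of the diagonal, with
// a sign flip for every row swap performed along the way.
double det_numeric(std::vector<double> A, unsigned n)
{
    double d = 1.0;
    for (unsigned k = 0; k < n; k++) {
        unsigned piv = k;
        for (unsigned i = k + 1; i < n; i++)      // partial pivoting
            if (std::fabs(A[i*n + k]) > std::fabs(A[piv*n + k]))
                piv = i;
        if (A[piv*n + k] == 0.0)
            return 0.0;                           // singular matrix
        if (piv != k) {
            for (unsigned j = 0; j < n; j++)
                std::swap(A[k*n + j], A[piv*n + j]);
            d = -d;                               // a row swap flips the sign
        }
        for (unsigned i = k + 1; i < n; i++) {
            double f = A[i*n + k] / A[k*n + k];
            for (unsigned j = k; j < n; j++)
                A[i*n + j] -= f * A[k*n + j];
        }
        d *= A[k*n + k];
    }
    return d;
}
```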

SymPy uses several algorithms to find the inverse of a matrix as well. These algorithms are based on Gaussian elimination, LU decomposition and the adjoint matrix. We can port these algorithms to C++.

Phase #3

Most of the work in this phase won't be that difficult, because determinants, inverses and matrix decompositions will have been implemented by then. So within this phase, it is likely that I will also be able to work on miscellaneous algorithms which were not mentioned in this proposal.

  • Checks for identifying Symmetric / Skew symmetric matrices, identity matrices, orthogonal matrices, triangular matrices

The implementation is trivial; we just need to check whether the matrix satisfies the respective definition.

  • Characteristic polynomial

The characteristic polynomial of a matrix A is given by the determinant of (tI - A), where t is a parameter. The characteristic polynomial can be easily calculated using the Berkowitz algorithm found in the paper [18].

  • Eigenvalues / eigenvectors

The eigenvalues of a square matrix A are the roots of its characteristic polynomial. CSymPy currently doesn't have an equation solver, but it won't be that hard to implement a solver for the quadratic case. If we succeed in doing that, we can find the eigenvalues of a 2x2 matrix.
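For the 2x2 case this reduces to the quadratic formula applied to the characteristic polynomial t^2 - tr(A) t + det(A). A numeric sketch (real-eigenvalue case only, matching CSymPy's current lack of complex numbers; the name `eig2x2` is hypothetical):

```cpp
#include <cmath>
#include <utility>

// Eigenvalues of the 2x2 matrix [[a, b], [c, d]]: roots of
// t^2 - (a + d) t + (a d - b c) via the quadratic formula.
// Assumes a non-negative discriminant, i.e. real eigenvalues.
std::pair<double, double> eig2x2(double a, double b, double c, double d)
{
    double tr = a + d;
    double det = a * d - b * c;
    double disc = std::sqrt(tr * tr - 4.0 * det);
    return std::make_pair((tr + disc) / 2.0, (tr - disc) / 2.0);
}
```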

  • Solutions of the system Ax = b

Linear systems are solved in SymPy using several methods based on the LU, LDL and Cholesky decompositions. These topics have come up earlier in the proposal, and we only have to use the results of the decompositions. I wish to implement the same algorithms in CSymPy.

  • Miscellaneous

There are a considerable number of algorithms in SymPy which I haven't mentioned here. They can be selected for implementation in the module according to their usage, after discussing with my mentor and the community.

Tentative Schedule

This is only a loose sketch of what I wish to follow during the summer.

  • Week 1
    • Matrix structure
  • Week 2 - 3
    • Scalar multiplication and transpose
    • Addition and Multiplication of two matrices
    • Elementary row operations: Adding two rows, multiply a row by a scalar and interchange two rows.
    • Sub-matrix: Obtained by removing a set of the rows and columns of the original matrix
    • Rank of a matrix
  • Week 4-5
    • Matrix decomposition: mainly LU, LDL and Cholesky. If I finish these early, I can implement several other decompositions.
  • Week 6
    • Trace, Determinant and Inverse
  • Week 7
    • Checks for identifying Symmetric / Skew symmetric matrices, identity matrices, orthogonal matrices, triangular matrices
    • Characteristic polynomial
  • Week 9
    • Eigenvalues and eigenvectors
  • Week 10
    • Solutions of the system Ax = b
  • Week 11-12
    • Implement some specialized algorithms for the SparseMatrix class.
  • Week 13-14
    • Miscellaneous

Tests will be carried out separately for each algorithm along with their implementation.

References

[1] http://en.wikipedia.org/wiki/Linear_algebra

[2] http://thilinaatsympy.wordpress.com/

[3] http://www.flintlib.org/

[4] http://linalg.org/linbox-html/index.html

[5] http://www.singular.uni-kl.de/lela/

[6] https://forge.imag.fr/projects/givaro

[7] http://linalg.org/projects/fflas-ffpack

[7.5] http://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_.28CSR_or_CRS.29

[8] http://www.ginac.de/

[9] http://conradsanderson.id.au/misc/sanderson_templates_lecture_uqcomp7305.pdf

[10] http://hg.pynac.org/pynac/src/b7fe62bfb3a682773a7157d965deea2b85e3719c/ginac/matrix.cpp?at=default#cl-593

[11] http://en.wikipedia.org/wiki/Strassen_algorithm

[12] http://gitorious.org/intelws2010/matrix-multiplication/blobs/master/src/matmul.c

[13] http://www.cs.huji.ac.il/~omrif01/Strassen

[14] http://hg.pynac.org/pynac/src/b7fe62bfb3a682773a7157d965deea2b85e3719c/ginac/matrix.cpp?at=default#cl-1091

[15] W. Zhou & D.J. Jeffrey, "Fraction-free matrix factors: new forms for LU and QR factors". Frontiers in Computer Science in China, Vol 2, no. 1, pp. 67-80, 2008.

[16] http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/decomp.html

[17] http://www.cs.tamu.edu/media/12380/tamu-cs-tr-2005-11-4.pdf

[18] S.J. Berkowitz, On computing the determinant in small parallel time using a small number of processors, ACM, Information Processing Letters 18, 1984, pp. 147-150
