Introduction to Computational Linear Algebra, 1st Edition by Bernard Philippe, Jocelyne Erhel, Nabil Nassif – Ebook PDF Instant Download/Delivery: ISBN 9781482258738, 1482258730

Product details:
ISBN 10: 1482258730
ISBN 13: 9781482258738
Authors: Bernard Philippe, Jocelyne Erhel, Nabil Nassif
Teach Your Students Both the Mathematics of Numerical Methods and the Art of Computer Programming
Introduction to Computational Linear Algebra presents classroom-tested material on computational linear algebra and its application to numerical solutions of partial and ordinary differential equations. The book is designed for senior undergraduate students in mathematics and engineering as well as first-year graduate students in engineering and computational science.
The text first introduces BLAS operations of types 1, 2, and 3 adapted to a scientific computing environment, specifically MATLAB®. It next covers the basic mathematical tools needed in numerical linear algebra and discusses classical material on Gauss elimination as well as the LU and Cholesky factorizations of matrices. The text then shows how to solve linear least squares problems, provides a detailed numerical treatment of the algebraic eigenvalue problem, and discusses (indirect) iterative methods for solving systems of linear equations. The final chapter illustrates how to solve discretized sparse systems of linear equations. Each chapter ends with exercises and computer projects.
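To give a quick flavor of the material, here is a minimal MATLAB sketch (illustrative only, not taken from the book; the matrix and right-hand side are made-up test data) that solves a small linear system with the backslash operator and checks the residual.

% Small made-up test system solved with the backslash operator,
% which relies on an LU factorization for a general square matrix.
A = [4 1 0; 1 3 1; 0 1 2];
b = [1; 2; 3];
x = A \ b;
res = norm(b - A*x);   % residual norm, expected to be near machine precision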
Table of contents:
1 Basic Linear Algebra Subprograms – BLAS
1.1 An Introductory Example
1.2 Matrix Notations
1.3 IEEE Floating Point Systems and Computer Arithmetic
1.4 Vector-Vector Operations: Level-1 BLAS
1.5 Matrix-Vector Operations: Level-2 BLAS
1.6 Matrix-Matrix Operations: Level-3 BLAS
1.6.1 Matrix Multiplication Using GAXPYS
1.6.2 Matrix Multiplication Using Scalar Products
1.6.3 Matrix Multiplication Using External Products
1.6.4 Block Multiplications
1.6.5 An Efficient Data Management
1.7 Sparse Matrices: Storage and Associated Operations
1.8 Exercises
1.9 Computer Project: Strassen Algorithm
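As a rough illustration of the three BLAS levels listed above, the following MATLAB sketch (not from the book; all data are made up) performs a level-1 axpy update, builds a matrix-vector product from column-wise gaxpy updates, and finishes with a level-3 matrix-matrix product.

n = 4;
alpha = 2;  x = rand(n, 1);  y = rand(n, 1);
A = rand(n);  B = rand(n);

% Level-1 BLAS: axpy, y <- alpha*x + y
y = alpha*x + y;

% Level-2 BLAS: y2 = A*x assembled from column (gaxpy) updates
y2 = zeros(n, 1);
for j = 1:n
    y2 = y2 + A(:, j) * x(j);
end

% Level-3 BLAS: matrix-matrix product
C = A * B;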
2 Basic Concepts for Matrix Computations
2.1 Vector Norms
2.2 Complements on Square Matrices
2.2.1 Definition of Important Square Matrices
2.2.2 Use of Orthonormal Bases
2.2.3 Gram-Schmidt Process
2.2.4 Determinants
2.2.5 Eigenvalue-Eigenvector and Characteristic Polynomial
2.2.6 Schur’s Decomposition
2.2.7 Orthogonal Decomposition of Symmetric Real and Complex Hermitian Matrices
2.2.7.1 A Real and Symmetric: A = Aᵀ
2.2.7.2 A Complex Hermitian: A = A*
2.2.8 Symmetric Positive Definite and Positive Semi-Definite Matrices
2.3 Rectangular Matrices: Ranks and Singular Values
2.3.1 Singular Values of a Matrix
2.3.2 Singular Value Decomposition
2.4 Matrix Norms
2.5 Exercises
2.6 Computer Exercises
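The following MATLAB sketch (illustrative only, with a made-up matrix) touches the main tools of this chapter: vector and matrix norms, a classical Gram-Schmidt orthogonalization of the columns of a rectangular matrix, and a singular value decomposition.

A = [3 1; 1 3; 0 2];          % small rectangular test matrix
v = [1; -2; 2];

% Vector and matrix norms
n1 = norm(v, 1);  n2 = norm(v);  ninf = norm(v, inf);
nA = norm(A, 2);              % largest singular value of A

% Classical Gram-Schmidt on the columns of A (no re-orthogonalization)
[m, k] = size(A);
Q = zeros(m, k);
for j = 1:k
    q = A(:, j);
    for i = 1:j-1
        q = q - (Q(:, i)' * A(:, j)) * Q(:, i);
    end
    Q(:, j) = q / norm(q);
end

% Singular value decomposition
[U, S, V] = svd(A);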
3 Gauss Elimination and LU Decompositions of Matrices
3.1 Special Matrices for LU Decomposition
3.1.1 Triangular Matrices
3.1.2 Permutation Matrices
3.2 Gauss Transforms
3.2.1 Preliminaries for Gauss Transforms
3.2.2 Definition of Gauss Transforms
3.3 Naive LU Decomposition for a Square Matrix with Principal Minor Property (pmp)
3.3.1 Algorithm and Operations Count
3.3.2 LDL Decomposition of a Matrix Having the Principal Minor Property (pmp)
3.3.3 The Case of Symmetric and Positive Definite Matrices: Cholesky Decomposition
3.3.4 Diagonally Dominant Matrices
3.4 PLU Decompositions with Partial Pivoting Strategy
3.4.1 Unscaled Partial Pivoting
3.4.2 Scaled Partial Pivoting
3.4.3 Solving a System Ax = b Using the LU Decomposition
3.5 MATLAB Commands Related to the LU Decomposition
3.6 Condition Number of a Square Matrix
3.7 Exercises
3.8 Computer Exercises
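As an illustration of the factorizations in this chapter, the MATLAB sketch below (test data made up) computes a PLU decomposition with partial pivoting, solves a system through the triangular factors, computes a Cholesky factor of a symmetric positive definite matrix, and evaluates a condition number.

A = [2 1 1; 4 3 3; 8 7 9];
b = [1; 2; 3];

% PLU decomposition with partial pivoting: P*A = L*U
[L, U, P] = lu(A);

% Solve A*x = b via forward and backward substitution on the factors
y = L \ (P * b);
x = U \ y;

% Cholesky factorization of a symmetric positive definite matrix: S = R'*R
S = [4 1; 1 3];
R = chol(S);

% Condition number of A in the 2-norm
kappa = cond(A);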
4 Orthogonal Factorizations and Linear Least Squares Problems
4.1 Formulation of Least Squares Problems: Regression Analysis
4.1.1 Least Squares and Regression Analysis
4.1.2 Matrix Formulation of Regression Problems
4.2 Existence of Solutions Using Quadratic Forms
4.2.1 Full Rank Cases: Application to Regression Analysis
4.3 Existence of Solutions through Matrix Pseudo-Inverse
4.3.1 Obtaining Matrix Pseudo-Inverse through Singular Value Decomposition
4.4 The QR Factorization Theorem
4.4.1 Householder Transforms
4.4.2 Steps of the QR Decomposition of a Matrix
4.4.3 Particularizing When m > n
4.4.4 Givens Rotations
4.5 Gram-Schmidt Orthogonalization
4.6 Least Squares Problem and QR Decomposition
4.7 Householder QR with Column Pivoting
4.8 MATLAB Implementations
4.8.1 Use of the Backslash Operator
4.8.2 QR Decompositions
4.9 Exercises
4.10 Computer Exercises
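For the least squares material above, here is a small MATLAB sketch (fit data invented for illustration) that solves one overdetermined regression problem three ways: with the backslash operator, with an economy-size QR factorization, and with the pseudo-inverse obtained from the SVD.

% Fit y ~ c1*t + c2 in the least squares sense
t = (0:0.2:1)';
y = [1.1; 1.5; 2.1; 2.4; 3.0; 3.4];
C = [t, ones(size(t))];       % design matrix

c_bs   = C \ y;               % backslash (QR-based for rectangular C)

[Q, R] = qr(C, 0);            % economy-size QR factorization, C = Q*R
c_qr   = R \ (Q' * y);

c_pinv = pinv(C) * y;         % pseudo-inverse computed from the SVD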
5 Algorithms for the Eigenvalue Problem
5.1 Basic Principles
5.1.1 Why Compute the Eigenvalues of a Square Matrix?
5.1.2 Spectral Decomposition of a Matrix
5.1.3 The Power Method and its By-Products
5.2 QR Method for a Non-Symmetric Matrix
5.2.1 Reduction to an Upper Hessenberg Matrix
5.2.2 QR Algorithm for an Upper Hessenberg Matrix
5.2.3 Convergence of the QR Method
5.3 Algorithms for Symmetric Matrices
5.3.1 Reduction to a Tridiagonal Matrix
5.3.2 Algorithms for Tridiagonal Symmetric Matrices
5.4 Methods for Large Size Matrices
5.4.1 Rayleigh-Ritz Projection
5.4.2 Arnoldi Procedure
5.4.3 The Arnoldi Method for Computing Eigenvalues of a Large Matrix
5.4.4 Arnoldi Method for Computing an Eigenpair
5.4.5 Symmetric Case: Lanczos Algorithm
5.5 Singular Value Decomposition
5.5.1 Full SVD
5.5.2 Singular Triplets for Large Matrices
5.6 Exercises
5.7 Computer Exercises
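The MATLAB sketch below (matrices invented for illustration) mirrors the progression of this chapter: a basic power iteration for the dominant eigenpair, full eigenvalue and singular value decompositions of a small matrix, and a call to eigs, which applies an Arnoldi/Lanczos-type projection to a large sparse matrix.

A = [4 1 0; 1 3 1; 0 1 2];     % small symmetric test matrix

% Power method for the dominant eigenpair
x = rand(3, 1);
for k = 1:50
    x = A * x;
    x = x / norm(x);
end
lambda_max = x' * A * x;       % Rayleigh quotient estimate

% Full eigenvalue and singular value decompositions
[V, D] = eig(A);
[U, S, W] = svd(A);

% A few eigenvalues of a large sparse matrix (Arnoldi/Lanczos-based eigs)
T = gallery('tridiag', 500, -1, 2, -1);
d = eigs(T, 6, 'smallestabs');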
6 Iterative Methods for Systems of Linear Equations
6.1 Stationary Methods
6.1.1 Splitting
6.1.2 Classical Stationary Methods
6.2 Krylov Methods
6.2.1 Krylov Properties
6.2.2 Subspace Condition
6.2.3 Minimization Property for spd Matrices
6.2.4 Minimization Property for General Matrices
6.3 Method of Steepest Descent for spd Matrices
6.3.1 Convergence Properties of the Steepest Descent Method
6.3.2 Preconditioned Steepest Descent Algorithm
6.4 Conjugate Gradient Method (CG) for spd Matrices
6.4.1 Krylov Basis Properties
6.4.2 CG Algorithm
6.4.3 Convergence of CG
6.4.4 Preconditioned Conjugate Gradient
6.4.5 Memory and CPU Requirements in PCG
6.4.6 Relation with the Lanczos Method
6.4.7 Case of Symmetric Indefinite Systems: SYMMLQ Method
6.5 The Generalized Minimal Residual Method
6.5.1 Krylov Basis Computation
6.5.2 GMRES Algorithm
6.5.3 Convergence of GMRES
6.5.4 Preconditioned GMRES
6.5.5 Restarted GMRES
6.5.6 MINRES Algorithm
6.6 The Bi-Conjugate Gradient Method
6.6.1 Orthogonality Properties in BiCG
6.6.2 BiCG Algorithm
6.6.3 Convergence of BiCG
6.6.4 Breakdowns and Near-Breakdowns in BiCG
6.6.5 Complexity of BiCG and Variants of BiCG
6.6.6 Preconditioned BiCG
6.7 Preconditioning Issues
6.8 Exercises
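To connect the Krylov methods above with their MATLAB counterparts, here is a minimal sketch (the test matrix is a made-up sparse 1-D Poisson matrix) running preconditioned CG, restarted GMRES with an incomplete LU preconditioner, and BiCG.

% Sparse symmetric positive definite test system
n = 1000;
A = gallery('tridiag', n, -1, 2, -1);
b = ones(n, 1);

% Conjugate gradient with an incomplete Cholesky preconditioner
L = ichol(A);
x_cg = pcg(A, b, 1e-8, 500, L, L');

% GMRES, restart 20, with an incomplete LU preconditioner (works for general A)
[Li, Ui] = ilu(A);
x_gm = gmres(A, b, 20, 1e-8, 50, Li, Ui);

% BiCG, intended for non-symmetric systems but applicable here as well
x_bicg = bicg(A, b, 1e-8, 500);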
7 Sparse Systems to Solve Poisson Differential Equations
7.1 Poisson Differential Equations
7.2 The Path to Poisson Solvers
7.3 Finite Differences for Poisson-Dirichlet Problems
7.3.1 One-Dimensional Dirichlet-Poisson
7.3.2 Two-Dimensional Poisson-Dirichlet on a Rectangle
7.3.3 Complexity for Direct Methods: Zero-Fill Phenomenon
7.4 Variational Formulations
7.4.1 Integration by Parts and Green’s Formula
7.4.2 Variational Formulation to One-Dimensional Poisson Problems
7.4.3 Variational Formulations to Two-Dimensional Poisson Problems
7.4.4 Petrov-Galerkin Approximations
7.5 One-Dimensional Finite Element Discretizations
7.5.1 The P₁ Finite Element Spaces
7.5.2 Finite Element Approximation Using S₁ (II)
7.5.3 Implementation of the Method
7.5.4 One-Dimensional P₂ Finite-Elements
7.6 Exercises
7.7 Computer Exercises
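The sketch below (a standard textbook example, not the book's own code) discretizes the one-dimensional Poisson-Dirichlet problem -u'' = f on (0,1) with homogeneous boundary conditions by centered finite differences and solves the resulting sparse tridiagonal system directly; the right-hand side is chosen so that the exact solution is sin(pi*x).

n = 99;                                   % number of interior grid points
h = 1 / (n + 1);
x = (1:n)' * h;
f = pi^2 * sin(pi * x);                   % forcing term; exact solution is sin(pi*x)

A = gallery('tridiag', n, -1, 2, -1) / h^2;   % sparse discrete Laplacian for -u''
u = A \ f;                                    % direct sparse solve

err = norm(u - sin(pi * x), inf);             % discretization error, O(h^2)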


