Monday, March 11, 2019

A Gentle Introduction to Matrix Operations for Machine Learning

Published by: Jason Brownlee

Transpose

A defined matrix can be transposed, which creates a new matrix with the number of columns and rows flipped.
This is denoted by the superscript “T” next to the matrix.
An invisible diagonal line can be drawn through the matrix from top left to bottom right on which the matrix can be flipped to give the transpose.
The operation has no effect if the matrix is symmetric, i.e. it has the same number of rows and columns and the same values at mirrored positions on either side of the invisible diagonal line.
The columns of A^T are the rows of A.
— Page 109, Introduction to Linear Algebra, Fifth Edition, 2016.
We can transpose a matrix in NumPy by calling the T attribute.
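For example, a minimal sketch (the matrix values here are purely illustrative):

from numpy import array
# define a 3x2 matrix
A = array([[1, 2], [3, 4], [5, 6]])
print(A)
# calculate the transpose via the T attribute
C = A.T
print(C)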
Running the example first prints the matrix as it is defined, then the transposed version.
The transpose operation provides a short notation used as an element in many matrix operations.

Inversion

Matrix inversion is a process that finds another matrix that, when multiplied with the original matrix, results in the identity matrix.
Given a matrix A, find another matrix B such that AB = BA = I_n, where I_n is the identity matrix.
The operation of inverting a matrix is indicated by a -1 superscript next to the matrix; for example, A^-1. The result of the operation is referred to as the inverse of the original matrix; for example, B is the inverse of A.
A matrix is invertible if there exists another matrix that, when multiplied with it, results in the identity matrix; not all matrices are invertible. A square matrix that is not invertible is referred to as singular.
Whatever A does, A^-1 undoes.
— Page 83, Introduction to Linear Algebra, Fifth Edition, 2016.
In practice, the matrix inverse is not computed directly; instead, it is found via a numerical procedure, where a suite of efficient methods may be used, often involving forms of matrix decomposition.
However, A^−1 is primarily useful as a theoretical tool, and should not actually be used in practice for most software applications.
— Page 37, Deep Learning, 2016.
A matrix can be inverted in NumPy using the inv() function.
First, we define a small 2×2 matrix, then calculate the inverse of the matrix, and then confirm the inverse by multiplying it with the original matrix to give the identity matrix.
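A minimal sketch along these lines, using illustrative values:

from numpy import array
from numpy.linalg import inv
# define a small 2x2 matrix
A = array([[1.0, 2.0], [3.0, 4.0]])
print(A)
# calculate the inverse of the matrix
B = inv(A)
print(B)
# confirm the inverse: multiplying A by its inverse gives the identity matrix
I = A.dot(B)
print(I)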
Running the example prints the original, inverse, and identity matrices.
Matrix inversion is used as an operation in solving systems of equations framed as matrix equations where we are interested in finding vectors of unknowns. A good example is in finding the vector of coefficient values in linear regression.

Trace

A trace of a square matrix is the sum of the values on the main diagonal of the matrix (top-left to bottom-right).
The trace operator gives the sum of all of the diagonal entries of a matrix
— Page 46, Deep Learning, 2016.
The operation of calculating a trace on a square matrix is described using the notation “tr(A)” where A is the square matrix on which the operation is being performed.
The trace is calculated as the sum of the diagonal values; for example, in the case of a 3×3 matrix:

tr(A) = a_11 + a_22 + a_33

Or, using array notation:

tr(A) = A[0,0] + A[1,1] + A[2,2]
We can calculate the trace of a matrix in NumPy using the trace() function.
First, a 3×3 matrix is created and then the trace is calculated.
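A minimal sketch, with illustrative values:

from numpy import array, trace
# define a 3x3 matrix
A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
# calculate the trace: the sum of the main diagonal, 1 + 5 + 9 = 15
B = trace(A)
print(B)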
Running the example, first the array is printed and then the trace.
Alone, the trace operation is not interesting, but it offers a simpler notation and it is used as an element in other key matrix operations.

Determinant

The determinant of a square matrix is a scalar representation of the volume of the matrix.
The determinant describes the relative geometry of the vectors that make up the rows of the matrix. More specifically, the determinant of a matrix A tells you the volume of a box with sides given by rows of A.
It is denoted by the “det(A)” notation or |A|, where A is the matrix on which we are calculating the determinant.
The determinant of a square matrix is calculated from the elements of the matrix. More technically, the determinant is the product of all the eigenvalues of the matrix.
The intuition for the determinant is that it describes how a matrix scales space when it is used in a multiplication. For example, a determinant of 1 means the volume of the space is preserved, while a determinant of 0 indicates that the matrix collapses the space onto a lower dimension and cannot be inverted.
The determinant of a square matrix is a single number. […] It tells immediately whether the matrix is invertible. The determinant is a zero when the matrix has no inverse.
— Page 247, Introduction to Linear Algebra, Fifth Edition, 2016.
In NumPy, the determinant of a matrix can be calculated using the det() function.
First, a 3×3 matrix is defined, then the determinant of the matrix is calculated.
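A minimal sketch, with illustrative values (note that this particular matrix happens to be singular, so its determinant is numerically close to zero):

from numpy import array
from numpy.linalg import det
# define a 3x3 matrix
A = array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(A)
# calculate the determinant (approximately 0 for this singular matrix)
B = det(A)
print(B)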
Running the example first prints the defined matrix and then the determinant of the matrix.
Like the trace operation, alone, the determinant operation is not interesting, but it offers a simpler notation and it is used as an element in other key matrix operations.

Matrix Rank

The rank of a matrix is the number of linearly independent rows or columns in the matrix.
The rank of a matrix M is often denoted rank(M).
An intuition for rank is to consider it the number of dimensions spanned by the vectors within the matrix. For example, a rank of 0 suggests all vectors span a point, a rank of 1 suggests all vectors span a line, and a rank of 2 suggests all vectors span a two-dimensional plane.
The rank is estimated numerically, often using a matrix decomposition method. A common approach is to use the Singular-Value Decomposition or SVD for short.
NumPy provides the matrix_rank() function for calculating the rank of an array. It uses the SVD method to estimate the rank.
The example below demonstrates calculating the rank of a vector with non-zero values and another vector with all zero values.
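A minimal sketch of such an example, with illustrative values:

from numpy import array
from numpy.linalg import matrix_rank
# a vector with non-zero values spans a line, so its rank is 1
v1 = array([1, 2, 3])
print(v1)
print(matrix_rank(v1))
# a vector of all zero values spans only a point, so its rank is 0
v2 = array([0, 0, 0, 0, 0])
print(v2)
print(matrix_rank(v2))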
Running the example prints the first vector and its rank of 1, followed by the second zero vector and its rank of 0.
The next example makes it clear that the rank is not the number of dimensions of the matrix, but the number of linearly independent directions.
Three examples of a 2×2 matrix are provided demonstrating matrices with rank 0, 1 and 2.
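A minimal sketch, with illustrative values:

from numpy import array
from numpy.linalg import matrix_rank
# rank 0: a 2x2 matrix of all zero values
M0 = array([[0, 0], [0, 0]])
print(M0)
print(matrix_rank(M0))
# rank 1: the second row is a multiple of the first
M1 = array([[1, 2], [1, 2]])
print(M1)
print(matrix_rank(M1))
# rank 2: the rows are linearly independent
M2 = array([[1, 2], [3, 4]])
print(M2)
print(matrix_rank(M2))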
Running the example first prints the all-zero 2×2 matrix followed by its rank of 0, then a 2×2 matrix with a rank of 1, and finally a 2×2 matrix with a rank of 2.
