
Content On This Page
Matrices and Types of Matrices
Equality of Matrices
Transpose of a Matrix
Symmetric and Skew Symmetric Matrix
Algebra of Matrices
Determinants
Inverse of a Matrix
Solving System of Simultaneous Equations using Matrix Method
Cramer’s Rule and Inverse of Coefficient Matrix


Chapter 2 Matrices (Concepts)

Welcome to this foundational chapter introducing Matrices, a remarkably powerful mathematical tool used for organizing data, representing systems, and performing complex operations efficiently. Matrices provide a structured way to handle arrays of numbers, finding extensive applications across diverse fields such as economics (input-output models), computer graphics (transformations like rotation and scaling), engineering (solving systems of equations, network analysis), statistics (data representation), and physics. This chapter will equip you with the fundamental vocabulary, notation, and algebraic operations associated with matrices, laying the essential groundwork for understanding their role in solving linear systems and representing linear transformations, key topics in Applied Mathematics.

We begin by defining a matrix as a rectangular array of numbers, called elements, arranged systematically in rows and columns. The size or dimension of a matrix is specified by its order, written as $m \times n$, where $m$ is the number of rows and $n$ is the number of columns. We then introduce various special types of matrices based on their order or element properties, such as row, column, zero, square, diagonal, scalar, identity, and triangular matrices.

Understanding these types is crucial for recognizing special structures and properties. We define the equality of matrices: two matrices are equal if and only if they have the same order and their corresponding elements are identical.

The core of the chapter focuses on the fundamental algebraic operations defined for matrices: addition, subtraction, scalar multiplication, and matrix multiplication, each subject to conditions on the orders of the matrices involved.

We also introduce the concept of the transpose of a matrix ($A'$ or $A^T$), which is formed by interchanging the rows and columns of the original matrix $A$. Key properties of the transpose, such as $(A+B)' = A'+B'$, $(kA)' = kA'$, and the important reversal law for products, $(AB)' = B'A'$, are highlighted. This may lead to brief introductions of symmetric matrices (where $A'=A$) and skew-symmetric matrices (where $A'=-A$). Finally, a key application introduced conceptually is representing a system of linear equations in matrix form as $AX = B$, where $A$ is the matrix of coefficients, $X$ is the column matrix of variables, and $B$ is the column matrix of constants. This matrix representation paves the way for powerful solution techniques using matrix inverses or determinants, which are typically covered in subsequent topics, demonstrating the practical utility of matrix algebra in solving real-world problems.



Matrices and Types of Matrices


A matrix (plural: matrices) is a rectangular arrangement, or table, of numbers, symbols, or expressions, arranged in rows and columns. Matrices are powerful mathematical tools used extensively in various fields, including linear algebra, physics, engineering, computer graphics, economics, and statistics, to represent linear transformations, systems of linear equations, and to store and manipulate data efficiently.

For example, a simple matrix can be used to represent the prices of different items in different shops:

$\begin{bmatrix} \textsf{₹}10 & \textsf{₹}12 \\ \textsf{₹}15 & \textsf{₹}14 \\ \textsf{₹}8 & \textsf{₹}9 \end{bmatrix}$

Here, the rows could represent items (say, Item A, Item B, Item C) and the columns could represent shops (Shop 1, Shop 2). The element in a specific row and column gives the price of that item in that shop.


Order of a Matrix

The order or dimension of a matrix specifies its size by stating the number of rows and the number of columns it contains. If a matrix has $m$ rows and $n$ columns, its order is denoted as $m \times n$ (read as 'm by n'). The number of elements in an $m \times n$ matrix is $m \times n$.

A matrix is typically denoted by a capital letter, e.g., $A, B, C$. The elements (or entries) within the matrix are denoted by the corresponding lowercase letter with two subscripts, $a_{ij}$, where the first subscript $i$ indicates the row number and the second subscript $j$ indicates the column number.

A general matrix $A$ of order $m \times n$ can be represented as:

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$

... (i)

The element $a_{ij}$ is located in the $i$-th row and $j$-th column. For example, $a_{23}$ is the element in the 2nd row and 3rd column.

We can also denote a matrix $A$ of order $m \times n$ compactly as $A = [a_{ij}]_{m \times n}$.

Example 1. Consider the matrix $B = \begin{bmatrix} 2 & 4 & 6 \\ 1 & 3 & 5 \end{bmatrix}$. State its order and identify the elements $b_{13}$ and $b_{22}$.

Answer:

The matrix $B$ has 2 rows and 3 columns. Therefore, its order is $2 \times 3$.

The element $b_{13}$ is the element in the 1st row and 3rd column. From the matrix, $b_{13} = 6$.

The element $b_{22}$ is the element in the 2nd row and 2nd column. From the matrix, $b_{22} = 3$.
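
A quick computational aside: these conventions map directly onto code. The sketch below uses NumPy purely for illustration; note that NumPy indexes from 0, so the textbook element $b_{13}$ corresponds to `B[0, 2]`.

```python
import numpy as np

# Matrix B from Example 1.
B = np.array([[2, 4, 6],
              [1, 3, 5]])

print(B.shape)   # (2, 3) -> order 2 x 3
print(B.size)    # 6      -> an m x n matrix has m*n elements

# NumPy is 0-indexed, so b_13 (1st row, 3rd column) is B[0, 2]
# and b_22 (2nd row, 2nd column) is B[1, 1].
print(B[0, 2])   # 6
print(B[1, 1])   # 3
```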


Types of Matrices

Matrices are classified into various types based on the number of rows and columns, or the nature of their elements. Here are some important types:

Row Matrix

A matrix that has only one row is called a row matrix. Its order is always $1 \times n$, where $n$ is the number of columns.

Example: $\begin{bmatrix} 1 & 5 & -2 & 0 \end{bmatrix}$ is a row matrix of order $1 \times 4$.

$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{bmatrix}$ is a general row matrix of order $1 \times n$.

Column Matrix

A matrix that has only one column is called a column matrix. Its order is always $m \times 1$, where $m$ is the number of rows.

Example: $\begin{bmatrix} 3 \\ -1 \\ 7 \end{bmatrix}$ is a column matrix of order $3 \times 1$.

$\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}$ is a general column matrix of order $m \times 1$.

Zero Matrix (Null Matrix)

A matrix in which all the elements are zero is called a zero matrix or null matrix. It is denoted by the symbol $O$. The order of a zero matrix can be any $m \times n$.

Examples: $\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$ is a zero matrix of order $3 \times 2$.

$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ is a zero matrix of order $2 \times 2$.

$\begin{bmatrix} 0 \end{bmatrix}$ is a zero matrix of order $1 \times 1$.

Square Matrix

A matrix in which the number of rows is equal to the number of columns is called a square matrix. If a square matrix has $n$ rows and $n$ columns, its order is $n \times n$, often simply referred to as a square matrix of order $n$.

In a square matrix $A = [a_{ij}]_{n \times n}$, the elements $a_{11}, a_{22}, a_{33}, \dots, a_{nn}$ are called the diagonal elements or the elements of the principal diagonal (or main diagonal).

Examples: $\begin{bmatrix} 2 & -1 \\ 4 & 3 \end{bmatrix}$ is a square matrix of order 2. The diagonal elements are 2 and 3.

$\begin{bmatrix} 1 & 0 & 5 \\ 6 & 2 & -3 \\ 0 & 1 & 4 \end{bmatrix}$ is a square matrix of order 3. The diagonal elements are 1, 2, and 4.

Diagonal Matrix

A diagonal matrix is a square matrix in which all the non-diagonal elements are zero. That is, in a square matrix $A = [a_{ij}]_{n \times n}$, if $a_{ij} = 0$ for all $i \ne j$, then $A$ is a diagonal matrix. The diagonal elements ($a_{ii}$) may or may not be zero.

Examples: $\begin{bmatrix} 5 & 0 \\ 0 & -2 \end{bmatrix}$ is a diagonal matrix of order 2.

$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 9 \end{bmatrix}$ is a diagonal matrix of order 3.

$\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ is also a diagonal matrix (specifically, the zero matrix).

Scalar Matrix

A scalar matrix is a diagonal matrix in which all the diagonal elements are equal to a constant scalar, say $k$. That is, a square matrix $A = [a_{ij}]_{n \times n}$ is a scalar matrix if $a_{ij} = 0$ for $i \ne j$ and $a_{ii} = k$ for all $i=1, 2, \dots, n$, where $k$ is a scalar.

Examples: $\begin{bmatrix} 7 & 0 \\ 0 & 7 \end{bmatrix}$ is a scalar matrix with $k=7$.

$\begin{bmatrix} -2 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -2 \end{bmatrix}$ is a scalar matrix with $k=-2$.

A scalar matrix can be written in the form $kI_n$, where $I_n$ is the identity matrix of order $n$.

Identity Matrix (Unit Matrix)

An identity matrix (or unit matrix) is a special type of scalar matrix where all the diagonal elements are equal to 1. The non-diagonal elements are all zero. It is denoted by $I_n$ for an identity matrix of order $n$, or simply $I$ when the order is clear from the context.

A square matrix $A = [a_{ij}]_{n \times n}$ is an identity matrix if $a_{ij} = 0$ for $i \ne j$ and $a_{ii} = 1$ for all $i=1, 2, \dots, n$.

The identity matrix plays a role similar to the number 1 in the multiplication of real numbers. When multiplied by another matrix, it leaves the other matrix unchanged (provided the multiplication is defined).

Examples: $I_1 = \begin{bmatrix} 1 \end{bmatrix}$ (Order 1)

$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ (Order 2)

$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ (Order 3)

Upper Triangular Matrix

An upper triangular matrix is a square matrix where all the elements below the main diagonal are zero. That is, in a square matrix $A = [a_{ij}]_{n \times n}$, if $a_{ij} = 0$ for all $i > j$, then $A$ is an upper triangular matrix. The elements on and above the main diagonal can be non-zero.

Example: $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$ is a general upper triangular matrix of order 3.

$\begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix}$ is a specific upper triangular matrix.

Lower Triangular Matrix

A lower triangular matrix is a square matrix where all the elements above the main diagonal are zero. That is, in a square matrix $A = [a_{ij}]_{n \times n}$, if $a_{ij} = 0$ for all $i < j$, then $A$ is a lower triangular matrix. The elements on and below the main diagonal can be non-zero.

Example: $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ is a general lower triangular matrix of order 3.

$\begin{bmatrix} 1 & 0 & 0 \\ 7 & 2 & 0 \\ -1 & 3 & 5 \end{bmatrix}$ is a specific lower triangular matrix.
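
For readers who want to experiment, each of the special types above can be built in a line of NumPy; this is an illustrative sketch, not part of the syllabus.

```python
import numpy as np

row    = np.array([[1, 5, -2, 0]])   # row matrix, order 1 x 4
col    = np.array([[3], [-1], [7]])  # column matrix, order 3 x 1
zero   = np.zeros((3, 2))            # zero matrix, order 3 x 2
diag   = np.diag([5, -2])            # diagonal matrix of order 2
scalar = 7 * np.eye(2)               # scalar matrix kI_2 with k = 7
ident  = np.eye(3)                   # identity matrix I_3

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
upper = np.triu(A)   # keep the main diagonal and everything above it
lower = np.tril(A)   # keep the main diagonal and everything below it
print(upper)
```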



Equality of Matrices


Two matrices are considered equal if and only if they are identical in every aspect. This means they must have the same dimensions and the same elements in corresponding positions. The concept of matrix equality is fundamental as it allows us to form matrix equations and solve for unknown variables within matrices.

Specifically, two matrices $A$ and $B$ are said to be equal, denoted as $A = B$, if and only if the following two conditions are met:

  1. They have the same order (i.e., the same number of rows and the same number of columns). If the order of matrix $A$ is $m \times n$, then the order of matrix $B$ must also be $m \times n$.
  2. The elements in the corresponding positions of the two matrices are equal. If $A = [a_{ij}]_{m \times n}$ and $B = [b_{ij}]_{m \times n}$, then $A = B$ if and only if $a_{ij} = b_{ij}$ for all possible values of $i$ (from 1 to $m$) and $j$ (from 1 to $n$).

If either of these conditions is not satisfied, the matrices are unequal ($A \ne B$).


Examples

Example 1. Are the matrices $A = \begin{bmatrix} 2 & -1 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & -1 \\ 3 & 4 \end{bmatrix}$ equal?

Answer:

To determine if matrices $A$ and $B$ are equal, we check the two conditions for equality.

1. Check the order: Matrix $A$ has 2 rows and 2 columns. So, the order of $A$ is $2 \times 2$. Matrix $B$ has 2 rows and 2 columns. So, the order of $B$ is $2 \times 2$. The orders of both matrices are the same ($2 \times 2$). The first condition is satisfied.

2. Check the corresponding elements: We compare the elements at each position $(i, j)$ in matrix $A$ with the corresponding element at position $(i, j)$ in matrix $B$.

Element at (1, 1): $a_{11} = 2$ and $b_{11} = 2$. So, $a_{11} = b_{11}$.

Element at (1, 2): $a_{12} = -1$ and $b_{12} = -1$. So, $a_{12} = b_{12}$.

Element at (2, 1): $a_{21} = 3$ and $b_{21} = 3$. So, $a_{21} = b_{21}$.

Element at (2, 2): $a_{22} = 4$ and $b_{22} = 4$. So, $a_{22} = b_{22}$.

Since all corresponding elements are equal, the second condition is also satisfied.

As both conditions for matrix equality are met, matrices $A$ and $B$ are equal.


Example 2. Find the values of $x, y, z$ if $\begin{bmatrix} x+y & 2 \\ 5 & z \end{bmatrix} = \begin{bmatrix} 6 & 2 \\ 5 & 8 \end{bmatrix}$.

Answer:

The given matrices are $\begin{bmatrix} x+y & 2 \\ 5 & z \end{bmatrix}$ and $\begin{bmatrix} 6 & 2 \\ 5 & 8 \end{bmatrix}$.

Both matrices are of order $2 \times 2$. Since they are given to be equal, their corresponding elements must be equal.

Equating the element in the 1st row, 1st column:

$x + y = 6$

... (i)

Equating the element in the 1st row, 2nd column:

$2 = 2$

[This is consistent]

Equating the element in the 2nd row, 1st column:

$5 = 5$

[This is consistent]

Equating the element in the 2nd row, 2nd column:

$z = 8$

... (ii)

From equation (ii), we find the value of $z$ directly: $z = 8$.

From equation (i), we have the equation $x + y = 6$. This equation gives a relationship between $x$ and $y$. However, with only one equation involving two variables, we cannot determine unique values for $x$ and $y$. Any pair of real numbers $(x, y)$ that sums to 6 will satisfy this condition.

Therefore, based on the given matrix equality:

The value of $z$ is 8.

The values of $x$ and $y$ must satisfy the equation $x + y = 6$. Unique values for $x$ and $y$ cannot be determined from the information given.
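
As a computational aside, a routine such as NumPy's `np.array_equal` tests exactly the two conditions above: same order and equal corresponding elements. A minimal sketch:

```python
import numpy as np

A = np.array([[2, -1], [3, 4]])
B = np.array([[2, -1], [3, 4]])
C = np.array([[2, -1, 0], [3, 4, 0]])  # different order (2 x 3)

# True only when the shapes (orders) match AND every
# corresponding pair of elements is equal.
print(np.array_equal(A, B))   # True
print(np.array_equal(A, C))   # False: the orders differ
```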



Transpose of a Matrix


The transpose of a matrix is obtained by interchanging its rows and columns. If $A$ is an $m \times n$ matrix, its transpose, denoted by $A^T$ or $A'$, is an $n \times m$ matrix where the element in the $i$-th row and $j$-th column of $A^T$ is the element in the $j$-th row and $i$-th column of $A$.

Mathematically, if $A = [a_{ij}]$, then $A^T = [a_{ji}]$. This means that the element $a_{ij}$ in the original matrix $A$ becomes the element $a_{ji}$ in the transposed matrix $A^T$.


Examples

Example 1. Find the transpose of the matrix $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$.

Answer:

Matrix A is of order $2 \times 3$ (2 rows and 3 columns). To find the transpose $A^T$, we interchange the rows and columns.

The first row of A, which is $\begin{bmatrix} 1 & 2 & 3 \end{bmatrix}$, becomes the first column of $A^T$.

The second row of A, which is $\begin{bmatrix} 4 & 5 & 6 \end{bmatrix}$, becomes the second column of $A^T$.

$A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}$

The order of $A^T$ is $3 \times 2$ (3 rows and 2 columns), which is expected since the order of A was $2 \times 3$.


Example 2. Find the transpose of the matrix $B = \begin{bmatrix} -1 & 0 & 5 \\ 7 & 3 & -2 \\ 4 & 1 & 6 \end{bmatrix}$.

Answer:

Matrix B is a square matrix of order $3 \times 3$. To find $B^T$, we interchange rows and columns.

Row 1 of B: $\begin{bmatrix} -1 & 0 & 5 \end{bmatrix}$ becomes Column 1 of $B^T$.

Row 2 of B: $\begin{bmatrix} 7 & 3 & -2 \end{bmatrix}$ becomes Column 2 of $B^T$.

Row 3 of B: $\begin{bmatrix} 4 & 1 & 6 \end{bmatrix}$ becomes Column 3 of $B^T$.

$B^T = \begin{bmatrix} -1 & 7 & 4 \\ 0 & 3 & 1 \\ 5 & -2 & 6 \end{bmatrix}$

The order of $B^T$ is $3 \times 3$. For a square matrix, the transpose has the same order.


Properties of Transpose

The transpose operation on matrices has several useful properties. Let A and B be matrices of appropriate orders such that the following operations are defined, and let $k$ be a scalar (a real number).

1. Transpose of Transpose: $(A^T)^T = A$
This means taking the transpose of a transposed matrix gives back the original matrix. If $A = [a_{ij}]$, then $A^T = [a_{ji}]$. Taking the transpose of $A^T$, the element at position $(i, j)$ in $(A^T)^T$ is the element at position $(j, i)$ in $A^T$, which is $a_{ij}$. Thus, $(A^T)^T = A$.

2. Transpose of a Scalar Multiple: $(kA)^T = kA^T$
The transpose of a scalar multiple of a matrix is the scalar multiple of its transpose. If $A = [a_{ij}]$, then $kA = [ka_{ij}]$. The element at position $(i, j)$ in $(kA)^T$ is the element at position $(j, i)$ in $kA$, which is $ka_{ji}$. The element at position $(i, j)$ in $kA^T$ is $k$ times the element at position $(i, j)$ in $A^T$, which is $k \times a_{ji} = ka_{ji}$. Thus, $(kA)^T = kA^T$.

3. Transpose of a Sum: $(A + B)^T = A^T + B^T$
The transpose of the sum of two matrices is the sum of their transposes. This property holds if matrices A and B have the same order. If $A = [a_{ij}]$ and $B = [b_{ij}]$, then $A+B = [a_{ij}+b_{ij}]$. The element at position $(i, j)$ in $(A+B)^T$ is the element at position $(j, i)$ in $A+B$, which is $a_{ji}+b_{ji}$. The element at position $(i, j)$ in $A^T + B^T$ is the element at position $(i, j)$ in $A^T$ plus the element at position $(i, j)$ in $B^T$, which is $a_{ji} + b_{ji}$. Thus, $(A+B)^T = A^T + B^T$.

4. Transpose of a Product: $(AB)^T = B^T A^T$
The transpose of the product of two matrices is the product of their transposes in the reverse order. This is a very important property. This property holds if the number of columns in A is equal to the number of rows in B. If A is $m \times n$ and B is $n \times p$, then AB is $m \times p$, and $(AB)^T$ is $p \times m$. $B^T$ is $p \times n$, and $A^T$ is $n \times m$, so $B^T A^T$ is $(p \times n) \times (n \times m) = p \times m$. The orders match. Let $A = [a_{ik}]$ be $m \times n$ and $B = [b_{kj}]$ be $n \times p$. The element at position $(i, j)$ in $AB$ is $\sum_{k=1}^n a_{ik}b_{kj}$. The element at position $(j, i)$ in $(AB)^T$ is the element at position $(i, j)$ in $AB$, which is $\sum_{k=1}^n a_{ik}b_{kj}$. Now consider $B^T = [b'_{jk}]$ where $b'_{jk} = b_{kj}$ (element at $(j, k)$ in $B^T$ is element at $(k, j)$ in B). And $A^T = [a'_{ki}]$ where $a'_{ki} = a_{ik}$ (element at $(k, i)$ in $A^T$ is element at $(i, k)$ in A). The element at position $(j, i)$ in $B^T A^T$ is $\sum_{k=1}^n b'_{jk}a'_{ki} = \sum_{k=1}^n b_{kj}a_{ik} = \sum_{k=1}^n a_{ik}b_{kj}$. This is the same as the element at position $(j, i)$ in $(AB)^T$. Thus, $(AB)^T = B^T A^T$.



Symmetric and Skew Symmetric Matrix


Symmetric and skew-symmetric matrices are special types of square matrices defined based on their relationship with their transpose.


Symmetric Matrix

A square matrix $A$ is called a symmetric matrix if its transpose is equal to the matrix itself, i.e., $A^T = A$.

In terms of elements, a matrix $A = [a_{ij}]$ is symmetric if $a_{ij} = a_{ji}$ for all possible values of $i$ and $j$. This means that the element in the $i$-th row and $j$-th column is equal to the element in the $j$-th row and $i$-th column. Symmetric matrices are mirrored across their main diagonal.

Example:

$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}$

Here, $A$ is a $3 \times 3$ square matrix. Let's find its transpose $A^T$:

$A^T = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{bmatrix}$

Since $A^T = A$, the matrix $A$ is symmetric. Observe that $a_{12} = 2$ and $a_{21} = 2$ ($a_{12} = a_{21}$). Similarly, $a_{13} = 3$ and $a_{31} = 3$ ($a_{13} = a_{31}$), and $a_{23} = 5$ and $a_{32} = 5$ ($a_{23} = a_{32}$).


Skew-Symmetric Matrix

A square matrix $A$ is called a skew-symmetric matrix if its transpose is equal to the negative of the matrix, i.e., $A^T = -A$.

In terms of elements, a matrix $A = [a_{ij}]$ is skew-symmetric if $a_{ij} = -a_{ji}$ for all possible values of $i$ and $j$. This means the element in the $i$-th row and $j$-th column is the negative of the element in the $j$-th row and $i$-th column.

Property of Diagonal Elements of a Skew-Symmetric Matrix

For a skew-symmetric matrix, we have the condition $a_{ij} = -a_{ji}$ for all $i, j$. Let's consider the diagonal elements of the matrix, where the row index and column index are the same, i.e., $i = j$. Substituting $i=j$ into the condition:

$a_{ii} = -a_{ii}$

Adding $a_{ii}$ to both sides of the equation:

$a_{ii} + a_{ii} = 0$

$2a_{ii} = 0$

Dividing by 2:

$a_{ii} = 0$

This result shows that all the diagonal elements ($a_{11}, a_{22}, a_{33}$, etc.) of a skew-symmetric matrix must be zero.

Example:

$B = \begin{bmatrix} 0 & 2 & -3 \\ -2 & 0 & 5 \\ 3 & -5 & 0 \end{bmatrix}$

Here, $B$ is a $3 \times 3$ square matrix. Let's check its diagonal elements: $b_{11}=0, b_{22}=0, b_{33}=0$. All are zero. Now let's check the off-diagonal elements:

$b_{12} = 2$ and $b_{21} = -2$, so $b_{12} = -b_{21}$. $b_{13} = -3$ and $b_{31} = 3$, so $b_{13} = -b_{31}$. $b_{23} = 5$ and $b_{32} = -5$, so $b_{23} = -b_{32}$.

Now let's find the transpose of B:

$B^T = \begin{bmatrix} 0 & -2 & 3 \\ 2 & 0 & -5 \\ -3 & 5 & 0 \end{bmatrix}$

And let's find $-B$:

$-B = -1 \times \begin{bmatrix} 0 & 2 & -3 \\ -2 & 0 & 5 \\ 3 & -5 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -2 & 3 \\ 2 & 0 & -5 \\ -3 & 5 & 0 \end{bmatrix}$

Since $B^T = -B$, the matrix $B$ is skew-symmetric.
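
In code, checking symmetry or skew-symmetry amounts to comparing a matrix with (the negative of) its transpose; a short illustrative sketch using the matrices above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])
B = np.array([[ 0,  2, -3],
              [-2,  0,  5],
              [ 3, -5,  0]])

print(np.array_equal(A.T, A))    # True  -> A is symmetric
print(np.array_equal(B.T, -B))   # True  -> B is skew-symmetric
print(np.diag(B))                # [0 0 0]: diagonal of a skew-symmetric matrix
```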


Expressing a Square Matrix as the Sum of a Symmetric and a Skew-Symmetric Matrix

A fundamental theorem states that any square matrix $A$ can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.

Let $A$ be any square matrix. We can write $A$ using a simple algebraic manipulation:

$A = \frac{1}{2}(2A) = \frac{1}{2}(A + A + A^T - A^T)$

Rearranging the terms, we can group them as follows:

$A = \frac{1}{2}((A + A^T) + (A - A^T))$

Distributing the scalar $\frac{1}{2}$:

$A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T)$

... (i)

Let $P = \frac{1}{2}(A + A^T)$ and $Q = \frac{1}{2}(A - A^T)$. Then, from equation (i), we have $A = P + Q$. Now, let's verify if P is symmetric and Q is skew-symmetric.

Check if P is Symmetric

To check if $P$ is symmetric, we find its transpose $P^T$. If $P^T = P$, then P is symmetric.

$P^T = \left(\frac{1}{2}(A + A^T)\right)^T$

Using the property $(kA)^T = kA^T$ (where $k = \frac{1}{2}$):

$P^T = \frac{1}{2}(A + A^T)^T$

Using the property $(A + B)^T = A^T + B^T$:

$P^T = \frac{1}{2}(A^T + (A^T)^T)$

Using the property $(A^T)^T = A$:

$P^T = \frac{1}{2}(A^T + A)$

Since matrix addition is commutative ($A^T + A = A + A^T$):

$P^T = \frac{1}{2}(A + A^T)$

[Matrix addition is commutative]

We observe that $\frac{1}{2}(A + A^T)$ is exactly our definition of $P$.

$P^T = P$

Therefore, $P = \frac{1}{2}(A + A^T)$ is a symmetric matrix.

Check if Q is Skew-Symmetric

To check if $Q$ is skew-symmetric, we find its transpose $Q^T$. If $Q^T = -Q$, then Q is skew-symmetric.

$Q^T = \left(\frac{1}{2}(A - A^T)\right)^T$

Using the property $(kA)^T = kA^T$ (where $k = \frac{1}{2}$):

$Q^T = \frac{1}{2}(A - A^T)^T$

Using the property $(A - B)^T = A^T - B^T$:

$Q^T = \frac{1}{2}(A^T - (A^T)^T)$

Using the property $(A^T)^T = A$:

$Q^T = \frac{1}{2}(A^T - A)$

We want to show that $Q^T = -Q$. Recall that $Q = \frac{1}{2}(A - A^T)$. We can factor out -1 from the expression for $Q^T$:

$Q^T = -\frac{1}{2}(A - A^T)$

We observe that $-\frac{1}{2}(A - A^T)$ is the negative of our definition of $Q$.

$Q^T = -Q$

Therefore, $Q = \frac{1}{2}(A - A^T)$ is a skew-symmetric matrix.

Thus, any square matrix $A$ can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix:

$A = \underbrace{\frac{1}{2}(A+A^T)}_{\text{Symmetric Matrix}} + \underbrace{\frac{1}{2}(A-A^T)}_{\text{Skew-Symmetric Matrix}}$

This decomposition is unique for any given square matrix A.
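
The decomposition can be computed directly from the two formulas; the sketch below builds $P$ and $Q$ for an arbitrary matrix (chosen here only for illustration) and verifies all three claims numerically.

```python
import numpy as np

A = np.array([[4., 2., 0.],
              [1., 3., 5.],
              [6., 2., 1.]])

P = 0.5 * (A + A.T)   # symmetric part
Q = 0.5 * (A - A.T)   # skew-symmetric part

print(np.allclose(P.T, P))    # True: P is symmetric
print(np.allclose(Q.T, -Q))   # True: Q is skew-symmetric
print(np.allclose(P + Q, A))  # True: A = P + Q
```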



Algebra of Matrices


Just like real numbers, matrices can undergo algebraic operations such as addition, subtraction, and multiplication. However, these operations are subject to certain conditions related to the dimensions (order) of the matrices involved. Matrices can also be multiplied by a scalar (a real number).


Addition and Subtraction of Matrices

The operations of addition and subtraction are defined for two matrices only if they have the same order. If the matrices have different orders, addition and subtraction are not possible.

If two matrices $A$ and $B$ have the same order, say $m \times n$, their sum $A+B$ and difference $A-B$ are also matrices of order $m \times n$. These operations are performed by adding or subtracting the corresponding elements of the matrices.

If $A = [a_{ij}]$ and $B = [b_{ij}]$ are two matrices of the same order $m \times n$, then:

Their sum is $A + B = [a_{ij} + b_{ij}]$. This means the element in the $i$-th row and $j$-th column of $(A+B)$ is $a_{ij} + b_{ij}$.

$A + B = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \dots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \dots & a_{2n}+b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \dots & a_{mn}+b_{mn} \end{bmatrix}$

Their difference is $A - B = [a_{ij} - b_{ij}]$. This means the element in the $i$-th row and $j$-th column of $(A-B)$ is $a_{ij} - b_{ij}$.

$A - B = \begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} & \dots & a_{1n}-b_{1n} \\ a_{21}-b_{21} & a_{22}-b_{22} & \dots & a_{2n}-b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}-b_{m1} & a_{m2}-b_{m2} & \dots & a_{mn}-b_{mn} \end{bmatrix}$

Properties of Matrix Addition

Matrix addition satisfies several important properties, similar to the addition of real numbers. Let A, B, and C be matrices of the same order $m \times n$.

1. Commutative Law: $A + B = B + A$.
The order of addition does not matter. This is because the addition of corresponding elements ($a_{ij} + b_{ij}$) is commutative in real numbers ($a_{ij} + b_{ij} = b_{ij} + a_{ij}$).

2. Associative Law: $(A + B) + C = A + (B + C)$.
When adding three or more matrices of the same order, the way they are grouped does not affect the result. This holds because addition of corresponding elements is associative in real numbers.

3. Existence of Additive Identity: For every $m \times n$ matrix $A$, there exists an $m \times n$ zero matrix, denoted by $O$ (or $O_{m \times n}$), such that $A + O = O + A = A$.
The zero matrix is a matrix where all elements are zero. Adding zero to any element does not change the element, so adding a zero matrix does not change the original matrix.

4. Existence of Additive Inverse: For every $m \times n$ matrix $A$, there exists a matrix $-A$ of the same order, called the negative of $A$ or the additive inverse of $A$, such that $A + (-A) = (-A) + A = O$.
The matrix $-A$ is obtained by negating every element of $A$. If $A = [a_{ij}]$, then $-A = [-a_{ij}]$. When we add $A$ and $-A$, the corresponding elements sum to $a_{ij} + (-a_{ij}) = 0$, resulting in the zero matrix.


Scalar Multiplication of a Matrix

Multiplying a matrix $A$ by a scalar $k$ (a real number) is a straightforward operation. The result is a new matrix, denoted by $kA$ or $Ak$, of the same order as $A$. Each element of the original matrix $A$ is multiplied by the scalar $k$.

If $A = [a_{ij}]$ is an $m \times n$ matrix and $k$ is a scalar, then $kA = [k a_{ij}]$.

$kA = \begin{bmatrix} ka_{11} & ka_{12} & \dots & ka_{1n} \\ ka_{21} & ka_{22} & \dots & ka_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ ka_{m1} & ka_{m2} & \dots & ka_{mn} \end{bmatrix}$

Properties of Scalar Multiplication

Scalar multiplication interacts well with matrix addition and scalar addition. Let A and B be matrices of the same order $m \times n$, and let $k$ and $l$ be scalars.

1. Distributive Law (Scalar over Matrix Addition): $k(A + B) = kA + kB$.
The scalar $k$ can be distributed over the sum of two matrices. This holds because scalar multiplication distributes over addition of real numbers element-wise.

2. Distributive Law (Scalar Addition over Matrix): $(k + l)A = kA + lA$.
The sum of two scalars can be distributed over a matrix. This also follows from the distributive property of real numbers.

3. Associative Law: $(kl)A = k(lA)$.
Multiplying a matrix by the product of two scalars is the same as multiplying the matrix by one scalar and then multiplying the result by the other scalar.

4. Identity Property: $1 \cdot A = A$.
Multiplying a matrix by the scalar 1 leaves the matrix unchanged.

5. Inverse Property: $(-1) \cdot A = -A$.
Multiplying a matrix by the scalar -1 gives the additive inverse of the matrix.

6. Zero Property: $0 \cdot A = O$.
Multiplying a matrix by the scalar 0 results in the zero matrix of the same order as A.
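
Because addition, subtraction, and scalar multiplication are all element-wise, they carry over verbatim to NumPy arrays; a brief illustrative check of a few of the listed properties:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 0], [-1, 2]])

print(A + B)   # element-wise sum
print(A - B)   # element-wise difference
print(3 * A)   # scalar multiple: every element times 3

print(np.array_equal(A + B, B + A))            # commutativity of addition
print(np.array_equal(2 * (A + B), 2*A + 2*B))  # k(A + B) = kA + kB
print(np.array_equal((2 + 3) * A, 2*A + 3*A))  # (k + l)A = kA + lA
```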


Matrix Multiplication

Matrix multiplication is a more complex operation than addition or scalar multiplication, and it is not always defined for any two matrices. The product of two matrices $A$ and $B$, denoted as $AB$, is defined only if the number of columns in the first matrix ($A$) is equal to the number of rows in the second matrix ($B$).

If $A$ is an $m \times n$ matrix (read as 'm by n matrix') and $B$ is an $n \times p$ matrix, then the product matrix $C = AB$ is defined, and its order will be $m \times p$. The number of columns in A ($n$) must match the number of rows in B ($n$). The resulting matrix C will have the number of rows of A ($m$) and the number of columns of B ($p$).

The element $c_{ij}$ in the $i$-th row and $j$-th column of the product matrix $C=AB$ is obtained by taking the elements of the $i$-th row of $A$ and the elements of the $j$-th column of $B$, multiplying the corresponding elements, and summing up these products.

Mathematically, if $A = [a_{ik}]$ is $m \times n$ and $B = [b_{kj}]$ is $n \times p$, then the element $c_{ij}$ of $C=AB$ (where $C$ is $m \times p$) is given by:

$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \dots + a_{in}b_{nj}$

... (i)

This calculation involves a "row-by-column" multiplication and summation.

Illustrative Examples of Matrix Multiplication:

Example A: Let $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ (order $2 \times 2$) and $B = \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix}$ (order $2 \times 2$). The number of columns in A (2) equals the number of rows in B (2), so $AB$ is defined and will have order $2 \times 2$.

$AB = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\ a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \end{bmatrix}$

Example B: Let $A = \begin{bmatrix} 1 & 2 \end{bmatrix}$ (order $1 \times 2$) and $B = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ (order $2 \times 1$). The number of columns in A (2) equals the number of rows in B (2), so $AB$ is defined and will have order $1 \times 1$.

$AB = \begin{bmatrix} 1 & 2 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} (1 \times 3) + (2 \times 4) \end{bmatrix} = \begin{bmatrix} 3 + 8 \end{bmatrix} = \begin{bmatrix} 11 \end{bmatrix}$

Example C: Using the same matrices as Example B, let's consider $BA$. $B = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$ (order $2 \times 1$) and $A = \begin{bmatrix} 1 & 2 \end{bmatrix}$ (order $1 \times 2$). The number of columns in B (1) equals the number of rows in A (1), so $BA$ is defined and will have order $2 \times 2$.

$BA = \begin{bmatrix} 3 \\ 4 \end{bmatrix} \begin{bmatrix} 1 & 2 \end{bmatrix} = \begin{bmatrix} 3 \times 1 & 3 \times 2 \\ 4 \times 1 & 4 \times 2 \end{bmatrix} = \begin{bmatrix} 3 & 6 \\ 4 & 8 \end{bmatrix}$

Examples B and C show that even if both $AB$ and $BA$ are defined, their orders might be different, and even if the orders are the same, the resulting matrices are generally not equal.
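
To make the row-by-column rule concrete, the sketch below implements formula (i) with explicit loops and compares it against NumPy's built-in `@` operator on the matrices of Examples B and C (illustrative code, not a prescribed method).

```python
import numpy as np

def matmul_by_definition(A, B):
    """Row-by-column product following c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must equal rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1, 2]])     # order 1 x 2
B = np.array([[3], [4]])   # order 2 x 1

print(matmul_by_definition(A, B))  # [[11.]] -- order 1 x 1, as in Example B
print(A @ B)                       # NumPy's built-in product agrees
print(B @ A)                       # [[3 6], [4 8]] -- order 2 x 2, Example C
```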

Properties of Matrix Multiplication

Matrix multiplication has several properties, but it behaves differently from multiplication of real numbers in some crucial ways. Let A, B, and C be matrices such that the indicated products and sums are defined according to the rules of matrix algebra.

1. Associative Law: $(AB)C = A(BC)$.
Matrix multiplication is associative. If the product of three matrices A, B, and C is defined in that order, the grouping of the matrices does not affect the final product.

2. Distributive Law:
(a) $A(B + C) = AB + AC$ (Left distributive law). If A is $m \times n$ and B, C are $n \times p$, then this property holds.
(b) $(A + B)C = AC + BC$ (Right distributive law). If A, B are $m \times n$ and C is $n \times p$, then this property holds.

3. Existence of Multiplicative Identity: For a square matrix $A$ of order $n$, there exists an identity matrix of order $n$, denoted by $I_n$, such that $AI_n = I_n A = A$.
The identity matrix $I_n$ is a square matrix of order $n$ with 1s on the main diagonal and 0s elsewhere. For example, $I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$. Multiplying any matrix by an appropriately sized identity matrix leaves the matrix unchanged, acting like the number 1 in scalar multiplication.

4. Multiplication by Zero Matrix: For any matrix A (of order $m \times n$), multiplying by a zero matrix of appropriate order results in a zero matrix.
If $O_{n \times p}$ is a zero matrix of order $n \times p$, then $A_{m \times n} O_{n \times p} = O_{m \times p}$. If $O_{p \times m}$ is a zero matrix of order $p \times m$, then $O_{p \times m} A_{m \times n} = O_{p \times n}$.

5. Non-Commutativity: In general, matrix multiplication is not commutative. This means that for two matrices A and B, $AB$ is generally not equal to $BA$.
As shown in Examples B and C above, sometimes $AB$ is defined while $BA$ is not, or vice versa. Even when both $AB$ and $BA$ are defined and have the same order (e.g., when A and B are both square matrices of the same order), $AB$ is typically not equal to $BA$.

6. Zero Product Property: If $AB = O$, it does not necessarily imply that $A=O$ or $B=O$.
This is a significant difference from real number multiplication, where if $ab=0$, then either $a=0$ or $b=0$. In matrices, it is possible for the product of two non-zero matrices to be the zero matrix.
Example: Let $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix}$. Both A and B are non-zero matrices. Let's calculate their product $AB$:

$AB = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 2 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} (0 \times 0) + (1 \times 0) & (0 \times 2) + (1 \times 0) \\ (0 \times 0) + (0 \times 0) & (0 \times 2) + (0 \times 0) \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = O$

Here, $AB = O$, even though $A \ne O$ and $B \ne O$.
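
The same counterexample, checked in code:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 2], [0, 0]])

print(A @ B)                 # [[0 0], [0 0]] -- the zero matrix
print(np.any(A), np.any(B))  # True True -- yet neither factor is zero
```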


Example of Matrix Multiplication

Example 1. If $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$, find $AB$ and $BA$.

Answer:

Matrix A is of order $2 \times 2$. Matrix B is of order $2 \times 2$. Since the number of columns in A (2) equals the number of rows in B (2), the product $AB$ is defined and will be of order $2 \times 2$. Since the number of columns in B (2) equals the number of rows in A (2), the product $BA$ is defined and will be of order $2 \times 2$.

Calculate AB

$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$

To find the element in the first row and first column of $AB$, we multiply the first row of A by the first column of B and sum the products: $(1 \times 5) + (2 \times 7) = 5 + 14 = 19$.

To find the element in the first row and second column of $AB$, we multiply the first row of A by the second column of B and sum the products: $(1 \times 6) + (2 \times 8) = 6 + 16 = 22$.

To find the element in the second row and first column of $AB$, we multiply the second row of A by the first column of B and sum the products: $(3 \times 5) + (4 \times 7) = 15 + 28 = 43$.

To find the element in the second row and second column of $AB$, we multiply the second row of A by the second column of B and sum the products: $(3 \times 6) + (4 \times 8) = 18 + 32 = 50$.

$AB = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$

Calculate BA

$BA = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$

To find the element in the first row and first column of $BA$, we multiply the first row of B by the first column of A and sum the products: $(5 \times 1) + (6 \times 3) = 5 + 18 = 23$.

To find the element in the first row and second column of $BA$, we multiply the first row of B by the second column of A and sum the products: $(5 \times 2) + (6 \times 4) = 10 + 24 = 34$.

To find the element in the second row and first column of $BA$, we multiply the second row of B by the first column of A and sum the products: $(7 \times 1) + (8 \times 3) = 7 + 24 = 31$.

To find the element in the second row and second column of $BA$, we multiply the second row of B by the second column of A and sum the products: $(7 \times 2) + (8 \times 4) = 14 + 32 = 46$.

$BA = \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix}$

Comparing the results for $AB$ and $BA$, we see that the corresponding elements are different.

$AB \ne BA$

This example confirms that matrix multiplication is generally not commutative.



Determinants


The determinant is a scalar value associated with a square matrix. It is a unique number that can be calculated from the elements of the matrix and provides crucial information about the matrix, such as whether it is invertible. Determinants have wide applications in mathematics, physics, and engineering, including solving systems of linear equations, finding the inverse of a matrix, calculating eigenvalues, and determining areas or volumes transformed by the matrix.

The determinant of a square matrix $A$ is commonly denoted by $\det(A)$ or by vertical bars around the matrix symbol, $|A|$. Note that the vertical bars do not represent the absolute value, but rather the determinant of the matrix.


Determinant of a $1 \times 1$ Matrix

The simplest case is a $1 \times 1$ matrix. If $A = [a_{11}]$ is a $1 \times 1$ matrix, its determinant is simply the value of its single element.

$\det(A) = |a_{11}| = a_{11}$

Example: If $A = [5]$, then $\det(A) = 5$. If $B = [-3]$, then $\det(B) = -3$.


Determinant of a $2 \times 2$ Matrix

For a $2 \times 2$ matrix, the determinant is calculated using a specific formula involving the products of elements along the diagonals.

If $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is a $2 \times 2$ matrix, its determinant is calculated as the difference between the product of the elements on the main diagonal (top-left to bottom-right) and the product of the elements on the off-diagonal (top-right to bottom-left).

$\det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$

... (i)

Example: If $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$, then using the formula:

$\det(A) = (2 \times 4) - (3 \times 1) = 8 - 3 = 5$


Determinant of a $3 \times 3$ Matrix

The determinant of a $3 \times 3$ (or larger) square matrix is defined recursively in terms of determinants of smaller matrices. This calculation is typically done using the method of expansion by minors and cofactors along any row or any column.

Let $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ be a $3 \times 3$ matrix.

Minor of an Element

The minor of an element $a_{ij}$ (the element in the $i$-th row and $j$-th column), denoted by $M_{ij}$, is the determinant of the square matrix obtained by deleting the $i$-th row and the $j$-th column of the original matrix $A$.

Example: For the matrix $A$ above:

The minor of $a_{11}$ is $M_{11}$, obtained by deleting row 1 and column 1:

$M_{11} = \det \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} = a_{22}a_{33} - a_{23}a_{32}$

The minor of $a_{23}$ is $M_{23}$, obtained by deleting row 2 and column 3:

$M_{23} = \det \begin{bmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{bmatrix} = a_{11}a_{32} - a_{12}a_{31}$

Cofactor of an Element

The cofactor of an element $a_{ij}$, denoted by $C_{ij}$, is related to its minor $M_{ij}$ by including a sign factor based on the position of the element. The formula is:

$C_{ij} = (-1)^{i+j} M_{ij}$

... (ii)

The term $(-1)^{i+j}$ results in a checkerboard pattern of signs:

$\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}$

This means $C_{ij} = M_{ij}$ if $i+j$ is even, and $C_{ij} = -M_{ij}$ if $i+j$ is odd.

Example: For the matrix A:

The cofactor of $a_{11}$ is $C_{11} = (-1)^{1+1} M_{11} = (+1) M_{11} = M_{11} = a_{22}a_{33} - a_{23}a_{32}$.

The cofactor of $a_{23}$ is $C_{23} = (-1)^{2+3} M_{23} = (-1) M_{23} = -M_{23} = -(a_{11}a_{32} - a_{12}a_{31}) = a_{12}a_{31} - a_{11}a_{32}$.

Expansion by Cofactors

The determinant of a square matrix is the sum of the products of the elements of any one row (or any one column) with their corresponding cofactors. This is also known as Laplace expansion.

Expanding the determinant of matrix A along the first row:

$\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$

... (iii)

Expanding along the first column:

$\det(A) = a_{11}C_{11} + a_{21}C_{21} + a_{31}C_{31}$

... (iv)

The value of the determinant is independent of the row or column chosen for expansion. It is often easiest to expand along a row or column that contains one or more zeros, as the product of the element and its cofactor will be zero, simplifying the calculation.

Substituting the definition of cofactors ($C_{ij} = (-1)^{i+j} M_{ij}$) into the first row expansion (iii):

$\det(A) = a_{11}(-1)^{1+1}M_{11} + a_{12}(-1)^{1+2}M_{12} + a_{13}(-1)^{1+3}M_{13}$

$\det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$

Substituting the expressions for the $2 \times 2$ minors:

$\det(A) = a_{11} \det \begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} - a_{12} \det \begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix} + a_{13} \det \begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$

$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$

... (v)
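
Expansion by cofactors translates naturally into a recursive function. The sketch below (illustrative only) implements a first-row Laplace expansion and checks it against `np.linalg.det` on the matrix worked by hand in Example 1 below.

```python
import numpy as np

def det_by_cofactors(A):
    """Determinant by Laplace expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j+1 (0-indexed: row 0, column j).
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        # (-1)**j is the sign (-1)^(1+j) of the first-row cofactor C_1j.
        total += (-1) ** j * A[0, j] * det_by_cofactors(minor)
    return total

A = np.array([[1, 2, 3],
              [0, 4, 1],
              [2, 0, 5]])

print(det_by_cofactors(A))              # 0
print(np.isclose(np.linalg.det(A), 0))  # True, up to floating-point rounding
```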


Properties of Determinants

Determinants have several properties that are useful for their calculation and in theoretical applications. Let A and B be square matrices of the same order.

1. Transpose Property: $\det(A^T) = \det(A)$.
The determinant of a matrix is equal to the determinant of its transpose.

2. Row/Column Interchange Property: If any two rows (or columns) of a matrix are interchanged, the sign of the determinant changes.
For example, if $A'$ is obtained from $A$ by swapping two rows, then $\det(A') = -\det(A)$.

3. Identical/Proportional Rows/Columns Property: If any two rows (or columns) of a matrix are identical or proportional, the determinant is zero.
If Row $i$ = Row $j$ (for $i \ne j$), or if Row $i = k \times$ Row $j$ (for some scalar $k$ and $i \ne j$), then $\det(A) = 0$.

4. Scalar Multiplication Property (Row/Column): If each element of a single row (or column) of a matrix is multiplied by a scalar $k$, the determinant of the new matrix is $k$ times the determinant of the original matrix.
$\det(\text{Matrix with } i\text{-th row multiplied by } k) = k \det(A)$.

5. Scalar Multiplication Property (Matrix): If a matrix $A$ of order $n$ is multiplied by a scalar $k$, the determinant of the resulting matrix is $k^n$ times the determinant of the original matrix.
$\det(kA) = k^n \det(A)$ (where $n$ is the order of the square matrix A).

6. Row/Column Operation Property: If to the elements of any row (or column), the corresponding elements of any other row (or column) are added with a scalar multiple, the determinant remains unchanged.
$\det(\text{Matrix with } R_i \to R_i + k R_j) = \det(A)$. This property is crucial for simplifying matrices using row operations to calculate determinants.

7. Determinant of a Product: For two square matrices A and B of the same order, the determinant of their product is the product of their determinants.
$\det(AB) = \det(A) \det(B)$.

8. Determinant of a Power: For a square matrix A and any positive integer $n$, the determinant of $A^n$ is the $n$-th power of the determinant of A.
$\det(A^n) = (\det(A))^n$.

9. Determinant of a Triangular, Diagonal, or Scalar Matrix: If a square matrix A is a triangular matrix (upper or lower), a diagonal matrix, or a scalar matrix, its determinant is the product of its diagonal elements.
If $A = \begin{bmatrix} a_{11} & * & * \\ 0 & a_{22} & * \\ 0 & 0 & a_{33} \end{bmatrix}$ (Upper Triangular) or $A = \begin{bmatrix} a_{11} & 0 & 0 \\ * & a_{22} & 0 \\ * & * & a_{33} \end{bmatrix}$ (Lower Triangular) or $A = \begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$ (Diagonal), then $\det(A) = a_{11}a_{22}a_{33}$.

10. Determinant of an Inverse Matrix: If a matrix A is invertible (non-singular), then its determinant is non-zero, and the determinant of its inverse $A^{-1}$ is the reciprocal of the determinant of A.
If $\det(A) \ne 0$, then $\det(A^{-1}) = \frac{1}{\det(A)}$.
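
Several of these properties can be spot-checked numerically; an illustrative NumPy sketch:

```python
import numpy as np

A = np.array([[2., 3.], [1., 4.]])
B = np.array([[1., 2.], [0., 3.]])
det = np.linalg.det

print(np.isclose(det(A.T), det(A)))                 # Property 1: det(A^T) = det(A)
print(np.isclose(det(A @ B), det(A) * det(B)))      # Property 7: det(AB) = det(A)det(B)
print(np.isclose(det(3 * A), 3**2 * det(A)))        # Property 5: det(kA) = k^n det(A), n = 2
print(np.isclose(det(np.linalg.inv(A)), 1/det(A)))  # Property 10: det(A^-1) = 1/det(A)
```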


Example of $3 \times 3$ Determinant Calculation

Example 1. Calculate the determinant of the matrix $A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 1 \\ 2 & 0 & 5 \end{bmatrix}$.

Answer:

We will calculate the determinant by expanding along the first row ($i=1$). The elements in the first row are $a_{11}=1, a_{12}=2, a_{13}=3$. The signs for the cofactors in the first row are $(-1)^{1+1}=+1$, $(-1)^{1+2}=-1$, and $(-1)^{1+3}=+1$.

First, find the minors $M_{11}, M_{12}, M_{13}$:

Minor $M_{11}$: Delete row 1 and column 1 from A.

$M_{11} = \det \begin{bmatrix} 4 & 1 \\ 0 & 5 \end{bmatrix} = (4 \times 5) - (1 \times 0) = 20 - 0 = 20$

Minor $M_{12}$: Delete row 1 and column 2 from A.

$M_{12} = \det \begin{bmatrix} 0 & 1 \\ 2 & 5 \end{bmatrix} = (0 \times 5) - (1 \times 2) = 0 - 2 = -2$

Minor $M_{13}$: Delete row 1 and column 3 from A.

$M_{13} = \det \begin{bmatrix} 0 & 4 \\ 2 & 0 \end{bmatrix} = (0 \times 0) - (4 \times 2) = 0 - 8 = -8$

Next, find the cofactors $C_{11}, C_{12}, C_{13}$ using $C_{ij} = (-1)^{i+j} M_{ij}$:

$C_{11} = (-1)^{1+1} M_{11} = (+1)(20) = 20$

$C_{12} = (-1)^{1+2} M_{12} = (-1)(-2) = 2$

$C_{13} = (-1)^{1+3} M_{13} = (+1)(-8) = -8$

Now, calculate the determinant by expanding along the first row: $\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$

$\det(A) = (1)(20) + (2)(2) + (3)(-8)$

$\det(A) = 20 + 4 - 24$

$\det(A) = 24 - 24 = 0$

The determinant of matrix A is 0.

Alternate Calculation (Expand along Column 1)

Let's calculate the determinant by expanding along the first column ($j=1$). The elements in the first column are $a_{11}=1, a_{21}=0, a_{31}=2$. The signs for the cofactors in the first column are $(-1)^{1+1}=+1$, $(-1)^{2+1}=-1$, and $(-1)^{3+1}=+1$.

We need the cofactors $C_{11}, C_{21}, C_{31}$. $C_{11}$ was already calculated as 20.

Minor $M_{21}$: Delete row 2 and column 1 from A.

$M_{21} = \det \begin{bmatrix} 2 & 3 \\ 0 & 5 \end{bmatrix} = (2 \times 5) - (3 \times 0) = 10 - 0 = 10$

Cofactor $C_{21} = (-1)^{2+1} M_{21} = (-1)(10) = -10$.

Minor $M_{31}$: Delete row 3 and column 1 from A.

$M_{31} = \det \begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix} = (2 \times 1) - (3 \times 4) = 2 - 12 = -10$

Cofactor $C_{31} = (-1)^{3+1} M_{31} = (+1)(-10) = -10$.

Calculate the determinant by expanding along the first column: $\det(A) = a_{11}C_{11} + a_{21}C_{21} + a_{31}C_{31}$

$\det(A) = (1)(20) + (0)(-10) + (2)(-10)$

$\det(A) = 20 + 0 - 20$

$\det(A) = 0$

The result obtained by expanding along the first column is the same as that obtained by expanding along the first row, which is 0.



Inverse of a matrix


The concept of the inverse of a matrix is analogous to the reciprocal of a non-zero scalar in arithmetic. Just as for a non-zero number $a$, there exists a number $a^{-1} = \frac{1}{a}$ such that $a \times a^{-1} = a^{-1} \times a = 1$, for a square matrix $A$, its inverse, denoted by $A^{-1}$, is another matrix of the same order such that when multiplied by $A$, it results in the identity matrix $I$.

Formally, if $A$ is a square matrix of order $n$, its inverse $A^{-1}$ is a square matrix of the same order $n$ satisfying the condition:

$AA^{-1} = A^{-1}A = I_n$

... (i)

Here, $I_n$ is the identity matrix of order $n$.

It is important to note that:

  1. The inverse is defined only for square matrices.
  2. Not every square matrix has an inverse; as explained below, the inverse exists only when the determinant is non-zero.
  3. If the inverse of a square matrix exists, it is unique.


Singular and Non-Singular Matrix

The existence of the inverse of a square matrix is directly linked to its determinant. This leads to the classification of square matrices into singular and non-singular types.

A square matrix $A$ is called a singular matrix if its determinant is zero, i.e., $\det(A) = 0$.

A square matrix $A$ is called a non-singular matrix if its determinant is non-zero, i.e., $\det(A) \ne 0$.

Condition for Existence of Inverse: A square matrix $A$ has an inverse ($A^{-1}$ exists) if and only if it is a non-singular matrix ($\det(A) \ne 0$).

If $\det(A) = 0$, the matrix $A$ is singular and does not have an inverse.


Adjoint of a Matrix

To find the inverse of a matrix, we first need to understand the concept of the adjoint of a matrix. The adjoint of a square matrix $A$, denoted by $\text{adj}(A)$, is the transpose of the cofactor matrix of $A$.

Let $A = [a_{ij}]$ be a square matrix of order $n$. For each element $a_{ij}$, we can find its cofactor $C_{ij}$. Recall that $C_{ij} = (-1)^{i+j}M_{ij}$, where $M_{ij}$ is the minor of $a_{ij}$ (the determinant of the submatrix obtained by deleting the $i$-th row and $j$-th column of $A$).

The cofactor matrix of $A$ is the matrix $C = [C_{ij}]$, which is formed by replacing each element $a_{ij}$ in $A$ with its corresponding cofactor $C_{ij}$.

Cofactor Matrix of $A = \begin{bmatrix} C_{11} & C_{12} & \dots & C_{1n} \\ C_{21} & C_{22} & \dots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \dots & C_{nn} \end{bmatrix}$

The adjoint of $A$ is the transpose of this cofactor matrix:

$\text{adj}(A) = (\text{Cofactor Matrix of } A)^T = [C_{ij}]^T = [C_{ji}]$

This means the element in the $i$-th row and $j$-th column of $\text{adj}(A)$ is the cofactor $C_{ji}$ of the element $a_{ji}$ from the original matrix $A$.

$\text{adj}(A) = \begin{bmatrix} C_{11} & C_{21} & \dots & C_{n1} \\ C_{12} & C_{22} & \dots & C_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ C_{1n} & C_{2n} & \dots & C_{nn} \end{bmatrix}$

... (ii)

Adjoint of a $2 \times 2$ Matrix

Finding the adjoint is particularly simple for a $2 \times 2$ matrix. If $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the cofactors are:

$C_{11} = (-1)^{1+1} M_{11} = (+1) \det [d] = d$
$C_{12} = (-1)^{1+2} M_{12} = (-1) \det [c] = -c$
$C_{21} = (-1)^{2+1} M_{21} = (-1) \det [b] = -b$
$C_{22} = (-1)^{2+2} M_{22} = (+1) \det [a] = a$

The cofactor matrix is $\begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{bmatrix} = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}$.

The adjoint is the transpose of the cofactor matrix:

$\text{adj}(A) = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}^T = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

... (iii)

So, for a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the adjoint is found by swapping the elements on the main diagonal (a and d) and changing the sign of the elements on the off-diagonal (b and c).

Adjoint of a $3 \times 3$ Matrix

For a $3 \times 3$ matrix, we need to calculate all 9 cofactors $C_{ij}$ and arrange them into the cofactor matrix, then take its transpose. If $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$, the cofactors are $C_{11}, C_{12}, \dots, C_{33}$. The cofactor matrix is $C = \begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}$.

The adjoint is the transpose of the cofactor matrix:

$\text{adj}(A) = C^T = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}$

... (iv)


Formula for Inverse using Adjoint and Determinant

A fundamental theorem in matrix algebra relates the inverse of a matrix to its adjoint and determinant. For a non-singular square matrix $A$, the inverse $A^{-1}$ is given by the formula:

$\mathbf{A^{-1} = \frac{1}{\det(A)} \text{adj}(A)}$

... (v)

This formula highlights why the determinant being non-zero is a necessary condition for the existence of the inverse. If $\det(A) = 0$, the expression $\frac{1}{\det(A)}$ is undefined, and hence the inverse does not exist.

The derivation of this formula comes from the property that for any square matrix A:

$A (\text{adj}(A)) = (\text{adj}(A)) A = \det(A) I_n$

If $\det(A) \ne 0$, we can divide by $\det(A)$:

$A \left(\frac{1}{\det(A)} \text{adj}(A)\right) = \left(\frac{1}{\det(A)} \text{adj}(A)\right) A = \frac{1}{\det(A)} \det(A) I_n = I_n$

By the definition of the inverse (Equation i), the matrix $\frac{1}{\det(A)} \text{adj}(A)$ must be the inverse of $A$.
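
The adjoint-and-determinant route to the inverse can be implemented directly from definitions (ii) and (v). The sketch below (illustrative, using NumPy) builds the cofactor matrix, transposes it, and verifies both $A\,\text{adj}(A) = \det(A) I$ and the inverse formula.

```python
import numpy as np

def adjoint(A):
    """Transpose of the cofactor matrix of a square matrix A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor C_ij
    return C.T

A = np.array([[2., 3.], [1., 4.]])
adjA = adjoint(A)
detA = np.linalg.det(A)

print(adjA)                                        # [[ 4. -3.], [-1.  2.]]
print(np.allclose(A @ adjA, detA * np.eye(2)))     # A adj(A) = det(A) I
print(np.allclose(adjA / detA, np.linalg.inv(A)))  # A^-1 = adj(A) / det(A)
```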


Properties of Inverse

Matrix inversion satisfies several properties. Let A and B be invertible matrices of the same order $n$, and let $k$ be a non-zero scalar.

1. Inverse of the Inverse: $(A^{-1})^{-1} = A$.
Taking the inverse of the inverse of a matrix returns the original matrix.

2. Inverse of a Scalar Multiple: $(kA)^{-1} = \frac{1}{k} A^{-1}$.
The inverse of a scalar multiple of a matrix is the reciprocal of the scalar multiplied by the inverse of the matrix. This requires $k \ne 0$.

3. Inverse of a Product: $(AB)^{-1} = B^{-1}A^{-1}$.
The inverse of the product of two invertible matrices is the product of their inverses in the reverse order. This property is very important.

4. Inverse of the Transpose: $(A^T)^{-1} = (A^{-1})^T$.
The inverse of the transpose of a matrix is equal to the transpose of its inverse.

5. Determinant of the Inverse: $\det(A^{-1}) = \frac{1}{\det(A)}$.
The determinant of the inverse of a matrix is the reciprocal of the determinant of the original matrix. This follows from $\det(AA^{-1}) = \det(I) \implies \det(A)\det(A^{-1}) = 1$.

6. Identity Matrix Inverse: $I_n^{-1} = I_n$.
The inverse of an identity matrix is the identity matrix itself.


Example of Finding Inverse of a $2 \times 2$ Matrix

Example 1. Find the inverse of the matrix $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$.

Answer:

Step 1: Calculate the determinant of A. This is to check if the matrix is invertible.

$\det(A) = (2 \times 4) - (3 \times 1) = 8 - 3 = 5$

Since $\det(A) = 5$, which is not zero, the matrix $A$ is non-singular and its inverse exists.

Step 2: Find the adjoint of A. For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the adjoint is $\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$.

Applying this formula to $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$ (where $a=2, b=3, c=1, d=4$):

$\text{adj}(A) = \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix}$

Step 3: Use the formula $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$.

$A^{-1} = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix}$

Multiply the scalar $\frac{1}{5}$ into each element of the adjoint matrix:

$A^{-1} = \begin{bmatrix} \frac{4}{5} & \frac{-3}{5} \\ \frac{-1}{5} & \frac{2}{5} \end{bmatrix}$

Step 4: Verification (Optional but Recommended). Check if $AA^{-1} = I$.

$AA^{-1} = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} 4/5 & -3/5 \\ -1/5 & 2/5 \end{bmatrix}$

Perform the matrix multiplication:

Element (1,1): $(2 \times \frac{4}{5}) + (3 \times \frac{-1}{5}) = \frac{8}{5} - \frac{3}{5} = \frac{5}{5} = 1$

Element (1,2): $(2 \times \frac{-3}{5}) + (3 \times \frac{2}{5}) = \frac{-6}{5} + \frac{6}{5} = \frac{0}{5} = 0$

Element (2,1): $(1 \times \frac{4}{5}) + (4 \times \frac{-1}{5}) = \frac{4}{5} - \frac{4}{5} = \frac{0}{5} = 0$

Element (2,2): $(1 \times \frac{-3}{5}) + (4 \times \frac{2}{5}) = \frac{-3}{5} + \frac{8}{5} = \frac{5}{5} = 1$

$AA^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2$

Since $AA^{-1} = I_2$, the calculated inverse is correct.



Solving System of Simultaneous Equations using Matrix Method


In mathematics, we often encounter systems of linear equations. For a system with the same number of equations as variables, matrices provide a powerful and systematic method for finding the solution, provided certain conditions are met. This method involves representing the system in matrix form and using the concept of the inverse of a matrix.


Matrix Representation of a System of Linear Equations

Consider a system of $n$ linear equations involving $n$ variables, say $x_1, x_2, \dots, x_n$:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$

$\vdots$

$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$

This system can be concisely written in matrix form as $AX = B$, where:

A is the coefficient matrix, an $n \times n$ matrix containing the coefficients of the variables:

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$

X is the variable matrix (or column vector), an $n \times 1$ matrix containing the variables:

$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$

B is the constant matrix (or column vector), an $n \times 1$ matrix containing the constants on the right side of the equations:

$B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$

Performing the matrix multiplication $AX$ produces an $n \times 1$ matrix whose entries are the left-hand sides of the equations; equating it, entry by entry, to the $n \times 1$ matrix $B$ recovers the original system of equations.

$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$

... (i)
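
As an illustrative sketch (assuming NumPy as the tool), this representation can be set up directly in code; the small system below, $x + y = 3$ and $x - y = 1$ with solution $x = 2$, $y = 1$, is a hypothetical example of our own.

```python
# Illustrative sketch: storing a system in the AX = B form with NumPy.
# The 2x2 system below (x + y = 3, x - y = 1) is a hypothetical example.
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0]])   # coefficient matrix
B = np.array([3.0, 1.0])      # constant column

x = np.array([2.0, 1.0])      # known solution: x = 2, y = 1
assert np.allclose(A @ x, B)  # A @ x reproduces the right-hand sides
```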


Solving the System using Inverse Matrix Method

The matrix equation $AX = B$ can be solved for the variable matrix $X$ if the coefficient matrix $A$ is invertible.

Given the matrix equation:

$AX = B$

If the coefficient matrix $A$ is non-singular (i.e., $\det(A) \ne 0$), then its inverse $A^{-1}$ exists. We can pre-multiply (multiply from the left side) both sides of the matrix equation by $A^{-1}$:

$A^{-1}(AX) = A^{-1}B$

Using the associative property of matrix multiplication, $(A^{-1}A)X = A^{-1}B$:

$(A^{-1}A)X = A^{-1}B$

By the definition of the inverse matrix, $A^{-1}A = I_n$ (the identity matrix of order n):

$I_n X = A^{-1}B$

[$A^{-1}A = I_n$]

Multiplying a matrix by the identity matrix leaves it unchanged, so $I_n X = X$:

$\mathbf{X = A^{-1}B}$

... (ii)

This equation provides the solution for the system of linear equations. To find the values of the variables $x_1, x_2, \dots, x_n$, we need to:

  1. Calculate the inverse of the coefficient matrix $A$, i.e., $A^{-1}$. This is possible only if $\det(A) \ne 0$.
  2. Perform the matrix multiplication of $A^{-1}$ and the constant matrix $B$.

The resulting column matrix $X$ will contain the values of the variables $x_1, x_2, \dots, x_n$.
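
A minimal Python sketch of these two steps, assuming NumPy, is given below; the function name `solve_by_inverse` is ours. In numerical practice `numpy.linalg.solve` is preferred over forming $A^{-1}$ explicitly; the explicit form is shown only to mirror equation (ii).

```python
# Minimal sketch of the two steps above (solve_by_inverse is our own name).
# numpy.linalg.solve is preferred in practice; A^-1 B is shown to mirror (ii).
import numpy as np

def solve_by_inverse(A, B):
    if np.isclose(np.linalg.det(A), 0.0):
        raise ValueError("det(A) = 0: the inverse method does not apply")
    A_inv = np.linalg.inv(A)   # Step 1: compute A^-1
    return A_inv @ B           # Step 2: X = A^-1 B

A = np.array([[1.0, 1.0], [1.0, -1.0]])
B = np.array([3.0, 1.0])
print(solve_by_inverse(A, B))  # [2. 1.]  i.e. x = 2, y = 1
```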


Conditions for Consistency of the System

The matrix method provides a clear way to determine whether a system of linear equations is consistent (has one or more solutions) or inconsistent (has no solution). Consider the system $AX = B$, where A is a square matrix.

Case 1: If $\det(A) \ne 0$ (A is a non-singular matrix)

In this case, the inverse $A^{-1}$ exists and is unique. The system $AX = B$ has a unique solution given by $X = A^{-1}B$. The system is said to be consistent and has a unique solution.

Case 2: If $\det(A) = 0$ (A is a singular matrix)

If $\det(A) = 0$, the matrix $A$ is singular, and its inverse $A^{-1}$ does not exist. In this scenario, the system $AX = B$ has either no solution or infinitely many solutions. To decide which, we examine the product $(\text{adj}(A))B$: if $(\text{adj}(A))B \ne O$ (the zero matrix), the system is inconsistent and has no solution; if $(\text{adj}(A))B = O$, the system may have infinitely many solutions or none, and the equations must be examined directly.
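
The following Python sketch (our own; the helper names `adjoint` and `classify` are illustrative) applies this test to a small singular system, building the adjoint from cofactors exactly as in its definition.

```python
# Sketch of the consistency test (adjoint and classify are our own helpers).
import numpy as np

def adjoint(A):
    """Adjoint as the transpose of the cofactor matrix (fine for small n)."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def classify(A, B):
    if not np.isclose(np.linalg.det(A), 0.0):
        return "non-singular: unique solution"
    if np.allclose(adjoint(A) @ B, 0.0):
        return "singular, (adj A)B = O: examine equations (many or none)"
    return "singular, (adj A)B != O: inconsistent, no solution"

A = np.array([[1.0, 1.0], [2.0, 2.0]])     # x + y and 2x + 2y: parallel lines
print(classify(A, np.array([2.0, 5.0])))   # no solution
print(classify(A, np.array([2.0, 4.0])))   # same line: infinitely many
```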


Example using Matrix Method ($2 \times 2$ System)

Example 1. Solve the following system of equations using the matrix method:

$2x + 3y = 7$

$x + 4y = 9$

Answer:

Step 1: Write the system of equations in matrix form $AX = B$.

$\begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 7 \\ 9 \end{bmatrix}$

Here, the coefficient matrix is $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$, the variable matrix is $X = \begin{bmatrix} x \\ y \end{bmatrix}$, and the constant matrix is $B = \begin{bmatrix} 7 \\ 9 \end{bmatrix}$.

Step 2: Check if the matrix A is non-singular by calculating its determinant.

$\det(A) = \begin{vmatrix} 2 & 3 \\ 1 & 4 \end{vmatrix} = (2 \times 4) - (3 \times 1) = 8 - 3 = 5$

Since $\det(A) = 5$, which is non-zero, the matrix $A$ is non-singular. Therefore, its inverse $A^{-1}$ exists, and the system of equations has a unique solution.

Step 3: Find the adjoint of A. For a $2 \times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, the adjoint is $\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$.

Using this rule for $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$:

$\text{adj}(A) = \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix}$

Step 4: Calculate the inverse of A using the formula $A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$.

$A^{-1} = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix}$

Step 5: Use the formula $X = A^{-1}B$ to find the solution.

$X = \frac{1}{5} \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 7 \\ 9 \end{bmatrix}$

Perform the matrix multiplication $ \begin{bmatrix} 4 & -3 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 7 \\ 9 \end{bmatrix} $:

Element (1,1): $(4 \times 7) + (-3 \times 9) = 28 - 27 = 1$
Element (2,1): $(-1 \times 7) + (2 \times 9) = -7 + 18 = 11$

$X = \frac{1}{5} \begin{bmatrix} 1 \\ 11 \end{bmatrix}$

Multiply the resulting matrix by the scalar $\frac{1}{5}$:

$X = \begin{bmatrix} \frac{1}{5} \\ \frac{11}{5} \end{bmatrix}$

Since $X = \begin{bmatrix} x \\ y \end{bmatrix}$, by equating the elements of the matrix equation, we get the values of the variables.

$x = \frac{1}{5}$

$y = \frac{11}{5}$

The unique solution to the given system of equations is $x = \frac{1}{5}$ and $y = \frac{11}{5}$.
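
As a quick machine cross-check of this example (a sketch, not part of the worked solution), `numpy.linalg.solve`, which factorizes $A$ rather than inverting it, returns the same values:

```python
# Sketch-only cross-check of Example 1 with NumPy's built-in solver.
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])
B = np.array([7.0, 9.0])
print(np.linalg.solve(A, B))   # [0.2 2.2]  i.e. x = 1/5, y = 11/5
```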



Cramer’s Rule and Inverse of Coefficient Matrix


Cramer's rule is a method for solving a system of linear equations using determinants. It provides a direct formula for the value of each variable, provided the system has a unique solution. This rule is a consequence of the theory of determinants and is closely related to the inverse matrix method.


Cramer's Rule

Consider a system of $n$ linear equations in $n$ variables $x_1, x_2, \dots, x_n$:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$

$\vdots$

$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$

This system can be written in matrix form as $AX = B$, where:

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$, $X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$, and $B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$

Let $\Delta = \det(A)$ be the determinant of the coefficient matrix A.

Now, define $\Delta_j$ as the determinant of the matrix obtained by replacing the $j$-th column of the original coefficient matrix $A$ with the constant matrix $B$.

Cramer's rule states that if $\Delta \ne 0$, the system of linear equations has a unique solution given by the formulas:

$\mathbf{x_j = \frac{\Delta_j}{\Delta}}$

for $j = 1, 2, \dots, n$ ... (i)

This means $x_1 = \frac{\Delta_1}{\Delta}$, $x_2 = \frac{\Delta_2}{\Delta}$, and so on, up to $x_n = \frac{\Delta_n}{\Delta}$.

Conditions for using Cramer's Rule:

1. The system must be square, i.e., it must have the same number of equations as variables, so that the coefficient matrix $A$ is a square matrix.

2. The determinant of the coefficient matrix must be non-zero: $\Delta = \det(A) \ne 0$. If $\Delta = 0$, Cramer's rule cannot be used; the system then has either no solution or infinitely many solutions.
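
A direct, minimal implementation of formula (i) in Python (assuming NumPy; the function name `cramer` is ours) replaces one column of $A$ at a time and takes ratios of determinants:

```python
# Direct implementation of formula (i) (a sketch; cramer is our own name).
import numpy as np

def cramer(A, B):
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("det(A) = 0: Cramer's rule does not apply")
    x = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        A_j = A.copy()
        A_j[:, j] = B                      # replace column j of A with B
        x[j] = np.linalg.det(A_j) / delta  # x_j = Delta_j / Delta
    return x
```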

Cramer's Rule for a $2 \times 2$ System

Consider the system:

$a_{11}x + a_{12}y = b_1$

$a_{21}x + a_{22}y = b_2$

The coefficient matrix is $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ and the constant matrix is $B = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}$.

$\Delta = \det(A) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$.

$\Delta_1$ is the determinant of the matrix formed by replacing the 1st column of A with B:

$\Delta_1 = \begin{vmatrix} b_1 & a_{12} \\ b_2 & a_{22} \end{vmatrix} = b_1a_{22} - a_{12}b_2$.

$\Delta_2$ is the determinant of the matrix formed by replacing the 2nd column of A with B:

$\Delta_2 = \begin{vmatrix} a_{11} & b_1 \\ a_{21} & b_2 \end{vmatrix} = a_{11}b_2 - b_1a_{21}$.

If $\Delta \ne 0$, the unique solution is given by $x = \frac{\Delta_1}{\Delta}$ and $y = \frac{\Delta_2}{\Delta}$.

Cramer's Rule for a $3 \times 3$ System

Consider the system:

$a_{11}x + a_{12}y + a_{13}z = b_1$

$a_{21}x + a_{22}y + a_{23}z = b_2$

$a_{31}x + a_{32}y + a_{33}z = b_3$

The coefficient matrix is $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ and the constant matrix is $B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$.

$\Delta = \det(A) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$.

$\Delta_1 = \begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}$ (obtained by replacing column 1 of A with B)

$\Delta_2 = \begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}$ (obtained by replacing column 2 of A with B)

$\Delta_3 = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_{3} \end{vmatrix}$ (obtained by replacing column 3 of A with B)

If $\Delta \ne 0$, the unique solution is given by $x = \frac{\Delta_1}{\Delta}$, $y = \frac{\Delta_2}{\Delta}$, and $z = \frac{\Delta_3}{\Delta}$.


Connection between Cramer's Rule and Inverse Matrix Method

Cramer's rule can be derived directly from the formula for the inverse of a matrix. Recall the formula for the inverse of a non-singular matrix $A$:

$A^{-1} = \frac{1}{\det(A)} \text{adj}(A)$

And the solution to the matrix equation $AX = B$ when $A$ is non-singular is $X = A^{-1}B$.

$X = \frac{1}{\det(A)} \text{adj}(A) B$

Let's expand this for a $3 \times 3$ system. Let $A = [a_{ij}]$, $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, $B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$. The adjoint of A is $\text{adj}(A) = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}$, where $C_{ij}$ are the cofactors of $a_{ij}$ in A.

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{1}{\Delta} \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$

Performing the matrix multiplication on the right side:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{1}{\Delta} \begin{bmatrix} C_{11}b_1 + C_{21}b_2 + C_{31}b_3 \\ C_{12}b_1 + C_{22}b_2 + C_{32}b_3 \\ C_{13}b_1 + C_{23}b_2 + C_{33}b_3 \end{bmatrix}$

Now, let's look at the numerators:

The numerator for $x$ is $C_{11}b_1 + C_{21}b_2 + C_{31}b_3$. Recall the definition of $\Delta_1 = \begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}$. Expanding $\Delta_1$ along its first column, the entries are $b_1, b_2, b_3$, and their cofactors are exactly $C_{11}, C_{21}, C_{31}$, because deleting the first column of $\Delta_1$ leaves the same submatrices of $A$ (its second and third columns) from which those cofactors are computed.

$\Delta_1 = b_1 C_{11} + b_2 C_{21} + b_3 C_{31}$

[Expansion along 1st column of $\Delta_1$]

So, $x = \frac{1}{\Delta} (\Delta_1) = \frac{\Delta_1}{\Delta}$.

Similarly, the numerator for $y$ is $C_{12}b_1 + C_{22}b_2 + C_{32}b_3$. This is the expansion of $\Delta_2 = \begin{vmatrix} a_{11} & b_1 & a_{13} \\ a_{21} & b_2 & a_{23} \\ a_{31} & b_3 & a_{33} \end{vmatrix}$ along its second column.

$\Delta_2 = b_1 C_{12} + b_2 C_{22} + b_3 C_{32}$

[Expansion along 2nd column of $\Delta_2$]

So, $y = \frac{1}{\Delta} (\Delta_2) = \frac{\Delta_2}{\Delta}$.

And the numerator for $z$ is $C_{13}b_1 + C_{23}b_2 + C_{33}b_3$. This is the expansion of $\Delta_3 = \begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_{3} \end{vmatrix}$ along its third column.

$\Delta_3 = b_1 C_{13} + b_2 C_{23} + b_3 C_{33}$

[Expansion along 3rd column of $\Delta_3$]

So, $z = \frac{1}{\Delta} (\Delta_3) = \frac{\Delta_3}{\Delta}$.

This demonstrates that Cramer's rule is equivalent to the inverse matrix method for solving systems of linear equations. Both methods rely on the fact that the coefficient matrix is non-singular ($\Delta \ne 0$).
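
This equivalence can also be checked numerically. The sketch below (our own, using NumPy and the identity $\text{adj}(A) = \det(A)\,A^{-1}$, valid for invertible $A$) confirms that both routes give the same solution for a random non-singular $3 \times 3$ system.

```python
# Numerical confirmation of the equivalence (illustrative sketch). Uses the
# identity adj(A) = det(A) * A^-1, which holds for invertible A.
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3)) + 3 * np.eye(3)   # diagonally dominant: non-singular
B = rng.random(3)
delta = np.linalg.det(A)

x_inverse = (delta * np.linalg.inv(A)) @ B / delta   # X = adj(A) B / det(A)

x_cramer = np.empty(3)
for j in range(3):                       # x_j = Delta_j / Delta
    A_j = A.copy()
    A_j[:, j] = B
    x_cramer[j] = np.linalg.det(A_j) / delta

assert np.allclose(x_inverse, x_cramer)  # both routes agree
```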


Example using Cramer's Rule ($2 \times 2$ System)

Example 1. Solve the following system of equations using Cramer's rule:

$2x + 3y = 7$

$x + 4y = 9$

Answer:

The given system of equations is:

$2x + 3y = 7$

$x + 4y = 9$

The coefficient matrix is $A = \begin{bmatrix} 2 & 3 \\ 1 & 4 \end{bmatrix}$. The constant matrix is $B = \begin{bmatrix} 7 \\ 9 \end{bmatrix}$.

Step 1: Calculate $\Delta = \det(A)$.

$\Delta = \begin{vmatrix} 2 & 3 \\ 1 & 4 \end{vmatrix} = (2 \times 4) - (3 \times 1) = 8 - 3 = 5$

Since $\Delta = 5 \ne 0$, the system has a unique solution, and Cramer's rule is applicable.

Step 2: Calculate $\Delta_1$ and $\Delta_2$.

$\Delta_1$ is the determinant of the matrix formed by replacing the 1st column of A with B:

$\Delta_1 = \begin{vmatrix} 7 & 3 \\ 9 & 4 \end{vmatrix} = (7 \times 4) - (3 \times 9) = 28 - 27 = 1$

$\Delta_2$ is the determinant of the matrix formed by replacing the 2nd column of A with B:

$\Delta_2 = \begin{vmatrix} 2 & 7 \\ 1 & 9 \end{vmatrix} = (2 \times 9) - (7 \times 1) = 18 - 7 = 11$

Step 3: Use Cramer's rule formulas to find x and y.

$x = \frac{\Delta_1}{\Delta} = \frac{1}{5}$

... (ii)

$y = \frac{\Delta_2}{\Delta} = \frac{11}{5}$

... (iii)

The solution to the system is $x = \frac{1}{5}$ and $y = \frac{11}{5}$. This matches the result obtained using the inverse matrix method in the previous section.
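
As a sketch-only cross-check, the three determinants of this example can be recomputed with NumPy by column replacement:

```python
# Sketch-only recomputation of Example 1's determinants with NumPy.
import numpy as np

A = np.array([[2.0, 3.0], [1.0, 4.0]])
B = np.array([7.0, 9.0])

delta  = np.linalg.det(A)                              # 5
delta1 = np.linalg.det(np.column_stack([B, A[:, 1]]))  # 1  (B replaces col 1)
delta2 = np.linalg.det(np.column_stack([A[:, 0], B]))  # 11 (B replaces col 2)
print(delta1 / delta, delta2 / delta)                  # 0.2 2.2
```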


Example using Cramer's Rule ($3 \times 3$ System)

Example 2. Solve the following system of equations using Cramer's rule:

$x + y - z = 1$

$2x + y + z = 6$

$x - y + z = 3$

Answer:

The given system of equations is:

$x + y - z = 1$

$2x + y + z = 6$

$x - y + z = 3$

The coefficient matrix is $A = \begin{bmatrix} 1 & 1 & -1 \\ 2 & 1 & 1 \\ 1 & -1 & 1 \end{bmatrix}$. The constant matrix is $B = \begin{bmatrix} 1 \\ 6 \\ 3 \end{bmatrix}$.

Step 1: Calculate $\Delta = \det(A)$. Expand along the first row for simplicity.

$\Delta = 1 \begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} - 1 \begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} + (-1) \begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix}$

$\Delta = 1((1)(1) - (1)(-1)) - 1((2)(1) - (1)(1)) - 1((2)(-1) - (1)(1))$

$\Delta = 1(1 + 1) - 1(2 - 1) - 1(-2 - 1)$

$\Delta = 1(2) - 1(1) - 1(-3)$

$\Delta = 2 - 1 + 3 = 4$

Since $\Delta = 4 \ne 0$, a unique solution exists.

Step 2: Calculate $\Delta_1$, $\Delta_2$, and $\Delta_3$.

$\Delta_1$ (replace column 1 of A with B):

$\Delta_1 = \begin{vmatrix} 1 & 1 & -1 \\ 6 & 1 & 1 \\ 3 & -1 & 1 \end{vmatrix}$

Expand $\Delta_1$ along the first row:

$\Delta_1 = 1 \begin{vmatrix} 1 & 1 \\ -1 & 1 \end{vmatrix} - 1 \begin{vmatrix} 6 & 1 \\ 3 & 1 \end{vmatrix} + (-1) \begin{vmatrix} 6 & 1 \\ 3 & -1 \end{vmatrix}$

$\Delta_1 = 1(1 - (-1)) - 1(6 - 3) - 1(-6 - 3)$

$\Delta_1 = 1(2) - 1(3) - 1(-9)$

$\Delta_1 = 2 - 3 + 9 = 8$

$\Delta_2$ (replace column 2 of A with B):

$\Delta_2 = \begin{vmatrix} 1 & 1 & -1 \\ 2 & 6 & 1 \\ 1 & 3 & 1 \end{vmatrix}$

Expand $\Delta_2$ along the first row:

$\Delta_2 = 1 \begin{vmatrix} 6 & 1 \\ 3 & 1 \end{vmatrix} - 1 \begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} + (-1) \begin{vmatrix} 2 & 6 \\ 1 & 3 \end{vmatrix}$

$\Delta_2 = 1(6 - 3) - 1(2 - 1) - 1(6 - 6)$

$\Delta_2 = 1(3) - 1(1) - 1(0)$

$\Delta_2 = 3 - 1 - 0 = 2$

$\Delta_3$ (replace column 3 of A with B):

$\Delta_3 = \begin{vmatrix} 1 & 1 & 1 \\ 2 & 1 & 6 \\ 1 & -1 & 3 \end{vmatrix}$

Expand $\Delta_3$ along the first row:

$\Delta_3 = 1 \begin{vmatrix} 1 & 6 \\ -1 & 3 \end{vmatrix} - 1 \begin{vmatrix} 2 & 6 \\ 1 & 3 \end{vmatrix} + 1 \begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix}$

$\Delta_3 = 1(3 - (-6)) - 1(6 - 6) + 1(-2 - 1)$

$\Delta_3 = 1(9) - 1(0) + 1(-3)$

$\Delta_3 = 9 - 0 - 3 = 6$

Step 3: Use Cramer's rule formulas to find x, y, and z.

$x = \frac{\Delta_1}{\Delta} = \frac{8}{4} = 2$

... (ii)

$y = \frac{\Delta_2}{\Delta} = \frac{2}{4} = \frac{1}{2}$

... (iii)

$z = \frac{\Delta_3}{\Delta} = \frac{6}{4} = \frac{3}{2}$

... (iv)

The solution to the system is $x = 2$, $y = \frac{1}{2}$, and $z = \frac{3}{2}$.
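
A final sketch-only cross-check of Example 2 with NumPy, recomputing each $\Delta_j$ by column replacement:

```python
# Sketch-only recomputation of Example 2 with NumPy, one Delta_j at a time.
import numpy as np

A = np.array([[1.0,  1.0, -1.0],
              [2.0,  1.0,  1.0],
              [1.0, -1.0,  1.0]])
B = np.array([1.0, 6.0, 3.0])

delta = np.linalg.det(A)                 # 4
for j, name in enumerate("xyz"):
    A_j = A.copy()
    A_j[:, j] = B                        # replace column j with B
    print(name, "=", np.linalg.det(A_j) / delta)   # x = 2.0, y = 0.5, z = 1.5
```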