Chapter 3 Matrices (Concepts)
Welcome to Chapter 3: Matrices! A matrix is formally defined as a rectangular array of numbers or functions, called elements, organized into horizontal rows and vertical columns. The size of a matrix is known as its order, represented as $m \times n$. Matrices are powerful mathematical constructs used for representing large datasets and solving systems of linear equations efficiently.
In this chapter, we explore various specialized types of matrices, such as Square, Identity ($I$), and Diagonal matrices. You will master fundamental operations including addition, subtraction, and scalar multiplication. A key highlight is Matrix Multiplication, which is defined only when the number of columns of the first matrix equals the number of rows of the second. Notably, matrix multiplication is generally not commutative, meaning $AB \neq BA$ in most cases.
We also introduce the Transpose of a matrix, which leads to the study of Symmetric ($A' = A$) and Skew-symmetric ($A' = -A$) matrices. Finally, we delve into Elementary Row Operations to calculate the Inverse of a Matrix ($A^{-1}$), where the property $AA^{-1} = I$ must hold true.
To enhance your learning, this page includes images, flowcharts, and mindmaps for better visualization. These comprehensive resources and practical examples are prepared by learningspot.co to ensure a clear and deep understanding of matrix algebra.
| Content On This Page | ||
|---|---|---|
| Introduction to Matrices | Operations on Matrices | Transpose of a Matrix |
| Elementary Operations on a Matrix | Invertible Matrices | |
Introduction to Matrices
The study of Matrices originated from the necessity to solve systems of simultaneous linear equations effectively. While various methods existed for solving these equations, the need for a more structured algebraic approach led to the birth of matrix theory. In 1857, the mathematician Arthur Cayley pioneered the development of matrices as a pure algebraic structure, moving beyond just a tool for solving equations to a branch of mathematics in its own right.
In modern times, matrices have become an indispensable tool in diverse fields such as Algebra, Geometry, Statistics, Physics, Chemistry, Psychology, Economics, and Industrial Management. They allow for the compact representation of large sets of data and the simplification of complex linear transformations.
Mathematical Representation of Systems
To understand the utility of a matrix, consider a system of simultaneous linear equations in two unknowns $x$ and $y$. For the purpose of this explanation, let us consider the following equations:
$3x + 2y = 15$
…(i)
$4x - 5y = 8$
…(ii)
This is a system of two linear equations. In such systems, the variables $x$ and $y$ serve as placeholders, and the primary information is contained within the coefficients and the constants. To specify the Left Hand Side (L.H.S.) of this system, we focus on the array of coefficients of $x$ and $y$ in their respective order:
$\begin{bmatrix} 3 & 2 \\ 4 & -5 \end{bmatrix}$
Similarly, the Right Hand Side (R.H.S.) constants provide the necessary information to complete the system specification, which can be represented by the following array:
$\begin{bmatrix} 15 \\ 8 \end{bmatrix}$
Elements, Rows, and Columns
Each array of the above type is known as a Matrix. A matrix is essentially an ordered rectangular array of elements. These elements are the building blocks of the matrix and their specific position is of paramount importance.
Rows of a Matrix
In a matrix, each horizontal line of elements is called a Row. In the coefficient matrix above:
$[3 \ \ 2]$
(First Row)
$[4 \ \ -5]$
(Second Row)
Columns of a Matrix
Each vertical line of elements within the matrix is called a Column. In the coefficient matrix above:
$\begin{bmatrix} 3 \\ 4 \end{bmatrix}$
(First Column)
$\begin{bmatrix} 2 \\ -5 \end{bmatrix}$
(Second Column)
Elements or Entries
The individual numbers, constants, or functions of which a matrix is constituted are called the elements or entries of the matrix. In the examples above, all elements are real numbers, although in advanced studies, elements can be complex numbers or even functions of variables.
System of Simultaneous Linear Equations in Three Variables
To understand why matrices are essential, let us look at how they represent mathematical information. A matrix is essentially an ordered rectangular array where the position of every number is fixed and significant. Let us consider a system of simultaneous linear equations in three variables ($x, y, \text{ and } z$):
$3x + 4y - 5z = 12$
…(i)
$2x - y + 6z = 7$
…(ii)
$x + 3y + 0z = -4$
…(iii)
This system of equations is fully specified by two distinct sets of data: the coefficients of the variables and the constant terms on the right-hand side. We can extract these numbers into rectangular arrays called matrices.
1. The Coefficient Matrix
By extracting the coefficients of $x, y, \text{ and } z$ from the equations above, we form a $3 \times 3$ matrix:
$\begin{bmatrix} 3 & 4 & -5 \\ 2 & -1 & 6 \\ 1 & 3 & 0 \end{bmatrix}$
2. The Constant (R.H.S.) Matrix
The numbers on the right side of the equals sign form a vertical array, often called a Column Matrix:
$\begin{bmatrix} 12 \\ 7 \\ -4 \end{bmatrix}$
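As a concrete sketch, the coefficient matrix and the constant (R.H.S.) matrix of this three-variable system can be stored as nested lists in plain Python (the variable names here are our own choice, not standard notation):

```python
# Coefficient matrix: one row per equation, one column per variable x, y, z.
A = [[3,  4, -5],
     [2, -1,  6],
     [1,  3,  0]]

# Constant (R.H.S.) column matrix: one row per equation.
B = [[12],
     [7],
     [-4]]

# The order of a matrix is m x n: m rows, n columns.
m, n = len(A), len(A[0])
print(m, n)  # 3 3
```

Each inner list is one row, so the horizontal/vertical structure of the text maps directly onto the data structure.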
Rows, Columns, and Elements
A matrix is defined by its dimensions and its individual components. These are structured as follows:
Rows ($m$): These are the horizontal lines of elements. In our coefficient matrix above, the first row is $[3 \ \ 4 \ \ -5]$, which represents the coefficients of the first equation.
Columns ($n$): These are the vertical lines of elements. The first column is $\begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}$, which represents all the coefficients of the variable $x$.
Elements (or Entries): The individual numbers constituting the matrix are called elements. For example, in the coefficient matrix above, the number $6$ is the element located at the second row and third column.
Components of the Matrix System
| Component | Visual Orientation | Representation |
|---|---|---|
| Row | Horizontal ($\leftrightarrow$) | Equation level data |
| Column | Vertical ($\updownarrow$) | Variable level data |
| Element | Specific Point $(\cdot)$ | Individual coefficients or constants |
Formal Definition and Notation of Matrices
A Matrix is a systematic rectangular arrangement of $mn$ elements, which can be numbers (real or complex), variables, or even functions. These elements are organized into $m$ horizontal lines, known as Rows, and $n$ vertical lines, known as Columns. The primary purpose of this arrangement is to represent data or mathematical relationships in a structured format that allows for advanced algebraic operations.
To distinguish a matrix from a simple list of numbers, it is traditionally enclosed within square brackets $[\ ]$ or parentheses $( \ )$. It is standard practice to denote a matrix using a capital letter of the alphabet, such as $A, B, \text{ or } C$.
Order and Size of a Matrix
The Order (or dimension) of a matrix is a fundamental property that defines its size. If a matrix $A$ consists of $m$ rows and $n$ columns, it is said to be of order $m \times n$. This is read as "m by n".
Calculation of Total Elements
The total number of elements or entries present in a matrix is simply the product of the number of rows and the number of columns. If the order of a matrix is $m \times n$, then the total elements are $mn$. For example, if a matrix represents the prices of $5$ different items in $3$ different cities in India, the order would be $5 \times 3$, resulting in a total of $15$ data entries.
General Notation and Element Positioning
To identify a specific element within a matrix, we use a small letter (usually the lowercase version of the matrix name) followed by two suffixes. The first suffix, $i$, represents the row number, and the second suffix, $j$, represents the column number. This is denoted as $a_{ij}$.
The general representation of a matrix $A$ of order $m \times n$ is given as:
$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}$
For a quick and compact representation, we express it as:
$A = [a_{ij}]_{m \times n}$
[where $1 \le i \le m$ and $1 \le j \le n$]
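The $a_{ij}$ convention can be sketched in plain Python. Note one practical detail: the text indexes rows and columns from $1$, while Python lists index from $0$, so a small helper (our own, purely illustrative) bridges the two conventions:

```python
# A 3 x 3 matrix stored as a list of rows.
A = [[5,  2, 1],
     [0,  7, 9],
     [4, -3, 2]]

def element(A, i, j):
    """Return a_ij using the 1-based (row, column) convention of the text.
    Python lists are 0-based, so both indices are shifted down by one."""
    return A[i - 1][j - 1]

print(element(A, 2, 3))    # a_23, the entry in row 2, column 3: 9
print(len(A) * len(A[0]))  # total number of elements mn: 9
```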
Comparability and Equality of Matrices
Two matrices are not necessarily equal just because they contain the same numbers. The concepts of comparability and equality are defined as follows:
1. Comparable Matrices
Two matrices $A$ and $B$ are said to be Comparable if they possess the same order. This means the number of rows in $A$ must equal the number of rows in $B$, and the number of columns in $A$ must equal the number of columns in $B$. Comparability is a prerequisite for matrix addition and subtraction.
2. Equal Matrices
Two matrices $A = [a_{ij}]$ and $B = [b_{ij}]$ are defined as Equal (written as $A = B$) if and only if they satisfy two strict conditions:
Condition I: They must be of the same order (they are comparable).
Condition II: Their corresponding elements must be identical, meaning $a_{ij} = b_{ij}$ for all possible values of $i$ and $j$.
$\begin{bmatrix} x & y \\ z & w \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
$\implies x=a, y=b, z=c, w=d$
Example 1. Find the values of $a, b, c, \text{ and } d$ from the following matrix equation:
$\begin{bmatrix} a - b & 2a + c \\ 2a - b & 3c + d \end{bmatrix} = \begin{bmatrix} -1 & 5 \\ 0 & 13 \end{bmatrix}$
Answer:
Given: The two matrices are of the same order ($2 \times 2$) and are equal.
To Find: The unknown variables $a, b, c, \text{ and } d$.
Solution: By the definition of equality of matrices, we equate the corresponding elements:
$a - b = -1$
... (i)
$2a - b = 0 \implies b = 2a$
... (ii)
$2a + c = 5$
... (iii)
$3c + d = 13$
... (iv)
Substitute $b = 2a$ from equation (ii) into equation (i):
$a - 2a = -1$
$-a = -1 \implies a = 1$
Now, substitute $a = 1$ in $b = 2a$:
$b = 2(1) = 2$
Substitute $a = 1$ in equation (iii):
$2(1) + c = 5 \implies c = 5 - 2 = 3$
Substitute $c = 3$ in equation (iv):
$3(3) + d = 13 \implies 9 + d = 13 \implies d = 4$
Result: The values are $a = 1, b = 2, c = 3, \text{ and } d = 4$.
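The solution can be verified against the definition of equality (same order, identical corresponding elements); a minimal check in plain Python:

```python
# Solved values from the worked example.
a, b, c, d = 1, 2, 3, 4

# Left-hand matrix rebuilt from the solved values.
lhs = [[a - b,   2*a + c],
       [2*a - b, 3*c + d]]

# Given right-hand matrix.
rhs = [[-1, 5],
       [0, 13]]

# Nested lists compare element-by-element, matching the definition of
# matrix equality for matrices of the same order.
print(lhs == rhs)  # True
```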
Classification of Matrices
Matrices are classified into various types based on the arrangement of their elements, the number of rows and columns, and the specific values of their entries. Understanding these classifications is essential for performing advanced operations in linear algebra.
1. Row and Column Matrices
These matrices are characterized by having a single dimension (either one row or one column) and are often referred to as Vectors in physical sciences.
Row Matrix
A matrix is called a Row Matrix if it has exactly one row ($m = 1$) and any number of columns. The general order of a row matrix is $1 \times n$.
For example, if we represent the scores of a batsman in four consecutive matches, we might use a $1 \times 4$ row matrix:
$A = [45 \ \ 12 \ \ 102 \ \ 0]_{1 \times 4}$
Column Matrix
A matrix is called a Column Matrix if it has exactly one column ($n = 1$) and any number of rows. The general order of a column matrix is $m \times 1$.
For example, a price list for three different items can be shown as a $3 \times 1$ matrix:
$B = \begin{bmatrix} 50 \\ 150 \\ 200 \end{bmatrix}_{3 \times 1}$
2. Zero or Null Matrix
A matrix in which every single element is zero is known as a Zero Matrix or Null Matrix. It is generally denoted by the symbol $O$. The order of a zero matrix can be any $m \times n$.
A $2 \times 3$ zero matrix is represented as:
$O_{2 \times 3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$
In matrix algebra, the zero matrix acts as the additive identity, similar to the number $0$ in standard arithmetic.
3. Square Matrix
A matrix in which the number of rows is exactly equal to the number of columns ($m = n$) is called a Square Matrix. A square matrix with $n$ rows and $n$ columns is said to be of order $n$.
The Principal Diagonal
In a square matrix $A = [a_{ij}]_{n \times n}$, the elements $a_{11}, a_{22}, a_{33}, \dots, a_{nn}$ are of particular importance. These elements are called the diagonal elements. The line along which these elements lie is called the Principal Diagonal or the Leading Diagonal.
Consider a square matrix of order 3:
$A = \begin{bmatrix} \mathbf{5} & 2 & 1 \\ 0 & \mathbf{7} & 9 \\ 4 & -3 & \mathbf{2} \end{bmatrix}$
Here, the diagonal elements are $5, 7, \text{ and } 2$. Note that non-square matrices do not have a principal diagonal.
4. Diagonal, Scalar, and Identity Matrices
These are special types of Square Matrices with specific patterns in their diagonal and non-diagonal elements.
Diagonal Matrix
A square matrix is called a Diagonal Matrix if all its non-diagonal elements are zero. That is, $a_{ij} = 0$ whenever $i \ne j$. The diagonal elements themselves may or may not be zero.
$D = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}$
Scalar Matrix
A diagonal matrix in which all the diagonal elements are equal to the same constant $k$ is called a Scalar Matrix.
$S = \begin{bmatrix} \sqrt{3} & 0 & 0 \\ 0 & \sqrt{3} & 0 \\ 0 & 0 & \sqrt{3} \end{bmatrix}$
Identity Matrix (Unit Matrix)
A scalar matrix in which every diagonal element is exactly $1$ (unity) is called an Identity Matrix. It is denoted by $I_n$, where $n$ is the order. The identity matrix plays a role similar to the number $1$ in scalar multiplication.
$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
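The special square matrices above follow a clear containment pattern: every identity matrix is a scalar matrix, and every scalar matrix is a diagonal matrix. A minimal sketch in plain Python (the helper names are ours, chosen for illustration):

```python
def zero(m, n):
    """m x n null matrix O: every element is 0."""
    return [[0] * n for _ in range(m)]

def diagonal(entries):
    """Square matrix with the given diagonal; all off-diagonal entries 0."""
    size = len(entries)
    return [[entries[i] if i == j else 0 for j in range(size)]
            for i in range(size)]

def identity(n):
    """Identity matrix I_n: a scalar matrix with every diagonal entry 1."""
    return diagonal([1] * n)

print(zero(2, 3))            # [[0, 0, 0], [0, 0, 0]]
print(diagonal([4, 0, -1]))  # diagonal entries may themselves be zero
print(identity(3))           # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

A scalar matrix with constant $k$ is then simply `diagonal([k] * n)`.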
5. Triangular Matrices
A Triangular Matrix is a special type of Square Matrix where the elements on one side of the principal diagonal are all zeros.
Depending on which side of the principal diagonal the zeros are located, triangular matrices are classified into two primary categories: Upper Triangular and Lower Triangular.
(i) Upper Triangular Matrix
A square matrix $A = [a_{ij}]$ is defined as an Upper Triangular Matrix if all the elements below the principal diagonal are zero. In such a matrix, the non-zero elements form a triangle in the upper right portion of the array, including the diagonal itself.
Mathematical Condition
The formal condition for a matrix to be upper triangular is:
$a_{ij} = 0$
for all $i > j$
This means that whenever the row index ($i$) is greater than the column index ($j$), the entry must be zero. Let us look at a $3 \times 3$ example:
$U = \begin{bmatrix} 1 & 4 & 5 \\ 0 & 2 & 6 \\ 0 & 0 & 3 \end{bmatrix}$
In the above matrix $U$, the elements $a_{21}, a_{31},$ and $a_{32}$ are all zero because for these positions, the row number is greater than the column number.
(ii) Lower Triangular Matrix
A square matrix $A = [a_{ij}]$ is defined as a Lower Triangular Matrix if all the elements above the principal diagonal are zero. Here, the non-zero elements are concentrated in the lower left portion of the matrix.
Mathematical Condition
The formal condition for a matrix to be lower triangular is:
$a_{ij} = 0$
for all $i < j$
In this case, whenever the row index ($i$) is less than the column index ($j$), the entry must be zero. For example:
$L = \begin{bmatrix} 1 & 0 & 0 \\ 4 & 2 & 0 \\ 5 & 6 & 3 \end{bmatrix}$
In matrix $L$, the elements $a_{12}, a_{13},$ and $a_{23}$ are zero as they satisfy the condition $i < j$.
Strictly Triangular Matrices
A triangular matrix (either upper or lower) is called Strictly Triangular if all the diagonal elements are also zero. Mathematically, this adds the condition:
$a_{ij} = 0$
for all $i = j$
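The defining conditions $a_{ij} = 0$ for $i > j$ (upper) and $i < j$ (lower) translate directly into code. A sketch in plain Python (predicate names are our own); since both indices shift by the same amount, the inequalities are unchanged between the 1-based math convention and 0-based lists:

```python
def is_upper_triangular(A):
    """True if every element below the principal diagonal (i > j) is zero."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i > j)

def is_lower_triangular(A):
    """True if every element above the principal diagonal (i < j) is zero."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i < j)

U = [[1, 4, 5], [0, 2, 6], [0, 0, 3]]
L = [[1, 0, 0], [4, 2, 0], [5, 6, 3]]

print(is_upper_triangular(U), is_lower_triangular(L))  # True True
print(is_upper_triangular(L))                          # False
```

A strictly triangular check would additionally require `A[i][i] == 0` for every `i`.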
Example 2. Consider the matrix $A = \begin{bmatrix} 5 & 0 & 0 \\ 2 & 8 & 0 \\ 1 & 4 & 7 \end{bmatrix}$. Determine if it is a triangular matrix and specify its type.
Answer:
To Determine: Type of Triangular Matrix.
Solution:
First, we identify the principal diagonal elements, which are $a_{11}=5, a_{22}=8,$ and $a_{33}=7$.
Next, we examine the elements above and below this diagonal:
Elements above the diagonal are: $a_{12}=0, a_{13}=0, a_{23}=0$.
Since all elements for which $i < j$ (row number less than column number) are zero, the matrix satisfies the condition for a Lower Triangular Matrix.
The elements below the diagonal are non-zero ($a_{21}=2, a_{31}=1, a_{32}=4$), which further confirms its structure.
Conclusion: The given matrix is a Lower Triangular Matrix of order 3.
Operations on Matrices
Matrix algebra involves several fundamental operations such as Addition, Subtraction, Scalar Multiplication, and Matrix Multiplication. These operations allow us to manipulate linear data sets and solve complex systems of equations. In this section, we shall explore these operations along with their properties and derivations.
1. Addition of Matrices
The operation of Addition in matrix algebra is an entry-wise operation. It is the process of combining two matrices to form a single matrix by summing their corresponding elements. However, unlike regular addition of numbers, matrix addition is subject to a strict condition regarding the dimensions of the matrices involved.
Conformability for Addition
Two matrices $A$ and $B$ are said to be conformable or compatible for addition if and only if they are of the same order. This means if matrix $A$ has $m$ rows and $n$ columns, then matrix $B$ must also have exactly $m$ rows and $n$ columns. If the orders are different, their sum is not defined because some elements would lack a corresponding partner for addition.
Mathematical Definition and Derivation
Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two matrices of the same order $m \times n$. The sum $A + B$ is defined as a new matrix $C = [c_{ij}]$ of the same order $m \times n$, such that every element $c_{ij}$ is the sum of the elements $a_{ij}$ and $b_{ij}$.
$A + B = [a_{ij} + b_{ij}]$
[Definition of Matrix Addition]
For example, if we consider two $2 \times 2$ matrices:
$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12} \\ a_{21} + b_{21} & a_{22} + b_{22} \end{bmatrix}$
Properties of Matrix Addition
Matrix addition follows several algebraic laws that are similar to the properties of addition of real numbers.
1. Commutative Law
Matrix addition is commutative, meaning the order in which two matrices are added does not change the resulting sum. If $A$ and $B$ are of the same order, then:
$A + B = B + A$
Proof: Since the elements of matrices are real numbers and addition of real numbers is commutative ($a_{ij} + b_{ij} = b_{ij} + a_{ij}$), it follows that $A + B = B + A$.
2. Associative Law
Matrix addition is associative. If $A, B,$ and $C$ are three matrices of the same order, the way they are grouped during addition does not affect the result:
$(A + B) + C = A + (B + C)$
3. Existence of Additive Identity
For any $m \times n$ matrix $A$, there exists an $m \times n$ Zero Matrix (denoted by $O$) such that adding it to $A$ leaves $A$ unchanged. The Zero matrix acts as the Additive Identity.
$A + O = A = O + A$
[$O$ is the Null Matrix]
4. Existence of Additive Inverse
For every matrix $A = [a_{ij}]$, there exists a unique matrix $-A = [-a_{ij}]$ (obtained by changing the sign of every element in $A$) such that their sum is the Zero matrix. The matrix $-A$ is called the Negative or Additive Inverse of $A$.
$A + (-A) = O$
Summary of Addition Rules
| Property | Mathematical Expression |
|---|---|
| Conformability | Order of $A$ = Order of $B$ |
| Identity | $A + O = A$ |
| Inverse | $A + (-A) = O$ |
| Resulting Order | Same as the original matrices |
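The rules in the table above can be checked numerically; a minimal sketch in plain Python (the `add` helper is our own illustration, not standard notation):

```python
def add(A, B):
    """Entry-wise sum; A and B must have the same order (conformability)."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders differ"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[50, 30], [20, 40]]
B = [[15, 25], [10, 30]]
O = [[0, 0], [0, 0]]

print(add(A, B))               # [[65, 55], [30, 70]]
print(add(A, B) == add(B, A))  # commutative law: True
print(add(A, O) == A)          # additive identity: True
```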
Example 1. Let two matrices represent the stock of Pens and Pencils in two different stationery shops (Shop A and Shop B) in Delhi. Shop A has $\begin{bmatrix} 50 & 30 \\ 20 & 40 \end{bmatrix}$ and Shop B has $\begin{bmatrix} 15 & 25 \\ 10 & 30 \end{bmatrix}$. Find the total stock.
Answer:
Given:
Matrix $A$ (Shop A Stock) = $\begin{bmatrix} 50 & 30 \\ 20 & 40 \end{bmatrix}$
Matrix $B$ (Shop B Stock) = $\begin{bmatrix} 15 & 25 \\ 10 & 30 \end{bmatrix}$
To Find: Total Stock ($A + B$)
Solution: Since both matrices are of the same order ($2 \times 2$), they are compatible for addition. Adding the corresponding elements:
$A + B = \begin{bmatrix} 50 + 15 & 30 + 25 \\ 20 + 10 & 40 + 30 \end{bmatrix}$
The final matrix representing the total stock is:
$A + B = \begin{bmatrix} 65 & 55 \\ 30 & 70 \end{bmatrix}$
Therefore, the combined stock of the two shops is $65$ and $55$ units of the first-row items, and $30$ and $70$ units of the second-row items.
2. Subtraction of Matrices
The Subtraction of matrices is a fundamental operation that involves finding the difference between two matrices by subtracting their corresponding elements. Much like matrix addition, this operation is performed entry-wise and follows specific rules regarding the dimensions of the matrices involved.
Essentially, subtracting a matrix $B$ from a matrix $A$ is equivalent to adding the negative (additive inverse) of matrix $B$ to matrix $A$. This can be expressed as:
$A - B = A + (-B)$
[Alternative Definition]
Conformability for Subtraction
Two matrices $A$ and $B$ are said to be compatible or conformable for subtraction if and only if they are of the same order. This ensures that every element in the first matrix has exactly one corresponding element in the second matrix to be subtracted from. If the matrices have different dimensions, the subtraction operation is not defined.
Mathematical Definition
Let $A = [a_{ij}]_{m \times n}$ and $B = [b_{ij}]_{m \times n}$ be two matrices of the same order. The difference $A - B$ is a matrix $C = [c_{ij}]_{m \times n}$ where each element is calculated as follows:
$c_{ij} = a_{ij} - b_{ij}$
[for all $1 \le i \le m, 1 \le j \le n$]
In terms of full matrix notation for $2 \times 2$ matrices, it appears as:
$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} - \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11} - b_{11} & a_{12} - b_{12} \\ a_{21} - b_{21} & a_{22} - b_{22} \end{bmatrix}$
Example 2. If $A = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -1 & 5 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & -2 & 3 \\ -1 & 0 & 2 \end{bmatrix}$, find $A + B$ and $A - B$.
Answer:
Given:
$A = \begin{bmatrix} 2 & 3 & 1 \\ 0 & -1 & 5 \end{bmatrix}$
$B = \begin{bmatrix} 1 & -2 & 3 \\ -1 & 0 & 2 \end{bmatrix}$
To Find: The sum matrix ($A + B$) and the difference matrix ($A - B$).
Solution:
Both matrices $A$ and $B$ are of the order $2 \times 3$. Since they have the same order, they are compatible for both addition and subtraction.
I. Calculating the Sum ($A + B$)
We add the corresponding elements of both matrices:
$A + B = \begin{bmatrix} 2+1 & 3+(-2) & 1+3 \\ 0+(-1) & -1+0 & 5+2 \end{bmatrix}$
$A + B = \begin{bmatrix} 3 & 1 & 4 \\ -1 & -1 & 7 \end{bmatrix}$
II. Calculating the Difference ($A - B$)
We subtract the elements of $B$ from the corresponding elements of $A$:
$A - B = \begin{bmatrix} 2-1 & 3-(-2) & 1-3 \\ 0-(-1) & -1-0 & 5-2 \end{bmatrix}$
Simplifying the operations inside the matrix:
$3 - (-2) = 3 + 2 = 5$
[Subtraction of Negative Integer]
$0 - (-1) = 0 + 1 = 1$
[Subtraction of Negative Integer]
Therefore:
$A - B = \begin{bmatrix} 1 & 5 & -2 \\ 1 & -1 & 3 \end{bmatrix}$
Result: The sum is $\begin{bmatrix} 3 & 1 & 4 \\ -1 & -1 & 7 \end{bmatrix}$ and the difference is $\begin{bmatrix} 1 & 5 & -2 \\ 1 & -1 & 3 \end{bmatrix}$.
Properties of Matrix Subtraction
Unlike matrix addition, matrix subtraction does not satisfy all algebraic properties. It is important to note the following:
1. Non-Commutative: Matrix subtraction is not commutative. In general, $A - B \ne B - A$.
2. Non-Associative: Matrix subtraction is not associative. In general, $(A - B) - C \ne A - (B - C)$.
| Property | Matrix Addition | Matrix Subtraction |
|---|---|---|
| Conformability | Same Order Required | Same Order Required |
| Commutative | Yes ($A+B = B+A$) | No ($A-B \ne B-A$) |
| Identity | Zero Matrix ($A+O=A$) | Right identity only ($A-O=A$, but $O-A \ne A$) |
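The contrast between addition and subtraction can be demonstrated directly; a sketch in plain Python (the `sub` helper is ours, purely illustrative):

```python
def sub(A, B):
    """Entry-wise difference A - B; equivalent to adding -B to A."""
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "orders differ"
    return [[A[i][j] - B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[2, 3, 1], [0, -1, 5]]
B = [[1, -2, 3], [-1, 0, 2]]

print(sub(A, B))               # [[1, 5, -2], [1, -1, 3]]
print(sub(A, B) == sub(B, A))  # subtraction is not commutative: False
```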
Negative of a Matrix
The Negative of a Matrix (also known as the Additive Inverse) is a matrix obtained by changing the sign of every element within the original matrix. If we have a matrix $A$, its negative is denoted by $-A$. This operation is equivalent to multiplying the matrix $A$ by the scalar $-1$.
The primary property of a negative matrix is that when it is added to the original matrix, the result is always a Zero Matrix (Null Matrix) of the same order.
Mathematical Definition
Let $A = [a_{ij}]_{m \times n}$ be a matrix of order $m \times n$. The negative of matrix $A$, denoted by $-A$, is defined as:
$-A = [-a_{ij}]_{m \times n}$
[Definition of Negative Matrix]
In terms of specific elements, if an element in $A$ is positive, it becomes negative in $-A$, and if it is negative, it becomes positive. Zero remains unchanged as $-0 = 0$.
The Additive Property
The relationship between a matrix and its negative is expressed by the following identity:
$A + (-A) = O$
[where $O$ is the Null Matrix]
Role in Matrix Subtraction
The concept of the negative of a matrix allows us to define Subtraction in terms of Addition. Subtracting matrix $B$ from matrix $A$ is mathematically identical to adding the negative of matrix $B$ to matrix $A$.
$A - B = A + (-B)$
[Subtraction as Addition]
This shows that the negative matrix is a critical component in understanding the linear structure of matrix algebra.
Example 3. If $A = \begin{bmatrix} 2 & -3 \\ 5 & 0 \\ -1 & 4 \end{bmatrix}$, find the negative of matrix $A$ and verify that $A + (-A) = O$.
Answer:
Given:
$A = \begin{bmatrix} 2 & -3 \\ 5 & 0 \\ -1 & 4 \end{bmatrix}$
To Find: Negative matrix $-A$ and verification of the sum.
Solution:
To find $-A$, we multiply each element of $A$ by $-1$:
$-A = \begin{bmatrix} -(2) & -(-3) \\ -(5) & -(0) \\ -(-1) & -(4) \end{bmatrix}$
$-A = \begin{bmatrix} -2 & 3 \\ -5 & 0 \\ 1 & -4 \end{bmatrix}$
Verification:
Now, let us add $A$ and $-A$:
$A + (-A) = \begin{bmatrix} 2 + (-2) & -3 + 3 \\ 5 + (-5) & 0 + 0 \\ -1 + 1 & 4 + (-4) \end{bmatrix}$
Performing the addition:
$A + (-A) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$
Since the resulting matrix is a Null Matrix of order $3 \times 2$, we have verified that $A + (-A) = O$.
3. Scalar Multiplication
In matrix algebra, Scalar Multiplication refers to the operation of multiplying a matrix by a constant number, known as a scalar. Unlike matrix multiplication (which involves two matrices), scalar multiplication involves a single matrix and a real or complex number. This operation is fundamental in scaling the data represented by a matrix, such as adjusting prices in Indian Markets due to inflation or tax changes.
When a matrix is multiplied by a scalar $k$, every individual element within that matrix is multiplied by $k$. This results in a new matrix of the same order as the original.
Mathematical Definition
Let $A = [a_{ij}]_{m \times n}$ be a matrix of order $m \times n$ and let $k$ be any scalar (number). The scalar multiple of $A$ by $k$, denoted by $kA$, is defined as:
$kA = [k \cdot a_{ij}]_{m \times n}$
[Definition of Scalar Multiple]
For a $2 \times 2$ matrix, the operation looks like this:
$k \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} k \cdot a_{11} & k \cdot a_{12} \\ k \cdot a_{21} & k \cdot a_{22} \end{bmatrix}$
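Since every element is scaled independently, scalar multiplication is a one-line operation in code. A minimal sketch in plain Python (helper name is our own):

```python
def scalar_mul(k, A):
    """Multiply every element of the matrix A by the scalar k."""
    return [[k * x for x in row] for row in A]

A = [[2, 4], [3, 2]]

print(scalar_mul(3, A))   # [[6, 12], [9, 6]]
print(scalar_mul(-1, A))  # the negative matrix -A: [[-2, -4], [-3, -2]]
```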
Properties of Scalar Multiplication
Let $A$ and $B$ be two matrices of the same order $m \times n$, and let $k$ and $l$ be any two scalars. The following properties hold true:
1. Distributive Law for Scalars
If two scalars are added and then multiplied by a matrix, it is equal to the sum of the scalar multiples of that matrix.
$(k + l)A = kA + lA$
2. Distributive Law for Matrices
If a scalar is multiplied by the sum of two matrices, it is distributed over each matrix.
$k(A + B) = kA + kB$
3. Associative Law for Scalars
The order in which scalars are multiplied by the matrix does not change the result.
$(kl)A = k(lA) = l(kA)$
4. Identity and Negative Properties
Multiplying a matrix by $1$ leaves it unchanged, while multiplying by $-1$ gives its Negative Matrix.
$1 \cdot A = A$
[Multiplicative Identity]
$(-1)A = -A$
[Negative of a Matrix]
Summary Table
| Operation | Property Name |
|---|---|
| $k(A+B) = kA+kB$ | Distributivity over Matrix Addition |
| $(k+l)A = kA+lA$ | Distributivity over Scalar Addition |
| $(kl)A = k(lA)$ | Associativity of Scalars |
| $0 \cdot A = O$ | Zero Scalar Property |
Example 4. A grocery store in Mumbai sells three types of rice. The current prices per kg are represented by the matrix $P = [60 \ \ 85 \ \ 120]$. Due to a sudden increase in demand, the store decides to increase all prices by 10%. Represent the new prices using scalar multiplication.
Answer:
Given:
Original Price Matrix ($P$) = $[60 \ \ 85 \ \ 120]$
Rate of Increase = $10 \%$
To Find: New Price Matrix ($P_{new}$)
Solution:
An increase of $10\%$ means the new price is $110\%$ of the original price. This can be expressed as a scalar $k$:
$k = 1 + \frac{10}{100} = 1.1$
Using the property of scalar multiplication $P_{new} = kP$:
$P_{new} = 1.1 \times [60 \ \ 85 \ \ 120]$
Multiplying each element:
$1.1 \times 60 = 66$
$1.1 \times 85 = 93.5$
$1.1 \times 120 = 132$
Therefore, the new price matrix is:
$P_{new} = [\textsf{₹} 66 \ \ \textsf{₹} 93.5 \ \ \textsf{₹} 132]$
The new prices are $\textsf{₹} 66$, $\textsf{₹} 93.5$, and $\textsf{₹} 132$ per kg respectively.
Example 5. If $A = \begin{bmatrix} 2 & 4 \\ 3 & 2 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 3 \\ -2 & 5 \end{bmatrix}$, find $3A - B$.
Answer:
Solution:
First, we calculate the scalar multiple $3A$:
$3A = 3 \begin{bmatrix} 2 & 4 \\ 3 & 2 \end{bmatrix} = \begin{bmatrix} 3(2) & 3(4) \\ 3(3) & 3(2) \end{bmatrix} = \begin{bmatrix} 6 & 12 \\ 9 & 6 \end{bmatrix}$
Now, we perform the subtraction $3A - B$:
$3A - B = \begin{bmatrix} 6 & 12 \\ 9 & 6 \end{bmatrix} - \begin{bmatrix} 1 & 3 \\ -2 & 5 \end{bmatrix}$
Subtracting corresponding elements:
$3A - B = \begin{bmatrix} 6 - 1 & 12 - 3 \\ 9 - (-2) & 6 - 5 \end{bmatrix}$
$9 - (-2) = 9 + 2 = 11$
[Subtraction Property]
The final matrix is:
$\begin{bmatrix} 5 & 9 \\ 11 & 1 \end{bmatrix}$
4. Multiplication of Matrices
The multiplication of two matrices is a fundamental operation in linear algebra. Unlike the addition of matrices, which is performed element-wise, matrix multiplication involves a Row-by-Column traversal process. The product of two matrices $A$ and $B$ is defined only when they are compatible for multiplication.
Rule of Compatibility
The product $AB$ of two matrices $A$ and $B$ exists if and only if the number of columns in matrix A is equal to the number of rows in matrix B.
If $A = [a_{ij}]_{m \times n}$ and $B = [b_{jk}]_{n \times p}$, then the resulting product matrix $C = AB$ will have the order $m \times p$.
Order of $A = m \times n$, Order of $B = n \times p$
(Condition for multiplication)
If the dimensions are represented as $(m \times n)$ and $(p \times q)$, the "inner" numbers ($n$ and $p$) must be equal for the product to be defined, and the "outer" numbers ($m$ and $q$) define the order of the resulting matrix.
General Formula and Derivation
To understand matrix multiplication, we must look at it as a process of finding the dot product of a row from the first matrix and a column from the second matrix. Let us consider two matrices $A$ and $B$ that are compatible for multiplication.
Let $A = [a_{ij}]$ be a matrix of order $m \times n$ and $B = [b_{jk}]$ be a matrix of order $n \times p$. The resulting matrix $C = AB$ will be of order $m \times p$.
Derivation of the $(i, k)^{th}$ Element
The element $c_{ik}$ situated in the $i^{th}$ row and $k^{th}$ column of the product matrix $C$ is derived by taking the entire $i^{th}$ row of matrix A and the entire $k^{th}$ column of matrix B.
Let the $i^{th}$ row of $A$ be: $R_i = \begin{bmatrix} a_{i1} & a_{i2} & a_{i3} & \dots & a_{in} \end{bmatrix}$
Let the $k^{th}$ column of $B$ be: $C_k = \begin{bmatrix} b_{1k} \\ b_{2k} \\ b_{3k} \\ \vdots \\ b_{nk} \end{bmatrix}$
The product $c_{ik}$ is the sum of the products of the corresponding elements:
$c_{ik} = a_{i1}b_{1k} + a_{i2}b_{2k} + \dots + a_{in}b_{nk}$
[Expanded Form]
In compact sigma notation, this is expressed as:
$c_{ik} = \sum\limits_{j=1}^{n} a_{ij}b_{jk}$
[General formula for $C = AB$]
Here, the index $j$ runs from $1$ to $n$. Note that $n$ is the number of columns in A and also the number of rows in B. This shared index $j$ is what allows the elements to pair up perfectly.
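The sigma formula translates directly into code. The following plain-Python sketch (the function name `mat_mul` is our own, chosen for illustration) first checks compatibility and then computes each $c_{ik}$ as the row-by-column sum:

```python
def mat_mul(A, B):
    """Compute C = AB for matrices stored as lists of rows,
    using c_ik = sum over j of a_ij * b_jk."""
    m, n = len(A), len(A[0])        # order of A: m x n
    rows_B, p = len(B), len(B[0])   # order of B: n x p
    if n != rows_B:                 # columns of A must equal rows of B
        raise ValueError("Orders are incompatible for multiplication")
    # The product has order m x p
    return [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)]
            for i in range(m)]

# A is 2x3 and B is 3x2, so the product has order 2x2
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 10], [11, 12]]
print(mat_mul(A, B))  # [[58, 64], [139, 154]]
```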
Example 6. Compute the product $AB$ where $A = \begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & -2 \\ 0 & 5 \end{bmatrix}$.
Answer:
Given matrices are of order $2 \times 2$. Thus, the product $AB$ will also be of order $2 \times 2$.
Let $AB = \begin{bmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{bmatrix}$
Applying the formula $c_{ik} = \sum\limits_{j=1}^{2} a_{ij}b_{jk}$:
For $c_{11}$ (Row 1 of A, Col 1 of B): $(2 \times 1) + (1 \times 0) = 2 + 0 = 2$
For $c_{12}$ (Row 1 of A, Col 2 of B): $(2 \times -2) + (1 \times 5) = -4 + 5 = 1$
For $c_{21}$ (Row 2 of A, Col 1 of B): $(3 \times 1) + (4 \times 0) = 3 + 0 = 3$
For $c_{22}$ (Row 2 of A, Col 2 of B): $(3 \times -2) + (4 \times 5) = -6 + 20 = 14$
$AB = \begin{bmatrix} 2 & 1 \\ 3 & 14 \end{bmatrix}$
(Final Product)
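The same product can be verified with NumPy's `@` operator, which implements exactly this row-by-column rule (NumPy is used here only as an optional checking tool):

```python
import numpy as np

A = np.array([[2, 1],
              [3, 4]])
B = np.array([[1, -2],
              [0, 5]])

# The @ operator performs matrix (row-by-column) multiplication
print(A @ B)
# [[ 2  1]
#  [ 3 14]]
```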
Example 7. If $A = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}_{1 \times 3}$ and $B = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}_{3 \times 1}$, find $AB$.
Answer:
Here, $A$ is a row matrix and $B$ is a column matrix. The number of columns in $A$ is 3, and the number of rows in $B$ is 3, so the product is defined. The result will be a $1 \times 1$ matrix, whose single entry is effectively a scalar.
$AB = \begin{bmatrix} (1 \times 4) + (2 \times 5) + (3 \times 6) \end{bmatrix}$
$AB = \begin{bmatrix} 4 + 10 + 18 \end{bmatrix}$
$AB = \begin{bmatrix} 32 \end{bmatrix}$
[Order $1 \times 1$]
Example 8. Find the product matrix $C = AB$ and identify the specific element $c_{21}$, where $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 7 & 8 & 9 \\ 10 & 11 & 12 \end{bmatrix}$.
Answer:
Step 1: Check for Compatibility
The matrix $A$ is of order $3 \times 2$ and matrix $B$ is of order $2 \times 3$. Since the number of columns in A (2) is equal to the number of rows in B (2), the product $AB$ is defined.
Order of $C = AB$ is $3 \times 3$
(Resulting Order)
Step 2: Performing Matrix Multiplication
We will now represent the sum of products for each position $c_{ik}$ within the matrix structure itself:
$C = \begin{bmatrix} (1 \times 7) + (2 \times 10) & (1 \times 8) + (2 \times 11) & (1 \times 9) + (2 \times 12) \\ (3 \times 7) + (4 \times 10) & (3 \times 8) + (4 \times 11) & (3 \times 9) + (4 \times 12) \\ (5 \times 7) + (6 \times 10) & (5 \times 8) + (6 \times 11) & (5 \times 9) + (6 \times 12) \end{bmatrix}$
We multiply each row of $A$ with each column of $B$ using the Row-by-Column rule:
For Row 1 of C:
$c_{11} = (1 \times 7) + (2 \times 10) = 7 + 20 = 27$
$c_{12} = (1 \times 8) + (2 \times 11) = 8 + 22 = 30$
$c_{13} = (1 \times 9) + (2 \times 12) = 9 + 24 = 33$
For Row 2 of C:
$c_{21} = (3 \times 7) + (4 \times 10) = 21 + 40 = 61$
$c_{22} = (3 \times 8) + (4 \times 11) = 24 + 44 = 68$
$c_{23} = (3 \times 9) + (4 \times 12) = 27 + 48 = 75$
For Row 3 of C:
$c_{31} = (5 \times 7) + (6 \times 10) = 35 + 60 = 95$
$c_{32} = (5 \times 8) + (6 \times 11) = 40 + 66 = 106$
$c_{33} = (5 \times 9) + (6 \times 12) = 45 + 72 = 117$
Step 3: Constructing the Product Matrix
Combining all the elements into a single $3 \times 3$ matrix:
$C = \begin{bmatrix} 27 & 30 & 33 \\ 61 & 68 & 75 \\ 95 & 106 & 117 \end{bmatrix}$
Conclusion:
The element $c_{21}$ is located in the second row and first column of the matrix $C$.
$c_{21} = 61$
[Value from matrix $C$]
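A computational cross-check of Example 8 is sketched below. One caveat worth noting: NumPy indices start at 0, so the element written $c_{21}$ in textbook notation is `C[1, 0]` in code.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])           # order 3x2
B = np.array([[7, 8, 9],
              [10, 11, 12]])     # order 2x3

C = A @ B                        # order 3x3
print(C)
# [[ 27  30  33]
#  [ 61  68  75]
#  [ 95 106 117]]

# Textbook index c_21 (row 2, column 1) corresponds to zero-based C[1, 0]
print(C[1, 0])  # 61
```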
Properties of Matrix Multiplication
Matrix multiplication is a sophisticated operation that does not always follow the intuitive rules of scalar algebra (the algebra of real numbers). Understanding these properties is vital for solving complex problems in competitive exams like JEE Main, JEE Advanced, and NDA. Below is an elaborate explanation of these properties with derivations and examples.
1. Non-Commutativity of Matrix Multiplication
For real numbers, $a \times b$ is always equal to $b \times a$. For matrices $A$ and $B$, however, the product $AB$ is generally not equal to $BA$; matrix multiplication is therefore said to be non-commutative.
Why is it Non-Commutative?
There are three main reasons why $AB \neq BA$:
Case A: Compatibility Issues - The product $AB$ might be defined, but $BA$ might not even exist because the number of columns in $B$ may not match the number of rows in $A$.
Case B: Order Mismatch - Even if both $AB$ and $BA$ exist, the resulting matrices might have different orders. For instance, if $A$ is $2 \times 3$ and $B$ is $3 \times 2$, $AB$ is $2 \times 2$ while $BA$ is $3 \times 3$.
Case C: Element Mismatch - Even if $A$ and $B$ are square matrices of the same order (so $AB$ and $BA$ are both defined and have the same order), the corresponding elements are usually different.
$AB \neq BA$
(In General)
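Case C is easy to witness concretely. In this short NumPy sketch (matrix entries chosen only for illustration), both products are defined and have the same order, yet their elements differ:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A @ B)
# [[2 1]
#  [4 3]]
print(B @ A)
# [[3 4]
#  [1 2]]
print(np.array_equal(A @ B, B @ A))  # False: AB != BA
```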
2. Associative Law
The Associative Law is one of the fundamental properties of matrix algebra. It states that when multiplying three or more matrices, the way in which the matrices are grouped (using parentheses) does not change the result, provided the sequence of the matrices remains identical and the dimensions are compatible for multiplication.
Formal Statement
If $A$, $B$, and $C$ are three matrices such that the products $(AB)C$ and $A(BC)$ are defined, then:
$(AB)C = A(BC)$
Condition for Compatibility (Dimensions)
For the associative law to hold, the dimensions of the matrices must be conformable for multiplication. Let the orders of the matrices be:
$\bullet$ Matrix $A$ of order $m \times n$
$\bullet$ Matrix $B$ of order $n \times p$
$\bullet$ Matrix $C$ of order $p \times q$
In this case, the resulting matrix from both $(AB)C$ and $A(BC)$ will have the order $m \times q$.
Mathematical Derivation
To prove that $(AB)C = A(BC)$, we must show that the corresponding elements of the matrices on both sides are equal.
Let $A = [a_{ij}]_{m \times n}$, $B = [b_{jk}]_{n \times p}$, and $C = [c_{kl}]_{p \times q}$.
Step 1: Elements of $(AB)C$
Let $AB = D$, where $D = [d_{ik}]_{m \times p}$. The element $d_{ik}$ is given by:
$d_{ik} = \sum\limits_{j=1}^{n} a_{ij}b_{jk}$
Now, let $(AB)C = DC = X$, where $X = [x_{il}]_{m \times q}$. The element $x_{il}$ is:
$x_{il} = \sum\limits_{k=1}^{p} d_{ik}c_{kl} = \sum\limits_{k=1}^{p} \left( \sum\limits_{j=1}^{n} a_{ij}b_{jk} \right) c_{kl}$
$x_{il} = \sum\limits_{k=1}^{p} \sum\limits_{j=1}^{n} (a_{ij}b_{jk})c_{kl}$
Step 2: Elements of $A(BC)$
Let $BC = E$, where $E = [e_{jl}]_{n \times q}$. The element $e_{jl}$ is given by:
$e_{jl} = \sum\limits_{k=1}^{p} b_{jk}c_{kl}$
Now, let $A(BC) = AE = Y$, where $Y = [y_{il}]_{m \times q}$. The element $y_{il}$ is:
$y_{il} = \sum\limits_{j=1}^{n} a_{ij}e_{jl} = \sum\limits_{j=1}^{n} a_{ij} \left( \sum\limits_{k=1}^{p} b_{jk}c_{kl} \right)$
$y_{il} = \sum\limits_{j=1}^{n} \sum\limits_{k=1}^{p} a_{ij}(b_{jk}c_{kl})$
Step 3: Conclusion
Since scalar multiplication is associative, $(a_{ij}b_{jk})c_{kl} = a_{ij}(b_{jk}c_{kl})$. Also, the order of summation can be interchanged for finite sums. Therefore, the expressions for $x_{il}$ and $y_{il}$ obtained in Step 1 and Step 2 are identical.
$x_{il} = y_{il}$
[Hence Proved]
Significance
The Associative Law allows us to define Powers of a Square Matrix. For a square matrix $A$, we can write:
$\bullet$ $A^2 = A \cdot A$
$\bullet$ $A^3 = (A \cdot A) \cdot A = A \cdot (A \cdot A)$
$\bullet$ In general, $A^n = A \cdot A^{n-1} = A^{n-1} \cdot A$.
In problems involving large matrices, it is often easier to calculate $A(BC)$ if the product $BC$ results in a simpler matrix (like a diagonal or identity matrix) compared to $AB$.
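The associative law can be spot-checked computationally on matrices of the general orders $m \times n$, $n \times p$, $p \times q$. This sketch uses random integer matrices (so the arithmetic is exact) and confirms that both groupings agree and produce an $m \times q$ result:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3))   # order 2x3
B = rng.integers(-5, 6, size=(3, 4))   # order 3x4
C = rng.integers(-5, 6, size=(4, 5))   # order 4x5

left = (A @ B) @ C
right = A @ (B @ C)
print(np.array_equal(left, right))  # True: (AB)C = A(BC)
print(left.shape)                   # (2, 5), i.e. order m x q
```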
Example 9. Verify the associative law for matrices $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$, $B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, and $C = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$.
Answer:
To Prove: $(AB)C = A(BC)$
Step 1: Finding $AB$
$AB = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} (1\times 1 + 1\times 1) & (1\times 0 + 1\times 1) \\ (0\times 1 + 1\times 1) & (0\times 0 + 1\times 1) \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$
Step 2: Finding $(AB)C$
$(AB)C = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} (2\times 2 + 1\times 0) & (2\times 0 + 1\times 2) \\ (1\times 2 + 1\times 0) & (1\times 0 + 1\times 2) \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}$
Step 3: Finding $BC$
$BC = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} (1\times 2 + 0\times 0) & (1\times 0 + 0\times 2) \\ (1\times 2 + 1\times 0) & (1\times 0 + 1\times 2) \end{bmatrix} = \begin{bmatrix} 2 & 0 \\ 2 & 2 \end{bmatrix}$
Step 4: Finding $A(BC)$
$A(BC) = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 0 \\ 2 & 2 \end{bmatrix} = \begin{bmatrix} (1\times 2 + 1\times 2) & (1\times 0 + 1\times 2) \\ (0\times 2 + 1\times 2) & (0\times 0 + 1\times 2) \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}$
Conclusion:
From Step 2 and Step 4, we see that $(AB)C = A(BC) = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}$.
3. Distributive Law
The Distributive Law ensures that matrix multiplication interacts with matrix addition in a manner similar to real numbers. It confirms that multiplying a matrix by the sum of two other matrices is equivalent to multiplying them individually and then adding the results. This property is fundamental for expanding algebraic expressions involving matrices, such as $(A + B)^2$ or $(A + B)(A - B)$.
Left Distributive Law
If $A$ is a matrix of order $m \times n$, and $B$ and $C$ are matrices of order $n \times p$, then matrix $A$ distributes over the sum of $B$ and $C$ from the left.
$A(B + C) = AB + AC$
Derivation of Left Distributive Law
Let $A = [a_{ij}]_{m \times n}$, $B = [b_{jk}]_{n \times p}$, and $C = [c_{jk}]_{n \times p}$.
Step 1: Elements of the Left Hand Side (LHS)
Let $S = B + C$. The $(j, k)^{th}$ element of $S$ is given by $s_{jk} = b_{jk} + c_{jk}$.
Now, let $X = A(B + C) = AS$. The $(i, k)^{th}$ element of $X$, denoted by $x_{ik}$, is:
$x_{ik} = \sum\limits_{j=1}^{n} a_{ij}s_{jk}$
$x_{ik} = \sum\limits_{j=1}^{n} a_{ij}(b_{jk} + c_{jk})$
[Substituting value of $s_{jk}$]
By the distributive property of real numbers, $a_{ij}(b_{jk} + c_{jk}) = a_{ij}b_{jk} + a_{ij}c_{jk}$. Thus:
$x_{ik} = \sum\limits_{j=1}^{n} a_{ij}b_{jk} + \sum\limits_{j=1}^{n} a_{ij}c_{jk}$
... (i)
Step 2: Elements of the Right Hand Side (RHS)
The $(i, k)^{th}$ element of $AB$ is $\sum\limits_{j=1}^{n} a_{ij}b_{jk}$ and the $(i, k)^{th}$ element of $AC$ is $\sum\limits_{j=1}^{n} a_{ij}c_{jk}$.
Let $Y = AB + AC$. The $(i, k)^{th}$ element of $Y$, denoted by $y_{ik}$, is:
$y_{ik} = \sum\limits_{j=1}^{n} a_{ij}b_{jk} + \sum\limits_{j=1}^{n} a_{ij}c_{jk}$
... (ii)
Comparing (i) and (ii), we see $x_{ik} = y_{ik}$. Hence, $A(B + C) = AB + AC$.
Right Distributive Law
If $A$ and $B$ are matrices of order $m \times n$, and $C$ is a matrix of order $n \times p$, then matrix $C$ distributes over the sum of $A$ and $B$ from the right.
$(A + B)C = AC + BC$
Note: It is crucial to maintain the order. Since matrix multiplication is non-commutative, $(A + B)C$ is not necessarily equal to $CA + CB$. Distribution must happen from the specific side indicated.
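Both distributive laws can be confirmed on non-square matrices as well, which also illustrates why the side of distribution matters (the orders must remain compatible). A NumPy sketch with random integer matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(2, 3))   # order 2x3
B = rng.integers(-3, 4, size=(3, 2))   # order 3x2
C = rng.integers(-3, 4, size=(3, 2))   # order 3x2

# Left distributive law: A(B + C) = AB + AC
print(np.array_equal(A @ (B + C), A @ B + A @ C))   # True

# Right distributive law: (B + C)A = BA + CA (the multiplier stays on the right)
print(np.array_equal((B + C) @ A, B @ A + C @ A))   # True
```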
Example 10. If $A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$, and $C = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, verify the Left Distributive Law $A(B + C) = AB + AC$.
Answer:
Step 1: Calculate LHS $A(B + C)$
First, find $B + C$:
$B + C = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix}$
Now, $A(B + C) = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix}$
$A(B + C) = \begin{bmatrix} (1\times 2 + 2\times 2) & (1\times 2 + 2\times 2) \\ (3\times 2 + 4\times 2) & (3\times 2 + 4\times 2) \end{bmatrix} = \begin{bmatrix} 6 & 6 \\ 14 & 14 \end{bmatrix}$
Step 2: Calculate RHS $AB + AC$
$AB = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} (0+2) & (1+0) \\ (0+4) & (3+0) \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}$
$AC = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} (2+2) & (1+4) \\ (6+4) & (3+8) \end{bmatrix} = \begin{bmatrix} 4 & 5 \\ 10 & 11 \end{bmatrix}$
$AB + AC = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix} + \begin{bmatrix} 4 & 5 \\ 10 & 11 \end{bmatrix} = \begin{bmatrix} 6 & 6 \\ 14 & 14 \end{bmatrix}$
Since LHS = RHS, the Left Distributive Law is verified.
Important Identities Based on Distributive Law
The following identities hold for square matrices $A$ and $B$ of the same order; as the Condition column shows, the simplified forms are valid only in the commutative case $AB = BA$:
| Identity Name | Formula for Matrices | Condition |
|---|---|---|
| Square of Sum | $(A + B)^2 = A^2 + AB + BA + B^2$ | In general |
| Simplified Square | $(A + B)^2 = A^2 + 2AB + B^2$ | Only if $AB = BA$ |
| Difference of Squares | $(A - B)(A + B) = A^2 + AB - BA - B^2$ | In general |
| Identity Product | $(A - I)(A + I) = A^2 - I$ | Always (since $AI = IA$) |
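The difference between the general and simplified squares is easy to see numerically. In this sketch, $A$ and $B$ are deliberately chosen so that $AB \neq BA$ (the same pair used earlier to demonstrate non-commutativity):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])   # A and B do not commute

S = A + B
expanded = A @ A + A @ B + B @ A + B @ B    # general expansion of (A + B)^2
simplified = A @ A + 2 * (A @ B) + B @ B    # valid only when AB = BA

print(np.array_equal(S @ S, expanded))    # True in general
print(np.array_equal(S @ S, simplified))  # False here, because AB != BA
```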
4. Existence of Multiplicative Identity
In the algebra of real numbers, the number $1$ is called the multiplicative identity because $a \times 1 = 1 \times a = a$ for any real number $a$. In matrix algebra, a similar role is played by the Identity Matrix (also known as the Unit Matrix). For every square matrix, there exists a unique identity matrix that leaves the matrix unchanged upon multiplication.
Definition of Identity Matrix
An identity matrix $I$ is a square matrix in which all the elements of the principal diagonal (from top left to bottom right) are $1$ and all other elements are $0$. It is denoted by $I_n$ where $n$ is the order of the matrix.
Example of Identity Matrices:
$I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
The Identity Law
For every square matrix $A$ of order $n \times n$, there exists an identity matrix $I$ of order $n \times n$ such that:
$AI = IA = A$
Formal Proof (Derivation)
Let $A = [a_{ij}]$ be a square matrix of order $n \times n$ and $I = [\delta_{ij}]$ be the identity matrix of the same order, where $\delta_{ij}$ is the Kronecker delta defined as:
$\delta_{ij} = \begin{cases} 1 & , & i = j \\ 0 & , & i \neq j \end{cases}$
To Prove: $AI = A$
Let the product $AI = C$, where $C = [c_{ij}]$. By the definition of matrix multiplication:
$c_{ij} = \sum\limits_{k=1}^{n} a_{ik}\delta_{kj}$
In the summation $\sum\limits_{k=1}^{n} a_{ik}\delta_{kj}$, the term $\delta_{kj}$ is zero for all values of $k$ except when $k = j$. When $k = j$, $\delta_{jj} = 1$.
$c_{ij} = a_{ij} \cdot \delta_{jj} = a_{ij} \cdot 1 = a_{ij}$
Since the $(i, j)^{th}$ element of $AI$ is the same as the $(i, j)^{th}$ element of $A$ for all $i, j$, we conclude that $AI = A$. Similarly, it can be proven that $IA = A$.
Identity in Non-Square Matrices
If $A$ is a non-square matrix of order $m \times n$, the identity property still holds, but the identity matrices used for left and right multiplication will have different orders:
$\bullet$ Left Identity: $I_m A = A$ (where $I$ is of order $m \times m$)
$\bullet$ Right Identity: $A I_n = A$ (where $I$ is of order $n \times n$)
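The left/right identity behavior for a non-square matrix can be checked directly; note that the two identity matrices have different orders ($I_2$ on the left, $I_3$ on the right, for a $2 \times 3$ matrix):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # order 2x3

I2 = np.eye(2, dtype=int)       # left identity, order m x m
I3 = np.eye(3, dtype=int)       # right identity, order n x n

print(np.array_equal(I2 @ A, A))  # True: I_m A = A
print(np.array_equal(A @ I3, A))  # True: A I_n = A
```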
Example 11. Given $A = \begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix}$, verify that $AI = A$ where $I$ is the identity matrix of order $2$.
Answer:
Given:
$A = \begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix}$ and $I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
Solution:
We need to find the product $AI$:
$AI = \begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$AI = \begin{bmatrix} (5 \times 1 + (-2) \times 0) & (5 \times 0 + (-2) \times 1) \\ (3 \times 1 + 1 \times 0) & (3 \times 0 + 1 \times 1) \end{bmatrix}$
$AI = \begin{bmatrix} 5 + 0 & 0 - 2 \\ 3 + 0 & 0 + 1 \end{bmatrix}$
$AI = \begin{bmatrix} 5 & -2 \\ 3 & 1 \end{bmatrix} = A$
Hence, $AI = A$ is verified.
5. Interaction with Scalar Multiplication
In matrix algebra, a scalar is a real number (or complex number) used to scale the elements of a matrix. When a scalar interacts with the product of two matrices, it exhibits a high degree of flexibility. This property, often called Scalar Association, allows the scalar to be shifted freely between the matrices in a product string without affecting the final outcome.
The Scalar Association Law
If $A$ and $B$ are two matrices compatible for multiplication, and $k$ is any scalar, then the following relation holds true:
$k(AB) = (kA)B = A(kB)$
This property indicates that we can either multiply the matrices first and then scale the result, or scale one of the individual matrices before performing the multiplication.
Mathematical Derivation
To prove this property, we examine the $(i, j)^{th}$ element of the resulting matrices on all sides of the equation.
Given:
Let $A = [a_{ir}]$ be a matrix of order $m \times n$ and $B = [b_{rj}]$ be a matrix of order $n \times p$. Let $k$ be a scalar.
Proof:
The $(i, j)^{th}$ element of the product $AB$ is given by $\sum\limits_{r=1}^{n} a_{ir}b_{rj}$.
Step 1: Elements of $k(AB)$
The $(i, j)^{th}$ element of $k(AB)$ is:
$k \left( \sum\limits_{r=1}^{n} a_{ir}b_{rj} \right) = \sum\limits_{r=1}^{n} k(a_{ir}b_{rj})$
Step 2: Elements of $(kA)B$
The matrix $(kA)$ has elements $[k a_{ir}]$. Thus, the $(i, j)^{th}$ element of $(kA)B$ is:
$\sum\limits_{r=1}^{n} (k a_{ir})b_{rj} = \sum\limits_{r=1}^{n} k(a_{ir}b_{rj})$
Step 3: Elements of $A(kB)$
The matrix $(kB)$ has elements $[k b_{rj}]$. Thus, the $(i, j)^{th}$ element of $A(kB)$ is:
$\sum\limits_{r=1}^{n} a_{ir}(k b_{rj}) = \sum\limits_{r=1}^{n} k(a_{ir}b_{rj})$
Conclusion:
Since the $(i, j)^{th}$ elements of $k(AB)$, $(kA)B$, and $A(kB)$ are identical for all $i$ and $j$, the property is proved.
$k(AB) = (kA)B = A(kB)$
[Verified by Element-wise Analysis]
Distributive Law for Scalars
If $k$ and $l$ are scalars and $A$ is a matrix, then:
$(k + l)A = kA + lA$
$k(A + B) = kA + kB$
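Both the scalar association law and the scalar distributive laws above can be verified in a few lines (the particular matrices and scalars here are arbitrary illustrations):

```python
import numpy as np

k, l = 2, 3
A = np.array([[1, 2], [3, 1]])
B = np.array([[0, 1], [2, 4]])

# Scalar association: k(AB) = (kA)B = A(kB)
print(np.array_equal(k * (A @ B), (k * A) @ B))    # True
print(np.array_equal(k * (A @ B), A @ (k * B)))    # True

# Distributive laws for scalars
print(np.array_equal((k + l) * A, k * A + l * A))  # True
print(np.array_equal(k * (A + B), k * A + k * B))  # True
```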
Example 12. Let $k = 2$, $A = \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix}$, and $B = \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix}$. Verify that $k(AB) = (kA)B$.
Answer:
LHS: $k(AB)$
First, calculate $AB$:
$AB = \begin{bmatrix} (1 \times 0 + 2 \times 2) & (1 \times 1 + 2 \times 4) \\ (3 \times 0 + 1 \times 2) & (3 \times 1 + 1 \times 4) \end{bmatrix} = \begin{bmatrix} 4 & 9 \\ 2 & 7 \end{bmatrix}$
Now, $k(AB) = 2 \begin{bmatrix} 4 & 9 \\ 2 & 7 \end{bmatrix} = \begin{bmatrix} 8 & 18 \\ 4 & 14 \end{bmatrix}$
RHS: $(kA)B$
First, calculate $kA$:
$kA = 2 \begin{bmatrix} 1 & 2 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 6 & 2 \end{bmatrix}$
Now, $(kA)B = \begin{bmatrix} 2 & 4 \\ 6 & 2 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 2 & 4 \end{bmatrix}$
$(kA)B = \begin{bmatrix} (2 \times 0 + 4 \times 2) & (2 \times 1 + 4 \times 4) \\ (6 \times 0 + 2 \times 2) & (6 \times 1 + 2 \times 4) \end{bmatrix} = \begin{bmatrix} 8 & 18 \\ 4 & 14 \end{bmatrix}$
Result: LHS = RHS. The property is verified.
6. The Zero Product Property
In the algebra of real numbers, one of the most fundamental properties is the Zero Product Law, which states that if the product of two numbers is zero, then at least one of the numbers must be zero. However, in matrix algebra, this intuition fails. Matrix multiplication allows for the existence of "Zero Divisors"—non-zero matrices that, when multiplied, produce a zero matrix ($O$).
Comparison with Real Numbers
To understand why this is a crucial concept, consider the following comparison:
In Real Numbers ($a, b \in \mathbb{R}$):
$ab = 0 \Rightarrow a = 0 \text{ or } b = 0$
In Matrices ($A, B \in \text{M}_n$):
$AB = O \nRightarrow A = O \text{ or } B = O$
[Important Exception]
This means that $AB$ can be a zero matrix even if both $A$ and $B$ are non-zero matrices.
Why does this happen?
Matrix multiplication involves the dot product of rows of the first matrix and columns of the second matrix. If every row of $A$ is orthogonal to every column of $B$, the resulting product will be a zero matrix, regardless of whether $A$ or $B$ contain non-zero elements.
The Failure of the Cancellation Law
Because the Zero Product Property does not hold, the Cancellation Law also fails in matrix multiplication. In real numbers, if $ab = ac$ and $a \neq 0$, we can "cancel" $a$ to get $b = c$. In matrices, this is not permitted.
Mathematical Explanation:
Suppose $AB = AC$. We can rewrite this as:
$AB - AC = O$
$A(B - C) = O$
Since the product of two non-zero matrices can be zero, $A(B - C) = O$ does not imply that $A = O$ or $B - C = O$. Therefore, $B$ is not necessarily equal to $C$.
Summary Comparison Table
| Property | Real Numbers ($a, b, c$) | Matrices ($A, B, C$) |
|---|---|---|
| Zero Product | $ab = 0 \Rightarrow a=0 \text{ or } b=0$ | $AB = O \nRightarrow A=O \text{ or } B=O$ |
| Cancellation Law | $ab = ac \Rightarrow b = c$ ($a \neq 0$) | $AB = AC \nRightarrow B = C$ (even if $A \neq O$) |
| Condition for Success | Always holds for non-zero $a$ | Only holds if $A$ is Non-Singular ($|A| \neq 0$) |
Tip: In multiple-choice questions, look for options that suggest $A$ is "invertible" or "non-singular". If $|A| \neq 0$, then the Cancellation Law does hold because we can multiply both sides by $A^{-1}$. If the determinant is zero, the law fails.
Example 13. Provide two non-zero matrices $A$ and $B$ such that their product $AB$ is a zero matrix.
Answer:
Let us consider two matrices $A$ and $B$ of order $2 \times 2$:
Let $A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
Observation:
Clearly, $A \neq O$ (as it has a non-zero element $1$ at $a_{12}$) and $B \neq O$ (as it has a non-zero element $1$ at $b_{11}$).
Product Calculation:
$AB = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$
$AB = \begin{bmatrix} (0 \times 1 + 1 \times 0) & (0 \times 0 + 1 \times 0) \\ (0 \times 1 + 0 \times 0) & (0 \times 0 + 0 \times 0) \end{bmatrix}$
$AB = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} = O$
Conclusion: The product is a zero matrix even though neither $A$ nor $B$ is a zero matrix.
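The matrices of Example 13 also demonstrate the failure of the cancellation law in one stroke. In this sketch, $C$ is an additional matrix chosen for illustration so that $AB = AC$ even though $B \neq C$:

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])   # A != O
B = np.array([[1, 0], [0, 0]])   # B != O

print(A @ B)                     # the zero matrix, despite A != O and B != O
# [[0 0]
#  [0 0]]

# Cancellation also fails: AB = AC does not force B = C
C = np.array([[5, 7], [0, 0]])
print(np.array_equal(A @ B, A @ C))  # True
print(np.array_equal(B, C))          # False
```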
Comprehensive Comparison: Real Numbers vs. Matrix Algebra
To master the properties of matrix multiplication, it is essential to contrast them with the familiar rules of real number algebra. While many properties are shared, the points of divergence (like non-commutativity and the failure of the cancellation law) are where examiners typically frame tricky questions.
The table below provides an elaborate comparison to help students identify the unique behavior of matrices during multiplication.
| Property | Real Numbers ($a, b, c \in \mathbb{R}$) | Matrices ($A, B, C \in \text{M}_n$) |
|---|---|---|
| Commutativity | $ab = ba$ (Always) | $AB \neq BA$ (Generally). Exceptions occur with Identity ($I$) or Inverse ($A^{-1}$). |
| Associativity | $(ab)c = a(bc)$ | $(AB)C = A(BC)$ (Always) provided they are compatible. |
| Distributivity | $a(b+c) = ab + ac$ | $A(B+C) = AB + AC$ (Left) and $(A+B)C = AC + BC$ (Right). |
| Identity Element | $a \cdot 1 = 1 \cdot a = a$ | $AI = IA = A$, where $I$ is the Identity Matrix. |
| Existence of Inverse | $a \cdot \frac{1}{a} = 1$ (exists for all $a \neq 0$) | $AA^{-1} = A^{-1}A = I$ (exists only if $|A| \neq 0$). |
| Zero Product Property | $ab = 0 \Rightarrow a=0$ or $b=0$ | $AB = O$ does not necessarily imply $A=O$ or $B=O$. |
| Cancellation Law | If $ab = ac$ and $a \neq 0$, then $b=c$ | If $AB = AC$, then $B$ is not necessarily equal to $C$. |
| Binomial Expansion | $(a+b)^2 = a^2 + 2ab + b^2$ | $(A+B)^2 = A^2 + AB + BA + B^2$. (Equals $A^2 + 2AB + B^2$ only if $AB=BA$). |
5. Powers of a Square Matrix and Matrix Polynomials
The concept of Powers of a Square Matrix and Matrix Polynomials is a vital extension of matrix multiplication. These concepts are frequently tested in competitive exams like JEE Main, JEE Advanced, and BITSAT, often involving the calculation of higher powers or finding the roots of a matrix-based equation.
Powers of a Square Matrix
For any square matrix $A$ of order $n$, the product $AA$ is defined. Similarly, higher powers are defined through repeated multiplication. Note that powers are defined only for square matrices, because at every step of the repeated multiplication the number of columns must equal the number of rows.
Definitions:
$\bullet$ $A^1 = A$
$\bullet$ $A^2 = A \cdot A$
$\bullet$ $A^3 = A^2 \cdot A = A \cdot A \cdot A$
$\bullet$ In general, $A^{n+1} = A^n \cdot A$, where $n$ is a positive integer.
$\bullet$ For $n = 0$, $A^0 = I$, where $I$ is the Identity Matrix of the same order as $A$.
Laws of Exponents for Matrices
If $A$ is a square matrix and $m, n$ are non-negative integers, the following laws hold:
$A^m \cdot A^n = A^{m+n}$
$(A^m)^n = A^{mn}$
Note: Unlike real numbers, $(AB)^n = A^n B^n$ is not always true. It only holds if $A$ and $B$ commute ($AB = BA$).
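These exponent laws (and the caveat about $(AB)^n$) can be checked with NumPy's `numpy.linalg.matrix_power` function, as in this sketch:

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.array([[1, 1], [0, 1]])

# A^0 = I and A^m A^n = A^(m+n)
print(matrix_power(A, 0))                  # the 2x2 identity matrix
print(np.array_equal(matrix_power(A, 2) @ matrix_power(A, 3),
                     matrix_power(A, 5)))  # True

# (AB)^n = A^n B^n can fail when AB != BA
B = np.array([[0, 1], [1, 0]])
print(np.array_equal(matrix_power(A @ B, 2),
                     matrix_power(A, 2) @ matrix_power(B, 2)))  # False here
```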
Matrix Polynomials
Let $f(x) = a_0x^n + a_1x^{n-1} + a_2x^{n-2} + \dots + a_{n-1}x + a_n$ be a polynomial in $x$. If $A$ is a square matrix (of any order), then the expression $f(A)$ is called a Matrix Polynomial.
Rules for Conversion:
To obtain $f(A)$ from $f(x)$:
1. Replace every occurrence of the variable $x$ with the matrix $A$.
2. Crucially, replace the constant term $a_n$ with $a_n I$, where $I$ is the Identity matrix of the same order as $A$. This is because we cannot add a scalar to a matrix directly.
$f(A) = a_0A^n + a_1A^{n-1} + \dots + a_{n-1}A + a_n I$
If $f(A) = O$ (Zero Matrix), then the matrix $A$ is said to be a "root" of the polynomial $f(x)$. This is the basis of the Cayley-Hamilton Theorem used in advanced competitive problems.
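As a concrete sketch: for a $2 \times 2$ matrix, the characteristic polynomial is $f(x) = x^2 - (\operatorname{tr} A)\,x + \det A$, and the Cayley-Hamilton Theorem asserts that $f(A) = O$. Evaluating this matrix polynomial in NumPy (note how the constant term becomes a multiple of $I$):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
I = np.eye(2, dtype=int)

tr = np.trace(A)                  # tr A = 5
det = round(np.linalg.det(A))     # det A = -2 (rounded, since det() returns a float)

# f(x) = x^2 - 5x - 2, so f(A) = A^2 - 5A - 2I; the constant term becomes -2I
f_A = A @ A - tr * A + det * I
print(f_A)                        # the zero matrix: A is a "root" of f(x)
# [[0 0]
#  [0 0]]
```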
Transpose of a Matrix
The Transpose of a Matrix is a fundamental operation in linear algebra where the rows and columns of a given matrix are interchanged. This operation is widely used in solving systems of linear equations, finding the inverse of a matrix, and in various engineering applications.
Definition
Let $A = [a_{ij}]$ be an $m \times n$ matrix. Then the transpose of $A$, denoted by $A'$ or $A^T$, is the matrix obtained from $A$ by interchanging its rows and columns. Mathematically, $A'$ is an $n \times m$ matrix such that:
$(j, i)^{th} \text{ element of } A' = (i, j)^{th} \text{ element of } A$
i.e., if $A = [a_{ij}]_{m \times n}$, then $A' = [a_{ji}]_{n \times m}$.
Example:
If $A = \begin{bmatrix} 2 & 7 \\ 5 & 4 \end{bmatrix}_{2 \times 2}$, then $A' = \begin{bmatrix} 2 & 5 \\ 7 & 4 \end{bmatrix}_{2 \times 2}$.
If $B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}_{2 \times 3}$, then $B' = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}_{3 \times 2}$.
Properties of Transpose
For any matrices $A$ and $B$ of suitable orders, the following properties hold:
(i) Double Transpose
The Double Transpose Property is one of the most intuitive yet essential properties of matrix algebra. It states that if the operation of transposing a matrix (interchanging rows and columns) is applied twice in succession, the matrix returns to its original form. This property is analogous to the double negation law in logic or the reciprocal of a reciprocal in arithmetic.
Mathematical Statement
For any matrix $A$ of order $m \times n$, the transpose of its transpose is equal to the matrix $A$ itself.
$(A')' = A$
Formal Proof (Derivation)
To prove this property rigorously for competitive exams, we use element-wise notation and track the dimensions of the matrix through each transformation.
Step 1: Define the Original Matrix
Let $A$ be a matrix of order $m \times n$ such that:
$A = [a_{ij}]_{m \times n}$
where $1 \le i \le m$ (rows) and $1 \le j \le n$ (columns).
Step 2: First Transpose Operation
By the definition of a transpose, $A'$ is obtained by interchanging rows and columns. Let $A' = B$. Then $B$ is a matrix of order $n \times m$ such that:
$B = [b_{ji}]_{n \times m}$
$b_{ji} = a_{ij}$
[Definition of Transpose]
Step 3: Second Transpose Operation
Now, we take the transpose of $B$ (which is $A'$). Let $B' = C$. Then $C$ is a matrix of order $m \times n$ such that:
$c_{ij} = b_{ji}$
Substituting the value of $b_{ji}$ from above equation:
$c_{ij} = a_{ij}$
Step 4: Conclusion
Since the matrix $C$ (which is $(A')'$) and the matrix $A$ have the same order ($m \times n$) and their corresponding elements $c_{ij}$ and $a_{ij}$ are equal for all $i$ and $j$, we conclude:
$(A')' = A$
[Hence Proved]
Significance in Algebraic Simplification
This property is frequently used to simplify complex matrix expressions before solving. For example, when dealing with sums or products:
$\bullet$ To solve for $A$ when $A'$ is given: Just take the transpose of the given matrix.
$\bullet$ Simplifying $(A' + B')'$: Using the sum property and double transpose:
$(A' + B')' = (A')' + (B')' = A + B$
$\bullet$ Simplifying $(B'A')'$: Using the reversal law and double transpose:
$(B'A')' = (A')'(B')' = AB$
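Both simplifications above rest on properties that are easy to verify numerically: the double transpose $(A')' = A$ and the reversal law $(AB)' = B'A'$. A short NumPy sketch (matrix entries chosen only for illustration):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])     # order 2x3
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])        # order 3x2

# Double transpose: (A')' = A
print(np.array_equal(A.T.T, A))              # True

# Reversal law: (AB)' = B'A' (note the reversed order on the right)
print(np.array_equal((A @ B).T, B.T @ A.T))  # True
```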
Example. Verify the double transpose property for the matrix $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$.
Answer:
Given:
$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}_{2 \times 3}$
First Transpose ($A'$):
Interchanging rows and columns of $A$:
$A' = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}_{3 \times 2}$
Second Transpose $((A')')$:
Interchanging rows and columns of $A'$:
$(A')' = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}_{2 \times 3}$
Comparison:
We observe that $(A')' = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}$, which is exactly matrix $A$.
Hence, the property $(A')' = A$ is verified.
(ii) Scalar Multiplication
The Scalar Multiplication Property of transposes describes how a constant (scalar) interacts with the transpose operation. It states that when a matrix is multiplied by a scalar and then transposed, the result is the same as transposing the matrix first and then multiplying it by the scalar. In other words, the scalar is "unaffected" by the transpose operation because a scalar does not have rows or columns to interchange.
Mathematical Statement
If $A$ is any matrix of order $m \times n$ and $k$ is a scalar (any real or complex number), then:
$(kA)' = kA'$
[where $k$ is a scalar]
Formal Proof (Derivation)
To prove this property, we compare the elements of the matrix on the Left Hand Side (LHS) with the elements of the matrix on the Right Hand Side (RHS).
Step 1: Elements of LHS $(kA)'$
Let $A = [a_{ij}]$ be a matrix of order $m \times n$.
Then, the matrix $kA$ is obtained by multiplying every element of $A$ by $k$:
$kA = [k \cdot a_{ij}]_{m \times n}$
Now, taking the transpose of $kA$, the $(j, i)^{th}$ element of $(kA)'$ is the $(i, j)^{th}$ element of $kA$:
$(j, i)^{th} \text{ element of } (kA)' = k \cdot a_{ij}$
Step 2: Elements of RHS $kA'$
The matrix $A'$ is the transpose of $A$, so its $(j, i)^{th}$ element is $a_{ij}$:
$A' = [a_{ji}]_{n \times m}$ where the $(j, i)^{th}$ element is $a_{ij}$.
Now, multiplying the matrix $A'$ by the scalar $k$, the $(j, i)^{th}$ element of $kA'$ is:
$(j, i)^{th} \text{ element of } kA' = k \cdot a_{ij}$
Step 3: Conclusion
Comparing the above results, we see that the corresponding elements of $(kA)'$ and $kA'$ are equal. Also, both matrices have the same order $n \times m$.
$(kA)' = kA'$
[Hence Proved]
Example. If $k = 5$ and $A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$, verify that $(kA)' = kA'$.
Answer:
Given:
$k = 5$ and $A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}$
LHS Calculation: $(kA)'$
First, find $kA$:
$kA = 5 \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} = \begin{bmatrix} 5 & 15 \\ 10 & 20 \end{bmatrix}$
Now, take the transpose:
$(kA)' = \begin{bmatrix} 5 & 10 \\ 15 & 20 \end{bmatrix}$
RHS Calculation: $kA'$
First, find $A'$:
$A' = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$
Now, multiply by $k$:
$kA' = 5 \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 5 & 10 \\ 15 & 20 \end{bmatrix}$
Conclusion: Since LHS = RHS, the property $(kA)' = kA'$ is verified.
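The same verification can be carried out in code. A minimal NumPy sketch, using the values of $k$ and $A$ from the example above:

```python
import numpy as np

k = 5
A = np.array([[1, 3],
              [2, 4]])

lhs = (k * A).T    # transpose of the scaled matrix, (kA)'
rhs = k * A.T      # scalar times the transpose, kA'
assert np.array_equal(lhs, rhs)         # (kA)' = kA'
```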
(iii) Negative of a Matrix
The Negative of a Matrix property of transposes is a specific application of the scalar multiplication property. It states that the operation of taking the negative of a matrix and then transposing it is equivalent to transposing the matrix first and then taking its negative. This property is particularly useful in the study of Skew-Symmetric Matrices and algebraic manipulation of matrix equations.
Mathematical Statement
If $A$ is any matrix of order $m \times n$, and $-A$ represents the negative of matrix $A$ (where every element $a_{ij}$ is replaced by $-a_{ij}$), then:
$(-A)' = -A'$
Formal Proof (Derivation)
The derivation of this property relies on the definition of the negative of a matrix as scalar multiplication by $k = -1$.
Step 1: Expressing Negative as Scalar Multiplication
By definition, the negative of matrix $A$ is obtained by multiplying the matrix by the scalar $-1$.
$-A = (-1)A$
Step 2: Applying the Scalar Property
We know from the general property of transposes that for any scalar $k$, $(kA)' = kA'$. Substituting $k = -1$ into this property:
$((-1)A)' = (-1)A'$
[Using $(kA)' = kA'$]
Step 3: Conclusion
Since $(-1)A = -A$ and $(-1)A' = -A'$, we can substitute these back into the above equation:
$(-A)' = -A'$
[Hence Proved]
Element-wise Explanation
Let $A = [a_{ij}]$. Then the $(i, j)^{th}$ element of $A$ is $a_{ij}$.
$\bullet$ The $(i, j)^{th}$ element of $-A$ is $-a_{ij}$.
$\bullet$ The $(j, i)^{th}$ element of $(-A)'$ is $-a_{ij}$.
$\bullet$ The $(j, i)^{th}$ element of $A'$ is $a_{ij}$.
$\bullet$ The $(j, i)^{th}$ element of $-A'$ is $-a_{ij}$.
Since the $(j, i)^{th}$ elements of both matrices are identical for all $j$ and $i$, the matrices are equal.
Example. If $A = \begin{bmatrix} 2 & -5 \\ 0 & 3 \end{bmatrix}$, verify that $(-A)' = -A'$.
Answer:
Given:
$A = \begin{bmatrix} 2 & -5 \\ 0 & 3 \end{bmatrix}$
LHS Calculation: $(-A)'$
First, find $-A$:
$-A = \begin{bmatrix} -2 & 5 \\ 0 & -3 \end{bmatrix}$
Now, take the transpose:
$(-A)' = \begin{bmatrix} -2 & 0 \\ 5 & -3 \end{bmatrix}$
RHS Calculation: $-A'$
First, find $A'$:
$A' = \begin{bmatrix} 2 & 0 \\ -5 & 3 \end{bmatrix}$
Now, take the negative of $A'$:
$-A' = \begin{bmatrix} -2 & 0 \\ 5 & -3 \end{bmatrix}$
Conclusion: Since LHS = RHS, the property $(-A)' = -A'$ is verified.
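As a quick numerical check of the same property (a sketch using NumPy, with the matrix from the example above):

```python
import numpy as np

A = np.array([[2, -5],
              [0, 3]])

# Negating and then transposing equals transposing and then negating.
assert np.array_equal((-A).T, -(A.T))   # (-A)' = -A'
```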
(iv) Sum and Difference
The Sum and Difference Property of transposes (often referred to as the Distributive Property over Addition) states that the transpose of the sum of two matrices is equal to the sum of their individual transposes. This linearity property simplifies complex matrix operations, especially when dealing with symmetric and skew-symmetric matrix decompositions.
Mathematical Statement
Let $A$ and $B$ be two matrices of the same order $m \times n$. Then:
For Addition:
$(A + B)' = A' + B'$
For Subtraction:
$(A - B)' = A' - B'$
Formal Proof (Derivation)
To prove these properties, we compare the elements of the matrices on both sides of the equation.
Proof for Addition:
Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two $m \times n$ matrices.
$\bullet$ LHS: $(A + B)'$
The $(i, j)^{th}$ element of $(A + B)$ is $(a_{ij} + b_{ij})$.
The $(j, i)^{th}$ element of $(A + B)'$ is the $(i, j)^{th}$ element of $(A + B)$.
$(j, i)^{th} \text{ element of } (A + B)' = a_{ij} + b_{ij}$
$\bullet$ RHS: $A' + B'$
The $(j, i)^{th}$ element of $A'$ is $a_{ij}$.
The $(j, i)^{th}$ element of $B'$ is $b_{ij}$.
$(j, i)^{th} \text{ element of } A' + B' = a_{ij} + b_{ij}$
Since the corresponding elements of $(A + B)'$ and $A' + B'$ are equal for all $j$ and $i$, and both result in an $n \times m$ matrix:
$(A + B)' = A' + B'$
[Hence Proved]
Proof for Subtraction:
The difference $A - B$ can be written as $A + (-1)B$. Using the sum property and the scalar property $(kB)' = kB'$:
$(A - B)' = (A + (-B))'$
$(A - B)' = A' + (-B)'$
$(A - B)' = A' - B'$
[Using Proof of Addition and Scalar Rule]
Extension to Multiple Matrices
The property is also applicable for the sum of three or more matrices of the same order. For any $n$ matrices $A_1, A_2, \dots, A_n$:
$(A_1 + A_2 + \dots + A_n)' = A_1' + A_2' + \dots + A_n'$
Example. If $A = \begin{bmatrix} 3 & 7 \\ 2 & 5 \end{bmatrix}$ and $B = \begin{bmatrix} 1 & 0 \\ 4 & -2 \end{bmatrix}$, verify that $(A + B)' = A' + B'$.
Answer:
LHS Calculation: $(A + B)'$
$A + B = \begin{bmatrix} 3+1 & 7+0 \\ 2+4 & 5-2 \end{bmatrix} = \begin{bmatrix} 4 & 7 \\ 6 & 3 \end{bmatrix}$
Now, $(A + B)' = \begin{bmatrix} 4 & 6 \\ 7 & 3 \end{bmatrix}$
RHS Calculation: $A' + B'$
$A' = \begin{bmatrix} 3 & 2 \\ 7 & 5 \end{bmatrix}$ and $B' = \begin{bmatrix} 1 & 4 \\ 0 & -2 \end{bmatrix}$
$A' + B' = \begin{bmatrix} 3+1 & 2+4 \\ 7+0 & 5-2 \end{bmatrix} = \begin{bmatrix} 4 & 6 \\ 7 & 3 \end{bmatrix}$
Conclusion: Since LHS = RHS, the property is verified.
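Both the addition and subtraction forms can be checked numerically. A minimal NumPy sketch with the matrices from the example above:

```python
import numpy as np

A = np.array([[3, 7],
              [2, 5]])
B = np.array([[1, 0],
              [4, -2]])

assert np.array_equal((A + B).T, A.T + B.T)   # (A + B)' = A' + B'
assert np.array_equal((A - B).T, A.T - B.T)   # (A - B)' = A' - B'
```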
(v) Reversal Law for Multiplication
The Reversal Law is one of the most significant properties of the transpose operation. It states that the transpose of the product of two matrices is equal to the product of their transposes taken in the reverse order. This property is a common area of testing in competitive exams because it contradicts the intuitive (but incorrect) assumption that $(AB)' = A'B'$.
Mathematical Statement
If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix (such that the product $AB$ is defined), then:
$(AB)' = B'A'$
Dimensional Compatibility
To understand why the order must reverse, consider the dimensions:
$\bullet$ Order of $A = m \times n \Rightarrow$ Order of $A' = n \times m$
$\bullet$ Order of $B = n \times p \Rightarrow$ Order of $B' = p \times n$
$\bullet$ Order of $AB = m \times p \Rightarrow$ Order of $(AB)' = p \times m$
$\bullet$ The product $A'B'$ is not even defined unless $m = p$. However, $B'A'$ is the product of a $p \times n$ matrix and an $n \times m$ matrix, so it is always defined and has order $p \times m$, which matches the order of $(AB)'$.
Formal Proof (Derivation)
Let $A = [a_{ij}]$ be an $m \times n$ matrix and $B = [b_{jk}]$ be an $n \times p$ matrix.
Step 1: Element of $(AB)'$
The $(i, k)^{th}$ element of the product $AB$ is given by:
$(AB)_{ik} = \sum\limits_{j=1}^{n} a_{ij}b_{jk}$
By definition of transpose, the $(k, i)^{th}$ element of $(AB)'$ is the $(i, k)^{th}$ element of $AB$:
$((AB)')_{ki} = \sum\limits_{j=1}^{n} a_{ij}b_{jk}$
Step 2: Element of $B'A'$
Let $B' = [d_{kj}]$ where $d_{kj} = b_{jk}$ and $A' = [c_{ji}]$ where $c_{ji} = a_{ij}$.
The $(k, i)^{th}$ element of the product $B'A'$ is:
$(B'A')_{ki} = \sum\limits_{j=1}^{n} d_{kj}c_{ji}$
Substituting the original elements $b_{jk}$ and $a_{ij}$:
$(B'A')_{ki} = \sum\limits_{j=1}^{n} b_{jk}a_{ij} = \sum\limits_{j=1}^{n} a_{ij}b_{jk}$
Step 3: Conclusion
Comparing the above results, we see that the corresponding elements are identical. Since both matrices also have the same order $p \times m$, we have:
$(AB)' = B'A'$
[Hence Proved]
Example. If $A = \begin{bmatrix} 1 \\ -4 \\ 3 \end{bmatrix}$ and $B = \begin{bmatrix} -1 & 2 & 1 \end{bmatrix}$, verify that $(AB)' = B'A'$.
Answer:
LHS Calculation: $(AB)'$
$AB = \begin{bmatrix} 1 \\ -4 \\ 3 \end{bmatrix} \begin{bmatrix} -1 & 2 & 1 \end{bmatrix} = \begin{bmatrix} (1)(-1) & (1)(2) & (1)(1) \\ (-4)(-1) & (-4)(2) & (-4)(1) \\ (3)(-1) & (3)(2) & (3)(1) \end{bmatrix} = \begin{bmatrix} -1 & 2 & 1 \\ 4 & -8 & -4 \\ -3 & 6 & 3 \end{bmatrix}$
Now, $(AB)' = \begin{bmatrix} -1 & 4 & -3 \\ 2 & -8 & 6 \\ 1 & -4 & 3 \end{bmatrix}$
RHS Calculation: $B'A'$
$B' = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix}$ and $A' = \begin{bmatrix} 1 & -4 & 3 \end{bmatrix}$
$B'A' = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & -4 & 3 \end{bmatrix} = \begin{bmatrix} (-1)(1) & (-1)(-4) & (-1)(3) \\ (2)(1) & (2)(-4) & (2)(3) \\ (1)(1) & (1)(-4) & (1)(3) \end{bmatrix} = \begin{bmatrix} -1 & 4 & -3 \\ 2 & -8 & 6 \\ 1 & -4 & 3 \end{bmatrix}$
Conclusion: Since LHS = RHS, the reversal law is verified.
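The reversal law, and the dimensional argument behind it, can be checked numerically. A minimal NumPy sketch with the column and row matrices from the example above:

```python
import numpy as np

A = np.array([[1], [-4], [3]])     # order 3 x 1
B = np.array([[-1, 2, 1]])         # order 1 x 3

lhs = (A @ B).T                    # (AB)' has order 3 x 3
rhs = B.T @ A.T                    # product of transposes in reverse order
assert np.array_equal(lhs, rhs)    # (AB)' = B'A'
# Note: A.T @ B.T is a 1 x 1 matrix here, not the transpose of AB.
```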
3. Symmetric and Skew-Symmetric Matrices
A. Symmetric Matrix
A Symmetric Matrix is a special type of square matrix that remains identical even after its rows and columns are interchanged. This property of "self-reflection" across the principal diagonal makes it a cornerstone of linear algebra.
Definition and Condition
A square matrix $A = [a_{ij}]$ is said to be Symmetric if the transpose of the matrix is equal to the matrix itself. In other words, $A$ and its transpose $A'$ are indistinguishable.
$A' = A$
[Matrix Condition]
Element-wise Property
At the element level, the entry in the $i^{th}$ row and $j^{th}$ column must be exactly equal to the entry in the $j^{th}$ row and $i^{th}$ column for all values of $i$ and $j$.
$a_{ij} = a_{ji}$
[For all $i, j$]
Structural Characteristics
$\bullet$ Diagonal Elements: There are no restrictions on the diagonal elements ($a_{ii}$). They can be any real or complex number, as they remain in their original positions during a transpose operation.
$\bullet$ Geometric Symmetry: If you imagine the principal diagonal (from top-left to bottom-right) as a mirror, the elements on one side are the mirror images of the elements on the other side.
Standard Symbolic Example ($3 \times 3$):
A general symmetric matrix of order $3$ is often represented in Indian textbooks as:
$A = \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix}$
Notice how $a_{12} = a_{21} = h$, $a_{13} = a_{31} = g$, and $a_{23} = a_{32} = f$.
Example. Verify if the matrix $A = \begin{bmatrix} 2 & -1 & 4 \\ -1 & 5 & 0 \\ 4 & 0 & 3 \end{bmatrix}$ is symmetric.
Answer:
Step 1: Write the given matrix $A$
$A = \begin{bmatrix} 2 & -1 & 4 \\ -1 & 5 & 0 \\ 4 & 0 & 3 \end{bmatrix}$
Step 2: Find the transpose $A'$
Interchanging rows and columns:
$A' = \begin{bmatrix} 2 & -1 & 4 \\ -1 & 5 & 0 \\ 4 & 0 & 3 \end{bmatrix}$
Step 3: Compare $A$ and $A'$
Since every element $a_{ij}$ is equal to $a_{ji}$, we find that $A' = A$.
Conclusion: Matrix $A$ is a Symmetric Matrix.
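The symmetry check above amounts to comparing a matrix with its transpose, which is a one-line test in code. A minimal sketch using NumPy (the helper name `is_symmetric` is ours, not standard notation):

```python
import numpy as np

def is_symmetric(M):
    """A square matrix is symmetric iff it equals its transpose (A' = A)."""
    return np.array_equal(M, M.T)

A = np.array([[2, -1, 4],
              [-1, 5, 0],
              [4, 0, 3]])
assert is_symmetric(A)
```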
B. Skew-Symmetric Matrix
A Skew-Symmetric Matrix (also known as an anti-symmetric matrix) is a square matrix whose transpose is equal to its negative. This property results in a unique structure where elements reflected across the principal diagonal have equal magnitude but opposite signs, and the principal diagonal itself consists entirely of zeros.
Definition and Condition
A square matrix $A = [a_{ij}]$ is called Skew-Symmetric if it satisfies the following matrix equation:
$A' = -A$
Element-wise Property
In terms of individual elements, for all values of $i$ and $j$, the entry in the $i^{th}$ row and $j^{th}$ column must be the negative of the entry in the $j^{th}$ row and $i^{th}$ column.
$a_{ij} = -a_{ji}$
[for all $i, j$]
The Principal Diagonal Property
A fundamental characteristic of every skew-symmetric matrix is that all its diagonal elements are zero.
Proof:
Given: $A$ is a skew-symmetric matrix, therefore $a_{ij} = -a_{ji}$ for all $i, j$.
For the elements of the principal diagonal, the row index is equal to the column index, i.e., $i = j$.
Substituting $j = i$ in the skew-symmetric condition:
$a_{ii} = -a_{ii}$
Adding $a_{ii}$ to both sides:
$a_{ii} + a_{ii} = 0$
$2a_{ii} = 0$
$a_{ii} = 0$
[Hence Proved]
Thus, every diagonal element of a skew-symmetric matrix must be zero.
General Structure
A general $3 \times 3$ skew-symmetric matrix is represented as follows:
$A = \begin{bmatrix} 0 & h & g \\ -h & 0 & f \\ -g & -f & 0 \end{bmatrix}$
Note that the elements $a_{12}=h$ and $a_{21}=-h$ are negatives of each other, as are the other pairs across the diagonal.
Example. Verify if the matrix $B = \begin{bmatrix} 0 & 2 & -3 \\ -2 & 0 & 4 \\ 3 & -4 & 0 \end{bmatrix}$ is skew-symmetric.
Answer:
Step 1: Find the transpose $B'$
$B' = \begin{bmatrix} 0 & -2 & 3 \\ 2 & 0 & -4 \\ -3 & 4 & 0 \end{bmatrix}$
Step 2: Find the negative of matrix $B$
$-B = -1 \times \begin{bmatrix} 0 & 2 & -3 \\ -2 & 0 & 4 \\ 3 & -4 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -2 & 3 \\ 2 & 0 & -4 \\ -3 & 4 & 0 \end{bmatrix}$
Step 3: Comparison
Since $B' = -B$, the matrix $B$ is Skew-Symmetric.
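Both the defining condition $B' = -B$ and the zero-diagonal property can be verified in code. A minimal NumPy sketch (the helper name `is_skew_symmetric` is ours):

```python
import numpy as np

def is_skew_symmetric(M):
    """A square matrix is skew-symmetric iff its transpose equals its negative."""
    return np.array_equal(M.T, -M)

B = np.array([[0, 2, -3],
              [-2, 0, 4],
              [3, -4, 0]])
assert is_skew_symmetric(B)
assert np.all(np.diag(B) == 0)   # every diagonal element must be zero
```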
4. Decomposition of a Square Matrix
In linear algebra, a powerful property exists that allows any square matrix to be broken down into two distinct parts: a Symmetric Matrix and a Skew-Symmetric Matrix. This decomposition is not only possible for all square matrices but is also unique.
Theoretical Statement
Every square matrix $A$ can be uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix.
The Decomposition Formula
For any square matrix $A$, we can write:
$A = \underbrace{\frac{1}{2}(A + A')}_{\text{Symmetric (P)}} + \underbrace{\frac{1}{2}(A - A')}_{\text{Skew-Symmetric (Q)}}$
Derivation and Proof
To establish this result, we first demonstrate that the two parts ($P$ and $Q$) satisfy the required conditions of symmetry and skew-symmetry.
Proof of Symmetry for P:
Let $P = \frac{1}{2}(A + A')$. Taking the transpose of $P$:
$P' = \left[ \frac{1}{2}(A + A') \right]'$
$P' = \frac{1}{2}(A' + (A')')$
$P' = \frac{1}{2}(A' + A) = P$
Since $P' = P$, matrix $P$ is Symmetric.
Proof of Skew-Symmetry for Q:
Let $Q = \frac{1}{2}(A - A')$. Taking the transpose of $Q$:
$Q' = \left[ \frac{1}{2}(A - A') \right]'$
$Q' = \frac{1}{2}(A' - (A')')$
$Q' = \frac{1}{2}(A' - A) = -\frac{1}{2}(A - A') = -Q$
Since $Q' = -Q$, matrix $Q$ is Skew-Symmetric.
Proof of Uniqueness
To prove that this representation is unique, we assume there is another representation and show it leads to the same matrices.
Given: Suppose $A = R + S$ where $R$ is symmetric ($R' = R$) and $S$ is skew-symmetric ($S' = -S$).
Taking transpose on both sides:
$A' = (R + S)'$
$A' = R' + S'$
$A' = R - S$
[Substituting $R'=R$ and $S'=-S$] ... (i)
We already have $A = R + S$ as equation (ii). Now, adding (i) and (ii):
$A + A' = (R + S) + (R - S)$
$A + A' = 2R \Rightarrow R = \frac{1}{2}(A + A')$
Subtracting (i) from (ii):
$A - A' = (R + S) - (R - S)$
$A - A' = 2S \Rightarrow S = \frac{1}{2}(A - A')$
This shows that $R$ and $S$ are exactly the same as $P$ and $Q$ derived earlier. Hence, the representation is unique.
Example. Express the matrix $A = \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix}$ as the sum of a symmetric and a skew-symmetric matrix.
Answer:
Given:
$A = \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix}$
Step 1: Find $A'$
$A' = \begin{bmatrix} 3 & 1 \\ 5 & -1 \end{bmatrix}$
Step 2: Find Symmetric Part (P)
$P = \frac{1}{2}(A + A') = \frac{1}{2} \left( \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} + \begin{bmatrix} 3 & 1 \\ 5 & -1 \end{bmatrix} \right)$
$P = \frac{1}{2} \begin{bmatrix} 6 & 6 \\ 6 & -2 \end{bmatrix} = \begin{bmatrix} 3 & 3 \\ 3 & -1 \end{bmatrix}$
Step 3: Find Skew-Symmetric Part (Q)
$Q = \frac{1}{2}(A - A') = \frac{1}{2} \left( \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} - \begin{bmatrix} 3 & 1 \\ 5 & -1 \end{bmatrix} \right)$
$Q = \frac{1}{2} \begin{bmatrix} 0 & 4 \\ -4 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}$
Step 4: Express $A$ as the sum
$A = P + Q$
$\begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 3 & 3 \\ 3 & -1 \end{bmatrix} + \begin{bmatrix} 0 & 2 \\ -2 & 0 \end{bmatrix}$
Verification shows that $P$ is symmetric ($P' = P$) and $Q$ is skew-symmetric ($Q' = -Q$).
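The decomposition formula translates directly into code. A minimal NumPy sketch, computing the symmetric part $P$ and skew-symmetric part $Q$ for the matrix in the example above:

```python
import numpy as np

A = np.array([[3, 5],
              [1, -1]], dtype=float)

P = (A + A.T) / 2    # symmetric part
Q = (A - A.T) / 2    # skew-symmetric part

assert np.array_equal(P, P.T)        # P is symmetric
assert np.array_equal(Q.T, -Q)       # Q is skew-symmetric
assert np.array_equal(P + Q, A)      # A = P + Q
```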
Elementary Operations on a Matrix
The operations performed on the rows or columns of a matrix to transform it into another form are called elementary operations or transformations. These operations are fundamental in finding the inverse of a matrix and solving systems of linear equations. There are six types of elementary operations: three due to rows and three due to columns.
Types of Elementary Operations
1. The interchange of any two rows (or columns)
This operation involves swapping all the elements of any two chosen rows or columns within a matrix. This transformation is often the first step in reducing a matrix to its Row Echelon Form or Identity Matrix, especially when the first element ($a_{11}$) is zero and needs to be replaced with a non-zero value from another row.
The symbolic notation used to represent the interchange of the $i^{th}$ row and the $j^{th}$ row is $R_i \leftrightarrow R_j$. Similarly, the interchange of the $i^{th}$ column and the $j^{th}$ column is denoted by $C_i \leftrightarrow C_j$.
$R_i \leftrightarrow R_j $ or $ C_i \leftrightarrow C_j$
[Interchange Notation]
It is important to note that interchanging rows or columns produces an Equivalent Matrix. While the arrangement of the numbers changes, essential properties such as the order and the rank of the matrix remain the same.
Example 1. Given a $3 \times 3$ matrix $A = \begin{bmatrix} 0 & 2 & 1 \\ 1 & 2 & 1 \\ 2 & 4 & 5 \end{bmatrix}$. Apply the elementary row operation $R_1 \leftrightarrow R_2$ to obtain a new matrix $B$.
Answer:
Given:
Matrix $A = \begin{bmatrix} 0 & 2 & 1 \\ 1 & 2 & 1 \\ 2 & 4 & 5 \end{bmatrix}$
To Find:
The matrix $B$ after interchanging the first and second rows.
Solution:
To perform the operation $R_1 \leftrightarrow R_2$, we take all elements of Row 1 and move them to Row 2, and simultaneously move all elements of Row 2 to Row 1. Row 3 remains unchanged.
Row 1 elements: $(0, 2, 1)$
Row 2 elements: $(1, 2, 1)$
After swapping:
$B = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 1 \\ 2 & 4 & 5 \end{bmatrix}$
[By applying $R_1 \leftrightarrow R_2$]
Here, the resulting matrix $B$ is said to be equivalent to matrix $A$.
Example 2. Consider the matrix $C = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$. Apply the elementary column operation $C_1 \leftrightarrow C_2$.
Answer:
Solution:
In this case, we swap the vertical columns. The elements of Column 1 $(5, 7)$ will swap positions with the elements of Column 2 $(6, 8)$. Since the prime symbol is reserved for transposes in this chapter, we call the resulting matrix $D$:
$D = \begin{bmatrix} 6 & 5 \\ 8 & 7 \end{bmatrix}$
[By applying $C_1 \leftrightarrow C_2$]
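Both interchange operations can be performed in code with integer-array (fancy) indexing. A minimal NumPy sketch using the matrices from the two examples above:

```python
import numpy as np

# Example 1: row interchange R1 <-> R2.
A = np.array([[0, 2, 1],
              [1, 2, 1],
              [2, 4, 5]])
B = A.copy()
B[[0, 1]] = B[[1, 0]]          # swaps the first two rows

# Example 2: column interchange C1 <-> C2.
C = np.array([[5, 6],
              [7, 8]])
D = C.copy()
D[:, [0, 1]] = D[:, [1, 0]]    # swaps the two columns
```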
2. Multiplication of the elements of any row (or column) by a non-zero real number
This operation involves taking every element in a specific row (or column) and multiplying it by a constant scalar $k$. It is important to note that this scalar $k$ must be non-zero ($k \neq 0$). If we were to multiply a row by zero, all elements in that row would become zero, effectively destroying the information contained in that row and changing the rank of the matrix.
The symbolic representation for this transformation is as follows:
$R_i \rightarrow kR_i$
[where $k \neq 0$]
Similarly, for column operations, the notation is:
$C_i \rightarrow kC_i$
[where $k \neq 0$]
This operation is frequently used to make the leading element (pivot) of a row equal to $1$, which simplifies subsequent subtraction or addition steps.
Example. Consider the matrix $B = \begin{bmatrix} 4 & 5 & 6 \\ 1 & 2 & 4 \end{bmatrix}$. Apply the elementary column operation $C_3 \rightarrow \frac{1}{2} C_3$ to obtain the transformed matrix $D$.
Answer:
Given:
Matrix $B = \begin{bmatrix} 4 & 5 & 6 \\ 1 & 2 & 4 \end{bmatrix}$
Operation: $C_3 \rightarrow \frac{1}{2} C_3$
To Find:
The resulting matrix $D$ after multiplying the elements of the third column by the scalar $\frac{1}{2}$. (We write $D$ rather than $B'$, since the prime denotes a transpose in this chapter.)
Solution:
In matrix $B$, the third column $C_3$ consists of the elements $6$ and $4$. We apply the multiplication as follows:
For the first row, third column: $\frac{1}{2} \times 6 = 3$
For the second row, third column: $\frac{1}{2} \times 4 = 2$
Using the cancellation format for clarity:
$D = \begin{bmatrix} 4 & 5 & \frac{\cancel{6}^3}{2} \\ 1 & 2 & \frac{\cancel{4}^2}{2} \end{bmatrix} = \begin{bmatrix} 4 & 5 & 3 \\ 1 & 2 & 2 \end{bmatrix}$
[Final Transformed Matrix]
The columns $C_1$ and $C_2$ remain unchanged during this process. The matrix $D$ is equivalent to $B$.
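The column-scaling operation translates into a single slice assignment in code. A minimal NumPy sketch with the matrix from the example above:

```python
import numpy as np

# C3 -> (1/2) C3: scale only the third column.
B = np.array([[4, 5, 6],
              [1, 2, 4]], dtype=float)
B[:, 2] = 0.5 * B[:, 2]    # columns C1 and C2 are untouched
```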
3. Addition to the elements of any row (or column), the corresponding elements of any other row (or column) multiplied by any non-zero real number
This is the most versatile and frequently utilized operation in matrix theory. It involves modifying a target row (or column) by adding to it a scalar multiple of another row (or column). This operation is specifically designed to create zeros in desired positions.
The symbolic representation for this transformation is as follows:
$R_i \rightarrow R_i + kR_j$
[Row transformation]
Similarly, for column operations, the notation is:
$C_i \rightarrow C_i + kC_j$
[Column transformation]
In these notations, $k$ is any non-zero real number. It is important to note that the row being used as a reference ($R_j$) remains unchanged in the matrix; only the target row ($R_i$) is modified.
Example 1. Given a $2 \times 2$ matrix $C = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$, apply the elementary row operation $R_2 \rightarrow R_2 - 3R_1$ to make the element at position $c_{21}$ equal to zero.
Answer:
Given:
Matrix $C = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$
Operation: $R_2 \rightarrow R_2 + (-3)R_1$
To Find:
The resulting matrix $C'$ after performing the specified row addition.
Solution:
We need to update the elements of the second row ($R_2$) by subtracting three times the corresponding elements of the first row ($R_1$).
Step 1: Calculate the new value for the first element of $R_2$ ($c_{21}$):
$3 + (-3)(1) = 0$
(New $c_{21}$)
Step 2: Calculate the new value for the second element of $R_2$ ($c_{22}$):
$4 + (-3)(2) = 4 - 6 = -2$
(New $c_{22}$)
Putting these values back into the matrix:
$C' = \begin{bmatrix} 1 & 2 \\ 0 & -2 \end{bmatrix}$
[Transformed Matrix]
Thus, the operation successfully converted the element $3$ into $0$, making the matrix upper triangular.
Example 2. Let $A = \begin{bmatrix} 1 & -2 & 1 \\ 3 & 4 & 5 \\ 0 & 3 & -2 \end{bmatrix}$. Apply the following elementary operations:
(a) $R_1 \leftrightarrow R_2$ to obtain matrix $B$.
(b) $R_1 \rightarrow 3R_1$ to obtain matrix $D$.
(c) $R_2 \rightarrow R_2 + 4R_1$ to obtain matrix $F$.
Answer:
(a) Interchanging the first and second rows ($R_1 \leftrightarrow R_2$):
$B = \begin{bmatrix} 3 & 4 & 5 \\ 1 & -2 & 1 \\ 0 & 3 & -2 \end{bmatrix}$
[Row 1 and Row 2 swapped]
(b) Multiplying the first row of $A$ by 3 ($R_1 \rightarrow 3R_1$):
$D = \begin{bmatrix} 3 & -6 & 3 \\ 3 & 4 & 5 \\ 0 & 3 & -2 \end{bmatrix}$
[Elements of $R_1$ multiplied by 3]
(c) Adding 4 times the first row to the second row ($R_2 \rightarrow R_2 + 4R_1$):
Calculation for $R_2$: $[3 + 4(1), 4 + 4(-2), 5 + 4(1)] = [7, -4, 9]$
$F = \begin{bmatrix} 1 & -2 & 1 \\ 7 & -4 & 9 \\ 0 & 3 & -2 \end{bmatrix}$
[Result of $R_2 + 4R_1$]
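The row-addition operation $R_i \rightarrow R_i + kR_j$ is likewise a one-line update in code. A minimal NumPy sketch for part (c) of the example above:

```python
import numpy as np

A = np.array([[1, -2, 1],
              [3, 4, 5],
              [0, 3, -2]])

# R2 -> R2 + 4 R1: only the target row changes; R1 is untouched.
F = A.copy()
F[1] = F[1] + 4 * F[0]
```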
Equivalent Matrices
Two matrices $A$ and $B$ are said to be equivalent if one can be obtained from the other by applying a finite number of elementary row or column operations. In mathematical notation, this relationship is expressed using the tilde symbol ($\sim$).
$A \sim B$
[Matrices $A$ and $B$ are equivalent]
It is important to understand that equivalent matrices have the same order and the same rank. For example, if we take a matrix $A$ and perform a row interchange ($R_1 \leftrightarrow R_2$) to get matrix $B$, then $A$ and $B$ are equivalent.
Summary of Notations
| Operation Type | Row Notation | Column Notation |
|---|---|---|
| Interchange of any two rows/columns | $R_i \leftrightarrow R_j$ | $C_i \leftrightarrow C_j$ |
| Multiplication by a non-zero scalar $k$ | $R_i \rightarrow kR_i$ | $C_i \rightarrow kC_i$ |
| Addition of a multiple of another row/column | $R_i \rightarrow R_i + kR_j$ | $C_i \rightarrow C_i + kC_j$ |
Invertible Matrices
In the study of Linear Algebra, a square matrix is considered invertible (or non-singular) if there exists another square matrix that, when multiplied with it, results in the Identity matrix. This concept is analogous to the reciprocal of a non-zero number in arithmetic.
Let $A$ be a square matrix of order $n$. $A$ is called invertible if there exists a square matrix $B$ of the same order $n$ such that:
$AB = BA = I$
Where $I$ is the identity matrix of order $n$. In such a case, matrix $B$ is called the inverse of matrix $A$ and is denoted by $A^{-1}$.
Important Remarks
1. Square Matrix Requirement: A rectangular matrix does not possess an inverse. For the products $AB$ and $BA$ to be defined and equal, both $A$ and $B$ must be square matrices of the same order.
2. Mutual Inverse: If $B$ is the inverse of $A$, then by symmetry, $A$ is also the inverse of $B$.
3. Condition for Invertibility: Not every square matrix has an inverse. If the determinant of a matrix is zero, it is called a singular matrix and is not invertible.
Example. Verify if $A = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix}$ and $B = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}$ are inverses of each other.
Answer:
Solution:
To verify, we calculate the product $AB$:
$AB = \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix}$
$AB = \begin{bmatrix} (2)(2) + (3)(-1) & (2)(-3) + (3)(2) \\ (1)(2) + (2)(-1) & (1)(-3) + (2)(2) \end{bmatrix}$
$AB = \begin{bmatrix} 4-3 & -6+6 \\ 2-2 & -3+4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$
Similarly, we calculate $BA$:
$BA = \begin{bmatrix} 2 & -3 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & 3 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 4-3 & 6-6 \\ -2+2 & -3+4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$
Since $AB = BA = I$, matrix $A$ is invertible and $B = A^{-1}$.
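The verification $AB = BA = I$ is straightforward to reproduce in code. A minimal NumPy sketch with the matrices from the example above:

```python
import numpy as np

A = np.array([[2, 3],
              [1, 2]])
B = np.array([[2, -3],
              [-1, 2]])
I = np.eye(2, dtype=int)

assert np.array_equal(A @ B, I)    # AB = I
assert np.array_equal(B @ A, I)    # BA = I, so B = A^{-1}
```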
Uniqueness of Inverse
Theorem: If a square matrix has an inverse, then that inverse is unique.
Proof:
Let $A$ be an invertible square matrix of order $n$. Suppose $B$ and $C$ are two different inverses of $A$.
By the definition of inverse:
$AB = BA = I$
[As $B$ is inverse of $A$]
$AC = CA = I$
[As $C$ is inverse of $A$]
Now, consider the matrix $B$. We can write:
$B = BI$
$B = B(AC)$
$B = (BA)C$
[By Associative Law]
$B = IC$
$B = C$
[Using above equation]
This proves that the inverse of a square matrix is unique.
Inverse of a Matrix by Elementary Operations
The inverse of an invertible matrix $A$ can be found using either Elementary Row Operations or Elementary Column Operations. However, it is mandatory to use only one type of operation throughout the process.
1. Using Elementary Row Operations
To find the inverse of a square matrix $A$ using Elementary Row Operations, we utilize the property of the identity matrix. Any matrix multiplied by the identity matrix remains unchanged. For a square matrix $A$, we can write the fundamental relation:
$A = IA$
In this method, we perform a sequence of row transformations on the matrix $A$ on the Left Hand Side (LHS) until it is reduced to the Identity Matrix ($I$). Crucially, every operation performed on the LHS must be simultaneously performed on the identity matrix $I$ situated on the Right Hand Side (RHS). The matrix $A$ on the extreme right of the RHS remains untouched throughout the process.
The Mathematical Logic
When we apply a row operation, it is mathematically equivalent to pre-multiplying by an elementary matrix. Suppose we apply a series of row operations represented by elementary matrices $E_1, E_2, \dots, E_n$ such that:
$(E_n \dots E_2 E_1) A = I$
[LHS becomes Identity]
Applying the same operations to the identity matrix on the RHS:
$(E_n \dots E_2 E_1) I = B$
[RHS Identity becomes B]
Since $(E_n \dots E_1) = A^{-1}$, it follows that $B = A^{-1}$. Thus, the final matrix obtained on the RHS is the unique inverse of the original matrix.
Step-by-Step Procedure
1. Write the matrix equation $A = IA$.
2. Use row transformations ($R_i \leftrightarrow R_j$, $R_i \rightarrow kR_i$, or $R_i \rightarrow R_i + kR_j$) to transform the LHS into an identity matrix.
3. Always target the elements in a specific order: first make $a_{11} = 1$, then make other elements in the first column zero, then move to $a_{22} = 1$, and so on.
4. Caution: If at any stage during the operations, all elements of a row in the LHS matrix become zero, then $A^{-1}$ does not exist.
Example. Find the inverse of matrix $A = \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}$ using elementary row operations.
Answer:
Given:
$A = \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}$
To Find:
$A^{-1}$ using row operations.
Solution:
We start by writing the equation $A = IA$:
$\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} A$
Step 1: To make the element $a_{21}$ zero, apply $R_2 \rightarrow R_2 - 3R_1$.
LHS: $7 - 3(2) = 1$ and $3 - 3(1) = 0$.
RHS: $0 - 3(1) = -3$ and $1 - 3(0) = 1$.
$\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -3 & 1 \end{bmatrix} A$
[Applying $R_2 \rightarrow R_2 - 3R_1$]
Step 2: To make the element $a_{12}$ zero, apply $R_1 \rightarrow R_1 - 2R_2$.
LHS: $1 - 2(0) = 1$ and $2 - 2(1) = 0$.
RHS: $1 - 2(-3) = 7$ and $0 - 2(1) = -2$.
$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix} A$
[Applying $R_1 \rightarrow R_1 - 2R_2$]
Now, the equation is in the form $I = BA$. Therefore:
$A^{-1} = \begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix}$
[Final Result]
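The row-operation procedure above is exactly the Gauss-Jordan method: every operation applied to $A$ is also applied to an identity matrix, which accumulates $A^{-1}$. The following is a minimal sketch of that idea in NumPy (the function name `inverse_by_row_ops` is ours, and the pivoting is kept deliberately simple):

```python
import numpy as np

def inverse_by_row_ops(A):
    """Reduce A to I by elementary row operations, applying the
    same operations to an identity matrix to accumulate A^{-1}."""
    M = A.astype(float).copy()
    n = M.shape[0]
    inv = np.eye(n)
    for i in range(n):
        if M[i, i] == 0:
            # R_i <-> R_j: bring a non-zero pivot up from a lower row.
            rows = np.nonzero(M[i:, i])[0]
            if rows.size == 0:
                raise ValueError("a column has no pivot; inverse does not exist")
            j = i + rows[0]
            M[[i, j]] = M[[j, i]]
            inv[[i, j]] = inv[[j, i]]
        # R_i -> (1/pivot) R_i: make the pivot equal to 1.
        pivot = M[i, i]
        M[i] /= pivot
        inv[i] /= pivot
        # R_k -> R_k - m R_i: clear the rest of the pivot column.
        for k in range(n):
            if k != i:
                factor = M[k, i]
                M[k] -= factor * M[i]
                inv[k] -= factor * inv[i]
    return inv

A = np.array([[1, 2],
              [3, 7]])
# Matches the worked example: A^{-1} = [[7, -2], [-3, 1]].
A_inv = inverse_by_row_ops(A)
```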
2. Using Elementary Column Operations
The Elementary Column Operation method is an alternative to row operations for finding the inverse of a square matrix. While row operations involve pre-multiplication by elementary matrices, column operations rely on post-multiplication. This is why we must start with a different fundamental identity compared to the row method.
For any square matrix $A$ of order $n$, we can express it as the product of itself and the identity matrix $I$ as follows:
$A = AI$
In this approach, we apply a sequence of column transformations on the matrix $A$ located on the Left Hand Side (LHS) to reduce it to the Identity Matrix ($I$). Simultaneously, we apply the exact same sequence of column transformations to the identity matrix $I$ located on the Right Hand Side (RHS). Note that the matrix $A$ on the RHS remains unchanged.
The Mathematical Derivation
Suppose $A$ is an invertible matrix. By performing elementary column operations, we are essentially post-multiplying $A$ by elementary matrices $E_1, E_2, \dots, E_n$. Our goal is to find a sequence of such operations such that:
$A(E_1 E_2 \dots E_n) = I$
[LHS reduced to Identity]
Applying these same operations to the identity matrix on the RHS:
$I(E_1 E_2 \dots E_n) = B$
[RHS Identity becomes B]
Since the product $(E_1 E_2 \dots E_n)$ effectively represents $A^{-1}$, the matrix $B$ generated on the RHS is the unique inverse of $A$.
$A^{-1} = B$
[where $I = AB$]
Operational Procedure
1. Setup: Write the equation $A = AI$.
2. Transformation: Use column operations ($C_i \leftrightarrow C_j$, $C_i \rightarrow kC_i$, or $C_i \rightarrow C_i + kC_j$) to transform the LHS matrix $A$ into $I$.
3. Simultaneity: Apply every operation performed on the LHS to the $I$ on the RHS.
4. Order of Operations: It is generally recommended to fix the matrix row by row when using column operations. First, make $a_{11} = 1$, then make other elements in the first row zero using column transformations.
Example. Find the inverse of matrix $A = \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}$ using elementary column operations.
Answer:
Given:
$A = \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}$
To Find:
$A^{-1}$ using column transformations.
Solution:
We write the equation $A = AI$:
$\begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} = \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
(Starting Equation)
Step 1: To make the element $a_{12}$ zero, apply $C_2 \rightarrow C_2 - 3C_1$.
LHS: $3 - 3(1) = 0$ and $7 - 3(2) = 1$.
RHS: $0 - 3(1) = -3$ and $1 - 3(0) = 1$.
$\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} = A \begin{bmatrix} 1 & -3 \\ 0 & 1 \end{bmatrix}$
[Applying $C_2 \rightarrow C_2 - 3C_1$]
Step 2: To make the element $a_{21}$ zero, apply $C_1 \rightarrow C_1 - 2C_2$.
LHS: $1 - 2(0) = 1$ and $2 - 2(1) = 0$.
RHS: $1 - 2(-3) = 7$ and $0 - 2(1) = -2$.
$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = A \begin{bmatrix} 7 & -3 \\ -2 & 1 \end{bmatrix}$
[Applying $C_1 \rightarrow C_1 - 2C_2$]
The equation is now in the form $I = AB$, where $B$ is the inverse.
$A^{-1} = \begin{bmatrix} 7 & -3 \\ -2 & 1 \end{bmatrix}$
Important Observation for Competitions
During the process of finding the inverse of a matrix using elementary transformations, there are specific indicators that tell us whether an inverse actually exists.
While performing a sequence of elementary row operations on $A = IA$ (or elementary column operations on $A = AI$), if we arrive at a stage where all the elements in one or more rows (or columns) of the matrix $A$ on the Left Hand Side (LHS) become zero, then the inverse of the matrix does not exist.
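This failure mode is easy to demonstrate numerically. A minimal NumPy sketch: in the singular matrix below, $R_2 = 2R_1$, so the operation $R_2 \rightarrow R_2 - 2R_1$ immediately produces a zero row on the LHS, signalling that no inverse exists.

```python
import numpy as np

A = np.array([[1, 2],
              [2, 4]], dtype=float)   # second row is twice the first

A[1] = A[1] - 2 * A[0]                # R2 -> R2 - 2 R1
assert np.all(A[1] == 0)              # zero row => A^{-1} does not exist
assert np.linalg.matrix_rank(A) < 2   # rank has dropped below the order
```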