Chapter 4 Determinants (Concepts)
Building upon the study of matrices, this chapter introduces the Determinant, a unique scalar value associated only with square matrices. Determinants are powerful tools in linear algebra, used to determine if a matrix is invertible and to solve complex systems of linear equations.
The calculation of a determinant depends on the matrix order. For a $2 \times 2$ matrix, it is simply $\mathbf{|A| = ad - bc}$. For $3 \times 3$ matrices and higher, we utilize Minors and Cofactors to expand the determinant along a specific row or column. We also explore the Adjoint of a Matrix ($\text{adj } A$), which leads to the crucial formula for the Inverse of a Matrix: $$\mathbf{A^{-1} = \frac{1}{|A|} (\text{adj } A)}$$ This formula is valid only for non-singular matrices, where $|A| \neq 0$.
Beyond matrix inversion, determinants are applied to calculate the area of a triangle and to test for the collinearity of three points. This page, prepared by learningspot.co, features images for visualisation of concepts, flowcharts, mindmaps, and examples to help students master these analytical techniques effectively.
Introduction to Determinants
The study of Determinants originated in the late 17th century, primarily through the work of the mathematician Gottfried Wilhelm Leibniz in 1693. Determinants were initially developed as a clever algebraic shortcut to determine whether a system of linear equations has a unique solution and to find that solution quickly without lengthy elimination processes.
While a Matrix is an arrangement of numbers (a mathematical object), a Determinant is a specific numerical value (a scalar) that is uniquely associated with every square matrix. This value provides critical information about the matrix, such as its invertibility and the volume scaling factor of the linear transformation it represents.
Determinants and Systems of Linear Equations
To understand the origin of the determinant, consider a general system of two linear equations with two variables, $x$ and $y$:
$a_1x + b_1y = c_1$
…(i)
$a_2x + b_2y = c_2$
…(ii)
To solve for $x$ and $y$ using the method of elimination, we multiply equation (i) by $b_2$ and equation (ii) by $b_1$:
$a_1b_2x + b_1b_2y = c_1b_2$
…(iii)
$a_2b_1x + b_1b_2y = c_2b_1$
…(iv)
Subtracting equation (iv) from equation (iii), we get:
$(a_1b_2 - a_2b_1)x = c_1b_2 - c_2b_1$, so that $x = \dfrac{c_1b_2 - c_2b_1}{a_1b_2 - a_2b_1}$
From this result, it is clear that for $x$ (and similarly $y$) to have a unique and well-defined solution, the denominator must not be zero. The specific expression that "determines" this uniqueness is:
$a_1b_2 - a_2b_1 \neq 0$
…(v)
This numerical value, $a_1b_2 - a_2b_1$, is called the determinant of the coefficient matrix $A = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix}$.
Example. Given the system of equations $2x + 3y = 5$ and $4x + 6y = 10$, determine if the system has a unique solution by calculating the determinant of its coefficient matrix.
Answer:
Given:
Coefficient $a_1 = 2, b_1 = 3$
Coefficient $a_2 = 4, b_2 = 6$
To Find:
The value of the determinant $a_1b_2 - a_2b_1$.
Solution:
Applying the determinant formula for a $2 \times 2$ matrix:
Determinant = $(2 \times 6) - (4 \times 3)$
Determinant = $12 - 12$
$\text{Determinant} = 0$
Conclusion: Since the determinant is zero, the system of equations does not have a unique solution. It may have infinitely many solutions or no solution at all.
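The uniqueness test above is easy to mechanise. A minimal Python sketch (the function name `has_unique_solution` is our own, chosen for illustration):

```python
def has_unique_solution(a1, b1, a2, b2):
    """A 2x2 system a1*x + b1*y = c1, a2*x + b2*y = c2 has a
    unique solution exactly when a1*b2 - a2*b1 != 0."""
    return a1 * b2 - a2 * b1 != 0

# The system from the example: 2x + 3y = 5 and 4x + 6y = 10
print(has_unique_solution(2, 3, 4, 6))   # False, since 2*6 - 4*3 = 0
```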
Definition of Determinant
In Linear Algebra, a determinant is a scalar value that is a function of the entries of a square matrix. It characterizes many properties of the matrix and the linear map represented by that matrix. Specifically, the determinant helps in determining whether a matrix is invertible, and its magnitude gives the factor by which the associated linear map scales volumes (its sign records whether orientation is preserved).
The Mathematical Definition
To every square matrix $A = [a_{ij}]$ of order $n$, where the entries $a_{ij}$ are real or complex numbers, we can associate a unique number (real or complex) called the determinant of the matrix $A$. This association can be viewed as a mathematical function.
If $M$ is the set of all square matrices and $R$ is the set of real numbers, then the determinant function $f$ is defined as:
$f: M \rightarrow R$
[Functional Mapping]
This is expressed as $f(A) = |A|$, where $|A|$ represents the determinant value of matrix $A$.
Notations and Representation
There are several ways to denote the determinant of a matrix $A$:
1. $\det A$
2. $|A|$
3. $\Delta$ (Delta - commonly used in physics and engineering contexts)
If we have a square matrix $A$ of order 2:
$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$
The determinant of $A$ is written by replacing the square brackets with vertical bars:
$|A| = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$
Important Remarks
1. The "Absolute Value" Confusion: In the context of matrices, the symbol $|A|$ is read as "determinant of A". It does not mean the absolute value of $A$. While absolute value results in a non-negative number, a determinant can be negative, zero, or even a complex number.
2. Requirement of Square Shape: Determinants are strictly defined only for square matrices ($n \times n$). A rectangular matrix ($m \times n$ where $m \neq n$) does not have a determinant.
Value of a Determinant
Determinant of Order One
The simplest form of a determinant is that of a square matrix of order 1. A matrix of order $1 \times 1$ contains only one element, representing a single row and a single column.
Mathematical Definition
Let $A = [a]$ be a square matrix of order 1. The determinant of matrix A (denoted by $\det A$ or $|A|$) is defined as the value of the element itself. In other words, if a matrix has only one entry, its determinant is simply that entry.
$\det A = |a| = a$
This rule applies regardless of whether the number is positive, negative, or zero. It is the most basic building block of determinant theory, which eventually scales up to higher-order matrices through expansion.
Crucial Distinction: Determinant vs. Absolute Value
Students often confuse the vertical bars $| \ |$ used for determinants with the "modulus" or "absolute value" function used in real numbers. It is vital to understand the difference to avoid calculation errors.
| Context | Symbol | Input | Output for $-7$ |
|---|---|---|---|
| Determinant | $|A|$ | Square Matrix $[a]$ | $|-7| = -7$ |
| Absolute Value | $|x|$ | Real Number $x$ | $|-7| = 7$ |
As shown in the table above, the determinant of a $1 \times 1$ matrix preserves the sign of the element, whereas the absolute value function always returns a non-negative result.
Example. Find the determinant of the following matrices:
(a) $A = [5]$
(b) $B = [-12]$
Answer:
(a) For matrix $A = [5]$:
Given: A square matrix $A$ of order 1 with element $a_{11} = 5$.
To Find: $|A|$.
Solution: By the definition of a first-order determinant, the value is the element itself.
$|A| = |5| = 5$
[Rule: $|a|=a$]
(b) For matrix $B = [-12]$:
Given: A square matrix $B$ of order 1 with element $b_{11} = -12$.
Solution: Applying the same rule $|a| = a$:
$|B| = |-12| = -12$
[Determinant preserves sign]
Determinant of Order Two
A square matrix of order 2 consists of two rows and two columns, containing a total of four elements. The determinant associated with this matrix provides a single numerical value that is used in finding areas, solving linear equations, and determining the invertibility of matrices.
Definition and Calculation
Let $A$ be a square matrix of order 2, represented as:
$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$
The value of the determinant of $A$ is obtained by the difference between the product of the elements of the principal diagonal (from top-left to bottom-right) and the product of the elements of the secondary diagonal (from bottom-left to top-right).
$\det A = |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}$
This is often visualized as a cross-multiplication process where the downward product is taken as positive and the upward product is taken as negative.
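The cross-multiplication rule translates directly into code. A minimal sketch, with the matrix given as a nested list (the helper name `det2` is our own):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a11, a12], [a21, a22]]:
    principal-diagonal product minus secondary-diagonal product."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

print(det2([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```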
Example 1. Evaluate the determinant $|A| = \begin{vmatrix} 3 & -2 \\ 5 & 4 \end{vmatrix}$.
Answer:
Given:
A $2 \times 2$ determinant with elements:
$a_{11} = 3, a_{12} = -2, a_{21} = 5, a_{22} = 4$
To Find:
The numerical value of $|A|$.
Solution:
Using the formula for the expansion of a second-order determinant:
$\det A = (a_{11} \times a_{22}) - (a_{21} \times a_{12})$
[Cross-multiplication rule]
Substituting the given values:
$|A| = (3 \times 4) - (5 \times -2)$
$|A| = 12 - (-10)$
$|A| = 12 + 10 = 22$
[Final value of determinant]
Example 2. Evaluate $|B| = \begin{vmatrix} x & x+1 \\ x-1 & x \end{vmatrix}$.
Answer:
Solution:
Applying the cross-multiplication rule:
$|B| = (x \times x) - (x-1)(x+1)$
$|B| = x^2 - (x^2 - 1)$
$|B| = x^2 - x^2 + 1 = 1$
[Using algebraic identity $(a-b)(a+b) = a^2 - b^2$]
Determinant of Order Three
A determinant of order 3 is associated with a square matrix of order $3 \times 3$. It consists of $3^2 = 9$ elements arranged in three horizontal rows and three vertical columns. The evaluation of a third-order determinant involves a process called "Expansion", where the determinant is reduced to a combination of second-order determinants.
Consider the general square matrix of order 3:
$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
Expansion of the Determinant
The value of the determinant $|A|$ can be obtained by expanding it along any one of its three rows or three columns. The most common method is expansion along the First Row ($R_1$).
The General Formula (Expansion along $R_1$)
To expand along the first row, we multiply each element of $R_1$ by the determinant of the $2 \times 2$ matrix that remains after deleting the row and column in which that element is located. We also apply a specific Sign Convention.
$|A| = a_{11} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} + a_{13} \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix}$
The signs $(+, -, +)$ are derived from the formula $(-1)^{i+j}$, where $i$ is the row number and $j$ is the column number.
Sign Convention Table
For a determinant of order 3, the signs to be prefixed to the product of elements and their corresponding minors are as follows:
| Column 1 | Column 2 | Column 3 |
|---|---|---|
| $+$ | $-$ | $+$ |
| $-$ | $+$ | $-$ |
| $+$ | $-$ | $+$ |
Detailed Step-by-Step Expansion
To find the value, solve the $2 \times 2$ determinants (minors) as follows:
$|A| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$
Notes for Exams
1. Zero Determinant: If any two rows or any two columns of a determinant are identical or proportional, the value of the determinant is zero.
2. Diagonal Matrix: The determinant of a diagonal matrix is simply the product of its diagonal elements ($a_{11} \times a_{22} \times a_{33}$).
3. Efficiency: Always look for the row or column with the most zeros to expand. This reduces the number of $2 \times 2$ determinants you need to solve.
Working Rule for Expansion
1. Assign signs to the elements of the row/column chosen for expansion following the pattern:
$\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}$
2. Multiply each signed element by the determinant of order 2 obtained after deleting the row and column in which the element occurs.
Example. Evaluate the determinant: $\Delta = \begin{vmatrix} 1 & 2 & 4 \\ -1 & 3 & 0 \\ 4 & 1 & 0 \end{vmatrix}$
Answer:
Given:
$\Delta = \begin{vmatrix} 1 & 2 & 4 \\ -1 & 3 & 0 \\ 4 & 1 & 0 \end{vmatrix}$
Solution:
To simplify the calculation, we can expand along Column 3 ($C_3$) because it contains the maximum number of zeros.
Expanding along $C_3$ using signs $(+, -, +)$ for the third column:
$\Delta = 4 \begin{vmatrix} -1 & 3 \\ 4 & 1 \end{vmatrix} - 0 \begin{vmatrix} 1 & 2 \\ 4 & 1 \end{vmatrix} + 0 \begin{vmatrix} 1 & 2 \\ -1 & 3 \end{vmatrix}$
$\Delta = 4 [(-1 \times 1) - (4 \times 3)] - 0 + 0$
$\Delta = 4 [-1 - 12]$
$\Delta = 4 [-13]$
$\Delta = -52$
[Final result]
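The same answer can be checked mechanically. A short Python sketch that expands a $3 \times 3$ determinant along the first row instead of $C_3$ (the helper names `det2` and `det3` are our own); by the expansion theorem both choices must agree:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    """Expand along R1: |A| = a11*M11 - a12*M12 + a13*M13,
    with the (+, -, +) signs coming from (-1)**(i+j)."""
    total = 0
    for j in range(3):
        # minor M_1j: delete the first row and column j
        minor = [[row[c] for c in range(3) if c != j] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det2(minor)
    return total

print(det3([[1, 2, 4], [-1, 3, 0], [4, 1, 0]]))   # -52
```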
Determinant of Order Four or Higher Order
The method of evaluating determinants of order 4 or higher is a direct extension of the method used for order 3. A determinant of order $n$ is expanded into a sum of $n$ determinants of order $(n-1)$. This recursive process continues until we reach determinants of order 2, which can be easily calculated using cross-multiplication.
A determinant of order 4 contains $4^2 = 16$ elements. Its expansion along the first row involves four third-order determinants. Following the factorial rule, the total number of terms in the final expanded expression of an $n^{th}$ order determinant is $n!$. Therefore, a $4 \times 4$ determinant has $4 \times 3 \times 2 \times 1 = 24$ terms.
Sign Convention for Order 4
Just as in lower orders, the sign of each element $a_{ij}$ is determined by the position factor $(-1)^{i+j}$. For a $4 \times 4$ determinant, the sign pattern follows a checkerboard arrangement:
$\begin{vmatrix} + & - & + & - \\ - & + & - & + \\ + & - & + & - \\ - & + & - & + \end{vmatrix}$
[Sign Pattern for Order 4]
General Expansion Formula
Let $A$ be a determinant of order 4. Expanding along the first row ($R_1$):
$\Delta = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13} - a_{14}M_{14}$
Where $M_{11}, M_{12}, M_{13}, \text{ and } M_{14}$ are the minors of the respective elements, each being a determinant of order 3.
Example. Evaluate the fourth-order determinant: $\Delta = \begin{vmatrix} 1 & 0 & 2 & 0 \\ 2 & 3 & 0 & 1 \\ 0 & 4 & 0 & 0 \\ 1 & 2 & 1 & 3 \end{vmatrix}$
Answer:
Step 1: Choose the most efficient row/column.
To minimize calculations, we should expand along the row or column containing the maximum number of zeros. In this determinant, Row 3 ($R_3$) has three zeros.
Step 2: Apply the sign convention for $R_3$.
The signs for $R_3$ are $(+, -, +, -)$ according to the sign pattern shown above. However, since $a_{31}=0$, $a_{33}=0$, and $a_{34}=0$, we only need to calculate the term for $a_{32}=4$.
Step 3: Expand along $R_3$.
$\Delta = - (4) \begin{vmatrix} 1 & 2 & 0 \\ 2 & 0 & 1 \\ 1 & 1 & 3 \end{vmatrix}$
[Expanding along $R_3$]
Step 4: Evaluate the $3 \times 3$ determinant.
Expanding the inner determinant along $R_1$:
$|M_{32}| = 1 \begin{vmatrix} 0 & 1 \\ 1 & 3 \end{vmatrix} - 2 \begin{vmatrix} 2 & 1 \\ 1 & 3 \end{vmatrix} + 0 \begin{vmatrix} 2 & 0 \\ 1 & 1 \end{vmatrix}$
$|M_{32}| = 1 (0 - 1) - 2 (6 - 1) + 0$
$|M_{32}| = -1 - 2(5) = -11$
Step 5: Final Calculation.
Substitute $|M_{32}|$ back into the equation for $\Delta$:
$\Delta = -4 \times (-11)$
$\Delta = 44$
[Final Result]
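The recursive reduction from order $n$ to order $n-1$ is exactly what a general cofactor-expansion routine does. A hedged Python sketch (expanding along the first row for simplicity, rather than the zero-rich $R_3$ used above; the function name `det` is our own):

```python
def det(m):
    """Determinant of an n x n nested list by cofactor expansion
    along the first row; the recursion bottoms out at order 1."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: delete the first row and column j
        minor = [[row[c] for c in range(n) if c != j] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

A = [[1, 0, 2, 0],
     [2, 3, 0, 1],
     [0, 4, 0, 0],
     [1, 2, 1, 3]]
print(det(A))   # 44, matching the expansion along R3
```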
Determinants of Order $n$ (Higher Order)
For any square matrix of order $n$, the determinant is defined as the sum of products of the elements of any row or column by their corresponding cofactors:
$\Delta = \sum\limits_{j=1}^{n} a_{ij} A_{ij}$
[Expansion along $i^{th}$ row]
As the order $n$ increases, the computational cost of full expansion grows factorially, since the fully expanded expression contains $n!$ terms.
Special Cases and Properties
1. Determinant of an Upper/Lower Triangular Matrix
If all elements above or below the principal diagonal are zero, the determinant is simply the product of the diagonal elements, regardless of the order.
$\Delta = a_{11} \times a_{22} \times \dots \times a_{nn}$
[Diagonal Product Rule]
2. Determinant of a Scalar Matrix
If $A$ is a scalar matrix of order $n$ with diagonal element $k$, then:
$|A| = k^n$
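Both special cases are easy to confirm numerically. A minimal sketch using a self-contained cofactor-expansion helper (`det` is our own name):

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([[r[c] for c in range(len(m)) if c != j] for r in m[1:]])
               for j in range(len(m)))

# Upper triangular: determinant is the product of the diagonal entries.
U = [[2, 5, 1], [0, 3, 7], [0, 0, 4]]
print(det(U))   # 24 = 2 * 3 * 4

# Scalar matrix with k = 5 on the diagonal, order n = 3: |A| = k**n.
S = [[5, 0, 0], [0, 5, 0], [0, 0, 5]]
print(det(S))   # 125 = 5**3
```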
Minors and Cofactors
To evaluate higher-order determinants and to find the Inverse of a Matrix, it is essential to understand the concepts of Minors and Cofactors. These represent the localized value of an element within the larger determinant structure.
1. Minors ($M_{ij}$)
A Minor is the determinant value of a sub-matrix formed by removing specific rows and columns. For every element $a_{ij}$ in a square matrix of order $n$, there exists a corresponding minor $M_{ij}$.
Definition
The minor of an element $a_{ij}$ is the determinant of the sub-matrix of order $(n-1)$ obtained by deleting the $i^{th}$ row and $j^{th}$ column of the original determinant. Thus, if the original determinant is of order 3, its minors will be of order 2.
$M_{ij} = \text{det of sub-matrix after removing } R_i \text{ and } C_j$
2. Cofactors ($A_{ij}$)
A Cofactor is essentially a "signed minor". It takes the numerical value of the minor and attaches a positive or negative sign based on the physical position of the element within the matrix grid.
The Mathematical Relation
The cofactor of an element $a_{ij}$, denoted by $A_{ij}$ (or sometimes $C_{ij}$), is defined by the following formula:
$A_{ij} = (-1)^{i+j} M_{ij}$
[General formula for Cofactor]
Here, the term $(-1)^{i+j}$ acts as a toggle switch:
1. If $(i+j)$ is Even, then $(-1)^{i+j} = 1$, so $A_{ij} = M_{ij}$.
2. If $(i+j)$ is Odd, then $(-1)^{i+j} = -1$, so $A_{ij} = -M_{ij}$.
Sign Convention Matrix
Calculating $(-1)^{i+j}$ for every element is time-consuming. Instead, students use the following "Checkerboard" sign patterns:
For Order 2 ($2 \times 2$):
$\begin{bmatrix} + & - \\ - & + \end{bmatrix}$
For Order 3 ($3 \times 3$):
$\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}$
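These definitions translate directly into code. A sketch for the $3 \times 3$ case, with 0-based indices so that `minor(m, i, j)` computes $M_{i+1,\,j+1}$ (the helper names `minor` and `cofactor` are our own):

```python
def minor(m, i, j):
    """M_ij: determinant of the 2x2 submatrix left after deleting
    row i and column j (0-based) from a 3x3 matrix."""
    sub = [[m[r][c] for c in range(3) if c != j]
           for r in range(3) if r != i]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(m, i, j):
    """A_ij = (-1)**(i+j) * M_ij -- the signed minor."""
    return (-1) ** (i + j) * minor(m, i, j)

A = [[1, -2, 3], [4, 5, 6], [7, 8, 9]]
print(minor(A, 0, 0), cofactor(A, 1, 0))   # -3 and 42
```

These values match $M_{11} = -3$ and $A_{21} = 42$ computed by hand in Example 2 below.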
Example 1. Find all the minors and cofactors of the elements of the determinant $\Delta = \begin{vmatrix} 1 & -2 \\ 4 & 3 \end{vmatrix}$.
Answer:
Given: $\Delta = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} 1 & -2 \\ 4 & 3 \end{vmatrix}$
Solution for Minors:
$M_{11} = |3| = 3$
$M_{12} = |4| = 4$
$M_{21} = |-2| = -2$
$M_{22} = |1| = 1$
Solution for Cofactors:
Using $A_{ij} = (-1)^{i+j} M_{ij}$:
$A_{11} = (-1)^{1+1} M_{11} = (1)(3) = 3$
$A_{12} = (-1)^{1+2} M_{12} = (-1)(4) = -4$
$A_{21} = (-1)^{2+1} M_{21} = (-1)(-2) = 2$
$A_{22} = (-1)^{2+2} M_{22} = (1)(1) = 1$
Example 2. Find the minor and cofactor of each element of the determinant $\Delta = \begin{vmatrix} 1 & -2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}$.
Answer:
Given:
The determinant $\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = \begin{vmatrix} 1 & -2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}$.
To Find:
The Minors ($M_{ij}$) and Cofactors ($A_{ij}$) for all 9 elements.
Solution:
We use the formula $A_{ij} = (-1)^{i+j} M_{ij}$. The sign convention for a $3 \times 3$ determinant is $\begin{bmatrix} + & - & + \\ - & + & - \\ + & - & + \end{bmatrix}$.
Elements of the First Row ($R_1$)
1. For element $a_{11} = 1$:
$M_{11} = \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} = 45 - 48 = -3$
[Minor of $a_{11}$]
$A_{11} = (-1)^{1+1} (-3) = -3$
[Cofactor of $a_{11}$]
2. For element $a_{12} = -2$:
$M_{12} = \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} = 36 - 42 = -6$
[Minor of $a_{12}$]
$A_{12} = (-1)^{1+2} (-6) = 6$
[Cofactor of $a_{12}$]
3. For element $a_{13} = 3$:
$M_{13} = \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 32 - 35 = -3$
[Minor of $a_{13}$]
$A_{13} = (-1)^{1+3} (-3) = -3$
[Cofactor of $a_{13}$]
Elements of the Second Row ($R_2$)
4. For element $a_{21} = 4$:
$M_{21} = \begin{vmatrix} -2 & 3 \\ 8 & 9 \end{vmatrix} = -18 - 24 = -42$
[Minor of $a_{21}$]
$A_{21} = (-1)^{2+1} (-42) = 42$
[Cofactor of $a_{21}$]
5. For element $a_{22} = 5$:
$M_{22} = \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} = 9 - 21 = -12$
[Minor of $a_{22}$]
$A_{22} = (-1)^{2+2} (-12) = -12$
[Cofactor of $a_{22}$]
6. For element $a_{23} = 6$:
$M_{23} = \begin{vmatrix} 1 & -2 \\ 7 & 8 \end{vmatrix} = 8 - (-14) = 22$
[Minor of $a_{23}$]
$A_{23} = (-1)^{2+3} (22) = -22$
[Cofactor of $a_{23}$]
Elements of the Third Row ($R_3$)
7. For element $a_{31} = 7$:
$M_{31} = \begin{vmatrix} -2 & 3 \\ 5 & 6 \end{vmatrix} = -12 - 15 = -27$
[Minor of $a_{31}$]
$A_{31} = (-1)^{3+1} (-27) = -27$
[Cofactor of $a_{31}$]
8. For element $a_{32} = 8$:
$M_{32} = \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = 6 - 12 = -6$
[Minor of $a_{32}$]
$A_{32} = (-1)^{3+2} (-6) = 6$
[Cofactor of $a_{32}$]
9. For element $a_{33} = 9$:
$M_{33} = \begin{vmatrix} 1 & -2 \\ 4 & 5 \end{vmatrix} = 5 - (-8) = 13$
[Minor of $a_{33}$]
$A_{33} = (-1)^{3+3} (13) = 13$
[Cofactor of $a_{33}$]
Final Verification
Let us verify the value of the determinant $\Delta$ using elements of $R_1$ and their cofactors:
$\Delta = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$
$\Delta = (1)(-3) + (-2)(6) + (3)(-3)$
$\Delta = -3 - 12 - 9$
$\Delta = -24$
[Result using $R_1$]
Alternatively, using $R_2$ cofactors with $R_1$ elements (Property Check):
$a_{11}A_{21} + a_{12}A_{22} + a_{13}A_{23}$
$(1)(42) + (-2)(-12) + (3)(-22)$
$42 + 24 - 66 = 0$
[Matches Theorem Property]
Crucial Theorems
Understanding the interplay between elements and their cofactors leads to two fundamental results:
Theorem 1: Value of Determinant
The sum of the products of elements of any row (or column) with their corresponding cofactors is equal to the value of the determinant ($\Delta$).
$a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13} = \Delta$
[Expansion by $R_1$]
Theorem 2: The Zero Property
The sum of the products of elements of any row (or column) with the cofactors of any other row (or column) is always zero.
$a_{11}A_{21} + a_{12}A_{22} + a_{13}A_{23} = 0$
[Elements of $R_1$, Cofactors of $R_2$]
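Both theorems can be spot-checked numerically on the determinant from Example 2. A minimal sketch (`cof` is our own helper, with 0-based indices):

```python
def cof(m, i, j):
    """Cofactor A_ij (0-based indices) of a 3x3 matrix."""
    sub = [[m[r][c] for c in range(3) if c != j] for r in range(3) if r != i]
    return (-1) ** (i + j) * (sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0])

A = [[1, -2, 3], [4, 5, 6], [7, 8, 9]]

# Theorem 1: elements of R1 times their own cofactors -> the determinant.
d = sum(A[0][j] * cof(A, 0, j) for j in range(3))
print(d)   # -24

# Theorem 2: elements of R1 times the cofactors of R2 -> zero.
z = sum(A[0][j] * cof(A, 1, j) for j in range(3))
print(z)   # 0
```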
Comparison Summary
| Feature | Minor ($M_{ij}$) | Cofactor ($A_{ij}$) |
|---|---|---|
| Sign | Depends only on the sub-matrix values. | Includes positional sign $(-1)^{i+j}$. |
| Application | Basic expansion of determinants. | Used in Adjoint and Inverse matrices. |
| Order | $(n-1)$ for a matrix of order $n$. | $(n-1)$ for a matrix of order $n$. |
Properties of Determinants
The properties of determinants are essential tools in linear algebra, particularly for simplifying the computation of high-order determinants. Understanding these properties allows for solving complex problems without performing tedious expansions.
Property 1: The All-Zero Property
If each element in a row or in a column of a determinant is zero, then the value of the determinant is zero.
Verification
Consider a determinant $\Delta$ where the second row consists entirely of zeros:
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ 0 & 0 & 0 \\ a_3 & b_3 & c_3 \end{vmatrix}$
Expanding along the second row ($R_2$):
$\Delta = -0 \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + 0 \begin{vmatrix} a_1 & c_1 \\ a_3 & c_3 \end{vmatrix} - 0 \begin{vmatrix} a_1 & b_1 \\ a_3 & b_3 \end{vmatrix}$
$\Delta = 0$
Property 2: Triangular Determinant Property
If each element on one side of the principal diagonal of a determinant is zero (i.e., the matrix is upper triangular, lower triangular, or diagonal), then the value of the determinant is the product of the diagonal elements.
Verification
Let $\Delta$ be an upper triangular determinant:
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ 0 & b_2 & c_2 \\ 0 & 0 & c_3 \end{vmatrix}$
Expanding along the first column ($C_1$):
$\Delta = a_1 \begin{vmatrix} b_2 & c_2 \\ 0 & c_3 \end{vmatrix} - 0 + 0$
$\Delta = a_1(b_2 c_3 - 0 \times c_2) = a_1 b_2 c_3$
Property 3: Transpose Invariance
The value of a determinant remains unchanged if its rows and columns are interchanged. In terms of matrices, the determinant of a square matrix $A$ is equal to the determinant of its transpose $A'$.
$|A| = |A'|$
Expansion Proof
Let $\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$. Its transpose is $\Delta_1 = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}$.
Expanding $\Delta$ by $R_1$:
$\Delta = a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)$
Expanding $\Delta_1$ by $C_1$ (which is effectively $R_1$ of $\Delta$):
$\Delta_1 = a_1(b_2c_3 - c_2b_3) - b_1(a_2c_3 - c_2a_3) + c_1(a_2b_3 - b_2a_3)$
Clearly, $\Delta = \Delta_1$.
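Transpose invariance is also easy to check on a concrete matrix. A short sketch (the closed-form helper `det3` is our own):

```python
def det3(m):
    """3x3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
T = [[A[j][i] for j in range(3)] for i in range(3)]   # transpose of A
print(det3(A), det3(T))   # both 22
```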
Property 4: Reflection Property (Interchange)
The reflection property states that if any two rows (or columns) of a determinant are interchanged, the magnitude of the determinant remains the same, but its sign is reversed.
Verification
Let us consider a general determinant of third order:
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
Expanding $\Delta$ along the first row ($R_1$):
$\Delta = a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)$
Now, let $\Delta_1$ be the determinant obtained by interchanging the first and third columns ($C_1 \leftrightarrow C_3$):
$\Delta_1 = \begin{vmatrix} c_1 & b_1 & a_1 \\ c_2 & b_2 & a_2 \\ c_3 & b_3 & a_3 \end{vmatrix}$
Expanding $\Delta_1$ along the first row ($R_1$):
$\Delta_1 = c_1(b_2a_3 - b_3a_2) - b_1(c_2a_3 - c_3a_2) + a_1(c_2b_3 - c_3b_2)$
$\Delta_1 = c_1a_3b_2 - c_1a_2b_3 - b_1c_2a_3 + b_1c_3a_2 + a_1c_2b_3 - a_1c_3b_2$
Rearranging the terms to compare it with the value of $\Delta$:
$\Delta_1 = - [a_1(b_2c_3 - b_3c_2) - b_1(a_2c_3 - a_3c_2) + c_1(a_2b_3 - a_3b_2)]$
$\Delta_1 = -\Delta$
This confirms that interchanging two parallel lines (rows or columns) results in a change of sign.
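A quick numerical illustration of the sign reversal, here interchanging $R_1$ and $R_3$ (the helper `det3` is our own):

```python
def det3(m):
    """3x3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
B = [A[2], A[1], A[0]]   # interchange R1 and R3
print(det3(A), det3(B))  # -3 and 3: same magnitude, opposite sign
```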
Corollary
The corollary extends Property 4 to the movement of a single row or column across multiple rows or columns. It states: If any row (or column) of a determinant $\Delta$ is passed over $m$ rows (or columns), then the resulting determinant $\Delta_1$ is given by:
$\Delta_1 = (-1)^m \Delta$
Derivation of the Corollary
Interchanging a row with its adjacent row counts as $1$ operation ($m=1$). According to Property 4, the sign changes once ($(-1)^1$). If we want to move a row across $m$ rows, we must perform $m$ successive adjacent interchanges.
Step-by-step example: Moving the first column ($C_1$) over the next two columns ($C_2$ and $C_3$). Here $m=2$.
Initial State: $\Delta = \begin{vmatrix} C_1 & C_2 & C_3 \end{vmatrix}$
Step 1: Interchange $C_1$ and $C_2$ (Adjacent interchange).
$\Delta' = \begin{vmatrix} C_2 & C_1 & C_3 \end{vmatrix}$
$\Delta' = (-1)^1 \Delta$ ... (i)
Step 2: Interchange the new $C_2$ (which is original $C_1$) with $C_3$.
$\Delta_1 = \begin{vmatrix} C_2 & C_3 & C_1 \end{vmatrix}$
$\Delta_1 = (-1) \Delta'$ ... (ii)
Substituting the value of $\Delta'$ from (i) into (ii):
$\Delta_1 = (-1) \times [(-1) \Delta]$
$\Delta_1 = (-1)^2 \Delta$
[As $m=2$]
Thus, passing a row/column over $m$ lines results in $m$ sign changes, justifying the factor $(-1)^m$.
Property 5: Identical Rows/Columns
Property 5 states that if any two parallel lines (two rows or two columns) of a determinant are identical—meaning all corresponding elements are exactly the same—then the value of the determinant is zero.
Theoretical Proof (Using Reflection Property)
This property is a direct logical consequence of Property 4 (the interchange property). Let $\Delta$ be a determinant with two identical rows, say $R_1$ and $R_2$.
According to Property 4, if we interchange any two rows of a determinant, the sign of the determinant changes. Let the new determinant after interchanging $R_1$ and $R_2$ be $\Delta_1$.
$\Delta_1 = -\Delta$
... (i)
However, since $R_1$ and $R_2$ are identical, interchanging them does not actually change the appearance or the elements of the determinant. Therefore, the value remains exactly the same.
$\Delta_1 = \Delta$
... (ii)
Equating (i) and (ii), we get:
$\Delta = -\Delta$
Adding $\Delta$ to both sides:
$2\Delta = 0$
$\Delta = 0$
Verification by General Expansion
Let us verify this by expanding a $3 \times 3$ determinant where the first and second rows are identical.
Let $\Delta = \begin{vmatrix} a & b & c \\ a & b & c \\ x & y & z \end{vmatrix}$
Expanding along the third row ($R_3$):
$\Delta = x \begin{vmatrix} b & c \\ b & c \end{vmatrix} - y \begin{vmatrix} a & c \\ a & c \end{vmatrix} + z \begin{vmatrix} a & b \\ a & b \end{vmatrix}$
Calculating the $2 \times 2$ minors:
$\Delta = x(bc - bc) - y(ac - ac) + z(ab - ab)$
$\Delta = x(0) - y(0) + z(0)$
$\Delta = 0$
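The same conclusion holds numerically for identical rows, and also for proportional columns as noted earlier. A brief sketch (the helper `det3` is our own):

```python
def det3(m):
    """3x3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# R1 and R2 identical:
print(det3([[2, 5, 7], [2, 5, 7], [1, 4, 9]]))   # 0

# C1 and C3 proportional (C3 = 2 * C1):
print(det3([[1, 5, 2], [3, 4, 6], [0, 7, 0]]))   # 0
```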
Property 6: Scalar Multiplication
The Scalar Multiplication Property states that if every element of a particular row (or a particular column) of a determinant is multiplied by a non-zero constant $k$, then the value of the resulting determinant is $k$ times the value of the original determinant.
Verification and Mathematical Proof
Let the original determinant be $\Delta$ and the new determinant be $\Delta_1$, where the second row of $\Delta$ has been multiplied by $k$.
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
$\Delta_1 = \begin{vmatrix} a_1 & b_1 & c_1 \\ ka_2 & kb_2 & kc_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
Expanding $\Delta_1$ along the second row ($R_2$):
$\Delta_1 = -ka_2(b_1c_3 - b_3c_1) + kb_2(a_1c_3 - a_3c_1) - kc_2(a_1b_3 - a_3b_1)$
Taking $k$ as a common factor from the expression:
$\Delta_1 = k [ -a_2(b_1c_3 - b_3c_1) + b_2(a_1c_3 - a_3c_1) - c_2(a_1b_3 - a_3b_1) ]$
$\Delta_1 = k \Delta$
[By definition of determinant expansion]
Important Remarks
1. Factoring out from a Row/Column
Conversely, this property implies that we can extract a common factor from any single row or column of a determinant. This is a primary technique used to simplify determinants before expansion.
2. Determinant of a Scalar Multiple of a Matrix
In matrix algebra, $kA$ represents a matrix where every element of $A$ is multiplied by $k$. If $A$ is a square matrix of order $n$, then $kA$ has $n$ rows, each being multiplied by $k$. Applying Property 6 to each of these $n$ rows sequentially:
$|kA| = k \cdot k \cdot ... \cdot k \text{ ($n$ times)} \cdot |A|$
$|kA| = k^n |A|$
For example, if $A$ is a $3 \times 3$ matrix and $|A| = 5$, then $|2A| = 2^3 \times 5 = 40$.
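The scaling rule $|kA| = k^n |A|$ can be confirmed on a small example (the helper `det3` is our own):

```python
def det3(m):
    """3x3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
kA = [[2 * x for x in row] for row in A]   # every element scaled by k = 2
print(det3(A), det3(kA))   # -13 and -104 = 2**3 * (-13)
```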
Special Case: Determinant of Skew-Symmetric Matrices
A square matrix $A = [a_{ij}]$ is called a skew-symmetric matrix if its transpose is equal to its negative. Mathematically, this is expressed as:
$A' = -A$
In a skew-symmetric matrix, the elements satisfy the condition $a_{ij} = -a_{ji}$. Consequently, for diagonal elements where $i = j$, we have $a_{ii} = -a_{ii} \implies 2a_{ii} = 0 \implies a_{ii} = 0$. Thus, all diagonal elements of a skew-symmetric matrix are zero.
Theorem: The determinant of a skew-symmetric matrix of odd order is always zero.
Proof:
Let $A$ be a skew-symmetric matrix of order $n \times n$. By definition:
$A' = -A$
Taking the determinant on both sides of the equation:
$\text{det}(A') = \text{det}(-A)$
... (i)
Now, we apply two fundamental properties of determinants:
1. Property of Transpose: The determinant of a matrix and its transpose are equal.
$\text{det}(A') = \text{det}(A)$
(Property 3)
2. Property of Scalar Multiplication: For any scalar $k$ and a matrix of order $n$.
$\text{det}(kA) = k^n \text{det}(A)$
(Property 6)
Applying these to equation (i), where the scalar $k = -1$:
$\text{det}(A) = (-1)^n \text{det}(A)$
[Substituting properties]
Case Analysis based on Order ($n$)
Case I: When $n$ is an ODD integer
If $n$ is odd (e.g., $1, 3, 5, \dots$), then $(-1)^n = -1$. Substituting this into the relation $\text{det}(A) = (-1)^n \text{det}(A)$:
$\text{det}(A) = -1 \cdot \text{det}(A)$
$\text{det}(A) + \text{det}(A) = 0$
$2 \cdot \text{det}(A) = 0$
$\text{det}(A) = 0$
Conclusion: Every skew-symmetric matrix of odd order (like $3 \times 3$) has a determinant value of zero.
Case II: When $n$ is an EVEN integer
If $n$ is even (e.g., $2, 4, 6, \dots$), then $(-1)^n = 1$. Substituting this into the relation $\text{det}(A) = (-1)^n \text{det}(A)$:
$\text{det}(A) = 1 \cdot \text{det}(A)$
$\text{det}(A) = \text{det}(A)$
This result is always true (a tautology), so it places no restriction on the determinant; it does not mean the determinant is zero. In fact, for even-order skew-symmetric matrices the determinant is always a perfect square: it equals the square of a polynomial in the matrix entries known as the Pfaffian.
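Both cases can be illustrated numerically. A sketch (the helper `det3` is our own; the sample matrices are our own choices):

```python
def det3(m):
    """3x3 determinant by expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Odd order (3x3) skew-symmetric: zero diagonal, a_ij = -a_ji.
A = [[0, 2, -1], [-2, 0, 3], [1, -3, 0]]
print(det3(A))   # 0

# Even order (2x2): the determinant is a perfect square
# (here the Pfaffian of [[0, a], [-a, 0]] is a = 5).
B = [[0, 5], [-5, 0]]
print(B[0][0] * B[1][1] - B[0][1] * B[1][0])   # 25 = 5**2
```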
Property 7: Sum Property
The Sum Property states that if each element of a row (or a column) of a determinant is expressed as the sum of two (or more) terms, then the determinant can be expressed as the sum of two (or more) determinants. It is important to note that the other rows or columns remain unaltered in this process.
Mathematical Representation
Consider a determinant where the elements of the first column are sums of two terms:
$\Delta = \begin{vmatrix} a_1 + d_1 & b_1 & c_1 \\ a_2 + d_2 & b_2 & c_2 \\ a_3 + d_3 & b_3 & c_3 \end{vmatrix}$
According to this property, $\Delta$ can be split as follows:
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} d_1 & b_1 & c_1 \\ d_2 & b_2 & c_2 \\ d_3 & b_3 & c_3 \end{vmatrix}$
Verification and Derivation
To prove this, let us expand the determinant $\Delta$ along the first column ($C_1$).
The expansion is given by:
$\Delta = (a_1 + d_1) \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - (a_2 + d_2) \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + (a_3 + d_3) \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix}$
By using the distributive property of multiplication over addition, we can rewrite the expression as:
$\Delta = \left[ a_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - a_2 \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + a_3 \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix} \right] + \left[ d_1 \begin{vmatrix} b_2 & c_2 \\ b_3 & c_3 \end{vmatrix} - d_2 \begin{vmatrix} b_1 & c_1 \\ b_3 & c_3 \end{vmatrix} + d_3 \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix} \right]$
The first bracketed expression is the expansion of $\begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$ along $C_1$.
The second bracketed expression is the expansion of $\begin{vmatrix} d_1 & b_1 & c_1 \\ d_2 & b_2 & c_2 \\ d_3 & b_3 & c_3 \end{vmatrix}$ along $C_1$.
$\Delta = \Delta_a + \Delta_d$
[Hence Verified]
Property 8: Property of Invariance
The Property of Invariance is perhaps the most powerful tool used in the evaluation of determinants. It states that the value of a determinant remains unchanged if to each element of any row (or column), we add the equi-multiples of the corresponding elements of one or more other rows (or columns).
In symbolic form, the value of a determinant $\Delta$ remains the same under the operations:
$R_i \to R_i + k R_j$ or $C_i \to C_i + k C_j$
[where $i \neq j$]
Verification and Derivation
Let us consider a third-order determinant:
$\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
Suppose we apply the operation $C_1 \to C_1 + kC_2$. Let the new determinant be $\Delta_1$:
$\Delta_1 = \begin{vmatrix} a_1 + kb_1 & b_1 & c_1 \\ a_2 + kb_2 & b_2 & c_2 \\ a_3 + kb_3 & b_3 & c_3 \end{vmatrix}$
By applying Property 7 (The Sum Property), we can express $\Delta_1$ as the sum of two determinants:
$\Delta_1 = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} kb_1 & b_1 & c_1 \\ kb_2 & b_2 & c_2 \\ kb_3 & b_3 & c_3 \end{vmatrix}$
In the second determinant of the above equation, we can take out the common factor $k$ from the first column ($C_1$) using Property 6:
$\Delta_1 = \Delta + k \begin{vmatrix} b_1 & b_1 & c_1 \\ b_2 & b_2 & c_2 \\ b_3 & b_3 & c_3 \end{vmatrix}$
Now, observe that in the second determinant, the first and second columns ($C_1$ and $C_2$) are identical. According to Property 5, the value of such a determinant is zero.
$\Delta_1 = \Delta + k(0)$
[Since $C_1 = C_2$]
$\Delta_1 = \Delta$
Thus, the value of the determinant remains invariant under such operations. This derivation can be extended to operations involving multiple rows, such as $R_1 \to R_1 + kR_2 + mR_3$.
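The invariance property is easy to confirm numerically. This sketch (plain Python; the matrix is illustrative) applies $C_1 \to C_1 + 2C_2$ and compares the two values:

```python
def det3(m):
    # Expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

D = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
k = 2
# C1 -> C1 + k*C2: add k times the second column to the first
D1 = [[row[0] + k*row[1], row[1], row[2]] for row in D]
print(det3(D), det3(D1))  # -3 -3: the value is unchanged
```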
Property 9: Element-Cofactor Relation
This property establishes a fundamental relationship between the elements of a determinant and their corresponding Cofactors. It is divided into two distinct parts: one that defines the value of the determinant and another that results in zero.
Definitions: Minor and Cofactor
Before proceeding to the property, let us define the terms:
1. Minor ($M_{ij}$): The determinant obtained by deleting the $i^{th}$ row and $j^{th}$ column in which the element $a_{ij}$ lies.
2. Cofactor ($A_{ij}$): The minor with a positional sign, defined as:
$A_{ij} = (-1)^{i+j} M_{ij}$
The Two Parts of Property 9
Part A: Sum of Products with Same Row/Column Cofactors
The sum of the products of elements of any row (or column) with their own corresponding cofactors is equal to the value of the determinant ($\Delta$).
$\Delta = \sum\limits_{j=1}^{n} a_{ij} A_{ij}$
[For any fixed row $i$]
Part B: Sum of Products with Different Row/Column Cofactors
The sum of the products of elements of any row (or column) with the cofactors of any other row (or column) is always zero.
$\sum\limits_{j=1}^{n} a_{ij} A_{kj} = 0$, if $i \neq k$
Verification of the Zero Property (Part B)
Let $\Delta = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}$.
Let us multiply the elements of the first row ($R_1$) by the cofactors of the third row ($R_3$).
Elements of $R_1$: $a_{11}, a_{12}, a_{13}$
Cofactors of $R_3$:
$A_{31} = (-1)^{3+1} \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix}$
$A_{32} = (-1)^{3+2} \begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix}$
$A_{33} = (-1)^{3+3} \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}$
The sum of the products is:
$S = a_{11} A_{31} + a_{12} A_{32} + a_{13} A_{33}$
$S = a_{11}(a_{12}a_{23} - a_{22}a_{13}) - a_{12}(a_{11}a_{23} - a_{21}a_{13}) + a_{13}(a_{11}a_{22} - a_{21}a_{12})$
Expanding the brackets:
$S = a_{11}a_{12}a_{23} - a_{11}a_{22}a_{13} - a_{12}a_{11}a_{23} + a_{12}a_{21}a_{13} + a_{13}a_{11}a_{22} - a_{13}a_{21}a_{12}$
By observing the terms, we see they cancel out in pairs:
$S = 0$
Why does this happen? (Logical Proof)
When we calculate the sum $\sum a_{ij} A_{kj}$ (where $i \neq k$), we are essentially expanding a new determinant $\Delta'$ where the $k^{th}$ row has been replaced by the $i^{th}$ row. This results in a determinant with two identical rows ($R_i$ and $R_k$).
For example, if we use elements of $R_1$ with cofactors of $R_3$, it is equivalent to finding the value of:
$\Delta' = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \end{vmatrix}$
Since $R_1 = R_3$ in this new determinant, according to Property 5:
$\Delta' = 0$
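Both parts of Property 9 can be checked on a sample matrix. In this sketch (plain Python; the matrix is illustrative), `cof` computes a cofactor by deleting one row and one column:

```python
def cof(m, i, j):
    # Cofactor A_ij = (-1)^(i+j) times the minor (delete row i, column j)
    rows = [r for k, r in enumerate(m) if k != i]
    sub = [[v for k, v in enumerate(r) if k != j] for r in rows]
    return (-1)**(i + j) * (sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0])

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
det_A = sum(A[0][j] * cof(A, 0, j) for j in range(3))  # Part A: R1 with its own cofactors
mixed = sum(A[0][j] * cof(A, 2, j) for j in range(3))  # Part B: R1 with cofactors of R3
print(det_A, mixed)  # -3 0
```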
Property 10: Determinant of Product
Property 10 is a vital multiplication theorem for determinants. It states that if $A$ and $B$ are two square matrices of the same order, then the determinant of the product of the matrices is equal to the product of their individual determinants.
$\text{det}(AB) = \text{det}(A) \cdot \text{det}(B)$
This can also be written in vertical bar notation as $|AB| = |A||B|$.
Verification and Algebraic Derivation
To verify this property, let us consider two general matrices of order $2 \times 2$.
Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ and $B = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}$
Step 1: Find individual determinants
$\text{det}(A) = ad - bc$
... (i)
$\text{det}(B) = \alpha\delta - \beta\gamma$
... (ii)
Step 2: Find the product matrix $AB$
$AB = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix} = \begin{bmatrix} a\alpha+b\gamma & a\beta+b\delta \\ c\alpha+d\gamma & c\beta+d\delta \end{bmatrix}$
Step 3: Calculate $\text{det}(AB)$
$\text{det}(AB) = (a\alpha+b\gamma)(c\beta+d\delta) - (a\beta+b\delta)(c\alpha+d\gamma)$
Expanding the terms:
$\text{det}(AB) = (ac\alpha\beta + ad\alpha\delta + bc\gamma\beta + bd\gamma\delta) - (ac\alpha\beta + ad\beta\gamma + bc\alpha\delta + bd\gamma\delta)$
Cancelling common terms ($ac\alpha\beta$ and $bd\gamma\delta$):
$\text{det}(AB) = ad\alpha\delta + bc\gamma\beta - ad\beta\gamma - bc\alpha\delta$
Rearranging and factoring:
$\text{det}(AB) = ad(\alpha\delta - \beta\gamma) - bc(\alpha\delta - \beta\gamma)$
$\text{det}(AB) = (ad - bc)(\alpha\delta - \beta\gamma)$
[From (i) and (ii)]
Therefore, $\text{det}(AB) = \text{det}(A) \cdot \text{det}(B)$.
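The multiplication theorem is easy to confirm numerically for an illustrative pair of $2 \times 2$ matrices:

```python
def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def matmul2(A, B):
    # Standard 2x2 matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(det2(matmul2(A, B)), det2(A) * det2(B))  # 4 4
```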
Elementary Operations on Determinants
To evaluate determinants of higher orders ($3 \times 3$ or more) without tedious calculations, we perform Elementary Operations. These operations are systematic ways to transform a determinant into a simpler form—ideally a triangular form or one containing the maximum number of zeros—while keeping track of how the value of the determinant changes.
Notations and Types of Operations
There are three primary types of operations that can be performed on either rows or columns:
| Type of Operation | Row Notation | Column Notation |
|---|---|---|
| Interchange: Swapping two rows or columns. | $R_i \leftrightarrow R_j$ | $C_i \leftrightarrow C_j$ |
| Scalar Multiplication: Multiplying a row/column by a non-zero constant. | $R_i \to kR_i$ | $C_i \to kC_i$ |
| Linear Combination: Adding a multiple of one row/column to another. | $R_i \to R_i + kR_j$ | $C_i \to C_i + kC_j$ |
Detailed Explanation of Operations
1. Interchange ($R_i \leftrightarrow R_j$)
When you swap two parallel lines (two rows or two columns), the sign of the determinant changes. If the original determinant is $\Delta$ and the new one is $\Delta_1$:
$\Delta_1 = -\Delta$
2. Scalar Multiplication ($R_i \to kR_i$)
Multiplying a single row or column by a constant $k$ multiplies the entire determinant value by $k$.
$\Delta_{new} = k \cdot \Delta_{old}$
Strategic Note: We often use this to "take out" a common factor. If we want to transform a row within the determinant symbol using $R_i \to kR_i$, we must compensate by multiplying the determinant by $\frac{1}{k}$ outside to maintain numerical equality.
$\Delta = \frac{1}{k} \begin{vmatrix} \dots \\ k R_i \\ \dots \end{vmatrix}$
[Correction Factor]
3. Addition of Multiples ($R_i \to R_i + kR_j$)
This is the most frequently used operation because it does not change the value of the determinant. It is used to create zeros in a row or column to make expansion easier.
$\Delta_{new} = \Delta_{old}$
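All three effects can be observed on one illustrative matrix (plain Python, first-row expansion):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

D = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]                       # det = -3
swap = [D[1], D[0], D[2]]                                    # R1 <-> R2: sign flips
scaled = [[2*x for x in D[0]], D[1], D[2]]                   # R1 -> 2*R1: value doubles
combo = [[x + 3*y for x, y in zip(D[0], D[1])], D[1], D[2]]  # R1 -> R1 + 3*R2: unchanged
print(det3(swap), det3(scaled), det3(combo))  # 3 -6 -3
```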
Area of a Triangle
In coordinate geometry, we have learned that the area of a triangle with vertices $A(x_1, y_1)$, $B(x_2, y_2)$, and $C(x_3, y_3)$ is given by the expression $\frac{1}{2} |x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)|$. This expression can be very elegantly represented and calculated using the determinant form.
Determinant Formula for Area
The area of a triangle whose vertices are $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$ is given by:
$\text{Area of } \Delta ABC = \frac{1}{2} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$
Derivation of the Formula
Let us expand the determinant along the first row ($R_1$):
$\Delta = x_1 \begin{vmatrix} y_2 & 1 \\ y_3 & 1 \end{vmatrix} - y_1 \begin{vmatrix} x_2 & 1 \\ x_3 & 1 \end{vmatrix} + 1 \begin{vmatrix} x_2 & y_2 \\ x_3 & y_3 \end{vmatrix}$
(Expansion by $R_1$)
Solving the $2 \times 2$ determinants:
$\Delta = x_1(y_2 - y_3) - y_1(x_2 - x_3) + 1(x_2y_3 - x_3y_2)$
$\Delta = x_1(y_2 - y_3) + y_1(x_3 - x_2) + (x_2y_3 - x_3y_2)$
Rearranging the terms to match the standard coordinate geometry formula:
$\Delta = x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)$
Since the area is a physical quantity, it must always be positive. Therefore, we always take the absolute value of the determinant.
Important Points to Remember
1. Since area is always a positive quantity, we take the absolute value of the determinant. If the calculation results in a negative number, ignore the sign.
2. If the area of the triangle is given in a problem, use both positive and negative values of the determinant for calculations (e.g., to find an unknown coordinate $k$).
3. The area of a triangle formed by three collinear points is always zero.
Condition of Collinearity of Three Points
Three points $A(x_1, y_1)$, $B(x_2, y_2)$, and $C(x_3, y_3)$ are said to be collinear (lying on the same straight line) if and only if the area of the triangle formed by them is zero.
Mathematical Condition:
$\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = 0$
Example 1. Find the area of the triangle whose vertices are $(3, 8)$, $(-4, 2)$ and $(5, 1)$ using determinants.
Answer:
The area of the triangle with vertices $(3, 8)$, $(-4, 2)$ and $(5, 1)$ is given by:
$\Delta = \frac{1}{2} \begin{vmatrix} 3 & 8 & 1 \\ -4 & 2 & 1 \\ 5 & 1 & 1 \end{vmatrix}$
Expanding along $R_1$:
$\Delta = \frac{1}{2} [ 3(2 - 1) - 8(-4 - 5) + 1(-4 - 10) ]$
$\Delta = \frac{1}{2} [ 3(1) - 8(-9) + 1(-14) ]$
$\Delta = \frac{1}{2} [ 3 + 72 - 14 ]$
$\Delta = \frac{1}{2} [ 61 ]$
$\Delta = 30.5$ sq. units
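The same computation can be scripted. This sketch reproduces Example 1, expanding the area determinant along its first row:

```python
def triangle_area(p1, p2, p3):
    # Area = (1/2) |det([[x1,y1,1],[x2,y2,1],[x3,y3,1]])|, expanded along R1
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    det = x1*(y2 - y3) - y1*(x2 - x3) + (x2*y3 - x3*y2)
    return abs(det) / 2

print(triangle_area((3, 8), (-4, 2), (5, 1)))  # 30.5
```

The same function doubles as a collinearity test: three points are collinear exactly when it returns zero.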
Example 2. Show that the points $A(a, b+c)$, $B(b, c+a)$, and $C(c, a+b)$ are collinear.
Answer:
The points are collinear if the determinant of their coordinates is zero.
$\Delta = \begin{vmatrix} a & b+c & 1 \\ b & c+a & 1 \\ c & a+b & 1 \end{vmatrix}$
Applying the operation $C_2 \to C_2 + C_1$:
$\Delta = \begin{vmatrix} a & a+b+c & 1 \\ b & a+b+c & 1 \\ c & a+b+c & 1 \end{vmatrix}$
Taking common factor $(a+b+c)$ from $C_2$:
$\Delta = (a+b+c) \begin{vmatrix} a & 1 & 1 \\ b & 1 & 1 \\ c & 1 & 1 \end{vmatrix}$
Since $C_2$ and $C_3$ are identical, the value of the determinant is $0$.
$\Delta = (a+b+c) \times 0 = 0$
(Property of Identical Columns)
Therefore, the points $A, B,$ and $C$ are collinear.
Equation of a Line passing through Two Points
If we are given two points $A(x_1, y_1)$ and $B(x_2, y_2)$, any point $P(x, y)$ on the line passing through $A$ and $B$ will be collinear with $A$ and $B$. Thus, the equation of the line is:
$\begin{vmatrix} x & y & 1 \\ x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \end{vmatrix} = 0$
Adjoint and Inverse of a Square Matrix
The Adjoint (also known as the Adjugate) of a square matrix is a specific matrix derived from the cofactors of its elements. It plays a pivotal role in linear algebra, especially in finding the inverse of a matrix and solving systems of linear equations using Cramer's Rule or the Matrix Method.
General Definition and Construction
Let $A = [a_{ij}]$ be a square matrix of order $n$. The adjoint of $A$ is the transpose of the matrix formed by the cofactors of the elements of $A$.
If $C = [A_{ij}]$ is the matrix of cofactors, then:
$\text{adj } A = [A_{ij}]^T$
Step-by-Step Construction
To find the adjoint of any square matrix, follow these three essential steps:
Step I: Find the Minors ($M_{ij}$) - For each element $a_{ij}$, calculate the determinant of the sub-matrix formed by deleting the $i^{th}$ row and $j^{th}$ column.
Step II: Calculate the Cofactors ($A_{ij}$) - Apply the positional sign to each minor using the formula:
$A_{ij} = (-1)^{i+j} M_{ij}$
Step III: Form the Adjoint Matrix - Arrange the cofactors into a matrix and then transpose it. The rows of cofactors become the columns of the adjoint.
Adjoint of a $2 \times 2$ Matrix
For a second-order matrix, the process is straightforward. Let us derive the general form.
Consider $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$
Derivation:
First, calculate the cofactors:
$A_{11} = (-1)^{1+1} (a_{22}) = a_{22}$
$A_{12} = (-1)^{1+2} (a_{21}) = -a_{21}$
$A_{21} = (-1)^{2+1} (a_{12}) = -a_{12}$
$A_{22} = (-1)^{2+2} (a_{11}) = a_{11}$
The cofactor matrix is $C = \begin{bmatrix} a_{22} & -a_{21} \\ -a_{12} & a_{11} \end{bmatrix}$.
Now, take the transpose of $C$ to find the Adjoint:
$\text{adj } A = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$
[Final Adjoint Form]
The Shortcut Rule
To find the adjoint of a $2 \times 2$ matrix instantly:
1. Interchange the elements of the primary diagonal ($a_{11}$ and $a_{22}$).
2. Change the signs of the elements of the secondary diagonal ($a_{12}$ and $a_{21}$), but keep them in their original positions.
Adjoint of a $3 \times 3$ Matrix
For a third-order square matrix $A$ defined as:
$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$
We first compute the Cofactor Matrix $C$, where each element $A_{ij}$ is the cofactor of $a_{ij}$:
$C = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}$
The Adjoint of A ($\text{adj } A$) is the transpose of this cofactor matrix $C$. Thus, the rows of the cofactor matrix become the columns of the adjoint matrix.
$\text{adj } A = C^T = \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix}$
Note: It is a common mistake to forget the transpose step. Always remember that the first row of cofactors ($A_{11}, A_{12}, A_{13}$) must be written as the first column of the adjoint.
Theorem 1: Fundamental Property of Adjoint
This theorem provides the mathematical link between a matrix, its adjoint, and its determinant. It states that the product of a square matrix and its adjoint (in any order) results in a scalar matrix where the diagonal elements are equal to the determinant of the matrix.
$A(\text{adj } A) = (\text{adj } A)A = |A|I$
Where $I$ is the identity matrix of the same order as $A$.
Detailed Derivation of Theorem 1
To derive this, let us perform the matrix multiplication $A(\text{adj } A)$ for a $3 \times 3$ matrix. Let the product matrix be $P = [p_{ij}]$.
$A(\text{adj } A) = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{bmatrix}$
The element $p_{ij}$ in the product matrix is obtained by multiplying the $i^{th}$ row of $A$ with the $j^{th}$ column of $(\text{adj } A)$. However, the $j^{th}$ column of $(\text{adj } A)$ is composed of the cofactors of the $j^{th}$ row of $A$.
Thus, $p_{ij} = a_{i1}A_{j1} + a_{i2}A_{j2} + a_{i3}A_{j3}$.
From the properties of determinants (specifically Property 9), we have two cases for this sum:
Case 1: When $i = j$ (Diagonal Elements)
When the elements of a row are multiplied by their own corresponding cofactors, the sum is equal to the determinant $|A|$.
$a_{i1}A_{i1} + a_{i2}A_{i2} + a_{i3}A_{i3} = |A|$
[Element-Cofactor sum property]
Case 2: When $i \neq j$ (Off-diagonal Elements)
When the elements of a row are multiplied by the cofactors of a different row, the sum is zero.
$a_{i1}A_{j1} + a_{i2}A_{j2} + a_{i3}A_{j3} = 0$
[Elements of $R_i$ with cofactors of $R_j$]
Substituting these values back into the product matrix:
$A(\text{adj } A) = \begin{bmatrix} |A| & 0 & 0 \\ 0 & |A| & 0 \\ 0 & 0 & |A| \end{bmatrix}$
Taking the scalar $|A|$ common from the matrix:
$A(\text{adj } A) = |A| \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
$A(\text{adj } A) = |A|I$
Similarly, it can be proven that $(\text{adj } A)A = |A|I$. Hence, the theorem is verified.
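Theorem 1 can be verified numerically. This sketch (plain Python; the matrix is illustrative) builds $\text{adj } A$ as the transpose of the cofactor matrix and checks that $A(\text{adj } A)$ is $|A|I$:

```python
def cof(m, i, j):
    # Cofactor A_ij = (-1)^(i+j) times the minor (delete row i, column j)
    rows = [r for k, r in enumerate(m) if k != i]
    sub = [[v for k, v in enumerate(r) if k != j] for r in rows]
    return (-1)**(i + j) * (sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0])

def matmul3(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
adj = [[cof(A, j, i) for j in range(3)] for i in range(3)]  # transpose of cofactor matrix
det_A = sum(A[0][j]*cof(A, 0, j) for j in range(3))        # -3
print(matmul3(A, adj))  # [[-3, 0, 0], [0, -3, 0], [0, 0, -3]] = |A| * I
```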
Singular and Non-Singular Matrices
The classification of square matrices into Singular and Non-Singular categories is a fundamental concept in matrix algebra. This distinction determines whether a matrix can be inverted and plays a critical role in solving systems of linear equations.
1. Singular Matrix
A square matrix $A$ is said to be a Singular Matrix if the value of its determinant is equal to zero.
$|A| = 0$
Geometrically, a singular matrix represents a transformation that collapses space into a lower dimension (e.g., collapsing a 2D plane into a 1D line). From an algebraic perspective, if a matrix is singular, its rows or columns are linearly dependent.
Examples of Singular Matrices ($|A| = 0$)
A matrix is singular if its determinant evaluates to zero. This usually happens when rows or columns are multiples of each other.
Example A: A $2 \times 2$ Singular Matrix
Consider the matrix $A = \begin{bmatrix} 1 & 2 \\ 3 & 6 \end{bmatrix}$
$|A| = (1 \times 6) - (3 \times 2)$
$|A| = 6 - 6 = 0$
Example B: A $3 \times 3$ Singular Matrix
Consider the matrix $B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 0 & 0 & 0 \end{bmatrix}$
Since the third row consists entirely of zeros, the determinant is automatically zero based on Property 1 of determinants.
$|B| = 0$
2. Non-Singular Matrix
A square matrix $A$ is said to be a Non-Singular Matrix if the value of its determinant is non-zero.
$|A| \neq 0$
Non-singular matrices are often referred to as Invertible Matrices because the existence of a non-zero determinant is the necessary and sufficient condition for the existence of an inverse matrix ($A^{-1}$).
Examples of Non-Singular Matrices ($|A| \neq 0$)
A matrix is non-singular (or invertible) if its determinant is any value other than zero.
Example C: The Identity Matrix
Consider the $2 \times 2$ Identity matrix $I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$|I| = (1 \times 1) - (0 \times 0) = 1$
Since $1 \neq 0$, the identity matrix is always non-singular.
Example D: A $2 \times 2$ Non-Singular Matrix
Consider the matrix $C = \begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix}$
$|C| = (2 \times 3) - (1 \times 5)$
$|C| = 6 - 5 = 1$
Since $|C| = 1 \neq 0$, matrix $C$ is non-singular and its inverse $C^{-1}$ exists.
Inverse of a Square Matrix
The Inverse of a matrix is a fundamental concept equivalent to the reciprocal of a number in basic arithmetic. If the product of two square matrices results in the identity matrix, one is said to be the inverse of the other. The inverse is a key requirement for solving matrix equations of the form $AX = B$.
Definition of Invertibility
A square matrix $A$ of order $n$ is called invertible if there exists another square matrix $B$ of the same order $n$ such that their product in either order results in the Identity matrix $I$.
$AB = BA = I$
In this case, the matrix $B$ is called the multiplicative inverse of $A$, and it is symbolically represented as $A^{-1}$.
Theorem 2: Uniqueness of Inverse
This theorem states that if a square matrix possesses an inverse, that inverse is unique. A matrix cannot have two different inverse matrices.
Proof of Uniqueness
Given: Let $A$ be an invertible square matrix of order $n$. Suppose $B$ and $C$ are two distinct matrices that are both inverses of $A$.
To Prove: $B = C$
Proof: Since $B$ is an inverse of $A$, by definition:
$AB = BA = I$
... (i)
Similarly, since $C$ is also an inverse of $A$:
$AC = CA = I$
... (ii)
Now, consider the matrix $B$. We can write $B$ as:
$B = BI$
Substituting the value of $I$ from equation (ii) ($I = AC$):
$B = B(AC)$
By the associative property of matrix multiplication:
$B = (BA)C$
Substituting the value of $BA$ from equation (i) ($BA = I$):
$B = IC$
$B = C$
[Hence Proved]
Theorem 3: Condition for Invertibility
A square matrix $A$ is invertible if and only if it is a non-singular matrix, i.e., its determinant is not zero ($|A| \neq 0$).
Necessary Condition
If $A$ is invertible, there exists a matrix $B$ such that $AB = I$.
Taking the determinant on both sides:
$|AB| = |I|$
$|A||B| = 1$
For the product $|A||B|$ to be $1$, it is mathematically necessary that $|A| \neq 0$.
Sufficient Condition and Derivation of Formula
If $|A| \neq 0$, we can prove that $A$ is invertible by using the property of the adjoint:
$A(\text{adj } A) = (\text{adj } A)A = |A|I$
Since $|A| \neq 0$, we can divide the entire equation by the scalar $|A|$:
$\frac{A(\text{adj } A)}{|A|} = \frac{|A|I}{|A|}$
$A \left( \frac{1}{|A|} \text{adj } A \right) = I$
By comparing this with the definition $AB = I$, we find the expression for the inverse:
$A^{-1} = \frac{1}{|A|} \text{adj } A$
[Inverse Formula]
Steps to find $A^{-1}$ for a $3 \times 3$ Matrix
1. Calculate $|A|$: If $|A| = 0$, stop; the inverse does not exist.
2. Calculate Cofactors: Find $A_{11}, A_{12}, \dots, A_{33}$.
3. Form Adjoint: Write the cofactor matrix and transpose it.
4. Divide: Multiply the adjoint matrix by the scalar $\frac{1}{|A|}$.
To find the inverse of a third-order square matrix, we follow a systematic four-step process. Let us apply these steps to the matrix $A$:
$A = \begin{bmatrix} 1 & 3 & 3 \\ 1 & 4 & 3 \\ 1 & 3 & 4 \end{bmatrix}$
Step 1: Calculate the Determinant ($|A|$)
First, we check if the matrix is non-singular. We expand along the first row ($R_1$):
$|A| = 1 \begin{vmatrix} 4 & 3 \\ 3 & 4 \end{vmatrix} - 3 \begin{vmatrix} 1 & 3 \\ 1 & 4 \end{vmatrix} + 3 \begin{vmatrix} 1 & 4 \\ 1 & 3 \end{vmatrix}$
$|A| = 1(16 - 9) - 3(4 - 3) + 3(3 - 4)$
$|A| = 1(7) - 3(1) + 3(-1)$
$|A| = 7 - 3 - 3 = 1$
[As $|A| \neq 0$, $A^{-1}$ exists]
Step 2: Calculate the Cofactors ($A_{ij}$)
We calculate the cofactors for all nine elements using the formula $A_{ij} = (-1)^{i+j} M_{ij}$.
Cofactors of $R_1$:
$A_{11} = +(16 - 9) = 7$
$A_{12} = -(4 - 3) = -1$
$A_{13} = +(3 - 4) = -1$
Cofactors of $R_2$:
$A_{21} = -(12 - 9) = -3$
$A_{22} = +(4 - 3) = 1$
$A_{23} = -(3 - 3) = 0$
Cofactors of $R_3$:
$A_{31} = +(9 - 12) = -3$
$A_{32} = -(3 - 3) = 0$
$A_{33} = +(4 - 3) = 1$
Step 3: Form the Adjoint Matrix ($\text{adj } A$)
The Adjoint is the transpose of the cofactor matrix. We arrange the cofactors calculated above into columns:
$\text{adj } A = \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
Step 4: Divide Adjoint by Determinant
Finally, we apply the inverse formula to obtain $A^{-1}$:
$A^{-1} = \frac{1}{|A|} \text{adj } A$
Substituting the values from the above equations, we get:
$A^{-1} = \frac{1}{1} \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
$A^{-1} = \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}$
[Final Inverse Matrix]
Verification: We can check the result by performing matrix multiplication $A \cdot A^{-1} = I$.
$\begin{bmatrix} 1 & 3 & 3 \\ 1 & 4 & 3 \\ 1 & 3 & 4 \end{bmatrix} \begin{bmatrix} 7 & -3 & -3 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 7-3-3 & -3+3+0 & -3+0+3 \\ 7-4-3 & -3+4+0 & -3+0+3 \\ 7-3-4 & -3+3+0 & -3+0+4 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
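The four-step procedure translates directly into code. This sketch (plain Python, exact arithmetic via the standard `fractions` module) reproduces the worked example:

```python
from fractions import Fraction

def cof(m, i, j):
    # Cofactor A_ij = (-1)^(i+j) times the minor (delete row i, column j)
    rows = [r for k, r in enumerate(m) if k != i]
    sub = [[v for k, v in enumerate(r) if k != j] for r in rows]
    return (-1)**(i + j) * (sub[0][0]*sub[1][1] - sub[0][1]*sub[1][0])

def inverse3(m):
    det = sum(m[0][j]*cof(m, 0, j) for j in range(3))  # Step 1: |A|
    # Steps 2-4: cofactors, transpose (adjoint), divide by |A|
    return [[Fraction(cof(m, j, i), det) for j in range(3)] for i in range(3)]

A = [[1, 3, 3], [1, 4, 3], [1, 3, 4]]
print(inverse3(A))  # equals [[7, -3, -3], [-1, 1, 0], [-1, 0, 1]]
```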
Theorem 4: Cancellation Laws in Matrix Algebra
In ordinary algebra, if $ax = ay$ and $a \neq 0$, we can "cancel" $a$ to conclude that $x = y$. However, in matrix algebra, the Cancellation Law does not hold in general. Theorem 4 defines the specific condition (non-singularity) under which we can safely cancel a matrix from an equation.
Statement of Theorem 4
Let $A$, $B$, and $C$ be square matrices of the same order $n$. If $A$ is a non-singular matrix (i.e., $|A| \neq 0$), then:
(i) Left Cancellation Law:
$AB = AC \implies B = C$
(ii) Right Cancellation Law:
$BA = CA \implies B = C$
Proof of the Theorem
Proof of (i) Left Cancellation Law
Given: $A, B, C$ are square matrices and $A$ is non-singular ($|A| \neq 0$).
Since $A$ is non-singular, its inverse $A^{-1}$ exists. Now, consider the equation:
$AB = AC$
Pre-multiplying both sides of the above equation by $A^{-1}$:
$A^{-1}(AB) = A^{-1}(AC)$
By the associative property of matrix multiplication:
$(A^{-1}A)B = (A^{-1}A)C$
Since $A^{-1}A = I$ (Identity Matrix):
$IB = IC$
$B = C$
[Property of Identity Matrix]
Proof of (ii) Right Cancellation Law
Consider the equation:
$BA = CA$
Post-multiplying both sides of the above equation by $A^{-1}$ (which exists because $|A| \neq 0$):
$(BA)A^{-1} = (CA)A^{-1}$
By using the associative property:
$B(AA^{-1}) = C(AA^{-1})$
Since $AA^{-1} = I$:
$BI = CI$
$B = C$
[Property of Identity Matrix]
Crucial Remark
Warning: The condition $|A| \neq 0$ is absolutely necessary. If $A$ is a singular matrix ($|A| = 0$), then $AB = AC$ does not necessarily imply $B = C$.
Counter-Example
Let $A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ (Singular as $|A|=0$).
Let $B = \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix}$ and $C = \begin{bmatrix} 0 & 0 \\ 5 & 5 \end{bmatrix}$.
Calculating $AB$:
$AB = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
Calculating $AC$:
$AC = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 5 & 5 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$
Here, $AB = AC = O$ (Zero Matrix), but clearly $B \neq C$. This demonstrates why you cannot cancel a singular matrix.
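The counter-example runs exactly as stated:

```python
def matmul2(A, B):
    # Standard 2x2 matrix product
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]   # singular: |A| = 0
B = [[0, 0], [1, 1]]
C = [[0, 0], [5, 5]]
print(matmul2(A, B) == matmul2(A, C), B == C)  # True False: AB = AC but B != C
```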
Theorem 5: Reversal Law for Inverses
The Reversal Law for Inverses is a fundamental property in matrix algebra which states that the inverse of the product of two invertible matrices is the product of their inverses taken in the reverse order.
Statement of Theorem 5
If $A$ and $B$ are invertible (non-singular) square matrices of the same order $n$, then their product $AB$ is also invertible, and the inverse is given by:
$(AB)^{-1} = B^{-1}A^{-1}$
Proof and Derivation
Part 1: Proving that $AB$ is Invertible
A matrix is invertible if and only if its determinant is non-zero. Since $A$ and $B$ are invertible matrices:
$|A| \neq 0$ and $|B| \neq 0$
(Given)
We know from the property of determinants of product matrices:
$|AB| = |A| \cdot |B|$
Since the product of two non-zero numbers is always non-zero, we have $|AB| \neq 0$. Therefore, $AB$ is a non-singular matrix and its inverse $(AB)^{-1}$ exists.
Part 2: Verifying the Reversal Law
By the definition of an inverse, a matrix $X$ is the inverse of $Y$ if $XY = YX = I$. To prove that $B^{-1}A^{-1}$ is the inverse of $AB$, we must show that their product results in the Identity matrix $I$.
Let us consider the product $(AB)(B^{-1}A^{-1})$:
$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1}$
[By Associative Property]
$= A(I)A^{-1}$
[Since $BB^{-1} = I$]
$= AA^{-1} = I$
[Property of Identity Matrix]
Now, let us check the product in the reverse order, i.e., $(B^{-1}A^{-1})(AB)$:
$(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B$
[By Associative Property]
$= B^{-1}(I)B$
[Since $A^{-1}A = I$]
$= B^{-1}B = I$
Since $(AB)(B^{-1}A^{-1}) = (B^{-1}A^{-1})(AB) = I$, by the definition of matrix inverse, we conclude:
$(AB)^{-1} = B^{-1}A^{-1}$
[Hence Proved]
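A numeric spot-check of the reversal law, using illustrative invertible $2 \times 2$ matrices and exact arithmetic via the standard `fractions` module:

```python
from fractions import Fraction

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def inv2(m):
    # Shortcut rule: swap the diagonal, negate the off-diagonal, divide by det
    a, b = m[0]; c, d = m[1]
    k = Fraction(1, det2(m))
    return [[k*d, -k*b], [-k*c, k*a]]

def matmul2(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[2, 0], [1, 3]]
print(inv2(matmul2(A, B)) == matmul2(inv2(B), inv2(A)))  # True
print(inv2(matmul2(A, B)) == matmul2(inv2(A), inv2(B)))  # False: the order matters
```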
Generalized Reversal Law
The law can be extended to the product of any number of non-singular square matrices of the same order. If $A_1, A_2, A_3, \dots, A_n$ are invertible matrices, then:
$(A_1 A_2 A_3 \dots A_n)^{-1} = A_n^{-1} \dots A_3^{-1} A_2^{-1} A_1^{-1}$
Important Remarks and Applications
1. Relation with Reversal Law for Transposes
We should note the similarity with the reversal law for transposes, which states $(AB)' = B'A'$. However, while the transpose law applies to all matrices (provided the product is defined), the inverse law only applies to square non-singular matrices.
2. Inverse of a Matrix Power
If we take $B = A$ in Theorem 5, we get $(A \cdot A)^{-1} = A^{-1} \cdot A^{-1} \implies (A^2)^{-1} = (A^{-1})^2$. In general:
$(A^m)^{-1} = (A^{-1})^m$
3. Double Inverse Property
The inverse of an inverse matrix returns the original matrix:
$(A^{-1})^{-1} = A$
Theorem 6: Transpose of an Inverse
The Transpose of an Inverse property (Theorem 6) states that for any non-singular square matrix, the operations of taking the inverse and taking the transpose are commutative. In simpler terms, the inverse of the transpose of a matrix is equal to the transpose of its inverse.
Statement of Theorem 6
If $A$ is an invertible (non-singular) square matrix, then its transpose $A'$ is also invertible, and the following identity holds:
$(A')^{-1} = (A^{-1})'$
Verification and Proof
Part 1: Existence of $(A')^{-1}$
For a matrix to be invertible, its determinant must be non-zero. Let $A$ be an invertible matrix of order $n$. By the property of transposes, we know that the determinant of a matrix is equal to the determinant of its transpose:
$\text{det}(A') = \text{det}(A)$
Since $A$ is invertible, $|A| \neq 0$, which implies $|A'| \neq 0$. Thus, $A'$ is non-singular and its inverse $(A')^{-1}$ is guaranteed to exist.
Part 2: Derivation of the Identity
By the fundamental definition of a matrix inverse, we have:
$AA^{-1} = I = A^{-1}A$
Taking the transpose of each side of the above equation:
$(AA^{-1})' = I' = (A^{-1}A)'$
Applying the Reversal Law for Transposes, which states $(AB)' = B'A'$, and noting that the transpose of the Identity matrix is itself ($I' = I$):
$(A^{-1})' A' = I$
[Applying reversal law to $(AA^{-1})'$] ... (i)
Similarly, expanding the second part of the above equation:
$A' (A^{-1})' = I$
[Applying reversal law to $(A^{-1}A)'$] ... (ii)
From equations (i) and (ii), we see that the product of $A'$ and $(A^{-1})'$ results in the Identity matrix $I$. By the definition of matrix inverse ($XY = I \implies Y = X^{-1}$), we conclude:
$(A')^{-1} = (A^{-1})'$
[Hence Proved]
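As a quick numerical sanity check of Theorem 6 (an illustrative sketch only; the sample matrix is ours, not from the text), we can verify $(A')^{-1} = (A^{-1})'$ on a non-singular $2 \times 2$ matrix:

```python
from fractions import Fraction

def transpose(M):
    return [list(row) for row in zip(*M)]

def inv2(M):
    # 2x2 inverse via A^{-1} = (1/|A|) adj A
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    assert det != 0, "matrix must be non-singular"
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

A = [[Fraction(1), Fraction(2)], [Fraction(3), Fraction(5)]]

# (A')^{-1} = (A^{-1})'
assert inv2(transpose(A)) == transpose(inv2(A))
```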
Theorem 7: Determinant of an Adjoint Matrix
This property allows students to find the determinant of an adjoint matrix without having to actually compute the adjoint itself, which is a significant time-saver.
Statement of the Theorem
If $A$ is a non-singular square matrix of order $n$, then the determinant of its adjoint is equal to the determinant of the matrix raised to the power $(n-1)$.
$|\text{adj } A| = |A|^{n-1}$
Derivation of the Formula
To derive this property, we use the fundamental identity relating a matrix and its adjoint.
We know that for any square matrix $A$ of order $n$:
$A(\text{adj } A) = |A|I_n$
[Identity property]
Now, taking the determinant on both sides of the above equation:
$\text{det}(A \cdot \text{adj } A) = \text{det}(|A|I_n)$
Using the property of determinants of product matrices, $\text{det}(AB) = \text{det}(A)\text{det}(B)$, we expand the left side:
$\text{det}(A) \cdot \text{det}(\text{adj } A) = \text{det}(|A|I_n)$
...(i)
On the right side, $|A|$ is a scalar constant $k$. We know that for a constant $k$ and a matrix of order $n$, $\text{det}(kI_n) = k^n \text{det}(I_n)$. Since $\text{det}(I_n) = 1$, we get:
$\text{det}(|A|I_n) = |A|^n \cdot 1$
[Scalar multiplication property] ... (ii)
Substituting the result from (ii) into (i):
$|A| \cdot |\text{adj } A| = |A|^n$
Assuming $A$ is non-singular ($|A| \neq 0$), we can divide both sides by $|A|$:
$|\text{adj } A| = \frac{|A|^n}{|A|} $
$|\text{adj } A| = |A|^{n-1}$
[Hence Proved]
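Theorem 7 is easy to confirm numerically. The sketch below (illustrative Python; helper names `det3`, `cofactor`, `adj3` are ours) checks $|\text{adj } A| = |A|^{n-1}$ with $n = 3$, using the matrix that also appears in Example 4 later in this page:

```python
# Check |adj A| = |A|^{n-1} for n = 3.

def det3(M):
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cofactor(M, r, s):
    # determinant of the 2x2 minor, with the checkerboard sign
    minor = [[M[i][j] for j in range(3) if j != s] for i in range(3) if i != r]
    m = minor[0][0]*minor[1][1] - minor[0][1]*minor[1][0]
    return (-1) ** (r + s) * m

def adj3(M):
    # adjoint = transpose of the cofactor matrix
    return [[cofactor(M, j, i) for j in range(3)] for i in range(3)]

A = [[1, -1, 1], [2, 1, -3], [1, 1, 1]]
assert det3(A) == 10
assert det3(adj3(A)) == det3(A) ** (3 - 1)   # |adj A| = |A|^2 = 100
```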
Solution of a System of Linear Equations
In the study of Linear Algebra, one of the most significant applications is solving a set of simultaneous linear equations. While elementary methods like substitution and elimination work for two variables, the Matrix Inverse Method is a systematic approach required for systems involving three or more variables.
This method allows us to determine the Consistency and Inconsistency of the system, which indicates whether a solution exists or not.
Consistency and Inconsistency of Systems
Before solving a system, we must classify it based on the existence of solutions. The following definitions are fundamental:
Consistent System: A system of equations is said to be consistent if it has at least one solution (either a unique solution or infinitely many solutions).
Inconsistent System: A system of equations is said to be inconsistent if it has no solution at all.
Nature of Solutions in Non-Homogeneous Systems
For a non-homogeneous system (where the constant terms are not all zero), there are three possibilities:
| Possibility | System Classification | Geometrical Interpretation |
|---|---|---|
| Unique Solution | Consistent and Independent | Lines/Planes intersect at a single point. |
| Infinite Solutions | Consistent and Dependent | Lines/Planes are coincident (overlapping). |
| No Solution | Inconsistent | Lines/Planes are parallel and distinct. |
Representation as a Matrix Equation
Consider a system of simultaneous linear equations in three variables $x$, $y$, and $z$:
$a_{1}x + b_{1}y + c_{1}z = d_{1}$
$a_{2}x + b_{2}y + c_{2}z = d_{2}$
$a_{3}x + b_{3}y + c_{3}z = d_{3}$
This system can be compacted into a single matrix equation. We define three specific matrices:
1. Coefficient Matrix ($A$): Formed by the coefficients of the variables.
$A = \begin{bmatrix} a_{1} & b_{1} & c_{1} \\ a_{2} & b_{2} & c_{2} \\ a_{3} & b_{3} & c_{3} \end{bmatrix}$
2. Variable Matrix ($X$): A column matrix consisting of the unknowns.
$X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$
3. Constant Matrix ($B$): A column matrix consisting of the terms on the right-hand side.
$B = \begin{bmatrix} d_{1} \\ d_{2} \\ d_{3} \end{bmatrix}$
The entire system is then represented by the linear matrix equation:
$AX = B$
Derivation of the Matrix Method (Martin's Rule)
To solve a system of linear equations using matrices, we first represent the system in the form of the matrix equation $AX = B$. For this method to yield a specific solution, we must assume that the coefficient matrix $A$ is non-singular, meaning its determinant $|A|$ is non-zero ($|A| \neq 0$), which guarantees that $A^{-1}$ exists.
Step-by-Step Derivation:
Consider the matrix equation representing the system of equations:
$AX = B$
Since $A$ is non-singular, $A^{-1}$ exists. We pre-multiply both sides of the above equation by $A^{-1}$:
$A^{-1}(AX) = A^{-1}B$
[Pre-multiplying by $A^{-1}$]
By the Associative Law of matrix multiplication, we can change the grouping of the matrices:
$(A^{-1}A)X = A^{-1}B$
[By Associative Law]
By the definition of an inverse matrix, the product of a matrix and its inverse results in the Identity Matrix ($I$):
$IX = A^{-1}B$
[$\because A^{-1}A = I$]
Since multiplying any matrix by the Identity Matrix leaves it unchanged ($IX = X$):
$X = A^{-1}B$
This result, $X = A^{-1}B$, allows us to find the values of all variables in $X$ by calculating the inverse of $A$ and multiplying it by the constant matrix $B$.
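The recipe $X = A^{-1}B$ translates directly into code. Below is a minimal Python sketch for the $2 \times 2$ case (the function name `solve2` and the sample system are illustrative; the system happens to be the one solved step by step in Example 3 further down):

```python
from fractions import Fraction

def solve2(A, B):
    # Matrix method X = A^{-1} B for a 2x2 system.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    assert det != 0, "matrix method needs a non-singular coefficient matrix"
    # A^{-1} = (1/|A|) adj A: swap diagonal entries, negate off-diagonal ones
    inv = [[ A[1][1]/det, -A[0][1]/det],
           [-A[1][0]/det,  A[0][0]/det]]
    return [inv[0][0]*B[0] + inv[0][1]*B[1],
            inv[1][0]*B[0] + inv[1][1]*B[1]]

# 2x + 5y = 1, 3x + 2y = 7
A = [[Fraction(2), Fraction(5)], [Fraction(3), Fraction(2)]]
B = [Fraction(1), Fraction(7)]
assert solve2(A, B) == [3, -1]   # x = 3, y = -1
```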
Proof of Uniqueness of Solution
It is not enough to find a solution; we must also verify that only one such solution exists. We prove this using the method of contradiction.
Proof:
Let us assume that the system $AX = B$ has two distinct solutions, $X_{1}$ and $X_{2}$. If both are solutions, they must satisfy the original equation:
$AX_{1} = B$
$AX_{2} = B$
Since both expressions are equal to $B$, we can equate them:
$AX_{1} = AX_{2}$
Now, we pre-multiply both sides by $A^{-1}$ (which exists because $|A| \neq 0$):
$A^{-1}(AX_{1}) = A^{-1}(AX_{2})$
[Pre-multiplying by $A^{-1}$]
Using the Associative Law again:
$(A^{-1}A)X_{1} = (A^{-1}A)X_{2}$
Substituting the identity matrix $I$ for $A^{-1}A$:
$IX_{1} = IX_{2}$
Which simplifies to:
$X_{1} = X_{2}$
This contradicts our assumption that $X_{1}$ and $X_{2}$ were distinct solutions. Therefore, the solution of the matrix equation $AX = B$ is unique.
Criterion for Consistency or Inconsistency
In the study of Linear Algebra, determining whether a system of equations can be solved is as important as finding the solution itself. For a system of non-homogeneous linear equations represented by the matrix equation $AX = B$, the determinant of the coefficient matrix $A$ and the adjoint of $A$ play a decisive role in identifying the nature of the system.
Definitions of Consistency
A system of equations is classified based on the existence and number of solutions it possesses:
1. Consistent System: A system is said to be consistent if it has at least one set of values for the unknowns that satisfies all equations. This includes systems with a unique solution or infinitely many solutions.
2. Inconsistent System: A system is said to be inconsistent if there is no set of values that satisfies all equations simultaneously. In this case, the Solution Set is empty.
The Criterion Table
For a system of $n$ linear equations with $n$ unknowns, the following criteria are applied to check for consistency:
| Determinant Condition | Product Condition | Nature of System | Solution Type |
|---|---|---|---|
| $|A| \neq 0$ | Not Required | Consistent & Independent | Unique Solution |
| $|A| = 0$ | $(\text{adj } A)B = O$ | Consistent & Dependent | Infinitely many solutions |
| $|A| = 0$ | $(\text{adj } A)B \neq O$ | Inconsistent | No solution |
Detailed Analysis of Cases
Case I: Non-Singular Matrix ($|A| \neq 0$)
If the determinant of the coefficient matrix is non-zero, the matrix $A$ is invertible. The system is always consistent and possesses a single, unique solution.
$X = A^{-1}B$
Case II: Singular Matrix ($|A| = 0$)
When the determinant is zero, $A^{-1}$ does not exist. The nature of the system then depends on the product of the Adjoint Matrix and the Constant Matrix $B$.
(a) If $(\text{adj } A)B \neq O$: This represents a situation where the equations are contradictory. Geometrically, in a 2D system, the lines are parallel and distinct. The system is Inconsistent and has No Solution.
(b) If $(\text{adj } A)B = O$: This represents a situation where the equations are redundant or dependent. Geometrically, the lines or planes coincide. The system is generally Consistent and has Infinitely Many Solutions.
Note on Singular Case:
In some rare cases where $|A| = 0$ and $(\text{adj } A)B = O$, the system could still be inconsistent if the planes do not have a common intersection, but this condition is typically treated as the case for infinite solutions.
Example 1. Examine the consistency of the system:
$x + 3y = 5$
$2x + 6y = 10$
Answer:
The system can be written as $AX = B$, where:
$A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \end{bmatrix}$ and $B = \begin{bmatrix} 5 \\ 10 \end{bmatrix}$
First, find the determinant of $A$:
$|A| = (1 \times 6) - (3 \times 2) = 6 - 6 = 0$
Since $|A| = 0$, $A$ is a singular matrix. We check $(\text{adj } A)B$.
$\text{adj } A = \begin{bmatrix} 6 & -3 \\ -2 & 1 \end{bmatrix}$
$(\text{adj } A)B = \begin{bmatrix} 6 & -3 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ 10 \end{bmatrix} = \begin{bmatrix} 30 - 30 \\ -10 + 10 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} = O$
Since $|A| = 0$ and $(\text{adj } A)B = O$, the system is consistent and has infinitely many solutions.
For any arbitrary number $k$, let $y = k$. Then $x = 5 - 3k$. This pair $(5-3k, k)$ satisfies the system.
Example 2. Examine the consistency of the system:
$x + 3y = 5$
$2x + 6y = 8$
Answer:
Here, $A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}$ and $B = \begin{bmatrix} 5 \\ 8 \end{bmatrix}$
Calculating the determinant:
$|A| = (1 \times 6) - (3 \times 2) = 0$
Calculating $(\text{adj } A)B$:
$(\text{adj } A)B = \begin{bmatrix} 6 & -3 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ 8 \end{bmatrix} = \begin{bmatrix} 30 - 24 \\ -10 + 8 \end{bmatrix} = \begin{bmatrix} 6 \\ -2 \end{bmatrix} \neq O$
Since $|A| = 0$ and $(\text{adj } A)B \neq O$, the system is inconsistent and has no solution.
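The two singular examples above can be checked mechanically by the criterion table. Here is an illustrative Python sketch (the helper name `classify` is ours) that computes $|A|$ and, when it vanishes, $(\text{adj } A)B$ for a $2 \times 2$ system:

```python
def classify(A, B):
    # Consistency test for a 2x2 system AX = B, per the criterion table.
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    if det != 0:
        return "unique solution"
    # |A| = 0: decide using (adj A)B
    adj = [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]
    adjB = [adj[0][0]*B[0] + adj[0][1]*B[1],
            adj[1][0]*B[0] + adj[1][1]*B[1]]
    return "infinitely many solutions" if adjB == [0, 0] else "no solution"

A = [[1, 3], [2, 6]]
assert classify(A, [5, 10]) == "infinitely many solutions"   # Example 1
assert classify(A, [5, 8])  == "no solution"                 # Example 2
```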
Example 3. Solve the following system of linear equations using the matrix method:
$2x + 5y = 1$
$3x + 2y = 7$
Answer:
Step 1: Represent the system in matrix form $AX = B$.
Let $A = \begin{bmatrix} 2 & 5 \\ 3 & 2 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \end{bmatrix}$, and $B = \begin{bmatrix} 1 \\ 7 \end{bmatrix}$.
Step 2: Find the determinant $|A|$.
$|A| = (2 \times 2) - (5 \times 3) = 4 - 15 = -11$
$|A| \neq 0$
[Non-singular matrix]
Since $|A| \neq 0$, the inverse $A^{-1}$ exists and the system has a unique solution.
Step 3: Find the Adjoint of $A$ ($\text{adj } A$).
For a $2 \times 2$ matrix, we interchange the diagonal elements and change the signs of the off-diagonal elements:
$\text{adj } A = \begin{bmatrix} 2 & -5 \\ -3 & 2 \end{bmatrix}$
Step 4: Find the Inverse $A^{-1}$.
$A^{-1} = \frac{1}{|A|} \text{adj } A = \frac{1}{-11} \begin{bmatrix} 2 & -5 \\ -3 & 2 \end{bmatrix} = \begin{bmatrix} -2/11 & 5/11 \\ 3/11 & -2/11 \end{bmatrix}$
Step 5: Solve for $X = A^{-1}B$.
$X = \begin{bmatrix} -2/11 & 5/11 \\ 3/11 & -2/11 \end{bmatrix} \begin{bmatrix} 1 \\ 7 \end{bmatrix}$
$X = \begin{bmatrix} (-2/11 \times 1) + (5/11 \times 7) \\ (3/11 \times 1) + (-2/11 \times 7) \end{bmatrix}$
$X = \begin{bmatrix} -2/11 + 35/11 \\ 3/11 - 14/11 \end{bmatrix} = \begin{bmatrix} 33/11 \\ -11/11 \end{bmatrix} = \begin{bmatrix} 3 \\ -1 \end{bmatrix}$
Final Solution: $x = 3, y = -1$.
Example 4. Solve the system of equations using matrix method:
$x - y + z = 4$
$2x + y - 3z = 0$
$x + y + z = 2$
Answer:
Step 1: Matrix Representation $AX = B$.
$A = \begin{bmatrix} 1 & -1 & 1 \\ 2 & 1 & -3 \\ 1 & 1 & 1 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, $B = \begin{bmatrix} 4 \\ 0 \\ 2 \end{bmatrix}$
Step 2: Find $|A|$.
$|A| = 1(1 - (-3)) - (-1)(2 - (-3)) + 1(2 - 1)$
$|A| = 1(4) + 1(5) + 1(1) = 4 + 5 + 1 = 10$
Since $|A| = 10 \neq 0$, $A^{-1}$ exists.
Step 3: Find the Cofactors of $A$.
$A_{11} = 4, A_{12} = -5, A_{13} = 1$
$A_{21} = 2, A_{22} = 0, A_{23} = -2$
$A_{31} = 2, A_{32} = 5, A_{33} = 3$
Step 4: Find $\text{adj } A$ and $A^{-1}$.
$\text{adj } A = \begin{bmatrix} 4 & -5 & 1 \\ 2 & 0 & -2 \\ 2 & 5 & 3 \end{bmatrix}^T = \begin{bmatrix} 4 & 2 & 2 \\ -5 & 0 & 5 \\ 1 & -2 & 3 \end{bmatrix}$
$A^{-1} = \frac{1}{10} \begin{bmatrix} 4 & 2 & 2 \\ -5 & 0 & 5 \\ 1 & -2 & 3 \end{bmatrix}$
Step 5: Solve $X = A^{-1}B$.
$X = \frac{1}{10} \begin{bmatrix} 4 & 2 & 2 \\ -5 & 0 & 5 \\ 1 & -2 & 3 \end{bmatrix} \begin{bmatrix} 4 \\ 0 \\ 2 \end{bmatrix}$
$X = \frac{1}{10} \begin{bmatrix} 16 + 0 + 4 \\ -20 + 0 + 10 \\ 4 + 0 + 6 \end{bmatrix} = \frac{1}{10} \begin{bmatrix} 20 \\ -10 \\ 10 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}$
Final Solution: $x = 2, y = -1, z = 1$.
Example 5. The cost of $4\textsf{ kg}$ onion, $3\textsf{ kg}$ wheat and $2\textsf{ kg}$ rice is $\textsf{₹} 60$. The cost of $2\textsf{ kg}$ onion, $4\textsf{ kg}$ wheat and $6\textsf{ kg}$ rice is $\textsf{₹} 90$. The cost of $6\textsf{ kg}$ onion $2\textsf{ kg}$ wheat and $3\textsf{ kg}$ rice is $\textsf{₹} 70$. Find cost of each item per kg by matrix method.
Answer:
Let the cost per kg of onion, wheat, and rice be $x, y, \text{ and } z$ respectively.
According to the question, we have the following system of equations:
$4x + 3y + 2z = 60$
$2x + 4y + 6z = 90$
$6x + 2y + 3z = 70$
This can be written as $AX = B$ where:
$A = \begin{bmatrix} 4 & 3 & 2 \\ 2 & 4 & 6 \\ 6 & 2 & 3 \end{bmatrix}$, $X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$, $B = \begin{bmatrix} 60 \\ 90 \\ 70 \end{bmatrix}$
On calculating, we find $|A| = 4(12 - 12) - 3(6 - 36) + 2(4 - 24) = 0 + 90 - 40 = 50$.
Since $|A| = 50 \neq 0$, a unique solution exists. Using the formula $X = A^{-1}B$, we find:
$\text{adj } A = \begin{bmatrix} 0 & -5 & 10 \\ 30 & 0 & -20 \\ -20 & 10 & 10 \end{bmatrix}$
$X = \frac{1}{50} \begin{bmatrix} 0 & -5 & 10 \\ 30 & 0 & -20 \\ -20 & 10 & 10 \end{bmatrix} \begin{bmatrix} 60 \\ 90 \\ 70 \end{bmatrix} = \begin{bmatrix} 5 \\ 8 \\ 8 \end{bmatrix}$
Final Solution: Cost of onion = $\textsf{₹} 5/\text{kg}$, Wheat = $\textsf{₹} 8/\text{kg}$, Rice = $\textsf{₹} 8/\text{kg}$.
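Example 5 can be reproduced end to end with a generic $3 \times 3$ matrix-method solver. The sketch below is illustrative Python (the names `det3`, `adj3`, `solve3` are ours), implementing $X = \frac{1}{|A|}(\text{adj } A)B$ exactly as in the worked solution:

```python
from fractions import Fraction

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def adj3(M):
    # adjoint = transpose of the cofactor matrix
    def cof(r, s):
        m = [[M[i][j] for j in range(3) if j != s] for i in range(3) if i != r]
        return (-1) ** (r + s) * (m[0][0]*m[1][1] - m[0][1]*m[1][0])
    return [[cof(j, i) for j in range(3)] for i in range(3)]

def solve3(A, B):
    d = det3(A)
    assert d != 0, "matrix method needs |A| != 0"
    adj = adj3(A)
    # X = (1/|A|) (adj A) B
    return [sum(Fraction(adj[i][k], d) * B[k] for k in range(3))
            for i in range(3)]

A = [[4, 3, 2], [2, 4, 6], [6, 2, 3]]
B = [60, 90, 70]
assert det3(A) == 50
assert solve3(A, B) == [5, 8, 8]   # onion 5, wheat 8, rice 8 (per kg)
```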
Solution of Linear Equations by Determinants
The method of solving a system of simultaneous linear equations using determinants is one of the most elegant applications of Linear Algebra. Known popularly as Cramer’s Rule, this technique provides a direct way to compute the values of unknown variables without performing matrix inversion. It is highly valued in Competitive Examinations like JEE Mains and NDA for its procedural simplicity.
Example of a Three-Variable System
Consider a non-homogeneous system of linear equations in three unknowns ($x, y, z$):
$x + y + z = 6$
$x - y + z = 2$
$2x + y - z = 1$
To determine if a set of values is a solution, we must substitute those values into every equation. If the Left Hand Side (LHS) equals the Right Hand Side (RHS) for all equations, the set is a valid solution.
Verification of Solution
Let us verify the values $x = 1, y = 2, \text{ and } z = 3$ for the system provided above.
Step 1: Check Equation 1
$1 + 2 + 3 = 6$
($6 = 6$)
Step 2: Check Equation 2
$1 - 2 + 3 = 2$
($2 = 2$)
Step 3: Check Equation 3
$2(1) + 2 - 3 = 1$
($1 = 1$)
Since the values satisfy all three equations simultaneously, $(x, y, z) = (1, 2, 3)$ is the unique solution of this system.
Classification of Systems of Equations
To apply Cramer's Rule effectively, we must first classify the system based on its constant terms and its solvability.
(i) Homogeneous vs. Non-Homogeneous Systems
Consider the general form of a linear system:
$a_1x + b_1y + c_1z = d_1$
$a_2x + b_2y + c_2z = d_2$
$a_3x + b_3y + c_3z = d_3$
Homogeneous Systems of Equations
A system of linear equations is said to be Homogeneous if the constant term in every equation is zero. The general form for a 3-variable system is:
$a_1x + b_1y + c_1z = 0$
$a_2x + b_2y + c_2z = 0$
$a_3x + b_3y + c_3z = 0$
Key Characteristics:
(i) Always Consistent: A homogeneous system can never be inconsistent. Since the values $x = 0, y = 0, z = 0$ always satisfy every equation, the system always has at least one solution.
(ii) Trivial and Non-Trivial Solutions:
$\bullet$ Trivial Solution: The solution $\{0, 0, 0\}$ is called the trivial solution. It exists regardless of the coefficients.
$\bullet$ Non-Trivial Solution: If the system has solutions other than the origin, they are called non-trivial solutions. These exist only if the determinant of the coefficient matrix is zero.
$D = 0$
[Condition for Non-Trivial Solutions]
(iii) Geometrical Interpretation: Every equation represents a plane passing through the Origin (0,0,0). Therefore, all planes in a homogeneous system intersect at least at the origin.
Non-Homogeneous Systems of Equations
A system is called Non-Homogeneous if at least one constant term on the RHS is non-zero ($d_i \neq 0$). The general form is:
$a_1x + b_1y + c_1z = d_1$
$a_2x + b_2y + c_2z = d_2$
$a_3x + b_3y + c_3z = d_3$
Key Characteristics:
(i) Consistency is Not Guaranteed: Unlike homogeneous systems, these can be Inconsistent (having no solution). This occurs when the planes are parallel or arranged such that they do not share a common point of intersection.
(ii) Types of Solutions:
$\bullet$ Unique Solution: Occurs if $D \neq 0$.
$\bullet$ Infinite Solutions: Occurs if $D = 0$ and $D_1 = D_2 = D_3 = 0$.
$\bullet$ No Solution: Occurs if $D = 0$ and at least one of $D_1, D_2, D_3 \neq 0$.
(iii) Geometrical Interpretation: The planes represented by these equations do not necessarily pass through the origin. Their intersection depends entirely on the alignment of the coefficients and the constants.
Comparative Analysis (Summary Table)
The following table provides a direct comparison between the two systems:
| Feature | Homogeneous System ($AX = O$) | Non-Homogeneous System ($AX = B$) |
|---|---|---|
| Constant Terms | All constants are zero ($d_i = 0$). | At least one constant is non-zero ($d_i \neq 0$). |
| Consistency | Always Consistent. | May be Consistent or Inconsistent. |
| Solution $\{0, 0, 0\}$ | Always a solution (Trivial). | May or may not be a solution. |
| Determinant $D \neq 0$ | Unique Trivial solution only. | Unique Non-Trivial solution. |
| Determinant $D = 0$ | Infinitely many Non-Trivial solutions. | Infinite solutions OR No solution. |
| Origin (0,0,0) | Planes always pass through origin. | Planes may not pass through origin. |
(ii) Consistency vs. Inconsistency
$\bullet$ Consistent System: A system is said to be consistent if it possesses at least one solution. This includes cases with a unique solution or infinitely many solutions.
$\bullet$ Inconsistent System: A system is said to be inconsistent if it has no solution at all. Geometrically, this occurs when the lines or planes represented by the equations do not have a common point of intersection.
Detailed Nature of Solutions (Non-Homogeneous)
For a non-homogeneous system, there are three primary possibilities regarding the existence of solutions:
| Nature of Solution | System Classification | Geometrical interpretation |
|---|---|---|
| Unique Solution | Consistent and Independent | Intersection at a single point |
| Infinite Number of Solutions | Consistent and Dependent | Intersection along a line or coincident planes |
| No Solution | Inconsistent | Parallel and non-coincident lines/planes |
Cramer’s Rule for Two Linear Equations
Cramer’s Rule is a specific method in Linear Algebra used to solve a system of linear equations by using determinants. For a system involving two variables, it provides a direct formulaic approach to find the values of the unknowns. This method is also referred to as the determinant method of solving simultaneous equations.
Representation of the System
Consider a system of two non-homogeneous linear equations in two variables, $x$ and $y$:
$a_{1}x + b_{1}y = d_{1}$
…(i)
$a_{2}x + b_{2}y = d_{2}$
…(ii)
Here, $a_{1}, b_{1}, a_{2}, b_{2}$ are the coefficients of the variables, and $d_{1}, d_{2}$ are the constant terms on the Right Hand Side (RHS).
Defining the Determinants
To solve the system, we define three specific determinants: $D$, $D_{1}$ (often denoted as $D_{x}$), and $D_{2}$ (often denoted as $D_{y}$).
Summary of Determinant Components
| Determinant | Construction Rule | Mathematical Form |
|---|---|---|
| $D$ | Determinant of the coefficients of $x$ and $y$. | $\begin{vmatrix} a_{1} & b_{1} \\ a_{2} & b_{2} \end{vmatrix}$ |
| $D_{1}$ | Replace 1st column (coefficients of $x$) with constants $d_{1}, d_{2}$. | $\begin{vmatrix} d_{1} & b_{1} \\ d_{2} & b_{2} \end{vmatrix}$ |
| $D_{2}$ | Replace 2nd column (coefficients of $y$) with constants $d_{1}, d_{2}$. | $\begin{vmatrix} a_{1} & d_{1} \\ a_{2} & d_{2} \end{vmatrix}$ |
Formal Derivation of Cramer's Rule
The derivation involves the property of determinants where a scalar can be multiplied into a column and the use of elementary column operations.
Step 1: Start with the product of the variable $x$ and the coefficient determinant $D$. Using the property that a constant multiplied by a determinant is equivalent to multiplying any one column by that constant:
$x \cdot D = x \begin{vmatrix} a_{1} & b_{1} \\ a_{2} & b_{2} \end{vmatrix} = \begin{vmatrix} a_{1}x & b_{1} \\ a_{2}x & b_{2} \end{vmatrix}$
Step 2: Apply the column operation $C_{1} \to C_{1} + yC_{2}$. This operation does not change the value of the determinant:
$x \cdot D = \begin{vmatrix} a_{1}x + b_{1}y & b_{1} \\ a_{2}x + b_{2}y & b_{2} \end{vmatrix}$
Step 3: From equations (i) and (ii), we know that $a_{1}x + b_{1}y = d_{1}$ and $a_{2}x + b_{2}y = d_{2}$. Substituting these values into the first column:
$x \cdot D = \begin{vmatrix} d_{1} & b_{1} \\ d_{2} & b_{2} \end{vmatrix}$
[Substituting from Eq. (i) & (ii)]
Since the determinant on the RHS is defined as $D_{1}$, we have:
$x \cdot D = D_{1}$
Step 4: Similarly, by multiplying $D$ by $y$ and applying the column operation $C_{2} \to C_{2} + xC_{1}$, we can derive the relationship for $y$:
$y \cdot D = D_{2}$
The Unique Solution and the Condition $D \neq 0$
The existence of a unique solution for a system of linear equations depends entirely on the determinant of the coefficients, $D$. This is often linked to the concept of a Non-Singular Matrix. If the determinant $D$ is non-zero, it guarantees that the equations are independent and provide a single, specific point of intersection in the Cartesian plane.
Logical Conclusion of the Derivation
From the algebraic derivations performed using column operations, we established two fundamental relationships between the variables and the determinants:
$x \cdot D = D_{1}$
[From previous derivation]
$y \cdot D = D_{2}$
[From previous derivation]
Mathematically, to isolate $x$ and $y$, we must divide both sides by $D$. However, division by a variable or a value is only permissible if that value is not zero. Therefore, the condition $D \neq 0$ is the necessary and sufficient condition for a unique solution to exist.
The Final Formulas (Cramer's Rule)
When $D \neq 0$, the values of the unknowns are found by calculating the ratio of the modified determinants to the coefficient determinant:
$x = \frac{D_{1}}{D}$ , $y = \frac{D_{2}}{D}$
These formulas allow us to solve for $x$ and $y$ directly without using substitution or elimination methods; the pair of ratios locates the unique point where the two lines intersect.
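The rule above is short enough to write as a single function. The following is a minimal Python sketch (the name `cramer2` is ours) that builds $D$, $D_{1}$, $D_{2}$ and returns $x = D_{1}/D$, $y = D_{2}/D$, falling back to the consistency classification when $D = 0$:

```python
from fractions import Fraction

def cramer2(a1, b1, d1, a2, b2, d2):
    # Solve a1 x + b1 y = d1, a2 x + b2 y = d2 by Cramer's Rule.
    D  = a1*b2 - a2*b1          # coefficient determinant
    D1 = d1*b2 - d2*b1          # x-column replaced by the constants
    D2 = a1*d2 - a2*d1          # y-column replaced by the constants
    if D == 0:
        return "no solution" if (D1 != 0 or D2 != 0) else "infinitely many"
    return Fraction(D1, D), Fraction(D2, D)

# x + y = 3, 2x - y = 0  ->  x = 1, y = 2
assert cramer2(1, 1, 3, 2, -1, 0) == (1, 2)
# x + 2y = 4, 2x + 4y = 9  ->  parallel lines, inconsistent
assert cramer2(1, 2, 4, 2, 4, 9) == "no solution"
```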
Geometrical Interpretation of $D \neq 0$
In Coordinate Geometry, two linear equations in $x$ and $y$ represent two straight lines. The condition $D \neq 0$ signifies that the slopes of these two lines are different.
| Condition | Geometrical State | Result |
|---|---|---|
| $D \neq 0$ | Lines have different slopes. | Intersection at exactly one point. |
| $D = 0$ | Lines are parallel or coincident. | No intersection, or infinitely many common points. |
The pair $(x, y)$ obtained from these formulas represents the Coordinates of the Point of Intersection of these two lines.
Conditions for Consistency and Nature of Solutions
The solvability of a system of two linear equations depends entirely on the values of the three determinants: $D$, $D_{1}$, and $D_{2}$. By analyzing these values, we can categorize the system as Consistent (having a solution) or Inconsistent (having no solution).
The Three Fundamental Cases
For the non-homogeneous system $a_{1}x + b_{1}y = d_{1}$ and $a_{2}x + b_{2}y = d_{2}$, the nature of the solution set is determined by the following criteria:
| Condition | Nature of System | Number of Solutions | Geometrical Interpretation |
|---|---|---|---|
| $D \neq 0$ | Consistent and Independent | Unique Solution | Intersecting Lines |
| $D = 0$ and ($D_{1} \neq 0$ or $D_{2} \neq 0$) | Inconsistent | No Solution | Parallel Lines |
| $D = D_{1} = D_{2} = 0$ | Consistent and Dependent | Infinitely Many Solutions | Coincident Lines |
Detailed Analysis of Each Case
Case I: Unique Solution ($D \neq 0$)
When the determinant of the coefficients is non-zero, the system is Consistent. This condition implies that the ratios of the coefficients of $x$ and $y$ are not equal ($a_{1}/a_{2} \neq b_{1}/b_{2}$), meaning the two lines have different slopes.
The unique values for the variables are calculated as:
$x = \frac{D_{1}}{D}$ , $y = \frac{D_{2}}{D}$
Case II: No Solution ($D = 0$ and at least one of $D_{1}, D_{2} \neq 0$)
If $D = 0$, it implies that $a_{1}b_{2} - a_{2}b_{1} = 0$, or $\frac{a_{1}}{a_{2}} = \frac{b_{1}}{b_{2}}$. Geometrically, the two lines are parallel. If at least one of the other determinants ($D_{1}$ or $D_{2}$) is non-zero, it means the lines are parallel but distinct (they never touch).
In this scenario, the relationship $x \cdot D = D_{1}$ becomes:
$x \cdot 0 = \text{Non-Zero Value}$
[Mathematically Impossible]
Thus, the system is Inconsistent and the solution set is empty.
Case III: Infinitely Many Solutions ($D = D_{1} = D_{2} = 0$)
When all three determinants are zero, the ratios of the coefficients and the constants are all equal ($\frac{a_{1}}{a_{2}} = \frac{b_{1}}{b_{2}} = \frac{d_{1}}{d_{2}}$). Geometrically, this means the two equations represent the same line (Coincident Lines).
Every point on the line is a solution. To find the general solution in the Indian Boards format, we typically assume $x = k$ and express $y$ in terms of $k$:
$y = \frac{d_{1} - a_{1}k}{b_{1}}$, provided $b_{1} \neq 0$
Example 1. Solve the following system of linear equations using the determinant method (Cramer's Rule):
$x + y = 3$
…(i)
$2x - y = 0$
…(ii)
Answer:
Step 1: Identify the Coefficients and Constants.
From the given equations, we have:
$a_{1} = 1, b_{1} = 1, d_{1} = 3$
$a_{2} = 2, b_{2} = -1, d_{2} = 0$
Step 2: Calculate the Coefficient Determinant ($D$).
$D$ is formed by the coefficients of the variables $x$ and $y$:
$D = \begin{vmatrix} 1 & 1 \\ 2 & -1 \end{vmatrix}$
$D = (1 \times -1) - (1 \times 2)$
$D = -1 - 2 = -3$
($D \neq 0$)
Since $D \neq 0$, the system is Consistent and will have a Unique Solution.
Step 3: Calculate the Modified Determinants ($D_{1}$ and $D_{2}$).
To find $D_{1}$, replace the first column of $D$ with the constant terms ($3, 0$):
$D_{1} = \begin{vmatrix} 3 & 1 \\ 0 & -1 \end{vmatrix} = (3 \times -1) - (1 \times 0) = -3 - 0 = -3$
To find $D_{2}$, replace the second column of $D$ with the constant terms ($3, 0$):
$D_{2} = \begin{vmatrix} 1 & 3 \\ 2 & 0 \end{vmatrix} = (1 \times 0) - (3 \times 2) = 0 - 6 = -6$
Step 4: Find the values of $x$ and $y$.
Using Cramer’s Rule formulas:
$x = \frac{D_{1}}{D} = \frac{-3}{-3} = 1$
$y = \frac{D_{2}}{D} = \frac{-6}{-3} = 2$
Final Solution: $(x, y) = (1, 2)$. Geometrically, these two lines intersect at the point $(1, 2)$.
Example 2. Examine the nature of the solution for the system of equations:
$x + 2y = 4$
…(i)
$2x + 4y = 9$
…(ii)
Answer:
Step 1: Calculate the Coefficient Determinant ($D$).
$D = \begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}$
$D = (1 \times 4) - (2 \times 2)$
$D = 4 - 4 = 0$
(Zero determinant)
Since $D = 0$, the system is either Inconsistent (No Solution) or Consistent Dependent (Infinite Solutions). We must check $D_{1}$ and $D_{2}$ to proceed.
Step 2: Calculate $D_{1}$.
Replace the first column of $D$ with the constant terms ($4, 9$):
$D_{1} = \begin{vmatrix} 4 & 2 \\ 9 & 4 \end{vmatrix}$
$D_{1} = (4 \times 4) - (2 \times 9) = 16 - 18 = -2$
$D_{1} = -2$
($D_{1} \neq 0$)
Step 3: Analyze the Consistency.
We found that $D = 0$ but $D_{1} \neq 0$. Applying the relationship $x \cdot D = D_{1}$, we get:
$x \cdot 0 = -2$
[Which is impossible]
There is no real value of $x$ that satisfies this condition.
Conclusion: Since $D = 0$ and at least one of the other determinants ($D_{1}$) is non-zero, the system is Inconsistent. Geometrically, these equations represent Parallel Lines that never intersect.
Example 3. Solve the system of equations using Cramer's Rule:
$3x + 2y = 12$
$5x - 3y = 1$
Answer:
Step 1: Calculate $D$.
$D = \begin{vmatrix} 3 & 2 \\ 5 & -3 \end{vmatrix} = (3 \times -3) - (2 \times 5) = -9 - 10 = -19$
Step 2: Calculate $D_{1}$.
$D_{1} = \begin{vmatrix} 12 & 2 \\ 1 & -3 \end{vmatrix} = (12 \times -3) - (2 \times 1) = -36 - 2 = -38$
Step 3: Calculate $D_{2}$.
$D_{2} = \begin{vmatrix} 3 & 12 \\ 5 & 1 \end{vmatrix} = (3 \times 1) - (12 \times 5) = 3 - 60 = -57$
Step 4: Find $x$ and $y$.
$x = \frac{D_{1}}{D} = \frac{-38}{-19} = 2$
$y = \frac{D_{2}}{D} = \frac{-57}{-19} = 3$
Final Solution: $x = 2, y = 3$.
3. Cramer’s Rule for Three Linear Equations
Consider the system:
$a_1x + b_1y + c_1z = d_1$
$a_2x + b_2y + c_2z = d_2$
$a_3x + b_3y + c_3z = d_3$
Let $D$ be the determinant of the coefficients of $x, y, \text{ and } z$:
$D = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$
Let $D_1, D_2, \text{ and } D_3$ be obtained from $D$ by replacing the first, second, and third columns respectively by the constant terms $d_1, d_2, d_3$:
$D_1 = \begin{vmatrix} d_1 & b_1 & c_1 \\ d_2 & b_2 & c_2 \\ d_3 & b_3 & c_3 \end{vmatrix}$, $D_2 = \begin{vmatrix} a_1 & d_1 & c_1 \\ a_2 & d_2 & c_2 \\ a_3 & d_3 & c_3 \end{vmatrix}$, $D_3 = \begin{vmatrix} a_1 & b_1 & d_1 \\ a_2 & b_2 & d_2 \\ a_3 & b_3 & d_3 \end{vmatrix}$
If $D \neq 0$, the system has a unique solution given by:
$x = \frac{D_1}{D} \text{ , } y = \frac{D_2}{D} \text{ , } z = \frac{D_3}{D}$
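The three-variable rule is a direct extension of the two-variable case. Below is an illustrative Python sketch (the names `det3` and `cramer3` are ours) that forms $D_j$ by swapping column $j$ of $D$ for the constants and returns $(D_1/D, D_2/D, D_3/D)$:

```python
from fractions import Fraction

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def cramer3(A, d):
    # Cramer's Rule for three equations; requires D != 0.
    D = det3(A)
    assert D != 0, "Cramer's Rule requires D != 0"
    sol = []
    for j in range(3):
        # D_j: column j of A replaced by the constants d
        Dj = [[d[i] if k == j else A[i][k] for k in range(3)]
              for i in range(3)]
        sol.append(Fraction(det3(Dj), D))
    return sol

# x + y + z = 6, x - y + z = 2, 2x + y - z = 1  ->  x = 1, y = 2, z = 3
A = [[1, 1, 1], [1, -1, 1], [2, 1, -1]]
assert cramer3(A, [6, 2, 1]) == [1, 2, 3]
```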
4. Conditions for Consistency (Non-Homogeneous)
The following table summarizes the criteria for the nature of solutions in a non-homogeneous system:
| Condition | Nature of System | Number of Solutions |
|---|---|---|
| $D \neq 0$ | Consistent and Independent | Unique Solution |
| $D = 0$ and at least one of $D_1, D_2, D_3 \neq 0$ | Inconsistent | No Solution |
| $D = D_1 = D_2 = D_3 = 0$ | Consistent (Dependent) or Inconsistent | Infinitely many solutions (usually) |
5. Homogeneous System Analysis
For a homogeneous system where $d_1 = d_2 = d_3 = 0$:
1. The values $x=0, y=0, z=0$ always satisfy the equations. This is called the Trivial Solution.
2. If $D \neq 0$, the system has only the trivial solution.
3. If $D = 0$, the system has infinitely many solutions (including the trivial one). The non-zero solutions among them are called Non-Trivial Solutions.
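The homogeneous test reduces to a single determinant check. A minimal Python sketch (the helper name `has_nontrivial_solution` is ours) applying point 3 above:

```python
def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def has_nontrivial_solution(A):
    # A homogeneous system AX = O has non-trivial solutions iff D = 0.
    return det3(A) == 0

# Rows are dependent (row3 = row1 + row2), so D = 0: non-trivial solutions exist.
assert has_nontrivial_solution([[1, 2, 3], [4, 5, 6], [5, 7, 9]])
# Identity matrix: D = 1, only the trivial solution.
assert not has_nontrivial_solution([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```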
Competitive Exam Corner: Cramer's Rule (JEE/NDA)
1. Geometrical Meaning: For 2 variables, the equations represent straight lines. If $D \neq 0$, the lines intersect at one point. If $D=0, D_1 \neq 0$, the lines are parallel. If $D=D_1=D_2=0$, the lines coincide.
2. Homogeneous Shortcut: If a question in JEE Mains mentions a "non-trivial solution," immediately set the determinant $D = 0$.
3. Speed Tip: While solving 3-variable systems, if you find $D=0$, calculate $D_1$ first. If $D_1 \neq 0$, you can stop and conclude "No Solution" immediately without finding $D_2$ or $D_3$.
4. Sign Convention: In Indian competitive exams, constant terms are sometimes provided on the LHS (e.g., $ax + by + c = 0$). Always shift them to the RHS ($ax + by = -c$) before applying Cramer’s Rule to avoid sign errors in $D_1, D_2, D_3$.
5. Exam Insight: Cramer's Rule is often preferred over the Matrix Method in objective exams as it avoids the tedious calculation of cofactors and adjoints.