Chapter 5 Continuity and Differentiability (Concepts)
Welcome to Chapter 5: Continuity and Differentiability! This chapter explores the very heart of calculus, focusing on the smoothness and connectivity of functions. We formally define continuity at a point $x = c$ using the limit condition: $\lim_{x \to c} f(x) = f(c)$. For this to hold true, the Left-Hand Limit, Right-Hand Limit, and the actual function value must all be equal.
We then progress to Differentiability, which measures a function's instantaneous rate of change. A vital theorem states that every differentiable function is necessarily continuous, though the converse does not hold in general, as demonstrated by the sharp "corner" of the modulus function at the origin. To master complex calculations, we utilize the Chain Rule, Implicit Differentiation, and Logarithmic Differentiation.
The chapter also introduces second-order derivatives and foundational results like Rolle's Theorem and Lagrange's Mean Value Theorem. These theorems provide a critical link between the average rate of change over an interval and the slope of the tangent at a specific point.
To enhance your understanding, this page includes visualizations, flowcharts, mindmaps, and practical examples. It has been prepared by learningspot.co to provide a structured and comprehensive learning experience for every student, building a deep mastery of differential calculus.
Continuity
In this chapter, we bridge the gap between basic foundational limits and advanced analytical calculus by exploring the critical concept of Continuity and its profound relationship with Differentiability. While the syllabus in Class XI focused on the first principles of derivatives and the differentiation of elementary polynomial and trigonometric functions, we now transition toward a more rigorous mathematical framework.
Our journey begins with understanding how functions behave in a local neighborhood. We will extend our differentiation toolkit to handle composite functions through the application of the Chain Rule, as well as inverse trigonometric functions and implicit functions, where the dependent and independent variables are inextricably linked.
Furthermore, this chapter introduces the differentiation of transcendental functions, specifically exponential and logarithmic functions. This leads to the highly efficient and powerful technique of logarithmic differentiation, which simplifies the process of finding derivatives for functions of the form $[f(x)]^{g(x)}$ or products of multiple complex terms. Such methods are indispensable when dealing with growth and decay models in various scientific fields.
Finally, we will delve into the theoretical foundations of calculus by studying two pivotal theorems: Rolle's Theorem and Lagrange's Mean Value Theorem (LMVT). These theorems guarantee the existence of points within an interval where the instantaneous rate of change equals the average rate of change. Mastering them is vital, as they underpin many results in later chapters and serve as a prerequisite for Integral Calculus.
Continuity of a Real Function at a Point
Intuitively, a real-valued function $f(x)$ is said to be continuous at a specific point if its graph does not exhibit any "break," "jump," or "hole" at that location. In a physical sense, if a curve can be traced on a sheet of paper from the immediate left of a point to its immediate right without lifting the pen, the function is continuous at that point. This continuity implies that the value of the function at neighboring points is "close enough" to the actual value at the point under consideration.
Mathematically, the Continuity of a function at $x = c$ is defined by its limiting behavior. We say $f(x)$ is continuous at $x = c$ if the following three conditions are satisfied simultaneously:
1. The function $f(x)$ is defined at $x = c$, i.e., $f(c)$ exists and is a finite real number.
2. The limit of the function as $x$ approaches $c$ exists. This requires the Left Hand Limit (LHL) to be equal to the Right Hand Limit (RHL):
$\lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^+} f(x)$
3. The value of the limit is exactly equal to the value of the function at that point:
$\lim\limits_{x \to c} f(x) = f(c)$
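The three conditions above can be checked numerically. The following Python sketch is an illustration only, not a rigorous limit computation; `limit_from` and `is_continuous_at` are hypothetical helper names introduced here. It estimates each one-sided limit by sampling points ever closer to $c$:

```python
def limit_from(f, c, side, h0=1e-3, steps=6):
    """Estimate a one-sided limit of f at c by sampling ever-closer points.

    side = -1 approaches from the left (LHL); side = +1 from the right (RHL).
    """
    h = h0
    value = None
    for _ in range(steps):
        value = f(c + side * h)   # sample closer and closer to c, never at c itself
        h /= 10
    return value

def is_continuous_at(f, c, tol=1e-6):
    """Check the three textbook conditions for continuity at x = c."""
    lhl = limit_from(f, c, -1)
    rhl = limit_from(f, c, +1)
    fc = f(c)                             # condition 1: f(c) exists
    limit_exists = abs(lhl - rhl) < tol   # condition 2: LHL = RHL
    limit_matches = abs(lhl - fc) < tol   # condition 3: limit = f(c)
    return limit_exists and limit_matches
```

A sampling test like this can only approximate a limit; it may be fooled by functions that oscillate rapidly near $c$, so it complements rather than replaces the analytic definition.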
Example 1. Consider a real function $f$ defined as follows:
$f(x) = \begin{cases} 4 & , & x \neq 3 \\ 6 & , & x = 3 \end{cases}$
Examine the continuity of the function $f$ at the point $x = 3$.
Answer:
Given: A piecewise function where the value of the function is a constant $4$ for all real numbers except $3$, and specifically defined as $6$ at $x = 3$.
To Find: To determine if the function $f(x)$ is continuous at the point $x = 3$.
Solution: To check the continuity of $f(x)$ at $x = 3$, we must verify three conditions: the existence of the function value, the existence of the limit, and the equality of the two.
Step 1: Finding the value of the function at the given point
From the definition of the function, the value of $f(x)$ exactly at $x = 3$ is provided directly.
$f(3) = 6$
[Given in the function definition]
Step 2: Finding the limit of the function as $x \to 3$
To find the limit, we consider the values of the function in the immediate neighbourhood of $3$. In this neighbourhood, $x$ is close to $3$ but $x \neq 3$. Therefore, we use the first part of the function definition, where $f(x) = 4$.
$\lim\limits_{x \to 3} f(x) = \lim\limits_{x \to 3} (4)$
Since the limit of a constant is the constant itself:
$\lim\limits_{x \to 3} f(x) = 4$
Step 3: Comparing the limit and the function value
Now, we compare the results obtained in Steps 1 and 2.
We observe that:
$\lim\limits_{x \to 3} f(x) \neq f(3)$
[Since $4 \neq 6$]
Conclusion: Since the limiting value of the function as $x$ approaches $3$ does not coincide with the actual value of the function at $x = 3$, the function $f$ is not continuous at $x = 3$.
As seen in the graph, there is a "hole" at the point $(3, 4)$ on the horizontal line $y = 4$, and a displaced solid point at $(3, 6)$. This causes a break in the graph, confirming the discontinuity.
Alternative Approach (Using LHL and RHL)
We can also verify the existence of the limit by checking the Left Hand Limit (LHL) and the Right Hand Limit (RHL) separately.
For LHL: As $x \to 3^-$, $x$ is slightly less than $3$ ($x \neq 3$).
$\text{LHL} = \lim\limits_{h \to 0} f(3 - h) = \lim\limits_{h \to 0} (4) = 4$
For RHL: As $x \to 3^+$, $x$ is slightly greater than $3$ ($x \neq 3$).
$\text{RHL} = \lim\limits_{h \to 0} f(3 + h) = \lim\limits_{h \to 0} (4) = 4$
Since $\text{LHL} = \text{RHL} = 4$, the limit $\lim\limits_{x \to 3} f(x)$ exists and is equal to $4$. However, since $f(3) = 6$, the condition for continuity is violated.
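Example 1 can be replayed numerically. This minimal Python sketch (illustrative only) encodes the piecewise definition and samples the neighbourhood of $3$ from both sides:

```python
# Piecewise function from Example 1: f(x) = 4 for x != 3, and f(3) = 6.
def f(x):
    return 6 if x == 3 else 4

# Sample the neighbourhood of 3 with shrinking h: the values stay at 4.
for h in (0.1, 0.01, 0.001):
    print(f(3 - h), f(3 + h))   # prints "4 4" each time

lhl = f(3 - 1e-9)
rhl = f(3 + 1e-9)
print(lhl == rhl == 4)   # True: the limit exists and equals 4
print(f(3))              # 6: the function value disagrees, so f is discontinuous at 3
```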
Example 2. Consider a real-valued function $f$ defined as $f(x) = \dfrac{x^2 - x - 6}{x - 3}$. Examine whether this function is continuous at the point $x = 3$.
Answer:
Given: A rational function where the numerator is a quadratic expression and the denominator is a linear expression.
To Find: The continuity of $f(x)$ at $x = 3$.
Step 1: Checking if the function is defined at $x = 3$
For a function to be continuous at a point $c$, the first condition is that $f(c)$ must exist as a finite real number. Let us evaluate the function at $x = 3$:
$f(3) = \dfrac{3^2 - 3 - 6}{3 - 3} = \dfrac{9 - 9}{0} = \dfrac{0}{0}$
The expression takes the meaningless form $\frac{0}{0}$ (division by zero), so the function value $f(3)$ is not defined.
Step 2: Evaluating the limiting behavior as $x \to 3$
Even though the function is not defined at the point, we can check the limit as $x$ approaches $3$. We begin by factorising the numerator using the splitting of the middle term method:
$x^2 - x - 6 = x^2 - 3x + 2x - 6$
$= x(x - 3) + 2(x - 3) = (x - 3)(x + 2)$
Now, we substitute this back into the limit expression:
$\lim\limits_{x \to 3} f(x) = \lim\limits_{x \to 3} \dfrac{(x - 3)(x + 2)}{x - 3}$
Since we are taking the limit, $x$ is approaching $3$ but $x \neq 3$. This allows us to cancel the common factor $(x - 3)$ from the numerator and the denominator:
$\lim\limits_{x \to 3} f(x) = \lim\limits_{x \to 3} \dfrac{\cancel{(x - 3)}\,(x + 2)}{\cancel{(x - 3)}}$
$\lim\limits_{x \to 3} f(x) = \lim\limits_{x \to 3} (x + 2) = 3 + 2 = 5$
Step 3: Comparison and Final Conclusion
We observe that while the limit exists and is equal to $5$, the function itself is not defined at $x = 3$. For continuity, we require:
$\lim\limits_{x \to 3} f(x) = f(3)$
Since this condition is not satisfied, the function is discontinuous at $x = 3$.
Graphically, the function represents the straight line $y = x + 2$ for all values except $x = 3$, where there is a "hole" at the point $(3, 5)$.
How to make this function continuous?
To "remove" this discontinuity, we must redefine the function at the specific point $x = 3$ to match the value of the limit. We can write the redefined continuous function as follows:
$g(x) = \begin{cases} \dfrac{x^2 - x - 6}{x - 3} & , & x \neq 3 \\ 5 & , & x = 3 \end{cases}$
By defining $g(3) = 5$, we ensure that there is no break in the graph, making the function continuous over its entire domain.
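The redefined function can be expressed directly in code. A small Python sketch (illustrative; the name `g` mirrors the text):

```python
def g(x):
    """Redefinition from the text: patch the hole at x = 3 with the limit value 5."""
    if x == 3:
        return 5                          # value chosen to match the limit
    return (x**2 - x - 6) / (x - 3)       # original rule everywhere else

# Approaching 3 from either side now agrees with g(3):
values = [g(3 - 1e-6), g(3), g(3 + 1e-6)]   # all close to 5
```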
Example 3. Consider the piecewise function $f$ defined as follows:
$f(x) = \begin{cases} x + 2 & , & x < 1 \\ 4 & , & x = 1 \\ x - 2 & , & x > 1 \end{cases}$
Examine the continuity of the function at the point $x = 1$.
Answer:
Given: A function $f(x)$ whose behavior changes at the point $x = 1$.
To Find: Whether $f(x)$ is continuous at $x = 1$.
To determine if the function is continuous at $x = 1$, we must evaluate the Left Hand Limit (LHL), the Right Hand Limit (RHL), and the Function Value at that point. If all three are equal, the function is continuous.
Step 1: Evaluation of Left Hand Limit (LHL)
For $x < 1$, the function is defined as $f(x) = x + 2$. We approach $1$ from the left side:
$\text{LHL} = \lim\limits_{x \to 1^-} f(x) = \lim\limits_{x \to 1} (x + 2) = 1 + 2 = 3$
Step 2: Evaluation of Right Hand Limit (RHL)
For $x > 1$, the function is defined as $f(x) = x - 2$. We approach $1$ from the right side:
$\text{RHL} = \lim\limits_{x \to 1^+} f(x) = \lim\limits_{x \to 1} (x - 2) = 1 - 2 = -1$
Step 3: Evaluation of Function Value
From the definition of the piecewise function, the value exactly at $x = 1$ is:
$f(1) = 4$
[Given in function definition]
Conclusion:
By comparing the results from the above equations, we observe that:
$\text{LHL} \neq \text{RHL}$
Since the Left Hand Limit and the Right Hand Limit are not equal, the limit $\lim\limits_{x \to 1} f(x)$ does not exist. Furthermore, neither one-sided limit is equal to the function value $f(1) = 4$.
Because there is a finite difference between the LHL and the RHL, the graph exhibits a "jump." Therefore, the function $f$ is discontinuous at $x = 1$. This type of discontinuity is known as a Non-Removable Jump Discontinuity.
Summary of Results
1. $\text{LHL} = 3$
2. $\text{RHL} = -1$
3. $f(1) = 4$
Since the condition $\lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^+} f(x) = f(c)$ is not met, the function is not continuous.
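The jump at $x = 1$ can be measured numerically. A minimal Python sketch (illustrative only):

```python
# Piecewise function from Example 3.
def f(x):
    if x < 1:
        return x + 2   # branch for x < 1
    if x == 1:
        return 4       # defined value at the transition point
    return x - 2       # branch for x > 1

h = 1e-9
lhl = f(1 - h)             # approximately 3, approaching from the left
rhl = f(1 + h)             # approximately -1, approaching from the right
jump = abs(rhl - lhl)      # finite gap of about 4: a non-removable jump
```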
Definition of One-Sided Continuity
In the study of real analysis, a point $c$ on the real number line can be approached from two distinct directions: from the left-hand side (values smaller than $c$) and from the right-hand side (values larger than $c$). This leads us to the concept of one-sided limits, which are the building blocks for defining One-Sided Continuity.
Understanding one-sided continuity is crucial for examining functions defined on specific intervals or functions that undergo a sudden change in their rule of definition at a particular point, such as piecewise-defined functions.
1. Left Continuity (Continuity from the Left)
A function $f(x)$ is said to be left continuous or continuous from the left at a point $x = c$ if the limit of the function as $x$ approaches $c$ from the negative direction (left side) is exactly equal to the actual value of the function at $c$.
For a function to be left continuous, the following two conditions must be satisfied:
i. The function value $f(c)$ must exist and be a finite real number.
ii. The Left Hand Limit (LHL) must exist and match $f(c)$.
$\lim\limits_{x \to c^-} f(x) = f(c)$
In terms of a small positive increment $h$, where $h \to 0$, this can be written as:
$\lim\limits_{h \to 0} f(c - h) = f(c)$
[Where $h > 0$]
2. Right Continuity (Continuity from the Right)
A function $f(x)$ is said to be right continuous or continuous from the right at a point $x = c$ if the limit of the function as $x$ approaches $c$ from the positive direction (right side) is exactly equal to the actual value of the function at $c$.
Similar to left continuity, for a function to be right continuous:
i. The function value $f(c)$ must be defined.
ii. The Right Hand Limit (RHL) must exist and match $f(c)$.
$\lim\limits_{x \to c^+} f(x) = f(c)$
Using the increment $h$, this is represented as:
$\lim\limits_{h \to 0} f(c + h) = f(c)$
[Where $h > 0$]
Condition for General Continuity at a Point
For a function $f(x)$ to be considered generally continuous (or simply "continuous") at a point $x = c$, it must not have any break, jump, or hole at that point. This happens only if the function approaches the same finite value from both the left and the right, and that value coincides with the defined value of the function at that point.
Therefore, a function $f$ is continuous at $x = c$ if and only if it is both left continuous and right continuous at $x = c$. Mathematically, this composite condition is expressed as:
$\lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^+} f(x) = f(c)$
If we denote $\lim\limits_{x \to c} f(x)$ as the two-sided limit, the condition simplifies to:
$\lim\limits_{x \to c} f(x) = f(c)$
Summary of Failure of Continuity:
1. If $\lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^+} f(x)$ but they are not equal to $f(c)$, the function has a removable discontinuity.
2. If $\lim\limits_{x \to c^-} f(x) \neq \lim\limits_{x \to c^+} f(x)$, the function has a jump discontinuity (non-removable).
3. If either one-sided limit tends to $+\infty$ or $-\infty$, the function has an infinite discontinuity.
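The three failure modes can be told apart with a rough numeric classifier. The sketch below is illustrative: `classify_discontinuity` is a hypothetical helper, the thresholds `tol` and `big` are arbitrary choices, and such a sampling test can be fooled by pathological functions.

```python
def classify_discontinuity(f, c, h=1e-7, tol=1e-4, big=1e6):
    """Rough numeric classification of the behaviour of f at x = c.

    Returns one of: 'continuous', 'removable', 'jump', 'infinite'.
    """
    lhl, rhl = f(c - h), f(c + h)
    if abs(lhl) > big or abs(rhl) > big:
        return "infinite"                 # a one-sided limit blows up
    if abs(lhl - rhl) > tol:
        return "jump"                     # LHL != RHL: non-removable
    try:
        fc = f(c)
    except ZeroDivisionError:
        return "removable"                # limit exists, f(c) undefined
    return "continuous" if abs(fc - lhl) < tol else "removable"
```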
Example 4. Consider the function $f$ defined on the set of real numbers $\mathbb{R}$ as follows:
$f(x) = \begin{cases} 3x - 1 & , & x < 2 \\ 5 & , & x = 2 \\ x^2 + 1 & , & x > 2 \end{cases}$
Analyze the Left Continuity, Right Continuity, and General Continuity of the function at $x = 2$.
Answer:
Given: A piecewise function $f(x)$ with a transition point at $x = 2$.
To Find: The continuity status (Left, Right, and General) at $x = 2$.
Step 1: Evaluation of the Function Value
From the definition of the function, the value of $f(x)$ at exactly $x = 2$ is explicitly given.
$f(2) = 5$
[Given] ... (i)
Step 2: Checking Left Continuity
To check for Left Continuity, we calculate the Left Hand Limit (LHL) by approaching $2$ from values slightly less than $2$ ($x < 2$).
We use the branch $f(x) = 3x - 1$:
$\text{LHL} = \lim\limits_{x \to 2^-} f(x) = \lim\limits_{x \to 2} (3x - 1)$
$\text{LHL} = 3(2) - 1 = 5$
... (ii)
Comparing equation (i) and (ii):
$\text{LHL} = f(2) = 5$
(True)
Conclusion for Left Continuity: Since $\text{LHL} = f(2)$, the function $f$ is Left Continuous at $x = 2$.
Step 3: Checking Right Continuity
To check for Right Continuity, we calculate the Right Hand Limit (RHL) by approaching $2$ from values slightly greater than $2$ ($x > 2$).
We use the branch $f(x) = x^2 + 1$:
$\text{RHL} = \lim\limits_{x \to 2^+} f(x) = \lim\limits_{x \to 2} (x^2 + 1)$
$\text{RHL} = (2)^2 + 1 = 5$
... (iii)
Comparing equation (i) and (iii):
$\text{RHL} = f(2) = 5$
(True)
Conclusion for Right Continuity: Since $\text{RHL} = f(2)$, the function $f$ is Right Continuous at $x = 2$.
Step 4: Checking General Continuity
A function is generally continuous at a point if and only if it is both left continuous and right continuous. From our findings:
1. The function is Left Continuous ($\text{LHL} = f(2)$).
2. The function is Right Continuous ($\text{RHL} = f(2)$).
3. Thus, $\text{LHL} = \text{RHL} = f(2) = 5$.
Final Conclusion: Since the limiting values from both sides coincide with the functional value, the function $f$ is Generally Continuous at $x = 2$.
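Example 4 can be verified numerically. A minimal Python sketch (illustrative only):

```python
# Piecewise function from Example 4.
def f(x):
    if x < 2:
        return 3 * x - 1   # branch for x < 2
    if x == 2:
        return 5           # defined value at the transition point
    return x * x + 1       # branch for x > 2

h = 1e-9
lhl = f(2 - h)   # 3(2) - 1 -> approaches 5 from the left
rhl = f(2 + h)   # 2^2 + 1  -> approaches 5 from the right
# LHL = RHL = f(2) = 5, so f is continuous at x = 2
```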
Reasons for Discontinuity
In the study of calculus, understanding why a function fails to be continuous is as important as knowing why it is continuous. A discontinuity represents a "disruption" in the flow of a function's graph. For a function $f(x)$ to be continuous at a point $x = c$, three strict criteria must be met: the function must be defined, the limit must exist, and they must be equal. If any of these criteria are violated, the function is said to be discontinuous at that point.
Let us explore the specific reasons for these failures in detail:
1. The Function is not defined at $x = c$
This occurs when $c$ is not in the domain of $f(x)$. Even if the limit exists from both sides, if there is no "point" to fill the gap, the function is discontinuous. This often happens in rational functions where the denominator becomes zero.
$f(x) = \dfrac{x^2 - 4}{x - 2}$
[At $x = 2$, $f(x)$ is $\frac{0}{0}$]
2. The Limit of the Function does not exist at $x = c$
This is a more "severe" type of discontinuity. It happens if the function approaches different values from the left and right, or if the function grows without bound (approaches infinity). Mathematically:
$\lim\limits_{x \to c^-} f(x) \neq \lim\limits_{x \to c^+} f(x)$
3. The Limit exists but is not equal to the Function Value
In this case, the function is defined at $x = c$, and the limit also exists as $x \to c$, but the two values do not match. It looks like a "displaced point" on the graph.
$\lim\limits_{x \to c} f(x) = L \neq f(c)$
Classification: Removable vs Non-Removable Discontinuity
Discontinuities are broadly classified into two categories based on whether they can be "fixed" or not.
A. Removable Discontinuity
A discontinuity at $x = c$ is called removable if $\lim\limits_{x \to c} f(x)$ exists but is not equal to $f(c)$ (or $f(c)$ is undefined). It is called "removable" because we can make the function continuous by simply redefining $f(c)$ to be equal to the value of the limit.
Example of Removable Discontinuity
Consider $f(x) = \dfrac{\sin x}{x}$ at $x = 0$.
$\lim\limits_{x \to 0} \dfrac{\sin x}{x} = 1$
[Standard Limit]
However, $f(0)$ is undefined. We can "remove" this by defining $f(0) = 1$.
B. Non-Removable Discontinuity
A discontinuity is non-removable if the limit $\lim\limits_{x \to c} f(x)$ does not exist. No matter how we define or redefine the function at $x = c$, we cannot make it continuous. There are two main sub-types:
1. Jump Discontinuity: This occurs when the LHL and RHL both exist as finite numbers but are not equal. The graph "jumps" from one height to another.
$\text{Jump} = |\text{RHL} - \text{LHL}|$
2. Infinite Discontinuity: This occurs when one or both of the one-sided limits go to $\infty$ or $-\infty$. This creates a vertical asymptote on the graph. For example, $f(x) = \dfrac{1}{x - 3}$ at $x = 3$.
Continuity of a Function in an Interval
In our previous discussions, we analyzed the continuity of a function at a specific isolated point. However, we are often required to discuss the continuity of a function over an entire range of values, known as an interval. This is essential for applying theorems like Rolle's Theorem or Mean Value Theorem.
Continuity in an Open Interval
A real-valued function $f$ is said to be continuous in an open interval $(a, b)$ if it is continuous at each and every point belonging to that interval. Mathematically, if we pick any arbitrary point $c$ such that $a < c < b$, the following condition must hold true:
$\lim\limits_{x \to c} f(x) = f(c)$
$\forall \ c \in (a, b)$
For example, a polynomial function like $f(x) = x^2 + 3x + 2$ is continuous in the open interval $(1, 5)$ because it is continuous at every real number between $1$ and $5$.
Continuity in a Closed Interval
Defining continuity for a closed interval $[a, b]$ is more nuanced because we cannot approach the endpoints from both sides. For instance, at the starting point $a$, we can only approach from the right (values greater than $a$). Similarly, at the endpoint $b$, we can only approach from the left (values less than $b$).
Therefore, a function $f$ is said to be continuous in the closed interval $[a, b]$ if it satisfies the following three conditions:
1. Continuity in the interior: $f$ is continuous at every point in the open interval $(a, b)$.
2. Continuity at the left endpoint: $f$ is continuous from the right at $x = a$.
$\lim\limits_{x \to a^+} f(x) = f(a)$
3. Continuity at the right endpoint: $f$ is continuous from the left at $x = b$.
$\lim\limits_{x \to b^-} f(x) = f(b)$
Continuous Function Definition
A function is broadly termed a Continuous Function if it is continuous at every point in its Domain. If the domain of a function is a closed interval $[a, b]$, then the function must satisfy the conditions for closed interval continuity mentioned above.
Domain of Continuity: The set of all points at which a function is continuous is called its domain of continuity. It is important to note that the domain of continuity can be a proper subset of the function's actual domain if there are points where the function is defined but discontinuous.
Example. Examine the continuity of the function $f(x) = \sqrt{9 - x^2}$ in the interval $[-3, 3]$.
Answer:
Given: $f(x) = \sqrt{9 - x^2}$ for $x \in [-3, 3]$.
Step 1: Continuity in the Open Interval $(-3, 3)$
Let $c$ be any point such that $-3 < c < 3$.
$\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} \sqrt{9 - x^2} = \sqrt{9 - c^2}$
Since $f(c) = \sqrt{9 - c^2}$ exists for all $c \in (-3, 3)$, the function is continuous in the open interval.
Step 2: Right Continuity at $x = -3$
$\lim\limits_{x \to -3^+} \sqrt{9 - x^2} = \sqrt{9 - (-3)^2} = 0$
$f(-3) = \sqrt{9 - 9} = 0$
Since $\lim\limits_{x \to -3^+} f(x) = f(-3)$, the function is right continuous at $x = -3$.
Step 3: Left Continuity at $x = 3$
$\lim\limits_{x \to 3^-} \sqrt{9 - x^2} = \sqrt{9 - (3)^2} = 0$
$f(3) = \sqrt{9 - 9} = 0$
Since $\lim\limits_{x \to 3^-} f(x) = f(3)$, the function is left continuous at $x = 3$.
Conclusion: Since the function is continuous in $(-3, 3)$, right continuous at $-3$, and left continuous at $3$, the function $f(x) = \sqrt{9 - x^2}$ is continuous in the closed interval $[-3, 3]$.
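The special role of the endpoints shows up concretely in code: left of $-3$ the function is simply not defined, so only a right-hand approach is possible there. A minimal Python sketch (illustrative only):

```python
import math

def f(x):
    return math.sqrt(9 - x * x)   # defined only on [-3, 3]

h = 1e-9
print(f(-3))          # 0.0: the function is defined at the left endpoint
print(f(-3 + h))      # tiny positive number: the right-hand limit is 0
try:
    f(-3 - h)         # points left of -3 lie outside the domain
except ValueError:
    print("no left-hand approach: only right continuity applies at x = -3")
```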
Summary of Continuity in Intervals
| Interval Type | Notation | Conditions for Continuity |
|---|---|---|
| Open Interval | $(a, b)$ | Continuous at every point $c \in (a, b)$. |
| Closed Interval | $[a, b]$ | Continuous in $(a, b)$ + Right continuous at $a$ + Left continuous at $b$. |
| Semi-Open (Left) | $(a, b]$ | Continuous in $(a, b)$ + Left continuous at $b$. |
| Semi-Open (Right) | $[a, b)$ | Continuous in $(a, b)$ + Right continuous at $a$. |
Properties of Continuous Functions
In this section, we explore the algebraic and structural properties of continuous functions. These properties allow us to determine the continuity of complex functions by breaking them down into simpler, well-known components.
Algebra of Continuous Functions
The study of Algebra of Continuous Functions allows us to determine the continuity of complex expressions by understanding the behavior of their individual components. These properties are derived directly from the algebra of limits studied in earlier classes. When two functions are continuous at a specific point, their sum, difference, product, and quotient are also continuous there, subject to the usual restriction that the denominator of a quotient must be non-zero at that point.
Theorem 1: Continuity at a Point
Suppose $f$ and $g$ are two real-valued functions that are continuous at a specific real number $c$. Let $\alpha$ be any constant real number. Then the following algebraic properties hold true:
(i) Scalar Multiple: The function $\alpha f$ is continuous at $x = c$.
(ii) Addition: The function $f + g$ is continuous at $x = c$.
(iii) Subtraction: The function $f - g$ is continuous at $x = c$.
(iv) Multiplication: The function $fg$ is continuous at $x = c$.
(v) Division: The function $\frac{f}{g}$ is continuous at $x = c$, provided $g(c) \neq 0$.
Formal Proofs of Algebra of Continuous Functions
To establish the algebraic properties of Continuous Functions, we rely on the fundamental properties of limits. Suppose we are given two real-valued functions $f$ and $g$ that are continuous at a point $x = c$. By the definition of continuity, this implies:
$\lim\limits_{x \to c} f(x) = f(c)$ and $\lim\limits_{x \to c} g(x) = g(c)$
[Definition of Continuity]
Now, we provide the step-by-step formal proof for each part of Theorem 1 below.
(i) Proof for Scalar Multiple ($\alpha f$)
Let $\alpha$ be any real constant. To prove that $\alpha f$ is continuous at $x = c$, we must show that the limit of $(\alpha f)(x)$ as $x \to c$ equals $(\alpha f)(c)$.
$\lim\limits_{x \to c} (\alpha f)(x) = \lim\limits_{x \to c} [\alpha \cdot f(x)]$
By the property of limits, a constant can be taken outside the limit operation:
$ = \alpha \cdot \lim\limits_{x \to c} f(x)$
$ = \alpha \cdot f(c) = (\alpha f)(c)$
Since the limit equals the functional value, $\alpha f$ is continuous at $x = c$.
(ii) Proof for Addition ($f + g$)
To prove continuity of the sum, we evaluate the limit of the sum of functions:
$\lim\limits_{x \to c} (f + g)(x) = \lim\limits_{x \to c} [f(x) + g(x)]$
Using the limit sum rule ($\lim [u+v] = \lim u + \lim v$):
$ = \lim\limits_{x \to c} f(x) + \lim\limits_{x \to c} g(x)$
$ = f(c) + g(c) = (f + g)(c)$
Hence, the sum $f + g$ is continuous at $x = c$.
(iii) Proof for Subtraction ($f - g$)
Similarly, for the difference of two continuous functions:
$\lim\limits_{x \to c} (f - g)(x) = \lim\limits_{x \to c} [f(x) - g(x)]$
Using the limit difference rule:
$ = \lim\limits_{x \to c} f(x) - \lim\limits_{x \to c} g(x)$
$ = f(c) - g(c) = (f - g)(c)$
Hence, the difference $f - g$ is continuous at $x = c$.
(iv) Proof for Multiplication ($fg$)
For the product of two functions, we use the limit product rule:
$\lim\limits_{x \to c} (fg)(x) = \lim\limits_{x \to c} [f(x) \cdot g(x)]$
$ = \left[ \lim\limits_{x \to c} f(x) \right] \cdot \left[ \lim\limits_{x \to c} g(x) \right]$
$ = f(c) \cdot g(c) = (fg)(c)$
Hence, the product $fg$ is continuous at $x = c$.
(v) Proof for Division ($\frac{f}{g}$)
To prove the continuity of the quotient, we must assume that the denominator is not zero at the point of consideration, i.e., $g(c) \neq 0$.
$\lim\limits_{x \to c} \left( \frac{f}{g} \right)(x) = \lim\limits_{x \to c} \left[ \frac{f(x)}{g(x)} \right]$
Using the limit quotient rule:
$ = \frac{\lim\limits_{x \to c} f(x)}{\lim\limits_{x \to c} g(x)}$
$ = \frac{f(c)}{g(c)} = \left( \frac{f}{g} \right)(c)$
[Provided $g(c) \neq 0$]
Hence, the quotient $\frac{f}{g}$ is continuous at $x = c$, provided the denominator is non-zero.
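The point-wise algebra above can be sanity-checked numerically. This Python sketch is illustrative: the specific $f$, $g$, and point $c$ are arbitrary choices, and the check only samples a point just beside $c$ rather than taking a true limit.

```python
def f(x): return x ** 2          # continuous everywhere
def g(x): return 3 * x + 1       # continuous everywhere

c = 2.0                          # arbitrary point; note g(2) = 7 != 0
h = 1e-9

# Each algebraic combination of f and g, as in Theorem 1:
combos = {
    "sum":      lambda x: f(x) + g(x),
    "diff":     lambda x: f(x) - g(x),
    "product":  lambda x: f(x) * g(x),
    "quotient": lambda x: f(x) / g(x),
}

# Value just beside c stays close to the value at c for every combination.
for name, fn in combos.items():
    assert abs(fn(c + h) - fn(c)) < 1e-6, name
```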
Theorem 2: Continuity on Domains
While the first theorem established continuity at a specific point, Theorem 2 generalizes this concept to entire sets of points. In real analysis, the Domain of Continuity of a function is the set of all points where it satisfies the definition of continuity. If $D_f$ and $D_g$ denote the domains of continuity of $f$ and $g$ respectively, then any algebraic combination of the two functions is continuous wherever both are simultaneously continuous, i.e., on the intersection $D_f \cap D_g$.
Let $f$ and $g$ be two real functions with domains of continuity $D_f$ and $D_g$ respectively. Then the following properties hold:
(i) Scalar Multiple: The function $\alpha f$ is continuous on $D_f$ for all $\alpha \in \mathbb{R}$.
(ii) Addition: The function $f + g$ is continuous on the intersection $D_f \cap D_g$.
(iii) Subtraction: The function $f - g$ is continuous on the intersection $D_f \cap D_g$.
(iv) Multiplication: The function $fg$ is continuous on the intersection $D_f \cap D_g$.
(v) Division: The function $\frac{f}{g}$ is continuous on $D_f \cap D_g$, excluding points where $g(x) = 0$.
Formal Proofs for Theorem 2
The proofs for Theorem 2 rely on the fact that for a function to be continuous on a domain, it must be continuous at every arbitrary point within that domain. We utilize the results from Theorem 1 (continuity at a point) to validate these properties.
(i) Proof for Scalar Multiple ($\alpha f$):
Let $c$ be any arbitrary point in the domain of continuity $D_f$.
$\lim\limits_{x \to c} f(x) = f(c)$
[$\because f$ is continuous on $D_f$]
Now, we consider the limit of $(\alpha f)$ at $x = c$:
$\lim\limits_{x \to c} (\alpha f)(x) = \lim\limits_{x \to c} [\alpha \cdot f(x)]$
$ = \alpha \cdot \lim\limits_{x \to c} f(x)$
$ = \alpha \cdot f(c) = (\alpha f)(c)$
Since this holds for every $c \in D_f$, $\alpha f$ is continuous on $D_f$.
(ii, iii, iv) Proof for Addition, Subtraction, and Multiplication:
Let $c$ be any arbitrary point in the intersection $D_f \cap D_g$. This means $c$ belongs to both $D_f$ and $D_g$.
$\lim\limits_{x \to c} f(x) = f(c)$ and $\lim\limits_{x \to c} g(x) = g(c)$
[$\because c \in D_f \cap D_g$]
For addition:
$\lim\limits_{x \to c} (f + g)(x) = \lim\limits_{x \to c} f(x) + \lim\limits_{x \to c} g(x)$
$ = f(c) + g(c) = (f + g)(c)$
Similar derivations apply for subtraction ($f-g$) and multiplication ($fg$):
$\lim\limits_{x \to c} (fg)(x) = \lim\limits_{x \to c} f(x) \cdot \lim\limits_{x \to c} g(x) = f(c) \cdot g(c) = (fg)(c)$
Since $c$ is an arbitrary point in $D_f \cap D_g$, the combined functions are continuous on the entire intersection.
(v) Proof for Division ($\frac{f}{g}$):
Let $c \in D_f \cap D_g$ such that $g(c) \neq 0$.
$\lim\limits_{x \to c} \left( \frac{f}{g} \right)(x) = \frac{\lim\limits_{x \to c} f(x)}{\lim\limits_{x \to c} g(x)}$
$ = \frac{f(c)}{g(c)} = \left( \frac{f}{g} \right)(c)$
[$\because g(c) \neq 0$]
Thus, $\frac{f}{g}$ is continuous at all points in the intersection where the denominator does not vanish.
Higher Powers and Reciprocals
Based on the multiplication and division properties, we can derive the following for any function $f$ continuous on $D_f$:
1. Power Rule: The function $f^n$ (where $n$ is a positive integer) is continuous on $D_f$.
$f^n = f \cdot f \cdot f \dots$ ($n$ times)
[By repeated multiplication rule]
2. Reciprocal Rule: The function $\frac{1}{f}$ is continuous on $D_f$ except at points where $f(x) = 0$.
$\lim\limits_{x \to c} \frac{1}{f(x)} = \frac{1}{f(c)}$
[Provided $f(c) \neq 0$]
Summary Table of Domain Continuity
| Operation | Resulting Function | Domain of Continuity |
|---|---|---|
| Scalar Multiplication | $\alpha f$ | $D_f$ |
| Addition / Subtraction | $f \pm g$ | $D_f \cap D_g$ |
| Multiplication | $fg$ | $D_f \cap D_g$ |
| Division | $f / g$ | $(D_f \cap D_g) \setminus \{x : g(x) = 0\}$ |
| Integral Power | $f^n$ | $D_f$ |
Continuity of Polynomial and Rational Functions
In the hierarchy of real functions, polynomials and rational functions represent the most well-behaved classes of functions. Their continuity is not just a localized property but a global one.
Theorem 3: Continuity of Polynomial Functions
A polynomial function is defined as a real-valued function of the form:
$f(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$
where $n$ is a non-negative integer and $a_0, a_1, \dots, a_n$ are real constants with $a_n \neq 0$. Theorem 3 states that every polynomial function is continuous everywhere on the real line $\mathbb{R}$.
Formal Proof of Theorem 3
To prove that a function is continuous everywhere, we must show it is continuous at an arbitrary real number $c$.
Let $c$ be any arbitrary real number. We evaluate the limit of $f(x)$ as $x$ approaches $c$:
$\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} (a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0)$
By applying the Algebra of Limits (limit of a sum is the sum of limits), we get:
$ = \lim\limits_{x \to c} (a_n x^n) + \lim\limits_{x \to c} (a_{n-1} x^{n-1}) + \dots + \lim\limits_{x \to c} a_0$
$ = a_n c^n + a_{n-1} c^{n-1} + \dots + a_1 c + a_0$
$\lim\limits_{x \to c} f(x) = f(c)$
[Definition of Continuity]
Since the limit at any arbitrary point $c$ is equal to the function value $f(c)$, the polynomial function $f$ is continuous at every point in its domain $\mathbb{R}$.
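The argument above can be mirrored symbolically. The following sympy sketch (an illustrative check, with an arbitrary cubic standing in for a general polynomial) verifies that the limit at a symbolic point $c$ equals the function value at $c$:

```python
import sympy as sp

x, c = sp.symbols('x c')

# An arbitrary cubic standing in for a general polynomial f(x).
p = 4*x**3 - 2*x**2 + 7*x - 5

# Theorem 3: lim_{x->c} p(x) = p(c) for every real c.
# sympy evaluates the limit at the symbolic point c directly.
assert sp.expand(sp.limit(p, x, c)) == sp.expand(p.subs(x, c))
```

Because $c$ is left symbolic, this single assertion covers every real point at once, matching the "arbitrary $c$" structure of the proof.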
Corollaries to Theorem 3
1. Constant Function: Any function $f(x) = k$ (where $k$ is a constant) is a polynomial of degree zero. Therefore, a constant function is continuous everywhere.
2. Identity Function: The function $f(x) = x$ is a polynomial of degree one. Consequently, the identity function is continuous everywhere.
Theorem 4: Continuity of Rational Functions
A rational function is a function that can be expressed as the ratio of two polynomial functions. Specifically:
$f(x) = \frac{g(x)}{h(x)}$
[where $g(x), h(x)$ are polynomials]
Theorem 4 states that a rational function is continuous at every point in its domain. This means it is continuous for all $x$ except those where the denominator $h(x) = 0$.
Formal Proof of Theorem 4
The domain of $f$, denoted as $D_f$, consists of all real numbers $x$ such that $h(x) \neq 0$.
Let $c$ be any point in $D_f$. This implies $h(c) \neq 0$.
Since $g(x)$ and $h(x)$ are polynomial functions, they are continuous at $x = c$ (by Theorem 3). Thus:
$\lim\limits_{x \to c} g(x) = g(c)$ and $\lim\limits_{x \to c} h(x) = h(c)$
By the Algebra of Continuous Functions (Theorem 1, part v), the quotient of two continuous functions is continuous provided the denominator is non-zero:
$\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} \frac{g(x)}{h(x)} = \frac{\lim\limits_{x \to c} g(x)}{\lim\limits_{x \to c} h(x)}$
$ = \frac{g(c)}{h(c)} = f(c)$
[Since $h(c) \neq 0$]
Since $\lim\limits_{x \to c} f(x) = f(c)$, $f$ is continuous at every point $c$ in its domain. This concludes that a rational function is continuous throughout its entire domain of definition.
Example. Determine the domain of continuity for the rational function $f(x) = \frac{x^2 - 9}{x^2 - 5x + 6}$.
Answer:
To Find: The set of points where the given rational function is continuous.
Step 1: Identify the denominator and set it to zero.
The function is continuous everywhere except where $h(x) = 0$.
$x^2 - 5x + 6 = 0$
$(x - 2)(x - 3) = 0$
The denominator is zero at $x = 2$ and $x = 3$.
Step 2: Define the Domain of Continuity.
According to Theorem 4, the rational function $f$ is continuous at every point of its domain. Therefore:
Domain $= \mathbb{R} \setminus \{2, 3\}$
Conclusion: The function is continuous for all real numbers except at $x = 2$ and $x = 3$. At $x = 2$ the numerator is non-zero ($2^2 - 9 = -5$), so the function has an infinite discontinuity there; at $x = 3$ the numerator also vanishes ($3^2 - 9 = 0$), so the discontinuity is removable.
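This classification can be confirmed with a short sympy computation (my illustration, not part of the text):

```python
import sympy as sp

x = sp.Symbol('x')
f = (x**2 - 9) / (x**2 - 5*x + 6)   # = (x-3)(x+3) / ((x-2)(x-3))

# Zeros of the denominator: the candidate points of discontinuity.
assert set(sp.solve(x**2 - 5*x + 6, x)) == {2, 3}

# At x = 3 the numerator also vanishes, so the discontinuity is
# removable: the limit exists and is finite.
assert sp.limit(f, x, 3) == 6        # (x+3)/(x-2) -> 6/1 = 6

# At x = 2 only the denominator vanishes: infinite discontinuity.
assert sp.limit(f, x, 2, '+') == sp.oo
```

Cancelling the common factor $(x - 3)$ shows directly why the limit at $x = 3$ is finite while the limit at $x = 2$ is not.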
Summary Comparison
| Function Type | Continuity Range | Exceptions |
|---|---|---|
| Polynomial Functions | Everywhere ($\mathbb{R}$) | None |
| Rational Functions | Whole Domain | Where denominator $= 0$ |
| Constant/Identity | Everywhere ($\mathbb{R}$) | None |
Continuity of Absolute Value Functions
In the study of real functions, the Absolute Value Function (or Modulus Function) plays a significant role, especially in understanding distance and magnitude. When we apply the absolute value operation to a continuous function, the resulting function preserves the property of continuity.
Theorem 5: Continuity of Absolute Value (Modulus) Function
Statement: If a real-valued function $f$ is continuous at a point $c$, then its absolute value function $|f|$, defined by $|f|(x) = |f(x)|$ for all $x$ in the domain, is also continuous at $x = c$.
Proof of Theorem 5
Given that $f$ is continuous at $x = c$, by the definition of continuity, we have:
$\lim\limits_{x \to c} f(x) = f(c)$
…(i)
We need to prove that $\lim\limits_{x \to c} |f(x)| = |f(c)|$. Using the property of limits for absolute values, or the triangle inequality property $||a| - |b|| \leq |a - b|$, we can observe that as $f(x)$ approaches $f(c)$, the magnitude $|f(x)|$ must approach $|f(c)|$.
Evaluating the limit:
$\lim\limits_{x \to c} |f|(x) = \lim\limits_{x \to c} |f(x)|$
$ = |\lim\limits_{x \to c} f(x)|$
$ = |f(c)| = |f|(c)$
[From equation (i)]
Hence, the absolute value function $|f|$ is continuous at $x = c$.
Important Note on the Converse:
It is crucial to remember that the converse of Theorem 5 is not necessarily true. If $|f|$ is continuous at a point, it does not guarantee that $f$ itself is continuous at that point. A function can jump between positive and negative values of the same magnitude, so that $f$ is discontinuous while $|f|$ remains perfectly continuous.
Example. Consider the function $f: \mathbb{R} \to \mathbb{R}$ defined by:
$f(x) = \begin{cases} 1 & , & \text{if } x \in \mathbb{Z} \\ -1 & , & \text{if } x \notin \mathbb{Z} \end{cases}$
Show that the function $f$ is discontinuous at every integer $c \in \mathbb{Z}$, but its absolute value function $|f|$ is continuous for all $x \in \mathbb{R}$.
Answer:
Given: A piecewise function $f(x)$ that assigns the value $1$ to integers and $-1$ to non-integers. We are also considering the modulus function $|f|(x) = |f(x)|$.
To Prove:
1. $f(x)$ is discontinuous at every $c \in \mathbb{Z}$.
2. $|f|(x)$ is continuous for all $x \in \mathbb{R}$.
Proof Part 1: Discontinuity of $f(x)$ at Integers
Let $c$ be any arbitrary integer ($c \in \mathbb{Z}$). To check the continuity at $x = c$, we examine the functional value and the limit at that point.
Step 1: Functional Value
According to the definition of the function, for any integer $c$:
$f(c) = 1$
[Given for $x \in \mathbb{Z}$]
Step 2: Existence of Limit
We now find the limit of $f(x)$ as $x \to c$. In the limiting process, $x$ approaches $c$ but $x \neq c$. On the real number line, the set of integers $\mathbb{Z}$ is a discrete set. This means that for any integer $c$, there exists a small neighborhood $(c - \delta, c + \delta)$ such that no other integer except $c$ itself lies in this interval.
Therefore, for all points $x$ such that $0 < |x - c| < \delta$, $x$ is not an integer. In this neighborhood, the function value is always $-1$.
Calculating the Left Hand Limit (LHL):
$\text{LHL} = \lim\limits_{x \to c^-} f(x) = \lim\limits_{x \to c^-} (-1) = -1$
Calculating the Right Hand Limit (RHL):
$\text{RHL} = \lim\limits_{x \to c^+} f(x) = \lim\limits_{x \to c^+} (-1) = -1$
Since $\text{LHL} = \text{RHL}$, the limit exists:
$\lim\limits_{x \to c} f(x) = -1$
Step 3: Comparison
Comparing the above results:
$\lim\limits_{x \to c} f(x) \neq f(c)$
Since the limit of the function as $x$ approaches $c$ is not equal to the value of the function at $c$, $f(x)$ is discontinuous at every integer point.
Proof Part 2: Continuity of $|f|(x)$
Now, let us examine the absolute value function $g(x) = |f(x)|$. We analyze its value for all real numbers $x \in \mathbb{R}$ by considering two exhaustive cases:
Case I: When $x$ is an integer ($x \in \mathbb{Z}$)
In this case, $f(x) = 1$.
$|f(x)| = |1| = 1$
Case II: When $x$ is not an integer ($x \notin \mathbb{Z}$)
In this case, $f(x) = -1$.
$|f(x)| = |-1| = 1$
From the two cases above, we observe that for every real number $x$:
$|f|(x) = 1$
[$\forall x \in \mathbb{R}$]
The function $|f|$ is a constant function. Since every constant function is a polynomial of degree zero, it is inherently continuous at every point in its domain. Mathematically, for any real number $a$:
$\lim\limits_{x \to a} |f|(x) = \lim\limits_{x \to a} (1) = 1 = |f|(a)$
Conclusion: We have proved that while the original function $f$ has a removable discontinuity at every integer, the absolute value function $|f|$ is continuous everywhere on the real line. This example serves as a crucial counterexample showing that the converse of the theorem "continuity of $f \implies$ continuity of $|f|$" does not hold.
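The counterexample can be made tangible with a minimal numeric sketch (not from the text): $f$ is $1$ on integers and $-1$ elsewhere, so $f$ jumps at every integer, while $|f|$ is identically $1$ and hence continuous.

```python
def f(x):
    # 1 on integers, -1 on non-integers, as in the example above.
    return 1 if float(x).is_integer() else -1

# f jumps at the integer 2: values arbitrarily close to 2 give -1,
# while f(2) itself is 1 -- so the limit (-1) differs from f(2).
assert f(2) == 1
assert f(2 - 1e-9) == -1 and f(2 + 1e-9) == -1

# But |f| is the constant function 1 at every point sampled.
samples = [-2, -1.5, 0, 0.5, 2, 3.25]
assert all(abs(f(x)) == 1 for x in samples)
```

The sampling only illustrates the behaviour, of course; the proof in the text is what establishes it for all real $x$.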
Continuity of Inverse and Composite Functions
To analyze the behavior of more sophisticated mathematical models, such as those found in Physics and Advanced Calculus, we extend our understanding to Inversion and Composition. These operations allow us to determine the continuity of complex expressions by evaluating the nature of their component functions.
Theorem 6: Continuity of Inverse Functions
Let $f$ be a one-one (injective) and continuous function defined on a closed interval $[a, b]$ with a range $[c, d]$. According to this theorem, the inverse function $f^{-1} : [c, d] \to [a, b]$ is also a continuous function throughout its entire domain $[c, d]$.
This theorem is fundamental to Inverse Trigonometry. For instance, consider the function $f(x) = \sin x$. When restricted to its principal value branch $\left [ -\frac{\pi}{2} , \frac{\pi}{2} \right ]$, it is continuous and one-one. Therefore, its inverse function:
$f^{-1}(x) = \sin^{-1} x$
[Defined on $[-1, 1]$]
is guaranteed to be continuous for every value of $x$ in the interval $[-1, 1]$. Similar logic applies to $\cos^{-1} x$, $\tan^{-1} x$, and other inverse circular functions within their respective domains.
Theorem 7: Continuity of Composite Functions
The Composition of functions, often referred to as a "function of a function," involves applying one function to the result of another. Let $f$ and $g$ be two real-valued functions such that the composition $(g \circ f)$ is defined.
Statement: If the function $f$ is continuous at a point $c$, and the function $g$ is continuous at the point $f(c)$, then the composite function $(g \circ f)$ is also continuous at the point $c$.
Mathematically, the limit of a composite function can be evaluated by "passing the limit inside" the outer function:
$\lim\limits_{x \to c} (g \circ f)(x) = g\left( \lim\limits_{x \to c} f(x) \right) = g(f(c))$
Illustrative Examples of Composite Continuity
1. Trigonometric-Polynomial Composition: Consider $h(x) = \sin(x^2)$. Here, $f(x) = x^2$ is a continuous polynomial function on $\mathbb{R}$, and $g(x) = \sin x$ is a continuous trigonometric function on $\mathbb{R}$. Their composition $g(f(x)) = \sin(x^2)$ is continuous everywhere.
2. Multi-nested Absolute Functions: Consider $h(x) = |1 - x + |x||$. This function is composed of several continuous parts:
i. $u(x) = |x|$ (Continuous)
ii. $v(x) = 1 - x + u(x)$ (Continuous, being a sum of polynomials and $|x|$)
iii. $h(x) = |v(x)|$ (Continuous, by Theorem 5)
Since each layer of the composition is continuous, the entire function $h(x)$ is continuous on $\mathbb{R}$.
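The layered decomposition of $h(x) = |1 - x + |x||$ can be written out directly; the following sketch (my illustration, not part of the text) mirrors the three layers $u$, $v$, $h$ listed above:

```python
def u(x):
    # Inner layer: |x|, continuous on all of R.
    return abs(x)

def v(x):
    # Middle layer: 1 - x + |x|, a sum of continuous functions.
    return 1 - x + u(x)

def h(x):
    # Outer layer: |v(x)|, continuous by Theorem 5.
    return abs(v(x))

# For x >= 0: v(x) = 1 - x + x = 1; for x < 0: v(x) = 1 - 2x > 0.
# So v is always positive here and h coincides with v.
assert h(0.5) == 1 and h(2) == 1     # x >= 0 branch
assert h(-1) == 3                    # 1 - 2*(-1) = 3
```

Since $v(x) > 0$ for every real $x$, the outermost modulus never introduces a corner, and $h$ is in fact piecewise linear with a single bend at $x = 0$.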
Example. Prove that the function $h(x) = |\cos x|$ is a continuous function for all real numbers.
Answer:
To Prove: The function $h(x) = |\cos x|$ is continuous for $x \in \mathbb{R}$.
Proof: We can decompose the given function $h(x)$ into two distinct functions to apply the property of composite continuity.
Let the "inner" function be $f(x)$ and the "outer" function be $g(x)$ defined as follows:
$f(x) = \cos x$
[Trigonometric Part]
$g(x) = |x|$
[Modulus Part]
The composite function $(g \circ f)(x)$ is formed as:
$(g \circ f)(x) = g(f(x)) = g(\cos x) = |\cos x|$
We analyze the individual continuity of these functions:
1. $\cos x$: We know that the cosine function is a basic trigonometric function which is continuous at every real number $x \in \mathbb{R}$.
2. $|x|$: The absolute value function is also continuous everywhere on the real line.
Since $f(x)$ is continuous for all $x$, and $g(x)$ is continuous at every value $f(x)$ takes, we invoke Theorem 7.
$h(x) = |\cos x|$ is continuous
[By Composition Theorem]
Conclusion: The function $h(x) = |\cos x|$ is continuous for all $x \in \mathbb{R}$ as it is a composition of two continuous functions.
Continuity of Trigonometric Functions
The trigonometric functions—sine, cosine, tangent, cotangent, secant, and cosecant—are the building blocks of periodic phenomena in mathematics and physics. A thorough understanding of their continuity is mandatory. We say a trigonometric function is a continuous function if it is continuous at every point in its domain of definition.
Below we provide the elaborate proofs for the continuity of each basic trigonometric function by examining their limiting behavior.
1. Continuity of $f(x) = \sin x$
The domain of the sine function is the set of all real numbers, $D_f = \mathbb{R}$. To prove its continuity, let $c$ be any arbitrary real number.
To evaluate the limit at $x = c$, let us substitute $x = c + h$. As $x \to c$, it implies $h \to 0$.
$\lim\limits_{x \to c} \sin x = \lim\limits_{h \to 0} \sin(c + h)$
Using the angle-sum identity $\sin(A + B) = \sin A \cos B + \cos A \sin B$:
$ = \lim\limits_{h \to 0} (\sin c \cos h + \cos c \sin h)$
By the algebra of limits, we can distribute the limit over addition and multiplication:
$ = \left(\sin c \cdot \lim\limits_{h \to 0} \cos h \right) + \left(\cos c \cdot \lim\limits_{h \to 0} \sin h \right)$
Applying the standard limits $\lim\limits_{h \to 0} \cos h = 1$ and $\lim\limits_{h \to 0} \sin h = 0$:
$ = (\sin c \cdot 1) + (\cos c \cdot 0)$
$\lim\limits_{x \to c} \sin x = \sin c$
[Which is equal to $f(c)$]
Since the limit equals the functional value at an arbitrary point $c$, $\sin x$ is continuous for all $x \in \mathbb{R}$.
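The substitution $x = c + h$ used in the proof can be verified symbolically. This sympy sketch (an illustrative check, not part of the text) confirms the key limit and the two standard limits it relies on:

```python
import sympy as sp

h, c = sp.symbols('h c', real=True)

# Continuity of sin at an arbitrary symbolic point c:
# lim_{h->0} sin(c + h) = sin(c).
assert sp.limit(sp.sin(c + h), h, 0) == sp.sin(c)

# The two standard limits invoked in the proof:
assert sp.limit(sp.cos(h), h, 0) == 1
assert sp.limit(sp.sin(h), h, 0) == 0
```

The identical substitution handles $\cos x$ in the next subsection, with the angle-sum identity for cosine in place of the one for sine.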
2. Continuity of $f(x) = \cos x$
Similar to sine, the domain of the cosine function is $\mathbb{R}$. Let $c \in \mathbb{R}$. We again substitute $x = c + h$, so as $x \to c$, $h \to 0$.
$\lim\limits_{x \to c} \cos x = \lim\limits_{h \to 0} \cos(c + h)$
Using the identity: $\cos(A + B) = \cos A \cos B - \sin A \sin B$
$ = \lim\limits_{h \to 0} (\cos c \cos h - \sin c \sin h)$
$ = \left(\cos c \cdot \lim\limits_{h \to 0} \cos h \right) - \left(\sin c \cdot \lim\limits_{h \to 0} \sin h \right)$
$ = (\cos c \cdot 1) - (\sin c \cdot 0) = \cos c$
Since $\lim\limits_{x \to c} \cos x = \cos c = f(c)$, $\cos x$ is continuous for all $x \in \mathbb{R}$.
3. Continuity of $f(x) = \tan x$
To analyze the continuity of $f(x) = \tan x$, we first identify its definition in terms of the fundamental trigonometric ratios:
$f(x) = \tan x = \dfrac{\sin x}{\cos x}$
Step 1: Identifying the Domain
The function is defined only when the denominator is non-zero. Therefore, we must exclude points where $\cos x = 0$. We know that $\cos x = 0$ when $x$ is an odd multiple of $\frac{\pi}{2}$.
$D_f = \mathbb{R} \setminus \left\{ (2n+1)\frac{\pi}{2} : n \in \mathbb{Z} \right\}$
[Domain of definition]
Step 2: Applying the Algebra of Continuous Functions
We utilize Theorem 1 (v), which states that if two functions $g(x)$ and $h(x)$ are continuous at a point $c$, then their quotient $\frac{g(x)}{h(x)}$ is also continuous at $c$, provided $h(c) \neq 0$.
Let $g(x) = \sin x$ and $h(x) = \cos x$. We have already proven that:
i. $g(x) = \sin x$ is continuous for all $x \in \mathbb{R}$.
ii. $h(x) = \cos x$ is continuous for all $x \in \mathbb{R}$.
For any point $c$ in the domain $D_f$, we know that $\cos c \neq 0$. Therefore, according to the quotient rule for continuity, the function $\tan x = \frac{\sin x}{\cos x}$ must be continuous at $x = c$.
Step 3: Formal Limit Verification
Let $c \in D_f$. We evaluate the limit as follows:
$\lim\limits_{x \to c} \tan x = \lim\limits_{x \to c} \left( \dfrac{\sin x}{\cos x} \right)$
Using the property that the limit of a quotient is the quotient of the limits:
$ = \dfrac{\lim\limits_{x \to c} \sin x}{\lim\limits_{x \to c} \cos x}$
$ = \dfrac{\sin c}{\cos c} = \tan c$
[Since $\cos c \neq 0$]
Since $\lim\limits_{x \to c} f(x) = f(c)$ for every point $c$ in its domain, we conclude that the tangent function is continuous.
4. Continuity of $f(x) = \text{cosec } x$
The cosecant function, denoted by $f(x) = \text{cosec } x$, is defined as the reciprocal of the sine function. To determine its continuity, we first establish its mathematical definition and domain:
$f(x) = \text{cosec } x = \dfrac{1}{\sin x}$
Step 1: Domain Analysis
The function is defined for all real numbers $x$ except those where the denominator, $\sin x$, becomes zero. We know that $\sin x = 0$ for all integral multiples of $\pi$. Therefore, the domain $D_f$ is:
$D_f = \mathbb{R} \setminus \{ n\pi : n \in \mathbb{Z} \}$
[Domain of definition]
Step 2: Proving Continuity
Let $c$ be any arbitrary real number in the domain $D_f$. We evaluate the limit of $f(x)$ as $x \to c$:
$\lim\limits_{x \to c} f(x) = \lim\limits_{x \to c} \left( \dfrac{1}{\sin x} \right)$
By the quotient rule for limits:
$ = \dfrac{\lim\limits_{x \to c} 1}{\lim\limits_{x \to c} \sin x}$
Since the constant function $1$ and the function $\sin x$ are both continuous for all real numbers, and given that $c \in D_f$ (meaning $\sin c \neq 0$):
$ = \dfrac{1}{\sin c} = \text{cosec } c$
Because $\lim\limits_{x \to c} f(x) = f(c)$, the function $\text{cosec } x$ is continuous at every point in its domain. Thus, it is a continuous function.
5. Continuity of $f(x) = \sec x$
The secant function, $f(x) = \sec x$, is the reciprocal of the cosine function. Its continuity analysis mirrors that of the cosecant function.
$f(x) = \sec x = \dfrac{1}{\cos x}$
Step 1: Domain Analysis
The function is undefined where $\cos x = 0$, which occurs at odd multiples of $\frac{\pi}{2}$.
$D_f = \mathbb{R} \setminus \left\{ (2n+1)\dfrac{\pi}{2} : n \in \mathbb{Z} \right\}$
Step 2: Proving Continuity
Let $c \in D_f$. We examine the limit:
$\lim\limits_{x \to c} \sec x = \lim\limits_{x \to c} \left( \dfrac{1}{\cos x} \right) = \dfrac{\lim\limits_{x \to c} 1}{\lim\limits_{x \to c} \cos x}$
Since the cosine function is continuous everywhere and $\cos c \neq 0$ for $c \in D_f$:
$ = \dfrac{1}{\cos c} = \sec c$
Conclusion: The function $f(x) = \sec x$ is continuous throughout its entire domain of definition.
6. Continuity of $f(x) = \cot x$
The cotangent function, $f(x) = \cot x$, can be expressed as the ratio of the cosine function to the sine function. This makes it a quotient of two proven continuous functions.
$f(x) = \cot x = \dfrac{\cos x}{\sin x}$
Step 1: Domain Analysis
The cotangent function is defined wherever $\sin x$ is non-zero. Similar to cosecant, we exclude all points $x = n\pi$.
$D_f = \mathbb{R} \setminus \{ n\pi : n \in \mathbb{Z} \}$
Step 2: Proving Continuity
Consider $c$ to be any point such that $c \in D_f$. We check the continuity by evaluating the limit:
$\lim\limits_{x \to c} \cot x = \lim\limits_{x \to c} \left( \dfrac{\cos x}{\sin x} \right)$
Applying the Limit Quotient Rule:
$ = \dfrac{\lim\limits_{x \to c} \cos x}{\lim\limits_{x \to c} \sin x}$
$ = \dfrac{\cos c}{\sin c} = \cot c$
[Since $\sin c \neq 0$]
Conclusion: Since $\lim\limits_{x \to c} \cot x = f(c)$ for all points where the function is defined, $\cot x$ is a continuous function.
Summary Table of Trigonometric Continuity
| Trigonometric Function | Domain of Continuity | Condition for Continuity |
|---|---|---|
| $\sin x$ | $\mathbb{R}$ | Continuous everywhere |
| $\cos x$ | $\mathbb{R}$ | Continuous everywhere |
| $\tan x$ | $\mathbb{R} \setminus \{(2n+1)\frac{\pi}{2}\}$ | Continuous in domain |
| $\text{cosec } x$ | $\mathbb{R} \setminus \{n\pi\}$ | Continuous in domain |
| $\sec x$ | $\mathbb{R} \setminus \{(2n+1)\frac{\pi}{2}\}$ | Continuous in domain |
| $\cot x$ | $\mathbb{R} \setminus \{n\pi\}$ | Continuous in domain |
Summary of Inverse Trigonometric Functions
By applying Theorem 6 to the basic trigonometric functions (within their restricted principal domains), we conclude that all Inverse Trigonometric Functions are continuous within their domains. This is a critical result for evaluating limits involving $\sin^{-1} x$, $\tan^{-1} x$, etc.
| Inverse Function | Domain of Continuity | Principal Value Range |
|---|---|---|
| $f(x) = \sin^{-1} x$ | $[-1, 1]$ | $[-\frac{\pi}{2}, \frac{\pi}{2}]$ |
| $f(x) = \cos^{-1} x$ | $[-1, 1]$ | $[0, \pi]$ |
| $f(x) = \tan^{-1} x$ | $\mathbb{R}$ (or $(-\infty, \infty)$) | $(-\frac{\pi}{2}, \frac{\pi}{2})$ |
| $f(x) = \cot^{-1} x$ | $\mathbb{R}$ | $(0, \pi)$ |
| $f(x) = \sec^{-1} x$ | $\mathbb{R} \setminus (-1, 1)$ i.e., $(-\infty, -1] \cup [1, \infty)$ | $[0, \pi] \setminus \{\frac{\pi}{2}\}$ |
| $f(x) = \text{cosec}^{-1} x$ | $\mathbb{R} \setminus (-1, 1)$ | $[-\frac{\pi}{2}, \frac{\pi}{2}] \setminus \{0\}$ |
Relationship Between Differentiability and Continuity
The concept of Differentiability is one of the most significant pillars of Calculus. While Continuity ensures that there are no "gaps" or "breaks" in a function, Differentiability is a stricter condition that requires the function to be "smooth" and "regular" in its behavior. Differentiability is defined as the existence of a unique, non-vertical tangent at a point on the curve.
Mathematically, if a function is differentiable at a point, it means the graph doesn't just connect; it connects without any sharp corners or kinks. Physically, it implies that the Instantaneous Rate of Change of the function exists and is well-defined at that specific moment.
Fundamental Definitions of Derivative
Let $f$ be a real-valued function defined on an open interval containing the point $x = c$. The function $f$ is said to be differentiable at $x = c$ if the following limit exists finitely:
$\lim\limits_{x \to c} \frac{f(x) - f(c)}{x - c}$
[General Definition]
To analyze this limit more effectively, especially in piecewise functions, we split the investigation into two directions: the Left-Hand Derivative and the Right-Hand Derivative.
1. Left-Hand Derivative (LHD)
The Left-Hand Derivative represents the slope of the curve as we approach the point $c$ from the left side (values smaller than $c$). It is denoted by $L f'(c)$ or $f'(c^-)$. To calculate it, we substitute $x = c - h$, where $h$ is a very small positive quantity ($h > 0$). As $x \to c^-$, $h \to 0$.
$L f'(c) = \lim\limits_{h \to 0} \frac{f(c - h) - f(c)}{-h}$
[Slope from the Left]
2. Right-Hand Derivative (RHD)
The Right-Hand Derivative represents the slope of the curve as we approach the point $c$ from the right side (values larger than $c$). It is denoted by $R f'(c)$ or $f'(c^+)$. We substitute $x = c + h$, where $h > 0$. As $x \to c^+$, $h \to 0$.
$R f'(c) = \lim\limits_{h \to 0} \frac{f(c + h) - f(c)}{h}$
[Slope from the Right]
The Necessary Condition for Differentiability
A function $f(x)$ is said to be differentiable at $x = c$ if and only if the following three conditions are satisfied:
1. The Left-Hand Derivative ($L f'(c)$) exists and is a finite real number.
2. The Right-Hand Derivative ($R f'(c)$) exists and is a finite real number.
3. Both the one-sided derivatives are exactly equal.
$L f'(c) = R f'(c) = \text{Finite Value}$
If these conditions are met, the common value is called the Derivative of $f$ at $c$, denoted by $f'(c)$. If $L f'(c) \neq R f'(c)$, the function is said to be non-differentiable at that point, which usually manifests as a "corner" or "sharp turn" on the graph.
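The LHD/RHD test can be approximated numerically with a small helper (an assumed illustration, not from the text) that evaluates both one-sided difference quotients at a point $c$:

```python
def one_sided_derivatives(f, c, h=1e-6):
    # Numerical estimates of LHD and RHD at x = c,
    # matching the definitions Lf'(c) and Rf'(c) above.
    lhd = (f(c - h) - f(c)) / (-h)   # slope approached from the left
    rhd = (f(c + h) - f(c)) / h      # slope approached from the right
    return lhd, rhd

# Smooth function: both sides agree, so f'(3) exists (here f'(3) = 6).
lhd, rhd = one_sided_derivatives(lambda x: x**2, 3)
assert abs(lhd - 6) < 1e-4 and abs(rhd - 6) < 1e-4

# Sharp corner: |x| at 0 gives LHD = -1 and RHD = 1, so f'(0)
# does not exist -- the "corner" case described above.
lhd, rhd = one_sided_derivatives(abs, 0)
assert lhd == -1 and rhd == 1
```

A finite step $h$ only approximates the limits, but it cleanly separates the smooth case (matching slopes) from the corner case (mismatched slopes).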
Geometrical Interpretation of Differentiability
To visualize Differentiability, we consider a curve $y = f(x)$ and a fixed point $P(c, f(c))$ on it. Let $Q(c + h, f(c + h))$ be a neighboring point on the curve. The line passing through points $P$ and $Q$ is called a secant line.
The slope of this secant line $PQ$ is given by the ratio of the change in $y$ to the change in $x$:
$\text{Slope of } PQ = \frac{f(c + h) - f(c)}{h}$
... (i)
As $h \to 0$, the point $Q$ moves along the curve and approaches point $P$. If the function is differentiable at $c$, the secant line $PQ$ approaches a unique limiting position. This limiting line is the Tangent to the curve at point $P$.
For the derivative to exist, the tangent approached from the left (as $h \to 0^-$) must be the same as the tangent approached from the right (as $h \to 0^+$). This implies the curve must be smooth at point $P$. The slope of this unique tangent is the Differential Coefficient $f'(c)$.
Visualizing Non-Differentiability: Sharp Corners
A common mistake is assuming that every connected (continuous) curve is differentiable. However, if a curve has a "sharp corner" or "kink," it is not differentiable at that point because the tangent from the left does not match the tangent from the right.
Consider the absolute value function $f(x) = |x|$. It is continuous at $x = 0$ because you can draw it without lifting your pen. However, look at its slopes:
1. For $x < 0$, the slope is always $-1$.
2. For $x > 0$, the slope is always $+1$.
At the origin ($x=0$), the graph forms a sharp "V" shape. Because $L f'(0) = -1$ and $R f'(0) = 1$, and since $-1 \neq 1$, there is no unique tangent at the origin. Thus, the function is continuous but not differentiable at $x = 0$.
Condition for Differentiability at a Point
In Advanced Calculus, merely being continuous is not enough for a function to be "well-behaved." A function $f(x)$ is said to be differentiable at a point $x = c$ if and only if there exists a unique, finite tangent to the curve at that point. This requires that the rate of change of the function as approached from the left side must exactly match the rate of change as approached from the right side.
Mathematically, the condition for differentiability is expressed through the equality of the Left-Hand Derivative (LHD) and the Right-Hand Derivative (RHD). Both must exist independently as finite real numbers.
$L f'(c) = R f'(c)$
[Equality of one-sided derivatives]
If the above equality holds, the function is differentiable at $c$, and this common finite value is designated as the Derivative or Differential Coefficient of $f$ at $x = c$. It is symbolically represented as $f'(c)$ or $\left. \frac{dy}{dx} \right|_{x=c}$.
If the limits are infinite, or if $L f'(c) \neq R f'(c)$, the function is said to be non-differentiable at that point. Geometrically, this manifests as a "sharp corner," a "kink," or a vertical tangent on the graph.
Differentiability in an Interval
We are often required to examine the differentiability of a function over a specific range of values rather than a single point. This is categorized into two types of intervals:
1. Differentiability in an Open Interval (a, b)
A function $f$ is said to be differentiable in the open interval $(a, b)$ if it is differentiable at every single point $x$ belonging to that interval. If even one point within the interval exists where the derivative fails to exist, the function is not differentiable in $(a, b)$.
2. Differentiability in a Closed Interval [a, b]
Defining differentiability for a closed interval $[a, b]$ is more rigorous because it includes the endpoints. A function $f$ is differentiable in $[a, b]$ if it satisfies the following three criteria:
i. $f$ is differentiable at every point in the open interval $(a, b)$.
ii. $f$ is differentiable from the right at the starting point $x = a$. This means the following limit must exist finitely:
$R f'(a) = \lim\limits_{h \to 0^+} \frac{f(a + h) - f(a)}{h}$
iii. $f$ is differentiable from the left at the ending point $x = b$. This means the following limit must exist finitely:
$L f'(b) = \lim\limits_{h \to 0^-} \frac{f(b + h) - f(b)}{h}$
Domain of Differentiability
The set of all points in the domain of a function $f$ where the function is differentiable is called its Domain of Differentiability. It is a fundamental rule in calculus that the domain of differentiability is always a subset of the domain of continuity. In other words, a function must first be continuous to have any chance of being differentiable.
Consider the function $f(x) = |x|$. Its Domain of Continuity is the entire set of real numbers, $\mathbb{R}$. However, since it has a sharp corner at the origin, its Domain of Differentiability is $\mathbb{R} \setminus \{0\}$.
Comparison of Domains
| Function Type | Domain of Continuity | Domain of Differentiability |
|---|---|---|
| Polynomial Functions | $\mathbb{R}$ | $\mathbb{R}$ |
| $|x|$ (Modulus) | $\mathbb{R}$ | $\mathbb{R} \setminus \{0\}$ |
| $[x]$ (Greatest Integer) | $\mathbb{R} \setminus \mathbb{Z}$ | $\mathbb{R} \setminus \mathbb{Z}$ |
| $\sqrt{x}$ | $[0, \infty)$ | $(0, \infty)$ |
Note: The most common points of non-differentiability occur where the function definition changes (piecewise points) or where the "inner" function of a modulus becomes zero.
Theorem: Differentiability Implies Continuity
Statement: If a real-valued function $f$ is differentiable at a point $x = c$, then it is necessarily continuous at that point.
Given: $f$ is differentiable at $x = c$.
To Prove: $f$ is continuous at $x = c$.
Proof of the Theorem
Since the function $f$ is differentiable at $x = c$, by the definition of the derivative, the limit of the difference quotient exists and is a finite real number.
$f'(c) = \lim\limits_{h \to 0} \frac{f(c + h) - f(c)}{h}$
[Exists and is finite] ... (i)
To prove that the function is continuous at $x = c$, we must show that the limiting value of the function as $x$ approaches $c$ is equal to the functional value $f(c)$. Mathematically, we must prove:
$\lim\limits_{h \to 0} f(c + h) = f(c)$
For any $h \neq 0$, we can algebraically manipulate the difference $[f(c + h) - f(c)]$ by multiplying and dividing by $h$:
$f(c + h) - f(c) = \frac{f(c + h) - f(c)}{h} \cdot h$
Now, we take the limit on both sides of the above equation as $h \to 0$:
$\lim\limits_{h \to 0} [f(c + h) - f(c)] = \lim\limits_{h \to 0} \left[ \frac{f(c + h) - f(c)}{h} \cdot h \right]$
Using the product rule for limits ($\lim [u \cdot v] = \lim u \cdot \lim v$):
$ \lim\limits_{h \to 0} [f(c + h) - f(c)] = \left[ \lim\limits_{h \to 0} \frac{f(c + h) - f(c)}{h} \right] \cdot \lim\limits_{h \to 0} h$
Substituting the value of the derivative from equation (i):
$ \lim\limits_{h \to 0} [f(c + h) - f(c)] = f'(c) \cdot 0$
[$\because \lim\limits_{h \to 0} h = 0$]
$ \lim\limits_{h \to 0} [f(c + h) - f(c)] = 0$
Adding $f(c)$ to both sides, we get:
$\lim\limits_{h \to 0} f(c + h) = f(c)$
... (ii)
According to the formal definition of Continuity, equation (ii) confirms that the function $f$ is continuous at $x = c$. Hence Proved.
Important Corollary and the False Converse
Corollary: Every differentiable function is continuous. This implies that differentiability is a stronger condition than continuity. If you know a function is differentiable, you can immediately conclude it is continuous without further checking.
Warning: The converse of this theorem is NOT necessarily true. A function may be perfectly continuous at a point but fail to be differentiable at that point. This typically occurs at points where the graph has a "corner," "sharp turn," or a "kink."
The Converse: Continuity Does Not Imply Differentiability
In our study of the theorem "Differentiability $\implies$ Continuity," we established that if a function has a derivative at a point, it must be connected (continuous) at that point. However, the Converse of this Theorem is not necessarily true.
A function can be perfectly continuous at a point (meaning you can draw its graph through that point without lifting your pen) but fail to be differentiable at that same point. This occurs when the graph of the function exhibits a "sharp corner," a "kink," or a "cusp." At such points, the function does not have a unique tangent because the slope of the curve changes abruptly from one side to the other.
Example. Let $f: \mathbb{R} \to \mathbb{R}$ be the modulus function defined by $f(x) = |x|$. Examine the continuity and differentiability of the function at the origin ($x = 0$).
Answer:
Given: $f(x) = |x|$, which can be expressed as the piecewise function:
$f(x) = \begin{cases} -x & , & x < 0 \\ 0 & , & x = 0 \\ x & , & x > 0 \end{cases}$
Part 1: Examination of Continuity at $x = 0$
For $f(x)$ to be continuous at $x = 0$, the Left Hand Limit (LHL), Right Hand Limit (RHL), and the functional value must all be equal.
Step 1: Functional Value
$f(0) = |0| = 0$
Step 2: Limiting Values
$\text{LHL} = \lim\limits_{x \to 0^-} f(x) = \lim\limits_{x \to 0} (-x) = 0$
$\text{RHL} = \lim\limits_{x \to 0^+} f(x) = \lim\limits_{x \to 0} (x) = 0$
From the above calculations, we see that:
$\text{LHL} = \text{RHL} = f(0)$
(Continuous)
Conclusion: The function $f(x) = |x|$ is continuous at $x = 0$.
Part 2: Examination of Differentiability at $x = 0$
To check for differentiability, we evaluate the Left-Hand Derivative and Right-Hand Derivative using the first principles of differentiation.
Step 1: Left-Hand Derivative (LHD)
$L f'(0) = \lim\limits_{h \to 0^+} \frac{f(0 - h) - f(0)}{-h}$
$L f'(0) = \lim\limits_{h \to 0^+} \frac{|-h| - 0}{-h}$
$L f'(0) = \lim\limits_{h \to 0^+} \frac{h}{-h}$
[Since $h > 0 \implies |-h| = h$]
$L f'(0) = -1$
Step 2: Right-Hand Derivative (RHD)
$R f'(0) = \lim\limits_{h \to 0^+} \frac{f(0 + h) - f(0)}{h}$
$R f'(0) = \lim\limits_{h \to 0^+} \frac{|h| - 0}{h}$
$R f'(0) = \lim\limits_{h \to 0^+} \frac{h}{h} = 1$
Step 3: Comparison
Comparing the above results, we find:
$L f'(0) \neq R f'(0)$
[As $-1 \neq 1$]
Conclusion: Since the left-hand derivative and the right-hand derivative exist as finite numbers but are not equal, the derivative $f'(0)$ does not exist. Thus, $f(x) = |x|$ is not differentiable at $x = 0$.
Geometrical Insight
If we look at the graph of $f(x) = |x|$, it forms a sharp V-shape at the origin. On the left side ($x < 0$), the graph is a straight line with a constant slope of $-1$. On the right side ($x > 0$), it is a straight line with a constant slope of $1$. At the point $x = 0$, there is a sudden change in slope. Because there is no unique tangent that can represent both sides of the curve at that point, the function fails to be differentiable.
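The one-sided derivative computation above can be mirrored numerically. The following Python sketch (the step size $h$ is an arbitrary small choice, and a difference quotient only approximates a limit) estimates the left- and right-hand slopes of $|x|$ at the origin:

```python
# Numerical sketch (not a proof): estimate the one-sided slopes of
# f(x) = |x| at x = 0 with difference quotients. h = 1e-6 is an
# arbitrary small step.

def f(x):
    return abs(x)

h = 1e-6
lhd = (f(0 - h) - f(0)) / (-h)   # left-hand difference quotient
rhd = (f(0 + h) - f(0)) / h      # right-hand difference quotient
print(lhd, rhd)  # -1.0 1.0 -- the one-sided slopes disagree
```

The two quotients settle at $-1$ and $+1$ no matter how small $h$ becomes, matching the LHD and RHD found above.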
Standard Functions and their Status at Critical Points
| Function | Critical Point ($c$) | Continuous at $c$? | Differentiable at $c$? |
|---|---|---|---|
| Polynomial: $x^2$ | Any $x \in \mathbb{R}$ | Yes | Yes |
| Modulus: $|x|$ | $x = 0$ | Yes | No (Sharp Corner) |
| Greatest Integer: $[x]$ | Any $x \in \mathbb{Z}$ | No (Jump) | No |
| Radical: $\sqrt{x}$ | $x = 0$ | Yes (Right-Cont.) | No (Infinite Slope) |
| Mod-Square: $|x|^2$ | $x = 0$ | Yes | Yes (Slope $= 0$) |
Note: While a sharp corner ($LHD \neq RHD$) is the most common reason for non-differentiability in continuous functions, a function also fails to be differentiable if the curve has a Vertical Tangent (where the slope becomes $\pm \infty$), such as in $f(x) = x^{\frac{1}{3}}$ at $x = 0$.
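The vertical-tangent failure mode mentioned in the note can also be seen numerically: for $f(x) = x^{1/3}$ the difference quotient at $0$ does not settle at two finite values, it grows without bound. A small sketch (step sizes are arbitrary choices):

```python
import math

# Illustrate the vertical-tangent case: for f(x) = x^(1/3) the
# difference quotient at 0 blows up as the step h shrinks, so the
# tangent at the origin is vertical.

def cbrt(x):
    # real cube root; math.copysign preserves the sign for negative x
    return math.copysign(abs(x) ** (1 / 3), x)

quotients = [(cbrt(h) - cbrt(0.0)) / h for h in (1e-2, 1e-4, 1e-6)]
print(quotients)  # roughly [21.5, 464.2, 10000.0] -- growing without bound
```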
Derivatives of Composite Functions
In the real world, most mathematical models are not simple independent functions but combinations of multiple functions, known as Composite Functions. A composite function is formed when one function is nested inside another. Such functions are typically denoted by $(f \circ g)(x) = f(g(x))$. To differentiate these, we use the Chain Rule, which is arguably the most fundamental and frequently used tool in all of differential calculus.
The intuition behind the Chain Rule is that if a variable $y$ depends on $u$, and $u$ in turn depends on $x$, then the rate of change of $y$ with respect to $x$ is the product of the rate of change of $y$ with respect to $u$ and the rate of change of $u$ with respect to $x$.
The Chain Rule
Theorem: Let $y = f(u)$ be a differentiable function of $u$ and let $u = g(x)$ be a differentiable function of $x$. Then $y$ is a differentiable function of $x$, and its derivative is given by the product of the derivatives of the outer and inner functions.
This theorem is formally expressed in Leibniz notation as follows:
$\dfrac{dy}{dx} = \dfrac{dy}{du} \cdot \dfrac{du}{dx}$
[Leibniz Notation]
Alternative Form (Prime Notation):
If $h(x) = (f \circ g)(x) = f(g(x))$, then the derivative $h'(x)$ is calculated by differentiating the "outer" function while keeping the "inner" function as its argument, and then multiplying by the derivative of the "inner" function:
$h'(x) = f'(g(x)) \cdot g'(x)$
[Prime Notation]
The "Onion" Analogy
Think of a composite function like an onion. To reach the core, you must peel it layer by layer, starting from the outermost one. In a composite function like $f(g(x))$:
1. The Outer Layer ($f$): We first differentiate the outer function $f$. During this step, we treat the entire inner part $g(x)$ as a single, unmodified block (often visualized as a variable $u$). This gives us $f'(g(x))$.
2. The Inner Layer ($g$): Once the outer layer is "peeled" (differentiated), we move inside to differentiate the inner function $g$ with respect to $x$. This gives us $g'(x)$.
3. Linking the Chain: The final result is the product of these individual derivatives. This "chaining" process can continue indefinitely if there are multiple nested functions (e.g., $f(g(h(x)))$).
Steps to Apply the Chain Rule Effectively
For a given function $y$, follow these systematic steps:
Step I: Identify the outer function and the inner function. For example, in $\sin(x^2)$, the outer function is the sine function and the inner function is $x^2$.
Step II: Differentiate the outer function with respect to the inner one. In our example, the derivative of $\sin(x^2)$ with respect to $(x^2)$ is $\cos(x^2)$.
Step III: Differentiate the inner function with respect to $x$. In our example, the derivative of $x^2$ is $2x$.
Step IV: Multiply the results from Step II and Step III.
Example. Find the derivative of the function $y = (3x^2 + 5)^7$ with respect to $x$ by applying the Chain Rule.
Answer:
Given: A composite function $y = (3x^2 + 5)^7$. Here, the function is of the form $y = [f(x)]^n$.
To Find: The rate of change of $y$ with respect to $x$, denoted as $\dfrac{dy}{dx}$.
Step-by-Step Solution
To solve this using the Substitution Method (Chain Rule), we decompose the function into an "inner" part and an "outer" part.
Step 1: Substitution
Let the expression inside the parenthesis be represented by a new variable $u$.
$u = 3x^2 + 5$
... (i)
Substituting equation (i) into the original function, we get the outer function:
$y = u^7$
... (ii)
Step 2: Differentiating the Outer Function
We differentiate $y$ with respect to $u$ using the Power Rule ($\frac{d}{du}u^n = nu^{n-1}$):
$\dfrac{dy}{du} = \dfrac{d}{du}(u^7) = 7u^6$
... (iii)
Step 3: Differentiating the Inner Function
Now, we differentiate $u$ with respect to $x$ from equation (i):
$\dfrac{du}{dx} = \dfrac{d}{dx}(3x^2 + 5)$
$\dfrac{du}{dx} = 3(2x) + 0 = 6x$
... (iv)
Step 4: Applying the Chain Rule Formula
The Chain Rule states that the total derivative is the product of the derivatives of the individual layers:
$\dfrac{dy}{dx} = \dfrac{dy}{du} \cdot \dfrac{du}{dx}$
[Formula]
Substituting the values from equation (iii) and equation (iv) into the Chain Rule formula:
$\dfrac{dy}{dx} = (7u^6) \cdot (6x)$
Step 5: Back-Substitution and Simplification
Finally, we replace $u$ with its original expression in terms of $x$ (from equation i):
$\dfrac{dy}{dx} = 7(3x^2 + 5)^6 \cdot 6x$
$\dfrac{dy}{dx} = 42x(3x^2 + 5)^6$
[Final Answer]
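The worked example above can be cross-checked numerically by comparing the derived formula with a central difference quotient. In this Python sketch, the sample point $x_0 = 0.5$ and the step $h$ are arbitrary illustrative choices:

```python
# Cross-check the chain-rule result dy/dx = 42x(3x^2 + 5)^6 against a
# central difference quotient of y = (3x^2 + 5)^7.

def y(x):
    return (3 * x**2 + 5) ** 7

def dydx(x):
    return 42 * x * (3 * x**2 + 5) ** 6   # the result derived above

x0, h = 0.5, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric, dydx(x0))  # the two values agree closely
```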
Extension to Multiple Chains
The Chain Rule is not limited to just two functions. If a function is composed of three parts, such as $y = f(g(h(x)))$, the chain simply extends:
$\dfrac{dy}{dx} = \dfrac{dy}{du} \cdot \dfrac{du}{dv} \cdot \dfrac{dv}{dx}$
[Where $u = g(v)$ and $v = h(x)$]
Note: You are often expected to perform the Chain Rule "mentally" without substituting $u$ or $v$. Practice seeing the "layers" of the function to speed up your calculations during the exam.
Corollaries and Specialized Rules
Corollary 1: The General Power Rule
The General Power Rule is a specific application of the chain rule where the "outer" function is a power function. This rule is exceptionally useful when differentiating expressions where a whole function is raised to a constant exponent.
Statement: If $h(x) = [f(x)]^n$, where $f(x)$ is a differentiable function of $x$ and $n$ is any real number, then the derivative is given by:
$h'(x) = n[f(x)]^{n-1} \cdot f'(x)$
Proof of the General Power Rule
To prove this, we decompose the function into its constituent layers using a temporary variable $u$.
Let the "inner" function be:
$u = f(x)$
(Substitution)
Then the original function $h(x)$ can be written in terms of $u$ as an "outer" function $g(u)$:
$y = g(u) = u^n$
By the Chain Rule, we know that:
$\dfrac{dy}{dx} = \dfrac{dy}{du} \cdot \dfrac{du}{dx}$
[Chain Rule]
Now, we differentiate the two parts separately:
1. Differentiating $y$ with respect to $u$:
$\dfrac{dy}{du} = \dfrac{d}{du}(u^n) = nu^{n-1}$
2. Differentiating $u$ with respect to $x$:
$\dfrac{du}{dx} = \dfrac{d}{dx}[f(x)] = f'(x)$
Substituting these values back into the chain rule formula:
$\dfrac{dy}{dx} = (nu^{n-1}) \cdot f'(x)$
Finally, replacing $u$ with the original function $f(x)$:
$h'(x) = n[f(x)]^{n-1} \cdot f'(x)$
[Hence Proved]
Corollary 2: Parametric Differentiation
In many problems in Coordinate Geometry and Physics (such as projectile motion), variables $x$ and $y$ are expressed in terms of a common third variable, usually $t$ or $\theta$, known as a parameter.
If $x = \phi(t)$ and $y = \psi(t)$, we can find the derivative of $y$ with respect to $x$ without eliminating the parameter $t$. By the chain rule:
$\dfrac{dy}{dx} = \dfrac{dy}{dt} \cdot \dfrac{dt}{dx}$
Since $\dfrac{dt}{dx} = \dfrac{1}{dx/dt}$, the formula simplifies to:
$\dfrac{dy}{dx} = \dfrac{\dfrac{dy}{dt}}{\dfrac{dx}{dt}}$
[provided $\dfrac{dx}{dt} \neq 0$]
Corollary 3: Derivative of Inverse Functions
There is a reciprocal relationship between the derivative of a function and the derivative of its inverse. This property is crucial when we want to find the slope of an inverse function (like $\sin^{-1}x$) using the known derivative of the original function (like $\sin x$).
Statement: If $y = f(x)$ is a differentiable function such that its inverse $x = f^{-1}(y)$ exists, then the derivative of $y$ with respect to $x$ is the reciprocal of the derivative of $x$ with respect to $y$.
$\dfrac{dy}{dx} = \dfrac{1}{\dfrac{dx}{dy}}$
[provided $\dfrac{dx}{dy} \neq 0$]
This corollary also highlights the Fundamental Identity of differential coefficients:
$\dfrac{dy}{dx} \cdot \dfrac{dx}{dy} = 1$
Example. If $x = a \cos \theta$ and $y = a \sin \theta$, find $\dfrac{dy}{dx}$ using parametric differentiation.
Answer:
Step 1: Differentiate $x$ and $y$ with respect to the parameter $\theta$.
$\dfrac{dx}{d\theta} = -a \sin \theta$
$\dfrac{dy}{d\theta} = a \cos \theta$
Step 2: Apply the parametric differentiation formula (Corollary 2).
$\dfrac{dy}{dx} = \dfrac{dy/d\theta}{dx/d\theta} = \dfrac{a \cos \theta}{-a \sin \theta}$
$\dfrac{dy}{dx} = -\cot \theta$
[Final Result]
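The parametric result can be spot-checked numerically by forming $\frac{dy/d\theta}{dx/d\theta}$ from difference quotients. In this sketch, $a = 2$ and $\theta = 0.7$ are arbitrary sample values:

```python
import math

# Verify dy/dx = -cot(theta) for the parametric circle
# x = a*cos(theta), y = a*sin(theta).

a, t, h = 2.0, 0.7, 1e-6
dxdt = (a * math.cos(t + h) - a * math.cos(t - h)) / (2 * h)
dydt = (a * math.sin(t + h) - a * math.sin(t - h)) / (2 * h)
slope = dydt / dxdt
print(slope, -1 / math.tan(t))  # both equal -cot(0.7)
```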
Derivative of the Absolute Value (Modulus) Function
The Absolute Value Function, denoted by $f(x) = |x|$, presents a unique case in calculus. While the function is continuous everywhere on the real line $\mathbb{R}$, it is not differentiable at $x = 0$ due to the sharp "V-shaped" corner at the origin. However, for all points where $x \neq 0$, the derivative can be explicitly calculated. This is often derived using two distinct methods: the Chain Rule (Algebraic Approach) and the Piecewise Definition (Geometric Approach).
Method 1: Algebraic Derivation (Using Chain Rule)
To find the derivative of $y = |x|$ for $x \neq 0$, we can utilize the algebraic identity that expresses the modulus as the principal square root of a square. This allows us to treat the modulus function as a composite function.
Step 1: Define the function in composite form.
$y = |x| = \sqrt{x^2}$
Step 2: Apply the Chain Rule. Let $u = x^2$ (inner function) and $y = \sqrt{u}$ (outer function). Differentiating $y$ with respect to $x$ gives:
$\dfrac{dy}{dx} = \dfrac{d}{du}(\sqrt{u}) \cdot \dfrac{du}{dx}$
Step 3: Differentiate the individual layers.
$\dfrac{dy}{du} = \dfrac{1}{2\sqrt{u}}$
[Power Rule]
$\dfrac{du}{dx} = 2x$
[Power Rule]
Step 4: Combine the results and substitute $u = x^2$ back.
$\dfrac{dy}{dx} = \dfrac{1}{2\sqrt{x^2}} \cdot 2x$
Step 5: Simplify the expression. Note that $\sqrt{x^2} = |x|$.
$\dfrac{dy}{dx} = \dfrac{\cancel{2} \cdot x}{\cancel{2} \cdot |x|}$
$\dfrac{d}{dx}|x| = \dfrac{x}{|x|}$
[Valid for $x \neq 0$]
Method 2: Piecewise Derivation
We can also find the derivative by breaking the function into its linear components based on the domain. Recall that the modulus function is defined as:
$f(x) = \begin{cases} x & , & x > 0 \\ -x & , & x < 0 \end{cases}$
Case I: When $x > 0$
The function is $y = x$. Differentiating with respect to $x$:
$\dfrac{dy}{dx} = 1$
Case II: When $x < 0$
The function is $y = -x$. Differentiating with respect to $x$:
$\dfrac{dy}{dx} = -1$
Case III: When $x = 0$
As established in the study of differentiability, the Left-Hand Derivative ($-1$) does not equal the Right-Hand Derivative ($+1$). Therefore, the derivative does not exist at $x = 0$.
Summary of the Derivative of $|x|$
The derivative of the modulus function can be summarized as the Signum Function, which yields the sign of $x$. For competitive exams like JEE, this result is extremely useful for solving problems involving absolute values in integrands or differential equations.
$\dfrac{d}{dx}|x| = \dfrac{x}{|x|} = \text{sgn}(x) = \begin{cases} 1 & , & x > 0 \\ -1 & , & x < 0 \end{cases}$, while the derivative is undefined at $x = 0$.
General Formula for Absolute Value of a Function
Using the Chain Rule, if $y = |f(x)|$, then the derivative is given by:
$\dfrac{dy}{dx} = \dfrac{f(x)}{|f(x)|} \cdot f'(x)$
[for $f(x) \neq 0$]
Example. Differentiate $y = |x^2 - 4|$ with respect to $x$.
Answer:
To Find: The derivative of $y = |x^2 - 4|$.
Solution: Let $f(x) = x^2 - 4$. Then $y = |f(x)|$.
Using the general formula stated above:
$\dfrac{dy}{dx} = \dfrac{f(x)}{|f(x)|} \cdot f'(x)$
Since $f'(x) = \dfrac{d}{dx}(x^2 - 4) = 2x$, we substitute the values:
$\dfrac{dy}{dx} = \dfrac{x^2 - 4}{|x^2 - 4|} \cdot (2x)$
Conclusion: The derivative exists for all $x$ except where $x^2 - 4 = 0$, i.e., at $x = \pm 2$.
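The formula can be checked numerically at a point where the sign factor matters. In this sketch, $x_0 = 1$ (an arbitrary point with $x^2 - 4 < 0$, so $|x^2-4| = 4 - x^2$ locally) is compared against a central difference quotient:

```python
# Check d/dx |x^2 - 4| = (x^2 - 4)/|x^2 - 4| * 2x at x0 = 1, where the
# sign factor is -1 and the expected slope is -2x0 = -2.

def g(x):
    return abs(x**2 - 4)

x0, h = 1.0, 1e-6
numeric = (g(x0 + h) - g(x0 - h)) / (2 * h)
formula = (x0**2 - 4) / abs(x0**2 - 4) * 2 * x0
print(numeric, formula)  # both approximately -2.0
```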
Derivatives of Inverse Trigonometric Functions
The derivatives of inverse trigonometric functions play a pivotal role in advanced calculus. Their importance stems from the fact that they allow us to differentiate expressions involving angles and, more critically, they provide the basis for integrating many algebraic rational functions.
A crucial concept to understand here is the Principal Value Branch. Since trigonometric functions like $\sin x$, $\cos x$, etc., are periodic, they are many-to-one. For an inverse to exist, a function must be one-to-one and onto (bijective). Therefore, we restrict the domains of the original trigonometric functions to specific intervals (branches) where they are strictly monotonic. The most commonly used interval is called the Principal Value Branch, and the derivatives derived below are valid within the interiors of these specific domains.
Derivatives and Domains
Below is a comprehensive table showing the derivative of each inverse trigonometric function along with its domain of validity.
| Function $y = f(x)$ | Derivative $\frac{dy}{dx}$ | Domain of Derivative |
|---|---|---|
| $\sin^{-1} x$ | $\frac{1}{\sqrt{1 - x^2}}$ | $-1 < x < 1$ |
| $\cos^{-1} x$ | $-\frac{1}{\sqrt{1 - x^2}}$ | $-1 < x < 1$ |
| $\tan^{-1} x$ | $\frac{1}{1 + x^2}$ | $x \in \mathbb{R}$ |
| $\cot^{-1} x$ | $-\frac{1}{1 + x^2}$ | $x \in \mathbb{R}$ |
| $\sec^{-1} x$ | $\frac{1}{|x|\sqrt{x^2 - 1}}$ | $|x| > 1$ |
| $\text{cosec}^{-1} x$ | $-\frac{1}{|x|\sqrt{x^2 - 1}}$ | $|x| > 1$ |
Detailed Derivations
The derivative of an inverse trigonometric function tells us the instantaneous rate of change of the angle with respect to its trigonometric value. To find these derivatives, we typically use the method of Implicit Differentiation combined with fundamental trigonometric identities.
1. Derivative of $\sin^{-1} x$
To find the derivative of $y = \sin^{-1} x$, we must first understand its constraints. The function is defined for $x \in [-1, 1]$, but it is only differentiable in the open interval $(-1, 1)$ because the slope becomes infinite at the endpoints $x = 1$ and $x = -1$.
Given:
$y = \sin^{-1} x$
where $x \in (-1, 1)$ and $y \in \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$
Proof:
From the definition of inverse functions, we can rewrite the expression as:
$x = \sin y$
... (i)
Now, we differentiate both sides of equation (i) with respect to $x$ by applying the Chain Rule on the right-hand side:
$\frac{d}{dx}(x) = \frac{d}{dx}(\sin y)$
$1 = \cos y \cdot \frac{dy}{dx}$
To isolate $\frac{dy}{dx}$, we rearrange the terms:
$\frac{dy}{dx} = \frac{1}{\cos y}$
... (ii)
Our goal is to express the derivative purely in terms of $x$. We use the fundamental identity $\sin^2 y + \cos^2 y = 1$, which gives us:
$\cos y = \pm \sqrt{1 - \sin^2 y}$
Crucial Logic: Since the principal value branch of $y = \sin^{-1} x$ is $\left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$, the angle $y$ lies in the first or fourth quadrant. In these quadrants, the cosine function is always non-negative ($\cos y \geq 0$). Therefore, we discard the negative root.
$\cos y = \sqrt{1 - \sin^2 y}$
[$\because \cos y > 0$ for $y \in (-\frac{\pi}{2}, \frac{\pi}{2})$] ... (iii)
Substituting $\sin y = x$ from equation (i) into equation (iii):
$\cos y = \sqrt{1 - x^2}$
... (iv)
Finally, substituting the value of $\cos y$ from (iv) into equation (ii):
$\frac{dy}{dx} = \frac{1}{\sqrt{1 - x^2}}$
Conclusion: The derivative of the inverse sine function is: $\frac{d}{dx}(\sin^{-1} x) = \frac{1}{\sqrt{1 - x^2}}$, provided $|x| < 1$.
Example. Differentiate $f(x) = \sin^{-1} (5x)$ with respect to $x$.
Answer:
Let $y = \sin^{-1} (5x)$. To find $\frac{dy}{dx}$, we use the chain rule. Let $u = 5x$.
Then, $y = \sin^{-1} u$.
By the chain rule:
$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$
We know that $\frac{d}{du}(\sin^{-1} u) = \frac{1}{\sqrt{1 - u^2}}$ and $\frac{d}{dx}(5x) = 5$.
Substituting these values:
$\frac{dy}{dx} = \frac{1}{\sqrt{1 - (5x)^2}} \cdot 5$
$\frac{dy}{dx} = \frac{5}{\sqrt{1 - 25x^2}}$
[Valid for $|5x| < 1 \implies |x| < \frac{1}{5}$]
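The result can be spot-checked numerically inside the validity interval. In this sketch, $x_0 = 0.1$ (an arbitrary point with $|x| < \frac{1}{5}$) is compared against a central difference quotient:

```python
import math

# Check d/dx [arcsin(5x)] = 5 / sqrt(1 - 25x^2) at x0 = 0.1.

x0, h = 0.1, 1e-6
numeric = (math.asin(5 * (x0 + h)) - math.asin(5 * (x0 - h))) / (2 * h)
formula = 5 / math.sqrt(1 - 25 * x0**2)
print(numeric, formula)  # both approximately 5.7735
```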
2. Derivative of $\cos^{-1} x$
The derivative of $y = \cos^{-1} x$ is determined using implicit differentiation. Similar to $\sin^{-1} x$, this function is defined for $x \in [-1, 1]$ but is differentiable only within the open interval $(-1, 1)$, as the tangent at the endpoints $x = \pm 1$ becomes vertical.
Given:
$y = \cos^{-1} x$
where $x \in (-1, 1)$ and $y \in (0, \pi)$
Proof:
From the definition of the inverse cosine function, we can express $x$ in terms of $y$:
$x = \cos y$
... (i)
Next, we differentiate both sides of equation (i) with respect to $x$. For the left-hand side, $\frac{d}{dx}(x) = 1$. For the right-hand side, we apply the Chain Rule, as $y$ is a function of $x$:
$\frac{d}{dx}(x) = \frac{d}{dx}(\cos y)$
$\frac{d}{dx}(x) = \frac{d}{dy}(\cos y) \cdot \frac{dy}{dx}$
$\mathbf{1 = (-\sin y) \cdot \frac{dy}{dx}}$
Now, we solve for $\frac{dy}{dx}$:
$\frac{dy}{dx} = -\frac{1}{\sin y}$
... (ii)
To express $\sin y$ in terms of $x$, we use the trigonometric identity $\sin^2 y + \cos^2 y = 1$:
$\sin y = \pm \sqrt{1 - \cos^2 y}$
Crucial Logic: For the principal value branch of $y = \cos^{-1} x$, the angle $y$ lies in the interval $(0, \pi)$. In this interval (which covers the first and second quadrants), the value of $\sin y$ is always positive. Therefore, we choose the positive square root:
$\sin y = \sqrt{1 - \cos^2 y}$
[$\because \sin y > 0$ for $y \in (0, \pi)$] ... (iii)
Substitute $\cos y = x$ from equation (i) into equation (iii):
$\sin y = \sqrt{1 - x^2}$
... (iv)
Finally, substitute this expression for $\sin y$ from (iv) back into equation (ii):
$\frac{dy}{dx} = -\frac{1}{\sqrt{1 - x^2}}$
Conclusion: The derivative of the inverse cosine function is: $\frac{d}{dx}(\cos^{-1} x) = -\frac{1}{\sqrt{1 - x^2}}$, for $x \in (-1, 1)$.
Example. Differentiate $f(x) = \cos^{-1} (x^2)$ with respect to $x$.
Answer:
Let $y = \cos^{-1} (x^2)$. We use the chain rule. Let $u = x^2$.
Then, $y = \cos^{-1} u$.
Applying the chain rule formula, $\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$:
First, find $\frac{dy}{du}$:
$\frac{dy}{du} = \frac{d}{du}(\cos^{-1} u) = -\frac{1}{\sqrt{1 - u^2}}$
Next, find $\frac{du}{dx}$:
$\frac{du}{dx} = \frac{d}{dx}(x^2) = 2x$
Now, substitute these back into the chain rule formula:
$\frac{dy}{dx} = -\frac{1}{\sqrt{1 - (x^2)^2}} \cdot (2x)$
$\frac{dy}{dx} = -\frac{2x}{\sqrt{1 - x^4}}$
[Valid for $|x^2| < 1 \implies |x| < 1$]
3. Derivative of $\tan^{-1} x$
To find the derivative of $y = \tan^{-1} x$, we use the method of implicit differentiation. This function is defined for all $x \in \mathbb{R}$ and its principal value range is the open interval $\left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$.
Given:
$y = \tan^{-1} x$
where $x \in \mathbb{R}$ and $y \in \left( -\frac{\pi}{2}, \frac{\pi}{2} \right)$
Proof:
By the definition of inverse trigonometric functions, we can write:
$x = \tan y$
... (i)
Differentiating both sides of equation (i) with respect to $x$:
$\frac{d}{dx}(x) = \frac{d}{dx}(\tan y)$
Applying the Chain Rule on the right-hand side:
$1 = \sec^2 y \cdot \frac{dy}{dx}$
[$\because \frac{d}{dy}(\tan y) = \sec^2 y$] ... (ii)
Rearranging the equation to solve for $\frac{dy}{dx}$:
$\frac{dy}{dx} = \frac{1}{\sec^2 y}$
... (iii)
To convert the expression into terms of $x$, we use the standard trigonometric identity: $\sec^2 y = 1 + \tan^2 y$.
$\sec^2 y = 1 + \tan^2 y$
[Fundamental Identity] ... (iv)
Substituting the value $\tan y = x$ from equation (i) into equation (iv):
$\sec^2 y = 1 + x^2$
Now, substituting this result into equation (iii), we obtain the final derivative:
$\frac{dy}{dx} = \frac{1}{1 + x^2}$
Conclusion: The derivative of the inverse tangent function is: $\mathbf{\frac{d}{dx}(\tan^{-1} x) = \frac{1}{1 + x^2}}$, for all $x \in \mathbb{R}$.
Example. Differentiate $y = \tan^{-1} \left( \frac{\cos x + \sin x}{\cos x - \sin x} \right)$ with respect to $x$.
Answer:
Before differentiating, we simplify the expression inside the bracket. Divide the numerator and denominator by $\cos x$:
$y = \tan^{-1} \left( \frac{1 + \tan x}{1 - \tan x} \right)$
Using the identity $\tan \left( \frac{\pi}{4} + x \right) = \frac{1 + \tan x}{1 - \tan x}$:
$y = \tan^{-1} \left[ \tan \left( \frac{\pi}{4} + x \right) \right]$
$y = \frac{\pi}{4} + x$
(Simplified form)
Now, differentiating both sides with respect to $x$:
$\frac{dy}{dx} = \frac{d}{dx} \left( \frac{\pi}{4} \right) + \frac{d}{dx}(x)$
$\frac{dy}{dx} = 0 + 1 = 1$
Final Result: $\frac{dy}{dx} = 1$.
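The simplification $y = \frac{\pi}{4} + x$ predicts a constant slope of $1$, which can be confirmed by differencing the original, unsimplified expression. In this sketch, the sample point $x_0 = 0.3$ is an arbitrary choice where $\cos x > \sin x$, so the fraction is well defined:

```python
import math

# Difference quotient of the original expression; the simplified form
# pi/4 + x predicts dy/dx = 1.

def y(x):
    return math.atan((math.cos(x) + math.sin(x)) / (math.cos(x) - math.sin(x)))

x0, h = 0.3, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric)  # approximately 1.0
```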
4. Derivatives of $\cot^{-1} x$, $\sec^{-1} x$, and $\text{cosec}^{-1} x$
In this section, we complete the derivation for the remaining three inverse trigonometric functions: $\cot^{-1} x$, $\sec^{-1} x$, and $\text{cosec}^{-1} x$.
(i) Derivative of $\cot^{-1} x$
Proof: Let $y = \cot^{-1} x$, where $x \in \mathbb{R}$ and $y \in (0, \pi)$.
$x = \cot y$
... (i)
Differentiating both sides with respect to $x$ using the Chain Rule:
$1 = -\text{cosec}^2 y \cdot \frac{dy}{dx}$
[$\because \frac{d}{dy}(\cot y) = -\text{cosec}^2 y$] ... (ii)
Rearranging to find $\frac{dy}{dx}$:
$\frac{dy}{dx} = -\frac{1}{\text{cosec}^2 y}$
Using the identity $\text{cosec}^2 y = 1 + \cot^2 y$ and substituting $x = \cot y$ from (i):
$\frac{dy}{dx} = -\frac{1}{1 + x^2}$
Conclusion: $\frac{d}{dx}(\cot^{-1} x) = -\frac{1}{1 + x^2}$, for all $x \in \mathbb{R}$.
(ii) Derivative of $\sec^{-1} x$
Proof: Let $y = \sec^{-1} x$, where $|x| \geq 1$ and $y \in [0, \pi], y \neq \frac{\pi}{2}$.
$x = \sec y$
Differentiating with respect to $x$:
$1 = \sec y \tan y \cdot \frac{dy}{dx}$
$\frac{dy}{dx} = \frac{1}{\sec y \tan y}$
We know that $\tan y = \pm \sqrt{\sec^2 y - 1} = \pm \sqrt{x^2 - 1}$. To settle the sign, consider the two branches: for $y \in \left(0, \frac{\pi}{2}\right)$ we have $x = \sec y > 1$ and $\tan y > 0$, while for $y \in \left(\frac{\pi}{2}, \pi\right)$ we have $x < -1$ and $\tan y < 0$. In both cases the product $\sec y \tan y$ is positive and equals $|x|\sqrt{x^2 - 1}$:
Conclusion: $\frac{d}{dx}(\sec^{-1} x) = \frac{1}{|x|\sqrt{x^2 - 1}}$, for $|x| > 1$.
(iii) Derivative of $\text{cosec}^{-1} x$
Proof: Following a similar implicit differentiation for $x = \text{cosec} y$:
$1 = -\text{cosec} y \cot y \cdot \frac{dy}{dx}$
Solving for the derivative and applying absolute value logic:
Conclusion: $\frac{d}{dx}(\text{cosec}^{-1} x) = -\frac{1}{|x|\sqrt{x^2 - 1}}$, for $|x| > 1$.
Example. If $y = \sec^{-1} x + \text{cosec}^{-1} x$, find $\frac{dy}{dx}$ for $|x| > 1$.
Answer:
Method 1 (Direct Differentiation):
Differentiating both terms with respect to $x$:
$\frac{dy}{dx} = \frac{d}{dx}(\sec^{-1} x) + \frac{d}{dx}(\text{cosec}^{-1} x)$
$\frac{dy}{dx} = \frac{1}{|x|\sqrt{x^2 - 1}} + \left( -\frac{1}{|x|\sqrt{x^2 - 1}} \right)$
$\frac{dy}{dx} = 0$
Method 2 (Using Identity):
We know from Inverse Trigonometric identities that:
$\sec^{-1} x + \text{cosec}^{-1} x = \frac{\pi}{2}$
(For $|x| \geq 1$)
Differentiating the constant $\frac{\pi}{2}$ with respect to $x$:
$\frac{dy}{dx} = \frac{d}{dx}\left(\frac{\pi}{2}\right) = 0$
Final Result: The derivative is $0$.
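Since Python's `math` module has no direct $\sec^{-1}$ or $\text{cosec}^{-1}$, this sketch uses the standard principal-branch identities $\sec^{-1} x = \cos^{-1}\frac{1}{x}$ and $\text{cosec}^{-1} x = \sin^{-1}\frac{1}{x}$ to confirm that the sum is the constant $\frac{\pi}{2}$ with derivative zero (sample points are arbitrary):

```python
import math

# sec^{-1} x = arccos(1/x) and cosec^{-1} x = arcsin(1/x) on the
# principal branches; their sum should be the constant pi/2.

def s(x):
    return math.acos(1 / x) + math.asin(1 / x)

for x in (1.5, 3.0, -2.0):
    print(s(x))  # pi/2 each time

x0, h = 3.0, 1e-6
numeric = (s(x0 + h) - s(x0 - h)) / (2 * h)
print(numeric)  # approximately 0
```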
Differentiation by Substitution
In many calculus problems, particularly those involving Inverse Trigonometric Functions, direct differentiation can become extremely tedious and prone to algebraic errors. To streamline this, we use the technique of Trigonometric Substitution. This method allows us to simplify a complex algebraic expression into a single trigonometric term, which then "cancels out" the inverse function based on the property $f^{-1}(f(\theta)) = \theta$.
Standard Substitutions
The choice of substitution depends on the algebraic form of the expression. Below is a comprehensive list of suggested substitutions:
| Algebraic Expression | Suggested Substitution |
|---|---|
| $a^2 - x^2$ | $x = a \sin \theta$ or $x = a \cos \theta$ |
| $a^2 + x^2$ | $x = a \tan \theta$ or $x = a \cot \theta$ |
| $x^2 - a^2$ | $x = a \sec \theta$ or $x = a \text{cosec} \theta$ |
| $\sqrt{\frac{a-x}{a+x}}$ or $\sqrt{\frac{a+x}{a-x}}$ | $x = a \cos 2\theta$ or $x = a \cos \theta$ |
| $\sqrt{\frac{x}{a-x}}$ | $x = a \sin^2 \theta$ |
| $a \cos x \pm b \sin x$ | $a = r \cos \alpha$ and $b = r \sin \alpha$ |
General Strategy for Substitution
1. Identification: Observe the algebraic structure (e.g., $1-x^2$, $1+x^2$).
2. Substitution: Replace $x$ with the corresponding trigonometric function and identify the range of the new variable $\theta$.
3. Simplification: Use trigonometric identities like $\sin 2\theta$, $\cos 2\theta$, or $\tan 2\theta$ to simplify the internal expression.
4. Final Differentiation: Convert the result back to $x$ and differentiate.
Example. Differentiate $y = \tan^{-1} \left( \frac{2x}{1-x^2} \right)$ with respect to $x$, where $|x| < 1$.
Answer:
Let $y = \tan^{-1} \left( \frac{2x}{1-x^2} \right)$.
Step 1: Substitution
Looking at the form $\frac{2x}{1-x^2}$, it resembles the identity for $\tan 2\theta$. So, we put:
$x = \tan \theta$
$\implies \theta = \tan^{-1} x$ ... (i)
Step 2: Simplify the function
Substitute $x = \tan \theta$ in the given expression:
$y = \tan^{-1} \left( \frac{2 \tan \theta}{1 - \tan^2 \theta} \right)$
Using the identity $\tan 2\theta = \frac{2 \tan \theta}{1 - \tan^2 \theta}$:
$y = \tan^{-1} (\tan 2\theta)$
[Trigonometric Identity] ... (ii)
Since $|x| < 1$, the value of $2\theta$ falls within the principal branch $(-\frac{\pi}{2}, \frac{\pi}{2})$, thus:
$y = 2\theta$
Substituting $\theta$ from (i):
$y = 2 \tan^{-1} x$
Step 3: Differentiation
Differentiating both sides with respect to $x$:
$\frac{dy}{dx} = 2 \cdot \frac{d}{dx}(\tan^{-1} x)$
$\frac{dy}{dx} = \frac{2}{1 + x^2}$
[Final Result]
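Both the simplification $y = 2\tan^{-1} x$ and the derivative $\frac{2}{1+x^2}$ can be confirmed numerically on $|x| < 1$. The sample point $x_0 = 0.4$ in this sketch is an arbitrary choice inside the interval:

```python
import math

# Confirm tan^{-1}(2x / (1 - x^2)) = 2*tan^{-1}(x) on |x| < 1, and
# that the derivative matches 2 / (1 + x^2).

def y(x):
    return math.atan(2 * x / (1 - x**2))

x0, h = 0.4, 1e-6
print(y(x0), 2 * math.atan(x0))       # equal up to rounding
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
print(numeric, 2 / (1 + x0**2))       # both approximately 1.7241
```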
Important Remarks
1. Principal Value Branch: Always check the given range of $x$. If $x$ is outside the range where $f^{-1}(f(\theta)) = \theta$, you may need to adjust the formula (e.g., $y = \pi - 2\theta$).
2. Alternative Substitution: Sometimes more than one substitution works. For example, in $\sqrt{a^2-x^2}$, both $a \sin \theta$ and $a \cos \theta$ are valid, but usually one leads to a simpler sign convention.
3. Half Angle Formulae: Be proficient in using $1 + \cos 2\theta = 2 \cos^2 \theta$ and $1 - \cos 2\theta = 2 \sin^2 \theta$. These are frequently used in radical expressions ($\sqrt{\dots}$).
Implicit Differentiation
In calculus, we generally encounter functions where the dependent variable $y$ is expressed directly in terms of the independent variable $x$. However, in many mathematical models, $x$ and $y$ are intertwined in a way that makes it difficult or impossible to isolate $y$. This leads us to the study of Implicit Functions.
Explicit vs Implicit Functions
When we can isolate $y$ on one side of the equation, the function is explicit. When $x$ and $y$ are mixed together in a way that $y$ cannot be easily separated, the function is implicit.
Explicit Polynomial Functions
In an explicit function, the value of $y$ is directly dependent on $x$. You can calculate $y$ immediately by substituting any value for $x$. These are the standard functions we encounter in early algebra.
$y = 4x^3 - 5x^2 + 2x - 10$
[Explicit Polynomial] ... (i)
To differentiate equation (i), we simply apply the power rule term by term to find $\frac{dy}{dx}$.
Implicit Polynomial Functions
In an implicit function, the variables are intertwined within a polynomial equation. While $y$ still depends on $x$, it is not isolated. Many curves in coordinate geometry, such as ellipses or hyperbolas, are represented this way.
$x^2 + 3xy + 2y^2 = 15$
[Implicit Polynomial] ... (ii)
In equation (ii), the term $3xy$ is a product of both variables. To find the derivative, we must differentiate the entire equation with respect to $x$, treating $y$ as a function of $x$ (applying the Product Rule to $3xy$ and the Chain Rule to $2y^2$).
Why Use Implicit Differentiation?
For a polynomial like $x^2 + y^2 = 25$, if we try to make it explicit, we get $y = \pm \sqrt{25 - x^2}$. This creates two separate functions. Implicit differentiation allows us to find a single expression for the slope ($\frac{dy}{dx}$) that works for the entire curve at once, without needing to deal with square roots or multiple cases.
Comparison Table
| Feature | Explicit Function | Implicit Function |
|---|---|---|
| Structure | $y = f(x)$ | $f(x, y) = 0$ |
| Variable Separation | Variables are separated. | Variables are mixed. |
| Difficulty | Easy to differentiate directly. | Requires Implicit Differentiation. |
| Example | $y = \sqrt{x}$ | $x^2 + y^2 = r^2$ |
Example. Differentiate the implicit polynomial $x^2 + y^2 = 100$ with respect to $x$.
Answer:
Given the equation of a circle with radius 10 units:
$x^2 + y^2 = 100$
Differentiating both sides with respect to $x$:
$\frac{d}{dx}(x^2) + \frac{d}{dx}(y^2) = \frac{d}{dx}(100)$
$2x + 2y \frac{dy}{dx} = 0$
[Using Chain Rule for $y^2$]
Solving for $\frac{dy}{dx}$:
$2y \frac{dy}{dx} = -2x$
$\frac{dy}{dx} = -\frac{x}{y}$
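The implicit slope $-\frac{x}{y}$ can be checked against the explicit upper branch $y = \sqrt{100 - x^2}$. In this sketch, $x_0 = 6$ is an arbitrary sample point (giving $y_0 = 8$ on the upper semicircle):

```python
import math

# Compare dy/dx = -x/y on x^2 + y^2 = 100 with a difference quotient
# of the explicit upper branch y = sqrt(100 - x^2).

def upper(x):
    return math.sqrt(100 - x**2)

x0, h = 6.0, 1e-6
y0 = upper(x0)
numeric = (upper(x0 + h) - upper(x0 - h)) / (2 * h)
print(numeric, -x0 / y0)  # both approximately -0.75
```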
Existence and Differentiability
Before we differentiate, we must understand whether the equation $f(x, y) = 0$ actually defines $y$ as a function of $x$. In calculus, for a relation to be a function, it must pass the Vertical Line Test. However, many implicit equations represent curves that fail this test globally. We categorize these into three distinct possibilities:
(i) Representation of Multiple Functions
A single implicit equation can represent two or more distinct functions. A classic example in Coordinate Geometry is the circle centered at the origin with radius $r = 5$.
$x^2 + y^2 - 25 = 0$
If we attempt to solve for $y$, we get $y^2 = 25 - x^2$, which yields two separate branches:
$f_1(x) = \sqrt{25 - x^2}$
[Upper Semi-circle]
$f_2(x) = -\sqrt{25 - x^2}$
[Lower Semi-circle]
Implicit differentiation allows us to find the slope of the tangent $\frac{dy}{dx}$ for both branches simultaneously without needing to separate them.
(ii) Equations Defining No Real Function
Some implicit equations are mathematically valid but contain no real-valued solutions for $x$ and $y$. These equations represent an empty set in the Cartesian plane.
$x^2 + y^2 + 25 = 0$
[No real $x, y$ satisfy this, since $x^2 + y^2 \geq 0$ can never equal $-25$]
(iii) The Assumption of Differentiability
While performing Implicit Differentiation, we work under the Implicit Function Theorem. We assume that in a small neighbourhood around a point $(x, y)$ on the curve, the equation defines $y$ as a differentiable function of $x$. We must ensure the curve is "smooth" (no sharp corners) at that point.
The Systematic Process of Implicit Differentiation
To find the derivative $\frac{dy}{dx}$ when $y$ is an implicit function of $x$, follow these four logical steps. This method is highly effective for solving Rate Measure problems.
Step 1: Uniform Differentiation
Differentiate every term on both sides of the equation with respect to $x$. Do not move terms yet. Even constants must be differentiated (resulting in zero).
Step 2: Application of the Chain Rule
This is the most critical step. Whenever you differentiate a term containing $y$, treat $y$ as $f(x)$. After applying the standard formula, you must multiply by $\frac{dy}{dx}$.
| Term to Differentiate | Result with respect to $x$ |
|---|---|
| $\frac{d}{dx}(x^n)$ | $nx^{n-1}$ |
| $\frac{d}{dx}(y^n)$ | $ny^{n-1} \cdot \frac{dy}{dx}$ |
| $\frac{d}{dx}(\sin y)$ | $\cos y \cdot \frac{dy}{dx}$ |
Step 3: Isolation of $\frac{dy}{dx}$
Rearrange the resulting algebraic equation so that all terms containing the factor $\frac{dy}{dx}$ are on the Left Hand Side (LHS) and all other terms are on the Right Hand Side (RHS).
Step 4: Factorization and Division
Factor out $\frac{dy}{dx}$ and divide the RHS by the remaining coefficient to obtain the final explicit expression for the derivative.
Example. Find the slope of the tangent to the curve $x^3 + y^3 = 3axy$ at any point $(x, y)$.
Answer:
Given the equation of the Folium of Descartes:
$x^3 + y^3 = 3axy$
Differentiating both sides with respect to $x$:
$\frac{d}{dx}(x^3) + \frac{d}{dx}(y^3) = \frac{d}{dx}(3axy)$
$3x^2 + 3y^2 \frac{dy}{dx} = 3a \left[ x \frac{dy}{dx} + y(1) \right]$
[Using Product Rule on $xy$]
Dividing throughout by $3$ and expanding:
$x^2 + y^2 \frac{dy}{dx} = ax \frac{dy}{dx} + ay$
Collecting $\frac{dy}{dx}$ terms on LHS:
$y^2 \frac{dy}{dx} - ax \frac{dy}{dx} = ay - x^2$
Factoring $\frac{dy}{dx}$:
$\frac{dy}{dx} (y^2 - ax) = ay - x^2$
$\frac{dy}{dx} = \frac{ay - x^2}{y^2 - ax}$
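As a quick numerical sanity check (not part of the syllabus derivation), the result above can be verified in Python using the standard rational parametrisation of the Folium, $x = \frac{3at}{1+t^3}$, $y = \frac{3at^2}{1+t^3}$. The values $a = 2$ and $t = 0.5$ below are arbitrary choices:

```python
# Quick numerical check of dy/dx = (ay - x^2)/(y^2 - ax)
# for the Folium of Descartes x^3 + y^3 = 3axy.
# We move slightly along the parametrised curve and compare the
# finite-difference slope with the implicit-differentiation formula.

def folium_point(a, t):
    d = 1 + t ** 3
    return 3 * a * t / d, 3 * a * t ** 2 / d

def implicit_slope(x, y, a):
    # Result of implicit differentiation derived above
    return (a * y - x * x) / (y * y - a * x)

a, t, h = 2.0, 0.5, 1e-6
x0, y0 = folium_point(a, t)
x1, y1 = folium_point(a, t + h)
numeric = (y1 - y0) / (x1 - x0)       # slope along the curve
formula = implicit_slope(x0, y0, a)   # slope from the formula
```

Both slopes agree because the parametrised point always satisfies the implicit equation.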
Exponential and Logarithmic Functions
In this section, we explore two of the most important classes of functions in calculus: Exponential and Logarithmic functions. These functions are inverses of each other and appear frequently in population growth models, compound interest, and complex physics equations.
Exponential Functions
If $a$ is any positive real number ($a > 0$), then the function $f$ defined by $f(x) = a^x$ is called the General Exponential Function. Here, $a$ is referred to as the base and $x$ as the exponent or index.
The behavior of this function depends significantly on the value of the base $a$:
1. Case I ($a > 1$): The function is strictly increasing. As $x$ increases, $a^x$ grows rapidly. This is known as exponential growth. For example, $2^x, 10^x$.
2. Case II ($0 < a < 1$): The function is strictly decreasing. As $x$ increases, $a^x$ approaches zero. This is known as exponential decay. For example, $(1/2)^x, (0.2)^x$.
3. Case III ($a = 1$): The function $f(x) = 1^x = 1$ becomes a constant function, which is a horizontal line.
Domain and Range Analysis
For a general exponential function $f(x) = a^x$ where $a > 0$ and $a \neq 1$, the properties of the domain and range are as follows:
1. Domain ($D_f$):
The domain of a function is the set of all possible input values ($x$) for which the function is defined. For the exponential function, we can raise a positive base $a$ to any real number power—be it positive, negative, zero, or even irrational.
Domain $(D_f) = \mathbb{R}$
(Set of all real numbers)
In interval notation, this is expressed as $x \in (-\infty, \infty)$.
2. Range ($R_f$):
The range is the set of all resulting output values ($y$). A crucial property of exponential functions with a positive base is that they never produce a negative result or zero, regardless of the exponent.
Range $(R_f) = (0, \infty)$
(Set of all positive real numbers)
Note: Even if $x$ is a very large negative number (e.g., $x = -100$ with base $10$), the result $10^{-100} = \frac{1}{10^{100}}$ is extremely small but still greater than $0$. Thus, the x-axis ($y = 0$) acts as a horizontal asymptote for the graph.
Graphical Representation
The behavior of the general exponential function $f(x) = a^x$ is fundamentally determined by the value of its base $a$. We distinguish between growth and decay based on whether the function is strictly increasing or strictly decreasing.
1. Strictly Increasing Exponential Function ($a > 1$)
When the base $a$ is greater than $1$ (e.g., $y = 2^x$, $y = e^x$, $y = 10^x$), the function is strictly increasing. This means as the value of $x$ increases, the value of $y$ also increases at an accelerating rate.
Key Features:
1. Intercept: The graph always passes through the point $(0, 1)$ because $a^0 = 1$ for any $a > 0$.
2. Asymptote: As $x \to -\infty$, the graph approaches the x-axis ($y = 0$) but never touches it. Thus, the x-axis is a horizontal asymptote.
3. Growth: For $x > 0$, the function grows towards $+\infty$. For $x < 0$, the function stays between $0$ and $1$.
2. Strictly Decreasing Exponential Function ($0 < a < 1$)
When the base $a$ lies between $0$ and $1$ (e.g., $y = (1/2)^x$), the function is strictly decreasing. This is often referred to as Exponential Decay in Physics (e.g., Radioactive Decay).
Key Features:
1. Intercept: Like the increasing version, it also passes through $(0, 1)$.
2. Asymptote: As $x \to \infty$, the graph approaches the x-axis ($y = 0$).
3. Decay: For $x > 0$, the function value decreases towards zero. For $x < 0$, the function value grows very large towards $+\infty$.
Comparative Analysis of Graphs
It is helpful to visualize both graphs on the same coordinate plane. Notice that the graph of $y = (1/a)^x$ is a reflection of the graph of $y = a^x$ about the y-axis.
Properties of $a^x$
The following laws of indices govern the operations of exponential functions. These are fundamental for solving Calculus and Algebra problems in the Indian competitive exam context.
| Property Name | Mathematical Law | Condition |
|---|---|---|
| Identity Property | $a^0 = 1$ | $a \neq 0$ |
| Product Law | $a^x \cdot a^y = a^{x+y}$ | Same base, add powers |
| Quotient Law | $\frac{a^x}{a^y} = a^{x-y}$ | Same base, subtract powers |
| Power of a Power | $(a^x)^y = a^{xy}$ | Multiply exponents |
| Negative Exponent | $a^{-x} = \frac{1}{a^x}$ | Reciprocal relationship |
| Distributive Law | $(ab)^x = a^x \cdot b^x$ | Different bases, same power |
The Natural Exponential Function ($e^x$)
In calculus, the most "natural" base to use is the irrational number $e$ (Euler's number). Its approximate value is given below:
$e \approx 2.718281828...$
The function $f(x) = e^x$ is unique because its slope at any point is exactly equal to its y-coordinate. In other words, it is the only function that is its own derivative.
Series Expansion of $e^x$: For competitive exams, it is helpful to remember the infinite series expansion for $e^x$:
$e^x = \sum\limits_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$
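The factorial denominators make this series converge very quickly. As a small illustrative sketch, summing just the first twelve terms at $x = 1$ already reproduces $e$ to about eight decimal places:

```python
# Partial sums of the series e^x = sum_{n>=0} x^n / n!.
# Each term is obtained from the previous one by multiplying by x/(n+1),
# which avoids computing factorials explicitly.

def exp_partial_sum(x, terms):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)
    return total

approx_e = exp_partial_sum(1.0, 12)   # twelve terms at x = 1
```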
Example. Compare the values of $f(x) = 2^x$ and $g(x) = (0.5)^x$ at $x = 3$.
Answer:
For $f(x) = 2^x$:
$f(3) = 2^3 = 2 \times 2 \times 2 = 8$.
Since $a = 2 > 1$, this represents growth.
For $g(x) = (0.5)^x$:
$g(3) = (0.5)^3 = \left( \frac{1}{2} \right)^3$
$g(3) = \frac{1}{2 \times 2 \times 2} = \frac{1}{8} = 0.125$
Since $a = 0.5 < 1$, this represents decay.
Logarithmic Functions
The Logarithmic Function is mathematically defined as the inverse of the exponential function. It allows us to determine the exponent to which a fixed base must be raised to produce a given number. Logarithms are extensively used in Chemistry for $pH$ calculations and in Physics for sound intensity ($dB$) and radioactive decay.
Definition and Fundamental Relation
If $a$ is a positive real number ($a > 0$) and $a \neq 1$, then for every positive real number $x$, there exists a unique real number $y$ such that $a^y = x$. This value $y$ is called the logarithm of $x$ to the base $a$.
$y = \log_a x \iff a^y = x$
[Inverse Relation]
Domain and Range Constraints:
1. Domain: Logarithms are only defined for positive real numbers. Thus, $x \in (0, \infty)$. The logarithm of zero or a negative number is not defined in the real number system.
2. Range: The resulting value of a logarithm can be any real number. Thus, $R_f = (-\infty, \infty)$.
3. Base: The base $a$ must always be positive ($a > 0$) and not equal to $1$.
Properties and Laws of Logarithms
These laws are essential for simplifying complex expressions before differentiation.
| Law Name | Mathematical Expression | Condition |
|---|---|---|
| Product Law | $\log_a (xy) = \log_a x + \log_a y$ | $x, y > 0$ |
| Quotient Law | $\log_a \left( \frac{x}{y} \right) = \log_a x - \log_a y$ | $x, y > 0$ |
| Power Law | $\log_a (x^n) = n \log_a x$ | $x > 0$ |
| Base Change Formula | $\log_a x = \frac{\log_b x}{\log_b a}$ | $a, b > 0, a, b \neq 1$ |
| Reciprocal Law | $\log_a b = \frac{1}{\log_b a}$ | $a, b > 0$; $a, b \neq 1$ |
| Log of Unity | $\log_a 1 = 0$ | Base $a > 0$ |
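The laws in the table above can be sanity-checked numerically. In the sketch below, the sample values $x = 8$, $y = 32$ and the bases $2$ and $10$ are arbitrary choices:

```python
# Numerically verifying the logarithm laws tabulated above.
# math.log(v, a) computes log base a via the base-change formula internally.
import math

x, y, a = 8.0, 32.0, 2.0

product_law  = math.isclose(math.log(x * y, a), math.log(x, a) + math.log(y, a))
quotient_law = math.isclose(math.log(x / y, a), math.log(x, a) - math.log(y, a))
power_law    = math.isclose(math.log(x ** 3, a), 3 * math.log(x, a))
base_change  = math.isclose(math.log(x, a), math.log(x, 10) / math.log(a, 10))
reciprocal   = math.isclose(math.log(10, a), 1 / math.log(a, 10))
log_of_unity = math.log(1, a) == 0.0
```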
Natural vs Common Logarithms
(i) Natural Logarithm (Base $e$)
The Natural Logarithm uses the transcendental number $e$ (Euler's number) as its base, where $e \approx 2.71828$. In higher mathematics and calculus, this is the most "natural" base because the derivative of $\ln x$ is a simple reciprocal ($\frac{1}{x}$).
$\log_e x = \ln x = \log x$
[Notation used in Calculus]
(ii) Common Logarithm (Base $10$)
The Common Logarithm uses the base $10$. This base is preferred for scientific calculations because it aligns with our decimal numeric system. It is widely used in Chemistry for calculating $pH$ values and in Physics for measuring sound intensity in decibels ($dB$).
$\log_{10} x = \log x$
[Notation used in Chemistry/Physics]
Comparison Table
| Feature | Natural Logarithm ($\ln x$) | Common Logarithm ($\log_{10} x$) |
|---|---|---|
| Base | $e \approx 2.71828$ | $10$ |
| Standard Notation | $\ln x$ or $\log x$ (in Calculus) | $\log x$ (in Algebra/Chemistry) |
| Relationship | $\ln x = 2.303 \log_{10} x$ | $\log_{10} x = \frac{\ln x}{2.303}$ |
| Primary Use | Calculus, Growth models | $pH$, $dB$, Richter Scale |
Graphical Behavior of Logarithms
The graph of a logarithmic function $f(x) = \log_a x$ is the reflection of the exponential function $f(x) = a^x$ across the line $y = x$. Its behavior is strictly determined by the base $a$. Note: Since the base $a$ must be positive and $a \neq 1$, we analyze the cases $a > 1$ and $0 < a < 1$.
Case I: Base $a > 1$ (Strictly Increasing)
When the base is greater than $1$ (like $\ln x$ or $\log_{10} x$), the function is strictly increasing. As $x$ grows, $y$ grows slowly. The graph exists only for $x > 0$.
Features:
1. Vertical Asymptote: As $x \to 0^{+}$, $y \to -\infty$ (the y-axis is the asymptote).
2. X-intercept: The graph passes through $(1, 0)$ because $\log_a 1 = 0$.
3. Growth: As $x \to \infty$, $y \to \infty$.
Case II: Base $0 < a < 1$ (Strictly Decreasing)
When the base lies between $0$ and $1$ (like $\log_{0.5} x$), the function is strictly decreasing. This behavior is less common in standard calculus but appears in specific decay-related logarithmic scales.
Features:
1. Vertical Asymptote: As $x \to 0^{+}$, $y \to \infty$.
2. X-intercept: The graph still passes through $(1, 0)$.
3. Decay: As $x \to \infty$, $y \to -\infty$.
Important Mathematical Expansions
Expansions are powerful tools in calculus, especially for evaluating limits, finding derivatives, and approximating complex functions. Being proficient with Binomial and Exponential series is mandatory for solving advanced problems.
1. Binomial Expansion for Positive Integer $(x + a)^n$
If $n$ is a positive integer, the expansion of $(x + a)^n$ is finite and contains $n+1$ terms. This is the standard Binomial Theorem as taught in NCERT Class 11.
The general formula is given by:
$(x + a)^n = \sum\limits_{r=0}^{n} {^{n}C_{r}} x^{n-r} a^{r}$
Expanding the summation:
$(x + a)^n = x^n + nx^{n-1}a + \frac{n(n-1)}{2!}x^{n-2}a^2 + \dots + a^n$
Note: Here, $^{n}C_{r} = \frac{n!}{r!(n-r)!}$ represents the Binomial Coefficient.
2. Binomial Expansion for Any Index $(1 + x)^n$
When $n$ is a negative integer or a fraction (rational number), the expansion becomes an infinite series. For this series to be convergent (meaning it sums to a finite value), the condition $|x| < 1$ must be satisfied.
The expansion is defined as:
$(1 + x)^n = 1 + nx + \frac{n(n-1)}{2!}x^2 + \frac{n(n-1)(n-2)}{3!}x^3 + \dots \infty$
Commonly Used Special Cases:
| Function | Expansion Series |
|---|---|
| $(1 + x)^{-1}$ | $1 - x + x^2 - x^3 + x^4 - \dots$ |
| $(1 - x)^{-1}$ | $1 + x + x^2 + x^3 + x^4 + \dots$ |
| $(1 + x)^{-2}$ | $1 - 2x + 3x^2 - 4x^3 + \dots$ |
| $(1 - x)^{-2}$ | $1 + 2x + 3x^2 + 4x^3 + \dots$ |
3. Exponential Series $e^x$ and $e^{-x}$
The exponential function $e^x$ can be represented as an infinite power series. This series is absolutely convergent for all real values of $x$.
(i) Expansion of $e^x$
$e^x = \sum\limits_{n=0}^{\infty} \frac{x^n}{n!}$
Expanding the terms:
$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots \infty$
(ii) Expansion of $e^{-x}$
By replacing $x$ with $-x$ in the above equation, we get the alternating series:
$e^{-x} = 1 - \frac{x}{1!} + \frac{x^2}{2!} - \frac{x^3}{3!} + \dots \infty$
4. The Mathematical Constant $e$
The constant $e$ is an irrational number that arises naturally in various mathematical contexts. It can be defined using either a Limit or the Expansion Series above by setting $x = 1$.
Definition as a Limit:
$e = \lim\limits_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$
Definition as a Series:
By substituting $x = 1$ in the $e^x$ expansion:
$e = 1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \dots \infty$
The approximate value of $e$ in the decimal system is:
$e \approx 2.718281828 \dots$
Important Limits of Exponential and Logarithmic Functions
In calculus, limits involving the transcendental number $e$ are fundamental. These limits form the basis for the derivatives of exponential and logarithmic functions. Mastering these derivations is essential as they are often used to solve indeterminate forms of type $0/0$ or $1^\infty$.
Limits of the Reciprocal Function $\frac{1}{x}$
The reciprocal function $f(x) = \frac{1}{x}$ is a Rectangular Hyperbola. Understanding its limits at the origin and at infinity is the cornerstone of calculus. It helps us understand the behavior of functions when they become undefined or unboundedly large.
1. Limit as $x \to 0$ (Behavior near the Origin)
To evaluate the limit of $\frac{1}{x}$ as $x$ approaches $0$, we must analyze the Right Hand Limit (RHL) and the Left Hand Limit (LHL) separately. This is because the behavior of the function changes drastically depending on which side of zero we approach from.
(a) Right Hand Limit (RHL)
When $x$ approaches $0$ from the positive side ($x > 0$), the denominator becomes a very small positive number, making the fraction $\frac{1}{x}$ extremely large and positive.
| $x$ | 0.1 | 0.01 | 0.001 | 0.0001 |
|---|---|---|---|---|
| $f(x) = \frac{1}{x}$ | 10 | 100 | 1000 | 10000 |
Mathematically, we represent this as:
$\lim\limits_{x \to 0^+} \frac{1}{x} = \infty$
[RHL]
(b) Left Hand Limit (LHL)
When $x$ approaches $0$ from the negative side ($x < 0$), the denominator is a very small negative number. This results in the fraction $\frac{1}{x}$ becoming unboundedly large in the negative direction.
| $x$ | -0.1 | -0.01 | -0.001 | -0.0001 |
|---|---|---|---|---|
| $f(x) = \frac{1}{x}$ | -10 | -100 | -1000 | -10000 |
$\lim\limits_{x \to 0^-} \frac{1}{x} = -\infty$
[LHL]
(c) Conclusion on Existence of Limit
For a limit to exist, the LHL must equal the RHL. Since $\infty \neq -\infty$:
$\lim\limits_{x \to 0^+} \frac{1}{x} \neq \lim\limits_{x \to 0^-} \frac{1}{x}$
Therefore, $\lim\limits_{x \to 0} \frac{1}{x}$ Does Not Exist (DNE).
2. Limit as $x \to \infty$ (End Behavior)
As $x$ grows larger and larger without bound (e.g., ten, a thousand, a lakh, a crore), the value of the reciprocal $\frac{1}{x}$ becomes smaller and smaller, shrinking toward zero.
| $x$ | 10 | 1000 | 1,00,000 | 10,00,00,000 |
|---|---|---|---|---|
| $f(x) = \frac{1}{x}$ | 0.1 | 0.001 | 0.00001 | 0.00000001 |
In the language of calculus, we say that the x-axis ($y=0$) is the Horizontal Asymptote of the function.
$\lim\limits_{x \to \infty} \frac{1}{x} = 0$
$\lim\limits_{x \to -\infty} \frac{1}{x} = 0$
Summary of Concepts
1. Approaching zero from the right gives positive infinity.
2. Approaching zero from the left gives negative infinity.
3. Approaching infinity (either positive or negative) always yields zero.
The Fundamental Limit defining $e$
The number $e$, also known as Euler's Number, is a fundamental mathematical constant that is the base of the natural logarithm. It is an irrational and transcendental number, approximately equal to $2.71828$. It is defined primarily through the following limits which arise in the study of compound interest and calculus.
1. The Limit at Infinity: $\lim\limits_{x \to \infty} \left( 1 + \frac{1}{x} \right)^x$
Derivation using Binomial Theorem
To derive this limit, we consider the expression $\left( 1 + \frac{1}{x} \right)^x$ and expand it using the Binomial Theorem for any index:
$(1 + z)^n = 1 + nz + \frac{n(n-1)}{2!}z^2 + \frac{n(n-1)(n-2)}{3!}z^3 + \dots$
Let $z = \frac{1}{x}$ and $n = x$. Substituting these values into the expansion, we get:
$\left( 1 + \frac{1}{x} \right)^x = 1 + x \cdot \left(\frac{1}{x}\right) + \frac{x(x-1)}{2!} \left(\frac{1}{x}\right)^2 + \frac{x(x-1)(x-2)}{3!} \left(\frac{1}{x}\right)^3 + \dots$
Simplifying each term by distributing the $x$ powers into the numerator brackets:
$\left( 1 + \frac{1}{x} \right)^x = 1 + 1 + \frac{x^2(1 - 1/x)}{2! \cdot x^2} + \frac{x^3(1 - 1/x)(1 - 2/x)}{3! \cdot x^3} + \dots$
$\left( 1 + \frac{1}{x} \right)^x = 1 + 1 + \frac{(1 - \frac{1}{x})}{2!} + \frac{(1 - \frac{1}{x})(1 - \frac{2}{x})}{3!} + \dots$
Now, we take the limit as $x \to \infty$. As $x$ becomes infinitely large, the terms $\frac{1}{x}, \frac{2}{x}, \frac{3}{x}, \dots$ all tend toward $0$.
$\lim\limits_{x \to \infty} \left( 1 + \frac{1}{x} \right)^x = 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots$
The sum of the infinite series $1 + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \dots$ is precisely the value of $e$.
Therefore: $\lim\limits_{x \to \infty} \left( 1 + \frac{1}{x} \right)^x = e$
Numerical Illustration of the Limit
To understand how this function approaches $e$ ($\approx 2.71828$), let us observe the values of the function for increasing values of $x$:
| Value of $x$ | Value of $\left( 1 + \frac{1}{x} \right)^x$ |
|---|---|
| 1 | $2.00000$ |
| 10 | $2.59374$ |
| 100 | $2.70481$ |
| 1,000 | $2.71692$ |
| 10,000 | $2.71814$ |
| 1,00,000 (1 Lakh) | $2.71827$ |
| $\infty$ | $e \approx 2.71828$ |
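The table above can be regenerated with a one-line Python sketch, confirming the slow but steady convergence toward $e$:

```python
# Recomputing the convergence table: (1 + 1/x)^x approaches e as x grows.
values = {x: (1 + 1 / x) ** x for x in (1, 10, 100, 1000, 10000, 100000)}
# values[1] is exactly 2.0; values[100000] is already within ~1e-5 of e.
```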
2. The Limit at Zero: $\lim\limits_{x \to 0} (1 + x)^{1/x}$
The limit $\lim\limits_{x \to 0} (1 + x)^{1/x}$ is a fundamental result in calculus, representing the Exponential Limit in its local form. It is often used to resolve indeterminate forms of the type $1^\infty$ when the variable $x$ approaches zero. This limit is critical for deriving the derivative of logarithmic and exponential functions.
Detailed Derivation by Substitution
To evaluate this limit, we relate it to the previously defined limit at infinity ($\lim\limits_{n \to \infty} (1 + 1/n)^n = e$) by performing a change of variable.
To Find: $\lim\limits_{x \to 0} (1 + x)^{1/x}$
Step-by-Step Proof:
Let us introduce a new variable $y$ such that:
$y = \frac{1}{x}$
[Substitution]
From the above substitution, it follows that:
$x = \frac{1}{y}$
Now, we must determine the behavior of $y$ as $x$ approaches $0$. Since $y$ is the reciprocal of $x$:
1. As $x \to 0^+$ (approaching zero from the right), $y = \frac{1}{x} \to \infty$.
2. As $x \to 0^-$ (approaching zero from the left), $y = \frac{1}{x} \to -\infty$.
Let us consider the case where $x \to 0^+$, which implies $y \to \infty$. Substituting $x$ and the limit variable into the original expression:
$\lim\limits_{x \to 0^+} (1 + x)^{1/x} = \lim\limits_{y \to \infty} \left( 1 + \frac{1}{y} \right)^y$
By the fundamental definition of Euler's number ($e$), we know that:
$\lim\limits_{y \to \infty} \left( 1 + \frac{1}{y} \right)^y = e$
Similarly, it can be proven that for $x \to 0^-$ (where $y \to -\infty$), the limit also converges to $e$. Since the Right-Hand Limit (RHL) and Left-Hand Limit (LHL) are equal:
Result:
$\lim\limits_{x \to 0} (1 + x)^{1/x} = e$
Numerical Illustration
As $x$ gets closer to $0$, the value of the function approaches $2.71828...$
| Value of $x$ | Value of $(1 + x)^{1/x}$ |
|---|---|
| 0.1 | $(1.1)^{10} \approx 2.59374$ |
| 0.01 | $(1.01)^{100} \approx 2.70481$ |
| 0.001 | $(1.001)^{1000} \approx 2.71692$ |
| 0.0001 | $(1.0001)^{10000} \approx 2.71814$ |
| $\to 0$ | $e \approx 2.71828$ |
Generalizations of the Limit
In competitive examinations, generalized forms of these limits are frequently encountered. If $\lim\limits_{x \to a} f(x) = 0$ and $\lim\limits_{x \to a} g(x) = \infty$ such that the limit takes the form $1^\infty$, then:
$\lim\limits_{x \to a} [1 + f(x)]^{g(x)} = e^{\lim\limits_{x \to a} f(x) \cdot g(x)}$
Common Identities:
1. $\lim\limits_{x \to \infty} \left( 1 + \frac{a}{x} \right)^x = e^a$
2. $\lim\limits_{x \to \infty} \left( 1 + \frac{a}{x} \right)^{bx} = e^{ab}$
3. $\lim\limits_{x \to 0} (1 + ax)^{1/x} = e^a$
Example. Evaluate the limit: $\lim\limits_{x \to \infty} \left( \frac{x+6}{x+1} \right)^{x+4}$
Answer:
First, we rewrite the expression in the form $(1 + f(x))$:
$\frac{x+6}{x+1} = \frac{(x+1) + 5}{x+1} = 1 + \frac{5}{x+1}$
The limit becomes: $\lim\limits_{x \to \infty} \left( 1 + \frac{5}{x+1} \right)^{x+4}$
This is of the form $1^\infty$. Using the shortcut formula $e^{\lim f(x)g(x)}$:
$L = e^{\lim\limits_{x \to \infty} \left( \frac{5}{x+1} \right) \cdot (x+4)}$
$L = e^{\lim\limits_{x \to \infty} \frac{5x + 20}{x + 1}}$
Dividing numerator and denominator by $x$:
$L = e^{\lim\limits_{x \to \infty} \frac{5 + 20/x}{1 + 1/x}} = e^{5/1} = e^5$
Final Answer: $e^5$
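The worked example above can also be checked numerically: evaluating the expression at a large $x$ should land close to $e^5 \approx 148.413$.

```python
# Numerical check of the worked example:
# ((x+6)/(x+1))^(x+4) tends to e^5 as x tends to infinity.
import math

def f(x):
    return ((x + 6) / (x + 1)) ** (x + 4)

approx = f(10 ** 6)      # value at a large x
target = math.e ** 5     # e^5, the analytic answer
```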
Logarithmic and Exponential Limits
Logarithmic and exponential limits are essential tools in calculus, particularly in deriving the derivatives of functions like $e^x$ and $\log x$. These limits often result in the indeterminate form $\frac{0}{0}$ and are solved using the fundamental properties of Euler's number $e$.
The Standard Logarithmic Limit
Statement: $\lim\limits_{x \to 0} \frac{\log_e (1 + x)}{x} = 1$
To Prove: The limit of the ratio of the natural logarithm of $(1+x)$ to $x$, as $x$ approaches zero, is unity.
Proof (Using Properties of Logarithms):
We begin by rewriting the expression using the coefficient property of logarithms, where $n \log a = \log a^n$.
$\lim\limits_{x \to 0} \frac{1}{x} \log_e (1 + x) = \lim\limits_{x \to 0} \log_e (1 + x)^{1/x}$
Since the logarithmic function is continuous for all values in its domain $(0, \infty)$, we can swap the limit operator and the logarithm function:
$\log_e \left[ \lim\limits_{x \to 0} (1 + x)^{1/x} \right]$
[Continuity of $\log$ function]
From the fundamental definition of $e$, we know that $\lim\limits_{x \to 0} (1 + x)^{1/x} = e$. Substituting this value:
$\log_e (e) = 1$
[$\because \log_a a = 1$]
Alternate Method (Using Series Expansion):
We know the Taylor series for $\log_e (1+x)$ is:
$\log_e (1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \dots$
Dividing the entire series by $x$:
$\frac{\log_e (1+x)}{x} = 1 - \frac{x}{2} + \frac{x^2}{3} - \dots$
Applying the limit $x \to 0$, all terms containing $x$ vanish, leaving us with $1$.
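Both proofs can be illustrated numerically; the series form predicts that $\frac{\log_e(1+x)}{x} = 1 - \frac{x}{2} + \dots$ approaches $1$ from below as $x \to 0^+$:

```python
# Numeric illustration of the standard limit log_e(1 + x)/x -> 1 as x -> 0+.
import math

samples = [math.log(1 + x) / x for x in (0.1, 0.01, 0.001, 0.0001)]
# The series log(1+x)/x = 1 - x/2 + ... predicts values just below 1.
```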
The Standard Exponential Limit (Base $e$)
Statement: $\lim\limits_{x \to 0} \frac{e^x - 1}{x} = 1$
Derivation by Substitution:
Let us use substitution to transform this into a logarithmic limit.
Given: $\lim\limits_{x \to 0} \frac{e^x - 1}{x}$
Let $e^x - 1 = y$.
This implies $e^x = 1 + y$.
Taking the natural logarithm ($\log_e$) on both sides:
$x = \log_e (1 + y)$
Now, we check the limit change: As $x \to 0$, $e^x \to e^0$, which means $e^x \to 1$. Therefore, $y = (e^x - 1) \to 0$. Substituting $x$ and $y$ into the original limit:
$\lim\limits_{x \to 0} \frac{e^x - 1}{x} = \lim\limits_{y \to 0} \frac{y}{\log_e (1 + y)}$
To simplify, we rewrite the fraction as the reciprocal of $\frac{\log_e (1 + y)}{y}$:
$\lim\limits_{y \to 0} \frac{1}{\frac{\log_e (1 + y)}{y}}$
Since $\lim\limits_{y \to 0} \frac{\log_e (1 + y)}{y} = 1$, the expression becomes $\frac{1}{1} = 1$.
Result: $\lim\limits_{x \to 0} \frac{e^x - 1}{x} = 1$
General Exponential Limit (Base $a$)
Statement: $\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log_e a$, where $a > 0, a \neq 1$
Derivation (Using Substitution Method):
To evaluate the limit $\lim\limits_{x \to 0} \frac{a^x - 1}{x}$, we use the method of substitution as follows:
Let $a^x - 1 = y$.
From this substitution, we can express $a^x$ as:
$a^x = 1 + y$
Taking natural logarithm ($\log_e$) on both sides:
$\log_e a^x = \log_e (1 + y)$
Using the property $\log m^n = n \log m$:
$x \log_e a = \log_e (1 + y)$
Solving for $x$:
$x = \frac{\log_e (1 + y)}{\log_e a}$
[Dividing both sides by $\log_e a$]
Now, we determine the change in the limit variable. As $x \to 0$, we have $a^x \to a^0 = 1$. Consequently:
$y = (a^x - 1) \to (1 - 1) = 0$
Substituting the values of $x$ and $y$ into the original limit expression:
$\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \lim\limits_{y \to 0} \frac{y}{\frac{\log_e (1 + y)}{\log_e a}}$
Rearranging the expression by moving $\log_e a$ to the numerator:
$= \lim\limits_{y \to 0} \left[ \frac{y \cdot \log_e a}{\log_e (1 + y)} \right]$
We can further rewrite this to utilize the standard logarithmic limit $\lim\limits_{y \to 0} \frac{\log_e (1 + y)}{y} = 1$:
$= \log_e a \cdot \lim\limits_{y \to 0} \left[ \frac{1}{\frac{\log_e (1 + y)}{y}} \right]$
Applying the limit:
$= \log_e a \cdot \frac{1}{1}$
[Using $\lim\limits_{y \to 0} \frac{\log_e(1+y)}{y} = 1$]
Final Result:
$\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log_e a$
Key Observations
1. If the base $a$ is replaced by $e$, the formula becomes $\lim\limits_{x \to 0} \frac{e^x - 1}{x} = \log_e e = 1$.
2. The result $\log_e a$ is often written as $\ln a$ in modern calculus notations.
3. This limit is the basis for the derivative of $a^x$, which is $a^x \ln a$.
Numerical Comparison
The following table shows the convergence of the exponential limit for different bases as $x$ approaches a very small value ($0.0001$).
| Base ($a$) | Function $f(x) = \frac{a^x - 1}{x}$ at $x=0.0001$ | Actual Value ($\log_e a$) |
|---|---|---|
| $e \approx 2.718$ | $1.00005$ | $1.00000$ |
| $2$ | $0.69317$ | $0.69315$ |
| $10$ | $2.30285$ | $2.30258$ |
| $1/2$ | $-0.69312$ | $-0.69315$ |
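The comparison table above is easy to regenerate; the sketch below evaluates $\frac{a^x - 1}{x}$ at $x = 0.0001$ alongside the exact value $\log_e a$ for the same four bases:

```python
# Regenerating the comparison table: (a^x - 1)/x at x = 0.0001
# against the exact value log_e a.
import math

x = 0.0001
rows = {a: ((a ** x - 1) / x, math.log(a)) for a in (math.e, 2, 10, 0.5)}
```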
Example. Evaluate: $\lim\limits_{x \to 0} \frac{3^x - 2^x}{x}$
Answer:
To use our standard limits, we subtract and add $1$ in the numerator:
$\lim\limits_{x \to 0} \frac{(3^x - 1) - (2^x - 1)}{x}$
Splitting the fraction:
$\lim\limits_{x \to 0} \left[ \frac{3^x - 1}{x} - \frac{2^x - 1}{x} \right]$
Applying the formula $\lim\limits_{x \to 0} \frac{a^x - 1}{x} = \log_e a$:
$= \log_e 3 - \log_e 2$
Using the property $\log m - \log n = \log \frac{m}{n}$:
$= \log_e \left( \frac{3}{2} \right)$
Final Answer: $\log_e (1.5)$
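A quick numerical check of this example: at a very small $x$, the quotient should be close to $\log_e(1.5) \approx 0.4055$.

```python
# Numeric check of the example: (3^x - 2^x)/x -> log_e(3/2) as x -> 0.
import math

x = 1e-6
approx = (3 ** x - 2 ** x) / x
target = math.log(1.5)   # the analytic answer
```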
Standard Exponential and Logarithmic Limits
| Limit Form | Result | Condition |
|---|---|---|
| $\lim\limits_{x \to \infty} \left( 1 + \frac{1}{x} \right)^x$ | $e$ | $x$ is a real number |
| $\lim\limits_{x \to 0} (1 + x)^{1/x}$ | $e$ | $x \to 0^+$ or $x \to 0^-$ |
| $\lim\limits_{x \to 0} \frac{\log_e(1+x)}{x}$ | $1$ | $x > -1, x \neq 0$ |
| $\lim\limits_{x \to 0} \frac{e^x - 1}{x}$ | $1$ | $x \neq 0$ |
| $\lim\limits_{x \to 0} \frac{a^x - 1}{x}$ | $\log_e a$ | $a > 0, a \neq 1$ |
| $\lim\limits_{x \to 0} \frac{\log_a(1+x)}{x}$ | $\log_a e$ (or $\frac{1}{\log_e a}$) | $a > 0, a \neq 1$ |
Derivatives of Exponential and Logarithmic Functions
The derivatives of exponential and logarithmic functions are fundamental to Calculus. These functions are unique because the derivative of the natural exponential function is the function itself, and the derivative of the natural logarithm follows a simple reciprocal rule. These results are derived using the First Principle of Derivatives (also known as the Delta Method).
1. Derivative of the Exponential Function $e^x$
Statement: The derivative of $e^x$ with respect to $x$ is $e^x$ for all $x \in \mathbb{R}$.
To Prove: $\frac{d}{dx}(e^x) = e^x$
Proof (Using First Principles):
Let $f(x) = e^x$. According to the definition of the derivative:
$f'(x) = \lim\limits_{h \to 0} \frac{f(x + h) - f(x)}{h}$
[Definition of Derivative]
Substituting $f(x) = e^x$ into the equation:
$f'(x) = \lim\limits_{h \to 0} \frac{e^{x + h} - e^x}{h}$
Using the law of exponents $e^{x+h} = e^x \cdot e^h$, we can factor out $e^x$:
$f'(x) = \lim\limits_{h \to 0} \frac{e^x(e^h - 1)}{h}$
Since $e^x$ does not depend on $h$, we can move it outside the limit operator:
$f'(x) = e^x \cdot \lim\limits_{h \to 0} \frac{e^h - 1}{h}$
We know from the standard exponential limits that $\lim\limits_{h \to 0} \frac{e^h - 1}{h} = 1$. Substituting this value:
$f'(x) = e^x \cdot 1$
Final Result:
$\frac{d}{dx}(e^x) = e^x$
2. Derivative of the Logarithmic Function $\log_e x$
Statement: The derivative of $\log_e x$ (or $\ln x$) is $\frac{1}{x}$ for all $x > 0$.
To Prove: $\frac{d}{dx}(\log_e x) = \frac{1}{x}$
Proof (Using First Principles):
Let $f(x) = \log_e x$, where $x > 0$. Using the definition of derivative:
$f'(x) = \lim\limits_{h \to 0} \frac{\log_e(x + h) - \log_e x}{h}$
Using the property of logarithms $\log_e m - \log_e n = \log_e \left(\frac{m}{n}\right)$:
$f'(x) = \lim\limits_{h \to 0} \frac{\log_e \left(\frac{x + h}{x}\right)}{h}$
Simplifying the fraction inside the logarithm:
$f'(x) = \lim\limits_{h \to 0} \frac{\log_e \left(1 + \frac{h}{x}\right)}{h}$
To use the standard limit $\lim\limits_{u \to 0} \frac{\log_e(1 + u)}{u} = 1$, we multiply and divide the expression by $x$:
$f'(x) = \lim\limits_{h \to 0} \left[ \frac{1}{x} \cdot \frac{\log_e \left(1 + \frac{h}{x}\right)}{\frac{h}{x}} \right]$
As $h \to 0$, it follows that $\frac{h}{x} \to 0$. Therefore, the limit of the second term becomes $1$:
$f'(x) = \frac{1}{x} \cdot 1$
[$\because \lim\limits_{u \to 0} \frac{\log_e(1+u)}{u} = 1$]
Final Result:
$\frac{d}{dx}(\log_e x) = \frac{1}{x}$
Numerical Comparison Table
Observe how the slope (derivative) changes for different functions at $x = 1$.
| Function $f(x)$ | Derivative $f'(x)$ | Value at $x=1$ |
|---|---|---|
| $e^x$ | $e^x$ | $2.718...$ |
| $2^x$ | $2^x \log_e 2$ | $\approx 1.386$ |
| $\log_e x$ | $1/x$ | $1.000$ |
| $x^2$ | $2x$ | $2.000$ |
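The slope values in the table above can be verified with central finite differences, a standard numerical approximation to the derivative:

```python
# Verifying the slope table with central finite differences at x = 1.
import math

def slope(f, x, h=1e-6):
    # symmetric difference quotient approximates f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

slope_exp = slope(math.exp, 1.0)            # expect e, approx 2.718
slope_2_x = slope(lambda t: 2.0 ** t, 1.0)  # expect 2 log_e 2, approx 1.386
slope_log = slope(math.log, 1.0)            # expect 1.000
slope_sq  = slope(lambda t: t * t, 1.0)     # expect 2.000
```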
Important Deductions in Derivatives
Building upon the basic derivatives of $e^x$ and $\log_e x$, we can derive more general formulas for cases where the base is a constant $a$ (where $a > 0, a \neq 1$) or where the function involves absolute values.
1. Derivative of $a^x$ ($a > 0, a \neq 1$)
Statement: The derivative of the general exponential function $a^x$ is $a^x \log_e a$.
Derivation:
We know that any positive number $a$ can be expressed as $e^{\log_e a}$. Therefore, the function $y = a^x$ can be rewritten as:
$y = (e^{\log_e a})^x = e^{x \log_e a}$
[Using $a = e^{\log_e a}$]
Now, differentiating both sides with respect to $x$ using the Chain Rule:
$\frac{d}{dx}(a^x) = \frac{d}{dx}(e^{x \log_e a})$
$\frac{d}{dx}(a^x) = e^{x \log_e a} \cdot \frac{d}{dx}(x \log_e a)$
Since $\log_e a$ is a constant, its derivative $\frac{d}{dx}(x \cdot \text{constant})$ is simply the constant itself:
$\frac{d}{dx}(a^x) = e^{x \log_e a} \cdot \log_e a$
Substituting $e^{x \log_e a}$ back as $a^x$:
$\frac{d}{dx}(a^x) = a^x \log_e a$
2. Derivative of $\log_a x$ ($x > 0, a > 0, a \neq 1$)
Statement: The derivative of the logarithm with base $a$ is $\frac{1}{x \log_e a}$.
Derivation:
To differentiate $\log_a x$, we first use the Base Changing Formula to convert it to natural logarithms (base $e$):
$\log_a x = \frac{\log_e x}{\log_e a}$
[Base changing formula]
Now, differentiating with respect to $x$:
$\frac{d}{dx}(\log_a x) = \frac{d}{dx} \left( \frac{1}{\log_e a} \cdot \log_e x \right)$
Since $\frac{1}{\log_e a}$ is a constant, we take it outside the differentiation:
$\frac{d}{dx}(\log_a x) = \frac{1}{\log_e a} \cdot \frac{d}{dx}(\log_e x)$
We know that $\frac{d}{dx}(\log_e x) = \frac{1}{x}$. Therefore:
$\frac{d}{dx}(\log_a x) = \frac{1}{x \log_e a}$
3. Derivative of $\log |x|$ ($x \neq 0$)
Statement: The derivative of $\log |x|$ is $\frac{1}{x}$ for all non-zero $x$.
Proof:
The function $f(x) = \log |x|$ is defined piece-wise:
$f(x) = \begin{cases} \log x & , & x > 0 \\ \log(-x) & , & x < 0 \end{cases}$
Case I: When $x > 0$, $\frac{d}{dx}(\log x) = \frac{1}{x}$.
Case II: When $x < 0$, we use the chain rule for $\log(-x)$:
$\frac{d}{dx}[\log(-x)] = \frac{1}{-x} \cdot \frac{d}{dx}(-x) = \frac{1}{-x} \cdot (-1) = \frac{1}{x}$
In both cases, the result is the same. Thus:
$\frac{d}{dx}(\log |x|) = \frac{1}{x}, \quad x \neq 0$
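The interesting case is $x < 0$, where the formula still returns $\frac{1}{x}$ (a negative number). A short numerical sketch at an illustrative negative point:

```python
import math

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = -2.5                                   # a negative argument
approx = num_deriv(lambda t: math.log(abs(t)), x0)
assert abs(approx - 1 / x0) < 1e-6          # formula predicts 1/x, here -0.4
```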
4. Derivative of $\log_a |x|$ ($x \neq 0, a > 0, a \neq 1$)
Statement: The derivative of $\log_a |x|$ is $\frac{1}{x \log_e a}$.
Derivation:
Combining the Base Changing Formula and the derivative of $\log |x|$:
$\frac{d}{dx}(\log_a |x|) = \frac{d}{dx} \left( \frac{\log_e |x|}{\log_e a} \right)$
$= \frac{1}{\log_e a} \cdot \frac{d}{dx}(\log_e |x|)$
$= \frac{1}{\log_e a} \cdot \frac{1}{x}$
Final Result:
$\frac{d}{dx}(\log_a |x|) = \frac{1}{x \log_e a}$
Logarithmic Differentiation
Logarithmic differentiation is a powerful technique used to simplify the process of differentiating complex functions. Instead of applying the product or quotient rules directly, we take the natural logarithm ($\log_e$) of both sides of the equation first. This transformation allows us to use logarithmic properties to convert products into sums and powers into coefficients, making the differentiation much more manageable.
When to Use Logarithmic Differentiation
This method is usually preferred in the following two types of mathematical problems:
(i) Products or Quotients of Multiple Functions: When a function consists of a long product or a complex quotient of several terms, taking the logarithm converts the product into a sum of logarithms and the quotient into a difference, which can be differentiated term-by-term.
(ii) Functions of the form $[f(x)]^{g(x)}$: When the variable occurs in both the base and the exponent (i.e., a variable raised to a variable power), logarithmic differentiation is the standard approach, because neither the power rule (for $x^n$, constant exponent) nor the exponential rule (for $a^x$, constant base) applies directly.

Derivative of $u^v$ (Variable Base and Variable Exponent)
Let $u$ and $v$ be differentiable functions of $x$. Suppose we need to find the derivative of $y = u^v$.
Proof and Derivation:
Given:
$y = u^v$
... (i)
Step 1: Taking natural logarithm ($\log_e$) on both sides:
$\log_e y = \log_e (u^v)$
Using the property $\log m^n = n \log m$:
$\log_e y = v \log_e u$
Step 2: Differentiating both sides with respect to $x$ using the Chain Rule on the left and the Product Rule on the right:
$\frac{d}{dx}(\log_e y) = \frac{d}{dx}(v \cdot \log_e u)$
$\frac{1}{y} \cdot \frac{dy}{dx} = \frac{d}{dx}(v \log_e u)$
[$\frac{d}{dx}(\log y) = \frac{1}{y} \frac{dy}{dx}$]
Step 3: Multiplying both sides by $y$ to solve for $\frac{dy}{dx}$:
$\frac{dy}{dx} = y \left[ \frac{d}{dx}(v \log_e u) \right]$
Substituting the original value of $y = u^v$ from equation (i):
$\frac{d}{dx}(u^v) = u^v \left[ \frac{dv}{dx} \cdot \log_e u + v \cdot \frac{1}{u} \frac{du}{dx} \right]$
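The general $u^v$ formula can be tested numerically. The sketch below uses the illustrative choices $u(x) = x^2 + 1$ (a positive base) and $v(x) = \cos x$, and compares the formula against a central-difference derivative:

```python
import math

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Illustrative choices: u(x) = x^2 + 1 (positive base), v(x) = cos x
x0 = 0.8
u, v = x0**2 + 1, math.cos(x0)
du, dv = 2 * x0, -math.sin(x0)
# Formula: u^v [ v' log u + v u'/u ]
formula = u ** v * (dv * math.log(u) + v * du / u)
approx = num_deriv(lambda t: (t**2 + 1) ** math.cos(t), x0)
assert abs(approx - formula) < 1e-5
```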
Example 1. Differentiate $y = x^x$ with respect to $x$.
Answer:
Given: $y = x^x$
Taking log on both sides: $\log y = x \log x$
Differentiating with respect to $x$ using the product rule:
$\frac{1}{y} \frac{dy}{dx} = \frac{d}{dx}(x) \cdot \log x + x \cdot \frac{d}{dx}(\log x)$
$\frac{1}{y} \frac{dy}{dx} = 1 \cdot \log x + x \cdot \frac{1}{x}$
$\frac{1}{y} \frac{dy}{dx} = \log x + 1$
$\frac{dy}{dx} = y(1 + \log x)$
Substituting $y = x^x$:
$\frac{dy}{dx} = x^x(1 + \log_e x)$
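A numerical check of this answer at an illustrative point $x_0 = 2$:

```python
import math

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 2.0
formula = x0 ** x0 * (1 + math.log(x0))   # x^x (1 + log_e x)
approx = num_deriv(lambda t: t ** t, x0)
assert abs(approx - formula) < 1e-4
```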
Example 2. Differentiate $y = \sqrt{\frac{(x-1)(x-2)}{(x-3)(x-4)}}$
Answer:
This is a complex product/quotient. Taking log simplifies it:
$\log y = \log \left[ \frac{(x-1)(x-2)}{(x-3)(x-4)} \right]^{1/2}$
$\log y = \frac{1}{2} [ \log(x-1) + \log(x-2) - \log(x-3) - \log(x-4) ]$
Now, differentiating both sides is much easier:
$\frac{1}{y} \frac{dy}{dx} = \frac{1}{2} \left[ \frac{1}{x-1} + \frac{1}{x-2} - \frac{1}{x-3} - \frac{1}{x-4} \right]$
$\frac{dy}{dx} = \frac{y}{2} \left[ \frac{1}{x-1} + \frac{1}{x-2} - \frac{1}{x-3} - \frac{1}{x-4} \right]$
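The result can be verified numerically at a point where the radicand is positive, e.g. $x_0 = 5$ (any $x > 4$ works):

```python
import math

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 5.0                                  # radicand is positive for x > 4
y = lambda x: math.sqrt((x - 1) * (x - 2) / ((x - 3) * (x - 4)))
formula = y(x0) / 2 * (1 / (x0 - 1) + 1 / (x0 - 2)
                       - 1 / (x0 - 3) - 1 / (x0 - 4))
approx = num_deriv(y, x0)
assert abs(approx - formula) < 1e-5
```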
Derivatives of Functions in Parametric Forms
In classical calculus, we usually deal with functions where $y$ is expressed directly in terms of $x$ (Explicit functions like $y = x^2$) or where $x$ and $y$ are entangled in an equation (Implicit functions like $x^2 + y^2 = 25$). However, in many physical phenomena—such as the trajectory of a projectile in physics or the movement of a point on a rotating wheel—it is more convenient to express both $x$ and $y$ as functions of a third independent variable, usually time ($t$) or an angle ($\theta$).
These are called Parametric Equations, and the auxiliary variable ($t$ or $\theta$) is known as the Parameter. The process of finding the rate of change of $y$ with respect to $x$ in such cases is known as Parametric Differentiation.
Definition and Notation
A functional relationship is said to be in parametric form if both the coordinates $x$ and $y$ are given as functions of the same parameter $t$. Let these functions be differentiable in a certain interval:
$x = f(t)$
[Function for $x$]
$y = g(t)$
[Function for $y$]
In this system, for every value of the parameter $t$, we obtain a corresponding pair $(x, y)$ which represents a point on the curve. To find the slope of the tangent to this curve, we need to calculate $\frac{dy}{dx}$.
Detailed Derivation of the Formula
The derivation relies on the Chain Rule of differentiation. Since $y$ is a function of $t$, and $x$ is also a function of $t$, we can technically view $y$ as a composite function of $x$.
Proof:
By the Chain Rule, the derivative of $y$ with respect to the parameter $t$ can be expressed as the product of the derivative of $y$ with respect to $x$ and the derivative of $x$ with respect to $t$:
$\frac{dy}{dt} = \frac{dy}{dx} \cdot \frac{dx}{dt}$
[Chain Rule]
Our objective is to isolate $\frac{dy}{dx}$. Dividing both sides of the chain-rule identity above by $\frac{dx}{dt}$ (assuming it is non-zero), we get:
$\frac{dy}{dx} = \frac{\frac{dy}{dt}}{\frac{dx}{dt}}$
[Provided $\frac{dx}{dt} \neq 0$]
Alternatively, using prime notation ($'$) to denote differentiation with respect to the parameter $t$, where $g'(t) = \frac{dy}{dt}$ and $f'(t) = \frac{dx}{dt}$, the formula can be written as:
$\frac{dy}{dx} = \frac{g'(t)}{f'(t)}$
[Functional Notation]
Important Note: This formula allows us to find the derivative $\frac{dy}{dx}$ as a function of the parameter $t$ directly, without the often difficult or impossible step of eliminating the parameter to find the Cartesian equation $y = F(x)$.
Condition of Validity
The standard formula for the derivative of a parametric function is:
$\frac{dy}{dx} = \frac{dy/dt}{dx/dt}$
Since this formula involves division by $\frac{dx}{dt}$, it is only valid when the denominator is non-zero. When the denominator becomes zero, we encounter two distinct geometric scenarios:
Case I: Vertical Tangent ($\frac{dx}{dt} = 0$ and $\frac{dy}{dt} \neq 0$)
If the rate of change of $x$ with respect to the parameter is zero while $y$ is still changing, the denominator of our slope formula vanishes and the magnitude of the slope grows without bound (the limit is $\pm\infty$).
Geometric Meaning: At this specific value of $t$, the tangent to the curve is perfectly vertical (parallel to the Y-axis). This commonly occurs at the leftmost and rightmost points of circles or ellipses.
Case II: Singular Points or Cusps ($\frac{dx}{dt} = 0$ and $\frac{dy}{dt} = 0$)
If both the horizontal and vertical rates of change are zero at the same instant, the limit takes the indeterminate form $\frac{0}{0}$. In physics, this corresponds to a moment where the "speed" of the particle along the path is zero.
Geometric Meaning: Such points are often Singular Points. The curve may have a Cusp (a sharp point where the curve reverses direction) or a self-intersection. To find the slope at such a point, one must calculate the limit of the ratio using L'Hôpital's Rule:
$\text{Slope} = \lim\limits_{t \to t_0} \frac{g'(t)}{f'(t)}$
Common Parametric Forms in Coordinate Geometry
Identifying parametric forms of conic sections is crucial. The following table lists common parametric representations:
| Curve Name | Cartesian Form | Parametric Form ($x, y$) | Parameter |
|---|---|---|---|
| Circle | $x^2 + y^2 = r^2$ | $x = r \cos \theta, y = r \sin \theta$ | $\theta$ |
| Parabola | $y^2 = 4ax$ | $x = at^2, y = 2at$ | $t$ |
| Ellipse | $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ | $x = a \cos \theta, y = b \sin \theta$ | $\theta$ |
| Hyperbola | $\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1$ | $x = a \sec \theta, y = b \tan \theta$ | $\theta$ |
Example 1. Find $\frac{dy}{dx}$ if $x = at^2$ and $y = 2at$.
Answer:
Step 1: Differentiate $x$ with respect to $t$.
$\frac{dx}{dt} = \frac{d}{dt}(at^2) = 2at$
Step 2: Differentiate $y$ with respect to $t$.
$\frac{dy}{dt} = \frac{d}{dt}(2at) = 2a$
Step 3: Use the parametric differentiation formula.
$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{2a}{2at}$
$\frac{dy}{dx} = \frac{1}{t}$
Final Answer: $\frac{dy}{dx} = \frac{1}{t}$
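The parametric formula can be illustrated numerically: for a small step in the parameter, $\frac{\Delta y}{\Delta x}$ should approach $\frac{1}{t}$. The values of $a$ and $t$ below are arbitrary illustrative choices.

```python
a, t0 = 1.5, 2.0                           # illustrative values
h = 1e-6
x = lambda t: a * t**2
y = lambda t: 2 * a * t
# dy/dx ≈ Δy/Δx for a small step in the parameter t
slope = (y(t0 + h) - y(t0 - h)) / (x(t0 + h) - x(t0 - h))
assert abs(slope - 1 / t0) < 1e-6
```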
Example 2. Find the slope of the tangent to the circle $x = a \cos \theta, y = a \sin \theta$ at $\theta = \frac{\pi}{4}$.
Answer:
Step 1: Differentiate both parametric equations.
$\frac{dx}{d\theta} = -a \sin \theta$
$\frac{dy}{d\theta} = a \cos \theta$
Step 2: Calculate $\frac{dy}{dx}$.
$\frac{dy}{dx} = \frac{a \cos \theta}{-a \sin \theta} = -\cot \theta$
Step 3: Evaluate at $\theta = \frac{\pi}{4}$.
$\text{Slope} = -\cot\left(\frac{\pi}{4}\right) = -1$
Second Order Derivative
The concept of Successive Differentiation refers to the process of differentiating a function repeatedly. If a function $y = f(x)$ is differentiable, its derivative $f'(x)$ is also a function of $x$. If $f'(x)$ is itself differentiable, we can find its derivative, which is termed the Second Order Derivative. This process can theoretically continue to the $3^{rd}$, $4^{th}$, or $n^{th}$ order, provided the derivatives exist.
Mathematical Definition and Derivation
Let $y = f(x)$ be a function defined on an interval. The first derivative, which represents the instantaneous rate of change of $y$ with respect to $x$, is given by:
$\frac{dy}{dx} = f'(x)$
The second order derivative is obtained by applying the differential operator $\frac{d}{dx}$ to the first derivative. It measures the rate of change of the slope of the original function. It is derived as follows:
$\frac{d}{dx} \left( \frac{dy}{dx} \right) = \frac{d^2y}{dx^2}$
[Definition of Second Order]
Similarly, for the third order derivative:
$\frac{d}{dx} \left( \frac{d^2y}{dx^2} \right) = \frac{d^3y}{dx^3} = f'''(x)$
[Third Order Derivative]
Comprehensive Table of Notations
Several notations for derivatives are in common use. A student must be proficient in recognizing all of them to avoid confusion during examinations.
| Order | Standard Notation | Function Notation | Suffix Notation | Operator Notation |
|---|---|---|---|---|
| $1^{st}$ Order | $\frac{dy}{dx}$ | $f'(x)$ | $y_1$ or $y'$ | $Dy$ |
| $2^{nd}$ Order | $\frac{d^2y}{dx^2}$ | $f''(x)$ | $y_2$ or $y''$ | $D^2y$ |
| $3^{rd}$ Order | $\frac{d^3y}{dx^3}$ | $f'''(x)$ | $y_3$ or $y'''$ | $D^3y$ |
| $n^{th}$ Order | $\frac{d^ny}{dx^n}$ | $f^{(n)}(x)$ | $y_n$ | $D^ny$ |
Reading Leibniz Notations
The notations introduced by Gottfried Wilhelm Leibniz involve the use of the differential operator $\frac{d}{dx}$. These are read as follows:
| Notation | How to Read (Pronunciation) | Context |
|---|---|---|
| $\frac{dy}{dx}$ | "d-y by d-x" OR "Derivative of $y$ with respect to $x$" | First Order |
| $\frac{d^2y}{dx^2}$ | "d-two-y by d-x square" | Second Order |
| $\frac{d^3y}{dx^3}$ | "d-three-y by d-x cube" | Third Order |
| $\frac{d^ny}{dx^n}$ | "d-n-y by d-x to the power $n$" | $n^{th}$ Order |
Important Note: In the expression $\frac{d^2y}{dx^2}$, the $2$ is not an exponent in the algebraic sense (like $x^2$); it indicates the order of differentiation. However, it is conventionally read as "square" or "two".
Reading Lagrange and Newton Notations
These notations are more concise and are frequently used.
(i) Dash or Prime Notation ($f'$)
The use of "dashes" is the most common way to refer to derivatives in Indian classrooms.
$f'(x)$
Read as: "f-dash-x" or "f-prime-x"
$f''(x)$
Read as: "f-double-dash-x" or "f-double-prime-x"
$f'''(x)$
Read as: "f-triple-dash-x" or "f-triple-prime-x"
(ii) Suffix or Subscript Notation ($y_n$)
When derivatives are written as subscripts, they are read as simple numbers.
$y_1$
Read as: "y-one"
$y_2$
Read as: "y-two"
$y_n$
Read as: "y-n"
Operator and Dot Notations
While less common in pure calculus, these appear often in Physics and Higher Engineering Mathematics.
(i) Operator Notation ($D$)
$Dy$
Read as: "Capital D of y"
$D^2y$
Read as: "D-square-y"
(ii) Dot Notation (Newton's Notation)
Commonly used for derivatives with respect to time ($t$) in Physics.
$\dot{y}$
Read as: "y-dot"
$\ddot{y}$
Read as: "y-double-dot"
Important Remarks and Properties
The study of second order derivatives involves several nuances that distinguish it from first order differentiation. Understanding the failure of the simple chain rule, the hierarchy of differentiability, and the formal notation for specific values is essential for mastering Calculus.
The Chain Rule in Second Order Derivatives
A common misconception is that the second derivative follows the same product-based chain rule as the first derivative. However, because the first derivative of a composite function is itself a product of two functions, the second derivative must be calculated using the Product Rule.
Derivation of the Second Order Chain Rule:
Let $y$ be a differentiable function of $u$, and $u$ be a differentiable function of $x$. By the First Order Chain Rule:
$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}$
... (i)
To find the second order derivative $\frac{d^2y}{dx^2}$, we differentiate equation (i) with respect to $x$ using the Product Rule:
$\frac{d^2y}{dx^2} = \frac{d}{dx} \left( \frac{dy}{du} \cdot \frac{du}{dx} \right)$
$\frac{d^2y}{dx^2} = \left[ \frac{d}{dx} \left( \frac{dy}{du} \right) \right] \cdot \frac{du}{dx} + \frac{dy}{du} \cdot \left[ \frac{d}{dx} \left( \frac{du}{dx} \right) \right]$
Now, we apply the chain rule to the term $\frac{d}{dx} \left( \frac{dy}{du} \right)$ since $\frac{dy}{du}$ is a function of $u$:
$\frac{d}{dx} \left( \frac{dy}{du} \right) = \frac{d}{du} \left( \frac{dy}{du} \right) \cdot \frac{du}{dx} = \frac{d^2y}{du^2} \cdot \frac{du}{dx}$
[Chain Rule on $\frac{dy}{du}$] ... (ii)
Substituting equation (ii) into our product rule expansion:
$\frac{d^2y}{dx^2} = \left( \frac{d^2y}{du^2} \cdot \frac{du}{dx} \right) \cdot \frac{du}{dx} + \frac{dy}{du} \cdot \frac{d^2u}{dx^2}$
Final Formula:
$\frac{d^2y}{dx^2} = \frac{d^2y}{du^2} \left( \frac{du}{dx} \right)^2 + \frac{dy}{du} \frac{d^2u}{dx^2}$
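The second-order chain rule can be verified numerically. The sketch below uses the illustrative composite $y = \sin u$ with $u = x^2$, so the formula predicts $\frac{d^2y}{dx^2} = -\sin(u)\,(2x)^2 + \cos(u)\cdot 2$:

```python
import math

def num_d2(f, x, h=1e-4):
    # Central-difference approximation of the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Illustrative composite: y = sin(u), u = x^2
x0 = 0.7
u = x0**2
# Formula: y''(u) (u')^2 + y'(u) u''  with u' = 2x, u'' = 2
formula = -math.sin(u) * (2 * x0) ** 2 + math.cos(u) * 2
direct = num_d2(lambda t: math.sin(t**2), x0)
assert abs(direct - formula) < 1e-4
```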
Existence of Higher Order Derivatives
A function being differentiable once does not automatically imply it is differentiable twice. In mathematical analysis, there is a clear hierarchy of smoothness:
Continuity $\leftarrow$ Differentiability ($C^1$) $\leftarrow$ Twice Differentiability ($C^2$)
Case Study: The Function $f(x) = x|x|$
Consider the function $f(x) = x|x|$ at the point $x = 0$. This is a popular problem in Indian Engineering Entrance Exams.
Step 1: Define the function piecewise.
$f(x) = \begin{cases} x^2 & , & x \geq 0 \\ -x^2 & , & x < 0 \end{cases}$
Step 2: Find the first derivative $f'(x)$.
Differentiating both pieces:
$f'(x) = \begin{cases} 2x & , & x > 0 \\ -2x & , & x < 0 \end{cases}$
At $x=0$, the Left Hand Derivative (LHD) is $0$ and the Right Hand Derivative (RHD) is $0$. Thus, $f'(0) = 0$. The first derivative exists and is continuous.
Step 3: Find the second derivative $f''(x)$.
Differentiating $f'(x)$:
$f''(x) = \begin{cases} 2 & , & x > 0 \\ -2 & , & x < 0 \end{cases}$
Now, let us examine the limit at $x=0$:
$\lim\limits_{x \to 0^+} f''(x) = 2$
$\lim\limits_{x \to 0^-} f''(x) = -2$
Since the one-sided derivatives of $f'$ at $x=0$ (equivalently, the left-hand and right-hand limits of $f''$) are not equal ($2 \neq -2$), the second derivative does not exist at $x=0$.
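The failure of twice-differentiability can be seen numerically: the first derivative is $f'(x) = 2|x|$, and its one-sided difference quotients at the origin disagree.

```python
# f'(x) = 2|x| from the piecewise result above
f1 = lambda x: 2 * abs(x)
h = 1e-6
rhd = (f1(0 + h) - f1(0)) / h        # right-hand derivative of f' at 0
lhd = (f1(0 - h) - f1(0)) / (-h)     # left-hand derivative of f' at 0
assert abs(rhd - 2) < 1e-9
assert abs(lhd + 2) < 1e-9
assert rhd != lhd                    # hence f''(0) does not exist
```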
Evaluation at a Specific Point (Notation)
In various contexts, we need to denote the value of the second derivative at a specific point $x=c$. The following table summarizes the standard notations used in CBSE and competitive textbooks.
| Notation Format | Symbol | How it is Read |
|---|---|---|
| Function Notation | $f''(c)$ | "f-double-dash at c" |
| Leibniz Notation | $\left( \frac{d^2y}{dx^2} \right)_{x=c}$ | "The second derivative evaluated at x equals c" |
| Suffix Notation | $(y_2)_c$ | "y-two at c" |
Example 1. If $y = x^3 + \tan x$, find $\frac{d^2y}{dx^2}$.
Answer:
Given: $y = x^3 + \tan x$
Step 1: Find the first derivative.
Differentiating with respect to $x$:
$\frac{dy}{dx} = \frac{d}{dx}(x^3) + \frac{d}{dx}(\tan x)$
$\frac{dy}{dx} = 3x^2 + \sec^2 x$
Step 2: Find the second derivative.
Differentiating again with respect to $x$:
$\frac{d^2y}{dx^2} = \frac{d}{dx}(3x^2) + \frac{d}{dx}(\sec^2 x)$
$\frac{d^2y}{dx^2} = 6x + 2\sec x \cdot \frac{d}{dx}(\sec x)$
$\frac{d^2y}{dx^2} = 6x + 2\sec x \cdot (\sec x \tan x)$
$\frac{d^2y}{dx^2} = 6x + 2\sec^2 x \tan x$
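A numerical check of this answer at an illustrative point $x_0 = 0.6$, using a central-difference second derivative:

```python
import math

def num_d2(f, x, h=1e-4):
    # Central-difference approximation of the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0 = 0.6
sec2 = 1 / math.cos(x0) ** 2                 # sec^2(x)
formula = 6 * x0 + 2 * sec2 * math.tan(x0)
direct = num_d2(lambda t: t**3 + math.tan(t), x0)
assert abs(direct - formula) < 1e-4
```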
Example 2. If $y = 5 \cos x - 3 \sin x$, prove that $\frac{d^2y}{dx^2} + y = 0$.
Answer:
Given: $y = 5 \cos x - 3 \sin x$
Step 1: Find the first derivative ($y_1$).
Differentiating with respect to $x$:
$\frac{dy}{dx} = -5 \sin x - 3 \cos x$
... (i)
Step 2: Find the second derivative ($y_2$).
Differentiating equation (i) again with respect to $x$:
$\frac{d^2y}{dx^2} = -5 \cos x - 3(-\sin x)$
$\frac{d^2y}{dx^2} = -5 \cos x + 3 \sin x$
Step 3: Factorize the result.
Taking the negative sign common:
$\frac{d^2y}{dx^2} = -(5 \cos x - 3 \sin x)$
Using the given value of $y$:
$\frac{d^2y}{dx^2} = -y$
[From Given]
Rearranging the terms:
$\frac{d^2y}{dx^2} + y = 0$
Hence Proved.
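The identity $\frac{d^2y}{dx^2} + y = 0$ can also be checked numerically at a few sample points (the points chosen are arbitrary):

```python
import math

def num_d2(f, x, h=1e-4):
    # Central-difference approximation of the second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

y = lambda x: 5 * math.cos(x) - 3 * math.sin(x)
# y'' + y should be (numerically) zero everywhere
residual = max(abs(num_d2(y, x0) + y(x0)) for x0 in (0.0, 1.0, 2.5))
assert residual < 1e-5
```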
Example 3. If $y = e^{ax} \sin bx$, find $\frac{d^2y}{dx^2}$.
Answer:
Step 1: Find the first derivative using the Product Rule.
Differentiating $y$ with respect to $x$:
$\frac{dy}{dx} = e^{ax} \cdot \frac{d}{dx}(\sin bx) + \sin bx \cdot \frac{d}{dx}(e^{ax})$
$\frac{dy}{dx} = b e^{ax} \cos bx + a e^{ax} \sin bx$
... (i)
Step 2: Find the second derivative.
Differentiating equation (i) again using the Product Rule for both terms:
$\frac{d^2y}{dx^2} = b \frac{d}{dx}(e^{ax} \cos bx) + a \frac{d}{dx}(e^{ax} \sin bx)$
$\frac{d^2y}{dx^2} = b [e^{ax}(-b \sin bx) + a e^{ax} \cos bx] + a [e^{ax}(b \cos bx) + a e^{ax} \sin bx]$
Step 3: Simplify the expression.
$\frac{d^2y}{dx^2} = -b^2 e^{ax} \sin bx + ab e^{ax} \cos bx + ab e^{ax} \cos bx + a^2 e^{ax} \sin bx$
Combining like terms:
$\frac{d^2y}{dx^2} = (a^2 - b^2) e^{ax} \sin bx + 2ab e^{ax} \cos bx$
Final Answer: $e^{ax} [ (a^2 - b^2) \sin bx + 2ab \cos bx ]$
Example 4. If $y = \cos^{-1} x$, find $\frac{d^2y}{dx^2}$ in terms of $y$ alone.
Answer:
Given: $y = \cos^{-1} x \implies x = \cos y$
Step 1: Differentiate $x = \cos y$ with respect to $x$.
$1 = -\sin y \cdot \frac{dy}{dx}$
$\frac{dy}{dx} = -\frac{1}{\sin y} = -\text{cosec } y$
... (i)
Step 2: Differentiate with respect to $x$ again.
$\frac{d^2y}{dx^2} = \frac{d}{dx}(-\text{cosec } y)$
$\frac{d^2y}{dx^2} = -(-\text{cosec } y \cot y) \cdot \frac{dy}{dx}$
$\frac{d^2y}{dx^2} = (\text{cosec } y \cot y) \cdot \frac{dy}{dx}$
Step 3: Substitute $\frac{dy}{dx}$ from equation (i).
$\frac{d^2y}{dx^2} = (\text{cosec } y \cot y) \cdot (-\text{cosec } y)$
$\frac{d^2y}{dx^2} = -\text{cosec}^2 y \cot y$
Final Answer: $-\text{cosec}^2 y \cot y$
Example 5. If $x = a \cos^3 \theta$ and $y = a \sin^3 \theta$, find $\frac{d^2y}{dx^2}$ at $\theta = \frac{\pi}{4}$.
Answer:
Step 1: Find parametric derivatives.
$\frac{dx}{d\theta} = 3a \cos^2 \theta (-\sin \theta) = -3a \cos^2 \theta \sin \theta$
$\frac{dy}{d\theta} = 3a \sin^2 \theta (\cos \theta) = 3a \sin^2 \theta \cos \theta$
Step 2: Find $\frac{dy}{dx}$.
$\frac{dy}{dx} = \frac{dy/d\theta}{dx/d\theta} = \frac{3a \sin^2 \theta \cos \theta}{-3a \cos^2 \theta \sin \theta} = -\frac{\sin \theta}{\cos \theta} = -\tan \theta$
Step 3: Find the second derivative $\frac{d^2y}{dx^2}$.
$\frac{d^2y}{dx^2} = \frac{d}{d\theta}(-\tan \theta) \cdot \frac{d\theta}{dx}$
$\frac{d^2y}{dx^2} = -\sec^2 \theta \cdot \frac{1}{-3a \cos^2 \theta \sin \theta} = \frac{1}{3a \cos^4 \theta \sin \theta}$
Step 4: Evaluate at $\theta = \frac{\pi}{4}$.
At $\theta = \pi/4$, $\sin \theta = 1/\sqrt{2}$ and $\cos \theta = 1/\sqrt{2}$.
$\frac{d^2y}{dx^2} = \frac{1}{3a (1/\sqrt{2})^4 (1/\sqrt{2})} = \frac{1}{3a (1/4) (1/\sqrt{2})} = \frac{4\sqrt{2}}{3a}$
Final Answer: $\frac{4\sqrt{2}}{3a}$
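This can be confirmed numerically by building the second derivative entirely from parametric difference quotients ($a = 2$ is an arbitrary illustrative value):

```python
import math

a = 2.0                                      # illustrative value of a
h = 1e-5
x = lambda th: a * math.cos(th) ** 3
y = lambda th: a * math.sin(th) ** 3

def slope(th):
    # dy/dx ≈ Δy/Δx for a small step in θ (the parametric formula)
    return (y(th + h) - y(th - h)) / (x(th + h) - x(th - h))

th0 = math.pi / 4
# d²y/dx² = d(dy/dx)/dx, again via a parametric difference quotient
d2 = (slope(th0 + h) - slope(th0 - h)) / (x(th0 + h) - x(th0 - h))
expected = 4 * math.sqrt(2) / (3 * a)
assert abs(d2 - expected) < 1e-3
```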
Rolle’s Theorem
Rolle’s Theorem serves as a bridge between the average rate of change of a function and its instantaneous rate of change. Historically, it was proposed by Michel Rolle in 1691. This theorem is taught as a specific case of the Mean Value Theorem. It essentially states that a continuous and smooth curve that starts and ends at the same vertical level must have at least one point where its slope is zero.
Formal Statement and the Three Pillars
For Rolle's Theorem to be applicable, a function must satisfy three specific "Pillars" or conditions. If any of these conditions fail, the theorem cannot guarantee the existence of a point $c$.
| Condition | Description | Mathematical Importance |
|---|---|---|
| Continuity | $f$ is continuous in $[a, b]$ | Ensures there are no "gaps" or "breaks" in the curve. |
| Differentiability | $f$ is derivable in $(a, b)$ | Ensures the curve is "smooth" with no sharp corners (like $|x|$). |
| Equality | $f(a) = f(b)$ | Ensures the starting and ending points are on the same horizontal line. |
If these conditions are met, there exists at least one value $c \in (a, b)$ such that:
$f'(c) = 0$
Logical Derivation (The Mean Value Connection)
Logic of Proof:
The derivation of Rolle's Theorem relies on the Extreme Value Theorem. Since $f(x)$ is continuous on the closed interval $[a, b]$, it must attain an absolute maximum value ($M$) and an absolute minimum value ($m$) somewhere in the interval.
Case 1: If $f(x)$ is a constant function.
In this case, $f(x) = k$ for all $x \in [a, b]$. Therefore, the derivative is zero for every point in the interval.
$f'(x) = 0$
[Derivative of constant is zero]
Case 2: If $f(x)$ is not constant.
Since $f(a) = f(b)$, and the function is not constant, it must either increase or decrease at some point. Therefore, it must reach either a maximum or a minimum at some point $c$ strictly between $a$ and $b$ (i.e., $c \in (a, b)$).
At a local maximum or minimum point $c$, if the function is differentiable, the derivative must be zero:
$f'(c) = 0$
[Fermat's Theorem on Extrema]
Geometrical Interpretation of Rolle’s Theorem
The geometrical interpretation of Rolle’s Theorem translates abstract algebraic conditions into visual properties of a curve. It essentially guarantees that for a smooth, continuous path that begins and ends at the same altitude, there must be at least one "summit" or "valley" where the path is momentarily level.
Let $y = f(x)$ represent a curve in the Cartesian plane. Let $A$ and $B$ be two points on this curve corresponding to the values $x = a$ and $x = b$.
Detailed Analysis of the Geometric Conditions
To visualize the theorem, we must understand what the three conditions represent on a coordinate plane:
(i) Continuity in $[a, b]$
This ensures that the graph of the function from point $A$ to point $B$ is a single unbroken curve. There are no holes, jumps, or vertical asymptotes. You can draw the entire curve from $A$ to $B$ without lifting your pen from the paper.
(ii) Differentiability in $(a, b)$
This implies that the curve is smooth throughout the interval. There are no sharp corners, "kinks," or cusps. Geometrically, this means that at every point between $A$ and $B$, there is a unique, well-defined tangent line.
(iii) $f(a) = f(b)$
This condition signifies that the ordinates (y-coordinates) of the starting point $A$ and the ending point $B$ are equal. Geometrically, the chord $AB$ connecting the endpoints is a horizontal line segment parallel to the x-axis.
Graphical Representation of the Theorem
Rolle’s Theorem asserts that if these conditions are met, there must be at least one point $P(c, f(c))$ on the curve, situated between $A$ and $B$, where the tangent is parallel to the x-axis. Since the slope of a horizontal line is zero, we have $f'(c) = 0$.
Case I: Existence of Exactly One Point
In the simplest case, the curve rises from $A$ to a maximum height and then falls back to the level of $B$ (or vice-versa). At the highest point (or lowest point), the curve "turns," and the tangent at that peak is horizontal.
In the figure above, $f'(c) = 0$ at exactly one point $c$ such that $a < c < b$.
Case II: Existence of Multiple Points
A curve may fluctuate or oscillate several times between $A$ and $B$. For every "hump" or "dip" the curve makes, there will be a corresponding point where the tangent becomes horizontal.
In this figure, we observe three distinct points $c_1, c_2,$ and $c_3$ within the interval $(a, b)$ where the derivative vanishes:
$f'(c_1) = f'(c_2) = f'(c_3) = 0$
The Resulting Conclusion
The core of Rolle's theorem is existence: it does not tell us how many such points there are, nor how to find them (that requires algebra); it simply guarantees that a smooth curve which ends at the level where it started must have been level at least once.
$\text{Slope of Tangent} = f'(c) = 0$
[Tangent $\parallel$ X-axis]
Important Remarks
(i) Necessity of Conditions
Rolle’s theorem fails if even one of the three conditions is not satisfied. For example:
$\bullet$ $f(x) = |x|$ in $[-1, 1]$: Here $f(-1) = f(1) = 1$ and it is continuous, but it is not differentiable at $x = 0$. Thus, Rolle's theorem does not apply (no point has $f'(c)=0$).
$\bullet$ $f(x) = [x]$ (Greatest Integer Function): Fails due to lack of continuity at integer points.
(ii) The Converse
The converse of Rolle's theorem is not necessarily true: a function's derivative may vanish at some point $c \in (a, b)$ even when one or more of the three conditions (continuity, differentiability, $f(a) = f(b)$) fail to hold.
Example. Verify Rolle's theorem for the function $f(x) = x^2 - 4x + 3$ in the interval $[1, 3]$.
Answer:
Step 1: Check Continuity and Differentiability.
Since $f(x)$ is a polynomial function, it is continuous on $[1, 3]$ and differentiable on $(1, 3)$.
Step 2: Check $f(a) = f(b)$.
Here $a = 1$ and $b = 3$.
$f(1) = (1)^2 - 4(1) + 3 = 1 - 4 + 3 = 0$
$f(3) = (3)^2 - 4(3) + 3 = 9 - 12 + 3 = 0$
$f(1) = f(3) = 0$
(Condition satisfied)
Step 3: Find $c$ such that $f'(c) = 0$.
$f'(x) = 2x - 4$
Setting $f'(c) = 0$:
$2c - 4 = 0 \implies c = 2$
Step 4: Verify if $c \in (a, b)$.
Since $2 \in (1, 3)$, Rolle's theorem is verified.
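The verification above can be mirrored numerically: check the endpoint condition, then confirm the derivative vanishes at $c = 2$ via a difference quotient.

```python
f = lambda x: x**2 - 4 * x + 3

# Endpoint condition: f(1) = f(3) = 0
assert f(1) == f(3) == 0

# Numerical derivative vanishes at c = 2, which lies in (1, 3)
c, h = 2.0, 1e-6
deriv_at_c = (f(c + h) - f(c - h)) / (2 * h)
assert abs(deriv_at_c) < 1e-8
assert 1 < c < 3
```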
Lagrange’s Mean Value Theorem
Lagrange’s Mean Value Theorem (LMVT) is often referred to as the "Bridge Theorem" of calculus because it connects the local behavior of a function (its derivative at a point) to its global behavior (the change over an entire interval). While Rolle’s Theorem is a specific case where the function returns to its starting value, LMVT is a more general and widely applicable principle used extensively in Mathematical Analysis and Physics.
Detailed Statement and the Necessity of Conditions
For a function $f(x)$ to satisfy Lagrange’s Mean Value Theorem, it must pass two essential tests of "smoothness" and "continuity":
(i) Continuity in the Closed Interval $[a, b]$: This implies that the function has no breaks, jumps, or holes between the starting point $a$ and the ending point $b$. The path of the function must be traversable without lifting the pen from the paper.
(ii) Differentiability in the Open Interval $(a, b)$: This implies that the function is "smooth" and has a unique tangent at every point between $a$ and $b$. Sharp corners (like in $|x|$) or vertical tangents are not allowed in the interior of the interval.
If these two conditions are met, there exists at least one real number $c \in (a, b)$ such that the instantaneous rate of change at $c$ equals the average rate of change over $[a, b]$:
$f'(c) = \frac{f(b) - f(a)}{b - a}$
Derivation of the Theorem using Rolle’s Theorem
To prove LMVT, we construct an auxiliary function that satisfies the conditions of Rolle’s Theorem.
Step 1: Constructing the Auxiliary Function
Let $F(x) = f(x) - kx$, where $k$ is a constant chosen such that $F(a) = F(b)$.
$f(a) - ka = f(b) - kb$
[Since $F(a) = F(b)$]
Rearranging the terms to find $k$:
$kb - ka = f(b) - f(a)$
$k(b - a) = f(b) - f(a)$
$k = \frac{f(b) - f(a)}{b - a}$
Step 2: Applying Rolle’s Theorem
Since $f(x)$ is continuous and differentiable, $F(x) = f(x) - kx$ is also continuous on $[a, b]$ and differentiable on $(a, b)$. Since $F(a) = F(b)$, by Rolle’s Theorem, there exists some $c \in (a, b)$ such that:
$F'(c) = 0$
Differentiating $F(x) = f(x) - kx$ gives $F'(x) = f'(x) - k$. At $x = c$:
$f'(c) - k = 0$
$f'(c) = k$
Substituting the value of $k$ obtained above:
$f'(c) = \frac{f(b) - f(a)}{b - a}$
[Hence Proved]
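As a concrete illustration of LMVT (using the sample function $f(x) = x^2$ on $[1, 3]$, an arbitrary choice): the chord slope is $\frac{9-1}{3-1} = 4$, and solving $f'(c) = 2c = 4$ gives $c = 2 \in (1, 3)$.

```python
f = lambda x: x**2                    # smooth on [1, 3] (illustrative choice)
a, b = 1.0, 3.0
avg = (f(b) - f(a)) / (b - a)         # chord slope = 4
c = avg / 2                           # solve f'(c) = 2c = avg
assert a < c < b
assert abs(2 * c - avg) < 1e-12       # f'(c) equals the average slope
```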
Comparative Study: Rolle’s Theorem vs. LMVT
The following table highlights the differences and similarities between these two foundational mean value theorems.
| Feature | Rolle's Theorem | Lagrange's MVT |
|---|---|---|
| Continuity | Required on $[a, b]$ | Required on $[a, b]$ |
| Differentiability | Required on $(a, b)$ | Required on $(a, b)$ |
| Endpoint Condition | $f(a) = f(b)$ | No restriction on $f(a), f(b)$ |
| Geometric Interpretation | Tangent $\parallel$ X-axis | Tangent $\parallel$ Chord $AB$ |
| Resulting Formula | $f'(c) = 0$ | $f'(c) = \frac{f(b)-f(a)}{b-a}$ |
Geometrical Interpretation of Lagrange’s Mean Value Theorem
The Geometrical Interpretation of Lagrange’s Mean Value Theorem (LMVT) provides a visual understanding of how the average rate of change over an interval relates to the instantaneous rate of change at a specific point. It essentially states that for any smooth, continuous arc, there must be at least one point where the curve is "tilting" at the exact same angle as the straight line connecting its two endpoints.
The Chord (Secant Line) $AB$
Consider the graph of a function $y = f(x)$. Let $A$ and $B$ be two points on this curve corresponding to $x = a$ and $x = b$. The coordinates of these points are $A(a, f(a))$ and $B(b, f(b))$.
If we draw a straight line (called a Chord or Secant Line) connecting point $A$ and point $B$, its slope represents the average rate of change of the function over the interval $[a, b]$. Using the coordinate geometry formula for the slope of a line passing through two points $(x_1, y_1)$ and $(x_2, y_2)$:
$\text{Slope of Chord } AB = \frac{f(b) - f(a)}{b - a}$
[Average Rate of Change]
The Tangent at Point $c$
According to the theorem, if the curve is continuous (no breaks) and differentiable (no sharp corners), then as we move along the curve from $A$ to $B$, there must be at least one intermediate point $P(c, f(c))$ where $a < c < b$.
The derivative of the function at this point, denoted as $f'(c)$, represents the Slope of the Tangent line at $P$.
$\text{Slope of Tangent at } c = f'(c)$
[Instantaneous Rate of Change]
Parallelism and Equality
Lagrange’s Mean Value Theorem asserts that the slope of this tangent is exactly equal to the slope of the chord $AB$:
$f'(c) = \frac{f(b) - f(a)}{b - a}$
[Equality of Slopes]
In geometry, two lines with equal slopes are parallel. Therefore, the theorem guarantees that there is at least one point $c$ between $a$ and $b$ where the tangent to the curve is parallel to the chord $AB$.
Graphical Representation
(i) Single Point of Parallelism
In a simple curve that bends in only one direction, there is usually only one such point $c$. As the curve "turns" to reach from $A$ to $B$, it must at some point align its direction with the straight-line path $AB$.
(ii) Multiple Points of Parallelism
If the function is more complex and oscillates (waves up and down), there may be several points where the tangent becomes parallel to the chord $AB$. In the figure below, we see multiple values ($c_1, c_2, \dots$) that satisfy the theorem.
Even in such cases, Lagrange’s Mean Value Theorem ensures the existence of at least one such point $c$ in the interval $(a, b)$.
Important Remarks
To gain a comprehensive understanding of Lagrange’s Mean Value Theorem (LMVT), it is essential to examine the conditions under which it operates, its mathematical lineage, and the logical boundaries of its application.
1. Failure of the Theorem: The Necessity of Conditions
The two conditions—continuity in $[a, b]$ and differentiability in $(a, b)$—are not merely formal requirements; they are the pillars that guarantee the existence of the point $c$. If even one of these conditions is violated, the theorem may fail to hold.
(i) Violation of Differentiability
Consider the function $f(x) = |x|$ in the interval $[-1, 2]$.
$\bullet$ $f(x)$ is continuous in $[-1, 2]$.
$\bullet$ However, $f(x)$ is not differentiable at $x = 0$ (a sharp corner or "kink" exists).
$\bullet$ Average slope = $\frac{f(2) - f(-1)}{2 - (-1)} = \frac{2 - 1}{3} = \frac{1}{3}$.
$\bullet$ The derivative $f'(x)$ is either $1$ (for $x > 0$) or $-1$ (for $x < 0$). There is no point where $f'(c) = \frac{1}{3}$. Thus, LMVT fails.
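This failure can be checked numerically (a sketch; the sampling grid and step size are our own choices). The derivative of $|x|$, sampled away from the corner, only ever takes the values $-1$ and $1$, so the average slope $\frac{1}{3}$ is never attained:

```python
def f(x):
    return abs(x)

a, b = -1.0, 2.0
avg_slope = (f(b) - f(a)) / (b - a)   # (2 - 1) / 3 = 1/3

# Sample the derivative via a central difference, skipping the corner at x = 0.
slopes = set()
h = 1e-6
for i in range(1, 3000):
    x = a + i * (b - a) / 3000
    if abs(x) > h:                    # avoid the non-differentiable point
        slopes.add(round((f(x + h) - f(x - h)) / (2 * h)))

print(slopes)      # only -1 and 1 ever occur
print(avg_slope)   # 0.333... is never attained, so no valid c exists
```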
(ii) Violation of Continuity
Consider the Greatest Integer Function $f(x) = [x]$ in the interval $[0.5, 1.5]$.
$\bullet$ The function is discontinuous at $x = 1$.
$\bullet$ Average slope = $\frac{f(1.5) - f(0.5)}{1.5 - 0.5} = \frac{1 - 0}{1} = 1$.
$\bullet$ Since the derivative of a step function is $0$ wherever it exists, there is no point where $f'(c) = 1$. Thus, LMVT fails.
2. Relationship with Rolle’s Theorem
Lagrange’s Mean Value Theorem is a generalization of Rolle's Theorem. While Rolle's Theorem requires the function to "return" to its starting value, LMVT allows the function to end at any value.
Proof of Relationship:
The LMVT states that there exists $c \in (a, b)$ such that:
$f'(c) = \frac{f(b) - f(a)}{b - a}$
... (i)
If we add the third condition of Rolle's Theorem, i.e., $f(a) = f(b)$, then:
$f(b) - f(a) = 0$
(Given for Rolle's)
Substituting this into equation (i):
$f'(c) = \frac{0}{b - a} = 0$
[Rolle's Theorem Conclusion]
Conclusion: Rolle’s Theorem is simply the Horizontal Chord version of Lagrange's Mean Value Theorem.
3. The Converse of the Theorem
The converse of LMVT states: "If there exists a point $c \in (a, b)$ such that $f'(c) = \frac{f(b) - f(a)}{b - a}$, then the function must be continuous on $[a, b]$ and differentiable on $(a, b)$."
This converse is false. It is possible for a function to accidentally have a tangent parallel to a secant line even if the function is broken or "pointy" elsewhere in the interval. The theorem provides sufficient conditions, not necessary ones. This distinction is often tested in "True or False" conceptual questions.
4. Comparison Table: Necessity vs. Sufficiency
| Condition Status | Scenario | Result |
|---|---|---|
| Conditions Met | Continuous and Differentiable | Point $c$ MUST exist. |
| Conditions Failed | Discontinuous or Non-differentiable | Point $c$ MAY or MAY NOT exist. |
| Point $c$ Exists | $f'(c) = \text{Average Slope}$ | Function MAY NOT be continuous. |
Example. Verify Lagrange’s Mean Value Theorem for $f(x) = x^2$ in the interval $[2, 4]$.
Answer:
Step 1: Verify Conditions.
Since $f(x) = x^2$ is a polynomial function, it is continuous on $[2, 4]$ and differentiable on $(2, 4)$.
Step 2: Calculate $f(a)$ and $f(b)$.
Here $a = 2$ and $b = 4$.
$f(2) = 2^2 = 4$
$f(4) = 4^2 = 16$
Step 3: Apply the LMVT Formula.
$f'(c) = \frac{f(4) - f(2)}{4 - 2} = \frac{16 - 4}{2} = \frac{12}{2} = 6$
Step 4: Find $c$.
Since $f(x) = x^2$, $f'(x) = 2x$.
$2c = 6 \implies c = 3$
Step 5: Check if $c \in (a, b)$.
Since $3 \in (2, 4)$, Lagrange’s Mean Value Theorem is verified.
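The five steps above can be mirrored in a few lines of Python (a sketch of the same computation, not part of the original solution):

```python
def f(x):
    return x ** 2

def fprime(x):
    return 2 * x

a, b = 2, 4
# Step 3: average slope of the chord over [2, 4].
avg_slope = (f(b) - f(a)) / (b - a)   # (16 - 4) / 2 = 6.0
# Step 4: solve f'(c) = 2c = 6 for c.
c = avg_slope / 2
# Step 5: c must lie strictly inside (a, b).
assert a < c < b
print(avg_slope, c)   # 6.0 3.0
```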
Applications of Lagrange's Mean Value Theorem (Monotonicity)
One of the most significant applications of Lagrange's Mean Value Theorem (LMVT) is in the study of the Monotonicity of functions. It allows us to determine whether a function is increasing or decreasing over a specific interval by simply examining the sign of its first derivative $f'(x)$.
Strictly Increasing Functions
Statement:
If a function $f(x)$ is continuous in $[a, b]$, derivable in $(a, b)$, and $f'(x) > 0$ for all $x \in (a, b)$, then $f(x)$ is a strictly increasing function in $[a, b]$.
Proof and Derivation:
Let $x_1$ and $x_2$ be any two points in the closed interval $[a, b]$ such that $a \leq x_1 < x_2 \leq b$.
Since $f(x)$ is continuous in $[a, b]$ and derivable in $(a, b)$, it must also be continuous in $[x_1, x_2]$ and derivable in $(x_1, x_2)$. Therefore, we can apply Lagrange's Mean Value Theorem on the interval $[x_1, x_2]$.
By LMVT, there exists at least one real number $c$ in $(x_1, x_2)$ such that:
$f'(c) = \frac{f(x_2) - f(x_1)}{x_2 - x_1}$
... (i)
Multiplying both sides by $(x_2 - x_1)$, we get:
$f(x_2) - f(x_1) = (x_2 - x_1) \cdot f'(c)$
... (ii)
Now, we analyze the signs of the components in the above equation:
$x_2 - x_1 > 0$
[Since $x_2 > x_1$]
$f'(c) > 0$
[Given $f'(x) > 0$ for all $x \in (a, b)$]
Since both $(x_2 - x_1)$ and $f'(c)$ are positive, their product must be positive:
$(x_2 - x_1) \cdot f'(c) > 0$
Substituting from equation (ii):
$f(x_2) - f(x_1) > 0$
$f(x_2) > f(x_1)$
For any $x_1, x_2$ where $x_2 > x_1$, the condition $f(x_2) > f(x_1)$ holds true. Thus, $f(x)$ is strictly increasing in $[a, b]$.
Strictly Decreasing Functions
Statement:
If a function $f(x)$ is continuous in $[a, b]$, derivable in $(a, b)$, and $f'(x) < 0$ for all $x \in (a, b)$, then $f(x)$ is a strictly decreasing function in $[a, b]$.
Proof and Derivation:
Following the same steps as the increasing function, let $x_2 > x_1$ in $[a, b]$. From LMVT:
$f(x_2) - f(x_1) = (x_2 - x_1) \cdot f'(c)$
Analyze the signs:
$x_2 - x_1 > 0$
(As $x_2 > x_1$)
$f'(c) < 0$
(Given condition)
The product of a positive number and a negative number is negative:
$(x_2 - x_1) \cdot f'(c) < 0$
$f(x_2) - f(x_1) < 0$
$f(x_2) < f(x_1)$
Since $x_2 > x_1$ implies $f(x_2) < f(x_1)$, $f(x)$ is strictly decreasing in $[a, b]$.
Summary Table
In Calculus, we categorize function nature as follows:
| Condition on $f'(x)$ | Nature of Function $f(x)$ |
|---|---|
| $f'(x) > 0$ | Strictly Increasing ($\uparrow$) |
| $f'(x) < 0$ | Strictly Decreasing ($\downarrow$) |
| $f'(x) = 0$ | Constant Function |
| $f'(x) \geq 0$ | Increasing (Non-decreasing) |
| $f'(x) \leq 0$ | Decreasing (Non-increasing) |
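The table can be illustrated numerically. The helper below is our own sketch (the function name and thresholds are ours, and grid sampling can miss sign changes between points, so this is an illustration, not a proof). It classifies a function by the sign of its derivative on an interval:

```python
def classify(fprime, a, b, n=1000):
    """Classify f on (a, b) from the sign of f'(x), sampled on a grid."""
    signs = set()
    for i in range(1, n):
        v = fprime(a + i * (b - a) / n)
        signs.add(0 if abs(v) < 1e-12 else (1 if v > 0 else -1))
    if signs == {1}:
        return "strictly increasing"
    if signs == {-1}:
        return "strictly decreasing"
    if signs == {0}:
        return "constant"
    if signs == {0, 1}:
        return "increasing (non-decreasing)"
    if signs == {0, -1}:
        return "decreasing (non-increasing)"
    return "not monotonic"

print(classify(lambda x: 3 * x ** 2 + 1, -2, 2))   # f'(x) > 0: strictly increasing
print(classify(lambda x: -2 * x, 1, 3))            # f'(x) < 0: strictly decreasing
```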
Example. Prove that the function $f(x) = e^{2x}$ is strictly increasing on $\mathbb{R}$.
Answer:
Given: $f(x) = e^{2x}$
Step 1: Differentiate the function.
$f'(x) = \frac{d}{dx}(e^{2x}) = 2e^{2x}$
Step 2: Analyze the sign of the derivative.
We know that for any real value of $x$, the exponential function $e^{2x}$ is always positive ($e^{2x} > 0$).
Therefore, $2e^{2x} > 0$ for all $x \in \mathbb{R}$.
Step 3: Conclusion.
Since $f'(x) > 0$ for all real numbers, by the monotonicity result derived from LMVT, $f(x)$ is strictly increasing on $\mathbb{R}$.
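A brief numerical sanity check of this conclusion (the sample grid on $[-3, 3]$ is our own choice):

```python
import math

def f(x):
    return math.exp(2 * x)

def fprime(x):
    return 2 * math.exp(2 * x)

xs = [-3 + i * 0.1 for i in range(61)]   # sample points in [-3, 3]
# The derivative is positive at every sampled point...
assert all(fprime(x) > 0 for x in xs)
# ...and the function values strictly rise between consecutive samples.
assert all(f(xs[i]) < f(xs[i + 1]) for i in range(60))
print("f(x) = e^(2x) is strictly increasing on the sampled range")
```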