
Chapter 16 Probability (Concepts)

Welcome to Chapter 16: Probability! In this chapter, we transition into the axiomatic foundation of probability theory, a rigorous framework pioneered by Andrey Kolmogorov. Unlike the experimental approach, the axiomatic method treats probability as a mathematical function satisfying specific rules, allowing it to be applied even when outcomes are not necessarily equally likely.

The journey begins with defining the Sample Space ($S$), which is the set of all possible outcomes of a random experiment. An Event ($E$) is formally defined as any subset of the sample space ($E \subseteq S$). We explore three fundamental Axioms: 1) Non-negativity where $P(E) \ge 0$, 2) Normalization where $P(S) = 1$, and 3) Additivity for mutually exclusive events.

We will derive key results like the Addition Rule: $$P(E \cup F) = P(E) + P(F) - P(E \cap F)$$ This formula is essential for calculating the probability of overlapping events. We also study complementary events, where $P(E') = 1 - P(E)$, and utilize counting techniques to analyze complex scenarios involving coins, dice, and cards.

To make the concepts easier to grasp, this page includes images for visualisation, flowcharts, mind maps, and practical examples. This page is prepared by learningspot.co to provide a structured and comprehensive learning experience for every student.

Content On This Page
Random Experiments and Sample Spaces
Events
Axiomatic Approach to Probability
Laws of Probability


Random Experiments and Sample Spaces

Probability is the mathematical study of chance and uncertainty. It provides tools to quantify the likelihood of different outcomes occurring in situations where the result is not predictable. The fundamental concepts in probability theory are those of a random experiment and its sample space.

To understand probability, it is essential to first be clear about the precise meaning of the terms used. The entire theory is built upon the distinction between processes that are predictable and those that involve an element of chance.


Experiment

In the context of science and mathematics, an Experiment is a well-defined procedure or action that can be repeated and results in a set of observable outcomes. It is a general term for any process that generates data or observations.

Experiments can be broadly classified into two categories: deterministic and random.

Deterministic Experiment

A Deterministic Experiment is an experiment whose outcome is certain and predictable if it is performed under identical conditions. There is no element of chance involved. The result is uniquely determined by the conditions under which the experiment is performed.

Characteristics:

  • The outcome is always the same for a given set of initial conditions.
  • The result can be predicted with 100% certainty before the experiment is conducted.
  • These experiments typically follow well-established physical laws or mathematical rules.

Examples:

  • Adding two numbers: The result of $5 + 7$ will always be 12. This is a mathematical experiment with a deterministic outcome.
  • Boiling Water: Measuring the boiling point of pure water at standard sea-level pressure. The outcome will always be $100^\circ$ Celsius.
  • Newton's Laws of Motion: If you drop a ball from a height of 10 meters, the time it takes to hit the ground can be calculated precisely using the formula $s = ut + \frac{1}{2}at^2$. The outcome is predictable.

Random Experiment (or Probabilistic/Stochastic Experiment)

A Random Experiment is an experiment where the outcome cannot be predicted with certainty in advance, even if the experiment is repeated under the same conditions. While the specific outcome of a single trial is unknown, the set of all possible outcomes is known.

Characteristics:

  • It has more than one possible outcome.
  • The outcome of any single trial is uncertain and subject to chance.
  • It can be repeated under identical conditions.
  • There is a statistical regularity to the outcomes; over a large number of trials, the proportion of times each outcome occurs tends to stabilize.

Examples:

  • Tossing a Coin: The possible outcomes are Heads or Tails. We know what can happen, but we don't know what will happen on the next toss.
  • Rolling a Die: The possible outcomes are {1, 2, 3, 4, 5, 6}. The result of a single roll is unpredictable.
  • Weather Forecasting: Predicting whether it will rain tomorrow is a random experiment, as the outcome is uncertain.
  • Drawing a Card: Drawing a card from a well-shuffled deck is a random experiment because any of the 52 cards could be chosen, but which one is uncertain.

The study of probability is concerned exclusively with random experiments.


Outcome and Sample Space:

An Outcome is a single possible result of a random experiment.

The Sample Space of a random experiment is the set of all possible outcomes of the experiment. It represents the complete set of potential results. The sample space is typically denoted by the capital letter $S$ or sometimes $\Omega$. Each individual outcome in the sample space is called a sample point.

The sample space must be defined such that:

  • It includes all possible outcomes of the experiment (it is exhaustive).
  • No two outcomes can occur simultaneously in a single trial (they are mutually exclusive).

Examples of Sample Spaces:

Let's determine the sample space for various random experiments:

  1. Tossing a coin: The only possible results are getting a Head (H) or getting a Tail (T).

    Sample Space, $S = \{H, T\}$

    The number of sample points is $n(S) = 2$.
  2. Rolling a standard die: The possible outcomes are the faces showing the numbers 1, 2, 3, 4, 5, or 6.

    Sample Space, $S = \{1, 2, 3, 4, 5, 6\}$

    The number of sample points is $n(S) = 6$.
  3. Tossing two coins simultaneously: We list all possible ordered pairs of outcomes, writing the first coin's result followed by the second coin's result.
    • First coin H, Second coin H: HH
    • First coin H, Second coin T: HT
    • First coin T, Second coin H: TH
    • First coin T, Second coin T: TT

    Sample Space, $S = \{HH, HT, TH, TT\}$

    The number of sample points is $n(S) = 4$.
  4. Rolling two dice simultaneously: We consider the two dice distinguishable (e.g., Die 1 and Die 2). The outcome is an ordered pair $(d_1, d_2)$, where $d_1$ is the result on Die 1 and $d_2$ is the result on Die 2. Each $d_i$ can be any integer from 1 to 6.

    $S = \{(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),$

    $\phantom{S = } (2,1), (2,2), (2,3), (2,4), (2,5), (2,6),$

    $\phantom{S = } (3,1), (3,2), (3,3), (3,4), (3,5), (3,6),$

    $\phantom{S = } (4,1), (4,2), (4,3), (4,4), (4,5), (4,6),$

    $\phantom{S = } (5,1), (5,2), (5,3), (5,4), (5,5), (5,6),$

    $\phantom{S = } (6,1), (6,2), (6,3), (6,4), (6,5), (6,6)\}$

    The number of sample points is $n(S) = 6 \times 6 = 36$. (Using the Multiplication Principle from Counting).
  5. Drawing two balls from a bag containing a red (R) and a blue (B) ball, one after the other without replacement: The order of drawing matters.

    Sample Space, $S = \{RB, BR\}$

    $n(S) = 2$.
  6. Drawing two balls from a bag containing a red (R) and a blue (B) ball, one after the other with replacement: The order matters, and because the first ball is put back before the second draw, both draws have the same two possibilities.

    Sample Space, $S = \{RR, RB, BR, BB\}$

    $n(S) = 4$.
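The sample spaces above can be generated mechanically. As a small sketch (assuming Python with only the standard library), `itertools.product` enumerates every ordered combination, which is exactly how the multiplication principle builds these sample spaces:

```python
from itertools import product

# Sample space for tossing two coins: every ordered pair of H and T.
coins = ["".join(p) for p in product("HT", repeat=2)]

# Sample space for rolling two distinguishable dice: ordered pairs (d1, d2).
dice = list(product(range(1, 7), repeat=2))

print(coins)       # ['HH', 'HT', 'TH', 'TT']
print(len(dice))   # 36, by the multiplication principle
```

The same call with `repeat=3` would list the $2^3 = 8$ outcomes of tossing three coins.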

Example 1. Describe the sample space for the experiment of tossing a single fair coin.

Answer:

When a single coin is tossed, there are only two possible outcomes: either the coin shows a Head (H) or a Tail (T).

Therefore, the sample space is the set of these two outcomes.

Sample Space, $S = \{H, T\}$

The number of sample points is $n(S) = 2$.


Example 2. What is the sample space when a standard six-sided die is rolled once?

Answer:

A standard die has six faces marked with the numbers 1, 2, 3, 4, 5, and 6. When the die is rolled, any one of these numbers can appear on the top face.

The set of all possible outcomes is the sample space.

Sample Space, $S = \{1, 2, 3, 4, 5, 6\}$

The number of sample points is $n(S) = 6$.


Example 3. Find the sample space associated with the experiment of tossing two coins simultaneously (or one coin twice).

Answer:

Let H denote a Head and T denote a Tail. When two coins are tossed, we need to consider the outcomes on both coins. The possible outcomes are:

  • Head on the first coin and Head on the second coin (HH).
  • Head on the first coin and Tail on the second coin (HT).
  • Tail on the first coin and Head on the second coin (TH).
  • Tail on the first coin and Tail on the second coin (TT).

The sample space is the set of all these possible ordered pairs.

Sample Space, $S = \{HH, HT, TH, TT\}$

The number of sample points is $n(S) = 4$.


Example 4. A coin is tossed. If it shows a head, a die is thrown. If it shows a tail, the experiment is stopped. Describe the sample space for this experiment.

Answer:

We analyze the two possible cases for the coin toss:

Case 1: The coin shows a Tail (T).

According to the problem, the experiment stops. So, T is one possible outcome.

Case 2: The coin shows a Head (H).

The experiment continues, and a die is thrown. The possible outcomes for the die are {1, 2, 3, 4, 5, 6}. This gives us six possible combined outcomes: (H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6).

Combining all possible outcomes from both cases gives the sample space.

Sample Space, $S = \{T, (H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6)\}$

The number of sample points is $n(S) = 7$.


Example 5. Write the sample space for the random experiment of rolling a pair of dice.

Answer:

When a pair of dice is rolled, we can consider them as distinguishable (e.g., a red die and a blue die). The outcome of the experiment is an ordered pair $(d_1, d_2)$, where $d_1$ is the number on the first die and $d_2$ is the number on the second die. Each die can show a number from 1 to 6.

The sample space is the set of all possible ordered pairs:

$S = \{$

$(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),$

$(2,1), (2,2), (2,3), (2,4), (2,5), (2,6),$

$(3,1), (3,2), (3,3), (3,4), (3,5), (3,6),$

$(4,1), (4,2), (4,3), (4,4), (4,5), (4,6),$

$(5,1), (5,2), (5,3), (5,4), (5,5), (5,6),$

$(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)$

$\}$

The total number of possible outcomes (sample points) can be found using the multiplication principle: $6$ outcomes for the first die $\times$ $6$ outcomes for the second die.

The number of sample points is $n(S) = 6 \times 6 = 36$.



Events

In probability theory, once we have defined a random experiment and its sample space (the set of all possible outcomes), the next crucial concept is that of an event. If the sample space is the "universe" of all possibilities for an experiment, an event is a specific region or a point of interest within that universe. It is the language we use to describe the specific results we want to analyze and calculate probabilities for.


Definition of an Event

An Event is defined as any subset of the sample space S. This simple definition is very powerful. It means that an event is simply a collection of one or more possible outcomes. If the result of our experiment is one of the outcomes included in the event's set, we say that the event has occurred.

Example: Consider the experiment of rolling a single die. The sample space is the set of all possible outcomes, $S = \{1, 2, 3, 4, 5, 6\}$.

  • Let's define Event A as "getting an even number". The outcomes that satisfy this description are {2, 4, 6}. So, the event A is the set $A = \{2, 4, 6\}$.
  • If you roll the die and it comes up 4, then the outcome '4' is an element of the set A, so we say that Event A has occurred.
  • If you roll a 3, the outcome '3' is not an element of the set A, so Event A has not occurred.
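Because an event is just a subset of the sample space, "the event occurred" is simply a set-membership test. A minimal sketch in Python (the helper name `has_occurred` is illustrative, not standard terminology):

```python
# Sample space and an event for a single die roll.
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}            # event "getting an even number"

assert A.issubset(S)     # an event is a subset of the sample space

def has_occurred(event, outcome):
    """The event occurs iff the observed outcome is one of its sample points."""
    return outcome in event

print(has_occurred(A, 4))   # True: rolling a 4 means A occurred
print(has_occurred(A, 3))   # False: rolling a 3 means A did not occur
```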

Types of Events

Events can be categorized based on the number of outcomes they contain and the likelihood of their occurrence. Understanding these types is fundamental to calculating probabilities correctly.

1. Impossible and Sure Events

Impossible Event

An event that contains no outcomes from the sample space is called an impossible event. It is represented by the empty set $\phi$. The probability of an impossible event is always $0$.

Example: In a single roll of a die, the event $B = \{x : x \text{ is a number greater than } 6\}$ is impossible because $B = \phi$.

Sure Event (Certain Event)

An event that contains all the outcomes of the sample space is called a sure event. Since it is the sample space itself ($S$), it is guaranteed to happen. Its probability is always $1$.

Example: In rolling a die, the event "getting a number less than $7$" is a sure event because it includes $\{1, 2, 3, 4, 5, 6\}$.

2. Simple and Compound Events

Simple Event (Elementary Event)

If an event $E$ has only one sample point (outcome) of the sample space, it is called a simple or elementary event. In a sample space of $n$ outcomes, there are exactly $n$ simple events.

Example: Tossing a coin twice gives $S = \{HH, HT, TH, TT\}$. The event of getting two heads $E = \{HH\}$ is a simple event.

Compound Event

If an event contains more than one sample point, it is called a compound event. Compound events represent a combination of two or more simple events.

Example: In the same coin-tossing experiment, the event "getting at least one head" is $F = \{HH, HT, TH\}$. Since it contains three outcomes, it is a compound event.

3. Equally Likely Outcomes

Outcomes of a random experiment are said to be equally likely if there is no reason to expect one outcome in preference to the others. In other words, each outcome has the same chance of occurring.

If a sample space $S$ contains $n$ outcomes and all are equally likely, then the probability of each simple event is:

$P(E_i) = \frac{1}{n}$

Example: When we toss an unbiased (fair) coin, the outcomes 'Head' ($H$) and 'Tail' ($T$) are equally likely. There is no physical reason for the coin to land on $H$ more often than $T$.

When are outcomes NOT equally likely?

Outcomes are not equally likely if the experiment is "biased." For instance, if a bag contains $9$ Red balls and only $1$ Blue ball, drawing a Red ball is much more likely than drawing a Blue ball. In such cases, we cannot simply use the formula $\frac{1}{n}$ for the probability of each outcome.


Exhaustive and Favourable Number of Cases

These two terms are fundamental to calculating probability in the classical approach.

1. Exhaustive Number of Cases

The total number of all possible outcomes of a random experiment is called the exhaustive number of cases. It is simply the cardinality (or the total number of elements) of the sample space S. It represents every single thing that can possibly happen.

It is denoted by $n(S)$.

Example: In the experiment of rolling a single die, the sample space is $S = \{1, 2, 3, 4, 5, 6\}$. The exhaustive number of cases is $n(S) = 6$.

2. Favourable Number of Cases

The number of outcomes of a random experiment that result in the happening of a particular event is called the favourable number of cases for that event. It is the cardinality (or the total number of elements) of the event set E.

It is denoted by $n(E)$.

Example: Continuing with the die roll experiment, let E be the event 'getting an even number'. The outcomes favourable to E are {2, 4, 6}. Therefore, the set for event E is $E = \{2, 4, 6\}$, and the favourable number of cases is $n(E) = 3$.

These two concepts form the basis of the classical probability formula: $P(E) = \frac{\text{Favourable Number of Cases}}{\text{Exhaustive Number of Cases}} = \frac{n(E)}{n(S)}$.
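The classical formula translates directly into code. A sketch (assuming equally likely outcomes; `Fraction` from the standard library keeps the answer exact rather than a rounded decimal):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}   # exhaustive cases: the whole sample space
E = {2, 4, 6}            # favourable cases for "getting an even number"

def classical_probability(event, sample_space):
    # P(E) = n(E) / n(S), valid only when all outcomes are equally likely.
    return Fraction(len(event), len(sample_space))

print(classical_probability(E, S))   # 1/2
```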


The Algebra of Events

The Algebra of Events provides a mathematical framework to describe how different outcomes of a random experiment interact. Since any event is a subset of the sample space, we use Venn diagrams and set operations to visualize and calculate their probabilities.

1. Complement of an Event ("Not A")

In probability theory, the Complement of an Event $A$ is a fundamental concept representing the non-occurrence of that event. If $S$ is the sample space of a random experiment, the complement of $A$ (denoted as $A'$, $A^c$, or $\bar{A}$) consists of all outcomes in $S$ that are not included in event $A$.

Mathematically, the complement is defined using set-builder notation as follows:

$A^c = \{\omega : \omega \in S \text{ and } \omega \notin A\}$

The relationship between an event and its complement is governed by two primary properties of sets: Exhaustiveness and Mutual Exclusivity. These properties ensure that in any experiment, either the event $A$ occurs or its complement $A^c$ occurs, but never both simultaneously.

Visual Representation via Venn Diagram

To visualize this, we represent the sample space $S$ as a universal set (usually a rectangle) and the event $A$ as a circle inside it. The region outside the circle but inside the rectangle represents the complement $A^c$.

Venn diagram showing a shaded area outside circle A representing the complement

The Exhaustive Property

Since the complement $A^c$ contains every outcome that $A$ lacks within the sample space, their union must necessarily recreate the entire sample space $S$. If we consider a single roll of a Ludo die ($S = \{1, 2, 3, 4, 5, 6\}$), and event $A$ is getting a number greater than 4 ($\{5, 6\}$), then $A^c$ is getting a number 4 or less ($\{1, 2, 3, 4\}$). Together, they cover all possible results.

$A \cup A' = S$

[Union of complementary sets]

The Mutually Exclusive Property

By definition, no outcome can belong to both $A$ and $A^c$ at the same time. This makes them disjoint or mutually exclusive. The intersection of an event and its complement is always an empty set (null set).

$A \cap A' = \phi$

[Intersection of complementary sets]

The Law of Complementation in Probability

From the properties mentioned above, we derive the most significant rule regarding complementary events in probability. Since $A$ and $A^c$ are mutually exclusive and exhaustive, the sum of their probabilities must equal the probability of the sure event ($S$), which is $1$.

$P(A) + P(A') = 1$

This leads to the computational formulas used to find the probability of "Not A":

$P(A') = 1 - P(A)$

[Probability of non-occurrence]

$P(A) = 1 - P(A')$

[Probability of occurrence]
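All three properties of a complement (exhaustive, mutually exclusive, probabilities summing to 1) can be checked at once on the die example from this section. A small sketch, assuming equally likely outcomes:

```python
from fractions import Fraction

S = set(range(1, 7))    # single die roll
A = {5, 6}              # "getting a number greater than 4"
A_c = S - A             # complement A': {1, 2, 3, 4}

P = lambda E: Fraction(len(E), len(S))   # classical probability

assert A | A_c == S          # exhaustive: A ∪ A' = S
assert A & A_c == set()      # mutually exclusive: A ∩ A' = ∅
print(P(A) + P(A_c))         # 1, i.e. P(A) + P(A') = 1
print(1 - P(A))              # 2/3, which equals P(A')
```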


2. Union of Events ("A or B")

The Union of two events $A$ and $B$, denoted by $A \cup B$, is the event that consists of all sample points which belong to event $A$, or to event $B$, or to both. In common language, the union of events corresponds to the occurrence of "at least one" of the events.

A practical example would be a student appearing for two board exams: Mathematics (Event $A$) and Science (Event $B$). The event "the student passes in Mathematics or Science" ($A \cup B$) is satisfied if the student passes in Mathematics only, or in Science only, or in both subjects.

Mathematically, the union is defined as:

$A \cup B = \{\omega : \omega \in A \text{ or } \omega \in B\}$

Visual Representation

The union is represented in a Venn diagram by the total area covered by both circles representing the events $A$ and $B$. This combined region shows every outcome that satisfies the condition of either event occurring.

Venn diagram with both circles A and B shaded to represent union

The Addition Theorem for Two Events

The probability of the union of two events is not simply the sum of their individual probabilities unless they are mutually exclusive. We must account for the intersection (common outcomes) to avoid double-counting.

Derivation of the Addition Formula

Let $S$ be the sample space of a random experiment with $n(S)$ equally likely outcomes. Let $A$ and $B$ be two events associated with $S$.

From the principles of set theory (inclusion-exclusion principle), the number of elements in the union of two sets is given by:

$n(A \cup B) = n(A) + n(B) - n(A \cap B)$

To convert this into a probability statement, we divide both sides of the equation by the total number of outcomes in the sample space $n(S)$:

$\frac{n(A \cup B)}{n(S)} = \frac{n(A)}{n(S)} + \frac{n(B)}{n(S)} - \frac{n(A \cap B)}{n(S)}$

By the classical definition of probability, $P(E) = \frac{n(E)}{n(S)}$. Substituting this into the equation, we get the Addition Theorem:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$
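The Addition Theorem can be verified by brute force over the 36 outcomes of two dice. A sketch (the events chosen here are illustrative examples, not taken from the text above):

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))   # two dice: 36 ordered pairs
A = {s for s in S if s[0] + s[1] == 7}    # "the sum is 7"
B = {s for s in S if s[0] >= 5}           # "the first die shows 5 or 6"

P = lambda E: Fraction(len(E), len(S))

# Addition Theorem: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
lhs = P(A | B)
rhs = P(A) + P(B) - P(A & B)
assert lhs == rhs
print(lhs)   # 4/9: 6 + 12 - 2 = 16 favourable outcomes out of 36
```

Without subtracting $P(A \cap B)$, the two outcomes $(5,2)$ and $(6,1)$ would be counted twice.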


3. Intersection of Events ("A and B")

The Intersection of two events $A$ and $B$, denoted by the symbol $A \cap B$, is an event that consists of all those sample points (outcomes) which are common to both event $A$ and event $B$. In the language of logic, this corresponds to the simultaneous occurrence of both events, often referred to as "A and B".

Mathematically, the intersection of events is defined using set-builder notation as follows:

$A \cap B = \{\omega : \omega \in A \text{ and } \omega \in B\}$

Simultaneous Occurrence

We say that the event $A \cap B$ has occurred only if the outcome of the random experiment satisfies the conditions of both $A$ and $B$ at the same time. If the outcome belongs to $A$ but not to $B$, or to $B$ but not to $A$, the intersection event $A \cap B$ has not occurred.

Consider a student appearing for an entrance exam. Let event $A$ be "clearing the Mathematics cutoff" and event $B$ be "clearing the Physics cutoff." The event $A \cap B$ represents the student clearing both cutoffs simultaneously.

Visual Representation via Venn Diagram

On a Venn diagram, where the sample space $S$ is represented by a rectangle and events $A$ and $B$ by circles, the intersection is the overlapping region shared by both circles. This region contains only those outcomes that are present in both sets.

Venn diagram showing only the overlapping region between A and B shaded

Relationship with Addition Theorem

The probability of the intersection is a vital component of the Addition Theorem of Probability. The formula for the union of two events can be rearranged to find the probability of their intersection:

By the General Addition Rule:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

By rearranging the terms, we derive the formula for the intersection:

$P(A \cap B) = P(A) + P(B) - P(A \cup B)$

This formula allows us to calculate the probability that both events occur if we know the individual probabilities and the probability that at least one occurs.


4. Difference of Events ("A but not B")

The Difference of two events $A$ and $B$, denoted as $A - B$, is the event consisting of all sample points which belong to event $A$ but do not belong to event $B$. In logical terms, it is referred to as the occurrence of "A but not B" or "A and not B".

Mathematically, the difference $A - B$ is equivalent to the intersection of event $A$ and the complement of event $B$ ($B'$). It is defined as:

$A - B = A \cap B' = \{\omega : \omega \in A \text{ and } \omega \notin B\}$

Visual Representation via Venn Diagram

In a Venn diagram, the difference $A - B$ is represented by the region of circle $A$ that does not overlap with circle $B$. It represents the outcomes that are exclusive to event A in relation to $B$.

Venn diagram showing circle A shaded except for the part that overlaps with B

Similarly, the event $B - A$ (or $B \cap A'$) would represent outcomes that are in $B$ but not in $A$. It is important to note that, in general, $A - B \neq B - A$ unless $A = B$.

Derivation of the Cardinality and Probability Formulas

1. Counting Formula for Difference

From the visual structure of the Venn diagram, we can observe that event $A$ is the union of two mutually exclusive (disjoint) sets: the part of $A$ that is not in $B$ ($A - B$) and the part of $A$ that is shared with $B$ ($A \cap B$).

$A = (A - B) \cup (A \cap B)$

[Union of disjoint parts]

Since these two sets are disjoint, their intersection is empty, and the total number of outcomes in $A$ is the sum of the outcomes in these two parts:

$n(A) = n(A - B) + n(A \cap B)$

By rearranging the above equation, we derive the formula to find the number of outcomes for "A but not B":

$n(A - B) = n(A) - n(A \cap B)$

2. Probability Formula for Difference

To find the probability $P(A - B)$, we divide the above equation by the total number of outcomes in the sample space $n(S)$:

$\frac{n(A - B)}{n(S)} = \frac{n(A)}{n(S)} - \frac{n(A \cap B)}{n(S)}$

Applying the classical definition of probability, we obtain:

$P(A - B) = P(A) - P(A \cap B)$

Alternatively, since $A - B = A \cap B'$, the formula can also be stated as:

$P(A \cap B') = P(A) - P(A \cap B)$
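Both identities for the difference of events can be confirmed with set operations. A sketch using the prime/greater-than-3 events that appear later in this section's examples:

```python
from fractions import Fraction

S = set(range(1, 7))
A = {2, 3, 5}                 # "getting a prime number"
B = {4, 5, 6}                 # "getting a number greater than 3"

P = lambda E: Fraction(len(E), len(S))

diff = A - B                  # "A but not B"
assert diff == A & (S - B)    # A - B is the same set as A ∩ B'
assert P(diff) == P(A) - P(A & B)   # P(A - B) = P(A) - P(A ∩ B)
print(sorted(diff))           # [2, 3]
```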


5. Mutually Exclusive vs Exhaustive Events

While the terms Mutually Exclusive and Exhaustive are often discussed together in probability theory, they describe fundamentally different characteristics of event relationships. Understanding the distinction between "cannot happen together" and "at least one must happen" is essential for correct probability calculations.

(i) Mutually Exclusive Events (Disjoint Events)

Two or more events are said to be Mutually Exclusive if the occurrence of any one of them excludes the possibility of the occurrence of the others at the same time. These events have no common outcomes.

Consider a single roll of a die in a game of Ludo. Let event $A$ be "getting a 2" and event $B$ be "getting a 5". Since it is physically impossible for a single die to show both 2 and 5 simultaneously, events $A$ and $B$ are mutually exclusive.

Mathematical Properties

For two mutually exclusive events $A$ and $B$, the intersection is the empty set:

$A \cap B = \phi$

[Disjoint condition]

Consequently, the probability of them occurring together is zero:

$P(A \cap B) = 0$

This modifies the Addition Theorem of Probability, as the subtraction term vanishes:

$P(A \cup B) = P(A) + P(B)$

Visual Representation

On a Venn diagram, mutually exclusive events are represented as separated (disjoint) circles within the sample space $S$, showing no overlapping area.

Venn diagram showing two separated circles with no intersection

(ii) Exhaustive Events

A set of events $E_1, E_2, \dots, E_n$ associated with a sample space $S$ are called Exhaustive Events if their union covers the entire sample space. This implies that when the experiment is performed, at least one of these events must occur.

Using the standard deck of cards as an example, let $E_1$ be the event of drawing a Red card and $E_2$ be the event of drawing a Black card. Since every card in the deck is either Red or Black, these two events are exhaustive.

Mathematical Properties

The union of exhaustive events is equal to the sample space:

$E_1 \cup E_2 \cup \dots \cup E_n = S$

[Coverage condition]

Since the probability of the sample space is always 1 (sure event), the probability of the union of exhaustive events is:

$P(E_1 \cup E_2 \cup \dots \cup E_n) = 1$

Visual Representation

In a Venn diagram, exhaustive events are those that collectively fill the entire rectangular boundary of the sample space $S$. There is no "empty space" left inside the rectangle that does not belong to at least one of the circles/regions.

Venn diagram where circles A and B cover the entire area of the sample space rectangle

Comparison between the Two Concepts

The following comparison distinguishes between the two properties to provide a clear understanding:

  • Core Meaning: Mutually exclusive events cannot occur at the same time; exhaustive events guarantee that at least one of them occurs.
  • Set Condition: For mutually exclusive events, the intersection is empty ($A \cap B = \phi$); for exhaustive events, the union is the sample space ($A \cup B = S$).
  • Probability Property: Mutually exclusive events satisfy $P(A \cap B) = 0$; exhaustive events satisfy $P(A \cup B) = 1$.
  • Focus: Mutual exclusivity concerns the non-overlap of events; exhaustiveness concerns the completeness of the events.

Mutually Exclusive and Exhaustive Events (Partitions)

When a set of events is both mutually exclusive and exhaustive, they form a partition of the sample space. This means:

1. No two events overlap ($E_i \cap E_j = \phi$ for all $i \neq j$).

2. Together they cover everything ($\bigcup E_i = S$).

For such events, the sum of their individual probabilities is exactly 1:

$P(E_1) + P(E_2) + \dots + P(E_n) = 1$

A classic example is the outcome of a cricket match (ignoring weather interruptions): {Win, Loss, Tie}. These three events are mutually exclusive (you cannot both win and tie a single match) and exhaustive (one of these outcomes must happen).
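A partition can be checked programmatically: the events must be pairwise disjoint, their union must be the whole sample space, and their probabilities must sum to 1. A sketch, using a die roll split into three illustrative events:

```python
from fractions import Fraction
from itertools import combinations

S = set(range(1, 7))                 # single die roll
parts = [{1, 2}, {3, 4}, {5, 6}]     # candidate partition of S

# Pairwise disjoint: no two events share an outcome.
assert all(a & b == set() for a, b in combinations(parts, 2))
# Exhaustive: together the events cover the whole sample space.
assert set().union(*parts) == S

P = lambda E: Fraction(len(E), len(S))
print(sum(P(E) for E in parts))      # 1: probabilities of a partition sum to 1
```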


Example 1. A single die is rolled. Let A be the event 'getting a prime number', B be the event 'getting an odd number', and C be the event 'getting a number greater than 3'. Describe the events:

(i) $A \text{ and } B$      (ii) $A \text{ or } B$      (iii) $B \text{ and } C$      (iv) not C ($C'$)      (v) $A - C$

Answer:

Given:

Experiment: Rolling a single die. The sample space is $S = \{1, 2, 3, 4, 5, 6\}$.

Event A (prime number): $A = \{2, 3, 5\}$

Event B (odd number): $B = \{1, 3, 5\}$

Event C (number > 3): $C = \{4, 5, 6\}$

Solution:

(i) $A \text{ and } B$ ($A \cap B$): We need outcomes that are in both A and B (prime and odd).

$A \cap B = \{2, 3, 5\} \cap \{1, 3, 5\} = \{3, 5\}$

(ii) $A \text{ or } B$ ($A \cup B$): We need outcomes that are in A or B or both (prime or odd).

$A \cup B = \{2, 3, 5\} \cup \{1, 3, 5\} = \{1, 2, 3, 5\}$

(iii) $B \text{ and } C$ ($B \cap C$): We need outcomes that are in both B and C (odd and greater than 3).

$B \cap C = \{1, 3, 5\} \cap \{4, 5, 6\} = \{5\}$

(iv) not C ($C'$): We need all outcomes in S that are not in C (not greater than 3, i.e., less than or equal to 3).

$C' = S - C = \{1, 2, 3, 4, 5, 6\} - \{4, 5, 6\} = \{1, 2, 3\}$

(v) $A - C$: We need outcomes that are in A but not in C (prime but not greater than 3).

$A - C = \{2, 3, 5\} - \{4, 5, 6\} = \{2, 3\}$


Example 2. Two dice are rolled. Let A be the event "the sum of the numbers is 7", and B be the event "the numbers shown are equal". Are A and B mutually exclusive?

Answer:

Given:

Experiment: Rolling two dice. The sample space S has 36 outcomes, from (1,1) to (6,6).

Event A: "the sum is 7".

Event B: "the numbers are equal" (a doublet).

Solution:

Step 1: List the outcomes for each event.

For event A (sum is 7), the possible outcomes are:

$A = \{(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)\}$

For event B (numbers are equal), the possible outcomes are:

$B = \{(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)\}$

Step 2: Find the intersection of A and B.

To check if the events are mutually exclusive, we need to find their intersection, $A \cap B$. We look for outcomes that are common to both sets.

By comparing the sets A and B, we can see that there are no common outcomes.

$A \cap B = \phi$

Conclusion:

Since the intersection of events A and B is the empty set, they have no outcomes in common. Therefore, events A and B are mutually exclusive. It is impossible to roll two dice where the numbers are the same and their sum is 7.



Axiomatic Approach to Probability

The Axiomatic Approach, introduced by the Russian mathematician A.N. Kolmogorov in 1933, provides a formal mathematical framework for probability. Unlike the classical or statistical definitions, it doesn't depend on the nature of outcomes but relies on certain fundamental axioms.

Let $S$ be the sample space of a random experiment. Probability $P$ is a real-valued function whose domain is the power set of $S$ (the collection of all events) and whose range is the closed interval $[0, 1]$. This means that for every event $A$ associated with $S$, there exists a unique real number $P(A)$, called the probability of $A$.

Mathematically, the function is defined as: $P: \mathcal{P}(S) \to [0, 1]$, where $\mathcal{P}(S)$ denotes the power set of $S$.

The Three Pillars (Axioms)

To be a valid probability measure, the function $P$ must satisfy the following three axioms:

Axiom 1: Non-negativity

For any event $A$ in the sample space $S$, the probability must be a non-negative real number; it can never be negative.

$P(A) \geq 0$

Combined with Axiom 2, this implies that every probability lies in the interval $0 \leq P(A) \leq 1$.

Axiom 2: Certainty

The probability of the entire sample space (the Sure Event) is always equal to $1$. This signifies that some outcome from the sample space must occur.

$P(S) = 1$

Axiom 3: Additivity (For Mutually Exclusive Events)

If $A$ and $B$ are mutually exclusive events (disjoint sets), then the probability of the union of these events is the sum of their individual probabilities.

$P(A \cup B) = P(A) + P(B)$

[Provided $A \cap B = \phi$]

Probability of an Impossible Event ($\phi$)

An impossible event is represented by the empty set $\phi$, which contains no outcomes. We can prove its probability is zero using the axioms.

To Prove: $P(\phi) = 0$

Proof:

Consider any event $A$. We know that $A$ and $\phi$ are disjoint sets because they have no common elements.

$A \cap \phi = \phi$

(Mutually Exclusive)

From set theory, the union of any set with an empty set is the set itself:

$A \cup \phi = A$

Now, applying Axiom 3 to the mutually exclusive events $A$ and $\phi$:

$P(A \cup \phi) = P(A) + P(\phi)$

Substituting $P(A \cup \phi)$ with $P(A)$:

$P(A) = P(A) + P(\phi)$

$P(\phi) = 0$

Hence Proved.

Finite Sample Spaces and Elementary Events

When a sample space $S$ consists of a finite number of outcomes (elementary events) $\omega_1, \omega_2, \dots, \omega_n$, we assign a probability to each outcome such that:

1. Individual Probability: Each $P(\omega_i)$ must be between $0$ and $1$.

2. Sum of Probabilities: The total sum of probabilities of all individual outcomes in a sample space must be equal to $1$.

$\sum\limits_{i=1}^{n} P(\omega_i) = 1$

3. Event Probability: If $A$ is an event, then $P(A)$ is the sum of the probabilities of all outcomes $\omega_i$ that belong to $A$.

$P(A) = \sum\limits_{\omega_i \in A} P(\omega_i)$
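
These three conditions can be checked mechanically. A small Python sketch with an illustrative (hypothetical) assignment of weights to a biased four-sided die:

```python
# Probabilities assigned to the outcomes of a biased four-sided die
# (these weights are our illustrative choice, not from the text)
P = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}

# Condition 1 and 2: each weight lies in [0, 1] and the weights sum to 1
assert all(0 <= p <= 1 for p in P.values())
assert abs(sum(P.values()) - 1.0) < 1e-9

# Condition 3: P(A) is the sum over the outcomes belonging to A
A = {2, 4}                       # event "an even number shows"
P_A = sum(P[w] for w in A)
print(round(P_A, 10))            # 0.6
```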


Probabilities of Equally Likely Outcomes

In the study of probability, the concept of Equally Likely Outcomes is fundamental, especially in classical probability. Two or more outcomes are said to be equally likely if none of them is expected to occur in preference to the others. This is often referred to as an unbiased experiment, such as tossing a fair coin or rolling an unbiased die.

Mathematical Derivation

Let us consider a sample space $S$ associated with a random experiment. Suppose $S$ contains $n$ distinct outcomes represented as follows:

$S = \{\omega_1, \omega_2, \omega_3, \dots, \omega_n\}$

According to the Axiomatic Approach, if all outcomes are equally likely, the probability of each elementary event must be identical. Let this probability be represented by $p$.

$P(\omega_1) = P(\omega_2) = P(\omega_3) = \dots = P(\omega_n) = p$

From the axioms of probability, we know that the sum of probabilities of all possible outcomes in a sample space must equal 1:

$\sum\limits_{i=1}^{n} P(\omega_i) = 1$

[Axiom of Certainty]

Substituting the value $p$ for each outcome:

$p + p + p + \dots + p \text{ ($n$ times)} = 1$

$n \times p = 1$

$p = \frac{1}{n}$

Probability of an Event $E$

Now, let $E$ be an event associated with this experiment. If $E$ consists of $m$ outcomes (where $m \leq n$), then the cardinality of $E$ is $n(E) = m$.

Using the property of additivity for mutually exclusive elementary events:

$P(E) = \sum\limits_{\omega_i \in E} P(\omega_i)$

$P(E) = \underbrace{p + p + p + \dots + p}_{\text{m times}}$

$P(E) = m \times p$

Substituting the value $p = \frac{1}{n}$ obtained above:

$P(E) = m \times \left(\frac{1}{n}\right) = \frac{m}{n}$

In standard terminology used in competitive exams:

$P(E) = \frac{n(E)}{n(S)} = \frac{\text{Number of outcomes favourable to } E}{\text{Total number of possible outcomes}}$
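
The formula $P(E) = n(E)/n(S)$ translates directly into counting code. A sketch using a standard 52-card deck, where the face-card event is our illustrative choice:

```python
from fractions import Fraction

# Equally likely outcomes: P(E) = n(E) / n(S)
deck = [(rank, suit) for suit in "SHDC" for rank in
        ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]]

E = [card for card in deck if card[0] in {"J", "Q", "K"}]  # face cards
p = Fraction(len(E), len(deck))                            # 12/52 reduced
print(p)   # 3/13
```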

Comparison of Outcome Likelihood

To better understand the difference between equally likely and non-equally likely outcomes, consider the following table:

Experiment Outcomes Likelihood Status Reason
Tossing a fair coin $\{H, T\}$ Equally Likely Probability of Head = Probability of Tail = $1/2$
Rolling a biased die $\{1, 2, 3, 4, 5, 6\}$ Not Equally Likely One face is weighted to appear more frequently.
Drawing a card from a deck 52 Cards Equally Likely Each card has a $1/52$ chance of being drawn.

Important Conditions for Equally Likely Outcomes

1. Symmetry: The physical properties of the object used (coin, die, cards) must be symmetrical and uniform.

2. Randomness: The experiment must be conducted in a way that no external factor influences the result.

3. Independence: One outcome should not be linked to the occurrence of another outcome in a single trial.


Odds in Favour and Odds Against

While probability measures the ratio of favourable outcomes to the total number of outcomes, odds compare the number of ways an event can occur directly against the number of ways it cannot occur.

Fundamental Definitions

Let $S$ be the sample space of a random experiment. Suppose all outcomes in $S$ are equally likely. For any event $E$ associated with $S$, let:

$\bullet$ $a$ = Number of ways the event $E$ can happen (favourable outcomes).

$\bullet$ $b$ = Number of ways the event $E$ can fail to happen (unfavourable outcomes).

$\bullet$ $a + b$ = Total number of outcomes in the sample space $S$.

1. Odds in Favour of Event E

The odds in favour are defined as the ratio of the number of favourable outcomes to the number of unfavourable outcomes.

$\text{Odds in favour} = a : b$

2. Odds Against Event E

The odds against are defined as the ratio of the number of unfavourable outcomes to the number of favourable outcomes.

$\text{Odds against} = b : a$

Mathematical Relationship with Probability

We can convert the ratio of odds into a probability value and vice versa by using the total number of outcomes ($a + b$).

Case I: Converting Odds to Probability

If the odds in favour of an event $E$ are given as $a : b$, the probability of the occurrence of $E$ is calculated as:

$P(E) = \frac{a}{a+b}$

[Successes divided by Total]

Similarly, the probability of the non-occurrence of $E$ (denoted as $P(E^c)$ or $P(\text{not } E)$) is:

$P(E^c) = \frac{b}{a+b}$

[Failures divided by Total]

Case II: Converting Probability to Odds

If the probability of an event $E$ is known to be $p$ (where $P(E) = p$), then the probability of its failure is $1 - p$. The odds can be expressed as follows:

Odds in favour:

$\text{Odds in favour} = \frac{P(E)}{P(E^c)} = \frac{p}{1-p}$

Odds against:

$\text{Odds against} = \frac{P(E^c)}{P(E)} = \frac{1-p}{p}$
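
Both conversions can be wrapped in small helper functions; the names `odds_to_probability` and `probability_to_odds` below are our own, used only as a sketch:

```python
from fractions import Fraction

def odds_to_probability(a, b):
    """Odds in favour a : b  ->  P(E) = a / (a + b)."""
    return Fraction(a, a + b)

def probability_to_odds(p):
    """P(E) = p  ->  odds in favour as a reduced (numerator, denominator) pair."""
    ratio = Fraction(p) / (1 - Fraction(p))
    return ratio.numerator, ratio.denominator

print(odds_to_probability(3, 5))             # 3/8
print(probability_to_odds(Fraction(3, 8)))   # (3, 5)
```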

Summary Comparison Table

To differentiate clearly between the three terms, refer to the table below:

Measurement Type Mathematical Ratio Verbal Description
Probability $a : (a+b)$ Success to Total
Odds in Favour $a : b$ Success to Failure
Odds Against $b : a$ Failure to Success

Key Points to Remember

1. Probability always ranges from $0$ to $1$, whereas odds can range from $0$ to $\infty$.

2. In Fair Betting or games of chance, odds are used to determine the payout. For example, odds of $1:5$ in favour means for every $\textsf{₹} 1$ staked, the profit would be $\textsf{₹} 5$ if the event occurs.

3. If an event is certain, the odds against it are $0 : 1$. If an event is impossible, the odds in favour of it are $0 : 1$.


Probability of Event 'Not E' (Complementary Event)

In any random experiment, for every event $E$, there exists a corresponding event which consists of all those outcomes of the sample space $S$ that are not included in $E$. This is known as the Complementary Event of $E$, denoted by 'not E', and represented symbolically as $E'$, $\bar{E}$, or $E^c$.

From a set-theoretic perspective, if $S$ is the universal set (sample space), then $E^c$ is the complement of set $E$. This means that the occurrence of $E^c$ is exactly the same as the non-occurrence of $E$.

Venn diagram showing a circle E inside a rectangular sample space S, with the outer region shaded as E complement

Proof of the Complementary Rule

The relationship $P(E^c) = 1 - P(E)$ is one of the most useful properties in probability, especially for solving complex problems where calculating the probability of non-occurrence is easier than calculating the occurrence.

To Prove: $P(E^c) = 1 - P(E)$

Proof:

Let $S$ be the sample space and $E$ be any event associated with it. By the definition of complementary sets in set theory, we have two fundamental properties:

1. Mutually Exclusive Property: An event and its complement have no common outcomes.

$E \cap E^c = \phi$

[Disjoint Sets]

2. Exhaustive Property: The union of an event and its complement equals the entire sample space.

$E \cup E^c = S$

Now, we apply the Axioms of Probability to these properties:

From the above Additivity Axiom, since $E$ and $E^c$ are mutually exclusive as per above equation:

$P(E \cup E^c) = P(E) + P(E^c)$

From the above Certainty Axiom, the probability of the sample space is always 1:

$P(S) = 1$

From the above equations, we get:

$P(S) = P(E) + P(E^c)$

$1 = P(E) + P(E^c)$

By rearranging the terms, we arrive at the final result:

$P(E^c) = 1 - P(E)$

$P(\text{not } E) = 1 - P(E)$

Hence Proved.

Significance of Complementary Events

The concept of 'Not E' is extensively used in competitive exams for "at least" type problems. For instance, calculating the probability of "at least one success" is often simpler using the complement:

$P(\text{At least one}) = 1 - P(\text{None})$
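
The complement shortcut can be confirmed by enumeration. A sketch for "at least one head in three tosses of a fair coin" (our illustrative choice of experiment):

```python
from itertools import product
from fractions import Fraction

# P(at least one head) = 1 - P(no heads)
tosses = list(product("HT", repeat=3))      # 8 equally likely outcomes
p_none = Fraction(sum(1 for t in tosses if "H" not in t), len(tosses))
p_at_least_one = 1 - p_none
print(p_at_least_one)   # 7/8
```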

Properties of Complementary Probabilities

1. Range: Since $0 \leq P(E) \leq 1$, it follows that $0 \leq 1 - P(E) \leq 1$, thus $P(E^c)$ also lies in the range $[0, 1]$.

2. Summation: The sum of the probability of an event and its complement is always unity ($1$). Such events are called Complementary Events.

3. Impossible vs. Sure Events: The complement of a Sure Event ($S$) is an Impossible Event ($\phi$), and vice-versa.

$P(S^c) = P(\phi) = 0$

$P(\phi^c) = P(S) = 1$

Relationship between Odds and Complement

The definition of complementary events is directly linked to the calculation of Odds Against an event. If the number of ways an event $E$ occurs is '$a$' and the number of ways it fails is '$b$', then:

$\bullet$ Ways in favour of $E$ = $a$

$\bullet$ Ways in favour of $E^c$ = $b$

Event Description Probability
$E$ Occurrence $P(E)$
$E^c$ Non-occurrence $1 - P(E)$
Total Sample Space 1

Example 1. A fair die is rolled once. Find the probability of getting an odd number and the probability of getting a number less than 5.

Answer:

Given: A die is rolled. The sample space is $S = \{1, 2, 3, 4, 5, 6\}$.

Total number of outcomes $n(S) = 6$.

(i) To Find: Probability of getting an odd number.

Let $A$ be the event of getting an odd number.

$A = \{1, 3, 5\}$. Therefore, $n(A) = 3$.

$P(A) = \frac{n(A)}{n(S)} = \frac{\cancel{3}^{1}}{\cancel{6}_{2}} = \frac{1}{2}$

(ii) To Find: Probability of getting a number less than 5.

Let $B$ be the event of getting a number less than 5.

$B = \{1, 2, 3, 4\}$. Therefore, $n(B) = 4$.

$P(B) = \frac{n(B)}{n(S)} = \frac{\cancel{4}^{2}}{\cancel{6}_{3}} = \frac{2}{3}$


Example 2. If the odds in favour of an event $E$ are $3:5$, find the probability of occurrence of $E$ and the probability of non-occurrence of $E$.

Answer:

Given: Odds in favour $= 3 : 5$. Here $a = 3$ and $b = 5$.

Solution:

The probability of event $E$ is given by:

$P(E) = \frac{a}{a+b} = \frac{3}{3+5} = \frac{3}{8}$

The probability of non-occurrence of $E$ (i.e., 'not E'):

$P(E^c) = 1 - P(E) = 1 - \frac{3}{8} = \frac{5}{8}$



Laws of Probability

The Laws of Probability are derived from the fundamental axioms to simplify the calculation of probabilities for complex events. These laws allow us to determine the likelihood of unions, differences, and specific combinations of events without needing to list every outcome in the sample space.


Theorem 1: Subset and Addition Rules

If $A$ and $B$ are two events associated with a random experiment such that $A$ is a subset of $B$ ($A \subset B$), it implies that whenever event $A$ occurs, event $B$ must also occur. Under this condition, two important results follow:

(i) The probability of $A$ can never exceed the probability of $B$.

(ii) The probability of the occurrence of "B but not A" is the difference of their individual probabilities.

Derivation of Part (i): $P(A) \leq P(B)$

Let $S$ be the sample space. Since $A \subset B$, every elementary outcome $\omega_i$ that belongs to $A$ must also belong to $B$. According to the axiomatic definition, the probability of an event is the sum of the probabilities of its elementary outcomes.

$\sum\limits_{\omega_i \in A} P(\omega_i) \leq \sum\limits_{\omega_i \in B} P(\omega_i)$

[Since every outcome in $A$ also lies in $B$, and each $P(\omega_i) \geq 0$]

$P(A) \leq P(B)$

Derivation of Part (ii): $P(B - A) = P(B) - P(A)$

From the principles of set theory, if $A \subset B$, the set $B$ can be partitioned into two mutually exclusive (disjoint) parts: the set $A$ itself and the set of elements in $B$ that are not in $A$ (denoted as $B - A$ or $B \cap A^c$).

$B = A \cup (B - A)$

[Identity for $A \subset B$]

Since $A$ and $(B - A)$ have no common elements, their intersection is null ($A \cap (B - A) = \phi$). Applying Axiom 3 (Additivity):

$P(A \cup (B - A)) = P(A) + P(B - A)$

$P(B) = P(A) + P(B - A)$

[Substituting $A \cup (B - A) = B$]

Rearranging the terms to find the probability of the difference:

$P(B - A) = P(B) - P(A)$

Hence Proved.
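
Both parts of the theorem can be spot-checked on a single die roll. A sketch with illustrative subset events of our own choosing:

```python
from fractions import Fraction

# Die roll: A = {2} is a subset of B = {1, 2} ("a number less than 3")
S = {1, 2, 3, 4, 5, 6}
A, B = {2}, {1, 2}

def P(E):  # equally likely outcomes: P(E) = n(E) / n(S)
    return Fraction(len(E), len(S))

assert A <= B                       # A is a subset of B
assert P(A) <= P(B)                 # part (i)
assert P(B - A) == P(B) - P(A)      # part (ii)
print(P(B - A))   # 1/6
```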

Venn Diagram showing set A entirely contained within set B

Addition Law for Mutually Exclusive Events

If $A$ and $B$ are mutually exclusive events, it means they cannot happen at the same time ($A \cap B = \phi$). These are often referred to as disjoint events.

The probability of the occurrence of either event $A$ or event $B$ (the union) is simply the sum of their individual probabilities.

$P(A \cup B) = P(A) + P(B)$

[Only if $A \cap B = \phi$]

Generalization for 'n' Events

If we have a collection of $n$ events $A_1, A_2, A_3, \dots, A_n$ that are pairwise mutually exclusive (no two events have a common outcome), the probability of at least one of these events occurring is the sum of their respective probabilities.

$P(A_1 \cup A_2 \cup \dots \cup A_n) = P(A_1) + P(A_2) + \dots + P(A_n)$

Using the summation notation, this is represented as:

$P\left( \bigcup\limits_{i=1}^{n} A_i \right) = \sum\limits_{i=1}^{n} P(A_i)$

Addition Theorem for Exhaustive Events

When events are not only mutually exclusive but also exhaustive, they cover all possible outcomes of the sample space $S$. In such a scenario, the sum of their probabilities must equal the probability of the sure event.

To Prove: $P(A) + P(B) = 1$ (for mutually exclusive and exhaustive events).

Proof:

Since the events are exhaustive:

$A \cup B = S$

Taking probability on both sides:

$P(A \cup B) = P(S)$

By Axiom 2 (Certainty), $P(S) = 1$, and by Axiom 3 (Additivity), since $A \cap B = \phi$, $P(A \cup B) = P(A) + P(B)$. Therefore:

$P(A) + P(B) = 1$

Hence Proved.

For a general set of $n$ mutually exclusive and exhaustive events $A_1, A_2, \dots, A_n$:

$P(A_1) + P(A_2) + \dots + P(A_n) = 1$

Comparison of Event Relationships

The following table summarizes how probabilities change based on the relationship between two events $A$ and $B$.

Relationship Condition Applicable Formula
Subset $A \subset B$ $P(B - A) = P(B) - P(A)$
Mutually Exclusive $A \cap B = \phi$ $P(A \cup B) = P(A) + P(B)$
Exhaustive $A \cup B = S$ $P(A \cup B) = 1$
Exclusive & Exhaustive $A \cap B = \phi, A \cup B = S$ $P(A) + P(B) = 1$

Theorem 2: Mutually Exclusive and Exhaustive Events

This theorem combines two critical concepts of probability theory. When events are both Mutually Exclusive (they cannot occur together) and Exhaustive (they cover all possible outcomes), they partition the sample space $S$ into distinct, non-overlapping regions that sum up to the whole.

Definitions and Conditions

For two events $A$ and $B$ to satisfy this theorem, they must meet the following mathematical criteria:

1. Mutually Exclusive: Their intersection is an empty set, meaning no outcome is common to both.

$A \cap B = \phi$

2. Exhaustive: Their union constitutes the entire sample space, meaning at least one of them must occur.

$A \cup B = S$

Proof of the Theorem

Theorem Statement: If $A$ and $B$ are mutually exclusive and exhaustive events, then the sum of their probabilities is unity ($1$).

Proof:

We begin by considering the property of mutually exclusive events. According to the Axiom of Additivity:

$P(A \cup B) = P(A) + P(B)$

(i)

Next, we incorporate the property of exhaustive events. Since the events cover the entire sample space:

$A \cup B = S$

By applying the probability measure to both sides of the equation:

$P(A \cup B) = P(S)$

From the Axiom of Certainty, we know that the probability of the sure event (sample space) is always $1$:

$P(S) = 1$

(ii)

By substituting the results from equations (i) and (ii) into each other, we conclude:

$P(A) + P(B) = 1$

Hence Proved.

Venn diagram showing sample space S split into two parts A and B with no overlap and no space left outside

Generalised Form for 'n' Events

The theorem can be extended to any finite number of events. If $A_1, A_2, A_3, \dots, A_n$ are $n$ events that are pairwise mutually exclusive and collectively exhaustive, then:

$A_1 \cup A_2 \cup \dots \cup A_n = S$

$A_i \cap A_j = \phi$

[For all $i \neq j$]

The sum of their individual probabilities is expressed as:

$P(A_1) + P(A_2) + \dots + P(A_n) = 1$

Using the summation notation:

$\sum\limits_{i=1}^{n} P(A_i) = 1$

Summary of Event Characteristics

The following table provides a clear distinction:

Property Mutually Exclusive Exhaustive
Basic Definition Events cannot happen at the same time. At least one of the events must happen.
Set Operation $A \cap B = \phi$ $A \cup B = S$
Probability Impact $P(A \cup B) = P(A) + P(B)$ $P(A \cup B) = 1$
Combined Result $P(A) + P(B) = 1$

Theorem 3: General Addition Theorem

In the study of probability, the General Addition Theorem (also known as the Inclusion-Exclusion Principle for two sets) is used to find the probability of the union of two events that are not necessarily mutually exclusive.

The Concept of "Double Counting"

When two events $A$ and $B$ have common outcomes (i.e., $A \cap B \neq \phi$), the sum $P(A) + P(B)$ counts the probability of the intersection region twice (once in $A$ and once in $B$). To obtain the correct probability for the union $P(A \cup B)$, we must subtract the probability of the intersection exactly once.

Venn diagram showing two overlapping circles A and B inside sample space S

Formal Statement and Proof

Theorem: If $A$ and $B$ are any two events associated with a random experiment having sample space $S$, then:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

Proof of the Theorem

To prove this, we decompose the union of the two sets into mutually exclusive parts using set identities.

From set theory, the union of $A$ and $B$ can be expressed as the union of two disjoint sets: the set $A$ and the set containing elements of $B$ which are not in $A$.

$A \cup B = A \cup (B - A)$

Since $A$ and $(B - A)$ are disjoint ($A \cap (B - A) = \phi$), we apply Axiom 3 (Additivity):

$P(A \cup B) = P(A) + P(B - A)$

... (i)

Now, consider the set $B$. It can be expressed as the union of the part of $B$ that overlaps with $A$ and the part that does not overlap with $A$.

$B = (A \cap B) \cup (B - A)$

Since $(A \cap B)$ and $(B - A)$ are also mutually exclusive, we apply the probability measure:

$P(B) = P(A \cap B) + P(B - A)$

Rearranging the above equation to isolate $P(B - A)$:

$P(B - A) = P(B) - P(A \cap B)$

[Subtracting $P(A \cap B)$ from both sides]           ... (ii)

Now, substitute the value of $P(B - A)$ from equation (ii) into equation (i):

$P(A \cup B) = P(A) + [P(B) - P(A \cap B)]$

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

Hence Proved.
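
The theorem is easy to verify by counting on a small sample space. A sketch using one die roll with two overlapping events (our illustrative choice):

```python
from fractions import Fraction

# One die roll: A = "even number", B = "greater than 3" (not mutually exclusive)
S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}
B = {4, 5, 6}

def P(E):
    return Fraction(len(E), len(S))

lhs = P(A | B)                      # direct count of the union {2, 4, 5, 6}
rhs = P(A) + P(B) - P(A & B)        # General Addition Theorem
print(lhs, rhs)   # 2/3 2/3
```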

Comparison of Addition Rules

The applicability of the addition rule depends entirely on whether the events have a common intersection.

Case Type Condition Mathematical Formula
Mutually Exclusive $A \cap B = \phi$ $P(A \cup B) = P(A) + P(B)$
Non-Mutually Exclusive $A \cap B \neq \phi$ $P(A \cup B) = P(A) + P(B) - P(A \cap B)$
Independent Events $P(A \cap B) = P(A) \cdot P(B)$ $P(A \cup B) = P(A) + P(B) - P(A) \cdot P(B)$

Key Logical Interpretations

In word problems, specific phrases indicate the use of the General Addition Theorem:

1. "Probability of A or B": This usually refers to $P(A \cup B)$. In mathematics textbooks, "or" is treated as the inclusive or.

2. "Probability of at least one of the events": This is equivalent to $P(A \cup B)$.

3. "Probability of neither A nor B": This is the complement of the union, calculated as $1 - P(A \cup B)$.

Important Remark on the Range of the Union

Since $P(A \cup B)$ is a probability, it must satisfy the axiom $P(A \cup B) \leq 1$. This leads to Boole's Inequality for two events:

$P(A \cup B) \leq P(A) + P(B)$

The equality holds true if and only if the events are mutually exclusive.


Theorem 4: Miscellaneous Relationships

In any random experiment involving events $A$ and $B$, the event $A$ can be viewed as consisting of two disjoint parts: outcomes that are only in $A$ and outcomes that $A$ shares with $B$.

Mathematical Breakdown

Using set identities, we can express the primary events as follows:

$A = (A - B) \cup (A \cap B)$

[Where $(A - B) \cap (A \cap B) = \phi$]

$B = (B - A) \cup (A \cap B)$

[Where $(B - A) \cap (A \cap B) = \phi$]

By applying Axiom 3 (Additivity) to the equations above, we derive the foundational probability identities:

$P(A) = P(A - B) + P(A \cap B)$

$P(B) = P(B - A) + P(A \cap B)$

Key Formulas

The following set of formulas are derived by rearranging and substituting the fundamental identities mentioned above. Students are advised to memorize these for fast-paced objective tests.

(i) Probability of the Union via Disjoint Parts:

$P(A \cup B) = P(A - B) + P(B - A) + P(A \cap B)$

(ii) Alternative Expressions for the Union:

$P(A \cup B) = P(A) + P(B - A) = P(B) + P(A - B)$

(iii) Probability of 'Only A' (Difference of Sets):

$P(A - B) = P(A) - P(A \cap B)$

(iv) Probability of 'Only B':

$P(B - A) = P(B) - P(A \cap B)$

(v) Set Intersection Notation for Differences:

$P(A - B) = P(A \cap B^c)$

$P(B - A) = P(A^c \cap B)$

(vi) The Sum of Probabilities Law:

$P(A) + P(B) = P(A - B) + P(B - A) + 2P(A \cap B)$

Venn diagram showing regions for Only A, Only B, and Intersection

Specific Logical Interpretations

Questions in competitive exams often use specific wording to describe these mathematical regions. Translating these phrases correctly is key to finding the solution.

1. Probability of occurrence of "A only"

This refers to outcomes that are in $A$ but not in $B$. In probability notation, it is represented as $P(A \cap B^c)$ or $P(A - B)$.

$P(\text{Only } A) = P(A) - P(A \cap B)$

2. Probability of occurrence of "B only"

This refers to outcomes that are in $B$ but not in $A$. In probability notation, it is represented as $P(A^c \cap B)$ or $P(B - A)$.

$P(\text{Only } B) = P(B) - P(A \cap B)$

3. Probability of occurrence of "Exactly one" of the two events

This is the Symmetric Difference of the two sets, meaning either 'Only $A$' happens or 'Only $B$' happens. Since these two scenarios are mutually exclusive, we sum their probabilities.

$P(\text{Exactly one}) = P(A - B) + P(B - A)$

An alternative (and often faster) formula is subtracting the intersection from the union:

$P(\text{Exactly one}) = P(A \cup B) - P(A \cap B)$

Summary Table for Reference

Verbal Phrase Set Notation Computational Formula
At least one of $A$ or $B$ $A \cup B$ $P(A) + P(B) - P(A \cap B)$
Both $A$ and $B$ $A \cap B$ $P(A) + P(B) - P(A \cup B)$
Neither $A$ nor $B$ $A^c \cap B^c$ $1 - P(A \cup B)$
Only $A$ occurs $A \cap B^c$ $P(A) - P(A \cap B)$
Exactly one occurs $(A-B) \cup (B-A)$ $P(A) + P(B) - 2P(A \cap B)$

Proving the "Exactly One" Shortcut

The formula $P(A) + P(B) - 2P(A \cap B)$ is highly popular in examination questions. Here is the logic:

$P(\text{Exactly one}) = [P(A) - P(A \cap B)] + [P(B) - P(A \cap B)]$

[Sum of 'Only A' and 'Only B']

$P(\text{Exactly one}) = P(A) + P(B) - 2P(A \cap B)$

Hence Derived.
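
The shortcut can be checked against a direct count of the two disjoint regions. A sketch reusing simple die-roll events of our own choosing:

```python
from fractions import Fraction

# Die roll: A = "even number", B = "greater than 3"
S = {1, 2, 3, 4, 5, 6}
A, B = {2, 4, 6}, {4, 5, 6}

def P(E):
    return Fraction(len(E), len(S))

exactly_one = P(A - B) + P(B - A)          # sum of 'Only A' and 'Only B'
shortcut = P(A) + P(B) - 2 * P(A & B)      # the examination shortcut
assert exactly_one == shortcut
print(exactly_one)   # 1/3
```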


Theorem 5: Addition Theorem for Three Events

The Addition Theorem for Three Events is an extension of the inclusion-exclusion principle. This theorem is vital for calculating the probability that at least one of three overlapping events ($A$, $B$, or $C$) will occur. It corrects the "over-counting" that happens when we simply sum individual probabilities.

Formal Statement of the Theorem

If $A$, $B$, and $C$ are any three events associated with a random experiment having a sample space $S$, then the probability of the occurrence of at least one of these events is given by:

$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(C \cap A) + P(A \cap B \cap C)$

Mathematical Proof

To Prove: The formula stated above using the principles of set theory and the addition theorem for two events.

Proof:

We treat the union of the first two events, $(A \cup B)$, as a single composite event.

$P(A \cup B \cup C) = P((A \cup B) \cup C)$

Applying the General Addition Theorem for two events (Theorem 3):

$P((A \cup B) \cup C) = P(A \cup B) + P(C) - P((A \cup B) \cap C)$

... (i)

Now, we expand $P(A \cup B)$ using Theorem 3:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

... (ii)

For the intersection term in equation (i), we apply the Distributive Law of sets:

$(A \cup B) \cap C = (A \cap C) \cup (B \cap C)$

Therefore, the probability of this term is:

$P((A \cup B) \cap C) = P((A \cap C) \cup (B \cap C))$

Applying Theorem 3 again to this new union of two intersections:

$P((A \cap C) \cup (B \cap C)) = P(A \cap C) + P(B \cap C) - P((A \cap C) \cap (B \cap C))$

Since $(A \cap C) \cap (B \cap C)$ is the same as $A \cap B \cap C$:

$P((A \cup B) \cap C) = P(A \cap C) + P(B \cap C) - P(A \cap B \cap C)$

... (iii)

Finally, we substitute the results from equations (ii) and (iii) back into equation (i):

$P(A \cup B \cup C) = [P(A) + P(B) - P(A \cap B)] + P(C) - [P(A \cap C) + P(B \cap C) - P(A \cap B \cap C)]$

Opening the brackets and rearranging the terms, we get:

$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(C \cap A) + P(A \cap B \cap C)$

Hence Proved.
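
The three-event formula can be verified term by term on a single die roll. A sketch with illustrative events (our choice, not from the text):

```python
from fractions import Fraction

# Die roll: A = "even", B = "greater than 2", C = "prime"
S = {1, 2, 3, 4, 5, 6}
A, B, C = {2, 4, 6}, {3, 4, 5, 6}, {2, 3, 5}

def P(E):
    return Fraction(len(E), len(S))

lhs = P(A | B | C)                  # direct count: {2, 3, 4, 5, 6}
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(B & C) - P(C & A)
       + P(A & B & C))              # inclusion-exclusion
assert lhs == rhs
print(lhs)   # 5/6
```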

Logical Visualisation (Venn Diagram)

To understand why we subtract the double intersections and add back the triple intersection, consider the areas in a 3-circle Venn diagram:

1. Add $P(A) + P(B) + P(C)$: The overlapping regions ($A \cap B$, $B \cap C$, $C \cap A$) are counted twice, and the center region ($A \cap B \cap C$) is counted three times.

2. Subtract $P(A \cap B)$, $P(B \cap C)$, and $P(C \cap A)$: This removes the double-counted areas. However, the center region, which was added three times in Step 1, is also subtracted three times here, leaving it counted zero times.

3. Add $P(A \cap B \cap C)$: We add the center region back once to complete the union.

Venn diagram with three circles A, B, and C showing various intersections

Summary of Phrases for Three Events

The following table is extremely helpful for interpreting word problems in competitive exams:

Verbal Description Symbolic Notation Calculation Hint
At least one of $A$, $B$, or $C$ $A \cup B \cup C$ Use Theorem 5 (Inclusion-Exclusion)
All three events occur $A \cap B \cap C$ Product of probabilities (if independent)
None of the events occur $A^c \cap B^c \cap C^c$ $1 - P(A \cup B \cup C)$
Exactly two events occur Union of the three 'exactly two' regions $P(A \cap B) + P(B \cap C) + P(C \cap A) - 3P(A \cap B \cap C)$

Special Case: Mutually Exclusive Events

If the events $A$, $B$, and $C$ are pairwise mutually exclusive, then all intersection terms become zero ($P(A \cap B) = 0$, $P(B \cap C) = 0$, etc.). In this scenario, the formula simplifies to:

$P(A \cup B \cup C) = P(A) + P(B) + P(C)$


Example 1. In a deck of 52 cards, let $A_1$, $A_2$, $A_3$, and $A_4$ represent drawing a Spade, Heart, Diamond, and Club respectively. Verify the theorem for exhaustive events.

Answer:

Given: A standard deck of 52 playing cards. The suits are Spades, Hearts, Diamonds, and Clubs.

Total number of cards $n(S) = 52$. Each suit has 13 cards.

Proof/Verification:

The events are mutually exclusive (a card cannot be both a Spade and a Heart) and exhaustive (every card belongs to one of these four suits).

$P(A_1) = P(A_2) = P(A_3) = P(A_4) = \frac{13}{52} = \frac{1}{4}$

By the Addition Theorem for exhaustive events:

$P(A_1) + P(A_2) + P(A_3) + P(A_4) = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} + \frac{1}{4}$

$P(A_1) + P(A_2) + P(A_3) + P(A_4) = 1$

This verifies that the sum of probabilities of mutually exclusive and exhaustive events is 1.


Example 2. A coin is tossed thrice. Let $A$ be the event of getting 2 or more heads and $B$ be the event of getting an odd number of heads. Find $P(A \cup B)$.

Answer:

Given: Sample Space $S = \{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT\}$. Total outcomes $n(S) = 8$.

Event $A$ (2 or more heads) = $\{HHH, HHT, HTH, THH\}$. Thus, $n(A) = 4$.

Event $B$ (Odd number of heads) = $\{HHH, HTT, THT, TTH\}$. Thus, $n(B) = 4$.

Solution:

Find the intersection $A \cap B$ (Outcomes common to both):

$A \cap B = \{HHH\}$

[Only 1 outcome]

Now, calculate probabilities:

$P(A) = \frac{4}{8} = \frac{1}{2}$

$P(B) = \frac{4}{8} = \frac{1}{2}$

$P(A \cap B) = \frac{1}{8}$

Using the Addition Theorem:

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$

$P(A \cup B) = \frac{4}{8} + \frac{4}{8} - \frac{1}{8}$

$P(A \cup B) = \frac{7}{8}$
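
The worked example above can be reproduced by enumeration; a sketch that rebuilds the sample space and both events exactly as listed:

```python
from itertools import product
from fractions import Fraction

S = list(product("HT", repeat=3))                 # 8 outcomes
A = {s for s in S if s.count("H") >= 2}           # 2 or more heads
B = {s for s in S if s.count("H") % 2 == 1}       # odd number of heads

def P(E):
    return Fraction(len(E), len(S))

assert P(A | B) == P(A) + P(B) - P(A & B)         # Addition Theorem holds
print(P(A | B))   # 7/8
```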