



Introduction to Probability: Basic Terms and Concepts




Probability and its Related Terms (Introduction)


The Concept of Uncertainty and Probability

Our daily lives and the world around us are filled with uncertainty. We often encounter situations where the outcome is not predictable with certainty. Examples include predicting the weather, the result of a sports match, the outcome of an election, or whether a student will pass an exam.

While we cannot eliminate uncertainty, the field of Probability provides a mathematical framework to quantify and analyze it. Probability is a branch of mathematics that deals with the likelihood or chance of a specific outcome or event occurring in a random experiment.

Probability assigns a numerical value to this likelihood. This value is always between 0 and 1, inclusive: a probability of 0 means the event cannot occur, a probability of 1 means the event is certain to occur, and values in between indicate increasing degrees of likelihood.

By assigning numerical values to probabilities, we can compare the likelihood of different events and make informed decisions or predictions based on the available information, even in the face of uncertainty. The study of probability starts with defining the basic components of a situation involving uncertainty.


Basic Related Terms

To formally study probability, we need to understand the terminology used to describe uncertain situations. The fundamental terms are:

  • **Random experiment** — an experiment whose outcome cannot be predicted with certainty in advance.
  • **Trial** — a single performance (repetition) of a random experiment.
  • **Outcome** — a single, specific result of a trial.
  • **Sample space** — the set of all possible outcomes of the experiment.
  • **Event** — a subset of the sample space, i.e. a collection of outcomes of interest.

These terms form the building blocks for defining and calculating probabilities, and they will be elaborated upon in the following sections.



Random Experiment and Sample Space


Random Experiment

An experiment is a planned operation or procedure conducted under specific conditions that yields a result or outcome. In the context of probability, we are interested in experiments that are "random".

A **random experiment** (or probabilistic experiment) is an experiment that satisfies the following two conditions:

  1. It has **more than one possible outcome**. If an experiment has only one possible outcome, its result is certain, and there is no uncertainty to quantify.
  2. It is **not possible to predict the outcome with certainty in advance**, even if the experiment is repeated under identical conditions. The outcome varies unpredictably from one trial to the next.

Each repetition of a random experiment is called a **trial**.

Examples of Random Experiments:

  • Tossing a fair coin (the result may be a Head or a Tail).
  • Rolling a standard six-sided die.
  • Drawing a single card from a well-shuffled deck of 52 playing cards.


Outcome

An outcome is a single, specific result that can occur when a random experiment is performed. It is the fundamental result of a trial of the experiment.

Examples of Outcomes:

  • Getting a Head (H) when a coin is tossed.
  • Getting a 4 when a die is rolled.
  • Drawing the Ace of Spades when a card is drawn from a deck.


Sample Space (S)

The sample space of a random experiment, usually denoted by the symbol $S$ or $\Omega$, is the **set of all possible outcomes** of that experiment. It is the universal set containing every potential result that can occur. Each element in the sample space is a distinct outcome.

The number of outcomes in a sample space is denoted by $n(S)$ or $|S|$.

Examples of Sample Space:

  1. Experiment: Tossing a single fair coin.

    The possible outcomes are Head (H) or Tail (T).

    The sample space is the set of these outcomes: $S = \{H, T\}$.

    The number of outcomes in the sample space is $n(S) = 2$.

  2. Experiment: Rolling a standard six-sided die.

    The possible outcomes are the numbers 1, 2, 3, 4, 5, 6.

    The sample space is the set of these outcomes: $S = \{1, 2, 3, 4, 5, 6\}$.

    The number of outcomes in the sample space is $n(S) = 6$.

  3. Experiment: Tossing two fair coins simultaneously (or tossing one coin twice).

    We can list the possible ordered outcomes:

    • Head on the first coin, Head on the second coin (HH)
    • Head on the first coin, Tail on the second coin (HT)
    • Tail on the first coin, Head on the second coin (TH)
    • Tail on the first coin, Tail on the second coin (TT)

    The sample space is the set of these combined outcomes: $S = \{HH, HT, TH, TT\}$.

    The number of outcomes in the sample space is $n(S) = 4$.

  4. Experiment: Drawing a single card from a standard deck of 52 playing cards.

    A standard deck has 4 suits (Spades $\spadesuit$, Hearts $\heartsuit$, Diamonds $\diamondsuit$, Clubs $\clubsuit$) and 13 ranks in each suit (Ace, 2, ..., 10, Jack, Queen, King). Each unique card is a possible outcome.

    The sample space is the set of all 52 distinct cards:

    $S = \{A\spadesuit, 2\spadesuit, \dots, K\spadesuit, A\heartsuit, \dots, K\heartsuit, A\diamondsuit, \dots, K\diamondsuit, A\clubsuit, \dots, K\clubsuit \}$.

    The number of outcomes in the sample space is $n(S) = 52$.

The sample space is the fundamental set from which all possible results of a random experiment are drawn. Events are defined as collections (subsets) of outcomes from this sample space.
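The sample spaces above can be enumerated mechanically. As a minimal Python sketch (the variable names are illustrative), `itertools.product` builds the ordered outcomes for two coins and for a 52-card deck:

```python
from itertools import product

# Sample space for tossing one fair coin
coin = {"H", "T"}

# Sample space for tossing two coins: all ordered pairs of faces
two_coins = {a + b for a, b in product(coin, repeat=2)}
print(sorted(two_coins))   # ['HH', 'HT', 'TH', 'TT']
print(len(two_coins))      # n(S) = 4

# Sample space for drawing one card from a standard 52-card deck
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["Spades", "Hearts", "Diamonds", "Clubs"]
deck = {(rank, suit) for rank, suit in product(ranks, suits)}
print(len(deck))           # n(S) = 52
```

Representing a sample space as a set matches the mathematical definition directly: each element is one distinct outcome, and `len` gives $n(S)$.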



Event: Definition and Types (Simple, Compound, Sure, Impossible, Mutually Exclusive)


Event (E)

In probability, an event is a specific outcome or a collection of outcomes from the sample space ($S$) of a random experiment. An event is formally defined as a **subset** of the sample space.

We are interested in whether a particular event occurs when a random experiment is performed. An event occurs if the actual outcome of the experiment is one of the outcomes included in the event's subset of the sample space.

Events are usually denoted by capital letters such as A, B, C, E, F, etc.

Example (Rolling a standard six-sided die):

The sample space is $S = \{1, 2, 3, 4, 5, 6\}$.

  • Event A: Getting an even number. $A = \{2, 4, 6\}$.
  • Event B: Getting a number greater than 4. $B = \{5, 6\}$.
  • Event C: Getting a 3. $C = \{3\}$.
  • Event D: Getting a number greater than 6. $D = \{\} = \phi$.

The number of outcomes favourable to an event E is denoted by $n(E)$. For the examples above, $n(A)=3$, $n(B)=2$, $n(C)=1$, $n(D)=0$.


Types of Events

Events can be classified based on the number of outcomes they contain or their relationship with other events or the sample space:

  1. Elementary Event (or Simple Event):

    An elementary event is an event that consists of **exactly one** outcome from the sample space. It is the simplest type of event.

    • Each individual outcome in the sample space corresponds to an elementary event.
    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): The elementary events are $\{1\}$, $\{2\}$, $\{3\}$, $\{4\}$, $\{5\}$, and $\{6\}$. Event C ($\{3\}$) from the example above is an elementary event.
  2. Compound Event:

    A compound event is an event that consists of **more than one** outcome from the sample space. It is formed by combining two or more elementary events.

    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): Event A (getting an even number) = $\{2, 4, 6\}$. Event B (getting a number greater than 4) = $\{5, 6\}$. These are compound events.
  3. Sure Event (or Certain Event):

    A sure event is an event that is guaranteed to occur every time the random experiment is performed. It consists of **all** possible outcomes of the experiment.

    • The sure event is equal to the entire sample space $S$.
    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): Event D: Getting a number less than 7. $D = \{1, 2, 3, 4, 5, 6\} = S$. This is a sure event.
    • The probability of a sure event is always 1. $P(S) = 1$.
  4. Impossible Event:

    An impossible event is an event that cannot occur in any outcome of the random experiment. It contains **no outcomes**.

    • The impossible event is represented by the empty set, denoted by $\phi$ or $\{\}$.
    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): Event E: Getting a number greater than 6. $E = \{\} = \phi$. This is an impossible event.
    • The probability of an impossible event is always 0. $P(\phi) = 0$.
  5. Mutually Exclusive Events:

    Two or more events are said to be mutually exclusive (or disjoint) if the occurrence of one event **prevents** the occurrence of any of the others in a single trial of the experiment. In other words, they cannot happen at the same time.

    • Mathematically, two events A and B are mutually exclusive if their intersection is the empty set: $A \cap B = \phi$. If the intersection contains any outcomes, they are not mutually exclusive.
    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): Let A = {getting an even number} = $\{2, 4, 6\}$. Let F = {getting an odd number} = $\{1, 3, 5\}$. $A \cap F = \{\} = \phi$. Events A and F are mutually exclusive. You cannot get both an even and an odd number on a single roll.
    • Example (Rolling a die): Let A = {getting an even number} = $\{2, 4, 6\}$. Let B = {getting a number > 4} = $\{5, 6\}$. $A \cap B = \{6\}$. Since the outcome 6 is in both events, $A \cap B \neq \phi$. Events A and B are **not** mutually exclusive because it is possible to roll a 6, which is both even and greater than 4.
  6. Complementary Event:

    For any event A, its **complementary event** (often denoted by $A'$, $\bar{A}$, or $A^c$) is the event that A does **not** occur. It consists of all outcomes in the sample space $S$ that are not included in event A.

    • The complementary event $A'$ is the set difference $S - A$.
    • Events A and $A'$ are always mutually exclusive ($A \cap A' = \phi$).
    • Events A and $A'$ are always collectively exhaustive, meaning their union covers the entire sample space ($A \cup A' = S$).
    • The sum of the probability of an event and its complement is always 1: $P(A) + P(A') = 1$.
    • Example (Rolling a die, $S=\{1, 2, 3, 4, 5, 6\}$): If A = {getting a number less than 3} = $\{1, 2\}$, then the complementary event $A'$ is {getting a number not less than 3} = {getting a number $\ge$ 3} = $\{3, 4, 5, 6\}$. $A' = S - A = \{1, 2, 3, 4, 5, 6\} - \{1, 2\} = \{3, 4, 5, 6\}$.

Understanding these types of events and their properties is fundamental for defining probability rules and theorems.
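Because events are subsets of the sample space, set operations capture these definitions exactly. A short Python sketch using the die-rolling events from above:

```python
# Events on a die roll modelled as Python sets (subsets of the sample space)
S = {1, 2, 3, 4, 5, 6}

A = {2, 4, 6}   # getting an even number
B = {5, 6}      # getting a number greater than 4
F = {1, 3, 5}   # getting an odd number

# Mutually exclusive <=> intersection is the empty set
print(A & F == set())   # True: even and odd cannot both occur in one roll
print(A & B)            # {6}: A and B are NOT mutually exclusive

# Complement of A: all outcomes of S not in A (set difference S - A)
A_complement = S - A
print(A_complement)                # {1, 3, 5}
print(A | A_complement == S)       # True: A and A' are exhaustive
print(A & A_complement == set())   # True: A and A' are mutually exclusive
```

The two `True` checks at the end mirror the properties $A \cup A' = S$ and $A \cap A' = \phi$ stated above.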




Probability: Classical (Theoretical) Definition


Basis of Classical Probability

The **classical definition of probability**, also known as theoretical probability, is based on the idea of **equally likely outcomes**. It applies to random experiments where it is reasonable to assume that each possible outcome has the same chance of occurring. This is often the case in games of chance involving fair dice, fair coins, well-shuffled decks of cards, etc.

This definition does not require actually performing the experiment; it is based on logical reasoning and the structure of the sample space.


Definition

If a random experiment has a finite sample space $S$ with $n(S)$ possible outcomes, and if all these outcomes are **equally likely**, then the **theoretical probability** of an event $E$ (which is a subset of $S$) is defined as the ratio of the number of outcomes favourable to $E$ to the total number of possible equally likely outcomes in the sample space.

Let $n(E)$ be the number of outcomes in the sample space $S$ that belong to the event $E$.

Formula for Theoretical Probability:

$$P(E) = \frac{\text{Number of outcomes favourable to Event E}}{\text{Total number of equally likely outcomes in Sample Space}} = \frac{n(E)}{n(S)}$$

... (1)

This formula is the cornerstone of classical probability calculations.


Conditions and Properties of Theoretical Probability

The classical definition of probability has certain conditions and inherent properties:

  • The sample space must be **finite**, and all outcomes must be **equally likely**; the formula does not apply otherwise.
  • Since $0 \le n(E) \le n(S)$, the probability of any event satisfies $0 \le P(E) \le 1$.
  • $P(S) = \frac{n(S)}{n(S)} = 1$ (sure event), and $P(\phi) = \frac{0}{n(S)} = 0$ (impossible event).
  • For any event A and its complement, $P(A) + P(A') = 1$.


Example

Example 1. A fair six-sided die is rolled once. Find the theoretical probability of:

(a) Getting a 4.

(b) Getting an odd number.

(c) Getting a number less than 5.

Answer:

Given: A fair six-sided die is rolled once.

To Find: Probabilities of specific events.

Solution:

The experiment is rolling a fair six-sided die. Since the die is fair, all six possible outcomes are equally likely.

The sample space is $S = \{1, 2, 3, 4, 5, 6\}$.

The total number of equally likely outcomes is $n(S) = 6$.

We use the formula $P(E) = \frac{n(E)}{n(S)}$.

**(a) Getting a 4:**

Let Event E1 be "getting a 4".

The outcome favourable to E1 is $\{4\}$.

Number of outcomes favourable to E1, $n(E1) = 1$.

$P(E1) = \frac{n(E1)}{n(S)} = \frac{1}{6}$

... (i)

The theoretical probability of getting a 4 is $\frac{1}{6}$.

**(b) Getting an odd number:**

Let Event E2 be "getting an odd number".

The outcomes favourable to E2 are $\{1, 3, 5\}$.

Number of outcomes favourable to E2, $n(E2) = 3$.

$P(E2) = \frac{n(E2)}{n(S)} = \frac{3}{6} = \frac{1}{2}$

... (ii)

The theoretical probability of getting an odd number is $\frac{1}{2}$.

**(c) Getting a number less than 5:**

Let Event E3 be "getting a number less than 5".

The outcomes favourable to E3 are $\{1, 2, 3, 4\}$.

Number of outcomes favourable to E3, $n(E3) = 4$.

$P(E3) = \frac{n(E3)}{n(S)} = \frac{4}{6} = \frac{2}{3}$

... (iii)

The theoretical probability of getting a number less than 5 is $\frac{2}{3}$.
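The worked example can be checked with a few lines of Python. This is a sketch (the helper name `theoretical_probability` is illustrative) using `fractions.Fraction` so the answers stay exact:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}  # sample space of a fair six-sided die

def theoretical_probability(event, sample_space):
    """P(E) = n(E) / n(S), valid when all outcomes are equally likely."""
    return Fraction(len(event & sample_space), len(sample_space))

E1 = {4}                           # (a) getting a 4
E2 = {x for x in S if x % 2 == 1}  # (b) getting an odd number
E3 = {x for x in S if x < 5}       # (c) getting a number less than 5

print(theoretical_probability(E1, S))  # 1/6
print(theoretical_probability(E2, S))  # 1/2
print(theoretical_probability(E3, S))  # 2/3
```

The three printed fractions agree with results (i), (ii), and (iii) above.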



Probability - Experimental and Theoretical: Distinction


Introduction

When we talk about the probability of an event, we might be referring to two different but related concepts: theoretical probability and experimental probability. Understanding the distinction between them is crucial.

Theoretical Probability

Theoretical probability is based on logical reasoning, the structure of the experiment, and the assumption of equally likely outcomes. It is what we *expect* to happen in the long run, based on the ideal conditions of the experiment.


Experimental (Empirical) Probability

Experimental probability, also known as empirical probability or relative frequency, is based on the results of actual trials of a random experiment. It is an estimate of the probability based on observed outcomes.


Relationship (Law of Large Numbers)

The relationship between theoretical and experimental probability is explained by the **Law of Large Numbers**. This fundamental theorem in probability states that as the number of trials in a random experiment increases, the experimental probability of an event tends to converge towards its theoretical probability.

In simple terms, if you repeat a random experiment many, many times, the proportion of times a specific event occurs will get closer and closer to the event's theoretical probability.

So, while a small number of coin tosses might yield an experimental probability significantly different from 0.5, tossing the coin thousands or millions of times will almost certainly result in an experimental probability that is very close to 0.5. This is why experimental probability is considered an *estimate* of the theoretical probability.

Theoretical probability is about the ideal long-term expectation, while experimental probability is about what is observed in a finite set of trials and is used to estimate the theoretical probability when the theoretical probability cannot be easily calculated (e.g., in complex real-world scenarios).
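The Law of Large Numbers is easy to observe in a simulation. A minimal sketch (the function name is illustrative, and a fixed seed is used so runs are reproducible) tosses a simulated fair coin and prints the relative frequency of heads as the number of trials grows:

```python
import random

random.seed(0)  # fixed seed for reproducible runs

def experimental_probability_heads(num_tosses):
    """Toss a simulated fair coin num_tosses times; return the
    relative frequency (experimental probability) of heads."""
    heads = sum(random.random() < 0.5 for _ in range(num_tosses))
    return heads / num_tosses

# The estimates drift toward the theoretical value 0.5 as trials increase
for n in (10, 100, 10_000, 1_000_000):
    print(n, experimental_probability_heads(n))
```

With only 10 tosses the estimate can be far from 0.5; by a million tosses it is typically within a fraction of a percent of the theoretical probability.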



Measuring Empirical Probability (Experimental Probability)


Definition and Formula

The **Experimental Probability** (or Empirical Probability or relative frequency) of an event $E$ is an estimation of the likelihood of that event occurring, based on the actual outcomes observed from conducting a random experiment a number of times. It is calculated by observing how frequently the event occurs in a series of trials.

The definition is based on the ratio of the number of times the desired event occurs to the total number of times the experiment is conducted.

Formula for Experimental Probability:

$$P(\text{Event E}) \approx \frac{\text{Number of times Event E occurred}}{\text{Total number of trials in the experiment}}$$

... (1)

Let $f_E$ be the frequency of event E (the number of times event E occurred) and $N$ be the total number of trials performed.

Then, the experimental probability of E is:

$$P(E) \approx \frac{f_E}{N}$$

... (2)

The symbol "$\approx$" is used because this is an estimated probability based on a finite number of trials. As the number of trials ($N$) increases, the experimental probability tends to get closer to the true theoretical probability (if one exists).


Procedure for Measuring Experimental Probability

To determine the experimental probability of an event:

  1. Define the Experiment and Event:

    Clearly define the random experiment and the specific event ($E$) whose probability you want to estimate.

  2. Conduct Trials:

    Perform the random experiment a specific number of times. The more trials you conduct, the better your estimate of the probability will likely be. Let the total number of trials be $N$.

  3. Record Outcomes:

    For each trial, record the outcome.

  4. Count Favourable Outcomes:

    Go through the recorded outcomes and count how many times the event $E$ occurred. This count is the frequency of event $E$, denoted by $f_E$.

  5. Calculate the Experimental Probability:

    Divide the frequency of the event ($f_E$) by the total number of trials ($N$) using the formula $P(E) = f_E / N$.
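The five steps above can be sketched as a single Python function (the name and signature are illustrative): it takes a recorded list of trial outcomes and a set describing the event, counts the favourable trials, and divides by $N$:

```python
def experimental_probability(outcomes, event):
    """Estimate P(event) as f_E / N from recorded trial outcomes.

    outcomes : list of observed results, one per trial (N = len(outcomes))
    event    : set of outcomes that count as the event occurring
    """
    if not outcomes:
        raise ValueError("at least one trial is required")
    f_E = sum(1 for o in outcomes if o in event)  # step 4: count favourable trials
    return f_E / len(outcomes)                    # step 5: f_E / N

# Usage: 10 recorded die rolls, estimating P(even number)
rolls = [2, 5, 6, 1, 4, 4, 3, 6, 2, 5]
print(experimental_probability(rolls, {2, 4, 6}))  # 0.6
```

With only 10 trials the estimate (0.6) is rough; repeating the experiment with more trials would be expected to bring it closer to the theoretical value 0.5.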


Example

Example 1. A standard six-sided die was thrown 150 times, and the frequency of each outcome was recorded as follows:

| Outcome | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Frequency | 25 | 30 | 28 | 22 | 25 | 20 |

Find the experimental probability of getting:

(a) A 3

(b) An even number

Answer:

Given: Results of rolling a die 150 times (frequencies of outcomes).

To Find: Experimental probabilities of getting a 3 and an even number.

Solution:

The total number of trials performed is $N = 150$.

**(a) Experimental Probability of getting a 3:**

Let Event E1 be "getting a 3".

From the table, the number of times a 3 occurred (Frequency of E1) is $f_{E1} = 28$.

Using the formula $P(E) = f_E / N$:

Experimental $P(\text{getting a 3}) = \frac{\text{Frequency of 3}}{\text{Total trials}}$

... (i)

$P(E1) = \frac{28}{150}$

... (ii)

We can simplify the fraction by dividing both numerator and denominator by their greatest common divisor, 2:

$P(E1) = \frac{\cancel{28}^{14}}{\cancel{150}_{75}} = \frac{14}{75}$

... (iii)

The experimental probability of getting a 3 is $\frac{14}{75}$ (or approximately 0.1867).

Note that the theoretical probability of getting a 3 on a fair die is $1/6 \approx 0.1667$. The experimental result is slightly higher due to random variation in the trials.

**(b) Experimental Probability of getting an even number:**

Let Event E2 be "getting an even number". This event occurs if the outcome is 2, 4, or 6.

From the table, the frequencies of these outcomes are:

  • Frequency of 2 = 30
  • Frequency of 4 = 22
  • Frequency of 6 = 20

The number of times event E2 occurred (Frequency of E2) is the sum of the frequencies of 2, 4, and 6:

$$f_{E2} = \text{Frequency(2)} + \text{Frequency(4)} + \text{Frequency(6)}$$

... (iv)

$$f_{E2} = 30 + 22 + 20 = 72$$

... (v)

Using the formula $P(E) = f_E / N$:

Experimental $P(\text{getting an even number}) = \frac{\text{Frequency of getting an even number}}{\text{Total trials}}$

... (vi)

$P(E2) = \frac{72}{150}$

... (vii)

We can simplify the fraction by dividing both numerator and denominator by their greatest common divisor, 6:

$P(E2) = \frac{\cancel{72}^{12}}{\cancel{150}_{25}} = \frac{12}{25}$

... (viii)

The experimental probability of getting an even number is $\frac{12}{25}$ (or 0.48).

Note that the theoretical probability of getting an even number on a fair die is $3/6 = 1/2 = 0.5$. The experimental result (0.48) is close to the theoretical probability, as expected with a moderately large number of trials.
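The whole worked example reduces to a few lines once the frequency table is stored as a dictionary. A sketch using `fractions.Fraction` to reproduce the exact answers:

```python
from fractions import Fraction

# Frequency table from the worked example (die thrown 150 times)
freq = {1: 25, 2: 30, 3: 28, 4: 22, 5: 25, 6: 20}
N = sum(freq.values())
print(N)  # 150

# (a) experimental P(getting a 3) = f_3 / N
p_three = Fraction(freq[3], N)
print(p_three)  # 14/75

# (b) experimental P(even number) = (f_2 + f_4 + f_6) / N
p_even = Fraction(freq[2] + freq[4] + freq[6], N)
print(p_even)  # 12/25
```

`Fraction` reduces each ratio automatically, giving the same simplified fractions as steps (iii) and (viii) above.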