Content On This Page

  • Introduction to Linear Programming
  • Formation of an L.P.P in Two Variables x and y
  • Graphical Method of Solving an L.P.P


Chapter 12 Linear Programming (Concepts)

This chapter introduces a powerful mathematical optimization technique known as Linear Programming (LP); a problem posed in this form is called a Linear Programming Problem (LPP). In a world constrained by limited resources, time, and budgets, individuals and organizations constantly face the challenge of making the best possible decisions: how to maximize profit, minimize cost, or allocate resources most effectively. Linear Programming provides a systematic, quantitative approach to such optimization problems, specifically when the underlying relationships can be described using linear equations and inequalities. It is a cornerstone of operations research and finds wide application in business, economics, engineering, and numerous other fields.

At the heart of any Linear Programming Problem lie three fundamental components: the decision variables whose values are to be determined, a linear objective function that is to be maximised or minimised, and a set of linear constraints (together with non-negativity restrictions) that the variables must satisfy. Each of these components is examined in detail later in this chapter.

For LPPs involving just two decision variables ($x$ and $y$), a highly intuitive and visual solution method is the graphical method. This chapter primarily focuses on mastering this technique, which involves the following systematic steps:

  1. Formulate the LPP: Carefully translate the given word problem into a precise mathematical model, clearly defining the objective function $Z$ and listing all the linear constraints, including non-negativity constraints.
  2. Graph the Constraints: Treat each inequality constraint as an equation to draw its corresponding boundary line on the Cartesian plane. Then, determine the half-plane region that satisfies the inequality (often by testing a point like the origin $(0,0)$). Remember to use solid lines for constraints with $\le$ or $\ge$ and dotted lines for strict inequalities ($<$ or $>$). The non-negativity constraints typically restrict the solution search to the first quadrant.
  3. Identify the Feasible Region: This is the geometric region representing all possible solutions that satisfy all constraints simultaneously. It is the common area of intersection of all the shaded half-planes corresponding to the constraints. The feasible region for an LPP is always a convex polygon (or an unbounded convex region).
  4. Determine Corner Points (Vertices): Identify the coordinates of all the vertices (corner points) of the feasible region. These points occur at the intersection of the boundary lines forming the region and are typically found by solving pairs of linear equations simultaneously.
  5. Evaluate Objective Function: Calculate the value of the objective function $Z$ at each corner point of the feasible region by substituting the coordinates of the vertices into the function $Z = ax + by$.
  6. Find Optimum Value: According to the fundamental theorem of linear programming, if an optimal solution exists for a bounded feasible region, it must occur at one (or sometimes more) of the corner points. Compare the values of $Z$ calculated in the previous step; the largest value represents the maximum, and the smallest value represents the minimum, as required by the problem.
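
To make Steps 5 and 6 concrete, comparing $Z$ over the corner points amounts to computing $ax + by$ at each vertex and picking out the largest and smallest results. The short Python sketch below illustrates this; the objective coefficients and corner points used are arbitrary placeholders, not taken from any particular problem.

```python
# Sketch of Steps 5-6: evaluate Z = ax + by at each corner point and compare.
# The coefficients and vertices here are illustrative placeholders.
a, b = 3, 2                                        # objective Z = 3x + 2y (assumed)
corner_points = [(0, 0), (4, 0), (3, 3), (0, 5)]   # assumed vertices

values = {point: a * point[0] + b * point[1] for point in corner_points}
best_max = max(values, key=values.get)
best_min = min(values, key=values.get)
print("Z at corner points:", values)
print("Maximum of Z:", values[best_max], "at", best_max)
print("Minimum of Z:", values[best_min], "at", best_min)
```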

We also discuss special cases that can arise: unbounded feasible regions, where the objective function might increase or decrease indefinitely (no finite optimum), or might still have an optimum at a corner point; infeasible problems, where the constraints are contradictory, resulting in no feasible region and thus no solution; and problems admitting multiple optimal solutions, which occurs when the objective function line is parallel to one of the edges of the feasible region corresponding to the optimal value. Typical application areas explored include manufacturing problems (optimizing production mix), diet problems (minimizing cost while meeting nutritional needs), and basic transportation problems.



Introduction to Linear Programming

Linear Programming (LP), also known as Linear Optimisation, is a mathematical technique used to find the best outcome (maximum profit, lowest cost, etc.) in a mathematical model whose requirements are represented by linear relationships. It's a fundamental tool within the broader field of mathematical optimisation. Essentially, it involves optimising (either maximising or minimising) a linear function, called the objective function, subject to a set of linear inequalities or equalities, called constraints.

Linear programming is widely applied in various domains for decision-making problems where limited resources must be allocated or utilised in an optimal manner. Examples include resource allocation in manufacturing, transportation planning, financial planning, production scheduling, and diet planning.


Key Components of a Linear Programming Problem (LPP)

Every Linear Programming Problem is characterised by the following essential components:

1. Decision Variables

These are the unknown quantities that need to be determined to find the optimal solution. They represent the levels of activities or amounts of resources to be used. Typically, these are denoted by symbols like $x_1, x_2, \dots, x_n$. The values of decision variables must be determined by solving the LPP. By their nature in most practical applications (like quantities of products, hours of labour, etc.), decision variables are usually restricted to be non-negative.

2. Objective Function

This is the mathematical expression that represents the overall goal of the problem. It is always a linear function of the decision variables. The objective is either to Maximise this function (e.g., maximise profit, revenue, output) or to Minimise it (e.g., minimise cost, time, resource usage). It is commonly denoted by $Z$.

A general form of the objective function with $n$ decision variables $x_1, x_2, \dots, x_n$ is:

$Z = c_1 x_1 + c_2 x_2 + \dots + c_n x_n$

Here, $c_1, c_2, \dots, c_n$ are constants representing the coefficients of the decision variables in the objective function (e.g., profit per unit, cost per hour).

3. Constraints

These are the limitations or restrictions on the decision variables that arise from the availability of resources, capacity, demand, regulations, or other real-world conditions. Constraints are always expressed as linear inequalities or linear equations involving the decision variables.

A general set of $m$ constraints for an LPP with $n$ decision variables is:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \ \ (\leq, =, \geq) \ \ b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \ \ (\leq, =, \geq) \ \ b_2$

...

$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \ \ (\leq, =, \geq) \ \ b_m$

Here, $a_{ij}$ are coefficients representing the amount of resource $i$ used by one unit of activity $j$, and $b_i$ are constants representing the total availability or requirement of resource $i$. The symbol $(\leq, =, \geq)$ indicates that a constraint can be an inequality of type 'less than or equal to', 'greater than or equal to', or an exact 'equality'.

4. Non-negativity Restrictions

These are special types of constraints that require the decision variables to take non-negative values. In most practical scenarios, quantities like production units, labour hours, etc., cannot be negative.

$x_1 \geq 0, x_2 \geq 0, \dots, x_n \geq 0$

These restrictions define the domain for the decision variables and are a standard part of almost all LPPs.


Mathematical Formulation of an LPP

Combining the key components, a Linear Programming Problem can be formally stated mathematically as follows:

Optimize (Maximise or Minimise) the Objective Function:

$Z = c_1 x_1 + c_2 x_2 + \dots + c_n x_n$

Subject to the Constraints:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \ \ (\leq, =, \geq) \ \ b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \ \ (\leq, =, \geq) \ \ b_2$

...

$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n \ \ (\leq, =, \geq) \ \ b_m$

and the Non-negativity Restrictions:

$x_1 \geq 0, x_2 \geq 0, \dots, x_n \geq 0$

The essence of an LPP is that all relationships involved (the objective function and all constraints) must be linear in the decision variables. This linearity simplifies the problem and allows for efficient solution methods.
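
Because every relationship in an LPP is linear, the whole problem is fully described by the coefficient lists $c_j$, $a_{ij}$, and $b_i$. The sketch below is a minimal illustration of this idea in Python, using made-up numbers rather than any problem from the text: it stores the data of a small LPP with "$\leq$" constraints and checks whether a candidate assignment of values is feasible and what objective value it yields, anticipating the terminology defined next.

```python
# Representing an LPP with all "<=" constraints as plain lists of coefficients.
# The numbers below are illustrative assumptions, not taken from the text.
c = [3, 5]                    # objective coefficients: Z = 3*x1 + 5*x2
A = [[1, 2],                  # a_ij: coefficients of the constraints
     [3, 1]]
b = [10, 12]                  # right-hand sides b_i

def objective_value(x):
    """Z = c1*x1 + ... + cn*xn for a candidate solution x."""
    return sum(ci * xi for ci, xi in zip(c, x))

def is_feasible(x):
    """Check all constraints a_i . x <= b_i and non-negativity x >= 0."""
    meets_constraints = all(
        sum(aij * xj for aij, xj in zip(row, x)) <= bi
        for row, bi in zip(A, b)
    )
    return meets_constraints and all(xj >= 0 for xj in x)

x = [2, 3]                                   # a candidate solution
print(is_feasible(x), objective_value(x))    # True, Z = 21 (since 8 <= 10 and 9 <= 12)
```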


Terminology in LPP

Understanding the standard terminology is crucial for working with Linear Programming Problems:

Solution

A set of values assigned to the decision variables ($x_1, x_2, \dots, x_n$) is called a solution to the LPP.

Feasible Solution

A feasible solution is a solution (a set of values for the decision variables) that satisfies all the constraints, including the non-negativity restrictions.

Feasible Region (or Solution Region)

The feasible region is the set of all feasible solutions. In the case of an LPP with two variables ($x$ and $y$), the feasible region can be represented graphically as the area on the coordinate plane that satisfies all the given constraints simultaneously. This region is always a convex polygon (or may be unbounded).

Optimal Solution

An optimal solution is a feasible solution that results in the optimal value (either maximum or minimum, as required by the objective function) of the objective function. There might be a unique optimal solution, infinitely many optimal solutions, or no optimal solution.

Objective Value

The objective value is the value of the objective function ($Z$) for a given feasible solution. The goal of solving an LPP is to find the feasible solution that yields the optimal objective value.



Formation of an L.P.P in Two Variables x and y

Translating a real-world problem into the mathematical framework of a Linear Programming Problem is the crucial first step in solving it using optimization techniques. This process involves carefully reading the problem statement, identifying the key elements, and expressing them in terms of mathematical equations and inequalities. For problems involving only two decision variables, this formulation allows us to solve them using the graphical method, which will be discussed in the next section.

The process of formulating an LPP typically involves the following steps:


Steps for Forming an LPP

Here are the systematic steps to formulate a Linear Programming Problem, especially focusing on problems with two decision variables (say, $x$ and $y$):

1. Identify the Decision Variables

Read the problem carefully and identify the quantities that you need to determine to arrive at the optimal solution. These are your decision variables. Represent them with symbols, commonly $x$ and $y$ for a two-variable problem. Clearly define what each variable represents (e.g., number of units of product A, number of hours of labour, quantity of ingredient X). Since, in most practical problems, these quantities cannot be negative, you must include non-negativity restrictions for each variable.

Let $x$ and $y$ be the decision variables.

Non-negativity restrictions: $x \geq 0, y \geq 0$

2. Formulate the Objective Function

Identify the goal of the problem: Is it to maximise something (like profit, revenue, sales) or to minimise something (like cost, time, distance)? This quantity to be optimised is the objective function. Express this objective as a linear mathematical function of your decision variables ($x$ and $y$). If the objective is profit ($Z$), and profit per unit of $x$ is $c_1$ and per unit of $y$ is $c_2$, the objective function will be $Z = c_1 x + c_2 y$.

Objective Function: Optimize $Z = ax + by$ (where 'Optimize' means Maximize or Minimize)

Here, $a$ and $b$ are constants (coefficients) derived from the problem statement (e.g., profit/cost per unit of $x$ and $y$).

3. Formulate the Constraints

Identify all the limitations or restrictions mentioned in the problem. These could relate to available resources (like raw materials, machine hours, labour hours), capacity limits, demand requirements, budget limits, etc. Translate each limitation into a linear inequality or equation involving $x$ and $y$. Pay close attention to keywords such as 'at most', 'not more than', or 'maximum' (these translate to $\leq$); 'at least', 'not less than', or 'minimum' (these translate to $\geq$); and 'exactly' or 'equal to' (these translate to $=$).

Each constraint will typically be in the form $a_i x + b_i y \leq c_i$, $a_i x + b_i y \geq c_i$, or $a_i x + b_i y = c_i$, where $a_i, b_i,$ and $c_i$ are constants based on the problem data.

Example Constraints: $a_1 x + b_1 y \leq c_1$, $a_2 x + b_2 y \geq c_2$, etc.

4. Include Non-negativity Restrictions

Although mentioned in step 1, it's important to list these explicitly as part of the constraints set in the final formulation. These are standard constraints in almost all LPPs:

$x \geq 0, \ y \geq 0$

Once these steps are completed, you will have the complete mathematical formulation of the Linear Programming Problem, consisting of the objective function to be optimized, subject to a set of linear constraints and the non-negativity restrictions.


Example 1. A manufacturer produces two types of toys, A and B. Three machines are needed for this purpose. The first machine (cutting) takes 12 minutes per toy of type A and 6 minutes per toy of type B. The second machine (assembling) takes 10 minutes per toy of type A and 20 minutes per toy of type B. The third machine (finishing) takes 8 minutes per toy of type A and 15 minutes per toy of type B. The first machine is available for at most 3 hours, the second machine for at most 4 hours, and the third machine for at most 5 hours. The profit per toy of type A is $\textsf{₹}$ 50, and per toy of type B is $\textsf{₹}$ 60. Formulate this problem as a linear programming problem to maximize the total profit.

Answer:

Let's follow the steps to formulate this LPP:

Step 1: Identify the Decision Variables

The quantities we need to decide are the number of toys of each type to be produced.

Let $x$ be the number of toys of type A produced.

Let $y$ be the number of toys of type B produced.

Since the number of toys cannot be negative, we have the non-negativity restrictions:

$x \geq 0$

... (1)

$y \geq 0$

... (2)

Step 2: Formulate the Objective Function

The objective is to maximize the total profit.

Profit from type A toys = $50x$

Profit from type B toys = $60y$

Total Profit $Z = 50x + 60y$. We want to maximise Z.

Maximize $Z = 50x + 60y$

[Objective Function]

Step 3: Formulate the Constraints

The constraints are based on the available machine time. The time units for toy production are in minutes, while machine availability is in hours. We convert hours to minutes for consistency:

3 hours = $3 \times 60 = 180$ minutes

4 hours = $4 \times 60 = 240$ minutes

5 hours = $5 \times 60 = 300$ minutes

Constraint 1 (Machine 1 - Cutting):

Time required for $x$ toys of type A = $12x$ minutes.

Time required for $y$ toys of type B = $6y$ minutes.

Total time on Machine 1 = $12x + 6y$.

Availability of Machine 1 is at most 180 minutes.

$12x + 6y \leq 180$

Dividing by 6, we simplify this constraint:

$2x + y \leq 30$

... (3) [Machine 1 Constraint]

Constraint 2 (Machine 2 - Assembling):

Time required for $x$ toys of type A = $10x$ minutes.

Time required for $y$ toys of type B = $20y$ minutes.

Total time on Machine 2 = $10x + 20y$.

Availability of Machine 2 is at most 240 minutes.

$10x + 20y \leq 240$

Dividing by 10, we simplify this constraint:

$x + 2y \leq 24$

... (4) [Machine 2 Constraint]

Constraint 3 (Machine 3 - Finishing):

Time required for $x$ toys of type A = $8x$ minutes.

Time required for $y$ toys of type B = $15y$ minutes.

Total time on Machine 3 = $8x + 15y$.

Availability of Machine 3 is at most 300 minutes.

$8x + 15y \leq 300$

... (5) [Machine 3 Constraint]

Mathematical Formulation of the LPP

Combining the objective function, constraints, and non-negativity restrictions, the linear programming problem is formulated as follows:

Maximize $Z = 50x + 60y$

Subject to the constraints:

$2x + y \leq 30$

$x + 2y \leq 24$

$8x + 15y \leq 300$

and the non-negativity restrictions:

$x \geq 0, y \geq 0$
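
Solving this formulation is taken up graphically in the next section. Purely as an optional numerical cross-check (not part of the formulation exercise), a library solver such as SciPy's linprog can handle it; a minimal sketch, assuming SciPy is installed, is given below. Note that linprog minimises, so maximising $Z = 50x + 60y$ means minimising $-50x - 60y$.

```python
# Optional numerical check of the formulation using scipy.optimize.linprog.
# linprog minimises, so maximising Z = 50x + 60y means minimising -50x - 60y.
from scipy.optimize import linprog

c = [-50, -60]                       # negated profit coefficients
A_ub = [[2, 1],                      # 2x +  y <= 30   (cutting)
        [1, 2],                      #  x + 2y <= 24   (assembling)
        [8, 15]]                     # 8x + 15y <= 300 (finishing)
b_ub = [30, 24, 300]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # default bounds are x >= 0, y >= 0
print(res.x, -res.fun)               # expected: about x = 12, y = 6, Z = 960
```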



Graphical Method of Solving an L.P.P

The Graphical Method is a visual and intuitive technique used to solve Linear Programming Problems (LPPs) that involve only two decision variables. This method is suitable for understanding the fundamental concepts of feasible region and optimal solutions. For LPPs with more than two variables, graphical representation becomes impossible, and other methods like the Simplex Method are used.

The graphical method involves plotting the system of linear inequalities (constraints) on a two-dimensional graph. The region that satisfies all constraints simultaneously is identified as the feasible region. The optimal solution is then found by evaluating the objective function at the vertices (corner points) of this feasible region.


Steps for Solving an LPP Graphically

Consider a Linear Programming Problem with two decision variables, $x$ and $y$, which can be formulated as follows:

Optimize (Maximize or Minimize) $Z = ax + by$

[Objective Function]

Subject to a set of $m$ linear inequalities (constraints) in $x$ and $y$, such as $A_i x + B_i y (\leq, =, \geq) C_i$ for $i = 1, 2, \dots, m$,

and the non-negativity restrictions:

$x \geq 0$

$y \geq 0$

Here are the steps to solve such an LPP using the graphical method:

Step 1: Plot the Constraint Lines

For each inequality constraint (e.g., $Ax + By \leq C$ or $Ax + By \geq C$), treat it temporarily as a linear equation ($Ax + By = C$). Plot the graph of each of these linear equations on the xy-plane. To plot a straight line, you need at least two points. The easiest points to find are often the intercepts with the axes (by setting $x=0$ to find the y-intercept and setting $y=0$ to find the x-intercept), provided they are within a reasonable range for plotting.

Step 2: Determine the Feasible Region

The feasible region is the area on the graph that satisfies *all* the constraints simultaneously. For each linear inequality, determine the half-plane (side of the line) that satisfies it; a convenient check is to substitute a test point not on the line, such as the origin $(0,0)$, into the inequality: if the point satisfies the inequality, shade the side containing it, otherwise shade the other side.

The feasible region of the LPP is the intersection of all the individual feasible regions determined by each constraint (including non-negativity). This region is always a convex polygon. It can be bounded (enclosed on all sides) or unbounded (extending infinitely in one or more directions).

Step 3: Identify the Corner Points (Vertices) of the Feasible Region

The corner points (or vertices) of the feasible region are the points where the boundary lines of the feasible region intersect. These points are crucial for finding the optimal solution. Calculate the coordinates of each corner point. This may involve solving systems of linear equations formed by the intersecting boundary lines.

Step 4: Evaluate the Objective Function at Each Corner Point

Substitute the coordinates $(x, y)$ of each corner point identified in Step 3 into the objective function $Z = ax + by$. Calculate the value of $Z$ for each vertex.

Step 5: Determine the Optimal Solution

According to the fundamental theorem of linear programming (discussed below), the optimal value of the objective function (maximum or minimum) for a bounded feasible region always occurs at one of the corner points. Compare the values of $Z$ obtained in Step 4: the largest of these is the maximum value of $Z$ over the feasible region, and the smallest is the minimum value.
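
The five steps above are mechanical enough to automate for small problems. The sketch below is an illustrative implementation (not part of the standard chapter material): it enumerates pairwise intersections of the boundary lines, keeps only those points satisfying every constraint, and evaluates $Z$ at each. It assumes a bounded feasible region with constraints written in the form $ax + by \leq c$. Running it on the constraints of Example 1 below reproduces the answer found graphically.

```python
# An illustrative corner-point solver for a two-variable LPP (not part of the
# standard chapter material). It assumes a bounded feasible region and that
# every constraint is written in the form a*x + b*y <= c, with x, y >= 0.
from itertools import combinations

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2; return None if parallel."""
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

def corner_point_method(constraints, objective, maximize=True):
    """constraints: list of (a, b, c) meaning a*x + b*y <= c.
    objective: (a, b) for Z = a*x + b*y."""
    # Non-negativity written as -x <= 0 and -y <= 0.
    lines = list(constraints) + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    vertices = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
        point = solve_2x2(a1, b1, c1, a2, b2, c2)
        if point is None:
            continue
        x, y = point
        # Keep only intersection points that satisfy every constraint.
        if all(a * x + b * y <= c + 1e-9 for a, b, c in lines):
            vertices.append((x, y))
    a_obj, b_obj = objective
    values = [(a_obj * x + b_obj * y, (x, y)) for x, y in vertices]
    return max(values) if maximize else min(values)

# Check against Example 1 below: Maximize Z = 4x + y,
# subject to x + y <= 5 and 4x + y <= 14, x >= 0, y >= 0.
print(corner_point_method([(1, 1, 5), (4, 1, 14)], (4, 1)))
# expected: Z = 14, attained at (3.5, 0) (and also at (3, 2))
```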


Corner Point Theorem

The reason why we only need to check the corner points is based on a fundamental theorem of linear programming:

Let $R$ be the feasible region for a linear programming problem, and let $Z = ax + by$ be the objective function. Theorem 1: When $Z$ has an optimal value (maximum or minimum), subject to the constraints described by linear inequalities, this optimal value must occur at a corner point (vertex) of $R$. Theorem 2: If $R$ is bounded, then $Z$ has both a maximum and a minimum value on $R$, and each of these occurs at a corner point of $R$.

For unbounded regions, if the objective function to be maximised can increase indefinitely within the feasible region, the maximum does not exist. Similarly, if the objective function to be minimised can decrease indefinitely, the minimum does not exist. If an optimal value exists in an unbounded region, it will be at one of the corner points.


Iso-Profit or Iso-Cost Line Method (Alternative Visualization)

While evaluating corner points is sufficient, the graphical method can also be understood using the concept of Iso-profit lines (for maximisation) or Iso-cost lines (for minimisation). An iso-profit (or iso-cost) line is the set of points at which the objective function takes a fixed value $k$, i.e. the line $ax + by = k$; different values of $k$ give a family of parallel lines. Sliding such a line across the feasible region in the direction that increases $Z$ (for maximisation) or decreases $Z$ (for minimisation), the last point or edge of the feasible region that the line touches gives the optimal solution.

This method provides a visual confirmation of the corner point theorem.


Example 1. Solve the following LPP graphically:

Maximize $Z = 4x + y$

Subject to the constraints:

$x + y \leq 5$

$4x + y \leq 14$

$x \geq 0$

$y \geq 0$

Answer:

The given Linear Programming Problem is:

Maximize $Z = 4x + y$

Subject to the constraints:

1. $x + y \leq 5$

... (1)

2. $4x + y \leq 14$

... (2)

3. $x \geq 0$

... (3)

4. $y \geq 0$

... (4)

Step 1: Plot the Constraint Lines

We plot the lines corresponding to the equality forms of the constraints:

  • Line for constraint (1): $x + y = 5$
    • If $x=0$, $y=5$. Point is $(0, 5)$.
    • If $y=0$, $x=5$. Point is $(5, 0)$.
    Plot the line passing through $(0, 5)$ and $(5, 0)$.
  • Line for constraint (2): $4x + y = 14$
    • If $x=0$, $y=14$. Point is $(0, 14)$.
    • If $y=0$, $4x=14 \implies x=\frac{14}{4} = 3.5$. Point is $(3.5, 0)$.
    Plot the line passing through $(0, 14)$ and $(3.5, 0)$.
  • Constraint (3), $x \geq 0$, corresponds to the region on and to the right of the y-axis.
  • Constraint (4), $y \geq 0$, corresponds to the region on and above the x-axis.

Step 2: Determine the Feasible Region

We need the region that satisfies all inequalities (1), (2), (3), and (4).

  • For $x + y \leq 5$, test the origin $(0,0)$: $0 + 0 \leq 5 \implies 0 \leq 5$. This is true. The feasible region for this constraint is the half-plane containing the origin, i.e., below or on the line $x+y=5$.
  • For $4x + y \leq 14$, test the origin $(0,0)$: $4(0) + 0 \leq 14 \implies 0 \leq 14$. This is true. The feasible region for this constraint is the half-plane containing the origin, i.e., below or on the line $4x+y=14$.
  • Constraints $x \geq 0$ and $y \geq 0$ restrict the feasible region to the first quadrant.

The feasible region is the area in the first quadrant that lies below or on both lines $x+y=5$ and $4x+y=14$. This region is a bounded convex polygon.

Graph showing constraint lines and feasible region for the LPP example.
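
The figure from the original page is not reproduced here; the following matplotlib sketch (an assumed reconstruction, not the original graphic) redraws the two constraint lines and shades the region satisfying all four constraints.

```python
# An assumed reconstruction of the figure: the lines x + y = 5 and 4x + y = 14,
# with the region satisfying all four constraints shaded in the first quadrant.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 6, 200)
plt.plot(x, 5 - x, label="x + y = 5")
plt.plot(x, 14 - 4 * x, label="4x + y = 14")

# Feasible y-values lie between 0 and the lower of the two lines.
y_upper = np.minimum(5 - x, 14 - 4 * x)
plt.fill_between(x, 0, np.maximum(y_upper, 0), where=y_upper > 0, alpha=0.3)

plt.xlim(0, 6)
plt.ylim(0, 6)
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```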

Step 3: Identify the Corner Points

The corner points (vertices) of the feasible region are the points where the boundary lines intersect. From the graph, the vertices are:

A: Intersection of $x=0$ and $y=0$. This is the origin. Coordinates: $(0, 0)$.

B: Intersection of $y=0$ and $4x+y=14$. Substitute $y=0$ into the second equation: $4x + 0 = 14 \implies 4x = 14 \implies x = \frac{14}{4} = 3.5$. Coordinates: $(3.5, 0)$.

C: Intersection of $x+y=5$ and $4x+y=14$. We solve this system of equations:

$\begin{aligned} 4x + y &= 14 \\ x + y &= 5 \end{aligned}$

Subtracting the second equation from the first:

$(4x + y) - (x + y) = 14 - 5 \implies 3x = 9$

From $3x = 9$, we get $x = 3$.

Substitute $x=3$ into $x+y=5$: $3 + y = 5 \implies y = 5 - 3 = 2$. Coordinates: $(3, 2)$.

D: Intersection of $x=0$ and $x+y=5$. Substitute $x=0$ into the first equation: $0 + y = 5 \implies y = 5$. Coordinates: $(0, 5)$.

The corner points of the feasible region are A(0, 0), B(3.5, 0), C(3, 2), and D(0, 5).
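
The simultaneous solution for point C can also be cross-checked numerically; a small, purely illustrative sketch using NumPy:

```python
# Numerical cross-check of corner point C: intersection of x + y = 5 and 4x + y = 14.
import numpy as np

A = np.array([[1.0, 1.0],     # coefficients of x + y = 5
              [4.0, 1.0]])    # coefficients of 4x + y = 14
b = np.array([5.0, 14.0])
print(np.linalg.solve(A, b))  # expected: [3. 2.]
```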

Step 4: Evaluate the Objective Function at Each Corner Point

We evaluate the objective function $Z = 4x + y$ at each corner point:

  • At A(0, 0): $Z = 4(0) + 0 = 0$.
  • At B(3.5, 0): $Z = 4(3.5) + 0 = 14 + 0 = 14$.
  • At C(3, 2): $Z = 4(3) + 2 = 12 + 2 = 14$.
  • At D(0, 5): $Z = 4(0) + 5 = 0 + 5 = 5$.

Step 5: Determine the Optimal Solution

We want to maximize $Z$. Comparing the values of $Z$ at the corner points (0, 14, 14, 5), the maximum value is 14.

The maximum value of $Z$ is 14, and it occurs at two distinct corner points: $(3.5, 0)$ and $(3, 2)$.

According to the theorem, if the maximum value of the objective function occurs at two corner points, it also occurs at every point on the line segment joining these two points.

Thus, the optimal solution is any point $(x, y)$ on the line segment connecting $(3.5, 0)$ and $(3, 2)$. The maximum value of the objective function is 14.
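
To see this concretely, every point of that segment is a convex combination of the two optimal vertices, and the objective value stays at 14 throughout; a short illustrative check (not from the source):

```python
# Check that Z = 4x + y equals 14 at every point on the segment joining the
# two optimal corner points (3.5, 0) and (3, 2).
P, Q = (3.5, 0.0), (3.0, 2.0)

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = (1 - t) * P[0] + t * Q[0]   # convex combination of P and Q
    y = (1 - t) * P[1] + t * Q[1]
    print(round(4 * x + y, 6))      # prints 14.0 for every t
```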