Linear Programming


Linear programming is an optimization technique for a system of linear constraints and a linear objective function. An objective function defines the quantity to be optimized, and the goal of linear programming is to find the values of the variables that maximize or minimize the objective function.

A factory manufactures doodads and whirligigs. It costs $2 and takes 3 hours to produce a doodad. It costs $4 and takes 2 hours to produce a whirligig. The factory has $220 and 150 hours this week to produce these products. If each doodad sells for $6 and each whirligig sells for $7, then how many of each product should be manufactured this week in order to maximize profit? This kind of problem is perfectly suited to linear programming techniques: all of the quantifiable relationships in the problem are linear, the values of the variables are constrained in some way, and the goal is to find values of the variables that will maximize some quantity.

Linear programming is useful for many problems that require an optimization of resources. It could be applied to manufacturing, to calculate how to assign labor and machinery to minimize cost of operations. It could be applied in high-level business operations, to decide which products to sell and in what quantity in order to maximize profit. It could also be applied in logistics, to decide how to apply resources to get a job done in the minimum amount of time.

When to use Linear Programming

This page covers linear programming in two variables, the simplex algorithm, special cases of the simplex algorithm, and the big-M method.

Linear programming can be used to solve a problem when the goal of the problem is to maximize or minimize some value and there is a linear system of inequalities that defines the constraints on the problem.

A constraint is an inequality that defines how the values of the variables in a problem are limited. In order for linear programming techniques to work, all constraints should be linear inequalities.

Returning to the example in the introduction:

Note that there is a cost associated with producing each part. Each doodad costs $2 to make and each whirligig costs $4 to make. The factory only has $220 available to spend on these costs, so the production is limited by cost. Let \(x\) be the number of doodads produced, and let \(y\) be the number of whirligigs produced. Then this constraint can be written as an inequality: \[2x+4y \le 220.\] There is also the limitation on how much time the factory has to produce these parts. Each doodad requires 3 hours to make and each whirligig requires 2 hours to make. The factory only has 150 hours available this week, so production is also limited by time. This constraint can be written as an inequality: \[3x+2y \le 150.\] In addition to these constraints, there are also a couple of "common sense" constraints. It's not possible to produce less than 0 of any part, so these constraints are also written: \[\begin{align} x &\ge 0 \\ y &\ge 0. \end{align}\] These are called non-negative constraints. Altogether, the constraints form a system of inequalities: \[\begin{cases}\begin{align} 2x+4y &\le 220 \\ 3x+2y &\le 150 \\ x &\ge 0 \\ y &\ge 0. \end{align}\end{cases}\]

Graphing these inequalities in the coordinate plane creates a polygon shape.

Graph the system of constraints \[\begin{cases}\begin{align} 2x+4y &\le 220 \\ 3x+2y &\le 150 \\ x &\ge 0 \\ y &\ge 0. \end{align}\end{cases}\]

The region where the shaded half-planes of these inequalities overlap is the feasible region of this problem.

The region that is bounded by the system of constraints is called the feasible region. It represents the possible values of the variables that satisfy all of the constraints. In order for linear programming techniques to work, it should be a convex polytope (in 2 dimensions, a convex polygon; in 3 dimensions, a convex polyhedron; and so on).

Finding the feasible region gives only the possible solutions of a problem. The goal of linear programming is to find the best solution to a problem. This is done by maximizing or minimizing the objective function.

The objective function is a function that defines some quantity that should be minimized or maximized. The arguments of the objective function are the same variables that are used in the constraints. In order for linear programming techniques to work, the objective function should be linear.
Each doodad costs $2 to make and sells for $6. This gives a profit of $4 per doodad. Each whirligig costs $4 to make and sells for $7. This gives a profit of $3 per whirligig. The profit function can be defined as \[p(x,y)=4x+3y.\] This is the objective function of this problem, and the goal is to maximize it.

It seems like the strategy now would be to test ordered pairs in the feasible region until a maximum profit is found. However, a more efficient method is available.

Let \(P\) be the maximum profit in the feasible region: \[P=4x+3y.\] Solve for \(y\): \[y=-\frac{4}{3}x+\frac{P}{3}.\] This maximum profit gives the equation of a line, and whichever point in the feasible region this line passes through is the optimal solution. The \(y\)-intercept of this line is \(\frac{P}{3}.\) Since \(P\) is maximized, this \(y\)-intercept should be maximized as well. Graph several lines with the same slope of \(-\frac{4}{3}.\) The line that maximizes the \(y\)-intercept is the one that passes through the vertex at \((20,45),\) the intersection of the first two constraints. Any higher line does not pass through the feasible region, and any lower line passes through more than one point of the feasible region, so it does not maximize the \(y\)-intercept. Therefore, the factory should produce 20 doodads and 45 whirligigs. This will give a profit of \($215.\) \(_\square\)
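This optimum is easy to confirm numerically. Below is a minimal cross-check using SciPy's linprog routine (an assumption: any LP library would do, and SciPy ≥ 1.6 is assumed for the "highs" method). Since linprog minimizes, the profit coefficients are negated.

```python
# Cross-check of the doodad/whirligig example with an off-the-shelf LP solver.
from scipy.optimize import linprog

c = [-4, -3]              # maximize 4x + 3y  ->  minimize -4x - 3y
A_ub = [[2, 4],           # cost:  2x + 4y <= 220
        [3, 2]]           # hours: 3x + 2y <= 150
b_ub = [220, 150]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)],
              method="highs")
print(res.x)      # optimal point: x = 20, y = 45
print(-res.fun)   # maximum profit: 215.0
```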

In the previous example, it was shown that the optimal solution was on a vertex of the feasible region. This is true for all linear programming problems.

Given a convex polygonal feasible region and a linear objective function, the solution that maximizes or minimizes the objective function will be located on one of the vertices of the feasible region.
Let the objective function be \(f(x,y)=ax+by.\) Let the maximum value of this function be \(P,\) and let the minimum value of this function be \(Q.\) There exist lines which intersect each of the optimal solutions, \((x,y)\): \[\begin{align} ax+by &= P &&\qquad (1) \\ ax+by &= Q &&\qquad (2) \\\\ \Rightarrow y &= -\frac{a}{b}x + \frac{P}{b} &&\qquad (1) \\ y &= -\frac{a}{b}x + \frac{Q}{b}. &&\qquad (2) \end{align}\] Since \(P\) is the maximum value of the objective function, \((1)\) has the maximum \(y\)-intercept of a line with slope \(-\frac{a}{b}\) that passes through the feasible region. Likewise, \((2)\) has the minimum \(y\)-intercept of a line with slope \(-\frac{a}{b}\) that passes through the feasible region. Suppose that \((1)\) or \((2)\) passes through a point that is not one of the vertices of the feasible region. Case 1. The intersection point is on a side of the feasible region that is parallel to lines \((1)\) and \((2).\) If this is the case, then line \((1)\) or \((2)\) also contains a vertex of the feasible region, so this cannot be so. Case 2. The intersection point is somewhere else within the feasible region. The line will intersect more than one point in the feasible region. Then there will exist another point within the feasible region, either higher or lower, that a line with a parallel slope intersects. This cannot be true if line \((1)\) or \((2)\) has the maximum or minimum \(y\)-intercept of a line that passes through the feasible region. Hence, \((1)\) and \((2)\) must intersect a vertex of the feasible region. Each optimal solution is located at a vertex of the feasible region. \(_\square\)

This theorem gives a simple method for finding the optimal solution to a linear programming problem in two variables.

Process for finding the optimal solution of a linear programming problem in two variables:

  1. Confirm that the feasible region is a convex polygon and the objective function is linear.
  2. Find the ordered pair of each vertex of the feasible region.
  3. Substitute each ordered pair into the objective function to find the solution that maximizes or minimizes the objective function.
A farmer feeds his cows a feed mix to supplement their foraging. The farmer uses two types of feed for the mix. Corn feed contains 100 g protein per kg and 750 g starch per kg. Wheat feed contains 150 g protein per kg and 700 g starch per kg. Each cow should be fed at most 7 kg of feed per day. The farmer would like each cow to receive at least 650 g protein and 4000 g starch per day. If corn feed costs $0.40/kg and wheat costs $0.45/kg, then what is the optimal feed mix that minimizes cost? Round your answers to the nearest gram. Let \(c\) be the kilograms of corn feed per cow per day, and let \(w\) be the kilograms of wheat feed per cow per day. The system of constraints can be written: \[\begin{cases} \begin{align} 0.1c+0.15w &\ge 0.65 \\ 0.75c+0.7w &\ge 4 \\ c+w &\le 7 \\ c &\ge 0 \\ w &\ge 0. \end{align} \end{cases}\] The objective function is \[\text{Minimize:} \ f(c,w)=0.40c+0.45w.\] Graphing the system of constraints gives an idea of where the vertices of the feasible region are, and which lines intersect to form them: Solve for each vertex of the feasible region by solving each pair of intersecting lines as a system of equations. For example, to solve for the vertex within the \(1^\text{st}\) quadrant, solve the system of equations \[\begin{cases} \begin{align} 0.1c+0.15w &= 0.65 \\ 0.75c +0.7w &= 4. \end{align} \end{cases}\] Solving this system gives \(c \approx 3.411\) and \(w \approx 2.059.\) These values can be substituted into the objective function to obtain the cost of this mix: \[f(3.411,2.059) = \$2.29.\] Note that it is not necessary to solve for every vertex. Since the problem requires a minimum and the objective function line has a negative slope, the optimal solution must be on the underside of the feasible region. Solve for these vertices: \[\begin{cases} \begin{align} 0.75c +0.7w &= 4 \\ c &= 0 \end{align} \end{cases} \implies c=0, w \approx 5.714 \implies f(0,5.714)=\$2.57 \] \[\begin{cases} \begin{align} 0.1c+0.15w &= 0.65 \\ w &= 0 \end{align} \end{cases} \implies c=6.5, w=0 \implies f(6.5,0)=\$2.60. \] The feed mix that minimizes cost contains 3411 g corn and 2059 g wheat. It costs $2.29 per cow. \(_\square\)
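The vertex-checking process above lends itself to a brute-force sketch. The following is an illustrative implementation, assuming NumPy is available (the helper names are made up for this example): it intersects every pair of boundary lines of the feed-mix problem, discards infeasible points, and evaluates the cost at each remaining vertex.

```python
# Brute-force vertex enumeration for the feed-mix example.
from itertools import combinations
import numpy as np

# Boundary lines a . (c, w) = rhs for the five constraints.
A = np.array([[0.10, 0.15],   # protein: 0.1c + 0.15w >= 0.65
              [0.75, 0.70],   # starch:  0.75c + 0.7w >= 4
              [1.00, 1.00],   # total feed: c + w <= 7
              [1.00, 0.00],   # c >= 0
              [0.00, 1.00]])  # w >= 0
rhs = np.array([0.65, 4.0, 7.0, 0.0, 0.0])
sense = np.array([1, 1, -1, 1, 1])        # +1 for >=, -1 for <=

def cost(p):
    return 0.40 * p[0] + 0.45 * p[1]      # objective to minimize

best = None
for i, j in combinations(range(5), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:     # parallel lines: no vertex
        continue
    p = np.linalg.solve(M, rhs[[i, j]])   # intersection of the two lines
    if (sense * (A @ p - rhs) >= -1e-9).all():   # keep feasible vertices only
        if best is None or cost(p) < cost(best):
            best = p

print(best, cost(best))   # approx. [3.412, 2.059] and 2.291, as found above
```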

A manufacturer has 750 meters of cotton and 1000 meters of polyester. Production of a sweatshirt requires 1 meter of cotton and 2 meters of polyester, while production of a shirt requires 1.5 meters of cotton and 1 meter of polyester. The sale prices of a sweatshirt and a shirt are 30 € and 24 €, respectively. What are the number of sweatshirts \((S)\) and the number of shirts \((C)\) that maximize total sales?

Submit \(S + C.\)

Jordan has $100 to buy some comic books. He really likes the Star Wars books, which cost $12 each, but he could also buy the Marvel books, which cost $5 each. If he has to buy at least 12 books, what is the maximum number of Star Wars books that he can buy?

An amateur bodybuilder is looking for supplement protein bars to build his muscle fast, and there are 2 available products: protein bar A and protein bar B.

Each protein bar A contains 15 g of protein and 30 g of carbohydrates and has total 200 calories. On the other hand, each protein bar B contains 30 g of protein and 20 g of carbohydrates and has total 240 calories.

According to his nutritional plan, this bodybuilder needs at least 20,000 calories from these supplements over the month, which must include at least 1,800 g of protein and at least 2,200 g of carbohydrates.

If each protein bar A costs $3 and each protein bar B costs $4, what is the least possible amount of money (in $) he can spend to meet all his one-month requirements?

This kind of method would also work for linear optimization problems in more than two variables. However, these kinds of problems are more challenging to visualize with a coordinate graph, and there can be many more vertices to check for the optimal solution. The simplex algorithm was developed as an efficient method to solve these kinds of problems.

The simplex algorithm is a method to obtain the optimal solution of a linear system of constraints, given a linear objective function. It works by beginning at a basic vertex of the feasible region, and then iteratively moving to adjacent vertices, improving upon the solution each time until the optimal solution is found.

The simplex algorithm has many steps and rules, so it is helpful to understand the logic behind these steps and rules with a simple example before proceeding with the formal algorithm.

Given the system of constraints \[\begin{cases} \begin{align} 2x+3y &\le 90 \\ 3x+2y &\le 120 \\ x & \ge 0 \\ y & \ge 0, \end{align} \end{cases}\] maximize the objective function \[f(x,y)=7x+5y.\] The simplex algorithm begins by converting the constraints and objective function into a system of equations. This is done by introducing new variables called slack variables. Slack variables represent the positive difference, or slack, between the left hand side of an inequality and the right hand side of that inequality. The inequality \[2x+3y \le 90\] becomes \[2x+3y+s_1=90.\] Likewise, the inequality \[3x+2y \le 120\] becomes \[3x+2y+s_2 = 120.\] In addition to the slack variables, a variable \(z\) is introduced to represent the value of the objective function. This gives the equation \[z-7x-5y=0.\] These equations give the system of equations \[\begin{cases} \begin{array}{cccccccccccc} z & - & 7x & - & 5y & & & & & = & 0 && (0) \\ & & 2x & + & 3y & + & s_1 & & & = & 90 && (1) \\ & & 3x & + & 2y & & & + & s_2 & = & 120. && (2) \end{array} \end{cases}\] In augmented matrix form, this is \[\left[\begin{array}{ccccc|c} 1 & -7 & -5 & 0 & 0 & 0 \\ 0 & 2 & 3 & 1 & 0 & 90 \\ 0 & 3 & 2 & 0 & 1 & 120 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \end{array}\] It is implied that all variables in this system (including \(s_1,\) \(s_2,\) and \(z\)) are greater than or equal to 0. The variables \(s_1\) and \(s_2\) have zero coefficients in row \((0)\) and are called basic variables. The variables \(x\) and \(y\) have non-zero coefficients in row \((0)\) and are called non-basic variables. At any point in this process, the basic solution is given by setting the non-basic variables to 0. Currently, the basic solution is \[x=0, \quad y=0, \quad s_1=90, \quad s_2=120, \quad z=0.\] Consider what effect increasing the values of the non-basic variables would have on the value of \(z.\) Increasing either \(x\) or \(y\) would cause \(z\) to also increase, because \(x\) and \(y\) have negative coefficients in row \((0).\) Thus, this is not the optimal solution. The iterations of the simplex algorithm involve exchanging basic variables and non-basic variables by using matrix row operations. At each step of the process, a non-basic variable in row \((0)\) is eliminated, leading another basic variable to take its place as a non-basic variable. This is called a pivot. Suppose \(x\) were to be eliminated in row \((0).\) This can be done with either row \((1)\) or row \((2).\) Case 1. Eliminating \(x\) in row \((0)\) with row \((1)\), \[\left[\begin{array}{ccccc|c} 1 & 0 & \frac{11}{2} & \frac{7}{2} & 0 & 315 \\ 0 & 2 & 3 & 1 & 0 & 90 \\ 0 & 3 & 2 & 0 & 1 & 120 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1) \\ (2) \end{array}\] During this pivot, the variable \(x\) entered as a basic variable, and the variable \(s_1\) left to become a non-basic variable. Now eliminate \(x\) in row \((2)\): \[\left[\begin{array}{ccccc|c} 1 & 0 & \frac{11}{2} & \frac{7}{2} & 0 & 315 \\ 0 & 2 & 3 & 1 & 0 & 90 \\ 0 & 0 & -\frac{5}{2} & -\frac{3}{2} & 1 & -15 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1) \\ (2)\vphantom{\frac{1}{2}} \end{array}\] This gives the basic solution \[x=45, \quad y=0, \quad s_1=0, \quad s_2=-15, \quad z=315.\] This solution is impossible, because it leads to one of the variables being negative. Case 2. 
Eliminating \(x\) in row \((0)\) with row \((2)\), \[\left[\begin{array}{ccccc|c} 1 & 0 & -\frac{1}{3} & 0 & \frac{7}{3} & 280 \\ 0 & 2 & 3 & 1 & 0 & 90 \\ 0 & 3 & 2 & 0 & 1 & 120 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1) \\ (2) \end{array}\] During this pivot, the variable \(x\) entered as a basic variable, and the variable \(s_2\) left to become a non-basic variable. Now eliminate \(x\) in row \((1)\): \[\left[\begin{array}{ccccc|c} 1 & 0 & -\frac{1}{3} & 0 & \frac{7}{3} & 280 \\ 0 & 0 & \frac{5}{3} & 1 & -\frac{2}{3} & 10 \\ 0 & 3 & 2 & 0 & 1 & 120 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2) \end{array}\] This gives the basic solution \[x=40, \quad y=0, \quad s_1=10, \quad s_2=0, \quad z=280.\] This solution is possible, but it is not optimal, because there is a negative coefficient in row \((0).\) This implies that \(z\) can be increased further by increasing \(y.\) Another pivot will be needed to find the optimal solution. It is a fair amount of work to perform a pivot, only to find that it gives an infeasible solution. Fortunately, one can anticipate which pivot will result in a feasible solution by observing the ratio of the element in the right part of the augmented matrix to the coefficient of the entering variable. Consider \(y\) as the entering variable, and calculate these ratios: \[\left[\begin{array}{ccccc|c} 1 & 0 & -\frac{1}{3} & 0 & \frac{7}{3} & 280 \\ 0 & 0 & {\color{red}\frac{5}{3}} & 1 & -\frac{2}{3} & {\color{red}10} \\ 0 & 3 & {\color{blue}2} & 0 & 1 & {\color{blue}120} \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2) \end{array}\] For entering variable \(y,\) this ratio is \(10\div \frac{5}{3}=6\) for row \((1)\) and \(\frac{120}{2}=60\) for row \((2).\) Selecting the row that minimizes this ratio will ensure that the pivot results in a feasible solution. Thus, row \((1)\) should be selected as the pivot row. Eliminating \(y\) in row \((0)\) with row \((1)\), \[\left[\begin{array}{ccccc|c} 1 & 0 & 0 & \frac{1}{5} & \frac{11}{5} & 282 \\ 0 & 0 & \frac{5}{3} & 1 & -\frac{2}{3} & 10 \\ 0 & 3 & 2 & 0 & 1 & 120 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2) \end{array}\] Then eliminating \(y\) in row \((2)\), \[\left[\begin{array}{ccccc|c} 1 & 0 & 0 & \frac{1}{5} & \frac{11}{5} & 282 \\ 0 & 0 & \frac{5}{3} & 1 & -\frac{2}{3} & 10 \\ 0 & 3 & 0 & -\frac{6}{5} & \frac{9}{5} & 108 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2)\vphantom{\frac{1}{2}} \end{array}\] This gives the basic solution \[x=36, \quad y=6, \quad s_1=0, \quad s_2=0, \quad z=282.\] This solution must be optimal, because any increase in the non-basic variables \(s_1\) and \(s_2\) will cause a decrease in \(z.\) Thus, the maximum value of the objective function is \[f(36,6)=282.\ _\square\]
Simplex Algorithm for Maximization

This version of the simplex algorithm is valid for a maximization problem with all constraints (except for the non-negative constraints) giving maximum values (using the \(\le\) symbol). In matrix form, the requirements are \[\begin{array}{ll} \text{Maximize:} & \textbf{c}^\text{T} \cdot \textbf{x} \\ \text{Subject to:} & \textbf{A}\textbf{x} \le \textbf{b}, \ \ x_i \ge 0, \end{array}\] where \(\textbf{c}\) is a vector containing the coefficients of the objective function, \(\textbf{x}\) is a vector containing the variables of the problem, \(\textbf{A}\) is a matrix containing the constraint coefficients, and \(\textbf{b}\) is a vector containing the maximum values of those constraints.

  1. Convert the constraints and objective function into a system of equations by introducing slack variables and the \(z\) variable.
  2. Put the system of equations into augmented matrix form. The objective function equation should go in row \((0).\)
  3. Select one of the non-basic variables to be the entering variable. Select the pivot row by computing the ratio \(\frac{\text{Element on right side of augmented matrix}}{\text{Coefficient of entering variable}}\) for each row. The correct pivot row minimizes this ratio; the ratio must be positive.
  4. If all coefficients of non-basic variables in row \((0)\) are non-negative, then you have the optimal solution. Otherwise, select a non-basic variable that has a negative coefficient in row \((0)\) to be the next entering variable, then pivot again to find another feasible solution.
  5. Continue pivoting until the optimal solution is found.
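As a sanity check on the worked example above, the same problem can be handed to a library solver. This is a hedged sketch with SciPy's linprog (assumed available); it reproduces the optimum found by the hand pivots.

```python
# The maximization example above: 7x + 5y with 2x + 3y <= 90, 3x + 2y <= 120.
from scipy.optimize import linprog

res = linprog([-7, -5],                 # negate the objective to maximize
              A_ub=[[2, 3], [3, 2]],
              b_ub=[90, 120],
              method="highs")           # x, y >= 0 is linprog's default
print(res.x, -res.fun)   # x = 36, y = 6, maximum z = 282
```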

A toy factory manufactures three kinds of toys: cars, motorcycles, and boats. One toy car makes $20 profit, one toy motorcycle makes $15 profit, and one toy boat makes $25 profit. There are three departments of labour: casting, which has 8 workers; assembly, which has 12 workers; quality control, which has 10 workers.

Every day, the labour is allocated as follows: a toy car requires 2 casting, 2 assembly, and 2 quality control workers; a toy motorcycle requires 1 casting, 2 assembly, and 1 quality control worker; a toy boat requires 2 casting, 3 assembly, and 3 quality control workers.

What is the maximum profit per day (in dollars) the toy company can achieve?

The simplex algorithm for minimization problems works by converting the problem to a maximization problem. This concept, that every maximization problem has a corresponding minimization problem, is formalized with the von Neumann duality principle.

Given the system of constraints \[\begin{cases}\begin{align} 4x+3y+5z &\ge 65 \\ x+3y+2z &\ge 38 \\ 2x+3y+4z &\ge 52 \\ x,y,z &\ge 0, \end{align}\end{cases}\] minimize the function \[f(x,y,z)=12x+3y+10z.\] This problem could be put into the form shown in the maximization examples above, but an issue would occur with finding the first basic solution: setting the \(x,\) \(y,\) and \(z\) variables to \(0\) would give an infeasible solution with the slack variables taking on negative values. The simplex algorithm needs to start with a feasible solution, so this would not work. The Big-M method gives a workaround to this problem, but there is a much simpler method for this problem. A "dual" of this problem can be written by transposing the coefficients. Place the coefficients of the constraints into an augmented matrix. Place the coefficients of the objective function into the bottom row, with a 0 in the right part: \[\left[\begin{array}{ccc|c} \color{green}4 & \color{red}3 & \color{blue}5 & \color{orange}65 \\ \color{green}1 & \color{red}3 & \color{blue}2 & \color{orange}38 \\ \color{green}2 & \color{red}3 & \color{blue}4 & \color{orange}52 \\ \hline \color{green}12 & \color{red}3 & \color{blue}10 & \color{orange}0 \end{array}\right].\] Transpose the elements of the matrix: \[\left[\begin{array}{ccc|c} \color{green}4 & \color{green}1 & \color{green}2 & \color{green}12 \\ \color{red}3 & \color{red}3 & \color{red}3 & \color{red}3 \\ \color{blue}5 & \color{blue}2 & \color{blue}4 & \color{blue}10 \\ \hline \color{orange}65 & \color{orange}38 & \color{orange}52 & \color{orange}0 \end{array}\right].\] Note: It's tempting to divide out the 3 in the second row of this matrix, but this will break the symmetry that is required to return to the original problem. This gives a new system of constraints and an objective function to be maximized: Given the system of constraints \[\begin{cases}\begin{align} 4u+v+2w &\le 12 \\ 3u+3v+3w &\le 3 \\ 2u+3v+4w &\le 52 \\ u,v,w &\ge 0, \end{align}\end{cases}\] maximize the function \[g(u,v,w)=65u+38v+52w.\] Now the simplex algorithm can be applied to find the optimal solution \[\left[\begin{array}{ccccccc|c} 1 & -65 & -38 & -52 & 0 & 0 & 0 & 0 \\ 0 & 4 & 1 & 2 & 1 & 0 & 0 & 12 \\ 0 & 3 & 3 & 3 & 0 & 1 & 0 & 3 \\ 0 & 2 & 3 & 4 & 0 & 0 & 1 & 52 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] Enter \(u\) with row \((2)\): \[\left[\begin{array}{ccccccc|c} 1 & 0 & 27 & 13 & 0 & \frac{65}{3} & 0 & 65 \\ 0 & 0 & -3 & -2 & 1 & -\frac{4}{3} & 0 & 8 \\ 0 & 1 & 1 & 1 & 0 & \frac{1}{3} & 0 & 1 \\ 0 & 0 & 1 & 2 & 0 & -\frac{2}{3} & 1 & 50 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2)\vphantom{\frac{1}{2}} \\ (3)\vphantom{\frac{1}{2}} \end{array}\] No coefficient in row \((0)\) is negative, so this is the optimal solution. The maximum value in the top right of the matrix, \(65,\) is the same as the minimum value for the original problem. However, the variables \(u,\) \(v,\) and \(w\) are not the same as the variables in the original problem. Fortunately, the values of the variables that minimize the original problem correspond to the coefficients of the slack variables in row \((0).\) \[\left[\begin{array}{ccccccc|c} 1 & 0 & 27 & 13 & \color{red}0 & \color{red}\frac{65}{3} & \color{red}0 & 65 \\ 0 & 0 & -3 & -2 & 1 & -\frac{4}{3} & 0 & 8 \\ 0 & 1 & 1 & 1 & 0 & \frac{1}{3} & 0 & 1 \\ 0 & 0 & 1 & 2 & 0 & -\frac{2}{3} & 1 & 50 \end{array}\right]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{2}} \\ (1)\vphantom{\frac{1}{2}} \\ (2)\vphantom{\frac{1}{2}} \\ (3)\vphantom{\frac{1}{2}} \end{array}\] Thus, the values of the original problem that minimize the objective function are \[x=0, \quad y=\frac{65}{3}, \quad z=0.\ _\square\]
Simplex Algorithm for Minimization

This version of the simplex algorithm is valid for a minimization problem with all constraints giving minimum values (using the \(\ge\) symbol). In matrix form, the requirements are \[\begin{array}{ll} \text{Minimize:} & \textbf{c}^\text{T} \cdot \textbf{x} \\ \text{Subject to:} & \textbf{A}\textbf{x} \ge \textbf{b}, \quad x_i \ge 0, \end{array}\] where \(\textbf{c}\) is a vector containing the coefficients of the objective function, \(\textbf{x}\) is a vector containing the variables of the problem, \(\textbf{A}\) is a matrix containing the constraint coefficients, and \(\textbf{b}\) is a vector containing the minimum values of those constraints.

  1. Place the coefficients of the constraints and objective function into an augmented matrix. The coefficients of the objective function should go into the bottom row.
  2. Transpose this matrix. This new matrix represents the dual maximization problem.
  3. Write the new system of constraints and objective function. This problem has different variables than the original problem.
  4. Use the simplex algorithm for maximization to obtain the maximum value. This maximum value is the same as the minimum value for the original problem.
  5. The coefficients of the slack variables in row \((0)\) correspond to the values of the variables in the original problem.
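Library solvers accept minimum constraints directly, so the dual construction above can be cross-checked mechanically. A hedged sketch with SciPy (assumed available): each \(\ge\) row is negated into \(\le\) form before being passed to linprog.

```python
# The minimization example above: 12x + 3y + 10z with three >= constraints.
from scipy.optimize import linprog

c = [12, 3, 10]
A_ub = [[-4, -3, -5],    # 4x + 3y + 5z >= 65
        [-1, -3, -2],    # x + 3y + 2z >= 38
        [-2, -3, -4]]    # 2x + 3y + 4z >= 52
b_ub = [-65, -38, -52]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, res.fun)   # x = 0, y = 65/3 (about 21.67), z = 0; minimum 65.0
```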

The simplex algorithm can sometimes lead to surprising results. It is possible for a linear programming problem to have infinitely many solutions or no solution. These special cases are explored here.

As was stated earlier, a linear programming problem that has minimum constraints does not work with the simplex algorithm. The reason for this is that the initial basic solution is infeasible.

Given the system of constraints \[\begin{cases}\begin{align} 2x+3y &\ge 10 \\ 3x+y &\ge 8 \\ x &\ge 0 \\ y &\ge 0, \end{align}\end{cases}\] minimize the function \[f(x,y)=5x+10y.\] Show that this cannot be done using the normal simplex algorithm. This problem could be solved with a dual or by simply testing the vertices of the feasible region, but consider solving it with the simplex algorithm. Because the constraints are minimum constraints, the slack variables need to have negative coefficients. In addition, the objective function is a minimization. This can be accounted for by treating the problem as a maximization of \(-z.\) Applying the simplex algorithm, this gives the initial matrix \[\left[ \begin{array}{ccccc|c} -1 & 5 & 10 & 0 & 0 & 0 \\ 0 & 2 & 3 & -1 & 0 & 10 \\ 0 & 3 & 1 & 0 & -1 & 8 \\ \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \end{array}\] This initial matrix implies an infeasible solution of \(s_1=-10,\ s_2=-8.\) The simplex algorithm will not produce a meaningful result if the initial basic solution is infeasible. \(_\square\)

It is sometimes possible to solve the problem with its dual, but this is not the case if a problem mixes minimum constraints with maximum constraints.

Given the system of constraints \[\begin{cases}\begin{align} -x+5y &\le 25 \\ 6x+5y &\le 60 \\ x+y &\ge 2 \\ x &\ge 0 \\ y &\ge 0, \end{align}\end{cases}\] minimize the function \[f(x,y)=x-10y.\] Show that this cannot be done using the normal simplex algorithm or the dual method. Putting this problem into a simplex matrix would give an initial basic solution that is infeasible: \[\left [ \begin{array}{cccccc|c} -1 & 1 & -10 & 0 & 0 & 0 & 0 \\ 0 & -1 & 5 & 1 & 0 & 0 & 25 \\ 0 & 6 & 5 & 0 & 1 & 0 & 60 \\ 0 & 1 & 1 & 0 & 0 & -1 & 2 \\ \end{array} \right ] \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] \[\begin{array}{ccc} s_1 = 25, & s_2 = 60, & s_3 = -2. \end{array}\] Putting the problem's dual into a simplex matrix would also yield an infeasible initial basic solution: \[\left [ \begin{array}{cccccc|c} 1 & -25 & -60 & 2 & 0 & 0 & 0\\ 0 & 1 & -6 & 1 & 1 & 0 & 1 \\ 0 & -5 & -5 & 1 & 0 & 1 & -10 \end{array} \right ] \qquad \begin{array}{c} (0) \\ (1) \\ (2) \end{array}\] \[\begin{array}{cc} s_1 = 1, & s_2 = -10.\ _\square \end{array}\]

The Big-M method can be used when an initial basic solution is infeasible. The idea behind this method is to introduce artificial variables to the problem in order to move the solution into the feasible region. Unlike slack variables, these artificial variables must have a value of zero in the final solution.

Given the system of constraints \[\begin{cases}\begin{align} -x+5y &\le 25 \\ 6x+5y &\le 60 \\ x+y &\ge 2 \\ x &\ge 0 \\ y &\ge 0, \end{align}\end{cases}\] minimize the function \[f(x,y)=x-10y.\] From the previous example, it is known that the third constraint produces an infeasible \(s_3=-2.\) To compensate for this, an artificial variable, \(a_1,\) is introduced to this constraint and the objective function. In the objective function, this variable has a coefficient of \(M.\) \(M\) represents an arbitrarily large constant amount. The new constraints and objective function are \[\begin{cases}\begin{array}{ccccccccccccccc} -z & + & x & - & 10y & & & & & & & + & Ma_1 & = & 0 \\ & - & x & + & 5y & + & s_1 & & & & & & & = & 25 \\ & & 6x & + & 5y & & & + & s_2 & & & & & = & 60 \\ & & x & + & y & & & & & - & s_3 & + & a_1 & = & 2. \end{array}\end{cases} \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] In matrix form, \[\left [ \begin{array}{ccccccc|c} -1 & 1 & -10 & 0 & 0 & 0 & M & 0 \\ 0 & -1 & 5 & 1 & 0 & 0 & 0 & 25 \\ 0 & 6 & 5 & 0 & 1 & 0 & 0 & 60 \\ 0 & 1 & 1 & 0 & 0 & -1 & 1 & 2 \end{array} \right ]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] The first goal with the Big-M method is to move the problem into the feasible region. Recall that the current basic solution has \(s_3=-2.\) This variable will be the leaving variable, with the artificial variable, \(a_1,\) being the entering variable: \[\left [ \begin{array}{ccccccc|c} -1 & 1-M & -10-M & 0 & 0 & M & 0 & -2M \\ 0 & -1 & 5 & 1 & 0 & 0 & 0 & 25 \\ 0 & 6 & 5 & 0 & 1 & 0 & 0 & 60 \\ 0 & 1 & 1 & 0 & 0 & -1 & 1 & 2 \\ \end{array} \right ]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] The current solution is now in the feasible region, with all basic variables positive: \[s_1 = 25, \quad s_2 = 60, \quad a_1 = 2.\] This solution is clearly not correct, because it contains a non-zero artificial variable in the solution. Furthermore, there are negative coefficients in row \((0)\). The Big-M method now proceeds just as the simplex algorithm. The new goal is to enter variables with negative coefficients in row \((0)\). Since \(y\) has the most negative coefficient in the row \((0)\), that variable will be entered first. The row that minimizes the ratio of the right hand side and the coefficient is \((3)\), so \(y\) will be entered through this row: \[\left [ \begin{array}{ccccccc|c} -1 & 11 & 0 & 0 & 0 & -10 & 10+M & 20 \\ 0 & -6 & 0 & 1 & 0 & 5 & -5 & 15 \\ 0 & 1 & 0 & 0 & 1 & 5 & -5 & 50 \\ 0 & 1 & 1 & 0 & 0 & -1 & 1 & 2 \\ \end{array} \right ] \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] \[y=2, \quad s_1=15, \quad s_2=50.\] This solution no longer contains the artificial variable, but it is not yet optimal due to the negative coefficient in row \((0).\) \(s_3\) must be the next variable to enter. 
The minimum positive ratio for this variable is in row \((1)\): \[\left [ \begin{array}{ccccccc|c} -1 & -1 & 0 & 2 & 0 & 0 & M & 50 \\ 0 & -6 & 0 & 1 & 0 & 5 & -5 & 15 \\ 0 & 7 & 0 & -1 & 1 & 0 & 0 & 35 \\ 0 & -1 & 5 & 1 & 0 & 0 & 0 & 25 \\ \end{array} \right ] \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \end{array}\] \[y = 5, \quad s_2 = 35, \quad s_3 = 3.\] Now \(x\) must be the entering variable, and the only row with a positive ratio is \((2)\): \[\left [ \begin{array}{ccccccc|c} -1 & 0 & 0 & \frac{13}{7} & \frac{1}{7} & 0 & M & 55 \\ 0 & 0 & 0 & \frac{1}{7} & \frac{6}{7} & 5 & -5 & 45 \\ 0 & 7 & 0 & -1 & 1 & 0 & 0 & 35 \\ 0 & 0 & 5 & \frac{6}{7} & \frac{1}{7} & 0 & 0 & 30 \\ \end{array} \right ]. \qquad \begin{array}{c} (0)\vphantom{\frac{1}{7}} \\ (1)\vphantom{\frac{1}{7}} \\ (2) \\ (3)\vphantom{\frac{1}{7}} \end{array}\] Now it is clear that this gives the optimal solution: \[x=5, y=6, s_3=9\implies f(5,6)=-55.\ _\square\]
Big-M Simplex Method

This method is viable for any linear programming problem that does not match the forms of the previous section. It is also required for problems which contain equality constraints.

  1. Assign slack variables and the \(z\) variable as with the basic simplex algorithm, and create a simplex matrix.
  2. For each slack variable that has a negative value in the initial basic solution, add a distinct artificial variable to that constraint. Also add a distinct artificial variable to each equality constraint. Artificial variables begin with a coefficient of \(1\) in each constraint row. In the objective function row, every artificial variable begins with the same coefficient, \(M.\) This represents an arbitrarily large positive constant amount.
  3. If the problem is a minimization, then the coefficients of the objective function row are negated, and the goal is to maximize \(-z.\)
  4. Move the solution into the feasible region by performing pivots with a negative slack variable as the leaving variable and an artificial variable as the entering variable.
  5. Once the basic solution is in the feasible region, proceed with the simplex algorithm as before. While performing the simplex algorithm, ensure that the elements in the right side of the matrix are positive. If an element in the right side is not positive, multiply that row by \(-1\) to make it positive. If an element in the right side of the matrix is \(0,\) then ensure that the coefficient of the basic variable in that row is positive.
  6. Choose the entering variable by observing the coefficient in row \((0)\) that is the most negative. Choose pivot rows by selecting the row that minimizes the ratio \(\frac{\text{Element on right side of augmented matrix}}{\text{Coefficient of entering variable}}.\) The ratio must be non-negative, and the coefficient of the entering variable in the pivot row must be positive.
  7. An optimal solution cannot contain any artificial variables. If row \((0)\) of the matrix contains no negative coefficients, and the solution contains an artificial variable, then the problem has no solution.
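Modern solvers handle the bookkeeping that the Big-M method does by hand, which gives a convenient way to verify the mixed-constraint example above. A hedged sketch with SciPy (assumed available):

```python
# The mixed-constraint example: minimize x - 10y.
from scipy.optimize import linprog

c = [1, -10]
A_ub = [[-1, 5],     # -x + 5y <= 25
        [ 6, 5],     #  6x + 5y <= 60
        [-1, -1]]    #  x + y >= 2  ->  -x - y <= -2
b_ub = [25, 60, -2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, res.fun)   # x = 5, y = 6, minimum -55.0
```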
Vanessa is scheduling her employees for the upcoming week. When on the assembly line, Darren assembles 5 units per hour, and when on the packaging line, he packages 10 units per hour. Lori only works on the assembly line, and she assembles 4 units per hour. Darren's pay is $12 per hour and Lori's pay is $9 per hour. Vanessa needs to have at least 200 units assembled and packaged by the end of the week. She can assign each worker a maximum of 40 hours. How should Vanessa schedule her employees to minimize payroll? This problem can be solved with simpler methods, but it is solved here with the Big-M method as a demonstration of how to deal with different types of constraints. Let \(m_d\) and \(m_l\) be the number of hours that Darren and Lori work on the assembly line, respectively. Let \(p_d\) be the number of hours that Darren works on the packaging line. Each worker can work a maximum of 40 hours. This gives the constraints \[\begin{align} m_d+p_d &\le 40 \\ m_l &\le 40. \end{align}\] Vanessa would not want to waste hours on packaging if there are no assembled units to package. Therefore, the number of units assembled should equal the number of units packaged. This can be expressed with the equation \[5m_d+4m_l=10p_d.\] The number of units assembled and packaged should be at least 200. This can be expressed with the constraint \[\begin{align} 10p_d &\ge 200 \\ p_d &\ge 20. \end{align}\] This gives the following system of constraints: \[\begin{cases} \begin{array}{ccccccc} m_d & & & + & p_d & \le & 40 \\ & & m_l & & & \le & 40 \\ & & & & p_d & \ge & 20 \\ 5m_d & + & 4m_l & - & 10p_d & = & 0 \\ m_d, & & m_l, & & p_d & \ge & 0. \end{array} \end{cases}\] The objective function is \[f(m_d,m_l,p_d)=12m_d+9m_l+12p_d.\] Converting the system of constraints and objective function to a simplex matrix, \[\left[\begin{array}{ccccccc|c} -1 & 12 & 9 & 12 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 & 40 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 40 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 20 \\ 0 & 5 & 4 & -10 & 0 & 0 & 0 & 0 \\ \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] The basic solution is currently infeasible: \[m_d=0, \quad m_l=0, \quad p_d=0, \quad s_1 = 40, \quad s_2=40, \quad s_3=-20.\] Since \(s_3\) has an infeasible value, the row that contains it requires an artificial variable. The equality constraint row also requires an artificial variable. These artificial variables are given the coefficient \(M\) in row \((0):\) \[\left[\begin{array}{ccccccccc|c} -1 & 12 & 9 & 12 & 0 & 0 & 0 & M & M & 0 \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 5 & 4 & -10 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] To move the solution into the feasible region, \(s_3\) will be the leaving variable and \(a_1\) will be the entering variable. The pivot will be performed with row \((3):\) \[\left[\begin{array}{ccccccccc|c} -1 & 12 & 9 & 12-M & 0 & 0 & M & 0 & M & -20M \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 5 & 4 & -10 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right]. 
\qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] The other artificial variable must be moved into the basic solution as well: \[\left[\begin{array}{ccccccccc|c} -1 & 12-5M & 9-4M & 12+9M & 0 & 0 & M & 0 & 0 & -20M \\ 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 40 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 5 & 4 & -10 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] Now the algorithm proceeds as the usual simplex algorithm. The goal is to eliminate the negative coefficients in row \((0).\) Since \(M\) is an arbitrarily large positive constant value, \(12-5M\) is the most negative coefficient in row \((0).\) Therefore, \(m_d\) will be the entering variable. The selection of the pivot row is slightly more challenging than the usual simplex algorithm. Observe that every value in the right-hand side of the constraint rows is positive except for the value in row \((4).\) In this row, the right-hand side is \(0,\) and the basic variable contained in this row, \(a_2,\) has a positive coefficient. It is important to maintain these two things: values in the right-hand sides of the constraint rows are positive, or if the value in the right hand side is \(0,\) then the coefficient of the basic variable in that row is positive. Maintaining this will ensure the correct selection of the pivot row. The pivot row is selected by choosing the row that minimizes the ratio of \(\frac{\text{Element on right side of augmented matrix}}{\text{Coefficient of entering variable}},\) provided that the coefficient of the entering variable is positive . Thus, the minimum ratio for the entering variable is \(\frac{0}{5}\) from row \((4).\) This will be the pivot row: \[\left[\begin{array}{ccccccccc|c} -1 & 0 & -\frac{3}{5} & 36-M & 0 & 0 & M & 0 & M-\frac{12}{5} & -20M \\ 0 & 0 & -4 & 15 & 5 & 0 & 0 & 0 & -1 & 200 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 5 & 4 & -10 & 0 & 0 & 0 & 0 & 1 & 0 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] Now \(36-M\) is the most negative coefficient in row \((0).\) Thus, \(p_d\) will be the entering variable. Row \((4)\) will not be chosen as the pivot row again, as it has a negative coefficient for this variable. The row that minimizes the ratio is \(\frac{200}{15}\) from row \((1):\) \[\left[\begin{array}{ccccccccc|c} -1 & 0 & 9-\frac{4}{15}M & 0 & \frac{1}{3}M-12 & 0 & M & 0 & \frac{14}{15}M & -\frac{20}{3}M-480 \\ 0 & 0 & -4 & 15 & 5 & 0 & 0 & 0 & -1 & 200 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 0 & 4 & 0 & -5 & 0 & -15 & 15 & 1 & 100 \\ 0 & 15 & 4 & 0 & 10 & 0 & 0 & 0 & 1 & 400 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] Now \(9-\frac{4}{15}M\) is the most negative coefficient in row \((0).\) \(m_l\) will be the entering variable and the minimizing ratio is \(\frac{100}{4}\) in row \((3):\) \[\left[\begin{array}{ccccccccc|c} -1 & 0 & 0 & 0 & -\frac{3}{4} & 0 & \frac{135}{4} & M-\frac{135}{4} & M-\frac{9}{4} & -705 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 0 & 0 & 0 & 5 & 4 & 15 & -15 & -1 & 60 \\ 0 & 0 & 4 & 0 & -5 & 0 & -15 & 15 & 1 & 100 \\ 0 & 1 & 0 & 0 & 1 & 0 & 1 & -1 & 0 & 20 \end{array}\right]. 
\qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] Now \(-\frac{3}{4}\) is the most negative coefficient in row \((0).\) \(s_1\) will be the entering variable and the minimizing ratio is \(\frac{60}{5}\) in row \((2):\) \[\left[\begin{array}{ccccccccc|c} -1 & 0 & 0 & 0 & 0 & \frac{3}{5} & 36 & M-36 & M-\frac{12}{5} & -696 \\ 0 & 0 & 0 & 1 & 0 & 0 & -1 & 1 & 0 & 20 \\ 0 & 0 & 0 & 0 & 5 & 4 & 15 & -15 & -1 & 60 \\ 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 40 \\ 0 & 5 & 0 & 0 & 0 & -4 & -10 & 10 & 1 & 40 \end{array}\right]. \qquad \begin{array}{c} (0) \\ (1) \\ (2) \\ (3) \\ (4) \end{array}\] With all coefficients in row \((0)\) positive and no artificial variables in the solution, the solution is optimal. The solution is \[m_d=8, \quad m_l=40, \quad p_d=20, \quad z=696.\] Vanessa should put Darren on the assembly line for 8 hours and on the packaging line for 20 hours. She should put Lori on the assembly line for 40 hours. This will put payroll at $696 for the week. \(_\square\)
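The equality constraint that forced an artificial variable above can be passed to a library solver directly. A hedged sketch with SciPy's linprog, using its A_eq parameter (SciPy assumed available):

```python
# Vanessa's scheduling problem: minimize 12*m_d + 9*m_l + 12*p_d.
from scipy.optimize import linprog

c = [12, 9, 12]                 # variables: m_d, m_l, p_d
A_ub = [[1, 0, 1],              # Darren's hours: m_d + p_d <= 40
        [0, 1, 0],              # Lori's hours:   m_l <= 40
        [0, 0, -1]]             # p_d >= 20  ->  -p_d <= -20
b_ub = [40, 40, -20]
A_eq = [[5, 4, -10]]            # units assembled = units packaged
b_eq = [0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x, res.fun)   # m_d = 8, m_l = 40, p_d = 20; payroll 696.0
```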



Linear Programming

In Mathematics, linear programming is a method of optimising operations subject to constraints. The main objective of linear programming is to maximize or minimize a numerical value. It deals with linear functions that are subject to constraints in the form of linear equations or inequalities. Linear programming is considered an important technique for finding the optimum use of resources. The term “linear programming” consists of two words: linear and programming. The word “linear” refers to relationships between variables of degree one, and the word “programming” refers to the process of selecting the best solution from various alternatives.

Linear programming is widely used in Mathematics and in other fields such as economics, business, telecommunication, and manufacturing. In this article, let us discuss the definition of linear programming, its components, and different methods to solve linear programming problems.

Table of Contents:

  • Characteristics
  • Linear Programming Problems
  • Simplex Method
  • Graphical Method
  • Applications
  • Practice Problems

What is Linear Programming?

Linear programming (LP), or linear optimisation, may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Such optimisation problems often involve the calculation of profit and loss. Linear programming problems are an important class of optimisation problems that help to find the feasible region and optimise the solution in order to obtain the highest or lowest value of the function.

In other words, linear programming is considered an optimization method for maximizing or minimizing the objective function of a given mathematical model subject to a set of requirements represented as linear relationships. The main aim of a linear programming problem is to find the optimal solution.

Linear programming is the method of considering different inequalities relevant to a situation and calculating the best value required under those conditions. Some of the assumptions made while working with linear programming are:

  • The constraints should be expressed in quantitative terms
  • The relationship between the constraints and the objective function should be linear
  • The linear function (i.e., objective function) is to be optimised

Components of Linear Programming

The basic components of the LP are as follows:

  • Decision Variables
  • Constraints
  • Objective Functions

Characteristics of Linear Programming

The following are the five characteristics of the linear programming problem:

Constraints – The limitations on resources should be expressed in mathematical form.

Objective Function – In a problem, the objective function should be specified in a quantitative way.

Linearity – The relationship between two or more variables in the function must be linear. It means that the degree of the variable is one.

Finiteness – There should be a finite number of input and output values. If the function has infinitely many factors, the optimal solution is not feasible.

Non-negativity – The variable value should be positive or zero. It should not be a negative value.

Decision Variables – The decision variable will decide the output. It gives the ultimate solution of the problem. For any problem, the first step is to identify the decision variables.

Linear Programming Problems

A linear programming problem (LPP) is concerned with finding the optimal value of a given linear function. The optimal value can be either a maximum value or a minimum value. Here, the given linear function is considered the objective function. The objective function can contain several variables, which are subject to conditions and must satisfy a set of linear inequalities called linear constraints. Linear programming problems can be used to obtain the optimal solution in scenarios such as manufacturing problems, diet problems, transportation problems, allocation problems and so on.

Methods to Solve Linear Programming Problems

The linear programming problem can be solved using different methods, such as the graphical method, the simplex method, or by using tools such as R or an open solver. Here, we will discuss the two most important techniques, the simplex method and the graphical method, in detail.

Linear Programming Simplex Method

The simplex method is one of the most popular methods to solve linear programming problems. It is an iterative process that obtains the feasible optimal solution. In this method, the values of the basic variables keep transforming to obtain the maximum value for the objective function. The algorithm for the linear programming simplex method is provided below:

Step 1: Set up the given problem, i.e., write the inequality constraints and the objective function.

Step 2: Convert the given inequalities to equations by adding the slack variable to each inequality expression.

Step 3 : Create the initial simplex tableau. Write the objective function at the bottom row. Here, each inequality constraint appears in its own row. Now, we can represent the problem in the form of an augmented matrix, which is called the initial simplex tableau.

Step 4: Identify the greatest negative entry in the bottom row; this identifies the pivot column. The greatest negative entry in the bottom row corresponds to the largest coefficient in the objective function, which will help us increase the value of the objective function as fast as possible.

Step 5: Compute the quotients. To calculate the quotient, divide the entries in the far-right column by the corresponding entries in the pivot column, excluding the bottom row. The smallest quotient identifies the pivot row. The element at the intersection of the pivot row and the pivot column is the pivot element.

Step 6: Carry out pivoting to make all other entries in the pivot column zero.

Step 7: If there are no negative entries in the bottom row, end the process. Otherwise, start from step 4.

Step 8: Finally, determine the solution associated with the final simplex tableau.
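Steps 4 through 7 can be sketched in a few lines of NumPy. This is an illustrative toy, not a production solver: the tableau layout (objective in the bottom row, right-hand sides in the last column) and the function name are assumptions for this example, and degenerate cases are not handled.

```python
import numpy as np

def pivot_step(T):
    """Perform one simplex pivot in place; return False when optimal."""
    obj = T[-1, :-1]
    if (obj >= 0).all():
        return False                      # Step 7: no negative entries left
    col = int(np.argmin(obj))             # Step 4: greatest negative entry
    ratios = np.full(T.shape[0] - 1, np.inf)
    positive = T[:-1, col] > 1e-12
    ratios[positive] = T[:-1, -1][positive] / T[:-1, col][positive]
    row = int(np.argmin(ratios))          # Step 5: smallest quotient
    T[row] /= T[row, col]
    for r in range(T.shape[0]):           # Step 6: clear the pivot column
        if r != row:
            T[r] -= T[r, col] * T[row]
    return True

# Toy tableau: maximize Z = 7x + 5y with 2x + 3y <= 90 and 3x + 2y <= 120
# (the example worked earlier on this page). Columns: x, y, s1, s2, Z, RHS.
T = np.array([[ 2.0,  3.0, 1.0, 0.0, 0.0,  90.0],
              [ 3.0,  2.0, 0.0, 1.0, 0.0, 120.0],
              [-7.0, -5.0, 0.0, 0.0, 1.0,   0.0]])
while pivot_step(T):
    pass
print(T[-1, -1])   # 282.0, matching the earlier simplex walkthrough
```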

Linear Programming Graphical Method

The graphical method is used to optimize two-variable linear programming problems. If the problem has two decision variables, the graphical method is the best way to find the optimal solution. In this method, the constraints on the variables are written as a set of linear inequalities, and the inequalities are plotted in the XY plane. Once all the inequalities are plotted in the XY graph, the intersecting region determines the feasible region. The feasible region provides the optimal solution as well as showing all the values that the model can take. Let us see an example here and understand the concept of linear programming in a better way.

Calculate the maximal and minimal value of z = 5x + 3y for the following constraints.

x + 2y ≤ 14

3x – y ≥ 0

x – y ≤ 2

The three inequalities indicate the constraints. The area of the plane that will be marked is the feasible region.

The optimisation equation is z = 5x + 3y. You have to find the (x, y) corner points that give the largest and smallest values of z.

To begin with, first solve each inequality.

x + 2y ≤ 14 ⇒  y ≤ -(1/2)x + 7

3x – y ≥ 0 ⇒  y ≤ 3x

x – y ≤ 2 ⇒ y  ≥ x – 2

Here is the graph for the above equations.


Now pair the lines to form a system of linear equations to find the corner points.

y = -(1/2)x + 7

y = 3x

Solving the above equations, we get the corner point (2, 6).

y = -(1/2)x + 7

y = x – 2

Solving the above equations, we get the corner point (6, 4).

y = 3x

y = x – 2

Solving the above equations, we get the corner point (-1, -3).

For linear systems, the maximum and minimum values of the optimisation equation lie on the corners of the feasibility region. Therefore, to find the optimum solution, you only need to plug these three points into z = 5x + 3y.

z = 5(2) + 3(6) = 10 + 18 = 28

z = 5(6) + 3(4) = 30 + 12 = 42

z = 5(-1) + 3(-3) = -5 -9 = -14

Hence, the maximum of z = 42 lies at (6, 4) and the minimum of z = -14 lies at (-1, -3).
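The same corner-point answer can be checked with a library LP routine. A hedged sketch with SciPy (assumed available); note that this example has no non-negativity restriction, so the default bounds are overridden.

```python
# Check of the graphical example: z = 5x + 3y over the triangle above.
from scipy.optimize import linprog

A_ub = [[1, 2],     # x + 2y <= 14
        [-3, 1],    # 3x - y >= 0  ->  -3x + y <= 0
        [1, -1]]    # x - y <= 2
b_ub = [14, 0, 2]
free = [(None, None), (None, None)]   # x and y may be negative here

z_max = linprog([-5, -3], A_ub=A_ub, b_ub=b_ub, bounds=free, method="highs")
z_min = linprog([5, 3], A_ub=A_ub, b_ub=b_ub, bounds=free, method="highs")
print(z_max.x, -z_max.fun)   # (6, 4) with maximum z = 42
print(z_min.x, z_min.fun)    # (-1, -3) with minimum z = -14
```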

Linear Programming Applications

A real-time example would be considering the limitations of labour and materials and finding the best production levels for maximum profit in particular circumstances. It is part of a vital area of mathematics known as optimisation techniques. The applications of LP in some other fields are:

  • Engineering – It solves design and manufacturing problems as it is helpful for doing shape optimisation
  • Efficient Manufacturing – To maximise profit, companies use linear expressions
  • Energy Industry – It provides methods to optimise the electric power system.
  • Transportation Optimisation – For cost and time efficiency.

Importance of Linear Programming

Linear programming is broadly applied in the field of optimisation for many reasons. Many practical problems in operations analysis can be represented as linear programming problems. Some special cases of linear programming, such as network flow problems and multi-commodity flow problems, are considered important enough to have generated much research on specialized algorithms for their solution.


Linear Programming Practice Problems

Solve the following linear programming problems:

  • A doctor wishes to mix two types of foods in such a way that the vitamin contents of the mixture contain at least 8 units of vitamin A and 10 units of vitamin C. Food ‘I’ contains 2 units/kg of vitamin A and 1 unit/kg of vitamin C. Food ‘II’ contains 1 unit/kg of vitamin A and 2 units/kg of vitamin C. It costs Rs 50 per kg to purchase Food ‘I’ and Rs 70 per kg to purchase Food ‘II’. Formulate this problem as a linear programming problem to minimise the cost of such a mixture
  • One kind of cake requires 200g of flour and 25g of fat, and another kind of cake requires 100g of flour and 50g of fat.  Formulate this problem as a linear programming problem to find the maximum number of cakes that can be made from 5kg of flour and 1 kg of fat assuming that there is no shortage of the other ingredients used in making the cakes.

Frequently Asked Questions on Linear Programming

What is linear programming?

Linear programming is a process of optimising problems which are subject to certain constraints. It means that it is the process of maximising or minimising linear functions under linear inequality constraints. Linear programs are considered among the easier classes of optimisation problems to solve.

Mention the different types of linear programming.

The different types of linear programming are:

  • Solving linear programming by the simplex method
  • Solving linear programming using R
  • Solving linear programming by the graphical method
  • Solving linear programming with the use of an open solver

What are the requirements of linear programming?

The five basic requirements of linear programming are:

  • Objective function
  • Constraints
  • Linearity
  • Non-negativity
  • Finiteness

Mention the advantages of Linear programming

The advantages of linear programming are:

  • Linear programming provides insights into business problems
  • It helps to solve multi-dimensional problems
  • As conditions change, LP helps in making the corresponding adjustments
  • By calculating the cost and profit of various options, LP helps to find the best optimal solution

What is meant by linear programming problems?

Linear programming problems (LPP) help to find the best optimal solution of a linear function (also known as the objective function) which is placed under certain constraints (a set of linear inequality constraints).



Linear Programming

Linear programming is a process that is used to determine the best outcome of a linear function. It is the best method to perform linear optimization by making a few simple assumptions. The linear function is known as the objective function. Real-world relationships can be extremely complicated. However, linear programming can be used to depict such relationships, thus making it easier to analyze them.

Linear programming is used in many industries such as energy, telecommunication, transportation, and manufacturing. This article sheds light on the various aspects of linear programming such as the definition, formula, methods to solve problems using this technique, and associated linear programming examples.

What is Linear Programming?

Linear programming, also abbreviated as LP, is a simple method that is used to depict complicated real-world relationships by using a linear function. The elements in the mathematical model so obtained have a linear relationship with each other. Linear programming is used to perform linear optimization so as to achieve the best outcome.

Linear Programming Definition

Linear programming can be defined as a technique that is used for optimizing a linear function in order to reach the best outcome. This linear function, known as the objective function, is optimized subject to linear equality and inequality constraints. We obtain the best outcome by minimizing or maximizing the objective function.

Linear Programming Examples

Suppose a postman has to deliver 6 letters in a day from the post office (located at A) to different houses (U, V, W, Y, Z). The distance between the houses is indicated on the lines as given in the image. If the postman wants to find the shortest route that will enable him to deliver the letters as well as save on fuel then it becomes a linear programming problem. Thus, LP will be used to get the optimal solution which will be the shortest route in this example.

Example of Linear Programming

Linear Programming Formula

A linear programming problem will consist of decision variables, an objective function, constraints, and non-negative restrictions. The decision variables, x and y, decide the output of the LP problem and represent the final solution. The objective function, Z, is the linear function that needs to be optimized (maximized or minimized) to get the solution. The constraints are the restrictions that are imposed on the decision variables to limit their value. The decision variables must always have a non-negative value, which is given by the non-negative restrictions. The general formula of a linear programming problem is given below:

  • Objective Function: Z = ax + by
  • Constraints: cx + dy ≤ e, fx + gy ≤ h. The inequalities can also be "≥"
  • Non-negative restrictions: x ≥ 0, y ≥ 0
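For readers who want to see this general form in code, here is a minimal sketch using SciPy's linprog with made-up coefficients (the numeric values are ours, purely for illustration); linprog minimizes, so the objective is negated to maximize Z.

```python
# Hedged sketch: a made-up instance of the general form above,
# maximize Z = ax + by subject to cx + dy <= e, fx + gy <= h, x, y >= 0.
from scipy.optimize import linprog

a, b = 3, 4               # objective coefficients (illustrative)
c_, d_, e_ = 1, 2, 14     # first constraint:  x + 2y <= 14
f_, g_, h_ = 3, 1, 18     # second constraint: 3x + y <= 18

res = linprog(c=[-a, -b],                      # linprog minimizes, so negate
              A_ub=[[c_, d_], [f_, g_]], b_ub=[e_, h_],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                         # approx [4.4 4.8] and Z = 32.4
```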

How to Solve Linear Programming Problems?

The most important part of solving a linear programming problem is to first formulate the problem using the given data. The steps to solve linear programming problems are given below:

  • Step 1: Identify the decision variables.
  • Step 2: Formulate the objective function. Check whether the function needs to be minimized or maximized.
  • Step 3: Write down the constraints.
  • Step 4: Ensure that the decision variables are greater than or equal to 0. (Non-negativity restriction)
  • Step 5: Solve the linear programming problem using either the simplex or graphical method.

Let us study these methods in detail in the following sections.

Linear Programming Methods

There are two main methods available for solving linear programming problems. These are the simplex method and the graphical method. Given below are the steps to solve a linear programming problem using both methods.

Linear Programming by Simplex Method

The simplex method in lpp can be applied to problems with two or more decision variables. Suppose the objective function Z = 40\(x_{1}\) + 30\(x_{2}\) needs to be maximized and the constraints are given as follows:

\(x_{1}\) + \(x_{2}\) ≤ 12

2\(x_{1}\) + \(x_{2}\) ≤ 16

\(x_{1}\) ≥ 0, \(x_{2}\) ≥ 0

Step 1: Add slack variables, one for each inequality, to convert the inequalities into equations. Also, rewrite the objective function as an equation.

- 40\(x_{1}\) - 30\(x_{2}\) + Z = 0

\(x_{1}\) + \(x_{2}\) + \(y_{1}\) =12

2\(x_{1}\) + \(x_{2}\) + \(y_{2}\) =16

\(y_{1}\) and \(y_{2}\) are the slack variables.

Step 2: Construct the initial simplex matrix as follows:

\(\begin{bmatrix} x_{1} & x_{2} &y_{1} & y_{2} & Z & \\ 1&1 &1 &0 &0 &12 \\ 2& 1 & 0& 1 & 0 & 16 \\ -40&-30&0&0&1&0 \end{bmatrix}\)

Step 3: Identify the column with the most negative entry. This is called the pivot column. As -40 is the most negative entry, column 1 will be the pivot column.

Step 4: Divide the entries in the rightmost column by the entries in the pivot column. We exclude the entries in the bottom-most row.

12 / 1 = 12

16 / 2 = 8

The row containing the smallest quotient is identified to get the pivot row. As 8 is smaller than 12, row 2 becomes the pivot row. The intersection of the pivot row and the pivot column gives the pivot element.

Thus, pivot element = 2.

Step 5: With the help of the pivot element, perform pivoting, using matrix properties, to make all other entries in the pivot column 0.

Using elementary row operations, divide row 2 by 2 (\(R_{2}\) / 2):

\(\begin{bmatrix} x_{1} & x_{2} &y_{1} & y_{2} & Z & \\ 1&1 &1 &0 &0 &12 \\ 1& 1/2 & 0& 1/2 & 0 & 8 \\ -40&-30&0&0&1&0 \end{bmatrix}\)

Now apply \(R_{1}\) = \(R_{1}\) - \(R_{2}\)

\(\begin{bmatrix} x_{1} & x_{2} &y_{1} & y_{2} & Z & \\ 0&1/2 &1 &-1/2 &0 &4 \\ 1& 1/2 & 0& 1/2 & 0 & 8 \\ -40&-30&0&0&1&0 \end{bmatrix}\)

Finally \(R_{3}\) = \(R_{3}\) + 40\(R_{2}\) to get the required matrix.

\(\begin{bmatrix} x_{1} & x_{2} &y_{1} & y_{2} & Z & \\ 0&1/2 &1 &-1/2 &0 &4 \\ 1& 1/2 & 0& 1/2 & 0 & 8 \\ 0&-10&0&20&1&320 \end{bmatrix}\)

Step 6: Check if the bottom-most row has negative entries. If not, then the optimal solution has been determined. If it does, go back to step 3 and repeat the process. As -10 is a negative entry in the bottom row, the process needs to be repeated. We get the following matrix.

\(\begin{bmatrix} x_{1} & x_{2} &y_{1} & y_{2} & Z & \\ 0&1 &2 &-1 &0 &8 \\ 1& 0 & -1& 1 & 0 & 4 \\ 0&0&20&10&1&400 \end{bmatrix}\)

Writing the bottom row in the form of an equation we get Z = 400 - 20\(y_{1}\) - 10\(y_{2}\). Thus, 400 is the highest value that Z can achieve when both \(y_{1}\) and \(y_{2}\) are 0.

Also, when \(x_{1}\) = 4 and \(x_{2}\) = 8, the value of Z is 400.

Thus, \(x_{1}\) = 4 and \(x_{2}\) = 8 are the optimal points and the solution to our linear programming problem.
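The tableau result can be double-checked numerically; the sketch below is one way to do so with SciPy's linprog (an addition for illustration, not part of the original walkthrough), negating the objective because linprog minimizes.

```python
from scipy.optimize import linprog

# Maximize Z = 40*x1 + 30*x2 subject to x1 + x2 <= 12 and 2*x1 + x2 <= 16.
res = linprog(c=[-40, -30], A_ub=[[1, 1], [2, 1]], b_ub=[12, 16],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # expected: [4. 8.] 400.0
```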

Linear Programming by Graphical Method

If there are two decision variables in a linear programming problem then the graphical method can be used to solve such a problem easily.

Suppose we have to maximize Z = 2x + 5y.

The constraints are x + 4y ≤ 24, 3x + y ≤ 21 and x + y ≤ 9

where, x ≥ 0 and y ≥ 0.

To solve this problem using the graphical method the steps are as follows.

Step 1: Write all inequality constraints in the form of equations.

x + 4y = 24

3x + y = 21

x + y = 9

Step 2: Plot these lines on a graph by identifying test points.

x + 4y = 24 is a line passing through (0, 6) and (24, 0). [By substituting x = 0 the point (0, 6) is obtained. Similarly, when y = 0 the point (24, 0) is determined.]

3x + y = 21 passes through (0, 21) and (7, 0).

x + y = 9 passes through (9, 0) and (0, 9).

Step 3: Identify the feasible region. The feasible region is the set of all points that satisfy every inequality in the system of constraints.

Any point that lies on or below the line x + 4y = 24 will satisfy the constraint x + 4y ≤ 24.

Similarly, a point that lies on or below 3x + y = 21 satisfies 3x + y ≤ 21.

Also, a point lying on or below the line x + y = 9 satisfies x + y ≤ 9.

The feasible region is represented by OABCD as it satisfies all the above-mentioned three restrictions.

Step 4: Determine the coordinates of the corner points. The corner points are the vertices of the feasible region.

O = (0, 0), A = (7, 0), and D = (0, 6) lie on the coordinate axes.

B = (6, 3). B is the intersection of the two lines 3x + y = 21 and x + y = 9. Thus, by substituting y = 9 - x in 3x + y = 21, we can determine the point of intersection.

C = (4, 5) is formed by the intersection of x + 4y = 24 and x + y = 9.


Step 5: Substitute each corner point in the objective function. The point that gives the greatest (maximizing) or smallest (minimizing) value of the objective function will be the optimal point.

33 is the maximum value of Z and it occurs at C. Thus, the solution is x = 4 and y = 5.
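Because the graphical method reduces to evaluating the objective at each corner point, the check below does exactly that in a few lines of Python, using the corner coordinates derived above.

```python
# Evaluate Z = 2x + 5y at each corner point of the feasible region OABCD.
corners = {"O": (0, 0), "A": (7, 0), "B": (6, 3), "C": (4, 5), "D": (0, 6)}
for name, (x, y) in corners.items():
    print(name, (x, y), "Z =", 2 * x + 5 * y)
# C = (4, 5) gives the maximum, Z = 33.
```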

Applications of Linear Programming

Linear programming is used in several real-world applications. It is used as the basis for creating mathematical models to denote real-world relationships. Some applications of LP are listed below:

  • Manufacturing companies make widespread use of linear programming to plan and schedule production.
  • Delivery services use linear programming to decide the shortest route in order to minimize time and fuel consumption.
  • Financial institutions use linear programming to determine the portfolio of financial products that can be offered to clients.

Related Articles:

  • Introduction to Graphing
  • Linear Equations in Two Variables
  • Solutions of a Linear Equation
  • Mathematical Induction

Important Notes on Linear Programming

  • Linear programming is a technique that is used to determine the optimal solution of a linear objective function.
  • The simplex method in lpp and the graphical method can be used to solve a linear programming problem.
  • In a linear programming problem, the variables will always be greater than or equal to 0.

Linear Programming Example

As the minimum value of Z is 127, B = (3, 28) gives the optimal solution. Answer: The minimum value of Z is 127 and the optimal solution is (3, 28).

Linear Programming Problem

  • Example 3: Using the simplex method, solve the linear programming problem: Maximize Z = \(x_{1}\) + 2\(x_{2}\) + 3\(x_{3}\) subject to

\(x_{1}\) + \(x_{2}\) + \(x_{3}\) ≤ 12

2\(x_{1}\) + \(x_{2}\) + 3\(x_{3}\) ≤ 18

\(x_{1}\), \(x_{2}\), \(x_{3}\) ≥ 0

Solution: Convert all inequalities to equations by introducing slack variables:

-\(x_{1}\) - 2\(x_{2}\) - 3\(x_{3}\) + Z = 0

\(x_{1}\) + \(x_{2}\) + \(x_{3}\) + \(y_{1}\) = 12

2\(x_{1}\) + \(x_{2}\) + 3\(x_{3}\) + \(y_{2}\) = 18

Expressing this as a matrix we get,

\(\begin{bmatrix} x_{1} & x_{2} & x_{3} & y_{1} & y_{2} & Z & \\ 1 & 1 & 1 & 1 & 0 & 0 & 12\\ 2 & 1 & 3 & 0 & 1 & 0 & 18\\ -1 & -2 & -3 & 0 & 0 & 1 & 0 \end{bmatrix}\)

As -3 is the most negative entry, column 3 is the pivot column. The quotients are 12 / 1 = 12 and 18 / 3 = 6. As 6 is the smaller quotient, row 2 is the pivot row and 3 is the pivot element. By applying matrix operations we get,

\(\begin{bmatrix} x_{1} & x_{2} & x_{3} & y_{1} & y_{2} & Z & \\ 0.33 & 0.667 & 0 & 1 & -0.33 & 0 & 6\\ 0.667 & 0.33 & 1 & 0 & 0.33 & 0 & 6\\ 1 & -1 & 0 & 0 & 1 & 1 & 18 \end{bmatrix}\)

Now the -1 in the bottom row needs to be eliminated. Repeating the steps (pivoting on the 0.667 in row 1, column 2), the matrix obtained is

\(\begin{bmatrix} x_{1} & x_{2} & x_{3} & y_{1} & y_{2} & Z & \\ 0.5 & 1 & 0 & 1.5 & -0.5 & 0 & 9\\ 0.5 & 0 & 1 & -0.5 & 0.5 & 0 & 3\\ 1.5 & 0 & 0 & 1.5 & 0.5 & 1 & 27 \end{bmatrix}\)

We get the maximum value of Z = 27 at \(x_{1}\) = 0, \(x_{2}\) = 9, \(x_{3}\) = 3. Answer: The maximum value of Z is 27 and the optimal solution is (0, 9, 3).
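As a numerical sanity check on Example 3 (an addition for illustration, not part of the original solution), SciPy's linprog solves the same problem; again the objective is negated because linprog minimizes.

```python
from scipy.optimize import linprog

# Maximize Z = x1 + 2*x2 + 3*x3 subject to the two constraints above.
res = linprog(c=[-1, -2, -3], A_ub=[[1, 1, 1], [2, 1, 3]], b_ub=[12, 18],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)   # expected: [0. 9. 3.] 27.0
```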


FAQs on Linear Programming

What is Meant by Linear Programming?

Linear programming is a technique that is used to identify the optimal solution of a function wherein the elements have a linear relationship.

What is Linear Programming Formula?

The general formula for a linear programming problem is given as follows:

  • Objective Function: Z = ax + by
  • Constraints: cx + dy ≤ e, fx + gy ≤ h
  • Non-negative restrictions: x ≥ 0, y ≥ 0

What is the Objective Function in Linear Programming Problems?

The objective function is the linear function that needs to be maximized or minimized and is subject to certain constraints. It is of the form Z = ax + by.

How to Formulate a Linear Programming Model?

The steps to formulate a linear programming model are given as follows:

  • Identify the decision variables.
  • Formulate the objective function.
  • Identify the constraints.
  • Solve the obtained model using the simplex or the graphical method.

How to Find Optimal Solution in Linear Programming?

We can find the optimal solution in a linear programming problem by using either the simplex method or the graphical method. The simplex method in lpp can be applied to problems with two or more variables while the graphical method can be applied to problems containing 2 variables only.

How to Find Feasible Region in Linear Programming?

To find the feasible region in a linear programming problem the steps are as follows:

  • Draw the straight lines of the linear inequalities of the constraints.
  • Use the "≤" and "≥" signs to denote the feasible region of each constraint.
  • The region common to all constraints will be the feasible region for the linear programming problem.
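For a two-variable problem, this membership test is a direct translation of the constraint inequalities. A minimal sketch, reusing the constraints of the worked example above:

```python
# Test whether a point satisfies every constraint of the earlier example:
# x + 4y <= 24, 3x + y <= 21, x + y <= 9, x >= 0, y >= 0.
def in_feasible_region(x, y):
    return (x >= 0 and y >= 0 and
            x + 4 * y <= 24 and 3 * x + y <= 21 and x + y <= 9)

print(in_feasible_region(4, 5))   # True  (a corner point)
print(in_feasible_region(8, 8))   # False (violates x + y <= 9)
```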

What are Linear Programming Uses?

Linear programming is widely used in many industries such as delivery services, transportation industries, manufacturing companies, and financial institutions. A linear program is solved through linear optimization methods to determine the best outcome in a given scenario.

Google OR-Tools


Advanced LP Solving

Despite the maturity of LP technology, some use cases require more advanced techniques. For example, a number of different LP algorithms and implementations are available, each of which has strengths and weaknesses. Furthermore, numerical instability can cause solvers to slow down or fail to solve certain models.

This guide introduces the concepts and provides examples to help you get the most performance and reliability out of LP solvers.

This section presents key concepts for advanced use of LP solvers. We assume that readers are familiar with the concept of duality in LP.

Families of LP algorithms

The following classes of algorithms for LP are accessible via OR-Tools.

The Simplex algorithm was the first practical LP algorithm and remains the most popular. The algorithm walks along the vertices (corner points) of the feasible region, iteratively improving the value of the objective function until reaching an optimal solution. There are two types of simplex algorithms:

  • Primal simplex takes steps along the vertices of the primal feasible region. This variant is particularly effective at solving a sequence of LP problems with varying objective functions, that is, where the primal feasible region is fixed.
  • Dual simplex takes steps along the vertices of the dual feasible region. This variant is particularly effective at solving a sequence of LP problems where the dual feasible region is fixed, for example, when only bounds on variables change. For this reason, dual simplex is used extensively in MIP solvers.

Barrier or interior-point methods were the second practical family of algorithms for LP. Barrier methods pair theoretical guarantees of efficient (polynomial time) convergence with reliable performance in practice. They complement the simplex algorithm when it performs poorly; for example, some solvers offer to run both simplex and barrier in parallel, returning the solution from the algorithm that finishes first.

First-order methods are a family of algorithms that use exclusively gradient information (that is, first-order derivatives) to guide the iterations. Gradient descent is a well-known example. These methods are popular in nonlinear optimization and machine learning. For LP, first-order methods can scale to larger problems than simplex and barrier, and may also have much smaller memory requirements. On the other hand, they are more sensitive to numerical issues and may struggle to obtain highly accurate solutions.

The table below lists the LP solvers available in OR-Tools and indicates which of the three families of algorithms is implemented in each solver.

G indicates the solver is developed by Google. L indicates that the solver requires a license issued by the respective third-party developer.

See Recommendations for suggestions on which solvers and algorithms to use.

What does solving really mean?

For getting the most out of LP solvers, it's important to understand what is implied when a solver claims to have found a solution to an LP problem. This section covers the basics necessary for answering this question, in particular given numerical imprecision and non-uniqueness of solutions.

LP solvers almost always use floating-point arithmetic, making their solutions subject to numerical imprecision. To account for this, and to improve performance by avoiding effort on solutions that are already good enough, solvers accept solutions—and claim to have solved a problem—when these solutions satisfy conditions up to certain tolerances.

Consider the linear programming problem

$$ \min_{x_1, x_2} \; -2x_1 - x_2 \quad \text{subject to} \quad x_1 + x_2 \le 1, \quad x_1 \ge 0, \quad x_2 \ge 0, $$

and its corresponding dual problem

$$ \max_{y} \; y \quad \text{subject to} \quad y \le -2, \quad y \le -1, \quad y \le 0. $$

This pair of problems has a unique optimal primal solution of $ x_1 = 1, x_2 = 0 $ and dual solution $ y = -2 $. Which solutions could be accepted as optimal by a solver? To answer this, we define the following quantities:

  • The duality gap is the difference between the primal objective value and the dual objective value, in this case, $ |(-2x_1 - x_2) - y| $.
  • The primal infeasibilities are the violations of the primal constraints, in this case, $ (\max\{ (x_1+x_2) - 1, 0 \}, \max\{-x_1, 0\}, \max\{-x_2, 0\}) $.
  • The dual infeasibilities are the violations of the dual constraints, in this case, $ (\max\{ 2 + y, 0 \}, \max\{ 1 + y, 0 \}, \max\{ y, 0 \}) $.

A solver declares a solution as optimal if the duality gap, the primal infeasibilities, and the dual infeasibilities are smaller than a given tolerance.¹

Notably, the application of the tolerances varies for both natural and idiosyncratic reasons across solvers and algorithms. For example, the duality gap in the simplex algorithm is driven only by numerical imprecision, while the primal and dual infeasibilities are present even in exact arithmetic. Some methods directly enforce the bound constraints $ x_1 \ge 0, x_2 \ge 0, $ and $ y \le 0 $, while others treat violations of bound constraints differently from violations of linear constraints like $x_1 + x_2 \le 1$. For some solvers, tolerances are absolute; that is, there is a parameter $ \epsilon $, and solutions are considered optimal if the duality gap and all primal and dual infeasibilities are less than or equal to $ \epsilon $. For other solvers, tolerances are relative, meaning that they scale with the size of the coefficients in the problem.

For an example of the effect of tolerances, consider an absolute tolerance of $ \epsilon = \frac{1}{2} $ applied to the above primal-dual pair. The solution $ x_1 = 1.5, x_2 = 0, y = -3 $ has zero duality gap and infeasibilities all less than or equal to $ \epsilon $, hence a solver might declare this solution "optimal". Yet, its objective value (-3) differs by 1 from the true optimal objective value of -2. Typical default values of $ \epsilon $ are between $10^{-6}$ and $10^{-8}$, which makes such extreme examples rare but not impossible.
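These optimality measures can be computed directly. The following minimal sketch (ours, for illustration) evaluates them for both the exact optimum and the $ \epsilon = \frac{1}{2} $ example above.

```python
# Optimality measures for the example pair: primal min -2*x1 - x2
# s.t. x1 + x2 <= 1, x1, x2 >= 0; dual max y s.t. y <= -2, y <= -1, y <= 0.
def optimality_measures(x1, x2, y):
    duality_gap = abs((-2 * x1 - x2) - y)
    primal_infeas = (max(x1 + x2 - 1, 0), max(-x1, 0), max(-x2, 0))
    dual_infeas = (max(2 + y, 0), max(1 + y, 0), max(y, 0))
    return duality_gap, primal_infeas, dual_infeas

print(optimality_measures(1.0, 0.0, -2.0))  # exact optimum: all zeros
print(optimality_measures(1.5, 0.0, -3.0))  # all measures <= 0.5, objective off by 1
```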

Types of solutions

LP problems can have more than one optimal solution, even more so when accounting for tolerances. We briefly discuss the properties of solutions returned by the three different families of LP algorithms presented above.

Simplex algorithms always return vertices or corner points of the feasible region. These solutions are preferred in some situations because they tend to be sparser.

Barrier and first-order methods do not generally return vertices. (Theory provides additional characterizations that are beyond the scope of this guide.)

For historical reasons and because vertex solutions have appealing properties, solvers often, by default, apply a crossover procedure to move to an optimal vertex from a solution found by a barrier algorithm. Crossover is not currently offered for solutions found by first-order methods.

Recommendations

We make the following recommendations for advanced use of LP solvers.

Scaling of problem data

Solvers can experience slow convergence or failures on models because of numerical issues. Such issues can arise for many reasons; here we give one example.

It is common for very small or large numerical constants to appear in LP models. Extending the example from above, if \(x_1\) and \(x_2\) represent the fraction of customers assigned to "provider 1" or "provider 2", and if we want to maximize benefit from serving these customers, we might write the objective function

$$ \min \; -c_1 x_1 - c_2 x_2, $$

where:

  • $ c_1 $ is the benefit from assigning customers to provider 1
  • $ c_2 $ is the benefit from assigning customers to provider 2

Duals satisfying the following constraints would be considered feasible with an absolute tolerance $ \epsilon $:

  • $ y \le -c_1 + \epsilon $,
  • $ y \le -c_2 + \epsilon $.

If the benefit units of $ c_1 $ and $ c_2 $ are small fractional values that happen to be on the same scale as $ \epsilon $, then the dual feasibility conditions become rather weak, hence a very suboptimal primal may be declared optimal.

If, on the other hand, the benefit units are "microdollars" (1 000 000 microdollars = 1 dollar), the resulting very large absolute values ask for very high precision in the solution, possibly unreasonably high given the limits of floating point numbers. Solvers may fail to converge if the problem is formulated in this way. As part of formulating a well-posed problem, advanced modelers should consider if the problem data are scaled in a way that's consistent with their solver's tolerances.

In addition to avoiding numerical failures, tolerances may also be tuned for better performance. Simplex and barrier methods are not too sensitive to tolerances but occasionally might benefit from larger tolerances if progress is observed to stall at the end of the solve. On the other hand, first-order methods are typically much more sensitive. Users of first-order methods can benefit from faster solutions by relaxing tolerances, as the context allows.

For Glop's tolerances, see primal_feasibility_tolerance, dual_feasibility_tolerance, and solution_feasibility_tolerance in GlopParameters. Note that primal_feasibility_tolerance and dual_feasibility_tolerance are used within the algorithm, while solution_feasibility_tolerance is checked post-solve to flag numerical issues. For PDLP, see eps_optimal_absolute and eps_optimal_relative.
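As an illustration, the sketch below sets Glop tolerances through OR-Tools' pywraplp wrapper; the tiny model and the parameter values are made up, and the parameter string assumes the GlopParameters text format.

```python
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")
# Assumed GlopParameters text format; the values are illustrative only.
solver.SetSolverSpecificParametersAsString(
    "primal_feasibility_tolerance: 1e-8 dual_feasibility_tolerance: 1e-8")

x = solver.NumVar(0.0, solver.infinity(), "x")
solver.Add(x <= 10)
solver.Maximize(x)
if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print(x.solution_value())   # 10.0
```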

For further reading on these types of issues, see Gurobi's Guidelines for Numerical Issues. While the guidelines are written for users of Gurobi, many of the principles apply generally.

Choice of solvers and algorithms

We start off with an example of how large the impact of the choice of solvers and algorithms can be and then present a guide for choosing LP solvers.

Variability in practice

We illustrate the variability in performance across LP algorithms and solvers by comparing the solve times on a selection of instances that have been used by Hans Mittelmann for benchmarking LP solvers. The instances are explicitly chosen to show the extremes of relative performance and are not necessarily representative of typical behavior.

Glop's primal and dual simplex methods are compared with Gurobi's barrier method (with and without crossover, which finds a vertex solution) and PDLP, a first-order method, in high and low precision. The table below reports solve times in seconds, with a limit of 20 minutes (1200 seconds).

From these results we conclude the following.

  • The relative performance of algorithms and solvers can vary by orders of magnitude on a single instance.
  • No single algorithm and solver is uniformly better than others.
  • Crossover (enabled by default) increases solve time, sometimes substantially.
  • PDLP can solve to low precision sometimes 10 times faster than to high precision.

A brief guide for choosing LP solvers

Given that no single LP algorithm or solver is the best, we recommend the following steps for discovering what is best for your use case. Start with the first step and proceed to the next if performance is not sufficient.

  • If the default configuration (primal simplex) doesn't perform well, try switching to dual simplex using use_dual_simplex: true.
  • If using barrier, disable "crossover" if you do not need a vertex solution.
  • Try PDLP. Tune the convergence tolerances to your application. Why: PDLP is designed for the largest problems, where simplex and barrier methods hit memory limits or are too slow. PDLP performs best when an approximate but quick solution is preferred to an exact but slow solution.
  • If you have made it this far, you are now an advanced LP user! Please see OR-Tools support options for further help.

¹ It is often more complex than this. Solvers typically check these conditions on a transformed/simplified version of the problem, called the presolved problem. In some cases, a solution to the presolved problem may be far away from a solution to the input problem. This situation can lead to unusual statuses like CPLEX's OptimalInfeas or Glop's IMPRECISE.


What Is Linear Programming? Meaning, Methods, and Examples

Linear programming helps determine how to arrive at the most optimized situation given a set of resource constraints.

Linear programming is defined as a technique in algebra that uses linear equations to figure out how to arrive at the optimal situation (maximum or minimum) as an answer to a mathematical problem, assuming the finiteness of resources and the quantifiable nature of the end optimization goal. This article explains how linear programming works with examples.

What Is Linear Programming?


Linear programming (LP) uses many linear inequalities pertaining to a given scenario to determine the “optimal” value one can obtain under those constraints. A classic example would be calculating the “optimal” production levels to maximize profits, given the restrictions of supplies and personnel.

In the “real world,” linear programming is an essential subfield of mathematics known as optimization methods. This area of research (or at least its applicable findings) is used in resource allocation and management. These “real-world” systems may have dozens or even hundreds of variables. In algebra, however, you will only see the basic (and graphable) linear example with two variables.

The typical method for solving linear programming problems is graphing the inequalities (called "constraints") to construct a walled-off zone on the x,y plane (known as the "feasible region"). Then, you determine the coordinates of the extremities of this feasible zone (i.e., the intersection locations of the different pairs of lines) and evaluate these vertices in the equation in which you're attempting to get the maximum or minimum value (termed the "optimization equation").

Linear programming (LP) is among the most straightforward optimization techniques. It simplifies certain complicated optimization issues to help you reach a solution. Data analysts frequently encounter applications and challenges that call for linear programming solutions.

Linear programming formula

A linear programming issue includes choice parameters, a linear objective, constraints, and nonnegative limitations. The outcome of the LP model is determined by the choice variables x and y, which also reflect the ultimate answer.

Z is the function that must be optimized (maximized or minimized) to arrive at an answer. The constrictions are the limits placed on the variables to limit their values. According to the non-negative limitations, the variables must always possess a non-negative value.

The linear programming formula may be regarded as follows:

  • The function of the formula: ax + by = Z
  • The formula’s operating limitations: cx + dy ≤ e and fx + gy ≤ h
  • Other, non-negative restrictions: x ≥ 0, y ≥ 0

You need to know a few terms to understand the meaning of linear programming. First come the decision variables. These elements compete for limited resources, including products, services, etc. They are referred to as decision variables if they are connected by a linear relationship capable of determining the most optimal option.

The next component would be the objective function. The challenge must have a quantitatively measurable aim, such as maximizing profit, reducing costs, etc. Then there are the constraints, which apply to the available resources, such as limited equipment, people, materials, etc. Some constraints are observably existent but do not hamper the process of the studied issue; they are referred to as redundant constraints.

A feasible solution is the collection of all potential solutions that satisfy the constraints, expressed as values of the variables. In addition, an optimum value is the best feasible solution, the one that most effectively supports the problem's aim.

The most crucial step in addressing a linear programming issue is formulating it using the provided data. The following are the steps for solving linear programming problems:

  • Determine the choice factors
  • Develop the objective function 
  • Determine whether the function should be minimized or maximized
  • Record the limitations
  • Verify that decision variables are either larger than or equal to 0. (Non-negative inhibition)
  • Utilize either the simplex or graphical method to resolve the linear programming issue


Why is linear programming necessary?

We all encounter several target-based scenarios daily. Suppose a student has 15 days to finish an assignment, or a salesperson has one month to reach their sales quota, while another individual gets $600 to spend on an electronic device.

Let’s imagine that the student’s purpose for this endeavor is to get the highest possible grade. The salesman would strive for the highest possible monthly sales total. The purchaser of a device would want to decrease the price as much as feasible. They would attempt to purchase a device within their budget. The purpose of each situation described above is to maximize the benefits or reduce costs. Such issues are optimization challenges that may be resolved by linear programming.

An optimization issue in mathematics may include maximizing profit, minimizing costs, or minimizing resource use. We have previously described the aims of the three presented circumstances; we can now examine their respective limiting considerations. What does it imply?

In every instance, resources are scarce. In the first scenario, the deadline for completing the assignment is tight. Similarly, in example two, the individual must sell the greatest amount of things within a given time frame. In the third scenario, the individual must purchase the device within a defined budget; hence, the quantity of money is the restricting element. The lack of available resources hinders the search for optimal solutions to the presented challenges.

One cannot use the typical calculus and marginal analysis techniques in these circumstances. Calculus approaches can only handle precisely equal constraints, a limitation that linear programming does not have.

Numerous real-world applications make use of linear programming. It serves as the foundation for mathematical models that represent real-world connections. To organize and schedule production, manufacturing businesses employ linear programming extensively. To decrease travel time and fuel consumption, delivery services employ linear programming to determine the shortest route. Financial institutions establish the spectrum of investment instruments that may be supplied to customers using linear programming.

Linear programming delivers vital insights into business challenges by facilitating the identification of the ideal solution in each given circumstance.

Traits of a linear programming task

A problem being solved through linear programming will have the following traits or characteristics:

  • Subject to constraints : Regarding the resource, one should represent the restrictions in mathematical form.
  • Geared towards an objective function : The objective function of an issue should be described quantitatively.
  • A linear relationship : The function’s connection across two or more independent variables must be linear. It indicates that the variable’s degree is one.
  • Includes only finite numbers : The input and output numbers should be finite. The optimum solution is not attainable if the function contains an unlimited number of elements.
  • Does not include negative values : The variable’s value must be zero or positive. The value should not be negative.
  • Hinges on decision variables : The result is determined by the decision variable. It provides the final solution to the issue. The first step in solving any issue is to determine the decision factors.

See More: What Is COBOL Programming Language? Definition, Examples, Uses, and Challenges

There are several approaches to solving linear programming problems. The four most important approaches are:

1. The simplex method 

The simplex method is a typical methodology for tackling optimization problems in linear programming. Typically, such a problem consists of a function and some restrictions written as inequalities. The inequalities define a polygonal region, with the solution often located at one of its vertices. This approach is a method for systematically examining the vertices as potential solutions.

The technique iteratively approaches and finally achieves the maximum or minimum value of the objective function. In addition, the strategy aids the decision-maker in identifying redundant restrictions, a complete solution, multiple alternatives, and infeasible problems, thereby providing a thorough grasp of the business situation.

A twin, or dual, problem accompanies every linear programming issue. When the initial problem is resolved with the simplex approach, the answer to its dual may easily be derived as well.

George Dantzig created the simplex technique for linear programming. Dantzig developed planning methods for the United States Air Force during World War II using a desk calculator. In 1946, a coworker challenged him to automate the planning procedure to prevent him from accepting another position. Dantzig defined the issue as linear inequalities, although he did not include an objective in his formulation at the time. Without a goal, a wide variety of plausible options exists. Therefore, military-specific "ground rules" had to be applied to identify the best possible alternative.

Dantzig’s key realization was that most such ground rules might be expressed as a linear function of objectives that must be maximized.

2. Solving linear programming problems using R

Linear programming is an excellent tool for decision-making optimization. Several R packages, such as the lpSolve R package, enable the solution of linear programming problems. lpSolve is an R extension that links to a C-based framework for linear programming problem-solving. With only a few lines of open-source code, you can also obtain valuable sensitivity information (sensitivity analysis).

Whereas other free optimization solutions are available, having a linear programming R code in one’s individual code repository may save a considerable amount of time by eliminating the need to start writing the formula from scratch and requiring only the modification of the coefficients and signs of the respective matrices. This is helpful since R is regularly used for data science and statistical analysis. 

3. Graphical linear programming

The technique of resolving a linear equation system by generating a graph is often referred to as the graphical method. The same holds true for linear programming issues.

Using graphical approaches, it is simple to solve any optimization programming issue with just two variables. These variables may be referred to as x1 and x2, and most of the analysis can be performed on a two-dimensional graph using these variables. The graphical approach for solving linear programming employs the extreme or corner points method and the iso-profit (cost) efficiency line method.

The iso-cost or iso-profit approach identifies the point combination that yields the same costs/profits as any other point combinations on the same line. This is accomplished by drawing parallel lines to the gradient of the equation.

4. Linear programming using OpenSolver

OpenSolver is a tool designed to solve models of linear and integer programming. OpenSolver is an Excel VBA add-on that expands the capabilities of Excel's built-in Solver. Andrew Mason developed and updated it with students at the University of Auckland's Engineering Science department. In addition, it permits you to solve linear and mixed-integer optimization models in Google Sheets without arbitrary size restrictions.

It’s important to note that almost all linear programming and mixed-integer linear programming libraries that are widely used are authored in Fortran, C, or C++ and are native to those languages. This is because linear programming requires a lot of work with (often sizable) matrices, which is hard to do in a language like Python . This kind of library is called a solver.

5. Mixed-integer linear programming

Linear programming can be made even more robust with mixed-integer linear programming. It can solve problems in which at least one variable has a discrete integer value instead of a continuous value. At first glance, mixed-integer problems look like continuous variable problems, but they are much better in flexibility and accuracy.

Integer variables are essential for accurately representing numbers expressed with integers, like the number of airplanes made or the number of customers served. The binary variable is a type of integer variable that is very important. It can only have the values 0 or 1 and helps make yes-or-no decisions, like whether or not to build a plant or turn on a machine. You can also use them to imitate logical constraints.
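As a minimal sketch of the idea, the following uses SciPy's milp (available in SciPy 1.9+) to mark one variable as integer; the data and variable meanings are invented for illustration.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint

# Maximize 3x + 2y subject to x + y <= 4, with x integer and y continuous
# (both non-negative by default). milp minimizes, so the objective is negated.
c = np.array([-3.0, -2.0])
constraints = LinearConstraint(np.array([[1.0, 1.0]]), ub=4.0)
res = milp(c=c, constraints=constraints, integrality=np.array([1, 0]))
print(res.x, -res.fun)   # expected: [4. 0.] 12.0
```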


The Linear Programming Problem (LPP) involves finding the best value of a given linear function. The ideal value may either be the largest or smallest one. Here, the linear function provided is regarded as an objective function. The objective function may include several variables dependent on the situation and must fulfill the linear constraints, a collection of linear inequalities. One may utilize linear programming issues to determine the ideal solution for manufacturing, diet, transportation, and allocation problems, among others.

Listed below are a few illustrations of the kind of issues commonly addressed by linear programming techniques:

Example 1: Optimizing dietary needs and cost constraints

A doctor wants to combine two food kinds such that the mixture’s vitamin content includes a minimum of 8 elements of vitamin A and ten elements of vitamin C. Food ‘I’ includes 2 vitamin A units per kilogram and 1 vitamin C unit per kilogram. Food ‘II’ has 1 vitamin A unit per kilogram and 2 vitamin C units per kilogram. Food ‘I’ is priced at $5 per kilogram, whereas Food ‘II’ costs $7 per kilogram. To minimize the price of such a combination, this may be expressed as a problem of linear programming.

Example 2: Optimizing food ingredients and food volume

One type of cake calls for 200g of flour and 25g of fat, but another one calls for 100g of flour and 50g of fat. This issue may be expressed as a linear programming problem to determine the maximum number of cakes that can be baked using 5kg of flour and 1kg of fat. It also assumes that there are sufficient quantities of the other cake-making ingredients.

Example 3: Optimizing goods transportation costs

Consider a manufacturing business with two plants in cities F1 and F2 and three retail outlets in cities C1, C2, and C3. Monthly demand at the retail locations is 8, 5, and 2 units, whereas monthly supply at the plants is 6 and 9 units, respectively. Notice that the total supply and total demand are equal. We are also provided with the cost of transporting one unit from each plant to each retail outlet. This linear programming challenge aims to estimate the amount that must be shipped from each plant to each retail area to satisfy demand at the lowest possible total shipping cost.

Example 4: Optimizing product sales to arrive at maximum profit

A bakery produces two types of cookies: chocolate chip and caramel. The bakery anticipates daily demand for a minimum of 80 caramel and 120 chocolate chip cookies. Due to a lack of raw materials and labor, the bakery can produce at most 120 caramel cookies and 140 chocolate chip cookies daily. For the bakery to be viable, it must sell a minimum of 240 cookies each day. Every chocolate chip cookie sold generates $0.75 in profit, whereas each caramel cookie generates $0.88. The number of chocolate chip and caramel cookies that the bakery must produce each day to maximize profit may be determined using linear programming.
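One possible formulation of this bakery problem in code is sketched below (the variable names and solver choice are ours, not the article's); the objective is negated because linprog minimizes.

```python
from scipy.optimize import linprog

# x = chocolate chip cookies per day, y = caramel cookies per day.
# Maximize profit 0.75x + 0.88y subject to:
#   demand minimums / capacity maximums: 120 <= x <= 140, 80 <= y <= 120
#   viability: x + y >= 240 (rewritten as -x - y <= -240)
res = linprog(c=[-0.75, -0.88], A_ub=[[-1, -1]], b_ub=[-240],
              bounds=[(120, 140), (80, 120)])
print(res.x, -res.fun)   # expected: [140. 120.] and profit 210.6
```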


Linear programming, like decision trees and fuzzy logic, is essential for computing algorithms. It states that, given a set of fixed resource constraints, an optimal or best solution exists. This has myriad applications in cognitive technologies such as AI and machine learning, which try to apply mathematical formulae and statistical models to real-world problems.




Mathematics LibreTexts

4.2: Maximization By The Simplex Method


  • Rupinder Sekhon and Roberta Bloom
  • De Anza College

Learning Objectives

In this section, you will learn to solve linear programming maximization problems using the Simplex Method:

  • Identify and set up a linear program in standard maximization form
  • Convert inequality constraints to equations using slack variables
  • Set up the initial simplex tableau using the objective function and slack equations
  • Find the optimal simplex tableau by performing pivoting operations.
  • Identify the optimal solution from the optimal simplex tableau.

In the last chapter, we used the geometrical method to solve linear programming problems, but the geometrical approach will not work for problems that have more than two variables. In real life situations, linear programming problems consist of literally thousands of variables and are solved by computers. We can solve these problems algebraically, but that will not be very efficient. Suppose we were given a problem with, say, 5 variables and 10 constraints. By choosing all combinations of five equations with five unknowns, we could find all the corner points, test them for feasibility, and come up with the solution, if it exists. But the trouble is that even for a problem with so few variables, we will get more than 250 corner points, and testing each point will be very tedious. So we need a method that has a systematic algorithm and can be programmed for a computer. The method has to be efficient enough so we wouldn't have to evaluate the objective function at each corner point. We have just such a method, and it is called the simplex method .

The simplex method was developed during the Second World War by Dr. George Dantzig. His linear programming models helped the Allied forces with transportation and scheduling problems. In 1979, a Soviet scientist named Leonid Khachian developed a method called the ellipsoid algorithm which was supposed to be revolutionary, but as it turned out it is not any better than the simplex method. In 1984, Narendra Karmarkar, a research scientist at AT&T Bell Laboratories developed Karmarkar's algorithm which has been proven to be four times faster than the simplex method for certain problems. But the simplex method still works the best for most problems.

The simplex method uses an approach that is very efficient. It does not compute the value of the objective function at every point; instead, it begins with a corner point of the feasibility region where all the main variables are zero and then systematically moves from corner point to corner point, while improving the value of the objective function at each stage. The process continues until the optimal solution is found.

To learn the simplex method, we try a rather unconventional approach. We first list the algorithm, and then work a problem. We justify the reasoning behind each step during the process. A thorough justification is beyond the scope of this course.

We start out with an example we solved in the last chapter by the graphical method. This will provide us with some insight into the simplex method and at the same time give us the chance to compare a few of the feasible solutions we obtained previously by the graphical method. But first, we list the algorithm for the simplex method.

THE SIMPLEX METHOD

  • Set up the problem. That is, write the objective function and the inequality constraints.
  • Convert the inequalities into equations. This is done by adding one slack variable for each inequality.
  • Construct the initial simplex tableau. Write the objective function as the bottom row.
  • The most negative entry in the bottom row identifies the pivot column.
  • Calculate the quotients. The smallest quotient identifies a row. The element in the intersection of the column identified in step 4 and the row identified in this step is identified as the pivot element. The quotients are computed by dividing the far right column by the identified column in step 4. A quotient that is a zero, or a negative number, or that has a zero in the denominator, is ignored.
  • Perform pivoting to make all other entries in this column zero. This is done the same way as we did with the Gauss-Jordan method.
  • When there are no more negative entries in the bottom row, we are finished; otherwise, we start again from step 4.
  • Read off your answers. Get the variables using the columns with 1 and 0s. All other variables are zero. The maximum value you are looking for appears in the bottom right hand corner.
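Before working the example, here is a compact Python sketch of the tableau procedure above, for problems already in standard maximization form. This implementation is ours, added for illustration, and deliberately skips degeneracy safeguards; the step comments refer to the numbered algorithm.

```python
import numpy as np

def simplex_max(c, A, b):
    """Maximize c.x subject to A.x <= b and x >= 0 (assumes b >= 0)."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    # Steps 2-3: tableau [A | I | b] with objective row [-c | 0 | 0] at the bottom.
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    T = np.vstack([T, np.concatenate([-c, np.zeros(m + 1)])])
    while True:
        col = int(np.argmin(T[-1, :-1]))       # step 4: most negative entry
        if T[-1, col] >= 0:                    # step 7: none left, optimal
            break
        ratios = np.full(m, np.inf)            # step 5: quotients
        positive = T[:m, col] > 1e-12
        ratios[positive] = T[:m, -1][positive] / T[:m, col][positive]
        row = int(np.argmin(ratios))
        if not np.isfinite(ratios[row]):
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]                  # step 6: make the pivot 1 ...
        for r in range(m + 1):                 # ... and clear its column
            if r != row:
                T[r] -= T[r, col] * T[row]
    x = np.zeros(n)                            # step 8: read off answers
    for j in range(n):
        column = T[:m, j]
        if (np.count_nonzero(column) == 1 and np.isclose(column.max(), 1)
                and np.isclose(T[-1, j], 0)):
            x[j] = T[int(np.argmax(column)), -1]
    return x, T[-1, -1]

x, z = simplex_max([40, 30], [[1, 1], [2, 1]], [12, 16])
print(x, z)   # expected: [4. 8.] 400.0
```

Running it on the data of the example below reproduces the answer derived by hand in this section.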

Now, we use the simplex method to solve Example 3.1.1 solved geometrically in section 3.1.

Example \(\PageIndex{1}\)

Niki holds two part-time jobs, Job I and Job II. She never wants to work more than a total of 12 hours a week. She has determined that for every hour she works at Job I, she needs 2 hours of preparation time, and for every hour she works at Job II, she needs one hour of preparation time, and she cannot spend more than 16 hours for preparation. If she makes $40 an hour at Job I, and $30 an hour at Job II, how many hours should she work per week at each job to maximize her income?

In solving this problem, we will follow the algorithm listed above.

STEP 1. Set up the problem. Write the objective function and the constraints.

Since the simplex method is used for problems that consist of many variables, it is not practical to use the variables \(x\), \(y\), \(z\) etc. We use symbols \(x_1\), \(x_2\), \(x_3\), and so on.

  • \(x_1\) = The number of hours per week Niki will work at Job I and
  • \(x_2\) = The number of hours per week Niki will work at Job II.

It is customary to choose the variable that is to be maximized as \(Z\).

The problem is formulated the same way as we did in the last chapter.

\[\begin{array}{ll} \textbf { Maximize } & \mathrm{Z}=40 \mathrm{x}_{1}+30 \mathrm{x}_{2} \\ \textbf { Subject to: } & \mathrm{x}_{1}+\mathrm{x}_{2} \leq 12 \\ & 2 \mathrm{x}_{1}+\mathrm{x}_{2} \leq 16 \\ & \mathrm{x}_{1} \geq 0 ; \mathrm{x}_{2} \geq 0 \end{array}\nonumber \]

STEP 2. Convert the inequalities into equations. This is done by adding one slack variable for each inequality.

For example to convert the inequality \(x_1 + x_2 ≤ 12\) into an equation, we add a non-negative variable \(y_1\), and we get

\[x_1 + x_2 + y_1 = 12 \nonumber \]

Here the variable \(y_1\) picks up the slack, and it represents the amount by which \(x_1 + x_2\) falls short of 12. In this problem, if Niki works fewer than 12 hours, say 10, then \(y_1\) is 2. Later when we read off the final solution from the simplex table, the values of the slack variables will identify the unused amounts.

We rewrite the objective function \(Z = 40x_1 + 30x_2\) as \(- 40x_1 - 30x_2 + Z = 0\).

After adding the slack variables, our problem reads

\[\begin{array}{ll} \text { Objective function } & -40 x_{1}-30 x_{2}+Z=0 \\ \text { Subject to constraints: } & x_{1}+x_{2}+y_{1}=12 \\ & 2 x_{1}+x_{2}+y_{2}=16 \\ & x_{1} \geq 0 ; \; x_{2} \geq 0 \end{array} \nonumber \]

STEP 3. Construct the initial simplex tableau . Each inequality constraint appears in its own row. (The non-negativity constraints do not appear as rows in the simplex tableau.) Write the objective function as the bottom row.

Now that the inequalities are converted into equations, we can represent the problem into an augmented matrix called the initial simplex tableau as follows.

\[\left[\begin{array}{ccccccc} x_{1} & x_{2} & y_{1} & y_{2} & Z & | & C \\ 1 & 1 & 1 & 0 & 0 & | & 12 \\ 2 & 1 & 0 & 1 & 0 & | & 16 \\ -40 & -30 & 0 & 0 & 1 & | & 0 \end{array}\right] \nonumber \]

Here the vertical line separates the left hand side of the equations from the right side. The horizontal line separates the constraints from the objective function. The right side of the equation is represented by the column C.

The reader needs to observe that the last four columns of this matrix look like the final matrix for the solution of a system of equations. If we arbitrarily choose \(x_1 = 0\) and \(x_2 = 0\), we get

\[\left[\begin{array}{ccccc} y_{1} & y_{2} & Z & | & C \\ 1 & 0 & 0 & | & 12 \\ 0 & 1 & 0 & | & 16 \\ 0 & 0 & 1 & | & 0 \end{array}\right]\nonumber \]

which reads

\[y_1 = 12 \quad y_2 = 16 \quad Z = 0 \nonumber \]

The solution obtained by arbitrarily assigning values to some variables and then solving for the remaining variables is called the basic solution associated with the tableau. So the above solution is the basic solution associated with the initial simplex tableau. We can label the basic solution variable in the right of the last column as shown in the table below.

\[\left[\begin{array}{cccccccl} x_{1} & x_{2} & y_{1} & y_{2} & Z & | & C & \\ 1 & 1 & 1 & 0 & 0 & | & 12 & y_{1} \\ 2 & 1 & 0 & 1 & 0 & | & 16 & y_{2} \\ -40 & -30 & 0 & 0 & 1 & | & 0 & Z \end{array}\right] \nonumber \]

STEP 4. The most negative entry in the bottom row identifies the pivot column.

The most negative entry in the bottom row is -40; therefore the column 1 is identified.

(Pivot column: the \(x_1\) column of the tableau above, identified by the -40 in the bottom row.)

Question Why do we choose the most negative entry in the bottom row?

Answer The most negative entry in the bottom row represents the largest coefficient in the objective function - the coefficient whose entry will increase the value of the objective function the quickest.

The simplex method begins at a corner point where all the main variables, the variables that have symbols such as \(x_1\), \(x_2\), \(x_3\) etc., are zero. It then moves from a corner point to the adjacent corner point always increasing the value of the objective function. In the case of the objective function \(Z = 40x_1 + 30x_2\), it will make more sense to increase the value of \(x_1\) rather than \(x_2\). The variable \(x_1\) represents the number of hours per week Niki works at Job I. Since Job I pays $40 per hour as opposed to Job II which pays only $30, the variable \(x_1\) will increase the objective function by $40 for a unit of increase in the variable \(x_1\).

STEP 5. Calculate the quotients. The smallest quotient identifies a row. The element in the intersection of the column identified in step 4 and the row identified in this step is identified as the pivot element.

Following the algorithm, in order to calculate the quotient, we divide the entries in the far right column by the entries in column 1, excluding the entry in the bottom row.

\[12 \div 1 = 12 \qquad \qquad 16 \div 2 = 8 \nonumber \]

The smallest of the two quotients, 12 and 8, is 8. Therefore row 2 is identified. The intersection of column 1 and row 2 is the entry 2, which has been highlighted. This is our pivot element.

Question Why do we find quotients, and why does the smallest quotient identify a row?

Answer When we choose the most negative entry in the bottom row, we are trying to increase the value of the objective function by bringing in the variable \(x_1\). But we cannot choose any value for \(x_1\). Can we let \(x_1 = 100\)? Definitely not! That is because Niki never wants to work for more than 12 hours at both jobs combined: \(x_1 + x_2 ≤ 12\). Can we let \(x_1 = 12\)? Again, the answer is no because the preparation time for Job I is two times the time spent on the job. Since Niki never wants to spend more than 16 hours for preparation, the maximum time she can work is 16 ÷ 2 = 8.

Now you see the purpose of computing the quotients; using the quotients to identify the pivot element guarantees that we do not violate the constraints.

Question Why do we identify the pivot element?

Answer As we have mentioned earlier, the simplex method begins with a corner point and then moves to the next corner point always improving the value of the objective function. The value of the objective function is improved by changing the number of units of the variables. We may add the number of units of one variable, while throwing away the units of another. Pivoting allows us to do just that.

The variable whose units are being added is called the entering variable , and the variable whose units are being replaced is called the departing variable . The entering variable in the above table is \(x_1\), and it was identified by the most negative entry in the bottom row. The departing variable \(y_2\) was identified by the lowest of all quotients.

STEP 6. Perform pivoting to make all other entries in this column zero.

In chapter 2, we used pivoting to obtain the row echelon form of an augmented matrix. Pivoting is a process of obtaining a 1 in the location of the pivot element, and then making all other entries zeros in that column. So now our job is to make our pivot element a 1 by dividing the entire second row by 2. The result follows.

\[\left[\begin{array}{ccccccc} x_{1} & x_{2} & y_{1} & y_{2} & Z & | & C \\ 1 & 1 & 1 & 0 & 0 & | & 12 \\ 1 & 1/2 & 0 & 1/2 & 0 & | & 8 \\ -40 & -30 & 0 & 0 & 1 & | & 0 \end{array}\right] \nonumber \]

To obtain a zero in the entry first above the pivot element, we multiply the second row by -1 and add it to row 1. We get

\[\left[\begin{array}{ccccccc} x_{1} & x_{2} & y_{1} & y_{2} & Z & | & C \\ 0 & 1/2 & 1 & -1/2 & 0 & | & 4 \\ 1 & 1/2 & 0 & 1/2 & 0 & | & 8 \\ -40 & -30 & 0 & 0 & 1 & | & 0 \end{array}\right] \nonumber \]

To obtain a zero in the element below the pivot, we multiply the second row by 40 and add it to the last row.

\[\left[\begin{array}{ccccccc} x_{1} & x_{2} & y_{1} & y_{2} & Z & | & C \\ 0 & 1/2 & 1 & -1/2 & 0 & | & 4 \\ 1 & 1/2 & 0 & 1/2 & 0 & | & 8 \\ 0 & -10 & 0 & 20 & 1 & | & 320 \end{array}\right] \nonumber \]

We now determine the basic solution associated with this tableau. By arbitrarily choosing \(x_2 = 0\) and \(y_2 = 0\), we obtain \(x_1 = 8\), \(y_1 = 4\), and \(z = 320\). If we write the augmented matrix, whose left side is a matrix with columns that have one 1 and all other entries zeros, we get the following matrix stating the same thing.

\[\left[\begin{array}{ccccc} \mathrm{x}_{1} & \mathrm{y}_1 & \mathrm{Z} & | & \mathrm{C} \\ 0 & 1 & 0 & | & 4 \\ 1 & 0 & 0 & | & 8 \\ 0 & 0 & 1 & | & 320 \end{array}\right] \nonumber \]

We can restate the solution associated with this matrix as \(x_1 = 8\), \(x_2 = 0\), \(y_1 = 4\), \(y_2 = 0\) and \(z = 320\). At this stage of the game, it reads that if Niki works 8 hours at Job I, and no hours at Job II, her profit Z will be $320. Recall from Example 3.1.1 in section 3.1 that (8, 0) was one of our corner points. Here \(y_1 = 4\) and \(y_2 = 0\) mean that she will be left with 4 hours of working time and no preparation time.

STEP 7. When there are no more negative entries in the bottom row, we are finished; otherwise, we start again from step 4.

Since there is still a negative entry, -10, in the bottom row, we need to begin again from step 4. This time we will not repeat the details of every step; instead, we will identify the column and row that give us the pivot element, and highlight the pivot element in bold. The result is as follows.

\[\left[\begin{array}{ccccccc} \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{y}_{1} & \mathrm{y}_{2} & \mathrm{Z} & | & \mathrm{C} \\ 0 & \mathbf{\frac{1}{2}} & 1 & -\frac{1}{2} & 0 & | & 4 \\ 1 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & | & 8 \\ 0 & -10 & 0 & 20 & 1 & | & 320 \end{array}\right] \quad \begin{array}{l} 4 \div \frac{1}{2} = 8 \\ 8 \div \frac{1}{2} = 16 \end{array} \nonumber \]

We make the pivot element 1 by multiplying row 1 by 2, and we get

\[\left[\begin{array}{ccccccc} \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{y}_{1} & \mathrm{y}_{2} & \mathrm{Z} & | & \mathrm{C} \\ 0 & 1 & 2 & -1 & 0 & | & 8 \\ 1 & \frac{1}{2} & 0 & \frac{1}{2} & 0 & | & 8 \\ 0 & -10 & 0 & 20 & 1 & | & 320 \end{array}\right] \nonumber \]

Now to make all other entries as zeros in this column, we first multiply row 1 by -1/2 and add it to row 2, and then multiply row 1 by 10 and add it to the bottom row.

\[\left[\begin{array}{ccccccc} \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{y}_{1} & \mathrm{y}_{2} & \mathrm{Z} & | & \mathrm{C} \\ 0 & 1 & 2 & -1 & 0 & | & 8 \\ 1 & 0 & -1 & 1 & 0 & | & 4 \\ 0 & 0 & 20 & 10 & 1 & | & 400 \end{array}\right] \nonumber \]

We no longer have negative entries in the bottom row, therefore we are finished.

Question Why are we finished when there are no negative entries in the bottom row?

Answer The answer lies in the bottom row. The bottom row corresponds to the equation:

\[0 x_{1}+0 x_{2}+20 y_{1}+10 y_{2}+Z=400 \quad \text { or } \quad Z=400-20 y_{1}-10 y_{2} \nonumber \]

Since all variables are non-negative, the highest value \(Z\) can ever achieve is 400, and that will happen only when \(y_1\) and \(y_2\) are zero.

STEP 8. Read off your answers.

We now read off our answers, that is, we determine the basic solution associated with the final simplex tableau. Again, we look at the columns that have a 1 and all other entries zeros. Since the columns labeled \(y_1\) and \(y_2\) are not such columns, we arbitrarily choose \(y_1 = 0\), and \(y_2 = 0\), and we get

\[\left[\begin{array}{ccccc} \mathrm{x}_{1} & \mathrm{x}_{2} & \mathrm{Z} & | & \mathrm{C} \\ 0 & 1 & 0 & | & 8 \\ 1 & 0 & 0 & | & 4 \\ 0 & 0 & 1 & | & 400 \end{array}\right] \nonumber \]

The matrix reads \(x_1 = 4\), \(x_2 = 8\), and \(z = 400\).

The final solution says that if Niki works 4 hours at Job I and 8 hours at Job II, she will maximize her income to $400. Since both slack variables are zero, it means that she would have used up all the working time, as well as the preparation time, and none will be left.
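As a cross-check, the same problem can be handed to SciPy's linprog routine (documented further down this page). This is a minimal sketch, assuming SciPy is installed; linprog minimizes, so the objective is negated.

from scipy.optimize import linprog

c = [-40, -30]            # negated coefficients of Z = 40*x1 + 30*x2
A_ub = [[1, 1],           # x1 + x2 <= 12  (working time)
        [2, 1]]           # 2*x1 + x2 <= 16 (preparation time)
b_ub = [12, 16]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)   # default bounds give x1, x2 >= 0
print(res.x, -res.fun)                   # expected: [4. 8.] 400.0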

Graphical Method Calculator – Linear Programming

Do you often find yourself confused by linear programming problems that you can't solve? Maybe it's time to get some help. Well… You're in luck! Solving optimization exercises with the graphical method will be easier with our graphical method calculator for linear programming problems.


This application is a simplified version of our graphing method calculator, available to students who have a membership with us; however, it has all the basic functionality required to graph most linear programming exercises in your school.

This free version, like other available free linear programming calculators, only shows the final result (optimal solution and graph) of the problem. Since many students cannot adequately understand how the graphs were generated, we have developed a version with detailed step-by-step explanations of the solution of the problem.

Advanced Functions of the Graphical Method of Linear Programming Calculator

Our membership aims to help you improve your problem-solving skills and perform better at school. That is why we include a series of online resources, in which linear programming is a must. In this application you will find the following:

  • Calculation of the intersections with the axes to graph each constraint.
  • Explanation of the area to shade depending on the type of inequality.
  • Determination of the feasible region.
  • Location of the objective function on the graph, if applicable.
  • Determination of special cases such as unbounded or infeasible solutions.
  • Solving exercises with inequalities or equations.
  • Support for a maximum of 10 constraints and 2 variables.

You can find complete examples of how the application works at this link.

How to use the Online Graphical Method Calculator

The use of our calculator is very simple and intuitive; nevertheless, we will explain its use step by step:

  • Before starting, you must have formulated the model to be optimized. Remember that the graphical method normally works with 2 decision variables.
  • Enter the coefficients of the objective function and the constraints; you can use integer values as well as fractions and decimals. You must also select the sign of each inequality.
  • Click on “Solve / Graph”.
  • In the free version, you will immediately get the final graph and results. In the full version, you will be able to see the step-by-step process, from the creation of the graphs to the final result.


This calculator facilitates your learning of the graphical method and combines well with our simplex method application (two phases) and our Big M Method calculator.

Final Reflection

We know that the best way to learn something is to have the right tools to do it. At PM Calculators we are working to bring you the best tools gathered in one place. If you have any recommendation to improve our calculator, write to us through our contact form.

Graphical Solution of Linear Programming Problems

Linear programming is one of the simplest ways of optimizing a problem. Through this method, we can formulate a real-world problem into a mathematical model. There are various methods for solving Linear Programming Problems, and one of the easiest and most important methods for solving LPP is the graphical method. In Graphical Solution of Linear Programming, we use graphs to solve LPP.

We can solve a wide variety of problems using linear programming in different sectors, but it is generally used for problems in which we have to maximize profit, minimize cost, or minimize the use of resources. In this article, we will learn about graphical solutions of linear programming problems, their types, and solved examples in detail.

Table of Content

  • Graphical Solution of a Linear Programming Problem
  • Corner Point Method
  • Iso-Cost Method
  • Solved Examples

Linear programming is a mathematical technique employed to determine the most favorable solution for a problem characterized by linear relationships. It is a valuable tool in fields such as operations research, economics, and engineering, where efficient resource allocation and optimization are critical.

Now let’s learn about the types of linear programming problems.

Types of Linear Programming Problems

Based on the nature of the problem, linear programming problems are mainly of three types:

Manufacturing Problem: In this type of problem, some constraints like manpower, output units/hour, and machine hours are given in the form of a linear equation. And we have to find an optimal solution to make a maximum profit or minimum cost.

Diet Problem: These problems are generally easy to understand and have fewer variables. Our main objective in this kind of problem is to minimize the cost of diet and to keep a minimum amount of every constituent in the diet. 

Transportation Problem: In these problems, we have to find the cheapest way of transportation by choosing the shortest route/optimized path.

Some commonly used terms in linear programming problems are,

Objective function: The linear function of the form Z = ax + by, where a and b are constants, which is to be maximized or minimized, is called the objective function. For example, if Z = 10x + 7y, the variables x and y are called the decision variables.

Constraints: The restrictions that are applied to a linear inequality are called constraints.

  • Non-Negative Constraints: x ≥ 0, y ≥ 0, etc.
  • General Constraints: x + y > 40, 2x + 9y ≥ 40, etc.

Optimization problem: A problem that seeks to maximize or minimize a linear function subject to linear inequality constraints is called an optimization problem.

Feasible Region: The common region determined by all the given constraints, including the non-negative constraints (x ≥ 0, y ≥ 0), is called the feasible region (or solution area) of the problem. The region other than the feasible region is known as the infeasible region.

Feasible Solutions: Points within or on the boundary of the feasible region represent feasible solutions of the problem. Any point outside the feasible region is called an infeasible solution.

Optimal (Most Feasible) Solution: Any point in the feasible region that gives the optimal value (maximum or minimum) of the objective function is called an optimal solution.

  • The optimal value of the objective function, if it exists, occurs at a corner point (vertex) of the feasible region.
  • If the feasible region is unbounded, a maximum or minimum may fail to exist.
  • If the constraints have no point in common, then there is no feasible solution.

We can solve linear programming problems graphically using two methods: the corner point method and the iso-cost method.

Corner Point Method

To solve a problem using the corner point method, follow the steps below (a Python sketch of the whole procedure appears after the list):

Step 1: Create the mathematical formulation from the given problem, if it is not already given.

Step 2: Now plot the graph using the given constraints and find the feasible region.

Step 3: Find the coordinates of the vertices (corner points) of the feasible region obtained in step 2.

Step 4: Now evaluate the objective function at each corner point of the feasible region. Let N and n denote the largest and smallest of these values, respectively.

Step 5: If the feasible region is bounded, then N and n are the maximum and minimum values of the objective function. If the feasible region is unbounded, then:

  • N is the maximum value of the objective function if the open half-plane determined by ax + by > N has no point in common with the feasible region; otherwise, the objective function has no maximum.
  • n is the minimum value of the objective function if the open half-plane determined by ax + by < n has no point in common with the feasible region; otherwise, the objective function has no minimum.
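For two decision variables and "≤" constraints, the whole procedure can be sketched in a few lines of Python. This is an illustrative sketch only; the function name corner_point_maximize and the data layout are assumptions, not from any library.

import itertools
import numpy as np

def corner_point_maximize(obj, constraints):
    """obj = (c1, c2); constraints = list of (a, b, r) meaning a*x + b*y <= r."""
    # Treat x = 0 and y = 0 (the non-negative constraints) as boundary lines too.
    lines = constraints + [(1, 0, 0), (0, 1, 0)]
    vertices = []
    for (a1, b1, r1), (a2, b2, r2) in itertools.combinations(lines, 2):
        A = np.array([[a1, b1], [a2, b2]], dtype=float)
        if abs(np.linalg.det(A)) < 1e-12:
            continue                      # parallel lines: no intersection
        x, y = np.linalg.solve(A, [r1, r2])
        feasible = x >= -1e-9 and y >= -1e-9 and all(
            a * x + b * y <= r + 1e-9 for a, b, r in constraints)
        if feasible:
            vertices.append((x, y))       # a corner point of the feasible region
    # Step 4: evaluate the objective at every corner point and keep the best.
    return max(vertices, key=lambda v: obj[0] * v[0] + obj[1] * v[1])

# Example 1 below: maximize Z = 8x + y subject to x + y <= 40, 2x + y <= 60.
print(corner_point_maximize((8, 1), [(1, 1, 40), (2, 1, 60)]))  # optimum at (30.0, 0.0)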

Examples on LPP using Corner Point Methods

Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 8x + y

Constraints are,

  • x + y ≤ 40
  • 2x + y ≤ 60
  • x ≥ 0, y ≥ 0

Step 1: Write down the constraints (listed above).

Step 2: Draw the graph using these constraints. 

Graph for Z = 8x + y

Here both constraints are of the "less than or equal to" type, so they are satisfied by the region below the lines (towards the origin). You can find the vertices of the feasible region from the graph, or you can calculate them using the given constraints:

 x + y = 40 …(i)

2x + y = 60 …(ii)

Now multiply eq(i) by 2 and subtract eq(ii) from it; we get

y = 20

Now put the value of y into either equation; we get

x = 20

So the third point of the feasible region is (20, 20)

Step 3: To find the maximum value of Z = 8x + y, compare the value of Z at each corner point of the feasible region:

Z(0, 0) = 0, Z(30, 0) = 240, Z(20, 20) = 180, Z(0, 40) = 40

So the maximum value of Z = 240 at the point x = 30, y = 0.

Example 2: One kind of cake requires 200 g of flour and 25 g of fat, and another kind of cake requires 100 g of flour and 50 g of fat. Find the maximum number of cakes that can be made from 5 kg of flour and 1 kg of fat, assuming that there is no shortage of the other ingredients used in making the cakes.

Solution: 

Step 1: Organize the given data in a table for easy understanding (optional):

                  Flour (g)   Fat (g)
Cake type I (x)   200         25
Cake type II (y)  100         50
Available         5000        1000

Step 2: Create linear equation using inequality

  • 200x + 100y ≤ 5000 or 2x + y ≤ 50
  • 25x + 50y ≤ 1000 or x + 2y ≤ 40
  • Also, x ≥ 0 and y ≥ 0

Step 3: Create a graph using the inequalities (remember to take only the positive x- and y-axes):

Corner Point Method Example 2

Step 4: To find the maximum number of cakes, maximize Z = x + y. Compare the value of Z at each corner point of the feasible region: Z(0, 0) = 0, Z(25, 0) = 25, Z(20, 10) = 30, Z(0, 20) = 20.

Clearly, Z is maximum at the coordinate (20, 10). So the maximum number of cakes that can be baked is Z = 20 + 10 = 30.
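The answer can also be cross-checked with SciPy's linprog (documented later on this page); a minimal sketch, negating the objective because linprog minimizes:

from scipy.optimize import linprog

res = linprog(c=[-1, -1],             # maximize Z = x + y
              A_ub=[[2, 1], [1, 2]],  # 2x + y <= 50, x + 2y <= 40
              b_ub=[50, 40])          # default bounds give x, y >= 0
print(res.x, -res.fun)                # expected: [20. 10.] 30.0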

Iso-Cost Method

The iso-cost (or iso-profit) method uses lines along which every combination of points produces the same cost (or profit). These lines are drawn parallel to one another, each with the slope of the objective function.

To solve a problem using the iso-cost method, follow the steps below (a small plotting sketch appears after the list). Steps 1 and 2 are the same as in the corner point method: formulate the problem, then plot the constraints and find the feasible region.

Step 3: Now find the coordinates of the vertices of the feasible region obtained in step 2.

Step 4: Choose a convenient value of Z (the objective function) and draw the corresponding line.

Step 5: If the objective function is of maximization type, draw a line parallel to the objective function line that is farthest from the origin while still having at least one point in common with the feasible region. If the objective function is of minimization type, draw a parallel line that is nearest to the origin and has at least one point in common with the feasible region.

Step 6: Now read off the coordinates of the common point found in step 5. This point gives the optimal solution and the value of the objective function.
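As an illustration, the iso-profit lines for Example 1 below (maximize Z = 50x + 15y subject to 5x + y ≤ 100 and x + y ≤ 50) can be sketched in Python, assuming matplotlib is available; each dashed line collects the points with equal profit:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 50, 400)
plt.plot(x, 100 - 5 * x, label="5x + y = 100")  # first constraint boundary
plt.plot(x, 50 - x, label="x + y = 50")         # second constraint boundary
for z in (300, 600, 900, 1187.5):               # iso-profit lines 50x + 15y = z
    plt.plot(x, (z - 50 * x) / 15, "--", label=f"Z = {z}")
plt.xlim(0, 50)
plt.ylim(0, 100)
plt.legend()
plt.show()  # the Z = 1187.5 line just touches the feasible region at (12.5, 37.5)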


Solved Examples of Graphical Solution of LPP

Example 1: Solve the given linear programming problem graphically:

Maximize: Z = 50x + 15y

Constraints are,

  • 5x + y ≤ 100
  • x + y ≤ 50
  • x ≥ 0, y ≥ 0

Step 1: Finding points 

We can also write the constraints as equations:

5x + y = 100….(i)

x + y = 50….(ii)

Now we find the points 

So we take eq(i); in this equation:

When x = 0, y = 100

When y = 0, x = 20

So, points are (0, 100) and (20, 0)

Similarly, in eq(ii)

When x = 0, y = 50

When y = 0, x = 50

So, points are (0, 50) and (50, 0)

Step 2: Now plot these points in the graph and find the feasible region.

Solved Example 1: Z = 50x + 15y

Step 3: Now we find a convenient value of Z (the objective function).

To find a convenient value of Z, take the LCM of the coefficients of 50x + 15y, i.e., 150; a convenient multiple of it, say 300, makes the intercepts easy to plot. Hence,

50x + 15y = 300

Put x = 0, y = 20

Put y = 0, x = 6

Draw the line of this objective function on the graph:

Solved Example 1 Z = 50x + 15y Step 3

Step 4: Since the objective function is of maximization type, we draw a line parallel to the objective function line that is farthest from the origin and has exactly one point in common with the feasible region.

Solved Example 1: Z = 50x + 15y Step 4

Step 5: We have a common point that is (12.5, 37.5) with the feasible region. So, now we find the optimal solution of the objective function:

Z = 50x + 15y

Z = 50(12.5) + 15(37.5)

Z = 625 + 562.5 = 1187.5

Thus, the maximum value of Z with the given constraints is 1187.5, attained at the point (12.5, 37.5).

Example 2: Solve the given linear programming problem graphically:

Minimize: Z = 20x + 10y

  • x + 2y ≤ 40
  • 3x + y ≥ 30
  • 4x + 3y ≥ 60
  • x ≥ 0, y ≥ 0

Step 1: Write the constraints as equations:

l1: x + 2y = 40 ….(i)

l2: 3x + y = 30 ….(ii)

l3: 4x + 3y = 60 ….(iii)

So we take eq(i); in this equation:

When x = 0, y = 20

When y = 0, x = 40

So, points are (0, 20) and (40, 0)

Similarly, in eq(ii)

When x = 0, y = 30

When y = 0, x = 10

So, points are (0, 30) and (10, 0)

Similarly, in eq(iii)

When x = 0, y = 20

When y = 0, x = 15

So, points are (0, 20) and (15, 0)

Step 2: Now plot these points on the graph and find the feasible region.

Solved Example 2: Z = 20x + 10y Step 2

Step 3: Now we find a convenient value of Z (the objective function). So let us assume Z = 0:

20x + 10y = 0

Solved Example 2: Z = 20x + 10y Step 3

Step 4: Since the objective function is of minimization type, we draw a line parallel to the objective function line that is nearest to the origin and has at least one point in common with the feasible region.

Solved Example 2: Z = 20x + 10y Step 4

This parallel line touches the feasible region at point A, so now we find the coordinates of point A.

As you can see from the graph, lines l2 and l3 intersect at point A, so we find the coordinates of A by solving these equations:

l2: 3x + y = 30 ….(iv)

l3: 4x + 3y = 60 ….(v)

Now multiply eq(iv) by 4 and eq(v) by 3; we get

12x + 4y = 120 

12x + 9y = 180 

Now subtract the first equation from the second; we get 5y = 60, so y = 12. Substituting back into eq(iv) gives x = 6. So the coordinates of point A are (6, 12).

Step 5: We have a common point that is (6, 12) with the feasible region. So, now we find the optimal solution of the objective function:

Z = 20x + 10y

Z = 20(6) + 10(12)

Z = 120 + 120

Thus, the minimum value of Z with the given constraints is 240.
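Again, the result can be cross-checked with SciPy's linprog; a minimal sketch, with the "≥" constraints multiplied by -1 to fit the "≤" form that linprog expects:

from scipy.optimize import linprog

res = linprog(c=[20, 10],                         # minimize Z = 20x + 10y
              A_ub=[[1, 2], [-3, -1], [-4, -3]],  # x + 2y <= 40, 3x + y >= 30, 4x + 3y >= 60
              b_ub=[40, -30, -60])                # default bounds give x, y >= 0
print(res.x, res.fun)                             # expected: [ 6. 12.] 240.0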

FAQs on Graphical Solution of LPP

1. What are Linear Programming Problems (LPP)?

Linear Programming Problems (LPP) are mathematical problems in which a linear objective function is maximized or minimized subject to linear constraints.

2. What are the Types of Solutions to Linear Programming Problems (LPP)?

There are various types of solutions to Linear Programming Problems, including:

  • solution by the Simplex Method
  • solution by the R method
  • solution by the Graphical Method

3. What are the Types of Graphical Solutions of Linear Programming Problems (LPP)?

There are two common types of graphical solutions of linear programming problems:

  • the Corner Point Method
  • the Iso-Cost Method


scipy.optimize.linprog

Linear programming: minimize a linear objective function subject to linear equality and inequality constraints.

Linear programming solves problems of the following form:

\[\begin{aligned} \min_x \quad & c^T x \\ \text{such that} \quad & A_{ub} x \leq b_{ub}, \\ & A_{eq} x = b_{eq}, \\ & l \leq x \leq u, \end{aligned} \nonumber \]

where \(x\) is a vector of decision variables; \(c\) , \(b_{ub}\) , \(b_{eq}\) , \(l\) , and \(u\) are vectors; and \(A_{ub}\) and \(A_{eq}\) are matrices.

Alternatively, that’s:

minimize:    c @ x
such that:   A_ub @ x <= b_ub
             A_eq @ x == b_eq
             lb <= x <= ub

Note that by default lb = 0 and ub = None. Other bounds can be specified with bounds.

Parameters

c : 1-D array
The coefficients of the linear objective function to be minimized.

A_ub : 2-D array, optional
The inequality constraint matrix. Each row of A_ub specifies the coefficients of a linear inequality constraint on x.

b_ub : 1-D array, optional
The inequality constraint vector. Each element represents an upper bound on the corresponding value of A_ub @ x.

A_eq : 2-D array, optional
The equality constraint matrix. Each row of A_eq specifies the coefficients of a linear equality constraint on x.

b_eq : 1-D array, optional
The equality constraint vector. Each element of A_eq @ x must equal the corresponding element of b_eq.

bounds : sequence, optional
A sequence of (min, max) pairs for each element in x, defining the minimum and maximum values of that decision variable. If a single tuple (min, max) is provided, then min and max will serve as bounds for all decision variables. Use None to indicate that there is no bound. For instance, the default bound (0, None) means that all decision variables are non-negative, and the pair (None, None) means no bounds at all, i.e. all variables are allowed to be any real.

method : str, optional
The algorithm used to solve the standard form problem. ‘highs’ (default), ‘highs-ds’, ‘highs-ipm’, ‘interior-point’ (legacy), ‘revised simplex’ (legacy), and ‘simplex’ (legacy) are supported. The legacy methods are deprecated and will be removed in SciPy 1.11.0.

callback : callable, optional
If a callback function is provided, it will be called at least once per iteration of the algorithm. The callback function must accept a single scipy.optimize.OptimizeResult consisting of the following fields:

x : 1-D array
The current solution vector.

fun : float
The current value of the objective function c @ x.

success : bool
True when the algorithm has completed successfully.

slack : 1-D array
The (nominally positive) values of the slack, b_ub - A_ub @ x.

con : 1-D array
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.

phase : int
The phase of the algorithm being executed.

status : int
An integer representing the status of the algorithm.

0 : Optimization proceeding nominally.

1 : Iteration limit reached.

2 : Problem appears to be infeasible.

3 : Problem appears to be unbounded.

4 : Numerical difficulties encountered.

nit : int
The current iteration number.

message : str
A string descriptor of the algorithm status.

Callback functions are not currently supported by the HiGHS methods.

options : dict, optional
A dictionary of solver options. All methods accept the following options:

maxiter : int
Maximum number of iterations to perform. Default: see method-specific documentation.

disp : bool
Set to True to print convergence messages. Default: False.

presolve : bool
Set to False to disable automatic presolve. Default: True.

All methods except the HiGHS solvers also accept:

tol : float
A tolerance which determines when a residual is “close enough” to zero to be considered exactly zero.

autoscale : bool
Set to True to automatically perform equilibration. Consider using this option if the numerical values in the constraints are separated by several orders of magnitude. Default: False.

rr : bool
Set to False to disable automatic redundancy removal. Default: True.

rr_method : string
Method used to identify and remove redundant rows from the equality constraint matrix after presolve. For problems with dense input, the available methods for redundancy removal are:

“SVD”: Repeatedly performs singular value decomposition on the matrix, detecting redundant rows based on nonzeros in the left singular vectors that correspond with zero singular values. May be fast when the matrix is nearly full rank.

“pivot”: Uses the algorithm presented in [5] to identify redundant rows.

“ID”: Uses a randomized interpolative decomposition. Identifies columns of the matrix transpose not used in a full-rank interpolative decomposition of the matrix.

None: Uses “svd” if the matrix is nearly full rank, that is, the difference between the matrix rank and the number of rows is less than five. If not, uses “pivot”. The behavior of this default is subject to change without prior notice.

Default: None. For problems with sparse input, this option is ignored, and the pivot-based algorithm presented in [5] is used.

For method-specific options, see show_options('linprog') .

x0 : 1-D array, optional
Guess values of the decision variables, which will be refined by the optimization algorithm. This argument is currently used only by the ‘revised simplex’ method, and can only be used if x0 represents a basic feasible solution.

integrality : 1-D array or int, optional
Indicates the type of integrality constraint on each decision variable.

0 : Continuous variable; no integrality constraint.

1 : Integer variable; decision variable must be an integer within bounds .

2 : Semi-continuous variable; decision variable must be within bounds or take value 0 .

3 : Semi-integer variable; decision variable must be an integer within bounds or take value 0 .

By default, all variables are continuous.

For mixed integrality constraints, supply an array of shape c.shape . To infer a constraint on each decision variable from shorter inputs, the argument will be broadcasted to c.shape using np.broadcast_to .

This argument is currently used only by the 'highs' method and ignored otherwise.
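As an illustration, a hypothetical two-variable problem can be forced to an integer solution like this (a sketch, assuming SciPy 1.9 or newer, where integrality is available):

from scipy.optimize import linprog

# Maximize x0 + 2*x1 (so minimize the negation) subject to x0 + x1 <= 3.5.
res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[3.5],
              integrality=[1, 1],   # both variables must be integers
              method="highs")
print(res.x, res.fun)               # expected: [0. 3.] -6.0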

Returns

res : OptimizeResult
A scipy.optimize.OptimizeResult consisting of the fields below. Note that the return types of the fields may depend on whether the optimization was successful, therefore it is recommended to check OptimizeResult.status before relying on the other fields:

x : 1-D array
The values of the decision variables that minimize the objective function while satisfying the constraints.

fun : float
The optimal value of the objective function c @ x.

slack : 1-D array
The (nominally positive) values of the slack variables, b_ub - A_ub @ x.

con : 1-D array
The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x.

success : bool
True when the algorithm succeeds in finding an optimal solution.

status : int
An integer representing the exit status of the algorithm.

0 : Optimization terminated successfully.

1 : Iteration limit reached.

2 : Problem appears to be infeasible.

3 : Problem appears to be unbounded.

4 : Numerical difficulties encountered.

nit : int
The total number of iterations performed in all phases.

message : str
A string descriptor of the exit status of the algorithm.

See also: show_options, for additional options accepted by the solvers.

Notes

This section describes the available solvers that can be selected by the ‘method’ parameter.

‘highs-ds’ and ‘highs-ipm’ are interfaces to the HiGHS simplex and interior-point method solvers [13] , respectively. ‘highs’ (default) chooses between the two automatically. These are the fastest linear programming solvers in SciPy, especially for large, sparse problems; which of these two is faster is problem-dependent. The other solvers ( ‘interior-point’ , ‘revised simplex’ , and ‘simplex’ ) are legacy methods and will be removed in SciPy 1.11.0.

Method highs-ds is a wrapper of the C++ high performance dual revised simplex implementation (HSOL) [13], [14]. Method highs-ipm is a wrapper of a C++ implementation of an interior-point method [13]; it features a crossover routine, so it is as accurate as a simplex solver. Method highs chooses between the two automatically. For new code involving linprog, we recommend explicitly choosing one of these three method values.

New in version 1.6.0.

Method interior-point uses the primal-dual path following algorithm as outlined in [4] . This algorithm supports sparse constraint matrices and is typically faster than the simplex methods, especially for large, sparse problems. Note, however, that the solution returned may be slightly less accurate than those of the simplex methods and will not, in general, correspond with a vertex of the polytope defined by the constraints.

New in version 1.0.0.

Method revised simplex uses the revised simplex method as described in [9] , except that a factorization [11] of the basis matrix, rather than its inverse, is efficiently maintained and used to solve the linear systems at each iteration of the algorithm.

New in version 1.3.0.

Method simplex uses a traditional, full-tableau implementation of Dantzig’s simplex algorithm [1] , [2] ( not the Nelder-Mead simplex). This algorithm is included for backwards compatibility and educational purposes.

New in version 0.15.0.

Before applying interior-point , revised simplex , or simplex , a presolve procedure based on [8] attempts to identify trivial infeasibilities, trivial unboundedness, and potential problem simplifications. Specifically, it checks for:

rows of zeros in A_eq or A_ub , representing trivial constraints;

columns of zeros in A_eq and A_ub , representing unconstrained variables;

column singletons in A_eq , representing fixed variables; and

column singletons in A_ub , representing simple bounds.

If presolve reveals that the problem is unbounded (e.g. an unconstrained and unbounded variable has negative cost) or infeasible (e.g., a row of zeros in A_eq corresponds with a nonzero in b_eq ), the solver terminates with the appropriate status code. Note that presolve terminates as soon as any sign of unboundedness is detected; consequently, a problem may be reported as unbounded when in reality the problem is infeasible (but infeasibility has not been detected yet). Therefore, if it is important to know whether the problem is actually infeasible, solve the problem again with option presolve=False .

If neither infeasibility nor unboundedness are detected in a single pass of the presolve, bounds are tightened where possible and fixed variables are removed from the problem. Then, linearly dependent rows of the A_eq matrix are removed, (unless they represent an infeasibility) to avoid numerical difficulties in the primary solve routine. Note that rows that are nearly linearly dependent (within a prescribed tolerance) may also be removed, which can change the optimal solution in rare cases. If this is a concern, eliminate redundancy from your problem formulation and run with option rr=False or presolve=False .

Several potential improvements can be made here: additional presolve checks outlined in [8] should be implemented, the presolve routine should be run multiple times (until no further simplifications can be made), and more of the efficiency improvements from [5] should be implemented in the redundancy removal routines.

After presolve, the problem is transformed to standard form by converting the (tightened) simple bounds to upper bound constraints, introducing non-negative slack variables for inequality constraints, and expressing unbounded variables as the difference between two non-negative variables. Optionally, the problem is automatically scaled via equilibration [12] . The selected algorithm solves the standard form problem, and a postprocessing routine converts the result to a solution to the original problem.

References

[1] Dantzig, George B., Linear Programming and Extensions. Rand Corporation Research Study, Princeton Univ. Press, Princeton, NJ, 1963.

[2] Hillier, S.H. and Lieberman, G.J. (1995), “Introduction to Mathematical Programming”, McGraw-Hill, Chapter 4.

[3] Bland, Robert G. “New finite pivoting rules for the simplex method.” Mathematics of Operations Research (2), 1977: pp. 103-107.

[4] Andersen, Erling D., and Knud D. Andersen. “The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm.” High Performance Optimization. Springer US, 2000. 197-232.

[5] Andersen, Erling D. “Finding all linearly dependent rows in large-scale linear programming.” Optimization Methods and Software 6.3 (1995): 219-227.

[6] Freund, Robert M. “Primal-Dual Interior-Point Methods for Linear Programming based on Newton’s Method.” Unpublished course notes, March 2004. Available 2/25/2017 at https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec14_int_pt_mthd.pdf

[7] Fourer, Robert. “Solving Linear Programs by Interior-Point Methods.” Unpublished course notes, August 26, 2005. Available 2/25/2017 at http://www.4er.org/CourseNotes/Book%20B/B-III.pdf

[8] Andersen, Erling D., and Knud D. Andersen. “Presolving in linear programming.” Mathematical Programming 71.2 (1995): 221-245.

[9] Bertsimas, Dimitris, and J. Tsitsiklis. “Introduction to Linear Programming.” Athena Scientific 1 (1997): 997.

[10] Andersen, Erling D., et al. Implementation of Interior Point Methods for Large Scale Linear Programming. HEC/Universite de Geneve, 1996.

[11] Bartels, Richard H. “A stabilization of the simplex method.” Numerische Mathematik 16.5 (1971): 414-434.

[12] Tomlin, J. A. “On scaling linear programming problems.” Mathematical Programming Study 4 (1975): 146-166.

[13] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J. “HiGHS - high performance software for linear optimization.” https://highs.dev/

[14] Huangfu, Q. and Hall, J. A. J. “Parallelizing the dual revised simplex method.” Mathematical Programming Computation, 10 (1), 119-142, 2018. DOI: 10.1007/s12532-017-0130-5

Consider the following problem:

\[\begin{aligned} \min_{x_0, x_1} \quad & -x_0 + 4x_1 \\ \text{such that} \quad & -3x_0 + x_1 \leq 6, \\ & -x_0 - 2x_1 \geq -4, \\ & x_1 \geq -3. \end{aligned} \nonumber \]

The problem is not presented in the form accepted by linprog. This is easily remedied by converting the “greater than” inequality constraint to a “less than” inequality constraint by multiplying both sides by a factor of \(-1\). Note also that the last constraint is really the simple bound \(-3 \leq x_1 \leq \infty\). Finally, since there are no bounds on \(x_0\), we must explicitly specify the bounds \(-\infty \leq x_0 \leq \infty\), as the default is for variables to be non-negative. After collecting coefficients into arrays and tuples, the input for this problem is:
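(This sketch follows the worked example in the SciPy documentation; exact output formatting may vary between SciPy versions.)

>>> from scipy.optimize import linprog
>>> c = [-1, 4]
>>> A_ub = [[-3, 1], [1, 2]]              # second row is the ">=" constraint times -1
>>> b_ub = [6, 4]
>>> bounds = [(None, None), (-3, None)]   # x0 free, x1 >= -3
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
>>> res.fun
-22.0
>>> res.x
array([10., -3.])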

The marginals (AKA dual values / shadow prices / Lagrange multipliers) and residuals (slacks) are also available.
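Continuing the sketch above, with the default HiGHS methods these are exposed on the ineqlin field of the result (again assuming output formatting as in the SciPy docs):

>>> res.ineqlin.residual
array([39.,  0.])
>>> res.ineqlin.marginals
array([-0., -1.])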

For example, because the marginal associated with the second inequality constraint is -1, we expect the optimal value of the objective function to decrease by eps if we add a small amount eps to the right hand side of the second inequality constraint:

Also, because the residual on the first inequality constraint is 39, we can decrease the right hand side of the first constraint by 39 without affecting the optimal solution.

    Linear programming (LP) is a powerful technique in operations research (OR) that can help you optimize your decisions under constraints. But before you can apply LP to your real-world problems ...