
Solving Linear Programming problems in Python using cvxpy library


Mathematical programming is used to find the best or optimal solution to a problem that requires a decision, or a set of decisions, about how best to use a set of limited resources to achieve a stated goal or objective.

Linear programming requires that all the mathematical functions in the model be linear functions. In this article, we solve linear programming problems in Python using the cvxpy library.

Mathematical formulation

Decision variables: x1, x2, x3, ..., xn

Objective function (a linear function of the decision variables): Z = c1*x1 + c2*x2 + ... + cn*xn, to be maximised or minimised subject to linear constraints of the form a1*x1 + a2*x2 + ... + an*xn <= b.

Library used

Here, we use the cvxpy library to find the solution of the linear programming problem (LPP).

To install this library, use the following command:
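```
pip install cvxpy
```

(Depending on your setup, this may be pip3 or python -m pip install cvxpy.)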

To include it in our code, use
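The cp and np aliases below are a common convention and are assumed in the snippets that follow:

```python
import cvxpy as cp
import numpy as np   # NumPy is used below to build coefficient matrices
```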

Here, we solve the following LPP:

Maximise: z = x1 + x2

subject to:

  • 4*x1 + 3*x2 <= 12
  • -3*x1 + 4*x2 <= 12
  • 0 <= x1, x2 <= 2

The expected solution is z = 3.5, attained at x1 = 1.5 and x2 = 2.

Step 1: First define the number of variables that would be used

For example, if only two variables x1 and x2 are used:
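A minimal sketch of this step:

```python
x = cp.Variable((2, 1))   # a 2 x 1 column of decision variables
```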

This line creates a column matrix of dimensions 2 x 1 (in a general case, if n is the number of linear programming variables, then a column matrix of dimensions n x 1 should be made.)

Step 2: Define the constraints

Here the constraints are as follows

All values of x lie between 0 and 2 both inclusive.
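Putting these constraints into cvxpy (a sketch; the 2 x 2 matrix collects the coefficients of the two inequality constraints):

```python
A = np.array([[ 4, 3],
              [-3, 4]])
constraints = [A @ x <= 12,      # both right-hand sides equal 12
               x >= 0, x <= 2]   # 0 <= x1, x2 <= 2
```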

The above lines first make a 2 x 2 coefficient matrix from the two inequality constraints listed above. Since both constraints have the same right-hand side, we can define them in a single line by requiring the matrix product to be <= 12.

Multiple constraints on both the variables can be defined using x as a general variable.

For defining multiple single lined distinct constraints, use the following format:
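In cvxpy terms, the general pattern looks like:

```python
constraints = [A @ x <= B]   # n x n matrix A, n x 1 column x, n x 1 column B
```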

Where <= can be replaced by any other inequality symbol.

Here A is a square matrix of dimensions n x n, where n is the number of variables in the linear programming problem, x is as defined in the previous step, and B is a column matrix of dimensions n x 1.

For example, for defining the following constraints, use the following snippet: (n = 3)
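Since the article's own n = 3 constraints are not reproduced here, the numbers below are purely illustrative:

```python
# Hypothetical constraints for n = 3:
#   1*x1 + 2*x2 + 3*x3 <= 10
#   4*x1 + 5*x2 + 6*x3 <= 20
#   7*x1 + 8*x2 + 9*x3 <= 30
x = cp.Variable((3, 1))
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([[10], [20], [30]])
constraints = [A @ x <= B]
```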

Step 3: Define the objective function

Here the objective function is z = x1 + x2.
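For the problem at hand this is simply the sum of the two variables:

```python
objective = cp.Maximize(cp.sum(x))   # z = x1 + x2
```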

For a different objective function, say z = 3*x1 + 4*x2, define a new one-dimensional array containing the coefficients of the objective function, as follows:
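For instance:

```python
c = np.array([3, 4])             # coefficients of x1 and x2
objective = cp.Maximize(c @ x)   # z = 3*x1 + 4*x2
```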

Step 4: Define the problem and then solve it

The problem is defined by the objective function and the constraints.
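A sketch:

```python
prob = cp.Problem(objective, constraints)
solution = prob.solve()   # returns the optimal objective value
```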

Step 5: Print the maximised objective function and the x values
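For example:

```python
print(solution)   # optimal value of the objective function
print(x.value)    # n x 1 array with the optimal values of the variables
```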

Here, solution contains the value of the objective function, and x.value is the column matrix, containing the values of x i .

Output of the code
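For the problem above (z = x1 + x2), the printed output should look approximately like:

```
3.5
[[1.5]
 [2. ]]
```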

Here the first line denotes the solution while the next two lines denote the values of the two parameters.

As an exercise, try the same steps on the following LPP:

Minimize: z = 4*x1 + 2*x2

subject to:

  • -3*x1 + 6*x2 <= 10
  • x1 >= 1
  • 0 <= x1, x2 <= 5
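A complete end-to-end sketch for this exercise; by the arithmetic, the minimum is z = 4 at x1 = 1, x2 = 0:

```python
import cvxpy as cp
import numpy as np

x = cp.Variable((2, 1))

constraints = [np.array([[-3, 6]]) @ x <= 10,   # -3*x1 + 6*x2 <= 10
               x[0] >= 1,                       # x1 >= 1
               x >= 0, x <= 5]                  # 0 <= x1, x2 <= 5

objective = cp.Minimize(np.array([4, 2]) @ x)   # z = 4*x1 + 2*x2

prob = cp.Problem(objective, constraints)
print(prob.solve())   # expected: 4.0
print(x.value)        # expected: x1 = 1, x2 = 0
```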

With this article at OpenGenus , you must have the complete idea of solving Linear Programming problems in Python. Enjoy.


scipy.optimize.linprog #

Linear programming: minimize a linear objective function subject to linear equality and inequality constraints.
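For reference, the call signature in recent SciPy releases looks like:

```python
scipy.optimize.linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
                       bounds=None, method='highs', callback=None,
                       options=None, x0=None, integrality=None)
```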

Linear programming solves problems of the following form:

\[
\begin{aligned}
\min_x \ & c^T x \\
\mbox{such that} \ & A_{ub} x \leq b_{ub}, \\
& A_{eq} x = b_{eq}, \\
& l \leq x \leq u,
\end{aligned}
\]

where \(x\) is a vector of decision variables; \(c\) , \(b_{ub}\) , \(b_{eq}\) , \(l\) , and \(u\) are vectors; and \(A_{ub}\) and \(A_{eq}\) are matrices.

Alternatively, that’s:

minimize:    c @ x

such that:   A_ub @ x <= b_ub
             A_eq @ x == b_eq
             lb <= x <= ub

Note that by default lb = 0 and ub = None . Other bounds can be specified with bounds .

c : The coefficients of the linear objective function to be minimized.

A_ub : The inequality constraint matrix. Each row of A_ub specifies the coefficients of a linear inequality constraint on x .

b_ub : The inequality constraint vector. Each element represents an upper bound on the corresponding value of A_ub @ x .

A_eq : The equality constraint matrix. Each row of A_eq specifies the coefficients of a linear equality constraint on x .

b_eq : The equality constraint vector. Each element of A_eq @ x must equal the corresponding element of b_eq .

bounds : A sequence of (min, max) pairs for each element in x , defining the minimum and maximum values of that decision variable. If a single tuple (min, max) is provided, then min and max will serve as bounds for all decision variables. Use None to indicate that there is no bound. For instance, the default bound (0, None) means that all decision variables are non-negative, and the pair (None, None) means no bounds at all, i.e. all variables are allowed to be any real.

method : The algorithm used to solve the standard form problem. ‘highs’ (default), ‘highs-ds’ , ‘highs-ipm’ , ‘interior-point’ (legacy), ‘revised simplex’ (legacy), and ‘simplex’ (legacy) are supported. The legacy methods are deprecated and will be removed in SciPy 1.11.0.

callback : If a callback function is provided, it will be called at least once per iteration of the algorithm. The callback function must accept a single scipy.optimize.OptimizeResult consisting of the following fields:

x : The current solution vector.

fun : The current value of the objective function c @ x .

success : True when the algorithm has completed successfully.

slack : The (nominally positive) values of the slack, b_ub - A_ub @ x .

con : The (nominally zero) residuals of the equality constraints, b_eq - A_eq @ x .

phase : The phase of the algorithm being executed.

status : An integer representing the status of the algorithm.

0 : Optimization proceeding nominally.

1 : Iteration limit reached.

2 : Problem appears to be infeasible.

3 : Problem appears to be unbounded.

4 : Numerical difficulties encountered.

nit : The current iteration number.

message : A string descriptor of the algorithm status.

Callback functions are not currently supported by the HiGHS methods.

options : A dictionary of solver options. All methods accept the following options:

maxiter : Maximum number of iterations to perform. Default: see method-specific documentation.

disp : Set to True to print convergence messages. Default: False .

presolve : Set to False to disable automatic presolve. Default: True .

All methods except the HiGHS solvers also accept:

tol : A tolerance which determines when a residual is “close enough” to zero to be considered exactly zero.

autoscale : Set to True to automatically perform equilibration. Consider using this option if the numerical values in the constraints are separated by several orders of magnitude. Default: False .

rr : Set to False to disable automatic redundancy removal. Default: True .

rr_method : Method used to identify and remove redundant rows from the equality constraint matrix after presolve. For problems with dense input, the available methods for redundancy removal are:

“SVD” : Repeatedly performs singular value decomposition on the matrix, detecting redundant rows based on nonzeros in the left singular vectors that correspond with zero singular values. May be fast when the matrix is nearly full rank.

“pivot” : Uses the algorithm presented in [5] to identify redundant rows.

“ID” : Uses a randomized interpolative decomposition. Identifies columns of the matrix transpose not used in a full-rank interpolative decomposition of the matrix.

None : Uses “svd” if the matrix is nearly full rank, that is, the difference between the matrix rank and the number of rows is less than five. If not, uses “pivot”. The behavior of this default is subject to change without prior notice.

Default: None. For problems with sparse input, this option is ignored, and the pivot-based algorithm presented in [5] is used.

For method-specific options, see show_options('linprog') .

x0 : Guess values of the decision variables, which will be refined by the optimization algorithm. This argument is currently used only by the ‘revised simplex’ method, and can only be used if x0 represents a basic feasible solution.

integrality : Indicates the type of integrality constraint on each decision variable.

0 : Continuous variable; no integrality constraint.

1 : Integer variable; decision variable must be an integer within bounds .

2 : Semi-continuous variable; decision variable must be within bounds or take value 0 .

3 : Semi-integer variable; decision variable must be an integer within bounds or take value 0 .

By default, all variables are continuous.

For mixed integrality constraints, supply an array of shape c.shape . To infer a constraint on each decision variable from shorter inputs, the argument will be broadcasted to c.shape using np.broadcast_to .

This argument is currently used only by the 'highs' method and ignored otherwise.

res : A scipy.optimize.OptimizeResult consisting of the fields below. Note that the return types of the fields may depend on whether the optimization was successful, therefore it is recommended to check OptimizeResult.status before relying on the other fields:

x : The values of the decision variables that minimize the objective function while satisfying the constraints.

fun : The optimal value of the objective function c @ x .

slack : The (nominally positive) values of the slack variables, b_ub - A_ub @ x .

success : True when the algorithm succeeds in finding an optimal solution.

status : An integer representing the exit status of the algorithm.

0 : Optimization terminated successfully.

nit : The total number of iterations performed in all phases.

message : A string descriptor of the exit status of the algorithm.

Additional options accepted by the solvers.

This section describes the available solvers that can be selected by the ‘method’ parameter.

‘highs-ds’ and ‘highs-ipm’ are interfaces to the HiGHS simplex and interior-point method solvers [13] , respectively. ‘highs’ (default) chooses between the two automatically. These are the fastest linear programming solvers in SciPy, especially for large, sparse problems; which of these two is faster is problem-dependent. The other solvers ( ‘interior-point’ , ‘revised simplex’ , and ‘simplex’ ) are legacy methods and will be removed in SciPy 1.11.0.

Method highs-ds is a wrapper of the C++ high performance dual revised simplex implementation (HSOL) [13] , [14] . Method highs-ipm is a wrapper of a C++ implementation of an interior-point method [13] ; it features a crossover routine, so it is as accurate as a simplex solver. Method highs chooses between the two automatically. For new code involving linprog , we recommend explicitly choosing one of these three method values.

New in version 1.6.0.

Method interior-point uses the primal-dual path following algorithm as outlined in [4] . This algorithm supports sparse constraint matrices and is typically faster than the simplex methods, especially for large, sparse problems. Note, however, that the solution returned may be slightly less accurate than those of the simplex methods and will not, in general, correspond with a vertex of the polytope defined by the constraints.

New in version 1.0.0.

Method revised simplex uses the revised simplex method as described in [9] , except that a factorization [11] of the basis matrix, rather than its inverse, is efficiently maintained and used to solve the linear systems at each iteration of the algorithm.

New in version 1.3.0.

Method simplex uses a traditional, full-tableau implementation of Dantzig’s simplex algorithm [1] , [2] ( not the Nelder-Mead simplex). This algorithm is included for backwards compatibility and educational purposes.

New in version 0.15.0.

Before applying interior-point , revised simplex , or simplex , a presolve procedure based on [8] attempts to identify trivial infeasibilities, trivial unboundedness, and potential problem simplifications. Specifically, it checks for:

rows of zeros in A_eq or A_ub , representing trivial constraints;

columns of zeros in A_eq and A_ub , representing unconstrained variables;

column singletons in A_eq , representing fixed variables; and

column singletons in A_ub , representing simple bounds.

If presolve reveals that the problem is unbounded (e.g. an unconstrained and unbounded variable has negative cost) or infeasible (e.g., a row of zeros in A_eq corresponds with a nonzero in b_eq ), the solver terminates with the appropriate status code. Note that presolve terminates as soon as any sign of unboundedness is detected; consequently, a problem may be reported as unbounded when in reality the problem is infeasible (but infeasibility has not been detected yet). Therefore, if it is important to know whether the problem is actually infeasible, solve the problem again with option presolve=False .

If neither infeasibility nor unboundedness are detected in a single pass of the presolve, bounds are tightened where possible and fixed variables are removed from the problem. Then, linearly dependent rows of the A_eq matrix are removed, (unless they represent an infeasibility) to avoid numerical difficulties in the primary solve routine. Note that rows that are nearly linearly dependent (within a prescribed tolerance) may also be removed, which can change the optimal solution in rare cases. If this is a concern, eliminate redundancy from your problem formulation and run with option rr=False or presolve=False .

Several potential improvements can be made here: additional presolve checks outlined in [8] should be implemented, the presolve routine should be run multiple times (until no further simplifications can be made), and more of the efficiency improvements from [5] should be implemented in the redundancy removal routines.

After presolve, the problem is transformed to standard form by converting the (tightened) simple bounds to upper bound constraints, introducing non-negative slack variables for inequality constraints, and expressing unbounded variables as the difference between two non-negative variables. Optionally, the problem is automatically scaled via equilibration [12] . The selected algorithm solves the standard form problem, and a postprocessing routine converts the result to a solution to the original problem.

References

[1] Dantzig, George B., Linear Programming and Extensions. Rand Corporation Research Study, Princeton Univ. Press, Princeton, NJ, 1963.

[2] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to Mathematical Programming", McGraw-Hill, Chapter 4.

[3] Bland, Robert G. "New finite pivoting rules for the simplex method." Mathematics of Operations Research (2), 1977: pp. 103-107.

[4] Andersen, Erling D., and Knud D. Andersen. "The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm." High Performance Optimization. Springer US, 2000. 197-232.

[5] Andersen, Erling D. "Finding all linearly dependent rows in large-scale linear programming." Optimization Methods and Software 6.3 (1995): 219-227.

[6] Freund, Robert M. "Primal-Dual Interior-Point Methods for Linear Programming based on Newton's Method." Unpublished Course Notes, March 2004. Available 2/25/2017 at https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec14_int_pt_mthd.pdf

[7] Fourer, Robert. "Solving Linear Programs by Interior-Point Methods." Unpublished Course Notes, August 26, 2005. Available 2/25/2017 at http://www.4er.org/CourseNotes/Book%20B/B-III.pdf

[8] Andersen, Erling D., and Knud D. Andersen. "Presolving in linear programming." Mathematical Programming 71.2 (1995): 221-245.

[9] Bertsimas, Dimitris, and J. Tsitsiklis. "Introduction to Linear Programming." Athena Scientific 1 (1997): 997.

[10] Andersen, Erling D., et al. Implementation of Interior Point Methods for Large Scale Linear Programming. HEC/Universite de Geneve, 1996.

[11] Bartels, Richard H. "A stabilization of the simplex method." Numerische Mathematik 16.5 (1971): 414-434.

[12] Tomlin, J. A. "On scaling linear programming problems." Mathematical Programming Study 4 (1975): 146-166.

[13] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J. "HiGHS - high performance software for linear optimization." https://highs.dev/

[14] Huangfu, Q. and Hall, J. A. J. "Parallelizing the dual revised simplex method." Mathematical Programming Computation, 10 (1), 119-142, 2018. DOI: 10.1007/s12532-017-0130-5

Consider the following problem:

\[
\begin{aligned}
\min_{x_0, x_1} \ & -x_0 + 4x_1 \\
\mbox{such that} \ & -3x_0 + x_1 \leq 6, \\
& -x_0 - 2x_1 \geq -4, \\
& x_1 \geq -3.
\end{aligned}
\]

The problem is not presented in the form accepted by linprog . This is easily remedied by converting the “greater than” inequality constraint to a “less than” inequality constraint by multiplying both sides by a factor of \(-1\) . Note also that the last constraint is really the simple bound \(-3 \leq x_1 \leq \infty\) . Finally, since there are no bounds on \(x_0\) , we must explicitly specify the bounds \(-\infty \leq x_0 \leq \infty\) , as the default is for variables to be non-negative. After collecting coefficients into arrays and tuples, the input for this problem is:
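A sketch of that setup and the call:

```python
import numpy as np
from scipy.optimize import linprog

c = [-1, 4]
A = [[-3, 1], [1, 2]]       # the ">=" constraint multiplied by -1
b = [6, 4]
x0_bounds = (None, None)    # x0 is unbounded
x1_bounds = (-3, None)      # x1 >= -3

res = linprog(c, A_ub=A, b_ub=b, bounds=[x0_bounds, x1_bounds])
print(res.fun)   # expected: -22.0
print(res.x)     # expected: [10., -3.]

# In recent SciPy (HiGHS methods), res.ineqlin.marginals and
# res.ineqlin.residual hold the dual values and slacks discussed below.
```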

The marginals (AKA dual values / shadow prices / Lagrange multipliers) and residuals (slacks) are also available.

For example, because the marginal associated with the second inequality constraint is -1, we expect the optimal value of the objective function to decrease by eps if we add a small amount eps to the right hand side of the second inequality constraint:

Also, because the residual on the first inequality constraint is 39, we can decrease the right hand side of the first constraint by 39 without affecting the optimal solution.



Solving Assignment Problem using Linear Programming in Python

Learn how to use Python PuLP to solve Assignment problems using Linear Programming.

In earlier articles, we have seen various applications of linear programming, such as the transportation, transshipment, cargo loading, and shift-scheduling problems. In this tutorial, we will focus on another model that comes under the class of linear programming models, known as the assignment problem. Its objective function is similar to that of transportation problems. Here we minimize the objective function (the time or cost of manufacturing the products) by allocating one job to one machine.

If we want to solve a maximization assignment problem, we subtract all the elements of the matrix from the highest element in the matrix, or multiply the entire matrix by -1, and continue with the procedure. For solving the assignment problem, we use the assignment technique, also called the Hungarian method or Flood's technique.

The transportation problem is a special case of the linear programming model and the assignment problem is a special case of transportation problem, therefore it is also a special case of the linear programming problem.

In this tutorial, we are going to cover the following topics:

Assignment Problem

The assignment problem requires pairing two sets of items, given a set of paired costs or profits, in such a way that the total cost of the pairings is minimized or maximized. It is a special case of linear programming.

For example, an operation manager needs to assign four jobs to four machines. The project manager needs to assign four projects to four staff members. Similarly, the marketing manager needs to assign the 4 salespersons to 4 territories. The manager’s goal is to minimize the total time or cost.

Problem Formulation

A manager has prepared a table that shows the cost of performing each of four jobs by each of four employees. The manager's stated goal is to develop a set of job assignments that will minimize the total cost of getting all 4 jobs done.

Initialize LP Model

In this step, we will import all the classes and functions of pulp module and create a Minimization LP problem using LpProblem class.
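A minimal sketch of this step; the cost figures are hypothetical, since the article's cost table is not reproduced here:

```python
from pulp import *

workers = ["Worker-1", "Worker-2", "Worker-3", "Worker-4"]
jobs = ["Job-1", "Job-2", "Job-3", "Job-4"]

# Hypothetical 4 x 4 cost table (rows: workers, columns: jobs)
cost_matrix = [[9, 11, 14, 11],
               [6, 15, 13, 13],
               [12, 13, 6, 8],
               [11, 9, 10, 12]]
costs = {(w, j): cost_matrix[i][k]
         for i, w in enumerate(workers)
         for k, j in enumerate(jobs)}

# Create a minimization LP problem
prob = LpProblem("Assignment_Problem", LpMinimize)
```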

Define Decision Variable

In this step, we will define the decision variables. In our problem, we have two variable lists: workers and jobs. Let’s create the variables using the LpVariable.dicts() class method, combined with Python’s list comprehension. LpVariable.dicts() takes the following values:

  • First, the prefix name of what this variable represents.
  • Second, the list of all the variables (here, the worker-job pairs).
  • Third, the lower bound on this variable.
  • Fourth, the upper bound.
  • Fifth, the type of data (discrete or continuous). The options for this parameter are LpContinuous or LpInteger .

Let’s create one decision variable for each worker-job pair using the LpVariable.dicts() method, as shown below.
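Continuing the names defined above:

```python
# One binary variable per (worker, job) pair: 1 if that worker gets that job
x = LpVariable.dicts("assign",
                     [(w, j) for w in workers for j in jobs],
                     lowBound=0, upBound=1, cat=LpInteger)
```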

Define Objective Function

In this step, we will define the minimization objective function by adding it to the LpProblem object. lpSum() is used here to define multiple linear expressions. It also uses a list comprehension (or generator expression) to add multiple variables.
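For example:

```python
# Objective: minimize the total cost of the chosen assignments
prob += lpSum(costs[(w, j)] * x[(w, j)] for w in workers for j in jobs)
```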

Define the Constraints

Here, we are adding two types of constraints: each job can be assigned to only one employee, and each employee can be assigned to only one job. We add these constraints to the LpProblem object.
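A sketch:

```python
# Each job goes to exactly one worker
for j in jobs:
    prob += lpSum(x[(w, j)] for w in workers) == 1

# Each worker gets exactly one job
for w in workers:
    prob += lpSum(x[(w, j)] for j in jobs) == 1
```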

Solve Model

In this step, we will solve the LP problem by calling the solve() method. We can print the final values by using the following for loop.
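A sketch (with the hypothetical costs above, the resulting pairing will generally differ from the one quoted below):

```python
prob.solve()
print("Status:", LpStatus[prob.status])
for w in workers:
    for j in jobs:
        if x[(w, j)].value() == 1:
            print(f"{w} is assigned to {j}")
print("Total cost:", value(prob.objective))
```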

From the above results, we can infer that Worker-1 will be assigned to Job-1, Worker-2 will be assigned to Job-3, Worker-3 will be assigned to Job-2, and Worker-4 will be assigned to Job-4.

In this article, we have learned about assignment problems, problem formulation, and implementation using the Python PuLP library. We have solved the assignment problem using linear programming in Python. Of course, this is just a simple case study; we can add more constraints to it and make it more complicated. You can also run other case studies on cargo loading problems and staff scheduling problems. In upcoming articles, we will write more on different optimization problems, such as the transshipment problem and the balanced diet problem. You can revise the basics of mathematical concepts in this article and learn about linear programming in this article.


Chapter 4: Linear Programming Formulation with the Gurobi Python API


In various real-world scenarios, resource allocation plays a crucial role in optimizing efficiency and maximizing productivity. The Resource Assignment Problem (RAP) is a common challenge that involves assigning available resources to specific tasks or jobs in a way that maximizes the overall performance or satisfaction. In this article, we will explore how to formulate and solve the RAP using linear programming techniques and the Gurobi Python API.

Formulating the RAP as a Linear Programming Problem

To begin, we need to import the Gurobi Python library, which provides powerful tools for solving optimization problems. Once imported, we can define the input data for the RAP. In this particular problem, the data consists of two main components: the list of resources and the list of jobs. Additionally, we have matching scores that indicate the compatibility or suitability of each resource for each job.

To represent the resources and jobs, we use lists in Python. For example, let’s assume we have three resources named Carlos, Joe, and Monika, and three jobs: tester, Java developer, and architect. We can initialize these lists as R and J, respectively. To handle the matching scores, we can utilize the multidict function provided by the Gurobi Python API. This function allows us to initialize a dictionary with multiple keys and corresponding values. We define the dictionary keys as combinations of resources and jobs, and the values as the matching scores for each combination.

For instance, if Carlos is assigned as a tester, the matching score would be 53. Similarly, if Joe is assigned as a tester, the matching score would be 80. By using the multidict function, we can efficiently define the indices and keys of the dictionary and associate the matching scores with the respective combinations of resources and jobs.
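A sketch of this setup; the scores of 53 and 80 are quoted above, while the remaining values follow the standard Gurobi RAP example and should be treated as assumptions:

```python
import gurobipy as gp
from gurobipy import GRB

# Resources and jobs
R = ['Carlos', 'Joe', 'Monika']
J = ['Tester', 'JavaDeveloper', 'Architect']

# Matching score of each resource-job combination
combinations, scores = gp.multidict({
    ('Carlos', 'Tester'): 53,
    ('Carlos', 'JavaDeveloper'): 27,
    ('Carlos', 'Architect'): 13,
    ('Joe', 'Tester'): 80,
    ('Joe', 'JavaDeveloper'): 47,
    ('Joe', 'Architect'): 67,
    ('Monika', 'Tester'): 53,
    ('Monika', 'JavaDeveloper'): 73,
    ('Monika', 'Architect'): 47,
})
```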

Defining the RAP Model

To solve the RAP, we need to formulate it as a linear programming problem. We create a model object, denoted as ‘m,’ which encapsulates all the elements of the mathematical optimization model. In this case, we name the model as RAP.

The RAP formulation consists of four main components: data, decision variables, constraints, and the objective function. By defining these components, we construct the model object ‘m’ that encompasses the RAP.
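A minimal sketch:

```python
# Model object that will hold the data, variables, constraints and objective
m = gp.Model('RAP')
```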

Decision Variables

In linear programming, decision variables represent the unknowns we seek to optimize. To define these variables, we use the ‘addVars’ method provided by the Gurobi Python API. This method operates on the combinations obtained from the multidict function.

The decision variables and matching scores share the same keys for identification. The decision variable name is assigned to ‘x,’ which captures all the decision variables in the linear programming formulation.
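For example:

```python
# One non-negative decision variable per resource-job combination,
# keyed exactly like the matching scores
x = m.addVars(combinations, name='assign')
```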

Job Constraints

In the RAP, we have constraints related to jobs. To incorporate these constraints into the model, we utilize the ‘addConstrs’ method available in the ‘m’ object. The job constraints are captured in an object called ‘jobs,’ and the constraints are defined using the ‘x.sum’ method in Python.

For each job ‘j’ in the list of jobs ‘J,’ we calculate the summation of all the resources that can be assigned to that job. The ‘x.sum’ method enables us to express the left-hand side of each job constraint accurately. The constraints are defined as equality constraints, indicated by the double equality sign (==), and are set to a value of 1. With this step, we effectively capture all the job constraints within the model.
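In code, something like:

```python
# Each job j must be filled by exactly one resource
jobs = m.addConstrs((x.sum('*', j) == 1 for j in J), name='job')
```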

Resource Constraints

Similar to job constraints, resource constraints also need to be defined. We employ the ‘addConstr’ method within the ‘m’ object to specify these constraints. Once again, we utilize the ‘x.sum’ method to define the summations within each constraint.

For example, for each resource ‘r,’ we define a constraint that encompasses all the jobs to which the resource can be assigned. Since it is possible to not assign all the resources, we express these constraints as less than or equal to (≤) constraints. This allows for the flexibility of not assigning a resource to a particular job. By employing the ‘addConstr’ method, we capture all these resource constraints in the model.
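A sketch (using addConstrs to add one such constraint per resource):

```python
# Each resource r can take at most one job (it may stay unassigned)
resources = m.addConstrs((x.sum(r, '*') <= 1 for r in R), name='resource')
```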

Objective Function

The objective function represents the goal we aim to optimize. In the RAP, our objective is to maximize the total matching score. We use the ‘setObjective’ method provided by the Gurobi Python API to define the equation that represents the objective function.

In this case, we utilize the ‘x.prod’ method in Python to calculate the product of each matching score with its associated decision variable. By summing up all these products, we obtain the total matching score. The resulting equation is set as the objective function using the ‘setObjective’ method.

To inform Gurobi that we want to maximize the objective function, we use the ‘GRB.MAXIMIZE’ parameter. Gurobi’s optimization sense is set to maximize the objective function value.
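For example:

```python
# Maximize the total matching score of the assignment
m.setObjective(x.prod(scores), GRB.MAXIMIZE)
```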

Solving the RAP

Once we have defined the model object ‘m’ with all the necessary components, we are ready to solve the RAP. The Gurobi Python API provides an ‘optimize()’ function, which calls the Gurobi library to solve the defined linear programming problem.

The ‘optimize()’ function leverages the constraints, decision variables, and objective function specified in the model object ‘m’ to find the optimal solution. In this case, Gurobi determines that Carlos should be assigned to the tester job, Joe to the architect job, and Monika to the Java developer job. The resulting total matching score is found to be 193.
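A sketch of the solve-and-report step; with the scores assumed earlier, this reproduces the assignment and the total of 193 described above:

```python
m.optimize()

# Report which assignments are selected in the optimal solution
for r, j in combinations:
    if x[r, j].X > 0.5:
        print(f"{r} -> {j}")
print("Total matching score:", m.ObjVal)
```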

In this article, we explored the formulation and solution of the Resource Assignment Problem (RAP) using linear programming techniques and the Gurobi Python API. By defining the input data, decision variables, constraints, and objective function, we constructed a mathematical optimization model to maximize the total matching score. Gurobi’s powerful solver capabilities efficiently solved the RAP and provided an optimal resource assignment solution. By leveraging linear programming and optimization techniques, organizations can effectively allocate resources and improve overall performance and productivity.



Intermediate Quantitative Economics with Python


Thomas J. Sargent and John Stachurski

20. Linear Programming #

20.1. Overview #

Linear programming problems either maximize or minimize a linear objective function subject to a set of linear equality and/or inequality constraints.

Linear programs come in pairs:

an original primal problem, and

an associated dual problem.

If a primal problem involves maximization , the dual problem involves minimization .

If a primal problem involves minimization , the dual problem involves maximization .

We provide a standard form of a linear program and methods to transform other forms of linear programming problems into a standard form.

We tell how to solve a linear programming problem using SciPy .

We describe the important concept of complementary slackness and how it relates to the dual problem.

Let’s start with some standard imports.
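Presumably something along these lines:

```python
import numpy as np
from scipy.optimize import linprog
import matplotlib.pyplot as plt   # used for the feasible-set figure below
```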

20.2. Objective Function and Constraints #

We want to minimize a cost function \(c'x = \sum_{i=1}^n c_i x_i\) over feasible values of \(x = (x_1,x_2,\dots,x_n)'\) .

\(c = (c_1,c_2,\dots,c_n)'\) is a unit cost vector , and

\(x = (x_1,x_2,\dots,x_n)'\) is a vector of decision variables

Decision variables are restricted to satisfy a set of linear equality and/or inequality constraints.

We describe the constraints with the following collections of \(n\) -dimensional vectors \(a_i\) and scalars \(b_i\) and associated sets indexing the equality and inequality constraints:

\(a_i\) for \(i \in M_i\) , where \(M_1,M_2,M_3\) are each sets of indexes

and a collection of scalars

\(b_i\) for \(i \in N_i\) , where \(N_1,N_2,N_3\) are each sets of indexes.

A linear programming problem can be stated as [ Ber97 ] :

\[
\begin{aligned}
\min_{x} \ & c' x \\
\mbox{subject to } \ & a_i' x \ge b_i, \quad i \in M_1, \\
& a_i' x \le b_i, \quad i \in M_2, \\
& a_i' x = b_i, \quad i \in M_3, \\
& x_j \ge 0, \quad j \in N_1, \\
& x_j \le 0, \quad j \in N_2, \\
& x_j \ \mbox{free}, \quad j \in N_3.
\end{aligned} \tag{20.1}
\]

A vector \(x\) that satisfies all of the constraints is called a feasible solution .

A collection of all feasible solutions is called a feasible set .

A feasible solution \(x\) that minimizes the cost function is called an optimal solution .

The corresponding value of cost function \(c'x\) is called the optimal value .

If the feasible set is empty, we say that solving the linear programming problem is infeasible .

If, for any \(K \in \mathbb R\) , there exists a feasible solution \(x\) such that \(c'x < K\) , we say that the problem is unbounded or equivalently that the optimal value is \(-\infty\) .

20.3. Example 1: Production Problem #

This example was created by [ Ber97 ]

Suppose that a factory can produce two goods called Product \(1\) and Product \(2\) .

To produce each product requires both material and labor.

Selling each product generates revenue.

Required per unit material and labor inputs and revenues are shown in the table below:

             Product 1    Product 2
Material         2            5
Labor            4            2
Revenue          3            4

There are 30 units of material and 20 units of labor available.

The firm’s problem is to construct a production plan that uses its 30 units of material and 20 units of labor to maximize its revenue.

Let \(x_i\) denote the quantity of Product \(i\) that the firm produces.

This problem can be formulated as:

\[
\begin{aligned}
\max_{x_1, x_2} \ & z = 3 x_1 + 4 x_2 \\
\mbox{subject to } \ & 2 x_1 + 5 x_2 \le 30 \\
& 4 x_1 + 2 x_2 \le 20 \\
& x_1, x_2 \ge 0
\end{aligned}
\]

The following graph illustrates the firm’s constraints and iso-revenue lines.


The blue region is the feasible set within which all constraints are satisfied.

Parallel orange lines are iso-revenue lines.

The firm’s objective is to find the parallel orange lines to the upper boundary of the feasible set.

The intersection of the feasible set and the highest orange line delineates the optimal set.

In this example, the optimal set is the point \((2.5, 5)\) .

20.4. Example 2: Investment Problem #

We now consider a problem posed and solved by [ Hu18 ] .

A mutual fund has \( \$ 100,000\) to be invested over a three year horizon.

Three investment options are available:

Annuity: the fund can pay the same amount of new capital at the beginning of each of three years and receive a payoff of 130% of total capital invested at the end of the third year. Once the mutual fund decides to invest in this annuity, it has to keep investing in all subsequent years in the three year horizon.

Bank account: the fund can deposit any amount into a bank at the beginning of each year and receive its capital plus 6% interest at the end of that year. In addition, the mutual fund is permitted to borrow no more than $20,000 at the beginning of each year and is asked to pay back the amount borrowed plus 6% interest at the end of the year. The mutual fund can choose whether to deposit or borrow at the beginning of each year.

Corporate bond: At the beginning of the second year, a corporate bond becomes available. The fund can buy an amount that is no more than \( \$ \) 50,000 of this bond at the beginning of the second year and at the end of the third year receive a payout of 130% of the amount invested in the bond.

The mutual fund’s objective is to maximize total payout that it owns at the end of the third year.

We can formulate this as a linear programming problem.

Let \(x_1\) be the amount put in the annuity, \(x_2, x_3, x_4\) be bank deposit balances at the beginning of the three years, and \(x_5\) be the amount invested in the corporate bond.

When \(x_2, x_3, x_4\) are negative, it means that the mutual fund has borrowed from the bank.

The table below shows the mutual fund’s decision variables together with the timing protocol described above:

The mutual fund’s decision making proceeds according to the following timing protocol:

At the beginning of the first year, the mutual fund decides how much to invest in the annuity and how much to deposit in the bank. This decision is subject to the constraint:

At the beginning of the second year, the mutual fund has a bank balance of \(1.06 x_2\) . It must keep \(x_1\) in the annuity. It can choose to put \(x_5\) into the corporate bond, and put \(x_3\) in the bank. These decisions are restricted by

At the beginning of the third year, the mutual fund has a bank account balance equal to \(1.06 x_3\) . It must again invest \(x_1\) in the annuity, leaving it with a bank account balance equal to \(x_4\) . This situation is summarized by the restriction:

The mutual fund’s objective function, i.e., its wealth at the end of the third year is:

Thus, the mutual fund confronts the linear program:

20.5. Standard Form #

For purposes of

unifying linear programs that are initially stated in superficially different forms, and

having a form that is convenient to put into black-box software packages,

it is useful to devote some effort to describe a standard form .

Our standard form is:

The standard form LP problem can be expressed concisely as:

\[
\begin{aligned}
\min_{x} \ & c' x \\
\mbox{subject to } \ & A x = b \\
& x \ge 0
\end{aligned} \tag{20.2}
\]

Here, \(Ax = b\) means that the \(i\) -th entry of \(Ax\) equals the \(i\) -th entry of \(b\) for every \(i\) .

Similarly, \(x \ge 0\) means that \(x_j\) is greater than or equal to \(0\) for every \(j\) .

20.5.1. Useful Transformations #

It is useful to know how to transform a problem that initially is not stated in the standard form into one that is.

By deploying the following steps, any linear programming problem can be transformed into an equivalent standard form linear programming problem.

Objective Function: If a problem is originally a constrained maximization problem, we can construct a new objective function that is the additive inverse of the original objective function. The transformed problem is then a minimization problem.

Decision Variables: Given a variable \(x_j\) satisfying \(x_j \le 0\) , we can introduce a new variable \(x_j' = - x_j\) and substitute it into the original problem. Given a free variable \(x_j\) with no restriction on its sign, we can introduce two new variables \(x_j^+\) and \(x_j^-\) satisfying \(x_j^+, x_j^- \ge 0\) and replace \(x_j\) by \(x_j^+ - x_j^-\) .

Inequality constraints: Given an inequality constraint \(\sum_{j=1}^n a_{ij}x_j \le b_i\) , we can introduce a new variable \(s_i\) , called a slack variable , that satisfies \(s_i \ge 0\) and replace the original constraint by \(\sum_{j=1}^n a_{ij}x_j + s_i = b_i\) .

Let’s apply the above steps to the two examples described above.

20.5.2. Example 1: Production Problem #

The original problem is:

\[
\begin{aligned}
\max_{x_1, x_2} \ & 3 x_1 + 4 x_2 \\
\mbox{subject to } \ & 2 x_1 + 5 x_2 \le 30 \\
& 4 x_1 + 2 x_2 \le 20 \\
& x_1, x_2 \ge 0
\end{aligned}
\]

This problem is equivalent to the following problem in standard form:

\[
\begin{aligned}
\min_{x_1, x_2, s_1, s_2} \ & -(3 x_1 + 4 x_2) \\
\mbox{subject to } \ & 2 x_1 + 5 x_2 + s_1 = 30 \\
& 4 x_1 + 2 x_2 + s_2 = 20 \\
& x_1, x_2, s_1, s_2 \ge 0
\end{aligned}
\]

20.5.3. Example 2: Investment Problem #

20.6. Computations #

The package scipy.optimize provides a function linprog to solve linear programming problems of the form below:

\[
\begin{aligned}
\min_{x} \ & c' x \\
\mbox{subject to } \ & A_{ub} x \le b_{ub} \\
& A_{eq} x = b_{eq} \\
& l \le x \le u
\end{aligned}
\]

By default \(l = 0\) and \(u = \text{None}\) unless explicitly specified with the argument ‘bounds’.

Let’s apply this great Python tool to solve our two example problems.

20.6.1. Example 1: Production Problem #

The problem is:
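A sketch of the call, using the production data assumed in the table above:

```python
# Production problem: maximize 3*x1 + 4*x2 subject to
#   2*x1 + 5*x2 <= 30   (material)
#   4*x1 + 2*x2 <= 20   (labor)
c_ex1 = np.array([3, 4])
A_ex1 = np.array([[2, 5],
                  [4, 2]])
b_ex1 = np.array([30, 20])

# linprog minimizes, so negate the objective to maximize
res_ex1 = linprog(-c_ex1, A_ub=A_ex1, b_ub=b_ex1,
                  bounds=[(0, None), (0, None)], method='highs')
print(res_ex1.x)       # expected: [2.5, 5.0]
print(-res_ex1.fun)    # expected: 27.5
```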

The optimal plan tells the factory to produce 2.5 units of Product 1 and 5 units of Product 2; that generates a maximizing value of revenue of 27.5.

We are using the linprog function as a black box .

Inside it, Python first transforms the problem into standard form.

To do that, for each inequality constraint it generates one slack variable.

Here the vector of slack variables is a two-dimensional NumPy array that equals \(b_{ub} - A_{ub}x\) .

See the official documentation for more details.

This problem is to maximize the objective, so that we need to put a minus sign in front of parameter vector c.

20.6.2. Example 2: Investment Problem #

Let’s solve this problem using linprog .
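A sketch, assuming the formulation described in the text (annuity payment \(x_1\), bank balances \(x_2, x_3, x_4\), bond purchase \(x_5\); the constraints and bounds below are pieced together from that description):

```python
# Wealth at the end of year 3: 1.30*3*x1 + 1.06*x4 + 1.30*x5
c_ex2 = np.array([1.30 * 3, 0, 0, 1.06, 1.30])

A_eq = np.array([[1, 1, 0, 0, 0],        # x1 + x2 = 100,000
                 [1, -1.06, 1, 0, 1],    # x1 + x3 + x5 = 1.06*x2
                 [1, 0, -1.06, 1, 0]])   # x1 + x4 = 1.06*x3
b_eq = np.array([100_000, 0, 0])

bounds = [(0, None),          # x1 >= 0
          (-20_000, None),    # bank balances may go as low as -20,000
          (-20_000, None),
          (-20_000, None),
          (0, 50_000)]        # at most 50,000 of the corporate bond

res_ex2 = linprog(-c_ex2, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='highs')
print(-res_ex2.fun)   # expected: about 141018.24
print(res_ex2.x)
```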

Python tells us that the best investment strategy is:

At the beginning of the first year, the mutual fund should buy \( \$24,927.75\) of the annuity. Its bank account balance should be \( \$75,072.25\) .

At the beginning of the second year, the mutual fund should buy \( \$50,000 \) of the corporate bond and keep investing in the annuity. Its bank account balance should be \( \$ 4,648.83\) .

At the beginning of the third year, the mutual fund should borrow \( \$20,000\) from the bank and invest in the annuity.

At the end of the third year, the mutual fund will get payouts from the annuity and corporate bond and repay its loan from the bank. At the end it will own \(\$141{,}018.24\) , so its total net rate of return over the three periods is 41.02%.

20.7. Duality #

Associated with a linear programming problem of form (20.1) with \(m\) constraints and \(n\) decision variables, there is a dual linear programming problem that takes the form (please see [ Ber97 ] )

\[
\begin{aligned}
\max_{p} \ & b' p \\
\mbox{subject to } \ & p_i \ge 0, \quad i \in M_1, \\
& p_i \le 0, \quad i \in M_2, \\
& p_i \ \mbox{free}, \quad i \in M_3, \\
& A_j' p \le c_j, \quad j \in N_1, \\
& A_j' p \ge c_j, \quad j \in N_2, \\
& A_j' p = c_j, \quad j \in N_3,
\end{aligned}
\]

where \(A_j\) is the \(j\) -th column of the \(m\) by \(n\) matrix \(A\) .

In what follows, we shall use \(a_i'\) to denote the \(i\) -th row of \(A\) and \(A_j\) to denote the \(j\) -th column of \(A\) .

To construct the dual of linear programming problem (20.1) , we proceed as follows:

For every constraint \(a_i' x \ge(\le \mbox{ or } =) b_i\) , \(i = 1,2,...,m\) , in the primal problem, we construct a corresponding dual variable \(p_i\) . \(p_i\) is restricted to be positive if \(a_i' x \ge b_i\) , negative if \(a_i' x \le b_i\) , or unrestricted if \(a_i' x = b_i\) . We construct the \(m\) -dimensional vector \(p\) with entries \(p_i\) .

For every variable \(x_j\) , \(j = 1,2,...,n\) , we construct a corresponding dual constraint \(A_j' p \ge(\le or =) c_j\) . The constraint is \(A_j' p \ge c_j\) if \(x_j \le 0\) , \(A_j' p \le c_j\) if \(x_j \ge 0\) or \(A_j' p = c_j\) if \(x_j \) is unrestricted.

The dual problem is to maximize objective function \(b'p\) .

For a maximization problem, we can first transform it to an equivalent minimization problem and then follow the steps above to construct the dual minimization problem.

We can easily verify that the dual of a dual problem is the primal problem .

The following table summarizes relationships between objects in primal and dual problems.

As an example, the dual problem of the standard form (20.2) is:

\[
\begin{aligned}
\max_{p} \ & b' p \\
\mbox{subject to } \ & A' p \le c
\end{aligned}
\]

As another example, consider a linear programming problem with form:

Its dual problem is:

20.8. Duality Theorems #

Primal and dual problems are linked by powerful duality theorems that have weak and strong forms.

The duality theorems provide the foundations of enlightening economic interpretations of linear programming problems.

Weak duality: For linear programming problem (20.1) , if \(x\) and \(p\) are feasible solutions to the primal and the dual problems, respectively, then

\[
b' p \le c' x
\]

Strong duality: For linear programming problem (20.1) , if the primal problem has an optimal solution \(x\) , then the dual problem also has an optimal solution. Denote an optimal solution of the dual problem as \(p\) . Then

\[
b' p = c' x
\]

According to strong duality, we can find the optimal value for the primal problem by solving the dual problem.

But the dual problem tells us even more as we shall see next.

20.8.1. Complementary Slackness #

Let \(x\) and \(p\) be feasible solutions to the primal problem (20.1) and its dual problem, respectively.

Then \(x\) and \(p\) are also optimal solutions of the primal and dual problems if and only if:

\[
p_i (a_i' x - b_i) = 0 \ \mbox{for all } i, \quad \mbox{and} \quad x_j (A_j' p - c_j) = 0 \ \mbox{for all } j.
\]

This means that \(p_i = 0\) if \(a_i' x - b_i \neq 0\) and \(x_j = 0\) if \(A_j' p - c_j \neq 0\) .

These are the celebrated complementary slackness conditions.

Let’s interpret them.

20.8.2. Interpretations #

Let’s take a version of problem (20.3) as a production problem and consider its associated dual problem.

A factory produces \(n\) products with \(m\) types of resources.

Where \(i=1,2,\dots,m\) and \(j=1,2,\dots,n\) , let

\(x_j\) denote quantities of product \(j\) to be produced

\(a_{ij}\) denote required amount of resource \(i\) to make one unit of product \(j\) ,

\(b_i\) denotes the available amount of resource \(i\)

\(c_j\) denotes the revenue generated by producing one unit of product \(j\) .

Dual variables: By strong duality, we have

Evidently, a one unit change of \(b_i\) results in \(p_i\) units change of revenue.

Thus, a dual variable can be interpreted as the value of one unit of resource \(i\) .

This is why it is often called the shadow price of resource \(i\) .

For feasible but not optimal primal and dual solutions \(x\) and \(p\) , by weak duality, we have

Here, the expression is opposite to the statement above since primal problem is a minimization problem.

When a strict inequality holds, the solution is not optimal because it doesn’t fully utilize all valuable resources.

if a shadow price \(p_i\) is larger than the market price for Resource \(i\) , the factory should buy more Resource \(i\) and expand its scale to generate more revenue;

if a shadow price \(p_i\) is less than the market price for Resource \(i\) , the factory should sell its Resource \(i\) .

Complementary slackness: If there exists \(i\) such that \(a_i' x - b_i < 0\) for some \(i\) , then \(p_i = 0\) by complementary slackness. \(a_i' x - b_i < 0\) means that to achieve its optimal production, the factory doesn’t require as much Resource \(i\) as it has. It is reasonable that the shadow price of Resource \(i\) is 0: some of its resource \(i\) is redundant.

If there exists \(j\) such that \(A_j' p - c_j > 0\) , then \(x_j = 0\) by complementary slackness. \(A_j' p - c_j > 0\) means that the value of all resources used when producing one unit of product \(j\) is greater than its cost.

This means that producing another product that can more efficiently utilize these resources is a better choice than producing product \(j\)

Since producing product \(j\) is not optimal, \(x_j\) should equal \(0\) .

20.8.3. Example 1: Production Problem #

This problem is one specific instance of the problem (20.3) , whose economic meaning is interpreted above.

We solve this dual problem by using the function linprog .

Since parameters used here are defined before when solving the primal problem, we won’t define them here.
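A sketch, reusing c_ex1, A_ex1 and b_ex1 from the primal sketch above:

```python
# Dual of the production problem: min 30*p1 + 20*p2  s.t.  A' p >= c,  p >= 0.
# Flip the ">=" constraints to "<=" by negating both sides.
res_dual = linprog(b_ex1, A_ub=-A_ex1.T, b_ub=-c_ex1,
                   bounds=[(0, None), (0, None)], method='highs')
print(res_dual.fun)   # expected: 27.5
print(res_dual.x)     # expected shadow prices: [0.625, 0.4375]
```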

The optimal value for the dual problem equals 27.5.

This equals the optimal value of the primal problem, an illustration of strong duality.

Shadow prices for materials and labor are 0.625 and 0.4375, respectively.

20.8.4. Example 2: Investment Problem #

The dual problem is:

The optimal value for the dual problem is 141018.24, which equals the value of the primal problem.

Now, let’s interpret the dual variables.

By strong duality and also our numerical results, we have that optimal value is:

We know that if \(b_i\) changes by one dollar, then the optimal payoff at the end of the third year will change by \(p_i\) dollars.

For \(i = 1\) , this means if the initial capital changes by one dollar, then the optimal payoff in the end of the third year will change \(p_1\) dollars.

Thus, \(p_1\) is the potential value of one more unit of initial capital, or the shadow price for initial capital.

We can also interpret \(p_1\) as the prospective value in the end of the third year coming from having one more dollar to invest at the beginning of the first year.

If the mutual fund can raise money at a cost lower than \(p_1 - 1\) , then it should raise more money to increase its revenue.

But if it bears a cost of funds higher than \(p_1 - 1\) , the mutual fund shouldn’t do that.

For \(i = 4, 5, 6\) , this means that if the amount of capital that the fund is permitted to borrow from the bank changes by one dollar, the optimal pay out at the end of the third year will change \(p_i\) dollars.

Thus, for \(i = 4, 5, 6\) , \(|p_i|\) indicates the value of one dollar that the mutual fund can borrow from the bank at the beginning of the \(i-3\) -th year.

\(|p_i|\) is the shadow price for the loan amount. (We use absolute value here since \(p_i \le 0\) .)

If the interest rate is lower than \(|p_i|\) , then the mutual fund should borrow to increase its optimal payoff; if the interest rate is higher, it is better to not do this.

For \(i = 7\) , this means that if the amount of the corporate bond the mutual fund can buy changes one dollar, then the optimal payoff will change \(p_7\) dollars at the end of the third year. Again, \(p_7\) is the shadow price for the amount of the corporate bond the mutual fund can buy.

As for the numerical results:

\(p_1 = 1.38\) , which means one dollar of initial capital is worth \(\$ 1.38\) at the end of the third year.

\(p_4 = p_5 = 0\) , which means the loan amounts at the beginning of the first and second year are worth nothing. Recall that in the optimal solution to the primal problem, \(x_2, x_3 > 0\) , which means that at the beginning of the first and second year, the mutual fund has a positive bank account balance and borrows no capital from the bank. Thus, it is reasonable that the loan amounts at the beginning of the first and second year are valueless. This is what the complementary slackness conditions mean in this setting.

\(p_6 = -0.16\) , which means one dollar of the loan amount at the beginning of the third year is worth \(\$ 0.16\) . Since \(|p_6|\) is higher than the interest rate of 6%, the mutual fund should borrow as much as possible at the beginning of the third year. Recall that the optimal solution to the primal problem is \(x_4 = -20,000\) , which means the mutual fund borrows as much money from the bank as it can.

\(p_7 = 0.0015\) , which means one dollar of the amount of the corporate bond that the mutual fund can buy is worth \(\$ 0.0015\) .



Python | Linear Programming in Pulp

  • Objective Function: The main aim of the problem, either to maximize or to minimize, is the objective function of linear programming. In the problem shown below, Z (to be minimized) is the objective function.
  • Decision Variables: The variables used to decide the output are called decision variables. They are the unknowns of the mathematical programming model. In the problem below, we are to determine the values of x and y in order to minimize Z. Here, x and y are the decision variables.
  • Constraints: These are the restrictions on the decision variables. The limitations on the decision variables, given under "subject to" in the problem below, are the constraints of the linear program.
  • Non-negativity restrictions: In linear programming, the values of the decision variables are always greater than or equal to 0.
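The code listing that the explanation below refers to is not reproduced in the text; the following sketch follows the same structure with illustrative objective and constraint coefficients, and notes the explanation's line groupings as comments:

```python
# Lines 1-2: import the library pulp as p
import pulp as p

# Lines 4-5: define the problem and the optimization sense
Lp_prob = p.LpProblem('Problem', p.LpMinimize)

# Lines 7-9: define the decision variables with a lower bound of 0
x = p.LpVariable("x", lowBound=0)
y = p.LpVariable("y", lowBound=0)

# Lines 11-12: the objective function (illustrative coefficients)
Lp_prob += 3 * x + 5 * y

# Lines 14-18: the constraints (illustrative)
Lp_prob += 2 * x + 3 * y >= 12
Lp_prob += -x + y <= 3
Lp_prob += x >= 4
Lp_prob += y <= 3

# Line 21: display the problem
print(Lp_prob)

# Lines 23-24: solve and display the status
status = Lp_prob.solve()
print(p.LpStatus[status])

# Line 27: print x, y and the minimum value of the objective function
print(p.value(x), p.value(y), p.value(Lp_prob.objective))
```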

Explanation :

  • Line 1-2: First import the library pulp as p.
  • Line 4-5: Define the problem by giving a suitable name to your problem, here I have given the name ‘Problem’. Also, specify your aim for the objective function of whether to Maximize or Minimize.
  • Line 7-9: Define LpVariable to hold the variables of the objective functions. The next argument specifies the lower bound of the defined variable, i.e. 0, and the upper bound is none by default. You can specify the upper bound too.
  • Line 11-12: Denotes the objective function in terms of defined variables.
  • Line 14-18: These are the constraints on the variables.
  • Line 21: This will show you the problem in the output screen.
  • Line 23: This is the problem solver.
  • Line 24: Will display the status of the problem.
  • Line 27: Will print the value for x and y and the minimum value for the objective function.



Optimizing Complex Problems: Linear Programming with Python

Linear Programming (LP) is a mathematical optimization technique used to find the optimal solution to a given problem. It involves maximizing or minimizing a linear objective function subject to a set of linear equations and inequalities.

In this article, we will discuss the importance of linear programming, its applications, and how to implement it using Python.

Definition and Importance of Linear Programming:

Linear Programming is a method used to solve a range of optimization problems.

It involves finding the best solution to a problem given certain constraints that limit the available options. LP problems are formulated using linear equations and inequalities to model the problem’s objectives and constraints.

The primary goal of LP is to maximize or minimize the objective function while maintaining the constraints’ feasibility. Linear Programming is widely used in various fields, including scientific computing, economics, manufacturing, transportation, and energy.

It is used to solve complex problems, such as production planning, resource allocation, transportation optimization, and even game theory. LP had a significant impact on the field of economics, where it has been used to model supply chain management, optimum pricing, and market equilibrium.

It plays a critical role in the realm of mathematical modeling, where it is used to develop efficient algorithms that can help to solve complex problems.

Introduction to Mixed-Integer Linear Programming:

Mixed-Integer Linear Programming (MILP) is an extension of LP that includes variables with discrete values, such as integers, which are often binary. In other words, MILP problems have both continuous and discrete variables.

The inclusion of discrete variables increases the complexity of the problem and introduces new challenges to the optimization process. MILP has many real-world applications, such as scheduling optimization, network design, and facility location.

One example of its application is in airline route scheduling, where the problem is to determine the optimal routes for aircraft based on available airports, departure times, and fuel capacity. Another example is in the allocation of production resources, where MILP is used to optimize the production capacity, personnel, and materials required for a given demand.

Applications of Linear Programming:

Linear Programming has diverse applications in various fields. In scientific computing, it is used to solve optimization problems in physics, biology, and chemistry.

In economics, LP is used to model the supply and demand of goods, market equilibrium, and game theory. In manufacturing, it is used to optimize production planning, inventory management, and supply chain management.

In transportation, LP is used to optimize the route planning, vehicle scheduling, and capacity utilization. In energy, LP is used to optimize the energy generation, distribution, and supply chain.

Linear Programming with Python:

Python is a popular programming language used to implement LP and MILP algorithms. It provides a range of tools and libraries, such as SciPy, PuLP, and Pyomo, which can help to model and solve LP and MILP problems.

Python also has access to efficient solvers written in Fortran or C/C++ that can handle large-scale problems effectively.

The Simplex Method and Interior-Point Method:

The Simplex method is a popular algorithm used to solve LP problems.

It works by moving from one vertex of the feasible region to an adjacent, better one, improving the objective value at each step until no further improvement is possible. It is a time-tested algorithm that has been in use for several decades.

However, on its own it cannot handle the integer variables that appear in MILP problems. The Interior-Point method is another algorithm that can be used to solve LP problems.

It approaches the optimum from inside the feasible region by minimizing a barrier function that penalizes proximity to the constraint boundaries. It is a more modern approach than the Simplex method and is often preferred for large, sparse problems.
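As a concrete illustration, SciPy's linprog() exposes both families of algorithms through its method parameter; in recent SciPy versions these are the HiGHS dual-simplex ("highs-ds") and interior-point ("highs-ipm") implementations. The tiny problem below is made up purely for illustration:

from scipy.optimize import linprog

# Made-up problem: minimize x + 2y subject to x + y >= 1, 0 <= x, y <= 10
c = [1, 2]
A_ub = [[-1, -1]]   # -x - y <= -1 is equivalent to x + y >= 1
b_ub = [-1]
bounds = [(0, 10), (0, 10)]

# Dual-simplex variant of the HiGHS solver
res_simplex = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ds")
# Interior-point variant of the HiGHS solver
res_ipm = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs-ipm")

print(res_simplex.x, res_simplex.fun)   # both report x = 1, y = 0, objective 1
print(res_ipm.x, res_ipm.fun)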

Solving Mixed-Integer Linear Programming Problems with Branch-and-Bound:

Branch-and-Bound is an algorithmic technique used to solve MILP problems. It involves dividing the problem into smaller sub-problems and solving each sub-problem recursively.

The algorithm uses lower and upper bounds to prune sub-problems that cannot contain the optimal solution. Branch-and-Cut and Branch-and-Price are variations of Branch-and-Bound used for specific classes of problems, such as network optimization and scheduling.
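As a sketch of what this looks like in practice, SciPy 1.9 and later ship scipy.optimize.milp, which hands the problem to the HiGHS branch-and-bound solver; the data below is a made-up two-variable example:

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Maximize 3*x1 + 5*x2 with integer x1, x2 by minimizing the negated objective
c = np.array([-3.0, -5.0])

# Single resource constraint: 2*x1 + 4*x2 <= 20
constraint = LinearConstraint([[2, 4]], -np.inf, 20)

# integrality = 1 marks each variable as integer; bounds keep them non-negative
res = milp(c=c, constraints=constraint,
           integrality=np.ones_like(c), bounds=Bounds(0, np.inf))

print(res.x)      # optimal integer values, here x1 = 10, x2 = 0
print(-res.fun)   # maximum objective value, here 30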

Python Tools for Linear and Mixed-Integer Linear Programming:

SciPy is a Python library used for scientific computing. It includes optimization routines that can help to solve LP and MILP problems, such as scipy.optimize.linprog for linear programs and, in recent versions, scipy.optimize.milp for mixed-integer linear programs.

PuLP is another open-source library that can be used to model LP and MILP problems. It provides a layer of abstraction over LP and MILP solvers, making it easier to create models and solve them.

Pyomo is a more general modeling language and optimization framework that connects to external solvers for LP, MILP, Quadratic Programming, and Mixed-Integer Quadratic Programming.
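For instance, a Pyomo sketch of the small product-mix problem used later in this article might look like the following (this assumes the GLPK solver is installed, and it is only one of several ways to write the model):

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)          # units of product x
model.y = Var(domain=NonNegativeReals)          # units of product y

model.profit = Objective(expr=3 * model.x + 5 * model.y, sense=maximize)
model.cap_x = Constraint(expr=model.x <= 100)   # production capacity for x
model.cap_y = Constraint(expr=model.y <= 80)    # production capacity for y
model.demand = Constraint(expr=model.x >= 30)   # minimum demand for x

SolverFactory("glpk").solve(model)
print(model.x(), model.y(), model.profit())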

Conclusion:

Linear Programming is a powerful tool used to solve optimization problems in various fields.

It involves formulating a problem with linear equations and inequalities and finding the optimal solution that satisfies the constraints. LP is widely used in scientific computing, economics, manufacturing, transportation, and energy.

Python provides a range of tools and libraries that can help to model and solve LP and MILP problems efficiently. The Simplex and Interior-Point methods are used to solve LP problems, while Branch-and-Bound, Branch-and-Cut, and Branch-and-Price are used to solve MILP problems.

With the growing importance of optimization problems in modern industries, the use of LP and MILP is expected to grow, making it an essential tool in scientific computing and systems design.

Linear Programming Examples:

In this section, we will discuss some practical examples of Linear Programming, including small LP problems, infeasible LP problems, unbounded LP problems, and resource allocation problems.

Small Linear Programming Problem:

Suppose we want to maximize our profit by selling two products, x and y. The unit price for product x is $3, and the unit price for product y is $5.

Suppose we can produce a maximum of 100 units of x and 80 units of y, and we need at least 30 units of x to meet our customers’ demand. We want to use Linear Programming to determine the optimal product mix that maximizes our profit.

Decision variables:

x – the number of units of product x we produce
y – the number of units of product y we produce

Objective function:

maximize: 3x + 5y

Constraints:

x <= 100 (production capacity for x)
y <= 80 (production capacity for y)
x >= 30 (minimum demand for x)
x, y >= 0

The feasible region is the area bounded by these constraints. Since the profit increases with both x and y, the optimal solution is x = 100 and y = 80, with a maximum profit of 3(100) + 5(80) = $700.

Infeasible Linear Programming Problem:

Suppose we want to minimize the cost of producing a product with two components, A and B. The cost of component A is $5, and the cost of component B is $4.

We need at least 10 units of component A and 15 units of component B to produce one unit of the product. However, we only have 80 units of component A and 100 units of component B in stock.

Suppose, in addition, that we have committed to delivering at least 10 units of the product, and we want to use Linear Programming to minimize the total component cost.

Decision variables:

x – the number of units of the product we produce

Objective function:

minimize: 110x (each unit of the product requires 10 units of A at $5 and 15 units of B at $4, for $110 of components per unit)

Constraints:

10x <= 80 (stock of component A)
15x <= 100 (stock of component B)
x >= 10 (committed demand)

This problem is infeasible: the stock constraints force x <= 8, while the committed demand requires x >= 10, so no value of x satisfies all the constraints. In this case, we may need to revise our commitments or find other suppliers to obtain sufficient stock of the components.
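Handed to SciPy's linprog(), this model is reported as infeasible rather than solved; a minimal sketch using the objective and constraints given above:

from scipy.optimize import linprog

c = [110]                     # component cost per unit of the product
A_ub = [[10], [15], [-1]]     # 10x <= 80, 15x <= 100, -x <= -10 (i.e. x >= 10)
b_ub = [80, 100, -10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.success)   # False
print(res.status)    # status code 2 indicates an infeasible problem
print(res.message)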

Unbounded Linear Programming Problem:

Suppose we want to maximize the profit of selling two products, x and y. The unit price for product x is $3, and the unit price for product y is $5.

We have unlimited resources and can produce as many units of x and y as we want. We want to use Linear Programming to determine the optimal product mix that maximizes our profit.

Since we have unlimited resources, there are no constraints on the maximum number of units we can produce. This problem is unbounded: the profit can be made arbitrarily large, so no finite optimal solution exists.

In practice, an unbounded model usually signals that a real restriction, such as a capacity or budget limit, has been left out of the formulation.
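linprog() reports this situation through its status field rather than returning a solution; a minimal sketch with the upper limits simply omitted:

from scipy.optimize import linprog

# Maximize 3x + 5y with only non-negativity constraints -> minimize -3x - 5y
c = [-3, -5]

res = linprog(c, bounds=(0, None))
print(res.success)   # False
print(res.status)    # status code 3 indicates an unbounded problem
print(res.message)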

Resource Allocation Problem:

Suppose we have three resources, R1, R2, and R3, and we want to allocate them to three projects, P1, P2, and P3.

Each project requires different amounts of each resource, and each project has a known profit associated with it. We want to use Linear Programming to determine the optimal resource allocation that maximizes our profit.

Decision variables:

xi – the level at which project Pi is undertaken, for i = 1, 2, 3

Objective function:

maximize: 3x1 + 5x2 + 2x3

Constraints:

2x1 + x2 + 4x3 <= 100 (availability of resource R1)
3x1 + 4x2 + 2x3 <= 120 (availability of resource R2)
x1 + 3x2 + 2x3 <= 80 (availability of resource R3)
xi >= 0 for i = 1, 2, 3

This problem involves optimizing the allocation of resources to projects to achieve maximum profit. The constraints represent the available resources, while the objective function represents the profit from each project.

The feasible region is the area bounded by the constraints, and the optimal solution is found at a corner (vertex) of the feasible region.

Linear Programming Python Implementation:

In this section, we will discuss how to implement LP and MILP algorithms using Python.

We will give an overview of SciPy and PuLP, show how to define and solve optimization problems with SciPy, and explain how to use PuLP to invoke external solvers.

Overview of SciPy and PuLP:

SciPy is a Python library used for scientific computing.

It includes a range of modules that can help to solve LP and MILP problems. PuLP is another open-source library that can be used to model LP and MILP problems.

It provides a layer of abstraction over LP and MILP solvers, making it easier to create models and solve them.

Defining and Solving Optimization Problems with SciPy:

The linprog() function in SciPy is used to solve LP problems.

It takes the objective function coefficients, the inequality and equality constraints, and the bounds on the decision variables as input parameters, and it returns the optimum value of the objective function together with the variable values that achieve it. Note that linprog() always minimizes, so a maximization problem is solved by negating the objective coefficients.

To define and solve an optimization problem with SciPy, follow these steps:

1. Import the linprog() function from scipy.optimize.

2. Define the objective function and the constraints.

3. Call the linprog() function with the objective function and constraints as input parameters.

4. Extract the optimal solution and the objective function value.

Here is an example of using linprog() to solve the small LP problem we defined earlier:

from scipy.optimize import linprog

# Maximize 3x + 5y by minimizing the negated objective -3x - 5y
c = [-3, -5]

# Inequality constraints written as A @ [x, y] <= b:
# -x <= -30 (i.e. x >= 30), x <= 100, y <= 80
A = [[-1, 0], [1, 0], [0, 1]]
b = [-30, 100, 80]

res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
print(res.x)      # optimal values of x and y
print(-res.fun)   # maximum profit

This code defines the negated objective coefficients c, the inequality constraints A and b, and the bounds on the decision variables. The linprog() function is called with these inputs, and the optimal product mix and the corresponding maximum profit are printed to the console.

Using PuLP to Invoke External Solvers:

PuLP provides a simplified interface to a range of LP and MILP solvers, including open-source solvers such as CBC and GLPK and commercial solvers such as Gurobi and CPLEX. These back-ends can tackle large-scale problems, including MILP problems, that go beyond what a basic textbook simplex implementation can handle.

To use PuLP to invoke external solvers, follow these steps:

1. Install the required external solver, such as Gurobi or CPLEX.

2. Import the PuLP library and create a new problem object.

3. Define the variables, objective function, and constraints.

4. Specify the external solver to use and solve the problem.

Here is an example of using PuLP to solve a MILP problem:

import pulp

# Define the resource allocation problem as a maximization MILP
prob = pulp.LpProblem("Resource_Allocation", pulp.LpMaximize)

# Integer decision variables for the three projects
x1 = pulp.LpVariable("x1", lowBound=0, cat="Integer")
x2 = pulp.LpVariable("x2", lowBound=0, cat="Integer")
x3 = pulp.LpVariable("x3", lowBound=0, cat="Integer")

# Objective function: total profit
prob += 3*x1 + 5*x2 + 2*x3

# Resource constraints
prob += 2*x1 + x2 + 4*x3 <= 100
prob += 3*x1 + 4*x2 + 2*x3 <= 120
prob += x1 + 3*x2 + 2*x3 <= 80

# Solve with the GLPK solver (requires glpsol to be installed)
prob.solve(pulp.GLPK_CMD())

print("Optimal Solution:")
for v in prob.variables():
    print(v.name, "=", v.varValue)
print("Total Profit =", pulp.value(prob.objective))

This code uses PuLP to define a MILP problem with three integer variables and the corresponding constraints and objective function. It then uses the GLPK solver to find the optimal solution and prints it to the console.
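After solve() returns, it is worth checking the solver status before trusting the variable values; PuLP exposes this through the prob.status code and the LpStatus lookup table. A short follow-up to the example above:

# Human-readable solver status, e.g. "Optimal", "Infeasible", "Unbounded"
print(pulp.LpStatus[prob.status])

# Only read the solution if the solver actually found an optimum
if pulp.LpStatus[prob.status] == "Optimal":
    print({v.name: v.varValue for v in prob.variables()})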

Conclusion:

Linear Programming is a powerful mathematical optimization technique with diverse real-world applications. Python provides a range of tools and libraries, such as SciPy and PuLP, that can be used to model and solve LP and MILP problems efficiently.

The simplex method, interior-point method, branch-and-bound, and branch-and-cut algorithms can be employed to solve LP and MILP problems effectively. By understanding the fundamentals, implementation, and usage of LP and MILP, we can solve complex optimization problems and improve the efficiency and effectiveness of decision-making in many industries and organizations.

Linear Programming Solvers:

Linear Programming is a powerful optimization technique used to solve a range of real-world problems such as resource allocation, transportation optimization, and supply chain management. The simplex method and interior-point method are the classical algorithms used to solve LP problems.

However, for large-scale real-world problems, these algorithms may not be sufficient, and more efficient algorithms are required. The simplex method works by iteratively moving from one feasible solution to a better feasible solution until an optimal solution is found.

The interior-point method works by finding the optimal solution by minimizing the barrier function while maintaining the constraints. For MILP problems, branch-and-bound is the common approach used to find the optimal solution.

It involves dividing the problem into smaller subproblems and solving them recursively, using lower and upper bounds to prune subproblems that cannot contain the optimal solution. Linear Programming solvers are powerful tools that can help to solve large-scale optimization problems.

These tools implement state-of-the-art optimization algorithms and give users a choice of back-ends. There are many LP solvers available, including open-source solvers such as GLPK and COIN-OR CBC (commonly accessed through modeling libraries like PuLP) and commercial solvers such as Gurobi and CPLEX.
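PuLP can report which of these back-ends are actually installed on a given machine; a quick check (assuming only that PuLP itself is installed, and that listSolvers is available, as it is in recent PuLP releases):

import pulp

# Names of all solver interfaces PuLP knows about
print(pulp.listSolvers())

# Only the solvers that are actually usable in this environment
print(pulp.listSolvers(onlyAvailable=True))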

Solver performance varies based on the problem's size and complexity, as well as the available hardware and software resources.

Summary of Linear Programming and Python Implementation:

Linear Programming is an effective way of optimizing complex problems.

It involves maximizing or minimizing an objective function subject to a set of linear constraints. Python provides several approaches to implement LP solutions.

SciPy and PuLP are popular libraries used in Python for linear programming. SciPy is a library that offers a simple optimization interface for solving LP problems.

It provides an implementation of linear programming algorithms, such as the Simplex method and Interior-point method. On the other hand, PuLP provides an abstraction layer over LP solvers.

It can be used to create optimization problems and solve them using various optimization libraries. Python provides access to many LP solvers, including commercial and open-source solvers.

Some of the popular optimization solvers used with Python are GLPK, COIN-OR, Gurobi, and CPLEX. These solvers provide different access options and different optimization methods to solve LP and MILP problems effectively.


