A PDF of Eric’s handwritten notes is here.

### LP General Formulation

Recall the **standard form** for linear programs (LP’s). The LP is specified by the following input parameters:

- Variables: $latex x = (x_1, \ldots, x_n)^T$. (The transpose means that this is a column vector.)
- Objective function: a vector $latex c \in \mathbb{R}^n$; the objective is $latex c^Tx$.
- Constraints: an $latex m \times n$ matrix $latex A$ and a vector $latex b \in \mathbb{R}^m$.

**Standard form for LP:**

max $latex c^Tx$

subject to:

$latex Ax \leq b$, $latex x \geq 0$.

An alternative form is known as **slack form** for LP. Here we use equality constraints (instead of inequalities $latex \leq$).

**Slack form for LP:**

max $latex c^Tx$

subject to:

$latex Ax = b$, $latex x \geq 0$.

The two forms are equivalent, and it is straightforward to convert between them.

*Slack to Standard:* Let’s first see how to convert a slack constraint into standard form. Note the constraint $latex a_i \cdot x = b_i$ is equivalent to the pair of constraints: $latex a_i \cdot x \leq b_i$ and $latex a_i \cdot x \geq b_i$. And the latter constraint can be converted into standard form by multiplying both sides by $latex -1$ to yield: $latex -a_i \cdot x \leq -b_i$.

*Standard to Slack:* To convert from standard form to slack form, consider a standard constraint $latex a_i \cdot x \leq b_i$. Create a new variable $latex s_i$. We will use $latex s_i$ to denote the slack between the left- and right-hand sides. Thus if $latex a_i \cdot x \leq b_i$ then we want that $latex s_i = b_i - a_i \cdot x \geq 0$. Therefore we replace the original constraint by this pair: $latex a_i \cdot x + s_i = b_i$ and $latex s_i \geq 0$.
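The standard-to-slack conversion is mechanical enough to sketch in a few lines of Python (the helper name and the toy constraints here are ours, purely for illustration):

```python
def standard_to_slack(A):
    """Augment A so that each constraint a_i . x <= b_i becomes
    a_i . x + s_i = b_i with a fresh slack variable s_i >= 0.
    Returns the augmented matrix [A | I]."""
    m = len(A)
    return [row + [1 if j == i else 0 for j in range(m)]
            for i, row in enumerate(A)]

# x1 + 2x2 <= 4 and 3x1 + x2 <= 5 become
# x1 + 2x2 + s1 = 4 and 3x1 + x2 + s2 = 5 (b is unchanged).
print(standard_to_slack([[1, 2], [3, 1]]))  # [[1, 2, 1, 0], [3, 1, 0, 1]]
```

Each constraint gets its own slack column, so an $latex m \times n$ system becomes an $latex m \times (n+m)$ system of equalities.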

### LP Geometry

Recall, the feasible region for a LP is the set of points which satisfy all of the constraints (regardless of their objective function value). Formally, the feasible region is the set:

$latex \mathcal{F} = \{ x \in \mathbb{R}^n : Ax \leq b, x \geq 0 \}$.

**Feasible region:** The equation $latex a_i \cdot x = b_i$ defines a hyperplane in $latex n$ dimensions (a hyperplane is the generalization of a plane to more than 2 dimensions). Hence each constraint $latex a_i \cdot x \leq b_i$ defines a half-space in $latex n$ dimensions, i.e., all of the points on one side of the hyperplane. Therefore the feasible region is the intersection of these half-spaces. It is a convex polyhedron in $latex n$ dimensions.

**Vertices: **The vertices of the feasible region are the “corners” of this polyhedron; they are the points in the feasible region which satisfy at least $latex n$ of the constraints with equality (they lie on these hyperplanes). Note that there can be a huge number of vertices, namely up to $latex \binom{m+n}{n}$. We refer to a pair of vertices as neighbors if they share $latex n-1$ constraints that they satisfy with equality. Hence from a vertex we choose one of its $latex n$ tight constraints to remove and one constraint to add in (and then check if this is a valid point). Therefore, a vertex of the feasible region has at most $latex nm$ neighbors.

**Convexity: **Since the feasible region is convex, if we are at a vertex and all of its neighbors have smaller objective value, then the entire feasible region must be below (where below is with respect to the objective function); otherwise the objective would go down toward the neighbors and then back up, which would contradict convexity. Therefore a vertex which is locally optimal is also globally optimal. Now there may be optimal points which are not vertices, but in this case there is still a vertex which is also optimal (has the same objective function value). For example, in 2 dimensions our feasible region is a convex polygon and a point on the edge of the polygon could be optimal, but in this case the entire edge is optimal and thus both endpoints (which are vertices) are also optimal.

**Feasible region empty? **Are there any points satisfying all of the LP constraints? In other words, is the feasible region empty or not? How do we check that? And if it is non-empty, how do we find a feasible point to start our algorithm from? If our LP is in standard form we can make a new LP which always has a trivial feasible point. Then by finding an optimal point for this new LP we can determine if the original LP has a non-empty feasible region.

Consider a LP in standard form with variables $latex x = (x_1, \ldots, x_n)^T$:

max $latex c^Tx$

subject to:

$latex Ax \leq b$, $latex x \geq 0$.

**Feasibility LP:** For the new LP, which we’ll call the feasibility LP, we add one new variable $latex z$. Then we replace each constraint $latex a_i \cdot x \leq b_i$ with $latex a_i \cdot x + z \leq b_i$. Note $latex z$ is a single variable, not a vector, so every constraint adds in the same variable $latex z$. The key is that all of these constraints are clearly satisfied for $latex x = 0$ and $latex z \to -\infty$. In fact, we don’t need to take $latex z \to -\infty$; it suffices to take it sufficiently small, such as $latex z = b_{\min}$ where $latex b_{\min} = \min_i b_i$ is the smallest entry in the vector $latex b$. Thus, here is the feasibility LP:

max $latex z$

subject to:

$latex a_i \cdot x + z \leq b_i$ for all $latex i$, and $latex x \geq 0$.

*A few notes on the feasibility LP:*

–First off we’ve changed the objective function. Why? Because we are no longer interested in finding the optimal solution to the original LP, just a feasible point regardless of its objective function value. We know that for $latex x = 0$ and $latex z$ small enough, $latex (x, z)$ is a feasible point for the new LP. If we find a feasible point for the new LP where $latex z \geq 0$ then this $latex x$ is also feasible for the original LP because

$latex a_i \cdot x \leq b_i - z \leq b_i$

when $latex z \geq 0$. So our goal is simply to determine if there is a feasible point for the new LP where $latex z \geq 0$. Hence we can run the simplex algorithm (which we’ll describe momentarily) starting from $latex (x, z) = (0, b_{\min})$ and then stop when we find a feasible point with $latex z \geq 0$, or if we find that the optimal value has $latex z^* < 0$ (and hence the original LP has an empty feasible set).

–Note there is no non-negativity constraint on $latex z$. As discussed earlier this is still a valid LP and can be converted into standard form. We kept it in this simpler form to simplify the presentation.
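To see why the starting point $latex (x, z) = (0, b_{\min})$ is always feasible, here is a minimal Python check (the function name is ours):

```python
def feasibility_start(A, b):
    """Trivial feasible point (x, z) = (0, min_i b_i) for the feasibility LP
    max z  s.t.  a_i . x + z <= b_i for all i, and x >= 0."""
    x = [0.0] * len(A[0])
    z = min(b)
    # Every constraint holds since a_i . 0 + z = min(b) <= b_i.
    assert all(sum(aj * xj for aj, xj in zip(row, x)) + z <= bi
               for row, bi in zip(A, b))
    return x, z

# Works even when b has a negative entry, in which case x = 0 alone
# violates the original constraint a_1 . x <= -2:
print(feasibility_start([[1, 0], [1, 3]], [-2, 5]))  # ([0.0, 0.0], -2)
```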

**Simplex algorithm: **The simplex algorithm walks on vertices of the feasible region. It starts at a vertex, and then chooses a neighbor which has higher objective value; if there are multiple such neighbors that are better than the current vertex then which one we choose depends on the variant of the simplex algorithm (random, greedy, etc.). If we end at a vertex that is better than all of its neighbors then we know that it is an optimal point. In the worst case the simplex algorithm takes exponential time. However it does well on many instances, and is widely used for many huge LP’s.

One note: how do we find a starting feasible point for the simplex algorithm? We simply run the simplex algorithm on our feasibility LP. That LP has a trivial starting point and then we will either find a feasible point for our original LP or we’ll determine that the original LP is infeasible.

**Poly-time LP algorithms: **There are algorithms that are guaranteed to run in polynomial time (for all LP’s); these are based on the ellipsoid method or interior point methods. In practice, algorithms based on the simplex method and algorithms based on interior point methods are both widely used.

**Types of LP’s:** In the examples we’ve considered so far there is a point in the feasible region which is optimal, and hence there is a vertex of the feasible region which is also optimal. This might not be the case in some situations. More precisely, the optimal point is at a vertex of the feasible region, except:

- Infeasible LP: the feasible region is empty so there are no valid points. Here is a simple example:
- max $latex x_1$, such that
- $latex x_1 \leq -1$, $latex x_1 \geq 0$.

Note, whether or not a LP is infeasible just depends on the constraints, the objective function doesn’t matter (so, for a given set of constraints, it is true for all objective functions or none).

- Unbounded LP: the optimal value of the objective function is unbounded. So the feasible region is non-empty, but in the direction of the objective function the feasible region is unbounded, so we can achieve arbitrarily large objective function value. Here is a simple example:
- max $latex x_1$, such that
- $latex x_1 \geq 1$, $latex x_1 \geq 0$

Note, the feasible region can be an unbounded set, but for some objective functions the LP is bounded while for other objective functions it may be unbounded.

When these two exceptions do not occur then we refer to the LP as a feasible LP: the feasible region is non-empty and the objective function is bounded.

### LP Duality

Now we are ready to explore LP duality. Let’s recall one of our examples from last class. Here is the example:

max $latex x_1 + 6x_2 + 10x_3$

subject to:

(1) $latex x_1 \leq 300$ (this is simply a labeling of this constraint)

(2) $latex x_2 \leq 200$

(3) $latex x_1 + 3x_2 + 2x_3 \leq 1000$

(4) $latex x_2 + 3x_3 \leq 500$

(5) $latex x_1, x_2, x_3 \geq 0$

Last class we ran the simplex algorithm on this LP and found that the point $latex (200, 200, 100)$ was an optimal point with objective value 2400. Now suppose we are simply given this point and we want to verify that it’s optimal. How can we do that? Can we prove that no feasible point has an objective value $latex > 2400$? Any feasible point satisfies each of the constraints, so it also satisfies any linear combination of the constraints.

**Example of dual vector:** Let’s take a linear combination of the constraints defined by the vector $latex y = (y_1, y_2, y_3, y_4)$. There are 4 non-trivial constraints (we’re excluding the non-negativity constraints) so let’s look at $latex y_i$ times each of these 4 constraints. We’ll require every coordinate $latex y_i$ to be $latex \geq 0$. Thus when we take a linear combination we don’t need to flip any of the signs; they are all $latex \leq$ constraints. Taking a linear combination of the 4 constraints yields:

$latex y_1 x_1 + y_2 x_2 + y_3(x_1 + 3x_2 + 2x_3) + y_4(x_2 + 3x_3) \leq 300y_1 + 200y_2 + 1000y_3 + 500y_4$

Consider the vector $latex y = (0, 1/3, 1, 8/3)$. For this choice of $latex y$ we have:

$latex 0 \cdot x_1 + \frac{1}{3} x_2 + 1 \cdot (x_1 + 3x_2 + 2x_3) + \frac{8}{3}(x_2 + 3x_3) \leq 0 \cdot 300 + \frac{1}{3} \cdot 200 + 1 \cdot 1000 + \frac{8}{3} \cdot 500$.

Let’s collect terms and this is equivalent to:

$latex x_1 + 6x_2 + 10x_3 \leq 2400$.

Note the left-hand side is the objective function of our original LP. Thus we’ve shown that we can never achieve a value bigger than 2400. In other words, the profit is $latex \leq 2400$. Hence the point $latex (200, 200, 100)$ is optimal since it achieves the maximum possible profit of 2400.
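This kind of dual certificate can be verified mechanically. The sketch below assumes the example LP max $latex x_1 + 6x_2 + 10x_3$ with constraints $latex x_1 \leq 300$, $latex x_2 \leq 200$, $latex x_1 + 3x_2 + 2x_3 \leq 1000$, $latex x_2 + 3x_3 \leq 500$ and the multipliers $latex y = (0, 1/3, 1, 8/3)$; exact rational arithmetic avoids any rounding issues:

```python
from fractions import Fraction as F

A = [[1, 0, 0], [0, 1, 0], [1, 3, 2], [0, 1, 3]]  # constraint rows
b = [300, 200, 1000, 500]                          # right-hand sides
c = [1, 6, 10]                                     # objective coefficients
y = [F(0), F(1, 3), F(1), F(8, 3)]                 # dual certificate

# y^T A gives the coefficient of each x_j in the combined inequality,
# and y^T b gives its right-hand side.
combined = [sum(yi * row[j] for yi, row in zip(y, A)) for j in range(3)]
bound = sum(yi * bi for yi, bi in zip(y, b))

assert combined == c   # the combination reproduces the objective exactly
assert bound == 2400   # so c^T x <= 2400 for every feasible point
```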

**Finding dual vector:** How did we find this $latex y$? For this LP with 4 constraints we had a general $latex y = (y_1, y_2, y_3, y_4)$ with $latex y \geq 0$, and we took the combination $latex y_1 \cdot (1) + y_2 \cdot (2) + y_3 \cdot (3) + y_4 \cdot (4)$.

For this LP this is equivalent to:

$latex y_1 x_1 + y_2 x_2 + y_3(x_1 + 3x_2 + 2x_3) + y_4(x_2 + 3x_3) \leq 300y_1 + 200y_2 + 1000y_3 + 500y_4$.

Collecting terms we have:

$latex (y_1 + y_3)x_1 + (y_2 + 3y_3 + y_4)x_2 + (2y_3 + 3y_4)x_3 \leq 300y_1 + 200y_2 + 1000y_3 + 500y_4$.

We want to obtain the objective function on the left-hand side, or something bigger. Thus we want that all of the following hold:

$latex y_1 + y_3 \geq 1$, $latex y_2 + 3y_3 + y_4 \geq 6$, $latex 2y_3 + 3y_4 \geq 10$.

In addition we are trying to find the best upper bound on the objective function. The best upper bound means we want the smallest possible right-hand side. Thus we want to minimize the right-hand side, or in other words we want to minimize $latex 300y_1 + 200y_2 + 1000y_3 + 500y_4$. Thus we have the dual LP:

min $latex 300y_1 + 200y_2 + 1000y_3 + 500y_4$

subject to:

$latex y_1 + y_3 \geq 1$

$latex y_2 + 3y_3 + y_4 \geq 6$

$latex 2y_3 + 3y_4 \geq 10$

$latex y_1, y_2, y_3, y_4 \geq 0$

We refer to the original LP as the primal LP, and this new LP we just defined as its dual LP.

**Primal LP → Dual LP:** In general, how do we convert a primal LP into its dual LP? Assume the primal LP is in standard form. First off, the dual LP has a variable $latex y_i$ for each constraint in the primal LP (except for the non-negativity constraints). Thus $latex y$ is a vector of size $latex m$. And there is a constraint in the dual LP for each variable $latex x_j$ in the primal LP. Thus the dual LP has $latex n$ constraints. So if the primal LP has $latex n$ variables and $latex m$ constraints (plus the non-negativity constraints), then the dual LP has $latex m$ variables and $latex n$ constraints (plus the non-negativity constraints).

In our example, we are trying to find a linear combination of the constraints that minimizes the right-hand side (yielding the smallest possible upper bound on the profit). Thus, the dual objective function is defined by the right-hand sides of the constraints in the primal LP. More specifically, the objective function is min $latex b^Ty$.

The constraints of the dual LP specify that the left-hand side of the linear combination of the constraints is at least the primal’s objective function. Thus, the right-hand sides of the dual constraints are defined by the primal’s objective function $latex c$. And the left-hand sides of the dual constraints are defined by the columns of the primal’s constraint matrix $latex A$. Thus, the dual LP has constraints $latex A^Ty \geq c$.

**General form of dual LP:** Putting it all together, for a primal LP with $latex n$ variables and $latex m \times n$ constraint matrix $latex A$ in standard form:

max $latex c^Tx$

subject to:

$latex Ax \leq b$, $latex x \geq 0$.

Its dual LP has $latex m$ variables $latex y = (y_1, \ldots, y_m)^T$ and is the following:

min $latex b^Ty$

subject to:

$latex A^Ty \geq c$, $latex y \geq 0$.
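Mechanically, taking the dual of a standard-form LP just transposes $latex A$ and swaps the roles of $latex b$ and $latex c$. A small Python sketch (the helper name and sample data are ours):

```python
def dual_of_standard_lp(A, b, c):
    """For the primal  max c^T x  s.t.  A x <= b, x >= 0  (A is m x n),
    return (At, b_dual, c_dual) describing the dual
    min c_dual^T y  s.t.  At y >= b_dual, y >= 0,
    where At = A^T, b_dual = c, c_dual = b."""
    m, n = len(A), len(A[0])
    At = [[A[i][j] for i in range(m)] for j in range(n)]
    return At, c, b

# m = 3 primal constraints, n = 2 primal variables:
At, b_dual, c_dual = dual_of_standard_lp([[1, 2], [3, 4], [5, 6]],
                                         [7, 8, 9], [10, 11])
print(At)      # [[1, 3, 5], [2, 4, 6]] -- n = 2 dual constraints
print(c_dual)  # [7, 8, 9]              -- dual objective comes from b
```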

**Dual(Dual) = Primal: **As a sanity check, when we take the dual of the dual we should end up back with the primal LP. Let’s check that this is the case. The below illustration starts with a primal LP in standard form (this is the top-left LP). We take its dual LP (see the top-right LP). We then convert this dual LP into standard form (see the bottom-right LP). We take its dual (see the bottom-left LP). Notice that if we convert this LP, which is the dual of the dual, into standard form then we end up with the original primal LP.

**Weak Duality:** Our construction of the dual LP was designed so that it gives an upper bound on the value of the primal LP’s objective function. More precisely, any feasible point $latex y$ for the dual LP gives an upper bound on the value of the primal LP’s objective function. Thus, for any feasible point $latex x$ for the primal LP, its value of the objective function is at most that of the dual’s objective function at point $latex y$. Therefore we have the following theorem:

Weak Duality Theorem: For a feasible point $latex x$ of the primal LP and a feasible point $latex y$ of the dual LP, it holds that: $latex c^Tx \leq b^Ty$.

At this point it should be quite intuitive that the theorem holds. For completeness let’s formally prove it since the proof is fairly elementary.

Proof of the Weak Duality Theorem:

Since $latex y$ is feasible for the dual LP we know that $latex A^Ty \geq c$. Take the transpose of both sides and we have: $latex y^TA \geq c^T$. Now multiply both sides on the right by $latex x \geq 0$ and we have: $latex c^Tx \leq y^TAx$. To summarize we did the following chain:

$latex A^Ty \geq c \Rightarrow y^TA \geq c^T \Rightarrow c^Tx \leq y^TAx$.

Note the right-hand side is simply a number. Thus taking the transpose will yield the same number, therefore:

$latex y^TAx = (y^TAx)^T = x^TA^Ty$.

So we have shown that: $latex c^Tx \leq x^TA^Ty$.

Now consider $latex x$. Since it is feasible for the primal LP we have that $latex Ax \leq b$. Again taking the transpose and then multiplying both sides on the right by $latex y \geq 0$, in this case we get the following:

$latex Ax \leq b \Rightarrow x^TA^T \leq b^T \Rightarrow x^TA^Ty \leq b^Ty$.

Combining these two results we have shown that:

$latex c^Tx \leq x^TA^Ty \leq b^Ty$.

Ignoring the middle term we have the desired result $latex c^Tx \leq b^Ty$, which completes the proof of the weak duality theorem.
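The proof’s chain of inequalities can be checked numerically on a toy instance (the LP below is ours, not one from the lecture):

```python
# Toy LP: max 2x1 + 3x2  s.t.  x1 + x2 <= 4,  x1 + 2x2 <= 5,  x >= 0.
A = [[1, 1], [1, 2]]
b = [4, 5]
c = [2, 3]

x = [1, 1]   # primal feasible: 1 + 1 <= 4 and 1 + 2 <= 5
y = [1, 1]   # dual feasible: A^T y = [2, 3] >= c


def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))


Ax = [dot(row, x) for row in A]

# The chain c^T x <= y^T (Ax) <= b^T y from the proof:
assert dot(c, x) <= dot(y, Ax) <= dot(b, y)
print(dot(c, x), dot(y, Ax), dot(b, y))  # 5 5 9
```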

**Consequences of Weak Duality: **There are a few important implications of the weak duality theorem. First off, the theorem shows that the primal’s optimal is at most the dual’s optimal. If we find a pair of feasible points $latex x$ and $latex y$ where we achieve equality (as we did in the beginning example from this lecture) then we’ve proven that they are both optimal:

Corollary 1: Given a primal feasible $latex x$ and a dual feasible $latex y$ where $latex c^Tx = b^Ty$, then $latex x$ and $latex y$ are optimal for the primal and dual, respectively.

Suppose the optimal value of the primal LP is unbounded. Since the dual’s objective value is always at least this unbounded value the only option is that the dual is infeasible (there are no feasible points):

Corollary 2: If the primal LP is unbounded then the dual LP is infeasible.

Similarly, if the dual LP is unbounded then the primal LP is infeasible.

(Note, the reverse implications do not hold. For example, if the dual LP is infeasible then the primal LP can be unbounded, but it can also be infeasible.)

**Strong Duality: **Corollary 1 says that if there is a primal feasible $latex x$ and dual feasible $latex y$ with equality then they are both optimal. In our earlier example we found such a $latex x$ and $latex y$. Does there always exist such a pair achieving equality? Yes! First off we need that both the primal and dual LP’s are feasible (i.e., the feasible regions are non-empty and the optimal values are bounded). And then we have that the optimal points achieve equality.

**Strong Duality Theorem: **

- The primal LP is feasible with bounded optima iff the dual LP is feasible with bounded optima. (By feasible with bounded optima we mean that the LP has a non-empty feasible region and the optimal solution has bounded value.)
- Moreover, for a feasible primal LP with optimal solution $latex x^*$ and its dual LP with optimal solution $latex y^*$, the objective functions match: $latex c^Tx^* = b^Ty^*$.

**Max-flow = Min-cut via LP Duality: **Let us now review how to express max-flow as a LP. Then we’ll see the dual LP corresponds to the capacity of the min-st-cut. Hence we’ll get an alternative proof of the max-flow = min-cut theorem.

**Max-flow as LP: **The straightforward way to express max-flow as a LP is to have a variable $latex f_e$ for each edge $latex e$. We then have constraints along edges for the capacity constraints and constraints at vertices (except $latex s$ and $latex t$) for the conservation of flow constraints. Here is the LP:

max $latex \sum_{e \text{ out of } s} f_e$

s.t.

$latex 0 \leq f_e \leq c_e$ for all $latex e \in E$

$latex \sum_{e \text{ into } v} f_e = \sum_{e \text{ out of } v} f_e$ for all $latex v \in V \setminus \{s, t\}$
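As a concrete check of these constraints, here is a small Python sketch on a hypothetical 4-node network (ours, not an example from class):

```python
# Hypothetical network on vertices s, a, b, t: capacities and a candidate flow.
cap = {('s','a'): 2, ('s','b'): 1, ('a','b'): 1, ('a','t'): 1, ('b','t'): 2}
f   = {('s','a'): 2, ('s','b'): 1, ('a','b'): 1, ('a','t'): 1, ('b','t'): 2}

# Capacity constraints: 0 <= f_e <= c_e for every edge.
assert all(0 <= f[e] <= cap[e] for e in cap)

# Conservation of flow at every internal vertex (all except s and t).
for v in ('a', 'b'):
    inflow = sum(fv for (u, w), fv in f.items() if w == v)
    outflow = sum(fv for (u, w), fv in f.items() if u == v)
    assert inflow == outflow

# Objective: total flow out of s.
value = sum(fv for (u, w), fv in f.items() if u == 's')
print(value)  # 3
```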

**Alternative Max-flow LP: **An equivalent LP makes a variable $latex f_p$ for each path $latex p$ from $latex s$ to $latex t$. Hence let $latex \mathcal{P}$ be the set of all paths from $latex s$ to $latex t$. Note that typically $latex |\mathcal{P}|$ is exponentially large. So the resulting LP may be huge (too large to write down efficiently). But we are simply using that it is equivalent to max-flow, and we will consider its dual.

In this new LP the original flow conservation constraints are no longer needed since we will specify an amount of flow $latex f_p$ that is sent along the entire path from $latex s$ to $latex t$, and none of it is lost along the way. We still need the capacity constraints, and to do this we need to sum over all paths through an edge $latex e$. Here is this new LP:

max $latex \sum_{p \in \mathcal{P}} f_p$

s.t.

$latex \sum_{p \in \mathcal{P} : e \in p} f_p \leq c_e$ for $latex e \in E$

$latex f_p \geq 0$ for $latex p \in \mathcal{P}$

**Example Max-flow LP: **Here is an example flow network (the figure is in the handwritten notes).

The variables in the corresponding LP will be $latex f_{p_1}, f_{p_2}, f_{p_3}$ for the 3 paths $latex p_1, p_2, p_3$ from $latex s$ to $latex t$.

The max-flow LP (primal LP) is the following:

max $latex f_{p_1} + f_{p_2} + f_{p_3}$

s.t. for each edge $latex e$: $latex \sum_{i : e \in p_i} f_{p_i} \leq c_e$, and $latex f_{p_1}, f_{p_2}, f_{p_3} \geq 0$.

The dual LP is the following:

min $latex \sum_{e \in E} c_e y_e$

s.t.

$latex \sum_{e \in p_1} y_e \geq 1$ (edges on $latex p_1$)

$latex \sum_{e \in p_2} y_e \geq 1$ (edges on $latex p_2$)

$latex \sum_{e \in p_3} y_e \geq 1$ (edges on $latex p_3$)

$latex y_e \geq 0$ for all $latex e \in E$

One should think of the variable $latex y_e$ as indicating whether this edge is counted in a st-cut $latex (L, R)$ where $latex s \in L$ and $latex t \in R$.

The dual LP finds the capacity of the min st-cut.

**Dual of Max-flow LP: **Here is the general form of the dual of the max-flow LP. We are taking the dual of the LP with a variable $latex f_p$ for each path $latex p \in \mathcal{P}$. In the dual there is a variable $latex y_e$ for each edge $latex e \in E$.

The dual LP is the following:

min $latex \sum_{e \in E} c_e y_e$

s.t.:

$latex \sum_{e \in p} y_e \geq 1$ for all $latex p \in \mathcal{P}$

(every path must cross the cut at least once, and for such an edge $latex e$ we have $latex y_e = 1$)

$latex y_e \geq 0$ for all $latex e \in E$

The dual LP is equivalent to the min-st-cut problem.

Lemma:

For a st-cut $latex (L, R)$, there is a feasible $latex y$ where $latex \sum_{e \in E} c_e y_e = \mathrm{capacity}(L, R)$.

Hence, the dual LP optimal value is at most the minimum capacity of a st-cut. Since by strong duality the primal and dual LP have matching optimal values, we have the following: the size of the max-flow is at most the capacity of the min st-cut.

Proof:

Fix a st-cut $latex (L, R)$.

Set $latex y_e = 1$ if $latex e$ crosses from $latex L$ to $latex R$, and $latex y_e = 0$ otherwise.

This is feasible since every $latex p \in \mathcal{P}$ crosses the cut at least once.

And it has value: $latex \sum_{e \in E} c_e y_e = \mathrm{capacity}(L, R)$, which completes the proof of the lemma.
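The proof’s construction is easy to carry out concretely. The sketch below uses a small hypothetical network (ours, not from the notes): it builds $latex y$ from a cut and checks both dual feasibility and the value.

```python
# Hypothetical network on vertices s, a, b, t with the capacities below.
cap = {('s','a'): 2, ('s','b'): 1, ('a','b'): 1, ('a','t'): 1, ('b','t'): 2}
L, R = {'s', 'a'}, {'b', 't'}   # a st-cut: s in L, t in R

# Lemma's construction: y_e = 1 iff e crosses from L to R.
y = {e: 1 if (e[0] in L and e[1] in R) else 0 for e in cap}
value = sum(cap[e] * y[e] for e in cap)

# Each of the three s-t paths in this network crosses the cut at least
# once, so y is feasible for the dual LP.
paths = [[('s','a'), ('a','t')],
         [('s','a'), ('a','b'), ('b','t')],
         [('s','b'), ('b','t')]]
assert all(sum(y[e] for e in p) >= 1 for p in paths)
print(value)  # 3 = capacity(L, R) = c(s,b) + c(a,b) + c(a,t)
```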

To prove that the size of the max-flow equals the min capacity of a st-cut we need to show that the optimal value of the dual is at least that of the min capacity of a st-cut.

Lemma 2:

For any feasible $latex y$ for the dual LP, there is a st-cut $latex (L, R)$ where $latex \mathrm{capacity}(L, R) \leq \sum_{e \in E} c_e y_e$.

Hence, Dual LP optimal $latex \geq$ capacity(min st-cut). Therefore we have that: |max-flow| = |min st-cut|.

Lemma 2 is harder to prove so we omit the proof from this overview.