LINEAR PROGRAMMING

WITH MATLAB


126 Chapter 5. Solving Large Linear Programs

tableau. Since A_B = LU and A_B′ u = p_B, it follows that u = (L′)⁻¹(U′)⁻¹ p_B. Hence, we can reuse the L and U factors of A_B calculated above (leading to important savings in practice) and proceed in MATLAB as follows:

>> u = L'\(U'\p(B));
>> c = p(N)-A(:,N)'*u

c =
    -1
    -3
     3

Since c′ makes up the bottom row of the tableau (except for the bottom-right element) and

since it has negative entries, the current tableau is not optimal. Therefore, we select an

index to enter the basis, corresponding to one of the negative entries in c, and calculate the

column in the tableau that corresponds to this variable x_N(s).

>> s = 2;
>> d = U\(L\A(:,N(s)))

d =
     4
     6
    14

We now carry out the ratio test with the values of d and x_B calculated above and find that

r = 1 is the index that leaves the basis. We swap r and s between the sets B and N and

continue with the next iteration of the revised simplex method.

>> r = 1; swap = B(r);
>> B(r) = N(s); N(s) = swap;
>> [L,U] = lu(A(:,B));
>> x_B = U\(L\b)
>> u = L'\(U'\p(B));
>> c = p(N)-A(:,N)'*u

c =
   -0.25
    0.75
    2.25

The new value of c shows that we have not yet achieved optimality, as there is still a negative component. We identify the component s = 1 to enter the basis and a component r = 1 to leave, and continue with the next simplex iteration.

>> s = 1; d = U\(L\A(:,N(s)))
>> r = 1; swap = B(r);
>> B(r) = N(s); N(s) = swap;
>> [L,U] = lu(A(:,B));
>> x_B = U\(L\b)
>> u = L'\(U'\p(B));
>> c = p(N)-A(:,N)'*u

c =
     1
     1
     2


5.2. The Revised Simplex Method 127

Since we now have c ≥ 0, the current tableau is optimal. The corresponding solution is x = (1, 0, 0, 2, 0, 3)′, with z = p′x = p_B′ x_B = 2.
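The hand-driven iterations above follow one fixed pattern per step: factor the basis, solve two triangular systems, price, and swap. That pattern can be sketched in Python/NumPy as follows — an illustrative translation of the MATLAB commands, not the book's rsm.m; `rsm_iteration` and the tiny example problem are hypothetical, and SciPy's `lu_factor`/`lu_solve` stand in for MATLAB's `lu` and backslash solves:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def rsm_iteration(A, b, p, B, N):
    """One step of the revised simplex method (0-based index sets B, N).
    Returns (optimal, B, N, x_B)."""
    fac = lu_factor(A[:, B])            # [L,U] = lu(A(:,B))
    x_B = lu_solve(fac, b)              # x_B = U\(L\b)
    u = lu_solve(fac, p[B], trans=1)    # u = L'\(U'\p(B)): solves A_B' u = p_B
    c = p[N] - A[:, N].T @ u            # reduced costs (bottom row of tableau)
    if np.all(c >= 0):
        return True, B, N, x_B          # optimal tableau
    s = int(np.argmin(c))               # entering index: a negative entry of c
    d = lu_solve(fac, A[:, N[s]])       # d = U\(L\A(:,N(s)))
    pos = d > 1e-8                      # ratio test over positive entries of d
    ratios = np.where(pos, x_B / np.where(pos, d, 1.0), np.inf)
    r = int(np.argmin(ratios))          # leaving index
    B[r], N[s] = N[s], B[r]             # swap r and s between B and N
    return False, B, N, x_B

# tiny canonical-form example: minimize p'x s.t. Ax = b, x >= 0,
# starting from the slack basis (columns 2 and 3)
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0])
p = np.array([-1.0, -2.0, 0.0, 0.0])
B, N = [2, 3], [0, 1]
done, B, N, x_B = rsm_iteration(A, b, p, B, N)
# here c = (-1, -2), so x2 enters (s = 1), the ratio test gives r = 1,
# and the basis becomes B = [2, 1]
```

Calling `rsm_iteration` repeatedly until it reports `optimal` reproduces the loop carried out by hand in the text, with a fresh factorization each step.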

Exercise 5-2-2. In MATLAB, solve the following linear program using the revised simplex

method as shown above. You will need to convert the data of this problem to canonical

form and choose an initial B that corresponds to the slack variables that you add during this

conversion.

max −x1 + 2x2 + x3

subject to x1 + x2 + x3 ≤ 3,

−x1 + x2 − x3 ≤ 1,

2x1 + x2 − x3 ≤ 1,

−x1 + x2 ≤ 4,

x1, x2, x3 ≥ 0.

You should work through each step of the method explicitly, calculating each of the intermediate vectors, as in the example above.

In fact, it is computationally more efficient to update the LU factorization at each step of the revised simplex method instead of recomputing it from scratch. Details of such procedures can be found in Section 5.2.3; without these updates, the overall complexity of the revised simplex method is higher than that of an implementation using Jordan exchanges.
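To give a flavor of what updating buys, here is a sketch (in Python/NumPy, with a hypothetical helper name) of the closely related Sherman–Morrison rank-one update of the basis *inverse* after a single column swap. This is the simple product-form idea, at O(m²) work per swap instead of an O(m³) refactorization; it is not the numerically preferred LU update of Section 5.2.3:

```python
import numpy as np

def replace_column_inverse(Binv, a_new, r):
    """Update inv(B) after column r of the basis matrix B is replaced
    by a_new, via the Sherman-Morrison formula."""
    d = Binv @ a_new              # tableau column of the entering variable
    piv = d[r]
    if abs(piv) < 1e-8:           # pivot-tolerance safeguard (cf. piv_tol)
        raise ValueError("pivot too small to update safely")
    e = d.copy()
    e[r] -= 1.0
    # rank-one correction: subtract (e / piv) times row r of Binv
    return Binv - np.outer(e / piv, Binv[r, :])

# sanity check against a direct inverse of the updated basis matrix
rng = np.random.default_rng(0)
Bmat = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)
Binv = np.linalg.inv(Bmat)
a_new = rng.standard_normal(4)
Bnew = Bmat.copy()
Bnew[:, 1] = a_new
updated = replace_column_inverse(Binv, a_new, 1)
assert np.allclose(updated, np.linalg.inv(Bnew))
```

Note that the pivot element guarding the update is exactly the tableau entry d(r) from the ratio test, which is why a pivot tolerance (discussed below) matters here too.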

Numerical rounding error causes many problems for implementations of the simplex

method due to the special nature of the number zero and the need to determine positivity

or negativity of components in crucial steps of the procedure. Typical commercial codes

contain at least three types of “zero tolerances,” small positive numbers below which a

floating-point number is deemed indistinguishable from zero for purposes of the algorithm.

The first tolerance zer_tol is used in comparisons of numbers against zero. In testing

the bottom row of the tableau for negativity, we use the MATLAB code

if isempty(find(c < -zer_tol))

rather than the simple test

if isempty(find(c < 0))

This allows some of the values of c to be slightly negative. Chvátal (1983) suggests 10⁻⁵ as a typical value for this tolerance.

The second tolerance is a pivot tolerance piv_tol that safeguards against very small pivot elements and hence avoids the problems shown in Example 2-6-1. Pivots are deemed acceptable only if the potential pivot element is greater than piv_tol in absolute value. A typical value for this tolerance is 10⁻⁸.

The third zero tolerance is a slack tolerance slack_tol that measures how much error we allow in the slack variables, that is, in the difference between A_B x_B and b. The use of this tolerance will be outlined further in the sections on advanced pivot selection mechanisms and basis updates. A typical value for slack_tol is 10⁻⁶.
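A sketch of how the three tolerances might be wired together, using the typical values quoted above — the helper names here are illustrative, not taken from rsm.m:

```python
import numpy as np

ZER_TOL, PIV_TOL, SLACK_TOL = 1e-5, 1e-8, 1e-6  # typical values from the text

def is_optimal(c, zer_tol=ZER_TOL):
    """Bottom-row test: components above -zer_tol count as nonnegative."""
    return not np.any(c < -zer_tol)

def pivot_acceptable(piv, piv_tol=PIV_TOL):
    """Reject potential pivot elements too close to zero."""
    return abs(piv) > piv_tol

def slacks_ok(A_B, x_B, b, slack_tol=SLACK_TOL):
    """Check that the basic solution still satisfies A_B x_B = b closely."""
    return np.max(np.abs(A_B @ x_B - b)) <= slack_tol

# a vector whose only negative entry is roundoff noise is deemed optimal,
# while a genuinely negative reduced cost is not
assert is_optimal(np.array([-1e-12, 0.5, 2.0]))
assert not is_optimal(np.array([-0.25, 0.75, 2.25]))
assert not pivot_acceptable(1e-12)
```

The `is_optimal` test is the tolerance-based `find(c < -zer_tol)` check from the text, written with NumPy in place of MATLAB's `find`/`isempty`.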

A simple version of the resulting revised simplex method that uses these tolerances

and LU decomposition for solving the systems of linear equations is given in rsm.m. We

illustrate the use of this routine on the example given above.


Index 265

pivot, 18, 20, 46, 47. See also Jordan exchange

block, 31

degenerate, 62, 64, 69

nondegenerate, 67

submatrix, 31

pivot selection, 15. See also pricing

Devex, 142, 143

for dual simplex, 103

for Lemke’s algorithm, 179

steepest-edge, 142

polynomial complexity, 207

predictor-corrector method. See path-following methods

preprocessing, 212

pricing, 47, 49, 53, 68, 117

partial, 142

primal-dual methods. See interior-point methods

primal-dual pair of linear programs, 89

definition, 94

for general linear programs, 108

interior-point methods and, 195

optimality conditions for, 100

weak duality for, 97

zero-sum games and, 192

prisoners’ dilemma, 186, 191

projection, 173, 205, 244

pure strategies, 185

QR factorization, 226

quadratic function, 244–247

convex, 244–247

quadratic programming, 7, 172–177,

212–215, 227, 231

convex, 172, 173, 175, 212

general form, 175

infeasible, 182, 183

nonconvex, 182, 184

unbounded, 182, 183

rank, 91, 225

determination via Jordan exchanges,

91–92

full, 225

ratio test, 48, 126, 203

after addition of constraints, 157

definition, 49

for dual simplex method, 103, 104,

157, 165

in Lemke’s method, 179, 180, 188

robust implementation of, 143

ties in, 66, 68

reduced costs, 49, 68, 124, 132, 142, 152,

153, 160

regression problems. See approximation problems

residual vector, 219, 221, 227

outliers, 227

resource allocation, 10–11

revised simplex method, 52, 117, 123–143,

156

choice of initial basis, 125

for problems with bound constraints,

129, 134

fundamental idea, 123–124

initial basic feasible solution, 134–139

linear algebra in, 124–126

pivot selection, 142–143

roundoff error issues, 127–129

specification, 124

roundoff error, 28, 127, 143

saddle point, 184

scalar product, 252

Scheme II for linear programs in nonstandard form, 72, 80–86, 157, 220, 221, 223

Schur complement, 31

sensitivity analysis, 4, 151

separating hyperplane, 11, 230, 231, 233

separation lemma, 113

shadow prices, 97, 154, 155. See also Lagrange multipliers

Sherman–Morrison–Woodbury formula, 140

shifting, 74

simplex method, 6, 7, 45–87

introduction, 14–15

network, 149–150

slack variables, 8, 45, 117, 152, 196

definition, 45

dual, 101, 103


266 Index

smallest-subscript rule, 67–72, 78, 104

solution set, 58–60

source, 145

standard form, 7, 16, 45, 117, 151

standard form, transformation to, 72–76

free variable method, 75

Scheme I, 72, 76–80

Scheme II, 80–86

substitution method, 74

Steinitz theorem, 26–27, 38, 92

support vector machines

linear, 230–233

linear programming (ℓ1) formulation, 233

nonlinear, 233–234

nonseparable, 231–233

separable, 230–231

support vectors, 231, 232

tableau, 18–20

augmented, 62

condensed, 45, 47, 123

degenerate, 65, 66, 120

dual, 89

feasible, 46, 187

initial, 47, 119

labels, 21

MATLAB representation, 21–22

nondegenerate, 65

optimal, 49, 53, 58, 71, 152–154,

156

primal, 89

unbounded, 54

Taylor’s theorem, 170, 198, 199, 249

theorem of the alternative, 113

tolerance

pivot, 127

slack, 127

triangular substitution, 40, 139, 141, 209

unbounded linear program, 14, 46, 54–56,

66, 98

unbounded ray, 54, 182, 183

uncertainty in formulation, 151

variables, 2

artificial, 61, 62, 135, 167, 179, 218

for equality constraints, 80–82, 193

basic, 45, 152

blocking, 133

dependent, 18, 21, 23

dual, 89. See also Lagrange multipliers

free, 75, 80, 107, 134, 136–139, 167,

184, 222

independent, 18, 21, 23, 26

nonbasic, 45, 51, 130

nonnegative, 107

slack. See slack variables

vector

norm. See norm

notation, 237

representation in MATLAB, 251

vertex, 14–15, 46, 51–53, 120, 163

adjacent, 15

degenerate, 52

zero-sum game, 185, 192–193
