LINEAR PROGRAMMING WITH NONLINEAR PARAMETRIC OBJECTIVE FUNCTIONS

by Martha Margaret Hayes, B.A.

THESIS

Presented to the Faculty of the Graduate School of The University of Texas in Partial Fulfillment of the Requirements For the Degree of MASTER OF ARTS

THE UNIVERSITY OF TEXAS
AUGUST, 1956

PREFACE

This thesis is an outgrowth of a term paper written in a course for Dr. R. E. Greenwood of the Mathematics Department of The University of Texas. The problem of linear programming is that of maximizing or minimizing a linear functional,

f(X) = Σ_{j=1}^n c_j X_j,

subject to a set of constraints,

Σ_{j=1}^n X_j a_ij = b_i   (i = 1, 2, ..., m),

where n > m and the a_ij, b_i, and c_j are constants. The simplex method of Dantzig and the dual method of Lemke are methods of solving this problem. Saaty and Gass have solved the problem in which each c_j is a linear function of a parameter t. This paper formulates and solves the problem (1) in which each c_j is a parabolic function of a parameter t and (2) in which each c_j is a periodic function of a parameter t of the form h_j sin t + k_j cos t, where h_j and k_j are constants.

I should like to express my sincere appreciation to Dr. R. E. Greenwood, who acted as supervising professor, for his guidance and suggestions; to Dr. H. J. Ettlinger, who also served on the committee, for examining and approving the manuscript; and to Mrs. Edgar J. Mackey, who typed this thesis, for her help and cooperation.

M. M. H.
Austin, Texas
June, 1956

TABLE OF CONTENTS

CHAPTER                                                              PAGE
I. THEORETICAL BACKGROUND                                               1
   A. Introduction                                                      1
   B. Definition of Terms                                               2
   C. Statement and Proof of a Fundamental Theorem                     11
   D. The Solution Space and Requirements Space                        13
   E. Assuring That the Solution Set Is Bounded                        14
   F. The Extreme Point of the Solution Set                            15
   G.
      Finding a Second Extreme Point Solution From a Known Extreme Point Solution   17
   H. Determining Whether a Given Extreme Point Solution Is Optimal    22
   I. Statement of the Duality Theorem of Linear Programming           26
   J. Explanation of Matrix Notation                                   27
   K. Proof of the Duality Theorem of Linear Programming               29
   L. Duality in Other Fields of Mathematics                           34
II. TWO METHODS OF SOLUTION                                            36
   A. Introduction                                                     36
   B. Finding An Initial Feasible Extreme Point Solution               36
   C. Resolution of Degeneracy                                         40
   D. Development of the Computational Procedure                       48
   E. The Simplex Algorithm                                            52
   F. Determining All Optimal Solutions                                53
   G. An Example Solved by the Simplex Method                          56
   H. The Geometric Solution                                           60
   I. Statement of the Dual Problem from the Linear Programming Problem  60
   J. An Application of the Duality Theorem                            62
   K. The Solution Set of the Dual Problem                             63
   L. Theoretical Development of the Dual Method                       64
   M. Resolution of Assumptions Made in Section L                      72
   N. An Example Solved by the Dual Method                             80
III. THE PROBLEM IN WHICH THE COST COEFFICIENTS ARE LINEAR FUNCTIONS OF A PARAMETER   86
   A. Introduction                                                     86
   B. The One-Parameter Case Solved by the Simplex Method              87
   C. Equivalence of the Dual and Simplex Methods in the One-Parameter Case  96
   D. The Two-Parameter and Multi-Parameter Problem                   100
   E. Solution of a One-Parameter Problem                             102
IV. THE PROBLEMS IN WHICH THE COST COEFFICIENTS ARE NONLINEAR FUNCTIONS OF A PARAMETER  108
   A. Introduction                                                    108
   B. The Parabolic Case                                              108
   C. The Periodic Case                                               114
   D. Solution of a Parabolic Problem                                 122
   E. Solution of a Periodic Problem                                  133
BIBLIOGRAPHY                                                          142
VITA                                                                  143

CHAPTER I

THEORETICAL BACKGROUND

A.
INTRODUCTION

The concept of linear programming is a relatively new one, and the development of the techniques used to attack it and related problems is occupying the time of a great many contemporary mathematicians, particularly those who are interested in economic and industrial applications of mathematics. Physically, the problem is concerned with "planning a complex of interdependent activities in the best possible (optimal) fashion" [1]. Scheduling trains, blending aviation gasolines, and allocating labor are only a few examples of the type of problems which have been formulated as linear programming problems. Mathematically, the problem is the maximization or minimization of a linear "functional" of a set of non-negative variables which are subject to a set of linear inequalities or linear equations; that is, finding x = (x_1, x_2, ..., x_n) such that

f(x) = Σ_{j=1}^n c_j x_j

is a maximum (or minimum), x_j ≥ 0 (j = 1, 2, ..., n), and

Σ_{j=1}^n a_ij x_j ≤ b_i  (i = 1, 2, ..., m)   or   Σ_{j=1}^n a_ij x_j = b_i  (i = 1, 2, ..., m).

In order to understand the development and application of the methods of handling such problems, one must first become familiar to some extent with the theory of convex sets in n-dimensional space and the theory of linear transformations. One purpose of this chapter is to present the theory from these two fields necessary to an understanding of the linear programming problem. Following this discussion, and based on it, will be a brief analysis of the simplex method of solution. A more detailed presentation of the simplex method and its procedure will be reserved until Chapter II. Finally, the last section of Chapter I will be a statement and proof of the "duality theorem," a basic theorem for linear programming.

B. DEFINITION OF TERMS

The first step in presenting any mathematical theory is to define the terms used. Points or vectors will mean points or vectors in n-dimensional space.
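For concreteness, here is a minimal brute-force sketch of such a problem (all data invented for illustration, not taken from the thesis). Since, as proved later in this chapter, an optimum occurs at an extreme point of the constraint set, a two-variable instance can be solved by intersecting pairs of constraint lines and keeping the best feasible intersection.

```python
from itertools import combinations

# Hypothetical instance: maximize f = 3*x1 + 2*x2 subject to
#   x1 +   x2 <= 4
#   x1 + 3*x2 <= 6,   x1, x2 >= 0.
# Every constraint, including x1 >= 0 and x2 >= 0, is a line
# a*x1 + b*x2 = d; candidate extreme points are intersections of pairs.
lines = [(1, 1, 4), (1, 3, 6), (1, 0, 0), (0, 1, 0)]

def feasible(p):
    x1, x2 = p
    return (x1 >= -1e-9 and x2 >= -1e-9
            and x1 + x2 <= 4 + 1e-9 and x1 + 3*x2 <= 6 + 1e-9)

candidates = []
for (a1, b1, d1), (a2, b2, d2) in combinations(lines, 2):
    det = a1*b2 - b1*a2
    if det == 0:
        continue                    # parallel lines never meet
    p = ((d1*b2 - b1*d2) / det,     # Cramer's rule for the 2x2 system
         (a1*d2 - d1*a2) / det)
    if feasible(p):
        candidates.append(p)

best = max(candidates, key=lambda p: 3*p[0] + 2*p[1])
print(best)   # (4.0, 0.0): the maximum, 12, occurs at an extreme point
```

Enumerating intersections is hopeless for large problems; the simplex method developed below visits only a short chain of extreme points instead.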
Addition of points is defined only for sets of points, each member of which has the same number of components. The sum of two points x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) is then x + y = (x_1 + y_1, x_2 + y_2, ..., x_n + y_n). Multiplication of a point by a real number a is defined by the following relation: ax = (ax_1, ax_2, ..., ax_n).

The ray through the point x is defined as the set of all points ax such that a ≥ 0. Charnes [1] uses the symbol [x | P] to mean the set of all points x having the property P. This notation will be used henceforth in this paper. For example, the ray through the point x is [ax | a ≥ 0]. If x^(1) and x^(2) are two distinct points, the segment joining x^(1) and x^(2) is [μx^(1) + (1 − μ)x^(2) | 0 ≤ μ ≤ 1].

A convex set is a "collection of points such that, if x and y are any two points in the collection, the segment joining them is also in the collection" [1]. An extreme point of a convex set is a "point in a convex set which does not lie on a segment joining some two other points of the set" [1]. If x is a point of the convex set K and x^(1), ..., x^(m) are the extreme points of the convex set K, then x may be expressed as a "convex" linear combination of the extreme points; that is,

x = λ_1 x^(1) + λ_2 x^(2) + ... + λ_m x^(m),

where λ_i ≥ 0 (i = 1, 2, ..., m) and λ_1 + λ_2 + ... + λ_m = 1.

"T is said to be a linear transformation if: T(ax + bw) = aT(x) + bT(w) for all points x, w, and real numbers a, b" [1]. A linear transformation is completely determined by what it does to the unit vectors. The notation e^(i) will be used to denote the unit vectors: e^(1) = (1, 0, 0, ..., 0); e^(2) = (0, 1, 0, ..., 0); ...; e^(n) = (0, 0, ..., 1). Now

x = (x_1, x_2, ..., x_n) = x_1 e^(1) + x_2 e^(2) + ... + x_n e^(n).

Therefore, T(x) = x_1 T(e^(1)) + x_2 T(e^(2)) + ... + x_n T(e^(n)). Consequently, if the T(e^(i)) (i = 1, 2, ..., n) are known, the transformation will be uniquely determined:

T(e^(i)) = (a_i1, a_i2, ..., a_in) = a_i1 e^(1) + a_i2 e^(2) + ... + a_in e^(n).

Every such linear transformation T is associated with a matrix A:
A = | a_11  a_12  ...  a_1n |
    | a_21  a_22  ...  a_2n |
    | ...   ...   ...  ...  |
    | a_n1  a_n2  ...  a_nn |

It is also possible to have a linear transformation from n-dimensional space into m-dimensional space. For these transformations, the associated matrix has n rows and m columns. For example,

| a_1 |
| a_2 |
| ... |
| a_n |

is the matrix (a column vector in this case) associated with a linear transformation from n-dimensional space into one-dimensional space. (A convenient way of indicating a column vector A is to write A', its transpose, as a row vector. The "transpose" A' of a matrix A is the matrix obtained by interchanging the rows and columns of A.) To every point x in n-dimensional space there corresponds a unique point w in m-dimensional space into which x is transformed by the linear transformation whose associated matrix has n rows and m columns.

The statements in the next three paragraphs are from matrix or vector theory and will be made and used in this paper without proof.

"A linearly independent set of vectors (or points) is a set such that c_1 P^(1) + c_2 P^(2) + ... + c_k P^(k) = the 'null vector' (0, ..., 0) for real numbers c_i only if every c_i = 0." Vectors not linearly independent are linearly dependent.

"A consequence of linear independence is that, if a point P_0 = a_1 P^(1) + a_2 P^(2) + ... + a_k P^(k) (where P^(1), ..., P^(k) is a linearly independent set), then the a_1, ..., a_k are the only coefficients of the P^(i)'s in terms of which one can express P_0 as a sum of multiples of the P^(i), i.e., as a 'linear combination.'

"If the points are in m-dimensional space, then there are at most m points in a linearly independent set. Such a set of m points (vectors) is called a basis of the space. Every point of the space may be written (uniquely) as a linear combination of the points of a basis" [1].

The following theorem is of importance in linear programming: "A linear transformation L from an n-dimensional space U to an m-dimensional space W takes a convex polyhedron K into a convex polyhedron L(K), the image of K" [1].
The method of proof is (1) to show that the convexity of L(K) follows from the convexity of K and (2) to show that L(K) has at least one extreme point and no more extreme points than K has. For a detailed proof one may consult Reference [1] of the Bibliography, or other references on linear transformations and convex sets.

A linear functional f(x) is a linear transformation which takes the points x = (x_1, x_2, ..., x_n) of an n-dimensional space into one-dimensional space. For example, f(x) = Σ_{i=1}^n c_i x_i is a linear functional. At the beginning of this paper the problem of linear programming was stated as the problem of finding x = (x_1, x_2, ..., x_n) such that (1) f(x) = Σ_{j=1}^n c_j x_j is a maximum (or minimum), (2) Σ_{j=1}^n a_ij x_j ≤ b_i (i = 1, 2, ..., m) or Σ_{j=1}^n a_ij x_j = b_i (i = 1, 2, ..., m), and (3) x_j ≥ 0 (j = 1, 2, ..., n). If the restrictions are inequalities, the restriction Σ_{j=1}^n a_ij x_j ≤ b_i becomes

Σ_{j=1}^n a_ij x_j + x_{n+i} = b_i,   with x_j ≥ 0 and x_{n+i} ≥ 0.

To every solution of the set of inequalities there corresponds a solution of the set of equations for which the value of the linear functional is the same. Hence one may work with the set of equations only.

Henceforth this paper will use the notation of Charnes, and the unknowns will be denoted by X = (X_1, ..., X_n) instead of x. In order to clarify the discussion which follows, the set of equations will be written out in greater detail here:

X_1 a_11 + X_2 a_12 + ... + X_n a_1n = b_1
X_1 a_21 + X_2 a_22 + ... + X_n a_2n = b_2
...
X_1 a_m1 + X_2 a_m2 + ... + X_n a_mn = b_m

The column vectors (b_1, b_2, ..., b_m)', (a_11, a_21, ..., a_m1)', ..., (a_1n, a_2n, ..., a_mn)' are vectors in m-dimensional space. They will be denoted by P_0, P_1, ..., P_n, respectively. The problem then is to find X = (X_1, X_2, ..., X_n) such that X_1 P_1 + X_2 P_2 + ... + X_n P_n = P_0 and for which f(X) is a maximum (or minimum).

The set of equations, Σ_{j=1}^n a_ij X_j = b_i (i = 1, 2, ..., m), may be considered a linear transformation from n-space into m-space if each b_i is replaced by w_i. Then the point x = (x_1, x_2, ..., x_n) is transformed into w = (w_1, w_2, ..., w_m).
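The reduction from inequalities to equations described above can be sketched in a few lines (the coefficients are invented for illustration):

```python
# Invented data: m = 2 inequalities in n = 3 variables,
#   sum_j A[i][j]*x[j] <= b[i].
A = [[1, 2, 1],
     [3, 0, 2]]
b = [10, 12]

m, n = len(A), len(A[0])
# Row i receives the slack variable x_{n+i}: append the identity columns.
A_eq = [row + [1 if i == k else 0 for k in range(m)]
        for i, row in enumerate(A)]
print(A_eq)   # [[1, 2, 1, 1, 0], [3, 0, 2, 0, 1]]

# Any x with A x <= b extends to a solution (x, b - A x) of the equation
# system with non-negative slack, and the functional value is unchanged
# because the slack variables carry zero cost.
```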
The matrix associated with this linear transformation is the transpose of the matrix of coefficients of the equations. This is true because the n unit vectors e^(1), ..., e^(n) are transformed into the n column vectors P_1, P_2, ..., P_n. For example, e^(1) = (1, 0, ..., 0) is transformed into P_1. To understand this, recall that if x is an n-dimensional point, it is transformed into w, where w = x_1 P_1 + x_2 P_2 + ... + x_n P_n. Therefore, e^(1) = (1, 0, ..., 0) is transformed into the point 1·P_1 + 0·P_2 + ... + 0·P_n = P_1. In general, e^(i) is transformed into P_i. Now the matrix associated with a linear transformation is the matrix whose (i, j) element is the jth component of the transform of the ith unit vector. In other words, the ith row of the matrix is simply the components of the transform of e^(i). Hence the matrix associated with the transform x → x_1 P_1 + x_2 P_2 + ... + x_n P_n = w is the n × m matrix whose rows are the P_i'. This is obviously the transpose of the matrix of coefficients of the set of equations x_1 P_1 + x_2 P_2 + ... + x_n P_n = P_0.

Henceforth, the following notation will be used: U will denote n-dimensional space; W will denote m-dimensional space; and L will denote the linear transformation whose associated matrix is the transpose of the matrix of coefficients.

"Now the set of all vectors in U having non-negative coordinates forms a convex polyhedral cone" [1]. The linear transformation L takes this convex polyhedral cone of U into a convex polyhedral cone of W. "A representative convex set associated with a convex polyhedral cone is a convex set generated by points, one on each edge of the cone" [1]. An edge of a convex polyhedral cone is the ray through one of the extreme points of the convex polyhedron generating the cone. Hence, any representative convex set may be thought of as having generated the cone.

In the linear programming problem there is usually more than one point X such that Σ_{i=1}^n X_i P_i = P_0 and X_i ≥ 0. The set of all such points will be denoted henceforth by Δ. Such "feasible" points are called solutions.
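A minimal sketch of such a solution set, on invented data with m = 1 (a single equation), shows the points of the set and the segment property proved in the next paragraph:

```python
# Invented system: one equation X1*P1 + X2*P2 = P0 in m = 1 dimension,
# with P1 = 2, P2 = 3, P0 = 6.  Feasible points: 2*X1 + 3*X2 = 6, X >= 0.
def in_delta(X):
    X1, X2 = X
    return X1 >= 0 and X2 >= 0 and abs(2*X1 + 3*X2 - 6) < 1e-9

Xa, Xb = (3.0, 0.0), (0.0, 2.0)        # two feasible points (solutions)
assert in_delta(Xa) and in_delta(Xb)

# Every point mu*Xa + (1-mu)*Xb with 0 <= mu <= 1 is again feasible,
# as the convexity argument below shows in general.
for k in range(11):
    mu = k / 10
    mid = (mu*Xa[0] + (1 - mu)*Xb[0], mu*Xa[1] + (1 - mu)*Xb[1])
    assert in_delta(mid)
print("segment lies in the solution set")
```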
They do not necessarily maximize or minimize the linear functional. It will now be shown that Δ is a convex set. If X^(1) and X^(2) are any two distinct points in Δ, then L(X^(1)) = P_0 and L(X^(2)) = P_0. If 0 ≤ μ ≤ 1, then

L(μX^(1) + (1 − μ)X^(2)) = μL(X^(1)) + (1 − μ)L(X^(2)) = μP_0 + (1 − μ)P_0 = P_0.

Hence Δ is a convex set.

C. STATEMENT AND PROOF OF A FUNDAMENTAL THEOREM

A theorem will now be proved which is basic to the solution of the problem at hand:

"Theorem. A linear functional f(u) defined on a convex polyhedron K takes on its max. (or min.) at an extreme point of the convex set. If it takes on the max. (or min.) at more than one point, then it takes the same value over the whole convex set generated by those particular points.

"Proof. Suppose that x is a point of the convex set K for which f(x) is a max. (or min.). If x is an extreme point, the first statement of the theorem is true. Suppose that x is not an extreme point and that f(x) is a max. (or min.). Then we may express x as a 'convex' combination of extreme points of K, say A_1, ..., A_r. Thus

x = Σ_{i=1}^r γ_i A_i,

where 0 ≤ γ_i ≤ 1 (i = 1, 2, ..., r) and Σ_{i=1}^r γ_i = 1. Then, because f is a linear functional, we have:

f(x) = f(Σ_{i=1}^r γ_i A_i) = Σ_{i=1}^r γ_i f(A_i).

Now, since the γ_i are all non-negative, we do not decrease the sum Σ_{i=1}^r γ_i f(A_i) if for each f(A_i) we substitute the greatest of the values, say f(A_k) for some fixed k. But then

f(x) ≤ f(A_k) Σ_{i=1}^r γ_i = f(A_k).   Hence, f(x) = f(A_k).

"If f takes on a max. (or min.) for more than one extreme point, say A^(1), ..., A^(k), i.e., f(A^(1)) = f(A^(2)) = ... = f(A^(k)) = M = max. (or min.), consider the convex set formed from these points. If x is any point in this set, then x = Σ_{i=1}^k γ_i A^(i), where 0 ≤ γ_i ≤ 1 and Σ γ_i = 1, and by linearity f(x) = Σ_{i=1}^k γ_i f(A^(i)) = M Σ_{i=1}^k γ_i = M" [1].

Suppose now that X = (X_1, X_2, ..., X_q, 0, ..., 0) is an extreme point of the solution set Δ and that the corresponding vectors P_1, ..., P_q are linearly dependent, so that Σ_{i=1}^q c_i P_i = 0 with not all c_i zero. Since each X_i > 0 (i = 1, 2, ..., q), there exists a positive number k such that X_i + kc_i and X_i − kc_i are positive (i = 1, 2, ..., q). Then

X^(1) = (X_1 + kc_1, X_2 + kc_2, ..., X_q + kc_q, 0, ..., 0)

and

X^(2) = (X_1 − kc_1, X_2 − kc_2, ..., X_q − kc_q, 0, ..., 0)
are also points of Δ, and X = ½X^(1) + ½X^(2), so that X lies on the segment joining two distinct points of Δ. This is a contradiction of the assumption that X is an extreme point. Therefore, the X_i (i = 1, 2, ..., q) must be coefficients of linearly independent vectors. It should be noted, as above, that q ≤ m, because more than m vectors in m-dimensional space are necessarily linearly dependent.

Now, since n is a finite number, the number of sets of linearly independent vectors, and hence the number of extreme points, is finite. Therefore, if the solution set is bounded, it is a convex polyhedron. The functional takes on its optimal value (maximum or minimum) at one of its extreme points.

G. FINDING A SECOND EXTREME POINT SOLUTION FROM A KNOWN EXTREME POINT SOLUTION

The foundation has now been constructed upon which the simplex procedure is based. The first step in this procedure is to assume that a solution X, involving exactly m non-zero components which are coefficients of linearly independent vectors, is readily available. This is actually no restriction on the problem, as will be explained in Chapter II. The components of this solution will be denoted by X_1, X_2, ..., X_m and the corresponding column vectors by P_1, P_2, ..., P_m. Since these column vectors are linearly independent, they form a basis of W (m-dimensional space). All of the P_j's (j = 1, 2, ..., n) may be expressed in terms of them. For example, P_j = Σ_{i=1}^m x_ij P_i. The values x_ij are unique. The value of the functional determined by this first "trial" solution X = (X_1, ..., X_m) is z_0 = Σ_{i=1}^m X_i c_i.

This solution is an extreme point of Δ, and what is sought is another extreme point of Δ which, if possible, increases the value of the functional. Let P_k be some vector not in the basis P_1, P_2, ..., P_m. Then P_k = Σ_{i=1}^m x_ik P_i, so that, for any θ,

θP_k − θ Σ_{i=1}^m x_ik P_i = 0.

Since P_0 = Σ_{i=1}^m X_i P_i, it follows that

P_0 = Σ_{i=1}^m (X_i − θx_ik) P_i + θP_k.

If θ > 0 and each (X_i − θx_ik) ≥ 0, then this will give another solution.
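A numerical sketch of this substitution, on invented numbers: taking θ as large as possible while keeping every X_i − θx_ik non-negative (the smallest ratio X_i/x_ik over x_ik > 0, the rule developed in the next section) drives exactly one component to zero.

```python
# Invented data: current basic values X_i and the representation column
# x_ik of the entering vector P_k in terms of the basis.
X   = [4.0, 6.0, 3.0]      # X_1, X_2, X_3 > 0 for the current basis
x_k = [2.0, -1.0, 3.0]     # coefficients x_ik with P_k = sum_i x_ik * P_i

# theta = min over i with x_ik > 0 of X_i / x_ik; here min(4/2, 3/3) = 1.
ratios = [(X[i] / x_k[i], i) for i in range(len(X)) if x_k[i] > 0]
theta, r = min(ratios)

# New solution: X_i - theta * x_ik on the old basis vectors, theta on P_k.
new_X = [X[i] - theta * x_k[i] for i in range(len(X))]
print(theta, r, new_X)     # component r drops exactly to zero
```

Negative entries of x_ik (here x_2k = -1) only increase the corresponding component, so they place no limit on θ.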
If θ > 0, all (X_i − θx_ik) ≥ 0, and at least one (X_i − θx_ik) = 0, then this is another extreme point of the solution set, because it involves a different set of linearly independent vectors. To prove that these vectors are linearly independent, let X_r − θx_rk = 0. Now assume that the set of vectors P_i (i = 1, 2, ..., m; i ≠ r), together with P_k, is linearly dependent. Then there exists a set of numbers d_i and a number d_k, not all zero, such that

Σ_{i=1, i≠r}^m d_i P_i + d_k P_k = 0.

Now d_k ≠ 0, for if d_k = 0, then Σ_{i=1, i≠r}^m d_i P_i = 0 and at least one of the d_i ≠ 0. This is not possible because the P_i (i = 1, 2, ..., m; i ≠ r) are linearly independent. Thus d_k ≠ 0, and

P_k = Σ_{i=1, i≠r}^m (−d_i/d_k) P_i.

Let b_i = −d_i/d_k, so that P_k = Σ_{i=1, i≠r}^m b_i P_i. But P_k = Σ_{i=1}^m x_ik P_i. Subtracting the expression involving the b_i from the expression involving the x_ik gives:

0 = x_rk P_r + Σ_{i=1, i≠r}^m (x_ik − b_i) P_i.

Now since the vectors P_1, ..., P_m are linearly independent, all the coefficients must vanish. In particular, x_rk = 0. However, it has been assumed that X_r > 0, θ > 0, and X_r − θx_rk = 0, so that x_rk = X_r/θ ≠ 0. Therefore, the new set of vectors P_i (i = 1, 2, ..., m; i ≠ r), together with P_k, is a linearly independent set, and the new solution is an extreme point of Δ.

The problem now is to find a way of choosing θ so that θ > 0, all (X_i − θx_ik) ≥ 0, and at least one value X_i − θx_ik = 0. The procedure follows:

1. The solution X = (X_1, X_2, ..., X_m), such that all X_i > 0, Σ_{i=1}^m X_i P_i = P_0, and P_1, P_2, ..., P_m are linearly independent, is assumed known. These m vectors form a basis of W.

2. All the P_j's (j = 1, 2, ..., n) are expressed in terms of the basis vectors. (P_j = Σ_{i=1}^m x_ij P_i.)

3. Assume that for some value of j, say j = k, some of the x_ij are greater than zero. For every x_ik > 0, X_i/x_ik is calculated.

4. θ is assigned the smallest of these positive values. If X_r/x_rk is the smallest of the set, then θ = X_r/x_rk. Then all values (X_i − θx_ik) will be ≥ 0. Since X_i > 0 and θ > 0, if x_ik ≤ 0, then (X_i − θx_ik) > 0.
On the other hand, if x_ik > 0, then, because of the choice of θ as the smallest of those values X_i/x_ik which are positive, every value of X_i/x_ik (for x_ik > 0) is ≥ θ. Hence, if x_ik > 0,

X_i/x_ik ≥ θ,   X_i ≥ θx_ik,   (X_i − θx_ik) ≥ 0.

It will now be assumed that if P_k (the replacing vector) has been chosen such that some x_ik > 0, then θ = X_i/x_ik for only one value of i; that is, θ = X_r/x_rk for only one value of i, namely, i = r. Then a new basis has been uniquely determined, in which P_k replaces P_r in the old basis. The case where X_i − θx_ik = 0 for more than one value of i is the case of "degeneracy" and will be discussed in the next chapter.

5. Then P_0 = Σ_{i=1}^m (X_i − θx_ik) P_i + θP_k provides another extreme point X* of the solution set Δ:

X*_i = X_i − θx_ik   (i = 1, 2, ..., m),
X*_k = θ,
X*_j = 0   (j = m+1, m+2, ..., n; j ≠ k).

Notice that X*_r = 0.

Let z_0 be the value of the functional determined by X, and z*_0 the value determined by X*. Also define the symbol z_j to mean Σ_{i=1}^m x_ij c_i, and note that these values depend upon the solution being used or, more particularly, the set of m linearly independent vectors (the basis) corresponding to the solution. Then

z*_0 = Σ_{i=1}^m (X_i − θx_ik) c_i + θc_k = z_0 − θz_k + θc_k = z_0 + θ(c_k − z_k).

It has been assumed that P_k could be chosen so that some x_ik > 0. It will be shown shortly that if this is not the case, then either the given solution is optimal or there is no optimal solution (the value of the functional is unbounded). When a new extreme point has been found by the method outlined above, it is a "better" solution than the first if, and only if, (c_k − z_k) > 0, where k is the subscript of the new vector introduced into the basis.

It is now advantageous to backtrack to the point where it was assumed that a solution X had been found involving exactly m non-zero components which were coefficients of m linearly independent vectors. These vectors form a basis of W, and hence all of the column vectors may be expressed in terms of them. This is done, and all of the values x_ij (i = 1, 2, ..., m; j = 0, 1, 2, ..., n) are tabulated. Each z_j (= Σ_{i=1}^m x_ij c_i)
is then calculated and recorded, and then z_j − c_j is also calculated and recorded for each value of j. The table of values is examined, and there are three possibilities.

I. For some j, c_j − z_j > 0, and, for this j and every i, x_ij ≤ 0. It should be recalled here that

P_0 = Σ_{i=1}^m (X_i − θx_ij) P_i + θP_j

is a solution for any θ such that all (X_i − θx_ij) ≥ 0 and θ ≥ 0, and that the value of the functional for this solution is z_0 + θ(c_j − z_j). Hence, since all −x_ij ≥ 0 and X_i > 0, any positive value of θ will give a "feasible" solution. The functional has no bound. (Appending the equation Σ_{i=1}^{n+1} X_i = B, where B was discussed in Section E of this chapter, automatically eliminates the possibility of this case.)

II. For all j = 1, 2, ..., n, z_j ≥ c_j; that is, z_j − c_j ≥ 0 for all j. Let X' = (X'_1, ..., X'_n) be any solution, with functional value z'_0. Hence

z'_0 = Σ_{j=1}^n X'_j c_j ≤ Σ_{j=1}^n z_j X'_j = Σ_{j=1}^n X'_j (Σ_{i=1}^m x_ij c_i) = Σ_{i=1}^m (Σ_{j=1}^n X'_j x_ij) c_i.

Now

P_0 = Σ_{j=1}^n X'_j P_j = Σ_{j=1}^n X'_j (Σ_{i=1}^m x_ij P_i) = Σ_{i=1}^m (Σ_{j=1}^n X'_j x_ij) P_i.

Also P_0 = Σ_{i=1}^m X_i P_i. Since P_1, P_2, ..., P_m form a basis of W, the expression of P_0 in terms of them is unique. Therefore, Σ_{j=1}^n X'_j x_ij = X_i. Substituting this in the last expression for the right-hand side of the inequality above yields z'_0 ≤ Σ_{i=1}^m X_i c_i = z_0.

Charnes (after proving the above) states the following optimality test: "Let Σ_{j=1}^p X_j P_j = P_0 be a solution based on p linearly independent vectors. Add m − p more to obtain a basis for W. Express the remaining P_j's in terms of this basis. Then X = (X_1, ..., X_p, 0, ..., 0) is optimal, i.e., yields the finite maximum of f(X), if all z_j − c_j ≥ 0" [1].

III. For some j, c_j − z_j > 0, and, for every such j, some x_ij > 0. This is the case which is dealt with by introducing a new vector P_k into the basis and removing one of the old basis vectors. (See preceding section.) If only one vector is replaced by P_k, that is,
if there is only one vector, say P_r, such that θ = X_r/x_rk, then the new solution will be based on m linearly independent vectors, which also form a basis of W. In this case the value of the functional for the new solution is larger than the value of the functional for the first solution. Since the new solution involves vectors which form a basis of W, the whole process may be repeated. If, at each stage, a solution involving m linearly independent vectors is obtained, the process may be continued. When I or II occurs, the problem is solved. The third possibility cannot recur indefinitely because (1) there are only a finite number of sets of m linearly independent vectors in the set P_1, ..., P_n, and (2) the possibility of cycling or repeating any basis is eliminated, because each solution is assured different from all preceding ones by the fact that each solution determines a value of the functional larger than those determined by all preceding solutions in the procedure. Hence I or II will eventually occur.

A difficulty which arises in practice is that at some stage there will be fewer than m vectors involved in a solution, and hence the other vectors cannot be expressed in terms of them. This is called "degeneracy" and will be discussed fully in Chapter II.

I. STATEMENT OF THE DUALITY THEOREM OF LINEAR PROGRAMMING

One of the fundamental theorems of this relatively new branch of mathematics, linear programming, is known as the "duality theorem." Consider the following linear programming problem: minimize

f = Σ_{j=1}^n c_j x_j,   where Σ_{j=1}^n a_ij x_j ≥ b_i   (i = 1, 2, ..., m)

and x_j ≥ 0 (j = 1, 2, ..., n). Its "dual" problem then is this: maximize

g = Σ_{i=1}^m b_i w_i,   where Σ_{i=1}^m a_ij w_i ≤ c_j   (j = 1, 2, ..., n)

and w_i ≥ 0 (i = 1, 2, ..., m). The duality theorem states that if there is a finite solution to either problem, then there is a finite solution to the other, and that the minimum of f is equal to the maximum of g. The proof which will be presented here is essentially the proof given by Charnes in An Introduction to Linear Programming.
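The "weak" half of the theorem, min f ≥ max g, can be checked numerically on a small made-up pair of problems (all data invented for illustration):

```python
# Invented primal: minimize c'x with A x >= b, x >= 0.
# Its dual: maximize w'b with w'A <= c', w >= 0.
A = [[2, 1],
     [1, 3]]
b = [4, 6]
c = [3, 5]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [2.0, 2.0]     # feasible for the primal: A x = [6, 8] >= b
w = [1.0, 1.0]     # feasible for the dual:  w'A = [3, 4] <= c

assert all(dot(row, x) >= bi for row, bi in zip(A, b))
assert all(dot([A[i][j] for i in range(2)], w) <= c[j] for j in range(2))

# Weak duality: c'x >= w'A x >= w'b for every such feasible pair.
print(dot(c, x), dot(b, w))   # 16.0 and 10.0
```

Any feasible primal value thus bounds every feasible dual value from above; the theorem's content is that the two bounds meet.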
It will be necessary before presenting the proof to clarify some of the matrix notation which will be used in it.

J. EXPLANATION OF MATRIX NOTATION

Small letters without subscripts will be used to designate column vectors. For example,

x = | x_1 |
    | x_2 |
    | ... |
    | x_n |

The same small letter with a prime will indicate the corresponding row vector; that is, x' = (x_1, x_2, ..., x_n). More generally, a prime on any symbol for a matrix indicates the transpose of the matrix. Capital letters without subscripts will denote matrices with more than one column and more than one row. I will be used to mean the square matrix with entries of unity on the main diagonal and entries of zero everywhere else:

I = | 1 0 0 |
    | 0 1 0 |
    | 0 0 1 |

or a similar matrix of another order, so long as it is square. If A and B are two matrices with the same number of rows, [A, B] means the matrix obtained by writing A to the left of B; that is, if

A = | a_11 a_12 a_13 a_14 |      B = | b_11 b_12 |
    | a_21 a_22 a_23 a_24 |,         | b_21 b_22 |,
    | a_31 a_32 a_33 a_34 |          | b_31 b_32 |

then

[A, B] = | a_11 a_12 a_13 a_14 b_11 b_12 |
         | a_21 a_22 a_23 a_24 b_21 b_22 |
         | a_31 a_32 a_33 a_34 b_31 b_32 |

Multiplication of matrices is not commutative, and the product AB is defined only when the number of columns in A is equal to the number of rows in B. When this is the case, the element in the ith row and jth column of the product matrix is the sum of the products of corresponding elements in the ith row of A and the jth column of B. If AB = C, and (a_i1, a_i2, ..., a_in) is the ith row of A and (b_1j, b_2j, ..., b_nj)' is the jth column of B, then

c_ij = Σ_{k=1}^n a_ik b_kj.

Obviously the number of rows in A is the same as the number of rows in C, and the number of columns in B is the same as the number of columns in C. If X is any square matrix whose rows (or columns) are linearly independent, then X^{-1} will mean the square matrix of the same order such that X X^{-1} = I = X^{-1} X. In matrix theory it is proved that if the rows of a square matrix are linearly independent, the columns are linearly independent.
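The two operations just defined can be sketched in a few lines (the matrices are invented for illustration):

```python
# [A, B]: write A to the left of B (same number of rows).
A = [[1, 2],
     [3, 4]]
B = [[5],
     [6]]
AB_side = [ra + rb for ra, rb in zip(A, B)]
print(AB_side)   # [[1, 2, 5], [3, 4, 6]]

# Product AB: defined since columns of A (2) = rows of B (2);
# the (i, j) entry is c_ij = sum_k a_ik * b_kj.
prod = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(len(B[0]))]
        for i in range(len(A))]
print(prod)      # [[17], [39]]
```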
It is also proved that only matrices which are square and have linearly independent rows and columns have inverses. The inverse of a matrix is unique. In the proof which follows, A will mean the same thing which it has meant throughout the discussion. An m × n matrix means a matrix having m rows and n columns. To say x ≥ 0 means that each component x_j ≥ 0.

Now the linear programming problem may be stated as: minimize f = c'x, where x and c are n × 1 and x is subject to Ax ≥ b and x ≥ 0. A is m × n and b is m × 1. Its dual may be stated as: maximize g = w'b, where w is m × 1 and w is subject to w'A ≤ c' and w ≥ 0.

K. PROOF OF THE DUALITY THEOREM OF LINEAR PROGRAMMING

The first step in proving the theorem is to convert the set of inequalities Ax ≥ b to the equivalent system of equations: Ax − Ia = b. I is the m × m unit matrix; a is m × 1; a ≥ 0. Now if x̄ = (x_1, ..., x_n, a_1, ..., a_m), then [A, −I] x̄ = b, and [A, −I] has (n + m) columns. In the simplex procedure one finds X ≥ 0 such that Σ_{j=1}^q P_j X_j = P_0, where q = m + n. The column vectors in −I have been denoted by P_{n+1}, P_{n+2}, ..., P_{n+m}. A solution involving precisely m of the P_j's can be obtained by the simplex method. This solution will involve the m column vectors P_{q_1}, P_{q_2}, ..., P_{q_m}; that is,

P_0 = Σ_{i=1}^m P_{q_i} X_{q_i}.

It is assumed that these column vectors are linearly independent. Hence every column vector may be expressed in terms of them. The values x_ij in the following expressions are calculated:

P_j = Σ_{i=1}^m x_ij P_{q_i}   (j = 1, 2, ..., n + m).

Then the values z_j = Σ_{i=1}^m c_{q_i} x_ij are calculated. Now

[P_1, ..., P_n, P_{n+1}, ..., P_{n+m}] = [A, −I] = B [X̄, Ȳ],

where X̄ is the m × n matrix whose (i, j) element (ith row and jth column) is x_ij (i = 1, 2, ..., m; j = 1, 2, ..., n), Ȳ is the m × m matrix whose (i, j) element is x_{i,n+j} (i = 1, 2, ..., m; j = 1, 2, ..., m), and B is the matrix whose columns are P_{q_1}, P_{q_2}, ..., P_{q_m}. These are the basis vectors. Then

[P_{n+1}, P_{n+2}, ..., P_{n+m}] = −I = BȲ.

Therefore, BȲ = −I, and Ȳ = −B^{-1}.
Also

z_j = Σ_{i=1}^m c_{q_i} x_ij   (j = 1, 2, ..., n),
z_{n+j} = Σ_{i=1}^m c_{q_i} x_{i,n+j}   (j = 1, 2, ..., m),

and c_{n+j} = 0 (j = 1, 2, ..., m). Let c* = (c_{q_1}, c_{q_2}, ..., c_{q_m})'. Let

δ' = (z_1 − c_1, z_2 − c_2, ..., z_n − c_n) = c*'X̄ − c',

and

−w' = (z_{n+1}, z_{n+2}, ..., z_{n+m}) = c*'Ȳ = −c*'B^{-1},   so that w' = c*'B^{-1}.

Let x*' = (X_{q_1}, X_{q_2}, ..., X_{q_m}) and P_0 = b. Then

b = Σ_{i=1}^m P_{q_i} X_{q_i} = Bx*.

For any feasible solution x of the first problem (not necessarily optimal) and any feasible solution w of its dual problem (not necessarily optimal), the following things are true. Since Ax ≥ b and w ≥ 0, then w'Ax ≥ w'b. However, w'A ≤ c', so c'x ≥ w'Ax ≥ w'b (since x ≥ 0). This is true of all solutions x to the first problem and all solutions w to its dual. Therefore, min c'x ≥ max w'b. If, for some x and some w, w'b = c'x, then for this x and this w both c'x and w'b are optimal (minimal for c'x and maximal for w'b). This is true because for these values

min c'x ≤ c'x = w'b ≤ max w'b ≤ min c'x,

and the equality sign holds throughout. Hence min c'x = max w'b.

If it can be shown that, in an optimal tableau for the first problem, w = −(z_{n+1}, ..., z_{n+m}) satisfies the conditions of the dual problem (w ≥ 0 and w'A ≤ c') and that w'b = c'x, then it will have been proved that if a finite solution to the first problem exists, then a finite solution to the dual problem exists and min c'x = max w'b. It will then follow that if a solution to the dual problem exists and is finite, then a solution to the original problem exists and is finite, and also min c'x = max w'b. In order to understand this, one needs only to rewrite the two problems as follows:

(1) In place of the first problem write the equivalent one, "maximize (−c'x) where −Ax ≤ −b and x ≥ 0."

(2) In place of the dual problem write "minimize (−w'b) where −w'A ≥ −c' and w ≥ 0."

It should also be noted that min (−w'b) = −max w'b and max (−c'x) = −min c'x. Hence if the above conditions can be shown to hold, the theorem will be proved.
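The identities w' = c*'B^{-1} and w'b = c*'x* can be checked on a small made-up problem. Here the primal (invented data) is: minimize 3x_1 + 5x_2 subject to 2x_1 + x_2 ≥ 4, x_1 + 3x_2 ≥ 6, x ≥ 0. Both constraints are tight at the optimal vertex x* = (1.2, 1.6), so the optimal basis consists of the two structural columns:

```python
# Optimal basis B (the two tight structural columns), its costs, and b.
B   = [[2.0, 1.0],
       [1.0, 3.0]]        # basis columns P_1, P_2
c_B = [3.0, 5.0]          # their costs c_{q_1}, c_{q_2}
b   = [4.0, 6.0]

# Invert the 2x2 basis directly.
det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
B_inv = [[ B[1][1]/det, -B[0][1]/det],
         [-B[1][0]/det,  B[0][0]/det]]

# w' = c*' B^{-1} (row vector times matrix).
w = [c_B[0]*B_inv[0][j] + c_B[1]*B_inv[1][j] for j in range(2)]

print(w)                                    # approximately [0.8, 1.4]
print(sum(wi*bi for wi, bi in zip(w, b)))   # w'b, approximately 11.6
```

One can verify by hand that w ≈ (0.8, 1.4) is dual feasible (w'A = (3, 5) ≤ c') and that w'b = 11.6 equals c'x* at x* = (1.2, 1.6), so the primal minimum and dual maximum coincide, as the theorem asserts.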
In an optimal tableau for the first problem, z_j − c_j ≤ 0 (j = 1, 2, ..., n + m). (In the earlier text the optimal tableau was characterized by the fact that z_j − c_j ≥ 0 for all values of j. The discussion there was for maximizing a functional. It is obvious that if the functional is to be minimized, then z_j − c_j ≤ 0 for all values of j.) Therefore, δ' ≤ 0 and −w ≤ 0, so that w ≥ 0. Now

w'b = (c*'B^{-1})b = (c*'B^{-1})(Bx*) = c*'x* = min c'x.

Also δ' = c*'X̄ − c' and −δ' ≥ 0, so that (since A = BX̄ and hence B^{-1}A = X̄)

c' ≥ c*'X̄ = c*'B^{-1}A = w'A.

This completes the proof.

To reduce a linear programming problem to the problem of finding any solution to a larger set of inequalities, write the problem in the form: minimize f(x) = c'x where Ax ≥ b and x ≥ 0. Then write down its dual and the added restriction that w'b ≥ c'x. From this one may conclude that any method of solving inequalities may be applied to linear programming problems. However, such methods are usually more cumbersome than the methods developed specifically for the linear programming problem.

L. DUALITY IN OTHER FIELDS OF MATHEMATICS

The theorem just proved is called the duality theorem of linear programming because of its similarity to the duality principle in other fields of mathematics. One classic illustration is the duality principle of projective geometry. According to this principle, "all the theorems of projective geometry occur in pairs, each similar to the other, and, so to speak, identical in structure" [2].
A theorem may be constructed from its "dual theorem" by interchanging "dual elements." For example, if point and line are dual elements, then interchanging them in one theorem will give its dual theorem. Pascal's Theorem states that "if the vertices of a hexagon lie alternately on two straight lines, the points where opposite sides meet are collinear." (See Reference [2], page 191.) Its dual, Brianchon's Theorem, states that "if the sides of a hexagon pass alternately through two points, the lines joining opposite vertices are concurrent" [2].

The algebra of sets also has a duality principle. (See Reference [2], pages 108-112.) If, in any one of the laws of the algebra of sets, the expressions "is a subset of," "the empty set," and "the union of" are interchanged respectively with the expressions "contains as a subset," "the universal set," and "the intersection of," then another one of the laws results.

CHAPTER II

TWO METHODS OF SOLUTION

A. INTRODUCTION

In Chapter I a theoretical background was developed for attacking the linear programming problem. It will be the purpose of this chapter to go into two of the present methods of solution in some detail. The first method, which was introduced in Chapter I, is the simplex method. Its discussion will be followed by a discussion of the "dual" method of C. E. Lemke [3].

B. FINDING AN INITIAL FEASIBLE EXTREME POINT SOLUTION

In Chapter I the theory behind the simplex method was presented without going into the details of computation. It was assumed that an initial feasible extreme point solution to the linear programming problem was readily available. Recall that such a solution X = (X_1, X_2, ..., X_n) involves precisely m non-zero components, which are the coefficients of linearly independent vectors. (See Chapter I, Section G.) In many problems the set of constraints is stated in terms of inequalities of the form

    sum_{j=1}^{n} a_{ij} X_j <= b_i    (i = 1, 2, ..., m).
If all b_i >= 0, then, denoting the slack columns by P_{n+1}, P_{n+2}, ..., P_{n+m}, the following is true: P_0 = sum_{i=1}^{m} b_i P_{n+i}. Here P_0 is the m-dimensional column vector whose ith component is b_i; that is, the corresponding row vector is P_0' = (b_1, b_2, ..., b_m). This is an initial solution, and all column vectors may easily be expressed in terms of the vectors P_{n+1}, P_{n+2}, ..., P_{n+m}.

If the constraints are not so conveniently expressed, it is still possible to get an initial solution without having to determine a set of m linearly independent vectors from the set P_1, P_2, ..., P_n (where P_j is the vector whose components are a_{1j}, ..., a_{mj}). Suppose that the set of constraints is expressed as equations; that is,

    sum_{j=1}^{n} a_{ij} X_j = b_i    (i = 1, 2, ..., m).

A related problem is now considered. A new variable X_{n+i} is attached to the ith equation. All the coefficients of the new variables are either 1 or -1: the coefficient of the variable attached to the ith equation is +1 if b_i >= 0 and -1 if b_i < 0. The set of equations may then be expressed

    sum_{j=1}^{n} X_j P_j + sum_{i=1}^{m} X_{n+i} P_{n+i} = P_0,

where the P_{n+i} are the unit vectors of W, some of them perhaps negative unit vectors. Obviously, taking X_j = 0 (j = 1, ..., n) and X_{n+i} = |b_i| satisfies these equations. Hence one is provided with a solution in terms of m linearly independent vectors (a "basic" solution) to this related problem.

The question arises of the exact relationship between the two problems. Any solution to the original problem automatically provides a solution to the related problem (with X_{n+1} = X_{n+2} = ... = X_{n+m} = 0). Assume that, in some manner, it is possible to construct a new functional, f', for the related problem such that, for any solution involving only the variables of the original problem, f' = f (the functional of the original problem) and, for any solution involving at least one of the "attached" variables, f' is less than the minimum value of f (if f is to be maximized) or greater than the maximum value of f (if f is to be minimized). Maximum and minimum values of f exist, since by appending sum_{j=1}^{n} X_j <= B to the original problem one can assure boundedness of the solution set. (See Chapter I, Section E.)
Then, if any solution to the original problem exists, the optimal solution to the original problem will be the optimal solution to the related problem. If f = sum_{i=1}^{n} c_i X_i, then f' will satisfy the above conditions if

    f' = f - M sum_{i=n+1}^{n+m} X_i,

where M is a value so large that the appearance in the functional of any positive attached variable makes f' smaller than the minimum value of f. (Obviously, this is for maximizing f; if f is to be minimized, then f' = sum_{i=1}^{n} c_i X_i + M sum_{i=n+1}^{n+m} X_i.) There is no question of the existence of values of M large enough, because (1) the solution set of the related problem can be made bounded (by the addition of sum_{i=1}^{n+m} X_i <= B*) and hence will have a finite number of extreme points (some of them possibly involving B*), and (2) the expression sum_{i=n+1}^{n+m} X_i (this is not the functional to be maximized or minimized but is the multiplier of M in f') has a finite minimum over all extreme points involving at least one positive X_i (i > n). M then has some value such that M (sum_{i=n+1}^{n+m} X_i)_min is larger than

    (sum_{i=1}^{n} c_i X_i)_max - (sum_{i=1}^{n} c_i X_i)_min.

There is always a solution to this related problem; and if a solution exists to the original problem, the optimal solution to the original problem will be the optimal solution to the related problem.

It should be noted that, by the methods discussed in the preceding paragraph, it is possible to find a basis of W and a solution in terms of it. This solution does not necessarily involve precisely m non-zero coordinates. When it does not, it is a "degenerate" solution, and this difficulty will be resolved now.

C. RESOLUTION OF DEGENERACY

The next difficulty to be resolved in the simplex method is that of "degeneracy." In using the simplex method one starts with a solution in terms of a set of m linearly independent vectors in W, that is, in terms of a basis of W. The process is carried out by replacing one vector P_r of the set by a vector not in the set and obtaining a new solution in terms of the new set of vectors.
This is done as follows. If X = (X_1, X_2, ..., X_n) is the original solution, with X_i > 0 (i = 1, 2, ..., m) and X_i = 0 (i = m + 1, ..., n), then sum_{i=1}^{m} X_i P_i = P_0, and P_1, ..., P_m are linearly independent vectors forming a basis of W, that is, of m-dimensional space. Hence any m-dimensional vector may be expressed in terms of them; in particular, P_k = sum_{i=1}^{m} x_ik P_i. Therefore

    P_0 = sum_{i=1}^{m} X_i P_i - G sum_{i=1}^{m} x_ik P_i + G P_k
        = sum_{i=1}^{m} (X_i - G x_ik) P_i + G P_k.

(See Chapter I, Section G, Equation 1-1.) If G >= 0 and all X_i - G x_ik >= 0, then this provides a solution. If G is taken to be the smallest positive value of X_i/x_ik, and there is only one value of X_i/x_ik which is equal to G, then the new solution will have m positive coordinates; they will be the coefficients of linearly independent vectors, and the process may be repeated. (Recall that P_k is chosen to increase the functional if possible. If z_0 is the value of the functional determined by the first solution, then the value determined by the solution obtained when P_k is brought into the basis is z_0' = z_0 + G(c_k - z_k), where z_k = sum_{i=1}^{m} x_ik c_i. See Chapter I, Section G, Equation 1-2.)

Two assumptions have been made without being justified: (1) that the original solution has precisely m non-zero coordinates and (2) that at each stage there exists only one smallest value X_i/x_ik. Again a related problem is considered. The related problem will satisfy the above assumptions and will yield the solutions to the original problem. P_0 in the original problem is replaced by P_0 + sum_{j=1}^{n} e^j P_j, where e is unspecified but positive. The rest of the problem remains unchanged. Now if X = (X_1, X_2, ..., X_m, 0, ..., 0) is a solution to the original problem in terms of m linearly independent vectors P_1, ..., P_m, then

    sum_{i=1}^{m} (X_i + sum_{j=1}^{n} e^j x_ij) P_i = P_0 + sum_{j=1}^{n} e^j P_j.

This is true because P_j = sum_{i=1}^{m} x_ij P_i for each j, so that

    sum_{j=1}^{n} e^j P_j = sum_{j=1}^{n} e^j sum_{i=1}^{m} x_ij P_i = sum_{i=1}^{m} (sum_{j=1}^{n} e^j x_ij) P_i,

and also P_0 = sum_{i=1}^{m} X_i P_i.
Now, if f_e(X) denotes the value of the functional for the corresponding solution to the related problem, where X is a solution to the original problem, then

    f_e(X) = sum_{i=1}^{m} c_i X_i + sum_{i=1}^{m} c_i (sum_{j=1}^{n} e^j x_ij)
           = sum_{i=1}^{m} c_i X_i + sum_{j=1}^{n} e^j (sum_{i=1}^{m} c_i x_ij)
           = z + sum_{j=1}^{n} e^j z_j,

where z is the value of the functional determined by X and z_j = sum_{i=1}^{m} c_i x_ij.

Consider now polynomials in e. Any polynomial in e is dominated by its lowest power if e is sufficiently small. Consider the two polynomials

    a(e) = a_1 e + a_2 e^2 + ... + a_q e^q,
    b(e) = b_1 e + b_2 e^2 + ... + b_q e^q,

where q is some finite number. Then

    a(e) - b(e) = (a_1 - b_1)e + (a_2 - b_2)e^2 + ... + (a_q - b_q)e^q.

Let (a_s - b_s) be the first non-zero coefficient in this last line. Then

    a(e) - b(e) = (a_s - b_s)(e^s)(1 + c_{s+1} e + c_{s+2} e^2 + ... + c_q e^{q-s}),

where c_i = (a_i - b_i)/(a_s - b_s). Now let

    C(e) = 1 + c_{s+1} e + c_{s+2} e^2 + ... + c_q e^{q-s},

so that a(e) - b(e) = (a_s - b_s)(e^s) C(e). The limit of C(e) as e approaches zero from the positive side is unity. Hence there exists some positive number e_0 such that if 0 < e < e_0, then C(e) > 0. Therefore, for these values of e, a(e) > b(e) if and only if a_s - b_s > 0. This result will be used later.

Now the first assumption which was made and not justified was that the first solution had exactly m non-zero coordinates. It was shown in the preceding section of this chapter how it is possible to get a basis of W, but there was no guarantee that, based on these vectors, all X_i > 0. In the corresponding e-problem, however, each component in the solution will be greater than zero. Consider X = (X_1, ..., X_m, 0, ..., 0), the original solution. Then the corresponding solution to the e-problem is X' = (X_1', X_2', ..., X_m', 0, ..., 0), where

    X_i' = X_i + e^i + sum_{j=m+1}^{n} e^j x_ij    (i = 1, 2, ..., m).
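The positivity of X_i' for small e rests on the dominance result established above. A quick numeric check (the coefficients are my own illustrative choice, not from the text) shows a difference polynomial with first non-zero coefficient positive, which is negative for a moderate e but positive once e is small:

```python
# Evaluate a(e) - b(e) = 2 e^3 - 500 e^4: the first non-zero coefficient
# (of e^3) is positive, so the polynomial is positive for small e > 0.

def poly(coeffs, e):
    # coeffs[i] multiplies e**(i+1), matching a_1 e + a_2 e^2 + ...
    return sum(c * e ** (i + 1) for i, c in enumerate(coeffs))

diff = [0, 0, 2, -500]          # coefficients of the difference a(e) - b(e)
print(poly(diff, 0.5) > 0)      # False: e = 0.5 is not yet small enough
print(poly(diff, 0.001) > 0)    # True: the lowest power e^3 dominates
```

The threshold e_0 of the text is simply the point below which the sign of the lowest-order term takes over.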
The expression for X_i' is true because P_i = (0)P_1 + ... + (1)P_i + ... + (0)P_m; hence x_ij = 0 (j = 1, 2, ..., m; j != i) and x_ii = 1, so that sum_{j=1}^{m} e^j x_ij = e^i. Since e^i is the smallest power of e appearing in e^i + sum_{j=m+1}^{n} e^j x_ij, this quantity is positive for e sufficiently small (since e is positive). Also all X_i >= 0; therefore all

    X_i' = X_i + e^i + sum_{j=m+1}^{n} e^j x_ij > 0    (i = 1, 2, ..., m).

Hence the first solution to the e-problem has precisely m non-zero components.

The next assumption was that at each stage there exists only one smallest value X_i/x_ik, where x_ik > 0. In the e-problem, (X_i + sum_{j=1}^{n} e^j x_ij)/x_ik corresponds to X_i/x_ik in the original problem. If X_r/x_rk and X_g/x_gk are two distinct values of X_i/x_ik, then obviously there exist values of e small enough such that

    (X_r + sum_{j=1}^{n} e^j x_rj)/x_rk != (X_g + sum_{j=1}^{n} e^j x_gj)/x_gk.

If X_r/x_rk = X_g/x_gk, then there exist only a finite number of values of e such that

    (X_r + sum_{j=1}^{n} e^j x_rj)/x_rk - (X_g + sum_{j=1}^{n} e^j x_gj)/x_gk = 0,

for a polynomial equation in e has only a finite number of solutions unless all its coefficients are identically zero. That is not the case here, because the coefficients of e^r and e^g in the two quotients are 1/x_rk and 1/x_gk respectively, so the difference is not identically zero. Hence, if there is a set of values X_{r_i}/x_{r_i k} (i = 1, 2, ..., u) which are all equal, there exist values of e smaller than any preassigned value such that the set of quantities (X_{r_i} + sum_j e^j x_{r_i j})/x_{r_i k} has a unique minimum. Therefore, in the e-problem, the simplex process replaces only one vector of the old basis by the new vector being brought into the basis; at every stage the solution will involve precisely m non-zero components.

An optimal solution to the e-problem automatically provides an optimal solution to the original problem. Assume that this is not true; that is, that z < z', where z is the value of the functional determined by the solution X corresponding to the optimal solution of the e-problem and z' is the value determined by some other solution X* of the original problem. X* is necessarily an extreme point and hence may be expressed in terms of m linearly independent vectors.
If all z̄_j are computed from the basis of X*, then f_e(X*) = z' + sum_{j=1}^{n} e^j z̄_j. Since f_e(X) = z + sum_{j=1}^{n} e^j z_j is the optimal value of f_e,

    z + sum_{j=1}^{n} e^j z_j >= z' + sum_{j=1}^{n} e^j z̄_j.

If e is sufficiently small, this requires z >= z'; but z < z', which is a contradiction. Therefore, an optimal solution to the e-problem automatically provides an optimal solution to the original problem, and one may deal with this non-degenerate e-problem. Actually, as will be demonstrated, it is never necessary to specify the value of e.

D. DEVELOPMENT OF THE COMPUTATIONAL PROCEDURE

The first step in the actual simplex procedure is to obtain a basis of W and a solution in terms of it to the original problem. Then, in terms of this basis, the x_ij for all values of i and j are calculated, as well as all values of z_j and z_j - c_j. These calculations are tabulated in the first "tableau" as follows:

                          Column Vectors
    c_i  Basis       P_0    P_1  ...  P_j  ...  P_k  ...  P_n
         Elements
    c_1  P_1         X_1    x_11 ...  x_1j ...  x_1k ...  x_1n
     .    .           .       .        .         .         .
    c_r  P_r         X_r    x_r1 ...  x_rj ...  x_rk ...  x_rn
     .    .           .       .        .         .         .
    c_m  P_m         X_m    x_m1 ...  x_mj ...  x_mk ...  x_mn
         z_j         f(X)   z_1  ...  z_j  ...  z_k  ...  z_n
         Net         f(X)   z_1 - c_1 ... z_j - c_j ... z_n - c_n
         Differences

In the section headed "Column Vectors" each column represents a vector, and its elements are its coefficients in terms of the basis vectors. The element in the ith row is the coefficient of the ith basis vector, as indicated to the left of each row in the column headed "Basis Elements."

The next step is to examine the row of net differences z_j - c_j. If case I or case II is found to be true, the problem is solved. (See Chapter I, Section H.)
If case III is true, a new "tableau" must be calculated. A new vector P_k must be chosen to enter the new basis and replace some vector P_r. Since z_0' = z_0 + G(c_k - z_k) and a maximum value of the functional is being sought, the logical choice for P_k is the vector whose (c_k - z_k) is largest, hence whose net difference (z_k - c_k) is most negative.

The next step is to determine P_r. Each X_i is divided by its corresponding x_ik (for x_ik > 0). If this set of values X_i/x_ik has a unique minimum, then the vector corresponding to this value is P_r. Suppose, however, that there are several values of X_i/x_ik (such that x_ik > 0) which are equal and smaller than all the others. As the reader has already seen, their corresponding values in the e-problem are not equal. If each element in the ith row is divided by x_ik, then these values are the coefficients of the polynomial (X_i + sum_j e^j x_ij)/x_ik in ascending powers of e. It has been assumed that the values X_i/x_ik are equal; so the second elements in each row, that is, the x_i1/x_ik, are compared. If they have a unique smallest value, its vector is P_r. If not, the vectors corresponding to all except these smallest values are eliminated, the third elements are compared, and the process is continued until a unique smallest value is obtained. Such a value must exist within the first m elements. (When corresponding elements are compared, the vectors corresponding to any except the smallest values may be eliminated.) But the matrix of the first m columns in the tableau is the m x m unit matrix. Hence, if P_1 has one of the minimum ratios, it will be eliminated the first time, because x_11/x_1k = 1/x_1k > 0 while x_i1/x_ik = 0 (i = 2, 3, ...). Similarly, if P_2 has a minimum ratio, it will be eliminated on the second comparison. In this first tableau, then, among the rows attaining the minimum ratio, the one with the largest value of i determines P_r.

Now P_k and P_r have been determined, and a new tableau based on the new basis must be calculated.
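The tie-breaking comparison just described can be sketched in a few lines (the code and the sample tableau are my own illustration, not the thesis's data): each candidate row is keyed by its ratio followed by its elements divided by x_ik, and the rows are compared lexicographically.

```python
# Lexicographic pivot-row selection: among rows with x_ik > 0, order by
# (X_i/x_ik, x_i1/x_ik, x_i2/x_ik, ...) and take the smallest.

def pivot_row(values, T, k):
    cand = [i for i in range(len(T)) if T[i][k] > 0]
    return min(cand,
               key=lambda i: [values[i] / T[i][k]] + [v / T[i][k] for v in T[i]])

values = [4, 2, 2]              # rows 1 and 2 tie with ratio 2/1 = 2
T = [[1, 0, 0, 1],              # row 0: ratio 4, never the minimum
     [0, 1, 0, 1],              # row 1: unit entry in its own column
     [0, 0, 1, 1]]              # row 2: zeros earlier, so lexicographically smaller
print(pivot_row(values, T, 3))  # 2: the tie is broken in favor of row 2
```

With the identity columns in the first tableau, this reproduces the rule stated above: among tied rows, the one whose unit entry comes latest (largest i) survives the comparisons.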
If x_ij' denotes the coefficient of P_j in terms of the new basis (in which P_k replaces P_r), the new coefficients may be found as follows. In terms of the old basis,

    (2-1)    P_j = sum_{i != r} x_ij P_i + x_rj P_r    (j = 1, 2, ..., n),

and in particular P_k = sum_{i != r} x_ik P_i + x_rk P_r, so that

    (2-2)    P_r = (1/x_rk) P_k - sum_{i != r} (x_ik/x_rk) P_i.

Substituting this expression for P_r in the expression for P_j gives

    (2-3)    P_j = sum_{i != r} (x_ij - (x_rj/x_rk) x_ik) P_i + (x_rj/x_rk) P_k    (j = 1, 2, ..., n).

E. THE SIMPLEX ALGORITHM

Since the expression for P_j in terms of the new basis is unique, the following algorithm is provided for calculating the new tableau:

    (2-4)    x_ij' = x_ij - (x_rj/x_rk) x_ik    (i = 1, 2, ..., m; i != r; j = 1, 2, ..., n);

    (2-5)    x_kj' = x_rj/x_rk    (j = 1, 2, ..., n).

The same relations hold in the e-problem. Writing the solution components of the e-problem in the form X_i + sum_j e^j x_ij, one finds

    P_0 + sum_{j=1}^{n} e^j P_j = sum_{i != r} (X_i' + sum_{j=1}^{n} e^j x_ij') P_i + (X_k' + sum_{j=1}^{n} e^j x_kj') P_k,

where X_k' = X_r/x_rk and X_i' = X_i - (X_r/x_rk) x_ik for i != r. Hence, if X_i is denoted by x_i0, the same algorithm may be used for computing it as for computing the other x_ij. Also, as may easily be proved, this same algorithm may be used to compute the new net differences:

    (z_j - c_j)' = (z_j - c_j) - (x_rj/x_rk)(z_k - c_k).

When the new tableau is computed, it is again examined to see whether case I, case II, or case III applies. If it is case III, the process is repeated. Since there are only a finite number of sets of m linearly independent vectors, and since, in the e-problem, each solution provides a value of the functional larger than the values of all preceding solutions, there is no possibility of returning to the same basis and "cycling"; hence case I or case II will eventually be reached. Case I will be ruled out by appending the equation sum_{i=1}^{n+1} X_i = B. If the problem has no maximum, then at some stage some X_i other than X_{n+1} will involve B, and the problem will be solved. (See Chapter I, Section E.)
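The entering rule of Section D and the update rules (2-4) and (2-5) can be collected into a short program. The following is my own sketch, not the thesis's tableau layout: it uses exact rational arithmetic and assumes a known basic feasible solution, and it omits the degeneracy and unboundedness safeguards, which the text treats separately. Run on the example of Section G, it reproduces the optimal solution of Tableau III.

```python
from fractions import Fraction

# Tableau simplex for "maximize c'X subject to AX = P_0, X >= 0",
# starting from the basic feasible solution given by `basis`.

def simplex_max(A, p0, c, basis):
    m, n = len(A), len(A[0])
    T = [[Fraction(v) for v in A[i]] + [Fraction(p0[i])] for i in range(m)]
    while True:
        # net differences z_j - c_j
        z_c = [sum(c[basis[i]] * T[i][j] for i in range(m)) - c[j]
               for j in range(n)]
        k = min(range(n), key=lambda j: z_c[j])     # entering vector P_k
        if z_c[k] >= 0:                             # case I or II: optimal
            X = [Fraction(0)] * n
            for i in range(m):
                X[basis[i]] = T[i][n]
            return X, sum(c[j] * X[j] for j in range(n))
        # leaving vector P_r: smallest ratio X_i / x_ik over x_ik > 0
        r = min((i for i in range(m) if T[i][k] > 0),
                key=lambda i: T[i][n] / T[i][k])
        piv = T[r][k]
        T[r] = [v / piv for v in T[r]]              # rule (2-5)
        for i in range(m):
            if i != r:
                T[i] = [T[i][j] - T[i][k] * T[r][j] # rule (2-4)
                        for j in range(n + 1)]
        basis[r] = k

# The example of Section G: maximize X_6 + 2 X_7 after adding slacks X_1..X_5
A = [[1, 0, 0, 0, 0,  -1, -3],
     [0, 1, 0, 0, 0,   1,  3],
     [0, 0, 1, 0, 0,   1, -1],
     [0, 0, 0, 1, 0,   1,  1],
     [0, 0, 0, 0, 1, -18, -2]]
X, z = simplex_max(A, [6, 15, 2, 7, 27], [0, 0, 0, 0, 0, 1, 2], [0, 1, 2, 3, 4])
print(z, X[5], X[6])   # 11 3 4, agreeing with Tableau III
```

The two pivots performed by this program are exactly the passages from Tableau I to II (P_7 replaces P_2) and from Tableau II to III (P_6 replaces P_4).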
F. DETERMINING ALL OPTIMAL SOLUTIONS

The problem now arises of determining all optimal solutions when one optimal solution has been found. If the solution set is bounded, the set of optimal solutions is the set of all extreme points which yield this optimal value of the functional, together with the convex set determined by them. Denote by X = (X_1, ..., X_m, 0, ..., 0) the optimal solution reached through the simplex method, so that

    X_1 P_1 + X_2 P_2 + ... + X_m P_m = P_0,

and let

    G_k P_k + G_q P_q + ... + G_p P_p = P_0

denote any other solution. Now each of P_k, P_q, ..., P_p may be written P_j = sum_{i=1}^{m} x_ij P_i. Therefore

    G_k sum_{i=1}^{m} x_ik P_i + G_q sum_{i=1}^{m} x_iq P_i + ... + G_p sum_{i=1}^{m} x_ip P_i = P_0.

However, the expression for P_0 in terms of P_1, P_2, ..., P_m is unique. Hence

    G_k x_1k + G_q x_1q + ... + G_p x_1p = X_1,
    G_k x_2k + G_q x_2q + ... + G_p x_2p = X_2,
    ...
    G_k x_mk + G_q x_mq + ... + G_p x_mp = X_m.

Multiplying the ith equation by c_i and adding all the equations gives

    G_k z_k + G_q z_q + ... + G_p z_p = c_1 X_1 + ... + c_m X_m = z_0,

and hence, subtracting the value G_k c_k + G_q c_q + ... + G_p c_p of the functional for the other solution,

    (2-6)    z_0 - (G_k c_k + ... + G_p c_p) = G_k (z_k - c_k) + G_q (z_q - c_q) + ... + G_p (z_p - c_p).

Now all G_j >= 0 and, in an optimal tableau, all z_j - c_j >= 0. Hence the other solution yields the optimal value z_0 if and only if G_j (z_j - c_j) = 0 for each j appearing in it; that is, other optimal solutions can involve only vectors whose net differences are zero.

G. AN EXAMPLE SOLVED BY THE SIMPLEX METHOD

Consider the problem of finding x >= 0 and y >= 0 satisfying

     -y - 3x <=  6,
      y + 3x <= 15,
      y -  x <=  2,
      y +  x <=  7,
    -18y - 2x <= 27,

and such that z = y + 2x is a maximum. The problem will first be solved by the simplex method, and then it will be solved geometrically. The first step is to convert the inequalities to equations by adding a new non-negative variable to each inequality. These new variables will be denoted by X_1, X_2, X_3, X_4, X_5, and y and x will henceforth be denoted by X_6 and X_7 respectively. The equations then are:

    X_1 -   X_6 - 3X_7 =  6
    X_2 +   X_6 + 3X_7 = 15
    X_3 +   X_6 -  X_7 =  2
    X_4 +   X_6 +  X_7 =  7
    X_5 - 18X_6 - 2X_7 = 27.

The matrix of coefficients then is:

    P_1 P_2 P_3 P_4 P_5   P_6  P_7 | P_0
     1   0   0   0   0    -1   -3  |  6
     0   1   0   0   0     1    3  | 15
     0   0   1   0   0     1   -1  |  2
     0   0   0   1   0     1    1  |  7
     0   0   0   0   1   -18   -2  | 27.

If one uses P_1, P_2, ..., P_5 as the first basis, he may easily calculate the first tableau. Recall that z_j = sum_i x_ij c_i. The three tableaus of the problem follow. Notice that, in the first, P_7 has the most negative value of z_j - c_j and hence becomes P_k.
The positive values of x_i7 are x_27 = 3 and x_47 = 1, and the corresponding ratios are X_2/x_27 = 15/3 = 5 and X_4/x_47 = 7. Hence G = 5 and P_r is P_2. The second tableau may now be calculated. The elements x_ij' in it (where i is the subscript of the basis vector and j of the column vector) are calculated from the elements in the first tableau by the algorithm

    x_ij' = x_ij - (x_rj/x_rk) x_ik    and    x_kj' = x_rj/x_rk.

Since the x_kj' are used in computing the other elements, it is convenient to calculate them first. A check on the computation is to calculate the z_j' - c_j by the algorithm and then by z_j' = sum_i x_ij' c_i. In the second tableau the only negative net difference is z_6 - c_6 = -1/3. Hence P_6 becomes the new P_k, and P_4 the new P_r. When the third tableau is completed, all z_j - c_j are non-negative; hence the solution is optimal. Since z_j - c_j = 0 only for the basis vectors, there are no other optimal solutions.

Tableau I

    c_i  Basis  P_0   P_1  P_2  P_3  P_4  P_5   P_6  P_7
    0    P_1     6     1    0    0    0    0    -1   -3
    0    P_2    15     0    1    0    0    0     1    3
    0    P_3     2     0    0    1    0    0     1   -1
    0    P_4     7     0    0    0    1    0     1    1
    0    P_5    27     0    0    0    0    1   -18   -2
      z_j        0     0    0    0    0    0     0    0
      z_j - c_j  0     0    0    0    0    0    -1   -2

Tableau II

    c_i  Basis  P_0   P_1  P_2   P_3  P_4  P_5   P_6   P_7
    0    P_1    21     1    1     0    0    0     0     0
    2    P_7     5     0   1/3    0    0    0    1/3    1
    0    P_3     7     0   1/3    1    0    0    4/3    0
    0    P_4     2     0  -1/3    0    1    0    2/3    0
    0    P_5    37     0   2/3    0    0    1  -52/3    0
      z_j       10     0   2/3    0    0    0    2/3    2
      z_j - c_j 10     0   2/3    0    0    0   -1/3    0

Tableau III

    c_i  Basis  P_0   P_1  P_2   P_3  P_4   P_5  P_6  P_7
    0    P_1    21     1    1     0    0     0    0    0
    2    P_7     4     0   1/2    0  -1/2    0    0    1
    0    P_3     3     0    1     1   -2     0    0    0
    1    P_6     3     0  -1/2    0   3/2    0    1    0
    0    P_5    89     0   -8     0   26     1    0    0
      z_j       11     0   1/2    0   1/2    0    1    2
      z_j - c_j 11     0   1/2    0   1/2    0    0    0

H. THE GEOMETRIC SOLUTION

To solve the problem geometrically, one starts by drawing the two-dimensional graph of the constraint set; the shaded area of the figure is the convex set determined by the inequalities. The first extreme point solution corresponds to the origin; the next extreme point solution, to (5,0); and the final one, to (4,3).
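The extreme points just quoted can be checked directly against the original inequalities; the short script below (my own code) evaluates feasibility and the functional y + 2x at each vertex, recovering the functional values 0, 10, 11 of the three tableaus.

```python
# Feasibility of (x, y) in the convex set of the Section G example,
# written so that each test corresponds to one slack variable.

def feasible(x, y):
    return (x >= 0 and y >= 0
            and -y - 3 * x <= 6         # X_1 >= 0
            and y + 3 * x <= 15         # X_2 >= 0
            and y - x <= 2              # X_3 >= 0
            and y + x <= 7              # X_4 >= 0
            and -18 * y - 2 * x <= 27)  # X_5 >= 0

for x, y in [(0, 0), (5, 0), (4, 3)]:
    print(feasible(x, y), y + 2 * x)    # True 0, True 10, True 11
```

At (4,3) the binding constraints are y + 3x <= 15 and y + x <= 7, which is why P_2 and P_4 are the vectors driven out of the basis.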
The family of lines y + 2x = z is the family with slope -2. Maximizing the form y + 2x over the convex set determined by the inequalities corresponds to finding that point (or points) of the convex set which lies on the line of the family y + 2x = z having the largest possible y-intercept while still containing at least one point of the convex set.

I. STATEMENT OF THE DUAL PROBLEM FROM THE LINEAR PROGRAMMING PROBLEM

C. E. Lemke has devised a method of solving a linear programming problem based on the duality theorem, a proof of which was presented in Chapter I, Section K. Assume that the linear programming problem is to minimize z = sum_{j=1}^{n} X_j c_j subject to P_0 = sum_{j=1}^{n} X_j P_j and X_j >= 0 (j = 1, 2, ..., n). Then its dual problem may be stated: to maximize sum_{i=1}^{m} w_i b_i = P_0'w, where P_0' = (b_1, ..., b_m) and w' = (w_1, ..., w_m), subject to P_j'w <= c_j (j = 1, 2, ..., n). The primes on P_0, w, and P_j indicate that P_0, w, and P_j are column vectors and hence P_0', w', and P_j' row vectors.

J. AN APPLICATION OF THE DUALITY THEOREM

It is easy to show that if, for some solution X to the programming problem and some solution w to its dual problem, P_0'w = sum_{j=1}^{n} X_j c_j, then w and X are optimal solutions to the two problems. To do this, return to the vector notation explained in Chapter I, Section J. In this notation X', c', w', and P_0' (or b') represent the row vectors (X_1, ..., X_n), (c_1, ..., c_n), (w_1, ..., w_m), and (b_1, ..., b_m) respectively, and a_j' represents (a_{1j}, ..., a_{mj}). The corresponding symbols without primes represent the corresponding column vectors. A represents the m x n matrix (P_1, P_2, ..., P_n). Now the programming problem may be stated: minimize c'X subject to AX = b and X >= 0. Its dual problem may be stated: maximize w'b subject to w'A <= c'. Now for any solution X and any solution w, not necessarily optimal, AX = b; hence w'AX = w'b. Also w'A <= c', and, since X >= 0, w'AX <= c'X. Therefore

    c'X >= w'AX = w'b,

so that c'X >= w'b and min c'X >= max w'b.
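The inequality chain above can be illustrated numerically; the small problem and the feasible pair below are my own choice, not data from the text.

```python
# Weak duality check: minimize c'X subject to AX >= P_0, X >= 0, with dual
# "maximize w'P_0 subject to w'A <= c', w >= 0".  Any feasible pair (X, w)
# must satisfy c'X >= w'AX >= w'P_0.

A = [[1, 2], [3, 1]]
p0 = [4, 6]
c = [1, 1]

X = [2, 2]        # feasible: AX = (6, 8) >= (4, 6)
w = [0.2, 0.2]    # feasible: w'A = (0.8, 0.6) <= (1, 1), w >= 0

cX  = sum(ci * xi for ci, xi in zip(c, X))
wAX = sum(w[i] * sum(A[i][j] * X[j] for j in range(2)) for i in range(2))
wp0 = sum(wi * bi for wi, bi in zip(w, p0))
print(cX >= wAX >= wp0)   # True
```

Here c'X = 4 and w'P_0 = 2, so the gap between the two feasible objective values brackets the (unknown) common optimum.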
Hence, if for some X and some w, c'X = w'b, this X and this w are optimal solutions to both problems.

K. THE SOLUTION SET OF THE DUAL PROBLEM

The next step is to examine the set of "feasible" solutions to the dual problem. This set will be called Omega. Each inequality w'P_j <= c_j represents a closed half space in M (m-dimensional space) bounded by the hyperplane w'P_j = c_j. Omega is then the convex polyhedron bounded by the n hyperplanes w'P_j = c_j (j = 1, 2, ..., n). It is assumed that n > m. Omega is convex because, if w_1 and w_2 are two points in Omega, then

    w_1'P_j <= c_j  and  w_2'P_j <= c_j    (j = 1, 2, ..., n),

and hence, if 0 <= theta <= 1,

    (theta w_1 + (1 - theta) w_2)'P_j = theta w_1'P_j + (1 - theta) w_2'P_j <= c_j.

L. THEORETICAL DEVELOPMENT OF THE DUAL METHOD

Suppose that an extreme point w_0 of Omega has been found lying on the m hyperplanes w'P_{r_1} = c_{r_1}, ..., w'P_{r_m} = c_{r_m}, where the P_{r_i} are linearly independent. Denote by a_1, ..., a_m the columns, and by a^1, ..., a^m the rows, of the inverse of the matrix whose columns are P_{r_1}, ..., P_{r_m}, so that a^i P_{r_j} = 0 for i != j and a^i P_{r_i} = 1. If

    a^i P_0 >= 0    (i = 1, 2, ..., m),

then one is provided with a solution to the original programming problem, namely X_{r_i} = a^i P_0 and X_j = 0 otherwise. Its corresponding value of the functional is

    z_0 = sum_{i=1}^{m} (a^i P_0) c_{r_i}.

But c_{r_i} = w_0'P_{r_i}; hence

    z_0 = sum_{i=1}^{m} (a^i P_0)(w_0'P_{r_i}).

Now P_0 = sum_{i=1}^{m} (a^i P_0) P_{r_i}; notice that each quantity a^i P_0 in this expression is simply a scalar. Hence

    w_0'P_0 = (a^1 P_0)(w_0'P_{r_1}) + (a^2 P_0)(w_0'P_{r_2}) + ... + (a^m P_0)(w_0'P_{r_m}) = z_0.

Therefore a solution w_0 to the dual problem and a solution X to the original problem (such that X_i = 0 if i is not in the set r_1, r_2, ..., r_m and X_{r_i} = a^i P_0 if it is) have been found for which w_0'P_0 = z_0 = c'X. Hence these are optimal solutions to both problems.

Now suppose that, in P_0 = sum_{i=1}^{m} (a^i P_0) P_{r_i}, a^i P_0 < 0 for some i, say i = s; that is, a^s P_0 < 0. Let w' = w_0' - theta a^s. Since a^s P_{r_i} = 0 for i != s and a^s P_{r_s} = 1, then for any value of theta it is true that

    w'P_{r_i} = w_0'P_{r_i} - theta a^s P_{r_i} = c_{r_i}    if i != s,

and, for any positive value of theta,

    w'P_{r_s} = w_0'P_{r_s} - theta a^s P_{r_s} = c_{r_s} - theta < c_{r_s}.

If a^s P_j >= 0 for all P_j's, then w' would satisfy w'P_j <= c_j (j = 1, 2, ..., n) for any positive theta, and

    w'P_0 = w_0'P_0 - theta a^s P_0

could be made arbitrarily large (since a^s P_0 < 0), contrary to the assumption that the dual problem has a finite maximum. Hence there exists a value of j such that a^s P_j < 0.
Then theta is chosen as the minimum of the values

    (w_0'P_j - c_j)/(a^s P_j)    (over j with a^s P_j < 0).

(Obviously, no P_{r_i} is a candidate, because a^s P_{r_i} = 0 for i != s and a^s P_{r_s} = 1 > 0.) Notice that all of the values described above are positive, because if P_j is not a P_{r_i}, then w_0'P_j < c_j by assumption. Suppose that the minimum is attained for j = k, so that theta = (w_0'P_k - c_k)/(a^s P_k). Then

    w'P_k = w_0'P_k - theta a^s P_k = w_0'P_k - (w_0'P_k - c_k) = c_k.

For any other j with a^s P_j < 0,

    (w_0'P_k - c_k)/(a^s P_k) <= (w_0'P_j - c_j)/(a^s P_j);

hence, multiplying through by the negative number a^s P_j,

    w'P_j = w_0'P_j - theta a^s P_j <= c_j.

It has already been shown that if a^s P_j >= 0, then w'P_j <= c_j. Hence w' is a solution to the dual problem, lying on the m hyperplanes w'P_{r_i} = c_{r_i} (i != s) and w'P_k = c_k. Summarizing, one may say that w' is a new extreme point solution and that

    w'P_0 = w_0'P_0 - theta a^s P_0 > w_0'P_0,    since theta > 0 and a^s P_0 < 0.

Hence, at each stage the functional is increased. There are only a finite number of extreme point solutions, and, due to the increase of the functional at each stage, there can be no repetition or cycling of extreme point solutions. A stage will finally be reached where all a^i P_0 >= 0, and the problem will be solved.

M. RESOLUTION OF ASSUMPTIONS MADE IN SECTION L

It now remains only to clear up the assumptions of Section L. Assumptions 2 and 4 may be disposed of easily. By referring to the proof of the duality theorem in Chapter I, Section K, one may see that if the programming problem has an optimal solution, then its dual problem has an optimal solution. Each of these solutions will be an extreme point of its respective convex set. Hence, if there exists no extreme point solution of the dual problem, there exists no extreme point solution of the programming problem. Also, if there exists no finite maximum to w'P_0, there exists no finite minimum to c'X.

Assumption 3 was that a set of m linearly independent vectors P_{r_1}, P_{r_2}, ..., P_{r_m} had been determined from the set P_1, ..., P_n such that w'P_{r_i} = c_{r_i} and P_0 = sum_{i=1}^{m} X_i P_{r_i}, P_0 being expressible as a positive linear combination of the P_{r_i}.
Such a set provides a basic solution to a new simplex problem, and finding an optimal solution to that new simplex problem will provide an extreme point solution to Omega.

The final unresolved assumption (number 1) was that when X_{r_i} = a^i P_0 (i = 1, 2, ..., m) and the m vectors a_i are linearly independent and w is a point of Omega, then w'P_j < c_j if j is not in the set r_1, ..., r_m. The difficulty which arises when this restriction is removed is that there is then a possibility of "cycling" in the process. Suppose that for some value of j, say q, not of the set r_1, ..., r_m, w'P_q = c_q. Let Omega(e) be defined as the set of all solutions to

    w'P_j <= c_j + e^j    (j = 1, 2, ..., n).

It is possible to find e_0 > 0 such that for 0 < e < e_0 the extreme points of Omega(e) lie on precisely m of the hyperplanes w'P_j = c_j + e^j (j = 1, 2, ..., n). Let P_{r_1}, ..., P_{r_m} be linearly independent column vectors forming a basis of W, each of them one of the P_j (j = 1, 2, ..., n), and suppose that w'(e) is the solution to

    w'P_{r_i} = c_{r_i} + e^{r_i}    (i = 1, 2, ..., m).

If P_j is not one of the P_{r_i}, then P_j = sum_{i=1}^{m} (a^i P_j) P_{r_i}. Hence

    w'(e)P_j = sum_{i=1}^{m} (a^i P_j)(c_{r_i} + e^{r_i}).

This last expression can be equal to c_j + e^j for at most n values of e, since the two sides, as polynomials in e, differ in the term e^j. If every P_j which is not a basis vector is expressed in this form, and this is done for all possible bases, then it can be seen that there exists only a finite number of values of e for which an extreme point of Omega(e) lies on more than m of the hyperplanes w'P_j = c_j + e^j. Hence, if e_0 is some positive number smaller than the smallest such value of e, then for 0 < e < e_0 maximizing w'P_0 subject to

    w'P_j <= c_j + e^j    (j = 1, 2, ..., n)

is a "non-degenerate" problem. Also, if w_0(e) is a maximum extreme point solution to the non-degenerate dual problem, then w_0(0) = w_0 is a maximum extreme point solution to the original dual problem. In the actual computation it is not necessary to specify e.
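In machine computation the quantities of the dual-method tableau can be recovered from the basis alone by solving w'B = c_B'. The sketch below is my own code, not the thesis's hand procedure; applied to the Stage III basis of the example in Section N (with the maximizing functional y + 2x of Section G rewritten as minimizing its negative, so that c_6 = -1 and c_7 = -2), it reproduces the value w' = (0, -1/2, 0, -1/2, 0) reached there.

```python
from fractions import Fraction

# Solve w'B = c_B', i.e. B^T w = c_B, by exact Gauss-Jordan elimination;
# row j of the system is the equation w . P_{r_j} = c_{r_j}.

def solve_w(basis_cols, c_B):
    m = len(c_B)
    M = [[Fraction(basis_cols[j][i]) for i in range(m)] + [Fraction(c_B[j])]
         for j in range(m)]
    for col in range(m):
        piv = next(r for r in range(col, m) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(m):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [M[i][m] for i in range(m)]

# Stage III basis columns of the Section N example
P1 = [1, 0, 0, 0, 0]
P7 = [-3, 3, -1, 1, -2]
P3 = [0, 0, 1, 0, 0]
P5 = [0, 0, 0, 0, 1]
P6 = [-1, 1, 1, 1, -18]
w = solve_w([P1, P7, P3, P5, P6], [0, -2, 0, 0, -1])
assert w == [0, Fraction(-1, 2), 0, Fraction(-1, 2), 0]
```

Exact rational arithmetic is used deliberately: the dual method's test quantities w'P_j - c_j must be compared with zero, and floating-point round-off could misclassify a degenerate hyperplane.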
When an extreme point solution w_0 to the dual problem has been found, associated with the basis P_{r_1}, ..., P_{r_m} with inverse rows a^1, ..., a^m, the coefficients a^i P_0 and a^i P_j (j = 1, 2, ..., n) of P_0 and the P_j's in terms of this basis are recorded, along with the values w_0'P_j - c_j, in a tableau analogous to the simplex tableau:

    c_{r_1}  a^1   a^1 P_0   a^1 P_1  ...  a^1 P_k  ...  a^1 P_n
    c_{r_2}  a^2   a^2 P_0   a^2 P_1  ...  a^2 P_k  ...  a^2 P_n
      .       .       .         .             .             .
    c_{r_m}  a^m   a^m P_0   a^m P_1  ...  a^m P_k  ...  a^m P_n
                   w_0'P_0   w_0'P_1 - c_1 ... w_0'P_k - c_k ... w_0'P_n - c_n

The problem then becomes degenerate if some value w_0'P_j - c_j, for j not in the basis set, is zero. Now a scheme will be demonstrated whereby the replacing vector will be uniquely determined and no cycling will occur. First, since the P_{r_i} are linearly independent vectors, the a^i are also linearly independent. (The P_{r_i}, remember, are m-dimensional column vectors, and the a^i's are the row vectors of the inverse of the matrix B = (P_{r_1}, P_{r_2}, ..., P_{r_m}); the independence of the a^i's may be proved by matrix theory.) Therefore any m-dimensional row vector x' may be expressed in terms of them as

    x' = sum_{i=1}^{m} (x'P_{r_i}) a^i.

In particular,

    w_0' = sum_{i=1}^{m} (w_0'P_{r_i}) a^i = sum_{i=1}^{m} c_{r_i} a^i,

and

    w'(e) = sum_{i=1}^{m} (w'(e)P_{r_i}) a^i = sum_{i=1}^{m} (c_{r_i} + e^{r_i}) a^i,

so that

    w'(e) = w_0' + sum_{i=1}^{m} e^{r_i} a^i.

STAGE III

    a_1 = P_1,  a_2 = P_7,  a_3 = P_3,  a_4 = P_5,  a_5 = P_6.

The new value of w' is (0, -1/2, 0, -1/2, 0). The basis matrix and its inverse are

    B = (P_1, P_7, P_3, P_5, P_6) =

        ( 1  -3   0   0    -1 )
        ( 0   3   0   0     1 )
        ( 0  -1   1   0     1 )
        ( 0   1   0   0     1 )
        ( 0  -2   0   1   -18 )

    a^1 = (1,    1,   0,    0,   0)
    a^2 = (0,  1/2,   0, -1/2,   0)
    a^3 = (0,    1,   1,   -2,   0)
    a^4 = (0,   -8,   0,   26,   1)
    a^5 = (0, -1/2,   0,  3/2,   0).

Hence an optimal solution to the original problem is

    X_1 = a^1 P_0 = 21,   X_7 = a^2 P_0 = 4,   X_3 = a^3 P_0 = 3,
    X_5 = a^4 P_0 = 89,   X_6 = a^5 P_0 = 3,

with all other X_j = 0.
The ease with which this problem was solved is deceiving. In most problems the calculation of the inverses will be very time-consuming unless done on electronic computers. However, there are straightforward methods of calculating inverses which may be adapted to electronic computers. This method, as noted before, has some computational advantages over the simplex method.

CHAPTER III

THE PROBLEM IN WHICH THE COST COEFFICIENTS ARE LINEAR FUNCTIONS OF A PARAMETER

A. INTRODUCTION

The problem so far considered has been that of minimizing (or maximizing) a linear functional subject to a set of constraints. It has been shown that these constraints may always be expressed as a set of m linear equations in n unknowns, where n > m. The linear functional is expressed in the form f(X) = sum_{j=1}^{n} c_j X_j, where the c_j's are constants and X = (X_1, X_2, ..., X_n) is subject to the constraint sum_{j=1}^{n} X_j P_j = P_0; here P_0, P_1, P_2, ..., P_n are m-dimensional vectors. Saaty and Gass have studied the problem in which the cost coefficients, that is, the c_j, are linear functions of one or more parameters t_1, t_2, ..., t_q. The one-parameter case has been completely resolved. When parameters are introduced into the cost coefficients, the matrix of coefficients (P_1, P_2, ..., P_n) of the constraint equations is unaffected. Hence there are still a finite number of feasible extreme point solutions; therefore, there are a finite number of possible optimal solutions.
With every value of the parameter there will be associated a set of one or more feasible extreme point solutions which optimize the functional, if the functional has an optimum value. In the discussion non-degeneracy will be assumed. The degenerate problem may be handled by introducing the corresponding "ε-problem," as in the simplex method. Now the problem may be stated as follows: for every value of the parameter t find the solution to

Σ_{j=1}^n X_j P_j = P_0

which maximizes f(X) = Σ_{j=1}^n c_j X_j, where c_j = g_j + h_j t (j = 1,2,...,n) and all values of g_j and h_j are constants.

Now assume that, for some fixed value t_0 of the single parameter t, a maximal solution has been found by the simplex method. Also assume that the basis vectors of this maximal solution are P_1, P_2, ..., P_m. Then if P_j is any vector in the set P_1, P_2, ..., P_n, then

P_j = Σ_{i=1}^m x_ij P_i  and  z_j = Σ_{i=1}^m x_ij c_i.

Now if the solution involving the vectors P_1, P_2, ..., P_m is maximal, then

z_j − c_j ≥ 0 (j = 1,2,...,n),
Σ_{i=1}^m x_ij c_i − c_j ≥ 0 (j = 1,2,...,n),
Σ_{i=1}^m x_ij (g_i + h_i t) − (g_j + h_j t) ≥ 0 (j = 1,2,...,n),
(Σ_{i=1}^m x_ij g_i − g_j) + (Σ_{i=1}^m x_ij h_i − h_j)t ≥ 0 (j = 1,2,...,n).

Now let

(3-1)  α_j = Σ_{i=1}^m x_ij g_i − g_j (j = 1,2,...,n),
(3-2)  β_j = Σ_{i=1}^m x_ij h_i − h_j (j = 1,2,...,n).

Then z_j − c_j = α_j + β_j t (j = 1,2,...,n), and

(3-3)  α_j + β_j t ≥ 0 (j = 1,2,...,n)

if the solution based on P_1, P_2, ..., P_m is maximal.

The set of inequalities (3-3) defines a convex set of values of t. Let t_1 and t_2 be two points in one-dimensional space which satisfy the inequalities (3-3). Then α_j + β_j t_1 ≥ 0 (j = 1,2,...,n) and α_j + β_j t_2 ≥ 0 (j = 1,2,...,n). If 0 ≤ θ ≤ 1, then

θ(α_j + β_j t_1) + (1 − θ)(α_j + β_j t_2) ≥ 0 (j = 1,2,...,n),
α_j + β_j (θt_1 + (1 − θ)t_2) ≥ 0 (j = 1,2,...,n).

Hence the set of points in one-dimensional space defined by (3-3) is convex. It may take one of three forms if it contains at least one point, and it has been assumed that it contains one point.

1. It may be a single point.
2. It may be a closed interval; that is, it may consist of all the points t such that t̲ ≤ t ≤ t̄.
3.
It may be a "ray" or "half line"; that is, it may consist of all points t such that, for some point t̄, t ≤ t̄, or of all points t such that, for some point t̲, t ≥ t̲. Notice that the meaning of "ray" employed here is different from the definition of a ray in Chapter I, Section B. Hence the term "half line" will be used.

Now each of the inequalities in (3-3) imposes either an upper or a lower bound on t if β_j ≠ 0. If β_j = 0, then α_j ≥ 0, because it has been assumed that the solution is maximal for some value of t. Obviously, if P_j is a basis vector, then α_j = β_j = 0. If β_j > 0, then t ≥ −α_j/β_j. If β_j < 0, then t ≤ −α_j/β_j. Now the upper limit of t imposed by the inequalities (3-3) is the minimum of −α_j/β_j over the values of j for which β_j < 0. The lower limit of t is the maximum of −α_j/β_j over the values of j for which β_j > 0. Let

t̄ = minimum of −α_j/β_j over β_j < 0, and
t̲ = maximum of −α_j/β_j over β_j > 0.

Then if t̄ = t̲, the set of points satisfying (3-3) is this single point. If t̄ > t̲, then all points t such that t̲ ≤ t ≤ t̄ satisfy (3-3). If β_j ≥ 0 for all j (j = 1,2,...,n), then all points t such that t ≥ t̲ satisfy (3-3); similarly, if β_j ≤ 0 for all j, then all points t such that t ≤ t̄ satisfy (3-3). Since it has been assumed that for some value of t the solution based on the vectors P_1, P_2, ..., P_m is maximal, t̄ must be greater than or equal to t̲. If t̄ were less than t̲, there would be no value of t which satisfied (3-3) and hence no value of t for which the solution is maximal.

If α_j + β_j t > 0 for all values of j such that P_j is not in the basis, then for t̲ < t < t̄ the solution is a unique maximal solution. This may be clarified by recalling that introducing any vector P_q into the solution with positive coefficient θ increases the functional by θ(c_q − z_q), that is, decreases it by θ(z_q − c_q). If α_j + β_j t = 0 for some value of j such that P_j is not a basis vector, then introducing this vector into the basis to form a new basis (by the simplex method) will yield another optimal solution for the same interval of values of t.
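In modern notation the computation of t̲ and t̄ from the inequalities (3-3) can be sketched as a short routine. (This is an illustrative sketch, not part of the thesis; the function name optimality_interval and the list layout are assumptions.)

```python
from math import inf

def optimality_interval(alpha, beta):
    """Given z_j - c_j = alpha[j] + beta[j]*t for the vectors not in
    the basis, return the closed interval of values of t on which the
    current basis remains maximal (inequalities (3-3)).
    An infinite end point means the set is a half line."""
    t_lower, t_upper = -inf, inf
    for a, b in zip(alpha, beta):
        if b > 0:            # imposes the lower bound t >= -a/b
            t_lower = max(t_lower, -a / b)
        elif b < 0:          # imposes the upper bound t <= -a/b
            t_upper = min(t_upper, -a / b)
        # b == 0: alpha[j] >= 0 by assumption; no bound on t
    return t_lower, t_upper

# Net differences of Tableau I in Section E:
#   z_3 - c_3 = -1/2 + (3/2)t   and   z_4 - c_4 = 5/2 - (1/2)t
print(optimality_interval([-0.5, 2.5], [1.5, -0.5]))
# prints (0.3333333333333333, 5.0), i.e. 1/3 <= t <= 5
```

For the worked example of Section E this reproduces the interval 1/3 ≤ t ≤ 5 on which the first basis is maximal.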
The next step is to find the optimal solutions for values of t greater than t̄ and less than t̲. Let k be that value of j such that

−α_k/β_k = t̄,  β_k < 0.

Unless the set of values of t for which the known solution is maximal has no upper bound, there will always be at least one such value. If there are several such values of j, then k may be any one of them. The following argument is for finding the maximal solutions for values of t greater than t̄. An entirely analogous argument applies to finding the maximal solutions for values of t less than t̲.

Now t̄ = −α_k/β_k and β_k < 0; hence α_k + β_k t̄ = 0. There are two possibilities. Either P_k cannot be introduced into the basis by the simplex method because x_ik ≤ 0 (i = 1,2,...,m), or P_k can be introduced into the basis by the simplex method to form a new basis.

If P_k cannot be introduced into the basis because x_ik ≤ 0 (i = 1,2,...,m), then for t > t̄ there is no maximal solution; that is, the value of the functional is unbounded. To understand this, recall that this is Case I of the simplex method. (See Chapter I, Section H.) The solution which maximizes the functional for t̲ ≤ t ≤ t̄ is a feasible solution (though not optimal) for t > t̄. The x_ik are unaffected by the value of t; hence x_ik ≤ 0 (i = 1,2,...,m) still. Also if t > t̄, then β_k t < β_k t̄ (since β_k < 0) and α_k + β_k t < 0. Hence if t > t̄ and x_ik ≤ 0 (i = 1,2,...,m), then the conditions for Case I of the simplex method are fulfilled, and the functional has no maximal solution.

If P_k can be introduced by the simplex method into the basis to form a new solution, then for t = t̄ this new solution involving P_k is an alternate maximal solution. This is true since α_k + β_k t̄ = 0. Now if t < t̄, then β_k t > β_k t̄ (since β_k < 0), and

α_k + β_k t > α_k + β_k t̄ = 0,
z_k − c_k > 0,
c_k − z_k < 0.

Hence the feasible solution obtained by introducing P_k into the basis is not optimal for t < t̄. It has been shown that the range of t for which any solution is optimal is either a point, a closed interval of the real line, or a half line.
Therefore, t̄ is the lower limit of the range of t for which the new extreme point solution (obtained by introducing P_k into the basis) is maximal. If t̄ is also the upper limit of this new range of values, then this new solution is maximal only for the point t = t̄. If there is another upper limit, say t*, then the new solution is optimal for t̄ ≤ t ≤ t*. The new range of values is obtained by constructing the new simplex tableau and calculating the new values of z_j − c_j = α_j + β_j t (j = 1,2,...,n). If the set of values of t for which the new basis is maximal is the one point t = t̄, then in the new set of inequalities α_j + β_j t ≥ 0,

t̄ = maximum of −α_j/β_j over β_j > 0 = new lower limit,

and

t̄ = minimum of −α_j/β_j over β_j < 0 = new upper limit.

In either case the new vector P_k is again that vector such that

−α_k/β_k = minimum of −α_j/β_j over β_j < 0.

This vector is introduced to form another basis and to give another feasible solution, and the simplex tableau is again calculated. The process may be repeated. Each repetition gives a new basis. There will be no cycling of bases, because the interval of values of t for which a particular basis is maximal is convex, that is, has no gaps. There are a finite number of possible bases, and for every value of t either there is a maximal solution or the functional is unbounded. Hence finally one of two eventualities will occur: either for some half line of values of t some extreme point solution will be maximal, or for some such half line there will be no maximal solution; that is, the functional will be unbounded. In order to complete the problem one finds, by an entirely similar procedure, the maximal solutions for values of t less than t̲.

C. EQUIVALENCE OF THE DUAL AND SIMPLEX METHODS IN THE ONE-PARAMETER CASE

Recall that the constraints of the dual problem are w'P_j ≥ c_j (j = 1,2,...,n), where w' is an m-dimensional row vector. (See Chapter II, Sections I and J.) In the dual method it is assumed that an initial feasible extreme point solution has been found to the dual problem. (See Chapter II, Sections K and L.)
In other words, it is assumed that a set of m linearly independent vectors P_1, P_2, ..., P_m has been found such that w' = (w_1, w_2, ..., w_m), the unique solution to the set of m linearly independent equations w'P_i = c_i (i = 1,2,...,m), also satisfies the constraints w'P_j ≥ c_j (j = 1,2,...,n) of the dual problem. Now recall that in the dual method the column vectors P_1, P_2, ..., P_m are denoted by a_1, a_2, ..., a_m, respectively, and that the inverse of the matrix [a_1, a_2, ..., a_m] is calculated. (See Chapter II, Section L.) The rows of this inverse are denoted by a^1, a^2, ..., a^m. It was also shown in Chapter II, Section L, that

P_0 = Σ_{i=1}^m (a^i P_0) a_i,

and that if a^i P_0 ≥ 0 (i = 1,2,...,m), then an optimal solution to the original programming problem is X = (X_1, X_2, ..., X_n), where X_i = a^i P_0 (i = 1,2,...,m) and X_i = 0 (i = m+1, m+2, ..., n).

Now in solving the parameter problem by the dual method one would assume that a set of m linearly independent vectors P_1, P_2, ..., P_m had been found (1) such that a^i P_0 ≥ 0 (i = 1,2,...,m), where the a^i have been defined above, and (2) such that for some value of t, say t_0, the solution w' to the m equations

(3-4)  w'P_i = c_i = g_i + h_i t (i = 1,2,...,m)

also satisfies w'P_j ≥ c_j (j = 1,2,...,n). Notice that this is an extreme point solution to the original programming problem and that it is optimal only for those values of t for which the second condition is true. Now w' is a function of t, and the equations (3-4) will be solved explicitly for w'. The vector and matrix notation of Chapter I, Section J, will be employed. Then

w'P_i = c_i = g_i + h_i t (i = 1,2,...,m)

becomes

w'[a_1, a_2, ..., a_m] = (c_1, c_2, ..., c_m),

so that

w' = (c_1, c_2, ..., c_m)[a^1; a^2; ...; a^m],

where [a^1; a^2; ...; a^m] denotes the matrix whose rows are a^1, a^2, ..., a^m, that is, the inverse of [a_1, a_2, ..., a_m]. Now the constraints w'P_j ≥ c_j (j = 1,2,...,n) may be written:

(3-5)  Σ_{i=1}^m c_i (a^i P_j) − c_j ≥ 0 (j = 1,2,...,n).

Recall (Chapter II, Section L) that

P_j = Σ_{i=1}^m (a^i P_j) a_i  and  P_j = Σ_{i=1}^m x_ij P_i.
Since P_1, P_2, ..., P_m are linearly independent, the expression for any m-dimensional vector (in particular P_j) in terms of them is unique. Therefore, a^i P_j = x_ij, and the inequalities (3-5) may be written:

c_1 x_1j + c_2 x_2j + ... + c_m x_mj − c_j ≥ 0 (j = 1,2,...,n).

Hence Σ_{i=1}^m x_ij c_i − c_j ≥ 0 (j = 1,2,...,n), that is, z_j − c_j ≥ 0 (j = 1,2,...,n). This is precisely the same set of inequalities which must be satisfied in solving the problem by the simplex method. Hence the two methods are equivalent.

D. THE TWO-PARAMETER AND MULTI-PARAMETER PROBLEM

Saaty and Gass discuss the solution to the multi-parameter problem, with particular emphasis on the two-parameter case. The one-parameter problem was to find the optimal solution or solutions for every point t in one-dimensional space. The two-parameter problem is to find the optimal solution or solutions for every point (t_1, t_2) in two-dimensional space, and the q-parameter problem is to find the optimal solution or solutions for every point in q-dimensional space.

By the same method as that used in the one-parameter case, it may be shown that the region in q-dimensional space for which a particular extreme point solution is optimal is a convex region. In two-dimensional space these regions are convex polygons defined by a set of inequalities. If the basis of the extreme point solution being considered is P_1, P_2, ..., P_m, then the inequalities which define the convex region for which this solution is maximal are:

z_j − c_j ≥ 0 (j = m+1, m+2, ..., n),

where each z_j − c_j is a linear function of the parameters. The boundaries of this polygon are:

z_j − c_j = 0 (j = m+1, m+2, ..., n).

Along the particular boundary z_k − c_k = 0 the solution obtained by introducing P_k into the basis by the simplex method is an alternate optimal solution. By introducing P_k into the basis and calculating the new simplex tableau, one will arrive at a new set of defining inequalities and hence determine a new convex region.
However, if this is done for all boundaries and then for all the boundaries of the new convex polygons at each stage, it is possible to "reach an impasse without having considered all possible bases and their associated regions" (Saaty and Gass). The only completely general method available now for solving the multi-parameter case is that of considering each possible basis separately and determining the convex region in q-dimensional space defined by the inequalities z_j − c_j ≥ 0.

E. SOLUTION OF A ONE-PARAMETER PROBLEM

As an illustration, consider a problem in six variables with cost coefficients c_1 = 2 + t, c_2 = 3 − 2t, and c_3 = c_4 = c_5 = c_6 = 0, for which an initial feasible extreme point solution with basis P_1, P_2, P_5, P_6 has been found; this solution is recorded in Tableau I. The set of values of t for which it is maximal is determined from the inequalities

z_3 − c_3 ≥ 0,
z_4 − c_4 ≥ 0.

Now z_3 − c_3 = −1/2 + (3/2)t and z_4 − c_4 = 5/2 − (1/2)t from Tableau I. Hence −α_3/β_3 = 1/3 and −α_4/β_4 = 5. Therefore, if 1/3 ≤ t ≤ 5, the solution X = (4, 2, 0, 0, 4, 103) is optimal.

Introducing P_4 into the basis gives a solution which is an alternate optimal solution for t = 5. Calculating the new tableau, Tableau II, allows one to determine the range of values of t for which this solution X = (2, 0, 0, 4, 14, 55) is optimal. It may be easily verified from the inequalities

z_2 − c_2 ≥ 0,
z_3 − c_3 ≥ 0

that this solution is optimal if t ≥ 5. Optimal solutions have now been found for all values of t greater than or equal to 1/3. One may find optimal solutions for values of t less than 1/3 by returning to Tableau I and introducing the vector P_3 into the basis to form a new solution by the simplex method. Upon this new basis Tableau III is calculated, and from the inequalities

z_4 − c_4 ≥ 0,
z_5 − c_5 ≥ 0

one may see that this solution, that is, X = (8/3, 10/3, 8/3, 0, 0, 47), is optimal if −5/6 ≤ t ≤ 1/3.

Introducing P_4 into this last basis gives still another extreme point solution, X = (4/3, 11/3, 13/3, 1, 0, 0). It may easily be shown, by calculating Tableau IV and the inequalities

z_5 − c_5 ≥ 0,
z_6 − c_6 ≥ 0,

that this solution is optimal if t ≤ −5/6. Hence the problem is solved. Below is a table of extreme point solutions with the corresponding values of t for which each solution is optimal.

  Tableau Number   Solution                           From t =   To t =
       I           X = (4, 2, 0, 0, 4, 103)             1/3        5
       II          X = (2, 0, 0, 4, 14, 55)              5        +∞
       III         X = (8/3, 10/3, 8/3, 0, 0, 47)      −5/6       1/3
       IV          X = (4/3, 11/3, 13/3, 1, 0, 0)      −5/6       −∞
For the tableaux below the cost coefficients are c_1 = 2 + t, c_2 = 3 − 2t, c_3 = 0, c_4 = 0, c_5 = 0, c_6 = 0.

Tableau I
  Unit Values   Basis   P_0  |  P_1   P_2    P_3    P_4    P_5   P_6
    2 + t       P_1      4   |   1     0     1/2    1/2     0     0
    3 − 2t      P_2      2   |   0     1    −1/2    1/2     0     0
      0         P_5      4   |   0     0     3/2   −5/2     1     0
      0         P_6    103   |   0     0     21     12      0     1
  z_j − c_j            14    |   0     0   −1/2 + (3/2)t   5/2 − (1/2)t   0   0

Tableau II
  Unit Values   Basis   P_0  |  P_1   P_2    P_3    P_4    P_5   P_6
    2 + t       P_1      2   |   1    −1      1      0      0     0
      0         P_4      4   |   0     2     −1      1      0     0
      0         P_5     14   |   0     5     −1      0      1     0
      0         P_6     55   |   0   −24     33      0      0     1
  z_j − c_j          4 + 2t  |   0   t − 5  2 + t    0      0     0

Tableau III
  Unit Values   Basis   P_0  |  P_1   P_2    P_3    P_4    P_5   P_6
    2 + t       P_1     8/3  |   1     0      0     4/3   −1/3    0
    3 − 2t      P_2    10/3  |   0     1      0    −1/3    1/3    0
      0         P_3     8/3  |   0     0      1    −5/3    2/3    0
      0         P_6     47   |   0     0      0     47    −14     1
  z_j − c_j        46/3 − 4t |   0     0      0   5/3 + 2t   1/3 − t   0

Tableau IV
  Unit Values   Basis   P_0  |  P_1   P_2    P_3    P_4    P_5     P_6
    2 + t       P_1     4/3  |   1     0      0      0     3/47   −4/141
    3 − 2t      P_2    11/3  |   0     1      0      0    11/47    1/141
      0         P_3    13/3  |   0     0      1      0     8/47    5/141
      0         P_4      1   |   0     0      0      1   −14/47    1/47
  z_j − c_j        41/3 − 6t |   0     0      0      0   (39 − 19t)/47   −(5 + 6t)/141

CHAPTER IV

THE PROBLEMS IN WHICH THE COST COEFFICIENTS ARE NONLINEAR FUNCTIONS OF A PARAMETER

A. INTRODUCTION

In Chapter III there was a discussion of the linear programming problem in which the cost coefficients, that is, the c_j's (j = 1,2,...,n), are linear functions of a single parameter t. In other words, an optimal solution is sought for every value of t, and c_j = g_j + h_j t (j = 1,2,...,n), where g_j and h_j are constants. In this chapter a similar problem will be discussed; however, the cost coefficients will not be linear functions of t. In Section B the problem in which the cost coefficients are parabolic functions of a single parameter will be discussed, while Section C will contain a discussion of the problem in which the cost coefficients are periodic functions of a single parameter.

B. THE PARABOLIC CASE

In many ways the parabolic case is similar to the linear case.
The first step is to assign a particular value to t and for this value to find the maximal solution by the simplex method. The next step is to determine for what set of values of t the known extreme point solution is optimal. This latter step is accomplished by solving the set of inequalities

z_j − c_j ≥ 0 (j = 1,2,...,n)

corresponding to the extreme point solution being considered. Since the z_j's (j = 1,2,...,n) are linear functions of the c_j's, and the c_j's are parabolic functions of t, the z_j − c_j's may be written in the form

(4-1)  z_j − c_j = α_j + β_j t + γ_j t² (j = 1,2,...,n).

Now if the problem is to find a maximal solution for every value of t, and if a simplex tableau has been calculated which gives a maximal solution for some particular value of t, then the set of values of t for which this solution is maximal is the set of values of t satisfying the inequalities

(4-2)  z_j − c_j = α_j + β_j t + γ_j t² ≥ 0 (j = 1,2,...,n).

The set of values of t satisfying any one of the inequalities (4-2) takes one of four possible forms if it contains at least one value. The particular form depends upon the constants α_j, β_j, and γ_j. The possibilities may be divided into two groups, depending on whether γ_j is positive or negative. Let y_j(t) = α_j + β_j t + γ_j t². Then the graph of y_j(t) plotted against t will be concave upward if γ_j is positive and concave downward if γ_j is negative. If the graph of y_j(t) is concave upward, it either intersects the t-axis in two places, is tangent to the t-axis, or lies entirely above the t-axis. If the graph of y_j(t) is concave downward, it either intersects the t-axis in two places or is tangent to the t-axis.
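The classification of a single inequality (4-2) into the four forms just described can be made mechanical, as in the following sketch. (Illustrative only, not from the thesis; the function name form and the wording of the labels are assumptions, and γ_j ≠ 0 is assumed.)

```python
def form(alpha, beta, gamma):
    """Classify the set {t : alpha + beta*t + gamma*t**2 >= 0},
    assuming gamma != 0 (the parabolic case)."""
    disc = beta * beta - 4 * gamma * alpha   # discriminant of y_j(t) = 0
    if gamma > 0:                            # concave upward
        if disc < 0:
            return "entire t-axis (concave upward, above the axis)"
        if disc == 0:
            return "entire t-axis (concave upward, tangent)"
        return "complement of an open interval (concave upward, two roots)"
    if disc > 0:                             # concave downward
        return "closed interval between the roots (concave downward)"
    if disc == 0:
        return "a single point (concave downward, tangent)"
    return "empty (ruled out by the optimality assumption)"

# y_3 = -23t^2 + 10t + 33, one of the net differences appearing in the
# worked example of Section D:
print(form(33, 10, -23))
# prints: closed interval between the roots (concave downward)
```

The last case, a parabola concave downward and entirely below the axis, cannot occur for a solution that is maximal at some value of t, as is argued next.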
The possibility that the graph of y_j(t) is concave downward and lies entirely below the t-axis is ruled out by the assumption that, for some value of t and every value of j, y_j(t) ≥ 0. If y_j(t) is concave upward and intersects the t-axis at two places, say at t = t_1 and t = t_2, where t_1 < t_2, then y_j(t) ≥ 0 for every value of t except those values between t_1 and t_2. If y_j(t) is concave upward and is either tangent to the t-axis or entirely above it, then y_j(t) ≥ 0 for every value of t. If the graph of y_j(t) is concave downward and intersects the t-axis at two places, say at t = t_1 and t = t_2, where t_1 < t_2, then y_j(t) ≥ 0 exactly for t_1 ≤ t ≤ t_2.

Let T_j denote the set of values of t for which y_j(t) ≥ 0, and let T denote the set of values of t such that y_j(t) ≥ 0 for every j (j = 1,2,...,n); T is the common part of the T_j. If γ_j < 0 for any value of j, then for this value of j the graph of y_j(t) is concave downward, and T_j contains at most a finite closed interval. Hence T contains no points outside this interval. The reader may determine all the possibilities of "intersections" of the T_j by examining the table of possible forms of T_j.

Upon determining T for a given extreme point solution, one may proceed to determining the set of values of t for which another extreme point solution is optimal. The set of values for which the first solution considered is optimal is either the entire t-axis (in which case the problem is solved), or it contains at least one end point. At this end point y_j = 0 for some value of j, say j = k. If P_k may be introduced into the basis by the simplex method, then introducing P_k into the basis to give a new extreme point solution gives an alternate optimal solution at this end point. (See Chapter III, Section B.) Then by constructing the new simplex tableau one may determine the set of values for which the new solution is optimal. The process may be repeated until for each value of t either an optimal solution has been determined or the functional is known to have no optimum value. If P_k cannot be introduced into the basis by the simplex method because, for every value of i, x_ik ≤ 0, then for those values of t such that y_k(t) = α_k + β_k t + γ_k t² < 0 the problem has no optimal solution.
By referring to the column of the table on page 111 labeled "Description of Complement of T_j," one may see for what set of values the problem has no maximal solution. In other words, T_k is the set of values of t for which α_k + β_k t + γ_k t² ≥ 0, and the complement of this set of values is the set of values for which α_k + β_k t + γ_k t² < 0 and hence for which there is no maximal solution. Recall that this is Case I of the simplex method. (See Chapter I, Section H, and Chapter III, Section B.) An example of this type of problem will be worked in Section D of this chapter.

C. THE PERIODIC CASE

The next case to be considered is the case in which the cost coefficients, the c_j's, are periodic functions of a single parameter. In other words, one such case could be

c_j = g_j sin t + h_j cos t (j = 1,2,...,n).

The problem is to determine an optimal solution to a linear programming problem for every value of t. Again one starts by assigning a value to t and determining an optimal solution for this particular value of t by the simplex method. Then, as before, one must determine for what set of values of t this solution is optimal by solving the inequalities

z_j − c_j ≥ 0 (j = 1,2,...,n).

This set of inequalities may be rewritten:

(4-3)  z_j − c_j = α_j sin t + β_j cos t ≥ 0 (j = 1,2,...,n).

The inequalities (4-3) correspond to the inequalities (4-2) of the parabolic case. (See Section B of this chapter.) Solving the inequalities (4-3) entails first solving the equations

(4-4)  α_j sin t + β_j cos t = 0 (j = 1,2,...,n).

It is easily verified that the solution to these equations is

t = arctan(−β_j/α_j).

The solution is multivalued, and the values are spaced at intervals of π along the t-axis. Let t_1 be defined as the solution to (4-4) such that 0 ≤ t_1 < π, and let t_2 be defined as the solution to (4-4) such that π ≤ t_2 < 2π. In a manner similar to the parabolic case, let

y_j(t) = z_j − c_j = α_j sin t + β_j cos t (j = 1,2,...,n).

Now y_j(t) is a continuous function of t; therefore, between t_1 and t_2 it is either everywhere positive or everywhere negative.
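The two zeros of α_j sin t + β_j cos t in the interval 0 ≤ t < 2π may be computed as in the following sketch. (Illustrative only, not part of the thesis; the function name is an assumption, and α_j ≠ 0 is assumed so that the arctangent is defined.)

```python
import math

def periodic_zeros(alpha, beta):
    """Zeros in [0, 2*pi) of y(t) = alpha*sin(t) + beta*cos(t),
    i.e. the solutions t = arctan(-beta/alpha) of equation (4-4),
    which are spaced at intervals of pi along the t-axis."""
    t1 = math.atan(-beta / alpha) % math.pi   # the zero with 0 <= t1 < pi
    return t1, t1 + math.pi                   # the zero with pi <= t2 < 2*pi

# y_2 = 247 sin t + 120 cos t from the periodic example of Section E:
t1, t2 = periodic_zeros(247, 120)
print(round(t1, 3), round(t2, 3))   # 2.689 5.831
```

For the periodic example of Section E this reproduces the zeros 2.689 and 5.831 of y_2.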
Except for the special case in which t_1 = 0 (and hence t_2 = π), the value t = π is between t_1 and t_2. Therefore, the sign of y_j(π) determines the sign of y_j(t) throughout the interval t_1 < t < t_2. In this way the set T_j of values of t with 0 ≤ t < 2π for which y_j(t) ≥ 0 may be determined for each j; and T, the set of values for which the solution is optimal, is the common part of the T_j. T may be the entire interval 0 ≤ t < 2π, in which case the problem is solved. Otherwise T contains at least one end point. At this end point y_j = 0 for some value of j, say j = k. If P_k may be introduced into the basis by the simplex method, then introducing P_k into the basis to give a new extreme point solution gives an alternate optimal solution at the end point. Then by constructing the new simplex tableau one may determine the set of values for which the new solution is optimal.

If P_k cannot be introduced into the basis by the simplex method because, for every value of i, x_ik ≤ 0, then for those values of t such that z_k − c_k = α_k sin t + β_k cos t < 0 the problem has no optimal solution. The set of these values such that 0 ≤ t < 2π will be either an open interval of length π or two intervals the sum of whose lengths is π. In other words, it will be all those values such that 0 ≤ t < 2π which are not contained in T_k. By repeating the processes outlined above, one will eventually determine an optimal solution for every value of t or determine for what values of t there is no optimal solution. In Section E of this chapter a problem of the type outlined in this section will be worked.

D. SOLUTION OF A PARABOLIC PROBLEM

The best way to understand the problems discussed in Sections B and C of this chapter is to see the solution of numerical problems. For example, consider the problem:

maximize f(X) = Σ_{j=1}^6 c_j X_j

subject to

X_1 + X_2 + X_3 + X_4 = 10
−2X_1 − 6X_2 − 3X_3 + X_5 = −6
−15X_1 + 20X_2 − 12X_3 + X_6 = 120.

The coefficient matrix is

   P_1   P_2   P_3   P_4   P_5   P_6    P_0
    1     1     1     1     0     0      10
   −2    −6    −3     0     1     0      −6
  −15    20   −12     0     0     1     120.
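As a mechanical check on the computations that follow (an illustrative sketch, not part of the thesis; the helper name solve3 is an assumption): at t = 1 the cost coefficients of this example will take the values (4, −6, 5, 4, −5, 2), and the simplex method will end with the basis P_5, P_6, P_1. The sketch verifies in exact rational arithmetic that this basis is feasible and that every z_j − c_j ≥ 0, so that it is optimal.

```python
from fractions import Fraction as F

# Columns P_1 ... P_6 of the coefficient matrix, the vector P_0,
# and the cost coefficients at t = 1.
A = [[1, 1, 1, 1, 0, 0],
     [-2, -6, -3, 0, 1, 0],
     [-15, 20, -12, 0, 0, 1]]
b = [10, -6, 120]
c = [4, -6, 5, 4, -5, 2]
basis = [4, 5, 0]            # indices of P_5, P_6, P_1 (0-based)

def solve3(M, rhs):
    """Solve the 3x3 system M x = rhs exactly (Gauss-Jordan)."""
    M = [[F(v) for v in row] + [F(r)] for row, r in zip(M, rhs)]
    for col in range(3):
        piv = next(r for r in range(col, 3) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(3):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[r][3] for r in range(3)]

B = [[A[i][j] for j in basis] for i in range(3)]
xB = solve3(B, b)                             # values of the basis variables
w = solve3([list(r) for r in zip(*B)], [c[j] for j in basis])   # w'B = c_B
f = sum(c[j] * x for j, x in zip(basis, xB))  # value of the functional
reduced = [sum(w[i] * A[i][j] for i in range(3)) - c[j] for j in range(6)]

print([str(x) for x in xB], str(f))           # ['14', '270', '10'] 510
print(all(r >= 0 for r in reduced))           # True: the basis is optimal
```

The reduced costs computed here, (0, 100, 10, 20, 0, 0), are the net differences of Tableau V at t = 1.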
As an example of the parabolic problem one may define the cost coefficients as parabolic functions of t; for instance,

c_1 = −t² + 3t + 2.

The first step is to assign a value to t and to solve the problem for this particular value. Let t = 1. Then

c_1 = 4, c_2 = −6, c_3 = 5, c_4 = 4, c_5 = −5, c_6 = 2.

In order to start solving for an optimal solution for this particular value of t it is necessary first to find any extreme point solution. The first paragraph of Section B, Chapter II, provides the method for obtaining the solution. A non-negative solution cannot be obtained in terms of the unit vectors P_4, P_5, P_6; but if P_5 is replaced by −P_5, then a non-negative solution is available. Let −P_5 be denoted by P_7 and added to the coefficient matrix, and let c_7 be −M, a number discussed in Section B, Chapter II. Then an entirely equivalent problem is the following:

maximize f(X) = Σ_{j=1}^6 c_j X_j − MX_7

subject to

X_1 + X_2 + X_3 + X_4 = 10
−2X_1 − 6X_2 − 3X_3 + X_5 − X_7 = −6
−15X_1 + 20X_2 − 12X_3 + X_6 = 120.

Then the first extreme point solution is the solution in terms of P_4, P_6, and P_7. Tableau I, on page 131, is the tableau corresponding to this solution. Proceeding by the simplex method, one may calculate Tableaux II, III, IV, and V. Since, in Tableau V, z_j − c_j ≥ 0 (j = 1,2,...,7), Tableau V provides the optimal solution for t = 1. In order to find the entire set of values for which this solution is optimal it is necessary to solve the inequalities:

z_2 − c_2 ≥ 0
z_3 − c_3 ≥ 0
z_4 − c_4 ≥ 0.

Notice that z_7 − c_7 ≥ 0 is not included. This is true because P_7 is not a vector in the original problem but was only introduced to provide the initial extreme point solution. Solving these inequalities entails first solving the corresponding equations:

z_2 − c_2 = x_52 c_5 + x_62 c_6 + x_12 c_1 − c_2 = −4c_5 + 35c_6 + c_1 − c_2 = −52t² + 32t + 120 = 0,

z_3 − c_3 = x_53 c_5 + x_63 c_6 + x_13 c_1 − c_3
= −c_5 + 3c_6 + c_1 − c_3 = −3t² − t + 14 = 0,

z_4 − c_4 = x_54 c_5 + x_64 c_6 + x_14 c_1 − c_4 = 2c_5 + 15c_6 + c_1 − c_4 = −40t² + 22t + 38 = 0.

Denoting z_j − c_j by y_j, one may make the following statements. The solution to y_2 = 0 is t = 1.86, −1.24. The solution to y_3 = 0 is t = 2.00, −2.33. The solution to y_4 = 0 is t = 1.29, −0.74. Therefore, T_2 is the set of points t such that −1.24 ≤ t ≤ 1.86; T_3 is the set of points t such that −2.33 ≤ t ≤ 2.00; and T_4 is the set of points t such that −0.74 ≤ t ≤ 1.29. Notice that approximate values are used. Finally, T for Tableau V is the set of values t such that −0.74 ≤ t ≤ 1.29.

Since, at both end points of T for Tableau V, y_4 = 0, the new tableau is constructed by introducing P_4 into the basis by the simplex method. This gives a basis involving P_4, P_6, and P_1. This is the basis of Tableau IV; hence no new tableau needs to be constructed. From Tableau IV the following may be calculated:

z_2 − c_2 = −132t² + 76t + 196
z_3 − c_3 = −23t² + 10t + 33
z_5 − c_5 = 20t² − 11t − 19.

Also y_2 = 0 at t = 1.54, −0.96; y_3 = 0 at t = 1.43, −1.00; and y_5 = 0 at t = 1.29, −0.74. Now y_2 and y_3 are concave downward, and y_5 is concave upward. Hence T_2 is the set of points t such that −0.96 ≤ t ≤ 1.54; T_3 is the set of points t such that −1.00 ≤ t ≤ 1.43; and T_5 is the set of points t such that either t ≥ 1.29 or t ≤ −0.74. For Tableau IV, T is the set of points common to T_2, T_3, and T_5. In other words, it is those points t such that either 1.29 ≤ t ≤ 1.43 or −0.96 ≤ t ≤ −0.74.

The same computations may be made for Tableau II, whose net differences y_1 and y_3 are concave upward while y_5 is concave downward. T_1 is the set of points t such that either t ≥ 1.54 or t ≤ −0.96; T_3 is the set of points t such that either t ≥ 1.60 or t ≤ −0.95; and T_5 is the set of points t such that −2.23 ≤ t ≤ 3.06. Finally, for Tableau II, T is the set of points t such that either −2.23 ≤ t ≤ −0.96 or 1.60 ≤ t ≤ 3.06.

For Tableau III, T_1 is the set of points t such that either t ≥ 1.43 or t ≤ −1.00; T_2 is the set of points t such that −0.95 ≤ t ≤ 1.60; and T_5 is the set of points t such that either t ≥ 1.17 or t ≤ −0.55. Then for Tableau III, T is the set of points t such that 1.43 ≤ t ≤ 1.60.
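The interval arithmetic for Tableau IV can likewise be sketched mechanically. (Illustrative only, not part of the thesis; the function names are assumptions, and the tangent cases are folded in for brevity.) The three net differences used are those computed above, and the intersection reproduces the two pieces of T.

```python
import math

def region(alpha, beta, gamma):
    """Values of t with alpha + beta*t + gamma*t**2 >= 0 (inequality
    (4-2)), returned as a list of closed intervals; math.inf marks a
    half line.  gamma != 0 is assumed."""
    disc = beta * beta - 4 * gamma * alpha
    if disc <= 0:
        return [(-math.inf, math.inf)] if gamma > 0 else []
    r1, r2 = sorted([(-beta - math.sqrt(disc)) / (2 * gamma),
                     (-beta + math.sqrt(disc)) / (2 * gamma)])
    if gamma > 0:                       # concave upward: outside the roots
        return [(-math.inf, r1), (r2, math.inf)]
    return [(r1, r2)]                   # concave downward: between the roots

def intersect(A, B):
    """Common part of two unions of intervals."""
    return [(max(a, c), min(b, d))
            for a, b in A for c, d in B if max(a, c) <= min(b, d)]

T = region(196, 76, -132)               # T_2 of Tableau IV
T = intersect(T, region(33, 10, -23))   # with T_3
T = intersect(T, region(-19, -11, 20))  # with T_5
print([(round(a, 2), round(b, 2)) for a, b in T])
# prints [(-0.96, -0.74), (1.29, 1.43)]
```

This agrees with the two intervals found above for Tableau IV.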
Reviewing the various sets of values for which each extreme point solution is optimal, one may see that for every value of t, except those values less than −2.23 and those values greater than 3.06, an optimal solution has already been obtained. At t = −2.23 the optimal solution is the one based on Tableau II, and at t = 3.06 the optimal solution is the same. Also, based on this tableau, y_5 = 0 at t = −2.23, 3.06. Hence introducing P_5 into Tableau II gives a new basis, which also determines an optimal solution for these two points. The new basis is P_4, P_5, and P_2, and the new tableau is Tableau VI (see page 133). The following calculations are based on this tableau. The net differences y_1 = z_1 − c_1, y_3 = z_3 − c_3, and y_6 = z_6 − c_6 are again parabolic functions of t. Also y_6 = 0 at t = −2.23, 3.06, and it is not difficult to verify the fact that there is no real solution to y_1 = 0 or to y_3 = 0. Since y_1, y_3, and y_6 are all concave upward, T_6 and T are the same and are that set of points t such that either t ≥ 3.06 or t ≤ −2.23.

To summarize the results, the table on page 131 has been constructed. Approximate values have been used; the exact values may be found in the text in fractional or radical form.

  Left End Point     Right End Point    Tableau Which Provides
  of the Interval    of the Interval    the Optimal Solution
     t = −∞             t = −2.23           Tableau VI
     t = −2.23          t = −0.96           Tableau II
     t = −0.96          t = −0.74           Tableau IV
     t = −0.74          t = 1.29            Tableau V
     t = 1.29           t = 1.43            Tableau IV
     t = 1.43           t = 1.60            Tableau III
     t = 1.60           t = 3.06            Tableau II
     t = 3.06           t = +∞              Tableau VI

Tableau I
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
      4         P_4      10   |   1        1        1       1     0      0     0
      2         P_6     120   |  −15      20      −12       0     0      1     0
     −M         P_7       6   |   2        6        3       0    −1      0     1
  z_j − c_j         280 − 6M  | −30−2M   50−6M   −25−3M     0   5+M      0     0

Tableau II
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
      4         P_4       9   |  2/3       0       1/2      1    1/6     0   −1/6
      2         P_6     100   | −65/3      0      −22       0   10/3     1  −10/3
     −6         P_2       1   |  1/3       1       1/2      0   −1/6     0    1/6
  z_j − c_j            230    | −140/3     0      −50       0   40/3     0  −25/3 + M
Tableau III
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
      4         P_4       8   |  1/3      −1        0       1    1/3     0   −1/3
      2         P_6     144   |  −7       44        0       0    −4      1     4
      5         P_3       2   |  2/3       2        1       0   −1/3     0    1/3
  z_j − c_j            330    | −40/3    100        0       0  −10/3     0   25/3 + M

Tableau IV
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
      4         P_4       7   |   0       −2      −1/2      1    1/2     0   −1/2
      2         P_6     165   |   0       65      21/2      0  −15/2     1   15/2
      4         P_1       3   |   1        3       3/2      0   −1/2     0    1/2
  z_j − c_j            370    |   0      140       20       0   −10      0   15 + M

Tableau V
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
     −5         P_5      14   |   0       −4       −1       2     1      0    −1
      2         P_6     270   |   0       35        3      15     0      1     0
      4         P_1      10   |   1        1        1       1     0      0     0
  z_j − c_j            510    |   0      100       10      20     0      0   5 + M

Tableau VI
  Unit Values   Basis   P_0   |  P_1      P_2      P_3     P_4   P_5    P_6   P_7
      4         P_4       4   |  7/4       0       8/5      1     0   −1/20    0
     −5         P_5      30   | −13/2      0     −33/5      0     1    3/10   −1
     −6         P_2       6   | −3/4       1      −3/5      0     0    1/20    0
  z_j − c_j           −170    |  40        0       38       0     0    −4    5 + M

E. SOLUTION OF A PERIODIC PROBLEM

The same problem will be used to illustrate the case in which the cost coefficients are periodic functions of t, except that of course the cost coefficients will be defined differently. For example, one may define them as follows:

c_1 = sin t + cos t
c_2 = 3 sin t − 2 cos t
c_3 = −sin t + 5 cos t
c_4 = −3 sin t + 4 cos t
c_5 = −sin t − 3 cos t
c_6 = 7 sin t + 3 cos t.

Again the first step is to assign a value to t and to determine an optimal solution for this particular value of t. Let t = 0. Then c_1 = 1, c_2 = −2, c_3 = 5, c_4 = 4, c_5 = −3, c_6 = 3. Using these values of the c_j's, one may verify that Tableau V provides an optimal solution for t = 0 by calculating the values z_2 − c_2, z_3 − c_3, and z_4 − c_4, which are all greater than zero. To find the entire set of values of t for which Tableau V provides the optimal solution, one again solves the inequalities:

z_2 − c_2 ≥ 0
z_3 − c_3 ≥ 0
z_4 − c_4 ≥ 0.
As before, the corresponding equations must be solved first. Then T_2, T_3, and T_4, the solutions to the individual inequalities, will be determined, and from them T, the solution to the entire set of inequalities, can be determined.

z_2 − c_2 = x_52 c_5 + x_62 c_6 + x_12 c_1 − c_2 = 247 sin t + 120 cos t = 0.
z_3 − c_3 = x_53 c_5 + x_63 c_6 + x_13 c_1 − c_3 = 24 sin t + 8 cos t = 0.
z_4 − c_4 = x_54 c_5 + x_64 c_6 + x_14 c_1 − c_4 = 107 sin t + 36 cos t = 0.

Again denoting z_j − c_j by y_j, one may summarize the results as follows:

the solution to y_2 = 0 is t = arctan(−120/247) = arctan(−0.4858) = 2.689, 5.831;
the solution to y_3 = 0 is t = arctan(−8/24) = arctan(−0.3333) = 2.820, 5.962;
the solution to y_4 = 0 is t = arctan(−36/107) = arctan(−0.3364) = 2.817, 5.959.

At t = π,