Research on interior-point methods (IPMs) has dominated the field of mathematical programming for the last two decades. Two contrasting approaches in the analysis and implementation of IPMs are the so-called small-update and large-update methods, although, until now, there has been a notorious gap between the theory and practical performance of these two strategies. This book comes close to bridging that gap, presenting a new framework for the theory of primal-dual IPMs based on the notion of the self-regularity of a function. The authors deal with linear optimization, nonlinear complementarity problems, semidefinite optimization, and second-order conic optimization problems. The framework also covers large classes of linear complementarity problems and convex optimization. The algorithm considered can be interpreted as a path-following method or a potential reduction method. Starting from a primal-dual strictly feasible point, the algorithm chooses a search direction defined by some Newton-type system derived from the self-regular proximity. The iterate is then updated, with the iterates staying in a certain neighborhood of the central path until an approximate solution to the problem is found. By extensively exploring some intriguing properties of self-regular functions, the authors establish that the complexity of large-update IPMs can come arbitrarily close to the best known iteration bounds of IPMs. Researchers and postgraduate students in all areas of linear and nonlinear optimization will find this book an important and invaluable aid to their work.
"synopsis" may belong to another edition of this title.
Jiming Peng is Professor of Mathematics at McMaster University and has published widely on nonlinear programming and interior-point methods. Cornelis Roos holds joint professorships at Delft University of Technology and Leiden University. He is an editor of several journals, coauthor of more than 100 papers, and coauthor (with Tamas Terlaky and Jean-Philippe Vial) of "Theory and Algorithms for Linear Optimization". Tamas Terlaky is Professor in the Department of Computing and Software at McMaster University, founding Editor-in-Chief of "Optimization and Engineering", coauthor of more than 100 papers, and an editor of several journals and two books.
"The new idea of self-regular functions is very elegant and I am sure that this book will have a major impact on the field of optimization."--Robert Vanderbei, Princeton University
"The progress outlined in Self-Regularity represents one of the really major events in our field during the last five years or so. This book requires just standard mathematical background on the part of the reader and is thus accessible to beginners as well as experts."--Arkadi Nemirovski, Technion-Israel Institute of Technology
Contents
Preface
Acknowledgements
Notation
List of Abbreviations
Chapter 1. Introduction and Preliminaries
Chapter 2. Self-Regular Functions and Their Properties
Chapter 3. Primal-Dual Algorithms for Linear Optimization Based on Self-Regular Proximities
Chapter 4. Interior-Point Methods for Complementarity Problems Based on Self-Regular Proximities
Chapter 5. Primal-Dual Interior-Point Methods for Semidefinite Optimization Based on Self-Regular Proximities
Chapter 6. Primal-Dual Interior-Point Methods for Second-Order Conic Optimization Based on Self-Regular Proximities
Chapter 7. Initialization: Embedding Models for Linear Optimization, Complementarity Problems, Semidefinite Optimization and Second-Order Conic Optimization
Chapter 8. Conclusions
References
Index
Linear Programming is a revolutionary development that permits us, for the first time in our long evolutionary history, to make decisions about the complex world in which we live that can approximate, in some sense, the optimal or best decision.
George B. Dantzig
A short survey of the fields of linear optimization and interior-point methods is presented in this chapter. Based on the simple model of the standard linear optimization problem, some basic concepts of interior-point methods and various strategies used in the algorithms are introduced. The purpose of this work, as well as some intuitive observations that sparked the authors' research, is described. Several preliminary technical results are presented in preparation for the later analysis, and the contents of the book are also outlined.
1.1 HISTORICAL BACKGROUND OF INTERIOR-POINT METHODS
1.1.1 Prelude
There is no doubt that the major breakthroughs in the field of mathematical programming are always inaugurated in linear optimization. Linear optimization, hereafter LO, deals with a simple mathematical model that exhibits a wonderful combination of two contrasting aspects: it can be considered as both a continuous and a combinatorial problem. Its continuous aspect lies in finding a global minimizer of a linear function over a convex polyhedral feasible set, while its combinatorial character lies in searching for optimality over the set of vertices of a polyhedron. The Simplex algorithm, invented by Dantzig in the mid-1940s, explicitly exploits this combinatorial structure to identify a solution by moving from one vertex of the feasible set to an adjacent one with improving objective values. Dantzig's Simplex method has achieved tremendous success in both theory and application, and it remains to this day one of the most efficient workhorses for solving LO.
The invention of the digital computer has brought dramatic changes to the world, and thus to the area of optimization. By combining computers with efficient methods, we can now solve many large and complex optimization problems that were unsolvable before the advent of computers, or even two decades ago. Inspired by the fast development of computers, scientists began to study the complexity theory of methods in the 1960s and 1970s. An algorithm was termed polynomial if the number of arithmetic operations taken by the algorithm to solve the problem could be bounded above by a polynomial in the input "size" of the problem. Correspondingly, such a polynomial algorithm was recognized as an efficient algorithm. Although the Simplex method with certain pivot rules works very well in practice and its probabilistic computational complexity is strongly polynomial (see Borgwardt), it might take $2^n - 1$ iterations to solve the LO problem constructed by Klee and Minty in 1972, where $n$ is the number of variables in a problem with $2n$ constraints. Similar "exponential examples" for different variants of the Simplex method have also been reported. Spurred by these exponential examples for the Simplex method, researchers in the field became interested in the question of whether LO problems are solvable in polynomial time.
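For concreteness, one commonly cited form of the Klee-Minty construction is the following (the exact coefficients differ across textbooks; this variant is given here only as an illustration and is not necessarily the authors' own presentation):
\[
\begin{aligned}
\max \quad & 2^{n-1} x_1 + 2^{n-2} x_2 + \cdots + 2 x_{n-1} + x_n \\
\text{s.t.} \quad & 2^{i} x_1 + 2^{i-1} x_2 + \cdots + 4 x_{i-1} + x_i \;\leq\; 5^{i}, \qquad i = 1, \ldots, n, \\
& x_1, \ldots, x_n \;\geq\; 0.
\end{aligned}
\]
The feasible region is a slightly deformed $n$-dimensional cube with $2^n$ vertices, and with Dantzig's classical pivot rule the Simplex method, started at the origin, visits every vertex, which accounts for the $2^n - 1$ iterations quoted above.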
In 1979, an affirmative answer to the above question was given by Khachiyan, who utilized the so-called ellipsoid method to solve the LO problem and proved that the algorithm has a polynomial $O(n^2 L)$ iteration complexity with a total of $O(n^4 L)$ bit operations. Khachiyan's results were immediately hailed in the international press, and the ellipsoid algorithm was subsequently studied intensively by various scholars in both theory and implementation. Unfortunately, in contrast to the high expectations, even the best known implementation of the ellipsoid method is far from competitive with existing Simplex solvers. In this first challenge of polynomial algorithms to the Simplex method, the latter was the obvious winner. Nevertheless, the sharp contrast between the practical efficiency and the theoretical worst-case complexity of an algorithm set the stage for further exciting developments.
1.1.2 A Brief Review of Modern Interior-Point Methods
The new era of interior-point methods (IPMs) started in 1984 when Karmarkar proposed his LO algorithm, which enjoyed a polynomial complexity of $O(nL)$ iterations with $O(n^{3.5} L)$ bit operations, and announced that his algorithm could solve large-scale LO problems much faster than the Simplex method. At that time, the name of Karmarkar and his algorithm reached the front page of the New York Times, despite the fact that his claim was received with much skepticism by some experts in the field. Nowadays, it is clear that Karmarkar opened a new field: the flourishing field of modern IPMs.
Karmarkar's algorithm in its original form is a primal projective potential reduction method. The potential function was employed to measure the progress of the algorithm and to force the iterates to stay in the feasible region. Although Karmarkar considered a nonstandard LO model, it soon turned out that the projective transformation is not necessary for solving standard LO problems. Shortly after its publication, Gill et al. observed that some simple variants of Karmarkar's algorithm could be traced back to a very old algorithm in nonlinear optimization: the logarithmic barrier method. This observation led to a revival of some old methods for continuous optimization, including the logarithmic barrier method of Frisch and of Fiacco and McCormick, the center method of Huard, and the affine scaling method of Dikin. For example, it was proven by Roos and Vial that the basic logarithmic barrier method for LO has polynomial complexity. Most of the early work on IPMs in the 1980s followed Karmarkar's primal setting, but focused on more efficient implementations or better complexity bounds. A remarkable work at that time was the algorithm of Renegar, who proposed using upper bounds on the optimal value of the objective function to form successively smaller subsets of the feasible set, and employing Newton's method to follow the analytic centers of these subsets to obtain the primal optimal solution. Closely related to the center of a polyhedron is another very important concept in the IPM literature: the so-called central path, first recognized by Sonnevend and Megiddo. Almost all known polynomial-time variants of IPMs use the central path as a guideline to the optimal set, and some variant of Newton's method to follow the central path approximately. These Newton-type methods fall into different groups with respect to the strategies used to follow the central path. The reader may consult the survey paper by Gonzaga or the monograph by den Hertog for further details.
Among the various types of IPMs, the so-called primal-dual path-following methods have emerged as particularly attractive and useful. The primal-dual setting was first suggested by Megiddo, Monteiro and Adler, Tanabe, and Kojima et al. for LO problems, and later extensively investigated for complementarity problems by a Japanese group led by Kojima. On the theoretical side, primal-dual IPMs possess appealing polynomial complexity and allow a transparent extension to other problems such as convex programming and complementarity problems. In practice they also form the basis of the most efficient IPM solvers. A notable work with respect to practical IPMs is due to Mehrotra, who described a predictor-corrector algorithm in 1992. Mehrotra's scheme has proved to be very effective in practice and is employed in most existing successful IPM solvers. Much effort went into investigating preprocessing techniques, warm starts, sparse Cholesky factorization, and other implementation issues. Through these efforts, the implementation of IPMs was enhanced greatly at both the commercial and the academic level. The literature contains thorough surveys of the implementation and performance of IPMs; as reported there, compared with the Simplex method, IPMs appear to be a strong rival for solving LO problems of medium size, and the winner for large-scale ones.
The theory of IPMs developed into a mature discipline during the 1990s. One main contribution to IPM theory in that period came from two mathematicians, Nesterov and Nemirovskii, who invented the theory of the by now well-known self-concordant functions, allowing algorithms based on the logarithmic barrier function for LO to be transparently extended to more complex problems such as nonlinear convex programming, nonlinear complementarity problems, variational inequalities, and particularly semidefinite optimization (SDO) and second-order conic optimization (SOCO). Nesterov and Todd further generalized the primal-dual algorithms to linear optimization over so-called self-scaled cones, which include SDO and SOCO as concrete instances. To some extent, SDO has recently become the most active area of mathematical programming. SDO involves minimizing a linear function of a matrix argument subject to linear equality constraints and the requirement that the matrix argument be positive semidefinite. The SDO paradigm includes as special cases LO, quadratic optimization (QO), the linear complementarity problem (LCP), and SOCO. It has a great variety of applications in areas such as optimal control, combinatorics, structural optimization, and pattern recognition. We refer to the excellent survey by Vandenberghe and Boyd for more details. An extremely important fact is that, while problems such as LO, QO, and LCP can also be solved by other methods (and hence IPMs are only one alternative), IPMs appear to be the first and also the most efficient approach for SDO. As such, SDO is a good advertisement for the power of IPMs, although the theory and especially the implementation of IPMs for SDO are still far from mature. There are also many studies of other theoretical properties of IPMs, such as local convergence of the algorithms, procedures for obtaining an exact solution from an approximate solution, and sensitivity analysis. Interested readers are referred to recent books by Roos, Terlaky and Vial, by Wright, and by Ye, and the references therein. The book edited by Terlaky collects state-of-the-art review papers on various topics of IPMs and a large number of related references. For more recent achievements on IPMs we refer to the Interior-Point Methods Online Web site, www.mcs.anl.gov/otc/InteriorPoint, where most of the technical reports of the past five years are listed.
1.2 PRIMAL-DUAL PATH-FOLLOWING ALGORITHM FOR LO
1.2.1 Primal-Dual Model for LO, Duality Theory and the Central Path
In this monograph we are mainly concerned with the complexity theory of IPMs. This is the major theme in the recent IPM literature. We start with the following standard linear optimization problem:
\[
\text{(LP)} \qquad \min\{\, c^T x \;:\; Ax = b,\ x \geq 0 \,\}, \qquad (1.1)
\]
where $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$, $c \in \mathbf{R}^n$, and its dual problem
\[
\text{(LD)} \qquad \max\{\, b^T y \;:\; A^T y + s = c,\ s \geq 0 \,\}. \qquad (1.2)
\]
Throughout this work, we assume that $A$ has full row rank, that is, $\operatorname{rank}(A) = m$. This implies that for a given dual feasible $s$, the vector $y$ is uniquely defined. Hence we may identify a feasible solution of (LD) by $s$ alone. The following definitions introduce some basic concepts for LO.
Definition 1.2.1 A point $x$ is said to be feasible (or strictly feasible) for (LP) if $Ax = b$ and $x \geq 0$ (or $x > 0$). A point $(y, s)$ is feasible (or strictly feasible) for (LD) if $A^T y + s = c$ and $s \geq 0$ (or $s > 0$). The point $(x, y, s)$ is primal-dual feasible (or strictly feasible) if $x$ and $(y, s)$ are feasible (or strictly feasible) for (LP) and (LD), respectively.
The relationships between (LP) and (LD) have been well explained by the duality theory of LO. For instance, if $(x, y, s)$ is a primal-dual feasible pair, then $b^T y \leq c^T x$. In other words, the objective value in (LD) gives a lower bound for the objective in (LP), and the objective in (LP) provides an upper bound for that in (LD).
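Indeed, this weak duality bound follows in one line from the constraints of (1.1) and (1.2):
\[
c^T x - b^T y \;=\; c^T x - (Ax)^T y \;=\; x^T\bigl(c - A^T y\bigr) \;=\; x^T s \;\geq\; 0,
\]
since $x \geq 0$ and $s \geq 0$; the nonnegative quantity $x^T s$ is the duality gap. The main duality results can be summarized by the following strong duality theorem.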
Theorem 1.2.2 For (LP) and (LD) one of the following four alternatives holds:
(i) (LP) and (LD) are feasible and there exists a primal-dual feasible pair $(x^*, y^*, s^*)$ such that
\[
c^T x^* = b^T y^*.
\]
(ii) (LP) is infeasible and (LD) is unbounded.
(iii) (LD) is infeasible and (LP) is unbounded.
(iv) Both (LP) and (LD) are infeasible.
Hence, solving LO amounts to detecting which of these four cases holds, and in case (i) an optimal solution $(x^*, y^*, s^*)$ must be found. Note that in case (i) the two objective values of (LP) and (LD) coincide at the solution $(x^*, y^*, s^*)$, that is, $c^T x^* = b^T y^*$, which further yields
\[
(s^*)^T x^* \;=\; (c - A^T y^*)^T x^* \;=\; c^T x^* - b^T y^* \;=\; 0.
\]
Observe that, since $x^* \geq 0$ and $s^* \geq 0$, the above equality can also be written as
\[
x_i^* s_i^* = 0, \qquad i = 1, \ldots, n.
\]
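As a quick numerical illustration of these optimality and complementarity relations (not part of the book: the toy instance below is ours, and it assumes SciPy's linprog with the HiGHS solver is available), one can solve a tiny standard-form pair (LP)/(LD) and check $c^T x^* = b^T y^*$ and $x_i^* s_i^* = 0$ directly:

```python
import numpy as np
from scipy.optimize import linprog

# A tiny standard-form instance of (LP): min c^T x  s.t.  Ax = b, x >= 0.
# Variables x3, x4 act as slacks of the inequalities x1 + x2 <= 4, x1 + 2*x2 <= 6.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

# Solve (LP) directly.
primal = linprog(c, A_eq=A, b_eq=b,
                 bounds=[(0, None)] * A.shape[1], method="highs")

# Solve (LD): max b^T y s.t. A^T y + s = c, s >= 0, i.e. A^T y <= c with y free;
# linprog minimizes, so we minimize -b^T y.
dual = linprog(-b, A_ub=A.T, b_ub=c,
               bounds=[(None, None)] * A.shape[0], method="highs")

x_star = primal.x
y_star = dual.x
s_star = c - A.T @ y_star              # dual slack recovered from A^T y + s = c

print("c^T x*    =", c @ x_star)       # -10.0
print("b^T y*    =", b @ y_star)       # -10.0, equal up to solver tolerance
print("x_i* s_i* =", x_star * s_star)  # componentwise products ~ 0 (complementarity)
```

For this instance $x^* = (2, 2, 0, 0)$ and $s^* = (0, 0, 1, 1)$, so in every coordinate exactly one of $x_i^*$, $s_i^*$ is positive; this foreshadows the strict complementarity property stated next.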
An intrinsic property of LO is given by the following result.
Theorem 1.2.3 Suppose that both (LP) and (LD) are feasible. Then there exists a primal-dual feasible pair $(x^*, y^*, s^*)$ such that $(x^*)^T s^* = 0$ and $x^* + s^* > 0$. A solution $(x^*, s^*)$ with this property is called strictly complementary.
This theorem was first established by Goldman and Tucker and later studied by other researchers using different approaches. It plays an important role in the design and analysis of IPMs, particularly in procedures for obtaining an exact solution from an approximate solution produced by IPMs, or for detecting the infeasibility of the problem.
Starting from an initial point $(x^0, y^0, s^0)$ with $x^0, s^0 > 0$, all primal-dual interior-point algorithms generate a point sequence $(x^k, y^k, s^k)$ with $x^k, s^k > 0$ such that the sequence converges to the set of optimal solutions as $k$ goes to infinity. If, at each iteration, the point $(x^k, y^k, s^k)$ further satisfies the linear equality constraints, then we call the algorithm a feasible interior-point algorithm, and otherwise an infeasible interior-point algorithm. The choice between feasible and infeasible IPMs depends on whether a feasible starting point is available or not. If a strictly feasible point is known, then by starting from this point and carefully updating the iterates, we can keep the iterative sequence feasible.
In this work, the class of feasible IPMs is taken as the framework for our discussions. The reasons for this are twofold. First, from a theoretical viewpoint it is more convenient to analyze a feasible IPM than an infeasible one; needless to say, with some extra effort, the analysis for feasible IPMs can usually be extended to their infeasible counterparts. Second, for problems such as LO, LCP, and SDO, by slightly increasing the size of the problem, we can always use the self-dual embedding model (see Chapter 7) to reformulate the original problem as a new problem for which a strictly feasible starting point is readily available. Hence from now on we assume without loss of generality that both (LP) and (LD) satisfy the interior-point condition, that is, there exists $(x^0, y^0, s^0)$ such that
\[
Ax^0 = b,\quad x^0 > 0, \qquad A^T y^0 + s^0 = c,\quad s^0 > 0. \qquad (1.3)
\]
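The preceding discussion describes the generic template of a feasible primal-dual method: keep the iterates strictly feasible in the sense of (1.3) while driving them toward optimality. As a minimal sketch of a single iteration, the step below uses the classical Newton direction for the perturbed optimality conditions $x_i s_i = \mu$; this is the textbook logarithmic-barrier direction, not the self-regular search direction developed later in this book, and the function name and damping factor are illustrative choices only.

```python
import numpy as np

def feasible_pd_newton_step(A, x, y, s, mu, damping=0.9):
    """One damped Newton step of a classical feasible primal-dual
    path-following method, targeting the central-path condition x_i*s_i = mu.
    Assumes A has full row rank and (x, y, s) is strictly feasible, as in (1.3).
    (This is the standard logarithmic-barrier direction, not the self-regular
    proximity direction studied in the book.)"""
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)

    # Newton system:  A dx = 0,  A^T dy + ds = 0,  S dx + X ds = mu*e - X S e.
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    rhs = np.concatenate([np.zeros(m), np.zeros(n), mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

    # Ratio test: damped step length keeping x and s strictly positive.
    alpha = 1.0
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, damping * np.min(-v[neg] / dv[neg]))

    return x + alpha * dx, y + alpha * dy, s + alpha * ds
```

A complete method repeats such steps while driving $\mu$ to zero: small-update variants shrink $\mu$ only slightly at every iteration, whereas large-update variants reduce it aggressively and then take several damped steps to return to a neighborhood of the central path, which is precisely the regime whose iteration bounds the self-regular proximities of this book are designed to improve.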
(Continues...)
Excerpted from Self-Regularity: A New Paradigm for Primal-Dual Interior-Point Algorithms by Jiming Peng, Cornelis Roos, and Tamás Terlaky. Copyright © 2002 by Princeton University Press. Excerpted by permission of Princeton University Press. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
"About this title" may belong to another edition of this title.
£ 4.47 shipping from U.S.A. to United Kingdom
Destination, rates & speedsSeller: PsychoBabel & Skoob Books, Didcot, United Kingdom
Paperback. Condition: New. First Edition. Paperback in as-new condition: minor shelfwear only: contents clean, sound, bright. Used. Seller Inventory # 219667
Quantity: 1 available
Seller: BooksRun, Philadelphia, PA, U.S.A.
Paperback. Condition: Very Good. Ship within 24hrs. Satisfaction 100% guaranteed. APO/FPO addresses supported. Seller Inventory # 0691091935-8-1
Quantity: 1 available
Seller: Powell's Bookstores Chicago, ABAA, Chicago, IL, U.S.A.
Condition: Used - Like New. 2002. Paperback. Fine. Seller Inventory # m900446
Quantity: 1 available
Seller: Kadriin Blackwell, Greensville, ON, Canada
Trade Paperback. Condition: As New. Book. Seller Inventory # 11230
Quantity: 1 available
Seller: Basi6 International, Irving, TX, U.S.A.
Condition: Brand New. New. US edition. Excellent Customer Service. Seller Inventory # ABEJUNE24-112397
Quantity: 1 available
Seller: Labyrinth Books, Princeton, NJ, U.S.A.
Condition: New. Seller Inventory # 127371
Quantity: 9 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: New. Seller Inventory # 400147-n
Quantity: 1 available
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
PAP. Condition: New. New Book. Shipped from UK. Established seller since 2000. Seller Inventory # WP-9780691091938
Quantity: 1 available
Seller: PBShop.store US, Wood Dale, IL, U.S.A.
PAP. Condition: New. New Book. Shipped from UK. Established seller since 2000. Seller Inventory # WP-9780691091938
Quantity: 1 available
Seller: GreatBookPricesUK, Woodford Green, United Kingdom
Condition: As New. Unread book in perfect condition. Seller Inventory # 400147
Quantity: 1 available