7 - Detecting Infeasibility
Published online by Cambridge University Press: 05 November 2011
Abstract
We study interior-point methods for optimization problems in the case of infeasibility or unboundedness. While many such methods are designed to search for optimal solutions even when they do not exist, we show that they can be viewed as implicitly searching for well-defined optimal solutions to related problems whose optimal solutions give certificates of infeasibility for the original problem or its dual. Our main development is in the context of linear programming, but we also discuss extensions to more general convex programming problems.
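The "certificates of infeasibility" mentioned in the abstract are the objects supplied by Farkas' lemma: a standard-form system Ax = b, x >= 0 has no solution exactly when some vector y satisfies A^T y <= 0 and b^T y > 0. A minimal sketch of verifying such a certificate, using a made-up toy system (the matrix A, vector b, and certificate y below are illustrative choices, not taken from the chapter):

```python
import numpy as np

# Hypothetical toy system (for illustration only): the equality constraints
#   x1 + x2 = 1  and  x1 + x2 = 2,  with x >= 0,
# cannot both hold, so  A x = b, x >= 0  is infeasible.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# A Farkas certificate of infeasibility: y with A^T y <= 0 and b^T y > 0.
y = np.array([-1.0, 1.0])

print(A.T @ y)  # componentwise <= 0: [0. 0.]
print(b @ y)    # strictly positive: 1.0
```

Checking a claimed certificate is just two matrix-vector products, which is why such certificates are useful outputs for a solver: they let a user confirm infeasibility independently of the algorithm that found them.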
Introduction
The modern study of optimization began with G.B. Dantzig's formulation of the linear programming problem and his development of the simplex method in 1947. Over the more than five decades since then, the sizes of instances that could be handled grew from a few tens (in numbers of variables and of constraints) into the hundreds of thousands and even millions. During the same interval, many extensions were made, both to integer and combinatorial optimization and to nonlinear programming. Despite a variety of proposed alternatives, the simplex method remained the workhorse algorithm for linear programming, even after its non-polynomial nature in the worst case was revealed. In 1979, L.G. Khachiyan showed how the ellipsoid method of D.B. Yudin and A.S. Nemirovskii could be applied to yield a polynomial-time algorithm for linear programming, but it was not a practical method for large-scale problems. These developments are well described in Dantzig's and Schrijver's books [4, 25] and the edited collection [18] on optimization.
- Type: Chapter
- In: Foundations of Computational Mathematics, Minneapolis 2002, pp. 157-192
- Publisher: Cambridge University Press
- Print publication year: 2004