Introduction to Linear Optimization

Preface:

The purpose of this book is to provide a unified, insightful, and modern treatment of linear optimization, that is, linear programming, network flow problems, and discrete linear optimization. We discuss both classical topics and the state of the art. We give special attention to theory, but also cover applications and present case studies. Our main objective is to help the reader become a sophisticated practitioner of (linear) optimization, or a researcher. More specifically, we wish to develop the ability to formulate fairly complex optimization problems, provide an appreciation of the main classes of problems that are practically solvable, describe the available solution methods, and build an understanding of the qualitative properties of the solutions they provide.

Our general philosophy is that insight matters most. For the subject matter of this book, this necessarily requires a geometric view. On the other hand, problems are solved by algorithms, and these can only be described algebraically. Hence, our focus is on the beautiful interplay between algebra and geometry. We build understanding using figures and geometric arguments, and then translate ideas into algebraic formulas and algorithms. Given enough time, we expect that the reader will develop the ability to pass from one domain to the other without much effort.

Another of our objectives is to be comprehensive, but economical. We have made an effort to cover and highlight all of the principal ideas in this field. However, we have not tried to be encyclopedic, or to discuss every possible detail relevant to a particular algorithm. Our premise is that once mature understanding of the basic principles is in place, further details can be acquired by the reader with little additional effort.

Our last objective is to bring the reader up to date with respect to the state of the art. This is especially true in our treatment of interior point methods, large scale optimization, and the presentation of case studies that stretch the limits of currently available algorithms and computers.

The success of any optimization methodology hinges on its ability to deal with large and important problems. In that sense, the last chapter, on the art of linear optimization, is a critical part of this book. It will, we hope, convince the reader that progress on challenging problems requires both problem-specific insight and a deeper understanding of the underlying theory.

In any book dealing with linear programming, there are some important choices to be made regarding the treatment of the simplex method. Traditionally, the simplex method is developed in terms of the full simplex tableau, which tends to become the central topic. We have found that the full simplex tableau is a useful device for working out numerical examples. But other than that, we have tried not to overemphasize its importance.

Let us also mention another departure from many other textbooks. Introductory treatments often focus on standard form problems, which is sufficient for the purposes of the simplex method. On the other hand, this approach often leaves the reader wondering whether certain properties are generally true, and can hinder the deeper understanding of the subject. We depart from this tradition: we consider the general form of linear programming problems and define key concepts (e.g., extreme points) within this context. (Of course, when it comes to algorithms, we often have to specialize to the standard form.) In the same spirit, we separate the structural understanding of linear programming from the particulars of the simplex method. For example, we include a derivation of duality theory that does not rely on the simplex method.
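To make this distinction concrete, here is a side-by-side sketch of the two formulations (the notation is illustrative, not the book's exact statement). In general form, equality and inequality constraints may be mixed and the variables are unrestricted in sign; in standard form, all constraints are equalities and all variables are nonnegative:

$$
\begin{aligned}
\text{general form:}\quad & \min\ c'x \quad \text{subject to}\quad a_i'x \ge b_i,\ i \in M_1, \qquad a_i'x = b_i,\ i \in M_2,\\
\text{standard form:}\quad & \min\ c'x \quad \text{subject to}\quad Ax = b, \qquad x \ge 0.
\end{aligned}
$$

Any general form problem can be converted to standard form by introducing a nonnegative slack variable $s_i$ for each inequality (rewriting $a_i'x \ge b_i$ as $a_i'x - s_i = b_i$, $s_i \ge 0$) and by expressing each free variable as a difference of two nonnegative ones ($x_j = x_j^+ - x_j^-$); this is why restricting attention to standard form suffices for the simplex method.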

Finally, this book contains a treatment of several important topics that are not commonly covered. These include a discussion of the column geometry and of the insights it provides into the efficiency of the simplex method, the connection between duality and the pricing of financial assets, a unified view of delayed column generation and cutting plane methods, stochastic programming and Benders decomposition, the auction algorithm for the assignment problem, certain theoretical implications of the ellipsoid algorithm, a thorough treatment of interior point methods, and a whole chapter on the practice of linear optimization. There are also several noteworthy topics that are covered in the exercises, such as Leontief systems, strict complementarity, options pricing, von Neumann's algorithm, submodular function minimization, and bounds for a number of integer programming problems.

Here is a chapter by chapter description of the book.

**Chapter 1:**
Introduces the linear programming problem,
together with a number of examples, and provides some background material on
linear algebra.

**Chapter 2:** Deals with the basic geometric properties of polyhedra,
focusing on the definition and the existence of extreme points,
and emphasizing the interplay between the geometric and the algebraic viewpoints.

**Chapter 3:** Contains more or less the classical material
associated with the simplex method, as well as a discussion of the column geometry.
It starts with a high-level and geometrically motivated derivation of the simplex method.
It then introduces the revised
simplex method, and concludes with the simplex tableau. The usual topics of
Phase I and anticycling are also covered.

**Chapter 4:** Provides a comprehensive treatment of linear programming duality.
The duality theorem is first obtained as a corollary of the simplex method.
A more abstract derivation is also provided, based on the separating hyperplane
theorem, which is developed from first principles.
It ends with a deeper look into the geometry of polyhedra.

**Chapter 5:** Discusses sensitivity analysis,
that is, the dependence of solutions and the optimal cost
on the problem data, including parametric programming.
It also develops a characterization of dual optimal solutions as subgradients
of a suitably defined optimal cost function.

**Chapter 6:**
Presents the complementary ideas of delayed column generation and
cutting planes. These methods are first developed at a high level, and are then
made concrete by discussing the cutting stock problem, Dantzig-Wolfe decomposition,
stochastic programming, and Benders decomposition.

**Chapter 7:** Provides a comprehensive review of the principal
results and methods for the different variants of the network flow problem.
It contains representatives from all major types of algorithms: primal
descent (the simplex method), dual ascent (the primal-dual method),
and approximate dual ascent (the auction algorithm). The focus is on the
major algorithmic ideas, rather than on the refinements that can lead to better
complexity estimates.

**Chapter 8:**
Includes a discussion of complexity,
a development of the ellipsoid method, and a proof of the polynomiality of
linear programming. It also discusses the equivalence of separation and
optimization,
and provides examples where
the ellipsoid algorithm can be used to derive polynomial time results
for problems involving an exponential number
of constraints.

**Chapter 9:** Contains an overview of all major classes of interior point methods,
including affine scaling, potential reduction, and path following (both
primal and primal-dual) methods. It includes
a discussion of the underlying geometric ideas and computational issues, as
well as convergence proofs and
complexity analysis.

**Chapter 10:** Introduces integer programming formulations of discrete optimization problems.
It provides a number of examples, as well as some intuition as to what constitutes
a "strong" formulation.

**Chapter 11:**
Covers the major classes of integer programming algorithms,
including exact methods (branch and bound, cutting planes, dynamic programming),
approximation algorithms, and heuristic methods (local
search and simulated annealing).
It also introduces a duality theory for integer programming.

**Chapter 12:**
Deals with the art in linear optimization, i.e., the process
of modeling, exploiting problem structure, and fine tuning of optimization
algorithms. We discuss the relative performance of interior point
methods and different variants of the simplex method, in a realistic large scale
setting. We also give some indication of the size of problems that can be currently solved.

An important theme that runs through several chapters is the modeling, complexity, and algorithms for problems with an exponential number of constraints. We discuss modeling in Section 10.3, complexity in Section 8.5, algorithmic approaches in Chapter 6 and Section 8.5, and we conclude with a case study in Section 12.5.

A fair number of exercises are given at the end of each chapter. Most of them are intended to deepen the understanding of the subject or to explore extensions of the theory in the text, as opposed to routine drills. However, several numerical exercises are also included. Starred exercises are meant to be fairly hard. A solutions manual for qualified instructors can be obtained from the authors.

We have made a special effort to keep the text as modular as possible, allowing the reader to omit certain topics without loss of continuity. For example, much of the material in Chapters 5 and 6 is rarely used in the rest of the book. Furthermore, in Chapter 7 (on network flow problems), a reader who has gone through the problem formulation (Sections 7.1-7.2) can immediately move to any later section in that chapter. Also, the interior point algorithms of Chapter 9 are not used later, with the exception of some of the applications in Chapter 12. Even within the core chapters (Chapters 1-4), there are many sections that can be skipped during a first reading. Some sections have been marked with a star indicating that they contain somewhat more advanced material that is not usually covered in an introductory course.

The book was developed while we took turns teaching a first-year graduate course at M.I.T., for students in engineering and operations research. The only prerequisite is a working knowledge of linear algebra. In fact, it is only a small subset of linear algebra that is needed (e.g., the concepts of subspaces, linear independence, and the rank of a matrix). However, these elementary tools are sometimes used in subtle ways, and some mathematical maturity on the part of the reader can lead to a better appreciation of the subject.

The book can be used to teach several different types of courses. The first two suggestions below are one-semester variants that we have tried at M.I.T., but there are also other meaningful alternatives, depending on the students' background and the course's objectives.

- Cover most of Chapters 1-7, and if time permits,
cover a small number of topics from Chapters 9-12.
- An alternative could be the same as above, except that interior point
algorithms (Chapter 9) are fully covered, replacing network flow problems
(Chapter 7).
- A broad overview course can be constructed by concentrating on the easier material in most of the chapters. The core of such a course could consist of Chapter 1, Sections 2.1-2.4, 3.1-3.5, 4.1-4.3, 5.1, 7.1-7.3, 9.1, 10.1, some of the easier material in Chapter 11, and an application from Chapter 12.
- Finally, the book is also suitable for a half-course on integer programming, based on parts of Chapters 1 and 8, as well as Chapters 10-12.

There is a truly large literature on linear optimization, and we make no attempt to provide a comprehensive bibliography. To a great extent, the sources that we cite are either original references of historical interest, or recent texts where additional information can be found. For those topics, however, that touch upon current research, we also provide pointers to recent journal articles.

We would like to express our thanks to a number of individuals. We are grateful to our colleagues Dimitri Bertsekas and Rob Freund, for many discussions on the subjects in this book, as well as for reading parts of the manuscript. Several of our students, colleagues, and friends have contributed by reading parts of the manuscript, providing critical comments, and working on the exercises: Jim Christodouleas, Thalia Chryssikou, Austin Frakt, David Gamarnik, Leon Hsu, Spyros Kontogiorgis, Peter Marbach, Gina Mourtzinou, Yannis Paschalidis, Georgia Perakis, Lakis Polymenakos, Jay Sethuraman, Sarah Stock, Paul Tseng, and Ben Van Roy. But mostly, we are grateful to our families for their patience, love, and support in the course of this long project.

Dimitris Bertsimas

dbertsim@aris.mit.edu

John N. Tsitsiklis

jnt@mit.edu
