
Applied Mathematical Programming (1977) - luu
http://web.mit.edu/15.053/www/
======
kxyvr
I'm glad there's more free material online for optimization modeling and
algorithms. Personally, here's my favorite list for algorithms and modeling
for optimization:

For learning the simplex method, Linear Programming by Vasek Chvatal (first 10
chapters):
[https://books.google.com/books?id=DN20_tW_BV0C&printsec=fron...](https://books.google.com/books?id=DN20_tW_BV0C&printsec=frontcover)

For learning branch and bound, Integer Programming by Laurence A. Wolsey
[https://books.google.com/books?id=x7RvQgAACAAJ](https://books.google.com/books?id=x7RvQgAACAAJ)

For odd modeling tricks for integer programming, Logic and Integer Programming
by Williams, H. Paul
[https://www.springer.com/us/book/9780387922799](https://www.springer.com/us/book/9780387922799)

For modeling tricks for convex programming, Convex Optimization by Stephen
Boyd and Lieven Vandenberghe
[https://stanford.edu/~boyd/cvxbook/](https://stanford.edu/~boyd/cvxbook/)

For basic nonlinear optimization algorithms, Numerical Optimization by
Nocedal, Jorge, Wright, S.
[https://www.springer.com/us/book/9780387303031](https://www.springer.com/us/book/9780387303031)

I've never really found the ultimate optimization book, but the references
above tend to have the right modeling techniques, algorithms, and references.
I've found they're a good place to get the right idea and the right words to
search for more material.

~~~
vitaut
Just to add, the book "AMPL: A Modeling Language for Mathematical Programming"
by Robert Fourer, David M. Gay, and Brian W. Kernighan (yes, the "K" in K&R)
is also freely available online:
[http://ampl.com/resources/the-ampl-book/chapter-downloads/](http://ampl.com/resources/the-ampl-book/chapter-downloads/).
It is not only a reference for the AMPL modeling language, but also a great
introduction to mathematical optimization.

------
psykotic
'Programming' as in linear programming, dynamic programming, etc, not
'programming' as in computer programming.

~~~
vitaut
People outside of this field are often confused by this. That's why the term
"mathematical optimization" is often used instead of "mathematical
programming" nowadays.

------
vitaut
Mathematical programming (optimization) has so many useful applications and I
am very pleased to see this classical book on the topic available online.

Unfortunately, because it was written a while ago, it doesn't cover algebraic
modeling languages, and the examples are in Excel, which is not an adequate
optimization or teaching tool (even XKCD has something to say about it:
[https://xkcd.com/1667/](https://xkcd.com/1667/)).

------
eggy
Old, but after skimming it for an hour, it seems very applicable. I had been
researching genetic algorithms to do scheduling optimization in the past based
on a 1996 thesis 'A Genetic Algorithm for Resource-Constrained Scheduling' [1]
(PDF download alert).

I am not an operations manager, but I seem to always have a need to perform
similar analyses or be in a position to comment on others. Operations
management texts tend to be very broad, or geared to a non-mathematical
manager, so I am finding more texts like this one to be more enjoyable and
informative for my purposes.

The last optimization problem with respect to OM I worked on was to review an
OEM's recommended spares list for validity given lead times on certain items,
and the FMEA (Failure Modes and Effects Analysis), to determine how many of
each spare we needed, and when we should order replenishments, to achieve a
95% or greater confidence level that we would have stock of a particular item
and not have any downtime.

There is large-scale software that does this automatically for big companies,
but for small to mid-sized companies, it is great to be able to dig into the
algorithms to gain a better understanding -- and, heck, just for the fun and
learning experience, especially when said company doesn't see the benefit of
the larger software.

Thanks for the link!

[1]
[http://lancet.mit.edu/~mbwall/phd/thesis/thesis.pdf](http://lancet.mit.edu/~mbwall/phd/thesis/thesis.pdf)

------
pasbesoin
The link to the Excel Exercises currently returns a 403, but they are on
archive.org:

[http://wayback.archive.org/web/20150309105904/http://web.mit...](http://wayback.archive.org/web/20150309105904/http://web.mit.edu/15.053/www/ExcelExercises/)

------
princeb
management science is a great topic and one of the important applications of
various optimization algorithms in mathematics. it's roughly the same field
as industrial engineering.

most people get by just figuring out how to put problems in excel and hit
"SOLVER" (or Matlab and fmincon), but there is some value in figuring out how
the algorithm works. then it's a hop, skip, and a jump away from the other
classes of optimization that you see in machine learning.
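For readers without Excel or Matlab handy, the same kind of small problem can
be set up in code. Here is a minimal sketch using SciPy's `linprog` (my choice
of tool, not one mentioned in the thread) to solve a tiny made-up linear
program:

```python
from scipy.optimize import linprog

# Maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective coefficients.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (4, 0) and maximized objective 12
```

This is essentially what Excel's SOLVER does behind the dialog box: hand the
coefficient matrices to a simplex- or interior-point-based routine.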

~~~
graycat
In practice, the real problems don't have to get very large or complicated
before just typing the problem into AMPL or clicking on SOLVE won't work --
that is, _solver_ capacities may be exceeded or the running time can become
absurdly long.

Then you must exploit whatever special structure you can find in your problem
and dig into the algorithms to make refined usage. So, maybe do some
Lagrangian relaxation. Maybe find an embedded network flow problem and use a
network flow version of the simplex algorithm.

------
graycat
This book is comparatively old, but the authors are experts, and likely the
material is still surprisingly current.

But I will insert another point, call it the "Dirty Little Secrets of
Optimization Your Books and Professors Don't Tell You."

Here is a bottom line, bold, summary fact of life about such _optimization_ ,
_mathematical programming_ , _operations research_ : it's super tough to find
real-world problems where you can apply such material, do well on the
problems, please the customers, and get paid well enough to have a career.
Trying such work, either as a consultant or an employee, is tough.

There are exceptions, but they are rare. If you can find an exception,
terrific, but, did I mention, they are rare. Nearly anything else you can do
with a BS, MS, or Ph.D. in a STEM field will be an easier way to make a
living.

In practice, the field has a bad reputation: Early on there was a lot of hype
and a lot of projects that failed. Leading reasons for project failure were
(1) the basic data was too difficult to collect, (2) some custom software was
needed, and it was too difficult to write, (3) the available optimization
software was not powerful enough, (4) the mathematical techniques were not
powerful enough, at least not without a lot of special work unique to the
particular practical problem, (5) the computer run times were too long, too
often days or weeks.

In addition there were various _social_ problems: So, the optimization people
were seen as wizards with tall, conical black hats decorated with stars and
moons and were not really welcome, were not understood, were threatening, etc.

Here is a larger problem: The optimization people did not have a recognized
profession. Lawyers, physicians, insurance actuaries, and more do have a
recognized profession, maybe with government licensing, a code of ethics,
professional peer review, legal liability, graduate education which is not
just academic but also professional, likely with practical apprenticeship, a
serious professional society, professional certification, etc. That way, a
generalist line manager can hire a respected professional, as an employee or
consultant, with relatively low risk to his career. Without professional
status, such a line manager is reluctant and for good reason.

Then there's another secret: Early on, when there was a lot of optimism, there
were two ideas, maybe just implicit but, still, powerful:

The first idea was that such optimization would, to borrow a more recent
phrase, "eat the world". That is, essentially every economic activity in the
world would be directed by such optimization.

The second idea was that the _optimal solution_ was the absolute goal.
Accepting a solution that was 10 cents away from optimal was seen as a moral
lapse. The word _optimal_ was a stick used to beat up on everything and
everyone else.

Then reality set in: The applied math available was nowhere nearly powerful
enough to solve just any practical problems that could be found in routine
operations. And that was just the clouds before the hurricane:

The hurricane happened when the question of P versus NP was formulated and it
was seen that large, practical NP-complete problems were as common as pebbles
in a stream bed. Darned near every problem in scheduling, routing, logistics,
resource allocation, and more could want at least something as difficult as
integer linear programming, but that problem is NP-complete. Then considering
non-linear aspects gave a storm that made a hurricane look like a gentle rain.
Then it got worse: A lot of real problems had a lot of randomness. Okay, many
of those problems can be formulated as stochastic optimal control problems,
maybe continuous time, maybe discrete time, but then the bad news arrives
right away: First, you don't have enough data on either the system _dynamics_
or the probability distributions of the random effects. Second, on realistic
problems, stochastic optimal control can eat supercomputer time by the month
per problem.

The worst of the chuckholes in the road was the struggle with P versus NP.
And the moral attitude that the _optimal_ had to be attained made the P
versus NP challenge much worse. Indeed, the US economy went for decades
declining to implement solutions that could save 15% of operating costs
because saving the 16% of an optimal solution was too difficult for the math
and the computing. That is, commonly, saving that last 10 cents is by far the
hardest part. There might be millions of dollars to be saved, but because the
last 10 cents could not be saved, people gave up.

And there was a huge cultural problem: Various universities set up
professorships, sometimes departments, concentrating on optimization. Well,
there the professors concentrated on publishing papers and largely ignored
real problems. And anything like professional education was pushed out as
inferior to the research agenda. Then, sure, the students got relatively
little benefit in non-academic careers -- e.g., they were not in a profession.

For a while, there was an eager customer who did put up with a lot of the hype
and poor or failed projects -- the US DoD.

So, bottom line: A lot of very good applied math has been done for
optimization. And there is a lot of high quality software. But real problems
that can be attacked successfully with these tools and where the benefits are
really worth the _botheration_ , time, money, and effort are rare.

If you can find suitable real problems and make a living, you will be very
rare, but good for you.

What will likely happen is that occasionally some optimization will become
part of some software that is sold as, say, SaaS. And, in some fields where
optimization is known to be effective, say, feed mixing, we can expect that
optimization is an accepted tool in that field.

So, what was the biggest result of the academic research? Sure, they
identified the problem of P versus NP. Gads, I should have started a pizza
shop!

~~~
Tarrosion
For what it's worth, my experience thus far has been quite different:

\- I agree that operations research has a brand problem, but the problem is
that nobody recognizes the name. Every week I have this conversation: "What do
you do?" "I'm a PhD student in operations research at MIT." [blank look] "Have
you heard of OR? I wouldn't blame you; most people haven't." "No, what is it?"
"It's something of a mix between computer science, applied math, and
statistics, and in particular we do a lot of optimization work." "Wow, that's
really useful!"

So no, we don't have the brand of computer science, but I get a uniformly
positive reception when I describe what I do. From technical people there's a
fair amount of "Oh! I have this optimization problem in my work, do you
think..."

\- People graduating from MIT operations research seem to have 100% success in
finding good jobs, both in academia and industry, so I'd be surprised to learn
that finding good jobs is tough.

Yes, we all benefit tremendously from the name MIT, no doubt. Nonetheless, if
everyone graduating from my program has an excellent offer lined up by the
time they cross the stage...that doesn't suggest a congested market with few
opportunities.

\- Yes, integer programming is Hard in the technical sense. However, with
overwhelmingly high probability, the problem you actually care about is not
hard at all. Write down the first formulation that comes to mind, throw it at
Gurobi or CPLEX, go for a coffee, and when you come back you'll have an answer
with tiny optimality gap, perhaps certifiably optimal.

Integer programming in particular is one of the great academic success
stories; the combined speedup of hardware and algorithms over the last ~30
years has made IP solvers roughly 10^9 times faster.
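The throw-it-at-a-solver workflow described above can be illustrated with a
tiny 0-1 knapsack. This sketch uses SciPy's open-source `milp` (backed by
HiGHS) as a stand-in for the commercial Gurobi or CPLEX mentioned in the
comment; the instance itself is made up:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# 0-1 knapsack: maximize total value subject to a weight capacity.
values = np.array([10, 13, 7])
weights = np.array([3, 4, 2])
capacity = 5

res = milp(
    c=-values,                                        # milp minimizes, so negate
    constraints=LinearConstraint(weights.reshape(1, -1), ub=capacity),
    integrality=np.ones(3),                           # all variables integer
    bounds=Bounds(0, 1),                              # 0-1 variables
)
print(res.x, -res.fun)  # picks items 0 and 2: weight 5, value 17
```

Write down the first formulation that comes to mind, hand it over, and the
solver returns a provably optimal (or near-optimal, with a reported gap)
solution.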

___

You write as someone who has personal experience and knows what they're
talking about, so perhaps I should update my priors. But suffice it to say, I
think OR and its practitioners are in a fantastic spot right now.

~~~
graycat
MIT? Yup, D. Bertsekas! And his student S. Shreve! And, of course, T.
Magnanti!

Right: Magnanti gave a Goldman lecture at Johns Hopkins on the design of
internet networks. Sure, it was network design at Bell Labs that got Garey
and Johnson to write that first, important book on NP-completeness. At about
the time of Magnanti's lecture, I was also looking into the same problem,
especially for the core of the Internet with multi-protocol label switching,
border gateway protocol, etc. I couldn't get any VCs interested in funding me
(since then I learned more about what VCs want -- not a problem posed in a
lecture by the Dean of Science at MIT and attacked by a Ph.D. in
optimization).

So, soon I heard of a venture funded startup doing network design and sent
them a resume. They flew me down. The CEO met me at the door; I hadn't looked
up his name; that was a fatal social flub; and he refused his interview slot
with me.

But some of his venture funders were there; they were very concerned, were
nicer to me, and liked that I'd been the first Director of Operations Research
at FedEx and that my work on scheduling the fleet had saved the company --
literally.

The CEO of that startup had been an IBM executive, and I'd been at the IBM
Watson lab. Mostly the team I was on worked in AI, but I worked also in some
mathematical statistics and some optimization. My optimization work knocked
off an old problem in the Kuhn-Tucker conditions and solved a problem stated
but not solved in the famous Arrow, Hurwicz, Uzawa paper in mathematical
economics. My statistics paper was quite different.

It was that CEO's error: Since I was also from IBM, apparently he expected me
to have a good IBM style executive handshake. Sorry 'bout that.

His company was in deep trouble; no doubt that is why his VCs were right
there. Solving integer linear programming problems was just crucial for his
business, essentially as in Magnanti's lecture.

Their head applied mathematician had just left to return to his university
slot. Again, the CEO was in deep, smelly, sticky stuff.

I met with their team. What they really cared about were software object
oriented inheritance hierarchies. Gads. Next to irrelevant for solving the
integer programming problems, but okay by me -- I'd be glad to write in C++
and their hierarchy if they wanted.

Then we got to optimization: I said that my most recent work was on a
commercial problem in marketing resource allocation where I found a feasible
solution to a 0-1 integer linear programming problem with 40,000 constraints
and 600,000 variables in 905 seconds on a 90 MHz PC. From the primal-dual
bounding, my feasible solution was within 0.025% of optimality. That work
should be a good initial qualification!

Alas, they had heard that 0-1 integer linear programming was in NP-complete,
thus believed that only small, trivial problems could be attacked (a serious
mistake that is far too common), and concluded that I was lying! Gads.

Well, they told me about their approach -- enumeration. They confessed that it
was running too long, for days. Gads.

In a back room they had a high-end PC and a copy of CPLEX, but they were not
touching it.

I didn't get an offer, and soon the startup went out of business. Sorry, guys.
Looks like Darwin was on the case.

That experience was a good example of the _cultural,_ _social,_ or
_professional_ problem: The CEO was in a business that desperately depended on
getting good results from integer linear programming. But the CEO was a
traditional, blue-suit, white-shirt, polished-shoes, strong-handshake,
no-nonsense, tough, results-oriented manager, focused on the bottom line, ...
who, however, didn't know the first thing about optimization, or how to hire a
team that did, and didn't recognize a good optimization guy when one walked in
the door.

And I didn't come in with enough professional status, say, like that of a
lawyer advising on an IPO or some such, for him to swallow his pride in
knowing everything and take me seriously.

Actually, that problem of 40,000 constraints, etc. was another example: I'd
found some guys who had some customers high in banking and had talked the
customers into letting these guys attack problems in optimization, or anything
else they could get paid to work on.

Well, the bank had a resource allocation optimization problem. The head guy
was a salesman type, but he had a technical guy he liked. The technical guy
knew next to nothing about optimization but had heard about simulated
annealing. And, maybe to his credit, he did formulate the banking problem as
the 0-1 integer linear programming (ILP) problem I mentioned.

So, I met those two guys over coffee, and they sent me e-mail with the 0-1 ILP
formulation.

I looked at the formulation, derived some non-linear duality for essentially
Lagrangian relaxation, and considered several approaches. I knew that since
the primal problem was a minimization, the dual problem was a maximization of
a concave function that could be approximated with supporting hyperplanes. The
dual problem was in 16 dimensions, that is, there were 16 Lagrange
multipliers.

In this case, given values of the Lagrange multipliers, the primal problem was
just trivial to solve exactly, just with a page or so of simple code for some
simple arithmetic. So, all that was left was to maximize the concave dual
problem in 16 variables.
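The scheme described here -- relax the coupling constraints, solve the
now-trivial primal for fixed multipliers, and climb the concave dual -- can be
sketched in a few lines. This toy version uses projected subgradient ascent
rather than the supporting-hyperplane (cutting-plane) LP approach described in
the comment, and the instance is made up for illustration:

```python
import numpy as np

# Toy 0-1 covering problem:  minimize c.x  s.t.  A x >= b,  x in {0,1}^n.
# Relax A x >= b with multipliers lam >= 0.  Given lam, the inner
# minimization separates per variable: set x_j = 1 iff its reduced cost
# c_j - (lam @ A)_j is negative.  Maximize the concave dual L(lam) by
# projected subgradient ascent; a subgradient at lam is b - A x(lam).
c = np.array([2.0, 3.0, 4.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])

lam = np.zeros(2)
best = -np.inf
for k in range(1, 501):
    reduced = c - lam @ A
    x = (reduced < 0).astype(float)          # trivial inner solution
    dual_val = lam @ b + reduced[x == 1].sum()
    best = max(best, dual_val)               # best lower bound so far
    lam = np.maximum(0.0, lam + (1.0 / k) * (b - A @ x))  # ascent step
print(best)  # lower bound on the integer optimum (which is 3 here)
```

For this instance the dual bound reaches the integer optimum; in general
Lagrangian relaxation yields a bound plus, often, a good feasible solution
after a repair step, which is exactly the primal-dual bounding used to certify
the 0.025% gap mentioned above.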

So, I wrote them back that I thought I saw a path. I wrote the code, calling
the IBM Optimization Subroutine Library (OSL) for the linear programming part
of the concave maximization, went through 500 primal-dual iterations, and
found a feasible solution within 0.025% of optimality. After two weeks, I
sent them e-mail on my success.

They went silent. There had been hints: The technical guy had been trying
simulated annealing, running for days, and stopping when he gave up, of
course, maybe without being feasible and certainly with no indication of how
close he was to optimality. He didn't like me and was pinging on his CEO buddy
to keep me away from the work.

So, from my distant view of the political situation, through a glass darkly:
my result, 905 seconds and 0.025%, totally pissed off and humiliated the
technical guy and let the CEO know that he'd picked an incompetent as his
lead technical guy; the CEO didn't want to admit his need for me and just
gave up.

So, again, as lawyers have learned: A working professional technical
specialist should report only to a person with such qualifications and never
to a generalist line manager. That is, the CEO guy needed a professional that
he could respect for his professionalism, and he was not yet ready to do that
with me.

Really, in real business, darned few generalist middle managers want to have
to manage someone doing important technical work the manager does not
understand. It's almost like, if he had a heart attack, he'd rather go to a
nurse than a high end cardiologist. The key is, the cardiologist has a lot of
professional status.

There was another case similar to the banking thingy: Some guys were selling
to some companies with a big sales force and wanted to use optimization to get
better sales results. They had a problem formulation that was a good start.
So, they hired me, and soon I found that the 0-1 ILP really was a case of min
cost network flows. I typed up the math using TeX and sent them a nice paper.
So, I reviewed W. Cunningham's paper on _strongly feasible basic solutions_
and started writing code. Again, the CEO guy had a jealous technical guy who
knew next to nothing about optimization and talked the CEO guy out of letting
me finish the code. I got fired.

And, there are other examples.

I gave up -- it's tough to sell optimization in business. The business guys
would rather do without optimization, even if it puts them out of business.
That is, the CEO guys want to do it their way, or the way of some
not-very-good technical guy they have a _personal relationship_ with, or not
at all.

Net, the people I knew just didn't much care for optimization. They didn't
want it.

Maybe now some people do want some optimization. Maybe.

Yup, I can believe that Bixby's latest software is terrific. Last I heard, he
was working with Ellis Johnson and George Nemhauser and/or some of their
students at Georgia Tech. Of course, Johnson helped get the OSL going and did
some good things with it at American Airlines.

Gee, maybe I'll send another 1000 resumes on applied optimization, stochastic
processes, statistics! Maybe.

------
ejanus
Any easy-to-understand materials?

------
pabb
I won't assume that someone on HN wouldn't potentially benefit from this, but
it honestly seems like OP submitted this without reading the first sentence of
the first page. That, or this was blatant trolling of some benign fashion.

~~~
vinchuco
It was recently discussed in a different thread.
[https://news.ycombinator.com/item?id=11845604](https://news.ycombinator.com/item?id=11845604)

