Optimization Problem Types

Nonsmooth Optimization (NSP)

The most difficult type of optimization problem to solve is a nonsmooth problem (NSP). Such a problem normally is, or must be assumed to be, non-convex. Hence it may have multiple feasible regions and multiple locally optimal points within each region. Moreover, because some of the functions are non-smooth or even discontinuous, derivative or gradient information generally cannot be used to determine the direction in which the function is increasing (or decreasing). In other words, the situation at one possible solution gives very little information about where to look for a better solution.
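To see why gradient information breaks down, consider a small hypothetical objective (this function and the finite-difference helper are illustrative assumptions, not taken from any particular solver): the floor() term introduces jumps and the abs() term introduces a kink, so a locally estimated slope can be wildly misleading.

```python
import math

# A hypothetical nonsmooth, discontinuous objective: floor() creates
# jump discontinuities and abs() creates a kink at x = 1.3.
def f(x):
    return abs(x - 1.3) + math.floor(x)

# A finite-difference "slope" estimate, as a gradient-based method
# might compute it.
def fd_slope(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

# On the smooth stretch near x = 1.5 the estimate is a sensible 1.0,
# but straddling the jump at x = 2.0 it blows up to roughly 1/(2h),
# saying nothing useful about where better solutions lie.
```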

In all but the simplest problems, it is impractical to exhaustively enumerate all of the possible solutions and pick the best one, even on a fast computer. Hence, most methods rely on some sort of random sampling of possible solutions. Such methods are nondeterministic or stochastic -- they may yield different solutions on different runs, even when started from the same point on the same model, depending on which points are randomly sampled.
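A minimal sketch of such stochastic sampling is pure random search (the objective, bounds, and sample count below are illustrative assumptions). Note that without a fixed seed, repeated runs can return different answers, which is exactly the nondeterminism described above.

```python
import random

# Illustrative nonsmooth, discontinuous objective to minimize.
def f(x):
    return abs(x - 3.7) + (1 if x > 5 else 0)

# Random search: sample candidate solutions uniformly and keep the best.
def random_search(n_samples=1000, lo=0.0, hi=10.0, seed=None):
    rng = random.Random(seed)
    best_x, best_val = None, None
    for _ in range(n_samples):
        x = rng.uniform(lo, hi)
        val = f(x)
        if best_val is None or val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

With a seed the run is reproducible; with `seed=None` each run samples different points and may return a different (though usually similar-quality) solution.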

Solving NSP Problems

Genetic or Evolutionary Algorithms offer one way to find "good" solutions to nonsmooth optimization problems. (In a genetic algorithm the problem is encoded in a series of bit strings that are manipulated by the algorithm; in an "evolutionary algorithm," the decision variables and problem functions are used directly. Most commercial Solver products are based on evolutionary algorithms.)
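The encoding distinction can be made concrete with a hypothetical bit-string representation of one decision variable (the 16-bit width and the [0, 10] range are illustrative assumptions). A classical genetic algorithm would mutate and recombine these strings; an evolutionary algorithm would operate on the numeric value directly.

```python
# Hypothetical bit-string encoding of a decision variable x in [0, 10],
# as a classical genetic algorithm might represent it.
BITS = 16

def encode(x, lo=0.0, hi=10.0):
    # Map x linearly onto the integers 0 .. 2**BITS - 1, then to binary.
    frac = (x - lo) / (hi - lo)
    n = round(frac * (2**BITS - 1))
    return format(n, f"0{BITS}b")

def decode(bits, lo=0.0, hi=10.0):
    # Invert the mapping; resolution is (hi - lo) / (2**BITS - 1).
    n = int(bits, 2)
    return lo + (hi - lo) * n / (2**BITS - 1)
```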

These algorithms maintain a population of candidate solutions, rather than a single best solution so far.  From existing candidate solutions, they generate new solutions through random mutation of single points, or through crossover (recombination) of two or more existing points.  The population is then subject to selection that tends to eliminate the worst candidate solutions and keep the best ones. This process is repeated, generating better and better solutions; however, there is no way for these methods to determine that a given solution is truly optimal.
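The mutation / crossover / selection loop above can be sketched as a minimal evolutionary algorithm (the objective, population size, mutation scale, and generation count are illustrative assumptions, not from any particular product).

```python
import random

# Illustrative nonsmooth objective to minimize.
def f(x):
    return abs(x - 3.7)

def evolve(pop_size=30, generations=50, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate solutions.
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            if rng.random() < 0.5:
                # Random mutation of a single existing point.
                child = rng.choice(pop) + rng.gauss(0, 0.5)
            else:
                # Crossover/recombination of two existing points.
                a, b = rng.sample(pop, 2)
                w = rng.random()
                child = w * a + (1 - w) * b
            children.append(child)
        # Selection: keep the best pop_size candidates overall.
        pop = sorted(pop + children, key=f)[:pop_size]
    return min(pop, key=f)
```

Note that the loop simply runs for a fixed number of generations; consistent with the text, nothing in the algorithm can certify that the final answer is truly optimal.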

Tabu Search and Scatter Search offer another way to find "good" solutions to nonsmooth optimization problems.  These algorithms also maintain a population of candidate solutions, rather than a single best solution so far, and they generate new solutions from old ones.  However, they rely less on random selection and more on deterministic methods.  Tabu search uses memory of past search results to guide the direction and intensity of future searches.  These methods generate successively better solutions, but as with genetic and evolutionary algorithms, there is no way to determine that a given solution is truly optimal.