A *solution* (a set of values for the decision variables) that satisfies all of the constraints in the Solver model is called a *feasible solution*. In some problems, a feasible solution is already known; in others, finding a feasible solution may be the hardest part of the problem.
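To make the idea concrete, here is a minimal sketch (in Python, outside Solver itself) of testing a candidate solution against a model's constraints. The model, variable names, and constraint limits below are hypothetical, chosen only for illustration:

```python
# Hypothetical model: two decision variables x1, x2 (units of two products),
# with non-negativity and two resource constraints.
def is_feasible(x1, x2):
    """Return True when the candidate solution satisfies every constraint."""
    constraints = [
        x1 >= 0,                 # non-negativity
        x2 >= 0,
        2 * x1 + 1 * x2 <= 100,  # labor hours available (assumed limit)
        1 * x1 + 3 * x2 <= 90,   # raw material available (assumed limit)
    ]
    return all(constraints)

print(is_feasible(30, 20))  # True  -- every constraint holds
print(is_feasible(60, 20))  # False -- exceeds the labor-hours limit
```

A solver must restrict its search to points where a check like this passes; the feasible solutions form the "search space" within which an optimal solution is sought.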

An *optimal solution* is a feasible solution where the objective function reaches its maximum (or minimum) value – for example, the most profit or the least cost. A *globally optimal solution* is one where there are no other feasible solutions with better objective function values. A *locally optimal solution* is one where there are no other feasible solutions “in the vicinity” with better objective function values – you can picture this as a point at the top of a “peak” or at the bottom of a “valley” which may be formed by the objective function and/or the constraints.

Solver is designed to find feasible and optimal solutions. In the best case, it will find the globally optimal solution – but this is not always possible. In other cases, it will find a locally optimal solution, and in still others, it will stop after a certain amount of time with the best solution it has found so far. But like many users, you may decide that it’s most important to find a *good solution* – one that is better than the solution, or set of choices, you are using now.

The kind of solution Solver can find depends on the mathematical relationships among the decision variables, the objective function, and the constraints, and on the solution algorithm used. As explained below, if your model is **smooth convex**, you can expect to find a globally optimal solution; if it is smooth but **non-convex**, you will usually be able to find a locally optimal solution; and if it is **non-smooth**, you may have to settle for a “good” solution that may or may not be optimal.
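The smooth convex case can be contrasted with the non-convex experiment above using the same kind of sketch in Python (the quadratic below is hypothetical, chosen only for illustration). Because a smooth convex function has a single “valley”, a descent method reaches the same, globally optimal point from any starting value:

```python
def g(x):
    # A smooth convex objective: a single valley with its bottom at x = 3.
    return (x - 3) ** 2

def descend(x, step=0.1, iters=200):
    """Simple gradient descent on g; the gradient is 2 * (x - 3)."""
    for _ in range(iters):
        x -= step * 2 * (x - 3)
    return x

# Every starting point converges to the same global minimum at x = 3.
for start in (-50.0, 0.0, 7.0, 100.0):
    print(start, "->", round(descend(start), 4))  # all print 3.0
```

This is why convexity is so valuable: for a smooth convex model, any locally optimal solution is automatically globally optimal, so a local-search algorithm suffices.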