
Mathematical optimization


Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.


In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. Optimization problems can be divided into two categories, depending on whether
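The general recipe just described, systematically choosing input values from an allowed set and computing the function value, can be sketched in a few lines. The quadratic objective and the grid below are illustrative assumptions, not details from the article:

```python
# Minimal sketch of optimization as "choose inputs, compute f, keep the best".
# The function f and the candidate grid are assumed for illustration.

def minimize_on_grid(f, candidates):
    """Return the candidate input with the smallest function value."""
    best_x = None
    best_val = float("inf")
    for x in candidates:
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

f = lambda x: (x - 2) ** 2            # assumed objective, minimum at x = 2
grid = [i * 0.5 for i in range(9)]    # allowed set: 0.0, 0.5, ..., 4.0
x_star, f_star = minimize_on_grid(f, grid)
assert x_star == 2.0 and f_star == 0.0
```

Exhaustive evaluation like this only works for small finite sets; the methods discussed later in the article (gradient-based conditions, branch and bound, Monte Carlo heuristics) exist precisely because most allowed sets are too large to enumerate.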

A molecular dynamics version of parallel tempering is usually known as replica-exchange molecular dynamics or REMD. Essentially, one runs N copies of the system, randomly initialized, at different temperatures. Then, based on the Metropolis criterion, one exchanges configurations at different temperatures. The idea of this method is to make configurations at high temperatures available to
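The replica-exchange scheme described here can be sketched on a toy problem. The double-well energy function, the temperature ladder, and the sweep count below are illustrative assumptions, not details from the article; only the structure (N randomly initialized copies, per-replica Metropolis moves, Metropolis-criterion swaps between neighbouring temperatures) follows the text:

```python
import math
import random

def energy(x):
    # Assumed toy landscape: double well with minima at x = +1 and x = -1.
    return (x * x - 1.0) ** 2

def metropolis_step(x, temp, step=0.5):
    """One Metropolis move of a single replica at temperature temp."""
    x_new = x + random.uniform(-step, step)
    d_energy = energy(x_new) - energy(x)
    if d_energy <= 0 or random.random() < math.exp(-d_energy / temp):
        return x_new
    return x

def attempt_swaps(xs, temps):
    """Try to exchange configurations between neighbouring temperatures,
    accepting with the Metropolis criterion for replica exchange."""
    for i in range(len(xs) - 1):
        d_beta = 1.0 / temps[i] - 1.0 / temps[i + 1]
        d_energy = energy(xs[i]) - energy(xs[i + 1])
        arg = d_beta * d_energy
        if arg >= 0 or random.random() < math.exp(arg):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

random.seed(0)
temps = [0.1, 0.5, 2.0]                          # cold to hot
xs = [random.uniform(-2.0, 2.0) for _ in temps]  # N randomly initialized copies
best_x = xs[0]
for sweep in range(2000):
    xs = [metropolis_step(x, t) for x, t in zip(xs, temps)]
    attempt_swaps(xs, temps)
    for x in xs:
        if energy(x) < energy(best_x):
            best_x = x
# best_x should sit near one of the wells at x = +1 or x = -1.
```

The hot replica crosses the barrier at x = 0 easily, and swaps make those configurations available to the cold replica, which then refines them, exactly the mechanism the article describes.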

A 'first-order condition' or a set of first-order conditions. Optima of equality-constrained problems can be found by the Lagrange multiplier method. The optima of problems with equality and/or inequality constraints can be found using the 'Karush–Kuhn–Tucker conditions'. While the first derivative test identifies points that might be extrema, this test does not distinguish a point that is

A candidate solution satisfies the first-order conditions, then the satisfaction of the second-order conditions as well is sufficient to establish at least local optimality. The envelope theorem describes how the value of an optimal solution changes when an underlying parameter changes. The process of computing this change is called comparative statics. The maximum theorem of Claude Berge (1963) describes

A good solution. Parallel tempering, also known as replica exchange MCMC sampling, is a simulation method aimed at improving the dynamic properties of Monte Carlo method simulations of physical systems, and of Markov chain Monte Carlo (MCMC) sampling methods more generally. The replica exchange method was originally devised by Swendsen, then extended by Geyer, and later developed, among others, by Giorgio Parisi. Sugita and Okamoto formulated

A local maximum; finally, if indefinite, then the point is some kind of saddle point. Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. Lagrangian relaxation can also provide approximate solutions to difficult constrained problems. Optimization problem In mathematics, engineering, computer science and economics, an optimization problem
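The Hessian-based classification of critical points can be illustrated for two variables, where positive-definite, negative-definite, and indefinite correspond to the familiar determinant/trace test for 2x2 symmetric matrices. The finite-difference setup and the test functions below are assumed for illustration:

```python
# Sketch: classify a critical point by the definiteness of the Hessian,
# approximated by central finite differences (assumed example functions).

def hessian_2x2(f, x, y, h=1e-4):
    """Finite-difference Hessian [[fxx, fxy], [fxy, fyy]] at (x, y)."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return fxx, fxy, fyy

def classify(f, x, y):
    fxx, fxy, fyy = hessian_2x2(f, x, y)
    det = fxx * fyy - fxy * fxy
    if det > 0:                       # definite Hessian
        return "local minimum" if fxx > 0 else "local maximum"
    if det < 0:                       # indefinite Hessian
        return "saddle point"
    return "inconclusive"             # semidefinite: test gives no answer

# x^2 + y^2 has a minimum at the origin; x^2 - y^2 has a saddle there.
assert classify(lambda x, y: x * x + y * y, 0.0, 0.0) == "local minimum"
assert classify(lambda x, y: x * x - y * y, 0.0, 0.0) == "saddle point"
```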

A minimum from one that is a maximum or one that is neither. When the objective function is twice differentiable, these cases can be distinguished by checking the second derivative or the matrix of second derivatives (called the Hessian matrix) in unconstrained problems, or the matrix of second derivatives of the objective function and the constraints, called the bordered Hessian, in constrained problems. The conditions that distinguish maxima, or minima, from other stationary points are called 'second-order conditions' (see 'Second derivative test'). If
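In one variable the second-order condition reduces to checking the sign of the second derivative at a stationary point. A small sketch, using the assumed example f(x) = x³ − 3x, whose derivative 3x² − 3 vanishes at x = ±1:

```python
# One-variable second derivative test (assumed example, not from the article).

def f(x):
    return x ** 3 - 3 * x

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def classify_stationary_point(f, x):
    """Classify a point where f'(x) = 0 by the sign of f''(x)."""
    d2 = second_derivative(f, x)
    if d2 > 0:
        return "local minimum"
    if d2 < 0:
        return "local maximum"
    return "inconclusive"  # test fails; higher-order information needed

# f'(x) = 3x^2 - 3 is zero at x = +1 and x = -1; f''(x) = 6x.
assert classify_stationary_point(f, 1.0) == "local minimum"
assert classify_stationary_point(f, -1.0) == "local maximum"
```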

A possibly nonlinear and non-convex continuous function f : Ω ⊂ ℝ^n → ℝ with the global minimum f* and the set of all global minimizers X* in Ω,

A simple 'yes' or 'no'. In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem. Global optimization Global optimization

A specific distance) and then optimize our travel decisions to identify the best path to follow, taking that uncertainty into account. Stochastic tunneling (STUN) is an approach to global optimization based on Monte Carlo sampling of the function to be minimized, in which the function is nonlinearly transformed to allow for easier tunneling among regions containing function minima. Easier tunneling allows for faster exploration of sample space and faster convergence to

A structural design, one would desire a design that is both light and rigid. When two objectives conflict, a trade-off must be created. There may be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and rigidity. The set of trade-off designs that improve upon one criterion at the expense of another is known as the Pareto set. The curve created by plotting weight against stiffness of



Is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the maximization of the real-valued function g(x) is equivalent to the minimization of the function f(x) := (−1)·g(x). Given
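The equivalence between maximization and minimization by negation is easy to demonstrate concretely; the concave function g and the grid below are assumed for illustration:

```python
# Sketch: maximizing g is the same problem as minimizing f(x) := -g(x).
# g and the candidate grid are assumed examples.

def argmax_on_grid(g, xs):
    return max(xs, key=g)

def argmin_on_grid(f, xs):
    return min(xs, key=f)

g = lambda x: -(x - 3) ** 2 + 5   # concave, peak at x = 3
f = lambda x: -g(x)               # negated objective
xs = list(range(7))               # candidate set: 0..6

# Both formulations identify the same point.
assert argmax_on_grid(g, xs) == argmin_on_grid(f, xs) == 3
```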

Is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m₀. For example, if there is a graph G which contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with
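The optimization/decision pairing in the path example can be made concrete with breadth-first search; the small graph below is an assumed instance, not from the article:

```python
from collections import deque

# Optimization version: fewest edges on a u-v path, found by BFS.
# Decision version: is there a u-v path with at most k edges?
# The graph is an assumed example.

def fewest_edges(graph, u, v):
    """Length (in edges) of a shortest u-v path, or None if no path exists."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None

def path_within(graph, u, v, k):
    """Decision version: answered with a simple yes/no."""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k

graph = {"u": ["a", "b"], "a": ["v"], "b": ["a"]}
assert fewest_edges(graph, "u", "v") == 2        # u -> a -> v
assert path_within(graph, "u", "v", 10) is True
assert path_within(graph, "u", "v", 1) is False
```

Note how the decision answer ("yes, within 10 edges") carries less information than the optimization answer (exactly 2), which is the inadequacy the article goes on to discuss.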

Is a method developed by mathematicians since the 1950s and 1960s as an approach to putting bounds on rounding errors and measurement errors in mathematical computation, and thus developing numerical methods that yield reliable results. Interval arithmetic helps find reliable and guaranteed solutions to equations and optimization problems. Real algebra is the part of algebra which is relevant to real algebraic (and semialgebraic) geometry. It
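The guaranteed-enclosure idea behind interval arithmetic can be sketched with a toy interval class. This covers only addition and multiplication and ignores directed rounding, which a real interval library would also control; the measurement example is assumed:

```python
# Minimal interval-arithmetic sketch: every operation returns an interval
# guaranteed to contain all possible results of the operation.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is bounded by the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

# A measurement known only to lie in [1.9, 2.1], squared and shifted by 1:
x = Interval(1.9, 2.1)
result = x * x + Interval(1.0, 1.0)
# The true value for any admissible x is guaranteed to lie in the result.
assert result.contains(2.0 * 2.0 + 1.0)
```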

Is always necessary to continuously evaluate the quality of a data model by using a cost function where a minimum implies a set of possibly optimal parameters with an optimal (lowest) error. Typically, A is some subset of the Euclidean space ℝ^n, often specified by a set of constraints, equalities or inequalities that

Is delegated to the decision maker. In other words, defining the problem as multi-objective optimization signals that some information is missing: desirable objectives are given but combinations of them are not rated relative to each other. In some cases, the missing information can be derived by interactive sessions with the decision maker. Multi-objective optimization problems have been generalized further into vector optimization problems where

Is distinguished from local optimization by its focus on finding the minimum or maximum over the given set, as opposed to finding local minima or maxima. Finding an arbitrary local minimum is relatively straightforward by using classical local optimization methods. Finding the global minimum of a function is far more difficult: analytical methods are frequently not applicable, and the use of numerical solution strategies often leads to very hard challenges. Typical examples of global optimization applications include: The most successful general exact strategies are: In both of these strategies,

Is mostly concerned with the study of ordered fields and ordered rings (in particular real closed fields) and their applications to the study of positive polynomials and sums-of-squares of polynomials. It can be used in convex optimization. Several exact or inexact Monte-Carlo-based algorithms exist: In this method, random simulations are used to find an approximate solution. Example: The traveling salesman problem

Is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined". Consider the following notation: or equivalently This represents the value (or values) of the argument x in the interval (−∞,−1] that minimizes (or minimize) the objective function x² + 1 (the actual minimum value of that function is not what the problem asks for). In this case,

Is not guaranteed that different solutions will be obtained even with different starting points in multiple runs of the algorithm. Common approaches to global optimization problems, where multiple local extrema may be present, include evolutionary algorithms, Bayesian optimization and simulated annealing. The satisfiability problem, also called the feasibility problem, is just the problem of finding any feasible solution at all without regard to objective value. This can be regarded as
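Of the heuristics listed here, simulated annealing is the simplest to sketch: propose random moves and accept uphill ones with a probability that shrinks as a "temperature" is lowered. The multimodal test function, cooling schedule, and step distribution below are assumed for illustration:

```python
import math
import random

def objective(x):
    # Assumed multimodal test function: global minimum 0 at x = 0,
    # with shallow local minima away from the origin.
    return x * x + 10.0 * (1.0 - math.cos(x))

def simulated_annealing(steps=20000, t0=5.0):
    x = random.uniform(-10.0, 10.0)   # random starting point
    best = x
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-3   # simple linear cooling schedule
        candidate = x + random.gauss(0.0, 1.0)
        d_obj = objective(candidate) - objective(x)
        # Metropolis acceptance: always accept downhill, sometimes uphill.
        if d_obj <= 0 or random.random() < math.exp(-d_obj / temp):
            x = candidate
        if objective(x) < objective(best):
            best = x
    return best

random.seed(1)
best = simulated_annealing()
# best should end up close to the global minimum at x = 0.
```

Early high-temperature moves let the search escape local minima; the late low-temperature phase behaves like local descent, which is why, unlike the classical techniques criticized above, different runs can still land in different basins.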

Is null or negative. The extreme value theorem of Karl Weierstrass states that a continuous real-valued function on a compact set attains its maximum and minimum value. More generally, a lower semi-continuous function on a compact set attains its minimum; an upper semi-continuous function on a compact set attains its maximum. One of Fermat's theorems states that optima of unconstrained problems are found at stationary points, where



Is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete. The standard form of a continuous optimization problem is

minimize_x f(x)
subject to g_i(x) ≤ 0, i = 1, …, m
h_j(x) = 0, j = 1, …, p

where the g_i are the inequality constraint functions and the h_j the equality constraint functions. If m = p = 0,
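A brute-force reading of this standard form can be sketched directly: check g_i(x) ≤ 0 and h_j(x) = 0 for feasibility, then minimize f over the feasible candidates. The concrete objective, constraint, and grid below are assumed for illustration:

```python
# Sketch of the standard form: minimize f(x) subject to g_i(x) <= 0 and
# h_j(x) = 0. The example problem and candidate grid are assumed.

def is_feasible(x, gs, hs, tol=1e-9):
    """g_i(x) <= 0 for all i, and h_j(x) = 0 (within tol) for all j."""
    return all(g(x) <= tol for g in gs) and all(abs(h(x)) <= tol for h in hs)

def best_feasible(f, gs, hs, candidates):
    """Brute-force minimizer of f over the feasible candidates."""
    feasible = [x for x in candidates if is_feasible(x, gs, hs)]
    return min(feasible, key=f) if feasible else None

# minimize (x - 2)^2  subject to  x - 1 <= 0  (i.e. x <= 1), no equalities:
f = lambda x: (x - 2) ** 2
gs = [lambda x: x - 1]
hs = []
candidates = [i * 0.25 for i in range(-8, 9)]  # grid on [-2, 2]
assert best_feasible(f, gs, hs, candidates) == 1.0
```

The unconstrained minimizer x = 2 is infeasible here, so the constrained optimum sits on the boundary g(x) = 0, the situation the Karush–Kuhn–Tucker conditions mentioned elsewhere in the article are designed to characterize.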

Is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. However, suppose that instead of wanting to minimize the total distance traveled to visit each desired destination, we wanted to minimize

The variables are continuous or discrete: An optimization problem can be represented in the following way: Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Since

The (partial) ordering is no longer given by the Pareto ordering. Optimization problems are often multi-modal; that is, they possess multiple good solutions. They could all be globally good (same cost function value) or there could be a mix of globally good and locally good solutions. Obtaining all (or at least some of) the multiple solutions is the goal of a multi-modal optimizer. Classical optimization techniques, owing to their iterative approach, do not perform satisfactorily when they are used to obtain multiple solutions, since it

The answer is x = −1, since x = 0 is infeasible; that is, it does not belong to the feasible set. Similarly, or equivalently represents the {x, y} pair (or pairs) that maximizes (or maximize) the value of the objective function x cos y, with the added constraint that x lie in the interval [−5,5] (again, the actual maximum value of the expression does not matter). In this case,

The best designs is known as the Pareto frontier. A design is judged to be "Pareto optimal" (equivalently, "Pareto efficient" or in the Pareto set) if it is not dominated by any other design: if it is worse than another design in some respects and no better in any respect, then it is dominated and is not Pareto optimal. The choice among "Pareto optimal" solutions to determine the "favorite solution"

The continuity of an optimal solution as a function of underlying parameters. For unconstrained problems with twice-differentiable functions, some critical points can be found by finding the points where the gradient of the objective function is zero (that is, the stationary points). More generally, a zero subgradient certifies that a local minimum has been found for minimization problems with convex functions and other locally Lipschitz functions, which meet in loss function minimization of

The first derivative or the gradient of the objective function is zero (see first derivative test). More generally, they may be found at critical points, where the first derivative or gradient of the objective function is zero or is undefined, or on the boundary of the choice set. An equation (or set of equations) stating that the first derivative(s) equal(s) zero at an interior optimum is called

The following is valid: it suffices to solve only minimization problems. However, the opposite perspective of considering only maximization problems would be valid, too. Problems formulated using this technique in the fields of physics may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. In machine learning, it

The following notation: This denotes the minimum value of the objective function x² + 1, when choosing x from the set of real numbers ℝ. The minimum value in this case is 1, occurring at x = 0. Similarly, the notation asks for the maximum value of the objective function 2x, where x may be any real number. In this case, there


The former as actual solutions to the original problem. Global optimization is the branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a nonconvex problem. Optimization problems are often expressed with special notation. Here are some examples: Consider

The members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. The function f is variously called an objective function, criterion function, loss function, cost function (minimization), utility function or fitness function (maximization), or, in certain fields, an energy function or energy functional. A feasible solution that minimizes (or maximizes)

The neural network. Positive-negative momentum estimation can help avoid local minima and converge toward the global minimum of the objective function. Further, critical points can be classified using the definiteness of the Hessian matrix: if the Hessian is positive definite at a critical point, then the point is a local minimum; if the Hessian matrix is negative definite, then the point is

The objective function is called an optimal solution. In mathematics, conventional optimization problems are usually stated in terms of minimization. A local minimum x* is defined as an element for which there exists some δ > 0 such that f(x*) ≤ f(x) holds for all feasible x within distance δ of x*; that is to say, on some region around x* all of the function values are greater than or equal to

The problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function. Formally, a combinatorial optimization problem A is a quadruple (I, f, m, g), where I is a set of instances, f(x) is the set of feasible solutions for an instance x, m(x, y) is the measure of a feasible solution y, and g is the goal function (min or max). The goal is then to find, for some instance x, an optimal solution, that is, a feasible solution y with m(x, y) = g{ m(x, y′) : y′ ∈ f(x) }. For each combinatorial optimization problem, there

The root. The algorithm explores branches of this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. Interval arithmetic, interval mathematics, interval analysis, or interval computation,
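The branch-and-bound pattern just described, branching on subsets of the solution set and pruning branches whose bound cannot beat the incumbent, can be sketched on an assumed 0/1 knapsack instance (the fractional bound and the specific numbers are illustrative choices, not from the article):

```python
# Branch-and-bound sketch for 0/1 knapsack: each tree node fixes items
# in or out; a fractional (greedy) bound prunes hopeless branches.
# The instance below is an assumed example.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Sort by value density so the fractional bound is easy to compute.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    vals = [values[i] for i in order]
    wts = [weights[i] for i in order]
    best = 0

    def bound(i, value, room):
        """Optimistic bound: greedily fill the remaining room, fractionally."""
        b = value
        while i < n and wts[i] <= room:
            room -= wts[i]
            b += vals[i]
            i += 1
        if i < n:
            b += vals[i] * room / wts[i]
        return b

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value                      # new incumbent
        if i == n or bound(i, value, room) <= best:
            return                            # prune: cannot beat incumbent
        if wts[i] <= room:
            branch(i + 1, value + vals[i], room - wts[i])  # take item i
        branch(i + 1, value, room)                          # skip item i

    branch(0, 0, capacity)
    return best

# Optimum here is 90: the two weight-5 items together beat the weight-10 item.
assert knapsack_bb([60, 50, 40], [10, 5, 5], 10) == 90
```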

The set of feasible elements), it is also the global minimum, but a nonconvex problem may have more than one local minimum, not all of which need be global minima. A large number of algorithms proposed for solving the nonconvex problems – including the majority of commercially available solvers – are not capable of making a distinction between locally optimal solutions and globally optimal solutions, and will treat

The set over which a function is to be optimized is approximated by polyhedra. In inner approximation, the polyhedra are contained in the set, while in outer approximation, the polyhedra contain the set. The cutting-plane method is an umbrella term for optimization methods which iteratively refine a feasible set or objective function by means of linear inequalities, termed cuts. Such procedures are popularly used to find integer solutions to mixed integer linear programming (MILP) problems, as well as to solve general, not necessarily differentiable, convex optimization problems. The use of cutting planes to solve MILP

The simulations at low temperatures, and vice versa. This results in a very robust ensemble which is able to sample both low and high energy configurations. In this way, thermodynamical properties such as the specific heat, which is in general not well computed in the canonical ensemble, can be computed with great precision. Other approaches include heuristic strategies to search the search space in

The solutions are the pairs of the form {5, 2kπ} and {−5, (2k + 1)π}, where k ranges over all integers. The operators arg min and arg max are sometimes also written as argmin and argmax, and stand for argument of the minimum and argument of the maximum. Fermat and Lagrange found calculus-based formulae for identifying optima, while Newton and Gauss proposed iterative methods for moving towards an optimum. The term "linear programming" for certain optimization cases


The special case of mathematical optimization where the objective value is the same for every solution, and thus any solution is optimal. Many optimization algorithms need to start from a feasible point. One way to obtain such a point is to relax the feasibility conditions using a slack variable; with enough slack, any starting point is feasible. Then, minimize that slack variable until the slack
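The slack-variable idea can be sketched directly: relax each constraint g(x) ≤ 0 to g(x) ≤ s, so any starting point is feasible for large enough s, then drive s (the worst constraint violation) down to zero. The one-dimensional constraints and the crude coordinate search below are assumed for illustration:

```python
# Sketch: find a feasible point by minimizing the slack (worst violation).
# The constraints and the greedy 1-D search are assumed examples.

def slack(x, gs):
    """Smallest s making x feasible for the relaxed problem g_i(x) <= s."""
    return max(max(g(x) for g in gs), 0.0)

def find_feasible(gs, x0, step=0.1, iters=1000):
    """Greedy search that reduces the slack until it reaches zero."""
    x = x0
    for _ in range(iters):
        if slack(x, gs) == 0.0:
            return x                     # feasible for the original problem
        moves = [x + step, x - step]
        x = min(moves, key=lambda m: slack(m, gs))
    return x

# Constraints: x >= 2 and x <= 3, written in the form g(x) <= 0.
gs = [lambda x: 2 - x, lambda x: x - 3]
x = find_feasible(gs, x0=0.0)            # start from an infeasible point
assert slack(x, gs) == 0.0 and 2 <= x <= 3
```

At the infeasible start x = 0 the slack is 2; each step moves toward the feasible interval [2, 3] because that strictly reduces the worst violation, terminating once the slack is null.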

The standard minimization problem can be given as min_{x ∈ Ω} f(x), that is, finding f* and a global minimizer in X*, where Ω is a (not necessarily convex) compact set defined by inequalities g_i(x) ≥ 0, i = 1, …, r. Global optimization

The theoretical aspects of linear programming (like the theory of duality) around the same time. Other notable researchers in mathematical optimization include the following: In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time): Adding more than one objective to an optimization problem adds complexity. For example, to optimize

The total time needed to reach each destination. This goes beyond conventional optimization since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, to determine our optimal path we would want to use simulation-optimization to first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than

The value at that element. Local maxima are defined similarly. While a local minimum is at least as good as any nearby elements, a global minimum is at least as good as every feasible element. Generally, unless the objective function is convex in a minimization problem, there may be several local minima. In a convex problem, if there is a local minimum that is interior (not on the edge of

Was due to George B. Dantzig, although much of the theory had been introduced by Leonid Kantorovich in 1939. (Programming in this context does not refer to computer programming, but comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig studied at that time.) Dantzig published the Simplex algorithm in 1947, and John von Neumann and other researchers worked on

Was introduced by Ralph E. Gomory and Václav Chvátal. Branch and bound (BB or B&B) is an algorithm design paradigm for discrete and combinatorial optimization problems. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means of state space search: the set of candidate solutions is thought of as forming a rooted tree with the full set at
