In mathematics and computer science, an optimization problem is the problem of finding the best solution out of all feasible solutions.
Types of Optimization Techniques
An essential first step in applying an optimization technique is to categorize the optimization model, since the algorithms used to solve optimization problems are tailored to the nature of the problem. Let us walk through the main types of optimization problems:
Continuous Optimization versus Discrete Optimization
Models with discrete variables are discrete optimization problems, while models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete ones. In a discrete optimization problem, the aim is to find an object such as an integer, permutation, or graph from a countable set. However, improvements in algorithms, coupled with advances in computing technology, have steadily increased the size and complexity of discrete optimization problems that can be solved efficiently. Note that continuous optimization algorithms also matter in discrete optimization, because many discrete optimization algorithms generate a sequence of continuous sub-problems.
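The contrast above can be sketched on a single toy objective, f(x) = (x - 2.7)^2 (a made-up function for illustration): the continuous version searches over all real x, while the discrete version restricts x to a countable set of integers.

```python
# Toy illustration: the same objective treated as a continuous
# and as a discrete optimization problem.

def f(x):
    return (x - 2.7) ** 2

# Continuous: minimize over real x in [0, 10] with a simple ternary search.
lo, hi = 0.0, 10.0
for _ in range(100):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if f(m1) < f(m2):
        hi = m2
    else:
        lo = m1
x_cont = (lo + hi) / 2  # converges to 2.7

# Discrete: restrict x to the countable set {0, 1, ..., 10} and
# pick the best feasible object directly.
x_disc = min(range(0, 11), key=f)  # the nearest integer, 3

print(round(x_cont, 3), x_disc)
```

Note how the discrete optimum (3) is not simply obtainable by solving the continuous problem and rounding in general; here it happens to be, but for harder problems the two answers can differ substantially.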
Unconstrained Optimization versus Constrained Optimization
An essential distinction between optimization problems is between problems in which there are no constraints on the variables and problems in which there are constraints on the variables.
Unconstrained optimization problems arise directly in many practical applications, and also in the reformulation of constrained optimization problems. Constrained optimization problems arise from applications in which there are explicit constraints on the variables. They are further subdivided according to the nature of the constraints, such as linear, nonlinear, or convex, and the smoothness of the functions, such as differentiable or non-differentiable.
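A minimal sketch of this distinction, assuming SciPy is available: the same quadratic objective is solved first without constraints, then with one linear inequality constraint added, which pushes the optimum onto the constraint boundary. The objective and constraint are made up for illustration.

```python
# Unconstrained vs. constrained minimization of the same objective.
from scipy.optimize import minimize

def objective(v):
    x, y = v
    return (x - 1) ** 2 + (y - 2.5) ** 2

# Unconstrained: the minimizer is simply the point (1, 2.5).
unconstrained = minimize(objective, x0=[0.0, 0.0])

# Constrained: add the linear inequality x + y <= 2; the optimum
# moves onto the boundary of the feasible region.
cons = {"type": "ineq", "fun": lambda v: 2 - v[0] - v[1]}
constrained = minimize(objective, x0=[0.0, 0.0], constraints=cons)

print(unconstrained.x)  # ≈ [1.0, 2.5]
print(constrained.x)    # ≈ [0.25, 1.75]
```

The constrained optimum (0.25, 1.75) can be checked by hand with a Lagrange-multiplier argument on the active constraint x + y = 2.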
None, One, or Many Objectives
Although most optimization problems have a single objective function, there are notable cases in which optimization problems have either no objective function or multiple objective functions. Multi-objective optimization problems arise in fields such as engineering, economics, and logistics. Often, problems with multiple objectives are reformulated as single-objective problems.
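One common reformulation of a multi-objective problem into a single-objective one is the weighted sum. The two objectives below (the names and numbers are purely illustrative) pull the decision variable in different directions, and the scalarized problem trades them off with a single weight.

```python
# Weighted-sum scalarization of a two-objective problem.

def cost(x):       # first objective, minimized at x = 4
    return (x - 4) ** 2

def emissions(x):  # second objective, minimized at x = 1
    return (x - 1) ** 2

def scalarized(x, w=0.5):
    # One objective, one knob: the weight w balances the two goals.
    return w * cost(x) + (1 - w) * emissions(x)

# Grid search over candidate x values for the scalarized problem.
best = min((scalarized(x / 100), x / 100) for x in range(0, 601))[1]
print(best)  # 2.5 for equal weights, midway between the two optima
```

Sweeping the weight w from 0 to 1 traces out different trade-offs between the two objectives; each weight yields one point on the trade-off curve.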
Deterministic Optimization versus Stochastic Optimization
In deterministic optimization, the data for the given problem are known accurately. Sometimes, however, the data cannot be known precisely, for a variety of reasons. A simple measurement error can be one cause; another is that some data describe information about the future and hence cannot be known with certainty. When such uncertainty is incorporated into the model, the problem is called stochastic optimization.
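One simple way to handle such uncertainty is sample average approximation: the unknown expectation in the objective is replaced by an average over sampled scenarios. The sketch below (a made-up demand-matching setting, not from the original text) illustrates the idea.

```python
# Sample average approximation for a toy stochastic optimization problem:
# choose a decision x that minimizes the expected squared mismatch with
# an uncertain future demand D.
import random

random.seed(0)
demand_samples = [random.gauss(100, 15) for _ in range(5000)]

def sample_average_cost(x):
    # The expectation E[(x - D)^2] is replaced by its sample average.
    return sum((x - d) ** 2 for d in demand_samples) / len(demand_samples)

# If demand were known exactly (the deterministic case), the best decision
# would trivially equal it. Under uncertainty, we minimize the sample
# average; for squared loss the optimum is near the sample mean of demand.
x_best = min(range(50, 151), key=sample_average_cost)
print(x_best)  # close to the mean demand of 100
```

As the number of sampled scenarios grows, the sample-average solution converges to the solution of the true stochastic problem.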
Two widely used classes of optimization problems are:
Linear Programming: In a linear programming (LP) problem, the objective and all of the constraints are linear functions of the decision variables.
Because all linear functions are convex, linear programming problems are inherently easier to solve than general nonlinear problems.
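A minimal LP sketch, assuming SciPy is available: maximize 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, and x, y ≥ 0 (a made-up problem for illustration). `linprog` minimizes by convention, so the objective coefficients are negated.

```python
# Solving a small linear program with scipy.optimize.linprog.
from scipy.optimize import linprog

c = [-3, -2]                # negated: maximize 3x + 2y as a minimization
A_ub = [[1, 1], [1, 3]]     # left-hand sides of the <= constraints
b_ub = [4, 6]               # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print(res.x, -res.fun)      # optimal point (4, 0), maximized objective 12
```

Because both the objective and the feasible region are linear, the optimum lies at a vertex of the feasible polygon, here (4, 0).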
Quadratic Programming: In a quadratic programming (QP) problem, the objective is a quadratic function of the decision variables, and the constraints are all linear functions of the variables.
A widely used quadratic programming problem is the Markowitz mean-variance portfolio optimization problem, in which the objective is the portfolio variance and the linear constraints dictate a lower bound on portfolio return.
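A sketch of such a Markowitz-style QP, assuming SciPy; the expected returns, covariance matrix, and return target below are invented for illustration. The quadratic objective is the portfolio variance w'Σw, and the constraints (full investment, a return floor) are linear in the weights.

```python
# A toy Markowitz mean-variance problem solved as a QP via SLSQP.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.10, 0.12, 0.07])       # assumed expected returns
Sigma = np.array([[0.09, 0.01, 0.00],   # assumed covariance matrix
                  [0.01, 0.16, 0.02],
                  [0.00, 0.02, 0.04]])
target = 0.09                           # required minimum portfolio return

def variance(w):
    return w @ Sigma @ w                # quadratic objective

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1},      # fully invested
    {"type": "ineq", "fun": lambda w: mu @ w - target},  # return floor
]
res = minimize(variance, x0=np.full(3, 1 / 3),
               bounds=[(0, 1)] * 3, constraints=constraints)

print(res.x.round(3), variance(res.x))  # minimum-variance weights
```

Dedicated QP solvers exploit the problem's structure more efficiently than a general nonlinear routine, but the general-purpose solver keeps the sketch short.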
Linear and Quadratic programming
Optimization is something we all practice; it is a way of life. We all want to make the most of our available time and be productive. Optimization finds uses ranging from managing personal time to solving supply chain problems.
Previously, we learned that optimization refers to the process of finding the best solution out of all feasible solutions. Two of its most important subclasses are linear programming and quadratic programming. Let us walk through each.
Linear programming is a simple technique for finding the best outcome, or more precisely optimum points, from relationships modeled as linear. In simple words, the real relationships may be much more complex, but they can be approximated by linear relationships.
Linear programming is widely used in optimization for several reasons:
- In operations research, many complex real-life problems can be expressed as linear programming problems.
- Many algorithms for certain types of optimization problems work by solving linear programming problems as sub-problems.
- Many key concepts of optimization theory, such as duality, decomposition, convexity, and convexity generalizations, were inspired by and derived from ideas in linear programming.
- Linear programming was used in the early formation of microeconomics, and it is still used in planning, production, transportation, technology, and other areas.
Quadratic programming is a method for solving a special class of optimization problems, in which a quadratic objective function is optimized (minimized or maximized) subject to one or more linear constraints. Quadratic programming is a special case of nonlinear programming.
The objective function in QP may contain bilinear terms or polynomial terms up to second order. The constraints are usually linear and can be both equalities and inequalities.
Quadratic programming is widely used in optimization. Common applications include:
- Image and signal processing
- Optimization of financial portfolios
- Performing the least-squares method of regression
- Controlling scheduling in chemical plants
- Solving more complex non-linear programming problems
- Usage in operations research and statistical work
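One item in the list above, least-squares regression, is itself a quadratic program: it minimizes the quadratic objective ||Xb - y||^2 with no constraints, so the minimizer has a closed form via the normal equations. A small sketch with made-up data:

```python
# Ordinary least squares as an unconstrained QP, solved via the
# normal equations (X^T X) b = X^T y.
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 10, 50)])  # intercept + one feature
true_b = np.array([2.0, 0.5])                               # assumed true coefficients
y = X @ true_b + rng.normal(0, 0.1, 50)                     # noisy observations

# Minimizing the quadratic objective ||Xb - y||^2 reduces to a linear solve.
b_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(b_hat)  # close to [2.0, 0.5]
```

When linear constraints are added to the coefficients (for example, requiring them to be nonnegative), the problem remains a QP but the closed form no longer applies, and a constrained QP solver is needed.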