Optimization Methods in Finance
Optimization models play an increasingly important role in financial de-
cisions. Many computational finance problems ranging from asset allocation
to risk management, from option pricing to model calibration can be solved
efficiently using modern optimization techniques. This course discusses sev-
eral classes of optimization problems (including linear, quadratic, integer,
dynamic, stochastic, conic, and robust programming) encountered in finan-
cial models. For each problem class, after introducing the relevant theory
(optimality conditions, duality, etc.) and efficient solution methods, we dis-
cuss several problems of mathematical finance that can be modeled within
this problem class. In addition to classical and well-known models such
as Markowitz’ mean-variance optimization model we present some newer
optimization models for a variety of financial problems.
Introduction
Optimization is a branch of applied mathematics that derives its importance
both from the wide variety of its applications and from the availability of
efficient algorithms. Mathematically, it refers to the minimization (or max-
imization) of a given objective function of several decision variables that
satisfy functional constraints. A typical optimization model addresses the
allocation of scarce resources among possible alternative uses in order to
maximize an objective function such as total profit.
Decision variables, the objective function, and constraints are three es-
sential elements of any optimization problem. Problems that lack constraints
are called unconstrained optimization problems, while others are often re-
ferred to as constrained optimization problems. Problems with no objective
functions are called feasibility problems. Some problems may have multiple
objective functions. These problems are often addressed by reducing them
to a single-objective optimization problem or a sequence of such problems.
If the decision variables in an optimization problem are restricted to
integers, or to a discrete set of possibilities, we have an integer or discrete
optimization problem. If there are no such restrictions on the variables, the
problem is a continuous optimization problem. Of course, some problems
may have a mixture of discrete and continuous variables. We continue with
a list of problem classes that we will encounter in this book.
1.1 Optimization Problems
We start with a generic description of an optimization problem. Given a
function f(x) : IRn → IR and a set S ⊂ IRn, the problem of finding an
x∗ ∈ IRn that solves
min_x f(x)
s.t.  x ∈ S                    (1.1)
is called an optimization problem. We refer to f as the objective function and
to S as the feasible region. If S is empty, the problem is called infeasible. If
it is possible to find a sequence xk ∈ S such that f(xk) → −∞ as k → +∞,
then the problem is unbounded. If the problem is neither infeasible nor
unbounded, then it is often possible to find a solution x∗ ∈ S that satisfies
f(x∗) ≤ f(x), ∀x ∈ S.
Such an x∗ is called a global minimizer of the problem (1.1). If
f(x∗) < f(x), ∀x ∈ S, x ≠ x∗,
then x∗ is a strict global minimizer. In other instances, we may only find an
x∗ ∈ S that satisfies
f(x∗) ≤ f(x), ∀x ∈ S ∩ Bx∗(ε)
for some ε > 0, where Bx∗(ε) is the open ball with radius ε centered at x∗,
i.e.,
Bx∗(ε) = {x : ‖x − x∗‖ < ε}.
Such an x∗ is called a local minimizer of the problem (1.1). A strict local
minimizer is defined similarly.
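To make the distinction concrete: for f(x) = 2x³ − 3x² on S = [−1, 2], the point x∗ = 1 is a strict local minimizer with f(1) = −1, while the global minimizer is the boundary point x∗ = −1 with f(−1) = −5. The following sketch, assuming SciPy is available and with the function and starting points chosen purely for illustration, shows how a local method can return either point depending on where it starts.

```python
from scipy.optimize import minimize

# f(x) = 2x^3 - 3x^2, minimized over the feasible region S = [-1, 2].
def f(x):
    return 2 * x[0] ** 3 - 3 * x[0] ** 2

# Starting at x0 = 0.5, descent leads to the strict local
# minimizer x* = 1, where f = -1.
local = minimize(f, x0=[0.5], bounds=[(-1, 2)])

# Starting at x0 = -0.5, descent leads to the boundary point
# x* = -1, the global minimizer, where f = -5.
best = minimize(f, x0=[-0.5], bounds=[(-1, 2)])

print(local.x, local.fun)  # approximately [1.0], -1.0
print(best.x, best.fun)    # approximately [-1.0], -5.0
```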
In most cases, the feasible set S is described explicitly using functional
constraints (equalities and inequalities). For example, S may be given as
S := {x : gi(x) = 0, i ∈ E and gi(x) ≥ 0, i ∈ I},
where E and I are the index sets for equality and inequality constraints.
Then, our generic optimization problem takes the following form:
min_x f(x)
s.t.  gi(x) = 0, i ∈ E
      gi(x) ≥ 0, i ∈ I.                    (1.2)
Many factors affect whether optimization problems can be solved effi-
ciently. For example, the number n of decision variables, and the total num-
ber of constraints |E| + |I|, are generally good predictors of how difficult
it will be to solve a given optimization problem. Other factors are related
to the properties of the functions f and gi that define the problem. Prob-
lems with a linear objective function and linear constraints are easier, as are
problems with convex objective functions and convex feasible sets. For this
reason, instead of general purpose optimization algorithms, researchers have
developed different algorithms for problems with special characteristics. We
list the main types of optimization problems we will encounter; more
complete taxonomies can be found in standard optimization references.
1.1.1 Linear and Nonlinear Programming
One of the most common and easiest optimization problems is linear opti-
mization or linear programming (LP). It is the problem of optimizing a linear
objective function subject to linear equality and inequality constraints. This
corresponds to the case where the functions f and gi in (1.2) are all linear.
If either f or one of the functions gi is not linear, then the resulting problem
is a nonlinear programming (NLP) problem.
The standard form of the LP is given below:
min_x c^T x
s.t.  Ax = b
      x ≥ 0,                    (1.3)
where A ∈ IRm×n, b ∈ IRm, c ∈ IRn are given, and x ∈ IRn is the variable
vector to be determined. In this book, a k-vector is also viewed as a k × 1
matrix. For an m × n matrix M, the notation M^T denotes the transpose
matrix, namely the n × m matrix with entries (M^T)_{ij} = M_{ji}. As an
example, in the above formulation c^T is a 1 × n matrix and c^T x is the
1 × 1 matrix with entry ∑_{j=1}^n c_j x_j. The objective in (1.3) is to
minimize the linear function ∑_{j=1}^n c_j x_j.
As with (1.2), the problem (1.3) is said to be feasible if its constraints are
consistent and it is called unbounded if there exists a sequence of feasible vec-
tors {xk} such that c^T xk → −∞. When (1.3) is feasible but not unbounded
it has an optimal solution, i.e., a vector x that satisfies the constraints and
minimizes the objective value among all feasible vectors. Similar definitions
apply to nonlinear programming problems.
The best known and most successful methods for solving LPs are the
simplex method and interior-point methods. NLPs can be solved using
gradient search techniques as well as approaches based on Newton’s method
such as interior-point and sequential quadratic programming methods.
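To make this concrete, here is a minimal sketch that solves a small instance of (1.3) with SciPy's linprog routine (recent SciPy versions dispatch to the HiGHS solver, which provides simplex- and interior-point-type methods); the data A, b, c are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data for the standard-form LP (1.3):
#   min c^T x   s.t.   Ax = b,  x >= 0.
c = np.array([2.0, 3.0, 1.0])
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([4.0, 2.0])

# linprog's default variable bounds are (0, None), i.e. x >= 0,
# so the standard form maps directly onto A_eq / b_eq.
res = linprog(c, A_eq=A, b_eq=b)

print(res.status)  # 0 indicates an optimal solution was found
print(res.x)       # optimal vector, here (0, 2, 2)
print(res.fun)     # optimal value, here 8.0
```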
1.1.2 Quadratic Programming
A more general optimization problem is the quadratic optimization or the
quadratic programming (QP) problem, where the objective function is now
a quadratic function of the variables. The standard form QP is defined as
follows:
min_x (1/2) x^T Qx + c^T x
s.t.  Ax = b
      x ≥ 0,                    (1.4)
where A ∈ IRm×n, b ∈ IRm, c ∈ IRn, Q ∈ IRn×n are given, and x ∈ IRn.
Since x^T Qx = (1/2) x^T (Q + Q^T) x, one can assume without loss of
generality that Q is symmetric, i.e., Q_{ij} = Q_{ji}.
The objective function of the problem (1.4) is a convex function of x
when Q is a positive semidefinite matrix, i.e., when y^T Qy ≥ 0 for all y
(see the Appendix for a discussion on convex functions). This condition is
equivalent to Q having only nonnegative eigenvalues. When this condition
is satisfied, the QP problem is a convex optimization problem and can be
solved in polynomial time using interior-point methods. Here we are referring
to a classical notion used to measure computational complexity. Polynomial
time algorithms are efficient in the sense that they always find an optimal
solution in an amount of time that is guaranteed to be at most a polynomial
function of the input size.
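As an illustration, the sketch below first applies the eigenvalue test for positive semidefiniteness mentioned above and then solves a small instance of (1.4); it assumes the cvxpy modeling package is installed, and the data Q, c, A, b are invented for the example.

```python
import numpy as np
import cvxpy as cp

# Made-up data for the standard-form QP (1.4).
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # symmetric
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Convexity check: Q is positive semidefinite iff all its
# eigenvalues are nonnegative.
assert np.all(np.linalg.eigvalsh(Q) >= 0)

# min (1/2) x^T Q x + c^T x   s.t.   Ax = b,  x >= 0.
x = cp.Variable(2)
objective = cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x)
problem = cp.Problem(objective, [A @ x == b, x >= 0])
problem.solve()  # cvxpy dispatches to a suitable convex QP solver

print(x.value, problem.value)
```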
1.1.3 Conic Optimization
Another generalization of (1.3) is obtained when the nonnegativity con-
straints x ≥ 0 are replaced by general conic inclusion constraints. This is
called a conic optimization (CO) problem. For this purpose, we consider
a closed convex cone C (see the Appendix for a brief discussion on cones)
in a finite-dimensional vector space X and the following conic optimization
problem:
min_x c^T x
s.t.  Ax = b
      x ∈ C.                    (1.5)
When X = IRn and C = IRn+, this problem is the standard form LP. How-
ever, much more general nonlinear optimization problems can also be for-
mulated in this way. Furthermore, some of the most efficient and robust
algorithmic machinery developed for linear optimization problems can be
modified to solve these general optimization problems. Two important sub-
classes of conic optimization problems we will address are: (i) second-order
cone optimization, and (ii) semidefinite optimization. These correspond to
the cases when C is the second-order cone:
Cq := {x = (x1, x2, . . . , xn) ∈ IRn : x1² ≥ x2² + · · · + xn², x1 ≥ 0},
and the cone of symmetric positive semidefinite matrices:
Cs := {X ∈ IRn×n : X = X^T, X positive semidefinite}.
When we work with the cone of positive semidefinite matrices, the standard
inner products used in c^T x and Ax in (1.5) are replaced by an appropriate
inner product for the space of n-dimensional square matrices.
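As a small worked instance of (1.5) with C the second-order cone, the following sketch (again assuming cvxpy, with made-up data) minimizes a linear function over an affine slice of Cq. Semidefinite programs can be modeled analogously with a matrix variable declared as cp.Variable((n, n), PSD=True).

```python
import numpy as np
import cvxpy as cp

# Made-up data: min c^T x  s.t.  Ax = b,  x in Cq,
# where Cq = { x : x1 >= sqrt(x2^2 + ... + xn^2) }.
n = 3
c = np.array([1.0, 0.0, 0.0])
A = np.array([[0.0, 1.0, 1.0]])
b = np.array([1.0])

x = cp.Variable(n)
constraints = [A @ x == b,
               cp.SOC(x[0], x[1:])]  # encodes x1 >= ||(x2, ..., xn)||_2
problem = cp.Problem(cp.Minimize(c @ x), constraints)
problem.solve()

# The optimum pushes x1 down to ||(0.5, 0.5)||_2, roughly 0.707.
print(x.value, problem.value)
```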
1.1.4 Integer Programming
Integer programs are optimization problems that require some or all of the
variables to take integer values. This restriction on the variables often makes
the problems very hard to solve. Therefore we will focus on integer linear
programs, which have a linear objective function and linear constraints. A
pure integer linear program (ILP) is given by:
min_x c^T x
s.t.  Ax ≥ b
      x ≥ 0 and integral,                    (1.6)
where A ∈ IRm×n, b ∈ IRm, c ∈ IRn are given, and x ∈ INn is the variable
vector to be determined.
An important case occurs when the variables xj represent binary decision
variables, that is, x ∈ {0, 1}n. The problem is then called a 0–1 linear
program.
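Here is a minimal sketch of (1.6), assuming SciPy 1.9 or later for its milp routine; the data A, b, c are made up for illustration. A 0–1 program is obtained by additionally imposing upper bounds of 1 on the integer variables.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Made-up data for the pure ILP (1.6):
#   min c^T x   s.t.   Ax >= b,  x >= 0 and integral.
c = np.array([3.0, 2.0])
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

res = milp(
    c=c,
    constraints=LinearConstraint(A, lb=b, ub=np.inf),  # Ax >= b
    integrality=np.ones_like(c),  # 1 marks each variable as integer
    bounds=Bounds(lb=0),          # x >= 0
)

print(res.x, res.fun)  # optimal integral point, here (1, 2) with value 7

# For a 0-1 linear program, use bounds=Bounds(lb=0, ub=1) so each
# integer variable is restricted to {0, 1}.
```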