Markowitz portfolio optimization with chance constraints

# Install AMPL and solvers
%pip install -q amplpy pandas

SOLVER = "highs"

from amplpy import AMPL, ampl_notebook

ampl = ampl_notebook(
    modules=["highs"],  # modules to install
    license_uuid="default",  # license to use
)  # instantiate AMPL object and register magics
from IPython.display import Markdown, HTML
import pandas as pd
import numpy as np

We consider here another variant of the Markowitz portfolio optimization problem, which we have already encountered in the context of convex optimization and in the context of conic optimization.

Assume there is an initial capital \(C\) that needs to be invested in a selection of \(n\) possible assets, each with an unknown return rate \(r_i\), \(i=1,\dots,n\). Let \(x\) be the vector whose \(i\)-th component \(x_i\) describes the fraction of the capital invested in asset \(i\). The return rate vector \(r\) is modelled by a multivariate Gaussian distribution with mean \(\mu\) and covariance \(\Sigma\). Assume there is also a risk-free asset with guaranteed return rate \(R\), and let \(\tilde{x}\) be the amount invested in that asset. We want to determine the portfolio that maximizes the expected return \(\mathbb{E} ( R \tilde{x} + r^\top x )\), which, in view of our assumptions, equals \( \mathbb{E} ( R \tilde{x} + r^\top x ) = R \tilde{x} + \mu^\top x\).

Additionally, we include a loss risk chance constraint of the form

\[ \mathbb{P} ( r^\top x \leq \alpha) \leq \beta. \]

Thanks to the assumption that \(r\) is multivariate Gaussian, this chance constraint can be equivalently rewritten as

\[ \mu^\top x \geq \Phi^{-1}(1-\beta) \| \Sigma^{1/2} x \|_2 + \alpha, \]
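where \(\Phi\) denotes the cumulative distribution function of the standard normal (so \(\Phi^{-1}\) is its quantile function). For completeness, here is a sketch of the standardization argument behind this equivalence: since \(r^\top x \sim \mathcal{N}(\mu^\top x, \, x^\top \Sigma x)\), for any \(x \neq 0\)

\[\begin{split} \begin{align*} \mathbb{P} ( r^\top x \leq \alpha) = \Phi\left( \frac{\alpha - \mu^\top x}{\| \Sigma^{1/2} x \|_2} \right) \leq \beta \quad & \Longleftrightarrow \quad \frac{\alpha - \mu^\top x}{\| \Sigma^{1/2} x \|_2} \leq \Phi^{-1}(\beta) \\ & \Longleftrightarrow \quad \mu^\top x \geq \alpha - \Phi^{-1}(\beta) \| \Sigma^{1/2} x \|_2 = \alpha + \Phi^{-1}(1-\beta) \| \Sigma^{1/2} x \|_2, \end{align*} \end{split}\]

where the last equality uses \(\Phi^{-1}(\beta) = -\Phi^{-1}(1-\beta)\).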

This constraint is guaranteed to be convex whenever \(\beta \leq 1/2\), since then \(\Phi^{-1}(1-\beta) \geq 0\). The resulting portfolio optimization problem, written as an SOCP, is

\[\begin{split} \begin{align*} \max \; & R \tilde{x} + \mu^\top x\\ \quad \text{ s.t. } & \Phi^{-1}(1-\beta) \| \Sigma^{1/2} x \|_2 \leq \mu^\top x - \alpha,\\ & \sum_{i=1}^n x_i + \tilde{x} = C, \\ & \tilde{x} \geq 0 \\ & x_i \geq 0 & i=1,\dots,n. \end{align*} \end{split}\]

We now implement an AMPL model and solve it for \(n=3\), \(\alpha = 0.6\), \(\beta = 0.3\), the vector \(\mu\) and the positive semi-definite covariance matrix \(\Sigma\) given below.

# Import the inverse CDF (quantile function) of the standard normal, norm.ppf(), from scipy.stats
from scipy.stats import norm

# We set the loss threshold alpha and the risk level beta (note that the problem may become
# infeasible if the chance constraint is too tight!)
alpha = 0.6
beta = 0.3

# We specify the initial capital, the risk-free return, the number of risky assets, their expected returns, and their covariance matrix.
C = 1
R = 1.05
n = 3
mu = np.array([1.25, 1.15, 1.35])
Sigma = np.array([[1.5, 0.5, 2], [0.5, 2, 0], [2, 0, 5]])

# Check how dramatically the optimal solution changes if we assume i.i.d. deviations for the returns.
# Sigma = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])

# If you want to change the covariance matrix, make sure you input a positive semi-definite one
# (a quick eigenvalue check is sketched below). The easiest way to generate a random covariance
# matrix is to first generate a random m x m matrix A and then take A^T A (which is always
# positive semi-definite)
# m = 3
# A = np.random.rand(m, m)
# Sigma = A.T @ A
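
As a quick sanity check, one can verify that a candidate covariance matrix is positive semi-definite by inspecting its eigenvalues. This snippet is a sketch and not part of the model itself; it only assumes the `Sigma` array and the `numpy` import from the cells above.

# A symmetric matrix is positive semi-definite iff all its eigenvalues are nonnegative
# (up to a small numerical tolerance)
eigenvalues = np.linalg.eigvalsh(Sigma)
assert eigenvalues.min() >= -1e-9, f"Sigma is not positive semi-definite: eigenvalues = {eigenvalues}"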
%%writefile markowitz_chanceconstraints.mod

set N;

param Sigma{N, N};
param mu{N};
param C;
param R;

param phi_val;
param alpha;

var x_tilde >= 0;
var x{N} >= 0;

maximize expected_return:
    R * x_tilde + sum{i in N} mu[i] * x[i];

subject to bounded_variance:
    phi_val * (sum{i in N, j in N} x[i] * Sigma[i, j] * x[j]) <= sum{i in N} mu[i] * x[i] - alpha;

subject to total_assets:
    sum{i in N} x[i] + x_tilde = C;
Overwriting markowitz_chanceconstraints.mod
def markowitz_chanceconstraints(alpha, beta, mu, Sigma):
    m = AMPL()
    m.read("markowitz_chanceconstraints.mod")

    Sigma_df = pd.DataFrame(Sigma, index=range(n), columns=range(n))
    phi_val = norm.ppf(1 - beta)

    m.set["N"] = range(n)

    m.param["phi_val"] = phi_val
    m.param["alpha"] = alpha
    m.param["C"] = C
    m.param["R"] = R
    m.param["mu"] = mu
    m.param["Sigma"] = Sigma_df

    m.option["solver"] = SOLVER

    m.solve()
    solve_result = m.get_value("solve_result")

    return solve_result, m


result, model = markowitz_chanceconstraints(alpha, beta, mu, Sigma)

x_tilde = model.var["x_tilde"].value()
x = model.var["x"].to_dict()

display(Markdown(f"**Solver result:** *{result}*"))
display(
    Markdown(
        f"**Solution:** $\\tilde x = {x_tilde:.3f}$, $x_1 = {x[0]:.3f}$,  $x_2 = {x[1]:.3f}$,  $x_3 = {x[2]:.3f}$"
    )
)
display(
    Markdown(
        f"**Maximizes objective value to:** ${model.obj['expected_return'].value():.2f}$"
    )
)
HiGHS 1.5.1: HiGHS 1.5.1: optimal solution; objective 1.229721418
860 simplex iterations
5 branching nodes
absmipgap=8.14792e-06, relmipgap=6.62583e-06

Solver result: solved

Solution: $\tilde x = 0.000$, $x_1 = 0.797$, $x_2 = 0.203$, $x_3 = 0.000$

Optimal objective value: $1.23$
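
As a final sanity check, we can estimate the loss probability \(\mathbb{P}(r^\top x \leq \alpha)\) of the computed portfolio by Monte Carlo sampling and compare it against the risk level \(\beta\). This snippet is a sketch: it assumes the objects `x`, `mu`, `Sigma`, `alpha`, `beta`, and `n` from the cells above are still in scope.

# Draw return samples r ~ N(mu, Sigma) and estimate P(r^T x <= alpha) empirically
rng = np.random.default_rng(seed=0)
x_opt = np.array([x[i] for i in range(n)])  # optimal weights recovered from the AMPL solution
samples = rng.multivariate_normal(mu, Sigma, size=100_000)
loss_probability = np.mean(samples @ x_opt <= alpha)
print(f"Estimated P(r^T x <= alpha) = {loss_probability:.3f} (risk level beta = {beta})")

If the chance constraint is satisfied at the computed solution, the estimate should not exceed \(\beta\), up to sampling noise.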