Quickstart¶
As a first simple example, let's consider a vector set $\mathcal{S} = \{\mathbf{y} \in \mathbb{R}^n : \mathbf{A} \mathbf{y} \geq \mathbf{b}, \ \mathbf{y} \geq \mathbf{1} \}$, for some $n, m \in \mathbb{N}$, $\mathbf{A} \in \mathbb{R}^{m \times n}$, and $\mathbf{b} \in \mathbb{R}^m$.
import numpy as np
import cvxpy as cp
import matplotlib.pyplot as plt
from monviso import VI
np.random.seed(2024)
# Create the problem data
n, m = 30, 40
A = np.random.uniform(0, 1, size=(m, n))
b = np.random.uniform(45, 50, size=(m,))
# Define S
y = cp.Variable(n)
S = [A @ y >= b, y >= 1]
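Before going further, we can sanity-check that $\mathcal{S}$ is nonempty: since $\mathbf{y} \geq \mathbf{1}$ has no upper bound, scaling the all-ones vector far enough always satisfies $\mathbf{A}\mathbf{y} \geq \mathbf{b}$ when $\mathbf{A}$ has positive row sums. A standalone NumPy sketch (with its own random data, mirroring the quickstart's; not part of the monviso API):

```python
import numpy as np

rng = np.random.default_rng(2024)
n, m = 30, 40
A = rng.uniform(0, 1, size=(m, n))
b = rng.uniform(45, 50, size=(m,))

# For y = c * 1, the constraint A @ y >= b reads c * (A @ 1) >= b,
# so any c >= max_i b_i / (A @ 1)_i (and c >= 1) gives a feasible point.
c = max(1.0, (b / (A @ np.ones(n))).max())
y0 = c * np.ones(n)

assert (A @ y0 >= b - 1e-9).all() and (y0 >= 1).all()
print("feasible point found with scaling c =", round(c, 2))
```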
Also, let's define $\mathbf{F}(\mathbf{x}) = \mathbf{H} \mathbf{x}$, for some $\mathbf{H} \succ \mathbf{0}$, as the VI vector mapping, and $g(\mathbf{x}) = \|\mathbf{x}\|_1$ as the scalar mapping.
# Create a symmetric positive definite H: H @ H.T is positive
# semidefinite, and almost surely positive definite for random H
H = np.random.uniform(2, 10, size=(n, n))
H = H @ H.T
# Define F, g
F = lambda x: H @ x
g = lambda x: cp.norm(x, 1)
Since $\mathbf{H}$ is symmetric positive definite, it is straightforward to verify that $\mathbf{F}(\cdot)$ is strongly monotone with modulus $\mu = \lambda_{\min}(\mathbf{H})$ and Lipschitz continuous with constant $L = \|\mathbf{H}\|_2$.
mu = np.linalg.eigvalsh(H).min()  # H is symmetric, so eigvalsh (real eigenvalues)
L = np.linalg.norm(H, 2)
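These two constants can also be checked numerically: for symmetric $\mathbf{H}$, $(\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{z}))^\top (\mathbf{x} - \mathbf{z}) \geq \mu \|\mathbf{x} - \mathbf{z}\|^2$ and $\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{z})\| \leq L \|\mathbf{x} - \mathbf{z}\|$ hold for every pair of points. A standalone sketch on random pairs (its own data, not the quickstart's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
H = rng.uniform(2, 10, size=(n, n))
H = H @ H.T  # symmetric positive definite (almost surely)
F = lambda x: H @ x

mu = np.linalg.eigvalsh(H).min()  # strong monotonicity modulus
L = np.linalg.norm(H, 2)          # Lipschitz constant (spectral norm)

for _ in range(100):
    x, z = rng.standard_normal(n), rng.standard_normal(n)
    d = x - z
    dd = d @ d
    # strong monotonicity: d^T H d >= lambda_min(H) * ||d||^2
    assert (F(x) - F(z)) @ d >= mu * dd - 1e-6 * dd
    # Lipschitz continuity: ||H d|| <= ||H||_2 * ||d||
    assert np.linalg.norm(F(x) - F(z)) <= L * np.linalg.norm(d) * (1 + 1e-9)
print("strong monotonicity and Lipschitz bounds verified on 100 random pairs")
```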
Strong monotonicity allows us to use the proximal gradient algorithm to solve such a VI, with a step size of $\chi = 2\mu / L^2$.
step_size = 2 * mu / L**2
Let's declare this VI in `monviso` first.
vi = VI(y, F, g, S)
Then, let's create an initial solution.
xk = np.random.uniform(0, 1, n)
In `monviso`, the proximal gradient method is called `pg`. Let's iterate over it and collect the distance between two consecutive iterates in a residual vector.
max_iters = 10
residuals = np.zeros(max_iters)
for k in range(max_iters):
xk1 = vi.pg(xk, step_size)
residuals[k] = np.linalg.norm(xk - xk1)
xk = xk1
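A proximal gradient step computes $\mathbf{x}_{k+1} = \mathrm{prox}_{\chi g}(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{x}_k))$, restricted to $\mathcal{S}$. To see the mechanics in isolation, here is a standalone NumPy sketch that drops the constraint set entirely and uses a small, hypothetical well-conditioned $\mathbf{H}$ — an illustration of the update rule, not monviso's actual implementation. For $g = \|\cdot\|_1$, the prox is soft-thresholding, and without constraints the unique solution is $\mathbf{x}^\star = \mathbf{0}$ (since $\mathbf{0} \in \mathbf{H}\mathbf{0} + \partial\|\mathbf{0}\|_1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))
H = np.eye(n) + 0.1 * (B @ B.T)  # small, well-conditioned PD matrix
F = lambda x: H @ x

mu = np.linalg.eigvalsh(H).min()
L = np.linalg.norm(H, 2)
chi = mu / L**2                  # strictly inside (0, 2 * mu / L**2)

# prox of chi * ||.||_1 is component-wise soft-thresholding at level chi
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

xk = rng.uniform(0, 1, n)
for _ in range(200):
    xk = soft(xk - chi * F(xk), chi)  # unconstrained prox-gradient step
print(xk)  # the iterates converge to the solution x* = 0
```

With the constraint set present, each prox step becomes a small convex subproblem, which is what a solver-backed library handles for you.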
The API docs give more detail on the naming convention for the iterative methods. Let's take a look at the residuals:
fig, ax = plt.subplots()
ax.plot(residuals)
ax.grid(True, alpha=0.2)
ax.set_xlabel("Iterations ($k$)")
ax.set_ylabel(r"$\|\mathbf{x}_k - \mathbf{x}_{k+1}\|$")
plt.show()