API
The convention used for naming indexed terms is the following:
- Indexed terms' names are single letters
- `xk` stands for $x_k$
- `xkn` stands for $x_{k+n}$
- `xnk` stands for $x_{k-n}$

Therefore, as examples, `y0` is $y_0$, `tk` is $t_k$, `z1k` is $z_{k-1}$, and `sk2` is $s_{k+2}$. The vector corresponding to the decision variable of the VI is always denoted with $\mathbf{x}$; all other vectors that might be used or returned are generically referred to as auxiliary points.
Monviso.VI — Type
```julia
VI(F::Function, prox::Function)
```

The variational inequality object.
Monviso.VI — Method
```julia
VI(
    F;
    g=nothing,
    y::Union{Nothing, AbstractArray{VariableRef}}=nothing,
    model::Union{Nothing, Model}=nothing,
    analytical_prox=nothing,
    norm_cone=MOI.SecondOrderCone
)
```

The constructor for `VI`.
Arguments
- `F` - a function of the form `(x::AbstractVector, params...) -> AbstractVector` of the same length as `x`, i.e., the VI mapping $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$. The term `params` collects optional arguments that characterize `F` and might change at each iteration.
Keywords
- `y::AbstractVector{VariableRef}` - the container of `JuMP.VariableRef` associated to `model`, i.e., $\mathbf{y}$.
- `model::Model` - the `JuMP.Model` describing the projection set $\mathcal{S}$.
- `g::Function=nothing` - a function of the form `x::AbstractVector{VariableRef} -> Real`, i.e., the scalar function $g : \mathbb{R}^n \to \mathbb{R}$.
- `analytical_prox::Function=nothing` - the analytical form of the proximal operator for the given set. If provided, it replaces `prox`.
- `norm_cone=MOI.SecondOrderCone` - the cone related to the norm characterizing the proximal operator.
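As an illustration of an analytical proximal operator, the Euclidean projection onto the box $[0,1]^n$ (with $g \equiv 0$) has a closed form; the single-argument signature below is an assumption for sketching purposes, so check the package sources for the exact interface expected by `analytical_prox`:

```julia
# Euclidean projection onto the box [0, 1]^n; a candidate analytical proximal
# operator when g ≡ 0 and S = [0, 1]^n (the signature here is an assumption).
box_prox(x::AbstractVector) = clamp.(x, 0.0, 1.0)
```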
Monviso.agraal — Function
```julia
agraal(
    vi::VI,
    xk::AbstractArray,
    x1k::AbstractArray,
    yk::AbstractArray,
    s1k::Real,
    tk::Real=1,
    params...;
    ϕ::Real=GOLDEN_RATIO,
    χ_large::Real=1e6
)
```

Adaptive golden ratio algorithm.
Description
The Adaptive Golden Ratio Algorithm (aGRAAL) is a variation of the Golden Ratio Algorithm with an adaptive step size. Following [7], let $\theta_0 = 1$, $\rho = 1/\phi + 1/ \phi^2$, where $\phi \in (0,\varphi]$ and $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. Moreover, let $\bar{\chi} \gg 0$ be a constant (arbitrarily large) step-size. Given the initial terms $\mathbf{x}_0,\mathbf{x}_1 \in \mathbb{R}^n$, $\mathbf{y}_0 = \mathbf{x}_1$, and $\chi_0 > 0$, the $k$-th iterate for aGRAAL is the following:
\[\begin{align*} \chi_k &= \min\left\{\rho\chi_{k-1}, \frac{\phi\theta_k \|\mathbf{x}_k -\mathbf{x}_{k-1}\|^2}{4\chi_{k-1}\|\mathbf{F}(\mathbf{x}_k) -\mathbf{F}(\mathbf{x}_{k-1})\|^2}, \bar{\chi}\right\} \\ \mathbf{x}_{k+1}, \mathbf{y}_{k+1} &= \texttt{graal}(\mathbf{x}_k, \mathbf{y}_k, \chi_k, \phi) \\ \theta_{k+1} &= \phi\frac{\chi_k}{\chi_{k-1}} \end{align*}\]
The convergence guarantees discussed for GRAAL also hold for aGRAAL.
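The recursion above can be sketched in plain Julia for the unconstrained case ($\mathcal{S} = \mathbb{R}^n$, so the proximal operator is the identity). This is only an illustration of the update rule, not the package's `agraal` method:

```julia
using LinearAlgebra

# aGRAAL recursion for an unconstrained VI (prox = identity); illustrative only.
function agraal_sketch(F, x0, x1; χ=1.0, ϕ=(1 + sqrt(5)) / 2, χ_max=1e6, iters=2000)
    ρ = 1 / ϕ + 1 / ϕ^2
    θ = 1.0
    x_prev, x, y = x0, x1, x1
    for _ in 1:iters
        χ_prev = χ
        d = norm(F(x) - F(x_prev))^2
        # adaptive step: χ_k = min{ρχ_{k-1}, ϕθ‖Δx‖² / (4χ_{k-1}‖ΔF‖²), χ_max}
        χ = min(ρ * χ_prev, d > 0 ? ϕ * θ * norm(x - x_prev)^2 / (4 * χ_prev * d) : χ_max, χ_max)
        y = ((ϕ - 1) * x + y) / ϕ          # GRAAL anchor update
        x_prev, x = x, y - χ * F(x)        # prox = identity here
        θ = ϕ * χ / χ_prev
    end
    return x
end
```

For an affine strongly monotone mapping $\mathbf{F}(\mathbf{x}) = A\mathbf{x} + \mathbf{b}$ with $A \succ 0$, the iterates approach the solution of $\mathbf{F}(\mathbf{x}) = \mathbf{0}$.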
Monviso.arg — Method
```julia
arg(
    vi::VI,
    xk::AbstractArray,
    x1k::AbstractArray,
    x0::AbstractArray,
    k::Int,
    χ::Real,
    params...
)
```

The accelerated reflected gradient iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_1,\mathbf{x}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the accelerated reflected gradient (ARG) is the following [8]:
\[\begin{align*} \mathbf{y}_k &= 2\mathbf{x}_k - \mathbf{x}_{k-1} + \frac{1}{k+1} (\mathbf{x}_0 - \mathbf{x}_k) - \frac{1}{k}(\mathbf{x}_k - \mathbf{x}_{k-1}) \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}\left(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{y}_k) + \frac{1}{k+1}(\mathbf{x}_0 - \mathbf{x}_k)\right) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the ARG algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{12L}\right)$.
Monviso.cfogda — Method
```julia
cfogda(
    vi::VI,
    xk::AbstractArray,
    x1k::AbstractArray,
    y1k::AbstractArray,
    zk::AbstractArray,
    k::Int,
    χ::Real,
    params...;
    α::Real=2.1,
)
```

Constrained fast optimistic gradient descent-ascent iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_1 \in \mathcal{S}$, $\mathbf{z}_1 \in N_{\mathcal{S}}(\mathbf{x}_1)$, $\mathbf{x}_0, \mathbf{y}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of Constrained Fast Optimistic Gradient Descent Ascent (CFOGDA) is the following [9]:
\[\begin{align*} \mathbf{y}_k &= \mathbf{x}_k + \frac{k}{k+\alpha}(\mathbf{x}_k - \mathbf{x}_{k-1}) - \chi \frac{\alpha}{k+\alpha}(\mathbf{F}(\mathbf{y}_{k-1}) + \mathbf{z}_k) \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}\left(\mathbf{y}_k - \chi\left(1 + \frac{k}{k+\alpha}\right)(\mathbf{F}(\mathbf{y}_k) - \mathbf{F} (\mathbf{y}_{k-1}) - \mathbf{z}_k)\right) \\ \mathbf{z}_{k+1} &= \frac{k+\alpha}{\chi (2k+\alpha)}( \mathbf{y}_k - \mathbf{x}_{k+1}) - (\mathbf{F}(\mathbf{y}_k) - \mathbf{F}(\mathbf{y}_{k-1}) - \mathbf{z}_k) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the CFOGDA algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{4L}\right)$ and $\alpha > 2$.
Monviso.eag — Method
```julia
eag(vi::VI, xk::AbstractArray, x0::AbstractArray, k::Int, χ::Real, params...)
```

The extra-anchored gradient iterate.
Description
Given a constant step-size $\chi > 0$ and an initial vector $\mathbf{x}_0 \in \mathbb{R}^n$, the $k$-th iterate of extra anchored gradient (EAG) algorithm is [10]:
\[\begin{align*} \mathbf{y}_k &= \text{prox}_{g,\mathcal{S}}\left(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{x}_k) + \frac{1}{k+1}(\mathbf{x}_0 - \mathbf{x}_k)\right) \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}\left(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{y}_k) + \frac{1}{k+1}(\mathbf{x}_0 - \mathbf{x}_k)\right) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the EAG algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{\sqrt{3}L} \right)$.
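The anchored updates above can be sketched for the unconstrained case (identity prox), purely as an illustration of the recursion rather than the package's `eag` method:

```julia
# EAG recursion for an unconstrained VI (prox = identity); illustrative only.
function eag_sketch(F, x0; χ, iters=5000)
    x = x0
    for k in 1:iters
        anchor = (x0 - x) / (k + 1)     # anchoring toward the initial point
        y = x - χ * F(x) + anchor       # extrapolation step
        x = x - χ * F(y) + anchor       # anchored extragradient update
    end
    return x
end
```

The anchor term decays like $1/k$, so the residual vanishes only sublinearly; many iterations are needed for tight accuracy.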
Monviso.eg — Method
```julia
eg(vi::VI, xk::AbstractArray, χ::Real, params...)
```

The extragradient iterate.
Description
Given a constant step-size $\chi > 0$ and an initial vector $\mathbf{x}_0 \in \mathbb{R}^n$, the $k$-th iterate of the extragradient algorithm (EG) is [11]:
\[\begin{align*} \mathbf{y}_k &= \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F} (\mathbf{x}_k)) \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F} (\mathbf{y}_k)) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the EG algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{L}\right)$.
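As a minimal sketch of the two EG steps, consider the unconstrained case (identity prox) with an affine strongly monotone mapping; this illustrates the recursion only, not the package's `eg` call:

```julia
# EG recursion for an unconstrained VI (prox = identity); illustrative only.
function eg_sketch(F, x; χ, iters=300)
    for _ in 1:iters
        y = x - χ * F(x)        # extrapolation (forward) step
        x = x - χ * F(y)        # update evaluated at the extrapolated point
    end
    return x
end

# Example: affine strongly monotone mapping F(x) = A*x + b with A ≻ 0.
A = [3.0 1.0; 1.0 2.0]
b = [1.0, -1.0]
F(x) = A * x + b                 # Lipschitz with L = opnorm(A) ≈ 3.62
x̂ = eg_sketch(F, zeros(2); χ=0.2)   # χ ∈ (0, 1/L); x̂ approximates -A \ b
```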
Monviso.fbf — Method
```julia
fbf(vi::VI, xk::AbstractArray, χ::Real, params...)
```

The forward-backward-forward iterate.
Description
Given a constant step-size $\chi > 0$ and an initial vector $\mathbf{x}_0 \in \mathbb{R}^n$, the $k$-th iterate of Forward-Backward-Forward (FBF) algorithm is [12]:
\[\begin{align*} \mathbf{y}_k &= \text{prox}_{\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{x}_k)) \\ \mathbf{x}_{k+1} &= \mathbf{y}_k - \chi (\mathbf{F}(\mathbf{y}_k) - \mathbf{F}(\mathbf{x}_k)) \end{align*}\]
where $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the FBF algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{L}\right)$.
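The two forward evaluations per iteration can be sketched for the unconstrained case (identity prox); an illustration of the recursion, not the package's `fbf` call:

```julia
# FBF recursion for an unconstrained VI (prox = identity); illustrative only.
function fbf_sketch(F, x; χ, iters=300)
    for _ in 1:iters
        Fx = F(x)                      # first forward evaluation, reused below
        y = x - χ * Fx                 # forward-backward step
        x = y - χ * (F(y) - Fx)        # second forward (correction) step
    end
    return x
end
```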
Monviso.fogda — Method
```julia
fogda(
    vi::VI,
    xk::AbstractArray,
    x1k::AbstractArray,
    y1k::AbstractArray,
    k::Int,
    χ::Real,
    params...;
    α::Real=2.1,
)
```

(Explicit) fast optimistic gradient descent-ascent iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_1,\mathbf{x}_0, \mathbf{y}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the explicit fast OGDA (FOGDA) is the following [13]:
\[\begin{align*} \mathbf{y}_k &= \mathbf{x}_k + \frac{k}{k+\alpha}(\mathbf{x}_k - \mathbf{x}_{k-1}) - \chi \frac{\alpha}{k+\alpha} \mathbf{F}(\mathbf{y}_{k-1}) \\ \mathbf{x}_{k+1} &= \mathbf{y}_k - \chi \frac{2k+\alpha} {k+\alpha} (\mathbf{F}(\mathbf{y}_k) - \mathbf{F}(\mathbf{y}_{k-1})) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the FOGDA algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{4L}\right)$ and $\alpha > 2$.
Monviso.frb — Method
```julia
frb(vi::VI, xk::AbstractArray, x1k::AbstractArray, χ::Real, params...)
```

The forward-reflected-backward iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_1,\mathbf{x}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the Forward-Reflected-Backward (FRB) is the following [14]:
\[\mathbf{x}_{k+1} = \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi (2\mathbf{F} (\mathbf{x}_k) - \mathbf{F}(\mathbf{x}_{k-1})))\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the FRB algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{2L}\right)$.
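The single-step recursion with the reflected term $2\mathbf{F}(\mathbf{x}_k) - \mathbf{F}(\mathbf{x}_{k-1})$ can be sketched for the unconstrained case (identity prox); illustrative only, not the package's `frb` call:

```julia
# FRB recursion for an unconstrained VI (prox = identity); illustrative only.
function frb_sketch(F, x; χ, iters=500)
    x_prev = x
    for _ in 1:iters
        # forward step with the reflected evaluation 2F(x_k) - F(x_{k-1})
        x_prev, x = x, x - χ * (2 * F(x) - F(x_prev))
    end
    return x
end
```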
Monviso.graal — Method
```julia
graal(
    vi::VI,
    xk::AbstractArray,
    yk::AbstractArray,
    χ::Real,
    params...;
    ϕ::Real=GOLDEN_RATIO,
)
```

Golden ratio algorithm iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_0,\mathbf{y}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the golden ratio algorithm (GRAAL) is the following [7]:
\[\begin{align*} \mathbf{y}_{k+1} &= \frac{(\phi - 1)\mathbf{x}_k + \mathbf{y}_k}{\phi} \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}(\mathbf{y}_{k+1} - \chi \mathbf{F}(\mathbf{x}_k)) \end{align*}\]
The convergence of the GRAAL algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{\varphi}{2L} \right]$ and $\phi \in (1,\varphi]$, where $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio.
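The two-line recursion can be sketched for the unconstrained case (identity prox); an illustration of the update rule, not the package's `graal` call:

```julia
# GRAAL recursion for an unconstrained VI (prox = identity); illustrative only.
function graal_sketch(F, x, y; χ, ϕ=(1 + sqrt(5)) / 2, iters=500)
    for _ in 1:iters
        y = ((ϕ - 1) * x + y) / ϕ       # convex-combination anchor update
        x = y - χ * F(x)                # prox = identity (S = Rⁿ)
    end
    return x
end
```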
Monviso.hgraal_1 — Method
```julia
hgraal_1(
    vi::VI,
    xk::AbstractArray,
    x1k::AbstractArray,
    yk::AbstractArray,
    s1k::Real,
    tk::Real,
    ck::Int,
    params...;
    ϕ::Real=GOLDEN_RATIO,
    χ_large::Real=1e6,
)
```

Hybrid golden ratio algorithm I

Description
The HGRAAL-1 algorithm [15] is a variation of the Adaptive Golden Ratio Algorithm. Let $\theta_0 = 1$, $\rho = 1/\phi + 1/\phi^2$, where $\phi \in (0,\varphi]$ and $\varphi = \frac{1+\sqrt{5}}{2}$ is the golden ratio. The residual at point $\mathbf{x}_k$ is given by $J : \mathbb{R}^n \to \mathbb{R}$, defined as follows:
\[J(\mathbf{x}_k) = \|\mathbf{x}_k - \text{prox}_{g,\mathcal{S}} (\mathbf{x}_k - \mathbf{F}(\mathbf{x}_k))\|\]
Moreover, let $\bar{\chi} \gg 0$ be a constant (arbitrarily large) step-size. Given the initial terms $\mathbf{x}_0,\mathbf{x}_1 \in\mathbb{R}^n$, $\mathbf{y}_0 = \mathbf{x}_1$, and $\chi_0 > 0$, the $k$-th iterate for HGRAAL-1 is the following:
\[\begin{align*} \chi_k &= \min\left\{\rho\chi_{k-1}, \frac{\phi\theta_k \|\mathbf{x}_k -\mathbf{x}_{k-1}\|^2}{4\chi_{k-1}\|\mathbf{F}(\mathbf{x}_k) -\mathbf{F}(\mathbf{x}_{k-1})\|^2}, \bar{\chi}\right\} \\ c_k &= \left(\langle J(\mathbf{x}_k) - J(\mathbf{x}_{k-1}) > 0 \rangle \text{ and } \langle f_k \rangle \right) \text{ or } \left\langle \min\{J(\mathbf{x}_{k-1}), J(\mathbf{x}_k)\} < J(\mathbf{x}_k) + \frac{1}{\bar{k}} \right\rangle \\ f_k &= \text{not $\langle c_k \rangle$} \\ \bar{k} &= \begin{cases} \bar{k}+1 & \text{if $c_k$ is true} \\ \bar{k} & \text{otherwise} \end{cases} \\ \mathbf{y}_{k+1} &= \begin{cases} \dfrac{(\phi - 1)\mathbf{x}_k + \phi\mathbf{y}_k}{\phi} & \text{if $c_k$ is true} \\ \mathbf{x}_k & \text{otherwise} \end{cases} \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}(\mathbf{y}_{k+1} - \chi_k \mathbf{F}(\mathbf{x}_k)) \\ \theta_{k+1} &= \phi\frac{\chi_k}{\chi_{k-1}} \end{align*}\]
Monviso.pg — Method
```julia
pg(vi::VI, xk::AbstractArray, χ::Real, params...)
```

The proximal gradient iterate.
Description
Given a constant step-size $\chi > 0$ and an initial vector $\mathbf{x}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the proximal gradient (PG) algorithm is [16]:
\[\mathbf{x}_{k+1} = \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F}(\mathbf{x}_k))\]
where $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of PG is guaranteed for Lipschitz strongly monotone operators, with strong monotonicity constant $\mu > 0$ and Lipschitz constant $L < +\infty$, when $\chi \in (0, 2\mu/L^2)$.
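The PG step is a single forward evaluation followed by a prox; for the unconstrained case (identity prox) it reduces to the sketch below, which illustrates the recursion only, not the package's `pg` call:

```julia
# PG recursion for an unconstrained VI (prox = identity); illustrative only.
function pg_sketch(F, x; χ, iters=500)
    for _ in 1:iters
        x = x - χ * F(x)        # forward step; prox = identity (g ≡ 0, S = Rⁿ)
    end
    return x
end
```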
Monviso.popov — Method
```julia
popov(vi::VI, xk::AbstractArray, yk::AbstractArray, χ::Real, params...)
```

Popov's method iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_0,\mathbf{y}_0 \in \mathbb{R}^n$, the $k$-th iterate of Popov's Method (PM) is [17]:
\[\begin{align*} \mathbf{y}_{k+1} &= \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F} (\mathbf{y}_k)) \\ \mathbf{x}_{k+1} &= \text{prox}_{g,\mathcal{S}}(\mathbf{y}_{k+1} - \chi \mathbf{F}(\mathbf{x}_k)) \end{align*}\]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of PM is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in \left(0,\frac{1}{2L}\right)$.
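The two updates above, taken exactly as written, can be sketched for the unconstrained case (identity prox); an illustration of the recursion, not the package's `popov` call:

```julia
# Popov recursion for an unconstrained VI (prox = identity); illustrative only.
function popov_sketch(F, x, y; χ, iters=800)
    for _ in 1:iters
        y = x - χ * F(y)        # leading sequence, reusing the previous y
        x = y - χ * F(x)        # update step, using F at the previous x
    end
    return x
end
```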
Monviso.prg — Method
```julia
prg(vi::VI, xk::AbstractArray, x1k::AbstractArray, χ::Real, params...)
```

The projected reflected gradient iterate.
Description
Given a constant step-size $\chi > 0$ and initial vectors $\mathbf{x}_1,\mathbf{x}_0 \in \mathbb{R}^n$, the basic $k$-th iterate of the projected reflected gradient (PRG) is the following [18]:
\[\mathbf{x}_{k+1} = \text{prox}_{g,\mathcal{S}}(\mathbf{x}_k - \chi \mathbf{F}(2\mathbf{x}_k - \mathbf{x}_{k-1})) \]
where $g : \mathbb{R}^n \to \mathbb{R}$ is a scalar convex (possibly non-smooth) function, while $\mathbf{F} : \mathbb{R}^n \to \mathbb{R}^n$ is the VI mapping. The convergence of the PRG algorithm is guaranteed for Lipschitz monotone operators, with Lipschitz constant $L < +\infty$, when $\chi \in (0,(\sqrt{2} - 1)/L)$. Unlike the EG iteration, PRG has the advantage of requiring a single proximal operator evaluation per iteration.
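The single-evaluation update at the reflected point $2\mathbf{x}_k - \mathbf{x}_{k-1}$ can be sketched for the unconstrained case (identity prox); illustrative only, not the package's `prg` call:

```julia
# PRG recursion for an unconstrained VI (prox = identity); illustrative only.
function prg_sketch(F, x; χ, iters=500)
    x_prev = x
    for _ in 1:iters
        # one F evaluation per iteration, at the reflected point 2x_k - x_{k-1}
        x_prev, x = x, x - χ * F(2 * x - x_prev)
    end
    return x
end
```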
Monviso.prox — Method
```julia
prox(
    y::AbstractArray{VariableRef},
    model::Model;
    g=nothing,
    norm_cone=MOI.SecondOrderCone,
)
```

The proximal operator iterate.
Arguments
- `y::AbstractVector{VariableRef}` - the container of `JuMP.VariableRef` associated to `model`, i.e., $\mathbf{y}$.
- `model::Model` - the `JuMP.Model` describing the projection set $\mathcal{S}$.
Keywords
- `g::Function=nothing` - a function of the form `x::AbstractVector{VariableRef} -> Real`, i.e., the scalar function $g : \mathbb{R}^n \to \mathbb{R}$.
- `norm_cone=MOI.SecondOrderCone` - the cone related to the norm characterizing the proximal operator.