Algorithms for Convex Problems



Method of Steepest Descent

This method is also called the gradient method or Cauchy's method. It is based on the following update rule −

$$x_{k+1}=x_k+\alpha_kd_k$$

where $d_k= - \bigtriangledown f\left ( x_k \right )$ or $ d_k= -\frac{\bigtriangledown f\left ( x_k \right )}{\left \| \bigtriangledown f\left ( x_k \right ) \right \|}$

Let $\phi \left (\alpha \right )=f\left ( x_k +\alpha d_k\right )$

Differentiating $\phi$ with respect to $\alpha$ and equating the derivative to zero gives the step size $\alpha_k$.
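For example, when $f$ is the quadratic $f\left ( x \right )=\frac{1}{2}x^TQx-b^Tx$ with $Q$ positive definite (a special case shown here for illustration; in general $\alpha_k$ is found numerically), this line search has a closed form. With $g_k=\bigtriangledown f\left ( x_k \right )$ and $d_k=-g_k$,

$$\phi'\left ( \alpha \right )=d_{k}^{T}\bigtriangledown f\left ( x_k+\alpha d_k \right )=0\Rightarrow \alpha_k=\frac{g_{k}^{T}g_k}{g_{k}^{T}Qg_k}$$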

So the algorithm goes as follows −

Step 1 − Initialize $x_0$, $\varepsilon_1$, $\varepsilon_2$ and set $k=0$.

Step 2 − Set $d_k=-\bigtriangledown f\left ( x_k \right )$ or $d_k=-\frac{\bigtriangledown f\left (x_k \right )}{\left \|\bigtriangledown f\left (x_k \right ) \right \|}$.

Step 3 − Find $\alpha_k$ such that it minimizes $\phi\left ( \alpha \right )=f\left ( x_k+\alpha d_k \right )$.

Step 4 − Set $x_{k+1}=x_k+\alpha_kd_k$.

Step 5 − If $\left \| x_{k+1}-x_k \right \|< \varepsilon_1$ or $\left \| \bigtriangledown f\left ( x_{k+1} \right ) \right \| \leq \varepsilon_2$, go to step 6; otherwise set $k=k+1$ and go to step 2.

Step 6 − The optimal solution is $\hat{x}=x_{k+1}$.
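The following is a minimal Python sketch of this procedure; the function names and the backtracking line search used in place of the exact minimization of $\phi$ in step 3 are illustrative assumptions, not part of the original method description.

```python
import numpy as np

def steepest_descent(f, grad, x0, eps1=1e-6, eps2=1e-6, max_iter=1000):
    """Steepest descent (steps 1-6 above) with a backtracking line search
    standing in for the exact minimization of phi(alpha)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        d = -g                          # d_k = -grad f(x_k)
        # Backtracking line search: halve alpha until sufficient decrease.
        alpha, fx, gg = 1.0, f(x), float(g @ g)
        while f(x + alpha * d) > fx - 1e-4 * alpha * gg and alpha > 1e-10:
            alpha *= 0.5
        x_new = x + alpha * d           # x_{k+1} = x_k + alpha_k d_k
        # Step 5: stop if the step or the new gradient is small enough.
        if np.linalg.norm(x_new - x) < eps1 or np.linalg.norm(grad(x_new)) <= eps2:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x1 - 1)^2 + (x2 + 2)^2 has its minimum at (1, -2).
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))    # approx. [ 1. -2.]
```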

Newton Method

Newton's method approximates $f$ by its second-order Taylor expansion about $x_k$ −

$f\left ( x \right )\approx y\left ( x \right )=f\left ( x_k \right )+\left ( x-x_k \right )^T \bigtriangledown f\left ( x_k \right )+\frac{1}{2}\left ( x-x_k \right )^T H\left ( x_k \right )\left ( x-x_k \right )$

$\bigtriangledown y\left ( x \right )=\bigtriangledown f\left ( x_k \right )+H\left ( x_k \right )\left ( x-x_k \right )$

At $x_{k+1}$, $\bigtriangledown y\left ( x_{k+1} \right )=\bigtriangledown f\left ( x_k \right )+H\left ( x_k \right )\left ( x_{k+1}-x_k \right )$

For $x_{k+1}$ to be the optimal solution, $\bigtriangledown y\left ( x_{k+1} \right )=0$

Thus, $x_{k+1}=x_k-H\left ( x_k \right )^{-1} \bigtriangledown f\left ( x_k \right )$

Here $H\left ( x_k \right )$ should be non-singular.

Hence the algorithm goes as follows −

Step 1 − Initialize $x_0,\varepsilon$ and set $k=0$.

Step 2 − Compute $H\left ( x_k \right )$ and $\bigtriangledown f\left ( x_k \right )$.

Step 3 − Solve the linear system $H\left ( x_k \right )h\left ( x_k \right )=\bigtriangledown f\left ( x_k \right )$ for $h\left ( x_k \right )$.

Step 4 − Set $x_{k+1}=x_k-h\left ( x_k \right )$.

Step 5 − If $\left \| x_{k+1}-x_k \right \|< \varepsilon$ or $\left \| \bigtriangledown f\left ( x_{k+1} \right ) \right \| \leq \varepsilon$ then go to step 6, else set $k=k+1$ and go to step 2.

Step 6 − The optimal solution is $\hat{x}=x_{k+1}$.
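As a sketch, the same steps in Python; the function names and the quadratic test problem are assumptions for illustration. Note that step 3's linear solve avoids forming $H^{-1}$ explicitly.

```python
import numpy as np

def newton_method(grad, hess, x0, eps=1e-8, max_iter=50):
    """Newton's method following steps 1-6 above; grad and hess are
    callables returning the gradient vector and Hessian matrix."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        H = hess(x)                    # must be non-singular
        h = np.linalg.solve(H, g)      # solve H(x_k) h = grad f(x_k)
        x_new = x - h                  # x_{k+1} = x_k - h(x_k)
        if np.linalg.norm(x_new - x) < eps or np.linalg.norm(grad(x_new)) <= eps:
            return x_new
        x = x_new
    return x

# Example: a quadratic is minimized in a single Newton step.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: Q @ x - b
hess = lambda x: Q
print(newton_method(grad, hess, np.zeros(2)))   # approx. Q^{-1} b
```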

Conjugate Gradient Method

This method is used for solving problems of the following type −

$\min f\left ( x \right )=\frac{1}{2}x^TQx-b^Tx$

where $Q$ is a positive definite $n \times n$ matrix and $b$ is a constant vector.

Given $x_0$ and $\varepsilon$, compute $g_0=Qx_0-b$ and set $d_0=-g_0$.

Then for $k=0,1,2,\ldots$ repeat the following steps −

Set $\alpha_k=\frac{g_{k}^{T}g_k}{d_{k}^{T}Qd_k}$

Compute $x_{k+1}=x_k+\alpha_kd_k$

Set $g_{k+1}=g_k+\alpha_kQd_k$

Compute $\beta_k=\frac{g_{k+1}^{T}g_{k+1}}{g_{k}^{T}g_k}$

Set $d_{k+1}=-g_{k+1}+\beta_kd_k$.
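A Python sketch of these updates follows; the helper name `conjugate_gradient` and the small test problem are illustrative assumptions.

```python
import numpy as np

def conjugate_gradient(Q, b, x0, eps=1e-10):
    """Conjugate gradient for min (1/2) x^T Q x - b^T x with Q positive
    definite, following the update rules listed above."""
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b                       # g_0 = Q x_0 - b
    d = -g                              # d_0 = -g_0
    for _ in range(len(b)):             # at most n steps in exact arithmetic
        if np.linalg.norm(g) <= eps:
            break
        alpha = (g @ g) / (d @ (Q @ d))     # alpha_k
        x = x + alpha * d                   # x_{k+1} = x_k + alpha_k d_k
        g_new = g + alpha * (Q @ d)         # g_{k+1} = g_k + alpha_k Q d_k
        beta = (g_new @ g_new) / (g @ g)    # beta_k
        d = -g_new + beta * d               # d_{k+1} = -g_{k+1} + beta_k d_k
        g = g_new
    return x

# Example: the minimizer satisfies Q x = b.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(Q, b, np.zeros(2)))    # approx. solve(Q, b)
```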
