Convex Optimization - Programming Problem



There are four types of convex programming problems:

Step 1 − $\min f(x)$, where $x \in S$, S is a non-empty convex set in $\mathbb{R}^n$ and $f(x)$ is a convex function.

Step 2 − $\min f(x), x \in \mathbb{R}^n$ subject to

$g_i(x) \leq 0, 1 \leq i \leq m_1$ and $g_i(x)$ is a convex function.

$g_i(x) \geq 0, m_1+1 \leq i \leq m_2$ and $g_i(x)$ is a concave function.

$g_i(x)=0, m_2+1 \leq i \leq m$ and $g_i(x)$ is a linear function.

where $f(x)$ is a convex function.

Step 3 − $\max f(x)$, where $x \in S$, S is a non-empty convex set in $\mathbb{R}^n$ and $f(x)$ is a concave function.

Step 4 − $\max f(x), x \in \mathbb{R}^n$ subject to

$g_i(x) \leq 0, 1 \leq i \leq m_1$ and $g_i(x)$ is a convex function.

$g_i(x) \geq 0, m_1+1 \leq i \leq m_2$ and $g_i(x)$ is a concave function.

$g_i(x)=0, m_2+1 \leq i \leq m$ and $g_i(x)$ is a linear function.

where $f(x)$ is a concave function.
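As a quick numerical illustration of the Step 2 form, the sketch below minimizes a convex quadratic subject to one convex (here linear) inequality constraint, using a hand-rolled projected gradient method. The objective, constraint, starting point, and step size are illustrative assumptions, not part of the text above.

```python
# Illustrative Step-2-style convex program:
#   minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2   (convex objective)
#   subject to g(x) = x1 + x2 - 2 <= 0        (convex constraint)
# solved with projected gradient descent onto the half-space {x : a.x <= b}.

def grad_f(x):
    """Gradient of the convex objective f."""
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]

def project_halfspace(x, a, b):
    """Euclidean projection of x onto {y : a.y <= b}."""
    ax = a[0] * x[0] + a[1] * x[1]
    if ax <= b:
        return x
    scale = (ax - b) / (a[0] ** 2 + a[1] ** 2)
    return [x[0] - scale * a[0], x[1] - scale * a[1]]

a, b = [1.0, 1.0], 2.0   # constraint x1 + x2 <= 2
x = [0.0, 0.0]           # feasible starting point
step = 0.1               # step size <= 1/L, where L = 2 is the gradient's Lipschitz constant
for _ in range(500):
    g = grad_f(x)
    x = project_halfspace([x[0] - step * g[0], x[1] - step * g[1]], a, b)

print(x)  # converges to the constrained minimizer (0.5, 1.5)
```

The unconstrained minimizer (1, 2) violates the constraint, so the iterates settle on the boundary point (0.5, 1.5), the projection of (1, 2) onto the constraint line.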

Cone of Feasible Directions

Let S be a non-empty set in $\mathbb{R}^n$ and let $\hat{x} \in Closure(S)$, then the cone of feasible directions of S at $\hat{x}$, denoted by D, is defined as $D=\{d:d \neq 0, \hat{x}+\lambda d \in S, \lambda \in (0, \delta), \delta > 0\}$

Each non-zero vector $d \in D$ is called a feasible direction.

For a given function $f:\mathbb{R}^n \rightarrow \mathbb{R}$, the cone of improving directions at $\hat{x}$ is denoted by F and is given by

$$F=\{d:f(\hat{x}+\lambda d) < f(\hat{x}), \lambda \in (0, \delta), \delta > 0\}$$

Each direction $d \in F$ is called an improving direction or descent direction of f at $\hat{x}$.
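Both cones can be probed numerically by sampling small step sizes $\lambda$. In the sketch below, the set S, the objective f, and the test point are illustrative assumptions: a direction d is accepted as feasible if $\hat{x}+\lambda d$ stays in S, and as improving if f strictly decreases along d.

```python
# Illustrative example: S = {x in R^2 : x >= 0}, f(x) = x1^2 + (x2 - 2)^2,
# and the point x_hat = (0, 1), which lies on the boundary of S.

def in_S(x):
    """Membership test for S = {x : x1 >= 0, x2 >= 0}."""
    return x[0] >= 0.0 and x[1] >= 0.0

def f(x):
    return x[0] ** 2 + (x[1] - 2.0) ** 2

x_hat = [0.0, 1.0]
lambdas = [1e-4, 1e-3, 1e-2]  # small positive step sizes

def is_feasible_direction(d):
    """d is in D if x_hat + lambda*d stays in S for all small lambda > 0."""
    return all(in_S([x_hat[0] + l * d[0], x_hat[1] + l * d[1]]) for l in lambdas)

def is_improving_direction(d):
    """d is in F if f(x_hat + lambda*d) < f(x_hat) for all small lambda > 0."""
    return all(f([x_hat[0] + l * d[0], x_hat[1] + l * d[1]]) < f(x_hat)
               for l in lambdas)

print(is_feasible_direction([1.0, 1.0]), is_improving_direction([1.0, 1.0]))  # True True
print(is_feasible_direction([-1.0, 0.0]))   # False: immediately leaves S
print(is_improving_direction([0.0, -1.0]))  # False: f increases along this d
```

Note that (0, −1) is feasible but not improving, while (−1, 0) is improving but not feasible, so neither cone contains the other.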

Theorem

Necessary Condition

Consider the problem $\min f(x)$ such that $x \in S$ where S is a non-empty set in $\mathbb{R}^n$. Suppose f is differentiable at a point $\hat{x} \in S$. If $\hat{x}$ is a local optimal solution, then $F_0 \cap D=\phi$, where $F_0=\{d:\bigtriangledown f(\hat{x})^T d < 0\}$ and D is the cone of feasible directions of S at $\hat{x}$.

Sufficient Condition

If $F_0 \cap D=\phi$, f is a pseudoconvex function at $\hat{x}$, and there exists a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x}), \varepsilon > 0$, such that $d=x-\hat{x} \in D$ for any $x \in S \cap N_\varepsilon(\hat{x})$, then $\hat{x}$ is a local optimal solution.
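The necessary condition can be checked empirically at a known minimizer: among many randomly sampled directions, none should be simultaneously improving to first order (in $F_0$) and feasible (in D). The problem below, with minimizer $\hat{x}=(1,0)$, is an illustrative assumption.

```python
import random

# Illustrative problem: min f(x) = (x1 - 2)^2 + x2^2 over S = {x : x1 <= 1}.
# The minimizer is x_hat = (1, 0), where grad f(x_hat) = (-2, 0).

x_hat = [1.0, 0.0]
grad_f_hat = [-2.0, 0.0]

def in_F0(d):
    """d is in F_0 if grad f(x_hat)^T d < 0."""
    return grad_f_hat[0] * d[0] + grad_f_hat[1] * d[1] < 0.0

def in_D(d):
    """d is in D if x_hat + lambda*d satisfies x1 <= 1 for small lambda > 0."""
    return all(x_hat[0] + l * d[0] <= 1.0 for l in (1e-4, 1e-3, 1e-2))

random.seed(0)
violations = 0
for _ in range(1000):
    d = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    if in_F0(d) and in_D(d):
        violations += 1

print(violations)  # 0: F_0 and D do not intersect at the minimizer
```

Here $F_0$ requires $d_1 > 0$ while D requires $d_1 \leq 0$, so no sampled direction can lie in both, matching the theorem.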

Proof

Necessary Condition

Let $F_0 \cap D \neq \phi$, i.e., there exists a $d \in F_0 \cap D$ such that $d \in F_0$ and $d \in D$.

Since $d \in D$, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in S, \lambda \in (0, \delta_1)$.

Since $d \in F_0$, therefore $\bigtriangledown f(\hat{x})^T d < 0$.

Thus, there exists $\delta_2 > 0$ such that $f(\hat{x}+\lambda d) < f(\hat{x}), \lambda \in (0, \delta_2)$.

Let $\delta=\min \{\delta_1, \delta_2\}$

Then $\hat{x}+\lambda d \in S$ and $f(\hat{x}+\lambda d) < f(\hat{x})$ for $\lambda \in (0, \delta)$.

But $\hat{x}$ is a local optimal solution, which is a contradiction.

Thus $F_0 \cap D=\phi$

Sufficient Condition

Let $F_0 \cap D=\phi$ and let f be a pseudoconvex function.

Let there exist a neighbourhood of $\hat{x}$, $N_\varepsilon(\hat{x})$, such that $d=x-\hat{x} \in D$ for any $x \in S \cap N_\varepsilon(\hat{x})$.

Suppose $\hat{x}$ is not a local optimal solution.

Thus, there exists $\bar{x} \in S \cap N_\varepsilon(\hat{x})$ such that $f(\bar{x}) < f(\hat{x})$.

By the assumption on $S \cap N_\varepsilon(\hat{x})$, $d=(\bar{x}-\hat{x}) \in D$.

By pseudoconvexity of f,

$$f(\hat{x}) > f(\bar{x}) \Rightarrow \bigtriangledown f(\hat{x})^T (\bar{x}-\hat{x}) < 0$$

$\Rightarrow \bigtriangledown f(\hat{x})^T d < 0$

$\Rightarrow d \in F_0$

Hence $F_0 \cap D \neq \phi$,

which is a contradiction.

Therefore, $\hat{x}$ is a local optimal solution.

Consider the following problem: $\min f(x)$ where $x \in X$ such that $g_i(x) \leq 0, i=1,2,...,m$

$f:X \rightarrow \mathbb{R}, g_i:X \rightarrow \mathbb{R}, i=1,2,...,m$, and X is an open set in $\mathbb{R}^n$.

Let $S=\{x:g_i(x) \leq 0, \forall i\}$

Let $\hat{x} \in X$ and let $M=\{1,2,...,m\}$ be the index set of all constraints.

Let $I=\{i:g_i(\hat{x})=0, i \in M\}$, where I is called the index set of all active constraints at $\hat{x}$.

Let $J=\{i:g_i(\hat{x}) < 0, i \in M\}$ be the index set of all inactive constraints at $\hat{x}$.

Let $F_0=\{d \in \mathbb{R}^n:\bigtriangledown f(\hat{x})^T d < 0\}$

Let $G_0=\{d \in \mathbb{R}^n:\bigtriangledown g_I(\hat{x})^T d < 0\}$

or $G_0=\{d \in \mathbb{R}^n:\bigtriangledown g_i(\hat{x})^T d < 0, \forall i \in I\}$
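These index sets and cones are easy to compute for a concrete instance. In the sketch below (the objective, constraints, and the point $\hat{x}$ are illustrative assumptions), the active set I, inactive set J, and membership tests for $F_0$ and $G_0$ follow the definitions directly, with a small tolerance standing in for exact equality.

```python
# Illustrative instance in R^2 at the feasible point x_hat = (0, 1):
#   f(x)  = x1 + x2            grad f(x_hat)  = (1, 1)
#   g1(x) = x1^2 + x2^2 - 1    grad g1(x_hat) = (0, 2)   (active:   g1 = 0)
#   g2(x) = x1 - 2             grad g2(x_hat) = (1, 0)   (inactive: g2 = -2)

x_hat = [0.0, 1.0]
g_vals = [x_hat[0] ** 2 + x_hat[1] ** 2 - 1.0,  # g1(x_hat)
          x_hat[0] - 2.0]                        # g2(x_hat)
g_grads = [[2.0 * x_hat[0], 2.0 * x_hat[1]],     # grad g1(x_hat)
           [1.0, 0.0]]                           # grad g2(x_hat)
f_grad = [1.0, 1.0]

TOL = 1e-9
I = [i for i, v in enumerate(g_vals) if abs(v) <= TOL]  # active constraint indices
J = [i for i, v in enumerate(g_vals) if v < -TOL]       # inactive constraint indices

def in_F0(d):
    """grad f(x_hat)^T d < 0"""
    return f_grad[0] * d[0] + f_grad[1] * d[1] < 0.0

def in_G0(d):
    """grad g_i(x_hat)^T d < 0 for every active constraint i in I"""
    return all(g_grads[i][0] * d[0] + g_grads[i][1] * d[1] < 0.0 for i in I)

print(I, J)                                       # [0] [1]
print(in_F0([-1.0, -1.0]), in_G0([-1.0, -1.0]))   # True True
```

The direction (−1, −1) lies in both cones here, so $F_0 \cap G_0 \neq \phi$ and, consistent with the theorem below, $\hat{x}=(0,1)$ is not a local minimum of this instance.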

Lemma

Let $S=\{x \in X:g_i(x) \leq 0, \forall i \in M\}$, where X is a non-empty open set in $\mathbb{R}^n$. Let $\hat{x} \in S$, let $g_i, i \in I$, be differentiable at $\hat{x}$, and let $g_i, i \in J$, be continuous at $\hat{x}$. Then $G_0 \subseteq D$.

Proof

Let $d \in G_0$.

Since $\hat{x} \in X$ and X is an open set, there exists $\delta_1 > 0$ such that $\hat{x}+\lambda d \in X$ for $\lambda \in (0, \delta_1)$.

Also, since $g_i(\hat{x}) < 0$ and $g_i$ is continuous at $\hat{x}$ for $i \in J$, there exists $\delta_2 > 0$ such that $g_i(\hat{x}+\lambda d) < 0$ for $\lambda \in (0, \delta_2), i \in J$.

Since $d \in G_0$, therefore $\bigtriangledown g_i(\hat{x})^T d < 0, i \in I$; thus there exists $\delta_3 > 0$ such that $g_i(\hat{x}+\lambda d) < g_i(\hat{x})=0$ for $\lambda \in (0, \delta_3), i \in I$.

Let $\delta=\min \{\delta_1, \delta_2, \delta_3\}$

Therefore, $\hat{x}+\lambda d \in X$ and $g_i(\hat{x}+\lambda d) < 0, i \in M$, for $\lambda \in (0, \delta)$

$\Rightarrow \hat{x}+\lambda d \in S$

$\Rightarrow d \in D$

$\Rightarrow G_0 \subseteq D$

Hence proved.
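The lemma's conclusion $G_0 \subseteq D$ can be spot-checked numerically: every sampled direction in $G_0$ should keep $\hat{x}+\lambda d$ inside S for small $\lambda$. The constraint and boundary point below are illustrative assumptions.

```python
import random

# Illustrative set S = {x : g(x) = x1^2 + x2^2 - 1 <= 0} with the boundary
# point x_hat = (1, 0), where grad g(x_hat) = (2, 0), so
# G_0 = {d : 2*d1 < 0} = {d : d1 < 0}.

x_hat = [1.0, 0.0]

def g(x):
    return x[0] ** 2 + x[1] ** 2 - 1.0

random.seed(1)
failures = 0
for _ in range(1000):
    # Sample a direction in G_0 (first component strictly negative,
    # bounded away from zero so the small lambdas below suffice).
    d = [-random.uniform(0.1, 1.0), random.uniform(-1.0, 1.0)]
    # Check feasibility of x_hat + lambda*d for small positive lambda.
    for lam in (1e-6, 1e-5, 1e-4):
        if g([x_hat[0] + lam * d[0], x_hat[1] + lam * d[1]]) > 0.0:
            failures += 1

print(failures)  # 0: every sampled direction in G_0 is a feasible direction
```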

Theorem

Necessary Condition

Let f and $g_i, i \in I$, be differentiable at $\hat{x} \in S$, and let $g_j, j \in J$, be continuous at $\hat{x}$. If $\hat{x}$ is a local minimum of f over S, then $F_0 \cap G_0=\phi$.

Sufficient Condition

If $F_0 \cap G_0=\phi$, f is a pseudoconvex function at $\hat{x}$, and $g_i, i \in I$, are strictly pseudoconvex functions over some $\varepsilon$-neighbourhood of $\hat{x}$, then $\hat{x}$ is a local optimal solution.

Remarks

  • Let $\hat{x}$ be a feasible point such that $\bigtriangledown f(\hat{x})=0$, then $F_0=\phi$. Thus, $F_0 \cap G_0=\phi$, but $\hat{x}$ need not be an optimal solution.

  • But if $\bigtriangledown g_i(\hat{x})=0$ for some $i \in I$, then $G_0=\phi$, thus $F_0 \cap G_0=\phi$ trivially.

  • Consider the problem: $\min f(x)$ such that $g(x)=0$.

    Since $g(x)=0$, write it as $g_1(x)=g(x) \leq 0$ and $g_2(x)=-g(x) \leq 0$.

    Let $\hat{x} \in S$, then $g_1(\hat{x})=0$ and $g_2(\hat{x})=0$.

    But $\bigtriangledown g_1(\hat{x})=-\bigtriangledown g_2(\hat{x})$

    Thus, $G_0=\phi$ and $F_0 \cap G_0=\phi$.
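The equality-constraint remark can also be verified numerically: splitting $g(x)=0$ into $g_1=g \leq 0$ and $g_2=-g \leq 0$ gives opposite gradients, so no direction makes both inner products negative and $G_0$ is empty. The specific g below is an illustrative assumption.

```python
import random

# Illustrative equality constraint g(x) = x1 + x2 - 1 = 0, rewritten as
#   g1(x) = g(x) <= 0   and   g2(x) = -g(x) <= 0.
# At any feasible point, grad g1 = (1, 1) and grad g2 = (-1, -1) = -grad g1,
# so both constraints are active and
# G_0 = {d : grad g1.d < 0 and grad g2.d < 0} is empty.

grad_g1 = [1.0, 1.0]
grad_g2 = [-1.0, -1.0]

random.seed(2)
members = 0
for _ in range(1000):
    d = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
    s1 = grad_g1[0] * d[0] + grad_g1[1] * d[1]
    s2 = grad_g2[0] * d[0] + grad_g2[1] * d[1]
    if s1 < 0.0 and s2 < 0.0:  # s2 = -s1, so both can never be negative
        members += 1

print(members)  # 0: G_0 is empty, hence F_0 ∩ G_0 = φ regardless of f
```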
