On the linear convergence of descent methods for convex essentially smooth minimization. (English) Zbl 0756.90084

The authors consider the following minimization problem: (P) \(\min f(x)\), s.t. \(x\in X\), where \(X\) is a polyhedral set in \(R^n\) and \(f(x)=g(Ex)+\langle q,x\rangle\), with \(g: R^m\to (-\infty,\infty]\), \(E\) an \(m\times n\) matrix, and \(q\) a vector in \(R^n\). The following assumptions are made on the function \(g\): (a) the effective domain of \(g\), denoted by \(C_g\), is non-empty and open; (b) \(g\) is strictly convex and twice continuously differentiable on \(C_g\); (c) \(g(t)\to\infty\) as \(t\) approaches any boundary point of \(C_g\).
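For concreteness, a standard function satisfying (a)–(c) (an illustrative example, not one drawn from the paper) is the logarithmic barrier \(g(t)=-\sum_{j=1}^m\log t_j\): its effective domain \(C_g=\{t\in R^m: t_j>0\ \text{for all}\ j\}\) is non-empty and open, \(g\) is strictly convex and twice continuously differentiable there, and \(g(t)\to\infty\) as \(t\) approaches the boundary of \(C_g\).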
In the terminology of R. T. Rockafellar [see “Convex analysis.” Princeton, N.J.: Princeton University Press (1970; Zbl 0193.18401)], \(g\) is said to be a strictly convex essentially smooth function. Under additional assumptions on \(f\) and \(X\), the authors introduce general conditions under which a sequence of feasible points converges at least linearly to an optimal solution of (P). Further, it is shown that these conditions are satisfied by the gradient projection algorithm and by the matrix splitting algorithm using regular splitting. Possible extensions of the convergence conditions are discussed.
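
As an illustration of the gradient projection method for (P), here is a minimal sketch, assuming \(X\) is a box (so the projection reduces to coordinatewise clipping), a fixed step size, and the hypothetical choice \(g(t)=\frac12\|t\|^2\) (strictly convex and twice continuously differentiable on all of \(R^m\), so condition (c) holds vacuously); the function and parameter names are illustrative and not taken from the paper.

import numpy as np

def gradient_projection(E, q, grad_g, x0, lo, hi, step=1e-2, tol=1e-8, max_iter=10000):
    # Minimize f(x) = g(Ex) + <q, x> over the box X = [lo, hi]^n.
    # For a general polyhedral X the projection would require solving a QP;
    # a box keeps the sketch self-contained.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        grad_f = E.T @ grad_g(E @ x) + q            # chain rule: grad f(x) = E^T grad g(Ex) + q
        x_new = np.clip(x - step * grad_f, lo, hi)  # projected gradient step
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Hypothetical instance: g(t) = 0.5*||t||^2, so grad_g(t) = t, and f is
# strongly convex whenever E has full column rank.
rng = np.random.default_rng(0)
E = rng.standard_normal((5, 3))
q = rng.standard_normal(3)
x_opt = gradient_projection(E, q, grad_g=lambda t: t, x0=np.zeros(3), lo=-1.0, hi=1.0)

With a step size below \(2/\|E\|^2\) (the Lipschitz constant of \(\nabla f\) for this quadratic \(g\)) the iterates converge; the at least linear rate asserted above holds under the paper's additional assumptions on \(f\) and \(X\).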

MSC:

90C30 Nonlinear programming
65K10 Numerical optimization and variational techniques

Citations:

Zbl 0193.18401