
An inexact first-order method for constrained nonlinear optimization. (English) Zbl 1501.90096

Summary: The primary focus of this paper is the design of an inexact first-order algorithm for solving constrained nonlinear optimization problems. By controlling the inexactness of the subproblem solution, the computational cost of each iteration can be reduced significantly. A penalty parameter updating strategy applied while solving the subproblem enables the algorithm to detect infeasibility automatically. Global convergence is proved for both the feasible and the infeasible case, and a complexity analysis for the KKT residual is derived under mild assumptions. Numerical experiments demonstrate the ability of the proposed algorithm to find inexact optimal solutions rapidly and at low computational cost.
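To make the mechanics concrete, below is a minimal Python sketch of a generic penalty loop of the kind the summary describes: each subproblem is solved only inexactly by first-order steps under a controlled inner tolerance, and the penalty parameter is raised whenever an (inexactly) solved subproblem still leaves the iterate infeasible, which also yields a crude infeasibility flag. This is an illustration of the general technique only, not the algorithm of the paper; the quadratic penalty, the step size, and all names and tolerances are assumptions chosen for the example.

import numpy as np

def inexact_penalty_method(grad_f, c, jac_c, x0, rho=1.0, rho_max=1e8,
                           tol=5e-3, max_outer=500, inner_iters=100):
    # Illustrative sketch (not the paper's method): minimize f(x) subject to
    # c(x) <= 0 via the quadratic penalty
    #     phi(x; rho) = f(x) + (rho/2) * ||max(c(x), 0)||^2,
    # solving each penalty subproblem only inexactly with gradient steps.
    x = np.asarray(x0, dtype=float)
    for k in range(max_outer):
        inner_tol = max(tol, 0.5 ** k)         # controlled inexactness
        for _ in range(inner_iters):
            r = np.maximum(c(x), 0.0)          # constraint violations
            # With multiplier estimate y = rho * r, this is both the penalty
            # gradient and the KKT stationarity residual grad_f + J^T y.
            g = grad_f(x) + rho * (jac_c(x).T @ r)
            if np.linalg.norm(g) <= inner_tol:
                break                          # subproblem solved inexactly
            # Conservative fixed step; problem-dependent in practice
            # (safe for the toy example below).
            x = x - (0.5 / (1.0 + rho)) * g
        viol = np.maximum(c(x), 0.0).sum()
        if np.linalg.norm(g) <= tol and viol <= tol:
            return x, rho, "approximate KKT point"
        if np.linalg.norm(g) <= inner_tol and viol > tol:
            if rho >= rho_max:
                # Penalty capped, subproblem solved, still infeasible:
                # flag the problem as (locally) infeasible.
                return x, rho, "declared infeasible"
            rho = min(10.0 * rho, rho_max)     # steer the penalty upward
    return x, rho, "iteration limit reached"

# Toy usage: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 <= 1, solution (1, 0).
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])
c = lambda x: np.array([x[0] + x[1] - 1.0])
jac_c = lambda x: np.array([[1.0, 1.0]])
x, rho, status = inexact_penalty_method(grad_f, c, jac_c, x0=[0.0, 0.0])
print(status, x, rho)

On the toy problem the loop steers the penalty parameter upward until the iterate is both nearly stationary and nearly feasible, mimicking the automatic penalty steering and infeasibility detection described above.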

MSC:

90C30 Nonlinear programming

Software:

CUTEst; AdaGrad; ALGENCAN

References:

[1] Absil, P.-A.; Mahony, R.; Sepulchre, R., Optimization algorithms on matrix manifolds (2008), Princeton University Press, Princeton, NJ · Zbl 1147.65043
[2] Andreani, R.; Martínez, J. M.; Ramos, A.; Silva, P. J. S., A cone-continuity constraint qualification and algorithmic consequences, SIAM J. Optim., 26, 96-110 (2016) · Zbl 1329.90162 · doi:10.1137/15M1008488
[3] Baker, T. E.; Lasdon, L. S., Successive linear programming at Exxon, Manage. Sci., 31, 264-274 (1985) · Zbl 0608.90090 · doi:10.1287/mnsc.31.3.264
[4] Beck, A.; Teboulle, M., Mirror descent and nonlinear projected subgradient methods for convex optimization, Oper. Res. Lett., 31, 167-175 (2003) · Zbl 1046.90057 · doi:10.1016/S0167-6377(02)00231-6
[5] Beck, A.; Teboulle, M., A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2, 183-202 (2009) · Zbl 1175.94009 · doi:10.1137/080716542
[6] Birgin, E. G.; Martínez, J. M., Practical Augmented Lagrangian Methods for Constrained Optimization (2014), Society for Industrial and Applied Mathematics, Philadelphia, PA · Zbl 1339.90312
[7] Bottou, L., Stochastic learning, in Advanced lectures on machine learning, Springer, 2004, pp. 146-168. · Zbl 1120.68426
[8] Bottou, L., Large-scale machine learning with stochastic gradient descent, in Proceedings of COMPSTAT’2010, Springer, 2010, pp. 177-186. · Zbl 1436.68293
[9] Bottou, L., Stochastic gradient descent tricks, in Neural networks: Tricks of the trade, Springer, 2012, pp. 421-436.
[10] Burke, J., A sequential quadratic programming method for potentially infeasible mathematical programs, J. Math. Anal. Appl., 139, 319-351 (1989) · Zbl 0719.90066 · doi:10.1016/0022-247X(89)90111-X
[11] Burke, J. V.; Curtis, F. E.; Wang, H., A sequential quadratic optimization algorithm with rapid infeasibility detection, SIAM J. Optim., 24, 839-872 (2014) · Zbl 1301.49069 · doi:10.1137/120880045
[12] Burke, J.V., Curtis, F.E., Wang, H., and Wang, J., A dynamic penalty parameter updating strategy for matrix-free sequential quadratic optimization, Preprint (2018). Available at arXiv:1803.09224.
[13] Byrd, R. H.; Curtis, F. E.; Nocedal, J., An inexact SQP method for equality constrained optimization, SIAM J. Optim., 19, 351-369 (2008) · Zbl 1158.49035 · doi:10.1137/060674004
[14] Byrd, R. H.; Curtis, F. E.; Nocedal, J., Infeasibility detection and SQP methods for nonlinear optimization, SIAM J. Optim., 20, 2281-2299 (2010) · Zbl 1211.90179 · doi:10.1137/080738222
[15] Byrd, R. H.; Gould, N. I.; Nocedal, J.; Waltz, R. A., An algorithm for nonlinear optimization using linear programming and equality constrained subproblems, Math. Program., 100, 27-48 (2003) · Zbl 1146.90513 · doi:10.1007/s10107-003-0485-4
[16] Byrd, R. H.; Nocedal, J.; Waltz, R. A., Steering exact penalty methods for nonlinear programming, Optim. Methods Softw., 23, 197-213 (2008) · Zbl 1211.90224 · doi:10.1080/10556780701394169
[17] Clarke, F. H., Generalized gradients and applications, Trans. Am. Math. Soc., 205, 247-262 (1975) · Zbl 0307.26012 · doi:10.1090/S0002-9947-1975-0367131-6
[18] Daubechies, I.; Defrise, M.; De Mol, C., An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Commun. Pure Appl. Math., 57, 1413-1457 (2004) · Zbl 1077.65055 · doi:10.1002/cpa.20042
[19] Duchi, J.; Hazan, E.; Singer, Y., Adaptive subgradient methods for online learning and stochastic optimization, J. Mach. Learn. Res., 12, 2121-2159 (2011) · Zbl 1280.68164
[20] Fletcher, R., Practical methods of optimization (2013), Wiley, Hoboken, NJ · Zbl 0905.65002
[21] Fletcher, R.; de la Maza, E. S., Nonlinear programming and nonsmooth optimization by successive linear programming, Math. Program., 43, 235-256 (1989) · Zbl 0724.90062 · doi:10.1007/BF01582292
[22] Gould, N. I. M.; Orban, D.; Toint, P. L., CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization, Comput. Optim. Appl., 60, 545-557 (2015) · Zbl 1325.90004 · doi:10.1007/s10589-014-9687-3
[23] Han, S. P., A globally convergent method for nonlinear programming, J. Optim. Theory. Appl., 22, 297-309 (1977) · Zbl 0336.90046 · doi:10.1007/BF00932858
[24] Han, S. P.; Mangasarian, O. L., Exact penalty functions in nonlinear programming, Math. Program., 17, 251-269 (1979) · Zbl 0424.90057 · doi:10.1007/BF01588250
[25] Hock, W.; Schittkowski, K., Test examples for nonlinear programming codes, J. Optim. Theory. Appl., 30, 127-129 (1980) · Zbl 0393.90072 · doi:10.1007/BF00934594
[26] Johnson, R. and Zhang, T., Accelerating stochastic gradient descent using predictive variance reduction, in Advances in Neural Information Processing Systems, 2013, pp. 315-323.
[27] Juditsky, A.; Nemirovski, A.; Tauvel, C., Solving variational inequalities with stochastic mirror-prox algorithm, Stochast. Syst., 1, 17-58 (2011) · Zbl 1291.49006 · doi:10.1287/10-SSY011
[28] Karush, W., Minima of functions of several variables with inequalities as side conditions, in Traces and Emergence of Nonlinear Programming, Springer, 2014, pp. 217-245.
[29] Kuhn, H.W. and Tucker, A.W., Nonlinear programming, in Traces and Emergence of Nonlinear Programming, Springer, 2014, pp. 247-258.
[30] Lasdon, L.; Waren, A.; Sarkar, S.; Palacios, F., Solving the pooling problem using generalized reduced gradient and successive linear programming algorithms, ACM SIGMAP Bull., 27, 9-15 (1979) · doi:10.1145/1111246.1111247
[31] Luss, R.; Teboulle, M., Conditional gradient algorithms for rank-one matrix approximations with a sparsity constraint, SIAM Rev., 55, 65-98 (2013) · Zbl 1263.90094 · doi:10.1137/110839072
[32] Oberlin, C.; Wright, S. J., Active set identification in nonlinear programming, SIAM. J. Optim., 17, 577-605 (2006) · Zbl 1174.90813 · doi:10.1137/050626776
[33] Pshenichnyj, B. N., The Linearization Method for Constrained Optimization, Springer Series in Computational Mathematics, Vol. 22, Springer-Verlag, 1994. Translated from the 1983 Russian original by Stephen S. Wilson. · Zbl 0804.49027
[34] Udriste, C., Convex functions and optimization methods on Riemannian manifolds, Math. Appl., 297 (1994), Kluwer Academic Publishers, Dordrecht · Zbl 0932.53003
[35] Xiao, L.; Zhang, T., A proximal stochastic gradient method with progressive variance reduction, SIAM J. Optim., 24, 2057-2075 (2014) · Zbl 1321.65016 · doi:10.1137/140961791