Abstract
Many practical problems require solving large-scale constrained optimization problems in which preserving feasibility is a key issue and evaluating the objective function is very expensive. In such cases it is mandatory to start with a feasible approximation of the solution, and obtaining this approximation should not require objective function evaluations. The need to solve problems of this type motivated us to revisit the classical barrier approach for nonlinear optimization and to provide a careful implementation of a modern version of the method; this is the main objective of the present paper. For completeness, we provide global convergence results and comparative numerical experiments with one of the state-of-the-art interior-point solvers for continuous optimization.
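For readers unfamiliar with the approach revisited in the paper, the following is a minimal sketch of the classical logarithmic-barrier scheme (Frisch 1955; Fiacco and McCormick 1968) for the simple model case of nonnegativity constraints: each outer iteration approximately minimizes \(f(x) - \mu \sum_i \log x_i\) and then shrinks \(\mu\). The function `barrier_minimize`, its parameters, and the one-dimensional test problem are illustrative assumptions for exposition, not the algorithm implemented in the paper.

```python
import numpy as np

def barrier_minimize(f, grad_f, x0, mu0=1.0, shrink=0.1, n_outer=6,
                     n_inner=200, tol=1e-8):
    """Sketch: minimize f(x) subject to x > 0 with a log barrier.

    Each outer iteration minimizes phi(x) = f(x) - mu * sum(log(x))
    by gradient descent with a backtracking line search that keeps x
    strictly feasible, then shrinks the barrier parameter mu.
    """
    x = np.asarray(x0, dtype=float)
    mu = mu0
    for _ in range(n_outer):
        for _ in range(n_inner):
            g = grad_f(x) - mu / x          # gradient of the barrier function
            if np.linalg.norm(g) < tol:
                break
            phi = f(x) - mu * np.sum(np.log(x))
            t = 1.0
            while True:                     # backtrack: stay feasible, decrease phi
                x_new = x - t * g
                if np.all(x_new > 0):
                    phi_new = f(x_new) - mu * np.sum(np.log(x_new))
                    if phi_new < phi:
                        break
                t *= 0.5
                if t < 1e-16:               # no progress possible at this mu
                    x_new = x
                    break
            x = x_new
        mu *= shrink
    return x

# Illustrative problem: min (x + 1)^2 subject to x >= 0; the solution x* = 0
# lies on the boundary, so the barrier iterates approach it from the interior.
f = lambda x: float((x[0] + 1.0) ** 2)
grad_f = lambda x: np.array([2.0 * (x[0] + 1.0)])
x_star = barrier_minimize(f, grad_f, x0=[1.0])
```

Note that every iterate remains strictly feasible, which is precisely the property the abstract emphasizes for problems where feasibility must be preserved and the objective is expensive to evaluate.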
Notes
Available at http://www.hsl.rl.ac.uk/catalogue/mc58.html.
Ipopt may dynamically relax the bound constraints during the optimization process (Wächter and Biegler 2006, §3.5), starting from a relative relaxation factor with initial value \(10^{-8}\).
This means that, in problem (1), A has more rows than columns. Lcmin eliminates redundant constraints, which makes A have full row rank in most cases; the exceptions are those in which the feasible set is empty. In these cases Ipopt does not start the optimization, stopping with the output Problem has too few degrees of freedom.
References
Andreani R, Birgin EG, Martínez JM, Schuverdt ML (2008) On augmented Lagrangian methods with general lower-level constraints. SIAM J Optim 18(4):1286–1309
Birgin EG, Martínez JM (2014) Practical augmented Lagrangian methods for constrained optimization. Society for Industrial and Applied Mathematics, Philadelphia
Chen L, Goldfarb D (2006) Interior-point \(\ell _2\)-penalty methods for nonlinear programming with strong global convergence properties. Math Progr 108(1):1–36
Dolan ED, Moré JJ (2002) Benchmarking optimization software with performance profiles. Math Progr 91(2):201–213
Fiacco AV, McCormick GP (1968) Nonlinear programming: sequential unconstrained minimization techniques. Wiley, New York (reprinted by SIAM, Philadelphia, 1990)
Frisch KR (1955) The logarithmic potential method of convex programming. Tech. rep., University Institute of Economics, Oslo, Norway
Gill PE, Murray W, Saunders MA, Tomlin JA, Wright MH (1986) On projected Newton barrier methods for linear programming and an equivalence to Karmarkar’s projective method. Math Progr 36(2):183–209
Gould NIM, Orban D, Toint PL (2015) CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization. Comput Optim Appl 60(3):545–557
Karmarkar N (1984) A new polynomial-time algorithm for linear programming. In: Proceedings of the sixteenth annual ACM symposium on theory of computing, STOC '84. ACM, New York, pp 302–311
Nocedal J, Wright SJ (2006) Numerical optimization, 2nd edn. Springer, New York
Wächter A, Biegler LT (2006) On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math Progr 106(1):25–57
Acknowledgements
The authors are thankful to the anonymous reviewers for providing insightful suggestions which improved the presentation of this work.
This work has been partially supported by FAPESP (Grants 2012/05725-0, 2013/07375-0, and 2018/24293-0) and CNPq (Grants 302915/2016-8, 302538/2019-4, and 302682/2019-8).
Cite this article
Birgin, E.G., Gardenghi, J.L., Martínez, J.M. et al. On the solution of linearly constrained optimization problems by means of barrier algorithms. TOP 29, 417–441 (2021). https://doi.org/10.1007/s11750-020-00559-w