A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.



GSoC 2017 - Submission


During the last few months I have worked on my Google Summer of Code (GSoC) project, which consists of implementing a large-scale optimization algorithm to be integrated into Scipy.

Numerical Results


In this blog post, I will present numerical results obtained by solving problems from the CUTEst collection [1] using the algorithms implemented during my GSoC project.

Usage Example


In this blog post, I will provide a simple application example.
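As a preview of the interface, here is a minimal sketch of the kind of call the example builds up to, assuming the `trust-constr` method name and the `NonlinearConstraint`/`Bounds` helpers under which this work was eventually released in SciPy; the post itself may use an earlier development interface.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, Bounds

# Rosenbrock function as the objective.
def rosen(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

# Nonlinear inequality constraint: keep the solution inside the unit disc.
disc = NonlinearConstraint(lambda x: x[0]**2 + x[1]**2, -np.inf, 1.0)

# Simple box bounds on both variables.
bounds = Bounds([-2.0, -2.0], [2.0, 2.0])

res = minimize(rosen, x0=[0.5, 0.5], method='trust-constr',
               constraints=[disc], bounds=bounds)
print(res.x)  # approximately [0.786, 0.618], on the disc boundary
```

Derivatives are approximated by finite differences in this sketch; supplying `jac` and `hess` (possibly as sparse matrices) is what makes the method practical at large scale.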

Interior-Point Method


In this post the interior point method described in [1] will be discussed. This algorithm solves the nonlinearly constrained optimization problem:
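The excerpt cuts off before the problem statement; in generic notation (the post's own symbols may differ), the class of interior-point methods in [1] addresses problems of the form

$$
\min_{x} \; f(x) \quad \text{subject to} \quad c_{\mathrm{eq}}(x) = 0, \quad c_{\mathrm{ineq}}(x) \ge 0,
$$

with the inequalities typically converted to equalities through slack variables and handled by a logarithmic barrier term.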

Byrd-Omojokun Trust-Region SQP


During the previous two weeks I have been implementing a trust-region Sequential Quadratic Programming (SQP) method. This method is able to solve the equality-constrained nonlinear programming problem:
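In generic notation (the post's own symbols may differ), that problem reads

$$
\min_{x} \; f(x) \quad \text{subject to} \quad c(x) = 0.
$$

The Byrd-Omojokun approach splits each trust-region step into a normal step, which reduces the violation of the linearized constraints, and a tangential step, which reduces a quadratic model of the objective within the null space of the constraint Jacobian.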

Projected Conjugate Gradient


The projected conjugate gradient (CG) method was implemented during my first GSoC weeks. It solves equality-constrained quadratic programming (EQP) problems of the form:
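The excerpt omits the formula; in standard notation (possibly differing from the post's), an EQP is

$$
\min_{x} \; \tfrac{1}{2}\, x^{T} H x + c^{T} x \quad \text{subject to} \quad A x = b,
$$

where H is the (possibly indefinite) Hessian of the quadratic model and A is the constraint matrix. The method keeps every iterate feasible by projecting each conjugate-gradient direction onto the null space of A.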

GSoC 2017 - Scipy: Large-scale Constrained Optimization


This year I was selected as a Google Summer of Code student. I'll be working on Scipy, one of the core Python scientific libraries. My task is to implement a constrained optimization algorithm able to deal with large (and possibly sparse) problems.



Relações Estáticas de Modelos NARX MISO e sua Representação de Hammerstein

XX Congresso Brasileiro de Automática, 2014

This paper presents the conditions under which the static function of a Multi-Input Single-Output (MISO) Non-linear AutoRegressive with eXogenous inputs (NARX) polynomial model can be written in rational or polynomial form. A particularly useful situation is when the static function is characterized by a set of polynomial functions. The existence and uniqueness of this set is shown to depend on conditions which are given in three lemmas. Based on these lemmas, a three-step procedure for MISO Hammerstein model identification is presented. The procedure is illustrated by identifying a Hammerstein model using data from an oil platform.

Antônio H. Ribeiro and Luis A. Aguirre. "Relações Estáticas de Modelos NARX MISO e sua Representação de Hammerstein." XX Congresso Brasileiro de Automática. 2014.
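For context, a Hammerstein model is a static nonlinearity in series with a linear dynamic block. In a generic single-input notation (not the paper's own symbols), it can be written as

$$
v(k) = f\big(u(k)\big), \qquad A(q)\, y(k) = B(q)\, v(k),
$$

where f is the static nonlinearity (here characterized by polynomial functions) and A(q) and B(q) are polynomials in the shift operator q.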

Selecting Transients Automatically for the Identification of Models for an Oil Well

2nd IFAC Workshop on Automatic Control in Offshore Oil and Gas Production, 2015

This paper proposes a procedure to automatically select transient windows for system identification from routine operation data. To this end, two metrics are proposed: one quantifies the transient content in a given window, and the other provides an overall measure of correlation between such transients and the chosen model input. The procedure is illustrated using data from an oil well that operates in deep waters.

Antônio H. Ribeiro and Luis A. Aguirre. "Selecting Transients Automatically for the Identification of Models for an Oil Well." IFAC-PapersOnLine 48.6 (2015): 154-158.

Shooting Methods for Parameter Estimation of Output Error Models

IFAC World Congress, 2017

This paper studies parameter estimation of output error (OE) models. The commonly used approach of minimizing the free-run simulation error is called single shooting in contrast with the new multiple shooting approach proposed in this paper, for which the free-run simulation error of sub-datasets is minimized subject to equality constraints. The names “single shooting” and “multiple shooting” are used due to the similarities with techniques for estimating ODE (ordinary differential equation) parameters. Examples with nonlinear polynomial models illustrate the advantages of OE models as well as the capability of the multiple shooting approach to avoid undesirable local minima.

Antônio H. Ribeiro and Luis A. Aguirre. "Shooting Methods for Parameter Estimation of Output Error Models." IFAC World Congress (2017).
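In loose notation (not necessarily the paper's own), single shooting estimates the parameter vector θ by minimizing the free-run simulation error over the entire record,

$$
\min_{\theta} \; \sum_{k} \big( y(k) - \hat{y}(k;\, \theta, x_0) \big)^2,
$$

while multiple shooting splits the data into M windows, each simulated from its own initial state, and couples the windows through equality constraints:

$$
\min_{\theta,\, x_0^{1}, \dots,\, x_0^{M}} \; \sum_{i=1}^{M} \sum_{k \in \mathcal{W}_i} \big( y(k) - \hat{y}(k;\, \theta, x_0^{i}) \big)^2
\quad \text{subject to continuity of } \hat{y} \text{ across window boundaries.}
$$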

“Parallel Training Considered Harmful?”: Comparing Series-Parallel and Parallel Feedforward Network Training

Preprint (arXiv:1706.07119), 2017

Neural network models for dynamic systems can be trained either in parallel or in series-parallel configurations. Influenced by early arguments, several papers justify the choice of the series-parallel rather than the parallel configuration, claiming it has a lower computational cost, better stability properties during training, and provides more accurate results. The purpose of this work is to review some of those arguments and to present both methods in a unifying framework, showing that parallel and series-parallel training actually result from optimal predictors that use different noise models. A numerical example illustrates that each method provides better results when the noise model it implicitly considers is consistent with the error in the data. Furthermore, it is argued that for feedforward networks with bounded activation functions the possible lack of stability does not jeopardize the training; and a novel complexity analysis indicates that the computational cost of the two configurations is not significantly different. This is confirmed through numerical examples.

Antônio H. Ribeiro and Luis A. Aguirre. "“Parallel Training Considered Harmful?”: Comparing Series-Parallel and Parallel Feedforward Network Training." arXiv:1706.07119 (2017).
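The distinction between the two configurations is simple to state in code. Below is a minimal sketch with a hypothetical scalar one-step model `f` standing in for the network (none of these names come from the paper): series-parallel training feeds measured outputs back into the model, while parallel training feeds back the model's own predictions.

```python
import numpy as np

def f(y_prev, u_prev, theta):
    # Hypothetical one-step model; a neural network would replace this.
    return theta[0] * y_prev + theta[1] * u_prev

def series_parallel(y, u, theta):
    # One-step-ahead prediction: the *measured* output is fed back.
    return np.array([f(y[k - 1], u[k - 1], theta) for k in range(1, len(y))])

def parallel(y0, u, theta, n):
    # Free-run simulation: the model's *own* output is fed back.
    y_hat = [y0]
    for k in range(1, n):
        y_hat.append(f(y_hat[-1], u[k - 1], theta))
    return np.array(y_hat)
```

Minimizing the error of `series_parallel` against the data corresponds to series-parallel training; minimizing the error of `parallel` corresponds to parallel training.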

Lasso Regularization Paths for NARMAX Models via Coordinate Descent

Preprint (arXiv:1710.00598), 2017

We propose a new algorithm for estimating NARMAX models with L1 regularization for models represented as a linear combination of basis functions. Due to the L1-norm penalty, the Lasso estimation tends to produce some coefficients that are exactly zero and hence gives interpretable models. The proposed algorithm uses cyclical coordinate descent to compute the parameters of the NARMAX models for the entire regularization path and, to the best of the authors' knowledge, it is the first algorithm to allow the inclusion of error regressors in the Lasso estimation. This is made possible by updating the regressor matrix along with the parameter vector. In comparative timings, we find that this modification does not harm the global efficiency of the algorithm and can provide the most important regressors in very few inexpensive iterations. The method is illustrated for linear and polynomial models by means of two examples.

Antônio H. Ribeiro and Luis A. Aguirre. "Lasso Regularization Paths for NARMAX Models via Coordinate Descent." arXiv:1710.00598 (2017).
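For illustration, here is the standard cyclical coordinate-descent update for the Lasso on a fixed regressor matrix, built around the closed-form soft-thresholding step. The paper's distinguishing feature, updating the error (noise) regressors along with the parameter vector, is not reproduced in this sketch.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Closed-form solution of the one-dimensional Lasso subproblem.
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    # Cyclical coordinate descent for: 0.5*||y - X w||^2 + lam*||w||_1.
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r_j = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return w
```

Warm-starting each new value of `lam` at the previous solution is the standard trick that makes computing the entire regularization path cheap.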

