talks


Regularization properties of adversarially-trained linear regression (December 2023)
   Conference on Neural Information Processing Systems (NeurIPS).
Abstract: State-of-the-art machine learning models can be vulnerable to very small input perturbations that are adversarially constructed. Adversarial training is an effective approach to defend against such attacks. Formulated as a min-max problem, it searches for the best solution when the training data are corrupted by worst-case attacks. Linear models are among the simplest models in which these vulnerabilities can be observed and are the focus of our study. In this case, adversarial training leads to a convex optimization problem, which can be formulated as the minimization of a finite sum. We provide a comparative analysis between the solution of adversarial training in linear regression and other regularization methods. Our main findings are that: (A) Adversarial training yields the minimum-norm interpolating solution in the overparameterized regime (more parameters than data points), as long as the maximum disturbance radius is smaller than a threshold; conversely, the minimum-norm interpolator is the solution to adversarial training for an appropriately chosen radius. (B) Adversarial training can be equivalent to parameter-shrinking methods (ridge regression and Lasso); this happens in the underparameterized regime, for an appropriate choice of adversarial radius and zero-mean, symmetrically distributed covariates. (C) For $\ell_\infty$-adversarial training---as in the square-root Lasso---the choice of adversarial radius that yields optimal bounds does not depend on the additive noise variance. We confirm our theoretical findings with numerical examples.
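For reference (the notation below is chosen here for illustration and is not quoted from the paper): with training data $(x_i, y_i)$, $i = 1, \dots, n$, and input disturbances bounded by $\|\Delta x_i\| \le \delta$, adversarial training of a linear model solves
$$\min_{\beta} \; \frac{1}{n}\sum_{i=1}^{n} \; \max_{\|\Delta x_i\| \le \delta} \big(y_i - (x_i + \Delta x_i)^\top \beta\big)^2,$$
and, because the inner maximization has a closed form, this is the finite-sum minimization referred to in the abstract,
$$\min_{\beta} \; \frac{1}{n}\sum_{i=1}^{n} \big(|y_i - x_i^\top \beta| + \delta\, \|\beta\|_*\big)^2,$$
where $\|\cdot\|_*$ is the dual of the norm used to constrain the disturbance ($\ell_1$ for $\ell_\infty$-attacks, $\ell_2$ for $\ell_2$-attacks).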
Linear adversarial training, robustness in machine learning and applications to cardiology (November 2023)
   Royal Institute of Technology, KTH, Sweden @ Division of Robotics, Perception and Learning.
Abstract: State-of-the-art machine learning models can be vulnerable to minimal input perturbations that are adversarially constructed. Adversarial training is an effective approach to defend against such perturbations. Formulated as a min-max problem, it searches for the best solution when worst-case attacks corrupt the training data. Linear models are among the simplest models where these vulnerabilities can be observed and are the focus of our study. We provide a comparative analysis between the solution of adversarial training in linear regression and other regularization methods, and use this comparison to fully characterize adversarial training in linear regression. The results are used to characterize the tradeoffs between model size and adversarial robustness, and are motivated throughout the talk with examples of machine learning in cardiology, specifically applications where deep neural network models are used for automatic diagnosis from the electrocardiogram.
Ataques adversariais em modelos lineares [Adversarial attacks on linear models] (October 2023)
   Laboratório Nacional de Computação Científica @ Petrópolis, RJ, Brazil (online).

Abstract: Machine learning models can be vulnerable to very small, adversarially constructed input perturbations. Adversarial examples have received a lot of attention due to their high impact on the performance of models that would otherwise produce state-of-the-art results. Adversarial training is one of the most effective approaches for defending against such adversarial examples: it considers samples perturbed by the adversary during training in order to produce more robust models. In this talk we present a comparative analysis between the solution of adversarial training in linear regression and other regularization methods. There is a strong reason for this focus: linear regression models allow a detailed analysis of the problem. Moreover, many of the interesting phenomena observed in nonlinear models are still present and can be better understood. For instance, the setup can be used to study how high dimensionality can be an additional source of robustness or brittleness for a model. The method also has favorable properties that allow its use as a baseline method for estimating linear models.
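As a small, self-contained illustration (a sketch written for this page, not code from the talk): for a linear model and an $\ell_\infty$-bounded attack of radius $\delta$, the worst-case perturbation of an input $x$ with residual $r = y - x^\top\beta$ is $\Delta x = -\delta\,\mathrm{sign}(r)\,\mathrm{sign}(\beta)$, which attains squared error $(|r| + \delta\|\beta\|_1)^2$. The snippet below checks this numerically.

    import numpy as np

    rng = np.random.default_rng(0)
    d, delta = 10, 0.1
    beta = rng.normal(size=d)      # linear model parameters
    x = rng.normal(size=d)         # a single input
    y = x @ beta + rng.normal()    # noisy response

    r = y - x @ beta               # residual on the clean input (assumed nonzero here)
    dx_star = -delta * np.sign(r) * np.sign(beta)   # closed-form worst-case l_inf attack
    worst = (y - (x + dx_star) @ beta) ** 2
    print("attained squared error :", worst)
    print("predicted (|r|+d*|b|_1)^2:", (abs(r) + delta * np.linalg.norm(beta, 1)) ** 2)

    # No feasible perturbation should do better than the closed-form one
    for _ in range(1000):
        dx = rng.uniform(-delta, delta, size=d)
        assert (y - (x + dx) @ beta) ** 2 <= worst + 1e-9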
Robustness in large-scale machine learning and its relevance to AI-enabled ECG (July 2023)
   Imperial College, UK @ Imperial Centre for Translational and Experimental Medicine.

The Three Challenges of Using Deep Neural Networks in Electrocardiography (May 2023)
   IEEE EMBS @ Germany Chapter, Göttingen (online).

Revisitando o princípio da parcimônia na identificação de sistemas e aprendizado de máquina [Revisiting the principle of parsimony in system identification and machine learning] (May 2023)
   PUC Rio, Brazil @ Department of Mechanical Engineering.

Abstract: System identification and machine learning both aim to build mathematical models from data. Traditionally, choosing the family of models to be considered requires the designer to balance two objectives of a conflicting nature: the family must be flexible enough to capture the system dynamics, but not so flexible that it learns spurious effects from the dataset. However, the good performance of highly flexible model families, such as deep neural networks, has prompted a revision of this paradigm. In this talk we discuss the phenomena of "double descent" and "benign overfitting", which help interpret these ideas, with special focus on two cases of interest: dynamical systems and adversarial attacks (in which the system input is contaminated with perturbations selected so as to make the model produce erroneous predictions).
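As a minimal sketch of the double-descent behaviour mentioned above (written for this page, not reproducing the experiments from the talk; the sizes and noise level are arbitrary choices), one can fit minimum-norm least-squares solutions with an increasing number of features and observe the test error typically peaking near the interpolation threshold (number of features equal to the number of training points) before decreasing again:

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d_max = 50, 1000, 200
    w = rng.normal(size=d_max) / np.sqrt(d_max)      # ground-truth coefficients
    X = rng.normal(size=(n_train + n_test, d_max))
    y = X @ w + 0.5 * rng.normal(size=n_train + n_test)
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]

    for d in [5, 20, 40, 50, 60, 100, 200]:          # number of features used in the fit
        beta = np.linalg.pinv(Xtr[:, :d]) @ ytr      # minimum-norm least-squares solution
        test_mse = np.mean((Xte[:, :d] @ beta - yte) ** 2)
        print(f"d = {d:3d}   test MSE = {test_mse:.2f}")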

Related publications:
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
  • Surprises in adversarially-trained linear regression (2022). arXiv:2205.12695. Antônio H. Ribeiro, Dave Zachariah, Thomas B. Schön
  • Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics (2021). IFAC Symposium on System Identification (SYSID). Antônio H. Ribeiro, Johannes N. Hendriks, Adrian G. Wills, Thomas B. Schön
  • Deep networks for system identification: a Survey (2023). Automatica (Provisionally accepted). Gianluigi Pillonetto, Aleksandr Aravkin, Daniel Gedon, Lennart Ljung, Antonio H. Ribeiro, Thomas Bo Schön
Overparametrized linear regression under adversarial attacks (March 2023)
   INRIA Paris, France @ SIERRA team.

Abstract: State-of-the-art machine learning models can be vulnerable to very small input perturbations that are adversarially constructed. Adversarial attacks are a popular framework for studying these vulnerabilities: they consider worst-case input disturbances designed to maximize model error and have received a lot of attention due to their impact on the performance of state-of-the-art models. Adversarial training extends model training with such examples and is an effective approach to defend against these attacks. This talk will explore adversarial attacks and training in linear regression. There is a strong reason for this focus: for linear regression, adversarial training can be formulated as a convex quadratic problem. Moreover, many interesting phenomena that can be observed in nonlinear models are still present. The setup is used to study the relationship between high dimensionality and robustness, and to reveal the connection between adversarial training, parameter-shrinking methods, and minimum-norm solutions.
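To make the convex formulation concrete (a rough numerical sketch in my own notation, not code from the talk): for $\ell_2$-attacks of radius $\delta$, adversarial training of a linear model reduces to minimizing $\frac{1}{n}\sum_i \big(|y_i - x_i^\top\beta| + \delta\|\beta\|_2\big)^2$, which, being convex, can be handled by a general-purpose solver on small problems:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, d, delta = 40, 5, 0.2
    X = rng.normal(size=(n, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    def adv_loss(beta):
        # finite-sum form of l2-adversarial training for linear regression
        return np.mean((np.abs(y - X @ beta) + delta * np.linalg.norm(beta)) ** 2)

    beta_adv = minimize(adv_loss, x0=np.zeros(d), method="Nelder-Mead").x
    beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
    print("||beta_adv|| =", np.linalg.norm(beta_adv))  # typically shrunk relative to...
    print("||beta_ls||  =", np.linalg.norm(beta_ls))   # ...the least-squares solution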
Adversarially-trained linear regression (November 2022)
   Uppsala University, Sweden @ System and Control Division (Microseminar).

Related publications:
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
  • Surprises in adversarially-trained linear regression (2022). arXiv:2205.12695. Antônio H. Ribeiro, Dave Zachariah, Thomas B. Schön
Adversarial Attacks in Linear Regression (November 2022)
   Seminars on Advances in Probabilistic Machine Learning @ Aalto University and ELLIS unit Helsinki.

Abstract: State-of-the-art machine learning models can be vulnerable to very small input perturbations that are adversarially constructed. Adversarial attacks are a popular framework for studying these vulnerabilities: they consider worst-case input disturbances designed to maximize model error and have received a lot of attention due to their impact on the performance of state-of-the-art models. Adversarial training extends model training with such examples and is an effective approach to defend against these attacks. This talk will explore adversarial attacks and training in linear regression. There is a strong reason for this focus: for linear regression, adversarial training can be formulated as a convex quadratic problem. Moreover, many interesting phenomena that can be observed in nonlinear models are still present. The setup is used to study the role of high dimensionality in robustness, and to reveal the connection between adversarial training, parameter-shrinking methods, and minimum-norm solutions.

Related publications:
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
  • Surprises in adversarially-trained linear regression (2022). arXiv:2205.12695. Antônio H. Ribeiro, Dave Zachariah, Thomas B. Schön
Learning signals and systems and its applications to electrocardiography (June 2022)
   Aalto University, Finland @ Jobtalk (Online).

Related publications:
  • Automatic diagnosis of the 12-lead ECG using a deep neural network (2020). Nature Communications. Antônio H. Ribeiro, Manoel Horta Ribeiro, Gabriela M. M. Paixão, Derick M. Oliveira, Paulo R. Gomes, Jéssica A. Canazart, Milton P. S. Ferreira, Carl R. Andersson, Peter W. Macfarlane, Wagner Meira Jr., Thomas B. Schön, Antonio Luiz P. Ribeiro
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
Overparameterized Linear Regression under Adversarial Attacks (June 2022)
   University of British Columbia, Canada @ Christos Thrampoulidis group (Online).

Abstract: State-of-the-art machine learning models can be vulnerable to very small input perturbations that are adversarially constructed. Adversarial attacks are a popular framework for studying these vulnerabilities and have received a lot of attention due to their high impact on deep neural network performance. Adversarial training is one of the most effective approaches to defending against such adversarial examples: it considers adversarially perturbed samples during training to produce more robust models. This talk will explore adversarial attacks and training in a simpler setting than is usually studied: linear regression. There is a strong reason for this focus: for linear regression models, adversarial training can be simplified into a convex quadratic form. Moreover, many of the interesting phenomena observed in nonlinear models are still present. The setup is used to study how high dimensionality can be either a source of additional robustness or of brittleness, and to show, in the linear setting, similarities (and differences) between $\ell_\infty$-adversarial training and the lasso and between $\ell_2$-adversarial training and ridge regression.
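One way to state the similarity mentioned above (in my notation; a sketch, not a claim of exact equivalence in general): $\ell_\infty$-adversarial training of a linear model minimizes
$$\frac{1}{n}\sum_{i=1}^{n}\big(|y_i - x_i^\top\beta| + \delta\|\beta\|_1\big)^2, \qquad \text{while the lasso minimizes} \qquad \frac{1}{n}\sum_{i=1}^{n}\big(y_i - x_i^\top\beta\big)^2 + \lambda\|\beta\|_1,$$
and the $\ell_2$ case relates to ridge regression in the same way, with $\|\beta\|_2$ in place of $\|\beta\|_1$. In adversarial training the norm term enters inside the square and couples with the residuals, which is one place where the two problems differ.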

Related publications:
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
  • Surprises in adversarially-trained linear regression (2022). arXiv:2205.12695. Antônio H. Ribeiro, Dave Zachariah, Thomas B. Schön
Deep Neural Networks for Automatic ECG Analysis (March 2022)
   University of Luxembourg @ Systems Control Group, LCSB (Online).

Related publications:
  • Automatic diagnosis of the 12-lead ECG using a deep neural network (2020). Nature Communications. Antônio H. Ribeiro, Manoel Horta Ribeiro, Gabriela M. M. Paixão, Derick M. Oliveira, Paulo R. Gomes, Jéssica A. Canazart, Milton P. S. Ferreira, Carl R. Andersson, Peter W. Macfarlane, Wagner Meira Jr., Thomas B. Schön, Antonio Luiz P. Ribeiro
  • Deep neural network estimated electrocardiographic-age as a mortality predictor (2021). Nature Communications. Emilly M. Lima, Antônio H. Ribeiro, Gabriela MM Paixão, Manoel Horta Ribeiro, Marcelo M. Pinto Filho, Paulo R. Gomes, Derick M. Oliveira, Ester C. Sabino, Bruce B. Duncan, Luana Giatti, Sandhi M. Barreto, Wagner Meira, Thomas B. Schön, Antonio Luiz P. Ribeiro
  • Atrial fibrillation risk prediction from the 12-lead ECG using digital biomarkers and deep representation learning (2021). European Heart Journal - Digital Health. Shany Biton, Sheina Gendelman, Antônio H Ribeiro, Gabriela Miana, Carla Moreira, Antonio Luiz P Ribeiro, Joachim A Behar
  • Overparameterized Linear Regression under Adversarial Attacks (2023). IEEE Transactions on Signal Processing. Antônio H. Ribeiro, Thomas B. Schön
On the robustness of overparametrized models (November 2021)
   Uppsala University, Sweden @ System and Control Division (Microseminar).

Related publications:
  • Overparametrized Regression Under L2 Adversarial Attacks (2021). Workshop on the Theory of Overparameterized Machine Learning (TOPML). Antonio H Ribeiro, Thomas B Schön
  • Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics (2021). IFAC Symposium on System Identification (SYSID). Antônio H. Ribeiro, Johannes N. Hendriks, Adrian G. Wills, Thomas B. Schön
Aprendendo modelos para sinais e sistemas [Learning models for signals and systems] (October 2021)
   Prêmio UFMG de Teses (UFMG Thesis Award).


Related publications:
  • Learning nonlinear differentiable models for signals and systems: with applications (2020). PhD thesis. Antônio H. Ribeiro
Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics (July 2021)
   19th IFAC Symposium on System Identification: learning models for decision and control.

Related publications:
  • Beyond Occam's Razor in System Identification: Double-Descent when Modeling Dynamics (2021). IFAC Symposium on System Identification (SYSID). Antônio H. Ribeiro, Johannes N. Hendriks, Adrian G. Wills, Thomas B. Schön
How convolutional neural networks deal with aliasing (June 2021)
   IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

Related publications:
  • How convolutional neural networks deal with aliasing (2021). IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Antonio H. Ribeiro, Thomas B. Schön
Overparametrized Regression Under L2 Adversarial Attacks (April 2021)
   Workshop on the Theory of Overparameterized Machine Learning.


Related publications:
  • Overparametrized Regression Under L2 Adversarial Attacks (2021). Workshop on the Theory of Overparameterized Machine Learning (TOPML). Antonio H Ribeiro, Thomas B Schön
Artificial intelligence for ECG classification and prediction of the risk of death (April 2021)
   International Congress on Electrocardiology (Online).


Related publications:
  • Deep neural network estimated electrocardiographic-age as a mortality predictor (2021). Nature Communications. Emilly M. Lima, Antônio H. Ribeiro, Gabriela MM Paixão, Manoel Horta Ribeiro, Marcelo M. Pinto Filho, Paulo R. Gomes, Derick M. Oliveira, Ester C. Sabino, Bruce B. Duncan, Luana Giatti, Sandhi M. Barreto, Wagner Meira, Thomas B. Schön, Antonio Luiz P. Ribeiro
  • Automatic 12-lead ECG classification using a convolutional network ensemble (2020). 2020 Computing in Cardiology (CinC). Antonio H Ribeiro, Daniel Gedon, Daniel Martins Teixeira, Manoel Horta Ribeiro, Antonio L Pinho Ribeiro, Thomas B Schön, Wagner Meira Jr
Artificial intelligence for ECG classification and prediction of the risk of death (March 2021)
   Technion, Israel @ AIMLab group (Online).

Related publications:
  • Deep neural network estimated electrocardiographic-age as a mortality predictor (2021). Nature Communications. Emilly M. Lima, Antônio H. Ribeiro, Gabriela MM Paixão, Manoel Horta Ribeiro, Marcelo M. Pinto Filho, Paulo R. Gomes, Derick M. Oliveira, Ester C. Sabino, Bruce B. Duncan, Luana Giatti, Sandhi M. Barreto, Wagner Meira, Thomas B. Schön, Antonio Luiz P. Ribeiro
  • Automatic 12-lead ECG classification using a convolutional network ensemble (2020). 2020 Computing in Cardiology (CinC). Antonio H Ribeiro, Daniel Gedon, Daniel Martins Teixeira, Manoel Horta Ribeiro, Antonio L Pinho Ribeiro, Thomas B Schön, Wagner Meira Jr
Beyond exploding and vanishing gradients: analysing RNN training using attractors and smoothness (2020)
   International Conference on Artificial Intelligence and Statistics (AISTATS).


Related publications:
  • Beyond exploding and vanishing gradients: attractors and smoothness in the analysis of recurrent neural network training (2020). International Conference on Artificial Intelligence and Statistics (AISTATS). Antônio H. Ribeiro, Koen Tiels, Luis A. Aguirre, Thomas B. Schön