Abstract of Contributions to Large Scale Bayesian Inference and Adversarial Machine Learning

Víctor Adolfo Gallego Alcalá


    The field of machine learning (ML) has experienced a major boom in recent years, both in theoretical developments and in application areas. However, the rapid adoption of ML methodologies has revealed that models are often used to make decisions without taking into account the uncertainty in their predictions. More critically, they can be vulnerable to adversarial examples, data manipulated strategically with the goal of fooling those systems. For instance, in retailing, a model may predict very high expected sales for the coming week, given a certain advertising budget. However, the predictive variance may also be quite large, making the prediction nearly useless depending on the risk tolerance of the company. Similarly, in spam detection, an attacker may insert additional words into a spam email to evade being classified as spam and appear legitimate.

    Thus, developing ML systems that take into account predictive uncertainties and are robust against adversarial examples is a must for critical, real-world tasks. This thesis is a step towards achieving this goal.

    In Chapter 1, we start with a case study in retailing. We propose a robust implementation of the Nerlove-Arrow model, using a Bayesian structural time series model to explain the relationship between the advertising expenditures of a country-wide fast-food franchise network and its weekly sales. Its Bayesian nature facilitates incorporating prior information reflecting the manager's views, which can then be updated with relevant data. However, this case study adopts classical Bayesian techniques, such as the Gibbs sampler, whereas the current ML landscape is pervaded by complex models with huge numbers of parameters. This is the realm of neural networks, and the chapter also surveys recent developments in that sub-field. In doing so, three challenges that constitute the core of this thesis are identified. A minimal sketch of the goodwill dynamics underlying the case study is given below.
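
    The following toy simulation illustrates the Nerlove-Arrow idea of advertising building a decaying stock of "goodwill" that drives sales. It is a minimal sketch in Python; all parameter values (decay, effectiveness, noise scale) are illustrative assumptions, not the estimates obtained in the thesis.

        # Toy Nerlove-Arrow goodwill dynamics (illustrative parameters only)
        import numpy as np

        rng = np.random.default_rng(0)
        T = 52                               # one year of weekly observations
        ad_spend = rng.gamma(2.0, 10.0, T)   # weekly advertising expenditure

        decay, effectiveness = 0.3, 0.8      # hypothetical model parameters
        goodwill = np.zeros(T)
        for t in range(1, T):
            # Goodwill accumulates with ads and decays over time:
            # A_t = (1 - decay) * A_{t-1} + effectiveness * u_t
            goodwill[t] = (1 - decay) * goodwill[t - 1] + effectiveness * ad_spend[t]

        # Observed sales: baseline plus goodwill effect plus noise; a Bayesian
        # structural time series model places priors over these quantities.
        sales = 100.0 + 1.5 * goodwill + rng.normal(0.0, 5.0, T)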

    Chapter 2 is devoted to the first challenge: scaling Bayesian inference to complex models and large-data regimes. In the first part, we propose a unifying view of two different Bayesian inference algorithms, Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) and Stein Variational Gradient Descent, leading to novel, more efficient sampling schemes. We then develop a framework that boosts the efficiency of Bayesian inference in probabilistic models by embedding a Markov chain sampler within a variational posterior approximation. We call this framework "variationally inferred sampling". It has several benefits, such as ease of implementation and automatic tuning of the sampler parameters via automatic differentiation, leading to faster mixing times. Experiments show the superior performance of both developments compared to baselines. A sketch of a basic SG-MCMC update is given below.
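
    As a concrete reference point, here is a minimal sketch of stochastic gradient Langevin dynamics (SGLD), a standard SG-MCMC sampler of the kind the chapter builds on. The toy Gaussian target, step size and batch size are illustrative assumptions, not the thesis's tuned settings.

        # SGLD on a toy Gaussian model: prior N(0, 1), likelihood N(theta, 1)
        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(2.0, 1.0, 1000)    # synthetic observations

        def grad_log_post(theta, batch, n_total):
            # Gradient of the log posterior, with the minibatch likelihood
            # gradient rescaled by n_total / batch size
            return -theta + (n_total / len(batch)) * np.sum(batch - theta)

        theta, eps, samples = 0.0, 1e-3, []
        for _ in range(5000):
            batch = rng.choice(data, 32, replace=False)
            # Gradient step plus injected Gaussian noise of scale sqrt(eps)
            theta += 0.5 * eps * grad_log_post(theta, batch, len(data))
            theta += np.sqrt(eps) * rng.normal()
            samples.append(theta)
        # `samples` now approximates draws from the posterior p(theta | data)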

    In Chapter 3, we address the challenge of protecting ML classifiers from adversarial examples. So far, most approaches to adversarial classification (AC) have followed a classical game-theoretic framework, which requires common knowledge conditions that are unrealistic in the security settings typical of adversarial ML. After reviewing such approaches, we present an alternative perspective on AC based on adversarial risk analysis (ARA), leveraging the scalable approaches from Chapter 2. A toy sketch of the ARA idea follows.
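
    The sketch below conveys the core ARA move in a toy discrete setting: rather than assuming the attacker's strategy is common knowledge, the defender places a probability model over possible data manipulations and averages over it when classifying. The attack model and all probabilities here are hypothetical illustrations, not the chapter's actual construction.

        # Toy ARA-style robust classification over a discrete instance space
        import numpy as np

        instances = [0, 1, 2]
        p_x_given_y = {0: np.array([0.7, 0.2, 0.1]),   # class 0: legitimate
                       1: np.array([0.1, 0.2, 0.7])}   # class 1: spam
        p_y = {0: 0.6, 1: 0.4}

        def p_attack(x_obs, x, y):
            # Defender's belief that an instance x of class y is observed
            # as x_obs; legitimate senders never attack (purely illustrative)
            if y == 0:
                return 1.0 if x_obs == x else 0.0
            return 0.5 if x_obs == x else 0.25

        def posterior(x_obs):
            # p(y | x_obs) is proportional to
            # sum over x of p(x_obs | x, y) * p(x | y) * p(y)
            scores = {y: p_y[y] * sum(p_attack(x_obs, x, y) * p_x_given_y[y][x]
                                      for x in instances)
                      for y in (0, 1)}
            z = sum(scores.values())
            return {y: s / z for y, s in scores.items()}

        print(posterior(0))   # class probabilities robust to manipulation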

    In Chapter 4, we turn our attention to reinforcement learning (RL), addressing the challenge of supporting an agent in a sequential decision-making setting in which there may be adversaries, modelled as other players. We introduce Threatened Markov Decision Processes (TMDPs) as an extension of the classical Markov Decision Process framework for RL, and propose a level-k thinking scheme resulting in a novel learning approach for TMDPs. After introducing the framework and deriving theoretical results, we provide relevant empirical evidence via extensive experiments, showing the benefits of accounting for adversaries while the agent learns. A compact sketch of the kind of opponent-aware update involved appears below.
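
    To make the TMDP idea concrete, here is a compact sketch, under our own illustrative assumptions (state and action space sizes, Dirichlet opponent model, learning rates), of Q-learning augmented with an opponent action: the agent maintains a belief p(b | s) about the adversary's behaviour and averages its Q-values over that belief.

        # Opponent-aware Q-learning sketch for a threatened MDP
        import numpy as np

        n_states, n_actions, n_opp = 5, 3, 3
        Q = np.zeros((n_states, n_actions, n_opp))
        opp_counts = np.ones((n_states, n_opp))   # Dirichlet counts, p(b | s)
        alpha, gamma = 0.1, 0.95

        def act(s):
            # Pick the action maximizing expected Q under the opponent model
            p_b = opp_counts[s] / opp_counts[s].sum()
            return int(np.argmax(Q[s] @ p_b))

        def update(s, a, b, r, s_next):
            # Update the opponent model, then perform a Bellman backup that
            # again averages next-state values over p(b' | s')
            opp_counts[s, b] += 1
            p_b_next = opp_counts[s_next] / opp_counts[s_next].sum()
            target = r + gamma * np.max(Q[s_next] @ p_b_next)
            Q[s, a, b] += alpha * (target - Q[s, a, b])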

