#4: Adversarial Machine Learning for Recommenders with Felice Merra
Feb 23, 2022
Felice Merra, an applied scientist at Amazon, discusses Adversarial Machine Learning in Recommender Systems. Topics include perturbing data and model parameters, defense strategies, motivations for attacks, and privacy-preserving learning. The goal is to make systems more robust against potential attacks. They also touch on the challenges of robustifying multimedia recommender systems.
Catalog size and user count strongly influence how vulnerable a recommender system is to attacks.
Safeguarding visual elements in recommender systems against adversarial attacks is essential for maintaining recommendation accuracy.
Defending against white box attacks targeting model parameters is vital for preserving the integrity of recommendation algorithms.
Deep dives
Effect of Large Catalogs on Model Attacks
A very large catalog with few users makes it challenging to attack a recommender model. Conversely, a small catalog with many users makes it easier to influence which items get recommended. This illustrates how catalog size and user quantity shape a system's vulnerability to attacks.
Adversarial Attacks on Visual Content in Recommender Systems
Attacks on recommender models through perturbations in image content represent a significant threat. Injecting fake images with imperceptible changes can alter the recommendations, leading to potentially harmful outcomes. Such attacks highlight the importance of safeguarding visual elements in recommender systems.
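To make the idea concrete, here is a minimal FGSM-style sketch of such an image perturbation. The setup is an illustrative assumption, not the episode's code: a toy linear "visual scorer" maps a flattened item image to a relevance score, and the attacker nudges each pixel by at most epsilon in the direction that raises the score, so the item gets promoted while the image change stays imperceptible.

```python
import numpy as np

# Illustrative toy setup (assumed, not from the episode): a linear scorer
# that maps a flattened item image to a relevance score.
rng = np.random.default_rng(0)
w = rng.normal(size=64)             # scorer weights
image = rng.uniform(0, 1, size=64)  # original item image, pixels in [0, 1]

def score(x, w):
    return float(x @ w)

# FGSM-style attack: move each pixel by epsilon in the sign of the gradient
# of the score. For a linear scorer the gradient w.r.t. the image is just w.
epsilon = 0.01
grad = w
adv_image = np.clip(image + epsilon * np.sign(grad), 0, 1)

print(score(image, w), score(adv_image, w))  # adversarial score is higher
```

The per-pixel change is bounded by epsilon, which is why such perturbations can remain invisible to a human while still shifting the model's ranking.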
Consequences of Model Parameter Attacks
Adversarial attacks targeting model parameters can severely impact the accuracy and performance of recommender systems. Understanding and defending against these white box attacks, where the adversary has access to model parameters, is crucial to maintaining the integrity and effectiveness of recommendation algorithms.
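A hedged sketch of what white-box access enables, under an assumed matrix-factorization setup (names and numbers are illustrative, not from the episode): if the adversary can read the parameters, the average user embedding is the steepest-ascent direction for a target item's mean score, so a small perturbation of that item's embedding boosts it for all users at once.

```python
import numpy as np

# Assumed setup: matrix-factorization scorer s(u, i) = p_u . q_i.
rng = np.random.default_rng(1)
n_users, dim = 100, 8
P = rng.normal(size=(n_users, dim))  # user embeddings (known to the attacker)
q_target = rng.normal(size=dim)      # target item's embedding

scores_before = P @ q_target

# The gradient of the mean score w.r.t. q_target is the mean user embedding;
# perturb the item embedding a small step in that direction.
grad = P.mean(axis=0)
epsilon = 0.5
q_adv = q_target + epsilon * grad / np.linalg.norm(grad)

scores_after = P @ q_adv
print(scores_before.mean(), scores_after.mean())  # mean score increases
```

This is exactly why white-box parameter attacks are so potent: no fake profiles or content are needed, only read/write access to the model itself.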
Need for Comprehensive Model Defense Strategies
In the face of evolving adversarial threats, robustification mechanisms for recommender systems must address diverse attack vectors. Balancing security measures with system performance metrics such as accuracy, coverage, and popularity bias is essential for effective defense strategies.
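One common robustification mechanism discussed in this line of research is adversarial regularization in the spirit of adversarial personalized ranking: minimize the pairwise loss not only on the clean parameters but also under a worst-case bounded perturbation of them. The sketch below is a single-step illustration under assumed names and values, not the episode's code.

```python
import numpy as np

# Assumed toy setup: one user, one positive and one negative item embedding.
rng = np.random.default_rng(2)
dim = 8
p_u = rng.normal(size=dim)    # user embedding
q_pos = rng.normal(size=dim)  # positive (interacted) item embedding
q_neg = rng.normal(size=dim)  # negative (sampled) item embedding

def bpr_loss(p, qi, qj):
    # Pairwise BPR loss: -log sigmoid(s(u,i) - s(u,j))
    x = p @ qi - p @ qj
    return float(np.log1p(np.exp(-x)))

# Worst-case perturbation of the positive item's embedding: the gradient of
# the loss w.r.t. q_pos is -sigmoid(-x) * p_u, so step epsilon along it.
x = p_u @ q_pos - p_u @ q_neg
grad_q_pos = -(1.0 / (1.0 + np.exp(x))) * p_u
epsilon = 0.5
delta = epsilon * grad_q_pos / np.linalg.norm(grad_q_pos)

lam = 1.0  # weight balancing robustness against clean accuracy
clean = bpr_loss(p_u, q_pos, q_neg)
adversarial = bpr_loss(p_u, q_pos + delta, q_neg)  # higher than clean loss
total = clean + lam * adversarial
print(clean, adversarial, total)
```

Training against `total` instead of `clean` trades some accuracy for robustness, which is the balancing act between security and performance metrics mentioned above.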
Challenges in Reinforcement Learning and Multi-Modal Recommender Models
Exploring reinforcement learning in recommender systems poses challenges, particularly in academic environments due to resource constraints. Additionally, the integration of multi-modalities in recommender models introduces complexities and potential vulnerabilities that require specialized protection mechanisms.
In episode four my guest is Felice Merra, who is an applied scientist at Amazon. Felice obtained his PhD from Politecnico di Bari where he was a researcher at the Information Systems Lab (SisInf Lab). There, he worked on Security and Adversarial Machine Learning in Recommender Systems.
We talk about different ways to perturb interaction or content data, but also model parameters, and elaborate on various defense strategies. In addition, we touch on the motivation of individuals or whole platforms to perform attacks and look at some examples that Felice has been working on throughout his research. The overall goal of research in Adversarial Machine Learning for Recommender Systems is to identify vulnerabilities of models and systems in order to derive proper defense strategies that make systems more robust against potential attacks. Finally, we also briefly discuss privacy-preserving learning and the challenges of further robustifying multimedia recommender systems.
Felice has published multiple papers at KDD, ECIR, SIGIR, and RecSys. He also won the Best Paper Award at KDD's workshop on Adversarial Learning Methods.
Enjoy this enriching episode of RECSPERTS - Recommender Systems Experts.