Join the discussion on our Discord server

As machine learning plays an increasingly important role in many domains of everyday life, attacks on ML systems are becoming more and more common. In this episode we talk with researchers Ambra Demontis and Marco Melis from the University of Cagliari (Italy) about the most popular attacks against machine learning systems and some of the mitigations designed to counter them. Our guests are also authors of SecML, an open-source Python library for the security evaluation of Machine Learning (ML) algorithms. Both Ambra and Marco are members of the PRAlab research group, under the supervision of Prof. Fabio Roli.

SecML Contributors
Marco Melis (Ph.D. Student, Project Maintainer, https://www.linkedin.com/in/melismarco/)
Ambra Demontis (Postdoc, https://pralab.diee.unica.it/it/AmbraDemontis)
Maura Pintor (Ph.D. Student, https://it.linkedin.com/in/maura-pintor)
Battista Biggio (Assistant Professor, https://pralab.diee.unica.it/it/BattistaBiggio)

References
SecML: an open-source Python library for the security evaluation of Machine Learning (ML) algorithms. https://secml.gitlab.io/
A. Demontis et al., "Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks," in 28th USENIX Security Symposium (USENIX Security 19), 2019, pp. 321–338. https://www.usenix.org/conference/usenixsecurity19/presentation/demontis
P. W. Koh and P. Liang, "Understanding Black-box Predictions via Influence Functions," in International Conference on Machine Learning (ICML), 2017. https://arxiv.org/abs/1703.04730
M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, and F. Roli, "Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid," in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 751–759. https://arxiv.org/abs/1708.06939
B. Biggio and F. Roli, "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning," Pattern Recognition, vol. 84, pp. 317–331, 2018. https://arxiv.org/abs/1712.03141
B. Biggio et al., "Evasion Attacks against Machine Learning at Test Time," in Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III, 2013, vol. 8190, pp. 387–402. https://arxiv.org/abs/1708.06131
B. Biggio, B. Nelson, and P. Laskov, "Poisoning Attacks against Support Vector Machines," in 29th Int'l Conf. on Machine Learning (ICML), 2012, pp. 1807–1814. https://arxiv.org/abs/1206.6389
N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, "Adversarial Classification," in Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Seattle, 2004, pp. 99–108. https://dl.acm.org/citation.cfm?id=1014066
M. Sundararajan, A. Taly, and Q. Yan, "Axiomatic Attribution for Deep Networks," in Proceedings of the 34th International Conference on Machine Learning (ICML), Volume 70, 2017. https://arxiv.org/abs/1703.01365
M. T. Ribeiro, S. Singh, and C. Guestrin, "Model-Agnostic Interpretability of Machine Learning," arXiv preprint arXiv:1606.05386, 2016. https://arxiv.org/abs/1606.05386
W. Guo et al., "LEMNA: Explaining Deep Learning Based Security Applications," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2018. https://dl.acm.org/citation.cfm?id=3243792
S. Bach et al., "On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation," PLoS ONE, vol. 10, no. 7, e0130140, 2015. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140