Attacking machine learning for fun and profit (with the authors of SecML)

As machine learning plays an increasingly important role in many domains of everyday life, attacking machine learning for fun and profit may become quite a common scenario. In this episode we learn how to attack machine learning.

First of all, we talk about the most popular attacks against machine learning systems. Then, we discuss some mitigations against such attacks, designed by researchers Ambra Demontis and Marco Melis from the University of Cagliari (Italy).
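To give a flavor of the evasion (test-time) attacks discussed in the episode, here is a minimal sketch of a gradient-based adversarial perturbation against a toy linear classifier. The model, names, and parameters below are assumptions for illustration only, not SecML's actual API.

```python
import numpy as np

def sigmoid(z):
    # Logistic function mapping a score to a probability.
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (an assumption for this sketch):
# predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def evasion_attack(x, eps):
    """Fast-gradient-sign style perturbation: move the input against the
    gradient of the class-1 score to try to flip the prediction."""
    grad = w  # gradient of the linear score w.x + b with respect to x
    return x - eps * np.sign(grad)

x = np.array([1.0, 0.5])          # legitimately classified as class 1
x_adv = evasion_attack(x, eps=1.0)
print(predict(x), predict(x_adv))  # prints: 1 0
```

The same idea, applied with proper optimization and constraints to real models, is what libraries like SecML automate.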

Moreover, the guests of the show are the authors of SecML, an open-source Python library for the security evaluation of machine learning algorithms. Both Ambra and Marco are members of the PRAlab research group, and they developed SecML under the supervision of Prof. Fabio Roli.

During the show, Marco and Ambra also talk about explainable machine learning: why it is important and how SecML improves the explainability of learning algorithms.

Finally, among the numerous advantages of SecML are its seamless support for dense and sparse data in all classifiers and attacks, its built-in security evaluation of models, and its integrated explanation methods.
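A "security evaluation" of a model means measuring how its accuracy degrades as the attacker's perturbation budget grows. The following sketch computes such a curve analytically for a toy linear model under an L-infinity budget; the data, model, and helper names are assumptions for illustration, while SecML automates this kind of evaluation for real models and attacks.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.0, 1.0]), 0.0

# Toy data (an assumption): two well-separated Gaussian blobs, labels in {0, 1}.
X0 = rng.normal(-2, 0.5, size=(50, 2))
X1 = rng.normal(+2, 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def accuracy_under_attack(eps):
    # Worst case under an L-inf budget eps against a linear model: the score
    # w.x + b can shift by at most eps * ||w||_1, and the attacker pushes each
    # sample toward the wrong side of the boundary for its true label.
    scores = X @ w + b
    shift = eps * np.abs(w).sum()
    attacked = scores - np.where(y == 1, shift, -shift)
    preds = (attacked > 0).astype(int)
    return (preds == y).mean()

# Accuracy drops as the perturbation budget grows.
for eps in [0.0, 0.5, 1.0, 2.0, 3.0]:
    print(f"eps={eps:.1f}  accuracy={accuracy_under_attack(eps):.2f}")
```

Plotting accuracy against eps yields the security evaluation curve: a model whose curve stays high for larger budgets is more robust.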

The contributors of SecML are:

  • Marco Melis (Ph.D. Student, Project Maintainer, https://www.linkedin.com/in/melismarco/)
  • Ambra Demontis (Postdoc, https://pralab.diee.unica.it/it/AmbraDemontis)
  • Maura Pintor (Ph.D. Student, https://it.linkedin.com/in/maura-pintor)
  • Battista Biggio (Assistant Professor, https://pralab.diee.unica.it/it/BattistaBiggio)

Are you ready to learn how to attack machine learning?

Join the discussion on our Discord server

References

  • SecML: an open-source Python library for the security evaluation of Machine Learning (ML) algorithms https://secml.gitlab.io/.
  • A. Demontis et al., “Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks,” in 28th USENIX Security Symposium (USENIX Security 19), 2019, pp. 321–338. https://www.usenix.org/conference/usenixsecurity19/presentation/demontis
  • P. W. Koh and P. Liang, “Understanding Black-box Predictions via Influence Functions,” in International Conference on Machine Learning (ICML), 2017. https://arxiv.org/abs/1703.04730
  • M. Melis, A. Demontis, B. Biggio, G. Brown, G. Fumera, and F. Roli, “Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid,” in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017, pp. 751–759. https://arxiv.org/abs/1708.06939
  • B. Biggio and F. Roli, “Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,” Pattern Recognition, vol. 84, pp. 317–331, 2018. https://arxiv.org/abs/1712.03141
  • B. Biggio et al., “Evasion Attacks Against Machine Learning at Test Time,” in Machine Learning and Knowledge Discovery in Databases (ECML PKDD), Part III, 2013, vol. 8190, pp. 387–402. https://arxiv.org/abs/1708.06131
  • B. Biggio, B. Nelson, and P. Laskov, “Poisoning Attacks Against Support Vector Machines,” in 29th International Conference on Machine Learning (ICML), 2012, pp. 1807–1814. https://arxiv.org/abs/1206.6389
  • N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial Classification,” in Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), Seattle, 2004, pp. 99–108. https://dl.acm.org/citation.cfm?id=1014066
  • M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic Attribution for Deep Networks,” in Proceedings of the 34th International Conference on Machine Learning (ICML), 2017. https://arxiv.org/abs/1703.01365
  • M. T. Ribeiro, S. Singh, and C. Guestrin, “Model-Agnostic Interpretability of Machine Learning,” arXiv preprint arXiv:1606.05386, 2016. https://arxiv.org/abs/1606.05386
  • W. Guo et al., “LEMNA: Explaining Deep Learning Based Security Applications,” in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2018. https://dl.acm.org/citation.cfm?id=3243792
  • S. Bach et al., “On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation,” PLoS ONE, vol. 10, no. 7, 2015, e0130140. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140


