Research Output
Launching Adversarial Label Contamination Attacks Against Malicious URL Detection
  Web addresses, or Uniform Resource Locators (URLs), represent a vector through which attackers can deliver a multitude of unwanted and potentially harmful effects to users via malicious software. The ability to detect and block access to such URLs has traditionally relied on reactive and labour-intensive means such as human verification, whitelists, and blacklists. Machine Learning has shown great potential to automate this defence and make it proactive through the implementation of classifier models. Work in this area has produced numerous high-accuracy models, though the algorithms themselves remain fragile to adversarial manipulation if implemented without consideration of their security. Our work investigates the robustness of several classifiers for malicious URL detection by randomly perturbing samples in the training data. It is shown that, without a measure of defence against adversarial influence, highly accurate malicious URL detection can be significantly and adversely affected at even low degrees of training data perturbation.
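
  As a minimal sketch of the label contamination setting described in the abstract (not the paper's actual pipeline), the snippet below randomly flips a chosen fraction of binary training labels before fitting a classifier and reports test accuracy at increasing contamination levels. It assumes a scikit-learn style workflow; the feature matrix, classifier choice, and helper names are placeholders rather than the authors' setup.

    # Illustrative sketch only: random label flipping (label contamination) on a
    # training set. Placeholder data stands in for URL features; the paper's
    # dataset, feature extraction, and classifiers are not reproduced here.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def flip_labels(y, fraction, rng):
        """Return a copy of binary labels y with `fraction` of them flipped."""
        y_poisoned = y.copy()
        n_flip = int(fraction * len(y))
        idx = rng.choice(len(y), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]  # flip 0 <-> 1 (benign <-> malicious)
        return y_poisoned

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for fraction in (0.0, 0.05, 0.10, 0.20):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_train, flip_labels(y_train, fraction, rng))
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"label contamination {fraction:.0%}: test accuracy {acc:.3f}")

  In this kind of experiment, the clean test set is kept fixed so that any drop in accuracy can be attributed to the contaminated training labels rather than to changes in evaluation data.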

  • Date:

    01 September 2021

  • Publication Status:

    Published

  • Publisher:

    Springer International Publishing

  • DOI:

    10.1007/978-3-030-86586-3_5

  • Cross Ref:

    10.1007/978-3-030-86586-3_5

  • Funders:

    Edinburgh Napier Funded

Citation

Marchand, B., Pitropakis, N., Buchanan, W. J., & Lambrinoudakis, C. (2021). Launching Adversarial Label Contamination Attacks Against Malicious URL Detection. In Trust, Privacy and Security in Digital Business: 18th International Conference, TrustBus 2021, Virtual Event, September 27–30, 2021, Proceedings (pp. 69–82). https://doi.org/10.1007/978-3-030-86586-3_5

Authors

Marchand, B., Pitropakis, N., Buchanan, W. J., & Lambrinoudakis, C.

Keywords

Malicious URL, Detection, Adversarial machine learning
