no code implementations • 30 Oct 2020 • Tzvika Shapira, David Berend, Ishai Rosenberg, Yang Liu, Asaf Shabtai, Yuval Elovici
The performance of a machine learning-based malware classifier depends on the large and updated training set used to induce its model.
1 code implementation • 16 Aug 2020 • Guy Amit, Moshe Levy, Ishai Rosenberg, Asaf Shabtai, Yuval Elovici
Deep neural networks (DNNs) perform well at classifying inputs associated with the classes they have been trained on, which are known as in-distribution inputs.
Out-of-Distribution (OOD) Detection
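As an illustration of the OOD detection setting described above, here is a minimal sketch of the classic maximum-softmax-probability baseline (a standard detector in the literature, not necessarily the method proposed in this paper):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def max_softmax_ood_score(logits):
    # A lower maximum softmax probability suggests the input lies
    # outside the training distribution; the score is 1 - max prob,
    # so higher means "more likely OOD".
    return 1.0 - softmax(logits).max(axis=-1)

# A confident in-distribution prediction vs. a flat, uncertain one.
in_dist = np.array([8.0, 0.5, 0.2])
ood = np.array([1.1, 1.0, 0.9])
assert max_softmax_ood_score(in_dist) < max_softmax_ood_score(ood)
```

Thresholding this score gives a simple accept/reject rule for flagging inputs the classifier was never trained on.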
no code implementations • 6 Feb 2020 • Guy Amit, Ishai Rosenberg, Moshe Levy, Ron Bitton, Asaf Shabtai, Yuval Elovici
In many cases, neural network classifiers are likely to be exposed to input data that lies outside of their training data distribution.
no code implementations • 30 Nov 2019 • Ishai Rosenberg, Guillaume Sicard, Eli David
We record the dynamic behavior of the APT when run in a sandbox and use it as raw input for the neural network, allowing the DNN to learn high-level feature abstractions of the APTs themselves.
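The pipeline sketched in the abstract maps a sandbox-recorded API call trace to raw numeric input for a neural network. A minimal, hypothetical preprocessing step (the vocabulary, padding scheme, and call names are illustrative, not the paper's actual setup) could look like:

```python
# Hypothetical preprocessing: turn a sandbox-recorded API call trace into
# a fixed-length integer sequence suitable as raw input for a DNN.
def encode_trace(trace, vocab, max_len=8, pad=0, unk=1):
    # Map each API call name to an integer id, using `unk` for
    # calls outside the vocabulary, then truncate/pad to max_len.
    ids = [vocab.get(call, unk) for call in trace]
    ids = ids[:max_len]
    return ids + [pad] * (max_len - len(ids))

vocab = {"CreateFileW": 2, "WriteFile": 3, "RegSetValueExW": 4}
trace = ["CreateFileW", "WriteFile", "UnknownCall", "RegSetValueExW"]
print(encode_trace(trace, vocab))  # [2, 3, 1, 4, 0, 0, 0, 0]
```

Feeding such sequences to an embedding layer lets the network learn its own feature abstractions instead of relying on hand-engineered features.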
no code implementations • 28 Jan 2019 • Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach
Using our methods, we were able to decrease the effectiveness of such an attack from 99.9% to 15%.
Cryptography and Security
no code implementations • 23 Apr 2018 • Ishai Rosenberg, Asaf Shabtai, Yuval Elovici, Lior Rokach
In this paper, we present a generic, query-efficient black-box attack against API call-based machine learning malware classifiers.
no code implementations • 27 Nov 2017 • Ishai Rosenberg, Guillaume Sicard, Eli David
The task of attributing an APT to a specific nation-state is extremely challenging for several reasons.
no code implementations • 19 Jul 2017 • Ishai Rosenberg, Asaf Shabtai, Lior Rokach, Yuval Elovici
In this paper, we present a black-box attack against API call-based machine learning malware classifiers, focusing on generating adversarial sequences combining API calls and static features (e.g., printable strings) that will be misclassified by the classifier without affecting the malware functionality.
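The core idea of such an attack, inserting functionality-preserving API calls until the classifier's decision flips, can be sketched as follows. The classifier, the set of no-op calls, and the query budget are all stand-ins for illustration, not the paper's actual components:

```python
import random

def insert_noop_attack(trace, classify, noop_calls, max_queries=50, seed=0):
    # Black-box sketch: repeatedly insert functionality-preserving no-op
    # API calls at random positions and query the classifier until the
    # trace is no longer flagged as malicious.
    rng = random.Random(seed)
    adv = list(trace)
    for _ in range(max_queries):
        if not classify(adv):
            return adv  # classifier now labels the trace benign
        pos = rng.randrange(len(adv) + 1)
        adv.insert(pos, rng.choice(noop_calls))
    return None  # attack failed within the query budget

# Toy classifier: flags traces dominated by "suspicious" calls.
SUSPICIOUS = {"WriteProcessMemory", "CreateRemoteThread"}
def toy_classifier(trace):
    return sum(c in SUSPICIOUS for c in trace) / len(trace) > 0.5

malicious = ["WriteProcessMemory", "CreateRemoteThread"]
adv = insert_noop_attack(malicious, toy_classifier, ["GetTickCount", "Sleep"])
assert adv is not None and not toy_classifier(adv)
```

Because the inserted calls do not alter the program's behavior, the resulting sequence evades the classifier while the malware's functionality is preserved, which is the constraint the abstract emphasizes.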