Black Hat 2015: Machine learning security must add variety

LAS VEGAS — Two researchers from a Northrop Grumman subsidiary argued in a Black Hat 2015 session that vendors that incorporate machine learning security into their products but don’t use site-specific training data to differentiate one deployment from another are missing a chance to make the broader security ecosystem more robust.

Calling their concept a “moving defense,” Bob Klein, a machine learning specialist who works on the BluVector Cyber Intelligence Platform for Northrop subsidiary Acuity Solutions, explained that in a village where the locksmith sells the same kind of lock to everyone, the security achieved is brittle. A thief can obtain one of the locks and then has unrestricted time to learn to pick it, after which every house in town is his for the robbing.

Even with machine learning systems, Klein and his fellow speaker and BluVector engineer Ryan Peters argued, using the same data for training the system results in a “lock” that is still functionally identical wherever it is deployed.

“The reality is that the attacker has two main advantages against the machine learning system that carried over from signature-based products,” Klein said. “The first is that they can be fairly confident that the model has not changed since they obtained their copy of the software. . . . The second is that all the targets have the same model. If you can defeat one copy of the model, you can defeat all the copies.

“All that’s really different now is that there’s basically a new locksmith in town. He makes new, really robust locks under the brand ‘Machine Learning.’ And they really are better than the old ones. They are way more robust and they are far less susceptible to lock picking. But guess what. He only sells identical locks too.”

The two researchers have explored the effect of introducing somewhat different data sets into a machine learning security model. They established a baseline by creating successive permutations of an otherwise easily detected malware sample. Each permutation was tested to see whether it could bypass the machine learning security system, and the process was repeated until, 1,900 iterations and 15 hours later, a permutation emerged that bypassed the system.
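The researchers didn’t publish their harness, but the baseline loop they describe can be sketched in a few lines. Everything here is invented for illustration: the “detector” is a toy byte-histogram score standing in for the real classifier, and the mutation step is a single random byte flip rather than whatever semantic-preserving transforms they actually used.

```python
import random

# Hypothetical stand-in for the machine learning detector: scores a
# sample by the fraction of high bytes it contains.
def suspicion_score(sample: bytes) -> float:
    return sum(1 for b in sample if b > 0x7f) / len(sample)

def is_detected(sample: bytes, threshold: float = 0.5) -> bool:
    return suspicion_score(sample) >= threshold

def permute_until_evasion(sample: bytes, max_iters: int = 10_000) -> tuple[bytes, int]:
    """Apply small random permutations and retest against the detector,
    mimicking the iterate-until-bypass loop described in the talk."""
    rng = random.Random(0)
    current = bytearray(sample)
    for i in range(1, max_iters + 1):
        idx = rng.randrange(len(current))
        current[idx] = rng.randrange(256)   # mutate one byte at random
        if not is_detected(bytes(current)):
            return bytes(current), i        # evasive permutation found
    raise RuntimeError("no evasive permutation found")

# A sample the toy detector initially flags (all high bytes).
original = bytes([0xff] * 64)
evasive, iterations = permute_until_evasion(original)
```

In the real experiment each iteration is expensive (the talk cites 15 hours for 1,900 iterations), which is exactly why the attacker benefits from knowing the target model never changes.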

At this point, the central question of the exercise emerged. Could slight variations be introduced into different deployments of the model that would cause some deployments to catch some permutations (and, it should be said, miss others)?

The variations could theoretically come from the vendor or through some other mechanism, but what the researchers were interested in was using data from the sites where the systems were deployed, an approach they referred to as “in situ” learning. “What’s particularly exciting about an ‘in situ’ approach is not that we’re allowing people to buy differently keyed locks — we’re actually basically making everybody a locksmith,” Klein said.

Furthermore, it’s possible that adding local data will provide better detection. In some cases, such as classified documents that by definition cannot appear in the data sets the vendor used when originally creating the machine learning model, whole new areas of detection can be added. In Klein and Peters’ testing, the locally trained models saw significant improvements in detection.

Specifically, when four somewhat different data sets were used for training, the obfuscated version of the attack file that had been found to bypass the machine learning security tool was detected by three out of four differently trained models. While it’s true that one of the four didn’t detect the attack, neither did the system deployed without a “moving defense” approach.
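The shape of that result can be illustrated with a toy sketch. All names and numbers below are invented: in-situ training on site-specific data is modeled simply as each deployment ending up with a slightly different decision threshold over the same score, and the attacker's evasive sample is tuned against one site's model only.

```python
# Toy feature standing in for a learned model's suspicion score.
def suspicion_score(sample: bytes) -> float:
    return sum(1 for b in sample if b > 0x7f) / len(sample)

# Four hypothetical deployments; "in situ" training is modeled as each
# site landing on a slightly different threshold.
site_thresholds = {"site_a": 0.50, "site_b": 0.42, "site_c": 0.45, "site_d": 0.48}

def detects(site: str, sample: bytes) -> bool:
    return suspicion_score(sample) >= site_thresholds[site]

# A sample the attacker has tuned to slip just under site_a's
# threshold: 31 high bytes out of 64 gives a score of ~0.48.
evasive = bytes([0xff] * 31 + [0x00] * 33)

caught_by = [site for site in site_thresholds if detects(site, evasive)]
```

The permutation evades the one model it was tuned against but is still caught by the other three, which is the moving-defense payoff the researchers reported: an evasion is no longer guaranteed to transfer to every deployment.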

Audience members were intrigued by the “moving defense” concept but raised practical concerns about implementation. Anil Gathala, a research engineer at Qualcomm, said that “the problem of generating classifiers that are almost equally good but still not very far from the base model – I think that’s not a trivial problem.”

There are other questions, too. If you’ve bought something from a vendor, but it’s your neighbor who got the permutation of the lock that thwarted an attack whereas you wound up with a different permutation that didn’t, is the vendor liable even if you supplied the data that differentiated your model? Even if the approach is better overall, will the statistics that prove the system’s superiority make you feel any better when the crown jewels are gone?

“We’re really trying to keep a hacker’s perspective on this,” Klein said in an interview after the session. “Yes, if you were breached and you had a different lock — that would be frustrating. But it really is better for everyone, individuals included, to have this moving defense. Everything sharing the same thing is really brittle.”

Next Steps

Black Hat 2015: How to rebuild IT security after a cyber disaster
