Can Machine Learning Systems 'Overlearn'?


Sam Curry of Cybereason on When to Trust ML Systems

(APACinfosec) • March 18, 2019

Sam Curry, CSO, Cybereason

Machine learning systems adapt their behavior through a feedback loop, so they can overlearn and develop blind spots; if practitioners do not understand these limitations, dangerous situations can result, says Sam Curry, chief security officer at Cybereason.
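The overlearning Curry describes is related to what ML practitioners call overfitting: a model that effectively memorizes its training feedback performs perfectly on data it has seen but develops blind spots on anything new. The toy sketch below (illustrative only; the scenario, the 0.5 threshold, and all function names are invented for this example, not from Curry's remarks) contrasts a "memorizing" detector with a simple generalizing rule:

```python
import random

random.seed(0)

# Toy setting: classify events as malicious when their risk score
# exceeds 0.5. This "true rule" stands in for the real-world pattern
# a detector is supposed to learn.
def true_label(score):
    return score > 0.5

train = [(s, true_label(s)) for s in (random.random() for _ in range(50))]
test = [(s, true_label(s)) for s in (random.random() for _ in range(50))]

# An "overlearned" model: memorizes exact training examples and
# defaults to "benign" for anything it has never seen -- a blind spot.
memory = dict(train)
def memorizer(score):
    return memory.get(score, False)

# A model that learned the general rule instead of the examples.
def threshold_model(score):
    return score > 0.5

def accuracy(model, data):
    return sum(model(s) == y for s, y in data) / len(data)

print(accuracy(memorizer, train))       # perfect on data it has seen
print(accuracy(memorizer, test))        # misses unseen malicious events
print(accuracy(threshold_model, test))  # generalizes to new data
```

The memorizer looks flawless if you only evaluate it on its own training feedback, which is exactly why Curry argues you need to know how a system was trained before deciding when to trust it.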


"Don't just say you are using ML. Tell me how and where you are applying it and then how you are training it," he says. "Because that deeper understanding is what's going to let me know when to trust it and when not to."

In a video interview with Information Security Media Group at RSA Conference 2019 in San Francisco, Curry discusses:

The promise and dangers of AI and ML;

The increased use of automation and analytics by threat actors.

Curry, chief security officer at Cybereason, has more than 20 years of IT security industry experience. Previously, Curry served as chief technology and security officer at Arbor Networks. He also spent more than seven years at RSA, the security division of EMC, in a variety of senior management positions.